Personal Health Practices

Health Issue
There are differences in health practices and self-rated health among different socio-demographic groups of women. The relationship between socio-demographic status and a) a range of health behaviours and b) a combination of multiple risk and multiple health-promoting practices was examined. The relationship between self-rated health and health practices was also assessed.

Key Findings
There were geographic differences in health practices: women in British Columbia had the highest odds of engaging in multiple health-promoting practices, while women in Quebec had the lowest. Reports of engaging in multiple risk behaviours were most common in Ontario. Women from Ontario had the highest odds of reporting very good/excellent health, and women from British Columbia had among the lowest odds. The data supported a strong social gradient between increasing income/education and healthy practices, especially those that are health promoting. However, women with higher education were more likely to be overweight, and those with higher incomes were more likely to drink alcohol regularly. Immigrant women were less likely to engage in multiple health risk practices than Canadian-born women. However, they were also less likely to report very good/excellent health than non-immigrants. While marriage appeared to have a generally protective effect on women's health practices, single women were more likely to be physically active and to have a normal weight.

Data Gaps and Recommendations
More sensitive indicators need to be developed to better understand possible reasons for the socio-economic gradient. Data collection should focus on both rural and Aboriginal populations.

Background
In recent years, differences in health outcomes by socio-economic position have been recognized as a persisting trend in public health. [1] A prominent hypothesis in the literature has been that the increased mortality risk associated with low levels of income and education is due to an increased prevalence of risky health practices, such as smoking, binge drinking and physical inactivity. [1] However, a large body of research and theory demonstrates that such practices develop from a complex interplay of factors, including income, education, gender, age, social support, cultural background and physical environment, which create a range of life contexts within which an individual's capacity to adopt healthy practices is either enhanced or constrained. The health practices selected for discussion in this chapter are those that have been shown to have different patterns in men and women: eating practices, exercise, weight control (reflecting the links between weight and food intake and exercise patterns), smoking, alcohol consumption, use of pain medication, and use of complementary and alternative therapies. Given that there has already been considerable analysis of the differences in health behaviours between men and women, this chapter focuses on the differences in health behaviours and self-rated health among different socio-demographic groups of women. Health practices have been shown to have an impact on subjective views of health, including self-rated health and global quality of life, [2] findings that support the WHO's definition of health as "a state of complete physical, mental and social well-being and not merely the absence of disease."
[3] Findings from longitudinal analyses have shown that self-perceived health is predictive of mortality, chronic disease incidence, recovery from illness, functional decline and the use of medical services. [2,4] Further, measures of self-rated health have been found to be valid, with good test-retest reliability and predictive power. [5] Because health status data are not available, self-rated health is used as a proxy for health status in this report.

Methods
This chapter considers the social context of women's health practices and self-reported health. A summary of the literature is followed by a new analysis of data from the Canadian Community Health Survey (CCHS), which addresses the following questions:
• What is the relation between women's socio-demographic status and their health practices?
• What is the relation between women's socio-demographic status and their multiple-risk and multiple health-promoting practices?
• What is the relation between women's self-rated health and their health practices?
A secondary analysis of data from the CCHS, Cycle 1.1 (2000-2001), was conducted. The CCHS is a national, cross-sectional survey that had a total of 125,574 respondents from 136 health regions across the country.

Measures
Binge drinking is defined as the consumption of five or more alcoholic beverages on at least one occasion in the past 12 months.
Change to improve health indicates whether individuals have made a change in their lifestyle to improve their health in the past 12 months.
Consumption of fruits and vegetables is measured by asking respondents the total number of servings of fruits and vegetables they consume per day. Data are presented for those who consumed more than five servings and for those who consumed less than five servings per day.
Health practices: the first section under "Literature Review" defines the health practices examined in this report. Further details on these measures can be obtained from Statistics Canada's CCHS (2001) documentation.
Level of physical activity is measured by asking respondents whether or not they had engaged in various leisure-time physical activities in the previous three months (e.g. walking, swimming, gardening, golfing, weight training, jogging or running), as well as the frequency and duration of these activities. Estimates of the amount of energy expenditure were used to classify respondents as active, moderately active or inactive. Active respondents engaged in a sufficient amount of physical activity to achieve cardiovascular health benefits, and moderately active respondents experienced some health benefits but little cardiovascular benefit. Data are presented for those considered active and inactive.
Immigrants: long-term immigrants are women who arrived in Canada 10 or more years ago; recent immigrants are women who have lived in Canada for less than 10 years.
Multiple health-promoting practices is an index that identifies respondents who engaged in two or more positive health practices: physical activity, consulting an alternative health care provider, making a change to improve health in the past 12 months, and consuming more than five servings of fruits and vegetables per day.
Multiple risk practices is an index that identifies respondents who engaged in two or more negative health practices: smoking, binge drinking, physical inactivity, use of pain relievers, and/or consuming less than five servings of fruits and vegetables per day.
The practices selected for the index are factors known to affect women's and men's health.
Patterns of overweight are measured using the Canadian guidelines for body weight classification in adults. The level of health risk is determined by measuring body mass index (BMI). Overweight is defined as a BMI of 25 to 27.
Regular drinker is defined as someone who has consumed alcohol once a month or more frequently in the last 12 months.
Self-rated health, which is a subjective, global assessment of one's health, is rated on a 5-point scale ranging from poor to excellent. In this report, comparisons are made between those who rated their health as excellent or very good and those who rated their health as good, fair or poor.
Smoking status identifies individuals who currently smoke on a daily or occasional basis (in contrast to never-smokers and past smokers).
Use of complementary and alternative therapies is defined as consultation with a chiropractor or an alternative health provider, such as an acupuncturist, homeopath, reflexologist or massage therapist, about physical, emotional or mental health in the past 12 months.
Use of pain relievers in the past month refers to use of pain relievers such as Aspirin or Tylenol (including arthritis medicine and anti-inflammatories) in the past 30 days.
Education: high education is secondary school graduation or more; low education is less than secondary school graduation.
Household income is based on income adequacy, which takes into account household income as well as the number of people in the household, divided into two categories: high income, comprising middle or high income adequacy (~80% of respondents), and low income, the lowest income adequacy (~10%).
Marital status: married includes married and common-law; combined single includes never married, separated, divorced and widowed.
Immigrant status: immigrant or non-immigrant.
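To make the index and weight-classification definitions concrete, the following is a minimal Python sketch of how they might be operationalized for a single respondent. The field names and the example record are hypothetical, and treating the 25-27 overweight band as inclusive at both endpoints is an assumption; the actual CCHS derived variables are specified in Statistics Canada's documentation.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def is_overweight(bmi_value: float) -> bool:
    """Overweight per the definition above: BMI of 25 to 27 (assumed inclusive)."""
    return 25 <= bmi_value <= 27

def multiple_risk(resp: dict) -> bool:
    """Index: two or more negative health practices."""
    risks = [
        resp["smokes"],                         # daily or occasional smoker
        resp["binge_drinks"],                   # 5+ drinks on one occasion, past 12 months
        resp["inactive"],                       # classified 'inactive' from energy expenditure
        resp["used_pain_relievers"],            # pain relievers in the past 30 days
        resp["fruit_veg_servings"] < 5,         # fewer than five servings per day
    ]
    return sum(risks) >= 2

def multiple_promoting(resp: dict) -> bool:
    """Index: two or more positive health practices."""
    promoting = [
        resp["active"],                         # classified 'active' from energy expenditure
        resp["consulted_alternative_provider"], # chiropractor or alternative provider, past 12 months
        resp["changed_to_improve_health"],      # lifestyle change in the past 12 months
        resp["fruit_veg_servings"] > 5,         # more than five servings per day
    ]
    return sum(promoting) >= 2

# Hypothetical respondent for illustration only.
respondent = {
    "smokes": True, "binge_drinks": False, "inactive": True,
    "used_pain_relievers": False, "fruit_veg_servings": 4,
    "active": False, "consulted_alternative_provider": False,
    "changed_to_improve_health": True,
}
print(multiple_risk(respondent))       # True: smoking + inactivity + <5 servings
print(multiple_promoting(respondent))  # False: only one promoting practice
print(is_overweight(bmi(72.0, 1.65)))  # BMI ~26.4, True under the 25-27 band
```

Note that, following the wording of the definitions, a respondent reporting exactly five servings of fruits and vegetables per day contributes to neither index.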
Gender and Health Practices
Recent national reports [6-8] indicate that men and women have distinctly different lifestyles; they differ not only in whether they adopt certain health-related habits but also in their concerns about, or attitudes towards, health. [6] On the positive side, women appear to be more attuned to health issues and thus are more likely to make healthy lifestyle choices. They tend to make healthy food choices (80% versus 63%) and are much more weight conscious than men (59% versus 41%). [6] As a result, they are much less likely to be overweight or obese than men (36% versus 48%). [6] Except in the youngest age groups, fewer women than men smoke (25% versus 28%), and they are much more likely to abstain from alcohol or to drink in moderation (25% versus 50%). [6] Women are also more likely to use complementary and alternative medicines (18% versus 14%). [8] However, this does not hold true for all types of alternative care; for instance, men and women were equally likely to have seen a chiropractor. [6] On the negative side, women are significantly less physically active than men (19% versus 25%) and use painkillers more regularly (70% versus 58%). [6] In addition, certain health practices of young women show some disturbing trends: they are five times as likely as young men to be underweight, [8] and their smoking rates exceed those of young men, the only age group in which this is so. [9] Also of concern are the rising rates of binge drinking and risky sexual behaviour among young women, which are now comparable to those of young men. [8]

The following section provides a summary of what is known about the associations between socio-demographic factors, women's health practices and self-rated health. Data on what is known about men are presented where they illustrate a noteworthy difference between the sexes.

Geographic Variation and Health Practices
In general, unhealthy practices among women tend to dominate in Quebec and the Atlantic provinces. Quebec and Prince Edward Island (P.E.I.) have the highest rates of smoking (32%), followed closely by Newfoundland and Labrador, and Nova Scotia (31% each). [9] Quebec has the highest reported rate of regular drinkers (57%). In contrast, British Columbia and Alberta have the highest reported rates of physical activity (27% and 26% respectively) and P.E.I. the lowest (14%). In terms of women taking action to improve health, reports are highest in Ontario and B.C., and lowest in Saskatchewan and Newfoundland and Labrador (39% and 41% respectively). Finally, reported consultations with alternative health care practitioners are highest in B.C. [9] In light of the concentration of poor health practices in Quebec and the Atlantic provinces, it is interesting that Quebec has the highest rate of excellent/very good self-reported health (27% among men and women), followed closely by Newfoundland and Labrador (26%). Saskatchewan and Nova Scotia have the lowest reported rates of excellent/very good health (17% and 20% respectively). [7] Once again, these data suggest that the factors affecting self-rated health are complex and not well understood.

Income and Education
It is well documented that women and men with lower socio-economic status (SES) are significantly more likely to lead a sedentary lifestyle, to have poorer dietary habits, to be overweight and to smoke cigarettes than women and men with higher SES; [9,10] as well, they are more resistant to changing their health practices. [10,11] The literature shows that activity levels increase as incomes increase, a finding that is supported by the results of the three waves of the National Population Health Survey (NPHS). [6-8] According to NPHS data (1996-1997), 51% of women with the highest incomes are physically inactive as compared with 60% at the lowest income level. [8] Patterns of alcohol use by socio-economic status are more complex. [12,13] Findings from the 1996-1997 NPHS indicate that rates of binge drinking are greatest among women in the highest income bracket (19%) as compared with those in the middle and lower ones (14% and 10% respectively). Women who are regular drinkers are also more likely to have higher incomes, as well as to be older, single and more highly educated. [14] With respect to food intake, lower-income women are more likely than those with higher incomes to describe their eating habits as fair or poor and to express concerns about the cost of low-fat foods. [8] Finally, women with higher household incomes are more likely (20%) than those at lower income levels (12%) to report using alternative health care. [6] Similar trends exist with regard to education.

Age
In general, advancing age is associated with poorer health practices and lower perceptions of personal health. As women age they tend to gain weight and engage in less physical activity, [15,16] and they are more likely to report fair/poor health if they have experienced unhealthy weight gain.
[2,17,18] According to NPHS data, young women have the lowest rates of obesity (5%) and women aged 55 to 64 the highest (approximately 17%), a pattern that is consistent across all three waves of the NPHS. [6] Smoking rates among young women (12-17 years of age) in Canada are a growing concern, particularly as they now exceed those of young men. Continuing a trend observed in 1994-1995, the rate of smoking among girls aged 12 to 14 (10%) and 15 to 17 (29%) has remained substantially higher than among young men of the same age (6% and 22% respectively). Among women, smoking rates are highest (approximately 32%) in the 18 to 54 age range and lowest (15%) among those 65 and older. [9] In the middle age groups the percentage of women who use tobacco is approaching that of men, in part because men have quit at higher rates than women. Smoking among women in this age group is associated with lower income and education, heavier drinking and inactivity. [14] Increasing alcohol use among young women has also been shown to be a growing trend. [19] Young women (20-24 years) are among the largest consumers and abusers of alcohol. In fact, the proportion of women aged 20 to 24 classified as regular drinkers (who consumed one drink or more per month) almost doubled from 1994-1995 to 1996-1997. [7,8] Among women over the age of 64, the prevalence of regular drinking continues to decline gradually. [8] For women, the use of alternative care is most common in young to middle adulthood. Of those in the 25-44 and 45-64 age groups, 19% reported consulting an alternative practitioner in 1998-1999, as compared with approximately 11% for both 18- to 24-year-olds and those aged 65 and over. [20]

Immigrant Status
According to data from the National Population Health Survey (NPHS, 1994-1995), vital statistics (1985-1987 and 1990-1992) and the General Social Survey (GSS), female immigrants (particularly recent immigrants from non-European, non-traditional source countries) experience better health status than women born in Canada. [21-23] This finding is supported by both Australian and U.S. studies showing that for almost every health status indicator and socio-demographic characteristic, female immigrants who have spent less than 10 years in their host country are healthier than long-term immigrants (more than 10 years) and the native-born population. [24-26] Data from the 1996-1997 NPHS indicate that recent female immigrants are less likely to be regular alcohol drinkers and smokers, and less likely to be overweight, than Canadian-born women. [23,24] On the other hand, they are also less likely to engage in physical activity and more likely to have poor nutritional habits than their native-born counterparts. [27,28] However, only 5% of Asian-born immigrants are obese as opposed to 12% of Canadians. [7] In contrast, another Canadian study [29] based on data from the GSS (1985 and 1991) found that the health status of female immigrants did not differ significantly from that of native-born Canadians, nor were there changes in self-reported health status over time. Canadian studies show that male and female immigrant health practices change over time to resemble those of native-born Canadians. According to this research, new female immigrants smoke less, use less alcohol and are less likely to be obese than long-term immigrants.
[21,30] A Canadian survey indicated that recent arrivals in Windsor consumed less alcohol than their Canadian-born counterparts, although alcohol use was more prevalent among recent immigrants with higher education and income than among recent immigrants with lower education and income. [31]

Marital Status
The literature shows that being married or living in a common-law relationship has a mixed effect on health practices. [32-34] Partnered women consume less alcohol and have fewer alcohol-related problems than single women. [32] Marriage has also been shown to have a positive effect on the quality of women's diets. On the negative side, married women with young children were less likely to be physically active than their single counterparts. [33] Research on the association between marital status and BMI is limited. However, one study on body image showed that body dissatisfaction occurs at comparable levels among married and single individuals. [34]

Effects of Social, Economic and Demographic Factors on Self-Rated Health
Both income and education show a distinct, independent, positive gradient with self-rated health. [2,35] In Canada, Shields and Shooshtari [2] found that women in lower-income households had higher odds of reporting fair/poor health and lower odds of reporting very good/excellent health than those in more affluent households. Similarly, women with a post-secondary degree had higher odds of reporting very good/excellent health than those with less education. Age also shows a distinct, independent, positive gradient with self-rated health. [35,36] One study [36] demonstrates that health practices have a greater impact on the self-ratings of younger age groups, whereas functional ability has a greater influence on the ratings of seniors. In terms of self-rated health, Shields and Shooshtari [2] found that women who had never been married had higher odds of reporting fair/poor health than women who were currently married or who had previously been married. Recent Canadian studies have reached conflicting conclusions with respect to immigrants' self-reported health status. One study [22] used the 1994-1995 NPHS database and found that immigrants experienced better health status than individuals born in Canada. In contrast, a more recent study [37] using the same database found that immigrants were more likely than non-immigrants to report poor health status. Within the immigrant group, immigrants of European origin and long-term immigrants were more likely to report fair or poor health status than their non-European counterparts who had recently arrived in Canada. [38]

Health Practices and Self-Rated Health
Less information is available on health practices and self-rated health than on demographic factors and self-rated health. Canadian women in all age groups are slightly less likely than men to report very good or excellent health. [2] Important factors affecting women's perceptions of fair/poor health are unhealthy weight gain and a reduction in physical activity, whereas for men smoking and alcohol consumption are more predictive of reports of poor health. [39,40] In comparison with men, social structural factors (e.g. income, employment and education) also play a more important role in determining the health of women. [39] When the impacts of changes in health practices over time are considered, women's self-rated health status is not affected by improved health practices, such as increased physical activity.
[2] However, other changes, such as a negative change in physical status or psychosocial factors, were associated with a corresponding shift in self-rated health. [2] Clearly, an individual's assessment of his or her health is a complex process that requires further research.

Provincial and Territorial Variations
The geographic variation in women's health practices is depicted in Figures 1 and 2.
[Figure 1. Odds of reporting multiple health-promoting practices, controlling for selected demographic factors. Multiple health-promoting practices are defined as in the Measures section; all results from Statistics Canada bootstrap programs.]
[Figure 2. Odds of reporting multiple health risk factors, controlling for selected demographic factors. Multiple health risk factors are defined as in the Measures section; all results from Statistics Canada bootstrap programs.]
Women in British Columbia had the highest odds of engaging in multiple health-promoting practices (odds ratio [OR] 1.11, confidence interval [CI] 1.04, 1.18). In contrast, women in Quebec had the lowest odds of engaging in multiple health-promoting practices (OR 0.13, CI 0.12, 0.14). Reports of multiple health risk factors were most common in Ontario. Of the remaining regions, women in the North and the Atlantic provinces were more likely to engage in multiple risk practices than women in Quebec, but their odds were approximately half those of women from Ontario (OR 0.50, CI 0.44, 0.57 and OR 0.47, CI 0.44, 0.50 respectively). Our analysis of self-rated health by region revealed that, when compared with women from Ontario, women from the other geographic regions had lower odds of reporting very good/excellent self-rated health (see Figure 3). Of the remaining regions, women from the Atlantic provinces were more likely than those from British Columbia, the Prairies, or the North to report very good/excellent health (OR 0.85, CI 0.79, 0.92).
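The odds ratios and confidence intervals reported here were produced with Statistics Canada's bootstrap programs, which resample using the CCHS survey bootstrap weights to respect the complex survey design. As a sketch of the underlying percentile-bootstrap idea only, the following Python example computes an odds ratio and a 95% CI from simulated data; the sample size, group probabilities, and simple random resampling are illustrative assumptions and do not reproduce the design-based estimates above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative binary data: exposure (1 = region of interest, 0 = reference
# region) and outcome (1 = engaged in multiple health-promoting practices).
n = 2000
region = rng.integers(0, 2, size=n)
p = np.where(region == 1, 0.45, 0.40)  # hypothetical outcome probabilities
outcome = rng.random(n) < p

def odds_ratio(region, outcome):
    a = np.sum((region == 1) & (outcome == 1))  # exposed, outcome present
    b = np.sum((region == 1) & (outcome == 0))  # exposed, outcome absent
    c = np.sum((region == 0) & (outcome == 1))  # unexposed, outcome present
    d = np.sum((region == 0) & (outcome == 0))  # unexposed, outcome absent
    return (a * d) / (b * c)

or_hat = odds_ratio(region, outcome)  # point estimate

# Percentile bootstrap: resample respondents with replacement.  The CCHS
# analysis used survey bootstrap weights instead of this simple resample.
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)
    boot.append(odds_ratio(region[idx], outcome[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"OR = {or_hat:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```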
Income and Education
Our findings support prior research suggesting a strong social gradient between an increase in income and education and healthy practices (see Figure 4). The trend was most apparent with respect to health-promoting practices. For example, 15.1% of women with higher incomes versus 8.5% of those with lower incomes consulted an alternative health care provider. Higher-income earners made changes to improve their health (69.1% versus 61.8%), consumed more than five servings of fruits and vegetables (43.7% versus 37.2%) and were more physically active (18.76% versus 16.04%). Similarly, women with high levels of education were more likely than those with low education to consult an alternative health care provider (17.0% versus 6.3%), to make changes to improve their health (70.7% versus 64.1%) and to eat more than five servings of fruits and vegetables (44.0% versus 38.8%).
[Figure 3. Odds of reporting excellent or very good health, controlling for multiple health-promoting practices, multiple health risk factors and other selected demographic factors.]
Our findings on the association between health risk factors and income and education followed a less distinct trend. Higher-income earners were less likely than lower-income earners to smoke (22.9% versus 33.1%), to engage in binge drinking (1.9% versus 2.7%) and to be inactive (52.9% versus 60.1%). On the other hand, women with higher incomes were just as likely as those with lower incomes to be overweight (20.3% versus 19.3%), and regular drinking was more common among higher-income women (52.2% versus 33.1%). Smoking, physical activity, binge drinking and weight control displayed a different relation to income than to education. There was no difference in the proportion of current smokers between women with lower and higher levels of education (23.6% versus 23.8%), nor was there a difference between level of education and physical activity (18.0% for low versus 18.7% for high education). In addition, 2.1% of women with high levels of education reported binge drinking versus 1.6% of their less educated counterparts.
[Figure 4. Income/education and health practices (not age-adjusted). Source: Statistics Canada, CCHS, 2000-2001. Low education: less than secondary school graduation; high education: at least secondary school graduation.]
Finally, 21.6% of women with more education were overweight versus 14.8% of those with less education. As shown elsewhere, [2] income and educational levels showed a strong, positive social gradient with self-rated health (see Figure 3).

Age
With the exception of physical activity, the analysis did not reveal a distinct trend between age and health-promoting practices (see Figure 5). There was little difference between the three age groups with respect to the likelihood of making changes to improve health, and likewise little difference in terms of fruit and vegetable consumption, although 48.4% of those 65 years of age and over consumed more than five servings of fruits and vegetables daily as compared with 39.8% of those aged 20 to 44. Of women 20 to 44 years of age, 17.2% consulted alternative health care providers, while of women 45 to 64 years of age, 16.0% did so. Health risk activities were most likely among women 20 to 44 and 45 to 64 years of age. The proportions smoking were 29.5% and 23.66% respectively, and the proportions drinking regularly were 57.0% and 27.50% respectively. The proportion engaging in binge drinking was 3.0% among women aged 20 to 44 and 3.2% among those aged 12 to 19. Consistent with other findings, [2] the current results indicate that reports of very good/excellent health were higher among younger women (see Figure 3).
[Figure 5. Age and health practices (not age-adjusted).]

Immigrant Status
Findings in this area were mixed (see Figure 6). The rates of smoking, regular drinking and multiple risk practices among non-immigrants reinforced previous findings.
[21,22] However, the proportion of Canadian-born women who were physically active was 19.6% as compared with 14.7% among long-term immigrants. Prior research [21-29] suggests that the longer individuals stay in their host country, the more convergent their health practices become with those of their native-born counterparts. However, the current results in this regard were inconsistent. Although there was little difference between Canadian-born women and long-term immigrant women with respect to consumption of fruits and vegetables (42.5% versus 44.64% respectively) and being overweight (20.2% versus 19.6% respectively), Canadian-born women reported a smoking rate of 26.9% versus 12.9% among long-term immigrant women, and a regular drinking rate of 52.4% versus 38.8%. The analysis revealed that immigrants (both long-term and more recent) were less likely to engage in multiple risk health practices than their Canadian-born counterparts. The proportions of recent immigrants, long-term immigrants and non-immigrants who made changes to improve their health in the previous year were 84.9%, 82.9% and 65.1% respectively. Finally, immigrants (both long-term and more recent) were less likely than non-immigrants to report very good/excellent health (OR 0.74, CI 0.68, 0.79) (see Figure 3).
[Figure 6. Immigrant status and health practices (not age-adjusted). Asterisks in the figure indicate insufficient sample size to report.]

Marital Status
The findings presented here partially support the claim that marriage has a protective effect on women's health practices (see Figure 7). The proportions of married/partnered women compared with un-partnered women who reported smoking were 21.7% versus 26.6%, and for binge drinking 1.1% versus 3.1%. With respect to the consumption of more than five servings of fruits and vegetables daily and consulting an alternative health care provider, the proportions for married/partnered women compared with un-partnered women were 43.9% versus 40.8%, and 15.1% versus 12.4%, respectively. On the other hand, the proportion of married/partnered women who reported being physically active was 16.3% versus 21.4% for their single counterparts. Proportions reported for overweight were 24.2% and 13.6% for the partnered and un-partnered groups respectively. There was little difference between the two groups with respect to changes made to improve their health in the previous year (69.7% married versus 67.7% single). Finally, married women had slightly higher odds of reporting very good/excellent health than their single counterparts (OR 1.11, CI 1.05, 1.18) (see Figure 3).
[Figure 7. Marital status and health practices (not age-adjusted). Source: Statistics Canada, CCHS, 2000-2001.]

Geographic Variation
The results of the analysis of geographic variation in health practices were largely consistent with previous research. The healthiest practices were found in British Columbia, where rates of physical activity may be higher partly because of more clement weather and a distinct culture that values physical exercise. Other practices more common in British Columbia, such as attention to food selection and consultations with alternative health care practitioners, are consistent with B.C.'s position as one of the top three wealthy provinces in the country. [41] On the other hand, women from Quebec, a province of comparable wealth, had the lowest odds of engaging in multiple health-promoting practices.
The marked difference in behaviours, despite similar provincial economic circumstances, suggests that health practices may be more closely associated with historic differences in cultural values than with socio-economic status. [42] The finding that, compared with Ontario, all other regions in Canada had lower odds of engaging in risky health practices was surprising given previous research showing riskier health practices in Quebec, the Atlantic provinces and the North. [7,8] This discrepancy may be due, in part, to the indices for multiple health risk used in the report. Despite engaging in riskier health practices, women from Ontario had higher odds than women from other geographic regions of reporting very good/excellent health. Also, the relatively high odds of very good/excellent health of women from the Atlantic provinces conflict with previous research showing a strong socio-economic gradient with self-rated health. [2] The discrepancy between poor health behaviours and positive self-perceptions of health in Quebec, as well as the surprising results from the Atlantic provinces, raises the possibility that components beyond traditional socio-demographic factors, such as social, cultural, political and environmental contexts, may affect perceptions of health. [43,44]

Income and Education
Although the finding of a social gradient between healthier practices and increased income and education supports previous research, the trend was more consistent for health-promoting than for health risk practices. [1,7,10] Previous research indicating that health risk practices are more common among those with lower incomes and less education was not consistently supported by our findings, although income appears to have more of a protective effect on health practices than education. Indeed, women with higher levels of education appear to be as likely, and in some instances (binge drinking, overweight) more likely, to engage in risky health practices. In particular, the present finding that being overweight is more common among women with higher education conflicts with previous research. However, these results need to be interpreted with caution, as the data in Figure 6 were not age-adjusted and this could introduce age confounding. The difference could also be a result of variation in the definition of the education variable rather than a new finding. Clearly, more research is needed to explore the complex interplay of income and education with other social factors, including social support and the physical environment. In addition, an increased awareness among health professionals of the cluster of risky health practices more common among higher income earners and women with more education is also important.

Age
Results in this area did not support previous findings of a substantial trend towards poorer health practices (particularly smoking) among very young (12- to 19-year-old) women. [7,43] Indeed, Figure 2 shows that all older age groups are at higher risk for multiple risk factors than 12- to 19-year-olds. However, the results were limited by the fact that the CCHS data are cross-sectional, and therefore changes over time within the age groups were not evident in our study.
Further, because the samples were small, the parameters used to define the age cohorts were necessarily broad. Another possibility is a "cohort effect," in which a generation of women who engaged in risky health practices is now in the 20 to 44 age group, resulting in a clustering of poor health practices in this age group. Finally, previous analysis [8] compared smoking and drinking rates between young women and young men rather than comparing age groups of women. More research is needed on the health practices of young women to determine the age groups at greatest risk. Despite healthier behaviours, older women were less likely to report very good/excellent health. It is thought that the higher incidence of chronic health conditions among older women may contribute to their lower ratings of health.

Immigrant Status
Comprehensive explanations for the differences between long-term immigrants' and non-immigrants' health practices are complex. Some of the differences may be due to income disparities between immigrant and non-immigrant women. [8] Increased physical activity, attention to diet and consultations with alternative health providers are strongly associated with higher incomes and are also significantly more prevalent among non-immigrants. [8] Age may also be a contributing factor, in that immigrant women tend to be older than their Canadian-born counterparts, and advancing age is associated with a more sedentary lifestyle. [8] Finally, the consistently higher reports of smoking by non-immigrants may have a cultural as well as an economic component.

Marital Status
The results confirm prior research on the protective effects of partnership. [32] Given that partnered women tend to have higher household incomes, their higher rates of consultation with alternative health care providers and attention to diet are consistent with the association between income level and these practices. The slightly greater likelihood that married women will report very good/excellent health fits with numerous studies indicating improved mental and physical health with marriage. [44,45] Further research is required in this area to determine additional factors influencing the health practices of single women in relation to health status. Of particular interest is the impact of social values and community resources on women's ability and motivation to adopt self-care practices, and the subsequent impact on health status.

Limitations to the Analysis
Because of the cross-sectional nature of the data in the CCHS, it was not possible to draw conclusions about causal relations and outcomes. This limitation was particularly relevant to the analysis of age-related differences in women's health behaviours, in that data on changes over time within the age groups would have strengthened the analysis. A further limitation relates to how health practice variables should be operationalized. The smoking variable determined current smoking status. As a result of the substantial decrease in smoking in the past three decades (from 50% to 30%), it is possible that a large number of those who were not current smokers were actually former smokers. Grouping former smokers with non-smokers could result in misleading differences between socio-demographic groups of women. The "active" classification of exercise includes both high-intensity and moderate exercise, which could also lead to discrepancies in results between socio-demographic groups.
Finally, because of limited cell sizes, the data in the tables on income, education, immigrant status and marital status are not adjusted for age. As a result, age differences may confound the results, and the reader should interpret the findings with caution.

Policy Recommendations
This study has highlighted subgroups of women who demonstrate particularly poor health practices: women of low income, established Canadian immigrants (as compared with recent immigrants), and women living in Ontario and Quebec. It has also pointed to possible discrepancies between women's health practices and their self-rated health, particularly among young women and women from Quebec. On the basis of our findings, the following recommendations are made for future policy consideration.
1. Develop more sensitive indicators to capture other potential influences on women's health. Kawachi et al. [44] found that women experience higher rates of illness and death in those U.S. states that allow them lower levels of political participation and economic autonomy. Developing indices to measure the effects of broader influences on health, such as women's political participation, economic autonomy, employment and earnings, and reproductive rights, would provide important information with respect to women's health.
2. Develop the tools and resources necessary to conduct longitudinal studies on the personal health practices and health outcomes of female immigrants. The lack of data on immigrants' health practices limits the extent to which we can understand the underlying causes of the changes in those practices and the subsequent impact they may have on health outcomes.
3. Develop the tools and resources necessary to gather more data on the factors beyond traditional socio-demographic variables that may affect health practices and perceptions of health. In particular, more sensitive indicators are needed to help understand influences such as the education and income level of women's parents on their health practices, as well as the influence of cultural and geographic norms.
4. Address the lack of information on the health practices of women in rural areas, and in particular in Nunavut, the Yukon and the Northwest Territories. Over 20% of women live in rural areas. [8] Women in rural areas are known to have substantially lower levels of income, education and employment. Given the strong association between these factors and poor health practices and self-rated health, the lack of data on Canada's rural areas, and particularly on women in the North, is disconcerting.
5. Address the lack of information available on Aboriginal women's health practices by ensuring their inclusion in national surveys. Given what is already known about the poor health practices and self-rated health of Aboriginal women, [7] there is a pressing need for more information on the health practices of this vulnerable subgroup.
6. Conduct further research to elucidate the contextual factors underlying regional differences in culture, values and behaviour as they relate to health and health practices.
7. Acknowledge the importance of socio-economic and cultural conditions and their influence on health practices. The results from this study support findings from previous analyses suggesting that socio-economic factors play a significant role in health practices.
Therefore, in addition to focusing on individual health behaviours, there is a need to look at policy directives that aim to decrease both socio-economic inequities and inequities between cultures.
8. Develop targeted health education programs promoting healthy individual behaviours for Aboriginal women, women with lower incomes and young women (20-44 years of age).
9. Conduct more research to understand the cultural reasons for the health practices of subgroups of women. Of particular concern are vulnerable subgroups, including
Approaches, barriers, and facilitators to abortion-related work in U.S. health departments: perspectives of maternal and child health and family planning professionals

Background
Public health agencies in the United States have engaged in abortion-related activities for nearly 50 years. Prior research indicates that, while most state health departments engage in some abortion-related work, their efforts reflect what is required by law rather than the breadth of core public health activities. In contrast, local health departments appear to engage in abortion-related activities less often but, when they do, initiate a broader range of activities.

Methods
This study aimed to: 1) describe the abortion-related activities undertaken by maternal and child health (MCH) and family planning professionals in state and local health departments; 2) understand how health departments approach their programmatic work on abortion; and 3) examine the facilitators and barriers to whether and how abortion work is implemented. Between November 2017 and June 2018, we conducted key informant interviews with 29 professionals working in 22 state and local health departments across the U.S. Interview data were thematically coded and analyzed using an iterative approach.

Results
MCH and family planning professionals described a range of abortion-related activities undertaken within their health departments. We identified three approaches to this work: those mandated strictly by law or policy; those initiated when mandated by law but informed by public health principles (e.g., scientific accuracy, expert engagement, lack of bias, promoting access to care) in implementation; and those initiated by professionals within the department to meet identified needs. More state health departments engaged in activities when mandated, and more local health departments initiated activities based on identified needs. Key barriers and facilitators included political climate, funding opportunities and restrictions, and departmental leadership.

Conclusions
Although state health departments are tasked with implementing legally-required abortion-related activities, some agencies bring public health principles to their mandated work. Efforts are needed to engage public health professionals in developing and implementing best practices around engaging in abortion-related activities.

Background
State and local health departments are a critical part of the public health infrastructure in the United States, tasked with protecting and promoting the health of individuals and communities across a wide range of health issues. The specific roles and responsibilities of these agencies have evolved over time and, as a result, their organizational structure and authority vary considerably across level of government, geographic region, and area of public health [1]. Maternal and child health (MCH) and family planning are key programmatic areas of state and local health departments [2,3] and have been studied extensively (e.g., [4,5]). In contrast, while health departments have engaged with abortion for nearly 50 years [6], these activities have received much less scholarly attention.

Health departments and abortion
The earliest governmental public health efforts related to abortion followed soon after its legalization and involved established public health tasks, including data surveillance and clinical quality improvement [6,7].
In the late 1960s, the federal Centers for Disease Control and Prevention (CDC) established the national abortion surveillance system, based on reporting by and collaborations with state health departments. These data have been used to document the number and characteristics of women having legal abortions, as well as the safety of different procedures and care settings [8]. The CDC's role in investigating abortion morbidity and mortality in collaboration with state and local health departments provided data for major judicial decisions and clinical improvements [9]. In 2016, 47 state health departments reported annual abortion data to the CDC's surveillance system [10], and nearly all states report detailed abortion data on their departmental websites that are available to the public [11,12]. The federal Title X Family Planning Program, first enacted in 1970, has also required health departments to engage with abortion. Title X grantees, which include some state and local health departments [13], distribute funds to local clinics to deliver contraception, sexually transmitted infection, and other preventive services. Regulations have restricted Title X funds from paying for abortion services, but had required that pregnant women be offered non-directive information and counseling about their pregnancy options (including prenatal care, adoption and abortion) and be given referrals upon request [13,14]. In 2019, the Trump Administration revised the regulations governing the Title X Program, prohibiting referrals to abortion and removing the requirement for pregnancy options counseling [15]. Title X grantees are responsible for ensuring these regulations are followed; thus, many state and local health departments have attended to abortion as part of their Title X activities for many years. Over the past decade, health departments have taken on expanded roles in response to an increasing number of abortion-related policies enacted by state legislatures [16]. Some of these laws are antithetical to public health principles [17,18]. For example, some state legislatures have tasked health departments with implementing regulations that single out abortion-providing facilities with requirements that are not mandated for facilities that offer other procedures of equivalent risk [19]; these regulations are not based in scientific evidence [20,21]. These laws have resulted in facility closures that limit women's ability to obtain abortion care [22,23]. Some state legislatures have required health departments to produce and distribute health information materials for abortion-seeking patients that include scientifically inaccurate information, such as a disproven link between abortion and breast cancer [16]. This trend raises important questions about the use of the government public health infrastructure for the political purpose of impeding abortion care. Ideally, if health departments were to have a role in abortion, whether and how to engage in an abortion-related activity would be determined by identified needs and potential for positive impact on patients' health [17]. Health departments would use established frameworks to guide and monitor the abortion-related activities provided in any public health jurisdiction, and would integrate abortion within the scope of their maternal and reproductive health activities.
The CDC's Essential Public Health Services (EPHS) framework, for example, describes and organizes the spectrum of public health tasks into ten types of core activities, such as monitoring the health status of the population, providing health information to the public, facilitating linkages to needed services, providing quality assurance, developing the public health workforce, and evaluating health services [24]. A public health approach to abortion would be informed by this type of broad-based framework and grounded in public health principles, such as basing policy and practice on the best available scientific evidence, assuring conditions in which people can be healthy, promoting health equity, meeting community needs, and assuring availability of health care (e.g., [1,17,18,25-28]).

Examining abortion-related activities in health departments
Research on the abortion-related activities of health departments has been limited. Most is known about the history of federal and state involvement in abortion surveillance [6,10]; much less is known about the programmatic abortion-related activities of state and local health departments. In a previous study, we systematically investigated the public-facing websites of state and local health departments to describe their activities related to abortion [11]. We coded all mentions of abortion on these website pages using the EPHS framework in order to understand the scope of these efforts. We found that most state health departments engage in some abortion-related activities; however, these largely reflect legal requirements rather than the range of core public health activities outlined by the EPHS framework. As expected, nearly all states conduct data surveillance and enforce some laws related to abortion. Activities to educate the public and provide referrals to services were mandated by legislation, rather than evidence-based health promotion goals. None of the state health departments were engaged in innovative research activities to develop best practices. We also found that few local health departments addressed abortion, although those that did engaged in a broader range of core public health activities. The website study provided a useful window into understanding the typical abortion-related activities of state and local health departments, but did not provide an in-depth look at those activities, nor did it examine how the health departments approached mandated activities, or the reasons health departments took a particular approach. In the present study, we seek to better understand the abortion-related work of state and local health departments by interviewing public health professionals. Our specific aims were to: 1) describe the abortion-related activities undertaken by MCH and family planning professionals in state and local health departments; 2) understand how health departments approach their programmatic work on abortion; and 3) examine the facilitators and barriers to whether and how abortion work is implemented.

Participant recruitment
Between November 2017 and June 2018, we conducted key informant interviews with state and local health department employees based in MCH and/or family planning divisions. We chose these divisions because abortion would fit within their scope of service (in concept, if not in practice). We employed a purposeful sampling strategy to identify potential respondents.
Respondents were eligible if they were currently working or had previously worked in MCH, family planning or an equivalent division within a state or local health department. We identified potential respondents through professional directories, professional conferences targeting state and local leaders, reviews of state and local health department websites, our team's professional networks, and referrals from other respondents (i.e., snowball sampling). We contacted potential respondents by phone and/or email to request their participation in the study. Prior to the interview, we sent all respondents an information sheet that described the study aims and procedures. We adjusted our recruitment strategies over the study period to capture diverse geographic representation and a balance between state and local health department representation in the final sample, so that we might examine findings by these key characteristics. We explicitly aimed to capture a range of experiences by geographic region and department level, based on our understanding of differences in their roles and responsibilities [1]. We approached 66 individuals as potential respondents over the study period. Twenty-two agreed to participate and completed the interview, seven referred the interviewer to a different contact within their agency, 12 expressly declined participation, and 25 did not respond to the request. We tracked reasons for declining, which included lack of time and/or interest, lack of relevance to their work, unwillingness to take steps to receive approval from superiors, denial of agency permission, and concern over the political implications of the topic. We considered recruitment complete when sufficient diversity in geographic and state/local representation had been achieved and no new themes emerged from interviews. A few respondents invited colleagues to participate in the interview to add other perspectives. As a result, the final sample for this analysis comprised 29 MCH/family planning respondents representing 22 health departments. All but one respondent was a current health department employee. The distribution of interviews by department type and geographic region is provided in Table 1.

Study procedures
Interviews were semi-structured, following a general interview guide but allowing respondents to introduce topics that they thought were relevant to the discussion. The interview guide included questions about their department's activities related to maternal and reproductive health care (including prenatal care, family planning, and abortion); the motivations for developing these activities; and the barriers and facilitators to integrating abortion into their department's activities. Specifically, we asked about programmatic activities commonly undertaken by MCH and family planning divisions (interventions, programs, policies and tools) rather than the data surveillance or facility regulation often done elsewhere in health departments. Questions were open-ended and modified over time to probe emerging themes. We note that these interviews were conducted prior to the Trump Administration's 2019 changes to the Title X Program [15]; therefore, responses may reflect prior Title X policies or activities that are no longer in effect. One member of the study team conducted all interviews over the phone. Interviews lasted 30 to 90 minutes. Interviews were audio-recorded and transcribed verbatim, and field notes were written at the end of each interview.
Respondents were offered a $50 gift card in appreciation of their participation. The study protocol was reviewed and deemed exempt by the institutional review board of the University of California, San Francisco.

Analysis
The analysis used a hybrid approach to thematic analysis that included both deductive coding based on the primary research aims and inductive coding of themes that emerged from the data [29,30]. First, we categorized all abortion-related activities described by respondents using a previously developed codebook based on the CDC's 10 Essential Public Health Services (EPHS) framework [11]. One author extracted all interview text describing abortion-related activities into a spreadsheet. The study team coded a short list of these activities using the extant codebook, discussed discrepancies, and revised the codebook. Three authors then independently coded all abortion-related activities, with at least two authors coding each activity. Together, the team resolved coding discrepancies and made final decisions about code application by consensus. Next, two authors independently reviewed a subset of interview transcripts and developed preliminary thematic codes regarding the approaches to, barriers to, and facilitators of abortion-related activities. These were revised through discussion and applied to all transcripts using Dedoose qualitative data management software (SocioCultural Research Consultants, 2016). The first author analyzed the coded data for thematic patterns, including commonalities and differences across interviews. We examined how themes varied across respondent characteristics, specifically department level and geographic region. The quotations presented indicate whether the respondent was from a state or local health department and their region, except in cases where we were concerned a health department could be identified. All members of the team reviewed all transcripts and provided ongoing input on the analysis. COREQ guidelines for the reporting of qualitative research were used to guide the presentation of these methods and results [31].
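The deductive half of this procedure amounts to assigning each extracted activity a code from the ten-category EPHS codebook and then counting distinct respondents per code and department level, the kind of figure reported alongside Table 2 below. The following is a minimal Python sketch of that tallying step; the activity records and short labels are invented for illustration, and the actual coding was performed by multiple authors with consensus resolution in Dedoose, not programmatically.

```python
from collections import defaultdict

# Codebook: the ten Essential Public Health Services (short labels).
EPHS = {
    1: "Monitor health status", 2: "Diagnose and investigate",
    3: "Inform and educate", 4: "Mobilize partnerships",
    5: "Develop policies", 6: "Enforce laws",
    7: "Link to services", 8: "Assure workforce",
    9: "Evaluate services", 10: "Innovative research",
}

# Hypothetical coded activities: (respondent id, department level, EPHS code).
coded_activities = [
    ("R01", "state", 6), ("R01", "state", 7),
    ("R02", "local", 7), ("R03", "local", 5),
    ("R03", "local", 7), ("R04", "state", 6),
]

# Count distinct respondents per EPHS code within each department level.
counts = defaultdict(set)
for respondent, level, code in coded_activities:
    counts[(level, code)].add(respondent)

for (level, code), respondents in sorted(counts.items()):
    print(f"{level:5s} EPHS{code} ({EPHS[code]}): {len(respondents)} respondent(s)")
```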
The most common were activities that Link to Services (EPHS7, 10 respondents), Enforce Laws (EPHS6, 6 respondents), and Develop Policies (EPHS5, 5 respondents). None of the state or local respondents described activities relating to EPHS2 (Diagnose or Investigate) or EPHS10 (Innovative Research); in addition, none of the local respondents described EPHS3 activities (Inform and Educate). One state health department reported no abortion-related activities. Others reported very few activities. To some extent, this may reflect that MCH and family planning divisions are not necessarily involved in the entire range of abortion-related work in which a health department engages. As one state respondent in the Midwest explained, "We're a really big health department, so there could be [other] people. There probably is a whole licensing division that licenses the facilities, but I don't work them closely." A state respondent in the West made a similar point about the department's Medicaid division, which was responsible for the implementation of legislation and activities around use of state Medicaid funds to pay for abortion. Some respondents asserted that the lack of abortion-related work in their department was not deliberate; as one state respondent in the Midwest noted, abortion had simply never come up in discussions: "It's not that there is an intentional effort to not talk about or deal with or think about it." A respondent from another state health department in the Northeast that engaged in few abortion-related activities similarly noted, "It has never come up. I'm trying to even imagine in what universe that would come up."

How health departments approach abortion-related activities
Respondents described the impetus for the abortion-related activities undertaken within their divisions of the health department. Based on our prior research [11], which indicated that legislative mandate was a strong driver of abortion-related work, we thematically coded each department's activities based on the extent to which efforts were driven by legislative mandate or initiated by the department.
During this analysis, we identified a third scenario, a middle ground, in which MCH and family planning professionals in health departments found flexibility as they implemented mandated activities. In this section, we describe a continuum of approaches to abortion-related work.

Table 2. Abortion-related activities described by respondents, by Essential Public Health Service (number of state respondents / number of local respondents):

EPHS1 - Monitor health status to identify and solve community health problems (3 / 1)
• Collect abortion data from facilities ("induced termination of pregnancy" forms)
• Prepare regular surveillance reports (e.g., for state legislature)

EPHS2 - Diagnose and investigate health problems and health hazards in the community (0 / 0)
• (No examples provided)

EPHS3 - Inform, educate and empower people about health issues (5 / 0)
• Develop and update state-mandated abortion information, consent forms, and websites ("Women's Right to Know")

EPHS4 - Mobilize community partnerships to identify and solve health problems (2 / 2)
• Convene provider workgroups to address availability and provision
• Partner with community and social service agencies to address availability and referrals
• Develop inter-agency partnerships to address referrals

EPHS5 - Develop policies and plans that support individual and community health efforts (3 / 5)
• Develop internal policies related to pregnancy options counseling and abortion referrals
• Develop administrative policies and systems for Medicaid or other state coverage of abortion (e.g., enrollment forms, billing processes)
• Review policies to understand the extent that abortion-related services are allowed by law

EPHS6 - Enforce laws and regulations that protect health and ensure safety (11 / 6)
• Enforce federal/state requirements regarding funding for abortion
• Implement state laws that allow use of public funding to pay for abortions
• Develop and update state-mandated abortion information, consent forms, and websites
• Collect mandated data and prepare reports on abortion-related topics, as required by law
• Conduct research and provide legislative testimony on abortion-related policies as requested by legislature
• Maintain resource directories for abortion services and/or alternatives to abortion, as required by law
• Implement laws that require the provision of abortion alternatives (e.g., funding for CPCs)

EPHS7 - Link people to personal health services and assure the provision of health care when otherwise unavailable (7 / 10)
• Provide pregnancy options counseling (e.g., at Title X clinics)
• Facilitate linkages for women seeking abortion services (e.g., information guides, case management, insurance coverage, funding, transportation, childcare)
• Provide resources linking to alternatives to abortion (e.g., CPCs, hotlines)
• Provide direct abortion services at clinic/hospital
• Pay for abortion services through state public funding

EPHS8 - Assure a competent public and personal healthcare workforce (1 / 4)
• Train Title X providers about pregnancy options counseling and referrals
• Train abortion providers on Medicaid policies, billing, reimbursement, presumptive eligibility, etc.
• Train local clinic staff and programs about abortion (generally), pregnancy options counseling, and referrals

EPHS9 - Evaluate effectiveness, accessibility and quality of personal and population-based health services (6 / 2)
• Conduct quality assurance monitoring of pregnancy options counseling at Title X clinics
• Conduct quality assurance monitoring of publicly-funded CPCs
We provide examples of how health departments went about their abortion-related work in three categories: 1) executing mandated activities as prescribed, 2) bringing a public health approach to mandated activities, and 3) initiating public health-focused efforts without mandate to meet identified needs. The distribution of these categories is presented in Table 3. Some respondents described more than one approach to abortion-related work; therefore, a health department may fall into more than one category. This indicates that, within some departments, different activities were undertaken for different reasons or approached in different ways.

Category 1: Executing mandated activities as prescribed
Respondents from 14 of 22 health departments described engaging in abortion-related activities that were executed only when required by law, and only in the ways prescribed by the law. This was particularly common for state health departments (11 of 12 state respondents) but also reported by a few local health departments (3 of 10). Respondents often cited the specific laws and regulations that dictated their abortion-related work. For example, as respondents in MCH and family planning divisions, many respondents described their responsibility for ensuring that clinic sites met the legal requirements of the federal Title X Family Planning Program. One state respondent in the West described being "vigilant" about ensuring that pregnancy options counseling be made available at their Title X family planning clinic sites, as required (at the time of the interview): "The first thing that we do according to Title X (and I can cut and paste that regulation for you) is that we offer options counseling for women who request it. That, of course, is something all of our sites do. And we check up every site that we have … We follow up to make sure that happens." Another state respondent in the South explained their department's legal responsibility in ensuring compliance with policies regarding how (and how not) to make referrals to abortion providers: "To the extent that we discuss [abortion] is within the range of a general understanding that that is an option …. All we can do, legally, as a Title X site, by federal mandate, is say 'Well, if you want an abortion, here's a number to contact. You have to set up the appointment.' We literally cannot do it. We are, by funding, not allowed to do it [for her]." Some state respondents described activities to develop information for people seeking abortion, either for public availability on the health department website or direct distribution to patients at the abortion clinic appointment. For some of these states, the process of developing and distributing information materials was strictly prescribed. State legislation dictated the specific information that the health department had to include in the materials: "It's all spelled out in the law... You have to read about your procedure, you have to look at an image of a fetus at whatever week you might be in, and you [have to] print out a form at the end and sign it." Other examples of mandated activities included reporting of abortion data to the state legislature, developing certification requirements for abortion providers being reimbursed by the state, and executing policies regarding state insurance coverage for abortion care.
Category 2: Bringing a public health approach to mandated activities
Five of 22 respondents, all in state health departments, described engaging in abortion-related activities that were initiated when mandated by law, but implemented with some amount of flexibility. In these cases, health department respondents described taking an active role in decision-making around how to implement the required abortion-related activities. They met legal or funding requirements and also incorporated common public health principles (an emphasis on scientific accuracy, clinical expert engagement, presenting unbiased and neutral information, and/or promoting access to care) into the department's abortion-related work. The most common activity under this theme was the development of state-mandated information materials about abortion. Some states described examples where all or some of the content of websites and/or patient materials regarding abortion was left to the discretion of the health department. In such cases, health departments often sought outside clinical expertise (e.g., from medical boards, nurse consultants, obstetrician/gynecologists), working to ensure that materials met clinical and scientific standards. As one respondent described: "The legislature passed a law that required a state-mandated consent form for abortion, and tasked … the department of public health with developing this form …. And so [the department] brought together a group of abortion providers … to help write this consent form, to make sure it was accurate and appropriate, and useful to providers." Another state respondent in the West similarly described working with the state medical board to ensure that the information on its mandated website was both "scientifically based and … unbiased in its approach." Presenting unbiased and neutral information was an explicit goal, as members of the medical board and the health department held politically diverse opinions about abortion. Together, they "agreed on the common ground that it's not about what they believed, that it was about what was the best information to be providing to a woman making a choice about her pregnancy and helping her for truly informed consent." The same department developed an online resource guide for pregnant women, mandated generally by state law but approached using public health principles to include a broader range of services than required by the state. The resulting guide included information about where to seek prenatal care, pregnancy support, faith-based social services, health services, as well as family planning and abortion care. In general, respondents felt positively about bringing public health principles to the implementation of the state requirements, seeing it as "fortunate" that the health department could "take the legislation that could have gone otherwise" if not guided by the health department. One state respondent in the Midwest noted that their division within the health department "volunteered, actually, because we wanted to … make sure it was done appropriately and with accuracy." Another in the West agreed: "I was just so pleased that we took the legislation that could have really been harmful to women's access... I was really glad to see that we were allowed to make it something that was actually useful and met everybody's needs." In a few cases, the efforts to bring public health principles to legal or funding requirements resulted in ongoing health department engagement around abortion.
For example, for one state department in the Northeast, a state-mandated information requirement was the impetus for convening a working group of abortion providers, but it created an opportunity for the health department to think more comprehensively about their abortion work. The department continued to convene the provider group to help identify clinic training needs, barriers to access, and programmatic and policy priorities related to abortion.

Category 3: Initiating public health approaches to abortion without mandate
Ten of 22 respondents described initiating abortion-related activities that were not prompted by a legal or funding requirement but by identified need. Department-initiated efforts were much more common for local health departments (8 of 10) than state health departments (2 of 12). Often, departments aimed to incorporate abortion into ongoing activities around pregnancy-related care. Respondents described varied activities related to abortion, although many examples focused on improving referrals to abortion services. One local health department began offering pregnancy options counseling and referrals to high school students using the department's mobile clinic. Respondents in two separate local health departments described wanting to ensure "a warm handoff" for patients receiving a positive pregnancy test at their public family planning clinics. As one local respondent in the South explained, "It is one thing to hand a patient a few papers and say 'go do this.' It's another thing to [say], 'Let me really link you with this person.'" A few health departments initiated activities to reach marginalized populations with information on how to access abortion care. One state department in the West began by asking "Where are we going to find folks who might benefit from the services?" and built networks with social service providers, community action organizations, and groups working with migrant farmers to reach undocumented immigrants who might be seeking abortion. A local respondent in the West described working with their department's home visiting program to ensure that women enrolled in the program received unbiased counseling and referrals about their pregnancy options: "[We want to make sure staff are] not just finding that she's pregnant and saying, 'Great! You're pregnant! How can I help you to have a healthy baby?' but 'So, you're pregnant. What does that mean for you? What do you need? Let's have a conversation about it. How's your mental health? What kind of referrals can I give you?'" Other department-initiated abortion activities focused on expanding clinical services, including: providing abortion services in department outpatient clinics and hospitals, improving the quality of post-abortion contraceptive care, working with community health centers to expand access to medication abortion, and planning for potential increases in abortion patient volume if neighboring states enact restrictive abortion policies. One state health department in the West began collecting and reporting on abortion data, not because it was required by law, but because they did so for other, similar public health issues of interest: "It's not mandated, however, that we take the data and report on it. That is something that's kind of born from our shop. I think that's important. That a public health department is actually interested in the abortion data, tracking it and reporting, kind of making it part of the story, making it part of the narrative [of family planning success]."
Initiating new abortion-related activities often required convincing other health department colleagues of abortion's relevance. As one local respondent in the Northeast described: "We had special symbols on how to find places in the [resource] guide. And one of them was a symbol for abortion care. I remember being in a meeting with someone who worked in another part of [the city], and they were just surprised that there was abortion in there. And they were like, 'Do you really need that?' And I was like, 'Yeah, you really need that.'"

Facilitators of and barriers to abortion-related activities
Across all three categories of approaches, respondents described factors that affected both whether and how their departments engaged in abortion-related activities. These facilitators and barriers were related; that is, the overarching factors that support abortion work in one health department hamper it in another. We describe these briefly.

Political climate
Many health department respondents discussed the state or local political climate as either a facilitating or restrictive factor in engaging in abortion-related activities. Even in the absence of specific laws mandating health departments to engage with abortion in particular ways, they felt the impact of the environment in which they operate. For one state respondent, conservative state politics keeps the department from initiating abortion-related activities, especially those that facilitate access: "I think the political climate here would probably not promote any abortion-related services …. Even in the absence of Title X regulation putting a prohibition on it, I don't think that the state would touch it with a ten-foot pole." In contrast, other respondents operated in a political environment where leaders supported abortion rights and gave the department freedom to implement abortion-related activities that align with public health principles. One local respondent in the West described their department as "very lucky" for being able to participate in campaigns that promote access to abortion, supported both by the health department director and the city's mayor: "I think we have the luxury to do that here in a way that you wouldn't [elsewhere]." A few respondents described the need to take into account the range of political opinions across their state when initiating or considering how to implement abortion-related work. A state respondent in the West explained: "We're diverse, politically. We are very progressive, [but] we have an incredibly conservative side of the state … It's something we're challenged with and just have to work with. We can't just assume that everybody in our state thinks the way we do."

Funding
Many respondents noted that whether and how their department engaged with abortion was affected by the specific requirements of federal and state funding sources. One state health department respondent in the Midwest noted: "Our [department's] major program is the Title X Family Planning Program … and that has such a separation from abortion that I think that everybody's just really careful to try to not get too involved in abortion services. Because we want to be compliant with our funding source through that program." For some respondents, at both the state and local level, fear of losing existing funding due to restrictions on Title X funding led to trepidation about engaging in abortion-related activities, as they feared it could jeopardize funding for their entire program.
One local respondent in the South described: "anything [that may] screw with our Title X funding is a terrible idea." This was most commonly expressed in relation to Title X funding, but was described about other sources as well, such as the Title V Maternal and Child Health Block Grant. Respondents also discussed feeling constrained by specific demands of funding, which gave limited room for creativity, flexibility, or innovation in whether they initiated their own activities or how they implemented mandated activities. Since no federal funding and little state funding is specified for or inclusive of facilitating access to abortion, such departments have no mechanism through which to explore including abortion in their work. A state respondent in the South noted: "A lot of times, you just are kind of siloed to what the funders are asking you, rather than having just a general discussion of what are the needs of the community, and how can we manage our programs to better fit those needs of the community." In contrast, respondents based in departments that received funding from diverse funding streams and those that did not receive Title X funds described feeling less constrained by funding. A few described the opportunities that came with being in a state that provides insurance coverage for abortion care with public funds, and a few had access to funding specifically for initiating abortion-related activities. One local respondent in the West described being able to initiate a public information campaign about availability of abortion services with the receipt of dedicated internal funding: "I think [abortion] was always kind of under the surface, but not a priority. And then it became a priority. And then we were able to do things about it because there was a windfall of money, or we had some savings …. I thought 'Let's spend it on this campaign.' And I had support from the staff to do that. I was not taking money away from any other program. That might have been really a hot button."

Departmental leadership
Respondents believed that individual leadership within their department could nudge a department toward or away from bringing a public health approach to abortion-related activities. One local respondent in the West described an environment where division staff were open to the idea of addressing abortion in their work, but did not have the support of leadership to develop or implement ideas. When the leadership changed, abortion "became something that we did, and also talk[ed] about more." Having department or division leadership that prioritized "an evidence-based approach to public health," innovation and, more specifically, inclusion of abortion in public health, was also described as crucial to facilitating abortion-related activities. This was particularly true when the political climate might be less supportive. A state respondent noted: "I am eternally grateful to be doing this work [on abortion here]. I can count on the support of my boss and my boss's boss, and her boss, who's the Commissioner …. And it's not that we haven't had troubles with this; we have anti-choice state legislators that are trying to pass anti-choice laws every year." In contrast, departmental leadership that was opposed to abortion or did not want to initiate new public health-oriented activities related to abortion was seen as a formidable barrier, even in states with supportive policy environments for abortion.
Proposing new activities related to abortion, one state respondent noted, "wouldn't make it past our division chief." A local respondent in the Northeast described the challenge of changing the minds of long-term staff about the department's involvement in sensitive subjects like abortion: "It's like putting a crack in a boulder. Somebody has to keep hitting it over and over and over again …. [Our greatest success has been] where there are one or two people who are just amazing and put in heroic amounts of effort and time into it."

Discussion
In accounts of the roles of public health professionals in health departments, there are typically two poles of work described: one that requires implementing laws and upholding the public health bureaucracy, and the other that involves initiating activities to improve and advocate for changes in social, policy, and environmental factors that adversely impact health [32]. Consistent with this framework and our previous research [11], this study found evidence of the strong influence of federal and state policies on the abortion-related activities undertaken within MCH or family planning divisions of health departments. In a time of increasing state legislation around abortion [16], it is not surprising to find that state health departments, in particular, are implementing abortion-related activities dictated by law. Fundamentally, implementing and enforcing laws and policies is a key responsibility of health departments. We also found clear evidence that some health departments, particularly local health departments, initiate abortion-related activities, such as facilitating linkages to abortion services and assuring the provision of abortion in the community, guided by core public health principles and frameworks (e.g., [17, 24, 26-28]). A key finding of our in-depth interviews, one not identified on public-facing websites and not addressed in frameworks that describe polarities of public health practice, is that, even in the context of legally required activities, some health departments found room to incorporate public health principles. For example, study respondents described bringing research evidence to mandated activities and convening clinical experts to ensure the products produced would both be evidence-based and meet their patient care needs. One concrete example was the development of state-mandated information ("Women's Right to Know") materials distributed to women presenting for abortion. In some states, the details for these materials were formally stipulated, and the health department's role was solely to implement the law as written, even if this necessitated including inaccurate information. In other states, however, the law afforded enough flexibility for health department professionals to bring their public health expertise and training to their task. In these cases, respondents spoke of ensuring materials were evidence-based and language was unbiased, engaging clinical experts in the development process, and aiming to make materials useful for providers and patients alike. From this, we conclude that the implementation of mandated abortion-related activities can, at times, be guided by the frameworks, principles and values core to the public health profession. Our findings suggest that regardless of political climate, public health professionals in health departments have a range of options to bring public health principles to abortion-related activities.
Further exploration is needed to understand the factors that allow them to do so, especially in circumstances where the abortion-related activities are mandated by the state legislature. This study has limitations. First, our flexible interview guide allowed for deep exploration, but limited our ability to make comparisons across the entire sample. Second, our findings are limited by the knowledge and experiences of our specific respondents, as well as their willingness to share them with us. As noted by respondents, activities taking place elsewhere in the health department may not be known to those in the MCH or family planning divisions. This likely explains the undercount of vital statistics collection presented in Table 2 (EPHS 1) compared to prior research [8,10,20]. Third, due to the scope of the study, we did not explore differences by respondents' placement within an MCH vs. family planning division, which could affect their engagement with abortion. Fourth, abortion is a politically sensitive topic. It is possible that our respondents were not forthcoming when asked about the facilitators of and barriers to engaging with abortion in their department, despite assurances of confidentiality. Finally, while we reached out to health department officials across states and localities, many potential respondents declined to participate. This may be due to actual or perceived political constraints, departmental policies about engaging in research, or personal comfort talking about abortion. While we did have respondents from a range of geographies and political climates among our sample, the findings may have limited applicability to other health agencies or jurisdictions. In particular, further research is needed to understand the influence of the state abortion policy environment on local health departments, especially in situations where state and local politics do not align. This study also has considerable strengths. To our knowledge, this is the first study that includes perspectives of public health professionals to understand whether and how health departments engage with abortion. Research on the role of health departments in the availability and provision of abortion is minimal; our studies begin to fill that gap. This paper, in particular, adds the voices and experiences of MCH and family planning professionals themselves, a rich source that provides insight beyond that of publicly available websites, materials, or statutes.

Conclusions
This study finds that MCH and family planning professionals in health departments are engaging in a range of abortion-related activities. Much of this work is responsive to federal and state requirements, rather than initiated and guided by core public health frameworks, principles, and values. Nonetheless, there is compelling evidence that some health departments, at both the state and local level and in diverse political settings, are able to bring a public health approach to abortion-related activities. New efforts are needed to engage public health professionals in developing and implementing best practices around abortion-related activities.
Towards Bootstrapping a Chatbot on Industrial Heritage through Term and Relation Extraction

We describe initial work in developing a methodology for the automatic generation of a conversational agent or 'chatbot' through term and relation extraction from a relevant corpus of language data. We develop our approach in the domain of industrial heritage in the 18th and 19th centuries, and more specifically on the industrial history of canals and mills in Ireland. We collected a corpus of relevant newspaper reports and Wikipedia articles, which we deemed representative of a layman's understanding of this topic. We used the Saffron toolkit to extract relevant terms and relations between the terms from the corpus and leveraged the extracted knowledge to query the British Library Digital Collection and the Project Gutenberg library. We leveraged the extracted terms and relations in identifying possible answers for a constructed set of questions based on the extracted terms, by matching them with sentences in the British Library Digital Collection and the Project Gutenberg library. In a final step, we then took this data set of question-answer pairs to train a chatbot. We evaluate our approach by manually assessing the appropriateness of the generated answers for a random sample, each of which is judged by four annotators.

Introduction
Conversational agents or 'chatbots' are a convenient way of making information available, as can be witnessed from the significant growth of chatbots used in all kinds of settings, from banks to public services. Also in cultural heritage settings, chatbots are now being employed more and more to interact with visitors to websites and virtual exhibitions.

Figure 1: Example of a "mill race" or "mill run" that was used to provide continuous water power to mills (image: author).

Although frameworks such as Rasa (Bocklisch et al., 2017) enable the development of sophisticated chatbots that allow for fluent dialogue, an important bottleneck is in collecting and defining the training data for such systems. Training data comes in the form of 'intent-question' pairs, for example: order - Are you open?; Can I order?; Will you deliver? The definition and collection of such training data for any given application domain are challenging and costly, in particular for more specific and content-rich topics such as in cultural heritage settings. The range of possible intents will be significantly larger and more varied than in typical commercial settings such as ordering products or services. We, therefore, explore the use of term and relation extraction from a relevant corpus of language data as a bootstrapping step in identifying relevant concepts that can serve as intents. In this paper, we describe our work towards developing a methodology where we focus on term and relation extraction for end-to-end text generation. This allows us to be independent of the existing resources needed to train a conversational agent. We develop our approach in the domain of industrial heritage, and more specifically on the industrial history of canals and mills in Ireland.

Related Work
Abu-Shawar and Atwell (2016) focus on transforming corpora to a specific chatbot format, which is used to retrain a chatbot system. For this task, the authors use different dialogue corpora, such as the British National Corpus of English (BNC) and the Quran, which is a monologue corpus where verse and following verse are turns.
The main goal of this automation process is the ability to generate different chatbot prototypes that communicate in different languages based on the corpus. In contrast to previous work, our approach does not leverage classification methods to align a question to a predefined intent or answer, respectively. Additionally, leveraging term and relation extraction on a relevant corpus of language data, our approach is not limited to existing resources, such as the Ubuntu Dialogue Corpus.

Data
This section provides insights on the resources used to build a chatbot in the domain of industrial heritage in the 18th and 19th centuries, i.e. the Galway Data Set, the British Library Digital Collection and the Project Gutenberg library.

Galway Data Set
For our work, we initially leveraged 14 online resources to extract the required data for term and relation extraction in the domain of industrial heritage (see Table 1). In addition to online resources, we also leverage Wikipedia, a freely available encyclopaedia that is built by a collaborative effort of voluntary contributors, to further increase the data set for term and relation extraction.

British Library Digital Collection
The British Library Digital Collection (BLDC) includes a collection of digitised books created by the British Library. This is a collection of books that have been digitised and processed using Optical Character Recognition (OCR) software to make the text machine-readable. We used the Curatr online platform (cf. Section 5.2) to access the BLDC and retrieve a corpus in the domain of industrial heritage.

Project Gutenberg library
Project Gutenberg is the oldest digital library, founded in 1971, and aims to digitise and archive cultural works. Most of the items in its collection are the full texts of books or individual stories in the public domain. All files can be accessed for free under an open format layout, and the library stores more than 50,000 items in its collection. Most items are in the English language, but many non-English works are also available. There are multiple affiliated projects that provide additional content, including region- and language-specific works. We selected 100 items from the Project Gutenberg library that represent the 18th and the 19th century.

Methodology
Within our work, we first leverage the Galway data set to extract the most relevant terms and the relations between them in the domain of industrial heritage. We use these terms and relations in the next step to extract sentences from the BLDC and the Project Gutenberg corpus containing these terms and relations.

Term Extraction
For our initial step in extracting the most relevant terms within the targeted domain, we leveraged the identified online resources and collected documents relating to the city of Galway. Once the documents were collected, we employed the Saffron framework (see Section 5.1) to extract the 100 most relevant terms from the collected documents, with a maximum term length of four words. Candidate term retrieval is the first step in the term extraction process. Saffron extracts potential candidate terms using noun phrase extraction; candidates are filtered based on term length, as specified in the configuration. After selecting candidate terms, Saffron evaluates their relevance to the domain and ranks them accordingly from the most relevant to the least relevant. Saffron uses a combination of scoring functions calculated for each of the candidate terms.
It combines functions such as comboBasic, totalTfIdf, cValue and residualIdf (Astrakhantsev, 2018), which are based on occurrence frequencies. In more detail, we leverage frequencies of candidate terms across the documents or occurrences as part of other candidate terms, as well as measures based on reference corpora, i.e., comparing occurrences in the data set versus a generic reference data set (the "weirdness" function, with Wikipedia being used as the reference corpus). Finally, a voting algorithm (Zhang et al., 2008) is used to combine the functions. The final set of terms is selected from the original list of candidate terms after ranking, by filtering the top 100 terms of the list.

Relation Extraction
In the next step, we first parsed the initial data set and extracted the dependencies between tokens using the Stanza dependency parser (Qi et al., 2020). We retrieved the dependencies where the extracted terms were identified with their relation, e.g. subj(flow, water). Finally, we identified triples, where two terms are linked through a relation. As an example, from the extracted dependencies subj(leave, canal) and obj(leave, river) we construct the triple subj_obj(canal, leave, river).

Conversational Data Set Creation
In the final step, the extracted terms and the relations were used to query the BLDC corpus and the Project Gutenberg library to obtain more relevant data to train the chatbot system. With this, we obtained four different data sets, i.e.:
• subject or object term data set: a subject or object term has to be present in the sentence from the BLDC and the Project Gutenberg corpus.
• subject and object term data set: the subject and the object term of the same triple have to be present in the sentence from the BLDC and the Project Gutenberg corpus.
• subject or object term and relation data set: the subject or object term with its relation within the same triple have to be present in the sentence from the BLDC corpus or the Project Gutenberg corpus.
• concatenated corpus: a weighted corpus of the sub-corpora mentioned above is generated.
The final data set to train the chatbot is derived from the Galway data set, the BLDC and the Project Gutenberg corpus, which represents a broad overview of the industrial environment of late 18th and 19th century Ireland. As discussed, from the collected documents, key terms and relations between them were identified using the knowledge extraction framework Saffron and the dependency parser Stanza. The terms and relations serve as a means of extracting high-relevance sentences that inform the chatbot's proficiency. This resulted in 659,433 relevant sentences (Table 2), which contained at least one of the extracted terms. We used 90% of the sentences for training and 10% for validation (development set) purposes. We filter this corpus based on subject and object terms in combination with the relation that appeared in the sentence, resulting in four sub-corpora for chatbot generation. From the held-out evaluation set, 50 sentences were randomly selected for manual evaluation by the four annotators.

Question Generation
As end-to-end chatbots are trained based on question-answer pairs, we use the extracted terms and relations for the question part and embed them within manually defined questions. As an example, the extracted term canal would become "What is a canal?". Table 3 shows the patterns used to construct the questions needed to train the chatbot.
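To make the triple-construction step concrete, the following is a minimal sketch of how such triples can be derived from Stanza's Universal Dependencies output. The term list, the function name and the simplified subj/obj notation are illustrative assumptions for this sketch, not the authors' actual implementation (Stanza's UD labels are nsubj and obj rather than the shorthand used above).

```python
import stanza

# stanza.download("en")  # one-time model download
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

# Illustrative subset of the terms extracted by Saffron (assumption).
TERMS = {"canal", "river", "water", "mill", "lock"}

def extract_triples(text):
    """Build (subject, relation, object) triples from nsubj/obj dependencies."""
    triples = []
    for sent in nlp(text).sentences:
        # Collect the subject and object dependents of each governing word.
        args = {}  # head index (1-based) -> {"subj": lemma, "obj": lemma}
        for word in sent.words:
            if word.deprel == "nsubj":
                args.setdefault(word.head, {})["subj"] = word.lemma
            elif word.deprel == "obj":
                args.setdefault(word.head, {})["obj"] = word.lemma
        for head_idx, slots in args.items():
            subj, obj = slots.get("subj"), slots.get("obj")
            relation = sent.words[head_idx - 1].lemma
            # Keep only triples whose subject and object are extracted terms.
            if subj in TERMS and obj in TERMS:
                triples.append((subj, relation, obj))
    return triples

# Expected (parser permitting): [('canal', 'leave', 'river')],
# i.e. the triple subj_obj(canal, leave, river) from the running example.
print(extract_triples("The canal leaves the river near the old mill."))
```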
Using the OpenNMT toolkit (Section 5.3), the chatbot learns to properly respond to a question through the identified sentences, which contain the relevant (extracted) terms and relations.

Experimental Setup
In this section, we give an overview of the Saffron framework used for term and relation extraction. We leverage these terms to query the BLDC corpus through the Curatr online platform. Furthermore, we provide information on OpenNMT and the architecture of the trained sequence-to-sequence neural network. Finally, we provide insights on the evaluation approach.

Term Extraction with Saffron
Term extraction was performed with the knowledge extraction framework Saffron. This open-source tool allows us to extract terms (i.e. multi-word expressions) from the domain of the corpus, here the industrial history of canals and mills. Several parameters can be specified, such as N, the number of terms extracted, which we set to 100 in order to cover a range of various terms (Bordea et al., 2013). The minimum and the maximum length of the terms can be determined, which we set to one and four words, in order to obtain generic terms (e.g. canal) as well as more specific ones (e.g. mill race).

Curatr
Curatr (http://erdos.ucd.ie/curatr/about) is an online platform providing access to the British Library Digital Collection. The platform hosts digitised versions of all English-language books from the British Library collection, corresponding to over thirty-five thousand unique titles, from 1700 to 1899. The data collection consists of over forty-six thousand unique volumes of text. The system enables queries on the equivalent of over 12 million individual pages of text, which can be searched and sorted by author, title, year, and the actual full text of the volumes themselves. This allows us to identify content relating to specific themes within little known or very long, unwieldy texts. As Curatr supports the creation and export of smaller sub-corpora, we used it to filter the entire collection to produce a much smaller set of texts for closer inspection. We used the terms mills and canals to retrieve a corpus in the domain of industrial heritage in the 18th and 19th centuries.

Text Generation
The neural models for text generation were trained with the OpenNMT toolkit (Klein et al., 2017). We used the transformer-based network with its default settings. The network used a six-layer encoder-decoder model with the attention mechanism enabled (Vaswani et al., 2017). To cover the entire vocabulary of the training set, we use SentencePiece to split the words into subword units. The training approach uses a batch size of 4,096, leveraging the ADAM optimiser (Kingma and Ba, 2015). We set the word embedding size to 500 and the hidden layer size to 500, with dropout = 0.1. We used a maximum sentence length of 50.

Evaluation Approach
The evaluation of responses of open-domain conversational agents, such as chatbots, is still an open question (Liu et al., 2016) since a variety of answers can be considered correct. Therefore, we randomly selected 50 question-term pairs (out of the 100 pairs of the evaluation set) and manually evaluated the generated answers. Following the error classes of Coughlin (2003), four volunteers assessed the chatbot's responses to the questions, assigning them to three classes:
• Unacceptable = 1. Absolutely not comprehensible and/or little or no information generated accurately.
• Possibly Acceptable = 2.
Possibly comprehensible (given enough context and/or time to work it out); some information generated accurately.
• Acceptable = 3. Not perfect (stylistically or grammatically odd), but definitely comprehensible, AND with all important information generated accurately.

In addition to the manual evaluation, we analysed the Inter Annotator Agreement (IAA) between the four annotators. For this, Fleiss' Kappa (Fleiss et al., 1971) was calculated as κ = (P̄ − P̄e) / (1 − P̄e) (Equation 1), where P̄ (actual agreement) and P̄e (expected agreement) together measure the reliability of agreement between a fixed number of annotators when assigning categorical ratings to several items or classifying items.

Results and Discussion
In this section, we present the evaluation results of generated answers and provide some further insights into the challenges of generating accurate responses.

Evaluation Results
Table 4 illustrates the manual evaluation of the 50 automatically generated answers. All annotators marked each answer either as unacceptable (1), possibly acceptable (2) or acceptable (3). The scores from the annotation campaign range from 1.30 to 2.54. As seen from the table, the annotators evaluated the responses generated from the subject and object term training set with the highest score, an average of 2.40. The answers generated from the concatenated corpus training set were annotated with the lowest scores. The chatbot trained on the subject and object term data set benefits from various generated questions containing two relevant terms, while all other corpora contain more general questions with only one term or the combination of a term and its relation, depending on whether terms and relations were used to extract the sentences.

Inter Annotator Agreement
Due to the annotation approach with four annotators, we calculated the Fleiss' κ score based on the evaluation of the quality of the generated answers. Table 5 shows the different scores for each of the different corpora the chatbot was trained on. The annotators achieved fair agreement (κ = 0.21-0.40) evaluating the first three corpora, and moderate agreement (κ = 0.41-0.60) (Fleiss et al., 1971) evaluating the chatbot's answers trained on the concatenated corpus.

Discussion
As mentioned before, the evaluation of an open-domain chatbot is still an open challenge, as various answers can be considered correct. In a manual analysis of the chatbot's responses, we observed meaningful answers across all different data sets. Nevertheless, the first two examples in Table 6, Tell me about a cotton and a sawmill and What is the relation between a gate and a river, illustrate cases where the system did not provide a meaningful answer. The first example gives an answer related to only one term, i.e., sawmill. The second example demonstrates an incomprehensible answer: although both terms are present in the generated answer, additional information is missing from the sentence. The third example illustrates the lexical ambiguity of the word lock (in Princeton WordNet, a lock is both "a fastener fitted to a door or drawer to keep it firmly closed" and an "enclosure consisting of a section of canal that can be closed to control the water level; used to raise or lower vessels that pass through it"; http://wordnetweb.princeton.edu/perl/webwn?s=lock). Whereas the desired answer relates to the enclosure consisting of a section of canal, the chatbot generates an answer with the meaning of a fastener fitted to a door. Analysing the remaining examples in Table 6, i.e., What is the relation between a log and water? and What is the relation between a lumber and a mill?, the chatbot provided acceptable responses.
For the former example, the chatbot provides the information on switching the transport of logs from water to railroads, while for the latter, the system generates a meaningful explanation that logs were cut into lumber by a mill.

Table 6: Examples of questions and the generated answers from the best performing model.
Question #1: Tell me about a cotton and a sawmill
Generated answer: A sawmill or lumber mill is a facility where logs are cut into lumber.
Question #2: What is the relation between a gate and a river?
Generated answer: On the clare side of the river, at the foot of thomond gate 587 limerick.
Question #3: Tell me about a city and a lock
Generated answer: 287 the gates of the city to be put in repair with locks and keys
Question #4: What is the relation between a log and water?
Generated answer: Availability of railroad transportation for logs and lumber encouraged building of rail mills away from navigable water.
Question #5: What is the relation between a lumber and a paper?
Generated answer: The larger pieces of wood that won't make lumber are chipped into wood chips and provide a source of supply for paper mills.
Question #6: What is the relation between a lumber and a mill?
Generated answer: A sawmill or lumber mill is a facility where logs are cut into lumber.
Question #7: What is the relation between a boat and a wheel?
Generated answer: [25] these floating mills had a wheel that was attached to a boat moored in a fast flowing river.
Question #8: Tell me about a machine and water
Generated answer: The water power could be used for driving the splint and box-making machines.
Question #9: Tell me about a lock and water
Generated answer: The lock is filled with water from upstream.

Conclusion and Future Work
We described an approach towards the bootstrapping of a chatbot in the cultural domain through the application of readily available tools for term and relation extraction and natural language generation. The evaluation shows that the end-to-end neural model produces acceptable results when asking questions about the industrial heritage of the 18th and 19th centuries. Nevertheless, the approach does require more extension and refinement to be useful for automatic chatbot development. The current limitations of the approach are in data collection, term and relation extraction and evaluation. In particular, our current data set consists only of a small number of contemporary documents of general but not of specific relevance to the application under consideration, i.e. industrial heritage of Ireland in the 18th and 19th centuries. Instead, in future work, we want to focus on data collection in digital libraries on contemporary as well as historical documents specifically on topics that are of direct relevance to this application context. While we extracted relevant terms and relations in the targeted domain, these terms are mostly single-word terms. Therefore, we are planning to focus on extracting more multi-word terms, which will help us to identify relevant sentences for training the chatbot system. Further, the generation of questions based on the extracted terms and relations is currently limited to a template-based approach, as sketched below.
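To illustrate the template-based question generation named above, here is a minimal sketch. The patterns are inferred from the example in the Question Generation section ("What is a canal?") and the questions in Table 6; they are not a reproduction of the actual patterns in Table 3.

```python
# Hypothetical templates, inferred from the paper's example questions.
SINGLE_TERM_TEMPLATES = [
    "What is a {term}?",
    "Tell me about a {term}",
]
TRIPLE_TEMPLATES = [
    "What is the relation between a {subj} and a {obj}?",
    "Tell me about a {subj} and a {obj}",
]

def generate_questions(terms, triples):
    """Instantiate the templates with extracted terms and subj/obj pairs."""
    questions = [t.format(term=term)
                 for term in terms for t in SINGLE_TERM_TEMPLATES]
    questions += [t.format(subj=subj, obj=obj)
                  for subj, _relation, obj in triples for t in TRIPLE_TEMPLATES]
    return questions

# e.g. ['What is a canal?', 'Tell me about a canal',
#       'What is the relation between a canal and a river?',
#       'Tell me about a canal and a river']
print(generate_questions(["canal"], [("canal", "leave", "river")]))
```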
We envision that the inclusion of neural models, such as Text-To-Text Transfer Transformer (T5) (Raffel et al., 2020), will generate better natural language questions. Furthermore, we plan to incorporate multi-modal approaches, i.e. incorporating images within the chatbot, for visual representation as well as for disambiguation approaches. Finally, we would like to include relevant historical expertise to better inform our approach from the use case perspective.
Atrophic Gastritis: A Related Factor for Osteoporosis in Elderly Women

Purpose
Osteoporosis poses a great threat to the aging society. Hypochlorhydric or achlorhydric conditions are risk factors for osteoporosis. Atrophic gastritis also decreases gastric acid production; however, the role of atrophic gastritis as a related factor for osteoporosis is unclear. We investigated the relationship between atrophic gastritis and osteoporosis in postmenopausal women over 60 years of age.

Subjects and Methods
A total of 401 postmenopausal women were included in this cross-sectional study, which was conducted during their medical check-ups. Bone mineral density was measured using dual energy X-ray absorptiometry. Atrophic gastritis was defined endoscopically if gastric mucosa in the antrum and the body were found to be atrophied and thinned and submucosal vessels could be well visualized.

Results
The proportion of people with atrophic gastritis was higher in the osteoporotic group than in the group without osteoporosis. A linear relationship was observed in the proportion of atrophic gastritis according to the categories of normal, osteopenia, and osteoporosis at the lumbar spine (p for trend = 0.039) and femur (p for trend = 0.001). A multiple logistic regression analysis revealed that the presence of atrophic gastritis was associated with increased odds of osteoporosis after adjusting for age, body mass index, triglyceride, high-density lipoprotein cholesterol, alcohol consumption, and smoking status (odds ratio 1.89, 95% confidence interval 1.15-3.11).

Conclusions
Atrophic gastritis is associated with an increased likelihood of osteoporosis in Korean elderly women.

Introduction
Osteoporosis is a metabolic bone disease characterized by a decrease in bone mass with microarchitectural disruption and enhanced skeletal fragility, resulting in an increased fracture risk [1]. Osteoporotic fractures cause disability and a substantial burden to society due to both loss of labor and increases in medical expenses. An estimated nine million osteoporotic fractures occurred worldwide in 2000; of these, 1.6 million were hip fractures, 1.7 million occurred in the forearm, and 1.4 million were vertebral fractures [2]. Fragility fractures accounted for 0.83% of the global burden associated with noncommunicable diseases. Osteoporotic fractures contributed to more disability-adjusted life years lost than the common cancers, except for lung cancer, in Europe [2]. The incidences of osteoporosis and osteoporotic fractures are greater in women than in men [3], and bone mineral density decreases with age [4]. The other risk factors for osteoporosis include cigarette smoking [5], excessive alcohol consumption [6], vitamin D deficiency [7], and low dietary calcium [8]. Calcium is ionized in acidic conditions and absorbed in the small bowel. Therefore, in either hypochlorhydric or achlorhydric stomachs, calcium absorption is impaired [9]. Conditions that decrease gastric acid secretion, including gastric surgery and use of proton pump inhibitors, increase the risk for low bone mass or fractures [10,11]. Atrophic gastritis, another hypochlorhydric condition, can adversely affect bone mineral density; however, studies about atrophic gastritis and bone mineral density are sparse and inconclusive [12,13]. Moreover, to the best of our knowledge, no study has evaluated this association in the elderly over 60 years of age.
The aim of this study was to investigate the relationship between atrophic gastritis and osteoporosis in postmenopausal women aged 60 or older.

Study subjects
Participants in this study had undergone routine health checkups at the Center for Health Promotion in the Korea University Anam Hospital located in Seoul, Korea between March 1, 2007 and March 31, 2009. A total of 12,593 persons were examined during this period. Men (n = 6801), persons below 60 years of age (n = 4783), pre- or peri-menopausal women or those with unknown menopausal status (n = 238), persons who had taken drugs that can affect bone mineral density such as glucocorticoids, estrogen, calcium, vitamin D, or bisphosphonates (n = 197), persons who were not examined with dual energy X-ray absorptiometry (n = 137), persons who were not examined with esophagogastroduodenoscopy (n = 30), those with a history of gastric surgery (n = 2), and those whose endoscopic biopsy result was dysplasia (n = 4) were excluded from this study. The final study sample had a total of 401 postmenopausal women aged 60 or older. All participants signed the consent form and the Institutional Review Board at the Korea University Anam Hospital approved this study (IRB No. AN09141-001).

Anthropometric and laboratory measurements
All participants wore light clothing without shoes during anthropometric measurement. Height and weight were measured to the nearest 0.1 cm and 0.1 kg, respectively. Body mass index (BMI) was calculated as weight (kg) divided by the square of height (m). Blood pressure was measured on the upper arm after 10 min of rest using an automated blood pressure monitoring device (MP800, MEKICS, Chuncheon, Korea).

Endoscopic examination and histologic assessment
Biennial gastric cancer screening, with either an upper gastrointestinal series or endoscopy, has been recommended for individuals 40 years and older because of the high prevalence of gastric cancer in Korea [14]. Standardized esophagogastroduodenoscopy (GIF-H260, Olympus Co., Tokyo, Japan) was performed by one of two experienced endoscopists at Korea University Anam Hospital, each of whom had at least 5 years of endoscopic experience with over 10,000 cases. Endoscopic findings were described by the overall impression regarding the presence of gastritis in the antrum and the body of the stomach. Atrophic gastritis was defined endoscopically if gastric mucosa in the antrum and the body were atrophied and thinned and submucosal vessels could be well visualized. A single highly experienced endoscopist (JYA) reviewed all endoscopic images, and the diagnosis was confirmed after careful evaluation. A single pathologist (CHK), who was unaware of the clinical details, completed the histologic assessment. The presence of Helicobacter pylori was assessed by hematoxylin and eosin and cresyl-violet staining based on the Updated Sydney System [15]. Only a subset of participants (n = 130) underwent H. pylori testing.

Bone mineral density
Bone mineral density (BMD) (g/cm²) of central skeletal sites (lumbar spine, total hip, and femoral neck) was evaluated using dual energy X-ray absorptiometry (Discovery-W, Hologic, Bedford, MA, USA). Lumbar spine BMD was measured using the average value for L1 to L4. Femur BMD was chosen as the lowest value between total hip and femoral neck BMD. Osteopenia or osteoporosis was diagnosed using World Health Organization criteria (−2.5 < T-score < −1.0 for osteopenia, or T-score ≤ −2.5 for osteoporosis).
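The diagnostic rule just described can be stated as a small helper; this is a minimal sketch in which the function names are ours, the thresholds follow the WHO criteria above, and the femur value is taken as the minimum of the two T-scores (T-scores move monotonically with BMD, so this mirrors choosing the lowest BMD).

```python
def classify_bmd(t_score: float) -> str:
    """WHO criteria: osteoporosis if T-score <= -2.5,
    osteopenia if -2.5 < T-score < -1.0, normal otherwise."""
    if t_score <= -2.5:
        return "osteoporosis"
    if t_score < -1.0:
        return "osteopenia"
    return "normal"

def femur_t_score(total_hip: float, femoral_neck: float) -> float:
    """Femur value taken as the lower of total hip and femoral neck."""
    return min(total_hip, femoral_neck)

print(classify_bmd(femur_t_score(-1.8, -2.7)))  # -> osteoporosis
```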
Statistical analysis
SPSS version 12.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis. Results are presented as means ± standard deviation, medians with interquartile range, or frequencies and percentages. p < 0.05 was considered statistically significant. Student t-tests, the Mann-Whitney U test, chi-square tests, or Fisher's exact tests were used to compare the anthropometric, laboratory, social, and endoscopic differences according to the presence of osteoporosis. The chi-square test was used to compare the proportion of atrophic gastritis across the three BMD groups at the lumbar spine and femur, and the linear trend was calculated using the linear-by-linear association. A multiple logistic regression analysis was performed to assess the association between atrophic gastritis and osteoporosis. Variables that had a significant association (p < 0.05) with the dependent variable (osteoporosis) in univariate analysis, or that are known risk factors for both osteoporosis and atrophic gastritis, were included in the model as covariates. Initially, the analysis was performed without adjustment. Then, age and BMI were adjusted in model 2. In model 3, age, BMI, TG, and HDL-C were adjusted. Lastly, in addition to the covariates in model 3, alcohol consumption and smoking status were adjusted in model 4.

Results
The clinical and biochemical characteristics of the study subjects are presented in Table 1. Osteoporotic patients were older (66.0 versus 63.0 years) and had lower BMI values (23.4 versus 24.5 kg/m²), higher TG (129.5 versus 121.0 mg/dL), and lower HDL-C levels (50.0 versus 52.0 mg/dL) than subjects without osteoporosis. The proportion of people with atrophic gastritis was higher (56.9% versus 43.1%) in the osteoporotic group than in the group without osteoporosis. The percentage of H. pylori infections was greater in the osteoporosis group than in the non-osteoporotic group; however, the difference was not significant. Figure 1 shows the trend of an increasing percentage of atrophic gastritis across the three BMD groups. A linear relationship was observed in the proportion of atrophic gastritis across the categories of normal, osteopenia, and osteoporosis at the lumbar spine (p for trend = 0.039) and femur (p for trend = 0.001). A multiple logistic regression analysis demonstrated that subjects with atrophic gastritis had increased odds of osteoporosis at the femur even after adjusting for age, BMI, TG, HDL-C, alcohol consumption, and smoking (model 4, OR 3.10, 95% CI 1.44–6.68) (Table 2). A similar result was observed after adjustment for anthropometric, laboratory, and social parameters when the dependent variable was the presence of osteoporosis at the lumbar spine or femur (model 4, OR 1.89, 95% CI 1.15–3.11).

Discussion
Atrophic gastritis increased the likelihood of osteoporosis in postmenopausal women over 60 years of age. The association remained significant even after controlling for anthropometric, laboratory, and social variables. To the best of our knowledge, this is the first report to evaluate the relationship between atrophic gastritis and osteoporosis in the elderly over 60 years of age. Osteoporosis poses a great threat to the aging society. Aging is accompanied by the risk of osteoporosis and associated fractures, thereby causing increased disability-adjusted life years lost worldwide [2]. Aside from aging, osteoporosis and osteoporotic fractures have many other risk factors.
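As a purely illustrative rendering of the covariate-adjusted models described in the Statistical analysis section, a sketch along the following lines could reproduce the model 1 and model 4 specifications. The original analysis was run in SPSS, not Python, and the data file and column names here are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical participant-level file; one row per woman, with columns:
# osteoporosis (0/1), atrophic_gastritis (0/1), age, bmi, tg, hdl_c,
# alcohol (0/1), smoking (0/1)
df = pd.read_csv("participants.csv")

# Model 1: unadjusted association
m1 = smf.logit("osteoporosis ~ atrophic_gastritis", data=df).fit()

# Model 4: adjusted for age, BMI, TG, HDL-C, alcohol consumption, smoking
m4 = smf.logit(
    "osteoporosis ~ atrophic_gastritis + age + bmi + tg + hdl_c"
    " + alcohol + smoking",
    data=df,
).fit()

# Exponentiate the coefficient to report an odds ratio with its 95% CI
or_ag = np.exp(m4.params["atrophic_gastritis"])
ci_lo, ci_hi = np.exp(m4.conf_int().loc["atrophic_gastritis"])
print(f"atrophic gastritis: OR {or_ag:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
```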
Females are at high risk for osteoporosis: the peak bone mass of women is lower than that of men, and the BMD of postmenopausal women decreases abruptly after menopause because of the lack of estrogen [16,17]. Cigarette smoking increases postmenopausal bone loss [5], and heavy alcohol drinking exerts a negative effect on bone health [6]. Bone serves as a reservoir for the storage of calcium, and vitamin D plays a critical role in gastrointestinal calcium absorption. Therefore, vitamin D deficiency [7] and low dietary calcium [8] facilitate bone loss and osteoporotic fractures.
Along with other risk factors for osteoporosis, hypochlorhydric or achlorhydric conditions, including gastrectomy, and the use of antacids are important because the dissolution and absorption of calcium salts decrease in non-acidic conditions [9,11,18]. Stomach resection adversely affected bone metabolism and decreased BMD even in a partial gastrectomy group [19]. Gastrectomy, including bariatric surgery, increases the risk of osteoporosis and fractures due to weight loss and changes in body composition as well as calcium malabsorption [10,18]. In rats subjected to gastrectomy and fundectomy, blood calcium concentration decreased slightly within three weeks after surgery, reflecting an impaired capacity to convert insoluble calcium into soluble calcium salts [20]. Acid-suppressive medication use could also raise the risk of fractures. A meta-analysis revealed that proton-pump inhibitors increase the risk of hip, spine, and any-site fractures by 30%, 56%, and 16%, respectively [11]. Long-term use of proton-pump inhibitors markedly increased the risk of hip fractures in another study [21].
Atrophic gastritis, which is more prevalent in the elderly and associated with H. pylori infection [22], is characterized by the loss of an appropriate number of glands in the gastric mucosa [23] and therefore causes a hypochlorhydric or achlorhydric stomach. As a result, the absorption of minerals and vitamins could be hampered; however, the relationship between the presence of atrophic gastritis and micronutrient absorption has been poorly studied [9]. Moreover, studies on atrophic gastritis and osteoporosis are rare and inconclusive [12,13]. We found that atrophic gastritis is associated with osteoporosis in postmenopausal women aged 60 or older after adjusting for age, BMI, TG, HDL-C, alcohol consumption, and smoking status. Previous studies reported that there was no relationship between BMD and atrophic gastritis [13,24]; however, the participants in those studies were relatively young women below the age of 60. Aging is related to the presence and progression of atrophic gastritis and intestinal metaplasia [22,25]. In a Korean study, the prevalence of atrophic gastritis was more than 50% in the antrum and 23.5% in the body for those older than 60 [22], which is in accordance with our result (56.9% and 43.1% in osteoporotic and non-osteoporotic participants, respectively; Table 1). One possible reason that no relationship between BMD and atrophic gastritis was seen in those studies is that relatively young participants retained the ability to produce acid, and moderate acid secretion would be enough to allow reasonable calcium absorption in the small intestine. In addition, the samples in those studies were heterogeneous, including different ethnicities [24], and the sample sizes were small.
The present study had a larger sample size and included only women over 60 years of age from a single population (native Korean). In a study of pepsinogen I and BMD, decreased lumbar spine BMD correlated with a lower serum level of group 1 pepsinogens [12]. The concentration of pepsinogen I indicates the functional ability of the oxyntic mucosa, and thereby a decreased pepsinogen I level is a marker of mucosal atrophy [26]. This finding agrees with our results, supporting the hypothesis that the grade of atrophy in the oxyntic mucosa is linearly correlated with BMD.
Our study had several strengths, including its relatively large sample size and its homogeneous population of elderly women over 60 years of age. However, this study also had some weaknesses. First, the design of this study is cross-sectional, which does not allow causality to be established. Second, the diagnosis of atrophic gastritis was based on endoscopic findings, not on biopsy specimens, due to the nature of routine health checkups. In recent publications, however, a correlation was found between endoscopic and histological findings in diagnosing atrophic gastritis in Korean samples [27,28]. In one study, young age (below 50 years) was associated with decreased sensitivity of the endoscopic diagnosis of atrophic gastritis [28], which implies that the sensitivity was fair in the elderly. We included only women over 60 years of age in the study to avoid uncertainty in the diagnosis of atrophic gastritis. Third, we checked the status of H. pylori infection in only a subset of participants due to the nature of health examinations, and we did not assess the use of drugs such as proton-pump inhibitors or antibiotics that can affect biopsy results for H. pylori. Finally, we did not check the use of antacid medications such as proton-pump inhibitors or histamine-2 blockers that can affect bone mineral density.
In conclusion, atrophic gastritis is associated with increased odds of osteoporosis in elderly women after adjusting for anthropometric, laboratory, and social parameters. Our conclusions need to be confirmed in further studies with a prospective design, larger samples, and other populations.

Supporting Information
Table S1. Data on atrophic gastritis, bone mineral density, and other related variables. (XLS)
Advances and considerations in AD tau-targeted immunotherapy

The multifactorial and complex nature of Alzheimer's disease (AD) has made it difficult to identify therapeutic targets that are causally involved in the disease process. However, accumulating evidence from experimental and clinical studies that investigate the early disease process points towards a required role for tau in AD etiology. Importantly, a large number of studies investigate and characterize the plethora of pathological forms of tau protein involved in disease onset and propagation. Immunotherapy is one of the clinical approaches most anticipated to make a difference in the field of AD therapeutics. Tau-targeted immunotherapy is the new direction after the failure of amyloid beta (Aß)-targeted immunotherapy and the growing number of studies that highlight the Aß-independent disease process. It is now well established that immunotherapy alone will most likely be insufficient as a monotherapy. Therefore, this review discusses updates on tau-targeted immunotherapy studies, AD-relevant tau species, and promising biomarkers, and offers a prospect on combination therapies to contain disease propagation in an efficient and timely manner.

Introduction
Despite all the failed immunotherapy clinical trials in Alzheimer's disease (AD), amyloid beta (Aß)- and tau-targeted immunotherapies are still at the forefront of therapeutic approaches (Hoskin et al., 2019; Cummings et al., 2019). Besides being the identifying hallmarks of the disease, Aß and tau aggregates have been extensively studied and directly linked to neurodegeneration. Aß and tau, primarily the oligomeric species of each, have been shown to be neurotoxic both extracellularly and intracellularly (Frost et al., 2009; Sebastián-Serrano et al., 2018). Aß and tau oligomers have been shown to disrupt membrane and synaptic integrity as well as calcium balance, long-term potentiation, the cellular cytoskeleton and, most importantly, synaptic spines and synaptic communication, which leads to progressive cognitive decline (Polanco et al., 2017). The role of tau is more agreed upon than that of Aß, whose consequential role in the disease process remains controversial. Nevertheless, there is evidence suggesting a synergistic toxic effect between Aß and tau aggregates, which might be exploited by immunotherapy (Pascoal et al., 2017). Some studies suggest that Aß malfunction lies upstream of tau malfunction and triggers tau pathology (Bloom, 2014; Hurtado et al., 2010; Lewis et al., 2001; Götz et al., 2001). However, targeting Aß by immunotherapy neither led to a decrease in tau pathology nor slowed down cognitive decline in clinical trials (Panza et al., 2019; Medina, 2018). Targeting Aß was also shown to be ineffective in early intervention studies in patients with mild cognitive impairment (MCI) and prodromal AD (Cummings et al., 2018). Other studies supporting this idea have shown that Aß pathology itself is not linked to the observed neurodegeneration and dementia (Giacobini and Gold, 2013); rather, AD development and the associated neurodegeneration correlate with APP malfunction (Lewczuk et al., 2018).
Moreover, APP has been shown to be involved in maintaining synaptic and axonal integrity (Rusu et al., 2007) and in intracellular transport (Rodrigues et al., 2012; Kametani and Hasegawa, 2018). In addition, APP fragments have been shown to disrupt synaptic plasticity and cellular metabolism and to accumulate in dystrophic neurites (Kametani and Hasegawa, 2018) in both sporadic and familial AD. Furthermore, APP malfunction has been closely linked to tau phosphorylation, aggregation and accumulation (Takahashi et al., 2015). Therefore, all the above support the notion that APP, and not Aß, along with tau are the main drivers of AD. While there are continuous efforts to identify novel drug targets along the APP metabolism pathway, therapeutic methods targeting APP metabolism remain far from established (Takahashi et al., 2015).
On the other hand, tau malfunction correlates with the onset and propagation of AD pathology. In addition, tau load in the brain correlates with cognitive decline, and removal of tau aggregates, in their different forms, has attenuated pathology spread and cognitive decline in animal models. For these reasons, tau-targeted immunotherapy is on the rise. This short review will cover: 1) the latest updates on the current ongoing tau-targeted immunotherapy clinical trials, 2) targeting the different pathological forms of tau, 3) the journey to identify biomarkers that will aid in early disease detection and thus early immunotherapy intervention, and 4) a forward look at possible combination immunotherapy approaches.

Update on ongoing immunotherapy clinical trials
One of the major reasons believed to be behind the failure of Aß-targeted immunotherapy clinical trials is that Aß load in the brain does not correlate with the level of dementia. Neither reducing Aß plaques nor reducing oligomers has helped in slowing down neurodegeneration. Therefore, the general conclusion was that Aß accumulation is irrelevant to neurodegeneration and might be just a consequence of the disease process (Panza et al., 2019). Given that tau pathology follows the Braak and Braak staging of AD and correlates well with the timeline of neurodegeneration and dementia progression (Lowe et al., 2017), outcomes with tau immunotherapy appear promising. In addition, several types of dementia, including AD, are now classified as tauopathies. That is because these disorders all share tau malfunction, and misfolded tau propagation is a prominent feature of the disease stages and overall pathology. Given that tau is a microtubule binding protein and is therefore directly involved in cellular integrity, i.e., it is not a byproduct of a mutation like Aß, tau malfunction is more likely to be causally linked to neurodegeneration. Furthermore, misfolded forms of tau have been shown to be toxic, able to seed normal forms of tau and propagate the pathology. Therefore, stopping the propagation of abnormal tau is likely to have more beneficial functional effects than targeting non-toxic aggregates of cellular byproducts like Aß. Several humanized tau antibodies have been produced and are currently being tested in clinical trials.
AADvac-1 (NCT02031198) was the first active tau vaccine to enter first-in-man phase I clinical trials, in 2016 (Novak et al., 2017; Novak et al., 2018a, 2018b). It had a very favorable safety profile and resulted in less cognitive decline and brain atrophy in patients with mild to moderate AD.
AADvac-1 is currently in phase II trials, awaiting disease-modifying results in patients with mild AD. In September 2019, results of phase II were announced in a press release by Axon Neuroscience. AADvac-1 showed a promising safety profile and immunogenicity in phase II, with no significant adverse events observed between immunized and placebo patients. Interestingly, AADvac-1 also showed a statistically significant decrease in blood (NfL) and CSF (pTau181 and pTau217) biomarkers, bringing patients' biomarker levels close to normal. This strongly suggests that AADvac-1 is targeting and affecting tau pathology. The press release also mentioned cognitive improvement among the young population only.
ABBV-8E12 is another tau antibody in phase II clinical trials (NCT02880956). 8E12 is a monoclonal tau antibody that targets extracellular pathological tau aggregates. Results were very promising, as 8E12 was shown to target larger insoluble tau aggregates and reduce tau seeding and propagation. 8E12 has also shown a favorable safety profile in progressive supranuclear palsy (PSP) patients, and it entered a phase II clinical trial that will evaluate long-term safety and tolerability in PSP and mild AD patients, as well as the ability of the antibody to delay cognitive decline. The phase II trial in PSP patients was terminated in July 2019 after failing a futility assessment. Experts in the field report that this result was expected due to the old age of the patients (Alzforum, 26 Jul 2019). However, the phase II trial in MCI and early AD patients is still ongoing without changes. The study is projected to finish in 2021 (West et al., 2017).
Another monoclonal antibody, targeting a different form of tau, is BIIB092 (NCT02460094; also known as IPN007 or BMS-986168). This humanized antibody targets the extracellular N-termini of fragmented tau. This antibody also showed a favorable safety profile in a phase I trial in PSP patients, along with reductions in cerebrospinal fluid (CSF) levels of free fragmented tau (Dam et al., 2018). It is now enrolled in a phase II trial in MCI and mild AD patients (NCT03068468). The clinical trial is projected to finish in 2020 (Ratti et al., 2018; Boxer et al., 2019).
Last but not least, RO7105705 is a monoclonal tau antibody in a phase II trial (NCT02820896). Since it is reported to target the N-terminus of all six extracellular human tau isoforms, in both monomeric and oligomeric forms, it appears very promising. After showing a favorable safety profile, the RO7105705 antibody entered two clinical trials in "probable AD" patients, i.e., patients diagnosed based on PET imaging or CSF Aß42 levels. The two trials aim to measure tau in the brain and cognitive performance at three different doses in comparison to placebo. The studies are projected to finish by 2020 and 2021 (NCT03289143) (Kerchner et al., 2017).
Several other tau antibodies have also entered the clinical trial marathon. One interesting antibody currently in phase I is reported to bind and remodel both Aß and tau misfolded aggregates. This antibody, NPT088, was shown to reduce Aß and p-tau pathology and to notably reverse brain atrophy and cognitive deficits in aged Tg2576 hAPP mice (Levenson et al., 2016). Overall, the promising aspect of this race of clinical trials is that almost every antibody targets a different tau epitope. At the same time, however, most of the antibodies that have made it to clinical trials target the N-terminus of some form of extracellular tau.
With the disease etiology being more supportive of tau involvement, and with more studies aiming at identifying AD-relevant pathological tau species, tau immunotherapy is more promising than ever before (Table 1).

Hyperphosphorylated tau
In AD, tau pathology is described as insoluble intracellular neurofibrillary tangles (NFTs) consisting of hyperphosphorylated and aggregated tau species. Studies have shown, however, that the smaller soluble tau aggregates are the toxic tau species underlying the spread of pathology and neurodegeneration (Shafiei et al., 2017; Fá et al., 2016). Those species mainly consist of hyperphosphorylated and misfolded tau conformations (oligomers, seed-competent monomers) (Mirbaha et al., 2018). As mentioned earlier, the population of pathological tau species in AD is very heterogeneous. This heterogeneity is currently an important obstacle for tau-targeted immunotherapy: the success of immunotherapy in AD and other tauopathies greatly depends on the clear characterization of most of the pathological tau species. There are three main differences between pathological and physiological tau: first, acquired toxicity due to loss of function or gain of toxic function; second, seed competency and the distinct epitopes that become exposed due to misfolding; and third, aberrant aggregation. It was recently shown that the exposure of an aggregation-prone domain due to misfolding renders tau conformations seed competent (Dujardin et al., 2018; Mirbaha et al., 2018). However, what renders a tau conformation toxic, aside from the loss of normal function, is still undetermined.
Hyperphosphorylation, on the one hand, is one form of toxic loss of function due to its direct role in tau detachment from the microtubules. Hyperphosphorylation has been shown to precede or initiate tau misfolding in AD human brains (Tai et al., 2014). In turn, misfolded soluble tau spreads from cell to cell in a neuronal-network-dependent pattern, starting in the entorhinal cortex, going through the hippocampus and reaching the isocortex (Kaufman et al., 2018; Neddens et al., 2018). This transcellular tau spreading and seeding competency highlights the importance of immunotherapy as a leading therapeutic approach for removing or neutralizing misfolded tau species and for stopping the spreading and seeding processes.
Aside from the tau-targeted antibodies that are currently in clinical trials, many studies are characterizing novel AD-relevant tau species and epitopes as immunotherapy targets. Reviewing most of the recent studies on tau-relevant species and immunotherapies, the complex heterogeneity of AD-relevant pathological tau species becomes ever clearer. However, a few AD-relevant tau species stand out as the most promising immunotherapy targets. This section highlights the most recent and therapeutically promising pathological species in animal models.
Some phospho-epitopes have been specifically and directly linked to AD. An interesting study by Neddens et al. (2018) characterized a group of the most commonly studied phospho-epitopes in terms of their levels over the course of the disease, i.e., over the Braak stages. All the investigated sites increased gradually over the course of the disease; however, pSer396 and pSer422 were specifically hyperphosphorylated at earlier disease stages. Characterizing tau species relevant to early disease stages is a key step forward in targeting toxic species early on, before full disease pathology and cognitive decline.
Recently, the first crystal structure of C5.2, a monoclonal antibody specific to the pSer396 epitope, was generated. This antibody recognizes the middle region of tau, where the microtubule binding domain (MTBD) is located (Pedersen et al., 2017; Chukwu et al., 2018). Further analysis revealed that phosphorylation at the pSer396 site leads to a switch from an alpha-helix to a β-strand motif that misfolds tau and stabilizes a toxic conformation (Chukwu et al., 2018). Early prevention of pSer396 phosphorylation by immunotherapy is therefore potentially promising for preventing the propagation of this pathological tau species. C5.2 is the only antibody that targets the pSer396 site (a C-terminal epitope). PHF1, a monoclonal antibody that recognizes the pSer396/Ser404 epitope, has been heavily tested in passive immunotherapy. PHF1 was tested in the P301S, P301L, htau/PS1 and rTg4510 mouse models (Liu et al., 2016; Boutajangout et al., 2010; Chai et al., 2011; d'Abramo et al., 2013; Shahpasand et al., 2018). However, results showed that PHF1 immunization is mostly effective in reducing motor rather than cognitive decline. This highlights the difficulty of reversing cognitive and memory deficits after a certain point in the course of the disease. In other words, these results also reaffirm that early intervention with immunotherapy, before the onset of symptoms, is necessary to see promising effects on mouse cognitive behavior. These studies raise the question of whether the C5.2 antibody will be effective in reducing cognitive decline when tested in immunotherapy studies, keeping in mind that different tau epitopes become hyperphosphorylated along the disease spectrum (Neddens et al., 2018). A further question is whether early immunotherapy interventions against early phosphorylation sites help reduce the formation of other phosphorylated species that appear later over the course of the disease (Chukwu et al., 2018).
Another antibody, TWF9, recognizes beta-sheet conformations on tau oligomeric species (Herline et al., 2018). This antibody was shown to be specific to tau species in human AD and MCI samples. A detailed epitope description was not provided in this study; however, the antibody was also shown to be specific to Aß oligomeric species as well as PHFs. Intriguingly, short-term administration of TWF9 led to reduced memory deficits as well as reduced soluble levels of phosphorylated tau in old (18-22 months) 3xTg-AD mice. To date, this is one of the few studies showing disease-course-modifying or late-intervention passive immunotherapy against soluble p-tau species.
pT231 is another phospho-tau epitope that is highly implicated in earlier stages of AD-relevant neurodegeneration (Albayram et al., 2016; Nakamura et al., 2013). The pT231 phospho-tau monomer has been shown to be seed competent. It also colocalizes with pathogenic oligomeric species and hyperphosphorylated PHF species, as well as with AT8- and Alz50-recognized species (Luna-Munoz et al., 2007; Shahpasand et al., 2018; Kanaan et al., 2016). These findings suggest that immunotherapy targeting pT231 tau might be effective. PHF-6, a monoclonal antibody specific to pT231, was evaluated in the rTg4510 mouse model alongside PHF13 (pSer396) (Sankaranarayanan et al., 2015). Both antibodies produced a reduction in soluble, but not insoluble, forms of tau. This reduction was accompanied by cognitive improvement in novel object recognition (NOR) memory. Again, however, the antibodies were administered at early disease stages.
pT231 exists in two isomeric conformations, cis and trans, due to the presence of a proline residue near the phosphorylation site. Cis pT231 tau (cis p-tau) is implicated more in neurodegeneration than trans p-tau. It has even been suggested that cis p-tau is a very early disease marker, perhaps the first phosphorylated form of pathological tau, although only in TBI-induced neurodegeneration (Kondo et al., 2015). cis mAb, a monoclonal cis p-tau antibody, has been shown to block neuronal toxicity and improve cortex-based behavioral functions like decision making and risk taking (Kondo et al., 2015). Nevertheless, cis p-tau has not been implicated in AD-specific studies (Albayram et al., 2018; Naserkhaki et al., 2019; Albayram et al., 2017). Overall, these results highlight the promising therapeutic potential of targeting the pT231 epitope. Further characterization is needed, however, to definitively determine its role (particularly that of cis p-tau) in AD-relevant neurodegeneration. In addition, according to the study by Neddens et al. (2018), pT231 is elevated at early disease stages in the entorhinal cortex only; its levels do not increase in the isocortex until Braak stage 4 (Neddens et al., 2018), which suggests that pT231 might not be as reflective of AD disease progression as other early phosphorylation sites such as pSer396.

Toxic conformations
Toxic conformations of tau, on the other hand, have been heavily implicated in early disease pathology. Monoclonal antibodies like MC-1, Alz50, TOMA1 and TOMA2 have drawn attention for their promising potential for immunotherapy against soluble toxic conformations and tau oligomeric species, the most toxic tau species in neurodegenerative diseases including AD. Despite the unpromising pre-clinical results seen with the MC-1 antibody, Eli Lilly humanized MC-1, and the humanized antibody has been tested in three clinical trials. The first two studies were phase I studies in healthy, AD, and MCI patients. A third, phase II study is now active in patients with early symptomatic AD. No specific details or results have been posted for any of the studies (Jicha et al., 1997; Alam et al., 2017; Vitale et al., 2018; Schroeder et al., 2016). This continuation of clinical trials is most likely driven by the assumption, or hope, that early intervention will be more effective than late intervention.
Another tau-specific conformational antibody is Alz50. This antibody is like MC-1 in that it recognizes a discontinuous epitope, part of the N-terminus and part of the middle tau region (Jicha et al., 1997). Alz50 has been shown to bind to early-disease-stage high-molecular-weight soluble tau species, but not to fibrillar tau or thioflavin-positive tau. However, N-terminus-targeted antibodies have proven to be poor antibodies for AD-relevant pathological tau species. For instance, almost all tau species in AD brains are missing the N-terminus, which has been shown to be an aggregation-limiting domain. Also, N-terminal antibodies are thought to target physiological tau in addition to truncated tau, and they have been shown to be not very effective in depleting the pool of pathological tau species (Chukwu et al., 2018). The MTBR, on the contrary, has been shown to be an aggregation-promoting domain.
Extension of tau residues in MTBR 2 and 3, due to mutations or phosphorylation, promotes beta-sheet conformation and the stacking of misfolded tau monomers to form oligomers and bigger protofibrils (Zabik et al., 2017; Eschmann et al., 2015). In addition, it has been shown that the C and N termini are exposed in a similar fashion in both aggregated and normal tau (Huang et al., 2018). This strongly suggests that the N and C termini are not part of the misfolding process and that antibodies targeting the MTBR or tau mid-region might be more effective than those targeting the N or C termini in identifying pathological tau. A few antibodies that target misfolded oligomeric species have been tested in immunotherapy studies. TOMA1 is one of the few antibodies that produced depletion of tau oligomers from the brain and an increase of oligomers in the CSF, accompanied by cognitive improvement (Kolarova et al., 2016; Sengupta et al., 2017) (Fig. 1).
All the studies mentioned above utilized passive immunotherapy approaches. Passive immunotherapy via monoclonal antibodies is the most common immunotherapy in development today due to its higher safety profile and several other advantages over active immunotherapy. However, some studies, having established its safety, chose to test active immunization due to its higher specificity for the pathological species targeted in the studied animal models. Active immunization in animal models has been shown to be effective in reducing the levels of soluble and insoluble tau aggregates, as well as those of the hyperphosphorylated tau species that are believed to drive neurodegeneration and NFT load (Sigurdsson, 2016; Sigurdsson, 2018; Theunis et al., 2013; Theunis et al., 2017). However, only a few studies have shown behavioral cognitive benefit alongside the tau reduction (Asuni et al., 2007; Troquier et al., 2012). This raises many questions about whether active-immunotherapy-generated antibodies are targeting the neurodegeneration-relevant tau species. Besides characterizing the neurodegeneration-relevant tau species in an animal model, active immunotherapy requires additional characterization of the relevant antibodies produced before a conclusion can be drawn from a study. Moreover, active immunotherapy depends on the patient's immune system functioning efficiently (Siegrist and Aspinall, 2009). This produces a lot of variability, due to the variable functionality and immunogenicity of patients' immune systems in a clinical trial, and can therefore produce unreliable results (Novak et al., 2018a) (Fig. 1).

Targeting extracellular and intracellular tau
Tau is a microtubule binding protein; therefore, it primarily exists inside neurons under physiological conditions. Under pathological conditions, phosphorylation of tau leads to its detachment from the microtubules and its consequent loss of normal function, gain of toxic function, aggregation, and release into the extracellular space (Yamada, 2017; Kanmert et al., 2015; Magnoni et al., 2012). Tau is also released from neurons as a result of neuronal communication and neuronal death. This consequently contributes to tau aggregation and the spread of pathological tau forms (Sebastián-Serrano et al., 2018; Medina and Avila, 2014). Extracellular tau is known to exert cellular toxicity via mechanisms separate from those of intracellular tau, by interacting with muscarinic cellular receptors or disrupting the cellular membrane (Gómez-Ramos et al., 2006; Yamada and Hamaguchi, 2018).
However, extracellular tau is most likely rapidly degraded. Given the heterogeneity of pathological and non-pathological tau species (Evans et al., 2018; Frost et al., 2009; Wu et al., 2013), and given that most antibodies are too bulky to enter the neuron for intracellular tau clearance (Nisbet et al., 2017; Wu et al., 2018), immunotherapy is thought to function mainly through extracellular tau clearance: neutralizing available toxic tau species, blocking further transcellular spread and aggregation (McEwan et al., 2017), and clearing the antibody-tau complex either for degradation by microglia (Funk et al., 2015) or for release to the periphery (blood).
Passive immunotherapy studies in animal models have shown that targeting extracellular tau reduces intracellular pools of tau aggregates (Yanamandra et al., 2013; Gu et al., 2013; Castillo-Carranza et al., 2015). One explanation for this observation is a shift in the balance between intracellular and extracellular tau: extracellular clearance promotes tau release from the neurons, which in turn becomes available for clearance by antibodies (Umeda et al., 2015). This is highlighted in Castillo-Carranza et al. (2015), where a prominent reduction in both intracellular and extracellular tau oligomeric species was observed after passive immunotherapy targeting tau oligomers. This reduction was accompanied by a reversal of cognitive and motor function loss in the 8-month-old JNPL3 mouse model. The antibody-tau oligomer complex was shown to be cleared to the blood. It was not clear, however, whether the antibodies were able to internalize into the cells or whether the observed results were due solely to clearance of extracellular tau oligomers. Another study, by Umeda et al., applied immunotherapy in aged mice of an aggressive tauopathy model. The antibodies used targeted phospho-epitopes, and a reduction in intracellular tau was observed without clear evidence of antibody internalization into the neurons. That reduction in phosphorylated pathogenic tau also resulted in marked improvement in synaptic and memory functions. It is worth mentioning that the most effective antibody in this study recognized tau oligomers in addition to a phospho-epitope (Umeda et al., 2015). This point is expanded on in the combination therapy section of this review.
A recent study by Wu et al. (2018) tracked tau antibodies in the brain and blood of live animals via two-photon microscopy after intravenous immunotherapy. They reported that approximately 25-50% of the antibodies crossed the blood-brain barrier and that approximately 80% of those resided in the brain for up to 14 days after injection. These antibodies target a phospho-tau epitope and were shown to be effective in reducing tau pathology and cognitive impairment. The antibodies were shown to internalize into neurons via clathrin-mediated endocytosis, which is a very promising step towards future immunotherapy clinical trials (Gu et al., 2013; Krishnaswamy et al., 2014). Nevertheless, one of the biggest challenges in developing immunotherapy is getting antibodies across the BBB to reach their respective targets, whether tau or Aß. Antibody efficacy can be increased by exploiting natural BBB transport systems such as carrier-mediated transport (Ohtsuki and Terasaki, 2007), active efflux transport (Sanchez-Covarrubias et al., 2014) and receptor-mediated transport (RMT). Pharmaceutical companies as well as academic scientists have used RMT systems to deliver therapeutic agents.
One such pathway involves the transferrin receptor (TfR), which is expressed in brain endothelial cells and is responsible for maintaining brain iron homeostasis by transporting iron via endocytosis (Barar et al., 2016; Pardridge, 2002). Antibody production has advanced considerably, and the use of bispecific antibodies looks very attractive. Bispecific antibodies are two antigen-specific immunoglobulin chains combined into a single construct. This technology has been widely used in cancer therapy. Bispecific antibodies can be engineered so that one arm recognizes a BBB RMT receptor and the other arm the pathological target. The arm recognizing the RMT receptor permits access to the brain while the other provides the therapeutic effect. In 2014, Yu et al. demonstrated that their bispecific antibody, which targets the TfR as well as β-secretase (BACE1), crosses the BBB and reduces amyloid-β load in nonhuman primates. The amyloid-β reduction directly correlated with the amount of antibody that crossed the BBB (Yu et al., 2014). In another study, a single-chain Fv antibody was fused with a heavy-chain chimeric antibody recognizing TfR, which reduced amyloid by 60% without increasing amyloid in the blood (Weber et al., 2018). These experimental data provide a valid strategy to overcome the BBB and deliver the necessary dose of therapeutic antibody. These strategies need to be further evaluated in clinical studies, particularly in trials using tau-targeted immunotherapy.
Meanwhile, as multiple forms of tau aggregates are being targeted in clinical trials, a marked increase in biomarker studies for early detection of AD is taking place (Cummings et al., 2018). Tau immunotherapy clinical trials are expected to slow down or prevent neurodegeneration and dementia only if backed up by early detection and therefore early intervention.

Biomarkers and immunotherapy: earlier detection means earlier intervention
Previously, biomarkers were only studied through autopsy. In 2018, however, screening for biomarkers using PET/MRI imaging, CSF, or blood was added as an official diagnostic tool in clinical trials by the National Institute on Aging (Lee et al., 2019). Not all current clinical trials demand screening for biomarkers prior to enrolling patients; however, the number of clinical trials that demand such measures is on the rise. Nevertheless, the effectiveness of including biomarker screening as a diagnostic tool in clinical trials depends heavily on how well biomarkers predict disease stages. Current biomarkers do not predict disease stages, and there are no established biomarkers that predict AD prior to the presentation of symptoms. The feasibility and ease of obtaining imaging or fluid biomarkers, and their economic burden, play a big role in their incorporation into clinical trials (Lee et al., 2019). In addition, imaging techniques are more useful for disease confirmation, whereas fluid biomarkers are more useful for early disease detection. A combination of both is most likely needed for early disease detection, monitoring of disease progression, and clinical trial outcomes (Mattsson et al., 2009). So far, Aß and tau, in their different forms, are the most common targets in biomarker studies for both early disease detection and progression (Lashley et al., 2018; Schöll et al., 2019). The change in Aß42 levels in CSF and its use as an AD biomarker is controversial; opposing results have been reported by several studies (Polanco et al., 2017).
However, it is well accepted that Aß42 levels decrease in CSF as the disease progresses (Fagan et al., 2007; Olsson et al., 2016). The rationale is that Aß aggregates sequester soluble Aß42 into larger plaques in the brain, leaving fewer soluble Aß units to circulate in the CSF (Bateman et al., 2012; Lashley et al., 2018). If that is true, however, the same concept should also apply to tau levels in the CSF, in relation to tangle formation along the disease spectrum. Yet p-tau and t-tau levels in the CSF increase with the progression of the disease. This raises the question of whether any specific protein species, regardless of its increase or decrease, is predictive of disease propagation (Polanco et al., 2017). T-tau increases in the CSF in several conditions in addition to neurodegeneration, such as after stroke (Hesse et al., 2001) or TBI (Franz et al., 2003), and therefore might not be fully reflective of AD disease progression (Lewczuk et al., 2018). Therefore, t-tau is not a reliable biomarker on its own (Schraen-Maschke et al., 2008; Jadhav et al., 2019). To obtain more accurate biomarkers for AD, the field has turned to combining tau and Aß measures and calculating the ratio of tau to Aß42 in the CSF as a measure of AD pathology (Fagan et al., 2007). However, all these measures reflect disease-stage pathology and not preclinical, pre-symptomatic stages. Therefore, they are not predictive of the disease.
New molecules, such as biomarkers related to inflammation, neuronal lipid metabolism and synaptic function, have recently been identified, considering that inflammation and synaptic dysfunction could be two of the earliest neurodegeneration-relevant events, preceding the accumulation of pathology and disease symptoms (Fyfe, 2019; Lista and Hampel, 2017). The increase of the inflammation-associated sTREM2 in CSF was recently found to correlate well with tau biomarkers and AD pathology. TREM2 is one of the very well-established genetic risk factors for AD and was recently found to correlate better with tau than with Aß pathology. FABP3, another novel potential biomarker, is a fatty acid binding protein that has been implicated in neurodegeneration and belongs to the same family of lipid-binding proteins as APOE4. Specifically, FABP3 has been shown to facilitate α-synuclein oligomerization and the accumulation of Aß into plaques. In addition, FABP3 levels have been shown to increase in MCI and AD patients, and to be as indicative of AD pathology as total and phospho-tau. Neurogranin is yet another novel molecule that is promising as a predictive CSF biomarker for AD. Neurogranin is a dendritic protein that reflects dendritic function and thus neurodegeneration. Changes in the CSF levels of neurogranin were also found to be predictive of disease progression and of amyloid and tau pathology. In a recent longitudinal study of old non-demented patients in Germany, FABP3 and neurogranin, along with the basic AD biomarkers (p-tau, t-tau, and Aß), were measured in a cohort of cognitively healthy elderly (60-80 years old) (Mattsson et al., 2016). The increase in the CSF levels of both FABP3 and neurogranin was found to significantly predict the conversion from a non-AD to an AD disease state. More longitudinal clinical studies are warranted to better characterize these novel biomarkers and to investigate other potential biomarkers that better predict the conversion to AD.
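As a trivial numerical illustration of the combined tau/Aß42 measure mentioned above, the ratio is simply the quotient of the two CSF concentrations. The values below are hypothetical and chosen only to show the direction of the effect; they are not reference ranges:

```python
def csf_tau_abeta42_ratio(t_tau_pg_ml: float, abeta42_pg_ml: float) -> float:
    """Ratio of CSF total tau to Abeta42; the ratio rises with AD-like
    pathology because tau increases while Abeta42 decreases in CSF."""
    return t_tau_pg_ml / abeta42_pg_ml

# Hypothetical illustration: tau rising and Abeta42 falling push the ratio up.
print(csf_tau_abeta42_ratio(t_tau_pg_ml=350.0, abeta42_pg_ml=800.0))  # ~0.44
print(csf_tau_abeta42_ratio(t_tau_pg_ml=600.0, abeta42_pg_ml=450.0))  # ~1.33
```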
Moreover, it has been established that tau oligomers are most likely the most toxic and earliest state of pathological tau. Soluble tau oligomers have been shown to increase in the brain long before symptoms and pathology start (Sato et al., 2018; Maeda et al., 2006; Lasagna-Reeves et al., 2012), and to correlate with the start of neuronal death and with progression through disease stages more than larger or insoluble tau aggregates do. However, the field lacks studies or clinical trials that look for tau oligomers in the CSF or plasma as biomarkers for disease progression. That is probably due to the difficulty of detecting these molecules in CSF or blood, given their low amounts and highly variable conformations. A study by Sengupta et al. (2017) is one of the few, if not the only one, looking at tau oligomer levels in CSF samples from moderate-to-severe AD, mild AD, and control patients. Interestingly, they show that tau oligomer levels in the CSF increase with the severity of the disease. Another study, by Kolarova et al. (2016), screened serum samples from AD, MCI and control patients and found that tau oligomers in the blood decrease, possibly due to impaired clearance, and thus accumulate in the brain. This would explain the results seen by Sengupta et al. (2017). Although the work on oligomers is not recent, the nomenclature is (Cárdenas-Aguayo et al., 2014). Strong evidence indicates that oligomeric species correlate most closely with cellular neurodegeneration, disease onset and progression; therefore, more studies are warranted to characterize disease-related tau oligomeric species and to develop methods to detect them in the CSF or blood.
CSF biomarkers are better characterized, even though CSF sampling is much more invasive and costly (Mattsson et al., 2009; Zetterberg, 2019). However, in the past couple of years there have been advancements in the field of blood biomarkers. Aß42/40 ratios in the blood are now considered a promising predictor of the stage of dementia (Nakamura et al., 2018). The blood levels of t-tau are still controversial in terms of predicting tau pathology and neurodegeneration in the brain; however, t-tau and p-tau are considered better blood biomarkers for AD and for the conversion from MCI to AD (Zetterberg, 2017). Nevertheless, Aß, t-tau and p-tau blood levels are still not considered reliable predictors of disease onset. Finally, plasma neurofilament light is considered, so far, the most reliable blood biomarker. It correlates well with CSF biomarkers and was recently found to predict the onset of neurodegeneration in familial AD. The only problem with this biomarker, however, is that it is not disease specific; rather, it is neurodegeneration specific (Khalil et al., 2018). More studies are warranted to increase the sensitivity and replicability of blood biomarker studies in a disease-specific manner. Reaching a reliable and sensitive biomarker would revolutionize the field and make disease detection and prediction available to everybody.
Besides CSF and blood biomarkers, PET imaging is the complementary form of biomarker detection for identifying disease stages and confirming target engagement after immunotherapy. A very interesting recent study aimed at using a PET tracer to parallel the stages of AD in cognitively normal, MCI and AD patients. The AD stages were successfully recapitulated by tracing tau pathology in small brain regions like the amygdala, entorhinal cortex and hippocampus.
In addition, the tracer F18-AV1451 successfully discriminated tau pathology levels between normal and MCI patients, as well as between MCI and AD patients (Leuzy et al., 2019; Passamonti et al., 2017). This novel study is a significant advancement in the field of PET biomarkers in AD. Overall, PET imaging could be used in combination with CSF and blood biomarkers in clinical studies of smaller, specific populations such as those with TREM2 variants (Carmona et al., 2018), APOE4 carriers (Safieh et al., 2019; Shi et al., 2017), and patients with a history of TBI (Gerson et al., 2016). An approach resembling personalized medicine could be applied by utilizing PET imaging biomarkers to identify and confirm disease-stage progression as well as to identify higher-risk populations at early disease stages (Table 2).

Combination therapy is the future
A decade of AD studies has led to one major conclusion: AD is one of the most progressive and multifactorial diseases in aging populations. The multifactorial nature of the disease stands between research and therapeutic development and makes it very difficult to stop or prevent disease progression with monotherapy approaches. Years of experimental and monotherapy clinical trials led to many advancements in understanding the complexity and timeline of the disease, identifying pathological protein species, and assessing the feasibility and risks associated with disease-modifying therapies. Most importantly, however, monotherapy studies have revealed the need for combination therapy approaches to overcome the complexity of the disease (Bittar et al., 2018).
A combination immunotherapy in AD would involve combining passive immunotherapy against multiple pathological proteins like Aß and tau. It is now well established that Aß and tau lie at the base of the pyramid of the AD disease process. Therefore, targeting both pathological forms of the proteins is expected to have great therapeutic potential. Several studies also show a strong synergistic effect between the two proteins, by which Aß exacerbates the toxicity of tau protein (or vice versa) and promotes further aggregation and propagation. This synergistic effect could be exploited in immunotherapy. Aß immunotherapy helps reduce early hyperphosphorylated tau pathology (Oddo et al., 2004). This highlights the advantage of using immunotherapy to reduce multiple related pathologies. It also further suggests a hierarchical relationship between Aß and tau, and thus highlights the importance of using immunotherapy in a timely manner for the highest efficiency.
Combination therapy could also mean targeting several pathological forms of the same protein, such as phospho-tau and tau oligomers. Passive phospho-tau immunotherapy has generally proved effective in reducing tau pathology and blocking phospho-tau propagation of pathology (Nisbet et al., 2017; Chai et al., 2011). However, very few studies showed an effect of targeting phospho-tau on cognitive decline (Yanamandra et al., 2013). In addition, of those few studies, only a couple of phospho-antibodies were tested in aged mice and showed pathology reduction and cognitive improvement (Yanamandra et al., 2015; Dai et al., 2015). The difficulty of obtaining cognitive improvement despite the successful clearance of insoluble phospho-tau species suggests that the targeted tau species does not play a major role in the process of neurodegeneration leading to cognitive failure.
Therefore, soluble forms of pathological tau, like tau oligomers, come into the picture due to their higher toxicity and their early detection in the disease process (Sengupta et al., 2017). Tau oligomers are also found both extracellularly and intracellularly, where they exert neuronal toxicity and impair synaptic function. For these reasons, tau oligomers form an attractive extracellular target for passive immunotherapy. A study by Umeda et al. characterized three monoclonal antibodies that target phospho-tau epitopes, one of which (Ta1505, targeting pSer413) was chosen for an immunotherapy experiment (Umeda et al., 2015). Although all three antibodies exhibited high-affinity profiles for their respective epitopes and were very specific to tau species from AD brains, Ta1505 was the only one that recognized and reduced tau oligomers in addition to pSer413 tau. Interestingly, Ta1505 was the most effective at reversing memory deficits in comparison to age-matched (14-month-old) controls and at clearing tau oligomers from both intracellular and extracellular spaces. Ta1505 also restored synaptic function, as shown by the restoration of synaptic protein levels (Umeda et al., 2015). This provides promising evidence that targeting more than one toxic tau species at a time is needed for a better effect.

Concluding thoughts
As the field eagerly awaits results from ongoing tau-targeted immunotherapy clinical trials, these trials will shape the next era of AD therapeutic developments. Currently, most experimental studies in animal models support the requisite role of pathological tau in AD disease onset and progression, and thus hopes are very high for tau-targeted treatments. That being said, it is also well established that AD is a complex, multifactorial, chronological disease. The matter of early treatment intervention is still controversial, as are Aβ involvement in the disease process and the move towards Aβ-independent treatments. In addition to pathological proteins, the disease cascade involves inflammation and genetic factors that feed into a vicious cycle of continuous progression and are therefore indispensable in the treatment plan. Is early combination therapy the solution? It is still very early to decide whether early intervention would make a difference and prevent disease progression. However, given that it is well documented that by the time symptoms arise the brain is already affected by synaptic loss, protein aggregation and inflammation that may not be reversible, multi-approach early intervention sounds like a possible and safe avenue for disease prevention. In addition, since tau pathology has been shown to start at least a decade before disease symptoms, the field now has a rough estimate of the average age at which early intervention might show promise, depending on the type of dementia. However, applying early intervention to the general population without reliable biomarker testing preceding the start of treatment is not practical. Therefore, early-detection biomarkers stand out as a necessary step for establishing disease involvement before early intervention. Moreover, routine early biomarker testing would be much more acceptable to the general public than, for example, straightforward early immunotherapy. At the moment, despite tremendous efforts, biomarker studies are also still naïve.
Nevertheless, with the promising advancements in biomarker studies, we are now closer to early disease detection and thus to early intervention.

Fig. 1. Full-length tau showing the discussed tau antibodies, their respective epitopes, and their effects on soluble/insoluble tau levels and cognitive function via immunotherapy.
User Cost of Housing Analysis of the German Real Estate Market

Background: This study looks at the development of the German real estate landscape, which recorded a continuous rise in prices between 2009 and 2022. These were due to genuine excess demand, a shortage of supply and favorable financing conditions. In the course of 2022, mortgage rates then rose after a historically long period of low interest rates.

Aim: This study aims in particular to assess whether the financial advantage of home ownership remains robust. The aim is to determine the relative attractiveness of home ownership compared to renting in all 401 German districts and independent cities.

Methods: The owner-occupier costs are determined using the user cost of housing approach and compared with the current rental prices in the respective cities and districts. Various factors are taken into account, including the purchase price, financing costs, maintenance and rental prices.

Results: The results of the analysis show that home ownership still offers a considerable cost advantage on average across Germany: in mid-2023, it is around 57% cheaper per square meter to invest than to rent. However, it is important to point out that this result does not take into account differences in the population's ability to afford home ownership. Furthermore, the analysis raises questions about the sustainability of past house price growth and whether this increase is due to fundamental factors or low mortgage rates. In this sense, the study confirms that home ownership on the German real estate market remains economically favorable compared to renting, even with increased interest rates. It underlines how important it is to consider the long-term benefits for wealth accumulation when deciding between owning and renting.

Practical relevance/social implications: The study is of practical relevance for people who rent an apartment or house in Germany and are considering whether it is sensible and economical for them to invest in property. The findings from the analysis are helpful for current purchasing decisions.

Originality/value: The study offers original results and provides the reader with added value, as it captures and evaluates current real estate developments on the German market in a well-founded manner.

Introduction
The real estate market plays a central role in the economy and the everyday lives of citizens in Germany. Changes in this market can have far-reaching effects, from the stability of the financial system to consumer choices. In particular, strong demand for real estate can quickly turn into speculative bubbles (imbalances). In this context, the "user cost of housing" approach offers an alternative way to identify regional speculative bubbles and evaluate the advantageousness of homeownership compared to renting. This paper aims to apply this approach to the German real estate landscape and to analyze its impact on the market. The following two research questions are to be answered:
Q1: Will owner-occupier housing costs be higher than rental costs in Germany in 2023?
Q2: Has the rise in interest rates in 2022 put an end to the advantages of home ownership?
The central motivation for this work lies in the need to better understand the German real estate market and the question of home ownership versus renting. It is of great importance to determine whether, despite possible interest rate increases, it is still more cost-effective to purchase a property instead of renting. This question not only concerns potential real estate buyers and tenants, but also has a significant impact on financial stability and the economic situation in Germany. Identifying speculative bubbles in the real estate market is crucial, as these bubbles can not only lead to distortions in the market but also jeopardize the financial well-being of citizens. The main objective of this paper is to apply the "user cost of housing" approach in Germany and to evaluate the cost of home ownership compared to renting. We investigate whether it is still advantageous to buy a property even after a possible increase in interest rates. Our analysis helps not only to understand the current situation in the real estate market, but also to identify potential signs of speculative bubbles. In the "user cost of housing" approach, the costs for tenants and property owners are compared. This requires converting the purchase price of real estate into ongoing costs incurred by the owner. These costs include the purchase price, utility costs, financing costs, and lost returns on invested equity. Taxes, maintenance and depreciation costs as well as expected price increases of the real estate are also taken into account. Various data sources were used for this analysis. The purchase price per square meter of living space and price changes were taken from available real estate data sources. Data on mortgage rates, yields on bearer bonds, and tax rates were obtained from reliable sources. These data sources enable the cost of home ownership to be calculated in comparison with renting in various regions of Germany for the period from 2017 to 2023. The results of the analysis show that in Germany, despite potential interest rate increases, it is still cheaper to buy than to rent residential property. In all 401 districts and independent cities in Germany, there is a positive cost advantage of home ownership. Even in metropolitan regions, where real estate prices have risen sharply, this cost advantage remains. The study also illustrates that the cost of home ownership is lower on average in the eastern states than in the western states. This could mean that it is financially more attractive for citizens in the eastern states to purchase residential property. This paper is organized as follows: In the next section (Section 1), the user cost of housing approach is explained in more detail, including the theoretical foundations and possible applications. Section 2 is devoted to the methodology, describing the calculation of housing costs and the data basis. Section 3 presents the results of the analysis, highlighting regional differences and metropolitan areas. Finally, Section 4 interprets the results and summarizes their significance for the German real estate market. The conclusion then notes the key points once more. This introduction is intended to provide the reader with a comprehensive overview of the paper, clarifying the motivation, the objective, the approach, the data, the results and the structure of the following sections.
User Cost of Housing Approach In general, the real estate market is assumed to be divided into a rental housing market and a residential property market. It is assumed that both the purchase price of real estate and the rental price on the market are determined primarily by the macroeconomic factors of supply and demand (Gürtler & Rehan, 2008). Strong demand for real estate can quickly lead to speculative bubbles. In this context, the user cost of housing approach represents an alternative approach to identifying regional speculative bubbles (Schier & Voigtländer, 2015). It allows for a comparison of the costs incurred by renters and the regular costs incurred by a homeowner (Lehmann, 2016; Voigtländer & Sagner, 2019). In addition, it aims to determine the relative advantageousness of homeownership compared to renting (Voigtländer & Sagner, 2019). If the analysis then identifies strong differences in costs, this indicates a need for correction and thus provides evidence of overheating in the market (Schier & Voigtländer, 2015). The European Central Bank (2024) sees the user cost of housing approach as a key driver of housing investment: the affordability of housing can be measured by the cost of the capital invested by a household in its housing, i.e., the user cost of housing. The user cost of housing approach is a widely used international approach that can be used to evaluate developments in the housing market (Poterba, 1984; Himmelberg et al., 2005). This approach was also used, for example, by the U.S. and Irish central banks to identify speculative excesses in their housing markets in the run-up to the financial crisis (Himmelberg et al., 2005; Browne et al., 2013). It is particularly well suited to reflecting the effect of monetary policy changes (Lehmann, 2016). According to Lehmann (2016), it is much better suited for this purpose than the price-to-rent ratio. Basically, the approach goes back to Poterba (1984) and Himmelberg et al. (2005), who used it to study the impact of taxes on the residential use forms of buying and renting. The approach is based on the premise that households are in principle indifferent between buying a home or living for rent in the same property (Voigtländer & Sagner, 2019; Hill et al., 2023). However, this is only true if the relative costs of the two options are identical. For example, if the costs change in favor of homeownership, the relative attractiveness of buying a home increases and the demand for real estate rises. This increased demand in the market for owner-occupied property would then subsequently also lead to purchase price increases in the corresponding regions, and renting becomes relatively less expensive until a new equilibrium is reached. It should additionally be noted that the residential real estate market is rigid in the short term (Voigtländer & Sagner, 2019). For example, if the demand for housing in a region increases, new construction can only respond with a delay, in most countries a significant one. More importantly, relocations occur infrequently, so the speed of adjustment is very slow. This slow reaction speed, even in the face of a decline in demand, means that housing occupancy costs and rents can drift apart in the short term (Voigtländer & Sagner, 2019).
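Stated compactly, the indifference premise implies a no-arbitrage condition in equilibrium. The notation below is ours, a minimal formalization of the argument above, with $R_{kt}$ denoting the rent for a comparable property in district $k$ at time $t$ and $UC_{kt}$ the annual owner-occupier cost:

$$R_{kt} = UC_{kt}$$

Whenever $UC_{kt} < R_{kt}$, owning is relatively attractive, demand shifts toward purchases, and purchase prices rise until the equality is slowly restored; conversely, a large and persistent gap can signal overheating.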
Methodology In the user cost of housing approach, as in any empirical analysis, the data used and the underlying assumptions are reviewed and adjustments are made if necessary (Voigtländer & Sagner, 2019). Comparing the costs of renters and owners of a property is not trivial in this context because rental costs are incurred as a flow variable while the purchase price is due once. This is where the concept of owner-occupier costs comes in: the purchase price, including ancillary acquisition costs, taking into account financing costs and lost returns on the equity used to purchase the property, is converted into a flow variable (Voigtländer & Sagner, 2019). This allows for the comparison of rental costs and the costs faced by an owner-occupant. According to Schier and Voigtländer (2015), the annual owner-occupier costs in county k at time t can be determined as follows:

$$UC_{kt} = P_{kt}\,(1 + g + m + e + n)\,\left[\,b\,i_F + (1-b)(1-\tau)\,i_A + s + a - \Delta P_k\,\right] \quad (1)$$

P_kt is the purchase price of the property in euros per square meter of living space. The term in the first parenthesis summarizes the ancillary costs incurred when purchasing the property: g stands for the amount of land transfer tax; depending on the federal state, between 3.50 and 6.50 percent of the purchase price is payable here in Germany. In addition, the calculation assumes that the property is purchased through a broker. The brokerage fee m likewise differs between the German federal states. Here, as a rule, between 3.57 and 7.14 percent of the purchase price must be paid. However, since this is a purchase of a property for private use, under the newly applicable commission division the fee must be shared between buyer and seller. Therefore, an average of just over 3.00 percent is to be expected. A flat rate of 2.00 percent is estimated for the mandatory entry in the land register e and any notary fees n incurred (Homeday, 2024; Homeday, 2023). As a rule, the purchase price is financed with a mortgage loan. The average debt ratio b in recent years was around 78.12 percent (Voigtländer & Zdrzalek, 2022). For the time-variable borrowing rate i_F, we assume the average effective interest rate of German banks for housing loans to private households with an initial fixed interest rate of more than 10 years. The time-variable debit interest rate i_F determined for the period from 2003 to July 2023 is 3.26 percent. To calculate this value, the average of the interest rates charged by banks for real estate financing during this period was formed. Data from the Deutsche Bundesbank (2023) was used for this purpose. In addition to the actual payments to be made for the house purchase, opportunity costs are incurred for the equity invested (on average 21.88 percent of the purchase price (Voigtländer & Zdrzalek, 2022)). As an opportunity interest rate, we assume the mean current yield of domestic bearer bonds i_A (2.14% (Deutsche Bundesbank, 2020)). A mean value was also calculated for this, analogously to the time-variable borrowing rate i_F. The income generated from the investment on the capital market must be taxed at the tax rate τ. For this purpose, the average tax rate was determined based on financial statistics (25.02%; calculation based on data from BMF, 2023). Added to this are the annual costs in the form of maintenance s and depreciation a incurred by homeowners.
A flat 3 percent is assumed for these (Clamor et al., 2013). These costs must be considered on an opportunity basis. Assuming that the residential property owner did not actually invest these costs in the property, for example in the form of a renovation or modernization measure, the property would lose value annually (Voigtländer & Sagner, 2019). Last, the long-term expected price increase of the residential property in the respective county, ΔP_k, has to be included in the calculation with a negative sign. The long-term price expectations are based here on the mean annual price increase rate of the years 2017 to 2023, calculated as follows:

$$\Delta P_k = \left(\frac{P_{k,2023}}{P_{k,2017}}\right)^{1/6} - 1 \quad (2)$$

For this purpose, data from the second-largest German real estate portal (immowelt, 2023a) were used. The individual price increases were calculated for each of the 401 districts using the formula provided above. These are list prices. Data To determine the housing occupancy costs in each county, various data sources were utilized. The equation comprises numerous variables that are not available in a single dataset (at least not publicly accessible). Table 1 lists the various components of the equation along with their data sources. See Table 1: Variables and data sources. Sources: own representation; formulas based on Schier and Voigtländer (2015). As can be seen from the table, both the purchase price in euros per square meter of living space (P_kt) and the purchase price change (ΔP_k) were calculated using data from immowelt (2023a). In this calculation, the square meter prices of individual districts from 2017 to 2023 were included, and the average square meter price of the past years was determined, while the annual purchase price change was calculated using the equation described earlier. Values for both condominiums and houses were included in the calculation. An average purchase price in euros per square meter of living space was deliberately calculated, rather than using the 2023 value, to prevent speculative price exaggerations. Currently, only values from 2017 to 2023 are publicly available, which is why older values could not be included in the calculations. For the mortgage interest rate i_F, we assume the average effective interest rate of German banks for home loans to private households with an initial fixed interest period of over 10 years, which is 3.26 percent. To determine this rate, monthly average interest rates from 2003 to July 2023 were used, and the annual average was calculated to subsequently determine the overall average effective interest rate. Data from the Deutsche Bundesbank (2023) were used for this purpose. As for the opportunity cost of capital, we assume the average yield on domestic bearer bonds i_A (2.14%) (Deutsche Bundesbank, 2020). Similarly, an average was calculated for this using the same method as for the time-varying borrowing rate i_F. To determine the tax rate τ, the average tax rate was calculated based on financial statistics (25.02%; calculation based on BMF data, 2023).
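For illustration, the owner-occupier cost calculation of equation (1) can be sketched in a few lines of Python. This is a minimal sketch assuming the average parameter values reported above; the function name, the example inputs, and the application of the 4.00 percent cap on expected price growth (introduced in the results section below) are our assumptions, not part of the original analysis.

```python
# Minimal sketch of equation (1); parameter defaults follow the averages
# reported in the text, all other names and values are illustrative.
def user_cost_per_sqm(price_sqm, delta_p,
                      g=0.05,      # land transfer tax (3.50-6.50% by state)
                      m=0.03,      # buyer's share of the brokerage fee (~3.00%)
                      e_n=0.02,    # land register entry plus notary fees (flat)
                      b=0.7812,    # average debt ratio
                      i_f=0.0326,  # average mortgage rate, 2003-July 2023
                      i_a=0.0214,  # mean yield on domestic bearer bonds
                      tau=0.2502,  # average tax rate on capital income
                      s_a=0.03):   # maintenance and depreciation, flat rate
    """Annual owner-occupier cost in EUR per square meter of living space."""
    delta_p = min(delta_p, 0.04)  # cap expected price growth at 4.00 percent
    gross_price = price_sqm * (1 + g + m + e_n)  # price incl. ancillary costs
    annual_rate = b * i_f + (1 - b) * (1 - tau) * i_a + s_a - delta_p
    return gross_price * annual_rate

# Illustrative district: 3,000 EUR/sqm purchase price, 5% expected growth.
uc_year = user_cost_per_sqm(3000, 0.05)
print(f"{uc_year:.2f} EUR/sqm per year = {uc_year / 12:.2f} EUR/sqm per month")
```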
Results of the analysis After carrying out the calculation, it quickly became clear that the price growth rates determined for the various districts and independent cities would lead to negative signs in the owner-occupier costs. As costs cannot be negative, but represent expenditure, the equation had to be adjusted. To prevent speculative exaggerations, the price growth rate was therefore capped at a maximum of 4.00 percent. The trend in property prices, as seen in the square meter prices in recent years (immowelt, 2023a), indicates that a price decrease can be expected in the coming years. Such high annual price growth rates would distort the current comparison between purchasing a property now or entering into a rental agreement. Other experts have set a maximum price growth rate of 3.00 percent (Voigtländer & Sagner, 2019) or 2.50 percent (Voigtländer & Sagner, 2022) in recent years. However, as these values are significantly lower than the observed price increases, the mentioned 4.00 percent is used here. Since the values used are only list prices and not actual sales prices, this measure appears necessary to eliminate overly optimistic offers from the calculation and to use realistic sales prices as the basis. After conducting the calculations, numerous results were recorded. First and foremost, the owner-occupier costs on average across Germany amount to approximately €4.11 per square meter. For a 100 square meter apartment, this would correspond to costs of €411.00 per month or €4,932.00 per year. In comparison, the average rental price in Germany is currently around €9.14 per square meter. Therefore, a 100 square meter rented apartment would cost €914.00 per month or €10,968.00 per year. Considering these values, a cost advantage of ownership over renting can already be estimated. Currently, on average across Germany, this advantage is 56.57%. This means that the average advantage of homeownership over a rental agreement is about 56.57%, and this holds true on average across all 401 counties and independent cities in Germany. Thus, it currently costs people less than half as much in euros per square meter to own rather than to rent a comparable apartment. Metropolitan areas always play a significant role in the analysis of the German real estate market, as extreme developments are often observed here.
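As a quick plausibility check on these averages, note that the ratio of the reported national means implies

$$1 - \frac{4.11}{9.14} \approx 0.550,$$

i.e., an advantage of roughly 55.0%. The reported 56.57% is slightly higher, which is consistent with its description as an average across all 401 districts and independent cities: the mean of district-level advantages need not equal the ratio of the national averages.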
Interpretation of the results After a comprehensive analysis and calculation, the question arises as to what insights can be derived for the real estate market and its future development. It is also important to determine whether and in what form the research questions Q1 and Q2 posed in the introduction can be answered. First and foremost, it is worth noting that even after an increase in interest rates towards more normal levels, investing in homeownership is still significantly more attractive than renting. A positive cost advantage was found in all counties in Germany, even though the expected price increases in real estate were not fully incorporated into the evaluation. Therefore, in Germany at the current time, it is more expensive to rent a comparable apartment or house than to invest in one. However, it is important to note that the user cost of housing approach does not take into account that the approximately 22% assumed as the equity ratio cannot be universally applied to the population. While the average German had per capita financial assets of €88,600 in 2022 (Statista, 2023), this figure has been trending downward (Statista, 2023). Additionally, this raises the question of whether this high level of per capita financial assets is evenly distributed across the entire country or concentrated in high-income regions. In this regard, the analysis quickly reveals that the eastern states of Germany have high cost advantages, low owner-occupier costs, and "relatively lower" rental prices. Therefore, living in these states is generally more affordable than in western Germany. A closer look at the two southern states of Bavaria and Baden-Württemberg shows that they often have lower cost advantages and high rents. Even in the German metropolitan areas, despite significant price increases in recent years, positive cost advantages continue to exist. However, it should be noted that the high entry prices do not necessarily provide every citizen with the opportunity to invest in homeownership instead of renting. Overall, the following can be stated with regard to research questions Q1 and Q2: On average nationwide and without exception, owner-occupier costs are lower, in some cases significantly so. This means that it is still advantageous to invest in home ownership instead of paying monthly for a rented apartment or house. Research question Q1 must therefore be answered in the negative: owner-occupier housing costs are not higher than rental costs, but the other way around. With regard to research question Q2, it can be stated that the rise in interest rates has not ended the advantages of home ownership. As explained at the beginning of Chapter 3, the calculated price increase rate of real estate in the districts and independent cities even had to be limited to 4.00 percent, as the results would otherwise yield "negative costs" for owner-occupiers. In other words, owner-occupiers would mathematically receive money for living in their own property. It can therefore be concluded that the interest rate increases did not have a significant impact on the price increase rates either. This research question can therefore also be answered in the negative. Conclusion The study of the German real estate market using the "user cost of housing" approach shows that buying real estate is still cheaper than renting, despite possible interest rate increases. This cost advantage extends across all 401 counties and independent cities in Germany and is also maintained in metropolitan regions. It was found that the average cost of home ownership in Germany is about 57% lower than the cost of renting. This finding not only has an impact on potential property buyers and tenants, but also contributes to Germany's financial stability and economic situation. The study shows that, even taking into account possible price increases for real estate, home ownership remains advantageous over renting. The calculations take into account various factors, including financing costs, taxes, maintenance costs and expected price increases. Interestingly, the eastern states of Germany have lower average ownership costs than the western states, indicating that it may be more financially attractive for citizens in the eastern states to purchase residential property.
It should be noted, however, that the "user cost of housing" approach does not take into account the individual financial situations of the population, especially the amount of equity that can be raised by citizens. This finding highlights the need to consider the distribution of assets in Germany. Lower owner-occupier costs and "relatively lower" rental prices were found in the eastern states, indicating that living in these regions is generally more affordable than in the west. Finally, it was found that although high price increases were recorded in the real estate market in German metropolitan areas, the cost advantage of home ownership remains. The objective of the study, to determine whether home ownership would still be cheaper than renting in 2023, was therefore fully achieved. The two research questions Q1 and Q2 were also answered satisfactorily in this context. However, it is important to note that the high entry prices do not offer all citizens the opportunity to invest in home ownership. Overall, this study shows that the real estate market in Germany continues to offer favorable conditions for homeownership, despite existing challenges, particularly with respect to the financial situation of the population and the regional distribution of wealth. Future research directions could focus on analyzing these factors and their impact on the real estate market.
Table 1: Variables and data sources
Table 2: Analysis of Germany's metropolitan regions
With the help of the user cost of housing analysis, it was possible to determine that Munich is currently the leader in terms of owner-occupier costs and rents. None of the other metropolitan areas record such high owner-occupier and rental prices. However, at the same time, Munich does not exhibit the lowest cost advantage of homeownership over renting. At approximately 29.44%, homeownership in Düsseldorf is "least" advantageous in a direct comparison with the other German metropolitan areas. Hamburg (30.69%) and Munich (31.95%) narrowly miss claiming the top spot for "lowest cost advantage among German metropolitan areas". However, in the context of an overall analysis of the German real estate market, it is not only the largest cities that are of interest but also a comparison among the individual federal states. In the following table, the corresponding values for the 16 federal states were determined with the help of the user cost of housing analysis. See Table 3: Analysis of the German federal states.
Table 3: Analysis of the German federal states
In a direct comparison of the federal states, Saxony-Anhalt stands out as the most affordable state for homeownership, with owner-occupier costs of only €2.00 per square meter. On average, a 100 square meter apartment here costs only €200.00 per month or €2,400.00 per year. Thuringia (€2.25 per square meter) and Saxony (€2.44 per square meter) closely follow Saxony-Anhalt as the next most affordable options. In this comparison, the highest owner-occupier costs are found in Hamburg (€9.01 per square meter) and Berlin (€7.69 per square meter). Excluding the two city-states, Bavaria (€5.28 per square meter) and Baden-Württemberg (€5.26 per square meter) occupy the last positions in terms of affordability. Analyzing the average rental prices of the federal states also revealed that renting is most expensive in Baden-Württemberg (€11.00 per square meter) and Bavaria (€10.15 per square meter) when excluding Hamburg and Berlin from consideration. Currently, renting is most affordable on average in Saxony-Anhalt (€6.27 per square meter). Lastly, it is important to compare the cost advantages of the individual federal states. In this category, Saarland is currently the most attractive state for investing in homeownership rather than renting, with a cost saving of 69.24%. Thuringia (68.61%) and Saxony-Anhalt (68.58%) closely follow in this comparison. Currently, the states with the lowest cost advantages are once again Hamburg (30.69%) and Berlin (43.08%). When these two city-states are excluded from the comparison, Bavaria currently has the lowest cost advantage at 49.80%. Just behind are Baden-Württemberg (52.52%) and Schleswig-Holstein (53.53%).
The impact of COVID-19 on the prognosis of deep vein thrombosis following anticoagulation treatment: a two-year single-center retrospective cohort study Background Coronavirus disease 2019 (COVID-19) has been proven to be a significant risk factor for deep vein thrombosis (DVT) over several waves of the pandemic. This study aims to further investigate the impact of COVID-19 on the prognosis of DVT following anticoagulation treatment. Methods A total of 197 patients with initially detected DVT who completed at least 3 months of anticoagulation treatment were identified at our hospital between January 2021 and December 2022. DVT characteristics, clinical data, and exposure to COVID-19 were recorded for multivariable logistic regression analysis to identify risk factors related to DVT aggravation. Propensity score matching (PSM) was used to balance baseline covariates. Kaplan–Meier curves and the Log-Rank test were performed to exhibit the distribution of DVT aggravation among different subgroups. Results In 2022, patients exhibited higher incidence rates of DVT aggravation compared to those in 2021 (HR: 2.311, P = 0.0018). Exposure to COVID-19, increased red blood cell count, increased D-dimer level and reduced prothrombin time were found to be associated with DVT aggravation (P < 0.0001, P = 0.014, P < 0.001, P = 0.024), with only exposure to COVID-19 showing a significant difference between the two years (2022: 59/102, 57.84%; 2021: 7/88, 7.37%; P < 0.001). In PSM-matched cohorts, the risk of DVT aggravation was 3.182 times higher in the COVID-19 group compared to the control group (P < 0.0001). Exposure to COVID-19 increased the risk of DVT aggravation among patients who completed three months of anticoagulant therapy (HR: 5.667, P < 0.0001), but did not increase the incidence rate among patients who completed more than three months of anticoagulant therapy (HR: 1.198, P = 0.683). For patients with distal DVT, COVID-19 was associated with a significantly increased risk of DVT recurrence (HR: 4.203, P < 0.0001). Regarding principal diagnoses, the incidence rate of DVT aggravation was significantly higher in the COVID-19 group compared to the control group (advanced lung cancer: P = 0.011; surgical history: P = 0.0365; benign lung diseases: P = 0.0418). Conclusions Our study reveals an increased risk of DVT aggravation following COVID-19 during anticoagulation treatment, particularly among patients with distal DVT or those who have completed only three months of anticoagulant therapy. Adverse effects of COVID-19 on DVT prognosis were observed across various benign and malignant respiratory diseases. Additionally, extended-term anticoagulant therapy was identified as an effective approach to enhance DVT control among patients with COVID-19.
Background Deep vein thrombosis (DVT) is a multifactorial disease. The classical clinical signs include pain and swelling in the lower limb, but the symptoms of DVT are hard to detect in most cases [1,2]. Despite its relatively low incidence rate, acute DVT can lead to pulmonary embolism (PE), while nearly half of chronic DVT cases progress to post-thrombotic syndrome (PTS) [3-5]. These complications not only significantly impact quality of life but can also be life-threatening. Therefore, it is crucial to implement appropriate treatment and timely preventive measures for DVT. Numerous studies have identified various risk factors associated with DVT, including body mass index (BMI) ≥ 30, tumor status, anti-tumor treatments, abnormal coagulation function, tuberculosis, and acute trauma, among others [6-11]. In recent years, coronavirus disease 2019 (COVID-19) has been confirmed to be an important risk factor for DVT [12-14]. Nevertheless, the potential impact of COVID-19 on the prognosis of DVT during treatment remains a topic requiring further exploration after several pandemic waves. Numerous trials and retrospective studies have examined the risk of DVT after COVID-19, but limited research has focused on the impact of COVID-19 on the prognosis of DVT after anticoagulation treatment [15-18]. Therefore, our study aimed to evaluate the efficacy and safety of anticoagulant therapy for DVT in patients with various benign respiratory diseases and malignant diseases, both with and without concurrent COVID-19. Study population and clinical characteristics We retrospectively analyzed a total of 4,376 duplex ultrasound scan (DUS) reports of lower limb deep veins and the corresponding clinical data at our hospital between January 2021 and December 2022. The DUS was performed following the whole-leg compressive ultrasound protocol, including bilateral examination of the common femoral, femoral, popliteal, and deep calf veins. DVT was defined as visible intraluminal content in noncompressible or partially compressible veins. This study specifically enrolled patients who were diagnosed with DVT for the first time and routinely received DUS at least once a month. All enrolled patients underwent telephone follow-up surveys, which primarily focused on their exposure history to COVID-19, selection of anticoagulant drugs, duration of anticoagulant therapy, and occurrence of hemorrhagic events. The exclusion criteria included: (1) DUS follow-up time of less than three months; (2) absence of any antithrombotic therapy; and (3) duration of anticoagulant therapy of less than 3 months. We recorded the baseline data, clinical characteristics and DVT attributes of enrolled patients for further analysis. Risk scoring results (Padua score for medical patients, Caprini score for surgical patients, and Khorana score for tumor patients) for venous thromboembolism (VTE) were categorized into three risk grades as follows: grade 0 (Padua score = 0, Caprini score = 0, and Khorana score = 0), grade 1 (Padua score < 4, Caprini score < 3, and Khorana score < 3), and grade 2 (Padua score ≥ 4, Caprini score ≥ 3, and Khorana score ≥ 3) [16,19-21]. The exposure history of COVID-19 was confirmed if a COVID-19 antigen test or PCR test was positive within 6 months before or after the diagnosis of DVT. Currently received treatment referred to the ongoing treatment within two weeks of the date DVT was diagnosed. All enrolled patients with DVT underwent chest computed tomography angiography (CTA) to exclude PE.
The dosages of anticoagulants in the study were as follows: (1) rivaroxaban, 15 mg twice daily for the first 3 weeks, followed by 20 mg once daily; (2) edoxaban, 60 mg once daily; (3) enoxaparin, 100 anti-Xa IU per kilogram every 12 h in the first week, followed by 100 anti-Xa IU per kilogram once daily. Dose reduction was specified based on the results of renal function, platelet count, and the level of D-dimer (DD). Evaluation of anticoagulation treatment was based on the results of follow-up DUS reports: (1) an unchanged location and length of DVT was defined as stable condition; (2) a reduced length of DVT or a lower level from the first detected location of DVT was defined as DVT remission; (3) an increased length and upper level of DVT, or a newly detected location of DVT, was defined as DVT aggravation. Stable DVT and DVT remission were both identified as control of DVT. All enrolled patients were followed until death or the last follow-up, and the endpoint of DVT was defined as the latest result obtained from DUS. The level of DD was also recorded for evaluation of anticoagulation treatments. Statistical analysis The enrolled patients were divided into two groups (2021 and 2022) based on the earliest date of DVT diagnosis. Baseline data differences between the two groups were analyzed using the t-test and chi-square test. Kaplan-Meier cumulative risk curves were presented to illustrate the temporal distribution of DVT aggravation from the earliest diagnosis date to the endpoint. All enrolled patients were divided into the responding group (DVT remission) and the non-responding group (DVT stabilization or aggravation). A multivariable logistic regression model was employed to evaluate the risk factors influencing anticoagulation treatment efficacy. We conducted a propensity score-matched (PSM) analysis to investigate the impact of COVID-19 on the prognosis of DVT after at least 3 months of anticoagulation treatment. Therefore, all subjects were divided into two groups: the control group and the COVID-19 group. The matching ratio between the COVID-19 group and the control group was 1:2, using a nearest-neighbor matching algorithm without replacement with distances determined by logistic regression. The matching covariates included age, diagnosis, and risk grade of DVT. Nearest-neighbor matching was performed with a match tolerance of 0.2 units of the pooled estimate of the common standard deviation of the logits of the propensity scores. The difference in DVT aggravation between these two cohorts was analyzed using Kaplan-Meier curves and the Log-Rank test. In order to verify how COVID-19 influences DVT prognosis under different conditions within this 1:2 matched PSM set, we further divided these matched subjects into groups based on current principal diagnosis, duration of anticoagulation therapy, and type of DVT. Subgroup analyses were conducted to present the progression of DVT over time using Kaplan-Meier cumulative risk curves. All statistical analyses were performed using SPSS version 22.0 (IBM Corp., Armonk, NY, USA). A P value < 0.05 was considered statistically significant.
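As an illustration only, the 1:2 nearest-neighbor matching described above could be sketched in Python as follows. The authors used SPSS, so the library choice, the greedy matching loop, and all column names here are our assumptions, not the study's actual code.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical DataFrame `df` with one row per patient and the matching
# covariates named in the text: 'covid' (1 = COVID-19 group), 'age',
# 'diagnosis', 'risk_grade'. Column names are illustrative.
def psm_match(df, ratio=2, caliper_sd=0.2):
    X = pd.get_dummies(df[["age", "diagnosis", "risk_grade"]], drop_first=True)
    ps = LogisticRegression(max_iter=1000).fit(X, df["covid"]).predict_proba(X)[:, 1]
    logit = pd.Series(np.log(ps / (1 - ps)), index=df.index)
    # Simplification: overall SD of the logits instead of the pooled
    # within-group SD used in the paper.
    caliper = caliper_sd * logit.std()

    unmatched = set(df.index[df["covid"] == 0])
    matches = {}
    for t in df.index[df["covid"] == 1]:
        # Greedy 1:ratio nearest-neighbor matching without replacement.
        near = sorted((c for c in unmatched if abs(logit[t] - logit[c]) <= caliper),
                      key=lambda c: abs(logit[t] - logit[c]))[:ratio]
        if len(near) == ratio:
            matches[t] = near
            unmatched -= set(near)
    return matches  # treated index -> list of matched control indices
```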
Demographic characteristics In the present study, we identified 197 patients who were initially diagnosed with DVT, received treatment and underwent a follow-up of at least 3 months at Shanghai Pulmonary Hospital from 2021 to 2022 (Fig. 1). The baseline demographics and clinical characteristics of enrolled patients are presented in Table 1. Patients' clinical data indicated minimal disparities between the years 2021 and 2022. Despite the longer median follow-up duration in 2021 compared to that in 2022 (P < 0.0001), the control rate of DVT was higher in 2021 (76/95, 80.00%) than in 2022 (67/102, 65.69%; P = 0.005). Moreover, there was a significantly greater proportion of patients exposed to COVID-19 in 2022 (59/102, 57.84%) than in 2021 (7/88, 7.37%; P < 0.001). There were more patients with low platelet counts (PLT) in 2022 compared with 2021 (27.5% vs. 9.2%, P = 0.016). In addition, no significant differences were observed regarding the remaining baseline demographics, features of DVT, and basic clinical characteristics. Comparison of DVT aggravation between the COVID-19 group and control group in the unmatched and matched cohorts Based on the differences in baseline characteristics between 2021 and 2022, we divided all enrolled patients into two groups: the COVID-19 group (69/197, 35.03%) and the control group (128/197, 64.97%). In the unmatched cohorts, no significant differences were observed in baseline demographics and clinical characteristics between the two groups (Table 2). The incidence of DVT aggravation was higher in the COVID-19 group compared to the control group (31/69, 44.92% vs. 23/128, 17.97%; P < 0.001), despite the shorter average follow-up duration (148 days vs. 176 days, P = 0.002). A significantly lower proportion of patients achieved DVT remission in the COVID-19 group (14/69, 20.29%) compared to the control group (73/128, 57.03%; P < 0.001). After implementing a 1:2 PSM based on the factors described in the methods section, we successfully matched 65 patients from the COVID-19 group with 112 patients from the control group (Table 2). In the 1:2 matched PSM set, the COVID-19 group was also associated with an increased incidence rate of DVT aggravation compared to the control group (44.62% vs. 17.86%, P < 0.001), as well as a reduced incidence rate of DVT remission (20% vs. 57.14%, P < 0.001). However, it is important to note that patients in the COVID-19 group had a significantly shorter mean follow-up time than those in the control group (149 vs. 174.9 days; P < 0.001). There were no significant differences in the remaining baseline characteristics between the two groups, which was consistent with the results before matching (Table 2). The relative risk of DVT aggravation was found to be 3.182 times higher in the COVID-19 group than in the control group (95% CI: 1.740-5.821, P < 0.0001), and the COVID-19 group (38/65, 58%) was 27% less likely to have an event of 6-month DVT remission compared to the control group (95/112, 85%, P < 0.0001) (Fig. 2B).
Fig. 3 Forest plot illustrating the odds ratio of the composite outcome of DVT aggravation in patients achieving DVT remission compared to those experiencing DVT recurrence or aggravation. Abbreviations: DVT, deep vein thrombosis; OR, odds ratio; 95% CI, 95% confidence interval; DBP, diastolic blood pressure; BMI, body mass index; PE, pulmonary embolism; RBC, red blood cell count; WBC, white blood cell count; PLT, blood platelet count; PT, prothrombin time.
Subgroup assessments In the matched cohorts, the results of Kaplan-Meier curves and the Log-Rank test further explained how COVID-19 impacts the prognosis of DVT during anticoagulant therapy under various conditions.
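To make the survival comparison concrete, the sketch below fits Kaplan–Meier estimators and runs a log-rank test on a synthetic stand-in cohort. The original analysis was done in SPSS, so the lifelines library, the fabricated data, and the column names are our assumptions, chosen purely for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Synthetic stand-in for the matched cohort: 'days' is follow-up time until
# DVT aggravation or censoring, 'event' flags aggravation, 'covid' the group.
rng = np.random.default_rng(0)
n = 177  # 65 COVID-19 + 112 control, as in the matched set
matched = pd.DataFrame({
    "covid": np.r_[np.ones(65, int), np.zeros(112, int)],
    "days": rng.integers(30, 400, n),
    "event": rng.integers(0, 2, n),
})

cov = matched[matched["covid"] == 1]
ctl = matched[matched["covid"] == 0]

kmf = KaplanMeierFitter()
for label, grp in [("COVID-19", cov), ("control", ctl)]:
    kmf.fit(grp["days"], event_observed=grp["event"], label=label)
    kmf.plot_cumulative_density()  # cumulative incidence curve per group

res = logrank_test(cov["days"], ctl["days"],
                   event_observed_A=cov["event"], event_observed_B=ctl["event"])
print(f"log-rank P = {res.p_value:.4f}")
```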
In the analysis of principal diagnosis, patients in the COVID-19 group all exhibited significantly higher risks of DVT aggravation compared to those in the control group (Fig. 5). Among patients diagnosed with advanced lung cancer, COVID-19 increased the risk of DVT aggravation by 2.62 times (95% CI: 1.212-5.676, P = 0.011) compared with the control group; however, it did not increase mortality risk, and there was no significant difference in median survival time between the two groups (839 vs. 974 days; HR: 1.249, 95% CI: 0.683-2.283, P = 0.456). Patients with a history of surgery in the COVID-19 group also had a significantly higher incidence rate of DVT aggravation than those in the control group (4/8, 50% vs. 3/15, 20%; HR: 4.113, 95% CI: 0.734-23.040, P = 0.0365). Furthermore, among patients diagnosed with benign lung diseases, the risk of DVT aggravation in the COVID-19 group was 2.643 times higher than in the control group (95% CI: 0.872-8.018, P = 0.0418). Discussion In this study, we observed a significantly higher incidence rate of DVT aggravation in 2022 compared to 2021 among patients who had completed standard anticoagulant therapy and had at least 3 months of follow-up. Given that all enrolled patients had received the same treatment strategy following guideline recommendations during these two years, we further explored the related risk factors for DVT aggravation; a history of COVID-19, increased RBC, and reduced PT were identified as risk factors for DVT aggravation in this study. However, only the history of COVID-19 infection showed a significant difference between 2021 and 2022. Therefore, we considered that the higher number of patients exposed to COVID-19 in 2022 compared to 2021 was the most important risk factor for DVT aggravation during anticoagulant therapy in our study. Previous studies have demonstrated an increased risk of first-time DVT six months after COVID-19; furthermore, the findings of this study suggest that COVID-19 is also an important risk factor for a poor prognosis of DVT during treatment [12,13,18]. Regular DUS follow-up during anticoagulant therapy supports the early detection of DVT aggravation [22-24].
In terms of the association between anticoagulation duration and the prognosis of DVT, this study demonstrated that exposure to COVID-19 increases the risk of DVT aggravation among patients who have completed only a 3-month course of anticoagulant treatment, while no such effect is observed in patients receiving extended-term anticoagulant therapy. The recommendations regarding extended-term anticoagulant therapy appear to vary across guidelines [25-28]. Most guidelines suggest that the duration of anticoagulation should be determined based on an assessment of VTE risk factors, including advanced malignant tumors, paralysis, thrombophilia, etc. [20,29]. Transient risk factors for DVT, such as inflammatory diseases and surgical history, typically warrant short-term anticoagulation until their removal. For patients diagnosed with COVID-19, some previous studies have also demonstrated that long-term anticoagulation treatment does not seem to provide protection against DVT [13,30]. However, our findings indicate that if exposure to COVID-19 occurs within 6 months before or after the diagnosis of DVT, prolonging the duration of anticoagulation treatment is necessary to ensure its effectiveness [31]. Regarding the types of DVT, the impact of COVID-19 on DVT prognosis was divided. Our findings indicate that exposure to COVID-19 increases the risk of DVT aggravation in patients with distal DVT (unilateral or bilateral). Distal DVT typically presents with fewer clinical manifestations and a lower risk of PE compared to proximal DVT, which may lead to delayed or irregular follow-up assessments for patients with distal DVT [32]. In our study, more patients with proximal DVT (31/66, 47%) indeed received extended-term anticoagulant therapy compared to patients with distal DVT (38/114, 33%). These results highlight that efficacy assessment of distal DVT has been neglected in clinical practice [1]. Therefore, it is crucial to pay attention to the adverse effects of COVID-19 on different types of DVT and particularly to strengthen treatment evaluation for patients with distal DVT who have been exposed to COVID-19. It has been previously reported that advanced lung cancer, thoracic surgery, and benign lung diseases requiring continuous anti-inflammatory treatment are all established risk factors for DVT. We also observed that exposure to COVID-19 was associated with an increased incidence rate of DVT aggravation during anticoagulant therapy. Even allowing for the impact of different principal diagnoses on the initial DVT diagnosis, exposure to COVID-19 may affect the effectiveness of anticoagulant therapy. This requires us not only to consider the current principal diagnosis, but also to take COVID-19 into consideration when arranging anticoagulant therapy.
Numerous risk scoring tools have been proposed for the practical clinical assessment of DVT. Despite the widespread use of the Padua score, Caprini score, and Khorana score, several studies have reported their limited reliability in predicting DVT due to incomplete inclusion of relevant risk factors [5,33-35]. Previous research has highlighted COVID-19 as an additional risk factor for DVT [12,13,18]. Our study further demonstrates that COVID-19 increases the incidence rate of DVT aggravation during anticoagulation treatment. Therefore, we should give full consideration to COVID-19 in DVT evaluation and follow-up assessments, and there is an urgent need to establish a risk scoring system and treatment strategy for patients with both DVT and exposure to COVID-19, particularly within the context of long COVID-19 and potential pandemics in the future [15,18]. There were certain limitations in our study. Considering the retrospective nature of the study, potential selection bias and data incompleteness may impact the accuracy of our findings. As a single-center retrospective cohort study, the results might be influenced by specific practices in our hospital, and should be strengthened through a future multi-center study. Furthermore, information from the clinical database and telephone follow-ups may overestimate patients' compliance, and the prognosis of DVT was directly influenced by the standardization of anticoagulation treatment and patients' preferences [2,29,32]. Considering the poor compliance with follow-up CTA among patients with PE and the missing follow-up data on PE, the correlation analysis of PE with proximal DVT was not performed. The impact of COVID-19 on the prognosis of DVT during anticoagulation needs further validation through prospective randomized clinical trials. Conclusions This retrospective cohort study represents the first evidence of COVID-19's impact on the prognosis of DVT during anticoagulation treatment in a real-world setting. Exposure to COVID-19 was associated with a high rate of DVT aggravation. Meanwhile, COVID-19 has been observed to have an adverse effect on the prognosis of DVT in various benign and malignant respiratory conditions. Once exposure to COVID-19 is confirmed, patients with distal DVT should receive appropriate anticoagulant therapy and regular follow-up to prevent DVT aggravation. Moreover, extended-term anticoagulant therapy has been identified as an effective approach for improving the control rate of DVT among patients with COVID-19, but further investigations are warranted to determine the optimal extended period of anticoagulant therapy in the future.
Fig. 2 Cumulative incidence of DVT aggravation in different groups. (A) The cumulative incidence of DVT aggravation between patients in 2022 and those in 2021; (B) In the 1:2 PSM cohorts, the relative risk of DVT aggravation between the COVID-19 group and the control group; (C) In the 1:2 PSM cohorts, the risk of DVT aggravation between the COVID-19 group and the control group among patients who completed only 3 months of anticoagulant therapy; (D) In the 1:2 PSM cohorts, no significant difference in the DVT control rate between the COVID-19 group and the control group among patients who received more than three months of anticoagulant therapy.
Fig. 4 Cumulative incidence of DVT aggravation by type of DVT in the PSM cohorts. (A) The cumulative incidence of DVT aggravation among patients with distal DVT; (B) The cumulative incidence of DVT aggravation among patients with proximal DVT.
Fig. 5 Cumulative incidence of DVT aggravation or mortality among patients with different diseases in the PSM cohorts. (A) The cumulative incidence of DVT aggravation among patients with advanced lung cancer; (B) The all-cause mortality in patients with advanced lung cancer and DVT; (C) The cumulative incidence of postoperative DVT aggravation; (D) The cumulative incidence of DVT aggravation in patients with benign lung diseases.
“Going hybrid on a dime”: Insights for transformation in education toward sustainable quality development This study contributes to research on Quality in Education and examines what possibilities now exist for schools to reinvent and transform using technology-based systems as part of the equation. It is speculated that the pandemic has changed the future of work, emphasizing hybrid solutions and networking. The purpose of this article is to present findings from phase two of the qualitative case study to examine what happened to a private school when it went “hybrid on a dime” to maintain attractive quality education. Introduction The global imperative to address sustainable development calls for organizations to re-examine their practices to meet complex societal challenges (UN Agenda 2030). Among the organizational actors, education has been singled out by UNESCO as essential to achieving sustainable development, articulating a new agenda to "reorient education to help people develop knowledge, skills, values, and behaviors needed for sustainable development" (UNESCO, 2017). The UN argues that "obtaining a quality education is the foundation to improve people's lives and sustainable development" (ibid) by ensuring competencies and skill development to live and work in the 21st century, as well as foundational values of equity and democracy through access to education. Yet, as Sterling (2010) states, there is a conflict between the current paradigm of schooling and what is needed to meet this challenge. For several decades, technology has been promoted as a transformative device for advancing education toward 21st-century living and work. In 2006, the EU Commission on education identified eight key competencies that support quality in education, among which digital competence, global awareness, and social skills were included (European Union, 2006). The call stretched the focus from technology as a mere tool to technology as a context for interacting and learning. Similarly, U.S. educational programming under the umbrella initiative called Framework for 21st Century Learning (2009) promoted an integrated model of core subjects, digital media and technology skills, life and career skills, and learning skills such as communication, creativity, collaboration, and critical thinking. This global agenda provided the pedagogical impetus for redesigning schools to contribute to a sustainable future.
Subsequently, numerous studies demonstrated that innovations were beginning to occur in some schools throughout Europe and North America (Baudry et al., 2011; Brecko et al., 2014; Brunvard & Byrd, 2011). However, little evidence exists that schools were being redesigned sufficiently to meet the future needs of students and society (Serdyukov, 2017). Fischer et al. (2020) and Boscconi et al. (2013) found that innovation remains incremental and superficial. These researchers suggest that this is not enough to stimulate deep transformation in education, as defined by changes in the way students learn, the way teachers teach, and how knowledge is created and shared. Fischer et al. (2020) argue that innovation in education is not merely about the application of technology in the classroom. Transforming, or in their words, reinventing education requires a transformation in how we think about learning, teaching, and integrating the new media in broader systems of schooling. In their research, they conclude that schools fall short of transformation due to a heavy focus on the "automation" of technology as a device rather than a way of being. In 2007, Snyder introduced the Digital Culture theoretical model (2007; 2015) as a way to frame the complexities of transforming schools to prepare youth for 21st-century living and work. The model grew out of 10 years of research on the application of ICT in education (Snyder, 2008). Evident was that most technology-based innovations were contained in a classroom or two; a finding in line with the studies mentioned above. Lacking was an understanding of the interdependency between organizational systems, pedagogical practice, and the values of the school, which were needed if technology was to serve as a transformative device. Like others, Snyder (2007; 2008) concluded that technology alone could not suffice as the driver of change. Placing technology in the hands of a few would not lead to the transformation that was needed to redesign schools for a sustainable future. Needed was a systems perspective in which the application of technology was determined by the goals, vision, and mission of the school and the needs of stakeholders. Moreover, technology would need to be applied at the whole-school level in concert with the guiding principles of the school's pedagogical and didactical design. Many now speculate that lessons from the Covid-19 Pandemic may provide the disequilibrium needed for educators and society to walk through the mind-shift to a new paradigm for education that promotes sustainable, attractive schooling integrating technology with whole-school development. The COVID-19 pandemic introduced chaos in a myriad of forms throughout the global community. In education, school leaders, teachers, parents, and students had little choice but to forge new ways of learning and teaching. They had to rapidly adapt work structures and learning configurations with the support of technology, quite literally on a dime, while also maintaining the emotional and physical well-being of themselves, their families, and their students. The pandemic experience now provides an impetus for research and development to understand how schools can take the step into a new paradigm of the digital culture (Snyder, 2007), in which the ways of working, learning, and interacting are transformed.
The purpose of this article is to present findings from phase two of a qualitative case study to examine what happened to a school during the hybrid model phase. In particular, focus is given to understanding how teaching and learning were impacted by the hybrid model, and the potential implications this has for sustainable quality development and the transformation of schooling. Background In Spring 2020, we began a longitudinal study of a private school in Tampa, Florida focusing on "leading during a pandemic", which has since been published (Snyder & Snyder, 2021). Findings from phase one illustrated how the school was able to adapt quickly to the complex conditions of the Covid-19 Pandemic. Evident was how the leadership team built upon the school's foundational values of collaboration, teaming, and networking, and the need to maintain education for societal growth. Tensil et al. (2021) suggest that this is important as a sustainability strategy that interconnects performance with innovation, customer needs, and stakeholder engagement. The heavy emphasis on collaboration and inclusion in the school also reflects the mind-shift to which Sanders (2010) refers, with its focus on dynamic thinking, collaboration, and drawing on the strengths of the internal school-work systems. During this initial study (spring and summer 2020) the school was working on a new "back-to-school plan" to develop a more sustainable model for schooling over the coming year. Their "quick-fix" implementation of a 100% virtual approach during spring 2020 was deemed unsustainable if the pandemic continued. Their back-to-school plan was based on a hybrid model of schooling. Their motto was: "the building is closed, but we are open for learning". As researchers, we continued to follow the school during the Pandemic year (2020-2021) and observed innovations in teaching and learning that redesigned the school as a result of the hybrid model. As the year unfolded, we began to observe that the disequilibrium caused by the pandemic was potentially creating more value for the stakeholders (students, parents, teachers, and community). For example, in the 2021-2022 school year, enrollment at this independent school was the highest it had been in years (Niche.com, 2021), despite the challenges of the Pandemic. It is conceivable to surmise this is in part due to the increased perceived value (Zeithaml, 1988) of the school program, the attractive quality (Lilja & Wiklund, 2007) that was experienced by families throughout the pandemic, and the ability of the faculty and staff to embrace the disequilibrium using instructional and operational technology in ways not previously imagined (Snyder et al., 2008). This stimulated curiosity about how the school shifted to a hybrid model on a dime and might provide insights into sustainable attractive quality in education.
Beyond technology-driven school development: A systems and quality orientation Organizations in a globally connected, internet-based age are complex, and it is challenging for leaders to develop internal systems and structures that are flexible and responsive (Rill, 2016). New organizational systems need to meet customer needs (Fundin et al., 2020) while being grounded and stable enough to build the kinds of healthy work environments that invite innovation and creativity to support sustainable development (Uhl-Bien & Arena, 2018). Practices within the field of quality management can provide insights to help educational leaders manage this balance between policy requirements, structure, process, and culture in order to be adaptive and responsive to customer needs. Quality management is an approach to organizational development focusing on the continuous improvement of products and services to meet and exceed customer needs (Deming, 1986). It is based on a set of guiding principles and values, combined with tools and processes, that are applied within a systems orientation (Capra & Luigi, 2016) to develop products and services. The core values function together as a system to align the work processes with the needs of customers, both internal and external. The organization's culture, defined by shared values, norms, and behaviors, is also recognized as an integral part of shaping and sustaining quality (Shingo, 2017). If the organizational culture is strong, it will fill co-workers with energy as well as shape their behaviors and decisions. The literature on quality management suggests that one of the ways organizations can be more responsive to changing conditions is to leverage attractive quality and perceived value as key elements for understanding how to build responsive systems (Johnson, 2021). The theory of attractive quality introduced by Kano et al. (1984) is often described as the surprise and delight attributes experienced when purchasing a product or service, and is a strong driver of loyalty, word-of-mouth, and saleability (Lilja & Wiklund, 2007). Kano et al. (1984) proposed the theory of attractive quality as a method for describing the relationship between two aspects, the objective (product or service) and the subjective (experience of the user/customer). Yang (2005) modified Kano's original model to include quality factors that customers perceive (see figure 1). Yang (2005) altered the quality elements of Kano's model into the following eight dimensions based on the degree of importance to the consumer: highly attractive and less attractive, high-value-added and low-value-added, critical and necessary, potential and care-free. In the refined model, if two product requirements cannot be met simultaneously, perhaps due to technical and financial constraints, the company will determine which is more crucial to customer satisfaction (Chen et al., 2020; Matzler & Hinterhuber, 1998).
Within the context of education, attractive quality can serve to determine the degree to which a school is designing learning environments that not only meet but exceed the needs of its stakeholders. In contemporary quality management, this means developing schools from the perspective of their stakeholders rather than from a top-down model in which the needs of stakeholders are presumed to be known. This turns traditional schooling, which is typically designed around national curricula, upside down. Using the theory of attractive quality as a guide stimulates educators to ask new questions about what their students and other stakeholders need and want, and to design schools that not only meet these needs but exceed expectations. This ups the ante from incremental innovation to long-term transformation.

Creating the conditions for transformation in schools, which are governed by deep cultural traditions and values, may require leaders to think beyond the box (Rill, 2016). Achieving this mind shift will require leaders to move from linear/static thinking with separate functions to random/dynamic thinking, in which functions are seen as interrelated and systemic (Sanders, 2010). This has implications for organizational structures as well as for the organization's culture (Schein, 2004); the traditional structures of the 20th century will not suffice (van Kemenade & Hardjono, 2019). Suarez and Montes (2020) hypothesize that organizational resilience is built from organizational routines and simple rules combined with improvisation, suggesting that the balance between structure and culture is paramount. Snyder & Snyder (2021) suggest that transforming schools toward sustainable quality requires a paradigm shift in what it means to organize and lead schools as living systems. They define sustainability as the responsiveness of a living system to changes in the environment. Creating sustainable conditions for work requires a departure from isolation in any form, which assumes a fundamental shift toward systems thinking, fostering human networks through which energy systems self-organize to invent, innovate and sustain. Moreover, they suggest that values are drivers that keep adaptation in line with future goals, while structures provide a framework for improvisation and innovation. Merely applying new technology, without grounding in a set of values, only reinforces temporary innovations. It is the interplay between innovative changes, supporting structures, and work culture that creates the conditions for leading sustainable quality development in today's society. The Digital Culture model (Snyder, 2007) merges research on technology in education with quality management and the leading of complex systems to provide educators with a systems model for redesigning schools toward 21st-century goals.
The Digital Culture model: A systems framework for transforming schools

The Digital Culture model includes four dimensions: communication, organizational systems, pedagogy, and technology. The communication dimension represents human exchange that takes place through technology, including written, spoken, and visual forms. Questions related to this dimension include what kind of information is exchanged, who initiates, who is included, who responds, the timing and length of the exchange, the sender-receiver relationship, and pushed versus pulled information. The technology dimension represents the digital media (information and communication technology) that support any combination of visual, auditory, or text-based communication, including the type of technology and how it is used: for example, email, chat, forums, intranet, Internet, videoconferencing, and visual software. The pedagogical dimension represents forms of exchange that support the sharing and building of ideas and learning, including collaboration, social networks, communities of practice, and online mentoring forums. The organizational systems dimension represents the identity, structure, and culture that are supported in a workplace by communication technology, for example, distributed work teams, open-landscape offices, norms, values, behaviors, and codes. Figure 2 illustrates in more depth the underlying aspects of each dimension of the digital culture. The interconnecting circles represent the systems nature of the digital culture, in which decisions are guided by the core of the school's work: the pedagogical practices. Among the elements embedded in this dimension are teaching and learning theory, classroom organization, and the role relations between student and teacher. The pedagogical principles are supported by organizational work systems, which include the structure of teaching and learning (e.g., team teaching, multi-grade or single-grade classrooms, and scheduling). The school's approach to communication, both as an organizational component and within the classroom, is also supported by the organizational systems. Technology is perceived as the system of tools that are designed and applied in the school to create conditions for success in teaching and learning. Reflecting on these studies, we are reminded that as learners we are not just students in a classroom following a curriculum. We are members of a larger culture that becomes our curriculum. As we engage with one another in active exchange, we give meaning to a collective space. Using media and technologies contributes to our communication, giving rise to new knowledge to shape a global ecumene. Educators can take the next step and support the development of schools as living systems, not just bureaucratic institutions. As living systems, comprised of cultures and networks, schools can adapt their learning environments to respond to changes in society and prepare youth for lifelong learning and living in a global age. The Digital Culture model is used in this study as a framework to explore in more depth how the case site addressed the challenges of redesigning its school during the pandemic to sustain attractive quality.
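Because the model also serves as the deductive coding frame in this study's analysis, a small sketch may help show how the four dimensions can operate as a codebook. The keyword lists and the code_excerpt helper below are illustrative assumptions, not the researchers' actual coding scheme.

```python
# A minimal sketch of the four Digital Culture dimensions used as a deductive
# coding scheme for interview excerpts. Keyword lists are assumed for
# illustration only.
DIGITAL_CULTURE = {
    "communication":          ["email", "zoom", "feedback", "forum", "message"],
    "technology":             ["ipad", "canvas", "microphone", "videoconference"],
    "pedagogy":               ["team teaching", "peer", "inquiry", "mentoring"],
    "organizational_systems": ["schedule", "multi-grade", "norms", "values"],
}

def code_excerpt(excerpt: str) -> list[str]:
    """Return every dimension whose keywords appear in the excerpt."""
    text = excerpt.lower()
    return [dim for dim, keywords in DIGITAL_CULTURE.items()
            if any(kw in text for kw in keywords)]

print(code_excerpt("We used Zoom breakout rooms for peer learning groups"))
# -> ['communication', 'pedagogy']: a single excerpt can touch several
#    dimensions, reflecting the interconnected-circles idea in Figure 2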
Methodology

The study presented in this article is based on a qualitative single-site case study of a school in Tampa, Florida, USA. It is part of an ongoing longitudinal study examining sustainable quality development in education. This portion of the study was conducted by three researchers: two university-based professors and a doctoral student who is also employed at the school.

Research questions

Four questions guided this study:
1. What factors were important for designing and implementing a hybrid model of schooling?
2. What changes were made in the school to support a hybrid model of learning?
3. In what way did the hybrid model impact teaching and learning at the school?
4. In what ways did the hybrid model impact the culture of the school?

Case description

The case site was a PreK-3 through 8th grade private, non-sectarian independent school, founded in 1968 in Tampa, Florida. It is dedicated to a hands-on, child-centered philosophy based on best practices in education and knowledge gained from leading-edge brain research to accelerate learning. As of the 2021-2022 school year, enrollment was 520 students, with a faculty and staff of over 130, equating to a student-to-teacher ratio of 8:1. The school is located in a large suburban neighborhood, and students are mostly from middle- to upper-class families.

Data Collection

Qualitative data were collected through interviews, focus groups, and document analysis. Permission to conduct this study was given by the leadership of the school, and access to key informants was established through the leadership team. Consent was secured prior to interviews. All participants were contacted via email, in which the purpose of the study and the research questions were presented.

The focus groups were conducted via a hybrid model by the university-based researchers (on Zoom) and the school-based researcher (in person). They were recorded and lasted one hour. Respondents were selected based on their role in the school and their availability. Two separate focus groups were conducted: the first included four division leaders from the elementary school, one early-childhood teacher, and the principal of the middle school; the second included four teachers from the middle school and the principal of the middle school.

Interviews: Two one-to-one interviews were conducted, one with a teacher from the pre-school and one with a science teacher from the middle school. Respondents were selected based on their experience designing and delivering the hybrid model, as well as their availability. Both interviews were conducted by the external researcher via Zoom. The interviews were recorded and lasted 1.5 hours.

Document analysis: The "back-to-school plan" and the "Family Remote Learning Plan" provided background information about the remote-hybrid learning model designed by the school in the summer of 2020.
Survey data: Quantitative data from two quality assessment measures were also included: 1) the Contentment Foundation survey (Contentment.org, 2021) provided evidence of employee well-being. The well-being assessment and analytics monitor 48 critical aspects of well-being at the individual, group, and whole-school levels, collecting data on physical health, psychological well-being, community climate, inner climate, relationship to experiences, and emotional efficacy; 2) the Measure of Academic Progress (MAP) Growth assessment measures student growth using the RIT (Rasch Unit) scale to help teachers measure and compare academic growth. The MAP Growth test is administered in the fall, winter, and spring of each school year to demonstrate academic growth and areas of instructional need within specific classrooms, enabling teachers to identify the academic needs of their students with laser focus. It is grade-level independent and dynamically adjusts to each student's performance as they take the test.

Data Analysis

Qualitative data were analyzed by the three researchers using a two-step approach. In the first step, inductive analysis techniques (Patton, 2002) helped to uncover themes, patterns, and categories embedded within the data. Analyses were conducted independently by the researchers during round one and then combined to compare the identified themes and patterns; final refinements to the patterns, themes, and categories were determined from the combined analysis. In the second step, the patterns and themes were further analyzed using deductive techniques (Patton, 2002), during which the Digital Culture model served as a framework through which to further identify the themes and patterns found in the data.

RQ 1: Factors important for designing and implementing a hybrid model of schooling

According to the focus groups, interviews, and document analysis, shifting to a hybrid model of schooling on a dime involved factors that are categorized under the leadership and organizational systems dimension. Among them were: 1) creating a sense of community, which was essential for buy-in; 2) articulating a common purpose and vision grounded in the school's values, which served as a guidepost for transformation during the crisis; 3) designing open channels of communication and feedback loops for all stakeholders; 4) structures and platforms that supported immediate competence development; 5) participation of all stakeholders; 6) out-of-the-box thinking; 7) redesigning planning and scheduling; and 8) team teaching. Below are some examples from the data that illustrate these key factors.
Sense of community and common purpose

To create a sense of common purpose, the leaders of the school developed a motto that served as an anchor point throughout the pandemic: the hashtag "OneCommunity" became the slogan reflecting a culture of connectedness and family. The "Family Remote Learning Plan" was designed to provide opportunities for children to be engaged in learning in the absence of being on the school's campus. The text reflects the school's intent to design innovative solutions that are grounded in the values and principles of the school. On page one, the following is stated: "We all understand that the face-to-face interactions in the vibrant and engaging classrooms our students and teachers enjoy each day are better in-person than occurring remotely. However, what we seek to create is a remote learning environment where teachers and students continue to be engaged in ways made possible by our many options for learning via the connectedness through the internet. There are many alternative and effective approaches available to our teachers to keep our students' minds active in meaningful ways. But parents, we need your help." (p. 1).

Strategic plans, action plans, and resources

"A Guide to Teaching Remotely" was designed to assist teachers. The plan states: "The goal of our remote learning plan is to provide opportunities for our children to be engaged in learning in the absence of being here on our beautiful campus. We are asking you to think outside the box in how you approach your teaching via Canvas, Zoom, and our library of online instructional resources. Rather than ask children at the elementary and early childhood levels to be tied to a screen, try to provide opportunities for them to read, write, share ideas, explore, create, play, and move." (p. 3). Included in the documentation was a library of resources, case examples, and a reminder to "breathe, be kind to yourself, reframe your thinking and think outside the box, see yourself as a member of a team in which resources are shared, [and remember] we're all in this together." The document also contained suggestions for how to integrate technology to stimulate learning and hybrid groups, including interactive PowerPoint presentations, discussions on Canvas, and the Kahoot learning game. Additional resources were also introduced, including wideopenschool.org, a network for supporting learning from home that inspires kids, supports teachers, relieves families, and supports the community, among others.
Feedback loops and continuous dialogue

According to the interviews, the school's leadership team sought regular feedback from parents and teachers throughout the implementation phases to ensure continuous quality. The data collected enabled the school to adjust its policies and procedures to begin the school year. At the start of the school year, the teachers welcomed 64% (323) of the enrolled students on campus and 36% (178) remotely via Zoom. Many questions were asked about the reliability and bandwidth of the school network, and whether it could support the strain of so many simultaneous Zoom meetings. Open lines of communication were vital if the school was to keep its finger on the pulse of a polarized community, allowing as large a percentage as possible to be satisfied with the learning taking place both in and out of the classroom. Throughout the year, the school created return points for families to decide whether to move from remote to in-person or remain at home. Toward the end of the school year, 91% of students were on campus while only 9% remained remote.

RQ 2: Changes in the school to support hybrid learning

Implementing and sustaining the hybrid model of schooling required changes and innovations that are reflected in all four dimensions of the Digital Culture model. Table 1 highlights some of the specific changes that were made. Details have been placed in one of the four categories; however, it should be noted that there is an interdependent relationship between them (a systems orientation) that creates the heart of the schooling from the perspective of the digital culture.

Pedagogical principles guided decisions

Decisions were made from the pedagogical principles of team teaching, cooperative learning, and social interaction, rather than from the technology. As one teacher shared, "for us, it was important to make connections equally with students who were learning from home and students in the classroom. With the technological solutions, we were able to form a resemblance of simultaneous learning with both groups". When asked to explain further, the teacher shared that learning is dynamic: in active classrooms, as was typical in this school, students' sense of inquiry and peer learning meant that teachers needed to be flexible and "spontaneous" in responding to natural learning moments. Examples of technological innovations that supported this included iPads on tripods, which facilitated natural movement in the classroom and provided alternative classroom camera angles to maintain connection and engagement with the remote students, and Zoom breakout rooms, which placed students in peer-learning groups connecting remote and classroom learners.

Restructuring the flow of movement in the school

One of the biggest innovations in the school related to scheduling and the flow of students. Rather than moving students from class to class, the teachers moved to the classrooms. This stimulated many new developments in team teaching and cross-curricular planning. Planning documents changed from paper to digital, enhancing access to information and communication between parents and teachers and among the teaching staff as a whole. A commitment to parent involvement stimulated new networks to support informed decision-making, the creation of a family remote plan, and continuous communication forums.
Technological solutions support teaching, learning, and communication

To support classroom learning, technological solutions were designed around the pedagogical and communication needs of the teachers and students. For example, as indicated in Table 1, collaborative solutions were implemented with Pear Deck and adaptive learning algorithms, while Canvas served as a platform through which to communicate, share, store, and retrieve information. Communication with parents was supported daily through the sharing of materials on Canvas and Zoom meetings. To support the values of presence and good communication, teachers were provided with microphones and speakers so they could be heard clearly through their masks. The Canvas platform was also used to share and store information from students (i.e., assignments).

Summary

These are but a few examples of the changes made during the hybrid model period. What is evident from the data is the systems orientation of the innovation, which stimulated changes in multiple dimensions simultaneously, for example, the interdependence of decisions made from the pedagogical to the technological and organizational. The pedagogical principles in the school were the core feature in the dialogue about technological solutions. Rather than asking what technology was available, the designers asked: what is most important for us to achieve in our learning environment? The key values of social interaction and "OneCommunity", open lines of communication, and a commitment to continuous improvement became the guideposts that enabled the school to maintain and sustain quality in learning through the pandemic. The introduction of the Canvas learning management system served multiple needs, including communication with students and parents. Teachers also used the online planning system, which stimulated changes in communication and program planning and, in some cases, improved possibilities for cross-curricular planning.

RQ 3 & 4: Impact on teaching, learning, and quality culture

The elementary school experience

According to division leaders and teachers in the elementary school, the hybrid-remote learning platform and structure impacted both teaching and learning in a myriad of ways, as well as the organization of learning. Changes in the culture of the school were also evident. For example, in early childhood, teachers reported that "the tempo of life is slowing down and kids are starting to engage in their learning differently". In the fifth grade, teachers witnessed a change in behavior and social skill development among students. As one of the division leaders shared, "I heard more teachers last year say that they developed an appreciation for each other's areas of competence. The students demonstrated empathy and patience because we were all so visible in where we were and what we were trying to do. There was a different kind of transparency. I even noticed in our community increased empathy, support, and appreciation. We were challenged as teachers, and now I am wondering if our then 5th graders who are now 6th graders will continue to develop these social skills in Middle School."
Other teachers and division leaders in the elementary school talked about the strength of teaming that was enhanced by the hybrid model. The "specials" classes (i.e., art, music, physical education) were integrated into the classroom, as compared to the prior "pull-out model" in which kids would move to the specials class. This provided new opportunities for team teaching in a variety of ways, among them scheduling, providing flexibility and time for the general education teachers to connect with parents or with remote learners who needed extra help. The integration of the specials into the general education classroom also prompted new insights and dialogue about partnering to co-design the curriculum, stimulating innovation in the already established team-teaching and multi-aged classroom pedagogical approach present in the elementary school.

The Middle School Experience

In the middle school, the story of hybrid learning was quite different. The traditions in the middle school were designed around ability grouping, which dictated in part the organization of teaching teams and the scheduling of classes. Providing simultaneous learning to classroom-based and remote learners challenged this model, causing teachers to sacrifice what they considered quality in education for better classroom management in a hybrid setting. As one teacher shared, "We made decisions in a fundamentally different way. Even though we thought it was the best way at the time, we saw problems when the kids started to come back to the classroom." Teachers in the middle school also echoed the challenges of providing quality education from a learner's perspective. As one person shared, "It was challenging because all of a sudden, we were thrown into scenes such as: 'Here is a school computer', 'Here is a new way of connecting (Zoom)', 'Here are things that you can do'. While we had a good introduction from the leadership, we were challenged to put one more layer of learning on ourselves to be able to teach. I am having to try and learn something [technology] while using it, and to put forward the kind of quality for my students." Another teacher echoed this and shared, "When we went hybrid it was a new way of teaching: you have to address the kids in the room, and then combine the materials for the kids on Zoom. It was like trying to fly an airplane and play a basketball game at the same time. Students are on Zoom and they need a top-level education; it was a balance."
Another challenge articulated was the loss of spontaneity to meet students' needs in the moment, and the difficulty of identifying the needs of remote students when teachers could not see the details of face or body language that were part of the communication they relied upon to provide quality teaching. "We needed to remember that students on Zoom weren't seeing what the classroom kids were seeing. It was a remembering piece for me. If the student was less engaged, it was about remembering to get them engaged. The technology was also challenging. When it wasn't working, how do you keep them connected? It was a constant juggling act, and trying to get your lesson plan to continue during this was challenging. The computer on one side of the room and the kids spread around the room made it hard to read the students when we couldn't see them so well". Another teacher explained further, "As a math teacher, I can read students' faces and see what their needs are. With remote connections, I couldn't see the students and what they were doing at that very moment. When we were fully remote, we had different technology: we could see everyone remote and what they were doing/writing. Having students verbally explain their work is very different from seeing what they are doing on paper."

The loss of flexibility and spontaneity was repeatedly shared as a fundamental challenge for the middle school teachers. Two teachers shared in a dialogue, "The times when you want to be spontaneous, the child at home doesn't have that opportunity. Those teachable moments were limited, and it hurt me not to provide that. If I were designing a room, the breakout rooms are great, but I want to be a fly on the wall hovering over them. If I need to go into the breakout rooms, then I change the energy; it breaks their momentum." On the flip side, teachers in the middle school shared ways in which students began to own their learning differently: "Kids are also given more freedom to engage in their learning and to own their learning space. In the sixth grade, students are invited to give identity and meaning to their workspace (for classroom-based kids this means their end of the table; for distance learners, it means their home space). Kids pride themselves in their designs and they get creative with the materials they use. One kid made a 'Covid fortress', outlining his table space with Christmas lights and signs."

The school tested most students from kindergarten through 8th grade just two months before the pandemic began, providing a baseline on which to measure academic progress throughout the pandemic. In mid-March 2020, the school left for spring break and did not return for the remainder of the school year. This meant MAP testing could not be completed, as shown in Figure 4. However, comparing winter 2019-20 to winter the following year, it is evident that almost all grade levels exhibit the same or higher mean RIT scores. This is significant given the complex nature of the hybrid teaching and learning taking place throughout the school year.
Aggregated Impact on Learning from the MAP Growth Assessment Data

The school year began with 132 students (26% of the student body) learning remotely from home. Teachers and administration assumed that student growth would stall in some way, as other schools have experienced. Interestingly, the opposite was true, and student growth remained at or above the level of pre-pandemic learning. It should be noted that at the time of writing, limited data are available for the fall 2021-22 assessment window. The available data do show a downward trend, but this can be attributed to summer learning loss (SLL). It is hypothesized that if we had data from spring 2019-20, the time when the whole school was remote and unable to test, the data would show SLL decreases between spring and fall, mirroring the trends the school has experienced for years. The stability, and even growth, in student test scores has contributed to the increase in attractive quality experienced by the parents of the school. When the pandemic began, parents were rightfully worried about the effects COVID would have on their children's academic, social, and emotional growth. These same parents were surprised and delighted to learn that not only did their children experience a year full of joy, but their academics did not suffer. The school has heard from many parents regarding their sincere appreciation of the teachers and staff for helping make this happen. The school's record re-enrollment, at a time when the economy was in flux, also supports the fact that customer loyalty and the perceived quality of the schooling experience were at an all-time high.

Impact on Faculty Wellbeing

The impact on faculty well-being was measured by the Contentment Foundation analytics, which focused on physical health, psychological well-being, community climate, inner climate, relationship to experiences, and emotional efficacy. Figure 5 displays the aggregated whole-school favorability rating generated each time a survey is taken. The lowest schoolwide favorability score, 69.2, was produced the same month the global pandemic began; shortly after this survey was taken, the whole school moved to fully remote learning. In August 2020, the school began arguably the toughest year of teaching in the school's history, according to interviews with teachers. Managing the high level of necessary engagement of students both online and in person took a toll on the overall personal well-being of the teachers and staff, especially, according to the survey data, in the areas of diet, sleep, immune system, emotional wellness, purpose in life, self-gratitude, and growth mindset. As the school entered the 2021-22 school year, the faculty and staff recorded their highest-ever overall well-being score of 72.49. The theory of attractive quality (Lilja & Wiklund, 2007) can be applied only loosely in this situation, as the teachers and staff are not the purchasers of a particular product or service. However, the high level of personal well-being, at a time when the school was beginning yet another tough school year, could be described as fundamental to the success of the first few weeks of school. Furthermore, when people are yearning for connection and positivity in their lives, the school, along with all its smiling teachers and staff members, is providing its customers with the definition of attractive quality. Parents and students are surprised and delighted with the depth of community the school provided to their families over these hard months and, in turn,
the perceived value of the school program and customer loyalty seem to have increased since the beginning of the pandemic.

Analysis and Discussion

How leaders respond to a pandemic without sacrificing quality is an indication of how sustainable and adaptable the organization is. When changes are guided by customer needs, they can serve as a contemporary indicator of quality (Fundin et al., 2020). Findings from this study illustrate that innovation in schools can be transformative if the right conditions are in place. In this case study, "going hybrid on a dime" was necessary for the school to maintain attractive quality in its education during a global pandemic. Adapting quickly was made possible by several critical factors that many researchers argue are essential to sustain quality development (Fundin et al., 2020; Rigby, 2018; Tensel et al., 2021); among them, we identified factors such as a sense of common purpose, alignment of work systems, and networking.

The data also demonstrated that the redesign of schooling through technology-based solutions is complex and requires a fit between the pedagogy and the organizational structure of teaching and learning. When the fit is good, technology can extend and transform; when the fit is poor, technology can amputate teaching and learning. For example, the use of iPads and the Canvas learning management system extended teaching and learning, and also enhanced attractive quality. At the elementary level, this adaptation appeared to innovate how teachers worked together, how the curriculum was designed in collaboration across different units and subjects, and how teachers partnered to give "relief" and create space for meeting the needs of individual students and parents who needed extra attention.

Changes were made to the scheduling, the grouping of students, and the teaming of teachers, making the learning environment more attractive. This illustrates how teachers can be stimulated by technology to adapt and reinvent learning in a digital age (Fisher et al., 2020). The data also suggest that the established culture of team teaching created important conditions for the hybrid model to thrive and transform the school, reinforcing Brunetti et al.'s (2020) finding that technological competence needs to be applied within a broader culture that supports continuous improvement and innovation.

In the middle school, there was evidence of how attractive quality was amputated as teachers struggled to maintain their values for learning based on ability grouping. The level of attractive quality diminished with the hybrid learning model, raising fundamental questions about what is required for integrating technology in schools to stimulate innovation and attractive quality transformation. This raises further questions about how team teaching can be advanced to provide more flexibility for teachers, along with additional teacher resources added to the environment for responding to emerging needs.

The relationship between communication and pedagogy was also highlighted as important for innovating and maintaining attractive quality. Structural elements in scheduling, planning, and curriculum development were more flexible and could be sacrificed to prioritize social connections and well-being among students; a finding that is in line with both Fisher et al. (2020) and Brunetti et al.
(2020). Seen from an attractive quality perspective (Lilja & Wiklund, 2007), teachers were responsive to the needs of their students and used the fundamental principles and values of the school's pedagogy to inform decisions about the use of technology in the hybrid learning situation. Moreover, the school's goal of creating a sense of "being here" and "one community" drove the design of the hybrid model and the decisions that ensued during its implementation.

On the surface, it may appear that technology was the conduit for the design of the hybrid model. We would caution the reader to look more deeply at the implications, to understand that the digital culture was made possible by a deeper awareness among the school leaders of quality management and the importance of creating value for customers and stakeholders, as well as of a systems orientation to continuous development. Technology integration does not, by itself, generate value. Transformation occurs when educators work together, interconnected, to generate new and unexpected value, made possible by technology. Our analysis from this study is that the redesign of schooling took place through how the school communicated, consumed, created, and organized using technology in unexpected ways that added attractive value for its internal and external customers.

Transforming the school over time is a non-linear process (Snyder et al., 2008), which suggests that the four dimensions of the Digital Culture model operate as a dynamic, integrated system of forces as the pendulum swings between them (see Figure 6). The pendulum swing is guided by the values and philosophy of the organization to ensure that the different dimensions are strategically developed as an integrated whole, aligned with the goals and mission of the organization. A culture of continuous improvement and continuous professional development is essential to ensure sustainable quality development as the pendulum swings, perhaps multiple times, to find a new place.
If your focus as a leader, or as a school, leans too heavily in one direction or another, the needs of the other critical areas represented in the Digital Culture model will make themselves known, normally displayed through a lack of responsiveness in a particular area. For example, if we focus all our energy and time on developing pedagogy, our focus is naturally off communication, and we might hear from our customers regarding this miss. However, much as gravity restores a pendulum, the 'equilibrium' of life forces the pendulum back towards communication, thus keeping the organization balanced. Sustainable development is an ongoing 'process' in which we must continue to innovate in these different areas, rather than arrive at the destination of sustainability. Pendulums, in perpetual motion, help us visualize a never-ending 'dance' through and between the interconnectedness of life itself in a human organization. Perhaps this story about Corbett Prep's capacity to "go hybrid on a dime" reinforces the larger story of developing a school over time as a strong, cohesive living system, one that is strong because of its interconnections, interdependencies, and networking both within the school and with parents and the larger community. Isn't the overarching message in this story also about the resilience of educators to invite, listen to, and engage others to help shape the journey, while searching for continuous feedback? The Digital Culture model shows the power of a school to move on a dime to respond to the enormous challenges of maintaining and exceeding expectations for student learning, teacher well-being, parent satisfaction, and attractive quality during a lengthy pandemic. This is a picture of complexity that shows the strength that evolves from a school with a common purpose, where everyone is engaged and involved in shaping the school's continuous improvement journey, even in a pandemic. It also reflects the importance of developing processes for feedback and dialogue among organizational members to reflect on the appropriateness of innovations to transform schools toward enhanced quality and sustainable development. Allowing the digital culture's "pendulum to swing", guided by the values and goals of the school, creates the conditions for sustainable quality development.

Figure 3: Examples of aspects underlying the four dimensions of the digital culture.
Figure 4: MAP Growth Assessment Data Over Time (winter 2020 to fall 2021, with grade levels shown by color; dark blue represents kindergarten and grey represents 8th grade).
Figure 5: Schoolwide Personal Wellbeing Survey Data.
Figure 6: The Pendulum Swing of the Digital Culture model.
Table 1: Four Elements of the Digital Culture Model.
Co/Al Co-Substituted Layered Manganese-Based Oxide Cathode for Stable and High-Rate Potassium-Ion Batteries

Manganese-based layered oxides are promising cathode materials for potassium-ion batteries (PIBs) due to their low cost and high theoretical energy density. However, the Jahn-Teller effect of Mn3+ and sluggish diffusion kinetics lead to rapid electrode deterioration and a poor rate performance, greatly limiting their practical application. Here, we report a Co/Al co-substitution strategy to construct a P3-type K0.45Mn0.7Co0.2Al0.1O2 cathode material, where Co3+ and Al3+ ions occupy Mn3+ sites. This effectively suppresses the Jahn-Teller distortion and alleviates the severe phase transition during K+ intercalation/de-intercalation. In addition, the Co element contributes to K+ diffusion, while Al stabilizes the layered structure through strong Al-O bonds. As a result, the K0.45Mn0.7Co0.2Al0.1O2 cathode exhibits high capacities of 111 mAh g−1 and 81 mAh g−1 at 0.05 A g−1 and 1 A g−1, respectively, and demonstrates a capacity retention of 71.6% after 500 cycles at 1 A g−1. Compared with the pristine K0.45MnO2, the K0.45Mn0.7Co0.2Al0.1O2 significantly alleviates the severe phase transition, providing a more stable and effective pathway for K+ transport, as investigated by in situ X-ray diffraction. The synergistic effect of Co/Al co-substitution significantly enhances the structural stability and electrochemical performance, contributing to the development of new Mn-based cathode materials for PIBs.

Introduction

In the current era of rapidly depleting fossil fuels and growing demand for renewable energy, lithium-ion batteries (LIBs) have been widely used in various electronic devices due to their high energy density and long cycle life [1-3]. However, the huge consumption, rapidly increasing price, and uneven distribution of lithium resources pose significant challenges to the sustainable development of LIBs. Consequently, there is a pressing need to identify suitable alternatives that are cost-effective and suitable for large-scale energy storage. In recent years, potassium-ion batteries (PIBs) have become highly promising candidates because of the abundance and low cost of potassium resources [4-7]. PIBs have an operating mechanism similar to that of LIBs, and the low redox potential of K+/K (−2.93 V vs. SHE, second only to Li+/Li) gives them a high theoretical energy density. In addition, K+ exhibits fast diffusion kinetics in organic electrolytes because of its weak Lewis acidity [8,9]. However, the large ionic radius of K+ (1.38 Å) greatly limits K+ transport in electrode materials, which undergo severe volume changes during electrochemical cycling, leading to rapid capacity loss [10,11]. Therefore, the development of suitable electrode materials plays a crucial role in advancing PIB technology.
Currently, the main cathode materials being investigated for PIBs are Prussian blue analogues [12,13], polyanionic compounds [14,15] and layered transition metal oxides [16-18]. Among these, layered transition metal oxides are among the most promising cathode materials thanks to their ease of synthesis and high theoretical capacity. Specifically, Mn-based layered oxides have garnered significant attention in PIBs owing to their low cost and environmental friendliness [19]. However, the Jahn-Teller distortion of Mn3+ leads to an unbalanced lengthening of the Mn-O bond and reduces the symmetry of the structure, thus exacerbating the overall structural instability [20,21]. In addition, slow K+ transport, limited K+ storage sites and severe phase transitions during cycling cause poor performance in practical applications [22]. Kim et al. [23] reported a layered P3-K0.5MnO2 with a complex phase transition during cycling, leading to rapid capacity loss; it retained only 70% of the initial discharge capacity after 50 cycles at 20 mA g−1. Due to the poor diffusion of K+, this electrode showed a low specific capacity of only 38 mAh g−1 at 300 mA g−1, indicating a poor rate performance. Studies have shown that metal-ion substitution is a powerful measure to remedy the above defects in KxMnO2. For example, Ni [24], Co [25], Fe [26], Mg [27] and Ni-Ti [28] substitutions have demonstrated the ability to improve structural stability and electrochemical properties. Zhang et al. [29] reported a K0.3Mn0.95Co0.05O2 cathode in which Co doping suppressed the Jahn-Teller distortion, allowing more isotropic migration pathways for K+ in the interlayer; this enhanced ionic diffusion and, consequently, the rate capability. Zhong et al. [30] found that Co-Fe co-substitution could expand the interlayer distance and effectively suppress interlayer gliding to achieve highly reversible phase evolution, thus improving the rate performance and cycle life of K0.5MnO2.

Herein, we modified K0.45MnO2 (KMO) with two selected metal elements (Co and Al) and synthesized the co-substituted P3-type K0.45Mn0.7Co0.2Al0.1O2 (KMCAO). Through careful consideration of chemical bonding, Al3+ partially replaces the Mn3+ sites, forming stronger Al-O bonds than the Mn-O bonds; this substitution serves to stabilize the layered structure [31]. Additionally, the inclusion of Co3+ ions facilitates the electrochemical reactions, providing active sites for K+ storage. Moreover, the Co3+/Al3+ co-substitution of Mn3+ sites effectively inhibits the Jahn-Teller distortion, mitigating severe crystal structure transformations. As shown by in situ XRD, spherical-aberration-corrected scanning transmission electron microscopy and first-principles calculations, the KMCAO electrode exhibits a highly reversible phase transition during cycling. Meanwhile, Co/Al co-substitution widens the K-layer spacing of the material and lowers the energy barrier for K+ migration. Compared with the pristine KMO, the KMCAO cathode exhibits superior cycling stability and rate performance, and the excellent electrochemical performance of the KMCAO || soft carbon full cell also demonstrates its potential for practical applications. These findings highlight the effectiveness of our modification strategy and provide new insights into the development of PIB cathode materials.
Materials and Methods

All reactants are of analytical grade. P3-type K0.45Mn0.7Co0.2Al0.1O2 and K0.45MnO2 samples are synthesized using a sol-gel method. First, 2 g of polyvinylpyrrolidone (PVP K90, Mw = 1,300,000) is added to 20 mL of deionized water and stirred continuously to form a solution. KNO3, Mn(CH3COO)

The soft carbon is synthesized from 3,4,9,10-perylenetetracarboxylic dianhydride (PTCDA) by pyrolysis at 900 °C for 10 h under a flowing Ar atmosphere [32].

Structure and Morphology

The ICP test results for KMCAO and KMO are shown in Table S1; the chemical composition is consistent with the nominal composition. The electrical conductivities of KMCAO and KMO, measured using the four-probe method, are 9.73 × 10−6 and 6.21 × 10−6 S cm−1, respectively (Table S2). The higher conductivity favors the diffusion of ions, thus improving electrochemical performance. Figures 1a and S1a show the Rietveld-refined XRD patterns of KMCAO and KMO. These patterns indicate that both samples exhibit a P3-type layered structure belonging to the R3m space group. Figure 1b shows the crystal structure of the P3 phase, where the O-ion layers are stacked in ABBCCA order, TM (Mn, Co, Al) ions occupy octahedral sites, and K ions occupy prismatic sites. The Rietveld refinement reports for KMCAO and KMO are given in Tables S3 and S4. The decreased value of a (KMCAO: a = 2.8681 Å; KMO: a = 2.8745 Å) indicates that Co/Al co-substitution shrinks the TM layers, which helps stabilize them. Analysis of the cell parameters along the c-axis shows an enlargement of the K-layer spacing, as evidenced by the increase in c (KMCAO: c = 21.1069 Å; KMO: c = 20.8983 Å). This expansion is supported by the shift of the (003) diffraction peak of KMCAO to a smaller angle in the XRD pattern (Figure S1b). The expanded interlayer space facilitates the diffusion of K ions and reduces the electrostatic repulsion between O ions in neighboring TM layers; these effects contribute to the mitigation of phase transitions and interlayer sliding [25].
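As a quick consistency check, the direction of the (003) peak shift follows directly from Bragg's law, as the short sketch below shows. It assumes Cu Kα radiation (λ = 1.5406 Å), which is typical for laboratory XRD but is not stated explicitly in the text; for this layered cell d(003) = c/3.

```python
import math

LAMBDA = 1.5406  # Å, Cu K-alpha wavelength (assumed)

def two_theta_003(c_angstrom: float) -> float:
    """2-theta (degrees) of the (003) reflection for a cell with parameter c."""
    d = c_angstrom / 3.0                       # interplanar spacing of (003)
    return 2 * math.degrees(math.asin(LAMBDA / (2 * d)))

print(f"KMCAO (c = 21.1069 Å): 2theta = {two_theta_003(21.1069):.2f} deg")  # ~12.57
print(f"KMO   (c = 20.8983 Å): 2theta = {two_theta_003(20.8983):.2f} deg")  # ~12.70
# Larger c -> larger d -> smaller 2theta: the (003) peak of KMCAO sits at a
# lower angle than that of KMO, consistent with the shift seen in Figure S1b.
# Note also c/3 ≈ 7.04 Å ≈ 0.7 nm, matching the layer spacing quoted below.
```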
A spherical-aberration-corrected transmission electron microscope (AC-TEM) is used to obtain detailed atomic-scale crystal structure information of KMCAO through annular bright-field (ABF) and high-angle annular dark-field (HAADF) imaging. In the ABF-STEM images, the O and K layers appear as bright grey dotted contrasts, while the TM layer appears as dark dotted contrasts; the HAADF-STEM images display the reverse contrast. In the ABF-STEM images, the alternating arrangement of the K and TM layers, as well as the stacking of the O layers in the ABBCCA sequence along the [010] zone axis, can be observed (Figure 1c). This is a typical P3-phase structure. The HAADF-STEM image confirms that the distance between neighboring layers is about 0.7 nm (Figure 1d), which corresponds to the Rietveld refinement result (c/3). In addition, the ABF-STEM image reveals a hexagonal symmetry in the arrangement of the TM atoms along the [001] zone axis (Figure 1e). In the HAADF-STEM image, the measured distance between adjacent TM atoms is about 0.28 nm, consistent with the cell parameter a from the Rietveld refinement (Figure 1f). The detailed crystal structure of KMO can also be observed in the ABF-STEM and HAADF-STEM images (Figure S2). The scanning electron microscopy (SEM) and transmission electron microscopy (TEM) images (Figure S3a,b,d,e) reveal that both the KMCAO and KMO samples exhibit irregular polygonal particle morphologies, with an average particle diameter of approximately 1 µm. The high-resolution TEM (HRTEM) images show clear lattice stripes, indicating high crystallinity. The lattice spacing is measured to be approximately 0.24 nm, corresponding to the (012) plane of the P3-type layered structure (Figure S3c,f), and both inset selected-area electron diffraction (SAED) patterns show the hexagonal structure. Further, the energy-dispersive spectroscopy (EDS) mapping images demonstrate a uniform distribution of the K, Mn, (Co, Al) and O elements throughout the particles (Figures 1g and S3g). X-ray photoelectron spectroscopy (XPS) is then used to probe the oxidation states of the elements (Figure S4a). In the Co 2p spectrum of KMCAO, a double-peak feature is observed at 780.1 eV and 795.2 eV, corresponding to Co 2p3/2 and Co 2p1/2, respectively, indicating the trivalent state of the Co ions in the sample (Figure 1i) [33]. The Al 2p spectrum
of KMCAO exhibits a characteristic peak at 73.3 eV, corresponding to Al3+ (Figure 1j) [34]. The Mn 2p spectrum exhibits two main peaks that can be deconvoluted into four characteristic peaks (641.9 eV and 642.9 eV; 653.4 eV and 654.7 eV), attributed to Mn 2p3/2 and Mn 2p1/2, respectively, indicating the presence of both Mn3+ and Mn4+ in the sample (Figures 1h and S4b) [35]. Notably, the area of the Mn4+ characteristic peaks is significantly larger in KMCAO than in KMO, meaning the co-substitution effectively raises the average valence of Mn. X-ray absorption near-edge structure (XANES) tests are performed to further investigate the average oxidation state of Mn in KMCAO and KMO (Figure S4c). Comparison with the Mn2O3 and MnO2 reference spectra shows that the average oxidation state of Mn lies between +3 and +4. The photon energy of the Mn K-edge of KMCAO shifts to a higher energy, indicating an increased oxidation state of Mn after Co/Al co-substitution. This finding is also consistent with the results of the theoretical valence calculations (Table S5). This co-substitution strategy can thus mitigate the structural degradation caused by the Jahn-Teller effect of Mn3+, enhancing the overall structural stability of the electrode material.

Electrochemical Performance

The K storage performance of the prepared KMCAO and KMO cathodes was evaluated using cyclic voltammetry (CV) and constant-current charge/discharge tests within the voltage range of 1.5-3.9 V. Figure 2a,b show the typical CV curves of the KMCAO and KMO cathodes, respectively, at a scan rate of 0.2 mV s−1. KMCAO shows five pairs of redox peaks. It is generally believed that the pair of redox peaks at 1.78/1.50 V may be related to a K+/vacancy order/disorder transformation due to the mixing of transition metal ions [36]. The two pairs of redox peaks at 2.08/1.87 V and 2.50/2.28 V are attributed to the Mn3+/Mn4+ redox couple, while the other two pairs at 2.84/3.14 V and 3.58/3.81 V are associated with Mn3+/Mn4+ and Co3+/Co4+ contributions [37,38]. The redox peaks of KMO are all attributed to Mn3+/Mn4+ [18]. Compared to the pristine KMO, the Co/Al co-substitution significantly reduces the potential interval between the oxidation and reduction peaks; this reduced polarization is beneficial for practical application (the voltage-polarization calculation is given in Table S6). In addition, the better overlap of the CV curves indicates that KMCAO has remarkable reversibility during the electrochemical reaction. The participation of the active Co element in the electrochemical reaction contributes additional capacity, while the electrochemically inactive Al element stabilizes the layered structure through robust Al-O bonding [39]. Figures 2c and S5 show the charge/discharge curves of the KMCAO and KMO cathodes for different cycles at a rate of 0.1 A g−1; the voltage plateaus correspond to the redox peaks. Co/Al co-substitution effectively smooths the charge-discharge profiles and increases the reversible capacity. At 0.1 A g−1, KMCAO exhibits a high reversible discharge capacity of 102 mAh g−1, with a capacity retention of 83.3% after 150 cycles. In contrast, the capacity of the KMO cathode decreases rapidly from 91 mAh g−1 to 48 mAh g−1 after 150 cycles, corresponding to a capacity retention of only 52.7% (Figure 2d). The SEM and TEM images of KMCAO and KMO after cycling are presented in Figure S6.
The particle morphology of both KMCAO and KMO is preserved well after cycling; in comparison, the KMCAO particles retain their integrity better. In addition, the KMCAO cathode has an excellent rate performance, with average discharge capacities of 111, 104, 96, 87, 77 and 67 mAh g−1 at 0.05, 0.1, 0.2, 0.5, 1 and 2 A g−1, respectively. When the current density is reset to 0.05 A g−1, a discharge capacity of 108 mAh g−1 is recovered, close to the initial value (Figure 2e). The corresponding charge/discharge curves demonstrate the rapid K+ storage and low polarization of the KMCAO cathode (Figures 2f and S7). The KMCAO cathode also exhibits a long cycle life: after 500 cycles at 1 A g−1, the discharge capacity is 58 mAh g−1, corresponding to a capacity retention of 71.6%, whereas the KMO cathode retains only 49.2% of its capacity (Figure 2g). This improved cycling performance can be attributed to the successful regulation of the average Mn valence through Co/Al co-substitution, which suppresses the Jahn-Teller effect. Notably, the KMCAO cathode demonstrates competitive K+ storage performance compared with previously reported layered oxide cathodes for PIBs (Table S7). The electrochemical performance of KMCAO at high mass loadings is also investigated (Figure S8), and it performs well: after 100 cycles at 0.1 A g−1, KMCAO with a high mass loading of 7.11 mg cm−2 maintains a reversible capacity of 64 mAh g−1 and a capacity retention of 77.1%. KMCAO also demonstrates an exceptional rate performance at a high mass loading of 6.34 mg cm−2, with average discharge capacities of 88, 81, 74, 65 and 54 mAh g−1 at 0.1, 0.2, 0.5, 1 and 2 A g−1, respectively. Further, galvanostatic intermittent titration technique (GITT) and electrochemical impedance spectroscopy (EIS) tests are performed to investigate the K+ diffusion kinetics of the KMCAO and KMO cathodes. Based on the GITT results (Figure S9a,b), the K+ diffusion coefficient of KMCAO is calculated to be 10−9-10−11 cm2 s−1 during the first discharge, generally higher than that of the pristine KMO cathode (Figure S9c). The EIS fitting results show that the KMCAO cathode has a lower charge-transfer resistance and faster K+ diffusion kinetics than the KMO cathode (Figure S10 and Table S8).
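For readers unfamiliar with GITT, the diffusion coefficient is commonly extracted with the Weppner-Huggins relation; the sketch below shows the arithmetic with purely hypothetical pulse parameters, not values taken from this study.

```python
import math

def gitt_diffusivity(tau, m_b, v_m, m_molar, area, dE_s, dE_tau):
    """
    Weppner-Huggins GITT estimate of the chemical diffusion coefficient:
        D = (4 / (pi * tau)) * (m_B * V_M / (M_B * S))^2 * (dE_s / dE_tau)^2
    tau     : current pulse duration (s)
    m_b     : active mass (g)
    v_m     : molar volume of the cathode (cm^3 mol^-1)
    m_molar : molar mass of the cathode (g mol^-1)
    area    : electrode/electrolyte contact area (cm^2)
    dE_s    : steady-state voltage change per pulse (V)
    dE_tau  : transient voltage change during the pulse, IR drop excluded (V)
    Valid in the short-pulse limit (tau << L^2 / D).
    """
    return (4 / (math.pi * tau)) * (m_b * v_m / (m_molar * area)) ** 2 \
           * (dE_s / dE_tau) ** 2

# Hypothetical pulse: 10 min at low current with small voltage steps
D = gitt_diffusivity(tau=600, m_b=2e-3, v_m=22.0, m_molar=100.0,
                     area=1.13, dE_s=8e-3, dE_tau=35e-3)
print(f"D(K+) ~ {D:.2e} cm^2 s^-1")   # ~1.7e-11, within the 1e-9 to 1e-11
                                      # range reported for KMCAO above
```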
Potassium Storage Mechanism

In order to investigate the detailed crystal structure evolution during the K+ intercalation/de-intercalation process, in situ XRD experiments are conducted to monitor the charging/discharging of both the KMCAO and KMO cathodes (Figure 3). During the charging process, the (006) plane of KMCAO gradually shifts to a lower angle, while the (101), (012) and (015) planes shift to higher angles. These observations indicate the extraction of K+. K+ acts as an electrostatic shield between the O layers: with K+ extraction, the shielding effect weakens and the electrostatic repulsion between O ions dominates, leading to an expansion of the c-axis and a contraction of the a-b plane [40]. At higher voltages, the (015) peak disappears and a new diffraction peak appears at about 40.5°, corresponding to the (104) plane of the O3 phase. During discharge, the (015) peak reappears as the voltage decreases; at this stage, the electrostatic repulsion between the O layers decreases with the re-insertion of K+. The (006) plane shifts back to a higher angle, and the (101), (012) and (015) planes shift to lower angles, restoring the initial P3 phase. These shifts of the typical planes are the same over the first two charge/discharge cycles, showing a highly reversible K+ intercalation/de-intercalation process (Figure 3a,b). In the case of KMO, the phase transition between P3 and O3 also occurs in the high-potential region. However, a notable difference arises: the (006) plane experiences a sudden jump during the phase transition, corresponding to an abrupt change in the lattice parameter c and indicating drastic lattice distortion in KMO, whereas the c value of KMCAO changes smoothly throughout the charging/discharging process (Figure S11). The (104) plane of KMO also exhibits a jump during the process (Figure 3c,d). In addition to the phase transition in the high-potential region, the intensity of the KMO (006) plane diminishes in the low-potential region (discharging to ~2.5 V and charging to ~2.8 V), indicating a weakening of the P3-phase character. Furthermore, indistinguishable diffraction peaks appear near the (101) and (012) planes, while the (015) peak shows jumping displacements and discontinuities. We therefore suggest that these complex and severe phase transitions may accelerate the degradation of the lattice structure of the KMO cathode during charging/discharging, leading to the poor cycle stability observed during K+ intercalation/de-intercalation [41].
First-Principles Calculations

First-principles calculations are conducted to further investigate the reasons for the superior electrochemical performance of the KMCAO cathode. First, based on the crystal structures of KMCAO (Figure 4a,b) and KMO (Figure S12a,b), the transition state structures are constructed using the Climbing Image Nudged Elastic Band (CINEB) method with linear interpolation points [42], illustrating the path for K+ migration (the green balls in the figures). Subsequently, the migration energy barrier of K+ in the KMCAO lattice is calculated to be about 0.61 eV (Figure 4c). The lower migration energy barriers favor K+ intercalation and de-intercalation, thereby facilitating highly reversible redox reactions. In addition, the density of states (DOS) shows that KMCAO exhibits a continuous distribution of states near the Fermi energy level, with a significantly higher intensity than KMO, which has a clear band gap near the Fermi energy level. This confirms the enhanced electrical conductivity of the KMCAO material. These results provide conclusive evidence for the excellent electrochemical performance of the KMCAO cathode. Overall, the Co/Al co-substitution offers more active channels for K+ diffusion, which helps alleviate the irreversible phase transition during the reaction, thereby ensuring the maintenance of the structural stability of the crystals.
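For readers unfamiliar with the CI-NEB workflow, the sketch below shows the general shape of such a calculation using ASE. It is only a stand-in: it computes a vacancy-hop barrier in bulk Cu with the toy EMT potential, because a runnable example cannot reproduce the paper's DFT forces on the KMCAO cell; the structure, calculator and convergence settings are all placeholders.

```python
from ase.build import bulk
from ase.calculators.emt import EMT
from ase.neb import NEB
from ase.optimize import BFGS

# Stand-in system: vacancy-mediated atom hop in fcc Cu (EMT toy potential).
atoms = bulk("Cu", "fcc", a=3.6).repeat((3, 3, 3))
vacancy_site = atoms.positions[0].copy()
del atoms[0]                          # create the vacancy at the origin
initial = atoms
final = initial.copy()
final.positions[0] = vacancy_site     # hop a nearest neighbour into the vacancy

# Endpoints plus three linearly interpolated interior images.
images = [initial] + [initial.copy() for _ in range(3)] + [final]
neb = NEB(images, climb=True)         # climbing-image NEB, as in CI-NEB
neb.interpolate()                     # linear interpolation of the path
for image in images:
    image.calc = EMT()

# Endpoint relaxation omitted for brevity; fmax is a loose toy tolerance.
BFGS(neb, logfile=None).run(fmax=0.10)

energies = [image.get_potential_energy() for image in images]
print(f"migration barrier = {max(energies) - energies[0]:.2f} eV")
```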
Full Cell Demonstration

To assess the practical application potential of the KMCAO cathode, performance tests are conducted on a potassium-ion full cell paired with a soft carbon anode (Figure 5). The structural and morphological characterizations of the soft carbon are shown in Figure S13, and Figure S14 shows its electrochemical performance. Before assembling the full cell, the soft carbon anode was pre-cycled at 0.01~1.5 V (vs. K+/K) to activate the material. The anode/cathode capacity ratio is adjusted to 1.2 to eliminate irreversibility. The configuration and operational mechanism of typical full PIBs are shown in Figure 5a. Figure 5b shows the normalized charge/discharge curves of the KMCAO and soft carbon electrodes in half/full PIBs. The full cell maintains a high specific capacity of 82 mAh g−1 after 100 cycles at 0.1 A g−1 (Figure 5c). It also exhibits a good rate performance, with average discharge specific capacities of 85, 76, 73, 71 and 69 mAh g−1 at 0.1, 0.2, 0.3, 0.4 and 0.5 A g−1, respectively (Figure 5d). When tested at 0.3 A g−1, the initial discharge specific capacity of the full cell was 76 mAh g−1 and the initial coulombic efficiency was close to 99%, with 80.2% of the capacity remaining after 300 cycles (Figure 5e). These results highlight that the KMCAO cathode has a great deal of potential for practical applications in PIBs.
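Electrode balancing at a given anode/cathode (N/P) capacity ratio follows directly from the areal capacities of the two electrodes. The sketch below works one such balance; only the capacity ratio of 1.2 and the cathode capacity come from the text, while the loadings and the soft-carbon capacity are assumed for illustration.

```python
# Balance a full cell to an N/P capacity ratio of 1.2 (from the text).
np_ratio = 1.2
cathode_loading_mg_cm2 = 2.0      # assumed KMCAO loading
cathode_capacity_mAh_g = 111.0    # KMCAO at 0.05 A g^-1 (from the text)
anode_capacity_mAh_g = 250.0      # assumed for pre-cycled soft carbon

cathode_areal = cathode_loading_mg_cm2 / 1000.0 * cathode_capacity_mAh_g
anode_loading_mg_cm2 = 1000.0 * np_ratio * cathode_areal / anode_capacity_mAh_g
print(f"required soft-carbon loading = {anode_loading_mg_cm2:.2f} mg cm^-2")
```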
Conclusions

In summary, the successful synthesis of a Co/Al co-substituted P3-type layered KMCAO cathode for PIBs has been achieved. The incorporation of Co3+/Al3+ into the Mn sites effectively suppresses the Jahn-Teller distortion. Additionally, Co contributes to the electrochemical reaction, enhancing capacity, while Al forms stable bonds with oxygen, further stabilizing the layer structure. The KMCAO cathode exhibits a specific discharge capacity of 111 mAh g−1 at 0.05 A g−1. It also shows an excellent rate performance, with a specific capacity of 81 mAh g−1 at 1 A g−1, and retains 71.6% of its capacity after 500 cycles. The Co/Al co-substitution also results in a wider spacing between the K layers, yielding reduced K+ diffusion barriers and faster K+ diffusion dynamics compared with the pristine KMO cathode. Furthermore, the in situ XRD results show that the KMCAO cathode exhibits a milder phase transition during electrode cycling. The Co/Al co-substitution strategy employed in this work provides an effective approach for designing high-performance cathode materials in PIBs.

Supplementary materials: Table S1. ICP measurement results of KMCAO and KMO. Table S2. The electrical conductivities of KMCAO and KMO measured by the four-point probe method. Table S3. Structural parameters and atomic positions of KMCAO from Rietveld refinement. Table S4. Structural parameters and atomic positions of KMO from Rietveld refinement. Table S8. The resistance values for K0.45MnO2 and K0.45Mn0.7Co0.2Al0.1O2 obtained by fitting the EIS spectra with the equivalent circuit. Refs. [28,40,43-51] are cited in the Supplementary Materials.

Figure 1. Structure characterizations of the KMCAO and KMO. (a) XRD Rietveld refinement of KMCAO. (b) P3-type structure schematic. (c) ABF-STEM and (d) HAADF-STEM images of KMCAO along the [010] zone axis. (e) ABF-STEM and (f) HAADF-STEM images of KMCAO along the [001] zone axis. (g) HAADF-STEM image of KMCAO and the corresponding EDS mappings for the K, Mn, Co, Al and O elements. The XPS spectra of (h) Mn 2p, (i) Co 2p and (j) Al 2p of KMCAO.

X-ray photoelectron spectroscopy (XPS) is used to characterize the surface chemical compositions and valences of the corresponding elements in KMCAO and KMO. The full spectrum clearly shows signals of the K, Mn, Co, Al and O elements, confirming the successful co-substitution of the Co and Al elements (Figure S4a). In the Co 2p spectrum of KMCAO, a double-peak feature is observed at 780.1 eV and 795.2 eV, corresponding to Co 2p3/2 and Co 2p1/2, respectively, indicating the trivalent state of the Co ions in the sample (Figure 1i) [33]. The Al 2p spectrum of KMCAO exhibits a characteristic peak at 73.3 eV, corresponding to Al3+ (Figure 1j) [34]. The Mn 2p spectrum exhibits two main peaks that can be deconvoluted into four characteristic peaks (641.9 eV and 642.9 eV, 653.4 eV and 654.7 eV), which are attributed to Mn 2p3/2 and Mn 2p1/2, respectively. This indicates the presence of both Mn3+ and Mn4+ in the sample (Figures 1h and S4b) [35].
Figure 4. First-principles calculations. Schematic of the KMCAO crystal structure showing the K+ migration pathways (indicated by the green balls) from (a) the side and (b) the top views. (c) The corresponding migration energy barriers in KMCAO and KMO. Density of states of (d) KMCAO and (e) KMO.

Figure 5. Electrochemical performance of the potassium-ion full cell based on KMCAO/soft carbon at 0.8~3.8 V. (a) Schematic illustration of the cell configuration and operational mechanism of the full PIBs. (b) Normalized charge/discharge curves of the half and full PIBs. (c) Cycling performance at 0.1 A g−1. (d) Rate performance at 0.1, 0.2, 0.3, 0.4 and 0.5 A g−1. (e) Long-term cycling capability at 0.3 A g−1.

Figure S9. […] diffusion coefficient of K+ for KMCAO and KMO. Figure S10. Nyquist plots and the equivalent circuit model. Figure S11. Lattice parameter c variation of (a) KMCAO and (b) KMO during the second charge/discharge process. Figure S12. Schematic crystal structure of KMO showing the K+ migration pathways (described by the green balls) from (a) the side and (b) the top views. Figure S13. Structural and morphological characterizations of soft carbon. (a) XRD pattern. (b) SEM image. (c) Raman spectrum. Figure S14. Electrochemical performances of soft carbon in the potential range of 0.01-1.5 V. (a) Charge/discharge curves at 0.1 A g−1. (b) Cycling performance at 0.1 A g−1. (c) Rate performance at 0.1, 0.2, 0.5, 1 and 2 A g−1.

Author Contributions: Conceptualization, J.L. and X.W. (Xuanpeng Wang); Methodology, J.L.; Software, W.S.; Validation, W.S.; Formal analysis, G.Z.; Investigation, G.Z.; Resources, C.H.; Data curation, C.H.; Writing - original draft, J.L.; Writing - review & editing, J.M. and X.W. (Xuanpeng Wang); Visualization, X.W. (Xiujuan Wei); Supervision, X.W. (Xiujuan Wei); Project administration, X.W. (Xiujuan Wei); Funding acquisition, X.W. (Xuanpeng Wang). All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the National Natural Science Foundation of China (52373306, 52102225), the Natural Science Foundation of Hubei Province (2023AFA053), and the Hainan Provincial Joint Project of Sanya Yazhou Bay Science and Technology City (2021CXLH0007). The APC was funded by the Hainan Provincial Joint Project of Sanya Yazhou Bay Science and Technology City (2021CXLH0007).

Materials synthesis (excerpt): […]·4H2O, Co(CH3COO)2·4H2O and Al(NO3)3·9H2O are dissolved into the solution in stoichiometric amounts. The mixed solution is dried at 80 °C for 15 h and then pre-sintered at 350 °C in air for 3 h to obtain a black solid. Finally, the black solid powder is calcined at 800 °C in air for 12 h to obtain K0.45Mn0.7Co0.2Al0.1O2. After slowly cooling down to 150 °C, the products are transferred promptly and stored in an Ar-filled glove box. The synthesis procedure of K0.45MnO2 is the same as that of K0.45Mn0.7Co0.2Al0.1O2, except that Co(CH3COO)2·4H2O and Al(NO3)3·9H2O are not added to the mixed solution.

Table S5. Mn average valence calculation results of K0.45Mn0.7Co0.2Al0.1O2 and K0.45MnO2. Table S6. The voltage polarization calculation results of K0.45Mn0.7Co0.2Al0.1O2 and K0.45MnO2. Table S7. Electrochemical performance of K0.45Mn0.7Co0.2Al0.1O2 compared with other layered oxide cathodes in PIBs.
Robust optimisation of computationally expensive models using adaptive multi-fidelity emulation

Computationally expensive models are increasingly employed in the design process of engineering products and systems. Robust design in particular aims to obtain designs that exhibit near-optimal performance and low variability under uncertainty. Surrogate models are often employed to imitate the behaviour of expensive computational models. Surrogates are trained from a reduced number of samples of the expensive model. A crucial component of the performance of a surrogate is the quality of the training set. Problems occur when sampling fails to obtain points located in an area of interest and/or where the computational budget only allows for a very limited number of runs of the expensive model. This paper employs a Gaussian process emulation approach to perform efficient single-loop robust optimisation of expensive models. The emulator is enhanced to propagate input uncertainty to the emulator output, allowing single-loop robust optimisation. Further, the emulator is trained with multi-fidelity data obtained via adaptive sampling to maximise the quality of the training set for the given computational budget. An illustrative example is presented to highlight how the method works, before it is applied to two industrial case studies.

Introduction

The chief aim of engineering design is to create systems that satisfy specific performance objectives and constraints over a period of time. Usually, there exist many feasible designs that satisfy the required objectives. For this reason, it is necessary to choose an optimal design according to some criterion. Modern engineering systems are inherently complex. This complexity means that endogenous (geometry, material properties) and exogenous (loads) information is never complete, and often varies throughout the life cycle of the system (e.g. degradation altering geometry, etc.). The objective of robust design is to determine a set of designs that exhibit high levels of performance with low variability, whilst taking uncertainties into account. The benefits of robust design include the assurance of high performance regardless of a variety of unknown factors and occurrences throughout the system's life cycle. Robust design is essentially a traditional optimisation task, but with an added constraint relating to the performance variability, or robustness, within some predefined neighbourhood of the input variables. There are various definitions of robustness, a detailed review of which is presented in Gabrel et al. [1], leading to various methodologies for tackling the robust optimisation problem. The authors of [2] employed a reliability-based optimisation algorithm which utilised Monte Carlo integration to obtain an averaged performance value within the neighbourhood. Similarly, Ryan [3] employed a probability distribution estimation method to obtain an approximate distribution of the performance within the neighbourhood. Another approach utilised the Taylor expansion of the expectation and variance of the performance and attempted to minimise both criteria simultaneously. Alternatively, several papers chose to optimise the worst-case scenario rather than any sort of averaged performance [4,5]. Typically, the behaviour of modern engineering systems is modelled by computationally expensive simulators, which can be seen as mappings from the input space to the output space, denoted f : x ∈ χ → y ∈ R.
However, working directly with f(x) is often infeasible due to computational expense. A widespread approach to tackle this problem is to replace f(x) with a surrogate model, which has been trained using data obtained from a small number of simulator evaluations. One option is to train a Gaussian Process Emulator (GPE), which is defined by a mean function and a covariance function respectively. The mean function provides an inexpensive approximation to the simulator, η(x) ≈ f(x), whilst the covariance function provides a measure of output uncertainty at each set of inputs, V_x[η(x)] [6]. The result is that the robust design problem can be interpreted mathematically as

min_x { E[η(x̃)], V[η(x̃)] }  s.t.  h_j(x̃) ≤ 0,  w_ν(x̃) = 0,  ∀ x̃ ∈ [x − Δ, x + Δ]   (1)

Here x̃ represents the set of input variables located within the hypercube, or neighbourhood, centred at x and bounded by x ± Δ. Consequently, h_j(x̃) and w_ν(x̃) are the respective inequality and equality constraints of this neighbourhood. In this context, robust design is interpreted as a double-loop optimisation task, with the outer loop optimising the overall performance, subject to the constraint functions, and the inner loop optimising for robustness in the neighbourhood of the input variables. The ability of the GPE to accurately approximate f(x) is directly related to the quality of the training set. There are two main approaches to address this issue: adaptive sampling schemes and supplementing the training set with data from multiple levels of fidelity. Adaptive sampling approaches tend to involve a utility function which measures some form of model improvement to select additional sample points. The most popular choice is expected improvement [7], which has been widely used in reliability [8], optimisation [9] and robust optimisation problems [10], amongst others. Further, the concept can be extended to multiple performance functions by considering the expected improvement of the current Pareto front via hypervolume expected improvement [11]. Other schemes include maximising the probability of improvement [12] or selecting samples with high uncertainty [13]. Multi-fidelity (MF) approaches are applicable when more than one potential simulator exists for the system under study. Lower-fidelity (LF) samples are defined by a lower computational cost, but lower accuracy, than higher-fidelity (HF) samples. Multi-fidelity surrogate approaches exploit LF samples to gain information on the behaviour of the underlying system, and HF samples to maintain the desired accuracy. Most multi-fidelity approaches utilise LF data and adaptive sampling to attempt to sample the HF points in regions of interest and maximise the effectiveness of the surrogate [14][15][16]. Employing a surrogate model reduces the computational cost involved in robust design problems considerably. However, when there are a large number of performance functions and/or input variables, the double-loop approach becomes increasingly inefficient. A solution is to collapse the problem into a single-loop approach, as done for a single-fidelity surrogate in Ryan [17]. In that paper, a GPE was enhanced to provide exact values of output uncertainty in the presence of uncertain inputs. This paper provides a framework to perform efficient robust design on computationally expensive models. The framework adapts the single-loop approach discussed above to factor in multiple levels of fidelity, and supplements it with a hybrid adaptive sampling scheme. The paper is organised as follows.
Section 2 provides an overview of various forms of Gaussian process emulation. The proposed approach is introduced in Section 3, which discusses its main components. An illustrative example and two industrial CFD case studies are presented in Section 4. The final section provides relevant conclusions and highlights future work.

Methodology overview

This section provides an overview of various forms of Gaussian process emulation. The main steps involved in the training of a single-fidelity (SF) GPE are described. Two extensions of the GPE framework are then discussed: training a GPE with MF data and factoring in input uncertainty for a SF GPE.

Single-fidelity Gaussian process emulation

Computationally expensive models are deterministic mappings from some input x ∈ R^d to an output y = f(x). Due to the computational expense, often only a limited number of input samples can be evaluated. Under the Bayesian paradigm, f(x) can be regarded as a random variable, as the output is unknown until it is computed (and thus observed) by the modeller. Gaussian process emulation follows a Bayesian framework to provide a statistical approximation η(x) ≈ f(x). Initially, a Gaussian Process prior is placed on the output, in the form [18]

η(x) = h(x)^T β + Z(x)   (2)

The first term is the mean of the emulator and provides the general trend; h(x) is a vector of q known regression functions, β is a vector of q unknown coefficients. The second term controls local behaviour; Z(x) represents a Gaussian Process with mean zero and covariance σ² c(x, x′; θ). Here σ² is a scalar parameter, and θ is a vector which specifies the smoothness of the inputs and ultimately dictates the behaviour of the correlation function, commonly taken to be of the Gaussian form

c(x, x′; θ) = exp{ −Σ_{k=1}^{d} θ_k (x_k − x′_k)² }   (3)

Given a set of single-fidelity (i.e. data from only one model/simulator) training data, D = {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)}, the emulator distribution, conditional on the training data and the unknown hyperparameters β, σ, θ, is defined as

η(x) | β, σ, θ, D ~ GP( M(x), C(x, x′) )   (4)

where M(·) and C(·,·) are the mean and covariance functions of the GP, respectively. Both β and σ² are assigned a prior and estimated via their respective maximum a posteriori estimates, β̂ and σ̂², while θ̂ is often estimated via maximising a likelihood function [19]. Ultimately, the posterior distribution of η(x*) at some unobserved input x*, conditional on the observed data, is given by Oakley [20]

η(x*) | D ~ GP( m(x*), c*(x*, x*′) )   (5)

with posterior predictive mean function

m(x*) = h(x*)^T β̂ + t(x*)^T A^{-1} (y − H β̂)   (6)

and posterior predictive covariance function

c*(x*, x*′) = σ̂² [ c(x*, x*′; θ̂) − t(x*)^T A^{-1} t(x*′) ]   (7)

Here A is a matrix containing the correlations between each pair of training points, t(x*) is a vector containing the correlations between x* and the training points, and H is a matrix containing the regression functions h(x) evaluated at the training points. For the work in this paper, the Gaussian process prior is assumed to have mean zero, i.e. h(x)^T β = 0, which will be adopted from here onward.

Multi-fidelity Gaussian process emulation

Computationally expensive models are designed to capture the behaviour of an underlying physical system or product. It is often the case that more than one computational model is available, with each model corresponding to a varying degree of computational cost and accuracy. The models can usually be organised in levels of fidelity; a model with a lower computational cost but less accuracy is considered to be of a lower fidelity than a more expensive and accurate model. For example, a 2-dimensional versus a 3-dimensional model, or a model with a coarse mesh versus one with a fine mesh.
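As a concrete illustration of the single-fidelity posterior in Eqs. (6) and (7) with a zero mean function, the following minimal sketch builds the posterior mean and variance of a GP emulator in plain NumPy. The Gaussian correlation form and the fixed hyperparameter values are assumptions made for the example; in practice θ and σ² would be estimated as described above.

```python
import numpy as np

def gp_posterior(X, y, Xs, theta, sigma2, nugget=1e-8):
    """Zero-mean GP emulator: posterior mean/variance, Eqs. (6)-(7) with h(x)=0."""
    def corr(A_, B_):
        d2 = ((A_[:, None, :] - B_[None, :, :]) ** 2 * theta).sum(axis=-1)
        return np.exp(-d2)                    # Gaussian correlation (assumed)
    A = corr(X, X) + nugget * np.eye(len(X))  # training correlation matrix A
    t = corr(Xs, X)                           # correlations t(x*) with test inputs
    mean = t @ np.linalg.solve(A, y)          # m(x*) = t(x*)^T A^-1 y
    cov = sigma2 * (corr(Xs, Xs) - t @ np.linalg.solve(A, t.T))
    return mean, np.clip(np.diag(cov), 0.0, None)

# Toy emulator of f(x) = sin(3x) from 8 runs; theta, sigma2 fixed by hand here.
X = np.linspace(0.0, 1.0, 8)[:, None]
y = np.sin(3.0 * X).ravel()
Xs = np.linspace(0.0, 1.0, 50)[:, None]
m, v = gp_posterior(X, y, Xs, theta=np.array([25.0]), sigma2=0.5)
```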
The concept of Gaussian Process Emulation can be extended to incorporate multiple levels of fidelity within the training process [21], allowing for improved emulator performance or lower training costs. In the case where there are two levels of fidelity, denoted low-fidelity (LF) and high-fidelity (HF), this paper adopts the recursive multi-fidelity approach from [22] to approximate the output from the HF model as:

η_HF(x) = ρ_LF(x) η_LF(x) + δ_HF(x)   (8)

Here, η_LF(x) represents a GPE trained using data from the LF model, ρ_LF(x) represents a regression function, and δ_HF(x) represents a Gaussian Process Emulator which models the discrepancy between the HF estimation ρ_LF(x) η_LF(x) and the true HF simulator realisations. Both emulators are trained via the steps described in Section 2.1. This can be generalised for t levels of fidelity in a recursive fashion:

η_t(x) = ρ_{t−1}(x) η_{t−1}(x) + δ_t(x)   (9)

Single-fidelity robust Gaussian process emulation

The two-looped approach to solving the robust optimisation problem (1) works by first attempting to minimise the objective functions η(x) and V_x[η(x)] in the outer loop. Once a potential solution is found, the inner loop measures the robustness over the input distribution. As a result, the predictive distribution of the emulator given input uncertainty is found by marginalising over the input distribution:

p( η(x*) | u, S, D ) = ∫ p( η(x*) | x*, D ) p( x* | u, S ) dx*   (10)

This marginalisation is the aforementioned inner loop and is often achieved via Monte Carlo sampling. In the case where the uncertainty within the inputs is normally distributed, i.e. for an unknown point x* ~ N(u, S), it is possible to extract the first and second moments of Eq. (10) via methods described in Quinonero-Candela et al. [23,24]. These moments provide analytical expressions for the mean, m(u, S), and variance, v(u, S), of p(η(x*) | u, S, D). Ultimately, having direct access to the mean and variance of the emulator conditional on the input uncertainty collapses the robust optimisation problem down to a single loop:

min_u { m(u, S), v(u, S) }  s.t.  h_j(u) ≤ 0,  w_ν(u) = 0   (11)

The full details and steps involved are discussed further in [17]. The resulting mean and variance functions are fundamentally Eqs. (6) and (7) corrected to factor in the input uncertainty. The added input uncertainty essentially flattens the output, with a decreased vertical amplitude and increased correlation.

Proposed approach

The goal of the proposed approach is to perform efficient robust optimisation of computationally expensive models. The method is a combination of the various forms of Gaussian process emulation discussed in the previous section, and is termed Multi-Fidelity Robust Gaussian Process Emulator (MF-RGPE). When employing a GPE for the purposes of robust optimisation, the two main considerations are the ability of the GPE to accurately portray the behaviour of the underlying expensive model, and the efficiency of the robust optimisation process. To address the former, the proposed approach utilises training data from multiple levels of fidelity obtained via an extension of the Expected Improvement (EI) criterion [7] to maximise the quality of the training set. To increase the efficiency, the proposed approach extends the robust GPE detailed in Section 2.3 to the multi-fidelity case. Further details of the steps are discussed in the following subsections.

Generating training samples

The framework begins with the design of experiment (DoE) of the LF model. Latin hypercube sampling (LHS) [25] is used as the space-filling algorithm to generate the initial samples.
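Returning briefly to the single-loop moments of Section 2.3: a quick way to see what m(u, S) and v(u, S) represent is to approximate the marginalisation in Eq. (10) by Monte Carlo, reusing the gp_posterior sketch above. The exact analytical moments of [23,24] are not reproduced here; this is only an illustrative approximation, and the input-distribution values are arbitrary.

```python
# Monte Carlo approximation of the robust-emulator moments m(u, S), v(u, S),
# continuing the gp_posterior sketch above (X, y already defined there).
rng = np.random.default_rng(0)
u = np.array([0.5])
S = np.diag([0.01])                      # input-uncertainty covariance (assumed)
x_samples = rng.multivariate_normal(u, S, size=5000)

m_star, v_star = gp_posterior(X, y, x_samples, theta=np.array([25.0]), sigma2=0.5)
m_uS = m_star.mean()                     # E[m(x*)] over x* ~ N(u, S)
v_uS = v_star.mean() + m_star.var()      # law of total variance
print(f"m(u,S) = {m_uS:.3f}, v(u,S) = {v_uS:.4f}")
```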
These samples are then evaluated on both the LF model and any relevant constraint functions, and are referred to as LF samples. To generate the initial HF samples, the LF samples are first sorted according to their objective and constraint values. A proportion of the top-performing samples are selected to be part of the initial HF samples. The remaining initial HF samples are selected by filling the remaining space using a space-filling algorithm. This is done to encourage sampling of high-interest areas, whilst not neglecting the general performance of the GPE elsewhere. The proportion used in this work was 20% of initial samples from the top-performing LF samples and 80% resulting from the space-filling algorithm. The HF samples are then evaluated on both the HF model and any relevant constraint functions.

Constructing the MF RGPE

The MF RGPE provides an approximation of the HF output whilst considering input uncertainty, and is constructed in a similar fashion to the standard MF GPE described in Eq. (8):

η_HF(x) = ρ_RLF(x) η_RLF(x) + δ_HF(x)   (12)

Here, η_RLF(x) represents a robust GPE trained using data from the LF model via the steps described in Section 2.3, and ρ_RLF(x) represents a regression function. The last term, δ_HF(x), represents a Gaussian Process Emulator which models the discrepancy between the estimation of the output at the HF training data, without accounting for input uncertainty, i.e. ρ_LF(x) η_LF(x), and the actual HF simulator output. In an industrial context, there will usually be a predetermined computational budget, and the stopping criterion will be met once this budget is exhausted. Other stopping criteria may include reaching a certain threshold of performance, such as obtaining a suitable design or reducing the overall GPE uncertainty below some required value.

Adaptive sampling

For a SF GPE, the expected improvement (EI) [7] at some point x is defined as

EI(x) = ( y_min − m(x) ) Φ( (y_min − m(x)) / s(x) ) + s(x) φ( (y_min − m(x)) / s(x) )   (13)

Here y_min represents the current best-performing objective value amongst the training data, s denotes the standard deviation of the GPE, and Φ(·) and φ(·) represent the cumulative distribution and probability density functions of a standard Gaussian random variable, respectively. EI attempts to locate samples that offer improved nominal performance against the current best sample. The method balances a higher probability of a relatively small improvement (exploitation) against a lower probability of a high improvement (exploration). The concept of EI can also be applied to cases with more than one objective function by considering a hypervolume of improvement. Following the steps described in Li et al. [26], given some reference point r, the HVEI at some point x against a set of solutions U is defined as

HVEI(x) = Σ_{i=1}^{N_U} Π_{j=1}^{N_Obj} EI_j(x; U_i)   (14)

where N_U is the number of points in the set U and N_Obj is the number of objective functions. Both EI and HVEI are designed for optimising nominal performance. The proposed approach extends them for the purposes of robust design by employing the mean and standard deviation output from the SF RGPE and MF RGPE within the EI process. Consequently, the robust EI and robust HVEI are defined by replacing m(x) and s(x) in Eqs. (13) and (14) with the robust moments m(u, S) and √v(u, S):

REI(x) = ( y^R_min − m(u, S) ) Φ(z_R) + √v(u, S) φ(z_R),  z_R = ( y^R_min − m(u, S) ) / √v(u, S)   (15)

with the robust HVEI (16) constructed from Eq. (14) in the same manner, where y^R_min is the current best-performing objective value amongst the training data whilst also taking input uncertainty into account. Fig. 1 illustrates the concept of robust HVEI in the case of two objective functions. The set of solutions, P, represents the robust Pareto solutions taken from the current training data.
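A sketch of Eq. (13) in code form, for a minimisation problem; the robust variants simply pass in m(u, S) and √v(u, S) from the RGPE instead of the nominal posterior moments.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(m, s, y_min):
    """EI of Eq. (13) for minimisation; m, s are posterior mean/std arrays."""
    s = np.maximum(np.asarray(s, dtype=float), 1e-12)  # guard against s -> 0
    z = (y_min - np.asarray(m)) / s
    return (y_min - m) * norm.cdf(z) + s * norm.pdf(z)

# Robust EI (Eq. (15)-style): the same formula fed with the robust moments,
# e.g. expected_improvement(m_uS, np.sqrt(v_uS), y_robust_min).
```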
Utilising these solutions and some reference point r, a set of local upper bounds U can be constructed such that U_i lies at the intercept of P_i and P_{i+1}. The robust HVEI is thus the summation of the robust EI against each local upper bound, giving an overall value of improvement. To obtain promising adaptive samples, an algorithm known as subset simulation is used to explore the input space and locate samples with high values of robust HVEI. Subset simulation [27] is an efficient Monte Carlo technique that employs Markov Chain Monte Carlo (MCMC) and a simple evolutionary strategy to converge to and sample from extremely small, or rare, regions of the input domain. Samples within these rare regions are associated with superior performance according to some user-defined criteria of interest. Originally developed and applied to reliability problems with great success [28,29], the authors of [30][31][32] proposed the analogy between sampling from a small failure region and sampling from a small region exhibiting high performance. Consequently, the algorithm was adapted for the purposes of optimisation, allowing it to be used to locate promising samples even in the case where the areas offering improvement are extremely rare. Moreover, it is suitable when dealing with high-dimensional problems. To increase efficiency, several samples are adaptively selected in one optimisation iteration. An influence function [33], denoted τ(x), is employed to discourage the adaptive samples from clustering in one area, by scaling the robust EI values by τ(x − x_AS) after each new adaptive sample is taken, where x_AS represents the latest adaptive sample.

Robust design

Once the computational budget is exhausted and the final batch of adaptive sampling completed, the final MF RGPEs can be utilised for robust design. Subset simulation is employed to locate the input regions corresponding to samples with high performance according to the MF RGPEs. These samples should be insensitive to perturbation in the values of the input variables and, given the computational resources, be validated on the HF model. The steps involved in the proposed method are outlined in a flowchart provided in Fig. 2. Steps 1-4 involve generating the LF and initial HF training data, and are described in Section 3.1. This provides the foundation for the construction of the initial MF RGPE in step 5, which is detailed in Section 3.2. Provided the stopping criterion (step 6) has not been met, this MF RGPE is then used as a tool to attempt to locate samples with improved performance in step 7, using the adaptive sampling process from Section 3.3. This procedure is repeated on a loop, with an improved MF RGPE constructed at each generation until the stopping criterion is met. Optimisation of the MF RGPE(s) takes place in step 8.

Numerical examples

This section provides three examples showcasing the MF RGPE approach discussed in the previous section. A synthetic example is first presented to showcase the concept of the approach before it is applied to two industrially relevant test cases. In all examples, the regression function ρ_RLF from Eq. (12) is set to one, as in each example there is no assumed prior knowledge regarding the relationship between the LF simulator output and the HF simulator output.

Synthetic example

The motivation behind this synthetic example was to illustrate the main concepts of the proposed approach.
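Before walking through the details, the flavour of the example can be previewed with a toy stand-in: a function with a tall, narrow peak and a shorter, broader one, evaluated under input perturbations. The function below is not the paper's f_HF (whose exact expression is not reproduced in this excerpt); it merely shows why the broader optimum wins once input uncertainty is averaged over.

```python
import numpy as np

# Toy stand-in: a sharp global peak at (3*pi/2, 3*pi/2) and a broad local
# peak at (pi/2, pi/2), echoing the character of the synthetic example.
def f(x1, x2):
    sharp = 1.2 * np.exp(-20.0 * ((x1 - 3*np.pi/2)**2 + (x2 - 3*np.pi/2)**2))
    broad = 1.0 * np.exp(-1.0 * ((x1 - np.pi/2)**2 + (x2 - np.pi/2)**2))
    return sharp + broad

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 0.1, size=(20000, 2))   # assumed input perturbations
for name, c in [("sharp global", 3*np.pi/2), ("broad local", np.pi/2)]:
    vals = f(c + noise[:, 0], c + noise[:, 1])
    print(f"{name}: nominal {f(c, c):.2f}, mean under noise {vals.mean():.2f}")
```

Under this perturbation the broad peak retains almost all of its nominal value while the sharp global peak loses a substantial fraction, which is exactly the behaviour the robust optimiser is designed to exploit.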
The HF and LF functions were constructed such that the LF function exhibited similar behaviour to the HF function, and as such could be used to infer regions of high interest. Additionally, both were designed to possess two maxima: a global maximum that was more sensitive to input uncertainty, and a local, more robust maximum. The goal is to maximise f_HF in the face of some input uncertainty, with the intention of favouring the more robust local maximum. An initial batch of 50 LF samples was selected via LHS. The 4 samples with the highest objective values were then selected, alongside 16 further samples from LHS, to populate the HF training set. A MF RGPE was then constructed with training data normalised between 0 and 1, and input uncertainty for an unknown point x* defined by the probability distribution x* ~ N(u, diag[0.01, 0.01]). Here u is the mean approximation of x*, while diag[0.01, 0.01] is a diagonal matrix containing the variance with respect to each input variable. A further 3 samples were obtained via the robust EI adaptive sampling algorithm, and the retrained MF RGPE employed for robust optimisation. Finally, the inputs were transformed back to their original domains. The HF function possesses a global optimum around x ≈ (3π/2, 3π/2) and a local, more robust, optimum around x ≈ (π/2, π/2) in the bottom left. The adaptive samples all lie within these regions of interest, with a preference for the local, more robust optimum. The local optimum in the bottom left is considered more robust as it has a wider base, meaning there is a lower drop in performance given any perturbation in the inputs. Note that several of the initial batch of HF samples were already in proximity to the two optima, highlighting the importance of utilising the best-performing LF samples. Further, the LF data provided valuable information in the regions where HF samples were sparse (e.g. top left), saving an adaptive sample from being wasted in an area of low interest. The illustrative example was repeated 10,000 times, and the normalised error from the true robust optimum is presented in Fig. 4. The error was normalised to illustrate the discrepancy between the true robust optimum and the actual values more clearly. The goal of the study was to showcase the individual steps described in Fig. 2 and illustrate the merits of the approach. Overall, the majority of cases were within 1% of the true robust optimal input values.

Industrial examples

Design engineers often utilise computationally expensive models in their design process. It is often desirable to factor input uncertainty into this process. The proposed approach has been designed to assist design engineers in this task, within a reasonable computational budget. Computational Fluid Dynamics (CFD) [34] models are a common tool in engineering design. They are usually computationally expensive, which limits their ability to be used directly in practical applications, but makes them a prime candidate for the MF RGPE approach.

Turbulated duct case study

A frequent feature in turbine blades is the presence of turbulated internal cooling ducts. The presence of rib turbulators repeatedly perturbs the boundary layer, which can result in significant heat transfer by promoting convective mixing with the core cooling flow. A downside is that this heat transfer comes at the cost of a higher pressure drop [35]. However, due to manufacturing constraints and degradation during the life cycle, the duct will likely diverge from the initial design at some point.
The challenge is therefore to select a design that maximises heat flow, in this case the Nusselt number, whilst minimising pressure drop in the face of input uncertainty. To address this challenge, a model of the turbulated duct was constructed using ANSYS software according to four geometric parameters that control the cross-sectional profile and angle of the turbulators, as shown in Fig. 5. The ranges of parameter values are shown in Table 1 in the appendix. Within the ANSYS software, each combination of these four parameters results in a unique turbulated duct geometry. This geometry was then meshed using an unstructured tetrahedral grid and solved using the Reynolds-averaged Navier-Stokes (RANS) equations [34] to output the Nusselt number and pressure coefficient for that particular design. For the MF RGPE approach, the overall computational budget assigned was equivalent to 44 HF samples. The LF model consisted of a mesh of approximately one million elements solved using k-ω SST RANS in ANSYS, whilst the HF model consisted of a mesh of approximately five million elements solved using k-ω SST RANS. The approximate computational cost was 20 LF samples ≈ 1 HF sample. Consequently, two separate MF RGPEs were trained for the Nusselt number and pressure coefficient respectively, with the initial MF RGPEs trained using 80 LF samples and 20 HF samples. A further 20 HF samples were adaptively selected in two batches of 10 samples to supplement the training set. The training data was normalised between 0 and 1, with input uncertainty represented via a diagonal matrix S = diag[0.025, 0.025, 0.025, 0.025]. The final MF RGPEs were then optimised using a multi-objective subset simulation algorithm. For comparison, the case study was repeated with the same computational budget, but using only HF samples to construct two SF RGPEs. The initial SF RGPEs were trained using 40 HF samples. A further 4 HF samples were adaptively selected in four batches of a single sample to supplement the training set. Fig. 6 demonstrates the adaptive sampling process and the final Pareto front for the MF RGPE approach. On inspection, several of the initial HF samples (green dots, top left plot) were located in close proximity to the eventual Pareto front, again highlighting the advantages of incorporating the LF training data to locate regions of interest. Indeed, the general performance of the adaptive points (blue stars) is significantly better than that of the randomly sampled points, showcasing the benefits of adaptive sampling. Furthermore, by comparing the two batches of adaptive sampling, it is clear how the adaptive sampling process attempts to converge towards the true Pareto front. It should be noted that the adaptive sampling procedure was assisted by the LF data to discard areas of low interest. The optimisation process placed constraints on the output variance of the respective GPEs to ensure a certain level of performance. This is highlighted in the close proximity between the Pareto front and the best-performing training samples, placing further emphasis on the importance of quality training data. A single training point lies above the Pareto front; however, this point was deemed to lack the necessary robustness according to the final MF RGPEs. Fig. 7 contains the Pareto fronts from the MF RGPE approach (red stars) and the SF RGPE approach (blue dots).
In general, the two Pareto fronts possess similar behaviour, although the MF RGPE Pareto solutions exhibit superior performance to the SF RGPE Pareto solutions. A potential contributor to this discrepancy is the fact that the SF approach was made up of a higher proportion of randomly sampled training data. However, it is standard practice to use a budget of at least ten training samples per input dimension [36] in order to have sufficient confidence in the output of the underlying surrogate. The MF RGPE approach circumvents this issue by utilising LF data to make up for any loss of information. As such, it is reasonable that increasing the proportion of adaptively sampled data in the SF RGPE case would not necessarily improve the performance, due to surrogate inaccuracy and added uncertainty disrupting the sampling process. A second contributor is the fact that the MF RGPE approach was able to infer regions of high interest from the LF training data to aid in the adaptive sampling procedure. Overall, the MF RGPE offered superior performance to the SF alternative for the same computational budget.

Aerofoil case study

The aerofoil test case involved obtaining a set of aerofoil solutions that maximise the lift-to-drag ratio whilst minimising the maximum blade thickness of a turbine blade in the face of potential perturbation of input values caused by uncertainty. A prospective aerofoil geometry was defined using the Class-Shape Transformation (CST) method [37]. In particular, the Au and Al parameters are the weighting coefficients that help prescribe the thickness/shape at various locations along the upper and lower surfaces respectively. The parameters and their respective ranges are shown in Table 2 in the appendix. The LF model consisted of the aerofoil being solved over a range of angles of attack in the XFOIL software, which performed a potential flow calculation without taking into account viscosity or a boundary layer. The HF model consisted of the aerofoil being solved via k-ω RANS in ANSYS. Unlike the turbulated duct case study, where the level of fidelity was solely due to mesh resolution, the fidelity in this case is dictated by two separate methods of varying accuracy and cost. It should be noted that the definition of varying levels of fidelity is problem-specific, with the only requirement being that they exhibit similar behaviour in attempting to model the same underlying phenomena. The computational budget for the test case was approximately 240 HF samples. The comparative computational cost was approximately 20 LF samples per HF sample. Two separate MF RGPEs were trained for the lift-to-drag ratio and maximum thickness respectively, with the initial MF RGPEs trained using 600 LF samples and 120 HF samples. A further 80 HF samples were adaptively sampled in four batches of 20 to supplement the training set. The training data was normalised between 0 and 1, with input uncertainty represented via a 20×20 diagonal matrix S with each of the entries equal to 0.025. The final MF RGPEs were then optimised using a multi-objective subset simulation algorithm. As in the previous example, the case study was repeated using the same computational budget comprising only HF samples. The initial SF RGPEs were trained using 200 HF samples, with a further four batches of 10 samples added via adaptive sampling. Fig. 9 presents the adaptive sampling process and the final MF RGPE Pareto front.
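For reference, the sketch below shows the general form of a CST aerofoil surface: a class function multiplied by a Bernstein-polynomial shape function weighted by the Au/Al coefficients. The specific weights, the number of coefficients and the trailing-edge thickness used here are illustrative assumptions, not values from the case study.

```python
import numpy as np
from math import comb

def cst_surface(psi, weights, dz_te=0.0, n1=0.5, n2=1.0):
    """CST: y(psi) = C(psi) * S(psi) + psi * dz_te, with psi = x/c in [0, 1]."""
    psi = np.asarray(psi, dtype=float)
    C = psi ** n1 * (1.0 - psi) ** n2             # class function (round nose)
    n = len(weights) - 1
    S = sum(w * comb(n, i) * psi ** i * (1.0 - psi) ** (n - i)
            for i, w in enumerate(weights))       # Bernstein shape function
    return C * S + psi * dz_te

psi = np.linspace(0.0, 1.0, 101)
Au = [0.17, 0.16, 0.20, 0.15]       # illustrative upper-surface weights
Al = [-0.14, -0.12, -0.10, -0.05]   # illustrative lower-surface weights
upper, lower = cst_surface(psi, Au), cst_surface(psi, Al)
max_thickness = np.max(upper - lower)   # one of the two objectives
```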
As in the turbulated duct study, several of the initial HF samples (green dots, top left plot) exhibited high performance, and there was a clear convergence towards the suspected true Pareto front as the number of adaptive samples increased. The Pareto front closely followed the path of the best-performing training samples. The training sample with the lowest maximum thickness was omitted from the Pareto front, as the performance of this point was particularly sensitive to input perturbations. Fig. 10 contains the Pareto fronts from the MF RGPE approach (red stars) and the SF RGPE approach (blue dots). Both Pareto fronts initially rise relatively sharply before reaching a plateau with respect to the lift-to-drag coefficient. However, there is a significant discrepancy between the respective performance of the two Pareto fronts, with that of the MF RGPE completely dominating the SF RGPE counterpart. Whilst the SF RGPE approach wasted a number of samples searching in uncertain but ultimately low-interest areas, the MF RGPE approach was able to discard these areas and target more promising locations due to the information provided by the LF data. Fig. 11 displays 20 validation samples for 4 designs taken from the MF RGPE Pareto front. As in the turbulated duct case study, the number of validation samples was limited due to the computational costs involved. The validation samples were selected to verify the performance of the MF RGPE approach across the Pareto front. It should be noted that there was zero discrepancy between the LF simulator and HF simulator output for the maximum thickness. As a result, there was significantly less GPE uncertainty for this objective, and the majority of the uncertainty bounds with respect to the maximum thickness are due to input uncertainty. Each validation sample was within the 2σ uncertainty bounds of the original Pareto solution. Moreover, the aerofoils corresponding to the validation points for the third (in ascending order of L/D ratio) Pareto solution were plotted against the original for a visual depiction of input uncertainty.

Conclusion

A Gaussian process emulation approach to perform efficient single-loop robust optimisation of expensive models, denoted MF RGPE, was presented. The approach combines various enhancements of the Gaussian process emulation approach, utilising MF training data and factoring input uncertainty into the output of the emulator. MF RGPE addresses the two main issues found when employing emulation-based approaches for robust optimisation, namely the quality of the emulator training set and the efficiency of the robust optimisation process itself. Provided lower-fidelity simulators exhibit similar behaviour to their higher-fidelity counterparts (as expected), the approach offers an improvement over single-fidelity methods. This is due to the increased information available in both the adaptive sampling phase and in estimating the performance in areas without HF training data. In the situation where the LF simulator exhibits dissimilar, or even misleading, behaviour, the approach essentially reverts to a SF GPE, albeit with a slightly reduced training set. Compared with other MF methods, MF RGPE offers increased efficiency in performing robust optimisation, by collapsing the problem down to a single loop. An illustrative example highlighted some of the key concepts of the approach before two industrial test cases demonstrated its ability to outperform a single-fidelity alternative and produce quality results.
Future work involves augmenting the adaptive sampling regime with the ability to choose both the location and fidelity of an adaptive sample, as well as applying the approach to cases with more than two levels of fidelity.
Cu(II) Adsorption on Modified Bentonitic Clays: Different Isotherm Behaviors in Static and Dynamic Systems

The Cu(II) removal equilibrium from aqueous solutions using the calcined clays "Bofe" and "Verde-lodo" has been studied in batch (static) and fixed-bed (dynamic) systems. Analyses were performed for the physicochemical characterization of the clays using the following techniques: X-ray fluorescence (XRF), thermogravimetry (TG), N2 adsorption (BET) and cation exchange capacity (CEC). Batch experiments were performed at constant temperature, adjusting the pH of the solution in contact with the clays. Adsorption assays in a fixed bed were conducted at the flow rate determined through the mass transfer zone (MTZ). The Langmuir and Freundlich models were adjusted to the equilibrium data. The characterization results indicated that a temperature of 500 °C is best suited for the calcination of the clays. The maximum adsorption capacity was higher for the dynamic (fixed-bed) system than for the static system, increasing from 0.0748 to 0.1371 and from 0.0599 to 0.22 mmol·g−1 of clay for "Bofe" and "Verde-lodo", respectively.

Introduction

Clays and minerals such as montmorillonite, vermiculite, illite, kaolinite and bentonite are known as alternative materials for the adsorption of heavy metals due to several economic advantages [1-9] and their intrinsic properties, such as a large specific surface area, excellent physical and chemical stability, and favourable structural and surface properties [10]. Other low-cost adsorbents have been investigated, mainly bioadsorbents such as algae [11] and chitosan [12]. However, experiments carried out in fixed beds have presented limited results. Bentonite clays are widely used as barriers to prevent subsoil and underground landfill water contamination by leachate containing heavy metals.

Although the results obtained in metal removal using clays are significant and promising, a better understanding of these results is still needed. Studies already performed on a Brazilian calcined clay (Bofe type) in the removal of nickel [13,14] have shown the need for further research on heavy metal removal comparing adsorption capacity in static and dynamic systems. The high load of galvanic toxic waste in the Southeast of Brazil is composed mainly of cyanide salts and heavy metals such as copper, among others; these can be present in soluble and insoluble forms. Therefore, research on the mechanisms of copper removal is required for the remediation of this contaminant.

In order to evaluate the removal of copper using Bofe and Verde-lodo (VL) bentonite clays as thermally modified adsorbents, the adsorption experiments were conducted in static (batch) and dynamic (fixed-bed) systems. Both clays were chosen due to their relevant adsorptive properties [13] and abundance in Brazil. Modified clays have shown enhanced adsorption capacity.
Adsorbents

Two types of bentonite clay, "Bofe" and "Verde-lodo" (VL), from the Northeastern region of Brazil (Boa Vista, PB) were used as adsorbents. Initially, a study was conducted with both raw clays. However, the raw clays were not used as adsorbents in the fixed bed due to their solubility. The clays were prepared by size classification and calcined at 500 °C for 24 hours in order to increase their mechanical resistance, promote dehydroxylation and eliminate some impurities. In some cases, the adsorption capacity can also be enhanced in modified bentonitic clays. The calcination temperature was determined by thermogravimetric analysis of the Bofe and VL clay samples at a heating rate of 10 °C/min in an air atmosphere.

Metal adsorbate

The adsorption tests were performed using a 15.74 mmol·L−1 aqueous solution of copper, prepared by dissolving an appropriate amount of Cu(NO3)2·3H2O in deionized water to the desired concentrations.

The pH of the Cu(II) solution was maintained at a low level to ensure that adsorption occurs and to avoid the chemical precipitation of copper ions in the hydroxide form (Cu(OH)2). The pH of the solutions was measured with pH meters and kept at set values using nitric acid and ammonium hydroxide.

Metal speciation

Copper speciation diagrams were simulated using the Hydra and Medusa software [15] to identify the different species in solution. Speciation was investigated considering the stoichiometric ratio of the copper salt used.

Clay characterization

The chemical compositions of the raw and calcined samples of the Bofe and VL clays were obtained by X-ray fluorescence analysis, using samples fused in a borate matrix.

Thermogravimetric analysis was carried out on a Micromeritics TGA in an N2 atmosphere (50 mL/min) at a heating rate of 10 °C/min. The samples were placed in platinum pans and scanned from room temperature to 1000 °C.

The surface area was obtained by N2 physisorption at 77 K using the BET method.

The cation exchange capacity (CEC) was determined in triplicate for the raw and calcined samples of the Bofe and VL clays. The concentrations of Na+ ions displaced by the NH4OH exchange solutions were measured by atomic absorption and expressed in meq (100 g)−1 of solid according to Equation 1.

The point of zero charge of the solids in suspension (pH_ZPC) was obtained using the potentiometric titration methodology [16]. The titration was carried out with 0.5 M CH3COOH and 0.5 M NH4OH. For each point of the titration, the surface charge S was obtained by Equation 2. For CH3COOH or NH4OH addition, S can be expressed by Equations 3 and 4, respectively.

Batch sorption procedure

The adsorption experiments were performed using an aqueous solution of Cu(NO3)2·3H2O at fixed concentrations, with the temperature controlled under constant stirring at 150 rpm. At specific time intervals, solution aliquots were removed and centrifuged. The supernatant liquid was diluted and its concentration was determined by atomic absorption spectrometry.

To evaluate the effect of contact time, the experiments were conducted using 1 g of clay per 100 mL of copper solution at a concentration of 1.57 mmol·L−1. Temperature and pH were kept at 298 K and 5.0, respectively. The samples were shaken for 300 minutes.
Equilibrium tests were performed with different concentrations of adsorbate and different temperatures. To maintain the pH of the medium, solutions of 0.01 M HNO3 or 0.01 M NH4OH were added to adjust the pH value. The pH was monitored before and after adsorption. The following conditions were maintained for the different sets of experiments: i) effects of adsorbate concentration and adsorption isotherm: 1 g of clay per 100 mL; ii) thermodynamics: 1 g of clay per 100 mL, time 300 minutes, pH 5.0, temperatures 273, 298, 323 and 348 K.

Langmuir's theoretical model [17] (Equation 5) and Freundlich's empirical model [18] (Equation 6) were adjusted to the adsorption isotherms:

q_eq = q_max·b·C_eq / (1 + b·C_eq)   (5)

q_eq = K_F·C_eq^(1/n)   (6)

The adsorbed amount was obtained by Equation 7:

q = V·(C_0 − C_eq) / m   (7)

The essential characteristics of the Langmuir isotherm can be expressed by the separation factor or equilibrium parameter (RL), given by Equation 8:

RL = 1 / (1 + b·C_0)   (8)

The parameter RL indicates the curvature of the sorption isotherm: if RL > 1, the isotherm is not favorable; if RL = 1, the behavior is linear; if 0 < RL < 1, the isotherm is favorable; if RL = 0, it is irreversible.

The thermodynamic parameters for the adsorption process, ΔH (kJ·mol−1), ΔS (J·(K·mol)−1) and ΔG (kJ·mol−1), were evaluated using the thermodynamic Equations 9 and 10:

ln(K_d) = ΔS/R − ΔH/(R·T)   (9)

ΔG = ΔH − T·ΔS   (10)

The ln(K_d) vs. 1/T graph must be linear, with the slope of the straight line equal to (−ΔH/R) and the intercept on the y axis equal to (ΔS/R), providing the values for ΔH and ΔS. The variation in Gibbs free energy (ΔG) is the fundamental criterion of the spontaneity of the process.

Column sorption procedure

Adsorption experiments were performed in a porous-bed system consisting of an acrylic column with a height of 14 cm and an internal diameter of 1.4 cm. The operating conditions were based on the experimental design, considering the study conducted in batch and preliminary fixed-bed tests.

In order to determine the mass transfer zone, the useful and total removal amounts were calculated, which correspond to the metal removal capacity up to the breakthrough point (qU) and up to the saturation point (qT), respectively. Equations 11 and 12 were obtained through a mass balance in the column using saturation data based on the breakthrough curves, where the area below the curve (1−C/C0) up to the breakthrough point is proportional to qU, and the area up to bed exhaustion is proportional to qT:

qU = (C_0·Q / m) ∫_0^{t_b} (1 − C/C_0) dt   (11)

qT = (C_0·Q / m) ∫_0^{t_∞} (1 − C/C_0) dt   (12)

The MTZ can then be calculated based on the qU/qT ratio according to Equation 13:

MTZ = H_L·(1 − qU/qT)   (13)

MTZ has a maximum value, which corresponds to the bed height (HL), and decreases as the efficacy of mass transfer increases, until reaching the ideal condition, where MTZ is zero and the breakthrough curve is a step function.

The percentage of total removal (%RT) during adsorption was obtained considering the fraction of metal in solution retained in the adsorbent solid, from the total effluent used in the adsorption process until bed saturation. The amount of adsorbed metal is calculated by considering the area under the curve (1−C/C0) vs. t [19], using Origin version 6.0 software.

Bofe and VL clays characterization

The chemical compositions of the raw and calcined Bofe and VL clays obtained by XRF are shown in Table 1. The average composition is consistent with that expected for this type of bentonite clay [20]. One can observe that the Bofe and VL clays are polycationic bentonites, due to the presence of Ca2+, Mg2+ and Na+ cations in both the raw and calcined clay samples. The cation exchange capacity (CEC) results for the raw and calcined clays and their respective surface areas obtained by BET are shown in Table 2.
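The fitting step used in this study (a Gauss-Newton nonlinear estimation) can be reproduced with any nonlinear least-squares routine. Below is a minimal sketch using scipy.optimize.curve_fit on made-up equilibrium data; the concentrations and loadings are placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, b):      # Equation 5
    return q_max * b * C / (1.0 + b * C)

def freundlich(C, K_F, n):      # Equation 6
    return K_F * C ** (1.0 / n)

# Hypothetical equilibrium data: C_eq in mmol/L, q_eq in mmol/g.
C_eq = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0])
q_eq = np.array([0.020, 0.041, 0.060, 0.080, 0.098, 0.110])

(q_max, b), _ = curve_fit(langmuir, C_eq, q_eq, p0=[0.12, 1.0])
(K_F, n), _ = curve_fit(freundlich, C_eq, q_eq, p0=[0.05, 2.0])
R_L = 1.0 / (1.0 + b * C_eq[-1])   # Equation 8 at C_0 = 8 mmol/L
print(f"Langmuir: q_max={q_max:.3f} mmol/g, b={b:.2f} L/mmol, R_L={R_L:.2f}")
print(f"Freundlich: K_F={K_F:.3f}, n={n:.2f}")
```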
Smectite clays from Paraíba generally present CEC values between 50 and 90 meq (100 g)-1 of clay 20. The relatively high CEC values of the raw clays indicate that these minerals have a high level of isomorphic substitution. In contrast, the smectites calcined at 500 °C show a drastically reduced cation exchange capacity compared with the raw clays.

The surface areas obtained by the BET method for the raw and calcined Bofe clays were 78.61 and 90.31 m2.g-1, respectively, whereas for the VL clay samples the values were 64.31 and 62.08 m2.g-1, in the same order.

Figure 1 shows the TG and DTG curves for the raw Bofe and VL clays. The DTG curve presents two mass-loss peaks. The first, between 50 and 105 °C, corresponds to the loss of water, volatile compounds, microorganisms and organic material; these losses do not change the clay structure and amounted to 4.82 and 3.57% of the mass. The second peak, between 450 and 500 °C, corresponds to the loss of hydroxyl groups, which begins to change the clay structure. Hydroxyl loss is advantageous for the process, because it prevents the chemical precipitation of copper by alkalinity and increases the clay's stability for application in fixed-bed adsorption columns.

According to Figure 2, the pHZPC values obtained for the raw and calcined clays were 6.1 and 5.4, respectively. Thus, to ensure that the calcined clay surface carries a null or negative charge, making the adsorption of positively charged metal ions more favorable, the adsorbate solution pH should be kept at 5.5.

Copper speciation

Figure 3 shows the speciation curves of the Cu2+ ion in aqueous solution with nitrate ions at different concentrations, determined with the HYDRA application. In the pH range 4.8-5.3, the fraction of Cu2+ ions in aqueous solution decreases and copper oxide, which precipitates, begins to form. To ensure that only adsorption occurs, a pH below the minimum precipitation value should be used, which corresponds to 5.0 for a concentration of 1.57 mmol.L-1.

Batch adsorption

Figures 4a and 4b present the kinetics of copper adsorption on Bofe and VL clays. Adsorption of copper ions into the clay pores occurred rapidly in the first moments of the process, then remained at equilibrium over time. Under the conditions of this study, the reduction of the initial ion concentration was around 81% for both raw clays and around 42% for both calcined clays. The maximum adsorbed amounts were around 0.13 and 0.08 mmol of copper g-1 of clay for the raw and calcined clays, respectively.

The kinetic studies in the static system show that calcination reduces the removal capacity of the clays. However, in preliminary tests, the raw clays could not be applied in fixed bed because of their mechanical instability. When the raw clays were in contact with copper salt solutions, they adsorbed a large quantity of water and the bed volume expanded owing to exfoliation of the clay lamellae, which dissolved and dispersed along the flow; the clay was carried through the column when the flow was ascending, while with a descending flow, waterproofing of the bed was observed.
On the other hand, clays acquire steel-like hardness at temperatures above 180 °C 20. This fact, together with the characterization results for the Bofe 13 and VL clays, suggests that calcination provides mechanical stabilization (the materials do not dissolve), so that they neither expand nor waterproof in a porous column. Therefore, the calcined Bofe and VL clays are more appropriate for use as adsorbents for copper removal in fixed bed.

Dynamic adsorption

Dynamic adsorption experiments were carried out at different flow rates, from 2.0 to 6.0 mL/min for Bofe clay and from 2.0 to 5.0 mL/min for VL clay (Figure 5). The appropriate flow rate was selected on the basis of the mass transfer zone determination. The copper concentrations of the adsorbate solutions were 2.36 and 1.57 mmol.L-1 for adsorption on Bofe and VL clays, respectively. The breakthrough curves show distinct behaviors, indicating the influence of flow rate on the diffusional resistances. The adsorption process presented strong resistance to bed saturation over the whole flow range studied, as shown by the more extended breakthrough curves and broader mass transfer zones.

Table 3 shows the values of MTZ, qU, qT and the copper removal percentage on the calcined clays. The smallest MTZ value (7.77 cm), together with satisfactory values of the useful capacity (qU), total capacity (qT) and total removal percentage, was obtained at 4.0 mL/min for copper adsorption on calcined Bofe clay. For copper adsorption on calcined VL clay, the smallest MTZ value (7.35 cm) was obtained at 2.0 mL/min, but the breakthrough curve showed high resistance to saturation; therefore, the most appropriate flow rate for the copper adsorption assays on calcined VL clay was 3.0 mL/min.

The reproducibility of the adsorption experiments can be seen in Figure 6, which shows breakthrough curves from three tests performed at 4.0 and 3.0 mL/min for copper removal on calcined Bofe and VL clays, respectively. These results confirm the good reproducibility of the experiments, with average deviations of 0.34 and 0.33% for copper adsorption on calcined Bofe and VL clays, respectively. The variations between curves are essentially due to axial dispersion in the bed, a phenomenon not considered in this study.

Adsorption isotherms

The amount of metal ions adsorbed per unit mass of clay (q_eq) gradually increases as the initial concentration of the adsorbate solution increases, in both systems (batch and fixed bed). When the initial solution has a low concentration, the ratio between the number of ions and the number of available adsorption sites is small; consequently, adsorption depends on the initial concentration, and as the ion concentration increases, adsorption also increases. At high ion concentrations, each unit mass of adsorbent is exposed to a larger number of ions, and, depending on the system, different behaviors can occur, as can be seen in Figure 7. The Langmuir and Freundlich models were fitted to the experimental data by the Gauss-Newton nonlinear estimation method in the Statistica 7.0 for Windows® software. The regression coefficients for both fits are shown in Table 4. The parameters obtained from Equations 5 and 6 were associated with the process temperature.
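For readers wishing to reproduce this kind of fit, the sketch below performs the equivalent nonlinear estimation of Equations 5 and 6 with scipy's curve_fit, a stand-in for the Gauss-Newton routine used in Statistica, and then evaluates the separation factor of Equation 8. The equilibrium data are placeholders, not the measured isotherms:

    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(Ce, qm, b):
        # Equation 5: monolayer adsorption on energetically uniform sites
        return qm * b * Ce / (1.0 + b * Ce)

    def freundlich(Ce, KF, n):
        # Equation 6: empirical power law for heterogeneous surfaces
        return KF * Ce**n

    # Placeholder equilibrium data: Ce in mmol/L, qe in mmol/g.
    Ce = np.array([0.05, 0.12, 0.30, 0.61, 0.95, 1.40, 1.90])
    qe = np.array([0.020, 0.041, 0.066, 0.088, 0.101, 0.110, 0.115])

    (qm, b), _ = curve_fit(langmuir, Ce, qe, p0=[0.15, 2.0])
    (KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[0.1, 0.4])

    C0 = 1.57                    # initial concentration, mmol/L
    RL = 1.0 / (1.0 + b * C0)    # Equation 8: separation factor

    print(f"Langmuir: qm = {qm:.3f} mmol/g, b = {b:.2f} L/mmol, RL = {RL:.3f}")
    print(f"Freundlich: KF = {KF:.3f}, n = {n:.3f}")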
Analyzing Figure 7 and Table 4, it can be seen that the Langmuir model adequately described the experimental adsorption isotherm data for Bofe clay in batch and for VL clay in the fixed-bed system, while the empirical Freundlich model described more efficiently the isotherm data for Bofe clay in fixed bed and for VL clay in the batch system.

From the adsorption isotherm data, the maximum adsorption capacities obtained in the fixed-bed system for both clays are higher than the results obtained in batch, which indicates that copper adsorption on Bofe and VL clays is related to the mobility of the particles. This behavior suggests that, at higher metal concentrations in batch, an electrical double layer forms, comprising ions of the Stern layer and part of the diffuse layer. These ions form an ionic cloud around the particle, attracted by its electric potential, and move along with it during the flow of the suspension 21. As a result, the particles start to behave as flow units of larger dimensions, whose radius is defined as the hydrodynamic radius of the particle, which decreases the maximum adsorption capacity of copper on the calcined clays. This behavior is not observed, however, in Henry's infinite-dilution region, that is, at very low concentrations. The Langmuir isotherm is specific to monolayer adsorption, which was the case in this study, while the Freundlich model is better suited to adsorption at heterogeneous sites on the surface of a solid, with a mechanism that has not yet been established. The Langmuir equilibrium coefficient b determines the direction in which the adsorbate-adsorbent equilibrium, clay (solid phase) + Cu(II) (aqueous phase) = clay-Cu(II), moves: higher values indicate that the equilibrium shifts to the right, with formation of the adsorbate-adsorbent complex.

The values obtained for the Freundlich constant (n) are around 0.3 and 0.4 for adsorption in batch and in fixed bed, respectively. According to Treybal 22, this range indicates that the adsorptive characteristics of the clay are suitable for copper sorption.

The RL values (Equation 8), calculated from the Langmuir constant obtained by the nonlinear method and the initial copper concentration for the different systems, were 0.3076 (Cu/calcined Bofe/batch), 0.6995 (Cu/calcined Bofe/fixed bed), 0.2907 (Cu/calcined VL/batch) and 0.7682 (Cu/calcined VL/fixed bed). According to these separation factors, both adsorption systems can be considered favorable to copper sorption (0 < RL < 1), sorption conducted in batch being the more favorable.

Adsorption thermodynamics

Thermodynamic data were obtained by the static method in a thermostatic finite bath under constant stirring at four different temperatures (273, 298, 323 and 348 K) and correlated with the Langmuir and Freundlich isotherms. Figure 8 shows the adsorption isotherms for 1 g clay/100 mL adsorbate solution, fitted by the Langmuir and Freundlich models, at initial copper concentrations ranging from 0.08 to 2.36 mmol.L-1.

The thermodynamic parameters ΔH, ΔS and ΔG (Equations 9 and 10) presented in Table 5 were obtained from Figure 9. The negative ΔH for copper adsorption on calcined VL clay indicates that this process is exothermic, in line with adsorption theory. In contrast, the magnitude of the enthalpy variation obtained for copper adsorption on calcined Bofe clay (+24.8 kJ.mol-1) shows that this process is endothermic.
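The ΔH and ΔS values in Table 5 follow from an ordinary linear fit of ln(Kd) against 1/T (Equation 9), with ΔG then given by Equation 10. A minimal sketch, with placeholder Kd values:

    import numpy as np

    R = 8.314                                    # gas constant, J/(mol K)
    T = np.array([273.0, 298.0, 323.0, 348.0])   # temperatures, K
    Kd = np.array([1.5e3, 2.4e3, 3.6e3, 5.1e3])  # placeholder distribution coefficients

    # Equation 9: ln(Kd) = dS/R - dH/(R T), linear in 1/T.
    slope, intercept = np.polyfit(1.0 / T, np.log(Kd), 1)
    dH = -slope * R        # J/mol; positive means endothermic
    dS = intercept * R     # J/(mol K)
    dG = dH - T * dS       # Equation 10, J/mol, at each temperature

    print(f"dH = {dH / 1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
    print("dG (kJ/mol):", np.round(dG / 1000, 1))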
A decrease in entropy during adsorption helps stabilize the metal-clay complex formed (ΔS < 0). The ΔS values suggest a decrease in randomness at the solid/solution interface during copper sorption on calcined VL clay compared with Bofe clay. The clay-Cu interactions occurred spontaneously and were accompanied by a decrease in Gibbs free energy (ΔG < 0), with ΔG ranging from -16.6 to -27.9 kJ.mol-1 over the temperature range 273 to 348 K, becoming more negative as the temperature rose. Adsorption and ion exchange of Cu(II) on different clays had already been reported as endothermic 23-28, as shown in Table 6. It is likely that adsorption of Cu(II) ions on the clay surface requires an activation energy, and a rise in temperature helps more Cu(II) ions overcome this energy barrier and attach to the surface 23.

Conclusions

The chemical composition of the clays was not modified by calcination, but the cation exchange capacity (CEC) was reduced. The adsorption results indicated that the sorption kinetics of copper ions on the clays is rapid, requiring a minimum of 60 minutes to reach equilibrium. The study of the mass transfer parameters and of the breakthrough curves showed that the most appropriate operating flow rates, i.e., those that minimize the diffusional resistances in the bed, for copper removal by the calcined Bofe and VL clays were 4 and 3 mL/min, respectively. The breakthrough curves demonstrated that, as the flow rate increases, the breakthrough point, the saturation throughput and the total removal values tend to decrease. The Langmuir isotherm model correctly represents the equilibrium data obtained from experiments in batch and in fixed bed at room temperature. At low concentrations, copper removal shows linear equilibrium behavior and does not depend on the system applied (static or dynamic). Calcination produced hydroxyl loss, preventing the chemical precipitation of copper by alkalinity. The adsorption capacity (qm) increased in fixed bed, which suggests that copper adsorption on calcined Bofe and VL clays is related to the mobility of the particles. Copper adsorption on calcined VL clay is exothermic, whereas on Bofe clay it is endothermic. The clay-Cu interactions occurred spontaneously, accompanied by a decrease in Gibbs free energy.

Figure 4. Kinetics of copper adsorption on clays: a) dimensionless solution concentration as a function of time; b) adsorbed amount at equilibrium (C0 = 1.57 mmol.L-1, dp = 0.855 mm).

Figure 7. Adsorption isotherms for copper removal on a) calcined Bofe clay and b) calcined VL clay, fitted to the Langmuir and Freundlich models.

Figure 8. Adsorption isotherms for copper removal on a) calcined Bofe clay and b) calcined VL clay at different temperatures.

Table 1. Chemical analyses of the Bofe and VL clays. *Loss on ignition.

Table 2. Cation exchange capacity (CEC) and surface area of the clays.

Table 3. Values of MTZ, qU, qT and %RT for copper adsorption on calcined Bofe and VL clays.

Table 4. Langmuir and Freundlich parameters for Cu2+ adsorption on Bofe and VL clays.

Table 6. Thermodynamic parameters for Cu(II) adsorption on different clays.
Collaborative support for child abuse prevention: Perspectives of public health nurses and midwives regarding pregnant and postpartum women of concern

Child abuse is a globally prevalent problem, and its incidence has continuously increased in Japan over the past 30 years. Prevention of child abuse depends on the support available to pregnant and postpartum women from the time of pregnancy. Public health nurses and midwives are expected to provide preventive support in cooperation, as they can support pregnant and postpartum women from close proximity and recognize their health problems and potential signs of child abuse. This study aimed to deduce the characteristics of pregnant and postpartum women of concern, as observed by public health nurses and midwives, from the perspective of child abuse prevention. The participants comprised ten public health nurses and ten midwives with five or more years of experience working at Okayama Prefecture municipal health centers and obstetric medical institutions. Data were collected through a semi-structured interview survey and analyzed qualitatively and descriptively using an inductive approach. The characteristics of pregnant and postpartum women, as observed by public health nurses, fell into four main categories: having "difficulties in daily life;" "a sense of discomfort of not feeling like a normal pregnant woman;" "difficulty in child-rearing behavior;" and "multiple risk factors checked by objective indicators using an assessment tool." The characteristics observed by midwives were grouped into four main categories: "mental and physical safety of the mother is in jeopardy;" "difficulty in child-rearing behavior;" "difficulties in maintaining relationships with the surrounding people;" and "multiple risk factors recognized by an assessment tool." Public health nurses evaluated pregnant and postpartum women's daily life factors, while midwives evaluated the mothers' health conditions, their feelings toward the fetus, and stable child-rearing skills. To prevent child abuse, they utilized their respective specialties to observe those pregnant and postpartum women of concern with multiple risk factors.

Introduction

In Japan, the number of child abuse consultations handled by child guidance centers nationwide has continually increased over the past 30 years, exceeding 200,000 cases for the first time in the financial year (FY) 2020. Prevention of child abuse requires cooperation among multiple organizations [1], including medical, healthcare, and welfare institutions [2], and social welfare organizations [3]. In Europe and the U.S., Nierop et al. [4] found a relationship between stress during pregnancy and postpartum depression, and pointed out the importance of making specific efforts from the pregnancy period onward to ensure abuse prevention. In a randomized controlled trial, Olds et al. [5] found that prenatal home visits by nurses reduced child maltreatment, demonstrating the importance of involvement with mothers during pregnancy. Furthermore, Ashraf et al. [6] state that health care providers can identify risk factors and signs of abuse in the medical setting, and that referrals to community resources, parenting education, and other preventive measures must be incorporated into clinical practice.
In Japan, there are several mother-child support programs for the prevention of child abuse, such as "Healthy Start Oita" [7] and the "Suzaka Trial" [8], which provide seamless support during pregnancy, childbirth, and the postpartum period in accordance with the characteristics of each region (for example, whether the government and obstetric hospitals can easily collaborate on a daily basis). In 2011, Okayama Prefecture began operating the "Contact system for support for mothers and children of concern during pregnancy" (hereinafter referred to as the "Okayama model") [9], strengthening seamless support for pregnant and postpartum women of concern (PPWC) in collaboration with obstetric facilities and the community. The Okayama model is characterized by early recognition of risk factors at obstetric facilities, focusing on medical and social backgrounds, and by efforts to provide support in cooperation with multiple professions throughout pregnancy. According to the Ministry of Health, Labour, and Welfare's FY 2018 Welfare Administration Report [10], although child abuse consultation responses increased nationwide, the number of municipal consultation responses in Okayama Prefecture fell from 1,641 in FY 2012 (6.29 consultations per thousand population per year) to 850 in FY 2018 (3.54 consultations per thousand population per year).

Kobayashi [11] analyzed child abuse and death cases and concluded that the key institutions for abuse prevention were health and medical centers, underlining the importance of cooperation between the two. A multidisciplinary approach through partnership is a standard for preventing child abuse [12,13], and early interventions with pregnant women at risk of child abuse are considered effective in preventing it [14]. It is at obstetric facilities, where women are diagnosed with pregnancy, and at health centers, where maternal and child health handbooks are issued, that public health nurses and midwives first have contact with pregnant women at risk of child abuse, or with pregnant women about whom they feel a vague sense of alertness that "causes them to be concerned about something." Public health nurses (PHNs) and midwives must share information to provide continuous support for preventing abuse if either institution identifies PPWC. Both PHNs and midwives provide ongoing support to expectant mothers: the PHNs through long-term life support for pregnant and postpartum women and their families, and the midwives in their role as close supporters of women's health. Hence, both are important partners in primary care for pregnant and postpartum women, and together they may be able to identify health problems and signs of potential child abuse. Stolper et al. [15] stated that a feeling that "there is something wrong here," a vague and intuitive sense of alertness, helps child health nurses become alert to situations that may lead to child abuse or maltreatment. Furthermore, a U.S. study of child abuse pediatricians (CAPs) reported that the diagnosis of child abuse is reached by combining intuitive responses elicited by family encounters with social information obtained outside those encounters [16]. Because of differences in their respective specialist disciplines, PHNs and midwives may differ in how they identify PPWC. Clarifying these differences can help PHNs and midwives understand each other's perspectives and cooperate.
There are individual differences in the ability of health visitors to find and contact medically and socially high-risk pregnant women, depending on the person in charge [17], and there are cases in which the affected individual does not receive support. Adachi et al. [18] clarified that whether a pregnant woman is judged to need support depends on the competence of the PHN. Similarly, it has been reported that obstetric nurses differ in their ability to assess the risk of abuse depending on whether they have experience in caring for mothers about whom there are concerns related to child maltreatment [19], and that differences in the quality of their support (such as being able to recognize the risk of abuse) are influenced by their years of experience [20]. Experienced PHNs and midwives may apply certain standard assessment points when determining whether they are "concerned" about pregnant women and whether support is needed. Clarifying these points will reduce the likelihood of such cases being overlooked because of differences in individual ability.

The latest research on child abuse prevention has focused on the characteristics of at-risk pregnant and postpartum women and the social and medical factors related to child abuse. So far, no research has elucidated the intuitive concern that alerts practitioners to situations that may lead to child abuse from the perspective of the PHNs and midwives who provide direct support to pregnant and postpartum women. Therefore, this study aimed to clarify the characteristics of PPWC, as observed by experienced PHNs and midwives, in support of child abuse prevention. Our findings will promote abuse prevention and support from early pregnancy, helping PHNs and midwives work together to ensure that women in need of support are not overlooked.

Operational definitions

Pregnant and postpartum women of concern. PPWC are women about whom PHNs and midwives feel concerned while providing support during pregnancy, worrying about the possibility of issues leading to child abuse.

The perspective of child abuse prevention. This refers to the perspectives of PHNs and midwives on recognizing the risk of child abuse from the background factors, attitudes, and moods of pregnant and postpartum women, and on starting preventive support early.

Research design

This was a qualitative descriptive study with an inductive approach.

Research participants

The participants included PHNs and midwives working at municipal health centers and obstetric medical institutions actively utilizing the contact system of the Okayama model in operation in Okayama Prefecture. We requested cooperation in the study, in writing and verbally, from the general PHNs and the director of the nursing department of each institution, and received recommendations of staff with five or more years of experience in supporting mothers and children, including child abuse prevention.

Data collection

The data were collected from August to November 2019. Semi-structured interviews based on an interview guide were conducted with the PHNs and midwives who agreed to participate, at locations designated by the participants. Face-to-face interviews were conducted by the first author. The interview guide was developed after discussion with faculty and graduate students in the field of adult and child health nursing specializing in maternal support.
During the interviews, the interviewees were asked to recall cases and situations in which they felt "concerned" about pregnant and nursing mothers, from the standpoint of abuse prevention, and to describe their experiences and the reasons for feeling concerned. The duration of the interviews was 54 minutes 46 seconds ± 10 minutes 59 seconds (mean ± SD) for the PHNs and 55 minutes 16 seconds ± 10 minutes 15 seconds (mean ± SD) for the midwives. All interviews were recorded on an integrated circuit (IC) recorder with the consent of the participants and transcribed verbatim, maintaining anonymity.

Data analysis

Data were analyzed using qualitative inductive analysis methods. During the analysis, personally identifiable information was anonymized. Data analysis was conducted simultaneously with data collection, with interviews transcribed immediately after each interview. The interview transcripts were cross-checked among the researchers. The contexts of the concerns that the PHNs and midwives felt about expectant mothers were extracted from the transcripts. Coding was performed with attention to these contexts so that the meaning of the narratives could be understood, and categorization was then carried out based on similarities and differences. After aggregating similar categories and examining their relationships, the categories were grouped. The codes were extracted and then compared by two co-researchers with experience in qualitative research. The authors specialize in community nursing, and their expertise in supporting children and their families to live safely and healthily in the community was helpful in capturing concerns about mothers during the coding process. When opinions differed during the categorization process, the researchers repeatedly reviewed the results until a consensus was reached. They checked for any gaps between the intentions of the participants and the interpretation of the data, and presented the results to the participants to confirm the accuracy of the content.

Ethical considerations

This research was approved by the Institutional Review Board of the Okayama University Graduate School of Health Sciences (D19-1). We briefed the participants verbally and in writing on the study's purpose, the voluntary nature of participation and freedom to withdraw, the protection of personal information ensuring anonymity of the data provided, the data storage method, and the publication of the research results. Written consent was then obtained.

Overview of research participants

The research participants were ten PHNs working at municipal health centers in Okayama Prefecture and ten midwives working at obstetric medical institutions, 20 people in total. Their experience was 21.0 ± 5.4 (mean ± SD) years as PHNs and 22.0 ± 11.3 (mean ± SD) years as midwives (Table 1).

PPWC as seen by PHNs

We extracted four main categories, 12 subcategories, 32 subordinate categories, and 168 codes during the data analysis (Table 2).

Difficulties in daily life circumstances. This category covers the health workers' awareness that the pregnant or nursing mother had difficulties in her family background, living environment, socioeconomic background, history, and so on, and that stable living conditions were not in place, which caused concern about the mother's situation.
This category comprised five subcategories: "daily life foundations are unstable," "lack of ability to support parents' families," "difficult to receive support from surrounding people," "cannot envision life plan after giving birth," and "in a dirty living environment."

The findings showed that the women's unstable financial conditions and "difficulties in getting accustomed to the area due to transfer" resulted in unstable daily life foundations. Furthermore, even when the women wanted their parents' support, the parents' households lacked the capacity to provide it, owing to factors such as "financial distress" and "health issues" among family members, reflected in the subcategory "lack of ability to support parents' families." There were also cases where the pregnant or parturient women refused "support or involvement" from PHNs or surrounding people: these women declined counseling when they reported their pregnancy and tried to return home as soon as they had received the Mother and Child Health Handbook. Others had a "poor relationship with their parents" due to histories of childhood abuse while "having nobody to rely upon except parents," making it "difficult to receive support from surrounding people." There were also cases where women had made no plans for the daily life arrangements necessary for pregnancy or childbirth, raising concern about those who could not "envision a life plan after giving birth." Finally, PHNs raised concerns about "difficulties in daily life" among women whose socioeconomic background left them living in "dirty living environments," with "garbage scattered in their rooms" that was never cleaned, or "unsanitary child-rearing spaces."

Table 2. Characteristics of pregnant and postpartum women of concern as observed by public health nurses.

PHNs' discomfort with the mannerisms of the pregnant women. This category represented the discomfort that PHNs felt toward pregnant and postpartum women during their interactions at the time of pregnancy notification, owing to the unusual mood or behavior the women presented. It involved three subcategories: "difficult to communicate with," "having a unique way of thinking and mood," and "having mental instability."

Concerns about "having a unique mood" were mentioned, with PHNs stating, "I am not exactly sure, but intuitively feel concerned" (PHN:C) about the unusual appearance of a woman's hair or her complicated relationship with her companion at the time of pregnancy notification. There were also concerns about "having particular views on pregnancy and childbirth style," such as a strong desire for a painless home delivery, and "having particular views on unique health methods and ways of thinking," involving, for example, women's beliefs that chronic illnesses could be controlled through alternative healing powers. PHNs also mentioned concerns about women "having mental instability," such as "emotional instability," when they had sad facial expressions, cried easily, or were easily swayed by symptoms of mental illness. Meanwhile, PHNs witnessed cases in which the pregnant women did not speak a word during the interview at the time of pregnancy notification and their parents answered all the questions.
Sometimes, the conversations lacked factual statements regarding finances, childcare supporters, and chronic illnesses, leading to "few remarks initiated by the individual." These factors indicated a "discomfort of not feeling like a normal pregnant woman," with the PHNs sensing that the women were "difficult to communicate with" because of their "poor facial expressions" or a lack of "progress in verbal exchange" stemming from inconsistent words and actions.

Have difficulty in child-rearing behavior. This category reflected the PHNs' concerns that pregnant and postpartum women might be unable to perform appropriate child-rearing behaviors because of a lack of interest and involvement in their fetuses or babies, and a lack of confidence and skill in raising them. It comprised three subcategories: "feelings are not directed toward the fetus/child," "having inappropriate child-rearing attitude toward the older child," and "feeling unsure about child-rearing techniques."

Concerns raised by PHNs included women "having behaviors and attitudes that do not celebrate pregnancy," with the nurses noting that the women did not view the pregnancy positively because it was unwanted. One PHN mentioned that a woman "was unable to do the other things she wanted to do because of the pregnancy" (PHN:G). Other aspects included not finding their child cute and "being unable to feel affection for fetus/child," leading the women to strongly prioritize themselves and "not change drinking or smoking habits for the fetus." Moreover, some pregnant women expressed a lack of interest in or concern for the fetus/child, for instance by not hugging the child even when it cried and "not showing care and attention toward the child."

PHNs also raised concerns about women "having cold attitudes toward the older child," ignoring their other biological children or step-children and treating them aggressively. This was described in narratives such as treating the older child as if they were dirty and not letting him/her touch the baby, intentionally ignoring him/her even when he/she cried, and brushing away the step-children when they came near the baby. In addition, the PHNs expressed concern about women "being unable to take care of the older child," for example in terms of cleanliness and health management, owing to insufficient child support. These women were viewed as "having inappropriate child-rearing attitude toward the older child." A typical example was: "The woman would say that they are pregnant and they are having a difficult time, so they cannot do household chores, and they would ask the older sister in the upper grades of elementary school to even skip school to do household tasks or take care of the baby" (PHN:J).

Meanwhile, PHNs mentioned concerns about women "feeling anxious about child-rearing and not feeling confident." One participant mentioned that although a pregnant or parturient woman might feel affection toward the fetus or child, she is "ultimately not giving affection, or because they do not receive affection themselves, they do not know how to do it and feel anxiety" (PHN:B). There were also concerns about women "feeling unsure about child-rearing techniques" because they did not know how to raise their children, having "insufficient child-rearing skills and knowledge."

Have multiple risk factors recognized by an assessment tool.
This category reflected the PHNs' concern about pregnant or parturient women with multiple risk factors for child abuse identified from objective indicators, such as the medical records used for interviews or in-house checklists. It comprised the subcategory "multiple risk factors checked by an assessment tool." A typical example was: "There are concerns of the woman being at high risk when there are multiple factors on the contact form for support for mothers and children [of concern during pregnancy] from the obstetrics facility, such as a history of mental illness, being in a step-family, or being of advanced maternal age" (PHN:A).

PPWC as seen by midwives

We extracted four main categories, nine subcategories, 33 subordinate categories, and 178 codes during the data analysis (Table 3).

Table 3. Characteristics of pregnant and postpartum women of concern as seen by midwives.

Mental and physical safety of mother is in jeopardy. This category referred to midwives' feeling that pregnant and postpartum women were at risk of being unable to give birth safely. It comprised two subcategories: "having risk of childbirth that jeopardizes maternal safety" and "having mental instability."

Midwives mentioned concerns about cases in which they were unable to keep track of the pregnancy until just before the birth, such as a "previous experience of childbirth with no prenatal care" or a "first visit or hospital transfer after 30 weeks of gestation," and about pregnant or parturient women "being fixated on a desired childbirth style," such as a painless home delivery, without considering their bodies' safety. Furthermore, midwives felt that the mother's mental and physical safety was in jeopardy when she showed negative emotional expressions, such as "feeling depressed and having sad facial expressions," or psychological issues, such as "having mental instability."

Have difficulty in child-rearing behavior. This category referred to the midwives' concerns that pregnant or parturient women would experience difficulties in raising children because of a lack of affection for, or involvement with, their children and siblings. It comprised four subcategories: "feelings are not directed toward the fetus/child," "no progress in preparations for childbirth and life after giving birth," "having an inappropriate child-rearing attitude toward older child," and "difficulties due to child-rearing not progressing as expected."

Midwives felt concerned that the women did not feel affection toward their children because they were "unable to accept the pregnancy," or that they were not thinking about daily life after childbirth at all and were making "no progress in preparations for childbirth and life after giving birth" because they did not want to acknowledge that they would become mothers. Furthermore, midwives mentioned concerns about difficulties arising when child-rearing did not progress as expected, such as "feeling confused due to being unable to raise the child as expected," when the challenges of raising a child exceeded the women's expectations and differed from what they had anticipated, as in the statement: "She became pregnant with infertility treatment, and it was all well and good at childbirth, but she
cried about not thinking it would be so tough, and she does not want to take care of the baby" (MW:d). The women were also "fixating on child-rearing by the book" and "having many minor questions," as in the statement: "They would thoroughly read through the child-rearing book and immediately contact nurses when something does not go exactly as mentioned and ask what they should do" (MW:d).

Midwives also expressed concern about "difficulty in child-rearing behavior" when, for instance, they witnessed pregnant or postpartum women using harsh words or actions toward their older child in the waiting rooms or wards, with the women "having cold attitudes toward the older child," or "having inappropriate child-rearing attitudes toward older child" when the "older child is unkempt in appearance."

Have difficulties in maintaining relationships with surrounding people. This category referred to midwives' concerns that the women lacked an immediate supporter who could help them, rejected support, or had no relationship with the people around them. It comprised two subcategories: "difficult to receive cooperation from surrounding people" and "difficult to communicate."

Midwives felt concerned about women who would have to manage child-rearing and household work by themselves after discharge from the hospital because of "having minimal cooperation from husband or partner," "unreliable parents due to disagreements or history of abuse," or "having a small number of visitors and visits during hospitalization." One participant mentioned, "The husband has night shifts, so the woman is always thinking about how to stop [the baby from] crying" (MW:g). Furthermore, the midwives expressed concern about women finding it "difficult to receive cooperation from surrounding people" because they "refuse to let others into their personal matters": "They have an atmosphere of not wanting others to get involved, such as, 'It is fine, I will do everything by myself'" (MW:a). The midwives also expressed concern about "difficulties in maintaining relationships with surrounding people" arising from difficulties in communication, such as cases with "few remarks initiated by the individual," as in the statement: "The woman usually does not have conversations with the midwife, and even if she comes [to the medical examination] with her mother, it is just the mother talking, and there are few reactions from the woman herself" (MW:j). There were also instances of "having disjointed conversations," as in the statement: "The answer I get is slightly different from what I asked. She likes giving lots of answers to things she is interested in. She does not respond for the important parts" (MW:d).

Have multiple risk factors recognized by an assessment tool. This category referred to concerns about pregnant and postpartum women with multiple risk factors related to child abuse, identified using objective indicators such as maternity interviews during initial visits and checklists used within facilities during maternity examinations. It comprised the subcategory "multiple risk factors checked by an assessment tool."
A typical example is as follows: "I try to make a template and pick up people who are likely to require follow-ups during pregnancy so that I can continuously do so [. . .] I try to keep an eye on people throughout their pregnancies when they have several risk factors" (MW:h).

Discussion

PPWC as seen by PHNs and midwives

In identifying PPWC, PHNs and midwives each drew on the specialist strengths of their own professions to determine the target women, and both viewed PPWC as targets for support. Aspects of PPWC common to both PHNs and midwives were those regarded as so-called specified pregnant women [21]: women who did not display affection toward the fetus/child because of undesired or unexpected pregnancies; women lacking "support from surrounding people" because they were unmarried, single mothers, or could not obtain cooperation from the husband; and women with child-rearing problems, such as "having inappropriate child-rearing attitudes toward the older child." Obstetric medical institutions are expected to provide information on specified pregnant women to administrative institutions as support targets for abuse prevention. This study showed that both PHNs and midwives perceived specified pregnant women as targets for support and were conscious of their relationship with pregnant and postpartum women.

Furthermore, the PHNs were characterized by their focus on the daily life background, child-rearing ability, and environment of pregnant and postpartum women with high social risk, such as "having a family background where parents' home cannot be relied upon" or having difficulties in daily life due to unstable daily life foundations. The nurses viewed childcare in the context of a stable lifestyle. Previous research [22,23] reported that child-rearing supporters for pregnant and postpartum women from pregnancy to the first month after childbirth were mainly immediate family members, such as the women's husband, mother, and mother-in-law. The present results indicate that the absence of people to rely on can pose a threat to the mental and physical stability of pregnant women.

Meanwhile, the midwives focused on maternal health management for a safe delivery, on the fetus, and on stable child-rearing skills. Their concerns were about pregnant and postpartum women with medical risks, such as risks to their mental and physical safety, and about those who "have difficulty in child-rearing behavior" that may continue after discharge from the hospital. This included women who were not thinking at all about life after childbirth and were making "no progress in preparations for childbirth and life after giving birth," women anxious about the mental and physical changes from the lifestyle changes experienced around the child in the early postpartum period, and women lacking knowledge of breastfeeding skills and childcare. Thus, the midwives' perspective on child-rearing behaviors focused on life after discharge from the hospital. In this manner, the strengths of the specialties [24] of the PHNs, who are close to the community, and the midwives, who specialize in pregnancy and childbirth, were brought to bear in identifying these pregnant and postpartum women. Specified pregnant and postpartum women with high social risks have many overlapping elements, and they may also include pregnant women with high medical risks [25].
Thus, child abuse may be prevented by supporting pregnant women who, for various reasons, may need child-rearing support, whether or not those factors ultimately lead to child abuse. Sharing feelings of "concern" about pregnant and postpartum women across the perspectives of PHNs and midwives may allow these women to receive continuous support without being overlooked. The characteristics noted by the PHNs, who capture the daily life background and child-rearing environment of PPWC, and those noted by the midwives, who focus on maternal safety and child-rearing skills, need to be mutually understood by the other profession. These two professions also need to notice the risks that lead to abuse and share this information with other supporters as soon as they become aware of it.

Matsubara [26] indicated that the mothers and children deemed "of concern" by PHNs at 18-month child health examinations may present unusual aspects that do not fit the general image held by PHNs, or aspects seen in only a minority of people. Even among the PPWC identified by the PHNs in this study, the PHNs felt that certain women with unusual moods or ways of thinking gave them the "discomfort of not feeling like a normal pregnant woman." Ozawa et al. [27] indicated that the job of a PHN is to guide an individual's health conditions in a better direction and to foster the ability to maintain an active life. When a PHN feels "concerned for some reason," this signals that a problem may be present and that the individual needs assistance; moreover, valuing the "concerning aspect" and the process of verifying it will improve the quality of on-site practice. Therefore, future studies should verify whether pregnant or parturient women considered "of concern" in this study truly require support, and determine the effect of providing support from the point at which concern for the woman first arises.

Suggestions for collaborative support to PPWC provided by PHNs and midwives

For both the PHNs and midwives, identifying PPWC involved recognizing "concerning" aspects not only from subjective information, such as behavior and attitude during interviews with the women, but also from objective information, such as the questionnaire used at the interview and the contents of risk assessment indices. Since 2011, Okayama Prefecture has used the "Contact form for support for mothers and children of concern during pregnancy" as a communication tool between obstetric medical institutions and administrative institutions under the Okayama model [9]. This system has been in operation throughout Okayama Prefecture, and the present study also covered the use of this contact form unique to the Prefecture. In recent years, risk assessments have also been conducted using indicators created independently by child abuse prevention committees within obstetric medical institutions. The importance of continuous collaborative support across related institutions, such as medical and administrative institutions, has been recognized [28]. Wada [29] stated that, in obstetric medical care for pregnant women, midwives' "concerns" differ between the individual staff in charge; however, preparing a standard framework and dealing with issues as a team, instead of relying on individual sensibilities, can change the response from vaguely "being concerned" to actively "noticing" issues.
Furthermore, activity reports on initiatives for preventing child abuse in perinatal medical settings have indicated the effectiveness of systematic risk determination using checklists and of promoting collaboration with health institutions [30]. If both PHNs and midwives use common risk assessment indicators, they are more likely to notice that PPWC may require support. Consequently, if both professions share their observations, they will not overlook PPWC in need of support, establishing a support system from an early stage and taking the first step toward continuous monitoring support [13].

Yamaguchi et al. [31] reported the following examples of midwives' concerns when a mother and child in the early postpartum period are discharged from the hospital: "single concerns," such as support from surrounding people, child-rearing techniques, and mental illness complications; and "multiple concerns," which are combinations of such single concerns. "Single concerns" represent issues that mothers and children generally encounter, and care tends to focus on them, so it is easy to provide care tailored to each mother and child. "Multiple concerns," by contrast, are highly subjective and involve various factors, including the environment; the care recipients and care content vary widely, and time is required before improvement becomes apparent. The present results reveal that both PHNs and midwives viewed women who "have multiple risk factors recognized by an assessment tool" as PPWC and brought their respective expertise to bear in responding to women with multiple factors, suggesting the importance of collaborative support. The wide range of factors identified, such as inadequate prenatal check-ups [32], poverty, and housing instability [33], is consistent with previous studies. These findings suggest the importance of both parties demonstrating their expertise and working together to support pregnant and nursing mothers about whom a combination of these factors raises concern.

Yamazaki [34] indicated the need for the professions involved to reach a common understanding of contact methods between medical and health institutions, and noted that information sharing will progress effectively through mutual understanding of the differences in perception between PHNs and midwives. Regarding issues in collaboration between PHNs and midwives, Hattori et al. [35] reported that midwives worried about whether their perspectives on "mothers and children of concern" were appropriately communicated to the PHNs. Many reports mention that turf issues and the division of roles hinder good inter-agency collaboration [36,37]. Furthermore, Karata et al. [38] stated that feedback of information from other institutions is essential for developing collaboration after nurses at obstetric medical facilities provide information on "parents and children of concern." In the future, it will be necessary to determine how PHNs and midwives view each other's characteristic perspectives on PPWC and how they provide support, while investigating the ideal form of effective collaboration between the two professions.

Research limitations and future issues

This study has some limitations. Only ten PHNs and ten midwives participated, and their workplaces were limited to a single prefecture. Furthermore, we targeted PHNs and midwives with five or more years of experience.
However, both groups had, on average, over 20 years of experience, so their identification of PPWC likely reflected those long years of experience. Future tasks include developing indicators that support the prevention of child abuse from the early stages of pregnancy without overlooking pregnant and postpartum women who need support, regardless of the number of years of experience of the nurses and midwives. Moreover, an effective collaborative support model for PHNs and midwives needs to be constructed to prevent child abuse.

Conclusion

The results revealed that each profession had its own perspective for determining target women, drawing on its respective specialty, and that both shared the perspective of identifying specified pregnant women as PPWC. The PHNs focused on childcare within a stable lifestyle, grounded in the daily life backgrounds of the pregnant or parturient women. The midwives, in contrast, focused on the health management of mothers, the fetus, and stable child-rearing skills, together with a perspective on child-rearing behavior after discharge from the hospital. Future research must determine how each profession views the other's characteristic perspectives in providing support, and investigate the ideal form of effective collaboration between these two professions.
Allowing for missing outcome data and incomplete uptake of randomised interventions, with application to an Internet-based alcohol trial

Missing outcome data and incomplete uptake of randomised interventions are common problems, which complicate the analysis and interpretation of randomised controlled trials, and are rarely addressed well in practice. To promote the implementation of recent methodological developments, we describe sequences of randomisation-based analyses that can be used to explore both issues. We illustrate these in an Internet-based trial evaluating the use of a new interactive website for those seeking help to reduce their alcohol consumption, in which the primary outcome was available for less than half of the participants and uptake of the intervention was limited. For missing outcome data, we first employ data on intermediate outcomes and intervention use to make a missing at random assumption more plausible, with analyses based on generalised estimating equations, mixed models and multiple imputation. We then use data on the ease of obtaining outcome data and sensitivity analyses to explore departures from the missing at random assumption. For incomplete uptake of randomised interventions, we estimate structural mean models by using instrumental variable methods. In the alcohol trial, there is no evidence of benefit unless rather extreme assumptions are made about the missing data, nor of an important benefit in more extensive users of the intervention. These findings considerably aid the interpretation of the trial's results. More generally, the analyses proposed are applicable to many trials with missing outcome data or incomplete intervention uptake. To facilitate use by others, Stata code is provided for all methods.

Introduction

Missing outcome data and incomplete uptake of trial interventions are common problems in randomised controlled trials. A key consideration in handling both issues is the intention-to-treat (ITT) principle [1], which states that all individuals randomised in a clinical trial should be included in the analysis, in the groups to which they were randomised, regardless of any departures from randomised treatment. Following this principle preserves the benefit of randomisation, namely that the treatment groups cannot differ systematically on any factors except those assigned in the trial, and avoids selection bias. However, it is not universally agreed how the ITT principle applies when some outcomes are missing [2]. Further, the ITT principle does not tell us how to estimate the effects that might have been observed with better uptake of trial interventions.

Missing outcome data are problematic because they cause a loss of power and can lead to biased estimates of intervention effects. Once data are missing, the loss of power cannot be reversed, but it can be minimised by appropriate analysis choices, in particular by including all observed data in the analysis [3]. Estimates of intervention effects are typically biased if the analysis makes the wrong assumption about the missing data. However, any analysis with missing data must make partly or completely untestable assumptions, so we can rarely be sure that we have the correct analysis. For this reason, sensitivity analysis is recommended [4-6]. The assumptions of many (but not all) statistical methods for handling missing data can be expressed using the framework of Little and Rubin [7].
Data are missing completely at random (MCAR) if the probability of data being missing does not depend on any missing or observed values. Data are missing at random (MAR) if the probability of a particular set of values being missing for an individual does not depend on the values themselves, conditional on the observed values of other variables. Otherwise, data are missing not at random (MNAR).

Incomplete uptake of trial interventions often means that randomised groups have more similar experience than the investigators had intended, which usually causes the difference in outcomes to be smaller than it would have been with better uptake [8]. However, bias in the estimated intervention effect is not always towards zero: incomplete uptake in equivalence or non-inferiority trials, or in trials where non-trial interventions are available, can inflate differences between randomised groups [9]. Estimating the effect of allocating an intervention does not require adjustment for incomplete uptake, in contrast to estimating the effect of a particular level of intervention uptake. The latter is commonly carried out by per-protocol analysis, which excludes data observed when participants had poor intervention uptake. However, per-protocol analysis is undesirable because it is subject to selection bias. Randomisation-respecting alternatives achieve the same aim by using only comparisons of groups as randomised [8]. One such method is principal stratification [10], which leads to estimation of the complier-average causal effect (CACE) [11] in problems where intervention uptake is dichotomous. An alternative, suitable for quantitative intervention uptake, is the structural mean model (SMM) [12].

This paper aims to promote the implementation of recent methodological developments by describing a sequence of analyses that explores both issues, and to illustrate the methods using data from an Internet-based trial. This trial is a good example because the issues are particularly acute, but they arise in a wide range of other trials. The Internet-based trial is described in Section 2. Methods for tackling missing data are described in Section 3, with results in Section 4. Methods for tackling incomplete uptake of interventions are described in Section 5, with results in Section 6. We conclude with a discussion in Section 7.

The Down Your Drink trial

Hazardous drinking in the general population is an important public health problem [13]. Brief interventions are effective [14] but hard to implement. The Internet is increasingly used to deliver behaviour change interventions [15], and a new 'Down Your Drink' (DYD) website was developed, building on psychological theories and aiming to engage users by providing interactive tools [16]. The DYD trial was a randomised evaluation of the DYD website compared with a non-interactive control website providing information only [17]. All stages of the trial (recruitment, randomisation, intervention and data collection) were conducted online. This presented a number of challenges [18], the key one for the present paper being whether the numbers of participants using the intervention website and providing follow-up data would be sufficient. The primary trial outcome was alcohol consumption in the previous week, recorded by the TOT-AL, a specially developed online questionnaire [19]. When an outcome assessment was due, participants received an email with a link to the trial website where they could complete the outcome questionnaires.
Alcohol consumption was transformed in all analyses to log(number of units in the last week plus 1). This paper uses data, summarised in Table I, from the pilot trial, which recruited 3746 individuals from 16 February to 16 October 2007. Outcome data were collected at 1 and 3 months; we focus on estimating the intervention effect at 3 months. The correlation between baseline and 3-month alcohol consumption was 0.41 (0.45 for baseline and 1 month; 0.53 for 1 month and 3 months). Although baseline data were complete, poor follow-up response rates were anticipated because there was no personal contact with participants. To increase response rates, all participants who did not complete the outcome questionnaires within 7 days of the first email invitation received second and (if necessary) third invitations at weekly intervals. A fourth email inviting participants to provide their outcome data directly by email to the investigators yielded no further responses. Offline follow-up was attempted for users who had provided a telephone number or address, but was not successful [18]. Incentives were also trialled [20]. Participants were additionally randomised to complete only one of four secondary outcome measures in order to reduce the assessment burden and improve response rates [21]. The number of emails sent to each participant is summarised in Table II and is used in the analysis in Section 4. In the intervention arm, the mean log(TOT-AL + 1) is larger in later respondents than earlier respondents, suggesting an MNAR mechanism, with non-respondents perhaps having an even higher mean log(TOT-AL + 1).

Similarly, low use of the website was a concern. It is hard to define and measure website use [22]; in particular, although each page download was recorded, the length of time that participants spent actually using the website is unknown. We summarised website use by the number of login sessions and the total number of pages downloaded in the first month; in calculating the latter, multiple downloads of the same page were counted only if they occurred in different login sessions. Individuals were automatically logged in to the intervention or control websites after randomisation, but a few who immediately left the trial website had no logins.

The main findings of the trial were that alcohol consumption in responders dropped substantially from baseline to 1 month, and again slightly from 1 to 3 months, but that the drops were very similar across randomised groups [23]. The ratio of (geometric mean) 3-month alcohol consumption in the intervention group compared with the control group was 1.04 (95% confidence interval 0.94 to 1.16). However, missing data were substantial and more common in the intervention group (Table I). Use of the intervention website was greater than use of the control website, but the majority of participants in both arms had only one login session. The outstanding questions that this paper aims to answer are whether the results are robust to different assumptions about the missing data and whether interpretation is affected by incomplete use of the website.
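Since effects on the log(units + 1) scale back-transform to ratios of geometric means of (TOT-AL + 1), a log-scale difference $\beta$ between arms corresponds to the ratio $\exp(\beta)$, which is read as a percentage change (as in the 1.04 just quoted). A minimal Python sketch with hypothetical numbers (the paper's own code is in Stata, Appendix A):

```python
import numpy as np

units = np.array([0, 4, 12, 30])   # hypothetical weekly alcohol units
y = np.log(units + 1)              # the trial's outcome scale

beta_hat = 0.039                   # illustrative log-scale arm difference
ratio = np.exp(beta_hat)           # geometric-mean ratio of (TOT-AL + 1)
print(f"ratio = {ratio:.2f}")      # ~1.04, i.e. a 4% increase
```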
Missing outcome data: methods

We propose a modelling strategy that starts with simple data on baseline and outcome and then progressively adds in intermediate outcomes, website use and ease-of-contact data. For the $i$th participant, let $z_i$ denote their randomised group, $x_i$ a vector of baseline covariates (assumed complete), $y_{i1}, y_{i2}$ the outcomes at the two follow-up times and $r_{i1}, r_{i2}$ indicators of whether each outcome was observed (1) or missing (0). Our methods generalise easily to more than two follow-up times. If the data were complete, then an adjusted analysis for the outcome at follow-up time 2 would estimate $\beta$ in the model

$$y_{i2} = \alpha + \beta z_i + \gamma' x_i + \varepsilon_{i2}. \quad (1)$$

An unadjusted analysis is the same without the $\gamma' x_i$ term.

Complete cases

A first analysis fits model (1) in the subset with $r_{i2} = 1$, the 'complete cases'. This analysis is inefficient because it does not make use of individuals with $y_{i2}$ missing but $y_{i1}$ observed. It is valid if the model is correctly specified and the data are 'covariate-dependent missing completely at random' [24]: that is, if the missing data mechanism depends only on the baseline covariates included in the model. If the model is incorrectly specified (e.g. if it should contain a nonlinear function of $x_i$), then the analysis is in general valid only if the data are MCAR within randomised groups [25].

Using repeated outcome measures

We now consider three methods that jointly model both $y_{i1}$ and $y_{i2}$. These are valid if the data $(y_{i1}, y_{i2})$ are MAR given $(z_i, x_i)$: in particular, for participants with $y_{i1}$ observed, dropout at follow-up time 2 is now allowed to depend on $y_{i1}$. A generalised estimating equations (GEE) approach [26] fits the model

$$y_{it} = \mu_{it} + \varepsilon_{it}, \qquad \mu_{it} = \alpha_t + \beta_t z_i + \gamma_t' x_i, \qquad t = 1, 2, \quad (2)$$

$$\mathrm{var}(\varepsilon_{it}) = \sigma^2. \quad (3)$$

Normality is not assumed, but the residual variance $\sigma^2$ is assumed to be equal at the two times. Estimation uses the standard estimating equations [26]; the parameter of main interest is $\beta_2$. If the model is misspecified (in particular, if the residual variance is different at the two times), then valid standard errors can still be obtained by the robust (sandwich) method. With incomplete data, point estimates for a correctly specified model are valid if the data are MAR, whereas point estimates for an incorrectly specified model are valid if the data are MCAR; weighted estimating equations can relax the latter condition to MAR [27]. We allow the coefficients in the two components of (2) to be different: this amounts to allowing interactions between time and the baseline variables $x$ and $z$. In general, it is best to use an unstructured working correlation matrix; with only two time points, this is the same as an exchangeable working correlation matrix.

A mixed models approach [28] modifies the model defined by (2) and (3) by adding the distributional assumption

$$(y_{i1}, y_{i2})' \sim N\bigl((\mu_{i1}, \mu_{i2})', \Sigma\bigr) \quad (4)$$

and replacing (3) with an unconstrained variance-covariance matrix $\Sigma$. The model is estimated using restricted maximum likelihood. It may be appropriate to allow $\Sigma$ to differ by randomised group.

In multiple imputation (MI), several completed data sets are produced by drawing the missing values from their posterior predictive distribution, thus acknowledging the uncertainty due to missing data under a MAR assumption [29,30]. This can be carried out using model (4). It is often sensible to draw imputations separately for each trial arm, because interactions between randomised group and baseline covariates may be of interest [31]. It is sometimes considered that MI offers a way to include all randomised individuals in the analysis (e.g. [32]). However, if the imputation model is the same as the analysis model, then MI is expected to give approximately the same results as a mixed model analysis [33].
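To make the repeated-measures approaches concrete, here is a hypothetical sketch in Python with statsmodels (the paper's own implementations are in Stata, Appendix A.1). It simulates MAR dropout at time 2 that depends on the observed $y_{i1}$, fits a GEE with time-specific coefficients as in model (2) (with two time points the exchangeable working correlation equals the unstructured one), and runs a chained-equations MI; all variable names are illustrative, not from the trial.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.imputation import mice

rng = np.random.default_rng(2)
n = 500
wide = pd.DataFrame({"id": np.arange(n),
                     "z": rng.integers(0, 2, n),
                     "x0": rng.normal(size=n)})
wide["y1"] = 0.3 * wide.x0 + 0.1 * wide.z + rng.normal(size=n)
wide["y2"] = 0.3 * wide.x0 + 0.2 * wide.z + 0.4 * wide.y1 + rng.normal(size=n)
# MAR dropout at time 2: depends only on the observed y1
wide.loc[rng.random(n) < 1 / (1 + np.exp(0.5 - wide.y1)), "y2"] = np.nan

# GEE on long data with time-specific intercepts and slopes, as in model (2)
long = wide.melt(["id", "z", "x0"], ["y1", "y2"], "time", "y").dropna()
gee = smf.gee("y ~ C(time) + C(time):z + C(time):x0", groups="id", data=long,
              cov_struct=sm.cov_struct.Exchangeable(),
              family=sm.families.Gaussian()).fit()
print(gee.params.filter(like=":z"))          # beta_1 and beta_2

# Chained-equations MI, then the analysis model (1) on each imputed set
imp = mice.MICEData(wide[["y2", "y1", "z", "x0"]])
mi_fit = mice.MICE("y2 ~ z + y1 + x0", sm.OLS, imp).fit(10, 20)
print(mi_fit.summary())                      # Rubin's-rules combined results
```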
Using compliance

One way to make the MAR assumption more plausible is to introduce other post-randomisation variables $v_i$ into the analysis. Specifically, we now assume that $(y_{i1}, y_{i2}, v_i)$ are MAR given $(z_i, x_i)$, so that observed values of $v_i$ are allowed to explain missingness of $(y_{i1}, y_{i2})$. Here, we take $v_i$ as the amount of intervention received (compliance), because this is likely to predict both outcomes $(y_{i1}, y_{i2})$ and responses $(r_{i1}, r_{i2})$, but $v_i$ could also include trial outcomes that are more completely observed than $y_{i2}$. In the mixed model approach, $v_i$ can be included using the extended model

$$(y_{i1}, y_{i2}, v_i)' \sim N\bigl((\mu_{i1}, \mu_{i2}, \mu_{i3})', \Sigma\bigr), \quad (5)$$

where $\mu_{i1}$ and $\mu_{i2}$ are still as defined in (2) and $\mu_{i3} = \alpha_3 + \beta_3 z_i + \gamma_3' x_i$. $\Sigma$ is modelled completely flexibly. The GEE approach is similar but without the normality assumption; $\Sigma$ is modelled with an unstructured working correlation and equal variances, so it is advisable to scale the compliance variables to have variances similar to the outcome variables. Including $v_i$ is easiest under the MI approach, because it can simply be included in the imputation model and excluded from the analysis model. In many trials, compliance has a very different distribution across the two arms and may have different meaning. In this case, it is important to allow the association between $v_i$ and $(y_{i1}, y_{i2})$ to vary by randomised group. This is most conveniently carried out in MI, by imputing separately by arm; it cannot be carried out using standard GEE implementations, but a mixed model could allow $\Sigma$ in (5) to depend on $z_i$.

Sensitivity analyses

The aforementioned models attempt to make a MAR assumption more plausible by including more data in the analysis [34]. However, MAR often remains at least questionable, if not implausible [35]. We now consider sensitivity analyses to departures from MAR. Following Kenward et al. [4], we embed the MAR model in a wider family of MNAR models indexed by one or more 'informative missingness parameters' that express the magnitude of departures from MAR. We then use subject-matter knowledge to specify possible values of the informative missingness parameters and re-estimate the intervention effect in each case. We use a pattern-mixture model [36] that extends Equation (1) by allowing a term $\delta_Y$ that controls departures from MAR:

$$y_{i2} = \alpha_{CC} + \beta_{CC} z_i + \gamma_{CC}' x_i + \delta_Y (1 - r_{i2}) + \varepsilon_{i2}. \quad (6)$$

The regression parameters subscripted CC can be estimated by fitting Equation (1) to the complete cases ($r_{i2} = 1$), but $\delta_Y$ is not identified by the data. An important extension allows the informative missingness parameter $\delta_Y$ to differ between randomised groups:

$$y_{i2} = \alpha_{CC} + \beta_{CC} z_i + \gamma_{CC}' x_i + (1 - r_{i2})\{\delta_{Y1} z_i + \delta_{Y0}(1 - z_i)\} + \varepsilon_{i2}. \quad (7)$$

This model is plausible because, for example, missing data may well be more informative among individuals who have been encouraged to change their behaviour than among controls, and is important because treatment effects are most affected when departures from MAR behave differently in the two arms [37]. Parameters $(\delta_{Y0}, \delta_{Y1})$ are not identified by the data: $\delta_{Y0}$ is the mean difference between unobserved and observed outcomes in the control arm, adjusted for $x$, and $\delta_{Y1}$ is the corresponding difference in the intervention arm. This model has previously been used with an informative prior distribution for $(\delta_{Y0}, \delta_{Y1})$ that was elicited from investigators [37]. In the present paper, investigators' views are used to define plausible values of $(\delta_{Y0}, \delta_{Y1})$ for sensitivity analysis, rather than tackling a fully Bayesian analysis. It is useful to consider three sensitivity analyses: first, to common values $\delta_{Y0} = \delta_{Y1}$; second, to values of $\delta_{Y0}$ with $\delta_{Y1} = 0$; and third, to values of $\delta_{Y1}$ with $\delta_{Y0} = 0$. Writing the resulting intervention effect as $\theta = \theta_{CC} + \theta_{ADD}$, where $\theta_{CC}$ is the complete-case estimate and $\theta_{ADD} = \delta_{Y1}\bar{p}_1 - \delta_{Y0}\bar{p}_0$ with $\bar{p}_z$ the fraction of missing outcomes in arm $z$, it is easy to estimate $\theta_{CC}$ and $\theta_{ADD}$. Finally, the two estimated components are independent, so we can estimate $\mathrm{var}(\hat\theta) = \mathrm{var}(\hat\theta_{CC}) + \mathrm{var}(\hat\theta_{ADD})$.
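One way to operationalise the sensitivity analysis under model (7) is sketched below in Python, continuing the hypothetical `wide` data frame from the previous sketch (the paper's own rctmiss command in Stata does this properly): the complete-case effect is shifted by the chosen $(\delta_{Y0}, \delta_{Y1})$, weighted by the arm-specific fractions of missing data.

```python
import numpy as np
import statsmodels.formula.api as smf

def delta_adjusted_effect(wide, d0, d1):
    """Pattern-mixture estimate under model (7): complete-case effect
    plus the shift implied by the informative missingness parameters."""
    cc = smf.ols("y2 ~ z + y1 + x0", data=wide).fit()  # drops missing y2
    miss = wide.y2.isna()
    p0 = miss[wide.z == 0].mean()   # fraction missing, control arm
    p1 = miss[wide.z == 1].mean()   # fraction missing, intervention arm
    # NB: a valid standard error must add the variance of the delta term
    return cc.params["z"] + d1 * p1 - d0 * p0

# e.g. unobserved means 50% higher than observed in both arms
print(delta_adjusted_effect(wide, np.log(1.5), np.log(1.5)))
```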
Using the number of attempts

Instead of specifying the informative missingness parameter(s) based on subject-matter knowledge, it may be possible to estimate them using data on the number of attempts made to observe outcome $y_{i2}$. Let $r_{i2k}$ be the outcome of the $k$th attempt to observe the primary outcome $y_{i2}$, where $r_{i2k} = 1$ indicates that the outcome was observed, $r_{i2k} = 0$ indicates that it was not observed, and $r_{i2k}$ is undefined if the $k$th attempt was not made (either because a previous attempt was successful or because the participant had refused or withdrawn from the trial). The association between $r_{i2k}$ and $y_{i2}$ can be identified using Alho's model, which assumes that this association is the same for all $k$ [38]:

$$\mathrm{logit}\, P(r_{i2k} = 1) = \kappa_k + \lambda'(z_i, x_i')' + \delta_R y_{i2}. \quad (8)$$

Here, we allow the probability of responding to vary between attempts, but we assume that the association between fully observed covariates and responding is the same at all attempts, although the latter assumption could easily be relaxed. $\delta_R$ is an informative missingness parameter, and $\delta_R = 0$ corresponds to MAR. Estimation of model (8) uses data on individuals with observed outcomes together with the numbers and baseline covariates of individuals with unobserved outcomes. The model may be fitted using a conditional likelihood supplemented by a set of estimating equations [38], but this algorithm is not guaranteed to converge. Alternative estimation methods are based on the full likelihood for model (8) jointly with model (4). A Bayesian approach has been used [39], and a likelihood-based approach is also possible; the likelihood involves integrating out the unobserved values of $y_{i2}$. Fitting the model by using the full likelihood directly estimates the parameters of (4); an alternative is to use the inverse of the response probability as a weight for analysis of complete cases [38], but care must be taken to obtain standard errors that allow for the often large uncertainty in the weights [39]. As in the previous section, an important extension to model (8) allows the informative missingness parameter $\delta_R$ to differ between randomised groups:

$$\mathrm{logit}\, P(r_{i2k} = 1) = \kappa_k + \lambda'(z_i, x_i')' + \delta_{R z_i} y_{i2}, \quad (9)$$

where $\delta_{R z_i} y_{i2}$ can also be written as $\delta_{R1} z_i y_{i2} + \delta_{R0}(1 - z_i) y_{i2}$.
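The inverse-probability-weighting idea mentioned above can be sketched simply under MAR in Python, continuing the same hypothetical `wide` data frame (identifying $\delta_R$ from the attempt data itself requires specialised code such as the authors' alho command): fit a response model, weight the complete cases by the inverse fitted response probability, and use a robust variance. As the text notes, this still understates the uncertainty in the estimated weights.

```python
import statsmodels.formula.api as smf

wide["r2"] = (~wide.y2.isna()).astype(int)   # response indicator for y2
resp = smf.logit("r2 ~ z + x0 + y1", data=wide).fit(disp=0)
wide["w"] = 1.0 / resp.predict(wide)         # inverse response probability

cc = wide.dropna(subset=["y2"])              # complete cases only
ipw = smf.wls("y2 ~ z + y1 + x0", data=cc, weights=cc.w).fit(cov_type="HC1")
print(ipw.params["z"], ipw.bse["z"])
# HC1 ignores the estimation of the weights; fuller standard errors
# should allow for the (often large) uncertainty in them.
```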
Implementation

In the DYD trial, $z_i = 1$ for individuals randomised to the interactive website and 0 for the control website, and $(y_{i1}, y_{i2})$ are the 1-month and 3-month alcohol consumption outcomes (log(TOT-AL + 1)). All analyses were performed both unadjusted and adjusted for $x_i$, which comprises the baseline variables in Table I; in general, we prefer the analysis adjusted for baseline covariates, especially the baseline value of the outcome. For GEEs, robust standard errors were used. For mixed models, the unstructured variance-covariance matrix was allowed to differ between randomised groups, although results with a common variance-covariance matrix (not shown) were very similar. Multiple imputations were drawn separately for each arm by using the chained equations approach [40] implemented in Stata [41,42]. For method MI1, the imputation model for the outcome at each time was a linear regression including the outcome at the other time and the baseline variables. For method MI2, the imputation models additionally included the log of one plus the number of pages hit at 1 and 3 months and the log of one plus the number of login sessions at 1 and 3 months. The distribution of the incomplete variables was not fully Normal, even after log transformation, so predictive mean matching was used to improve the imputations [31]. In each case, model (1) was fitted to each of 50 imputed data sets, and the results were combined using Rubin's rules [29]. Covariates were used in the imputation model even when unadjusted analyses were performed.

For sensitivity analyses, the views of five DYD investigators were quantified before the trial results were known, and these views were used to choose values of the informative missingness parameters $(\delta_{Y0}, \delta_{Y1})$ in Equation (7). When the informative missingness parameters were assumed the same in both arms, the investigators believed that the mean of the unobserved responses for alcohol consumption at 3 months could be as much as 75% more or 50% less than the mean of the observed responses: these suggest the sensitivity analyses $\delta_{Y0} = \delta_{Y1} = \log 1.75$ and $\delta_{Y0} = \delta_{Y1} = \log 0.5$. When the data were assumed to be informatively missing only in the control arm, the investigators believed that the mean of the unobserved responses could be as much as 50% more or 50% less than the mean of the observed responses: these suggest the sensitivity analyses $(\delta_{Y0}, \delta_{Y1}) = (\log 1.5, 0)$ and $(\log 0.5, 0)$. We also chose the corresponding cases with the data informatively missing only in the intervention arm: $(\delta_{Y0}, \delta_{Y1}) = (0, \log 1.5)$ and $(0, \log 0.5)$. In addition to these rather extreme sensitivity analyses, we also used more moderate sensitivity analyses with $(\delta_{Y0}, \delta_{Y1}) = (\log 1.5, \log 1.5)$, $(\log 1.25, 0)$ and $(0, \log 1.25)$. Analysis of the number of attempts used the number of email reminders that were sent to each participant. The conditional likelihood algorithm diverged in some cases, so we used the maximum likelihood approach. 'Alho 1' and 'Alho 2' refer to models (8) and (9), respectively. Stata code for these analyses is given in Appendix A.1.

Results

We summarise the results in Table III and display the covariate-adjusted results in Figure 1. The intervention effect is expressed as the ratio of the geometric mean alcohol consumption (plus 1 unit/week) at 3 months in the intervention group to the corresponding geometric mean in the control group. In the following text, we interpret these figures as percentage increases or decreases. All methods based on MAR, as well as complete-cases analysis, give very similar results: the point estimate represents a non-significant increase of between 4% and 12% due to the intervention, with a 95% confidence interval that does not extend below a 6% reduction. Sensitivity analyses show that the estimated intervention effect is not very sensitive to departures from MAR when the informative missingness parameter is assumed to be equal across randomised groups, but is very sensitive to departures from MAR that occur differently in the randomised groups. Moderate sensitivity analyses (indicated by * in Table III and Figure 1) yield estimates ranging from an 8% reduction to a 23% increase in alcohol consumption, whereas more extreme sensitivity analyses range from a 32% reduction to a 56% increase. This suggests that the trial's results are only robust to departures from MAR that are similar in both randomised groups. Using the number of email reminders and the MNAR models (8) and (9) gives the estimates of the informative missingness parameters in Table IV.
For model (8), where the informative missingness parameter is assumed equal across the two groups, the negative estimate of the informative missingness parameter $\delta_R$ suggests that heavier drinkers are more likely to be non-responders. However, the informative missingness parameter is not significantly different from zero, so the data are consistent with a MAR assumption. For model (9), where the informative missingness parameter is allowed to differ between groups, both estimates are again negative and that for the intervention group is larger in magnitude, suggesting that the tendency for heavier drinkers to be non-responders may be greater in the intervention group. Although the informative missingness parameter in the intervention group is significantly different from zero ($P = 0.03$), a test for a difference between the arm-specific informative missingness parameters is not significant ($P = 0.26$), nor is a test on 2 degrees of freedom for departure from MAR ($P = 0.09$). These results do not provide good evidence against a MAR assumption, but they change the estimated intervention effects in Table III when the informative missingness parameter is allowed to differ across randomised groups as in model (9). This MNAR analysis indicates a much larger increase due to intervention in the unadjusted analysis, and much wider confidence intervals in both unadjusted and adjusted analyses. We attribute these findings to the great sensitivity of estimated intervention effects to differences in the informative missingness parameter $\delta_R$ between randomised groups, along with the difficulty of estimating this parameter.

Incomplete uptake of interventions: methods

Intervention receipt in some randomised trials can be summarised as a binary variable [43], whereas other trials have complex intervention receipt that may be summarised as one or more quantitative variables. We present a SMM that is applicable to both binary and quantitative cases, provided that intervention receipt is univariate.

Structural mean model

Structural mean models describe the relationship between the observed data and the counterfactual data that would have been observed with a different random allocation [12,44]. For the $i$th individual, we define $y_i(1)$ as the outcome that would be observed if they were randomised to intervention and $y_i(0)$ as the corresponding outcome if they were randomised to control. Exactly one of these potential outcomes is observed for each individual. Define $d_i(1)$ as the $i$th individual's compliance (binary or quantitative) with the intervention, if they were allocated to intervention. We initially ignore compliance with the control. We now assume that the causal effect of the intervention is proportional to the compliance. This implies that individuals who would be complete non-compliers if allocated to intervention have no effect of allocation, the 'exclusion restriction' assumption. We then have the SMM

$$y_i(1) = y_i(0) + \psi\, d_i(1) + e_i, \quad (10)$$

where $e_i$ is a zero-mean error term whose presence allows treatment effects to vary between individuals. Model (10) implies that

$$E[y_i(1)] = E[y_i(0)] + \psi\, E[d_i(1)]. \quad (11)$$

Estimation proceeds by noting that randomised group $z_i$ is independent of the potential outcomes $y_i(1)$, $y_i(0)$ and $d_i(1)$, so each expectation in (11) can be computed in one arm of the trial. This leads to an estimating equation in which $d_i = d_i(1)$ or 0 for individuals randomised to intervention or control, respectively. Baseline covariates $x_i$ that are uncorrelated with $z_i$ and $e_i$ may be used in two ways to improve the efficiency of the estimation procedure.
First, we can condition on $x_i$ in (11) and model $E[y_i(0) \mid x_i] = \alpha + \gamma' x_i$, yielding the alternative estimating equation, to which we add standard estimating equations for $\alpha$ and $\gamma$ [45], giving

$$y_i = \alpha + \psi\, d_i + \gamma' x_i + \text{error}. \quad (12)$$

Equation (12) is easy to estimate because it is a standard instrumental variables (IV) model [46], in which $z_i$ is the instrument, $d_i$ is the 'endogenous' variable and $x_i$ is the 'exogenous' variable. A second approach, not adopted here, makes use of baseline covariates $w_i$ that predict $d_i$ in the intervention group. In this case, precision can be gained by using the interactions $z_i w_i$ as additional instruments, but at the cost of further assumptions [47]. Again, this can be fitted using standard IV software.

Interpretation

Interpreting the estimated parameter $\psi$ is easiest when compliance is binary. In the aforementioned model, this means that $d_i(1)$ is 0 or 1. In the statistical literature, the two groups formed are often known as 'compliers' ($d_i(1) = 1$) and 'non-compliers' ($d_i(1) = 0$), referring to an individual's compliance status if they were randomised to the intervention. In this setting, the 'exclusion restriction' assumption, which identifies the model, states that randomised allocation has no effect on non-compliers. The parameter $\psi$ can be interpreted without further assumptions as the CACE, the average of $y_i(1) - y_i(0)$ over the subgroup with $d_i(1) = 1$ [48]. If compliance is not naturally binary, it is tempting to dichotomise it. Clearly, a good definition of 'compliers' is needed. It is not necessary to assume that compliers all receive the same benefit of intervention, because the CACE represents an average over all compliers. However, it is essential to assume that non-compliers receive no benefit from intervention. It is therefore typically necessary to use a restrictive definition of non-compliance, classing any individual whose moderate compliance could have brought him or her benefit with the compliers. An alternative approach is to express compliance $d_i(1)$ quantitatively. This typically makes the exclusion restriction more plausible, because the zero level can be chosen to represent no use of the intervention. However, without further assumptions, no simple interpretation of $\psi$ generalises the CACE. The further assumption usually made is $E[e_i \mid d_i(1)] = 0$, so that model (10) is correctly specified. In this case, $\psi d$ can be interpreted as the average causal effect of allocation to intervention in the subgroup who would comply to an extent $d$: that is, $E[y_i(1) - y_i(0) \mid d_i(1) = d] = \psi d$.

Using control group compliance

When the control group also receives some intervention, such as a placebo or a standard treatment, control-group compliance $d_i(0)$ is also available. The SMM could then be extended as

$$y_i(1) = y_i(0) + \psi_1 d_i(1) - \psi_0 d_i(0) + e_i, \quad (13)$$

which allows for a causal effect of the control intervention. The original approach to this problem assumed that $d_i(1)$ is a monotonic function of $d_i(0)$ [49], but the method is very sensitive to departures from this assumption [50]. More recent causal estimation methods for models such as (13) use either an assumption that $d_i(0) \le d_i(1)$ for all $i$ and Bayesian modelling with slightly informative priors [51], or informative priors for one of the treatment effects [52], or covariates that predict $d_i(0)$ and $d_i(1)$ differently but that do not modify the causal effect of treatment [45]. Because of the complexities of all these approaches, we would prefer to ignore $d_i(0)$ when it is plausible that the control intervention has no causal effect (i.e. that $\psi_0 = 0$ in (13)).
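Equation (12) can be estimated with any IV routine; the following hand-rolled two-stage least squares sketch (Python/numpy, simulated data with a hypothetical unmeasured confounder u) shows the mechanics. Naive second-stage standard errors need the usual IV correction, so only the point estimate is returned.

```python
import numpy as np

def tsls(y, d, z, x):
    """2SLS for y = alpha + psi*d + gamma'x + error, instrumenting the
    compliance d with randomised group z, as in Equation (12)."""
    ones = np.ones(len(y))
    # First stage: project compliance on the instrument and covariates
    Z = np.column_stack([ones, z, x])
    d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
    # Second stage: replace d by its first-stage projection
    X = np.column_stack([ones, d_hat, x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]   # psi

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
z = rng.integers(0, 2, n)
u = rng.normal(size=n)                       # confounds uptake and outcome
d = z * np.clip(1 + 0.5 * u + rng.normal(size=n), 0, None)  # d = 0 in control
y = 0.5 * d + x + u + rng.normal(size=n)
print(tsls(y, d, z, x))                      # close to the true psi = 0.5
```

A naive regression of y on d would be biased upwards here because heavy users (large u) also have better outcomes; randomisation z breaks that link.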
Missing data

Missing outcome data complicate estimation of the IV model. Standard implementations of IV are restricted to using complete cases only and are thus valid only under MCAR. Three approaches can be used to make them valid under MAR. First, inverse probability weighting (IPW) can be used [53,54]. Models are constructed for $p(r_i \mid x_i, z_i, d_i)$ and $p(r_i \mid x_i, z_i)$, and the 'stabilised weights' [55] $p(r_i = 1 \mid x_i, z_i)/p(r_i = 1 \mid x_i, z_i, d_i)$ are used in a weighted complete-case analysis. Second, the 'adjusted treatment received' (ATR) method [56,57] is equivalent to IV regression for complete data and is valid when outcomes are MAR [54]. In this method, a linear regression model is first constructed for actual treatment receipt on randomised group and covariates (using all observations, including those with missing $y_i$), and the residuals are estimated. The causal effect of actual treatment receipt is then estimated by linear regression of $y_i$ on actual treatment receipt, adjusting for the previously estimated residuals and the covariates. The standard errors from this second stage may be underestimated because they ignore uncertainty in the residuals [54]. Third, MI can be used.
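A sketch of the ATR method just described (Python, simulated data as in the previous sketch, now with MAR missingness depending on the baseline covariate): the first-stage regression of treatment received uses all randomised individuals, and the outcome regression on complete cases adjusts for the first-stage residual. Second-stage standard errors again ignore the uncertainty in the residuals.

```python
import numpy as np

def atr(y, r, d, z, x):
    """Adjusted treatment received: first stage on everyone, outcome
    stage on complete cases with the first-stage residual as covariate."""
    ones = np.ones(len(y))
    Z = np.column_stack([ones, z, x])
    resid = d - Z @ np.linalg.lstsq(Z, d, rcond=None)[0]  # all individuals
    obs = r.astype(bool)                                  # complete cases
    X = np.column_stack([ones[obs], d[obs], resid[obs], x[obs]])
    return np.linalg.lstsq(X, y[obs], rcond=None)[0][1]   # causal effect

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)
z = rng.integers(0, 2, n)
u = rng.normal(size=n)
d = z * np.clip(1 + 0.5 * u + rng.normal(size=n), 0, None)
y = 0.5 * d + x + u + rng.normal(size=n)
r = rng.random(n) < 1 / (1 + np.exp(x - 1))   # MAR: depends on x only
print(atr(y, r, d, z, x))                     # close to 0.5
```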
Implementation

The DYD trial has complex intervention receipt: individuals could use the website on different numbers of occasions, for different lengths of time, and in different ways. Any attempt to estimate the effect of intervention receipt in such data relies on a plausible causal model describing how intervention receipt may affect outcomes. We describe one approach with dichotomised compliance and one with quantitative compliance. For dichotomised compliance, a non-zero cut-off was chosen, because almost all randomised individuals had at least one login (Table I). Section 5.2 argues for a relatively low cut-off, and we defined compliers as individuals who logged in more than once or accessed more than 10 pages of the website within the first month after randomisation. Our analyses therefore rest on the assumption that an individual who accessed fewer than 10 pages on only one occasion received no benefit, and they estimate the average benefit of the intervention website over a wide range of use. For quantitative compliance, we defined $d_i(1)$ in Equation (10) as the number of pages downloaded over the first month of the trial, but with an upper limit of 300 pages because we did not believe that use above this level would have further benefit. We did not use website uptake in the control group in the model, because the control website is unlikely to be effective. For the MI approach, we used the imputations constructed using compliance variables as in MI2 of Section 4. Stata code for these analyses is given in Appendix A.2.

Results

Of 1880 individuals allocated to intervention, 1461 (78%) were classed as compliers. As a result, the estimated CACE (Table V) was not very different from the ITT MAR estimate (Table III). IPW and ATR methods behaved very similarly. MI gave somewhat different results: although this is unexpected, it is consistent with the differences between MI1 and MI2 in Table III. Estimates of the causal effect per 100 pages downloaded were somewhat larger than the ITT estimates. This appears to be because the mean number of pages downloaded in the intervention group was 65, so the estimated effect of downloading 100 pages was approximately one and a half (100/65) times the ITT effect. The confidence interval for the intervention effect in these analyses does not extend below an 11% reduction.

Conclusions for the Down Your Drink trial

A concern with the DYD trial, and many other online trials, is that high rates of non-response and low intervention uptake make it hard to draw conclusions about the intervention's effectiveness. Our analyses in this paper show that the conclusions were not substantially affected under a range of assumptions about the missing data mechanism, except when we assumed that the informative missingness parameter differed between randomised groups. To the extent that the latter assumption may be implausible, our results appear reasonably robust. Similarly, conclusions were not substantially affected when we used causal models to consider the impact of downloading 100 website pages. The latter conclusion depends on a judgement that 100 website pages was a reasonable target for moderately conscientious website use. Our results therefore provide some support for the use of online trials in general, with two cautions: it is essential to consider informative missingness parameters that differ between randomised groups, and to consider how far the observed intervention uptake falls short of what might be hoped for. Analyses allowing for non-response and low intervention uptake are best specified in advance and included in the analysis plan.

Methodological conclusions

These methods are of potential use in all trials and not just online trials. When rates of missing data are low, sensitivity analysis may be enough to demonstrate that missing data are not a problem. In other cases, including intermediate or other outcomes and/or compliance variables in MAR analyses is a useful strategy, although treatment effect estimates may only be changed when the auxiliary variables are strongly associated with outcome [58]. Sensitivity analyses are always helpful but depend on expert consideration of the plausible degree of departure from MAR. Data on the number of attempts to obtain data, or more generally on ease of contact, are often recorded and should be more widely used in analysis: results from the Alho model (8) or (9) can be a useful way to allow for extra uncertainty due to the possibility of MNAR data without the need to rely on expert opinion. With pressure on journal space, it may be convenient for all these alternative analyses to be included in web appendices. In the primary publication of the DYD trial [23], which was based on more data than those used here, the primary analysis was the adjusted complete-cases analysis, and web appendices presented alternative analyses for the missing data (a partial last observation carried forward (LOCF; see the next section), MI and sensitivity analyses using (7)) and analyses adjusting for non-compliance. A particularly relevant question in a trial with a 'negative' result is whether this negative result is attributable to incomplete intervention uptake. In this context, it is important to formulate the causal question carefully, defining a parameter such as the CACE or the causal effect of a particular amount of intervention, and then consider the limits of the confidence interval for the parameter.

Other methods

We have not reported here an analysis using LOCF, one of the most widely used techniques [3], which simply replaces missing outcomes with the last observed value. LOCF rests on an assumption that outcomes do not change (on average in each arm) after participants drop out of the study, which is often implausible.
In the DYD trial, with its large change in outcome after baseline, LOCF would yield implausibly different imputations for individuals with no post-baseline measurement and those with a 1-month measurement. Because LOCF is widely used, the primary DYD trial publication [23] reported a partial LOCF analysis that carried only post-baseline measurements forward. Unfortunately, the usual justifications for LOCF rest not on the plausibility of its assumption but on approximate constancy of observed outcomes, or on an appeal to the ITT principle, or on conservatism: none of these is valid [2]. Our sensitivity analyses were based on data from baseline and follow-up time 2 only. Basing the sensitivity analyses on the mixed model (4) might be preferable but is technically more complicated and is unlikely to make much difference in view of the small differences between complete-cases and MAR-based analyses (Table III). The Alho method could also be extended to allow for the repeated measures: for example, better estimating the informative missingness parameter $\delta_R$ by assuming it to be constant across follow-up times. Another possible assumption about the missing data is that they are 'latent ignorable', meaning that they would be MAR if the potential compliance $d_i(1)$ were observed for everyone [59].

Extensions

For binary outcomes, mixed models become more complex, and GEE or MI methods might be preferred. The MNAR methods can be applied equally well, and the $\exp(\delta)$ parameters can be interpreted as informatively missing odds ratios [60,61]. The SMMs described may still be used to estimate causal risk differences, but if causal odds ratios are wanted then generalised SMMs are needed [62]. Methods used for survival outcomes are typically very different from those that we have described. Here, missing data take the form of censoring, and the non-informative censoring assumption takes the place of the MAR assumption: departures from the non-informative censoring assumption are rarely considered but should be. SMMs are not suitable for survival outcomes, but the structural accelerated failure time model is a general alternative for handling incomplete intervention uptake [63,64], and hazard-based methods are available for handling all-or-nothing uptake [65]. In the DYD trial, all baseline covariates were complete. Incomplete baseline covariates are simply and efficiently handled by single imputation methods such as imputing the overall or centre-specific mean of the covariate [31]. Such simple methods would be inappropriate for missing outcomes: they are appropriate for missing baselines because baseline covariates are independent of randomised group, and adjustment for baseline covariates is not required for unbiased estimation [66].

Stata do-files to implement the analyses presented in this paper are given in Appendix A.

A.1. Missing data

The succeeding code uses four user-written commands. ice [67] and mim [68] implement MI and are available from the Statistical Software Components (SSC) archive. alho and rctmiss implement the 'number of attempts' model and sensitivity analyses, respectively, and are available from the first author's website by typing net from http://www.mrc-bsu.cam.ac.uk/IW_Stata/ in Stata. The code shows adjusted analyses; for unadjusted analyses, delete the global xvars and global time_xvars commands. The data are assumed to be in a file DYDwide.dta with one record per randomised individual.
A.2. Allowing for incomplete intervention uptake

The following code is for quantitative compliance; results with binary compliance are obtained by redefining treat.
Postoperative pain after one-visit root-canal treatment on teeth with vital pulps: Comparison of three different obturation techniques

Objectives. To investigate and compare postoperative pain after one-visit root canal treatment (RCT) on teeth with vital pulps using three different obturation techniques. Study Design. Two hundred and four patients (105 men and 99 women) aged 12 to 77 years were randomly assigned into three treatment groups: cold lateral compaction of gutta-percha (LC), Thermafil technique (TT), and Backfill-Thermafil obturation technique (BT). Postoperative pain was recorded on a visual analogue scale (VAS) of 0-10 after 2 and 6 hours, and 1, 2, 3, 4, 5, 6 and 7 days. Data were statistically analyzed using multivariate logistic regression analysis. Results. In the total sample, 83% of patients experienced discomfort or pain at some moment between RCT and the seventh day. The discomfort experienced was weak, light, moderate and intense in 6%, 44%, 20% and 6% of the cases, respectively. Mean pain levels were 0.4 ± 0.4, 0.4 ± 0.3, and 1.4 ± 0.7 in the LC, BT, and TT groups, respectively. Patients of the TT group experienced a significantly higher mean pain level compared to the other two groups (p < 0.0001). In the TT group, all patients felt some level of pain at six hours after RCT. Conclusions. Postoperative pain was significantly associated with the obturation technique used during root canal treatment. Patients whose teeth were filled with Thermafil obturators (TT technique) showed significantly higher levels of discomfort than patients whose teeth were filled using either of the other two techniques.

Key words: Postoperative pain, root-canal obturation, root-canal treatment, Thermafil.

Introduction

Pain is an unwanted yet unfortunately common sensation after root canal treatment (RCT), which commences a few hours or days after treatment and is always an unpleasant experience for both patients and clinicians (1)(2). Root canal procedures are commonly believed to be the most painful dental treatment (3). The incidence of postoperative pain after RCT, mainly mild discomfort, has been reported to range from 3% to 58% (4-6), but less than 12% of patients experienced severe pain (6). The reasons for postoperative pain can be many, including chemical, mechanical, or microbial injuries to the periapical tissues that result in acute inflammation (7). No significant difference in postoperative pain has been found when one-visit RCT was compared with two-visit treatment (2,8-11). Mechanical factors, including overinstrumentation or extrusion of root-filling materials, have been associated with the presence of postoperative pain (1,5), suggesting that root canal instrumentation and obturation techniques may influence postoperative pain. In fact, several studies have found a correlation between the root canal instrumentation technique and postoperative pain (12,13). Nevertheless, no study has analysed the influence of the obturation technique on postoperative pain. The aim of this study was to evaluate and compare postoperative pain after one-visit RCT using three different obturation techniques.

Patient selection

The Ethics Committee of the University approved the investigation. Consecutive patients (n = 338) attending a trained endodontist (LOA-E) for primary RCT on only one tooth were invited to participate in this prospective study.
All diagnoses were vital pulps: either asymptomatic irreversible pulpitis caused by carious exposure, or normal pulps in patients referred for intentional endodontic treatment for prosthetic reasons. The individual diagnosis was confirmed by obtaining the dental history, periradicular radiographs, periodontal evaluation, percussion, and cold test (EndoIce; Coltène/Whaledent Inc, Cuyahoga Falls, OH). Previous NSAID or antibiotic treatment was recorded. All patients were informed of the aims and design of the investigation, and the first 270 who agreed to participate and signed an informed consent were included in the study. Patients were supplied written instructions on how to assess and record the postoperative pain. However, only 204 patients (105 men and 99 women), with ages ranging from 12 to 77 yr (mean: 42 ± 14 yr; median: 40), could finally be analysed, because 66 subjects (dropout rate = 24%) did not complete and/or return the questionnaires.

Selection of the obturation technique

Ninety patients were randomly assigned to each one of the three obturation techniques: 1) treatment with cold lateral compaction of gutta-percha (group LC); 2) treatment with the Thermafil technique (group TT); and 3) treatment with the Backfill-Thermafil obturation technique (group BT). After dropout, 80 patients remained in the LC group, 61 in the TT group, and 63 in the BT group.

(Table 3 caption: Postoperative pain experienced after root canal treatment (RCT) using the Backfill-Thermafil obturation technique (BT). Patients (n = 63) completed a questionnaire containing a 10-cm visual analogue scale (VAS) (Huskisson 1974) to assess discomfort/pain at 2 and 6 hours and 1, 2, 3, 4, 5, 6 and 7 days after the RCT.)

In the LC group, a gutta-percha master cone coated with sealer was placed into the root canal and fitted to the working length. Then, the gap for accessory cones was created using a #25 finger spreader (Dentsply Maillefer, Ballaigues, Switzerland). Excess gutta-percha was removed using a warm excavator. The Thermafil technique, with a plastic carrier, was used to obturate the teeth of the TT group. A thin layer of AH Plus sealer was placed into the root canal with a paper point. A Thermafil obturator (taper .04), selected after verification, was heated in the ThermaPrep Plus oven (Dentsply, Maillefer, Ballaigues, Switzerland). The heated obturator was slowly inserted into the canal to the previously determined working length. A plugger was used to condense the coronal gutta-percha around the carrier until the gutta-percha hardened. Excess coronal gutta-percha and the plastic handle were removed with a round bur (ISO 016, Dentsply, Maillefer, Ballaigues, Switzerland) at 2000 rpm, without water cooling. Then, the gutta-percha was vertically condensed with pluggers no. 1/2 and 3/4 (Dentsply, Maillefer, Ballaigues, Switzerland). The teeth of the BT group were obturated with the modified master cone heat-softened backfilling technique (Backfill-Thermafil, BT), as described by Da Silva et al. (14). A gutta-percha master cone (taper .02, Maillefer), coated with AH Plus sealer, was first introduced into the canal. The master cone was condensed with a #25 finger spreader (Dentsply Maillefer, Ballaigues, Switzerland), and a Thermafil point size 04/50 was used for back-filling of the canal. Excess coronal gutta-percha and the plastic handle were removed with a round bur, and the root filling was vertically compacted as above. In the three groups, the teeth were temporized using a sterile cotton pellet and Cavit (3M, St Paul, MN, USA).
Pain/discomfort assessment

Each patient received instruction on how to use a questionnaire for the numeric and verbal evaluation of pain/discomfort (5). The questionnaire contained a 10-cm visual analogue scale (VAS) (15) to assess discomfort/pain at 2 and 6 hours and 1, 2, 3, 4, 5, 6 and 7 days after the RCT was completed. The questionnaire was to be completed and returned a week later, when patients came for check-up.

Statistical analysis

Raw data were entered into Excel (Microsoft Corporation, Redmond, WA, USA). The relationship between obturation techniques and clinical factors on the one hand and post-obturation pain on the other was analyzed using odds ratios as well as logistic regression models based on bivariate and multivariate analysis (p < 0.05). Student's t test was used to compare mean pain levels. The SPSS statistical software (version 11.0, SPSS Inc., Chicago, IL, USA) was used.

Results

Seventeen percent of patients showed no postoperative pain, but 83% experienced discomfort or pain at some moment between the intervention and the seventh day. The discomfort experienced was weak, light, moderate and intense in 6%, 44%, 20% and 6% of the cases, respectively. Table 2 describes the postoperative pain levels experienced by the participants when the LC obturation technique was used. Thirty per cent of patients experienced no pain at any time. The maximum postoperative pain level was 'light'. The highest percentages of patients feeling pain were found at 6 hours (70%) and the first day (6�%). The percentage of patients that felt pain decreased continuously from six hours, being almost negligible on the seventh day after treatment (3%). The maximum pain intensity, at six hours, was 4, in 2.5% of the patients (n = 2), and the mean pain level was 0.4 ± 0.4. Table 3 describes the postoperative pain levels experienced by the participants when the BT obturation technique was used. Nine per cent of patients experienced no pain at any time. No patient felt either moderate or intense pain, and the maximum postoperative pain level was 'light'. The highest percentages of patients feeling pain were found at 6 hours (91%) and the first day (76%). The maximum pain level in this group was 4, in 3% of the patients (n = 2); pain decreased from the first day and disappeared on the fourth day. The mean pain level was 0.4 ± 0.3. Postoperative pain levels when the TT obturation technique was used are described in Table 4. All the patients felt some level of pain at six hours after the treatment. The maximum postoperative pain level was 'intense'. Five percent of patients felt intense pain at six hours and 3% after one day. The highest percentages of patients feeling pain were found at 6 hours (100%) and after one day (�7%). The percentage of patients that felt pain decreased continuously but slowly from six hours to the seventh day. At the 7th day, 7% of patients still showed some pain. The maximum pain level was 8, in 3% of the patients (n = 2). The mean pain level was 1.4 ± 0.7, significantly higher than that found in the LC and BT groups (p < 0.0001). The percentages of patients feeling pain in the total sample and in each treatment group are shown in Figure 1. There were significant differences amongst the three techniques in relation to pain (p < 0.01). At every time point, the highest percentage of patients feeling pain corresponded to the TT obturation technique (p < 0.01).
The percentage of patients feeling postoperative pain in teeth obturated using the LC and BT techniques decreased significantly after the first day, but in the teeth obturated with the TT technique it remained over fifty percent at the 4th day and then decreased slowly. The mean pain level for all techniques was 0.71 ± 0.46 (Fig. 2). The Mann-Whitney U test revealed a statistically significant difference between the median pain intensities depending on the obturation technique (p < 0.01). The highest mean pain level was reported at six hours after the treatment for all obturation techniques. At every time point, the highest mean level of pain corresponded to the TT obturation technique (p < 0.01). Multivariate logistic regression analysis was run for the dependent variable 'presence of pain after 6 hours', adjusting for age, pulpal status, NSAID premedication, obturation technique and treatment length as covariates. The analysis showed that age (p = 0.03) and premedication with NSAIDs (p = 0.0021) were factors statistically associated with the presence of pain at 6 hours, but obturation technique did not correlate significantly (OR = 3.3; 95% C.I. 0.98-10.82; p = 0.053).
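As a sketch of how a multivariate logistic model of the kind just reported yields odds ratios and confidence intervals, the following Python fragment fits 'presence of pain after 6 hours' on age, NSAID premedication and obturation technique; the data frame here is a random stand-in, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "pain6h": rng.integers(0, 2, 204),               # 1 = pain at 6 hours
    "age": rng.integers(12, 78, 204),
    "nsaid": rng.integers(0, 2, 204),                # NSAID premedication
    "technique": rng.choice(["LC", "BT", "TT"], 204),
})

fit = smf.logit("pain6h ~ age + nsaid + C(technique, Treatment('LC'))",
                data=df).fit(disp=0)
out = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
out.columns = ["OR", "2.5%", "97.5%"]                # odds ratios and 95% CI
print(out)
```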
Discussion

The purpose of this study was to compare the postoperative pain after endodontic therapy on teeth with vital pulps using three different obturation techniques. As far as we know, this is the first study analysing this topic. The results of this study suggest that the TT obturation technique is significantly associated with higher postoperative pain levels. Mild discomfort after root canal treatment is a common experience for patients (1). The reasons for postoperative pain, however, can be many (7). The main causes are mechanical, chemical, or microbial injuries to the periapical tissues that result in acute inflammation. In a clinical investigation, it is difficult to determine whether a single factor or multiple factors elicit pain. If a root canal system was not cleaned properly, residual infection may cause exacerbation by imbalances in the host-bacteria relationship, synergistic or additive microbial interactions, or the presence of decisively pathogenic bacteria before the initiation of treatment (16). A mechanical reason may be overinstrumentation; chemical factors include the extrusion of medications, filling materials, or irrigants (5,17). In the present study, as only vital cases were included, persisting infection can be excluded as a cause of postoperative pain. One of the main problems in studying pain is the patient's subjective evaluation and its measurement. For this reason, the methodology used in assessing pain level is critical (18). In this study, as in other studies on endodontic postoperative pain (19-20), a VAS was used. In this study, pain was also verbally quantified for better understanding by patients. Postoperative pain is common after endodontic treatment, so it is very important for the dentist to control this pain as well as to know how widespread the problem is (21). Root canal treatment must be carried out taking into account that instrumentation and obturation techniques can provoke periapical damage. Furthermore, several reports associate the extrusion of filling material with the presence of postoperative pain (1,5,17). In this study, the cold lateral compaction technique (LC), obturation with Thermafil (TT) and a mixed obturation technique using Thermafil and a master gutta-percha cone (BT) (14) have been compared with regard to postoperative pain.

Results of the present study show that, although seventeen percent of patients showed no postoperative pain at any time, strikingly, 83% experienced some pain level during the week after the root canal treatment. This percentage of patients feeling pain is the highest reported in the literature (4)(5)(6). This result must be understood bearing in mind that pain levels 1 and 2 (weak pain) represent only 'postoperative discomfort'. In addition, the Hawthorne effect, i.e., the change in the behaviour of a subject because of the special attention and status received from participation in an investigation (22), may have influenced pain reporting, and extrusion of filling material has also been related to post-obturation pain (26). Da Silva et al. (14) found overfilling in all teeth obturated with the TT technique. However, Yesilsoy et al. (27) did not find a correlation between sealer extrusion and post-obturation pain prevalence. The type of root canal instrumentation technique can influence the discomfort or pain experienced during endodontic therapy. Goreva and Petrikas (12) reported that 'crown-down' preparation using rotary ProFile instruments and GT rotary files proved to be effective as regards prevention of postoperative pain. Makeeva and Turkina (13) analysed the effects of the method of mechanical root canal preparation on the emergence of pain after endodontic therapy. These authors compared sonic instruments (Sonic system), ultrasonic instruments (Satelec Suprasson system), rotary ProTaper and System GT instruments, as well as hand K-files, finding that the lowest risk of pain emergence after endodontic treatment occurs with canal widening by the crown-down technique. In the present study, NSAID medication prior to root canal treatment was associated with higher levels of postoperative pain. Without doubt, the use of NSAIDs correlates with the presence of preoperative pain. Several studies have established that preoperative pain is a major determinant of postoperative pain or flare-up (6,28,29). Nevertheless, in other published studies, the presence and severity of preoperative pain did not appear to have any significant effect on the prevalence of post-obturation pain (10). No correlation between pulpal status and postoperative pain levels was found, in agreement with the result of Harrison et al. (1). However, it has been found that root canal treatment is more painful in teeth with irreversible pulpitis (29,30). The outcome measure of future studies should be reported in terms of improvement or deterioration of pain level rather than mere prevalence of postoperative pain/flare-up (4).
Development of an IoT-Based (LoRaWAN) Tractor Tracking System

The use of new technologies and precision agriculture (PA) on farms has become more important due to the need for sufficient agricultural production for an increasing world population, set against decreasing farm areas. PA covers a wide range of technologies like sensors, microcontroller-based devices, machine-to-machine communication technologies and global positioning systems, but the investment costs of these devices are literally expensive, which becomes a constraint for farmers especially in developing countries. Internet of things (IoT) technology is a new era, and agricultural production will be one of the areas most affected by it. LoRaWAN is one of the new communication technologies for IoT which enables almost everything on the planet to be connected to the internet and deliver high amounts of data at no expense. In this research, by using the advantages of LoRaWAN, a new IoT-based tractor tracking system including a LoRaWAN module and web-based software was developed, and the test results were evaluated. As a result, it was found that the developed system was capable of measuring and sending tractor sensor data along with the geospatial position of the tractor and serving the data on the web-based user interface.

Introduction

The world population is expected to reach 9 billion people in 2050, and today's 3.75 tons per hectare of wheat production should be as high as 6.25 tons per hectare (Meola 2017). This projection necessitates a 70% increase in wheat production yield to feed that much population by 2050; however, it becomes more difficult to achieve this projection in countries where average farm areas are very small, the yield is low and PA technology use in farms is weak. Pierce & Novak (1999) have defined PA as doing the right thing, in the right place, at the right time. Without enabling precision agriculture on farms, yield and quality losses are inevitable. So, new technologies should be integrated into agricultural machines to implement PA on farms, in order to produce a higher quantity and quality of product despite the fact that farm areas are consistently decreasing.

Tractors are the main power source on farms, so new technologies should be integrated on them first to make farm practices more efficient. However, developing countries have huge barriers to implementing new technologies on their tractors. In these countries, farmers' income is very low, which forces farmers to use very old tractors that are incompatible with new high-tech PA devices (Onwunde et al. 2018). On the contrary, in developed countries farms are so big that farmers can use new high-technology tractors with several sensors and GPS (global positioning system) implemented.

In the USA it is known that GNSS (global navigation satellite systems) technologies are used by more than 80% of farmers, and the use of GPS-based automatic steering systems has moved from 6% to 78% from 2005 to 2017, which requires the use of newer tractors (Erickson et al.
2017). Moreover, the use of precision farming technologies has reached 35% in Europe (Das et al. 2019). In Turkey more than 45% of the 1.8 million tractors (Turkish Statistical Institute 2019a) are more than 25 years old, and 21.4% of all are more than 10 years old (Turkish Statistical Institute 2019b), which makes the integration of PA technology on tractors very difficult. The farms in the Adana region could be a good example of the need for implementation of new technologies on tractors; however, farmers have difficulty investing in these devices. According to a survey done in the Adana region with seven big farms, each one having more than 50 hectares of arable land and 3 tractors, it was revealed that in none of them was PA technology used on tractors (Civelek 2020). In a recent study to determine farmers' intention to use auto-guidance systems in the Adana region, it was found that 96.4% of the farmers did not use PA technologies, since they did not know about them (Keskin et al. 2018). In another piece of research conducted in the same region, although 35.9% of the farmers did not know about PA, 92.3% of them reported that they had followed new trends in agriculture; 61.5% of them were interested in satellite positioning systems, while fewer were interested in automatic steering or variable rate application systems (38.5% and 28.2%, respectively) (Keskin & Sekerli 2016). The results of the latest survey in the same region showed that although most of the participating farmers were using computers and smart phones, none of them had unmanned vehicles, whereas 3 out of 422 farmers were using sensor-equipped machinery (Saygılı et al. 2020).

Today's information technology covers digitalization, big data, IoT and blockchain, and IoT is expected to have the biggest effect on agriculture. The European Commission declares that the development of digitalization in agriculture depends on connecting tractors to the internet using IoT technology, either via 2G or via LPWAN (low-power wide area network), which have advantages like wide coverage area and low investment costs (European Commission 2017). In the literature, several studies were found based on the integration of IoT into agriculture. Some of these studies were a solar-powered automated IoT-based drip irrigation system (Barman et al. 2020), an IoT-based soil health monitoring and recommendation system (Bhatnagar & Chandra 2020), IoT-based technology for low-cost precision apiculture (Dasig Jr. & Mendez 2020), IoT-based smart tree management (Shabandri & Madara 2020) and frost prediction in highland crops management using IoT (Mendez & Dasig 2020), in which the importance of integrating IoT in agriculture was emphasized. However, from the literature research no evidence could be found related to the use of IoT technology on tractors. LoRaWAN communication technology was used in the hardware because it has several advantages over other IoT-based communication technologies, like transmitting data up to 5 km in urban and 15 km in suburban areas, with no registration requirement, which makes it free to use (Davcev et al. 2018; Sahana et al. 2020). The main objective of this manuscript was to explain the ease of adoption, performance, production and purchasing costs, and advantages of the developed IoT-based tractor tracking system, which consists of hardware and web-based software.
Materials

The proposed IoT-based tractor tracking system was designed to let farmers track and analyze tractor performance data along with geospatial data. The overall design of the system consisted of two parts: hardware and software.

The hardware was designed to connect several sensors, such as a fuel flow meter and a PTO (power take-off) torque meter, to measure fuel consumption and PTO power use. The hardware also had a GPS module to obtain the geospatial position of the tractor on the farm from GPS satellites. All of these tractor performance data were combined and sent to a server on the cloud using the LoRaWAN protocol (Figure 1). One of the reasons for using LoRaWAN was that in several rural areas GSM (global system for mobile communication) base station coverage is insufficient, leaving no internet connection.

Figure 1 - General view of the designed system

The hardware's circuit diagram and the design of the motherboard were developed using Proteus software. Designing the PCB (printed circuit board) required selecting suitable electronic components, such as resistors, capacitors, crystals, regulators and microprocessors. The selected components were placed and connected according to the requirements of the hardware, short-circuit tests were conducted in Proteus, and the PCB was then produced based on the overall design (Figures 2 and 3). Microchip's PIC18F46K22 microprocessor was used as the main microprocessor of the developed hardware. It gathered and processed data through its ADC (analogue-to-digital converter), to which several sensors were connected.

Microchip's RN2483 LoRaWAN modulation module was used as the data transmission module to send data to the database developed on the cloud. This module was capable of sending data using either the 868 MHz frequency for Europe or the 936 MHz frequency for the USA.

A MODBUS communication port was also added to the PCB for connecting a GSM module, to keep the hardware connected to the internet where LoRaWAN communication is unavailable. To use the MODBUS protocol, a MAX487 transceiver was embedded on the motherboard. This transceiver transmitted the gathered and processed data to the main microprocessor over the RS-485 port using the MODBUS protocol.
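Although the actual link-layer code runs as embedded C on the PIC18F46K22, the RN2483 itself is driven through a plain-text UART command set, which makes the transmission step easy to illustrate. The sketch below is a minimal, hypothetical example in Python with pyserial, run from a PC rather than the tractor hardware; the serial port name, the ABP join mode and the payload bytes are all assumptions, and the device address and session keys are assumed to have been provisioned on the module beforehand.

# Illustrative sketch (not the authors' CCS C firmware): driving an RN2483
# from a PC over UART using the module's plain-text command set.
import serial  # pyserial
import time

def send_cmd(link, cmd):
    """Send one RN2483 command line and return the module's first reply."""
    link.write((cmd + "\r\n").encode("ascii"))
    return link.readline().decode("ascii").strip()

link = serial.Serial("/dev/ttyUSB0", baudrate=57600, timeout=3)  # assumed port

print(send_cmd(link, "sys get ver"))   # firmware version string
print(send_cmd(link, "mac join abp"))  # "ok", then "accepted" on a later line
time.sleep(2)

# Pack example readings (e.g. fuel pulses and PTO torque) as hex and send
# them as an unconfirmed uplink on port 1.
payload = bytes([0x12, 0x34, 0x56, 0x78]).hex()
print(send_cmd(link, "mac tx uncnf 1 " + payload))  # "ok", then "mac_tx_ok"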
TELIT's SE868K7-A GPS module was chosen to obtain the real position data of the tractor (Figure 4). A small single-sided PCB was developed using Proteus software to carry the GPS module and connect it to the motherboard over connection pins with a 4-pin cable. The GPS module was controlled by the main microprocessor. The module received GPS data in different sentence formats. After the module had fixed on the required GPS satellites, the data streamed continuously to the main microprocessor, where they were parsed by the embedded software developed for it. From the parsed data, latitude, longitude and speed values were gathered together with the sensor data. Example: $GPRMC,084722.000,A,3702.5115,N,03521.6524,E,0.17,206.91,270219,,,A*66

Two analogue and two digital I/O (input/output) ports were added to the developed PCB so that different sensors could be connected to the hardware. The analogue ports were added to connect the PTO torque meter, which measures the torque and speed of the PTO for calculating the power required by the machine attached to the tractor. The digital input ports were added to connect the two fuel flow meters, one for the fuel consumed and one for the fuel returning to the tank from the injectors, so that net fuel consumption could be calculated. The module was designed to run on a 3.3 V battery so that it could send the geospatial position of the tractor even when the engine was not running. Since most of the sensors needed a higher voltage than the motherboard's supply voltage, the tractor's battery was also used to power the sensors; a power connection port was added to the developed hardware for this purpose. With the 7805CV regulator on the PCB, 5 V or 12 V DC power could be supplied, depending on the voltage required by the sensors.

A PIC16F1826 microprocessor was also embedded on the motherboard to record and send the fuel flow meters' data in case of an unexpected cut in the energy supplied from the tractor's battery due to a sudden engine stop. At the moment of energy loss, a capacitor soldered onto the motherboard supplied energy to the hardware for several milliseconds, giving the PIC16F1826 enough time to calculate the last measured fuel flow and send it to the main microprocessor.

The main microprocessor's embedded software was written using the CCS C compiler, and the microprocessor was programmed using a Microchip PICkit 3 circuit debugger. After programming, the software was put through a debugging cycle in MPLAB with the circuit debugger to find and correct errors.

To deliver the data from the developed hardware to the database on the cloud, Kerlink's Wirnet Station gateway was used in the trials (Figure 5). The gateway could use either the 868 or the 925 MHz frequency for connection and offered a coverage range of more than 15 km, easy installation and low power consumption.

Figure 5 - LoRaWAN gateway for data transmission

The developed web-based software consisted of a front-end interface, a back-end interface and a database. The database was developed using MySQL to store the data sent by the developed hardware to the cloud. It could store user information along with farm area and the makes and models of tractors and machines, and it also included several tractor performance data tables, such as Nebraska tractor test results, to guide farmers in selecting their tractors.
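The parsing step can be made concrete with the $GPRMC sentence quoted above. The following is a minimal sketch of that step, assuming the standard NMEA field layout; the real parser runs in CCS C on the main microprocessor, and Python is used here purely for readability.

# Minimal sketch of the GPRMC parsing step described above (illustrative only).
def parse_gprmc(sentence):
    """Extract latitude (deg), longitude (deg) and speed (km/h) from $GPRMC."""
    fields = sentence.split(",")
    if not fields[0].endswith("GPRMC") or fields[2] != "A":  # "A" = valid fix
        return None

    def dm_to_deg(dm, hemi):
        # NMEA packs coordinates as ddmm.mmmm (lat) / dddmm.mmmm (lon)
        point = dm.index(".")
        degrees = float(dm[:point - 2])
        minutes = float(dm[point - 2:])
        value = degrees + minutes / 60.0
        return -value if hemi in ("S", "W") else value

    lat = dm_to_deg(fields[3], fields[4])
    lon = dm_to_deg(fields[5], fields[6])
    speed_kmh = float(fields[7]) * 1.852   # knots -> km/h
    return lat, lon, speed_kmh

# The example sentence from the text:
print(parse_gprmc("$GPRMC,084722.000,A,3702.5115,N,03521.6524,E,0.17,206.91,270219,,,A*66"))

Running this on the example sentence yields approximately 37.0419° N, 35.3609° E and 0.31 km/h, consistent with a near-stationary receiver in the Adana region.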
The front-end interface of the software was developed for the web to give users the flexibility to access it from any type of device, such as a mobile phone, tablet or PC. It was therefore developed using the HTML (hypertext markup language) and PHP (hypertext pre-processor) languages. To record the data sent by the hardware into the database, an algorithm was developed in PHP using the JSON (JavaScript object notation) format (Algorithm 1). When a data packet was sent by the developed hardware over the gateway, the PHP file was triggered and the data were recorded into the database. Each data packet was sent in a combined format including the module's own data, such as the identifier used, battery status, frequency used, date, RSSI (received signal strength indicator) and SNR (signal-to-noise ratio) values, together with the transmitted tractor performance and GPS data. When the PHP file was triggered, the combined data were parsed into blocks and recorded under the corresponding headers of the table in the database.

Methods

As the developed hardware had two analogue ports for measuring tractor PTO torque and speed and two digital ports for connecting the two fuel flow meters, the measurement reliability of the hardware first had to be verified in laboratory tests. The hardware was put through calibration tests using these connection ports. For the analogue port tests, voltages from 1 to 10 V in 0.02 V increments were applied to each port using an AA Tech ADC-3303 voltage generator; the measurements were read using a Fluke 17B+ multimeter, recorded in an Excel sheet and evaluated statistically.

For the calibration of the digital ports, pulses at different frequencies were applied to the digital ports using a UNI-T UTG9005C pulse generator. Ten measurements, each over 30 seconds, were taken at six frequencies: 4, 10, 50, 1,000, 16,000 and 32,000 Hz.

For the continuous data transmission and battery drain tests, the developed hardware was left in the laboratory sending data packages every 2.5 minutes until the battery fully drained. The data packages were analyzed at the end of the battery's life, and the battery life was calculated from the time data recorded in the database.

For the data transmission and GPS tests, Kerlink's Wirnet Station gateway was set up outside the laboratory, and the developed hardware was taken to several places on the Cukurova University campus. The data packages sent and recorded into the database by the hardware were analyzed. The developed web-based software was used to confirm the data packages and the reliability of the front-end user interface, as shown in Figure 6. The tractor's geospatial data were also checked in the front-end user interface against the related data points (Figure 7). Lastly, SNR values were gathered and analyzed during the data transmission tests.

The developed hardware's bill of materials (BOM) was also calculated to compare its affordability with that of other devices available on the market.
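For concreteness, the server-side recording step described above (Algorithm 1, which the text does not reproduce) can be sketched as follows. The real implementation used a PHP endpoint writing to MySQL; this hypothetical version uses Python with the built-in sqlite3 module as a stand-in, and the column names and example values are illustrative assumptions, not the authors' actual schema.

# Hypothetical sketch of the recording step: parse one combined uplink packet
# and store each block under the matching column of the database table.
import json
import sqlite3

conn = sqlite3.connect("tractor.db")
conn.execute("""CREATE TABLE IF NOT EXISTS uplinks (
    dev_id TEXT, battery REAL, freq_mhz REAL, received_at TEXT,
    rssi INTEGER, snr REAL, fuel_lph REAL, pto_kw REAL,
    lat REAL, lon REAL, speed_kmh REAL)""")

def record_uplink(packet_json):
    """Parse one combined uplink packet and insert it as a database row."""
    p = json.loads(packet_json)
    conn.execute(
        "INSERT INTO uplinks VALUES (?,?,?,?,?,?,?,?,?,?,?)",
        (p["dev_id"], p["battery"], p["freq_mhz"], p["received_at"],
         p["rssi"], p["snr"], p["fuel_lph"], p["pto_kw"],
         p["lat"], p["lon"], p["speed_kmh"]))
    conn.commit()

record_uplink(json.dumps({
    "dev_id": "tractor-01", "battery": 3.28, "freq_mhz": 868.1,
    "received_at": "2019-02-27T08:47:22Z", "rssi": -97, "snr": -11.1,
    "fuel_lph": 6.4, "pto_kw": 18.2,
    "lat": 37.0419, "lon": 35.3609, "speed_kmh": 0.31}))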
Results and Discussion

For the analogue port calibration tests, 451 data points were gathered between 1 and 10 V. The results of the regression analysis for the analogue sensor test are given in Table 1. The ANOVA results showed that the measured voltage increased linearly, with an R² value of 0.99. Using the gathered test data, a calibration formula was developed (Equation 1), and the main microprocessor was programmed with this formula so that the analogue ports would report correct values. In Equation 1, y is defined as the applied voltage and x is the value measured by the developed hardware.

y = 0.01737357 + 0.000189173 × x (Eq. 1)

For the digital port calibration tests, the difference between the last two recorded pulse counts was calculated to obtain the exact pulse number over 30 seconds. The pulse-count differences at the measurement points in each frequency range varied linearly (Figure 8), and regression analyses showed that the changes over each 30-second interval were linear, with an R² value of 0.99 for each frequency (Table 2).

Data transmission and battery drain tests were also conducted to measure the reliability of the developed hardware. For this purpose, the hardware was left to send dummy data to the developed database. Two trials spanning six months in total were carried out; the hardware created 49,121 and 52,766 data points, and the batteries drained in 95 and 98 days, respectively, which is consistent with the assumptions given by Aqeel-Ur-Rehman et al. (2014).

The GPS tests conducted outside the laboratory revealed that the position of the developed hardware was sensed with a maximum error of 3 meters, as declared by the producer of the GPS module. During these tests, the SNR values of the data sent by the hardware were also measured and analyzed. The average SNR value was -11.13±1.21 dB (95% confidence interval) with a standard deviation of 2.47, similar to the results for the LoRaWAN-based IoT device developed for personal mobility vehicles by Santa et al. (2019).

The BOM cost calculation showed that the hardware could be produced for $55, including PCB manufacturing costs and taxes but excluding software development, at May 2020 prices. By comparison, one tractor manufacturer's 4G LTE (long-term evolution) based device was priced, new, at $795 on the same date, including taxes and excluding the service subscription. IoT-based devices can therefore be produced more cheaply than commercially available ones and could be competitive with 4G LTE-based devices. However, according to information gathered from dealers, some commercially available devices do not meet the communication frequency regulations declared by the government in every country, so farmers cannot use them on their tractors even if they can afford them.
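Before moving to the conclusions, the least-squares fit behind Equation 1 can be illustrated computationally. The sketch below fabricates plausible (measured, applied) pairs by inverting the coefficients of Eq. 1 and adding noise — the 451 real calibration readings are not published — and then refits the line and reports R². Note that the magnitude of the fitted slope suggests x is a raw port reading rather than a voltage; that interpretation is assumed here for the synthetic data only.

# Illustrative recomputation of the analogue calibration fit (Eq. 1) on
# fabricated stand-in data: applied voltages from 1 to 10 V in 0.02 V steps.
import numpy as np

applied = np.linspace(1.0, 10.0, 451)
rng = np.random.default_rng(0)
measured = (applied - 0.01737357) / 0.000189173 + rng.normal(0, 5, applied.size)

slope, intercept = np.polyfit(measured, applied, 1)  # least-squares y = intercept + slope*x
pred = intercept + slope * measured
r2 = 1 - np.sum((applied - pred) ** 2) / np.sum((applied - applied.mean()) ** 2)
print(f"y = {intercept:.8f} + {slope:.9f} * x,  R^2 = {r2:.4f}")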
Conclusions

This study showed that an IoT-based tractor tracking system for monitoring the performance and geospatial position of a tractor can be produced at low cost. The measurement reliability tests showed that the data gathered from the sensors attached to the tractor could be measured with high sensitivity, and the gathered data could be transferred to the database on the cloud and displayed on the web-based interface. The LoRaWAN communication technology used in the developed hardware has an advantage over GSM communication technologies, not only by providing data transmission at no subscription cost but also by using a free communication frequency that requires no license.

The developed system is not only low cost but also scalable. The developed database and web-based user interface are compatible with different sensors that use LoRaWAN technology and its communication protocol, so additional sensors, such as soil moisture and temperature sensors, could be set up on the farm and their data tracked with the developed software. Given the necessity of achieving high yield and quality of agricultural products to feed an increasing world population, the use of newer technologies on tractors and agricultural machines is inevitable; the developed system therefore gives farmers, especially in developing countries, an affordable way to adapt their tractors to precision agriculture techniques.

As future work, the developed hardware is being tested on a farm, and the gathered data are being analyzed to further develop the web-based interface with mathematical models and embedded artificial intelligence, enabling farmers to perform detailed analyses.

Figure 2 - Design scheme of the developed hardware

Figure 6 - Front-end presentation of the developed web-based software (Power equals PTO power)
Therapeutic function of a novel rat induced pluripotent stem cell line in a 6-OHDA-induced rat model of Parkinson's disease

Parkinson's disease (PD) is a progressive neurodegenerative movement disorder of the central nervous system that results from the loss of dopaminergic (DA) nigral neurons. Induced pluripotent stem cells (iPSCs) have shown potential for cell transplantation treatment of neurodegenerative disorders. In the present study, the small molecules CHIR99021 and RepSox (CR) significantly facilitated reprogramming and enhanced the efficiency of GFP+/iPS-like colony [rat iPSCs induced by OCT3/4, Sox2, Klf4, c-Myc, Nanog and Lin28 + CR (RiPSCs-6F/CR)] generation by ~4.0-fold during lentivirus-mediated reprogramming of six transcription factors in rat embryonic fibroblasts. The generation of iPSCs was detected by reverse transcription-PCR, immunofluorescence and western blot analysis. Subsequently, RiPSCs-6F/CR were stereotactically transplanted into the right medial forebrain bundle (MFB) of 6-hydroxydopamine-lesioned rats with PD. The transplanted RiPSCs-6F/CR survived and functioned in the MFB of rats with PD for ≥20 weeks and significantly improved functional restoration from the PD-related behavioral defects. Furthermore, the grafted RiPSCs-6F/CR could migrate and differentiate into various neurocytes in vivo, including γ-aminobutyric acid-ergic and DA neurons and glial cells. In conclusion, the present study confirmed that RiPSCs-6F/CR induced by small molecules could be used as potential donor material for neural grafting to remodel basal ganglia circuitry in neurodegenerative diseases.

Introduction

Parkinson's disease (PD) is the second most common neurodegenerative disease after Alzheimer's disease, affecting 1-3% of the population >60 years old worldwide (1). The main pathology of PD is the progressive loss of dopaminergic (DA) neurons in the substantia nigra and the formation of α-synuclein-containing Lewy bodies (2,3). Early PD responds well to DA drugs, such as L-3,4-dihydroxyphenylalanine, dopamine agonists, and monoamine oxidase and catechol-O-methyl-transferase inhibitors (4). However, over time, these drugs begin to lose their effectiveness and cause side effects, including movement disorders and neuropsychiatric complications (5,6). At present, there are two main surgical methods that can be used as an effective adjuvant to drug treatment: nucleus lesioning surgery and deep brain stimulation (DBS) (3). Nucleus lesioning causes irreversible damage to brain nuclei, such as the globus pallidus internus and the subthalamic nucleus (STN), and is accompanied by serious complications including hemiparesis, visual disturbances and permanent speech deficits, whereas DBS targets the STN and the medial globus pallidus nucleus (7,8). However, DBS has complications, such as tolerance and electrode displacement, and is associated with a high cost; therefore, it is not a popular approach (9-11). In recent years, stem cell transplantation therapies have been considered a potential method for the treatment of PD (12,13). In 2006, induced pluripotent stem cells (iPSCs) reprogrammed from somatic cells became a hot topic in research (14,15). Notably, iPSC technology can efficiently generate patient- and disease-specific PSC lines, which can then differentiate into any desired cell type, potentially solving a major problem in stem cell therapy (14,16-19).
Given the inherent self-renewal capacity, pluripotency and relatively low immunogenicity of iPSCs (20), they provide a promising patient-derived cell resource for human genetic disease modeling and toxicity studies, thereby reducing the overall cost and associated risks of drug development and clinical trials (21-23). Moreover, functional midbrain DA progenitors and neural progenitor cells (NPCs) derived from iPSCs can survive and restore motor function in the treatment of neurodegenerative diseases (24-26). Furthermore, long-term survival and function of midbrain-like DA neurons derived from autologous human iPSCs have been reported in a non-human primate model of PD (27,28).

The present study used lentiviruses encoding six reprogramming factors to reprogram rat embryonic fibroblasts (REFs) into pluripotent cells in vitro, and the small molecules CHIR99021 and RepSox (CR) significantly enhanced the generation of iPSC colonies. Subsequently, the novel rat iPSCs were directly transplanted into the medial forebrain bundle (MFB) of rats with PD to investigate the functional effects of the transplanted cells in vivo, providing experimental evidence for studying the pathogenesis of PD and identifying the potential of iPSCs for neural transplantation.

Materials and methods

Animals. A total of 48 specific-pathogen-free, 8-week-old, healthy male Sprague-Dawley (SD) rats (weight ~200 g) and two pregnant SD rats (weight ~350 g) were provided by the Experimental Animal Center of Anhui Medical University (Hefei, China). The rats were maintained in groups of five per cage under a 12-h light/dark cycle; the relative humidity was controlled at 40-70% and the temperature at 23±2˚C, with ad libitum access to food and water. During the experimental period, if any rat started to show signs of immobility, a huddled posture, inability to eat, ruffled fur or self-mutilation, it was immediately sacrificed. In addition, animals were euthanized to prevent further suffering if they were unable to stand or displayed agonal breathing, severe muscular atrophy, severe ulcers or uncontrolled bleeding. The animals were anesthetized with an intraperitoneal injection of 3% sodium pentobarbital (50 mg/kg; Merck KGaA) before 6-OHDA injection, cell transplantation and perfusion. Rats in which modeling was unsuccessful were euthanized by cervical dislocation under anesthesia or were injected with an overdose of sodium pentobarbital (150 mg/kg). Complete cardiac and respiratory arrest was observed to verify animal death. All experimental animal procedures were approved by the Institutional Animal Care and Use Committee of Bengbu Medical College (Bengbu, China; approval no. 2020-025).

Reprogramming rat fibroblasts to iPSCs. For reprogramming, the initial REFs from a male rat embryo at passage 3 (P3) were co-transduced at day 0 with six lentiviruses carrying the six reprogramming factors (OSKMNL) and a GFP tag, at a multiplicity of infection of 10 for each lentivirus (10 viral particles/cell), supplemented with 10 µg/ml polybrene. Cells were incubated in the virus/polybrene-containing supernatants for 24 h at 37˚C, and the medium was then changed to fresh complete medium (H-DMEM containing 10% FBS).
At day 2 post-transduction, REFs were re-plated on an irradiated OriCell® ICR mouse embryonic fibroblast feeder layer (Cyagen Biosciences, Inc.). The following day, the culture medium was replaced with DMEM/F-12 supplemented with 10% KnockOut Serum Replacement (Gibco; Thermo Fisher Scientific, Inc.), 0.1 mM β-mercaptoethanol, 1% NEAA, 2 mM L-glutamine, 100 U/ml penicillin and 0.1 mg/ml streptomycin, which was further supplemented with 3 µM CHIR99021 (Sigma-Aldrich; Merck KGaA), 1 µM RepSox (Selleck Chemicals) and 15 ng/ml fibroblast growth factor 2 (R&D Systems, Inc.) on day 4 (36,37). The GFP+/iPS-like colonies [rat iPSCs induced by OCT3/4, Sox2, Klf4, c-Myc, Nanog and Lin28 + CR (RiPSCs-6F/CR)] were mechanically picked 20-30 days after viral transduction and re-cultured on feeder layers. REFs treated with empty lentiviral particles and CR were used as a negative control, and the original REFs were used as a blank control.

The RiPSCs-6F/CR were analyzed for chromosomal alterations by G-band karyotype analysis at P6. The cells (~1.0x10⁶) were treated with 1.0 g/l colchicine solution, then with 0.025 M KCl hypotonic solution for 30 min in a 37˚C water bath, and with Carnoy's fixative (methanol:glacial acetic acid, 3:1) in a 37˚C water bath for 5 min. Giemsa staining was performed following the standard method (38). Subsequently, well-spread chromosome metaphases were observed under an oil immersion objective (inverted fluorescence microscope; magnification, x1,000) and analyzed with VideoTesT-Karyo 3.1 software (NatureGene Corp.).

Alkaline phosphatase (AP) staining and IF. To detect AP activity, the compact cell colonies formed from REFs were washed with PBS three times and stained with an AP kit (cat. no. C3206; Beyotime Institute of Biotechnology) according to the manufacturer's protocol. Full and bright GFP-positive colonies were selected under a fluorescence microscope. To biologically characterize RiPSCs-6F/CR, cells seeded on coverslips were fixed with 4% (w/v) paraformaldehyde (PFA) for 18 min, permeabilized with 0.2% Triton X-100 for 8 min, and blocked with 10% goat serum and 10% donkey serum (both from Jackson ImmunoResearch Laboratories, Inc.) at room temperature for 1 h. Subsequently, the cells were incubated for 12 h at 4˚C with the following primary antibodies against pluripotency markers: anti-OCT4 (1:200; Abcam), anti-Nanog (1:300) and anti-Sox2 (1:400; both from Cell Signaling Technology, Inc.). The samples were then incubated with a cyanine 3 (Cy3) dye-conjugated secondary antibody (1:1,000; cat. no. 711-165-152; Jackson ImmunoResearch Laboratories, Inc.) for 1 h at room temperature. Cell nuclei were counterstained with DAPI (Thermo Fisher Scientific, Inc.) for 15 min at room temperature. Finally, the coverslips were mounted with ProLong™ Gold Antifade Mountant (Invitrogen; Thermo Fisher Scientific, Inc.) and observed under an inverted fluorescence microscope (Guangzhou Micro-shot Technology Co., Ltd.) (1,36).

RT-PCR. Total RNA was extracted from the cells using TRIzol® reagent (Invitrogen; Thermo Fisher Scientific, Inc.) and reverse transcribed into cDNA using the PrimeScript™ RT Reagent Kit (Perfect Real Time) (Takara Biotechnology Co., Ltd.) with the following parameters: 37˚C for 15 min, 85˚C for 5 sec and finally 4˚C for 5 min. RT-PCR was carried out using TB Green Premix Ex Taq II (Takara Biotechnology Co., Ltd.) with a QuantStudio™ 6 Flex thermocycler (Applied Biosystems; Thermo Fisher Scientific, Inc.).
The thermocycling conditions were as follows: initial denaturation at 95˚C for 30 sec, followed by 40 cycles of denaturation at 95˚C for 10 sec and annealing/extension at 60˚C for 20 sec. The amplified fragments were visualized by 1% agarose gel electrophoresis and stained with GelRed (cat. no. D0140; Beyotime Institute of Biotechnology), with GAPDH used as the internal control. The primers were synthesized by Sangon Biotech Co., Ltd., and their sequences are shown in Table SI.

Behavioral detection of rat models of PD

Sniff test. A total of 2 weeks after injection of 6-OHDA, the rats underwent a sniff test. Rats with PD usually sniff an unfamiliar environment while standing on the ground, with movements of the vibrissae and the head tilted upwards (39).

Apomorphine (APO)-induced rotation experiment. At week 2 after unilateral 6-OHDA injection, an intraperitoneal injection of APO (0.5 mg/kg; Sigma-Aldrich; Merck KGaA) was used to induce rotational behavior contralateral to the lesioned side. A rotational speed of ≥210 revolutions/30 min was considered the criterion for successful modeling of rats with PD. The experiment was performed at 4, 8, 12, 20 and 24 weeks after cell transplantation.

Rotarod test. A total of three rats were placed on the three channels of a rotarod apparatus (SANS Biotechnology Co., Ltd.) and assessed simultaneously for motor coordination. The test was run at a constant speed of 300 rpm for 1,800 sec, and each animal underwent three trials. Each trial was automatically paused, and the time it took for the rat to fall off the rod or to run for the full 1,800 sec was recorded. If a rat fell within 10 sec, the trial was repeated three times. The experiment was performed at 4, 8, 12, 20 and 24 weeks after cell transplantation.

Open-field assay. Each rat was placed into the central grid of the open-field instrument (Noldus Information Technology BV), and the surrounding curtains were quickly drawn. The EthoVision XT 10.1S system of the open-field instrument (Noldus Information Technology BV) automatically captured the track of the rat over 5 min and analyzed its stay in the central grid and the total distance moved. Zone heatmaps were obtained by tracing the path of the rat in the open field. The experiment was performed at 4, 8, 12, 20 and 24 weeks after cell transplantation.

Transplantation of RiPSCs-6F/CR into the right MFB of rats with PD. A total of 2 weeks after 6-OHDA injection, RiPSCs-6F/CR and RiPSCs-6F (rat iPSCs induced by OSKMNL only) were resuspended in serum-free H-DMEM at a density of 1.25x10⁷ cells/ml and stereotactically transplanted into the right MFB of model rats with PD at the same two stereotaxic coordinates as used for the PD model. At each site, an aliquot (8 µl) of cell suspension containing 1.0x10⁵ cells was injected into each rat with PD using a microsyringe at a rate of 0.5 µl/min. The rats were divided into the following four groups: i) a control group of 12 healthy rats, which were injected with 8 µl saline. A total of 30 model rats with PD (out of the initial 36 rats used for PD modeling; 83.3% success rate) were divided randomly into three groups: ii) a vehicle group of 12 PD model rats, which were injected with 8 µl H-DMEM; iii) a RiPSCs-6F/CR group of 12 PD model rats injected with 1.0x10⁵ RiPSCs-6F/CR; and iv) a RiPSCs-6F group of six PD model rats injected with 1.0x10⁵ RiPSCs-6F.
Behavioral analysis was performed on each group at 4, 8, 12, 20 and 24 weeks after cell transplantation. In total, two rats from each group were perfused, and coronal sections of the perfused brains were used for IF detection at 8 and 20 weeks after cell transplantation. At week 12, two rats from each group were perfused for hematoxylin and eosin (H&E) staining and tyrosine hydroxylase (TH)-3,3'-diaminobenzidine (DAB) detection by immunohistochemical analysis. Each test requiring the sacrifice of experimental animals was performed only once, i.e. two rats were sacrificed per test. The remaining six rats from each of the three groups were used for long-term monitoring of behavioral changes.

Histology and immunohistochemistry (IHC). Rats were deeply anesthetized with intraperitoneal injections of 3% pentobarbital sodium (50 mg/kg) and then transcardially perfused with 0.9% NaCl followed by 4% PFA. During PFA perfusion, the limbs of the rats twitched continuously and became rigid, and the liver and brain turned white, confirming successful perfusion and euthanasia. The brain was collected and fixed in 4% paraformaldehyde at 4˚C for 6 h, after which the tissue was transferred into 25% sucrose solution at 4˚C until it sank to the bottom. The brain tissue was then stored at -80˚C overnight. Subsequently, the perfused brains were embedded in OCT embedding medium (Sakura Finetek USA, Inc.), and serial coronal sections (12 µm) were cut using a cryostat (CM-1850; Leica Microsystems, Inc.), mounted on gelatin-coated glass slides and frozen at -20˚C. The sections were permeabilized with 0.2% Triton X-100 for 8 min at room temperature and blocked with 10% goat serum and 10% donkey serum at room temperature for 1 h. The sections were then subjected to double IF staining using an anti-GFP antibody (1:200) and nerve-specific labeling antibodies, including anti-βIII tubulin (TUJ1; 1:500) and the other antibodies listed in Table SII. Subsequently, the samples were incubated with the appropriate Alexa Fluor 488-conjugated (1:500; cat. no. A-21202; Invitrogen; Thermo Fisher Scientific, Inc.) and Cy3-conjugated (1:1,000; cat. no. 711-165-152; Jackson ImmunoResearch Laboratories, Inc.) secondary antibodies, followed by incubation with DAPI for nuclear staining. Images were obtained using a multiphoton laser scanning confocal microscope (FV-1200MPE SHARE; Olympus Corporation).

Perfused brains collected at 12 weeks after cell transplantation were paraffin embedded and cut into 30-µm sections. Antigen retrieval was performed using citric acid antigen retrieval buffer (cat. no. G1202; pH 6.0; Wuhan Servicebio Technology Co., Ltd.) in a microwave oven. To block endogenous peroxidase activity, the sections were incubated in 3% hydrogen peroxide (Disinfection Technology Co., Ltd.) at room temperature in the dark for 25 min. Subsequently, sections were blocked with 3% BSA (cat. no. G5001; Wuhan Servicebio Technology Co., Ltd.) for 30 min at room temperature. The sections were then incubated with an anti-TH antibody (1:300; cat. no. ab112; Abcam) for 1 h at room temperature. The sections were washed three times with Dulbecco's PBS and incubated with an HRP-conjugated goat anti-rabbit secondary antibody (1:1,000; cat. no. 111-035-003; Jackson ImmunoResearch Laboratories, Inc.) for 1 h at room temperature. The sections were then stained with 3,3'-diaminobenzidine tetrahydrochloride solution (cat. no. G1211; Wuhan Servicebio Technology Co., Ltd.)
at room temperature; the color development time (1-10 min) was controlled under the microscope. Additionally, histopathological examination of 3-µm paraffin-embedded sections was routinely performed using an H&E staining kit (Wuhan Servicebio Technology Co., Ltd.). The paraffin-embedded sections were first dewaxed with xylene, rehydrated through a graded ethanol series, washed with PBS and stained with H&E staining solution at room temperature for 5 min. The survival and number of TH+ cells in the TH-DAB and H&E staining were determined by whole-brain scanning using a Nikon imaging system (DS-U3; Nikon Corporation) and CaseViewer 2.0 software (3DHISTECH, Ltd.). The number of TH+ cells in three randomly selected fields was counted using ImageJ software (1.51r; National Institutes of Health).

Statistical analysis. All quantitative data are presented as the mean ± standard error of the mean from at least three independent experiments. Statistical comparisons between two groups were performed using the independent Student's t-test. For multiple comparisons, one-way ANOVA followed by Tukey's post hoc test was used. GraphPad Prism software 7.0 (GraphPad Software, Inc.) was used for the statistical analyses and to produce the graphs. P<0.05 was considered to indicate a statistically significant difference.

Results

Morphological characteristics and identification of pluripotency of RiPSCs-6F/CR. The reprogramming procedure for rat fibroblasts transduced with the six reprogramming factors (OSKMNL) and CR is shown in Fig. 1A. REFs were isolated, and a substantially homogeneous population of fibroblast-like cells was obtained after subculture for 3-4 passages (Fig. 1B). All REFs expressed the fibroblast markers CD34 and vimentin, as determined by immunofluorescence (Fig. 1C). Moreover, the fibroblast-specific genes S100a4, COL1A1 and CD34 were highly expressed in REFs, as determined by RT-PCR analysis (Fig. 1D). However, the REFs at P3 did not express the epithelial cell marker genes CDH1 and MUC1 (Fig. 1D). Subsequently, REFs were transduced with lentiviral particles expressing OSKMNL, and the RT-PCR results confirmed that all OSKMNL genes were overexpressed after 3 days (Fig. 1E). The REFs were then cultured in the presence of the small molecules CR from day 4, and at day 8 after the initial transduction small GFP+/iPS-like colonies could be observed. After 20 days, these GFP+/iPS-like colonies (referred to as RiPSCs-6F/CR) were picked and cultured on feeder layers for expansion and further characterization. The cell morphological changes throughout the induction process are shown in Fig. 1F. In addition, after 20 days of treatment with empty lentiviral particles and CR, REFs showed a tendency to aggregate, but no obvious GFP+ clones appeared (Fig. 1G). CR greatly increased the efficiency of RiPSCs-6F/CR colony generation, by ~4.0-fold, resulting in 40-50 RiPSCs-6F/CR colonies from 1x10⁴ REFs within 20 days of infection (Fig. 1H). RiPSCs-6F/CR colonies showed the morphology and growth properties of embryonic stem cells (ESCs), such as typical clonal proliferation, a round or oval shape, small nuclei, scant cytoplasm, clear boundaries at the edges and a gradual protrusion in the center. Under an inverted microscope, the colonies appeared strongly three-dimensional and the cells were closely arranged (Fig. 2A).
The majority of RiPSCs-6F/CR colonies (90%) expressed high levels of AP and maintained iPS-like morphology with GFP expression for >25 passages (Fig. 2A). Chromosome G-banding analysis confirmed that 96.6% of RiPSCs-6F/CR had the normal karyotype 2n=42, indicating that the RiPSCs-6F/CR were stable diploids with no cross-contamination by cells from other species (Fig. 2B). Specifically, RiPSCs-6F/CR highly expressed pluripotency-specific ESC markers, including OCT4, Sox2 and Nanog, as demonstrated by IF and flow cytometric analysis (Fig. 2C and D). RT-PCR indicated that RiPSCs-6F/CR highly expressed the six transcription factor genes (OSKMNL; Fig. 2E), and western blotting showed that RiPSCs-6F/CR also expressed numerous ESC markers at the protein level, including Sox2, OCT4 and Nanog (Fig. 2F).

Injection of RiPSCs-6F/CR ameliorates the motor deficits of 6-OHDA-lesioned model rats with PD. The PD rat models were prepared by stereotaxic injection of 6-OHDA into the right MFB of SD rats at two coordinates (Fig. 3A). A flow chart of the experimental procedures and animal groups is shown in Fig. S1. At week 2 post-injection of 6-OHDA, SD rats exhibited PD-like symptoms, such as tail-pressing, back arching, sniffing and motor coordination disorder (Fig. 3B). In addition, continuous turning of >210 rotations/30 min toward the side contralateral to the lesion in the APO-induced rotation test was considered the main criterion for model rats with PD. A total of 30 rats were successfully modeled (83.3% success rate), as determined by behavioral testing. RiPSCs-6F/CR (1.0x10⁵ cells/graft) were stereotactically transplanted into the right MFB of model rats with PD (n=12; Fig. 3C); all of the transplanted rats survived. Transplantation of RiPSCs-6F/CR induced by OSKMNL-CR did not lead to rejection or tumor formation (Fig. 3D). However, two out of six graft recipients of RiPSCs-6F developed brain tumors at 8 weeks after transplantation; therefore, all six rats in the RiPSCs-6F group were euthanized at 8 weeks owing to this tumorigenicity, and no further behavioral testing was performed on the RiPSCs-6F group. At 8 weeks after RiPSCs-6F/CR transplantation, the rats were more excited and active than those in the vehicle group, and the total distance moved was significantly increased in the open-field test (P<0.01; Fig. 3E and F). The APO-induced rotation of the transplanted rats was slightly reduced at week 4 after transplantation of RiPSCs-6F/CR, but the difference from the vehicle group was not significant (n=12; data not shown). However, the APO-induced rotational behavior was significantly reduced to 225.0±64.2 after 8 weeks of transplantation with RiPSCs-6F/CR (P<0.01; Fig. 3G).

Figure 4. Effects of RiPSCs-6F/CR transplantation on the loss of TH+ dopaminergic neurons in the right medial forebrain bundle of rats with Parkinson's disease. (A) Representative images of whole-brain scanning with hematoxylin and eosin staining in the healthy control, vehicle and RiPSCs-6F/CR groups 12 weeks after transplantation. (B) TH intensity detected by TH-3,3'-diaminobenzidine staining of the whole brain in the three groups, analyzed using CaseViewer. The boxed areas are shown at higher magnification on the right side of the image. (C) Number of TH+ cells in the injured areas of the three groups. *P<0.05 by one-way ANOVA. RiPSCs-6F/CR, rat induced pluripotent stem cells induced by OCT3/4, Sox2, Klf4, c-Myc, Nanog and Lin28 + CR; CR, CHIR99021 and RepSox; TH, tyrosine hydroxylase.
Furthermore, the motor coordination of the RiPSCs-6F/CR group was also effectively improved, according to the results of the rotarod test (P<0.05; Fig. 3H). Moreover, the motor deficits of rats with PD in the RiPSCs-6F/CR group were further improved 24 weeks after cell transplantation (Fig. 3F-H), indicating that the transplanted cells required a period of time to induce functional recovery in vivo. Notably, transplantation of RiPSCs-6F/CR into the MFB significantly improved the dyskinesia of rats with PD after 8 weeks.

RiPSCs-6F/CR differentiate into targeted TH+ dopamine neurons in the MFB of model rats with PD. Whole-brain H&E staining showed that the number of cells in the 6-OHDA-lesioned area of rats with PD was markedly lower than in the healthy control group, and the cells were disorderly arranged (Fig. 4A). However, numerous viable RiPSCs-6F/CR grafts were observed in the injured area 12 weeks after transplantation, and the cells were relatively neatly arranged (Fig. 4A). In addition, TH-DAB staining of the whole brain showed that, compared with the healthy control group, the number of TH+ cells in the injured area of rats with PD (vehicle group) was significantly reduced, and the expression level of TH was also markedly reduced. However, numerous TH+ cells were present in the RiPSCs-6F/CR grafted area and the surrounding MFB 12 weeks after transplantation. Microscopic imaging showed markedly increased TH labeling in the transplanted MFB compared with that of the vehicle group, indicating robust recovery of the transplanted MFB due to the engrafted RiPSCs-6F/CR. Moreover, stereological counts of TH+ dopamine neurons showed that the number of viable DA neurons in the MFB of the RiPSCs-6F/CR group was significantly higher than in the vehicle group (Fig. 4B and C). These results were in agreement with the behavioral evaluations and suggested that RiPSCs-6F/CR differentiated into targeted TH+ dopamine neurons in the microenvironment of the host brain.

RiPSCs-6F/CR differentiate into various types of functional neurons in the host MFB of rats with PD. IF detection in frozen sections of rat brain tissue showed that GFP-positive cells formed a distinct graft area 2 weeks after transplantation of RiPSCs-6F/CR. Furthermore, a large number of the transplanted cells had migrated 2.3 mm from the graft area into the surrounding brain tissue 20 weeks after transplantation (Fig. 5A). In addition, numerous GFP+ cells also stained positive for TH, and certain TH+ cell clusters were dispersed throughout the engraftment area and integrated into the host brain (Fig. 5B). Since TH is a specific marker of DA neurons, this suggested that RiPSCs-6F/CR could differentiate into DA neurons in vivo. Moreover, the engrafted RiPSCs-6F/CR gave rise to various functional neurons around and within the graft area, expressing the neuronal marker TUJ1, the GABAergic neuron marker GABA (Fig. 5B), the glutamatergic neuronal marker PSD95 (Fig. 5C) and the glial marker GFAP (Fig. 6A). Certain GFP-positive cells exhibited features of neural stem cells or neural precursor cells and expressed the neural stem cell markers PAX6, Sox2 and Nestin 20 weeks after transplantation (Fig. 6B-D).
This indicated that the transplanted RiPSCs-6F/CR differentiated first into neural precursor cells and then into mature neurons in the brain microenvironment. In addition, the expression of SYN was markedly increased after RiPSCs-6F/CR transplantation, and 52% of GFP+ cells were also positive for SYN staining (Fig. 6E). Moreover, numerous SYN+/GFP- patches were adjacent to the transplanted RiPSCs-6F/CR, indicating that host brain-derived presynaptic terminals connected with RiPSCs-6F/CR-derived neurons to form mature synapses (Fig. 6E). These data suggested that RiPSCs-6F/CR could differentiate into neural precursor cells and various types of specific functional astrocytes and neurons in the microenvironment of the host brain. Notably, no tumor formation was found in the 12 grafted rats 20 weeks after RiPSCs-6F/CR transplantation.

Discussion

The identification of a method capable of obtaining functional cell types is currently the most basic scientific issue in regenerative medicine research. Direct reprogramming has emerged as a promising approach to induce cell fate transition by introducing a combination of specific transcription factors (40). Moreover, previous reports have demonstrated that reprogramming efficiency can be significantly improved, and different functional cell types can be generated, in the presence of certain small molecules, such as valproic acid [VPA, a histone deacetylase (HDAC) inhibitor], CHIR99021 [a glycogen synthase kinase-3β (GSK3-β) inhibitor], butyrate (an HDAC inhibitor), AZA (a DNA methyltransferase inhibitor) and vitamin C (41,42). Moreover, PSCs can be directly induced from mouse somatic cells using a cocktail of seven small-molecule compounds called VC6TFAE (namely, VPA, CHIR99021, 616452, tranylcypromine, forskolin, AM580 and EPZ004777). Chemical reprogramming provides an alternative means of manipulating cell fate and avoids the risk of genomic integration of exogenous transcription factors (43). In the present study, the small molecules RepSox (a TGF-β receptor-1 inhibitor) and CHIR99021 promoted reprogramming and greatly improved the efficiency of RiPSCs-6F/CR colony generation, by ~4.0-fold; notably, 40-50 iPSC colonies were generated from 1x10⁴ REFs within 25 days of infection. Numerous studies have shown that inhibition of GSK3-β by CHIR99021 or inhibition of TGF-β signaling by RepSox can effectively replace Sox2 and c-Myc in reprogramming by inducing Nanog (41,43). In addition, GSK3β is a master regulator of Myc threonine 58 phosphorylation, which leads to ubiquitin-dependent degradation of c-Myc. Therefore, inhibition of GSK3-β by Wnt signaling could promote self-renewal and cell reprogramming by regulating the stability of c-Myc (40,44). Inhibition of TGF-β can induce mesenchymal-to-epithelial transition (MET) and increase Nanog expression (36). Together with the present findings, it may be concluded that CR promotes the reprogramming process by simultaneously inhibiting GSK3-β and TGF-β signaling. Furthermore, the resulting RiPSCs-6F/CR had typical iPSC-like morphology, retained normal karyotypes and AP activity, and stained positively for pluripotency-specific markers, including Nanog, OCT4 and Sox2, sharing similar characteristics with 4F-iPSCs (iPSCs induced by OCT4, Klf4, Sox2 and c-Myc) and rat ESCs (45).
ESCs and iPSCs can improve the motor behavioral defects of 6-OHDA-lesioned rats by re-innervating the striatum and restoring DA neurotransmission, but they are also associated with the risk of tumor and teratoma formation in vivo, as well as the possibility of undifferentiated cells or proliferating non-neural cells being present in the cell population (46-49). To address these issues, functional midbrain dopamine neurons and NPCs derived from iPSCs have potential for the treatment of neurodegenerative diseases owing to their lower immune rejection, lower tumorigenicity and better stability (50-56). In the present study, induced RiPSCs were directly transplanted into the MFB of rats with PD. After 20 weeks, the RiPSCs-6F/CR had formed a distinct engraftment area in the MFB of rats with PD. In addition, the RiPSCs-6F/CR grafts differentiated into multiple types of functionally active neurocytes and glial cells, such as GFAP+, PSD95+, PAX6+, GABA+ and TH+ cells, promoting behavioral recovery of motor dysfunction and neurological function in rats with PD. TH serves a key role in the regulation of DA biosynthesis in DA neurons (3). A sufficient number of surviving TH+ cells (DA neurons) derived from RiPSCs-6F/CR appeared to serve an important role in the behavioral improvement observed in the current study. In addition, GABA+ cells (GABAergic neurons) derived from the GFP-labeled RiPSCs may be responsible for regulating the balance of excitatory and inhibitory signals in the dopamine pathway. Furthermore, synapse formation between donor- and host-derived neurons could promote functional recovery and behavioral improvement. Notably, no tumor formation was observed in any of the transplanted rats within 20 weeks of RiPSCs-6F/CR transplantation; however, two out of six graft recipients of RiPSCs-6F developed tumors at 8 weeks after transplantation. This result suggests that not all iPSCs result in tumorigenesis after transplantation in vivo. It may also be that the small molecules CR, in addition to facilitating reprogramming, reduce the tumorigenicity of iPSCs in vivo; however, the specific mechanism needs to be further investigated (57,58). It is known that inhibition of GSK3-β by CHIR99021 can result in activation of β-catenin/c-Jun signaling and downregulation of NF-κB activity, promoting apoptosis and inhibiting proliferation (59). In addition, inhibition of TGF-β signaling by RepSox can induce MET and inhibit epithelial-to-mesenchymal transition, thereby inhibiting cell cycle progression and tumorigenesis (36). Small molecules do not integrate into the genome and are thus much safer and more advantageous than gene editing methods for modulating cell function and cell fate changes (60). Since the transplanted cells migrated within the microenvironment of the host brain, the percentage survival and the potential mitotic activity of RiPSCs-6F/CR after transplantation need to be further investigated.

In conclusion, in the present study, the small molecules CR significantly facilitated reprogramming and promoted RiPSCs-6F/CR colony generation during lentivirus-mediated reprogramming of six transcription factors in REFs. Furthermore, the transplanted RiPSCs-6F/CR survived for ≥20 weeks in the MFB and differentiated into multiple functional neurocytes to ameliorate neurological deficits in 6-OHDA-injured rats with PD.

Authors' contributions

WW and YL established the Parkinson's disease model.
WG and YG participated in the statistical analysis. CM and CL conceived the research and participated in its design and coordination. CM and CL confirm the authenticity of all the raw data. All authors read and approved the final manuscript.

Ethics approval and consent to participate

All experimental animal procedures were approved by the Institutional Animal Care and Use Committee of Bengbu Medical College (approval no. 2020-025).

Patient consent for publication

Not applicable.
Evaluation of Digital Storytelling in terms of Pre-Service ICT Teachers' Perceived TPACK Levels and Teaching Proficiency Self-Efficacy Levels: A Mixed-Method Study

Today, the level that technology has reached has created a need for individuals equipped with different skills. Teachers play an essential role in the development of these skills. Teachers should master methods that enable students to participate actively in the learning environment and to gain the technological competencies they need today. In this direction, this study aims to examine the effect of digital storytelling on pre-service teachers' TPACK skills and teaching self-efficacy. An explanatory sequential research design is conducted with 29 pre-service ICT teachers. In the quantitative part of the study an experimental research design is used, and in the qualitative part a case study. According to the results, digital storytelling significantly affects both the TPACK and the teaching proficiency self-efficacy levels of pre-service teachers. The results and recommendations for future research are reported in detail.

Introduction

Today, the level that technology has reached has created a need for individuals equipped with different skills. Teachers play an essential role in the development of these skills. Teachers should be able to use and master methods that enable students to participate actively in the learning environment and gain the technological competencies they need today. In other words, by effectively integrating technology into educational environments, they can train individuals equipped with skills suited to today's society. One of the frameworks developed for the integration of technology into educational environments is TPACK (Mishra & Koehler, 2006). TPACK states that teachers need sound content knowledge, technological knowledge and pedagogical knowledge. Before starting the teaching profession, pre-service teachers need to encounter practices and methods that improve their TPACK skills and to receive training to develop these skills. At this point, digital storytelling can be considered an alternative in which learners are active in the learning environment and use Web 2.0 tools, which can be called new media (Condy et al., 2012; Robin, 2008; Yang & Wu, 2012).

Digital storytelling can be defined as animating or digitalizing text-based stories using today's digital technologies. It is becoming widespread in educational environments because it is easy to implement with internet-based applications, thanks to today's new media. Its contribution to learner variables such as motivation, engagement and academic success plays an essential role in the spread of digital storytelling (Hung, Hwang, & Huang, 2012; Nam, 2017; Sadik, 2008; Gocen Kabaran & Duman, 2021; Walters et al., 2018). In addition to its contributions to students, it also contributes to teachers' technology integration skills (Sancar-Tokmak, Surmeli, & Ozgelen, 2014; Sancar-Tokmak & Yanpar-Yelken, 2015). Beyond cognitive skills, affective factors such as belief and self-confidence also play an essential role in teachers' integration of technology into the classroom (Joo, Park, & Lim, 2018). Even if teachers' technological, pedagogical and content knowledge is sufficient, their belief in themselves plays an important role in turning this knowledge into performance.
It is crucial to examine the concept of self-efficacy introduced by Bandura (1977), defined as the confidence individuals have in their ability to perform a task. The primary purpose of measuring self-efficacy is to predict whether individuals will carry out a performance rather than to reveal their characteristics (Zimmerman, 2000). Although self-efficacy alone is not enough to predict success, success increases as self-efficacy increases (Akkoyunlu & Kurbanoğlu, 2003). One of the four critical sources of self-efficacy is experience (Bandura, 2001). In other words, as individuals' experience increases, their self-efficacy increases. Thus, by carrying out applications in which they integrate technology, pre-service teachers gain experience, and this experience supports successful technology integration in their future classes.

When the literature is examined, TPACK and self-efficacy are seen to be related. As seen in Figure 1, TPACK affects self-efficacy, which in turn affects behavior (Joo et al., 2018). Therefore, evaluating these two variables together allows a more accurate interpretation of the results. In this direction, this study aims to examine the effect of digital storytelling on pre-service teachers' TPACK skills and teaching self-efficacy. For this purpose, the following questions will be answered:

• Do pre-service teachers' TPACK levels differ significantly before and after the application of digital storytelling?
• Do pre-service teachers' teaching self-efficacy levels differ significantly before and after the digital storytelling practice?
• How do pre-service teachers evaluate the digital storytelling process within the framework of TPACK and teaching self-efficacy?

Methodology

In this research, the explanatory sequential design, one of the mixed-method approaches, is used. This design is the most commonly used version of mixed methods in educational research. In the explanatory design, quantitative data are first collected and analyzed, determining the general framework of the research question; qualitative data are then collected and analyzed to investigate the research question in depth (Creswell, 2012). The symbolic representation of the design is as in Figure 2 (Creswell, 2012, p. 534).

In the quantitative part of the research, the one-group pre-test post-test design (Büyüköztürk, Kılıç Çakmak, Akgün, Karadeniz & Demirel, 2012), in other words a repeated-measures design (Creswell, 2012), is used. The symbolic representation of the design is as in Figure 3. The qualitative part of the research is planned as a case study. The purpose of a case study is to investigate a case in depth (Yıldırım & Şimşek, 2011, p. 77).

Participants

Convenience sampling is used in this research. This sampling type is used where the participant group is easier to reach or more accessible (Ekiz, 2009).

Procedure

The research was carried out in the history of science course in the Computer Education and Instructional Technologies (CEIT) department. The research took eight weeks, excluding the data collection process. The process started with the implementation of the pre-tests, followed by information about digital storytelling and the process of creating digital stories, and the showing of digital story examples. Pre-service teachers were first asked to form groups of 2 or 3 people and then to select one of the world-renowned Turkish scientists (such as Gazi Yaşargil or Oktay Sinanoğlu)
and to create stories about them. Since the participants were CEIT students, they were assumed to know video editing programs, so these programs were not taught during the process. Post-tests were administered to the group at the end of the process, and the digital stories were evaluated with a rubric in order to select the focus group interview participants.

Data Collection Tools

Three different scales are used for quantitative data collection, a rubric is used for evaluating the digital stories, and open-ended interview questions are used for the qualitative data collection.

Teaching Profession Self-Efficacy Scale

The original form of the scale was developed by Tschannen-Moran and Woolfolk-Hoy (2001) and first adapted into Turkish by Baloğlu and Karadağ (2008). This Turkish version was later adapted for pre-service teachers by Tuluk (2014). The adapted form of the scale involves 19 five-point Likert-type items (1 = "strongly disagree" to 5 = "strongly agree") under three dimensions. The scale explains 54% of the total variance, and the alpha coefficient is .86 for the Efficacy in Student Engagement dimension (9 items), .84 for Efficacy in Instructional Practices (7 items), and .93 for Efficacy in Classroom Management (3 items).

Perceived TPC Knowledge Scale

To determine the effect of digital storytelling on pre-service teachers' perceived TPC knowledge levels, a scale with 10-point Likert-type items (1 = "strongly disagree" to 10 = "strongly agree") is used.

Digital Story Rubric

The rubric used in this study was developed by Özcan, Kukul, and Karataş (2016). The rubric involves three main categories: Planning, Production, and Presenting/Sharing/Feedback. There are four criteria under the planning category, nine under the production category, and one under the presenting/sharing/feedback category for evaluating the digital stories. The rubric was developed according to the digital storytelling process and the elements of digital stories. Each criterion is scored on four levels: 1 = "Poor", 2 = "Low", 3 = "Good", 4 = "Excellent".

Interview Questions

In the qualitative part of the study, semi-structured focus group interviews are held. The interview questions were prepared according to the results of the quantitative analysis. The pre-service ICT teachers were also asked to consider the whole digital storytelling process from the perspective of teachers in other fields, because pre-service ICT teachers' digital competencies may be better than those of teachers in other fields. For this reason, spontaneous questions are asked during the interview, such as, "When you think of teachers in other fields, how does digital storytelling affect their teaching profession differently from ICT teachers?" The interview questions are seen below: • How can digital storytelling affect the teaching profession? • When you think of digital storytelling in the framework of student engagement, how does it affect student engagement? • How can digital storytelling affect teachers' technological pedagogical content knowledge?

Data Analysis

In line with the explanatory sequential mixed-method research design, the data analysis starts with the quantitative data. First, it was checked whether the quantitative data met the parametric test assumptions. One of these assumptions is the normal distribution of the data (Delice, 2010; Kraska-Miller, 2013).
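As an illustration of this screening step, below is a minimal sketch using hypothetical score vectors rather than the study's data; note that scipy's kurtosis returns excess kurtosis by default, the form usually compared against the ±1 and ±2 conventions discussed next.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
scores = {
    "self-efficacy pre-test": rng.normal(72.6, 9.0, size=29),   # hypothetical
    "self-efficacy post-test": rng.normal(77.2, 8.5, size=29),  # hypothetical
}

for name, values in scores.items():
    sk = stats.skew(values)
    ku = stats.kurtosis(values)  # Fisher definition: excess kurtosis
    ok = abs(sk) <= 1 and abs(ku) <= 1
    print(f"{name}: skewness = {sk:.2f}, kurtosis = {ku:.2f}, within +/-1: {ok}")
```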
One of the most commonly used methods to determine whether data are normally distributed is the coefficient of skewness and kurtosis. Some researchers consider this coefficient acceptable within the ±1 range (Başol, 2015; Çokluk, Şekercioğlu, & Büyüköztürk, 2012; Büyüköztürk, 2007), while others emphasize that it can be within the ±2 range (George & Mallery, 2003). When Table 1 is examined, it is seen that the skewness and kurtosis values of all tests other than the readiness pre-test are between -1 and +1; accordingly, the data of the tests were treated as normally distributed. The digital story rubric was used to evaluate the digital stories developed by the pre-service teachers. Two researchers evaluated the stories separately, and the scores they gave were then compared for reliability analysis. Based on these scores, the participants were separated into three groups. From each group, three participants were selected randomly for the focus group interviews. Each focus group participant was given a nickname, as seen in Table 2. Students with different scores were deliberately included on the assumption that pre-service teachers with low scores might not be satisfied with the process; this made it possible to capture why some participants were dissatisfied and to reveal the weaknesses of the process.

Findings

Teaching Profession Self-Efficacy Scale

A paired-samples t-test was used to determine the effect of digital storytelling on pre-service teachers' teaching profession self-efficacy levels. The results are shown in Table 3. According to the results, there is a significant difference between pre-test and post-test scores (t(28) = -3.13; p < 0.05). The mean of the post-test scores (M = 77.17) is higher than that of the pre-test scores (M = 72.59). In other words, digital storytelling has a positive effect on pre-service teachers' self-efficacy levels towards teaching. The analysis of effect size yielded a value of 0.54, indicating a medium effect. According to Cohen (1988), an effect size above 0.5 is large enough for the researcher to notice even through direct observation. When the sub-factors are examined, the same effect is seen for all of them: digital storytelling positively affects pre-service teachers' efficacy in Student Engagement, Instructional Practices, and Classroom Management. Efficacy in Instructional Practices is the most affected sub-category, with an effect size of .57 (t(28) = -3.19; p < 0.05).

Perceived TPACK Scale

A paired-samples t-test was used to determine the effect of digital storytelling on pre-service teachers' perceived technological pedagogical content knowledge levels. The results are shown in Table 4. According to the results, there is a significant difference between pre-test and post-test scores (t(28) = -3.67; p < 0.05). The mean of the post-test scores (M = 123.14) is higher than that of the pre-test scores (M = 112.03). In other words, digital storytelling has a positive effect on pre-service teachers' perceived TPC knowledge levels. The analysis of effect size yielded a value of 0.59, indicating a medium effect (Cohen, 1988). When the sub-factors are examined, digital storytelling has a significant positive effect on all sub-factors of the TPACK scale.
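The paired-samples tests and effect sizes reported above can be reproduced along the following lines. This is a minimal sketch with hypothetical pre/post vectors; Cohen's d for paired data is computed here as the mean difference divided by the standard deviation of the differences, which is one common variant, so the study's exact formula is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pre = rng.normal(72.6, 9.0, size=29)        # hypothetical pre-test scores
post = pre + rng.normal(4.6, 8.5, size=29)  # hypothetical per-person gains

# Paired-samples t-test (df = n - 1 = 28).
t, p = stats.ttest_rel(post, pre)
print(f"t(28) = {t:.2f}, p = {p:.4f}")

# Cohen's d for paired data: mean difference / SD of the differences.
diff = post - pre
d = diff.mean() / diff.std(ddof=1)
print(f"Cohen's d = {d:.2f}")  # around 0.5 counts as a medium effect (Cohen, 1988)
```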
The Technological Pedagogical Content sub-factor is the most affected factor, with an effect size of .63 (t(28) = -3.42; p < 0.05).

Findings of the Focus Group Interviews

In the framework of the study, three focus group interviews were organized; in this section, all the interviews are analyzed together. In the first question, the pre-service teachers were asked about the effect of digital storytelling on teaching competencies. Although the pre-service teachers generally emphasized the positive effects, they also stated that it did not contribute much to them personally, since, as ICT pre-service teachers, they considered their technological skills already sufficient. When asked to think about pre-service teachers in other fields, they stated that they found it positive for developing technological skills, enriching the learning environment, and following new technologies. Some of the pre-service teachers' thoughts are seen below: P_3H: "With digital storytelling, teachers may need to use different web 2.0 tools. Therefore, the teacher needs to be aware of these tools." P_5M: "A teacher who will have this application must know web 2.0 technologies and follow new technologies. In doing so, their technological skills will improve, and I think today's teachers should use technology well." P_8M: "I think it is easy to use digital storytelling for the teachers in our department. After all, using and making web 2.0 tools available is part of our field. However, teachers in other departments may find it difficult." P_1L: "Yes, it is a different activity for students, but I do not think that teachers in other departments will implement it. It is a difficult process for them; they will both learn and teach new technologies." When the pre-service teachers were asked to evaluate digital storytelling in terms of student engagement, enrichment of the learning environment, and classroom management, they stated that digital storytelling generally contributed positively to the learning environment. In particular, digital storytelling is thought to affect students positively by attracting their attention, developing their creativity, and improving their technological skills. P_6H: "I think digital storytelling is an important activity to enrich my class. The history of science is a subject that we can teach in secondary schools, and I would not have thought of using digital storytelling while dealing with this topic." P_4L: "I think it can make the lesson fun for children. In doing so, it is ensured that they both learn the subject and develop their technological competence." P_7L: "Although I cannot create a creative story, I think students will develop more creative things. Maybe if we did digital stories at the secondary school, I would be creative too :)" When the pre-service teachers were asked to evaluate digital storytelling within the framework of TPACK, they thought it contributed especially pedagogically. As in the previous questions, they considered digital storytelling a method that can enrich the course and regarded working with technology in the process as an advantage. Although digital storytelling involves working with technology, it is not considered a technological innovation for ICT teachers. However, they stated that it would contribute positively to the technological knowledge of teachers in other fields.
Some of the opinions of pre-service teachers about the effect of digital storytelling on TPACK skills are given below:

Discussion

In this study, which investigated the effect of digital storytelling on the TPACK levels and teaching self-efficacy of ICT pre-service teachers, digital storytelling had a significant positive effect on both variables. This finding coincides with the studies of Sancar-Tokmak and Yanpar-Yelken (2015) and Sancar-Tokmak, Surmeli, and Ozgelen (2014). However, in this study digital storytelling had the greatest effect on the Technological Pedagogical Content sub-factor, whereas in Sancar-Tokmak and Yanpar-Yelken (2015) there was no significant difference under this factor. The pre-service teachers evaluated digital storytelling as powerful, especially in terms of enriching the learning environment. While expressing this, the pre-service teachers, who care about integrating technology into the learning environment, emphasized the importance of technological pedagogical knowledge. In this respect, digital storytelling appears to be a strong alternative for integrating technology into the educational environment (Sadik, 2008). The pre-service teachers emphasized that they can make their lessons more enjoyable with digital storytelling, enable their students to be active in the lessons, and develop their creativity. This finding closely matches the findings of Karataş, Kukul, and Özcan (2018). Students' engagement with the lesson can be expected to increase as they have fun in the lessons (Nam, 2017). Although ICT pre-service teachers consider digital storytelling an easy process for themselves, they think it can be a time-consuming and difficult process for other teachers. This finding coincides with the studies of Karataş, Kukul, and Özcan (2018) and Yiğit (2020). Pre-service teachers generally think that processes other than traditional methods are difficult and time-consuming. Although they think that digital storytelling will be useful in different lessons (Islim, Ozudogru, & Sevim-Cirak, 2018; Kocaman-Karoglu, 2014), they do not believe that it is applicable. There may be different reasons for this: • First, pre-service teachers have not taken any courses taught with anything other than traditional methods, either before or after arriving at university; • the way the courses in university education are implemented; • teachers' technological knowledge and skills are not sufficient.

Conclusion

Digital storytelling has a positive effect on pre-service teachers' TPACK levels and teaching self-efficacy. Many studies in the literature overlap with this result (Nam, 2017; Sancar-Tokmak et al., 2014; Sancar-Tokmak & Yanpar-Yelken, 2015). Besides, many studies have put forward its positive effects on students (Nam, 2017; Yang & Wu, 2012; Islim, Ozudogru, & Sevim-Cirak, 2018; Karataş, Kukul, & Özcan, 2018). However, in some studies, as in this one, pre-service teachers perceive digital storytelling as difficult to apply in a classroom environment for different reasons (Karataş, Kukul, & Özcan, 2018). This can be seen as an essential obstacle that must be overcome for successful technology integration. Why do pre-service teachers think that digital storytelling is a time-consuming and complicated process? Because they come from traditional learning environments and, unfortunately, as they have stated, this situation does not change at university.
Thus, pre-service teachers can overcome this prejudice if they see more lessons conducted with unconventional methods. Pre-service teachers can also be given opportunities to practice teaching methods that use new technologies within teacher-training-program courses such as special teaching methods and teaching principles and methods. In this way, the process can become easier for pre-service teachers who find more opportunities for practice. Another reason behind the pre-service teachers' perception of digital storytelling as a problematic process may be a lack of technological knowledge and skills. It is known that pre-service teachers' technology use is mostly limited to social media until they come to university (Islim, Ozudogru, & Sevim-Cirak, 2018). Their inability, or fear of being unable, to cope when faced with new technologies can also cause them to consider the process difficult. To prevent this, increasing the digital literacy of pre-service teachers may be beneficial. In addition, if the use of technology for different purposes can be expanded among younger age groups, the technology knowledge and skills of future pre-service teachers will increase.
Circulating tumor cells as a prognostic and predictive marker in gastrointestinal stromal tumors: a prospective study

Background: Circulating tumor cells (CTC) are prognostic and predictive for several cancer types, but only limited data exist regarding the prognostic or predictive impact of CTC in gastrointestinal stromal tumor (GIST) patients. The aim of our study was to elucidate the role of CTC in GIST patients.

Results: A total of 121 GIST patients and 54 non-GIST samples were enrolled in the study. The cutoff value for ANO1 positivity was 3×10^-5, and 65 (54%) GIST patients were defined as ANO1-positive. ANO1-positive cells were more frequently detected in unresectable patients. Tumor size, mitotic count, and risk level were associated with ANO1 detection in resectable GIST patients. The presence of ANO1 significantly correlated with poor disease-free survival (15.3 versus 19.6 months, p = 0.038). Most patients turned ANO1-negative after surgery and, inversely, all 21 patients with recurrence turned ANO1-positive with high ANO1 expression levels. Moreover, in the neoadjuvant setting, a decline in ANO1 expression level correlated with the response to imatinib.

Methods: Cells from peripheral blood mononuclear cells that tested positive for anoctamin 1 (ANO1, also known as DOG1), a calcium-activated chloride channel, were considered tumor CTC of GISTs. The expression levels of ANO1 were determined using quantitative real-time polymerase chain reaction (qRT-PCR). The highest level of ANO1 expression in non-GIST samples was used as the "cutoff" value.

Conclusion: ANO1 detection by qRT-PCR in peripheral blood has clinical potential for monitoring recurrence and evaluating the therapeutic efficacy of imatinib in GIST patients.

INTRODUCTION

Gastrointestinal stromal tumors (GISTs) are the most common mesenchymal tumor of the digestive system, with an incidence of about 15 cases per million per year [1]. GISTs are believed to originate from the interstitial cells of Cajal (ICC) in the normal bowel wall or from precursors of these cells [2]. Recently, anoctamin 1, calcium-activated chloride channel (ANO1), previously called DOG1 (Discovered On Gastrointestinal tumor protein 1), was found to be widely expressed in GIST, even in c-KIT-negative tumors, and has been shown to be a sensitive and specific immunohistochemical marker for GIST [3,4]. However, even with complete resection, the rate of recurrence may be as high as 33% within five years [10,11]. Moreover, with imatinib treatment, 20% of GIST patients experience tumor growth within the first 6 months [12]. Apart from imaging, however, there are no reliable biological tools to follow the disease status over time. In recent years, circulating tumor cells (CTCs) detected non-invasively in "liquid biopsies" have been widely discussed in the field of cancer monitoring [13]. In several cancer types, such as breast, prostate, colon, and lung cancer, CTCs have shown a significant correlation with clinical outcome [14-18]. Technological advances have enabled further study of CTCs as prognostic and predictive markers [19]. However, the characteristics of CTCs in patients with GIST remain unknown. In the current study, we investigated the feasibility of detecting ANO1 based on its expression in the peripheral blood of patients with GIST and determined the correlation between the presence of ANO1 and the clinical outcome of GIST.

Clinicopathological characteristics

A total of 121 GIST patients were included. All GISTs were determined to be ANO1-positive.
Of these patients, 26 received neoadjuvant imatinib for 6-12 months before surgery according to the National Comprehensive Cancer Network (NCCN) guideline [20]. Seventeen of these 26 GIST patients had an R0 resection after imatinib, while 9 had progressive disease. A total of 112 patients, including 68 men and 44 women, underwent surgery. Of these, 46 cases had disease located in the stomach (41.1%), 54 in the small intestine (48%), 4 in the colorectum (3.6%), 6 in the abdominal cavity (5.4%), and 2 in the mesenterium (1.8%). According to the Fletcher risk classification, 52 of these 112 GIST patients were characterized as high risk [20]. The study flowchart is shown in Figure 1.

ANO1 is a specific marker of CTC in GIST

To analyze the expression level of ANO1 in PBMC from GIST patients, we established the range of expression levels of ANO1 in non-cancer healthy donors, gastric carcinoma patients, and colorectal carcinoma patients. The levels of ANO1 transcripts in the different samples were calculated relative to that of the housekeeping gene beta-actin. The highest expression levels of ANO1 transcripts relative to beta-actin were 3×10^-5, 2.2×10^-5, and 3×10^-5 in 10 non-cancer healthy donors, 21 gastric carcinoma patients, and 23 colorectal carcinoma patients, respectively (Figure 2). Thus, the value of 3×10^-5 was used as the "cutoff" value to determine whether GIST patients have ANO1 in their PBMC samples. In our study, 65 GIST patients were defined as ANO1-positive.

High ANO1 correlated with high risk, large tumor size, and high mitotic count

In the analysis of preoperative blood samples, 65 (54%) of 121 GIST patients were ANO1-positive, including 26 locally advanced GIST patients who received imatinib treatment before surgery (Figure 3A and 3B). The expression levels of ANO1 in PBMC from locally advanced GIST patients were significantly increased, and the positive rate of ANO1 was significantly higher than that in patients with resectable GISTs (73.1% versus 54%, p < 0.001). Expression levels of ANO1 were significantly associated with tumor size, mitotic count, and risk level: both the expression levels and the positive rates of ANO1 were significantly higher in patients with large tumor size, high mitotic count, and high risk (Figure 3C-3H). Linear regression also confirmed significant correlations between ANO1 expression and tumor size (r² = 0.3246, p < 0.0001; Figure 3E) and between ANO1 expression and mitotic count (r² = 0.0379, p = 0.008; Figure 3F). There was no association between ANO1 and gender, tumor location, morphology, or Ki-67.

Prognostic role of ANO1 in GIST

For the 112 patients with surgery, we tested ANO1 status before surgery and four weeks after surgery. There were 58 (51.8%) patients with positive ANO1 preoperatively, of whom only seven remained ANO1-positive postoperatively (Figure 4A). The mean follow-up time was 38 (0-50) months. During the follow-up period, 21 (18.8%) of the 112 GIST patients had recurrence after surgery, including 16 (76.2%) in the liver and 5 (23.8%) in the peritoneal cavity. The median time to recurrence was 17.6 (6.4-47.6) months. Furthermore, the seven patients with consistently positive ANO1 had liver metastasis after surgery (Figure 4B). All 21 patients with recurrence were, or became, ANO1-positive (Figure 4B, Table 2). The expression levels of ANO1 in patients with recurrence were significantly higher than those in patients without recurrence (Figure 4C).
In addition, the postoperative expression levels of ANO1 in GIST patients with liver metastasis were significantly higher than in those with peritoneal cavity metastasis (Figure 4D). No patient died during the follow-up. The disease-free probability at 50 months was 77.6% for GIST patients with positive ANO1 and 86.2% for those without ANO1. The presence of ANO1 predicted significantly poorer disease-free survival (15.3 versus 19.6 months, p = 0.038) (Figure 4E). Moreover, multivariate Cox regression analysis indicated that the ANO1 copy number in PBMC was an independent prognostic factor for GIST patients (Table 3).

Predictive role of ANO1 for the response rate of neo-adjuvant imatinib

We evaluated the efficacy of imatinib treatment according to the Response Evaluation Criteria in Solid Tumors (RECIST) after three months of neoadjuvant treatment. Of the 26 GIST patients who needed imatinib preoperatively, no patient had a complete response, seven had a partial response (PR, 26.9%), 10 had stable disease (SD, 38.5%), and 9 had progressive disease (PD, 34.6%). The DCR (CR+PR+SD) was 65.4%. We tested the expression of ANO1 in PBMC before and after imatinib treatment (Figure 5). The 17 patients with disease control (PR+SD) showed a decline in ANO1 expression levels, whereas levels increased in the patients with progressive disease.

DISCUSSION

In the present study, we show for the first time that the ANO1 copy number in PBMC is a strong prognostic factor for disease-free survival and predictive of the therapeutic efficacy of imatinib in GIST patients. Recently, CTC detection has become an important field of biomedical research and has emerged as an early marker of tumor recurrence, detectable before clinical symptoms present, in various types of tumor [13,21-23]. However, research on CTC in GIST patients is scarce. Originating from mesenchymal cells, GISTs express unique molecules, among which c-KIT and ANO1 have proven to be key biomarkers. c-KIT (also known as CD117) is a type III receptor tyrosine kinase that plays important roles in hematopoiesis, melanogenesis, and gametogenesis by binding its ligand, stem cell factor (SCF) [24,25]. Given that there are circulating c-KIT-positive normal cells, including hematopoietic stem cells, c-KIT cannot be used for GIST CTC detection [24]. ANO1, a calcium-activated chloride channel that mediates receptor-activated chloride currents in diverse physiologic processes, is rarely overexpressed in other mesenchymal or non-mesenchymal tumors [26,27]. Most studies of ANO1 focus on cancers derived from epithelial tissue, in which ANO1 is non-specific [13,28-31]. GISTs, however, originate from mesenchymal tissue, and ANO1 is a highly sensitive and specific marker for them. Thus, we deduced that CTC detection in PBMC by quantifying ANO1 could be a viable path. Here, we showed for the first time that the copy number expression of ANO1 in PBMC from GIST patients was significantly higher than that in non-GIST patients. Importantly, significantly higher levels were detected in unresectable patients and were associated with large tumor size, high mitotic count, high risk level, and poor disease-free survival. Since ANO1 levels dropped after GIST surgery and increased at recurrence, it seems likely that this marker can be used for postoperative monitoring and subclinical detection of recurrence. Targeted therapies have improved the treatment and survival of cancer patients over the past decade [31,32]. Imatinib mesylate is the standard first-line therapy for unresectable or metastatic GIST [33].
Currently, there is no non-invasive test to monitor tumor response or progression. The optimal time point for surgery in this setting is also unclear, since it is not known when the maximum effect of neo-adjuvant treatment occurs. Here, we observed a decline of ANO1 levels in patients who received neo-adjuvant imatinib treatment, while levels increased in progressing patients. These results indicate that ANO1 detection could serve as a supplementary approach to evaluate the efficacy of imatinib treatment in GIST patients, as well as a way of using the ANO1 nadir to help define the optimal timing of surgery. This study has some limitations. The number of participants could have been higher, but for such a relatively rare disease the number is not insignificant. qRT-PCR does not provide visualization of ANO1, so one can argue that tumor cells have only been detected indirectly. However, even visualization by, for example, immunohistochemistry would give semi-quantitative information, while qRT-PCR is a quantitative method. In summary, these data suggest CTC detection by ANO1 as a potentially useful prognostic and predictive biomarker in GIST patients that may help to further stratify risk status within different stages of disease and to monitor recurrence and metastasis. Extended studies of the characteristics of ANO1 in GIST patients are needed to establish this as a clinical method.

Patients

A total of 121 GIST patients were enrolled in our study at the Affiliated Hospital of Nantong University and the First Affiliated Hospital of Nanjing Medical University from 2011 to 2015. In addition, 10 healthy volunteers, 21 gastric cancer patients, and 23 colorectal cancer patients were included. All resectable patients had a pathological diagnosis of GIST following surgical resection that met histological or cytological criteria. Imatinib response evaluation was performed with computed tomography (CT) scans according to the Response Evaluation Criteria in Solid Tumors (RECIST). Response was categorized as complete response (CR), partial response (PR), progressive disease (PD), or stable disease (SD). Informed written consent was obtained from all patients, and the study was approved by the ethics committees of the Affiliated Hospital of Nantong University and the First Affiliated Hospital of Nanjing Medical University. Adjuvant treatment was given according to current treatment guidelines after obtaining interdisciplinary consensus for each patient. Reporting of the present study was in accordance with the REMARK guidelines [34].

Extraction of mononuclear cells from peripheral blood

Nearly 10 ml of peripheral blood was collected in EDTA vacuum tubes after discarding the first 2 ml of blood to avoid contamination of the blood sample with epithelial cells of the skin. Peripheral blood mononuclear cells (PBMC) were isolated by density gradient centrifugation using Lymphocyte Separation Medium (Tianjin, China). The mononuclear cells were washed twice with RPMI 1640 medium (1x) (Invitrogen, Carlsbad, CA), centrifuged at 1500 rpm for 8 min, and then stored at -80°C until needed.

Extraction of RNA from PBMC and synthesis of cDNA

RNA was extracted from isolated PBMC using TRIZOL reagent (Invitrogen, Carlsbad, CA) according to the manufacturer's instructions. After quantification, the RNA was used for cDNA synthesis with a RevertAid First Strand cDNA Synthesis Kit (Shanghai, China) according to the protocol.
The 20 µl reaction mixture was incubated at 42°C for 60 minutes and then heated to 72°C for 5 minutes to inactivate the reverse transcriptase; the mixture was stored at -20°C.

Quantitative real-time polymerase chain reaction

Quantitative real-time polymerase chain reaction (qRT-PCR) was performed using FastStart Universal SYBR Green Master (Rox) (Roche Diagnostics GmbH, Mannheim, Germany). According to the protocol, 20 µl reaction volumes were run, each containing 2 µl of cDNA. The qRT-PCR experiments were performed in 96-well plates on an ABI Prism 7500 (ABI, California, USA). Cycling parameters were as follows: hot start at 95°C for 10 min, then 45 cycles of amplification with quantification at 95°C for 15 s, 58°C for 1 min (during which fluorescence was measured), and 72°C for 30 s. Melting curve analysis was performed using continuous fluorescence acquisition from 65°C to 97°C. These cycling parameters generated a single amplification product for the primer set used, as indicated by a single melt peak. Beta-actin was selected as the internal reference. Each sample was processed in triplicate. Primer sequences, designed on the basis of the published human gene sequences, were as follows: ANO1 (sense: 5'-AGCCACCTCTTCGACAAC-3', anti-sense: 5'-GACAGCCTCCTCTTCCTCT-3') and beta-actin (sense: 5'-TACTTGCGCTCAGGAGGAGCAA-3', anti-sense: 5'-GTCCTGTGGCATCCACGAAACT-3'). Gene expression levels were calculated as 2^(-ΔCt), where ΔCt = Ct(target) - Ct(beta-actin).

Statistical methods

Statistical analysis was conducted using SPSS 17.0 (SPSS, Chicago, USA). Data are presented as mean ± SD. The association of ANO1 detection with clinicopathological variables was evaluated using the chi-square test. Survival curves were constructed according to the Kaplan-Meier method and compared using the log-rank test. A p-value < 0.05 was considered to indicate a statistically significant difference.

ACKNOWLEDGMENTS

We thank the patients and their families for participation in this study. We thank Shenghua Jiang for his excellent technical assistance.

CONFLICTS OF INTEREST

The authors declare that there is no conflict of interest.

GRANT SUPPORT

This work was funded by the National Natural Science Foundation of China (81503160) and the Scientific Innovation Foundation of Nantong (HS2014043).
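To make the quantification in the Methods concrete, the following is a minimal sketch of the 2^(-ΔCt) calculation and the cutoff-based classification described above. The 3×10^-5 cutoff is the value reported in the study, but the triplicate Ct values are hypothetical and for illustration only.

```python
CUTOFF = 3e-5  # highest ANO1 level observed in the non-GIST samples

def relative_expression(ct_target: list[float], ct_reference: list[float]) -> float:
    """2^(-dCt) relative expression, with dCt computed from mean Ct values."""
    d_ct = sum(ct_target) / len(ct_target) - sum(ct_reference) / len(ct_reference)
    return 2.0 ** (-d_ct)

# Hypothetical triplicate Ct values for one patient sample.
ct_ano1 = [33.1, 33.4, 33.2]   # target gene (ANO1)
ct_actin = [18.9, 19.0, 18.8]  # beta-actin internal reference

expr = relative_expression(ct_ano1, ct_actin)
status = "positive" if expr > CUTOFF else "negative"
print(f"2^(-dCt) = {expr:.2e} -> ANO1 {status}")
```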
Designing a human resource scorecard: An empirical stakeholder-based study with a company culture perspective

Human resource management (HRM) in public organizations managed with a balanced scorecard requires a different narrative on the map of strategic goals than in private organizations. However, this issue is not widely recognized and discussed. This study aims to identify strategic goals and outline an HRM strategy with a stakeholder approach, from a corporate culture perspective, based on a balanced scorecard, by examining and highlighting the areas that should be included in the revised narrative. This exploration was carried out through qualitative research, particularly a thematic analysis based on data from the Kish Free Zone Organization. Using the themes obtained, a human resources strategy map was then presented based on a balanced scorecard. The six-step process of Braun and Clarke and the three-step thematic classification method of Attride-Stirling were combined into a thematic network, creating a seven-step research process. Data were collected through interviews with the stakeholders of the Human Resources (HR) unit: (1) HR employees, (2) employees of other units, (3) senior and middle management, (4) families of employees, (5) HR departments of related companies, (6) retirees, and (7) customers of this unit. To identify the strategic goals and the human resource strategy map, 187 basic themes, 39 organizing themes, and 12 global themes were identified after transcription of the interviews, including (1) developing family-centered policies, (2) promoting the welfare, health, and well-being of employees, (3) improving the productivity of the HR department, (4) promoting the human dignity of the staff, (5) developing an organizational culture based on customer orientation and innovation, (6) empowering employees, (7) developing the HR information system, (8) strategic recruitment and retention of employees, (9) performance management and employee development, (10) strategic transformation of HRM based on research and process reform, (11) aligning the allocation and use of the HR budget with the organization's strategy, and (12) improving the settlement mechanism for the personnel budget. This study is innovative conceptually, due to the proposed approach of redesigning the strategy map and the balanced scorecard from a human resource management perspective, and methodologically, due to the adoption of a combined thematic analysis process, the construction of related narratives, and the stakeholder approach from a corporate culture perspective.

INTRODUCTION

Free zone organizations are a very important sector for developing the economy and independent trade and for strengthening international relations. However, as part of the government, this sector has also faced employee demotivation as well as the negative public opinion attached to the public sector (Mendes, Santos, Perna, & Teixeira, 2012). Some CEOs and senior line managers are skeptical about the role of human resources in their companies' success. Meanwhile, many executives, despite believing that "human resources are the most valuable asset of an organization," cannot understand how human resources functions help make the envisioned organizations a reality. The problem is rooted in the fact that it is difficult to measure the impact of human resource functions on an organization's performance and success (Becker, Huselid, & Ulrich, 2001).
People are the company (Kucharska & Kowalczyk, 2019; 2020), and employees are one of the key groups of stakeholders (Phillips, 2003; Phillips, Freeman, & Wicks, 2003). Recently, Kianto, Vanhala, Ritala, and Hussinki (2020) strongly highlighted the advantageous consequences of intellectual capital for various aspects of organizational performance. Moreover, Kucharska (2020) showed that employee commitment matters for a company's reputation and performance. Strategic HRM in the public sector is now receiving attention (Guo, Brown, Ashcraft, Yoshioka, & Dennis Dong, 2011) because the contemporary public management movement focuses on increasing accountability and efficiency, and because of the growing recognition of the importance of human resources, innovation, cost control, and organizational members' participation. Nevertheless, little research has so far addressed the use of the human resource scorecard in the public sector; our knowledge of it is therefore limited and the research gap is tangible. Given these facts, the purpose of this qualitative research is to identify the strategic objectives and strategy map of HRM in the Kish Free Zone Organization (KFZO) using a balanced scorecard approach based on thematic analysis and from a company culture perspective. KFZO's fundamental objectives are conducting the needed infrastructural works on Kish Island (an island in Iran), contributing to constructive development, improving economic development, generating useful job opportunities, attracting both domestic and international tourists and investors, regulating both employment and commodity markets, facilitating an active presence in the world market to develop non-petroleum exports, arranging conditions for producing industrial products, launching processing industries, and, finally, taking advantage of the special opportunities of Kish Free Island. The organization's bodies include the general assembly, the board of directors, the managing director, the chairman of the board of directors, and legal inspectors. Accordingly, the cultural context of this organization encompasses economic, social, and political aspects. Considering the main stakeholders of HRM in this organization, the data are first collected and then, using the objectives corresponding to the four perspectives of the balanced scorecard (financial, stakeholders, internal processes and functions, and employee development), the themes are identified by thematic analysis. Moreover, Hofstede and Minkov (2010) noted that national cultural context might influence the results of organizational studies. Hence, this study may illustrate how the Iranian context of the Kish Free Zone Organization may shape strategic human resource management through the design of a human resource scorecard.

LITERATURE REVIEW

HRM in the public sector differs in major ways from HRM in the private sector (Boselie, Harten, & Veld, 2019). Although many HRM activities and processes are the same in both, public sector issues always present challenges and contradictions for HRM (Berman, Bowman, West, & Van Wart, 2010; Knies, Boselie, Gould-Williams, & Vandenabeele, 2018). The concept of strategic HRM in the public sector gained importance when new public management appeared in the 1980s. New public management (NPM) theorists promoted a requirement for flexibility, innovation, managerialism, and responsiveness within the public sector, which challenged the essential principles of bureaucratic/mechanistic organizational forms
New public management (NPM) theorists rose to progress a requirement for flexibility, innovation, managerialism, and responsiveness within the public sector, which challenged the essential principles of bureaucratic/mechanistic organizational forms Hasan Boudlaie, Hannan Amoozad Mahdiraji, Sabihe Shamsi, / Vahid Jafari-Sadeghi, Alexeis Garcia-Perez (Funck & Karlsson, 2019). With the advent of new public management, staff development is possible through advanced HRM techniques (Hajiagha, Akrami, Hashemi, & Amoozad, 2015;Hajiagha, Hashemi, Mahdiraji, & Azaddel 2015;Hood, 1995;Lapsley & Wright, 2004). Several factors in the public sector that may influence the adoption of a strategic HRM approach (Brunettov & Beattie, 2020). • First, the multiplicity and diversity of its objectives, the complexity of performance measurement, and the tendency for conflicts between various goals and stakeholders make strategic management as well as the achievement of the vertical and horizontal integration more difficult (Arnaboldi, Lapsley, & Steccolini, 2015). • Second, public management is subject to scrutiny or regulatory bodies created by the legislature (Biancone & Jafari-Sadeghi, 2016). Such a situation frequently limits executive and administrative autonomy in achieving a strategic approach. • Third, the political environment may affect the implementation of strategic HRM because successful HRM in the public sector needs the support from top managers and political support (Rainey, 2009). Therefore, in countries with relatively high political instability and frequent political changes, the limited time horizons of political leaders can lead to strategic HR policies' failure. • Another problem is the difference in HRM approaches at the level of central organizations and headquarters with operational centers. The strategic alignment between strategic HRM and the particular environment in which it is applied is important. Taking everything into consideration, it can be said that the implementation of strategic HRM in a particular country is influenced by a set of political, social, economic, and cultural factors that are interconnected (Jarvalt & Randma-Liiv, 2010). Performance management in the public sector can lead to various political as well as managerial purposes that affect each other (Wang, Zhu, Mayson, & Chen, 2019). • First, the definition of the missions and clear objectives help each employee understand what the organization desires and provides a concentration on the operations (communication purpose) (Niven, 2006 Blackman, O'Donnell, O'Flynn, & West, 2015;Hajiagha, Mahdiraji, Zavadskas, & Hashemi, 2014). • Fourth, the performance measurement systems can provide a basis for the compensation of public officials (appraising purpose) (Armstrong, 2000;Jamalnia, Mahdiraji, Sadeghi, Hajiagha, & Feili, 2014). The specification and intensive monitoring of performance, coupled with a set of incentives and sanctions, can be used to ensure the public sector managers continue to act in line with the interests of the society (Beheshti, Mahdiraji, & Zavadskas, 2016;Jafari-Sadeghi, 2019;Verbeeten, 2008). Considering what is said, the strategic HRM and employee performance management in the public sector needs to maintain a coherent and effective approach. Seeking to apply appropriate private sector models in the public sector, the new public management introduces the balanced scorecard model (Maran, Bracci, & Inglis, 2018). 
Although this model was first introduced for the private sector, Kaplan and Norton (2001a) presented a modified version for the public sector. With its four perspectives, the model appreciates the complexity of many public organizations and offers additional measures; moreover, it is not limited to the key perspectives provided by Kaplan and Norton (Arnaboldi, Lapsley, & Steccolini, 2015; Hansen & Schaltegger, 2016; Jafari-Sadeghi & Biancone, 2017b). The balanced scorecard is a strategic planning and management system that aligns business activities with the organization's vision and strategy, improves internal and external communications, and monitors the organization's performance against strategic objectives. The balanced scorecard can be used as a communication tool, a measurement system, and a strategic management system (Ahn, 2001; Becker & Huselid, 2006; Jia, Mahdiraji, Govindan, & Meidutė, 2013; Mahdiraji, Arabzadeh, & Ghaffari, 2012; Malina & Selto, 2001; Niven, 2006; Rezaei, Jafari-Sadeghi, & Bresciani, 2020). Kaplan and Norton suggest that an effective way to implement a balanced scorecard is to use a strategy map. The strategy map outlines the causal relationships between strategic objectives and serves as a starting point for balanced scorecard projects. Like a balanced scorecard, the strategy map includes four perspectives (Kaplan & Norton, 2008). Niven (2006) emphasizes that, in the public sector, the financial perspective is not the main target but a limited resource by which the mission is accomplished. Considering performance from different perspectives based on various objectives and stakeholders (McAdam, Hazlett, & Casey, 2005; Messeghem, Bakkali, Sammut, & Swalhi, 2018), a balanced scorecard in the public sector serves as a tool for linking performance management goals with the objectives of the public organization (Bobe, Mihret, & Obo, 2017; Modell, 2004). Performance management is more difficult in the public sector than in the private sector because the social and political environment is more complex (Brignall & Modell, 2000; Hoque, 2014; Mahdiraji, Govindan, Zavadskas, & Razavi Hajiagha, 2014) and meeting the needs of the community is of utmost importance. Therefore, the client/customer perspective is placed at the highest level (Aidemark, 2001; Kaplan & Norton, 2001b; Mahdiraji, Kazimieras, & Razavi, 2015). The public sector strategy map follows a top-down, cause-effect hierarchy (Moullin et al., 2007) and is read as follows: the financial perspective provides the necessary means for human capital, productivity, organizational capacity, and information in the learning and growth perspective; this, in turn, enables the work needed for the success of the critical factors in the internal processes perspective and, ultimately, for the client perspective (Mahmoudi, Mahdiraji, Jafarnejad, & Safari, 2019; Mathys & Thompson, 2006; Mendes, Santos, Perna, & Teixeira, 2012). Irwin (2002) argues that the customer perspective is determined by defining the organization's stakeholders when the strategy map is drawn based on the identification of the organization's strategy. In public sector organizations, labels such as "customer," "consumer," "client," "user," "stakeholder," "citizen," "taxpayer," or "the public" are mostly used to describe this term (Cunningham, 2016).
However, this perspective is not completely described by the identification of a customer alone. Depending on the nature of the activity, customers/clients may be divided into several categories (Conaty & Robbins, 2018). The balanced scorecard in the public sector replaces the terms "customer" and "internal processes" with "stakeholder" and "operational excellence," respectively. Moreover, the term "growth" is omitted from the innovation and learning perspective, since it may be misleading if it is simply understood as growth in physical or monetary terms. Generally, the balanced scorecard model in nonprofit organizations is not limited to the four main performance dimensions (Grigoroudis, Orfanoudaki, & Zopounidis, 2012; Mokhtarzadeh, Mahdiraji, Beheshti, & Zavadskas, 2018). By employing the four-dimensional balanced scorecard model in human resource management, the perspectives and strategic objectives of the HR scorecard model were developed and the strategy map was constructed. Both the HR scorecard and the balanced scorecard include objectives, measures, initiatives, action items, and strategy maps designed around several perspectives. Generally, both are applied to describe a specific strategy and to execute it. While for-profit organization scorecards traditionally place the financial perspective at the top of the strategy map, an HR scorecard usually does not, considering that the HR department's primary goal is not to make a profit but to support its "customers," which are typically internal to the organization. Besides, since the balanced scorecard in HR is more likely to have an internal perspective that revolves around the key strategic areas in which the department operates, the internal perspective themes in an HR scorecard are distinct from those of traditional scorecards (Cunningham, 2016; Kaplan & Norton, 2006). An HR balanced scorecard helps HRM to prioritize capabilities and provides an appropriate approach for managers and employees. Its advantage is to show the priorities of human resources and how they are connected; in addition, by communicating these priorities, it enables managers to determine the goals of human resources in future periods (Balogh & Golea, 2015; Jafari-Sadeghi, Biancone, Giacoma, & Secinaro, 2018). The HR scorecard aligns business strategy with the objectives and outcomes desired and expected of human resources in order to provide a statistical basis for measuring HR efficiency and its impact on the implementation of the organization's strategy (Becker, Huselid, & Ulrich, 2001; Jafari-Sadeghi & Biancone, 2017a). To achieve the strategic objectives of a public sector organization, it is first necessary to define the client, financial, process, and learning and growth objectives, together with the activities to implement them, using the HR scorecard. Specifically, HR managers can ask which HRM practices, skills, and behaviors help line managers implement the organization's strategic objectives (Cunningham & Kempling, 2011). • The customer or client perspective. Internal and external clients are considered customers. In public sector organizations, the external clients include citizens, and the internal clients include groups that receive services within the organization (Jafari-Sadeghi, Kimiagari, & Biancone, 2020; Soysa, Jayamaha, & Grigg, 2019).
Most HRM clients are internal to the organization and include the line managers and employees who rely on HRM to perform their duties in response to external clients. • The financial perspective. Timely and accurate financial data are always a priority, because financial objectives and measures are helpful in summarizing the outcomes of budgetary expenditures. • The internal processes perspective. In the internal processes perspective, managers identify the internal processes that the organization needs to develop. These processes enable the organization to provide its services effectively (Cunningham & Kempling, 2011). • The learning and growth perspective. There is a direct relationship between the effectiveness of HRM and the quality of the work of HR staff. Hence, by encouraging and continuously training employees for learning and innovation, organizations can achieve long-term development. This perspective is mainly related to training human resources staff, helping to meet customer needs, optimizing internal processes, and achieving overall objectives (Qingwei, 2012). To develop a human resource scorecard in public sector organizations, customer objectives are recognized first; then effective processes are considered, in addition to effective financing. The learning and growth perspective recognizes that these objectives rely on a human component: motivation, training, and the appropriate identification of competencies (Cunningham, 2016). New public management principles have promoted a more flexible and responsive approach to the recruitment, selection, retention, training, and development of public sector employees. The new models of HRM in the public sector introduced the concept of using human resources to achieve performance outcomes in line with the strategic direction of the public sector organization (Brown, 2004). Along with the emergence of new public management, governments' changing structures and operations replaced the traditional Weberian model of centralized and bureaucratic practices with private-sector HRM systems. New public management has led to a strategic approach to HRM in the public sector, and a new concept of "best practices" has arisen, called the "high-performance work system" (El-Ghalayini, 2017). The core of HRM is to achieve the strategic objectives of the organization. Because public sector organizations are not profit-driven, their ultimate objectives are the quality and effectiveness of their services. Therefore, in the HR scorecard, the client perspective relates to internal customers, that is, the employees of the organization (Qingwei, 2012). From the above it follows that strategic HRM in the public sector needs significant concepts and means to illustrate the contribution of this unit to value creation in the organization. The use of a balanced scorecard in public sector HRM can effectively create responsiveness and accountability regarding this unit's performance. Additionally, by mapping strategic objectives and implementing them, the position of the HR unit can be elevated to that of a strategic partner of the organization. Here, some of the most important relevant studies are reviewed.
Balogh and Golea (2015) presented an HR scorecard model and argued that the indicators in the scorecard should be calculated as predetermined values versus actual values to facilitate the identification of the causes of differences; this also facilitates decision-making on how to eliminate the causes that influence performance. Anwar, Djakfar, and Abdulhafidha (2012) and Jafari-Sadeghi, Jashnsaz, and Honari Chobar (2014) analyzed organizational performance with a balanced scorecard approach and developed indicators accordingly to evaluate employees' behavior, attitude, skills, and knowledge. Iveta (2012) presented the possibilities of using the modern balanced scorecard method for human capital and identified that one of an organization's primary goals should be to have a manageable and sustainable HR scorecard with visible and measurable key performance indicators. The key performance indicators in that research included all possible internal and external aspects of HR strategy, aiming to achieve a more significant organizational approach. Using a balanced scorecard and based on the Delphi method, Qingwei (2012) identified a set of indicators to evaluate HRM effectiveness in a hospital, combined with the hospital's human resource characteristics. Furthermore, Boada-Grau and Gil-Ripoll (2009) studied performance indicators by examining the relationship between strategic HRM in organizations and three perspectives (customer, financial, and process) of the balanced scorecard. They recognized the indicator of "values and culture" as having the greatest predictive capability among strategic HRM indicators, and they identified that the strategic HRM variables were more predictive for the process and customer perspectives than for the financial perspective. Fottler, Erickson, and Rivers (2006) developed an HR scorecard in a clinic and presented a considerable number of internal and external indicators for the financial, customer, internal processes, and growth and learning perspectives. Considering their role in achieving the strategic objectives, the management team identified these indicators for each of the above perspectives by modeling, and the four perspectives were defined in line with the clinic's mission. Using the HR scorecard, Shankari and Suja (2008) analyzed the performance of the strategic business units (luxury, business, and leisure) of the Taj Group of Hotels with specific reference to the financial perspective, aiming to maximize human capital and minimize HR costs. Cunningham and Kempling (2011) studied the promotion of organizational fit in strategic HRM using the HR scorecard in two public sector organizations. Reviewing the relevant literature, it can be seen that HRM researchers have sought to answer the question of whether HRM plays its role in the organization efficiently and effectively. Accordingly, the need for performance management is recognized, and the need for a tool to determine the alignment of HRM with the organization's objectives and strategy becomes evident. The HR scorecard represents an approach that allows performance to be aligned with strategic objectives.
The literature review showed that the HR scorecard is used as a communication tool (Balogh & Golea, 2015; Phuong & Harima, 2019), as a system for measuring performance (Anwar, Djakfar, & Abdulhafidha, 2012; Jafari-Sadeghi & Biancone, 2019; Shankari & Suja, 2008), and as a system for implementing strategy (Bryl, 2018; Cunningham & Kempling, 2011; Qingwei, 2012; Reidolf & Graffenberger, 2019). Given the role of the balanced scorecard in managing performance and helping to realize the organization's strategy, especially in the public sector, the present research uses the balanced scorecard approach as a communication tool, together with content-based qualitative analysis, to present a strategy map of the HR unit's performance by identifying strategic objectives for the four perspectives of the HR scorecard.

METHODOLOGY

Using qualitative research, the present study seeks to create a map of HRM's strategic objectives based on the balanced scorecard with a stakeholder approach from the perspective of company culture. The research participants included all the stakeholders of the HR unit of the Kish Free Zone Organization, who were selected based on the researchers' judgment. The data were collected through semi-structured interviews. The stakeholders of this unit include (1) HR unit employees, (2) employees of other units, (3) senior and middle managers, (4) families of employees, (5) HR units of affiliated companies, (6) retirees, and (7) the clients of this unit. The interview questions were formulated based on the four perspectives of the balanced scorecard. Given that not every stakeholder group was related to all four perspectives of the scorecard, a specific interview pattern was designed for each group so that each group only responded to the parts relevant to it, as presented in Table 1. In Table 1, each group is marked according to whether it has, does not have, or may have the necessary information for the corresponding perspective; as a case in point, the families of employees were asked to respond only to the questions related to the customer perspective. Eventually, in the process of collecting data from the seven stakeholder groups of this unit, 21 interviews were conducted between 21 November 2017 and 13 December 2017. To investigate the interpretive validity of the results, 14 of the 21 participants were asked, after the completion of the interviews and the analysis of the data, whether the interviewer's interpretation conformed to their views; thereby, the accuracy of the research results was verified by the participants. The researchers collaborated in reviewing and verifying the results, exchanging suggestions on how to conduct the interviews, the computer analysis, and the categorization. Moreover, to provide confidence, the research processes were documented, and detailed notes and reports on the results were prepared: the collected data were digitally recorded, MAXQDA software was used, and successive reports were prepared at each stage of the analysis.
DATA ANALYSIS To analyze the data in the present study, the six-stage process of Braun and Clarke (2006) and the three-stage thematic classification method of Attride-Stirling (2001) were combined to form the thematic network, and a seven-stage process was created. Thematic networks systematize the extraction of the lowest-order premises evident in the text (basic themes); categories of basic themes grouped to summarize more abstract principles (organizing themes); and super-ordinate themes encapsulating the principal metaphors in the text as a whole (global themes). These were then represented as web-like maps depicting the salient themes at each of the three levels and illustrating the relationships between them. Considering the nature and questions of the research, and because the data were collected from interviews, the seven-stage process demonstrated in Figure 3 was followed. Firstly, the recorded interviews were listened to several times and transcribed to become aware of the interview atmosphere and collect the data. At this stage, a frequent review of the data was performed to search for meanings and patterns (first stage). After studying the data and understanding them, a preliminary list of the ideas contained in the data and the meaningful statements was prepared; these were called the basic themes. A basic theme indicates an important point in the text, and by combining basic themes, an organizing theme is created (Attride-Stirling, 2001). For instance, participant P 12 said in his interview, "at least staff should think HR is a utopia to honor people. This skill doesn't exist at all. They have to be respected by everyone, both the managers and the personnel." The basic theme of "the need to honor the staff in the HR unit" was extracted accordingly. Participant P 03 referred to the basic theme of "lack of specialized training" as follows: "by holding general and specialized training courses for each occupation, the education department can take an effective step towards the development of human resources, while mostly general courses are now held and there are no specialized courses for each occupation." Additionally, participant P 07 indicated the special circumstances of the life of employees in Kish Island, noting that "in Kish Island, a person becomes depressed unconsciously. Our staff suffer such a problem. Hence, it should be addressed, because depression affects the ability of a person to do his duties in an office. Every person who experiences some problems in his home may not properly do his work at the workplace. Therefore, internal, ethical, and psychological issues should also be addressed." From these statements, the basic theme of "attention to the mental conditions of the staff considering the restrictions on the island" was extracted. At the end of this stage, 187 basic themes had been extracted (second stage). Following the analysis of the basic themes, they were combined to create the organizing themes. The themes that had the most similarity and could semantically indicate a single meaning were placed in a category. Accordingly, the categories of themes were created, called the "organizing themes." All the basic themes were put into 39 categories, indicating the formation of 39 organizing themes (third stage). As a case in point, the placement of three basic themes under a category entitled "promoting the mental health of staff" is shown in Table 2.
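As a reading aid for the three-level coding just described, the sketch below nests basic themes under organizing themes and organizing themes under global themes. The labels follow examples given in the text, but the data structure and helper function are our own illustration, not part of the authors' MAXQDA workflow.

```python
from collections import defaultdict

# Minimal illustration of a thematic network: basic themes are grouped into
# organizing themes, which are in turn grouped into global themes.

basic_to_organizing = {
    "attention to the mental conditions of the staff considering "
    "the restrictions on the island": "promoting the mental health of staff",
    "the need to honor the staff in the HR unit": "honoring and respecting staff",
    "lack of specialized training": "specialized training for each occupation",
}

organizing_to_global = {  # assumed assignments, for illustration only
    "promoting the mental health of staff":
        "promoting the well-being, health, and livelihoods of staff",
    "honoring and respecting staff": "promoting the human dignity of staff",
    "specialized training for each occupation": "empowering employees",
}

def build_network(b2o, o2g):
    """Nest the three levels: global theme -> organizing theme -> basic themes."""
    network = defaultdict(lambda: defaultdict(list))
    for basic, organizing in b2o.items():
        network[o2g[organizing]][organizing].append(basic)
    return network

for global_theme, organizing in build_network(basic_to_organizing,
                                              organizing_to_global).items():
    print(global_theme, "<-", dict(organizing))
```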
The organizing themes obtained were categorized into similar and coherent groups, and the global themes were developed in the following stage. Decisions on how to categorize the themes were made based on the content and, if necessary, based on theoretical foundations. In the present study, given that the purpose of preparing the thematic network is to draw the strategy map of HR performance, the global themes were developed based on the strategic objectives to be placed under the four perspectives of the balanced scorecard (fourth stage). The themes identified at this stage provided the primary source of the formation of the thematic network. The global themes and the organizing themes which form them are presented in Table 3. Narratives were written to describe the global themes and determine the nature of what each theme discusses. Given that the purpose of the research is to map the thematic network in the form of a strategy map, the narrative of themes is expanded to three levels as follows. • First level - The narrative of global themes. At this level, the global themes are defined and described, and the themes composing them are analyzed (fifth stage). • Second level - The narrative of the strategy map perspectives. After mapping the thematic network, based on the existing relationships between the global themes, each theme is placed as a strategic objective in one perspective of the HR scorecard. This led to preparing the strategy map. Each perspective of the map is explained at this level. • Third level - The overall narrative of the strategy map. Considering the cause-effect nature of the strategic objectives and the perspectives of the strategy map, the relationships between the perspectives of the map are described at this level. (Table 3 excerpt - global themes 9-12 and their organizing themes: 9. Performance management and staff development: designing a strategic system for the assessment of staff performance; paying attention to the agreement between the occupation and the employed person; developing a modern and fair system for staff promotion. 10. Strategic transformation of HRM based on research and process reform: formulating a strategy for the transformation of HR in line with the organization's vision; supporting research plans; reengineering HR business processes; making HR processes agile. 11. Alignment between the allocation and consumption of the human resources budget and the organizational strategy: improving effectiveness in the allocation and consumption of the budget; allocating a sufficient budget to the activities of the HR unit; aligning the HR budget with the organization's objectives. 12. Improving the mechanism of the settlement of the human resources budget: promoting the efficiency of staff costs; increasing the effectiveness of the HR unit's costs.) First global theme - developing family-centered policies. The development of family-centered policies is concerned with the necessity of developing strategies to address the needs of the families of staff. The purpose of developing such policies is to reduce the conflicts between work life and family life and to provide facilities for families, considering the unique living conditions in Kish Island, such as being far from the mainland, the high cost of family travel, and the low level of school and kindergarten services. Participant P 17, from the family of employees group, referred to this issue: "sometimes, when my husband comes home, for example, when we are eating food, his directors call repeatedly and this annoys us."
(Figure 3. The data analysis process - illustrated in the figure with the fifth stage, the global theme narrative "promotion of the well-being, health, and livelihood of the staff," and the sixth stage, the stakeholder perspective in the strategy map, comprising promoting the well-being, health, and livelihoods of staff; developing family-centered policies; promoting the human dignity of staff; and improving the productivity of the HR unit.) Second global theme - promoting the well-being, health, and livelihoods of staff. This theme is concerned with paying attention to welfare issues and improving the physical and mental health of the staff and their livelihoods. The organization must provide facilities and conditions that enable the staff to focus on their duties and responsibilities and perform them in the right way with no concerns. The organization's HRM needs to develop some objectives in this regard and provide incentives for the staff. Regarding the exceptional conditions of the staff living in Kish Island, participant P 07 stated that "in Kish Island, a person becomes depressed unconsciously. Our employees suffer such a problem. Hence, it should be addressed, because depression affects the ability of a person to do his duties in an office. Every person who experiences some problems in his home may not properly do his work at the workplace. Therefore, internal, ethical, and psychological issues should also be addressed." Third global theme - improving the productivity of the HR unit. Increasing the productivity of the staff is one of the most critical objectives of the organization, which can be realized through joint plans of the senior managers of the organization and the HR unit, and it is considered a criterion to measure the performance of the HR unit. Developing a strategy to encourage, reward, and motivate the staff improves the administrative discipline of the staff and promotes their effectiveness and efficiency, thus improving their productivity. Participant P 05 stated that one of the reasons for the ineffectiveness and inefficiency of the unit was as follows: "over the past 20 years, the organization has, unfortunately, become more and more similar to extremely big public agencies that are not efficient and effective. The high amount of current work prevents the managers and experts from taking any action." Fourth global theme - promoting the human dignity of staff. Considering the staff of the organization as organizational capital is critically important.
Honoring the staff and respecting them should be on the agenda of HRM and should be implemented at all levels of the organization. This objective should be considered at all stages: recruitment, maintenance, and the staff's departure from the organization. The belief has to be created in the staff that the organization will not achieve its strategic objectives without their effective and efficient presence, and that their efforts are valued and deserve appreciation. The staff must see the HR unit as a supporter of their interests and see the decisions made in this unit as being in line with the improvement of their situation. For instance, participant P 12 stated in his interview, "at least staff should think HR is a utopia to honor people. This skill doesn't exist at all. They have to be respected by everyone, both the managers and the personnel." Fifth global theme - developing an organizational culture based on customer orientation and innovation. The organization's objectives must focus on the development of common ideas and beliefs among the members of the organization, concerned with meeting the needs of customers, satisfying them, and creating new innovative ideas. Creativity and innovation, adherence to the values of the organization, and increased satisfaction of the stakeholders are collectively regarded by the staff of the organization as valued issues. Participant P 01 pointed to the need to develop an organizational culture that supports creativity as follows: "the whole organization should be a place involving new ideas and creativity. When no new idea or creativity is presented by human resources, I will naturally not present any idea and not show any creativity." Sixth global theme - empowering employees. The theme of empowering employees in an organization indicates the objectives by which not only is it ensured that the employees demonstrate the necessary ability to carry out their duties, but also the organization can entrust them with the responsibility of making decisions on certain of their duties. Participant P 03 referred to the training of human resources in line with the organization's objectives as follows: "by holding general and specialized training courses for each occupation, the education department can take an effective step towards the development of human resources, while mostly general courses are now held and there are no specialized courses for each occupation." Seventh global theme - developing a human resources information system. A human resources information system is developed to create an integrated and comprehensive collection of information related to human resources, based on information technology, that facilitates and accelerates decision making, planning, the performance of tasks, and HRM processes. Participant P 12 criticized the organization's lack of progress in utilizing IT facilities as follows: "for example, one day, people wrote and posted letters. However, in today's society, it is not acceptable to send a letter by post, and the Internet and cyberspace are used for this purpose. Anyway, I think that we still use the earlier method." Eighth global theme - employee strategic recruitment and maintenance. One of the important and strategic objectives of the HR unit is to recruit the appropriate staff for the organization and to retain effective staff.
Accordingly, considering the macro objectives of the organization, the HR unit should recruit staff from the indigenous people living on the island and retain only productive and capable staff by identifying them. Participant P 05 opined on recruitment and staffing as follows: "we should have a rigorous process for recruiting human resources. The right person should be recruited for the right work. In many administrative systems, when people enter into the system, they should read at least four books, such as the book of administrative rules. This process should lead to people taking a test. People should know where they are working, what the objectives of the organization are, and where the organization wants to reach." Ninth global theme - performance management and staff development. Performance management is used to identify, measure, and develop the performance of individuals and the team, and to coordinate performance with the organization's strategic objectives, while staff development includes activities that affect the individual and professional growth of the staff. Regarding the need for a performance evaluation system that provides feedback to staff, participant P 12 explained the best performance evaluation model as follows: "using 360-degree feedback, you can evaluate your subordinates, colleagues, and supervisor(s) in terms of ethics, procedure, behavior, operation, effectiveness." Tenth global theme - the strategic transformation of HRM based on research and process reform. The strategic transformation of HRM provides solutions, through change-creating activities, to improve the consistency between the staff and HR units and between HR units and other units. It is implemented to solve present problems. Regarding the lack of agility of processes, participant P 11 pointed out the issue of them being time-consuming: "another weakness is that some affairs related to some staff are followed up with a delay. For instance, the letter written by some people may be investigated with a delay of three or four months in this bureaucratic process, and this is a weakness." Eleventh global theme - alignment between the allocation and consumption of the human resources budget and the organizational strategy. This theme refers to the necessity of allocating the budget according to the objectives and strategies of the organization. The consumption of the budget will be effective when it pursues organizational objectives. Participant P 13 cited economic constraints as a weakness of this unit's function: "it is related to such issues as lack of budget. For example, regarding the welfare, education, and research affairs, it can be said that as long as the budget is not allocated to the welfare unit to hold sports competitions or sports classes or to contract with other centers in this regard, the staff of that unit cannot do anything, even if they are the best and most specialist ones. It equally applies to education and research." Twelfth global theme - improving the mechanism of the settlement of the human resources budget. The settlement of the human resources budget should be applied carefully, and the budget should be estimated rationally.
It should be determined which objective and which plan each budget item has been allocated to, and the allocated budget should be used exactly for that objective or plan. Participant P 01 believes that allocating the budget to some activities of the HR unit leads to the reduction of many other costs: "on the other hand, if this budget is not allocated to education, several times that amount will be spent in other areas, whether for overhead costs, including electricity costs, or the costs of additional work." RESULTS AND DISCUSSION After identifying and analyzing the relationships between the twelve global themes, the themes and the relationships between them are presented in the form of a graphical scheme called the thematic network. In the present study, the global themes are considered the strategic objectives of the HR unit. Therefore, by placing these themes within the framework of the HR scorecard, the proposed thematic network has been presented as the strategy map of HR in Figure 4. After drawing the strategy map of HR performance, the narratives related to each of the perspectives (the second level of the narrative description) were written. Ultimately, the cause-effect relationships between the different perspectives of the balanced scorecard are described as the comprehensive narrative of the strategy map (seventh stage). The narrative of the first perspective - financial. Although financial objectives are not placed at the top of the strategy map in public sector organizations, they are useful in summarizing the outcomes of budgetary expenditures. In the HR unit, the budget is allocated to the activities of this unit. The alignment of the allocation and consumption of the budget with the organization's strategy, as well as the improvement of the mechanism of the settlement of the budget, are the objectives placed in this perspective to ensure that the given budget achieves the highest effectiveness and efficiency. The narrative of the second perspective - HR functions and processes. In the HR processes perspective, the unit focuses on processes whose identification and improvement are associated with the development of the unit and, consequently, the organization. Being a pioneer in these processes ensures the services provided are effective and efficient. Given the significant volume of information in the HR unit and the need for organizing it, the development of an HR information system is required. Because appropriate human resources enter the organization through the HR unit, paying attention to the strategic recruitment and maintenance of the staff is important. Exploiting the maximum potential of the staff and coordinating their performance with the strategic objectives of the organization are made possible by performance management and staff development. HR processes can meet the needs of the stakeholders only when they are transparent, agile, fast, and up-to-date, and are conducted without loss of resources in the least amount of time. For this purpose, the strategic transformation of HRM based on research and process reform seems necessary. The narrative of the third perspective - staff development. To survive, an organization should focus on the growth and learning of employees and the development of their capabilities.
The empowerment of employees is one of the goals of this perspective, concerned with the identification of the talents and capabilities of employees and their development in line with the organization's strategies, in order to create a vibrant and dynamic organization and respond to the needs of the stakeholders more quickly and efficiently. In addition to the individual development of staff, paying attention to a common culture is also important. Through common fundamental values and beliefs, the employees can collectively lead the organization to its ultimate objectives. Hence, staff development can be realized by developing an organizational culture based on customer orientation and innovation. The narrative of the fourth perspective - stakeholders of the HR unit. The stakeholders of the HR unit include seven different groups (HR unit employees, employees of other units, senior and middle managers, the families of employees, HR units of affiliated companies, retirees, and clients). Identifying their needs and determining goals to meet them are placed at the top of the objectives of the strategy map. The goals determined in this regard are typically related to the expectations of these seven groups. The expectations of the stakeholders in the staff group are responded to by the promotion of their well-being, health, and livelihoods, and their families' expectations can be responded to by developing family-centered policies. All groups of staff and retirees are taken into consideration by promoting the human dignity of the staff, and the improvement of the productivity of the HR unit can meet the expectations and needs of the staff of this unit and other related units, other staff, and clients. Big narrative - strategy map of the HR unit. A third-level narrative, or big narrative, describes the strategic objectives map of the HR scorecard in the Kish Free Zone Organization. After finding the global themes, the relationships between the themes were identified as cause-effect relationships. These themes were then embedded in the HR strategy map as strategic objectives, according to their relationship with the HR scorecard. On the map, the financial perspective, HR functions and processes, staff development, and stakeholders were placed from the bottom to the top, respectively. At the lowest level, the financial perspective, which focuses on the efficient allocation and consumption of the budget, was placed. In line with the improvement of the HR functions and processes, the allocation of the budget leads to the development of individual and organizational capabilities. Therefore, the objectives related to the HR processes were placed at the second level, followed by the perspective of staff development at a higher level. Staff development, and the focus on what distinguishes the staff in the direction of achieving the strategic objectives, meets the needs and expectations of the stakeholders of this unit. In this fashion, the perspective of stakeholders was placed at the top of the map. The results of this paper contribute to the literature by providing numerous theoretical and practical implications. Building on the cultural perspectives of organizations, this research contributes to the HRM literature through redesigning a balanced scorecard and strategy map.
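The bottom-to-top layering just described lends itself to a small graph representation. The sketch below encodes the four perspectives in their cause-effect order; the objective labels are abbreviations of the twelve global themes, and the simple layer-to-layer edges are our simplification of Figure 4 rather than a reproduction of it.

```python
# Sketch of the HR strategy map as ordered perspective layers (bottom to top),
# following the big narrative: financial -> HR functions and processes ->
# staff development -> stakeholders. Objective names are abbreviated.

STRATEGY_MAP = [
    ("financial", ["budget-strategy alignment", "budget settlement mechanism"]),
    ("HR functions and processes", ["HR information system",
                                    "strategic recruitment and maintenance",
                                    "performance management and staff development",
                                    "strategic transformation of HRM"]),
    ("staff development", ["empowering employees",
                           "customer-oriented, innovative culture"]),
    ("stakeholders", ["well-being, health, and livelihoods of staff",
                      "family-centered policies",
                      "human dignity of staff",
                      "productivity of the HR unit"]),
]

def cause_effect_chain(strategy_map):
    """Yield lower-perspective -> upper-perspective links, bottom to top."""
    for (lower, _), (upper, _) in zip(strategy_map, strategy_map[1:]):
        yield f"{lower} -> {upper}"

print(list(cause_effect_chain(STRATEGY_MAP)))
```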
In this regard, our paper employs a combined process of thematic analysis and the construction of the related big narratives with the stakeholder approach, in which 187 basic themes, 39 organizing themes, and 12 global themes were synthesized. Therefore, the findings of this research highlight that the strategic objectives map of the HR scorecard consists of four pillars: financial, HR functions and processes, staff development, and the stakeholders of the HR unit. Regarding practical implications, the findings of this paper shed light on the importance of a staff development strategy through developing an organizational culture based on customer orientation and innovation. This can be achieved by maximizing the value of customer feedback within the organization, which requires strong communication with customers. This is an essential step in converting a customer strategy into a customer culture, where employees take countermeasures against the issues raised by customers. Clear communication of customer feedback gives employees a better understanding of how their roles and responsibilities impact the organization's performance. Such customer-oriented behaviors shape the culture of the organization into a customer-oriented culture. CONCLUSION Recognizing the priorities of the Kish Free Zone Organization concerning the staff (strategic objectives of HRM in the public sector) and mapping them into a capable framework (the balanced scorecard strategy map) through qualitative research (a seven-stage combined thematic analysis) based on the stakeholder approach and the company culture perspective was the main purpose of the present research. Strategic themes of HRM have been identified in this research using a balanced scorecard approach based on qualitative thematic analysis. Considering the key stakeholders of HRM in the Kish Free Zone Organization, the stakeholders of the HRM unit were divided into internal and external ones. Internal stakeholders included senior and middle managers and the staff. External stakeholders included the employees' families, HR units of affiliated companies, retirees, and the clients of this unit. The most important criteria of strategic HRM have been identified by the present research. In this way, a connection between HRM and the main strategy of the organization can be created, and the importance of attention to strategic objectives in long-term plans can be recognized; they can assist the organization in achieving its mission. The purpose of this research was to identify the strategic objectives to create a strategy map of the HR scorecard for the Kish Free Zone Organization based on the company culture perspective. For this purpose, a qualitative analysis of the themes was employed and, after finding the global themes, the thematic network was presented within the framework of the HR scorecard. Therefore, the distinctive feature of the present research was that it used thematic analysis to map the thematic network and create the strategic objectives map of the scorecard.
Additionally, taking the thematic analysis methods presented by Braun and Clarke (2006) and Attride-Stirling (2001) as the fundamental methods, the researchers developed in this study a specific qualitative analysis method to identify the strategic objectives of HR. Various organizations can apply this method to produce an HR scorecard by identifying the basic, organizing, and global themes, mapping the thematic network, and creating the strategy map. Free-trade industrial zone organizations and public sector organizations can benefit from the results of this research to develop HRM strategy. The performance objectives in the strategy map can provide key performance indicators for companies and other organizations; hence, they can be a guide for determining operational objectives and implementing the strategy. Additionally, given that the present study led to drawing the strategy map of the HR unit of the organization, (a) its measures, operational objectives, and executive initiatives can be determined in future research for the performance objectives identified, and (b) the HR balanced scorecard of the organization can be expanded using the strategy map drawn. The limitations of this research span several dimensions. First, the sector on which we based our paper has been limited to the HR scorecard in the public sector. Second, the research has been conducted in the context of Iran, which limits the generalization of its findings to high-context cultural societies. Therefore, forthcoming studies can analyze other sectors in a different context with distinct cultural characteristics. More importantly, the data gathered for the synthesis depend on self-reporting, which increases the probability of bias from socially desirable answers. Hence, future studies can provide evidence to corroborate the findings of this study using quantitative analysis.
2020-11-12T09:03:18.529Z
2020-06-06T00:00:00.000
{ "year": 2020, "sha1": "a59f5bc615075cf118d442f0cab4eeb5d2f81a0b", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7341/20201644", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "e499b363c513a910493898d73672d1a4102e8903", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
247614923
pes2o/s2orc
v3-fos-license
Comparative study of vestibular projection pathway connectivity in cerebellar injury patients and healthy adults Objective Cerebellar injury can not only cause gait and postural instability, nystagmus, and vertigo but also affect the vestibular system. However, changes in connectivity regarding the vestibular projection pathway after cerebellar injury have not yet been reported. Therefore, in the current study, we investigated differences in the connectivity of the vestibular projection pathway after cerebellar injury using diffusion tensor imaging (DTI) tractography. Methods We recruited four stroke patients with cerebellar injury. Neural connectivity in the vestibular nucleus (VN) of the pons and medulla oblongata in patients with cerebellar injury was measured using DTI. Connectivity was defined as the incidence of connection between the VN on the pons and medulla oblongata and target brain regions such as the cerebellum, thalamus, parieto-insular vestibular cortex (PIVC), and parietal lobe. Results At thresholds of 10 and 30, there was lower connectivity in the ipsilateral hemisphere between the VN at the medullar level and thalamus in the patients than in healthy adults. At a threshold of 1 and 10, the patient group showed lower VN connectivity with the PIVC than healthy adults. At a threshold of 1, VN connectivity with the parietal lobe in the contralateral hemisphere was lower in the patients than in healthy adults. Additionally, at a threshold of 30, VN connectivity at the pons level with the cerebellum was lower in healthy adults than in the patients. Conclusion Cerebellar injury seems to be associated with decreased vestibular projection pathway connectivity, especially in the ipsilateral thalamus, PIVC, and contralateral parietal lobe. Introduction Balance is a key component that maintains the center of mass within the base of support for ambulation and reduces fall risk [1]. It requires complex integration of the visual, vestibular, and somatosensory systems [2]. In particular, the vestibular system, which is composed of the peripheral vestibular organs in the inner ear, ocular system, and projections of the central nervous system, has relatively low importance for balance in static environments such as horizontal and stable surfaces; however, it is crucial for balance in dynamic environments, where the surface is unstable from tilting and oscillating [3][4][5][6][7]. Vestibular function is controlled by interactions between various brain areas and neuropathways; it affects the balance and vertical position of the head and the body [8]. Studies have reported that vestibular projection pathways were mainly connected with the vestibular nuclei (VN), parieto-insular vestibular cortex (PIVC), cerebellum, and cerebral cortex [9,10]. The VN, which is located in the pons and medulla oblongata, receives sensory information from eye and head movements as well as body orientation in space to control the movements [11]. The PIVC, which is a core region of vestibular input, contributes to the processing of bodily self-consciousness, estimation of verticality, and integration of visual motion [12].
The cerebellum, which receives vestibular information and projects it through projection pathways to the VN, contributes to equilibrium [9,13]. The cerebral cortex contributes to the conscious perception of movement and spatial orientation [11]. Because the vestibular projection pathway is connected to various brain areas, injury to the vestibular system can be accompanied by problems related to balance, spatial orientation, vertigo, and dizziness [14][15][16][17][18][19]. Moreover, the vestibular projection pathway is connected to the cerebellum [20]. Cerebellar injury can cause not only gait and postural instability, nystagmus, and vertigo but also vestibular symptoms; this is due to the fact that the nodulus of the cerebellum has reciprocal connections with numerous structures in the peripheral and central vestibular networks [13,14,[21][22][23]. However, changes in connectivity regarding the vestibular projection pathway after cerebellar injury have not yet been reported. Recently developed diffusion tensor tractography (DTT), which is derived from diffusion tensor imaging (DTI), has enabled three-dimensional reconstruction and estimation of the microstructural integrity of neural tracts [24][25][26]. Additionally, DTI enables the projection and reconstruction of functional connectivity and anatomical structures by visualizing water diffusion patterns [25]. Thus, DTI is a useful tool to provide images of the diffusion properties of white matter by quantifying multidirectional connectivity [25]. Studies have reconstructed human neural connectivity in the VN and other brain areas in three dimensions [9,10,25]. Therefore, in the current study, we investigated the differences in the connectivity of the vestibular projection pathway after cerebellar injury using DTI tractography. Subjects In this study, four stroke patients (three males, one female; mean age 70.75 ± 7.76 years) with cerebellar injury on magnetic resonance imaging (MRI) and six control subjects (four males, two females; mean age 30.00 ± 5.66 years) with no history of neurological or psychiatric disease were recruited at the University Hospital. The inclusion criteria were as follows: (1) first-ever stroke, (2) no traumatic brain injury, and (3) cerebellar injury due to infarction or hemorrhage. All subjects provided informed consent before undergoing DTI and functional evaluations. The study was approved by the Institutional Review Board of Dankook University. Probabilistic fiber tracking The DWI data were analyzed using the Oxford Center for Functional Magnetic Resonance Imaging of the Brain (FMRIB) Software Library (FSL; www.fmrib.ox.ac.uk/fsl). Affine multi-scale two-dimensional registration was used to correct for head motion effects and image distortion due to eddy currents. Fiber tracking used a probabilistic method based on a multifiber model and was performed by utilizing routines implemented in FMRIB Diffusion (5000 streamline samples, 0.5 mm step lengths, curvature threshold = 0.2) [29]. Both contra- and ipsilateral connectivity were defined as the incidence of connection between the VN (on the pons and medulla oblongata) and the following target brain regions, determined by whether the tracking results passed through each target region: cerebellum, thalamus, PIVC, and parietal lobe. The incidence of connection was counted from the VN (on the pons and medulla oblongata) to each brain region.
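For readers who want to reproduce this kind of seed-to-target counting, the snippet below assembles a call to FSL's probtrackx2 with the parameters stated above (5000 streamline samples, 0.5 mm steps, curvature threshold 0.2). All file paths are hypothetical placeholders, and the authors' exact pipeline (bedpostx preprocessing, registration, ROI drawing) is not detailed in the text, so this is a sketch rather than a reproduction.

```python
import subprocess

# Hedged sketch of probabilistic tractography from a VN seed to target masks
# (cerebellum, thalamus, PIVC, parietal lobe) using FSL's probtrackx2.
# Paths are placeholders; bedpostx must be run first to produce merged samples.

cmd = [
    "probtrackx2",
    "--samples=subj.bedpostX/merged",                 # multifiber model samples
    "--mask=subj.bedpostX/nodif_brain_mask.nii.gz",   # brain mask
    "--seed=rois/vn_pons.nii.gz",                     # seed ROI at the VN
    "--targetmasks=rois/targets.txt",                 # list of target-mask paths
    "--os2t",                                         # output seeds-to-targets counts
    "--nsamples=5000",                                # 5000 streamline samples
    "--steplength=0.5",                               # 0.5 mm step length
    "--cthr=0.2",                                     # curvature threshold
    "--dir=probtrack_vn_pons",                        # output directory
]
subprocess.run(cmd, check=True)
```

The seeds-to-targets outputs can then be thresholded at 1, 10, or 30 streamlines to decide whether a connection to each target region is counted, mirroring the incidence definition above.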
Note that the seed region of interest (ROI) was located at the VN (on the pons: Deiters' and Schwalbe's nuclei; on the medulla oblongata). The fractional anisotropy (FA), mean diffusivity (MD), and tract volume (voxel number) of the projection pathway were also measured. Statistical analysis SPSS software (ver. 20.0; SPSS, Inc., Chicago, IL, USA) was used to analyze the results. The chi-square test was used to determine the significance of differences in the incidence of connectivity of the VN on the pons and the VN on the medulla between patients with cerebellar injury and healthy adults. The level of statistical significance was set at p < 0.05. Results The reconstruction of VN on pons connectivity is shown in Table 2 and Fig. 1. The ipsilateral connectivity of the VN on the pons with the target brain regions (cerebellum, thalamus, and parietal lobe) was 100% in healthy adults, regardless of the threshold. In contrast, patients with cerebellar injury showed lower connectivity with these target brain areas (cerebellum, thalamus, and parietal lobe). At thresholds of 1, 10, or 30, connectivity with the PIVC steadily decreased in patients (100.0%, 62.5%, and 37.5%, respectively) and in healthy adults (100.0%, 80.0%, and 75.0%, respectively). However, no significant difference was observed between the patients and healthy adults, regardless of the threshold (p > 0.05). At thresholds of 1, 10, or 30, contralateral connectivity of the VN in the pons with the cerebellum steadily decreased in patients with cerebellar injury (100.0%, 87.5%, and 62.5%, respectively) and in healthy adults (100.0%, 50.0%, and 16.7%, respectively). Notably, at a threshold of 30, connectivity with the cerebellum was significantly lower in healthy adults (16.7%) than in patients (62.5%) (p < 0.05). Connectivity with the PIVC and parietal lobe also decreased with increasing thresholds in patients and healthy adults. It should be noted that at a threshold of 30, connectivity with the parietal lobe was lower in healthy adults (58.3%) than in patients with cerebellar injury (62.5%). However, connectivity at each threshold was not significantly different between the two groups (p > 0.05). Connectivity with the thalamus was 75.0% in patients at thresholds of 1, 10, or 30. In contrast, healthy adults showed connectivity with the thalamus of 100%, 58.3%, and 33.3% at thresholds of 1, 10, or 30, respectively. The reconstruction of the VN on the medulla connectivity is shown in Table 3 and Fig. 2. At thresholds of 1, 10, or 30, the ipsilateral connectivity with the cerebellum, PIVC, and parietal lobe steadily decreased in both patients and healthy adults. Notably, at thresholds of 1 and 10, connectivity with the PIVC was significantly lower in patients than in healthy adults (p < 0.05). At thresholds of 1, 10, or 30, connectivity with the thalamus decreased in both patients (75.0%, 50.0%, and 50.0%, respectively) and healthy adults (100.0%, 100.0%, and 91.7%, respectively). At thresholds of 10 and 30, connectivity with the thalamus was significantly lower in patients than in healthy adults (p < 0.05). At thresholds of 1, 10, or 30, contralateral connectivity of the VN on the medulla with all target brain regions (cerebellum, thalamus, PIVC, and parietal lobe) steadily decreased in both patients and healthy adults. Notably, at a threshold of 1, connectivity with the parietal lobe was significantly lower in patients (62.5%) than in healthy adults (100%) (p < 0.05).
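To illustrate the incidence comparison behind these p-values, the snippet below runs a 2×2 test on connectivity counts. The counts are hypothetical, back-calculated from the reported 50.0% and 91.7% incidences under the assumption of 8 patient hemispheres and 12 control hemispheres; Fisher's exact test is shown alongside the chi-square because counts this small often violate chi-square assumptions.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 example: ipsilateral VN (medulla)-thalamus connectivity at a
# threshold of 30, assuming 8 patient and 12 control hemispheres (not study data).
table = [[4, 4],    # patients: connected, not connected (50.0%)
         [11, 1]]   # controls: connected, not connected (91.7%)

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
```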
Discussion In the current study, we investigated the differences in vestibular projection pathway connectivity after cerebellar injury using DTI tractography. We found that at thresholds of 10 and 30, there was lower connectivity in the ipsilateral hemisphere between the VN at the medullar level and the thalamus in patients than in healthy adults. (Fig. 1. Results of neural connectivity between the VN of the pons and vestibular-related areas (parietal lobe, PIVC, thalamus, and cerebellum) in patients with cerebellar injury, at a threshold of 10 streamlines as determined by DTI; the control image shows one subject out of six.) At thresholds of 1 and 10, the patient group showed lower VN connectivity with the PIVC compared to healthy adults. At a threshold of 1, VN connectivity with the parietal lobe in the contralateral hemisphere was lower in patients than in healthy adults. Additionally, at a threshold of 30, VN connectivity at the pons level with the cerebellum was lower in healthy adults than in patients. These results suggest that cerebellar injury due to hemorrhage might be associated with alterations in the connectivity of the vestibular projection pathway, especially the thalamus and PIVC in the ipsilateral hemisphere and the parietal lobe in the contralateral hemisphere. Studies have reported that vestibular projection pathways from the VN at the level of the pons and medulla are typically connected to the thalamus, PIVC, VN, cerebral cortex, and cerebellum [9,10,30]. In 2004, Lee et al. showed that patients with cerebellar infarction presented with isolated vertigo, spontaneous ipsilesional nystagmus, and contralesional axial lateropulsion, without symptoms of cerebellar dysfunction [21]. In 2017, Kim et al. reported that isolated vestibular symptoms were associated with cerebellar injury due to infarctions without other neurologic deficits [13]. Specifically, cerebellar lesions involving the inferior cerebellar peduncle, which includes the neural pathway that typically transfers vestibular information to the VN, can lead to isolated vertigo and postural imbalance without other neurological deficits [13,31]. In 2018, Jang et al. suggested that the VN shows strong connectivity with the cerebellum, thalamus, and vestibular-related brain regions [9]. Our results are consistent with those of these previous studies. The cerebellum receives vestibular inputs and projects through the inferior cerebellar peduncle to the VN [13,32]. Subsequently, the VN sends vestibular information to the PIVC, which is then processed and integrated with the thalamus [32][33][34][35]. When vestibular information is deficient due to cerebellar injury, the connectivity of the VN with the PIVC and thalamus may be affected [13,21]. Hence, cerebellar injury might affect the connectivity of the vestibular projection pathway, especially in the thalamus and PIVC. In the current study, the connectivity of the VN at the medullar level with the parietal lobe was lower in patients than in healthy adults at a threshold of 1 in the contralateral hemisphere. In 1994, Akbarian et al. reported that connectivity was present between the VN and the premotor and parietal cortices [36]. Recently, Jang et al. investigated VN connectivity in 37 healthy adults and reported that the VN showed connectivity with the primary motor cortex (95.9%, 83.8%, and 74.3% at thresholds of 1, 10, and 15, respectively), the primary somatosensory cortex (90.5%, 68.9%, and 64.9%), and the premotor cortex (87.8%, 52.7%, and 40.5%) [9].
Our results are consistent with those of previous studies. Thus, cerebellar injury might affect VN connectivity with the parietal lobe. In the current study, VN connectivity at the pons level with the cerebellum was higher in patients than in healthy adults at a threshold of 30 in the contralateral hemisphere. Studies have reported that the unaffected hemisphere is associated with neuroplasticity in patients with brain injury [37][38][39]. In 2010, Kwak et al. demonstrated changes in the corticospinal tract in the unaffected hemisphere in stroke patients using DTI [37]. In 2013, Yeo et al. reported increased fiber volumes in the unaffected hemisphere [39]. These studies suggest that the change in the neural pathway in the unaffected hemisphere can be regarded as neuroplasticity; therefore, the phenomenon of changes in the unaffected hemisphere can be regarded as compensation for damage in the affected hemisphere [37][38][39]. The results of the current study are consistent with those of previous studies. Thus, the greater connectivity with the cerebellum in patients than in healthy adults can be regarded as induced neuroplasticity. The present study has a few limitations. First, it is limited by its small sample size. Second, we only investigated vestibular projection pathway connectivity in patients with cerebellar injury, without clinical evaluation. Third, because DTT cannot discern direction, the afferent and efferent fibers could not be distinguished between the VN and the target brain regions. Fourth, DTI analysis is operator-dependent; because of fiber complexity and the crossing-fiber effect, it may underestimate the fiber tracts. (Fig. 2. Results of neural connectivity between the VN of the medulla oblongata and vestibular-related areas (parietal lobe, PIVC, thalamus, and cerebellum) in patients with cerebellar injury, at a threshold of 10 streamlines as determined by DTI; the control image shows one subject out of six.) Therefore, to overcome these limitations, in-depth studies as well as studies regarding the clinical application of our results in patients with cerebellar injury are encouraged. Conclusion We investigated the differences in the connectivity of the vestibular projection pathway after cerebellar injury using DTI tractography. We found that cerebellar injury seems to be associated with decreased vestibular projection pathway connectivity, especially in the ipsilateral thalamus, PIVC, and contralateral parietal lobe. Therefore, evaluating the vestibular pathway using DTT in patients with cerebellar injury might be useful for clinical evaluation.
2022-03-24T05:13:07.699Z
2022-03-22T00:00:00.000
{ "year": 2022, "sha1": "0d37f3ea834f57785dd221de31f6ad1065a28a14", "oa_license": "CCBY", "oa_url": "https://bmcneurosci.biomedcentral.com/track/pdf/10.1186/s12868-022-00702-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b44546956b778807ce8389ec3cb7bb6f33decd9d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55742058
pes2o/s2orc
v3-fos-license
Impact of awareness about hypertension on compliance to antihypertensive medication Background: Hypertension, a common cardiovascular disorder, accounts for 20-50% of all deaths. This risk can be greatly ameliorated by creating awareness about the disease and its effective treatment, alongside regular medical check-ups. Therapeutic failures result from patient non-compliance, manifested as intentional or unintentional errors in dosage or schedule, overuse or underuse of prescribed drugs, and early termination of therapy. Adherence is helpful for the management of hypertension and cost minimization. Non-adherence to drug treatment is an important factor in uncontrolled hypertension and its complications. Methods: Patients were interviewed individually after taking informed consent, using a pretested, predesigned, self-administered, and closed-ended questionnaire both before and 4 weeks after creating awareness about hypertension and its complications. Compliance was measured by self-reporting, in which the patient's knowledge of the number of antihypertensive drugs being used, the formulations of the drugs, the frequency of administration, the duration of taking the drugs, and the complications of uncontrolled and untreated hypertension were assigned 1 score each. A patient having a score of at least 4 out of a total of 5 was considered compliant. Results: No significant association of compliance with demographic and other variables like age, sex, marital status, economic status, education, urbanization, duration of treatment, and drug procurement was noted. A significant increase in compliance in patients on antihypertensive medication was found 4 weeks after creating awareness about hypertension and its complications. A significant increase in compliance scores was also seen in non-compliant patients, showing their shift from the non-compliance to the compliance group. Overall compliance increased from 59.38% to 84.38%. A percentage decrease from 58.82% to 25% in patients having uncontrolled hypertension was also observed after the awareness about hypertension. Conclusions: Demographic variables, duration of hypertension, and drug procurement have no significant effect on compliance to antihypertensive medication. There is persistence and improvement in compliance to antihypertensive medications after educating patients about hypertension and its complications. Uncontrolled hypertension is a major risk factor for stroke and ischaemic heart disease. 3 Awareness about hypertension and effective therapy can reduce the risk. 4 In the overwhelming majority of cases, the underlying cause is unknown, and such patients are labelled as having idiopathic or essential hypertension. Because of escalating obesity and aging in developed and developing countries, the global burden of hypertension is rising and is projected to affect 1.5 billion people all over the world by the year 2025. 5 In India, the most recent data on hypertension showed a prevalence rate of 59.9 and 69.9 per 1000 in males and females in urban populations, and 35.5 and 35.9 per 1000 in males and females in rural populations. 2 Clinical cases of hypertension in developed countries represent just the tip of an iceberg. It has been found that 50% of hypertensive patients are aware of the disease, out of which 50% are being treated and only half of those are considered adequately treated. The condition in developing countries is likely to be worse due to limited access to healthcare. 2 The asymptomatic nature of the condition delays diagnosis.
Effective treatment requires continuity of care by a good physician and regular medical check-ups, which are often lacking in developing countries. 5 Therapeutic failures result from patient non-compliance, manifested as intentional or unintentional errors in dosage or schedule, overuse or underuse of prescribed drugs, and early termination of therapy. 6 Strategies to reduce blood pressure by decreasing salt intake, increasing potassium intake, and pharmacotherapy will cause a dramatic reduction in stroke, heart failure, and heart attacks. Control of raised blood pressure is likely to result in considerable savings on health expenditure, as hypertension is not only the biggest cause of death but also the 2nd biggest cause of disability, after childhood malnutrition. 3 Compliance or adherence is defined as "the extent to which the patient follows medical instructions". 7 Improvement in compliance to prescribed medications is necessary to avoid adverse outcomes. Non-compliance is not only due to patient factors; care providers and the healthcare system also play a major role. 8 Adherence is helpful for the management of hypertension and cost minimization in cardiovascular diseases. 9 Non-adherence to drug treatment is an important factor for uncontrolled hypertension and the incidence of complications due to hypertension. 10 Simple strategies should be preferred in daily practice to improve adherence. Improvement in adherence provides the maximum benefit of prescribed drugs. 8 The present study was proposed to see the effect of creating awareness about hypertension on compliance to antihypertensive medication in tertiary health care of this hilly area, as persistence and improvement in compliance are among the important determinants in achieving favourable therapeutic outcomes. The aims and objectives of the study were to study the impact on the persistence and improvement of compliance to antihypertensive medication in essential hypertension, both before and after creating awareness about hypertension. METHODS After taking clearance from the Institutional Ethics Committee (IEC), work commenced on this research project. Patients attending the Out Patient Clinic of the Cardiology Department of IGMC, Shimla, a tertiary care teaching hospital of H.P., were enrolled. The duration of the study was 8 weeks. Regarding sample size, 40 patients were included in the study, out of which 32 completed it. It was a hospital-based cross-sectional study. Study tool Participants were interviewed individually using a pretested, pre-designed, self-administered, close-ended questionnaire both before and after creating awareness about the disease. Inclusion criteria • Outdoor patients of both sexes above 18 years of age. • Patients with isolated essential hypertension. • Patients taking antihypertensive drugs for at least the last 3 months and able to take their medication themselves. Exclusion criteria • Taking drugs for any other chronic illness. • Pregnancy and lactation. • Secondary hypertension. • Unwillingness to participate in the study. Measurement of blood pressure Patients were informed about the study in their own language and consent was obtained on the consent form (Annexure I). After 5 minutes of quiet rest, two readings of B.P. in the sitting position were taken with the help of a sphygmomanometer, and their mean was found. A cuff of adult size was used. The systolic B.P. was taken at Korotkoff phase I and the diastolic at Korotkoff phase V. 11 Patients with systolic B.P. >140 and diastolic B.P. >90 were labelled as having uncontrolled hypertension.
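A compact way to express the measurement rule above is to average the two seated readings and apply the stated cut-offs. The sketch below follows the paper's wording (systolic >140 and diastolic >90); the function names and example readings are ours.

```python
def mean_bp(reading1, reading2):
    """Average two (systolic, diastolic) readings taken after 5 minutes of rest."""
    (s1, d1), (s2, d2) = reading1, reading2
    return (s1 + s2) / 2, (d1 + d2) / 2

def uncontrolled(systolic, diastolic):
    """Apply the paper's stated rule: systolic > 140 and diastolic > 90."""
    return systolic > 140 and diastolic > 90

sbp, dbp = mean_bp((152, 96), (148, 94))    # hypothetical readings
print(sbp, dbp, uncontrolled(sbp, dbp))     # 150.0 95.0 True
```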
Compliance to antihypertensive drugs was measured by patient self-reporting, which, although a subjective method, was considered convenient, economical, and easily acceptable to the patients. Keeping in view various studies measuring compliance by self-reporting, a pre-tested, pre-designed, self-administered, close-ended questionnaire was prepared (Annexure-II). [12][13][14][15][16] It is a well-known fact that patients who have been taking drugs regularly for the last 3 months or more and are compliant must be well familiar with the number of antihypertensive drugs being used, the formulations of the drugs, the frequency of administration, and the duration of treatment. For the persistence and improvement of compliance, knowledge of hypertensive complications can be helpful and acts as a good motivator for compliance to antihypertensive medications. For convenience, a compliance scoring was used in which knowledge of the number of drugs being taken, the formulations of the drugs, the frequency of administration of the drugs, the duration of treatment, and knowledge of hypertensive complications were assigned one score each. A patient getting at least 4 out of a total of 5 scores was labelled as compliant, and less than 4 was considered non-compliant. Non-compliant patients were interrogated about various barriers to compliance. Compliance was measured before and 4 weeks after creating awareness about the disease. Patients were educated in their own language, as language is as powerful a tool as the medication prescribed. 17 One-to-one verbal communication was done about the occurrence of the various complications of untreated and uncontrolled hypertension, and about the various barriers to compliance, for its improvement and persistence. Patients having primary education and able to read and write a language were considered literate. Patients having a per capita annual income of less than Rs. 25000/- were included in the low-income group. Statistical analysis Data were collected and analyzed using SPSS Version 16. The independent t-test was used to compare age. The Fisher exact test was used to compare categorical variables like sex, education, urbanization, marital status, economic status, duration of hypertension, and drug procurement in the two groups of compliant and non-compliant hypertension patients after creating awareness about hypertension. The paired t-test was used to compare the effect on compliance and non-compliance before and after awareness about hypertension. P≤0.05 was used as the cut-off level for statistical significance, and P≤0.001 was considered highly significant. There was no statistically significant difference in the mean age, sex distribution, literacy, urbanization, marital status, economic status, duration of hypertension, or drug procurement between compliant and non-compliant patients (Table 1). On the basis of compliance scores, a highly significant difference (p<0.001) was observed before and 4 weeks after awareness about hypertension in the study subjects. A significant increase in compliance score was observed in compliant patients (n=19) and a highly significant difference (p<0.001) in non-compliant patients (n=13) before and after the awareness intervention. A highly significant difference (p<0.001) in compliant patients (n=27) and a significant difference (p<0.05) in non-compliant patients (n=5) were also observed with awareness of the patients about hypertension (Table 2). The overall compliance rate increased from 19 (59.38%) before awareness to 27 (84.38%) 4 weeks after awareness about hypertension (Figure 1).
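The five-item scoring rule and the before/after comparison can be sketched as follows. The item names follow the questionnaire description above, but the response data are invented for illustration, and scipy's paired t-test merely stands in for the SPSS procedure the authors used.

```python
from scipy.stats import ttest_rel

ITEMS = ("number of drugs", "formulations", "frequency of administration",
         "duration of treatment", "knowledge of complications")

def compliance_score(answers):
    """One point per correctly known item; scores range from 0 to 5."""
    return sum(1 for item in ITEMS if answers.get(item, False))

def is_compliant(answers):
    """A score of at least 4 out of 5 counts as compliant."""
    return compliance_score(answers) >= 4

example = {"number of drugs": True, "formulations": True,
           "frequency of administration": True, "duration of treatment": True,
           "knowledge of complications": False}
print(compliance_score(example), is_compliant(example))   # 4 True

# Hypothetical before/after compliance scores for six patients (not study data).
before = [3, 4, 2, 5, 3, 4]
after = [4, 5, 4, 5, 4, 5]
t, p = ttest_rel(after, before)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```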
DISCUSSION The present study included 32 patients who were interviewed and made aware about hypertension in the cardiology outpatient clinic of a tertiary care government hospital. Our study investigated the impact of education about the occurrence of various complications of untreated and uncontrolled hypertension. Patients were educated about the complications of hypertension and various barriers to compliance. Improvement and persistence of compliance to antihypertensive medication is said to play a significant role in a favourable therapeutic outcome of treatment. In our study, no significant correlation could be found between compliance and age, sex, education, urbanization, marital status, economic status, duration of hypertension or drug procurement (Table 1), which is supported by another study as well. 18 There are different reports regarding these associations. One study shows poor adherence at a younger age, while another shows an association of high income and affordability with good compliance. 19,20 Poor socioeconomic status and illiteracy have been found to be important factors in poor adherence. 21 One study has shown more compliance in older patients, but there was no association with sex and marital status, and none with education and economic status in another study. 22,23 One study has shown a positive association of age of 50 years or more and female sex with good compliance to antihypertensive medication. 24 Other studies have shown no association of compliance with age and sex, but have for literacy. 25 The present study has shown a significant increase in the compliance scores of all study subjects, as well as in compliant and non-compliant patients separately, after educating the patients about hypertension, in agreement with other reports. 19,20,21,[26][27][28][29] One study, however, could not find a significant effect of education of hypertensive patients through mailed education packets on mildly uncontrolled hypertension, although significant improvement in patient knowledge, home monitoring and satisfaction was observed. 30 In our study, one-to-one verbal communication in the language of the patient probably proved more beneficial in improving compliance. The percentage of patients suffering from uncontrolled hypertension decreased after making the patients aware about hypertension and the complications that follow if it remains uncontrolled (Figure 2). Other studies have also shown the beneficial effect of improved and persistent compliance on uncontrolled hypertension or on adequate control of high blood pressure. 13,21,27,31 In one study, family-based home health education by trained lay health workers, along with education of general practitioners, was found to produce a significant reduction in blood pressure among hypertensive patients. 32 A weaker correlation of uncontrolled hypertension with compliance was seen in one study, in which 56% of uncontrolled patients were compliant; hence an improvement in the quality and efficacy of medical treatment was emphasized. 25 The limitations of the study were the small number of patients and the fact that the self-reporting method of measuring compliance is subjective and not fully reliable in all situations. The strength of the study was that it was planned keeping in view the time and resources available, and the method was convenient and economical. CONCLUSION Based on the findings of the present study, awareness about hypertension and its complications has a significant and beneficial impact on compliance to antihypertensive medication.
Persistence and improvement in compliance may help in achieving favourable therapeutic outcomes in patients on antihypertensive drugs.
2019-03-17T13:11:21.074Z
2018-01-23T00:00:00.000
{ "year": 2018, "sha1": "75b84f86edba8bfb150818a4101fd07a1956376e", "oa_license": null, "oa_url": "https://www.ijbcp.com/index.php/ijbcp/article/download/2260/1791", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "58288933a3fb05cb38e6caed16af828cb7bfab1f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
78835199
pes2o/s2orc
v3-fos-license
A Prospective Double Blind Placebo Controlled Trial of Combination Disease Modifying Antirheumatic Drugs vs. Monotherapy (Sulfasalazine) in Patients with Inflammatory Low Backache in Ankylosing Spondylitis and Undifferentiated Spondyloarthropathy Inflammatory back pain (IBP) in Ankylosing Spondylitis (AS) and Undifferentiated Spondyloarthritis (USpA) adversely affects the quality of life. Herein, combination DMARD therapy vs. sulfasalazine (SSZ) monotherapy was evaluated in the treatment of axial symptoms of AS and USpA. Methods: Patients with AS/USpA with disease duration ≤ 8 years, IBP of at least 6 months duration, and BASDAI ≥ 4 or early morning stiffness ≥ 1 hour despite NSAID therapy for 6 weeks were included. Patients were initiated on SSZ with either combination DMARDs [MTX (10 mg escalated by 2.5 mg every week up to 20 mg/week) and HCQS 200 mg/day] or SSZ monotherapy [placebo MTX and placebo HCQS]. ASAS20 response was assessed at baseline and at the end of 6 months. Results: Of thirty-three patients (31 males) with a mean disease duration of 39 months and a mean BASDAI of 6 at baseline, 27 completed the study (16 in the combination DMARD and 11 in the SSZ monotherapy group). ASAS 20 response was achieved in 68.4% (13/19) and 50% (7/14) in the combination DMARD and SSZ monotherapy groups (p=0.47), respectively. BASDAI scores decreased significantly in both groups after therapy. A significant improvement in BASFI, patient pain VAS, patient global disease VAS, HAQ, and MCS of SF-36 was observed in both groups. In the combination DMARD group, significant improvement in BASMI, FACIT and PCS of SF-36 and a decrease in serum MMP-3 levels were observed following therapy. Conclusion: SSZ monotherapy is as efficacious as combination DMARD therapy in a significant proportion of patients with NSAID-refractory IBP associated with AS/USpA. Citation: Venkatesh S, Viswanath VV, Tripathi D, Ansari M, Agarwal V (2015) A Prospective Double Blind Placebo Controlled Trial of Combination Disease Modifying Antirheumatic Drugs vs. Monotherapy (Sulfasalazine) in Patients with Inflammatory Low Backache in Ankylosing Spondylitis and Undifferentiated Spondyloarthropathy. J Arthritis S1: 001. doi:10.4172/2167-7921.S1-001 Introduction Seronegative Spondyloarthropathies (SpA) refer to a group of chronic inflammatory disorders of unknown etiology. With a prevalence between 0.5 and 2.5 percent, they are among the most common rheumatologic disorders [1]. This is a male-predominant disease with a mean age of onset usually in the second and third decades. The term covers a heterogeneous group of patients, ranging from predominant axial skeletal and entheseal involvement to more widespread peripheral involvement, i.e., asymmetric or symmetric oligo/polyarthritis without any axial involvement. The subgroups which come under this terminology include: Ankylosing Spondylitis (AS), Psoriatic Arthritis, Reactive Arthritis, Inflammatory bowel disease associated arthritis and Undifferentiated Spondyloarthritis (USpA). These diseases are a major cause of morbidity for the patients, causing pain, stiffness, loss of mobility, disability, poor sleep and overall poor quality of life. In a study conducted by Linden et al., it was found that work-related indices such as percentage of unemployment, lack of permanence of work, number of sick leaves and early retirement due to illness were all higher in patients with AS as compared to healthy individuals [2].
In another study, it was demonstrated that the quality of life in these patients is worse than that of cancer and myocardial infarction patients [3]. Amor et al. concluded that predictive factors of long-term outcome could be defined very early after the onset of spondyloarthropathy [4]. It has been demonstrated that a major factor which influences the quality of life is the extent of entheseal involvement and the associated stiffness and pain [5]. Besides these, the economic impact of the loss of productivity due to AS, calculated in terms of average annual human capital lost, varies from Euros 4227 to Euros 8862 per patient. NSAIDs have been the mainstay of treatment for these diseases for a long time. Despite providing good pain relief, they are largely ineffective in altering the natural course. Very often, in spite of therapy, pain and discomfort continue in these patients, with recurrent exacerbations. The DMARDs (Disease Modifying Anti Rheumatic Drugs) are a group of drugs which have come into prominence following their remarkable efficacy in the management of rheumatoid arthritis. The major drugs representative of this group are methotrexate (MTX), sulfasalazine (SSZ), hydroxychloroquine (HCQ), gold and leflunomide. Of these drugs, the most well studied in SpA is SSZ. However, its efficacy has been variably reported. Dougados et al. conducted a multicenter, 36-week trial involving 264 patients with evidence of active AS refractory to NSAIDs, defined as morning stiffness of >45 minutes duration, inflammatory back pain, and patient and physician global assessments of moderate or high disease activity. The primary outcome variable was treatment response based on morning stiffness, back pain, and physician and patient global assessments. In this trial, SSZ was given at a dose of 2 g/day. The trial found SSZ to be no more effective than placebo; treatment response rates were 38.2% for SSZ versus 36.1% for placebo. Significant treatment efficacy was not shown for any of the following four outcome measures used to define treatment response: physician global assessment (SSZ, 53.4% vs. placebo, 55.6%), patient global assessment (SSZ, 40.5% vs. placebo, 42.1%), morning stiffness (SSZ, 48.9% vs. placebo, 44.4%) and back pain (SSZ, 23.7% vs. placebo, 27.1%). Premature discontinuation rates due to adverse events were 8% (11/131) and 5% (6/133) for SSZ and placebo, respectively [6]. A major reanalysis of a series of randomized, double-blind, placebo-controlled, 36-week multicenter trials of SSZ (2 g/day) (including the above study) on the axial and peripheral articular manifestations of AS (n=264), psoriatic arthritis (n=221), and reactive arthritis (n=134) was recently reported; 187 of these patients had only axial manifestations, while 432 had peripheral arthritis. The primary outcome measure was treatment response, determined on the basis of improvement in four outcome measures: patient and physician global assessments (all patients), morning stiffness and back pain in patients with axial manifestations, and joint pain/tenderness scores and joint swelling scores in patients with peripheral articular manifestations. Intention-to-treat analysis showed that SSZ provided significant improvement in patients with peripheral arthritis; response rates were 59.0% in patients treated with SSZ versus 42.7% in the placebo group (p=0.0007). They did not find SSZ to be beneficial for axial or entheseal disease [7]. However, Braun et al.
had demonstrated the efficacy of SSZ in inflammatory backache due to USpA and early AS, in patients without peripheral arthritis [8]. The other major DMARD tried in AS is methotrexate (MTX). Though MTX monotherapy has not been found to be effective for the axial symptoms of SpA in a number of studies, it has been reported to be useful in patients with peripheral arthritis [9][10][11][12][13]. Gonzalez-Lopez et al. reported significant improvement with MTX in physical well-being, BASDAI, BASFI, physician and patient global assessments, the HAQ, and spinal pain [14]. Haibel et al. studied the role of subcutaneous MTX at doses of 20 mg/week for 16 weeks in NSAID-refractory AS patients with axial symptoms and found ASAS 20, 50 and 70 responses of 25%, 10% and none, respectively, with no change in BASDAI [11]. In a recent Cochrane review, it was concluded that MTX may not improve overall disease activity, physical function, overall pain, tenderness or swelling in the ligaments of the joints, movement of the spine, stiffness or overall well-being [15]. Leflunomide, the other major DMARD, has also fared poorly in a controlled trial in ankylosing spondylitis [16]. At present, there is inadequate data regarding the efficacy of HCQ for inflammatory backache. Apart from SSZ, there is no data available on the utility of DMARDs in USpA. The discovery of anti-TNF-α based biologics has been the major breakthrough in the management of SpA in recent times [17]. These drugs, besides providing symptomatic improvement, also improve disease activity indices. However, as of now, they have not demonstrated a definite benefit in halting radiologic progression [18]. Besides, the enormous cost incurred, at a rate of about Rs 700,000/- (Euros 10000, US $14000) per annum, puts them out of reach of the majority of the affected population in India and other third world nations. Added to this is the increased risk of tuberculosis and fungal infections, a major problem in our country. In this background, there is a severe and pressing need for alternative safe and effective drugs in the management of these diseases. It is here that combination DMARD therapy assumes importance as a potentially safe and cheaper alternative. It has been demonstrated that SpA is a TNF-α driven disease process. Though the level of this cytokine has been found to be quite high in synovial and entheseal biopsies, this is not reflected in a commensurate increase in serum levels. It could be that the benefit of conventional DMARDs such as SSZ and MTX, one of whose mechanisms of action is TNF-α blockade, is not transmitted to the axial and entheseal sites because the inflammatory burden in AS is higher. In this setting, a higher dose of the conventional DMARDs may be effective, but this is likely to be associated with a significantly increased toxicity profile. The other alternative available is the use of combination DMARD therapy. Combination DMARD therapy has been tried in rheumatoid arthritis and has been found to be better than monotherapy in halting the clinical and radiological progression of the disease process when given early in the disease [19]. A recent review of the toxicity profile of these agents has shown them to be safe, with withdrawal rates due to toxicity no higher than in patients on monotherapy [20].
Considering the effectiveness of combination DMARD therapy in RA, its potential as a safe and cheap alternative for inflammatory Chronic Low Back Ache (CLBA) in SpA needs further investigation. Most of the studies evaluating the role of DMARDs in SpA have included patients with advanced disease, in whom significant ankylosis has already occurred. Damage predominates over disease activity in this cohort and it is often very difficult to distinguish whether symptoms are due to the former or the latter. In addition, the role of combination therapy in inflammatory CLBA has not been evaluated in well-designed randomized controlled trials. In this prospective, double-blind, placebo-controlled study, we compared the efficacy of SSZ monotherapy versus a combination of DMARDs including SSZ, MTX and HCQ, for inflammatory CLBA in relatively early disease in AS/USpA patients refractory to NSAID therapy. Materials and Methods Patients who visited our SpA Clinic at Sanjay Gandhi Post-Graduate Institute of Medical Sciences (Lucknow, India), and who fulfilled criteria for the diagnosis of AS by the Modified New York Criteria [21] or USpA by the Amor criteria [22], with disease duration ≤ 8 years and with inflammatory CLBA of at least 6 months duration, were included in the study if they had a BASDAI ≥ 4 or early morning stiffness ≥ 1 hour despite taking the maximum dose of at least one NSAID for 6 weeks. The study was carried out between Jan 2010 and Dec 2012. Patients with renal or hepatic disease, severe uncorrected anemia (Hemoglobin <7 gm/dl), previous exposure to SSZ and/or MTX, pregnant or lactating females, malignancy, or chronic or ongoing acute infection were excluded from the study. In addition, patients who required and could afford biologicals, and those receiving steroids in the previous 3 months, were also excluded. Subjects' written consent was obtained according to the Declaration of Helsinki; the study was approved by the institutional ethics committee and conforms to the standards currently applied in India. The institutional ethics committee was responsible for data safety and monitoring. All the drugs and matching placebo were procured from IPCA Activa, Mumbai, India. The primary end point of the study was the proportion of patients achieving an ASAS 20 response [23] at the end of 6 months. Secondary end points included the proportion of patients achieving ASAS 40, and changes in the BASDAI [24], BASFI [25], BASMI [26], FACIT [27], patient pain VAS, patient global disease assessment VAS, physician global disease assessment VAS, and quality of life measures including the SF-36 [28] and the HAQ at the end of 6 months. Statistical analysis An intention-to-treat analysis was carried out. For patients who did not complete the study, the last observation was carried forward for analysis. The paired t-test was used to analyze the numerical data. The Mann-Whitney U test and a test of proportions were used to compare the numbers of patients achieving ASAS 20 responses in the combination DMARD vs. SSZ monotherapy groups. A p value <0.05 was considered significant. All the statistical analysis was carried out on NCSS 2007 software. Results Thirty-three patients were enrolled in the study, with a mean age of 24.9 years, an M:F sex ratio of 10:1, a mean disease duration of 39 months and a mean BASDAI of 6.0. Of these, 27 patients completed the study, 16 in the combination DMARD group and 11 in the SSZ monotherapy group. Three patients in each group dropped out of the study for the reasons given in Figure 1.
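The intention-to-treat analysis described above carries the last observation forward for patients who dropped out. Below is a minimal sketch of last-observation-carried-forward (LOCF) imputation; the list-of-visits data layout is an illustrative assumption, not the study's actual data format.

# Last-observation-carried-forward (LOCF): for a series of scheduled
# visits, any missing value (None) after dropout is replaced by the
# most recent observed value.

def locf(visits):
    """visits: list of observed scores, with None after dropout."""
    filled, last = [], None
    for value in visits:
        if value is not None:
            last = value
        filled.append(last)
    return filled

# Example: a patient's BASDAI at baseline, 1, 2, 4 and 6 months,
# dropping out after the 2-month visit.
print(locf([6.2, 5.1, 4.4, None, None]))  # [6.2, 5.1, 4.4, 4.4, 4.4]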
Patients were initially given SSZ 1 g/day, which was escalated to 2 g/day after 1 week and continued until the completion of one month. Patients not tolerating SSZ in the initial month were withdrawn from the study. At the end of the first month, patients were randomized into two groups in a double-blind fashion: either the combination DMARD group, MTX (10 mg escalated by 2.5 mg every week up to 20 mg/week) and HCQS 200 mg/day (MTX + HCQS), or the monotherapy group, placebo MTX and placebo HCQS. SSZ was continued in both groups for the next 5 months. The randomization sequence was generated using a random digit table. MTX and HCQS and their respective identical-looking (shape, color and smell) placebo tablets were first packed in opaque yellow plastic envelopes, which were then packed inside identical white opaque plastic boxes by another person who was not part of the study team. The boxes were coded as box 1xxxA or box 1xxxB and sealed. Box 1xxxA contained either MTX (2.5 mg tablets) or placebo, whereas box 1xxxB contained either HCQS or placebo. Both boxes (1xxxA and 1xxxB) provided to a particular patient contained either only drugs or only placebo. The labeled boxes were distributed to the patients by one of the authors (VA). The key was sealed and stored beyond the reach of the investigators until the completion of the study and data entry, and was opened just before data analysis. Patients continued their current NSAIDs and were advised to taper them in accordance with their symptom relief. Patients were required to follow up at 1, 2, 4 and 6 months after treatment initiation. Complete hemogram, renal and liver function tests were monitored at every visit. Serum samples for analyzing MMP-3 and TIMP-1 were drawn at baseline and at the end of the study and stored at -80°C till analysis. MMP-3 and TIMP-1 analysis was carried out by ELISA as per the manufacturer recommendations (R&D Systems Inc., Minneapolis, MN, USA). Baseline assessments The baseline characteristics were comparable in both groups (Table 1). The mean age (± S.D.) of the patients was 25.4 (± 5.5) and 24.3 (± 4.3) years in the combination DMARD (SSZ+MTX+HCQS) and SSZ monotherapy groups respectively, while the mean duration of disease was 3.1 (± 2.2) and 3.7 (± 1.9) years, respectively. There was one female in each group, with the ratio of patients of AS/USpA being 9/7 (n=16) and 6/5 (n=11) in the combination DMARD and SSZ monotherapy groups respectively. Therapy response at the end of 6 months The primary end point, ASAS 20 response, was achieved in 68.4% (13/19) and 50% (7/14) in the combination DMARD and SSZ monotherapy groups (p=0.47, Fisher's exact test), respectively. BASDAI scores decreased significantly in both groups after therapy. The secondary outcome measures (Table 3), including BASFI, patient pain VAS, patient global disease VAS, HAQ, and MCS of SF-36, improved significantly in both groups, while the 10-point BASMI, FACIT and PCS of SF-36 improved significantly in the combination DMARD group only. The levels of MMP-3 were significantly increased and the levels of TIMP-1 significantly reduced in the AS/USpA patients as compared to healthy controls at baseline (Table 4). In the combination DMARD group, the levels of MMP-3 decreased significantly following therapy as compared to the SSZ monotherapy group (Table 5). However, the levels of TIMP-1 remained unchanged in both groups as compared to baseline.
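The primary end point comparison above (13/19 vs. 7/14 ASAS 20 responders) is a 2×2 contrast amenable to Fisher's exact test. The sketch below, assuming SciPy is available, reproduces that calculation; the paper reports p=0.47 for this comparison.

# Fisher's exact test on the ASAS 20 responder counts reported above:
# combination DMARD group 13/19 responders, SSZ monotherapy 7/14.
from scipy.stats import fisher_exact

table = [[13, 19 - 13],   # combination DMARD: responders, non-responders
         [7, 14 - 7]]     # SSZ monotherapy:  responders, non-responders

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")
# The paper reports p = 0.47 for this contrast (not significant).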
Toxicity profile Almost all patients tolerated the drugs, except for one patient in the SSZ monotherapy group who withdrew due to drug-induced diarrhea, which resolved after stopping the drug, and one patient in the combination DMARD group who developed drug-associated vomiting, after which he withdrew at the end of 2 months. One patient developed transient transaminitis, which required discontinuation of medications for 4 weeks, but the patient completed the study without recurrence. Discussion In this prospective, double-blind, placebo-controlled study, we have assessed the efficacy of combination DMARDs (SSZ+MTX+HCQS) versus that of SSZ monotherapy for NSAID-refractory inflammatory CLBA in patients with AS/USpA of relatively short duration (≤ 8 years). The disease duration chosen was based on an Indian study which showed that the average delay from the onset of initial symptoms to diagnosis was 8 years [29]. In the present study we observed that both combination DMARD therapy and SSZ monotherapy are effective in the treatment of NSAID-refractory inflammatory CLBA in SpA of short disease duration. We found that the combination of MTX, HCQS and SSZ did not provide additional efficacy over that provided by SSZ monotherapy, as evidenced by ASAS 20 responses of 68% (13/19) and 50% (7/14) in the two groups, respectively. BASDAI improved significantly in both groups compared to baseline, and at the end of 6 months all but 4 (14.8%) patients had a BASDAI <4. Both therapies were well tolerated. DMARDs for axial symptomatology in SpA have been evaluated in a very limited number of studies. Although SSZ is the most well evaluated of these, its efficacy is mostly limited to the peripheral arthritis associated with SpA [6]. The ASCEND trial, which compared Etanercept 50 mg once weekly with SSZ 3 g/d for 16 weeks in active AS with both axial and peripheral symptoms, reported that ASAS 20 responses were achieved in 52.9% of patients on SSZ, a considerable proportion, though inferior to Etanercept (75.9%; p<0.0001). The mean disease duration in that study was 7.6 years [30]. The study by Braun et al., evaluating SSZ vs. placebo in inflammatory CLBA associated with AS/USpA, included 230 patients, 47% of whom also had peripheral arthritis. At the end of 6 months, there was no significant difference between the two groups in the reduction in BASDAI and most other secondary outcome variables, though in the group without peripheral arthritis there was significant benefit with SSZ in the reduction of BASDAI, spinal pain and morning stiffness compared to placebo [8]. In a re-analysis of three randomized, placebo-controlled trials of SSZ 2 g/day vs. placebo in AS, PsA and ReA by the Department of Veterans Affairs Cooperative Study group, SSZ was found to be effective for peripheral articular manifestations, but not for axial disease [7]. In a retrospective study on the efficacy of combination DMARDs (SSZ + MTX) versus SSZ monotherapy in NSAID-refractory AS, Can et al. had reported a significant improvement in BASDAI scores in both groups at 6 months, which consequently led to a reduction in the requirement for biological therapy by 21-24% if BASDAI was the decisive factor. BASDAI was >4 in 32.8% (20/61) of patients in the SSZ monotherapy arm and in 44% (11/26) in the combination arm at the end of the 6-month follow-up [31]. Being retrospective, this study is obviously biased by the physicians' subjectivity in disease assessment and their personal choices with regard to therapy.
To the best of our knowledge, this is the first placebo-controlled, double-blind study to evaluate the efficacy of combination DMARDs, namely the combination of SSZ + MTX + HCQ, for inflammatory CLBA in AS/USpA. In addition, we have included patients with relatively shorter disease duration, which ensures a more precise assessment of disease activity and hence partially obviates the influence of damage on the assessment of disease activity. In the present study we have observed that both combination DMARDs and SSZ monotherapy are associated with a good response to treatment in SpA patients with NSAID-refractory inflammatory CLBA. Though superiority of the combination of MTX, HCQS and SSZ was not observed as far as the ASAS 20 response is concerned, the combination DMARD group showed significant improvement in the physical component score of SF-36, BASMI and fatigue as compared to the SSZ monotherapy group. Moreover, a greater number of patients achieved ASAS 20 responses in the combination DMARD group. Possibly owing to the small number of patients in both groups, the study was not sufficiently powered to demonstrate the superiority of combination DMARD therapy over SSZ monotherapy. MMP-3 has been reported to reflect the degree of inflammation [32] and to correlate with disease activity in AS [33] and with radiographic progression [34]. MMP-3 is involved in the degradation of extracellular matrix proteins, including the cartilage and bone of inflamed joints [35]. We observed a significant decrease in MMP-3 levels in the combination DMARD group as compared to the SSZ monotherapy group. Thus, combination DMARD therapy has the potential to minimize joint damage by reducing the levels of MMP-3. The limitations of our study include the small sample size, due to the single-center design and rigid inclusion criteria, and the absence of a control arm with only NSAIDs, owing to the perceived unethicality of such an arm in a cohort already in discomfort due to NSAID refractoriness. Hence we conclude that, in a small cohort of patients, SSZ monotherapy is efficacious in a significant proportion of patients with NSAID-refractory inflammatory CLBA associated with AS/USpA in relatively early disease, and would probably help delay, or perhaps totally avoid, the use of expensive anti-TNF biologics and their associated complications in this subset, especially in the underdeveloped world. Moreover, we have found that the triple combination of SSZ, MTX and HCQS is not superior to SSZ monotherapy. However, a larger study with sufficient sample size and power is necessary to make a definite recommendation.
2019-03-16T13:05:37.126Z
2015-06-25T00:00:00.000
{ "year": 2015, "sha1": "5046be2c2f157b3d88b6238bf744c904d07229f0", "oa_license": "CCBY", "oa_url": "https://www.omicsonline.org/open-access/a-prospective-double-blind-placebo-controlled-trial-of-antirheumatic-drugs-vs-monotherapy-sulfasalazine-in-patients-with-low-backache-and-spondyloarthropathy-2167-7921-1000S1-001.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "e0e14660ae63db924fd82e7e13a184dd61c0ee57", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17595269
pes2o/s2orc
v3-fos-license
The Dirichlet problem for discontinuous perturbations of the mean curvature operator in Minkowski space Using the critical point theory for convex, lower semicontinuous perturbations of locally Lipschitz functionals, we prove the solvability of the discontinuous Dirichlet problem involving the operator $u\mapsto \operatorname{div}\big(\frac{\nabla u}{\sqrt{1-|\nabla u|^2}}\big)$. By a solution of (1.2) we mean a function u ∈ W^{2,p}(Ω) for some p > N, such that ‖∇u‖_∞ < 1, which satisfies (1.2) for a.e. x ∈ Ω and vanishes on ∂Ω. To the best of our knowledge, this type of solution, though for differential inclusions, was first considered by A.F. Filippov [8]. Also, for partial differential inclusions we refer the reader to the pioneering works of I. Massabo and C.A. Stuart [12], J. Rauch [14], and C.A. Stuart and J.F. Toland [17]. This work is motivated by the recent advances in the study of boundary value problems involving the operator M (see [2], [6] and the references therein) and by the seminal paper of K.-C. Chang [4], where the classical critical point theory is extended to locally Lipschitz functionals in order to study the corresponding problem for the Laplace operator in Ω, with u|_{∂Ω} = 0. It is worth pointing out that the operators M and ∆ have essentially different structures, and the theory developed in [4] appears not to be applicable to problem (1.2). Thus, we shall use a more general critical point theory, namely the one concerning convex, lower semicontinuous perturbations of locally Lipschitz functionals, which was developed by D. Motreanu and P.D. Panagiotopoulos [13] (see also [10], [11]). It should be noticed that, using this theory, various existence results concerning Filippov type solutions for Dirichlet, periodic and Neumann problems involving the "p-relativistic" operator were obtained in the recent paper [9]. A first existence result for the Dirichlet problem involving the operator M was obtained by F. Flaherty in [7], where it is shown that the problem M(u) = 0 in Ω, u|_{∂Ω} = ϕ has at least one solution, provided that ∂Ω has non-negative mean curvature and ϕ ∈ C²(Ω) with ‖∇ϕ‖_∞ < 1. The result was generalized in [1] by R. Bartnik and L. Simon, who proved that problem (1.4) is solvable, provided that the Carathéodory function g : Ω × R → R is bounded. More generally, if g satisfies the L^∞-growth condition (1.1), it is shown in [2, Theorem 2.1] that (1.4) is still solvable. The aim of the present paper is to obtain a similar result for the discontinuous problem (1.2). The rest of the paper is organized as follows. In Section 2 we recall some notions from nonsmooth analysis which will be needed in the sequel. The variational formulation of problem (1.2) is a key step in our approach and is given in Section 3. Section 4 is devoted to the proof of the main result. Preliminaries Let (X, ‖·‖) be a real Banach space and X* its topological dual. A functional G : X → R is called locally Lipschitz if for each u ∈ X there is a neighborhood N_u of u and a constant k > 0 depending on N_u such that |G(w₁) − G(w₂)| ≤ k‖w₁ − w₂‖ for all w₁, w₂ ∈ N_u. For such a function G, the generalized directional derivative at u ∈ X in the direction v ∈ X is defined by G⁰(u; v) = lim sup_{w→u, t↘0} [G(w + tv) − G(w)]/t, and the generalized gradient (in the sense of Clarke [5]) of G at u ∈ X is defined as the subset of X* given by ∂G(u) = {ζ ∈ X* : G⁰(u; v) ≥ ⟨ζ, v⟩ for all v ∈ X}, where ⟨·, ·⟩ stands for the duality pairing between X* and X. For more details concerning the properties of the generalized directional derivative and of the generalized gradient we refer to [5].
If I : X → (−∞, +∞] is a functional having the structure I = G + Φ, (2.1) with G : X → R locally Lipschitz and Φ : X → (−∞, +∞] proper, convex and lower semicontinuous, then an element u ∈ X is said to be a critical point of I provided that G⁰(u; v − u) + Φ(v) − Φ(u) ≥ 0 for all v ∈ X. The number c = I(u) is called a critical value of I corresponding to the critical point u. According to Kourogenis et al. [10], u ∈ X is a critical point of I iff 0 ∈ ∂G(u) + ∂Φ(u), where ∂Φ(u) stands for the subdifferential of Φ at u ∈ X in the sense of convex analysis [15], i.e., ∂Φ(u) = {ζ ∈ X* : Φ(v) − Φ(u) ≥ ⟨ζ, v − u⟩ for all v ∈ X}. Also, I in (2.1) is said to satisfy the Palais-Smale condition (in short, the (PS) condition) if every sequence (u_n) ⊂ X for which (I(u_n)) is bounded and G⁰(u_n; v − u_n) + Φ(v) − Φ(u_n) ≥ −ε_n‖v − u_n‖ for all v ∈ X, for some sequence (ε_n) ⊂ R₊ with ε_n → 0, possesses a convergent subsequence. The variational setting In the sequel we shall give the variational formulation of problem (1.2). With this aim, we introduce the set K₀ = {v ∈ W^{1,∞}(Ω) : ‖∇v‖_∞ ≤ 1, v|_{∂Ω} = 0}. Notice that since W^{1,∞}(Ω) is continuously (in fact, compactly) embedded into C(Ω), the evaluation at ∂Ω is understood in the usual sense. According to [2], K₀ is compact in C(Ω) and one has ‖v‖_∞ ≤ c(Ω) for all v ∈ K₀, (3.1) with c(Ω) a positive constant. Also, the functional Ψ : C(Ω) → (−∞, +∞] given by Ψ(v) = ∫_Ω (1 − √(1 − |∇v|²)) dx if v ∈ K₀, and Ψ(v) = +∞ otherwise, (3.2) is proper, convex and lower semicontinuous [2, Lemma 2.4]. Having in view the growth condition (1.1), we define F : L^q(Ω) → R by F(v) = ∫_Ω F(x, v(x)) dx, where F(x, s) denotes the primitive ∫₀^s g(x, τ) dτ, and, on account of the embedding C(Ω) ⊂ L^q(Ω), we introduce the functional F : C(Ω) → R as the restriction of the above functional to C(Ω). (3.3) From [4, Theorem 2.1], one has that F is locally Lipschitz in L^q(Ω), together with the pointwise description of its generalized gradient, valid for all v ∈ L^q(Ω). Then, by the continuity of the embedding C(Ω) ⊂ L^q(Ω), it is clear that F is locally Lipschitz on C(Ω). Also, since C(Ω) is dense in L^q(Ω), the analogous description of the generalized gradient holds (see [5], p. 47) for a.e. x ∈ Ω and for all w ∈ C(Ω). The functional framework of Section 2 fits the following choices: X = C(Ω), Φ = Ψ in (3.2), G = F in (3.3), and I = F + Ψ. Notice that the compactness of K₀ ⊂ C(Ω) implies that I satisfies the (PS) condition. Next, for arbitrary u ∈ K₀, by (1.1) and (3.1), the primitive F satisfies |F(x, u(x))| ≤ C (c(Ω) + c(Ω)^q/q) =: C₂, for a.e. x ∈ Ω. Hence, |F(u)| ≤ C₂|Ω| for all u ∈ K₀. We deduce that the functional I is bounded from below on C(Ω). Then, using that I verifies the (PS) condition and Theorem 2.1, we obtain that c = inf_{C(Ω)} I is a critical value of I, and the proof is complete.
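As a compact restatement of the framework just described (nothing new is assumed beyond what the text states; $\mathcal{F}$ below denotes the functional written as F in the text, to keep it distinct from the primitive F(x, s)), the existence argument can be summarized in display form:

% Schematic summary of the variational argument (restating the text).
\[
  I \;=\; \mathcal{F} + \Psi \quad \text{on } X = C(\Omega),
\]
with $\mathcal{F}$ locally Lipschitz and $\Psi$ proper, convex and lower
semicontinuous; $u \in X$ is a critical point of $I$ when
\[
  \mathcal{F}^{0}(u;\, v - u) + \Psi(v) - \Psi(u) \;\geq\; 0
  \qquad \text{for all } v \in X,
\]
equivalently $0 \in \partial \mathcal{F}(u) + \partial \Psi(u)$. Since $I$
is bounded from below on $X$ and satisfies the (PS) condition, the
infimum $c = \inf_{X} I$ is a critical value of $I$.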
2015-02-02T16:16:53.000Z
2015-02-02T00:00:00.000
{ "year": 2015, "sha1": "8ce79f80aff519234633e8ee07f8da6b14d10e03", "oa_license": "CCBY", "oa_url": "https://doi.org/10.14232/ejqtde.2015.1.35", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "8ce79f80aff519234633e8ee07f8da6b14d10e03", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
86819081
pes2o/s2orc
v3-fos-license
Larval and adult density of the porcellanid crab Petrolisthes armatus (Anomura: Porcellanidae) in an Amazon estuary, northern Brazil Petrolisthes armatus (Gibbes, 1850) is a porcellanid crab with a wide geographical distribution. In the present study we analyzed variations in the abundance of P. armatus adults and larvae over an annual cycle in the Marapanim estuary of the Amazon coastal zone, in the northeastern portion of the state of Pará, Brazil. Particularly, we focused on the presence of ovigerous females and timing of larval release, with the aim of elucidating reproductive patterns in a tropical estuarine system. The mean density of P. armatus larvae (zoea I and II) correlated positively with the salinity of the shallow waters of the estuary, whereas the abundance of adults correlated with the salinity registered in water samples collected from the benthic environment. There was also a significant positive correlation between larval density (zoea I and II) and water temperature. Ovigerous females were captured throughout the study period, from August 2006 to July 2007, but were more abundant in June and less abundant during the rainy months, between February and May. Larvae were only present during the dry season and transition months (June to January), and were absent during the rainy season (February to May). Petrolisthes armatus reproduces throughout the year in the Marapanim estuary and all developmental stages of this species (zoeal stages I and II, megalopae and adults) are found in the estuary. The results indicate that the study area is an important environment for the reproduction of this decapod. The porcellanids are typical crustaceans, with a pelagic larval phase. Larvae disperse in the water column, where they feed and grow. This is followed by a benthic phase associated with the substrate (QUEIROGA & BLANTON 2005). Most porcellanid species pass through two zoeal stages prior to the molt that gives rise to the megalopae (OSAWA & MCLAUGHLIN 2010). Petrolisthes armatus (Gibbes, 1850) is the most widely distributed species of the genus, being found in the western and eastern Atlantic and the eastern Pacific (MELO 1999). In Brazil, P. armatus has been recorded off the northeastern coast (Fernando de Noronha Island) and in Santa Catarina, in the extreme south (MELO 1999). On the northern coast, this species has been recorded in Maranhão and, more recently, in Pará (MELO 1999, BARROS & PIMENTEL 2001). The spatial distribution, population structure, and development patterns of P. armatus have been studied in oceanic beach and estuarine environments in southern and southeastern Brazil (OLIVEIRA & MASUNARI 1995, MASUNARI & DUBIASKI-SILVA 1998, MIRANDA & MANTELATTO 2009, 2010a). The larval development of P. armatus was first investigated by LEBOUR (1943), who described zoeal stages I and II, whereas GORE (1970, 1972) described the complete larval development of specimens from the Atlantic and Pacific oceans, respectively. BROSSI-GARCIA & MOREIRA (1996) also described the juvenile instars of this species. STILLMAN & SOMERO (2000) and STILLMAN (2002) studied the thermal tolerance of a number of porcellanids, including P.
armatus, and the consequences of this tolerance for their geographic distribution. The associations of these species with sandbanks formed by polychaetes, and their role as hosts for an isopod species, have also been reported (OLIVEIRA & MASUNARI 1998, 2006, MICHELETTI-FLORES & NEGREIROS-FRANSOZO 1999, BOSA & MASUNARI 2002, MIRANDA & MANTELATTO 2010b). A number of other studies have also focused on the ecological features of the species, abundance patterns, and the morphology of the larval stages (LIMA et al. 2005, MAGRIS & LOUREIRO-FERNANDES 2005, DÍAZ-FERGUSON et al. 2008, TILBURG et al. 2010, MELO JR et al. 2012). The only information available for P. armatus from the Amazon estuary is the first record of this species in the region (BARROS & PIMENTEL 2001) and descriptions of the stomachs of larvae and post-larvae (LIMA et al. 2005). No published data on the reproductive biology or life cycle of P. armatus are currently available. In the present study we analyzed the variations in the abundance of P. armatus larvae and adults over an annual cycle in an estuary of the Amazon coast (Marapanim). We focused on the periods when the largest numbers of ovigerous females were present, and on the timing of larval release in this tropical environment. MATERIAL AND METHODS The northern Brazilian coast is 1,200 km long. It encompasses the mouth of the world's largest river in terms of length and of freshwater and sediment discharge, as well as the largest continuous tract of mangrove forest (SOUZA FILHO et al. 2009). This region is also characterized by macro-tides, low water transparency and a predominance of sandy and muddy bottoms (SOUZA FILHO et al. 2009, KRUMME & SAINT-PAUL 2010). The waters of the Marapanim estuary are well mixed, being influenced by the semi-diurnal macrotides (amplitude > 5 m) (BERRÊDO et al. 2008). In the municipality of Marapanim, monthly precipitation is highest in February (760.60 mm) (ANA 2007), which is typical of the region's climate, characterized by a rainy season between February and May (MORAES et al. 2005). Three distinct climatic phases can be identified in this region (OLIVEIRA et al. 2012): the dry season (August to December), transition periods (January and June to July) and the rainy season (February to May). Mean temperatures range from 27°C in the rainy season to 29°C in the dry season (BERRÊDO et al. 2008). The pH becomes higher during the rainy season (from 5.74 to 6.68) because of the intense influx of organic acids produced by the mangroves (24-60 g/kg) (BERRÊDO et al. 2008). Larvae and adults of P. armatus were collected in the estuary of the Marapanim River, which is located in the Brazilian state of Pará, part of the Amazon coastal zone known locally as the "Salgado paraense". "Salgado", which means salty, is a reference to the fact that this region is dominated by the Atlantic Ocean, in contrast with the sector of the coast further west, which is under the influence of the discharge of the Amazon River. The municipality of Marapanim is located between the Mãe Grande de Curuçá (BRASIL 2002b) and Maracanã (BRASIL 2002c) extractive reserves, which are sustainable-use protected areas, as defined by the Brazilian National Conservation System (BRASIL 2002a). Both reserves are important for the protection of local mangrove ecosystems and for the subsistence and cultural identity of local communities (IBAMA 2006).
The sampling unit was a 0.5 m x 0.5 m quadrat of PVC tubing. The sampling sites were selected randomly during low tide, when the substrate was exposed. The sites were located equidistantly, following a horizontal line perpendicular to the margin of the estuary. The porcellanid crabs present in each quadrat were removed manually and the substratum was sieved (3 mm mesh size) in order to facilitate the collection of specimens. The individuals were kept on ice before being fixed in glycerol, and were subsequently stored at the Laboratório de Biologia Pesqueira e Manejo dos Recursos Aquáticos, Universidade Federal do Pará, Brazil. During collection, samples of water were obtained from the rock pools in which the adult P. armatus were found, with a 3 mL syringe, for the determination of salinity using an optical refractometer (Atago). Data on monthly precipitation were obtained during the study period, and long-term means were taken from the National Water Agency (ANA 2007). The temperature was recorded only for the zooplankton samples. A multiparameter analyzer was used to measure temperatures in situ during collecting. It was not possible to measure the temperature of the benthic samples due to the reduced volume of water. In the laboratory, the specimens were counted and identified to species using the identification key available in MELO (1999). Males and females were identified based on OLIVEIRA & MASUNARI (1995). The abundance of benthic specimens was estimated by dividing the number of individuals collected in each quadrat by its area (0.25 m²), i.e., multiplying the count by 4 to obtain the number of individuals per m². Zooplankton samples were obtained from six sites located along the margins of the Marapanim estuary. The western margin, with the town of Marapanim and three fishing villages (Araticum, Aracumirim, and Alegria), suffers stronger anthropogenic impact. Three sites were selected on this margin, A1, A2 and A3, at distances of 4.7 km (A1-A2) and 6.7 km (A2-A3) from one another. The other three sites, B1, B2 and B3, were established on the eastern margin of the estuary, which is virtually uninhabited. The distance between B1 and B2 was 8.2 km, and that between B2 and B3, 8.7 km (Fig. 1). We attempted to establish sites B1, B2, and B3 exactly opposite the corresponding points of profile A. In some cases this was not possible due to the presence of sandbanks or rocky outcrops. These sites also corresponded to the estuary's gradient of salinity, with zone I (A1 + B1) closest to the open sea, zone II (A2 + B2) intermediate, and zone III (A3 + B3) in the innermost portion of the estuary, where salinity is lowest. The temperature and pH of the water were measured during the collection of specimens at all six sites using a YSI multiparameter analyzer. Water samples were collected in polyethylene flasks for the analysis of salinity in the laboratory, using an optical refractometer (Atago). Zooplankton samples were collected by horizontal surface trawls using a conical-cylindrical net (200 µm mesh size) equipped with a flowmeter at the mouth. The specimens were fixed in a formalin solution (4%). A total of 144 samples were collected, two samples from each site over the 12 months of the study period. Each sample (1 L) was subdivided in a Folsom Plankton Splitter, following the procedure described by BOLTOVSKOY (1981). A volume of 250 mL was established as the standard sample for the analysis of the larvae of P.
armatus. These larvae were separated from the other zooplankton, analyzed under a Zeiss optical microscope, dissected, and identified as zoeal stages I or II, based on the studies of GORE (1970, 1972). RESULTS The mean and standard deviation of the environmental parameters recorded in the shallow waters of the Marapanim estuary were 28.6 ± 0.5°C for temperature, 7.8 ± 0.6 for pH, and 19 ± 9.7 for salinity. The mean salinity was 17.5 ± 10.4 at all sites where adult P. armatus were collected (benthic environment). As salinity did not vary significantly between the shallow water and the rock pools (H = 1.81, p = 0.18), the monthly medians of the pooled values are presented in Fig. 4. In this case, significantly higher values (H = 122.18, p < 0.01) were recorded during the dry season, between August (median = 28.5, range = 20-33) and December (median = 30, range = 26-31), with intermediate values being recorded during the transition month of January, and the lowest values during the rainiest months, between February (median = 8, range = 6-8) and July (median = 17.5, range = 13-24) (Fig. 4). The highest abundance of P. armatus larvae recorded during the study period was 269.35 zoea I/100 m³ and 172.15 zoea II/100 m³ (Table I). Larval density was significantly higher (H = 31.84, p < 0.01) during the drier and transition months, in particular in October, 2006, when the highest median values were determined. Also, we registered two peaks of larval abundance, in December, 2006, and July, 2007 (Fig. 5). The monthly variation in adult density was the opposite of that found for the larvae, with significantly higher densities recorded in January (transition month) and February (rainy month) (Fig. 6). The mean density of P. armatus larvae (zoea I and II) correlated positively with the salinity of the shallow waters of the estuary, whereas that of the adults collected from the rocky outcrops correlated with the salinity of the benthos, but the density of megalopae did not (Table II). There was also a significant correlation between the density of larvae (zoea I and II) and the temperature of the estuarine waters (Table II). Ovigerous P. armatus females were collected throughout the year, but were less common during the rainy months, from February to May. Larvae were collected only during the dry season and transition months, i.e., August-January, June, and July (Fig. 7), and were absent in the rainy season (February-May). Megalopae were collected between boulders only in August, October, January, and February, with the highest density. Significant abundance of zoea I (H = 31.84, p < 0.01) occurred in December, 2006, with approximately 500 larvae/100 m³, and in July, 2007, with just over 450 larvae/100 m³. Peaks of zoea II abundance were recorded in July, 2007, October, 2006, and December, 2006, with approximately 297, 143, and 128 larvae/100 m³, respectively (Fig. 8). In August and October all developmental stages of P. armatus were collected, including ovigerous females (Fig. 8). DISCUSSION The densities of P. armatus adults in tropical regions such as the Marapanim estuary in northern Brazil tend to be higher than those recorded in colder regions. This pattern can be observed on the Brazilian coast. In the present study (tropical), maximum densities of P.
armatus adults were observed in August, with 14,960 individuals/m², and in February, with 9,056 individuals/m². MASUNARI & DUBIASKI-SILVA (1998) and OLIVEIRA & MASUNARI (1995), by contrast, recorded maximum densities of approximately 668 and 305 individuals/m² on the southern coast (subtropical), and MIRANDA & MANTELATTO (2009) collected a total of 775 specimens over the course of a year on the coast of the state of São Paulo (subtropical). The higher density of P. armatus adults recorded in tropical regions may reflect the relatively successful recruitment of this species in these environments, although integrated knowledge of all stages of the life cycle (covering both planktonic and benthic environments) would provide a better understanding of the population dynamics of this decapod species (DÍAZ-FERGUSON et al. 2008). In the present study, all the developmental stages of P. armatus (zoea I, zoea II and megalopa) were considered, allowing the identification of the period of most intensive reproductive activity and also of the developmental strategies of this species. Petrolisthes armatus reproduces throughout the year in the Marapanim estuary, in the Amazon coastal zone, although ovigerous females were most common in June (at the end of the rainy season) and August (dry season). All life stages (zoea I and II, megalopa and adults) can be found in the estuary. The abundance of adults was high throughout the study period (August 2006 through July 2007), which implies that the study area is important for the development of this decapod species, and therefore for its conservation. The spawning of decapod species in tropical estuaries tends to be continuous throughout the year, in contrast with temperate estuaries (DITTEL & EPIFANIO 1990). FRANSOZO & BERTINI (2001) identified some porcellanid species that present seasonal reproduction, and others that breed throughout the year. Petrolisthes boscii (Audouin, 1826), P. rufescens (Heller, 1861), P. elongatus (H. Milne Edwards, 1837), and P. vanderhorsti Haig, 1956 are all known to reproduce seasonally in some parts of the world (LEWIS 1960, WEAR 1965, AHMED & MUSTAQUIM 1974). GEBAUER et al. (2007) recorded a distinct pattern in P. laevigatus in southern Chile, with an 11-month breeding season starting at the end of the summer and ending in the middle of the subsequent summer. The Marapanim estuary offers favorable temperature conditions for reproduction throughout the year, with mean temperatures of between 27°C and 30°C. This intensive breeding activity enables P. armatus to colonize the region successfully, with densities of up to 41,280 individuals/m². DÍAZ-FERGUSON & VARGAS-ZAMORA (2001) collected a total of 15,382 P. armatus crabs in the tropical Gulf of Nicoya in Costa Rica, between December 1997 and November 1998, with a maximum density of almost 100 individuals/m². Other decapod species also reproduce throughout the year in the Marapanim estuary, such as the thalassinidean shrimps Lepidophthalmus siriboia Felder & Rodrigues, 1993 and Upogebia vasquezi Ngoc-Ho, 1989 (OLIVEIRA et al. 2012, SILVA & MARTINELLI-LEMOS 2012). The reproductive period of species of the suborder Pleocyemata is normally defined according to the abundance of ovigerous females in different periods of the year (SANT'ANNA et al. 2009), an approach also used for P.
armatus in southeastern Brazil (MIRANDA & MANTELATTO 2009). Although two peaks of more intense reproductive activity were recorded, the first in the dry season (December), when the peak in the abundance of stage I zoea was recorded (483 larvae/100 m³), ovigerous females of P. armatus were observed throughout the year in the Marapanim estuary. The abundance of stage II zoea was also above 100 larvae/100 m³, while megalopae were recorded in the subsequent months, January (dry-rainy transition) and February (early rainy season), indicating the beginning of the recruitment period. During the rainy months (February through May), the density of ovigerous females was lower, and no larvae were collected (except in April). Our hypothesis is that the absence of larvae during this period is due to the fact that they do not survive when salinity is low. We do not believe that they are dragged out of the estuary to offshore waters. Larval development from hatching to the megalopa stage does not exceed one month in either the Atlantic (GORE 1970) or the Pacific (GORE 1972), which suggests that larvae hatching between February and May are unable to survive when salinity is lower than 20. The low density of zoea I in April may represent the small number of larvae that are able to survive the prevailing conditions, reinforcing the idea that they remain in the estuary. Larval abundance increases at the end of the rainy season, when the environmental conditions begin to favor larval development, that is, when the waters become more saline. Salinity is a key factor in the structure and distribution of decapod larvae in tropical estuarine environments (ANGER 2003, MAGRIS & LOUREIRO-FERNANDES 2011). The dispersal and recruitment of larvae of estuarine crustaceans are strongly influenced by salinity (O'CONNOR & EPIFANIO 1985). Salinity appears to be the principal factor influencing the breeding activity of P. armatus in the study area, as shown by the lower densities of ovigerous females and the reduced numbers of larvae collected during the rainiest months (February-May), when salinity was significantly lower than in the dry season. Similarly, the density of larvae of the thalassinids L. siriboia and U. vasquezi in the Marapanim estuary is significantly higher during the dry season (OLIVEIRA et al. 2012). The same pattern has also been recorded in a tropical estuary in Costa Rica, where the abundance of P. armatus larvae was significantly higher during the dry season, when the salinity of the water was higher (DÍAZ-FERGUSON et al. 2008). In an estuary in the South Atlantic, TILBURG et al. (2010) also recorded significant variation in the density of P. armatus larvae in relation to salinity. In other regions of the world, in particular the temperate zone, temperature is the principal factor influencing the reproductive patterns of decapod species, including porcellanids (HERNÁEZ-BOVÉ 2001, EMPARANZA 2007, HOLLEBONE & HAY 2007). As considerable seasonal fluctuations in salinity occur in coastal and estuarine environments, the decapod species that inhabit these areas adopt different strategies of development according to their physiological constraints (STRATHMANN & STRATHMANN 1982, ANGER 2003, 2006). Some species, such as Ucides cordatus (Linnaeus, 1763) and Uca vocator (Herbst, 1804), export their early larval stages to offshore waters, which implies a broader dispersal strategy (DIELE & SIMITH 2006, SIMITH & DIELE 2008, SIMITH et al.
2012). Some estuarine crabs also increase their swimming activity at higher salinities to avoid being removed from the estuary (QUEIROGA & BLANTON 2005). In northeastern Brazil, MELO JR et al. (2012) found that P. armatus larvae were more concentrated in the midwater and at the surface during the flood tide, thus avoiding being removed to more internal regions. By contrast, these larvae were more concentrated at the bottom during the ebb tides, avoiding exportation, which suggested that P. armatus reproduces and spends its life cycle on the inner shelf, rather than the outer shelf (MELO JR et al. 2012). DITTEL & EPIFANIO (1990) found all larval stages of Pinnotheres spp. in the plankton of the Gulf of Nicoya and suggested that this species reproduces in this region and that its larvae are retained in the system. At Marapanim, P. armatus remains in the estuary throughout its life cycle, following the same reproductive strategy described by MELO JR et al. (2012) for this species on the northeastern coast of Brazil. This hypothesis has yet to be tested experimentally, although the presence of all the developmental stages of P. armatus throughout the year at different locations within the Marapanim estuary reinforces the conclusion that this species passes through its larval phases on the inner shelf. Furthermore, P. armatus breeds throughout the year in this region and salinity is a key factor for its development. KEY WORDS. Decapoda; macrobenthos; reproduction; zooplankton. The volume of water filtered during the trawls was estimated as V = A*R*C, where V = the volume of water in m³, A = the area of the net opening in m² (for a 0.5 m diameter opening, A = 0.19625 m²), R = the difference between the flowmeter readings after and before each trawl (FF − FI), and C = the standardization factor obtained following calibration of the flowmeter (C = 0.32). The density of P.
armatus larvae was calculated as D = n/V, where n = the number of larvae collected during the sampling and V = the volume of water filtered by the net (m³), expressed as larvae per 100 m³. The variation in the abiotic factors and in the density of the larvae and adults of P. armatus during the study period (August 2006 through July 2007) was evaluated using the Kruskal-Wallis nonparametric analysis of variance, given the lack of normality or homoscedasticity of the data, even after the data had been transformed (logarithmic and square root). The possible relationship between the density of P. armatus and abiotic variables was evaluated using Spearman's correlation coefficient. All analyses were run in BioEstat 5.0® (AYRES et al. 2007), considering α = 0.05.

Figure 1. Sampling sites where the larvae and adults of Petrolisthes armatus were collected monthly in the Marapanim estuary, northern Brazilian coast, between August 2006 and July 2007.

Table I. Abundance and density of P. armatus at different stages of development recorded in the Marapanim estuary, Pará (Brazil), between August 2006 and July 2007.

Table II. Spearman coefficients for the correlation between the density of P. armatus (larvae and adults) and abiotic factors. Significant values in bold (p < 0.05).
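To make the density calculation above concrete, the following minimal Python sketch computes the filtered volume V = A*R*C and the larval density D = n/V per 100 m³. Only the constants A = 0.19625 m² and C = 0.32 come from the methods text; the flowmeter readings and larval count in the example are hypothetical.

    NET_AREA_M2 = 0.19625   # A: mouth area of the 0.5 m diameter net (m^2), from the text
    CALIBRATION = 0.32      # C: flowmeter standardization factor, from the text

    def filtered_volume(flow_final, flow_initial):
        """V = A * R * C, where R = FF - FI (flowmeter rotations)."""
        return NET_AREA_M2 * (flow_final - flow_initial) * CALIBRATION

    def larval_density_per_100m3(n_larvae, volume_m3):
        """D = n / V, expressed as larvae per 100 m^3."""
        return n_larvae / volume_m3 * 100.0

    # Hypothetical trawl: 38 stage I zoeae, flowmeter advancing from 1200 to 2450.
    v = filtered_volume(2450, 1200)                      # 78.5 m^3 filtered
    print(round(larval_density_per_100m3(38, v), 1))     # 48.4 larvae/100 m^3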
Correlation of Prolactin and Thyroid Hormone Concentration with Menstrual Patterns in Infertile Women

Introduction The increased prevalence of TSH levels at the upper normal limit and raised anti-thyroperoxidase antibody titers indicate a relatively more frequent occurrence of compensated thyroid dysfunction in infertile women. This finding necessitates considering such cases for a thorough investigation of the pituitary-thyroid axis. In addition, as some patients may exhibit the clinical picture of hypothyroidism despite normal TSH and free thyroxine (FT4) concentrations, this hospital-based study was undertaken to review the impact of thyroid status on the menstrual function and fertility of the subjects. Materials and Methods In this study, we investigated 160 women with primary infertility who attended the Biochemistry department, Maulana Azad Medical College (MAMC), New Delhi for hormonal evaluations. Eighty fertile women of similar age and socioeconomic status were enrolled as controls. The associations between thyroid dysfunction, serum prolactin, LH and FSH levels, and menstrual status were reviewed. Results The majority of the infertile and fertile women were euthyroid. The crude prevalence of hypothyroidism was slightly higher in the infertile group than in the general population. There was a positive correlation between serum TSH and prolactin levels in the infertile subjects. Menstrual disorders (mainly oligomenorrhea) were reported by about 60% of the infertile women. Hyperprolactinemia was seen in 41% of the infertile women, compared with only 15% of the control group. The infertile women with hypothyroidism had significantly higher prolactin levels than the subjects with hyper- or euthyroidism. There was a significant association between abnormal menstrual patterns and anovulatory cycles, as observed on endometrial examination of infertile subjects with raised serum prolactin levels. Conclusion There is a greater propensity for thyroid disorders in infertile women than in fertile ones, as well as a higher prevalence of hyperprolactinemia in infertile patients.

Introduction Hormonal disorders of the female reproductive system comprise a number of problems resulting from dysfunction of the hypothalamic-pituitary-ovarian axis. These relatively common disorders often lead to infertility. Difficulty conceiving, or subfertility, constitutes a major psychological burden. Proper evaluation of these disorders involves a multidimensional diagnostic approach, with a pivotal contribution from clinical laboratories (1). Measurement of prolactin and thyroid hormones, especially thyroid stimulating hormone (TSH), has been considered an important component of the infertility work-up in women (2). Thyroid dysfunction interferes with numerous aspects of reproduction and pregnancy. Several articles have highlighted the association of hyperthyroidism or hypothyroidism with menstrual disturbance, anovulatory cycles, decreased fecundity and increased morbidity during pregnancy (3,4,5). The increased prevalence of serum TSH levels at the upper normal limit and raised anti-thyroperoxidase antibody titers indicates a relatively more frequent occurrence of compensated thyroid dysfunction in infertile women than in normal women of reproductive age. This necessitates considering such cases as a subgroup of women in whom all aspects of the pituitary-thyroid axis should be thoroughly investigated, rather than relying on TSH testing alone (6).
Despite normal TSH and free thyroxine (FT4) concentrations, some patients may exhibit the clinical picture of hypothyroidism. Treating such thyroid dysfunction with low-dose thyroxine slightly increases FT4 levels, inhibiting TSH secretion within the normal range and resulting in subjective improvement in health status, normalization of menstrual abnormalities and restoration of normal fertility (7). Hyperprolactinemia adversely affects fertility potential by impairing the pulsatile secretion of GnRH and hence interfering with ovulation (3,8). This disorder has been implicated in menstrual and ovulatory dysfunctions such as amenorrhea, oligomenorrhea, anovulation, an inadequate corpus luteal phase and galactorrhea (9,10). However, many infertile women present with normal menses despite a raised serum prolactin level. Pituitary hormones such as TSH, prolactin or growth hormone may act synergistically with FSH and LH to enhance the entry of non-growing follicles into the growth phase (7). The morphological changes observed in the follicles in hypothyroidism can be a consequence of higher prolactin production, which may block both the secretion and the action of gonadotropins (11). Adequate thyroid supplementation restores prolactin levels as well and normalizes ovulatory function (12). Even in the absence of hyperprolactinemia, hypothyroidism itself may contribute to infertility, since thyroid hormones may be necessary for the maximal production of both estradiol and progesterone (13). In areas with endemic goiter, the major contributor to thyroid dysfunction is iodine deficiency, and infertility associated with thyroid dysfunction in these areas is not uncommon (14). The prevalence of thyroid dysfunction among infertile females in Delhi and its suburban areas, which is considered a non-endemic zone for iodine deficiency, had not been studied prior to this research. The aims of the study were to determine, in a hospital-based study, the prevalence of thyroid disorders in female infertility in Delhi and its suburbs after exclusion of tubal factor and male factor infertility, and to investigate the impact of thyroid status on serum prolactin, FSH and LH on the third day of the menstrual cycle.

Materials and Methods The cases consisted of 160 female subjects suffering from primary infertility who had been referred to the Department of Biochemistry of Maulana Azad Medical College, New Delhi for hormonal evaluations. The cases were selected over a period of six months. The inclusion criteria were a diagnosis of primary infertility, age between 20-40 years and a duration of marriage of more than one year. The exclusion criteria adopted during case selection were male factor infertility and, among the female factors, tubal factor, any congenital anomaly of the urogenital tract, or any obvious organic lesion. Any history of thyroid disease, previous thyroid surgery, or current thyroid medication also led to exclusion from the study. The protocol for the infertility work-up in the women included: a detailed medical history, a gynecological examination, a premenstrual endometrial sampling, an ultrasonography, a hormonal profile (TSH, FT4, FT3, prolactin, FSH and LH), screening for infectious diseases and, whenever indicated, hysterosalpingography and/or laparoscopy. Eighty healthy fertile female employees of Lok Nayak hospital, New Delhi, with a similar age range and socioeconomic status, were enrolled as controls. The participants were enrolled after providing written informed consent.
Five milliliters of fasting venous blood was obtained on the morning of day three of the menstrual cycle for serum biochemical analysis. Serum was separated and stored for further analysis. All the hormones were estimated using electrochemiluminescence kits for TSH, FT3, FT4, prolactin, LH and FSH (Roche Diagnostics, Mannheim, Germany) on an Elecsys 2010 analyzer (Roche Healthcare, Basel, Switzerland). Assay reliability was determined by the use of commercially derived control sera of low and high concentrations. The normal ranges of serum prolactin and TSH were 2-25 ng/ml and 0.5-4.7 mIU/L, respectively. Women with serum prolactin levels >100 ng/ml were advised to undergo CT scan or MRI to rule out any pituitary pathology. As per the serum TSH profile, the cases as well as the controls were divided into three groups: (I) euthyroidism, when the TSH value was within the normal range; (II) hyperthyroidism, when serum TSH was <0.5 mIU/L; and (III) hypothyroidism, when serum TSH was >4.7 mIU/L. Patients with subclinical hyperthyroidism, as well as those with subclinical hypothyroidism, were not included in the study. Statistical analysis was done using SPSS software, version 12 (SPSS Inc., Illinois, USA), through Chi-square and Mann-Whitney U test calculations. Spearman's correlation was used to look for associations between different variables in the study group. A p-value <0.05 was considered statistically significant.

Results Thyroid function status in the study population is depicted in table 1. Most of the control (86%) and infertile women (87%) were euthyroid. The prevalence of hyperthyroidism in the cases and the controls was 5% and 9%, respectively. Hypothyroidism was seen in 8% of the infertile subjects, whereas in the control group it was found to be 5%. The crude prevalence of hypothyroidism was slightly higher than that of hyperthyroidism in the infertile group. The mean serum levels of TSH, FT4, FT3, LH, FSH and prolactin in the study group are depicted in table 1. Significantly higher serum TSH levels were noted in the infertile cases with euthyroidism (p<0.01) and hypothyroidism (p<0.001) when their distributions were compared to those of their respective control groups. The rise in serum FT4 and FT3 in the infertile group with hyperthyroidism was significantly higher than in the control group with hyperthyroidism (p<0.001). The serum FT4 value was significantly lower (p<0.01) in the infertile group with hypothyroidism than in the control group with hypothyroidism. Hyperprolactinemia was seen in 41% of the infertile women, compared with only 15% in the control group. The mean serum prolactin concentration in the infertile cases with euthyroidism was significantly higher (p<0.001) than in the control group with euthyroidism. The infertile women with hypothyroidism had significantly higher prolactin levels than the other three groups (the controls and the infertile subjects with euthyroidism and hyperthyroidism) (p<0.001). The serum LH and FSH levels in the infertile patients with hyperthyroidism were significantly higher than in the control group with hyperthyroidism (p<0.001 and p<0.05, respectively). Menstrual disturbances were observed in 18.7% of the control group and 61.2% of the infertile group (Table 2). The majority of the cases (82.6%) as well as the controls (66.7%) who reported menstrual disturbances presented with oligomenorrhea. Among the infertile women, 54% showed nonsecretory endometrium in premenstrual endometrial samples, suggestive of the presence of an anovulatory cycle.
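To illustrate the classification and correlation analysis described above, the sketch below groups subjects by the stated TSH cut-offs (0.5 and 4.7 mIU/L) and computes Spearman's correlation between TSH and prolactin in Python (the original analysis used SPSS). The example values are invented placeholders for demonstration, not the study data.

    from scipy.stats import spearmanr

    def thyroid_group(tsh_miu_l):
        """Classify thyroid status by the TSH cut-offs used in the study."""
        if tsh_miu_l < 0.5:
            return "hyperthyroid"
        if tsh_miu_l > 4.7:
            return "hypothyroid"
        return "euthyroid"

    # Invented example data (TSH in mIU/L, prolactin in ng/ml), not the study data.
    tsh = [1.2, 3.8, 5.6, 0.3, 2.1, 6.9, 4.0, 1.7]
    prolactin = [12.0, 20.5, 38.2, 9.1, 15.4, 52.7, 24.3, 13.8]

    groups = [thyroid_group(t) for t in tsh]
    rho, p = spearmanr(tsh, prolactin)
    print(groups)
    print("Spearman rho = %.2f, p = %.3f" % (rho, p))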
Serum TSH levels were found to be positively correlated with prolactin levels in the cases (r = 0.4, p = 0.01).

Discussion In this study, the majority of infertile as well as fertile women were euthyroid. However, the distribution of thyroid dysfunction in the study group was somewhat different: hyperthyroidism was more prevalent in the controls, whereas hypothyroidism was more prevalent in the infertile group. Elahi et al. (6) also depicted such a pattern of thyroid dysfunction. Some investigators have claimed an association of mild iodine deficiency with hyperthyroidism and, less frequently, with hypothyroidism in the population (15,16,17). The relatively higher occurrence of hypothyroidism in infertile women, when compared to the control group in this study, reflects the tendency of infertile patients towards thyroid insufficiency, or vice versa. The prevalence of hypothyroidism in women of reproductive age (20-40 years) varies between 2% and 4% (18,19). The relatively higher crude prevalence of hypothyroidism (8%) in the infertile women found in our study could be due to the special referral pattern of the patients, who were referred to the hospital based on suspicion of thyroid abnormalities.

A higher occurrence of hyperprolactinemia (41%) was seen in the infertile group compared to the controls (15%) in this study. This higher propensity for hyperprolactinemia is in agreement with the findings of Kumkum et al (20), who reported a prevalence of 46% in their study. In our study, we observed a greater percentage of infertile patients with hypothyroidism exhibiting hyperprolactinemia (46.1%). Choudhary and Goswami (21) observed hyperprolactinemia in 16.6%, and Singh et al in 57%, of women with hypothyroidism (22). Fifty-eight percent of the infertile women with raised serum prolactin levels showed nonsecretory endometrium suggestive of anovulation. Kumkum et al (20) reported the incidence of anovulation in hyperprolactinemic patients to be 73%. The prevalence of ovulatory dysfunction, as one of the causes of female infertility, has been variously reported in different studies: 31.4% (22) and 51.4% (9).

Menstrual abnormalities were detected in about 60% of the infertile cases in this study, which is close to the 57.6% reported by Kumkum et al (20). Anovulatory cycles were present in 54% of the cases, which corroborates the finding of Kumkum et al (50%). The most common menstrual abnormality presented by the infertile group was oligomenorrhea (82%), whereas Kumkum et al (20) reported a lower figure (50%). In the study done by Krasses et al (24), the prevalence of menstrual irregularities (mainly oligomenorrhea) reached 23% among 171 hypothyroid patients, while it was only 8% in 214 controls (p<0.05). The authors showed an association between the severity of menstrual abnormalities and higher serum TSH concentrations. We observed irregular menstrual cycles, mainly amenorrhea, in 31% of the cases with hypothyroidism. Our study revealed an association of menstrual irregularities with raised serum prolactin levels (p<0.001) rather than with TSH concentrations. The higher incidence of amenorrhea could be linked to the hyperprolactinemia seen in the majority of patients with hypothyroidism. Hyperthyroidism was found in 8% of the infertile patients in the present study. Joshi et al (25) evaluated 53 hyperthyroid patients and found 5.8% of them to be infertile.
In contrast to hypothyroidism, most women with hyperthyroidism do not have fertility problems, although 25% may have irregular menses (26), as also noted by Joshi et al (25). Likewise, our study revealed that 62.5% of hyperthyroid cases had menstrual disturbances. Krasses et al indicated that menstrual disturbances in thyrotoxicosis are 2.5 times more frequent than in the general population (26).

Hyperprolactinemia resulting from longstanding primary hypothyroidism has been implicated in ovulatory dysfunctions ranging from inadequate corpus luteal progesterone secretion, when prolactin is mildly elevated, to oligomenorrhea or amenorrhea, when circulating prolactin levels are high (27). Amenorrhea occurs in hypothyroidism due to hyperprolactinemia, which results from a defect in the positive feedback of estrogen on LH and from LH and FSH suppression. Our study revealed a significant association of abnormal menstrual patterns, as well as anovulatory cycles, with hyperprolactinemia in the infertile group (p<0.001). For these reasons, TSH and prolactin are commonly ordered clinical tests in the evaluation of infertile women.

The main drawback of the present study was the limited number of participants. Only 80 controls could be included, as compared to 160 cases, due to the stringent inclusion criteria and non-compliance.

Conclusion There was a higher crude prevalence of hypothyroidism and hyperprolactinemia in the infertile women than in the fertile ones in the control group. Both hypothyroidism and hyperthyroidism may result in menstrual disorders. Hypothyroidism is commonly associated with hyperprolactinemia, and such patients exhibit ovulatory failure. Hence, assessment of serum TSH and prolactin levels is mandatory in the work-up of all infertile women, especially those presenting with menstrual irregularities. Because this was a hospital-based study, conducted as such owing to the difficulties in recruiting comparable controls, the data should be extrapolated to the general population with care to avoid even the slightest traces of selection bias. For a better estimate of the intended prevalence, a population-based study may be conducted.
Effects of aging on accompanying intermittent hypoxia in a bleomycin-induced pulmonary fibrosis mouse model

Background/Aims Obstructive sleep apnea (OSA) is prevalent in older patients with idiopathic pulmonary fibrosis (IPF); however, it is underrecognized. OSA is characterized by intermittent hypoxia (IH) and sleep fragmentation. In this study, we evaluated the effects of IH in an older mouse model of bleomycin-induced lung fibrosis. Methods Bleomycin-induced mice (C57BL/6, female) were randomly divided into four groups: young vs. old and room air (RA)-exposed vs. IH-exposed. Mice were exposed to RA or IH (20 cycles/h, FiO2 nadir 7 ± 0.5%, 8 h/day) for four weeks. The mice were sacrificed on day 28, and blood, bronchoalveolar lavage (BAL) fluid, and lung tissue samples were obtained. Results The bleomycin-induced IH-exposed (EBI) older group showed more severe inflammation, fibrosis, and oxidative stress than the other groups. The levels of inflammatory cytokines in the serum and BAL fluid increased in the EBI group. Hydroxyproline levels in the lung tissue increased markedly in the EBI group. Conclusions This study demonstrates the possible harmful impact of OSA in an elderly mouse model of lung fibrosis. It further suggests that older patients with IPF and OSA may be more of a concern than younger patients with IPF. Further research is required in this area.

INTRODUCTION Idiopathic pulmonary fibrosis (IPF) is a fatal chronic disease that causes respiratory failure and has a median survival of 2-3 years from the time of diagnosis [1]. IPF progression in older adults is characterized by the restriction of pulmonary function due to fibrosis of the interstitial area [1]. Common comorbidities are cardiovascular disease, pulmonary hypertension, gastroesophageal reflux disease (GERD), and obstructive sleep apnea (OSA) [2]. Among these, OSA is particularly frequent (prevalence 6-91%) and is closely associated with the quality of life and prognosis in patients with IPF [3][4][5][6].

OSA is characterized by intermittent hypoxia (IH), a condition in which periods of normoxia alternate with hypoxia, and by sleep fragmentation [7]. Moreover, chronic hypoxia is frequent in IPF and promotes systemic inflammation and pulmonary vascular damage [8,9]. Various studies have suggested that if IH, a key mechanism underlying OSA, co-occurs with IPF, it may cause greater oxidative stress and systemic inflammation than IPF alone [10,11]. Recent clinical and experimental data have shown that IH is closely associated with disease progression and poor outcomes in patients with IPF. However, IH has not been adequately studied in the aging population with IPF.

In this study, we investigated the effects of IH on bleomycin-induced pulmonary fibrosis using mouse models aged 8 weeks (young) and 24 months (older) to characterize alterations in inflammation and pulmonary structures.

Mice and experimental groups All experiments were performed using young (eight wk old) and old (24 mo old) mice. Pathogen-free female C57BL/6 mice were obtained from the Korea Research Institute of Bioscience and Biotechnology (KRIBB). Mice were maintained on a 12-h light, 12-h dark cycle with constant humidity and temperature, and with free access to water and rodent feed in individual cages.
The experimental design is illustrated in Figure 1. Mice were randomly allocated to four groups (six to eight mice per group): young + bleomycin + room air (YBC), young + bleomycin + IH (YBI), old + bleomycin + room air (EBC), and old + bleomycin + IH (EBI). After 0.01 U bleomycin (Sigma, St. Louis, MO, USA) was intratracheally instilled, the mice were placed in the Oxycycler chambers for four weeks and exposed to IH (approximately 7 ± 0.5% fractional inspired O2 (FiO2), 20 episodes/h, 8 h during daytime, to coincide with mouse sleep cycles) or room air. The mice were sacrificed at four weeks. The control mice were treated with equal volumes of saline. Oxygen levels were monitored with a ProOx 110 analyzer, and the flows of air, nitrogen, and oxygen into the cage were regulated using BioSpherix computer software (BioSpherix Oxycycler, Redfield, NY, USA). Mice exposed to room air and oxygen were evaluated simultaneously, and their body weights were measured weekly. This study was approved by the Ethical Committee on Animal Experiments of the Catholic University of Korea (approval number EPSMH20203003FA) and complied with the animal welfare guidelines.

Analysis of bronchoalveolar lavage fluid (BALF) BALF samples were harvested through the trachea using a 23-gauge catheter, and the lungs were lavaged with 800 μL of Dulbecco's phosphate-buffered saline (DPBS). BALF was centrifuged at 1,500 rpm for 5 minutes at 4°C. For differential counts, cells were centrifuged onto glass slides (5 min at 750 g) and stained with Diff-Quick (Sysmax, Tokyo, Japan). Total BAL cells were counted using a hemocytometer, and the types of inflammatory cells (macrophages, neutrophils, eosinophils, and lymphocytes) were counted in at least 400 cells in randomly selected areas of the slide.

Histopathologic evaluation For histopathological examination, the left lung tissues from the mice in each group were fixed overnight in a 4% paraformaldehyde solution and processed for paraffin embedding. Sections were cut to a thickness of 3 μm using a microtome and stained with hematoxylin and eosin (H&E) and Masson's trichrome for histological examination. Images were acquired using a Panoramic MIDI slide scanner (3DHISTECH Ltd., Budapest, Hungary), and each stained slide image was randomly screened. The Szapiel score was used to evaluate the degree of parenchymal alveolitis [12]. The Ashcroft scale was used to evaluate fibrotic changes [13]. Five randomly chosen fields within each lung section were observed (×100 or ×200 magnification), and two independent blinded observers scored each specimen. The mean Szapiel or Ashcroft score for each mouse was used for the statistical analysis.

Lung wet-dry weight ratio The middle lobe of the right lung was weighed immediately after collection (wet weight), placed in a 65°C oven for 48 hours, and weighed again (dry weight). The wet-dry lung tissue ratio was calculated by dividing the wet weight by the dry weight. This ratio is a measure of edema formation and a marker of acute lung injury.

Inflammatory cytokine assay The Cytometric Bead Array (CBA; BD Bioscience, Franklin Lakes, NJ, USA) kit, a flow cytometry application that allows users to quantify multiple proteins simultaneously, was used to analyze inflammatory cytokines. The kit was designed to detect the cytokines interleukin (IL)-12p70, tumor necrosis factor (TNF), interferon (IFN)-γ, monocyte chemoattractant protein (MCP)-1, IL-10, and IL-6 in BAL fluid. The analysis was performed according to the manufacturer's instructions.
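For intuition about the exposure regimen described at the start of this section, the sketch below generates the target FiO2 profile implied by the stated parameters (20 cycles/h, nadir approximately 7% FiO2, 8 h/day). The equal split of each 3-minute cycle between hypoxia and room-air reoxygenation is our assumption for illustration; the text does not specify the within-cycle timing.

    # Minimal sketch of the intermittent-hypoxia (IH) target profile.
    CYCLES_PER_HOUR = 20
    CYCLE_MIN = 60 / CYCLES_PER_HOUR   # 3 minutes per cycle
    FIO2_NADIR = 0.07                  # ~7% FiO2 at the nadir
    FIO2_ROOM_AIR = 0.21
    HOURS_PER_DAY = 8

    def fio2_at(minute_of_exposure):
        """Target FiO2 at a given minute within the daily 8 h exposure window.
        Assumed: first half of each cycle hypoxic, second half room air."""
        phase = minute_of_exposure % CYCLE_MIN
        return FIO2_NADIR if phase < CYCLE_MIN / 2 else FIO2_ROOM_AIR

    print(CYCLES_PER_HOUR * HOURS_PER_DAY, "hypoxic episodes/day")  # 160
    print([fio2_at(m) for m in range(7)])  # two cycles' worth of target values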
Collagen concentration assay The hydroxyproline content of the lungs is considered a quantitative biochemical measure of collagen deposition. Hydroxyproline levels in the lungs were examined using a commercial kit (BioVision, Inc., Milpitas, CA, USA) according to the manufacturer's instructions. Briefly, lung homogenates were hydrolyzed with 12 N hydrochloric acid for 12 hours at 110-120°C. Chloramine T solution and DMAB reagent were then added to the acid extract and incubated for 90 minutes at 60°C. The transforming growth factor beta (TGF-β) level was determined using a commercial TGF-β enzyme-linked immunosorbent assay (ELISA) kit (Thermo Fisher Scientific, Inc., Waltham, MA, USA), according to the manufacturer's instructions. The absorbances of the samples were measured at 560 nm and 450 nm, respectively.

Oxidative stress assay Myeloperoxidase (MPO; BioVision, Inc.), an indicator of polymorphonuclear leukocyte accumulation, albumin, the lipid peroxidation product malondialdehyde (MDA; Abcam, Burlingame, CA, USA), the oxidative stress-related enzymes catalase (CAT) and superoxide dismutase (SOD), and glutathione (GSH) (Bioassay Systems, Hayward, CA, USA) were measured in BALF or serum using commercial mouse ELISA kits according to the manufacturers' instructions. Chemiluminescence was detected using an ELISA plate reader. All samples and standards were analyzed in duplicate.

Isolation of primary fibroblast cells and senescence-associated β-galactosidase (SA-β-gal) staining Lung tissues were obtained from euthanized mice and washed with DPBS to remove blood. The tissue was cut into small pieces using an autoclaved surgical blade and scissors in a cell culture hood. Cells were dissociated by adding 5 mL of medium (Dulbecco's modified Eagle's medium, high glucose, with 1% penicillin/streptomycin and 1 mg/mL collagenase A) to the minced tissue in a conical tube, placing the sample tubes horizontally on a shaker, and gently shaking them at 37°C for 4 hours. After incubation, each sample was filtered through a 70 µm cell strainer. The cells were centrifuged at 1,300 rpm for 3 minutes in a refrigerated centrifuge. The supernatant was removed, and the pellets were resuspended in 5 mL of minimal medium (without fetal bovine serum). The fibroblasts were maintained at 37°C in a humidified 5% CO2 incubator and seeded 48 hours prior to staining at 5 × 10^4 cells/well in 6-well plates. Cells were stained with SA-β-gal and analyzed using a Cellular Senescence Assay Kit (CBA-230; Cell Biolabs Inc., San Diego, CA, USA) according to the manufacturer's protocol.

Quantitative reverse-transcription polymerase chain reaction (qRT-PCR) Total RNA from the lung tissue was extracted using TRIzol (Invitrogen, Carlsbad, CA, USA), and qRT-PCR was performed using the QuantiFast SYBR Green PCR kit (Qiagen, Valencia, CA, USA) to analyze changes in the mRNA expression of the senescence markers p21 and p53. The primer sequences were designed based on the NCBI database. The threshold cycle (Ct) value was normalized to that of β-actin, and the relative expression levels were determined by the 2^(-ΔΔCt) method.
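To spell out the 2^(-ΔΔCt) calculation referenced above, here is a minimal sketch under the usual definitions (ΔCt = Ct_target − Ct_reference; ΔΔCt = ΔCt_treated − ΔCt_control). The Ct values in the example are invented for illustration and are not the study data.

    def relative_expression(ct_target_treated, ct_ref_treated,
                            ct_target_control, ct_ref_control):
        """Fold change of a target gene by the 2^(-delta delta Ct) method,
        normalizing to a reference gene (here, beta-actin)."""
        d_ct_treated = ct_target_treated - ct_ref_treated
        d_ct_control = ct_target_control - ct_ref_control
        dd_ct = d_ct_treated - d_ct_control
        return 2 ** (-dd_ct)

    # Invented Ct values: p21 vs beta-actin, IH-exposed vs room-air sample.
    fold = relative_expression(ct_target_treated=24.0, ct_ref_treated=17.0,
                               ct_target_control=26.5, ct_ref_control=17.5)
    print("p21 fold change: %.1fx" % fold)   # ddCt = 7 - 9 = -2, so 2^2 = 4.0x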
Western blot analysis Total protein was obtained from the homogenized lung tissue, and equal amounts of protein were separated by 10-15% sodium dodecyl sulfate polyacrylamide gel electrophoresis and transferred to polyvinylidene fluoride membranes. The membranes were blocked and incubated at 4°C overnight with primary antibodies against p53, with β-actin as the loading control (Cell Signaling Technology, Danvers, MA, USA). Target proteins were detected using an ImageQuant LAS 500 (GE Healthcare Bio-Sciences AB, Uppsala, Sweden).

Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay Apoptotic cells in lung tissue sections were detected using an in situ TUNEL apoptosis detection kit (Promega, Madison, WI, USA), according to the manufacturer's protocol. TUNEL-positive cells (dark brown-stained nuclei) were semi-quantitatively assessed by light microscopy in ten random high-power fields of the lung sections (400× magnification).

Immunohistochemistry (IHC) For IHC staining, antigen retrieval was performed by incubation in citrate buffer (pH 6.0) and boiling in a microwave oven for 15 minutes. After blocking of endogenous peroxidases, slides were incubated with anti-p21 antibodies at 4°C overnight and subsequently with biotin-labeled secondary antibodies. The immunoreactive signal was visualized with a 3,3′-diaminobenzidine substrate kit (Vector Laboratories, Burlingame, CA, USA), and counterstaining was performed with hematoxylin.

Statistical analysis Statistical analyses were performed using GraphPad Prism software, version 7.00 (GraphPad Software Inc., San Diego, CA, USA). The data were analyzed using one-way analysis of variance (ANOVA), followed by Tukey's multiple comparison test, or two-way ANOVA (for BAL cell differentiation). Data are expressed as the mean ± standard deviation, and p < 0.05 was considered statistically significant.

RESULTS Effects of bleomycin and IH on lung injury, inflammation and oxidative stress Histopathological analysis showed more severe lung injury, with inflammatory cell infiltration and thickening of the alveolar septa, in the older mice groups, with the EBI group having more severe lung injury and inflammation than the other groups (Fig. 2A). Total cell and lymphocyte counts in the BALF were significantly higher in the EBI group than in the other groups (all p < 0.001) (Fig. 2D). The lung wet/dry ratio and albumin levels, which are markers of lung injury, increased significantly in the older mice groups, with the EBI group showing the largest increase (Fig. 2B, C). The Szapiel score was significantly increased in the EBI group compared to the other groups (Fig. 2E). MPO activity in lung homogenates and MCP-1 in BALF increased significantly in the EBI group compared to the other groups (Fig. 3A, E). The TNF, IFN-γ, IL-10, and IL-6 levels in BALF showed increasing trends in the EBI group compared to the other groups (Fig. 3C, D, F, G). The levels of MDA, CAT, SOD, and GSH were measured as indicators of oxidative stress in the total lung homogenate. The MDA levels in the lungs were significantly higher in the EBI group than in the other groups (Fig. 3H). The CAT levels were significantly lower in the EBI group than in the other groups (Fig. 3I). SOD activity and GSH levels did not differ significantly among the four groups (data not shown).
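As a sketch of the group comparisons reported above (one-way ANOVA with Tukey's multiple comparison test, as described in the statistics paragraph), the following Python example uses scipy and statsmodels rather than GraphPad Prism; the per-group measurements are invented placeholders, not the study data.

    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Invented placeholder data, e.g. lung MDA levels for the four groups.
    ybc = [1.1, 1.3, 1.2, 1.0, 1.2]   # young + bleomycin + room air
    ybi = [1.6, 1.8, 1.7, 1.9, 1.5]   # young + bleomycin + IH
    ebc = [2.0, 2.2, 1.9, 2.1, 2.3]   # old + bleomycin + room air
    ebi = [3.1, 3.4, 3.0, 3.3, 3.2]   # old + bleomycin + IH

    f_stat, p_val = f_oneway(ybc, ybi, ebc, ebi)
    print("one-way ANOVA: F = %.1f, p = %.2g" % (f_stat, p_val))

    values = np.concatenate([ybc, ybi, ebc, ebi])
    labels = ["YBC"] * 5 + ["YBI"] * 5 + ["EBC"] * 5 + ["EBI"] * 5
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))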
Effects of bleomycin and IH on lung fibrosis Masson's trichrome staining of the lung tissues demonstrated more severe lung injury, including inflammatory cell infiltration and thickening of the alveolar septa, in the EBI group than in the other groups (Fig. 4A). The hydroxyproline content in the lung tissue and the TGF-β level in BALF increased significantly in the older groups, especially the EBI group, compared to the other groups (p < 0.001) (Fig. 4B, C).

Effects of bleomycin and IH on senescence After isolation of primary lung fibroblasts, SA-β-gal staining increased significantly in the elderly groups, especially the EBI group (Fig. 5A). While the younger groups had no immunoreactive cells, tissue sections from the elderly groups, especially the EBI group, were immunoreactive for p21 (Fig. 5B). The elderly groups, especially the EBI group, had significantly higher levels of p21 and p53 mRNA than the younger groups (Fig. 5C). Immunoblotting for p53 showed higher levels of this protein in the lung tissues from older mice than in those from the younger groups (Fig. 5D).

Effects of bleomycin and IH on apoptosis TUNEL-positive cells (apoptotic nuclei, Fig. 6A) were more evident in the elderly mice groups than in the younger mice groups. The combination of bleomycin and IH significantly increased the number of TUNEL-positive cells (Fig. 6B).

DISCUSSION Our study shows that IH augmented bleomycin-induced lung injury, with the exposed groups exhibiting severe inflammation, fibrosis, and oxidative stress. The elderly IH bleomycin group showed more severe lung inflammation, fibrosis, and oxidative stress than the young IH bleomycin group, suggesting that age is an important factor with regard to the effects of IH on IPF. A higher prevalence of OSA in patients with IPF than in the general population has not yet been verified. Aging appears to be a common risk factor for both diseases. IPF is considered an age-related disease, and OSA is frequently found in older adults, with its prevalence increasing two to three times in older adults compared to middle-aged adults [14]. Aging-related mechanisms, such as genomic instability, telomere attrition, epigenetic alterations, loss of proteostasis, deregulated nutrient sensing, cellular senescence, altered intercellular communication, and stem cell exhaustion, have been suggested as potential hallmarks of lung fibrosis [15] and OSA [16]. Aging is associated with pharyngeal fat deposition, soft palate lengthening, impaired muscle tone, and impaired pharyngeal sensory discrimination. As a result, older people can have more severe upper airway obstruction [16]. Another factor linking IPF and OSA is reduced lung volume due to IPF. Reduced functional residual capacity increases respiratory effort owing to negative extrathoracic pressure and can lead to upper airway collapse [17]. Therefore, common risk factors for OSA and IPF, such as aging and reduced lung volume, need to be further investigated. OSA should be routinely screened for in patients with IPF because it commonly coexists with IPF.
Recent results have suggested that OSA aggravates IPF and is closely associated with higher mortality and poor clinical outcomes [18]. However, the main aggravating factor could be hypoxia. In patients with IPF, chronic hypoxia is frequently observed as the disease progresses and can induce oxidative stress and systemic inflammation [19]. Patients with OSA often show repetitive episodes of apnea and hypopnea during the sleep cycle, which induces IH. Therefore, the co-occurrence of OSA and IPF can lead to IH and an increased duration of hypoxia, resulting in excessive systemic inflammation and overproduction of reactive oxygen species (ROS) [20]. Studies have shown that chronic IH aggravates bleomycin-induced lung injury in a mouse model by increasing neutrophilic inflammation, lung cell apoptosis, and collagen accumulation [21,22]. Our results are consistent with those of previous studies. Additionally, we report the novel finding that IH increased the severity of lung inflammation, oxidative stress, and fibrosis in the elderly mouse group compared to the younger group.

While this greater inflammatory response and severity have been recognized with age, the mechanism underlying the enhanced inflammation is unclear. Potential contributors to enhanced inflammation in the bleomycin-induced elderly mice could be changes in pulmonary function and physiology [15], underlying chronic systemic inflammation [23], or changes in the immune response [24]. In our study, we observed age-dependent increases in the levels of inflammatory cells and cytokines, lung fibrosis, and oxidative stress markers. It appears that IH enhanced the pro-inflammatory response, leading to lung fibrosis and oxidative stress, with a worse prognosis in the older mice group.

ROS have a very short lifespan, and ROS-related tissue destruction can be detected indirectly through the final products of lipid peroxidation, such as MDA [25]. The increased MDA levels in the elderly groups, especially the EBI group, demonstrate that their levels of oxidative stress were higher than those of the younger groups. Common antioxidants include CAT, SOD, and GSH-associated enzymes [26]. CAT mitigates oxidative stress by destroying cellular hydrogen peroxide to produce water and oxygen. CAT deficiency is associated with the pathogenesis of many age-related degenerative diseases; consistent with this, CAT levels in our study were lowest in the EBI group. The SOD and GSH levels did not differ among the groups in our study. We postulate that SOD activity is often reduced early but can increase at later stages to counteract elevated ROS levels; this variation may explain the non-significant differences among the groups. Moreover, GSH levels can decrease in lung fibrosis and increase in normal lung tissue. These factors may have led to the non-significant changes in the outcomes [26].
Furthermore, the characterization of immunosenescence markers, such as SA-β-gal, p21, and p53, during exposure to IH in both young and old mice showed that senescence markers were more evident in older mice with lung fibrosis exposed to IH. Of the two senescence pathways, our results relate to the p53-p21-pRb pathway [27]. Positive results related to the p16-pRb pathway were not observed, which requires further study. Our results show that IPF is related to aging and cellular senescence, and that IH is associated with the augmentation of this process. Further studies are required to better understand the differential effects of aging.

Our study has several limitations. First, the IH mouse model, a widely used animal model of OSA, was not standardized, and the bleomycin dose and IH settings may have affected the outcomes. In addition, this model does not reflect other aspects of OSA such as sleep fragmentation, intrathoracic pressure swings, and intermittent hypercapnia. Second, further research on the mechanisms underlying the effects of aging on IH and IPF is required. Moreover, our results should be verified in human studies of older patients with IPF and OSA.

In conclusion, the bleomycin-induced IH-exposed older mice showed significant increases in inflammation, fibrosis, and oxidative stress compared to the younger mice groups with the same treatment. This suggests a potentially enhanced harmful impact of IH on elderly patients with IPF. Elderly patients with IPF and OSA may be more of a concern than younger patients with IPF. Further research on the subject is required.

KEY MESSAGES 1. IH, a key mechanism of OSA, augmented bleomycin-induced pulmonary fibrosis in a mouse model, producing severe inflammation, fibrosis, and oxidative stress. 2. The bleomycin-induced IH-exposed older mice group showed greater lung inflammation, fibrosis and oxidative stress than the other groups. 3. Chronological age may be an important factor to consider when studying the effects of OSA in patients with IPF.

Figure 1. Scheme of the animal experimental schedule. Young and elderly mice were given intratracheal instillations of bleomycin or saline. After 1 week, mice were exposed to room air (RA) or chronic intermittent hypoxia (IH) for 4 weeks and sacrificed on day 28.
Intuition-Driven Navigation of the Hard Problem of Consciousness

The discussion of the nature of consciousness seems to have stalled, with the "hard problem of consciousness" at its center, well-defined camps of realists and eliminativists at two opposing poles, and little to no room for agreement between them. Recent attempts to move this debate forward by shifting it to a meta-level have relied heavily on the notion of "intuition", understood in a rather liberal way. Against this backdrop, the goal of this paper is twofold. First, we want to highlight how the ontological and epistemological status of intuitions restricts the arguments in the debate on consciousness that rely on them. Second, we want to demonstrate how the deadlock in those debates could be resolved through a study of a particular, "positive" kind of intuitions. We call this approach "The Canberrish Plan for Consciousness", as it adopts elements of the methodological "Canberra Plan".

Introduction While the "hard problem of consciousness" (Chalmers 1996) - the question of why and how the physical processes going on in the brain become conscious - is central to the current study of consciousness, the discussion surrounding it has become somewhat stale. There are two well-defined, opposed camps of realists and eliminativists of all sorts, and almost no room for agreement between them. Recently, as a way out of this clinch, David Chalmers has proposed the "meta-problem of consciousness", the problem of why we think that there is a hard problem at all (Chalmers 2018: 6). What the meta-problem does is introduce a new level into the debate - one on which the two camps can agree on the formulation of the issues in question and work together to develop a viable research strategy. This, in turn, could provide support to the different standpoints at the "ground floor" - the standard level - of the debate.

Chalmers' formulation of the meta-problem makes extensive (and liberal) use of the notion of "intuition", underscoring the importance of intuitive reasoning and arguments in the discussion of the hard problem. However, his reliance on the notion of intuition is extremely problematic, as we will argue in what follows. "Intuition" is not a transparent notion, and contemporary debates on the question of its ontological and epistemological status are far from settled. In fact, as we will show in this paper, a particular stance with regard to the status of intuition fixes one's views on consciousness. As a result, such a formulation of the meta-problem begs the question.

There is, however, a different way of tackling this issue. By focusing on positive intuitions, that is, by investigating particular (philosophical, as well as everyday) views on conscious experience, we may implement a sort of "Canberrish Plan for Consciousness" (for the Canberra Plan see e.g. Braddon-Mitchell and Nola 2009 and section 5 below). In this way, the problem of the ontological and epistemological status of intuitions can be avoided. In what follows, we will summarize the discussions of the hard problem (section 2) and Chalmers' proposal of the meta-problem programme (section 3). Next, we will argue that Chalmers' programme fails to navigate the dispute between illusionists and realists. In particular, we will show that the nature of intuitions determines the possible solutions to both the meta-problem and the hard problem (section 4).
Finally, we will argue that there is hope that comes from studying positive intuitions about phenomenal experience according to the proposed "Canberrish Plan for Consciousness", introduced in section 5.

Realism vs Illusionism Philosophy of consciousness - and consciousness studies in general - has been largely structured around the famous "hard problem" of consciousness, formulated by David Chalmers (1996). The hard problem can be stated as follows: "Why is all the information processing in the brain accompanied by an experienced inner life?" Why do some physical states become conscious and appear in a direct way to the subject, with an associated "something it is like" for the subject to be in that state? Chalmers distinguished this from the "easy" problems, which are susceptible to a reductionist, functional explanation and as such can be directly studied using empirical methods. The hard problem, on the other hand, asks about the phenomenal, first-person, subjective qualities of the experience of the world, which are left out by functionalist accounts. This incredibly influential proposal has shaped most of the current debate on consciousness, even though some prominent scholars, such as Daniel Dennett, believe that the "so-called problem is a chimera" and a distraction from the real hard question: "and then what happens?": "once some item or content 'enters consciousness', what does this cause or enable or modify?" (Dennett 2018: 1).

The major division in contemporary consciousness studies lies in researchers' attitudes towards the hard problem (see e.g. Frankish 2016). Realists take the explanatory gap highlighted by the hard problem to be impassable via empirical research and attempt to account for phenomenal experience in other ways. They typically claim that phenomenal consciousness is a real "something" going on in the brain (or rather, in the mind) beyond what our best functional theory can explain. Realism can take many forms, from classical, Cartesian-inspired substance dualism, via more sophisticated forms of dualistic views (advanced e.g. by Chalmers 1996), to less popular views such as Russellian neutral monism and various panpsychist approaches, which view mental states as a basic "ingredient" of reality (see e.g. Goff 2017).

Realism has long been opposed by eliminativism, the view that the idea of phenomenal states, like many other concepts from folk psychology, is simply wrong and that the hard problem is misconstrued altogether: there is nothing in the mind above and beyond what we can explain with reference to functional categories. The explanatory gap simply does not exist. In recent years, however, a more nuanced approach has been slowly gaining popularity. Illusionism has been on the table since at least the 1980s (Dennett 1991; the name was proposed much later, by Frankish 2016), though for a long time it remained a minority view. It differs from earlier eliminativist approaches in that it takes the (illusion of) consciousness seriously. On this view, the "special", "ineffable", "subjective" character of phenomenal experience is real inasmuch as the Necker cube "really" changes its position as we are looking at it. To stay with this simile: the instability of the cube is a valid explanatory target; however, once we explain that there are no intrinsic depth cues in the picture and that the brain switches between two compatible interpretations of the shape, nothing "above and beyond" remains to be accounted for. The change is explained away.
Similarly, illusionists argue that even though the (illusion of) phenomenal experience is real, once we explain all the "easy" problems of consciousness, there will be nothing else to add to the explanation of this illusion. In fact, on this view, the explanatory gap itself is an illusion. This debate, while organized around the hard problem, has for a large part been focused on the question of whether the hard problem is in fact meaningful. As a result, there is little room for possible agreement between the proponents of realism and illusionism, and the discussion of phenomenal consciousness slowly becomes circular. Without assigning any blame, it seems uncontroversial to state that there is currently an impasse between the two views.

Meta-Problem as a Way Forward A recent paper by Chalmers (2018) offers a new way of addressing the question posed by the hard problem. Chalmers aims to move the discussion of the nature of consciousness to a meta-level. He argues that "The Meta-Problem of Consciousness" may help bridge the gap between the two camps and provide a novel research programme addressing the issues raised by the hard problem of consciousness. Chalmers formulates the meta-problem as follows: "The meta-problem is (...) the problem of explaining why we think consciousness is hard to explain" (Chalmers 2018: 6). In short, "the meta-problem of consciousness is (...) the problem of explaining (...) problem reports" (Chalmers 2018: 7). He points out that at least some people (including himself) have dispositions to make certain judgments that are in fact descriptions of the different kinds of difficulties one encounters whenever one tries to grasp the nature of consciousness, similar to the claim "consciousness is hard to explain". The dispositions to make this kind of judgment are dubbed by Chalmers "problem intuitions", and the verbalizations of these intuitions are called "problem reports". Chalmers offers several examples: "There is a hard problem of consciousness", "It is hard to see how consciousness could be physical", "Explaining behavior does not explain consciousness" (Chalmers 2018: 7), which are obviously spelled out in the language of a professional philosopher. However, we can imagine - as Chalmers suggests by pointing to the body of work on "intuitive dualism" - nonphilosophers formulating sentences like "There is something mysterious about me having a conscious experience of colour", "It is impossible to scientifically study consciousness" or "Okay, so this is what the brain does, but how do I experience X?".

Problem intuitions are interesting because they belong to functional consciousness: they are a part of behavior and as such are "easy" to explain. At the same time, they seem to be telling us something about phenomenal consciousness and how it is experienced. Chalmers argues that they constitute a special subclass of the easy problems, as explaining where the problem reports come from may throw some light on the hard problem - independently of whether one is a reductionist or not. Hence, explaining how the problem intuitions arise could serve as a proxy for an explanation of the hard problem. In short, there are two reasons why the meta-problem could provide a new research paradigm, convincing to both realists and illusionists. First, the meta-problem is formulated in a language that can be accepted by both realists and illusionists. Problem intuitions and reports replace the controversial "qualia" or "phenomenal experience" as explanatory targets.
This strategy is compatible both with realists' reliance on and with illusionists' reluctance towards qualia and phenomenal properties. Second, the meta-problem provides novel constraints on possible solutions to the hard problem, allowing us to arbitrate among the available options with respect to how well they agree with registered problem intuitions. However, in the next section we will show that although the idea of moving the dispute on the nature of consciousness to a meta-level is promising, the programme proposed by Chalmers ultimately fails. The critique boils down to the fact that the ontological and epistemological status of "problem intuitions" depends on the status of intuitions in general, in a way that ties respective answers to the question about intuitions to respective solutions to the hard (and meta) problem of consciousness.

Problems with (Problem) Intuitions The aim of this section is to show that the meta-problem implicitly begs the question. First, we will show how the relation between problem intuitions and phenomenal consciousness could be understood. Then, in subsections 4.1-4.3, we will argue that solving the hard problem of consciousness by explaining problem intuitions is already begging the question, since the way we understand the notion of intuition leads directly to either some kind of realist or illusionist account. We will introduce three ways of understanding the nature of intuitions that lead to particular positions regarding the nature of consciousness. Two of them lead to some kind of illusionism (sections 4.1 and 4.2), and one to a realist account (4.3).

Chalmers does not reflect on the status of intuitions, stating only very generally that problem intuitions are dispositions to make problem judgments and problem reports (while being distinct from e.g. phenomenal beliefs - Chalmers 2018: 46). However, a good understanding of what intuitions are in general, and problem intuitions specifically, turns out to be crucial for the purpose of the meta-problem approach. The meta-problem programme relies on the claim that intuitions are somehow related to phenomenal consciousness. However, justifying that claim proves quite complicated. In fact, there are three possible relations between problem intuitions and phenomenal consciousness (Chalmers 2018: 48): meta-problem nihilism or correlationism, which posits no causal relationship between intuitions and phenomenal states; meta-problem realizationism, which claims that phenomenal consciousness is the primary cause of problem intuitions, and hence depends on a realist view of consciousness; and weak and strong illusionism, according to which the problem intuitions are either illusory (strong illusionism) or are aimed at certain other, lower- or higher-level mental processes (weak illusionism). In both strong and weak illusionism, the primary cause of problem intuitions is shared with the processes that give rise to the illusion of phenomenal consciousness.

Note that, at the beginning of investigating the problem intuitions, we have to bear in mind, at least roughly, their ontological and epistemological/methodological status. This means that we start with some assumptions about what intuitions are and what their role is in cognition and in philosophical reflection. These issues are connected with each other in such a way that some ontological stances entail epistemological ones and vice versa.
As a result, once we accept a particular attitude towards intuitions, the solution to the meta-problem, as well as to the hard problem, follows directly from this stance, rather than from any other philosophically interesting considerations about consciousness. In our view, solutions to the meta-problem of consciousness are in fact views about the content and reliability of problem intuitions. In the remainder of this section we will substantiate these claims, showing more directly the logical interconnectedness between intuitions and consciousness.

Strong Illusionism Some philosophers argue that intuitions are simply beliefs or dispositions to believe. It seems that this indeed is Chalmers' standpoint, when he states that an intuition is a "disposition to judge and report" (Chalmers 2018: 12; this view is also shared by other philosophers engaging in the meta-problem discussion, e.g. Clarke-Doane 2019; Schwarz 2019). We can call proponents of this view "reductionists" or "eliminativists" about intuitions. This nomenclature calls attention to the fact that these philosophers usually deny that intuitions have any epistemically privileged role, especially in philosophical methodology. They do not set any specific conditions that dispositions to believe, or mere beliefs, must meet in order to count as intuitions. This kind of approach is represented by e.g. Timothy Williamson (2007), Peter van Inwagen (1997) and David Lewis (1983). Its deflationist character is best illustrated by what Lewis writes about intuition: "Our 'intuitions' are simply opinions; our philosophical theories are the same. Some are commonsensical, some are sophisticated; some are particular, some general; some are more firmly held, some less. But they are all opinions…" (Lewis 1983: X)

To determine whether this stance about the nature of intuitions implicitly begs the question on the meta-problem, we should consider the possible epistemological stances coherent with the scrutinized ontological account. This view is often accompanied by the methodological claim that intuitions are in fact totally redundant and that we should avoid them in philosophical practice. But this methodological claim can be weakened while remaining coherent with the core "reductionist" view. If intuitions are mere beliefs - as reliable and as questionable as any other beliefs - we can still find some space for them in our philosophical inquiry. The term "intuition" may be redundant, but we cannot deny that ordinary beliefs do play some role in philosophical methodology. In this way, beliefs could be treated as a common ground or a starting point in theorizing (see Williamson 2007: 242). Therefore, by studying intuitions about consciousness, and problem intuitions in particular, we could find a common ground for investigating the nature of the folk concept of consciousness.

There are, then, two epistemological views coherent with the stance that intuitions are mere beliefs or mere dispositions to believe. The first is that we should abandon intuitions in the philosophical enterprise altogether. If we do so, the meta-problem approach cannot be adopted, since it rests on studying a particular kind of intuition - problem intuitions. Such a claim is held e.g. by Rosenthal (2019). However, as mentioned, according to the second available epistemological view, we could treat intuitions as a common ground in philosophical disputes. In the case of the meta-problem, we could treat problem intuitions as a common ground for discussing the hard problem of consciousness.
In that case, however, we should ask how well justified problem intuitions are, given that they are mere beliefs or mere dispositions to believe. Note that problem intuitions are in such a case as reliable as any other beliefs, e.g. as reliable as the belief that the Earth is flat (which is false), that Pluto is a dog (which is true), or that all combustible bodies contain an element called "phlogiston", which has negative mass and is released during combustion (which is false). This view is in fact supported by an increasing amount of research indicating that beliefs about consciousness are culture-dependent (Irvine 2019; Sytsma and Ozdemir 2019; Yetter-Chappell 2019) and, in the case of scientists working on phenomenal experience, theory-laden (Lau and Michel 2019). In this case, it is hard to see how we can assign any special role to intuitions about consciousness, specifically to the problem intuitions on which Chalmers' meta-problem rests. One could argue that intuitions about phenomenal states differ somewhat from other kinds of intuitions in that they report our beliefs about our own mental states. Introspection seems to be the most plausible mechanism of their origin. According to traditional accounts of introspection, we have privileged access to our own mental states (Schwitzgebel 2010; Schwitzgebel and Cushman 2012). This privilege is understood in various ways. Some philosophers (Burge 1988; Papineau 2002; Chalmers 2003) maintain that self-ascriptions are always true in a self-fulfilling way. Others (Brentano 2015 [1874]; Chisholm 1969) argue that the properties of our own mental states are self-presenting, so that knowledge of them is non-inferential. However, it is disputed whether self-reports are indeed more reliable than other typical beliefs. Arguments against the claim that we have privileged access to our own mental states are mostly based on empirical evidence. We can point to the whole range of research accumulated since the 1970s showing that people have poor knowledge of their own mental states and of the processes that underlie behaviour (Nisbett and Wilson 1977; Nisbett and Bellows 1977; Cherniak et al. 1983; Wegner and Wheatley 1999; Mele 2001; Wilson 2002; Johansson et al. 2005). This is not the place to resolve whether the proponents or the opponents of privileged access are right (and to what extent). Nevertheless, the following holds: if we accept that intuitions are ordinary beliefs, then we have to reject privileged access. Otherwise, beliefs about our own mental states are special because of privileged access and no longer qualify as mere beliefs. Rather, they would constitute a special kind of beliefs (we discuss what would happen with this kind of special beliefs in section 4.3). Therefore, we cannot claim both that intuitions are mere beliefs and that problem intuitions are better justified than any other beliefs. What does this mean for the meta-problem programme? There are two possible answers. The first is that problem intuitions could be true, but there is nothing special about their epistemic status. The second option is that they are just misguided beliefs. In the first case, there is no reason to adopt the meta-problem programme in order to explain or solve the hard problem of consciousness. This is because, in Chalmers' enterprise, problem intuitions serve as a proxy for the explanation of the hard problem of consciousness precisely because of their special status.
However, if they are just as reliable as any other beliefs, which may equally well be true or false, then there is no gain in grounding a possible explanation of the hard problem of consciousness on the explanation of the problem intuitions. It should be obvious what happens if we accept the second view, according to which intuitions are just misguided beliefs. It would mean that there is nothing special or surprising about some people being convinced that their conscious experience is non-physical or hard to explain (just as there is nothing "special" or "uncanny" about some people being convinced that the Earth is flat). This is precisely what the illusionists claim regarding the hard problem of consciousness. There is nothing special about what we call problem reports, just as there is nothing special about our belief that there is something called "phenomenal consciousness." The question of how we arrive at this particular belief may be of some interest. However, in line with the claims of strong illusionists, the view that intuitions are ordinary beliefs and that there is no privileged access to our mental states dissolves the meta-problem (and, in turn, the hard problem as well). There is simply nothing left to add to the explanation of why problem intuitions arise in some people in the first place. In sum, once we agree that intuitions are mere beliefs, we need either to abandon the whole meta-problem enterprise altogether or to turn out to be illusionists. There does not seem to be a third way out of this dilemma.

Weak Illusionism

Another ontological view on intuitions is to hold that intuitions are special beliefs or special dispositions to believe. For some proponents of this account, the special aspect of the relevant beliefs or dispositions to believe can be described as follows: someone has an intuition that p solely on the basis of competence with the concepts involved in p (Ludwig 2007: 135), or someone has an intuition that p merely on the basis of understanding p (Sosa 1998). This perspective can be accompanied by the methodological claim that intuitions serve either as a starting point or as a touchstone for philosophical theories. Some philosophers who agree with this view regard conceptual analysis grounded in intuitions as the basic philosophical method. This is partly because they expect intuitions to reveal some necessary truths a priori (e.g., BonJour 1998; Ludwig 2007). These could include some necessary truths about consciousness, and as a result such special beliefs could form a much stronger foundation for the investigation of phenomenal experience. Their special status makes them less defective and less dependent on philosophically irrelevant "background noise" than beliefs formed through experience. However, this view limits intuitions to conceptual intuitions. They make use of a specific concept of consciousness. First of all, it could be that the concept is a widespread, "general public" one. This is the option that seems preferable for any investigation of consciousness on the basis of intuitions. This view is also entertained by Chalmers in his meta-problem argument, as he rejects the idea that the "problem intuitions" arise solely among philosophers and claims that they are in fact widespread among lay people (Chalmers 2018: 15).
However, if it is such a folk concept, then historical, cultural and socioeconomic factors come into the picture (as highlighted by Irvine 2019; Sytsma and Ozdemir 2019; and Yetter-Chappell 2019), and the possibility of making strong conceptual arguments of the kind some philosophers envisage vanishes, as the intuitions turn out to be fallible, in the manner already discussed in the previous section. On the other hand, we could consider some kind of improved, philosophical concept of consciousness, one which relies on the folk concept but slightly changes its meaning in the course of conceptual analysis. In such a case, however, it is highly unlikely that everyone has the required kind of conceptual competence, because otherwise this philosophical concept would not differ from the folk one. Furthermore, in this case the special status cannot be regarded as arising from privileged access to our mental states. And if not everyone has the kind of conceptual competence required, then how can we verify who does? Philosophers? Consciousness researchers? While some philosophers are likely to endorse this answer (e.g., Ludwig 2007: 149), the prospect of understanding the nature of consciousness on the basis of intuitions alone loses its appeal in this context (see e.g. Lau and Michel 2019). What's more, accepting that intuitions are a special subclass of beliefs or dispositions to believe entails a form of illusionism: higher-order illusionism. Before we move to the argument, note that on this approach beliefs are obviously different from conscious experiences; for example, we may hold beliefs that we are unaware of or are not actively attending to, and in general beliefs do not require any phenomenal component. The argument explaining why accepting the stance that intuitions are special (conceptual) beliefs would entail weak illusionism, or at least exclude realism, can be presented as follows. Suppose that it is possible to be a realizationist (and thereby a realist) and hold the view that intuitions are special beliefs justified by competence with, or understanding of, the subject matter. Now, consider the meta-problem realizationism that Chalmers presents as the main way for a realist to tackle the meta-problem (Chalmers 2018: 42). Realizationism claims that consciousness directly gives rise to (or is the primary cause of) intuitions about phenomenal experience. If so, and if intuitions are special beliefs resulting from conceptual competence, then consciousness would have to play a direct causal role in the conceptual beliefs about consciousness itself. In other words, phenomenal experience should play a direct causal role in forming conceptual beliefs about consciousness. But then it is the case either that:

- there is no additional process required, and we have competence with regard to consciousness simply in virtue of having conscious experience; or
- there is an additional process which makes the difference between "normal" beliefs and intuitions about consciousness (this could be the process of introspection, if we were to grant privileged access; but then see section 4.3).

As for the first option, it is highly implausible to assume that it is true. Note that it would entail that our conceptual beliefs about consciousness follow directly, that is, without any intermediate process, from our phenomenal experience. But several processes take part in acquiring conceptual beliefs in linguistic form about any given subject; consider at least the processes responsible for language processing.
The view that conceptual beliefs about consciousness follow directly from our phenomenal experience could stem from an undetected difference between phenomenal character and phenomenal concepts (see Tye 2003: 91). Phenomenal character is a quality of our phenomenal experience. According to some philosophers, especially the proponents of qualia, we could have direct access to phenomenal character. However, phenomenal concepts are the result of processing the phenomenal character of our experience. Therefore, phenomenal concepts derive only indirectly from phenomenal experience, because several processes are needed to establish these concepts. The idea expressed above is in fact well grounded. In Kantian terms, we can point out that our sensations alone are insufficient to form concepts. We have to use both the faculty of understanding and receptivity to formulate any judgment of perception. This idea is expressed in Kant's famous dictum, "Thoughts without content are empty, intuitions without concepts are blind" (Kant 1787/1998). Therefore, accepting the stance that no additional process is required for the appearance of problem intuitions, and that we have competence with regard to consciousness simply in virtue of having conscious experience, cannot be reconciled with the view according to which intuitions are conceptual beliefs. Now, let us turn to the option that there is an additional process which makes the difference between "normal" beliefs and intuitions about consciousness. If this is the case, then the primary causal role is played by this additional process. But this is contrary to the assumption of realizationism. In fact, this is in line with the claims of higher-order illusionism, since the additional process, and not phenomenal experiences themselves, is responsible for our problem intuitions. Hence, if we try to build a theory of consciousness based on intuitions understood as special conceptual beliefs, only the claims of higher-order illusionism remain viable. "Only creatures with certain introspective models will be phenomenally conscious" (Chalmers 2018: 43), and therefore only those creatures will have intuitions about consciousness.

Realism

The final ontological possibility is to hold that intuitions are sui generis mental states that cannot be reduced to other mental states such as beliefs or dispositions to believe. This sui generis state can be approximately characterized as a state in which some proposition seems true (Bealer 1998; Pust 2000), or, less precisely, as a state that comes with a peculiar phenomenology attending the experience of seeing that some proposition is true (Plantinga 1993: 105-6). Under this view, intuitions and beliefs are independent. One can have an intuition that p without believing that p, as in the Müller-Lyer illusion, in which we can have an intuitive sense that one of the arrows is longer than the other without believing this to be the case. Intuitions thus understood are methodologically often taken as a starting point of philosophical reflection which reveals necessary truths, providing a priori justification (Bealer 1998; Pust 2000: 39). Such a perspective offers the strongest conceptual foundations for work on consciousness that proceeds from intuitions, including the meta-problem. Unfortunately, accepting that intuitions are sui generis mental states and appealing to intuitions about consciousness in an attempt to understand the nature of phenomenal experience also begs the question.
If we regard intuitions as states in which some proposition seems true, or as mental states associated with some peculiar phenomenology, we assume by definition that there exists a phenomenal property that differentiates intuitions from beliefs. Thereby we deny the possibility of reducing this sui generis phenomenal state to any other mental state or process. In sum, the acceptance of an ontological view according to which intuitions are sui generis states differing from other mental states by their phenomenology entails that there exists an irreducible phenomenology within the mind. Hence, employing intuitions to explain the hard problem of consciousness can lead only to the view according to which phenomenal experiences are irreducible. Alternatively, a similar problem arises if we accept the claims discussed previously, namely that intuitions are mere or special beliefs, and at the same time accept privileged access to one's own mental states. Under this view, we can get to know something about mental states directly by observing our "mental life", without the need to infer mental properties from behaviour. However, if this direct observation is to constitute knowledge, and hence if it can be reliably referred to in the discussion of the meta-problem, the justification associated with these beliefs has to be accounted for. If the knowledge in the case of privileged access is direct, it would seem that this justification can only come from an accompanying phenomenal state of seeing that some proposition is true, similar to the one that makes intuitions sui generis mental states. In a nutshell, the uniqueness of intuitions as a distinct class of mental events hinges on the phenomenology they come with: that of directly seeing the truth of some proposition. Once we accept this view, there is no room for eliminativism or illusionism, as we would have to point to the specific mental processes to which this phenomenology reduces, dismantling the metaphysical uniqueness of intuitions in the process. It seems that the approach to intuitions which is the most promising for a consciousness researcher does not in fact allow for a meaningful study of the problem, as the answer is assumed at the very beginning. Furthermore, specifically in relation to the meta-problem introduced above, this view of intuitions leads to a rejection of Chalmers' claim that the meta-problem is a special, "gateway" easy problem, as it turns out to be only a different face of the hard problem.

Meta-problem's Failure

As we have previously mentioned, the introduction of the meta-problem of consciousness was supposed to open a new field of discussion between the proponents of opposing views of consciousness, on which they could at least agree about how the problem is defined. However, through the extensive use of the notion of intuition, and specifically the central concept of "problem intuitions", the meta-problem shifts the discussion to a dispute over intuitions themselves. This does not seem to be much of an improvement for the ongoing debates between illusionists and realists, as one's views on the nature of intuition fix the response to the questions about the nature of phenomenal experience, as we have argued above.
Studying Positive Intuitions and the Canberra Plan

Despite the issues with the meta-problem programme discussed in the previous section, Chalmers' main idea of moving the debates on consciousness to the meta-level, and placing the study of intuitions at the center of such a meta research programme, is still promising. In this section we will show how we believe it is possible to engage with Chalmers' ideas in a way that is not subsumed under the critique we laid out in the previous section. We will call this proposed approach "The Canberrish Plan for Consciousness". The main idea is that the meta-research should focus on the very concept of "consciousness" rather than specifically on the meta-problem of consciousness. This means that the researcher should grapple with all kinds of intuitions about consciousness, instead of studying only what Chalmers dubbed "problem" intuitions. In other words, the focus should be on the study of positive intuitions (i.e. intuitions about what consciousness is) instead of negative ones (i.e. intuitions about what consciousness is not, or why consciousness cannot be explained by physicalist science). The methodology of this kind of approach can be transplanted from another intuition-centered methodological project: the Canberra Plan (see e.g. Braddon-Mitchell and Nola 2009). Such an approach would be able not only to clarify the already existing debates on consciousness, but also to advance them beyond the entrenched opposition of realists and illusionists. The Canberra Plan is an influential methodological approach proposed and developed mainly by David Lewis and Frank Jackson (Lewis 1970, 1972; Jackson 1998). According to this programme, metaphysical investigations should start from studying intuitions and platitudes about the analyzed concept. This is the first stage of the conceptual analysis. The result is a list of characteristics of the concept we are interested in. The second stage, then, is to find out empirically what scientific entities satisfy these characteristics (all of them, or at least a majority of those included in such a list). In the case of the Canberrish Plan for Consciousness, the proposed research programme also consists of two stages. First, we should collect intuitions and platitudes about consciousness, focusing on lay people and carefully attending to the representation of diverse backgrounds, cultures and worldviews. It is possible, given that Chalmers is right that problem intuitions are widespread, that in the course of conducting the research some intuitions of the form "consciousness is hard to explain" would be collected. But most likely the majority would be of a positive form, that is, ascribing certain characteristics to consciousness rather than denying them, including statements such as "conscious experience is subjective and perspectival", or perhaps such as "consciousness is inconstant and stressful". As a result, we could compile a list of characteristics that are ascribed to consciousness. In the second stage we envisage a slight departure of the Canberrish Plan from its forebear. The Canberra Plan is plainly a naturalistic account (Kornblith 2016: 155). A proper programme for navigating the dispute on the nature of consciousness should, however, also allow for non-naturalistic theories of consciousness. The Canberrish Plan for Consciousness does not presuppose a naturalistic framework for the results of the analysis.
We propose that in the second stage the search for theories capable of accounting for the characteristics of consciousness should be extended beyond the accounts vindicated by contemporary science. The second stage would therefore consist of either finding empirically what kind of entities satisfy the list of characteristics from the first stage, or determining a priori what kind of entities would satisfy that list. This seems to avoid the risk of rejecting non-physical (or, more broadly, anti-reductionist) theories of consciousness at the outset. By limiting ourselves to the study of problem (or negative) intuitions, as Chalmers does in his formulation of the meta-problem, we remain within the confines of the current debates on consciousness, and most importantly within the confines of the opposition between realism and illusionism, which is damning for the programme of studying intuitions, as we have shown previously. This limitation stems from the fact that negative intuitions are primarily not about consciousness itself but about the hard problem: "why consciousness cannot be explained in such and such a way" or "why consciousness cannot be characterised so and so". Through the theory-ladenness of the questions, this "such and such" and "so and so" are presuppositions that limit possible explanations to the ones entertained by either realists or illusionists. However, with the inclusion of positive intuitions, the study of intuitions escapes the boundaries of this opposition and avoids begging the question about the status of intuitions themselves. This approach allows for the possibility that none of the extant accounts is correct or even on the right track, hence the current impasse. The novel accounts may be compatible with different views of intuitions, and in this way the Canberrish Plan for Consciousness avoids the criticisms laid out previously. We see multiple possible outcomes of this kind of study. Maybe some of the negative intuitions held only by philosophers would disappear in the process. Maybe this research would indicate which properties of the concept of consciousness affect the reception of scientific claims and explanations and make them unconvincing. It could also throw light on the pre-theoretical entanglements of problem intuitions specifically, highlighted by some commentators on Chalmers' meta-problem proposal. For example, Rosenthal (2019) argues that problem intuitions might only be the intuitions of people with a specific theoretical (or pre-theoretical) background. This view hinges on the question of whether the concept of consciousness underlying problem intuitions is the same concept that lies at the heart of the scientific discourse on consciousness. What's more, Rosenthal points out that intuitions can be elicited by the structure and wording of the question posed (Rosenthal 2019: 196), and argues strongly that they should not be regarded as constraints on the explanation of consciousness. A similar point is also made by Wierzbicka, who argues that problem intuitions will depend on the metalanguage in which they are studied (Wierzbicka 2019: 263). It is also possible that the Canberrish Plan for Consciousness will provide some insight into why people believe that there is in fact a hard problem of consciousness. It could turn out that for some people consciousness is not a concept from the same category as the properties or entities which can be explained by physical science. Consciousness could, e.g., be
regarded as a property of a person, where "person" is understood in terms of some religious or metaphysical assumptions that are incompatible with physical vocabulary. In this case, this research would highlight a difference between the very concepts used by some lay people and by most of the researchers immersed in current discussions about consciousness. Translating the results about one of those concepts onto the other would in fact constitute a category mistake. Note that the Canberrish Plan for Consciousness is very inclusive in terms of the understanding of the notion of "intuition". It is compatible with any ontological approach to intuitions, as well as with any position on the methodological and epistemic status of intuitions. The only exceptions are accounts according to which intuitions should be entirely abandoned in any intellectual enterprise due to the lack of clarity about what they are (such views could be assigned to Cappelen (2012) or Williamson (2007)). In the way the proposed programme centers the views of consciousness held by lay people and strives to be compatible with both eliminativist and anti-reductionist accounts, it is reminiscent of Daniel Dennett's heterophenomenology (e.g. Dennett 1991, 2003). Similarly to the proposed Canberrish Plan for Consciousness, the goal of heterophenomenology was to "compose a catalogue of what the subject believes to be true about his or her conscious experience" (Dennett 2003: 20, italics original). There is, however, a major difference. In Dennett's approach this catalogue of beliefs was to be extracted from reports concerned with particular experiences. This is highlighted by the metaphor of a generic psychological experiment in which the subject is asked to report their train of thought. In this regard, heterophenomenology operates exactly at the level of phenomenal consciousness. In turn, in the Canberrish Plan we are interested in the concept of consciousness the interrogated subject has: platitudes and intuitions about what consciousness in principle is, rather than what the subject's current experience consists of. That is precisely why the postulated programme (similarly to Chalmers' meta-problem) shifts the discussion to the meta-level. Furthermore, we do not wish to make statements as strong as Dennett's claim that "The total set of details of heterophenomenology, plus all the data we can gather about concurrent events in the brains of subjects and in the surrounding environment, comprise the total data set for a theory of human consciousness. It leaves out no objective phenomena and no subjective phenomena of consciousness." (Dennett 2003: 20). The Canberrish Plan for Consciousness at its first stage explicitly leaves out all phenomena of consciousness in order to construct, from the bottom up and directly from folk beliefs, an account of what falls under the name of "phenomena of consciousness". The important difference between the Canberrish Plan, on the one hand, and both the naturalistic Canberra Plan and Dennett's heterophenomenology, on the other, is that in the Canberrish Plan it is not determined whether an object falling under the concept of "phenomena of consciousness" has to be empirically known (as in the Canberra Plan) or correlated with events in the brain (as in Dennett's heterophenomenology). Phenomena of consciousness could meet these criteria, but they do not have to.
In sum, studying positive intuitions in line with the Canberrish Plan for Consciousness, in lieu of the meta-problem programme introduced by Chalmers, could move the existing debates forward. The account proposed here could shed light on the actual intuitions about the concept of consciousness, exposing possible differences in the understanding of this central notion between those who recognize the hard problem of consciousness as a problem and those who do not. Moreover, this approach opens up the possibility of establishing new theories of consciousness and breaking the deadlock between realists and illusionists.

Conclusion

Much of the discussion on consciousness finds itself at an impasse. Illusionists and realists often do not agree about the very formulation of the "hard problem of consciousness", the problem around which most of the discussion revolves. The meta-problem of consciousness was proposed by Chalmers (2018) as a way out of that impasse. Unfortunately, his programme fails. The notion of "intuition", and specifically that of problem intuitions, is central to his project. However, the ongoing debates on the ontological and epistemological status of intuitions must be taken into account. In fact, accepting any one of the three most popular ontological approaches to the nature of intuitions begs the question. If we accept that intuitions are either simply beliefs or some special kind of beliefs, we are led to accept the claims of, respectively, strong or weak illusionism. At the same time, if we accept that intuitions are independent of beliefs and constitute sui generis mental states, only realism remains a viable option. In effect, this shows that any intuition-driven approach to the hard problem of consciousness is doomed. However, the central idea of Chalmers' meta-problem, namely that of moving the debate on consciousness to a meta-level of sorts through reference to intuitions, can be salvaged. If we abandon the focus on the hard problem, and the negative intuitions that come along with it, we may attempt to implement a "Canberrish Plan for Consciousness": a qualitative study of which intuitions about the qualities of consciousness are widely shared among people (philosophers and non-philosophers alike). Such a study could provide not only a novel, valid explanatory target for theories of consciousness, or at least a corroboration of the importance of features which consciousness researchers already bring under scrutiny; it could also hint at novel approaches to explaining the nature of consciousness. As a result, it could also bear on the debates about the hard problem and the explanatory gap it postulates.
The anti-tumoral role of Hesperidin and Aprepitant on prostate cancer cells through redox modifications

Prostate cancer is the second most prevalent cancer in men. While the anti-cancer effect of Hesperidin and Aprepitant (AP) on prostate cancer cells is well documented, their combined effect and their mechanism of action have not been fully investigated. Therefore, this study aimed to investigate the anti-cancer effects of Hesperidin and AP, alone and in combination, on prostate cancer cells. PC3 and LNCaP cell lines were treated with Hesperidin and AP alone and in combination. The resazurin test was used for assessing cell viability. The reactive oxygen species (ROS) level and P53, P21, Bcl-2, and Survivin gene expression were assessed. Also, a trypan blue assay was performed. Hesperidin and AP reduced cell viability and increased apoptosis in PC3 and LNCaP cells. The ROS level was reduced after treating the PC3 and LNCaP cells with AP, with or without Hesperidin. P53 and P21 gene expression increased after treatment with Hesperidin, with or without AP, compared to the untreated group in the PC3 cell line. Bcl-2 and Survivin gene expression decreased with AP, with or without Hesperidin, in the PC3 and LNCaP cells. The current study showed the synergistic anti-cancer effect of Hesperidin and AP in both PC3 and LNCaP cell lines.

Introduction

Prostate cancer is the second most frequent malignancy and the fifth most common cause of cancer-related death among men worldwide (Bray et al. 2018; Ferlay et al. 2018). The current treatment for prostate cancer is radical prostatectomy, with radio- or chemotherapy followed by androgen deprivation therapy (Rydzewska et al. 2017). However, their adverse effects, cancer relapse, and tumor resistance have limited their efficacy (Nakazawa et al. 2017). Many investigations have tried to identify the mechanisms behind prostate cancer, but much remains unknown. Several new therapies have been introduced for prostate cancer, including monoclonal antibodies, vaccines, and other types of targeted drugs (O'Neill et al. 2015). However, these studies are in their initial stages and the results need to be improved. Therefore, it is necessary to find a new effective therapy for prostate cancer with no adverse effects on healthy cells. Aprepitant (AP) is already approved by the Food and Drug Administration (FDA) as an effective agent for the prevention of chemotherapy-induced nausea and vomiting (Muñoz and Coveñas 2020). Previous studies showed that the neurokinin-1 receptor (NK1R) and its agonist, substance P (SP), are overexpressed in different types of cancer cells, including gastric, glioblastoma, larynx, colon, pancreatic, and prostate cancers (Ghahremanloo et al. 2021; Cussenot et al. 1996). Besides, the SP/NK1R axis has been shown to play an important role in the progression, angiogenesis, and metastasis of various cancer cells (Esteban et al. 2006). With regard to these features, pharmacologic NK1R inhibition has become an important strategy for new cancer treatments. AP is a highly specific NK1R antagonist, and its anti-cancer effects against different cancer cells, including prostate cancer, have been shown previously (Ebrahimi et al. 1869). Besides, no adverse effects on healthy cells were found even after the administration of a large dose of AP (Muñoz and Rosso 2010). In addition, phytochemical treatments have been used as an excellent cancer treatment due to easy extraction from plants, cost-effectiveness, and low side effects (Zhang et al. 2020).
Hesperidin, a flavanone largely extracted from citrus fruits, has been shown to have considerable beneficial effects on the human body (Tanwar and Modgil 2012). Previous in vivo and in vitro investigations have demonstrated the anti-inflammatory, antioxidant, and neuroprotective roles of Hesperidin (Li and Schluesener 2017). The anti-cancer effect of Hesperidin has been shown in different cancer cells, including both androgen-dependent and androgen-independent prostate cancer cells (Lee et al. 2010; Ning et al. 2020). While the anti-cancer effect of Hesperidin and AP on prostate cancer cells is well documented, their combined effect, as well as their mechanisms of action, is not fully understood. Therefore, the present study aimed to investigate the anti-cancer effects of Hesperidin and AP, alone and in combination, on prostate cancer cells. In addition, their effects on reactive oxygen species (ROS), tumor suppressors (P53 and P21), and anti-apoptotic genes (Bcl-2 and Survivin) were evaluated.

Cell culture

The present study was performed at the Mashhad University of Medical Science, Mashhad, Iran. Two different prostate cancer cell lines, PC3 and LNCaP, as well as human fetal foreskin fibroblast cells (HFF-1), were purchased from the National Cell Bank of Institute Pasteur of Iran. PC3 and LNCaP cells were cultured in RPMI 1640 medium, and HFF-1 cells were grown in Dulbecco's Modified Eagle Medium (DMEM). One percent antibiotics (penicillin/streptomycin) and 10% heat-inactivated fetal bovine serum (FBS) were added to the cell cultures. Cell lines were kept in a humidified incubator at 37 °C with 5% CO2.

Drugs

AP was purchased from Sigma-Aldrich Company (St. Louis, MO, USA) and dissolved in ethanol. Hesperidin was bought from a local source (Golexir Pars) and dissolved in dimethyl sulfoxide (DMSO).

Resazurin cell viability assay

Resazurin cell viability tests were performed to investigate the cytotoxicity of Hesperidin and AP. The resazurin cell viability assay is based on the extent of the transformation of resazurin (non-fluorescent) into resorufin and dihydroresorufin (highly fluorescent) by metabolically active cells. There is a direct association between the rate of dye reduction and the number of viable cells in a sample (O'Brien et al. 2000). Briefly, 2.5 × 10⁴ cells were seeded per well of a 96-well plate in a volume of 100 µL and treated with different concentrations of AP (0 (control), 5, 10, 20, 50, 70, and 90 μM) and Hesperidin (0 (control), 10, 50, 100, 200, 300, and 500 μM) for 24 and 48 h. Then, after removing the medium, each well received 10 µL resazurin solution (0.01 mg/mL dissolved in phosphate-buffered saline; Sigma-Aldrich) for 3 h at 37 °C, under 5% CO2 and protected from light. Using a microplate fluorimeter, the assays were read at excitation and emission wavelengths of 570 and 600 nm, respectively. The results were presented as percentage survival rates by comparing the absorbance of treated cells with that of the untreated control. GraphPad Prism® 6 software was used for calculating the IC50.
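For illustration, the normalization and curve-fitting step described above can be sketched computationally. The following is a minimal Python example of the kind of four-parameter logistic dose-response fit that software such as GraphPad Prism performs to obtain an IC50. Only the Hesperidin concentrations come from the protocol above; the viability percentages are hypothetical placeholders, not the study's measured data.

```python
# Minimal sketch of a four-parameter logistic (4PL) dose-response fit.
# The viability percentages below are hypothetical placeholders, not the
# study's measured data; only the concentrations come from the protocol.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """4PL curve: viability falls from 'top' to 'bottom' as dose rises."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hesperidin concentrations tested (µM), excluding the 0 µM control,
# which is instead used to normalize absorbance to 100% viability.
conc = np.array([10, 50, 100, 200, 300, 500], dtype=float)
viability = np.array([96, 85, 71, 52, 37, 21], dtype=float)  # hypothetical %

# Loose initial guesses: 0% floor, 100% ceiling, mid-range IC50, unit slope.
params, _ = curve_fit(four_pl, conc, viability,
                      p0=[0.0, 100.0, 150.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50: {ic50:.1f} µM (Hill slope {hill:.2f})")
```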
ROS level assay

The intracellular ROS level was assessed using a 2′,7′-dichlorodihydrofluorescein diacetate (DCFDA) cellular ROS detection assay, following the manufacturer's protocol. According to the protocol, 20 μM DCFDA was added to the cells and, after 24 h of incubation, the DCFDA was washed off. Afterward, the washed cells were treated with AP at a concentration of 10 µM, Hesperidin at concentrations of 50 and 100 µM, and the combinations of 10 µM AP + 50 µM Hesperidin and 10 µM AP + 100 µM Hesperidin for 24 h. Tert-butyl hydroperoxide (TBHP) was used as a positive control. Finally, the relative fluorescence intensity was measured with a Perkin Elmer fluorescence plate reader.

RNA extraction and quantitative real-time PCR (qRT-PCR)

An RNA extraction kit (Pars Tous Biotechnology, Iran) was used for RNA extraction from the cultured PC3 and LNCaP cells. According to the manufacturer's instructions, a complementary DNA (cDNA) synthesis kit (Pars Tous Biotechnology, Iran) was used to reverse-transcribe the RNA to cDNA. A LightCycler® 96 RT-PCR system (Roche, USA) was used for qRT-PCR amplifications with a fluorogenic dye detection system (SYBR Green). One of the most common housekeeping genes, glyceraldehyde-3-phosphate dehydrogenase (GAPDH), was applied as an internal reference gene, and the ΔΔCT method was used for the analysis of relative changes with GraphPad Prism software (version 6.0); a computational sketch of this calculation is given at the end of this section. qRT-PCR was performed for Bcl-2, Survivin, P53, and P21 with primers purchased from Pishgaman (Pishgaman Co., Tehran, Iran).

Trypan blue assay

PC3 and LNCaP cells were treated with the aforementioned doses of Hesperidin and AP for either 24 or 48 h. Next, the viable cell density was determined with 0.4% trypan blue staining as described (Javid et al. 2020). Trypan blue was used for cell staining; an optical microscope was then used for counting the stained (dead) and unstained (viable) cells. The viable cell density was determined according to the following formula: % cell viability = (viable cell count/total cell count) × 100.

Statistical analysis

The experimental data are presented as mean ± standard error of the mean. The ANOVA test followed by Bonferroni's t-test was applied for statistical analysis. All the data were analyzed in triplicate and compared to the untreated control group. A p-value < 0.05 was considered statistically significant. GraphPad Prism® 6.0 software (San Diego, CA, USA) for Windows was used for statistical analysis.
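As a worked illustration of the relative-expression analysis mentioned in the qRT-PCR section above, the following Python sketch implements the standard 2^(-ΔΔCT) (Livak) calculation with GAPDH as the reference gene. All Ct values here are hypothetical placeholders, not measurements from this study.

```python
# Standard 2^(-ΔΔCT) relative-expression calculation (Livak method),
# with GAPDH as the internal reference gene. Ct values are hypothetical.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    # ΔCT normalizes the target gene to the reference gene in each sample.
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    # ΔΔCT compares treated vs. untreated; 2^(-ΔΔCT) is the fold change.
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Example: Bcl-2 in AP-treated vs. untreated PC3 cells (hypothetical Cts).
fc = fold_change(ct_target_treated=26.5, ct_ref_treated=18.0,
                 ct_target_control=24.0, ct_ref_control=18.1)
print(f"Bcl-2 relative expression: {fc:.2f}-fold vs. control")
# A value below 1 indicates down-regulation, consistent with the reduced
# Bcl-2 expression reported after AP treatment.
```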
The effect of AP and Hesperidin on cell viability of PC3 and LNCaP cell lines

The PC3, LNCaP, and HFF-1 cell lines were treated with increasing concentrations of AP (5, 10, 20, 50, 70, and 90 µM) and Hesperidin (10, 50, 100, 200, 300, and 500 µM) for 24 h. As shown in Fig. 1 (a and b), the IC50s were 212 and 235 µM in the PC3 and LNCaP cell lines, respectively, after 24 h of Hesperidin treatment. In addition, the IC50s were 18.30 and 22.35 µM in the PC3 and LNCaP cell lines, respectively, after 24 h of AP treatment (Fig. 2 (a and b)). Furthermore, as shown in Fig. 3, the IC50 of the HFF-1 cell line after 24 h of Hesperidin was 527 µM. Based on these results, 10 µM AP and 50 and 100 µM Hesperidin were selected as the experimental concentrations.

The effect of AP and Hesperidin on ROS level

The intracellular ROS level was measured in the PC3 and LNCaP cell lines in response to AP (10 µM) and Hesperidin (50 and 100 µM), alone and in combination (Fig. 4 (a and b)). As shown in Fig. 4 (a), the ROS level increased significantly after treatment with 50 and 100 µM Hesperidin in the PC3 cell line in comparison to the untreated group. Furthermore, the ROS level was significantly reduced after treating the PC3 cells with 10 µM AP, with or without Hesperidin pretreatment. Figure 4 (b) presents the ROS level after treatment with AP and Hesperidin in the LNCaP cell line. The ROS level significantly increased after treatment with 100 µM Hesperidin and with the combination of 100 µM Hesperidin + 10 µM AP. However, 10 µM AP with or without 50 µM Hesperidin significantly reduced the ROS level.

The effect of AP and Hesperidin on P53 and P21 gene expression

As shown in Fig. 5 (a and c), P53 and P21 gene expression significantly increased after treatment with 50 and 100 µM Hesperidin, with or without 10 µM AP, compared to the untreated group in the PC3 cell line. Besides, 50 and 100 µM Hesperidin significantly increased P53 and P21 gene expression in the LNCaP cell line (Fig. 5 (b and d)).

The effect of AP and Hesperidin on Bcl-2 gene expression

Figure 6 (a and b) demonstrates the effect of AP and Hesperidin on Bcl-2 gene expression. Bcl-2 gene expression significantly decreased with 10 µM AP, with or without 50 or 100 µM Hesperidin pretreatment, in the PC3 cell line. Besides, 10 µM AP with or without 50 and 100 µM Hesperidin significantly reduced Bcl-2 gene expression in the LNCaP cell line (Fig. 6 (b)).

The effect of AP and Hesperidin on Survivin gene expression

As shown in Fig. 6 (c), 10 µM AP, with or without the combination with 50 or 100 µM Hesperidin, significantly decreased Survivin gene expression in the PC3 cell line. Furthermore, Survivin gene expression significantly decreased after treatment with 10 µM AP with or without 50 µM Hesperidin in the LNCaP cell line (Fig. 6 (d)).

The effect of AP and Hesperidin in the trypan blue apoptosis test

As presented in Fig. 7, 50 and 100 µM Hesperidin, 10 µM AP, and the combinations of 50 µM Hesperidin + 10 µM AP and 100 µM Hesperidin + 10 µM AP induced a significantly higher rate of apoptosis in both PC3 and LNCaP cell lines compared to the untreated group.

Discussion

As previously noted, prostate cancer is the second most frequent cancer in men (Bray et al. 2018). Conventional therapies for prostate cancer include chemotherapy, radiotherapy, androgen deprivation therapy (anti-androgenic medications such as flutamide and bicalutamide), orchiectomy (in severe cases), and freezing of prostate tissues (Barani et al. 2020). However, these approaches are associated with loss of libido, erectile dysfunction, bone mass loss, and obesity (Barani et al. 2020; Oun et al. 2018). In addition, the development of castration resistance, cancer recurrence, metastatic disease, and the adverse effects of therapeutic regimens are the main problems and have limited treatment efficacy (Nakazawa et al. 2017; Katzenwadel and Wolf 2015). Hence, it is vital to find a new potential therapy that has no toxic effects on healthy cells. In the current study, the prostate cancer cell lines PC3 and LNCaP were treated with AP (10 µM) and Hesperidin (50 and 100 µM), alone and in combination. To the best of our knowledge, this is the first study aimed at assessing the synergistic effects of AP and Hesperidin on cancer cells. Our results showed that the combinations of 50 µM Hesperidin + 10 µM AP and 100 µM Hesperidin + 10 µM AP induce a significantly higher rate of apoptosis in both PC3 and LNCaP cell lines. On the contrary, Hesperidin did not affect the normal cell line (HFF-1).

Fig. 2 The resazurin assay shows the viability of the PC3 and LNCaP cell lines after 24 h of Aprepitant. Aprepitant can cause prostate cancer cell death in both PC3 and LNCaP cell lines in a dose- and time-dependent manner. Section (a): the IC50 in PC3 cells treated with Aprepitant was 18.30 μM after 24 h. Section (b): the IC50 in LNCaP cells treated with Aprepitant was 22.35 μM after 24 h.
Fig. 3 The resazurin assay shows the viability of the HFF-1 cell line after 24 h of Hesperidin. Hesperidin can cause HFF-1 cell death in a dose- and time-dependent manner, with an IC50 of 527 μM.

The IC50 of the normal cell line (HFF-1) after treatment with Hesperidin was almost 2 times and 20 times higher than that of the LNCaP and PC3 cell lines, respectively. These results showed the safety of Hesperidin for the normal cell line, while it induces apoptosis in prostate cancer cells. The mechanism underlying the synergistic effect of AP and Hesperidin may involve several molecular pathways. ROS is one of the potential pathways, especially for AP. ROS formation during metabolism has been implicated in different physiological functions (Moloney et al. 2018). In fact, the balance between ROS production and its scavenging by antioxidants is properly maintained in healthy cells (Kim et al. 2016). However, cancer cells have dysregulated ROS homeostasis, leading to higher ROS generation (Kim et al. 2016). A higher level of ROS has an anti-apoptotic effect as a result of redox-sensitive transcription factor activation, including nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) (Grivennikov and Karin 2010). In healthy cells, NF-κB is located in the cytosol, bound to IκBα in an inactive form. Nevertheless, IκBα phosphorylation yields active NF-κB in cancer cells, which can inhibit apoptosis and result in uncontrolled cell growth (Wang et al. 1998). In fact, NF-κB can inhibit apoptosis by elevating anti-apoptotic genes, including Bcl-2 and Survivin (Arbab et al. 2012). Previous studies showed the overexpression of Bcl-2 in prostate cancer, B-cell lymphomas, colorectal cancer, and breast cancer (Kirkin et al. 2004). Therefore, reducing the ROS level in prostate cancer cells can lead to reduced anti-apoptotic effects as a result of lower Bcl-2 and Survivin gene expression. The results of the current study showed that the combination of 10 µM AP + 50 µM Hesperidin significantly reduced the ROS level in the PC3 cell line. Also, Bcl-2 and Survivin gene expression significantly decreased with the combinations of 50 µM Hesperidin + 10 µM AP and 100 µM Hesperidin + 10 µM AP in the PC3 cell line. Almost similar results were found in the LNCaP cell line. This mechanism can be one of the potential anti-cancer effects of Hesperidin and AP on prostate cancer cells. On the contrary, in a study by Ning et al. on prostate cancer cells, it was shown that Hesperidin can decrease cell growth and viability in a dose-dependent manner as a result of ROS elevation and MMP reduction (Ning et al. 2020). These results can be explained by the double-edged-sword character of ROS, whereby both lowering and elevating the ROS level in cancer cells can induce apoptosis; both have been identified as potent therapeutic approaches in cancer management (Javid et al. 2022b). On the other hand, the P53 and P21 pathways seem to be another potential anti-cancer mechanism, especially for Hesperidin, in prostate cancer cells. Various cell functions are regulated by the P53 tumor suppressor, including cell growth, invasion, and migration (Muller et al. 2011). In addition, P21 plays an important role in cell-growth arrest, senescence, and suppression of cell invasion (Abbas and Lee 2012).
The cell cycle arrest role of P21 can be due to the modulation of cell cycle regulatory proteins such as cyclins, cyclin-dependent kinases (CDKs), and CDK inhibitors (Pandey and Khan 2021). In addition, it has been reported that Hesperidin can increase P21 in various cancer cells, including leukemia cell lines, colon cancer, breast cancer, and lung cancer (Oliveira et al. 2020). On the other hand, it has been shown that AP can cause cell cycle arrest in G2/M, significantly decrease cyclin B1, and increase P21 in other types of cancer cell lines (Obata et al. 2016). Besides, it has been stated that AP can cause cancer cell death by regulating different pathways, such as cell cycle-related genes (c-Myc, cyclin D1, cyclin B1, p21), P53, PI3K/Akt/NF-κB, and apoptosis target genes (Bcl-2 and Bax) (Javid et al. 2022a). However, our results did not show significant changes in P53 and P21 in prostate cancer cells after treatment with 10 µM AP alone. Furthermore, SP and its primary receptor NK1R may constitute another pathway for the anti-cancer effect. As previously noted, SP's binding to NK1R can induce cancer cell progression, metastasis, and angiogenesis (Esteban et al. 2006). Further studies showed that prostate cancer cells overexpress NK1R (Cussenot et al. 1996). Given the role of the SP/NK1R system in the initiation and progression of cancer cells, it seems to be a potential target for anti-cancer treatments. Toward this end, Ebrahimi et al. stated that SP/NK1R can induce proliferation and migration in prostate cancer cells by affecting apoptosis-related genes and cell cycle-related proteins and by increasing MMP-2 and MMP-9 expression (Ebrahimi et al. 1869). They showed that Aprepitant can reverse these effects in both in vitro and in vivo experiments on prostate cancer cells (Ebrahimi et al. 1869). Although our results showed, for the first time, the synergistic anti-cancer effect of Hesperidin and AP on prostate cancer cells in both PC3 and LNCaP cell lines, this effect can be well explained by the P53, P21, Bcl-2, Survivin, and ROS pathways only in the PC3 cell line. Our results for these possible pathways of the anti-cancer effect of Hesperidin and AP were not straightforward in the LNCaP cell line. One possible explanation for these results can be the natural difference between the PC3 and LNCaP cell lines. For instance, a prior study showed that Hesperidin can inhibit the testosterone-induced cell proliferation of the LNCaP cell line, while it had no effect on the hormone-independent prostate cancer cell line PC3 (Lee et al. 2010). Therefore, the hormonal effects of these treatments can be an explanation for this difference and another potential pathway for their anti-cancer effects. In addition, the following limitations should be considered in this study. First, the SP/NK1R pathway in prostate cancer cells was not fully investigated in the current study. Second, the signaling pathways of NF-κB were not evaluated. Finally, this is an in vitro investigation, and further in vivo studies are needed on this subject. In conclusion, the current study showed the synergistic anti-cancer effect of Hesperidin and Aprepitant on prostate cancer cells in both PC3 and LNCaP cell lines. The combination …
Fig. 4 The effect of Hesperidin and Aprepitant on ROS level in PC3 and LNCaP cell lines. Hesperidin significantly increased the ROS level, while Aprepitant significantly reduced it. Section (a): Aprepitant with or without 50 and 100 μM Hesperidin significantly reduced the ROS level in the PC3 cell line. Section (b): Aprepitant with or without 50 μM Hesperidin significantly decreased ROS …

Fig. 5 The effect of Hesperidin and Aprepitant on P53 and P21 gene expression in PC3 and LNCaP cell lines. Section (a): 50 and 100 μM Hesperidin with or without 10 μM Aprepitant significantly increased P53 gene expression in the PC3 cell line. Section (b): 50 and 100 μM Hesperidin significantly increased P53 gene expression in the LNCaP cell line. Section (c): 50 and 100 μM Hesperidin with or without 10 μM Aprepitant significantly …

Fig. 6 The effect of Hesperidin and Aprepitant on Bcl-2 and Survivin gene expression in PC3 and LNCaP cell lines. Section (a): 10 μM Aprepitant with or without 50 and 100 μM Hesperidin significantly reduced Bcl-2 gene expression in the PC3 cell line. Section (b): 10 μM Aprepitant with or without 50 and 100 μM Hesperidin significantly reduced Bcl-2 gene expression in the LNCaP cell line. Section (c): 10 μM Aprepitant with or without 50 and 100 μM Hesperidin significantly reduced Survivin gene expression in the PC3 …
IDEOLOGICAL REPRESENTATIONS IN POLITICIAN AND PUBLIC COMMENTARIES ON DKI JAKARTA REGIONAL ELECTION: JUDGING FROM THE VOCABULARY USAGE

The multitude of news related to the Jakarta Regional Election prompted many people to respond, not only politicians but also the wider society. This study aims to describe the ideology behind the statements and comments of politicians and the public about the 2017 DKI Jakarta Regional Election political news, based on their vocabulary usage. This study uses a qualitative-descriptive approach and a content analysis method based on Norman Fairclough's critical discourse analysis theory. The results showed that (1) three ideological representations, namely religionism, secularism, and neutrality, were found in politicians' statements about the DKI Regional Election in the online mass media, and (2) four ideological representations, namely religionism, secularism, liberalism, and neutrality, were found in public comments about the political discourse of the DKI Regional Election on social media. However, based on the dominance of their use, it can be concluded that the political discourse battle of the 2017 DKI Jakarta Regional Election was motivated by two ideologies, namely secularism and religionism, both in politicians' statements and in public comments. Religious ideology is represented through vocabulary that raises issues of faith, sharia, and morals, and links them to the choice of government leaders. On the other hand, secular ideology is represented through vocabulary related to the ideal government leader, such as honest, intelligent, creative, etc., without linking it to the religion the leader adheres to.

1 Introduction

This research is appealing because it examines the personal ideology of every politician and member of the public who commented on the political discourse of the 2017 DKI Regional Election. The election of national leaders and regional heads cannot be separated from the people who choose them (Usfinit, Suprojo, & Setyawan, 2014). In the run-up to every regional head election, there are many comments from various groups, especially on social media (Budiyono, 2016). Moreover, the 2017 DKI Jakarta Regional Election was rife with SARA issues (concerning ethnicity, religion, race, and inter-group relations) because one of the candidates was entangled in a blasphemy case. The incident led to many comments, for and against each candidate, by politicians and the public. Politicians and the public use different vocabulary to comment on the DKI Regional Election events, so it is necessary to study what ideology influenced their use of language (vocabulary that tends toward verbal violence), both by politicians on online news portals and by the public on social media (Agustina, 2017; Agustina & Syahrul, 2017). Ideology in texts has been extensively studied by previous researchers, including Abdunasir (2015), Shahsavar & Naderi (2015), and Sideeg (2015) in Australia; Faris & Paramasivam (2016) on Mandela; Ramanathan & Hoon (2016) in Malaysia; and Mohammadi & Javadi (2017) in Iran. Although many such studies tend to examine the ideology of journalists and the media that employ them, this research examines the personal ideology of politicians and the public commenting on the 2017 DKI Jakarta Regional Election, particularly through the vocabulary they choose.
Based on these reasons, this study aims to describe the ideological representations in (1) politicians' statements/comments on online news portals and (2) public comments on social media about the 2017 DKI Jakarta Regional Election political discourse.

2. Methodology/Materials

This study uses a qualitative-descriptive approach and a content analysis method, based on Norman Fairclough's (2003) critical discourse analysis, which comprises text-dimension analysis, discourse practice analysis, and socio-cultural practice analysis. However, in this article the ideology in the comments of politicians and the public about the discourse on the DKI Jakarta Regional Election is presented in only one dimension, namely the text dimension. The data of this research are the word choices (diction) in sentences containing certain ideological representations in the selected sources, namely (1) politicians' statements/comments about the political situation of the DKI Regional Election in the online news mass media and (2) public comments about that news on Facebook. Therefore, the main instrument in this research is the researcher, aided by a computer or cell phone for downloading data and by inventory format sheets and classification format sheets for data collection. Data analysis followed Critical Discourse Analysis theory, using the method of Miles & Huberman (1992) in three stages: (1) data reduction, (2) data presentation, and (3) drawing conclusions, in accordance with the procedures of each stage.

Ideological Representations in Politicians' Statements about the DKI Jakarta Regional Election in Online Mass Media

Based on the results of data analysis, three types of ideology were represented by politicians in their statements about the political events of the DKI Jakarta Regional Election on the online news portals, as shown in the following table. Based on the classification of the data, the viewpoints behind the statements of politicians in the online news about the Regional Election are ideologies differentiated by the relationship they posit between religion and the system of government, namely religionism and secularism. However, there are also neutral statements, though these are not dominant.

Religionism

The statements of politicians that represent religious ideology are reflected in the vocabulary they use, which is related to Islam, especially vocabulary describing the nature and principles of the religion, such as (surat) Al Maidah [the Qur'anic chapter Al-Ma'idah], (pemimpin) beriman [a believing (leader)], (agama) Islam [the religion of Islam], (perang) Badar [the Battle of Badr], damai [peaceful], (akal) sehat [common sense], etc. When linked to the context, this vocabulary represents religious ideology by bringing up several issues, including (1) issues of faith, (2) issues of sharia, and (3) issues of morals. The representation of religious ideology through the issue of faith is found, among others, in the following statement by a politician. (1) "Kalau terdakwa (Ahok) …" [If the defendant (Ahok) …] The politician's comment represents a religious ideology touching on the issue of faith, revealed by the use of the vocabulary Al Maidah, which implies to the reader that the 2017 DKI Regional Election problem arose only because A, as a candidate for governor, mentioned Al Maidah on Pulau Seribu. This means that the politician regretted A's "slip" while on duty on Pulau Seribu, which touched on matters related to the faith of Muslims, so that he was deemed to have insulted the contents of the Al Quran.
(then) tidak ...', forming a cause-and-effect proposition repeated several times, with the permuted structure 'Tak ada ... kalau' serving as affirmation. In other words, the use of the Al Maidah 51 vocabulary is treated as a 'discourse battle' of religious ideology through the subsequent sequences. Statements of politicians revealing religious ideology by raising the issue of sharia were also found, including those concerning muamalah. (2) "Islam nggak sangar kayak begitu. ..." The religionism category of muamalah in datum (2) is represented through synonymous, repetitive-comparative vocabulary, such as Islam tidak sangar (tetapi) Islam sejuk-damai, which is then repeated to affirm Islam merangkul (bukan) memukul. Politicians use this vocabulary to describe the essence of Islamic leadership as that of a religion that loves peace rather than spreading hatred. Through the repetitive-comparative vocabulary in these comments, politicians convey their disapproval of the attitude of some people who 'exaggerate' the problem, so that they no longer act in accordance with Islamic teachings for the universe. Likewise, as can be seen in datum (3), the use of the phrase perang Badar conveys a comparison with the paramount situation of Muslim resistance. Furthermore, the religionism issue of morality in politicians' statements about the political discourse of the DKI Jakarta Regional Election in the online news can be seen in the following data. (4) "Yang paling penting adalah wujud kemenangan akal sehat. Gagasan yang kami berikan gagasan akal sehat" (LP17:25/4/17) (The most important thing is the winning form of common sense. The ideas we provide are common-sense ideas.) (5) Ada juga pesan P.., "Kita sudahi Jakarta yang gaduh dan terbelah di bawah gubernur lama. Bisnis memerlukan rasa aman. Ekonomi perlu stabilitas politik. Ini lebih bisa diberikan oleh Anies-Sandi." (DT22:19/4/17) (There was also P's message, "Let us end the Jakarta that was rowdy and divided under the old governor. Business needs a sense of security. The economy needs political stability. Anies-Sandi can better provide this.") The politician in statement (4) uses the vocabulary kemenangan, akal, sehat to inform the public that the victory of the A-S candidate pair is a form of victory for the idea of common sense. The statement implies that the politicians who support and vote for A-Dj hold ideas incompatible with common sense (irrational). The comment is a form of support from the politician for the A-S candidate pair and, conversely, a paradoxical expression of his dislike for the A-Dj pair. Then, in statement (5), the politician uses the vocabulary gaduh, terbelah, gubernur lama, with the intent of cornering the target in the text by displaying the target's failings. The vocabulary gubernur lama was aimed at A, in the hope that the reading public would conclude that A's morals were not good, by stating in the text that Jakarta was rowdy and divided during the previous governor's reign, namely A's. Based on the context, this choice of vocabulary by politicians is generally a reaction to other politicians' statements and/or community comments that are more dominantly religious in ideology, so that the opposing politicians carry out counterattacks in the form of campaigns and propaganda with statements that tend to represent secular ideology.
In statement (6), through the choice of the vocabulary pemimpin and pemerintahan, the politician wants to inform and remind readers that the Jakarta Regional Election is not about electing religious leaders but about finding government leaders; hence, religious issues should not be raised when choosing government leaders. Furthermore, in statement (7), the politician uses the vocabulary kinerja. Through this vocabulary, the politician informs readers that in the Regional Election the candidates' performance is a better measure than their religion or ethnicity. Based on the context, the politician also reminded readers to choose leaders who are honest and clean of corruption cases. The statement implies covert propaganda as a form of support for the A-Dj candidate pair and, conversely, indicates a lack of support for the A-S pair. Apart from these two ideologies, there are also neutral comments on the 2017 DKI Jakarta Regional Election political discourse by politicians and political elites on online news portals. In this view, politicians remain objective and do not side with any ideology carried by the candidate pairs and their camps; their comments favor no organization and are more nationalist in nature, that is, they are appeals that try to make the various parties aware of the interests of the state. This can be seen through vocabulary such as merusak demokrasi, (lihat) data, (turunkan) tensi politik, (jadilah) negarawan, (turunkan) eskalasi isu, etc., as in the following data. (8) ("I hope we reduce political tension and reduce the escalation of issues that divide us.") (9) "Bapak-bapak politikus santun yang saya hormati, tolonglah berperilaku sebagai negarawan. Pilkada DKI sudah selesai. Sekarang waktunya fokus membangun Jakarta yang lebih baik," ujar Charles (DT27:28/4/17) ("Dear polite politicians whom I respect, please behave as statesmen. The DKI Pilkada is over. Now is the time to focus on building a better Jakarta," said Charles.) The neutrality of comment (8) is marked by the vocabulary tensi politik and eskalasi isu, urging other politicians to reduce the political tensions and issues that would divide society. Similarly, in (9), through the vocabulary negarawan, membangun, Jakarta, etc., the politician appealed to other politicians who were too caught up in the discourse pitting religion against government to stop. By implication, through these vocabularies, politicians want a neutral and peaceful election. Based on these comments, it can be seen that there are still politicians who express a more neutral attitude by not siding with either candidate in relation to the political discourse of the DKI Jakarta Regional Election. Public Comments on the DKI Regional Election News on Social Media Based on the results of the data analysis, four types of ideology were found in public comments on the political discourse of the DKI Jakarta Regional Election on social media, as shown in the following table. (Table. Columns: No, Aspect, Findings, Percentage.) Religionism Public comments about the DKI Jakarta Regional Election on social media that represent religionism by raising the issue of faith are reflected in vocabulary related to Islam, such as the words sholat, muslim, masjid, seiman, etc. This vocabulary reveals religious ideology when linked to the contemporary context, raising three issues, namely (1) faith, (2) sharia, and (3) morals. In comment (10) there is an emphasis with the implication of cornering A.
This is reflected in the choice of the vocabulary kafir to present A, on the grounds that according to Islamic teachings it is not permissible to choose an infidel leader. Related to the context at the time, the vocabulary kafir was used to refer to the perpetrator because he was implicated in the case of blasphemy against Islam. Furthermore, in comment (11) the ideology of religion is reflected in the choice of vocabulary meaning that whoever chooses A has no religion. Both comments use a pattern of lengthened (appositive) information as emphasis and affirmation of the meaning that the commenter truly hates A and therefore also hates the people who choose A, especially those who are Muslim. This shows that the commenter belongs to the part of society that also links religious issues to matters of government. Other words reflecting religious ideology that raise the issue of faith in this study are Sang Maha Kuasa, beriman, Allah dan Rasulullah, Alquran, masjid, Islam, takbir, hisab, yaumul akhir, munafik, kitab suci, tobat, penista Alquran, dajjal, penista agama, and Almaidah. Public comments expressing religious ideology through the issue of sharia fall into two categories, namely (a) ibadah (worship) and (b) muamalah issues. Public comments in the worship category can be seen in the following quotes. Comments (12) and (13) are a form of prayer or hope from the community to Almighty God that the A-S gubernatorial candidates be elected to govern DKI Jakarta. This is indicated by the choice of the vocabulary Ya Allah and Bismillah ya Allah. The community wants the A-S pair to be trustworthy leaders. The two comments are a form of public support for the A-S candidate pair. Other vocabulary expressing religious ideology through the issue of worship includes amin YRA, inshallah, Allahuakbar, ya Allah ya Rabbi, Alhamdulillah, semoga amanah, and Amin Ya Allah. Furthermore, public comments in the muamalah category can be seen in the following data. (No need to worry: if candidate pair 2 is chosen, the program will later be continued by Anies-Sandi and supplemented with the program being planned. Jakarta needs leaders with character who bring justice and prosperity to its people.) Comments (14) and (15) relate to matters of law and leadership, marked by the vocabulary hukum Islam, menista Alquran dan ulama, pemimpin berkarakter, berkeadilan, dan menyejahterakan rakyat. The representation of religion is implied by the people's desire that Jakarta's leaders accord with Islamic teachings, that it be forbidden to choose leaders who have been found to insult the beliefs of Muslims, and that leaders have character, bring justice, and bring welfare to the people. This insinuates that the previous leader was a leader without character, without justice, and one who did not bring welfare to the people. Both comments use a pattern of lengthened (appositive) information as emphasis and affirmation of rejection of the A-Dj candidate and, conversely, of support for the A-S candidate. Other words reflecting religious ideology that raise muamalah issues include pemimpin muslim, hukum Islam, quran dan ulama, fatwa MUI, pemimpin berkarakter, solidaritas, pemimpin yang beriman, pemimpin yang santun, pemimpin yang bermoral, etc. Public comments revealing religious ideology through moral issues can be seen in the following data.
(Dirty stickers can be cleaned, but Ahok's dirty behavior must also be cleaned up.) Comments (16) and (17) are public comments related to moral issues that bring the perpetrator into the text in order to blame him, that is, to display his bad behavior. The commentary uses the vocabulary kotoran (najis), kelakuan, and kotor. The repetitive vocabulary pattern and the comparative repetition of sentences in the commentary text imply that the community dislikes A because, in their opinion, A is labeled as having no morals. Other words reflecting religious ideology through moral issues are sholat, kotoran, mati bunuh diri, akhlaknya bobrok, lobang pantat, sampahnya Ahk, kecurangan, nggak punya agama, mulut kotor, si Ahok tai, pecinta babi, culas, sangat tidak cerdas, penggusran, pedang bermata dua, di kafir yang bijak, etc. Secularism Public comments representing secular ideology use vocabulary containing prohibitions against bringing religion into the election, while at the same time cornering the candidate pair that does so. (18) Jadilah pemilih cerdas yang objektif dan rasional berlandaskan hati nurani (MPM: 29/01/17). (Be intelligent voters who are objective and rational, guided by conscience.) (19) Jgn pernah jadikn agama apapun buat topeng utk mencapai niat jdi penguasa,,krn membawamu mnjdi manusia arogansi,,serakah,,dan menjdiknmu lupa daratan,,,kehilangan akal (DRA: 07/02/17). (Never use any religion as a mask to achieve your ambition to become a ruler, because it turns you into an arrogant, greedy person, makes you forget yourself, and makes you lose your mind.) In comment (18), the vocabulary pemilih cerdas is aimed at the people of Jakarta with the intention of influencing them to be smart in electing regional heads, reinforced by the explanation that intelligent voters are objective and rational voters guided by conscience. This implies that such commenters have no problem with the religion of the prospective leader. Comment (19) is likewise addressed to people who inject religion into regional election issues, using the vocabulary agama and topeng. Both of these words are aimed at people who bring religion into matters of government, in the sense that religion is used as a tool to win a regional head election contest. Other words expressing secular ideology include mati kutu, cerdaslah, rakyat cerdas, objektif, kerja nyata, janji-janji palsu, Kebodohanya, mabuk jabatan, menjual agama, bukan coba-coba, memperalat agama, penghayal tingkat tinggi, licik dan culas, etc. Liberalism Public comments representing liberalism are reflected in vocabulary indicating the freedom to choose, or to dismiss, either candidate pair, including in the following comments. The representation of liberal ideology in datum (20) can be seen in the public's disapproval of both candidates, with the assessment that candidate An is only good at theorizing while candidate A refuses to be blamed. Likewise, the comment in datum (21) shows pessimism toward both candidates because they are considered unable to overcome the floods in Jakarta. In this case, the representation of liberalism in these comments lies in a freedom that does not care about, or ignores, the discourse battle in the election, on the grounds that the candidates do not meet the criteria the commenters desire. Other words expressing liberal ideology are siapapun, sesuka hatinya, bebas, kapeer, emang guwe pikirin.
Neutral Apart from the three ideological descriptions above, public comments were also found that are neutral, objective, and impartial toward each candidate, as in the following data. Based on these comments, it can be seen that there are still people who are rational and do not side with either candidate. They urge politicians not to coerce voters and to give them the freedom to make their own choices. In context, impartiality does not mean that these members of society intend not to vote, nor does it mean they are liberal; it means that they are not affected by the political issues developing at the time. This can be seen from the comments of the public who neither supported nor attacked either candidate. These people use logic and choose according to their own preferences, without coercion or influence from other parties. Other vocabulary expressing a neutral stance includes tidak usah menghujat, pilih sesuai hati nurani, terbaik, bersifat LUBER, etc. Discussion Online news portals and social media are arenas in which politicians and the public argue about the news on the DKI Regional Election. A statement expressed there is not just a statement; it also carries a specific purpose, because politicians are influenced by certain ideologies that underlie the way they think and behave. This is in line with Syam's (2010) view that ideology influences a person in speaking and acting. In linguistics, a person's ideological representation can be traced through critical discourse analysis theory. This has been demonstrated in research by Abdulsyani (2012), Faris & Paramasivam (2016), Ramanathan & Hoon (2016), and Shahsavar & Naderi (2015), which concluded that critical discourse analysis plays an important role in uncovering the hidden ideologies in a discourse. Based on the results of the data analysis, the statements of politicians about the 2017 Regional Election political discourse in online news turn out to be dominated by secular and religious ideological representations, whereas public commentary is dominated by religious and secular ideologies. This means that these two ideologies form the background of the 2017 DKI Jakarta Regional Election discourse battle. Representation of Religious Ideology The choice of vocabulary used by politicians and the public when producing texts (statements/comments) is not merely a technical issue; rather, vocabulary choice represents a certain ideology, as previous studies have shown. There is a purpose behind the language used, because language can never be separated from a certain ideology (Fairclough, 2003). Vocabulary choices are used to display or describe something in the commentary text (Eriyanto, 2009: 290). In these statements/comments, one can see how an ideology can influence how someone acts (Eriyanto, 2009; Syam, 2010, p. 239), including religious ideology. Religious ideology is a view that includes religion in matters relating to governance: religion determines, directs, and supervises the order of politics, economy, law, and society (Altwajri, 1997, p. 90; Salam, 1997). The religious ideology in this study is represented by raising the issue of faith, the issue of sharia, and the issue of morality. The dominant issue is morality, followed by the issue of sharia, with the issue of faith the least frequent. This means that politicians try to promote and support the candidate pairs they back through vocabulary battles related to morals.
In this context, the use of vocabulary about moral issues was highly consequential for the A-Dj candidate pair, because A was facing a religion-related case at the time. Furthermore, the use of vocabulary about the issues of sharia and faith also shaped the discourse battle among politicians seeking to win office for the candidate pairs they backed, because they targeted the dominant voter base. This vocabulary became a potent tool in the discourse battle because it tarnished and damaged A's career. It was aimed not only at convincing the public of a vision, mission, and program, but more at attacking political opponents with issues of faith, sharia, and morals as a representation of the religious ideology employed. Besides, the vocabulary also implies the goodness and strengths of Islam, with word choices that emphasize the strength of the A-S candidate's character. In this case, politicians brought religious issues into the choice of DKI Jakarta's leader in 2017. This is reinforced by Altwajri's (1997: 90) observation that, in reality, politicians do bring religion into political matters, especially through vocabulary (Eriyanto, 2009: 286-287). The findings of this study are more specific, but remain consistent with the results of previous studies stating that vocabulary choices can represent a certain ideology (Asghar, 2014; Faghih & Moghiti, 2017; Shahsavar & Naderi, 2015; Widyawari & Zulaeha, 2016). Representation of Secular Ideology Secularism is an understanding that separates religious matters from the system of government (Hurd, 2004; Tiwary, 2017). Based on the results of the data analysis, the statements and comments of politicians and/or political elites were found to represent secular ideologies; that is, these politicians and members of the public do not mix religious affairs with government affairs. In this study, secular ideology turns out to underlie many of the comments of politicians and the public regarding the DKI Jakarta Regional Election. This finding is consistent with previous studies showing that, in reality, many politicians do not include religious issues in matters related to leadership (Abdulsyani, 2012: 120; Altwajri, 1997: 178; Susanto, 2013: 41). For secular politicians, religion and government are two different things that cannot be merged. The representation of secularism is revealed through the vocabulary chosen. In critical discourse analysis, the ideology expressed in texts can be examined through vocabulary choices (Asghar, 2014; Faghih & Moghiti, 2017; Shahsavar & Naderi, 2015; Widyawari & Zulaeha, 2016). Vocabulary choice can also be seen in the metaphors used. According to Fairclough (in Eriyanto, 2009: 292), the choice of metaphor is key to how reality is presented and differentiated from other realities, because metaphor is not merely a matter of literary beauty: it can determine whether reality is interpreted as positive or negative. The vocabulary representing secular ideology in politicians' statements and public comments on the DKI Jakarta election news discourse is vocabulary related to the ideal government leader, such as honest, intelligent, creative, having a proven record, etc., without any connection to the religion the candidate follows. The secular vocabulary used by the community is a form of public support for the A-Dj candidate pair.
In this case, the choice of vocabulary used in the commentary text relates to how certain events, people, groups, or activities are categorized into a particular set (Eriyanto, 2009: 290). Vocabulary determines the meaning the writer wants to convey because it concerns how reality is signified in language and how that language brings forth a certain version of reality. Liberal and Neutral Ideology Representations Although not dominant, liberal ideology has also colored public comments about the news discourse on the DKI Jakarta Regional Election on social media. This means that, in addition to religious and secular understandings, parts of the public also adhere to liberalism when commenting on the news about the DKI Jakarta Regional Election. A liberal society aspires to be a free society, characterized by freedom of thought for individuals, and rejects restrictions, especially in matters of government and religion (Suryono, 2009: 33; Zarkasyi, 2011). The vocabulary representing liberal ideology in public comments about the DKI Jakarta election news discourse is vocabulary that is free and indifferent to the election's political events: for these commenters, the choice is theirs alone and will not be influenced by anyone. In contrast to the three ideologies above, neutral statements and comments are expressed through the vocabulary of impartiality and the use of common sense and conscience in selecting Jakarta's leaders. Although this objective view is ideal for making choices, it cannot be denied that political discourse is inseparable from contestation, competition, and even battle; political discourse is pragmatically competitive, even conflictive (Leech, 1993: 162). This is consistent with recent events in political contestation in Indonesia. 4. Conclusion Political comments about the Jakarta Regional Election in the mass media are motivated by three ideological representations, namely religionism, secularism, and neutrality. Public comments, on the other hand, are motivated by four ideologies, namely religionism, secularism, liberalism, and neutrality. The representation of religionism is generally embraced by politicians and people who side with the A-S candidate pair, carrying primordial issues through vocabulary about choosing a Jakarta leader who has Islamic faith, sharia, and morals. The representation of secularism, by contrast, is embraced by politicians and members of the public who side with the A-Dj candidate pair, holding that a fit and worthy leader is someone who is brave, intelligent, honest, and objective, and who has a record of performance, even if of a different religion. The representation of liberalism is embraced by people who want freedom and are indifferent to the political events surrounding the choice of leaders, while neutral comments are expressed through the vocabulary of impartiality and the use of common sense and conscience in selecting Jakarta's leaders. Of the four ideologies underlying the comments of politicians and society, only two are represented dominantly. This means that the political discourse battle of the 2017 DKI Jakarta Regional Election is a battle between religious ideology and secular ideology.
2020-12-03T09:05:36.595Z
2020-09-28T00:00:00.000
{ "year": 2020, "sha1": "d97d01230f74461396df0e7bb1576f9bc02bbe2e", "oa_license": null, "oa_url": "https://doi.org/10.37301/culingua.v1i1.5", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7087519fa2557e90cc12f7dbe1416a1d293d7067", "s2fieldsofstudy": [ "Political Science", "Sociology" ], "extfieldsofstudy": [ "Political Science" ] }
57721221
pes2o/s2orc
v3-fos-license
Sum-of-square-of-rational-function based representations of positive semidefinite polynomial matrices The paper proves sum-of-square-of-rational-function based representations (shortly, sosrf-based representations) of polynomial matrices that are positive semidefinite on some special sets: $\mathbb{R}^n;$ $\mathbb{R}$ and its intervals $[a,b]$, $[0,\infty)$; and the strips $[a,b] \times \mathbb{R} \subset \mathbb{R}^2.$ A method for numerically computing such representations is also presented. The methodology is divided into two stages: (S1) diagonalizing the initial polynomial matrix based on the Schm\"{u}dgen's procedure \cite{Schmudgen09}; (S2) for each diagonal element of the resulting matrix, find its low rank sosrf-representation satisfying Artin's theorem solving Hilbert's 17th problem. Some numerical tests and illustrations with \textsf{OCTAVE} are also presented for each type of polynomial matrices.

Introduction and preliminary

Nonnegative polynomials, i.e., polynomials with scalar coefficients taking nonnegative values, appear in a variety of mathematical problems and applications. The problem of approximating such polynomials by sums of squares of polynomials has been studied deeply from both the theoretical and the practical point of view; see, e.g., [13] and the references therein. In our opinion, this idea can be generalized to matrix polynomials, i.e., polynomials with scalar-matrix coefficients. A matrix polynomial can also be written as a polynomial matrix, i.e., a matrix with polynomial entries. However, the terminology "nonnegative" for scalar polynomials must be generalized to "positive semidefinite", as is usual for (scalar or polynomial) matrices. For this reason, we prefer "polynomial matrices" to "matrix polynomials" in this paper. There has been a variety of research focusing on the generalization of Positivstellensätze to psd polynomial matrices, i.e., the case $m \ge 2$, and its applications (see, e.g., [8] and the references therein). However, from the practical point of view, most applications lead to problems over univariate or bivariate polynomial matrices. This motivates us to focus on these two types of polynomial matrices.

Let $\mathbb{R}[x]$ be the ring of polynomials in $n$ real variables with real coefficients. Let $\mathbb{R}^{m\times m}$ and $S^m\mathbb{R}$ denote the set of all $m \times m$ matrices with real entries and its subset of symmetric matrices, respectively. By $\cdot^T$ we denote the transpose of matrices. For $A \in S^m\mathbb{R}$, by $A \succeq 0$ we mean that $A$ is positive semidefinite, i.e., $u^TAu \ge 0$ for all $u \in \mathbb{R}^m$. We denote by $S^m_+\mathbb{R}$ the set of all positive semidefinite matrices in $S^m\mathbb{R}$. Moreover, for two matrices $A$ and $B$ in $S^m\mathbb{R}$, we write $A \succeq B$ if $A - B \in S^m_+\mathbb{R}$. For a subset $D \subseteq \mathbb{R}^n$ and a polynomial matrix $F \in S^m\mathbb{R}[x]$, we say that $F$ is positive semidefinite on $D$ if $F(x) \succeq 0$ for every $x \in D$. We will say "psd" instead of "positive semidefinite". A psd polynomial matrix on $D = \mathbb{R}^n$ is sometimes called a globally psd one. Throughout this paper, we fix

• $m$, $n$ as above, i.e., $m$ is the size of a polynomial matrix and $n$ is the number of variables in the matrix;

• $d$: the maximum of the degrees of the $m^2$ polynomials in the matrix.

A symmetric polynomial matrix $F$ is called a "sum of squares of polynomial matrices" (resp., a sum of squares of rational-function matrices¹) if it is a finite sum of the form
$$F = \sum_{i=1}^{r} A_i^TA_i,$$
where the matrices $A_i$ have polynomial entries (resp., rational-function entries) defined on the whole space $\mathbb{R}^n$. We shortly call a sum of squares matrix a sos-matrix and the other a sosrf-matrix.

¹ A rational function is a ratio of polynomials.
It is clear that if $b^2F$, $b \in \mathbb{R}[x]$, is sos, say $b^2F = \sum_{i=1}^{r} A_i^TA_i$, then $F$ is sosrf, since
$$F = \sum_{i=1}^{r} \left(\tfrac{A_i}{b}\right)^T \left(\tfrac{A_i}{b}\right).$$
It can be seen that any sos-matrix or sosrf-matrix is psd on $\mathbb{R}^n$. The converse is always true for sosrf-matrices, by Proposition 4, but not for sos-matrices. This issue is easy to observe already for scalar polynomials, a famous counterexample being the Motzkin polynomial $f_M(x, y) = 1 + x^2y^4 + x^4y^2 - 3x^2y^2$. When $m = 1$, the polynomial matrix concept here coincides with the usual scalar polynomial, and "psd polynomial matrices" then means "nonnegative polynomials" as well. There are a number of well-known Positivstellensätze for nonnegative/positive scalar polynomials [7, 22], leading to representations of scalar polynomials nonnegative on several subsets of $\mathbb{R}$ in terms of sos-representations of other polynomials. For example, it is well known in the literature (see also, e.g., [2]) that any univariate real polynomial nonnegative on $\mathbb{R}$ can be expressed as a sum of two squares of real polynomials. This is called a sos-representation of the initial polynomial. And a polynomial $f$ nonnegative on $[0, \infty)$ can be written as
$$f(x) = p(x)^2 + x\,q(x)^2$$
for some real polynomials $p, q$. We will call the latter representation a "sos-based representation" of $f$. In [9], the authors propose an algorithm for decomposing a polynomial nonnegative on $\mathbb{R}^n$ as a sum of squares of rational functions, relying on the idea of Reznick [18] that the common denominator can be taken to be a power of the sum of squares of the coordinate functions.

In this paper, we deal with the problem of finding a sos-based and/or sosrf-based representation of a polynomial matrix which is positive semidefinite over one of the sets $\mathbb{R}^n$, $[a, b]$, $[0, +\infty)$, and strips in $\mathbb{R}^2$. The idea is to combine Schmüdgen's procedure for diagonalizing polynomial matrices with the Levenberg-Marquardt algorithm [15] for finding sosrf-representations of the diagonal polynomials of the resulting diagonal matrices. Our method for finding sosrf-based representations of psd polynomial matrices is hence divided into two stages:

(S1) diagonalizing the initial polynomial matrix $F$, as suggested by Proposition 1 below, so that (3) holds true. Note that the initial polynomial matrix is positive semidefinite if and only if the resulting diagonal polynomials are all nonnegative;

(S2) finding low-rank sos-representations of the resulting diagonal polynomials by applying the algorithm proposed in [9].

The representation of the initial polynomial matrix is then obtained by substituting into the relation from the first stage. As mentioned earlier, the second stage amounts to finding sosrf-representations of scalar polynomials. It is well known that any sum of squares polynomial can be determined by its Gram matrix. With the help of Artin's theorem answering Hilbert's 17th problem, one notes that
$$b^2f(x) = \pi(x)^TG\,\pi(x),$$
where $e = \binom{n + \deg(b^2f)}{n}$, $\pi(x)$ is the vector of the monomials $x^\alpha := x_1^{\alpha_1}\cdots x_n^{\alpha_n}$ of degree not exceeding $\deg(b^2f)$, and $G \in S^e\mathbb{R}$ is real symmetric and positive semidefinite; it is called a Gram matrix of $b^2f$. By comparing coefficients, we see that each coefficient on the left side depends linearly on the entries of $G$. In addition, a Gram matrix of a sos-polynomial can be chosen to have low rank. This suggests that we propose and solve the matrix rank minimization problem
$$\min\ \operatorname{rank}(X) \quad \text{s.t.}\quad \varphi(X) = u,\ X \in \mathbb{R}^{m\times p}, \tag{1}$$
where $\varphi : \mathbb{R}^{m\times p} \to \mathbb{R}^l$ is a differentiable map and $u \in \mathbb{R}^l$ is given, which can be applied in our second stage. The rank function is non-convex, even though the feasible region is smooth.
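To make the Gram-matrix correspondence concrete, here is a minimal NumPy sketch (an illustration only, with a made-up toy polynomial; it is not the paper's OCTAVE code) that certifies $f(x) = x^4 + 2x^2 + 1$ is a sum of squares via a rank-one Gram matrix and recovers the squares from its eigendecomposition:

import numpy as np

# Toy example: f(x) = x^4 + 2x^2 + 1 with monomial vector pi(x) = [1, x, x^2]
# and a Gram matrix G such that f = pi^T G pi.
G = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

lam, V = np.linalg.eigh(G)                       # eigendecomposition of the psd Gram matrix
print("rank of G:", int(np.sum(lam > 1e-10)))    # -> 1, i.e. f is a single square

# Recover the squares: f(x) = sum_i lam_i * (v_i . pi(x))^2
x = 0.7
pi = np.array([1.0, x, x**2])
f_val = sum(l * (v @ pi)**2 for l, v in zip(lam, V.T) if l > 1e-10)
print(np.isclose(f_val, x**4 + 2*x**2 + 1))      # True

Here the rank of the Gram matrix equals the number of squares needed, which is exactly the quantity the rank minimization model (1) tries to keep small.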
The idea of this stage is to check, step by step for $r = 1, 2, \ldots$, whether the problem (1) has a numerical solution of rank $r$. At each step, with respect to each $r$, the Levenberg-Marquardt method (for short, LM-method) [15] for solving least squares problems is applied to the residual function derived from the equation $\varphi(X) = u$. The differentiability of $\varphi$ guarantees the existence of the Jacobian matrix needed in the Levenberg-Marquardt iterations. The problem of finding a sos-representation of a scalar sos-polynomial is thus equivalent to that of solving a linear system for a low-rank Gram matrix of the polynomial. It is therefore reasonable to apply the model (1) with a suitable linear map $\varphi$. Furthermore, any Gram matrix of a polynomial must be symmetric, and the existence of a Gram matrix is equivalent to the existence of its Cholesky factor, both having the same rank. In this sense, $\varphi$ will be a quadratic function in the Cholesky factor. Our second stage, as seen in Section 3, deals with finding such Cholesky factors. The polynomial's Gram matrix is then derived, and so is the representation of the polynomial. This is done by applying the LM-method. The LM-method has been generalized to the complex setting in [20, 21]; this generalization is called the generalized LM-method, or gLM-method. It solves least squares problems with real-valued functions in complex variables. The corresponding Matlab toolbox is called COT [21]. The authors there convert the complex setting to the real one by using one-to-one linear transformations that map any complex $n$-tuple $z \in \mathbb{C}^n$ to $(\operatorname{Re}(z), \operatorname{Im}(z)) \in \mathbb{R}^{2n}$, and/or to $(z, \bar z)$, with the help of the Wirtinger calculus and complex Taylor series expansions. Although the gLM-method aims at solving least squares problems with real-valued functions in complex variables, it is also valid for real-valued functions in real variables. In our work we recode the gLM-method in OCTAVE, which is more convenient for the readers. Unlike the "lsqnonlin" function in Matlab, the OCTAVE codes, like those of COT, additionally need the Jacobian/gradient of the function as an input. We hence give detailed formulas for the Jacobian matrices in the particular cases that the gLM-method needs. The numerical tests in this paper are all implemented in GNU OCTAVE 4.4.2 under Ubuntu 14.04, on a desktop with an Intel(R) Core(TM) i3-3220 CPU@3.30GHz and 4GB RAM.

This paper is organized as follows. Section 2 gives a clear explanation of the Schmüdgen diagonalization, which is considered the first stage of our method, and presents the corresponding algorithm; some numerical tests are also illustrated. The second stage is based on a matrix rank minimization model treated in Section 3. After proposing an algorithm solving such a model, we apply it to our second stage and give some numerical examples as well. In Section 4 we prove sosrf-based representations for polynomial matrices that are psd on the sets $\mathbb{R}^n$ ($n \ge 1$), some intervals in $\mathbb{R}$, and some strips in $\mathbb{R}^2$. The conclusion of the paper is given in the last section.

2 Diagonalization of polynomial matrices

2.1 Schmüdgen's procedure

We start this section by recalling a very important matrix decomposition by Schmüdgen [19], which is a key ingredient for numerically computing sums of Hermitian squares of polynomial matrices below.
For a symmetric matrix $F$ with entries in a commutative unitary ring $R$, partitioned as
$$F = \begin{pmatrix} \alpha & \beta^T \\ \beta & C \end{pmatrix},$$
set
$$X_{\pm}(F) = \begin{pmatrix} \alpha & 0 \\ \pm\beta & \alpha I \end{pmatrix}, \qquad B(F) = \alpha C - \beta\beta^T, \tag{2}$$
where $\alpha$, $\beta$ and $C$ are the indicated parts of $F$. Then the following relations hold:
$$X_-(F)\,F\,X_-(F)^T = \begin{pmatrix} \alpha^3 & 0 \\ 0 & \alpha B(F) \end{pmatrix}, \qquad X_+(F)X_-(F) = X_-(F)X_+(F) = \alpha^2 I.$$

Schmüdgen's procedure (2) is applied to diagonalize a polynomial matrix as in the following proposition, viewed as an extension of Artin's theorem solving Hilbert's 17th problem.

Proposition 1. For every $F \in S^m\mathbb{R}[x]$ there exist a nonzero polynomial $b \in \mathbb{R}[x]$, polynomial matrices $X_+, X_- \in \mathbb{R}[x]^{m\times m}$ and a diagonal polynomial matrix $D$ such that
$$D = X_-FX_-^T, \qquad X_+X_- = X_-X_+ = b\,I_m, \tag{3}$$
and consequently
$$b^2F = X_+DX_+^T. \tag{4}$$

Algorithmic implementation

Even though Schmüdgen's proof (of Proposition 1) suggests an algorithm to find $b$, $X_\pm$ and $D$, it was not numerically clarified in his seminal paper. In this section, we explain in more detail how Schmüdgen's procedure (2) is applied to obtain $b$, $X_\pm$ and $D$ as in Proposition 1, step by step, so that one can implement the algorithm in any scientific programming language. Given a polynomial matrix $F \in S^m\mathbb{R}[t]$, partitioned as in (2) with $\alpha_0 := f_{11}$, our algorithm below starts by applying procedure (2) to $F = [f_{ij}]$ itself, which produces $X_\pm(F)$, the pivot $\alpha_0$ and the block $B_0 := B(F)$. The next iteration will apply procedure (2) to the matrix $B_0$. We would emphasize that the matrices with subscripts, such as $X_{i\pm}$, have sizes that change through the different iterations, while the quantities indexed by superscripts, such as $b^{(i)}$ and $X^{(i)}_\pm$, must have the same size as $F$ (or be scalar) at every iteration. The algorithm proceeds by induction (on $m$) until the remaining block is of size $1 \times 1$ or zero.

From the computational point of view, the matrix $D$ should be treated as its diagonal vector. We thus only need to compute its main diagonal element by element. In other words, at iteration $i$, for instance, we only need to compute $d_i$ (which exactly equals $\alpha_i^3$) and update $d_0, d_1, \ldots, d_{i-1}$, and we need not pay attention to $d_r$ for $i < r \le m$. The data at iteration $i$ ($i \ge 2$) is updated from that at iteration $i - 1$. More precisely, suppose we are given $b^{(i-1)}$, $X^{(i-1)}_\pm$, $D^{(i-1)}$ and the block $B_{i-1}$ satisfying the relation (3). Applying procedure (2) to $B_{i-1}$ then yields $\alpha_i$, $X_{i\pm} := X_\pm(B_{i-1})$ and $B_i := B(B_{i-1})$, and one updates $b^{(i)}$, $X^{(i)}_\pm$ and $D^{(i)}$, where the $d_k$'s are determined as in Proposition 2 below.

To unify the notation and for the reader's convenience, we spell out the first two iterations. Iteration 0 applies procedure (2) to $F$ itself; it is clear that (3) holds for the resulting data. Iteration 1 applies procedure (2) to $B_0$, and the resulting quantities again satisfy relation (3); in particular we obtain $b^{(1)} = \alpha_1^2\alpha_0^2$, and $X^{(1)}_\pm$ and $D^{(1)}$ also satisfy the relation (3). The diagonal of $D^{(1)}$, as mentioned earlier, needs to be updated only in its leading entries. These two early iterations exhibit the numerical formulas for computing $b$, $X_\pm$ and $D$ in the subsequent iterations so that they satisfy (3). The mathematical explanation in more detail is given in the following proposition.

Proposition 2. With the notation above, suppose the data at iteration $i - 1$ satisfies the relation (3). Then for all $i = 2, \ldots, m - 1$, the data $b^{(i)}$, $X^{(i)}_\pm$ and $D^{(i)}$ obtained from the update rules above again satisfies the relation described in (3).

Proof. By the discussion of Iteration 1, the proposition is true for $i = 1$. Now suppose that the proposition is true for some $i \ge 1$. Applying procedure (2) to $B_i$ together with the inductive hypothesis yields the form of $D^{(i+1)}$ and $X^{(i+1)}_\pm$, and the relation (3) for the new data follows by a direct computation; we are done.

Remark 1. We would note that $D^{(i)}$ in Proposition 2 can also be expressed via $D^{(i-1)}$, by updating the leading diagonal entries and replacing the trailing block.

Proposition 2 gives us formulas updating the outputs $b$, $X_\pm$ and $D$ in each step of the algorithm. The algorithm supposes that $\alpha_i \ne 0$ at every iteration $i$. Otherwise, according to [19], one can find an orthogonal matrix $T$ such that the $(1,1)$-th entry of $TF_iT^T$ is nonzero.
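The one-step identity behind procedure (2) can be checked symbolically. The following Python/SymPy sketch (a toy verification under the conventions written above, not the paper's OCTAVE implementation) confirms the relations for a small 2 x 2 example:

import sympy as sp

t = sp.symbols('t')
# A 2x2 symmetric polynomial matrix and one step of procedure (2).
F = sp.Matrix([[t**2 + 1, t],
               [t,        t**2]])
alpha = F[0, 0]
beta  = F[1, 0]
C     = F[1, 1]

Xp = sp.Matrix([[alpha, 0], [ beta, alpha]])        # X_+(F)
Xm = sp.Matrix([[alpha, 0], [-beta, alpha]])        # X_-(F)
B  = alpha*C - beta**2                               # B(F) = alpha*C - beta*beta^T

D1 = sp.diag(alpha**3, alpha*B)
print(sp.simplify(Xm * F * Xm.T - D1))               # zero matrix: X_- F X_-^T = diag(a^3, a*B)
print(sp.simplify(Xp * Xm - alpha**2 * sp.eye(2)))   # zero matrix: X_+ X_- = a^2 I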
A clearer formula can also be found in [4, Proof of Proposition 5].

Step 4 of Algorithm 1 (the stopping test) reads: if $B_i \in S^1\mathbb{R}[t]$ or $B_i$ is zero, then stop; else set $i = i + 1$ and go to Step 1.

In case Algorithm 1 stops at iteration $i$ with $0 \ne B_i \in S^1\mathbb{R}[t]$, it defines the $(i+1)$-th diagonal element of $D^{(i+1)}$ as $B_i$; in this case we denote this element by $\alpha_{i+1}$. It is clear that $F$ is positive semidefinite on $\mathbb{R}^n$ if and only if the diagonal elements $d_j$ are all nonnegative on $\mathbb{R}^n$. From Algorithm 1, the $d_j$'s can be replaced by simpler polynomials, namely the pivots $\alpha_j$ (Corollary 3).

Another corollary of Algorithm 1 is as follows. Cimprič [4, Proposition 5], see also [8], proves that for a set $\mathcal{F}$ of polynomial matrices in $S^m\mathbb{R}[x]$, one can find a set of scalar polynomials describing the same positivity set. Even though his proof is suggested by Schmüdgen's procedure, it is only inductive on matrices of size smaller than $m$, so that the resulting set of scalar polynomials is not made explicit. With the help of Corollary 3, such a set of scalar polynomials can be taken to be the pivots produced by Algorithm 1, i.e.,
$$\{x \in \mathbb{R}^n \mid F(x) \succeq 0,\ \forall F \in \mathcal{F}\} = \{x \in \mathbb{R}^n \mid \alpha_j(x) \ge 0 \text{ for all pivots } \alpha_j\}. \tag{15}$$

Numerical illustrations

In OCTAVE, a polynomial is stored as a vector of its coefficients. For example, the polynomial $f(t) = t^2$ is saved as the vector f = [0 0 1]. The "conv" syntax for vectors is used to compute the multiplication of polynomials. A polynomial matrix is saved as a cell, and matrix multiplication is done as cell multiplication. We next give a particular example that is also returned to later.

Univariate polynomials

Example 1. We start with an example of a $2\times 2$ univariate polynomial matrix $F$. Applying the algorithm, we obtain the data $b$, $X_\pm$ and $D$. In addition, it is not hard to see that $F$ is positive semidefinite on $\mathbb{R}$ and
$$\{t \in \mathbb{R} \mid F(t) \succeq 0\} = \{t \in \mathbb{R} \mid \alpha_0(t) = t^2 \ge 0\} = \mathbb{R},$$
which illustrates (15).

Multivariate polynomials

Here $\Omega$ (resp., $\Gamma$) denotes the set of exponents $\alpha$ of the monomials of degree at most $d$ (resp., $2d$). The cardinalities of $\Omega$ and $\Gamma$ are well known and are given by $|\Omega| = \binom{n+d}{n}$ and $|\Gamma| = \binom{n+2d}{n}$, respectively. Our experiments in this situation adopt the lexicographic order of monomials; a polynomial's coefficient vector is expressed with respect to the "lex" order on $\Omega$. In particular, the Motzkin polynomial mentioned earlier is written as $f_M(x, y) = 1 - 3x^2y^2 + x^2y^4 + x^4y^2$, and its coefficient vector is stored in OCTAVE accordingly. It is worth mentioning that the lengths of the coefficient vectors of multivariate polynomials increase rapidly as $n$ and $d$ grow large. This makes the size of the present problem grow rapidly as well.

Algorithm

As mentioned earlier, the problem (1) is solved by searching for a matrix $X \in \mathbb{R}^{m\times p}$, step by step for $\operatorname{rank}(X) = 1, 2, \ldots, \min\{m, p\}$, such that $\varphi(X) = u$. The search for $X$ relies on the gLM-method [20, 21]. The Levenberg-Marquardt method [15] is a famous method used to solve least squares problems; for the convenience of the readers, we summarize it as follows. A "real data" least squares problem minimizes a real function
$$f(x) = \|F(x)\|_2^2, \qquad F(x) = [F_1(x), \ldots, F_l(x)]^T \in \mathbb{R}^l,\ \forall x \in \mathbb{R}^n,$$
where the $F_j$'s are continuously differentiable. According to [15], starting with an initial point $x_0 \in \mathbb{R}^n$, the method finds a sequence of points $\{x_k\} \subset \mathbb{R}^n$ that converges to a minimizer of $f$. At each step $k$, the next point $x_{k+1}$ is determined by applying the first-order approximation of $f$ over an appropriately chosen closed "hyperellipsoid" with center $x_k$. More precisely, one first approximates $f(x_k + p)$ by the first-order model built from $F(x_k + p) \approx F(x_k) + J(x_k)p$. Then one minimizes the approximating function to obtain a descent direction $p^*$ (a minimizer of the approximating function).
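For the lex-ordered storage just described, the exponent tuples of the monomial basis can be generated in a few lines. This hypothetical Python sketch (independent of the OCTAVE implementation) reproduces, for n = 2 and d = 2, the order 00, 01, 02, 10, 11, 20 used for coefficient vectors later on:

from itertools import product

def lex_monomials(n, d):
    # Exponent tuples of all n-variable monomials of degree <= d, in lex order.
    return [a for a in product(range(d + 1), repeat=n) if sum(a) <= d]

print(lex_monomials(2, 2))
# [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (2, 0)]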
Either the step bound $\Delta_k$ or the scaling parameters $d^{(k)}_i$ will be updated after each step $k$, depending upon the difference between $f(x_k + p^*)$ and $f(x_k)$. The next point $x_{k+1}$ is then chosen to be $x_k + p^*$ or kept equal to $x_k$. Roughly speaking, this method combines the advantages of two well-known optimization methods: the gradient-descent and Gauss-Newton methods. This setting has already been implemented in several technology environments, for example in Matlab by calling the "lsqnonlin" or "fsolve" function. In this situation, we consider the least squares problem with respect to the function $F : \mathbb{R}^{m\times p} \to \mathbb{R}^l$, for an appropriate integer $p$, whose coordinate functions are defined as
$$F_k(X) := \varphi_k(X) - u_k, \qquad k = 1, \ldots, l. \tag{19}$$
As mentioned in (18), the Jacobian matrix of the input function $F$ is needed:
$$J(X) = \left[\frac{\partial F_k}{\partial x_{ij}}(X)\right],$$
where $\frac{\partial F_k}{\partial x_{ij}}$ denotes the partial derivative of $F_k$ with respect to the entry $x_{ij}$ of $X$. Two tolerances (small enough) are needed at each Levenberg-Marquardt iteration:

• the step tolerance $\tilde\tau$: at the $k$-th LM step, we accept that a numerical solution $X^{[k]}$ to the system (19) exists if
$$\|X^{[k]} - X^{[k-1]}\| \le \tilde\tau; \tag{20}$$

• the residual tolerance $\tau$: we accept that the least squares problem has a numerical solution if $\|F(X)\|_2 < \tau$.

Step 2 of the resulting algorithm reads: solve the system $\varphi(X) = u$ by applying the gLM-method to the function defined in (19); let $X^{[k]}$ be a numerical solution determined by (20) with respect to the tolerance $\tilde\tau$ at some LM iteration $k$, and compute $F(X^{[k]})$.

Reformulation

As mentioned earlier, the coefficients of a sum of squares scalar polynomial depend linearly on the entries of its Gram matrix. Finding a low-rank Gram matrix of a sos scalar polynomial hence leads to an affine rank minimization problem over positive semidefinite matrices:
$$\min\ \operatorname{rank}(X) \quad\text{s.t.}\quad \ell(X) = b,\ X \in S^m_+\mathbb{R}, \tag{21}$$
where $\ell : S^m\mathbb{R} \to \mathbb{R}^l$ is a linear map and $b \in \mathbb{R}^l$ is given. It is clear that the above problem is not in the form (1). It is well known that any psd matrix $X$ can be written via a Cholesky factor $Y \in \mathbb{R}^{m\times r}$: $X = YY^T$. Therefore, by expressing the linear map $\ell$ via $l$ matrices $A_i \in S^m\mathbb{R}$,
$$\ell(X) = \big(\operatorname{Tr}(A_1X), \ldots, \operatorname{Tr}(A_lX)\big),$$
we can define an objective function to which the LM-method [15, 20, 21] applies, with coordinate functions
$$F_k(Y) = \operatorname{Tr}(A_kYY^T) - b_k, \qquad k = 1, \ldots, l.$$
Moreover, if $X \in S^m_+\mathbb{R}$ is required to be of rank $r$, then so is $Y \in \mathbb{R}^{m\times r}$. The problem (21) can thus be cast as
$$\min\ \operatorname{rank}(Y) \quad\text{s.t.}\quad F(Y) = 0,\ Y \in \mathbb{R}^{m\times r}, \tag{22}$$
which has the form (1). In this situation, the Jacobian matrix of $F$ can be computed directly: its $k$-th row is
$$J_k(Y) = 2\operatorname{vec}(A_kY)^T.$$
Here "$\operatorname{vec}(X)$" means the column vector built by stacking the columns of $X$, as usual.

Numerical tests

It is very well known, from the theoretical point of view, that the rank matrix minimization problem, say the RM-problem, is NP-hard. One reason might be the non-convexity of the rank function, so that no method in the literature solves it directly. A good way of solving the RM-problem over positive semidefinite matrices is to relax the rank function to the nuclear norm (see, e.g., [17, 14]). The resulting nuclear norm minimization (NNM) problem is then a semidefinite program [23] and can be solved efficiently by SDP solvers. Very recently, Huang and Wolkowicz [6] have proposed a method that combines facial reduction and the low-rank structure of a semidefinite embedding to solve a matrix completion problem. Another method for solving the NNM-problem over positive semidefinite matrices was proposed in [11], using a modified fixed point continuation method. Unfortunately, the resulting matrices given by SDP solvers are usually of full rank. The experiments in this section are also implemented in OCTAVE, where some codes are translated from those of COT [21] in Matlab. The input matrices $A_1, \ldots, A_l$ and the vector $b$ have entries randomly chosen in $(0, 1)$.
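To convey the flavor of this rank-by-rank search, here is a hypothetical SciPy sketch with made-up random data (it stands in for the paper's OCTAVE gLM implementation; being nonconvex, such a solve may need a few random restarts in practice):

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
m, l = 5, 30
# Random symmetric matrices A_k defining ell(X) = (Tr(A_k X))_k, and a
# right-hand side b generated from a rank-one psd matrix, so rank 1 is feasible.
A = rng.random((l, m, m)); A = (A + A.transpose(0, 2, 1)) / 2
y_true = rng.random((m, 1))
b = np.array([np.trace(Ak @ (y_true @ y_true.T)) for Ak in A])

def residual(yvec, r):
    Y = yvec.reshape(m, r)                     # Cholesky-type factor, X = Y Y^T
    return np.array([np.trace(Ak @ (Y @ Y.T)) for Ak in A]) - b

tau = 1e-8
for r in range(1, m + 1):                      # increase the target rank step by step
    sol = least_squares(residual, rng.standard_normal(m * r), args=(r,), method='lm')
    if np.linalg.norm(sol.fun) < tau:
        print("numerical solution of rank", r)
        break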
Below we see that the resulting matrices given by the gLM-method are of low rank.

4 Sosrf-based representations

The following result is a direct corollary of Proposition 1; its proof was presented in [19], but we rewrite it here to make our proofs below easier to follow.

Proposition 4. For $F \in S^m\mathbb{R}[x]$ the following are equivalent: (i) $F \succeq 0$ on $\mathbb{R}^n$; (ii) $b^2F$ is a sum of Hermitian squares for some polynomial $b$, that is,
$$b^2F = \sum_{i=1}^{r} A_i^TA_i,$$
where the $A_i$ are polynomial matrices.

Proof. It is not hard to see that $F \succeq 0$ on $\mathbb{R}^n$ if $p^2F = \sum_{i=1}^{r} A_i^TA_i$. Conversely, we consider two cases for $F$. (a) If $F$ is diagonal and $F = D := \operatorname{diag}(d_1, \ldots, d_m) \succeq 0$ on $\mathbb{R}^n$, then, denoting by $e_i$ the $i$-th unit vector in $\mathbb{R}^m$, each $d_i = e_i^TDe_i$ is nonnegative on $\mathbb{R}^n$ for every $i = 1, \ldots, m$. It thus follows from Artin's theorem solving Hilbert's 17th problem that
$$p_i^2\,d_i = \sum_j g_{ij}^2$$
for some polynomials $p_i, g_{ij} \in \mathbb{R}[x]$. Setting $p = p_1\cdots p_m$ and $h_{ij} = g_{ij}\prod_{k\ne i}p_k$, one sees that $p^2D$ is a sum of Hermitian squares built from the $h_{ij}$. (b) If $F$ is not diagonal, then from (4), $b^2F = X_+DX_+^T$, and we are done by applying the representation of $D$ above.

The following result shows that the number of sosrf-terms in Proposition 4 can be restricted to two for polynomial matrices defined on $\mathbb{R}$: such an $F$ is psd on $\mathbb{R}$ if and only if $b^2F = A_1^TA_1 + A_2^TA_2$ for some polynomial $b$ and polynomial matrices $A_1, A_2$.

Proof. We first prove the proposition for diagonal $F = D = \operatorname{diag}(d_1, \ldots, d_m)$, where the $d_i$'s are nonnegative polynomials. One notes that each univariate polynomial which is nonnegative on the real line can be expressed as a sum of at most two squares, $d_i = p_i^2 + q_i^2$. Setting $A_1 = \operatorname{diag}(p_1, \ldots, p_m)$ and $A_2 = \operatorname{diag}(q_1, \ldots, q_m)$, we then have $D = A_1^TA_1 + A_2^TA_2$. The opposite direction is not hard to obtain by directly computing that such a sum of two squares of diagonal matrices is positive semidefinite. For an arbitrary $F$ positive semidefinite on $\mathbb{R}$, it follows from Proposition 1 that $b^2F = X_+DX_+^T$, and the diagonal case applies. Conversely, if $b^2F$ is such a sum, then $y^T(b^2F)(x)y \ge 0$ for all $y \in \mathbb{R}^m$; this yields $y^TF(x)y \ge 0$ for all $y \in \mathbb{R}^m$ and hence $F(x) \succeq 0$ for all $x \in \mathbb{R}$.

On the interval $[0, +\infty)$ we have the following: $F$ is psd on $[0, +\infty)$ if and only if $b^2F = A_1^TA_1 + x\,A_2^TA_2$ for some polynomial $b$ and polynomial matrices $A_1, A_2$.

Proof. Analogously to the proof of the previous proposition, we first treat diagonal matrices. Suppose $D = \operatorname{diag}(d_1, \ldots, d_m) \succeq 0$, that is, $d_i(x) \ge 0$ for all $x \in [0, +\infty)$. It is provided by the work in [2] that
$$d_i(x) = p_i(x)^2 + x\,q_i(x)^2$$
for some polynomials $p_i, q_i$. The "only if" part then follows by assembling the $p_i$ and $q_i$ into diagonal matrices; the converse is easy to see by directly computing the product of diagonal matrices. We now consider an arbitrary $F \succeq 0$ on $[0, +\infty)$. Proposition 1 leads us to the decomposition $b^2F = X_+DX_+^T$, with $X_-, X_+ \in \mathbb{R}[x]^{m\times m}$ and $D$ diagonal; using the representation of $D$ above, we obtain the desired form. For the converse direction, take $x \ge 0$ arbitrary; then for every $y \in \mathbb{R}^m$ one has $y^T(b^2F)(x)y \ge 0$. This implies $y^TF(x)y \ge 0$ for all $y \in \mathbb{R}^m$, and thus $F \succeq 0$ on $[0, +\infty)$.

Similarly to the two previous cases, the analogous result holds on $[a, b]$; its proof uses the same techniques as the propositions above.

Proof. It is sufficient to prove this for a diagonal matrix $F$. It is well known (see, e.g., [2, 10]) that a scalar polynomial $p$ of degree $d$ is nonnegative on $[a, b]$ if and only if there are two polynomials $p_1, p_2$ such that
$$p = p_1^2 + (x-a)(b-x)\,p_2^2 \quad\text{if } d = 2d_1, \qquad p = (x-a)\,p_1^2 + (b-x)\,p_2^2 \quad\text{if } d = 2d_1 + 1.$$
Decompose $D = D_1 + D_2$, where the diagonal entries of $D_1$ (resp., $D_2$) are those entries of $D$ of odd (resp., even) degree, placed in the same positions as in $D$, and are zero otherwise. Then, applying the above representation of polynomials nonnegative on $[a, b]$ to the diagonal entries of $D_1$ and $D_2$, one can find $A_1, A_2$ with respect to $D_1$ and $A_3, A_4$ with respect to $D_2$. The converse part is derived analogously to the two previous cases.

Proposition 9 treats the strip $[0,1] \times \mathbb{R}$. Proof. Suppose $D = \operatorname{diag}(d_1, \ldots, d_m) \succeq 0$ on the strip $[0, 1] \times \mathbb{R}$. One can find in [12] that
$$d_i = \sigma_i + x_1(1 - x_1)\,\tau_i,$$
where $\sigma_i, \tau_i \in \mathbb{R}[x_1, x_2]$ are sums of squares:
$$\sigma_i = \sum_t g_{it}^2, \qquad \tau_i = \sum_t h_{it}^2.$$
We then take $A_t = \operatorname{diag}(g_{1t}, \ldots, g_{mt})$ and $B_t = \operatorname{diag}(h_{1t}, \ldots, h_{mt})$, so that the desired representation of $D$ is obtained. The converse direction is done in a similar way to the previous proofs.

The analogous statement holds on the strips $[a, b] \times \mathbb{R}$. Proof. This proposition is a consequence of Propositions 1 and 9.
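The scalar ingredient used in these proofs, namely writing a univariate polynomial nonnegative on $\mathbb{R}$ as a sum of two squares, can be computed directly from the complex roots. The following NumPy sketch (a toy illustration for a polynomial without real roots; real roots, which have even multiplicity, would have to be factored out first) does this for $f(t) = t^4 + 1$:

import numpy as np

# f(t) = t^4 + 1 >= 0 on R; split its complex roots into conjugate pairs so
# that f = |q|^2 = Re(q)^2 + Im(q)^2 for a polynomial q with complex coefficients.
f = np.array([1.0, 0.0, 0.0, 0.0, 1.0])              # coefficients, highest degree first
roots = np.roots(f)
q = np.poly(roots[roots.imag > 0]) * np.sqrt(f[0])   # keep one root of each pair
p1, p2 = q.real, q.imag

t = 1.3
val = np.polyval(p1, t)**2 + np.polyval(p2, t)**2
print(np.isclose(val, np.polyval(f, t)))             # True: f = p1^2 + p2^2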
Numerical tests

Given a polynomial matrix $F \in S^m\mathbb{R}[x]$, the idea for finding sos-based/sosrf-based representations of $F$ is:

• first, apply Algorithm 1 to $F$ to obtain a diagonal polynomial matrix $D = \operatorname{diag}(d_1, \ldots, d_m)$ satisfying (4);

• next, find sosrf-representations of $d_1, \ldots, d_m$, under the assumption that the denominators in the sosrf-representations of the $d_i$'s are all powers of the sum of squares of the coordinate functions, by applying the algorithm in [9].

In the second stage of our computation, we express polynomial matrices as matrix polynomials. The implementation takes advantage of a monomial order for them. In the rest of this paper, we use the lexicographic order of monomials mentioned earlier. We suppose that an $n$-variable matrix polynomial $P(x)$ of degree $d$ is always expressed under the lexicographic order:
$$P(x) = \sum_{\alpha\in\Omega} P_\alpha x^\alpha, \qquad P_\alpha \in S^m\mathbb{R},\ \forall x \in \mathbb{R}^n.$$
With $P(x)$ written as above, let furthermore $P$ be the corresponding column vector of the matrix coefficients $P_\alpha$, and let $\Pi(x) := \Pi_{n,d}(x)$ be the column vector of the matrices $x^\alpha I_m$ with respect to "$\leqq_{lex}$". Then $P(x)$ can be written as (see also, e.g., [16, 5])
$$P(x) = \Pi(x)^TP = P^T\Pi(x).$$
For instance, for $n = m = d = 2$ one has $P = [P_{00}\ P_{01}\ P_{02}\ P_{10}\ P_{11}\ P_{20}]^T$ with $P_{ij} \in S^2\mathbb{R}$.

Relying on the algorithm proposed in [9], we wish to show that the two diagonal polynomials of $D$ from Example 2 are sums of squares of rational functions. It is clear that the first one is a sum of squares. The second one is $(1 + x^4y^2)\det F$. We see that both $\det F$ and $(1 + x^4y^2)\det F$ contain the term $-x^2y^2$ and that they do not contain the terms $x^4y^4$, $x^2$, $x^4$, $y^4$, $y^2$. Applying the algorithm proposed in [9], we obtain an approximation of $(1 + x^2 + y^2)\det F(x, y)$ as a sum of 4 squares. A decomposition of $F$ as in Proposition 5 is thus derived. It is worth mentioning that the algorithm in [9] is implemented in Matlab; we have converted these Matlab codes into OCTAVE for our present situation.

Conclusion

We have given an explicit account of the Schmüdgen diagonalization for polynomial matrices and have implemented it in OCTAVE. We have then proposed an algorithm for representing a globally positive semidefinite polynomial matrix as a sum of Hermitian squares of polynomial matrices. Our method combines the Schmüdgen diagonalization with an algorithm, based on Reznick's idea, that approximates the diagonal elements of the resulting diagonal matrix as sums of squares. This has been applied to psd polynomial matrices on several sets such as $\mathbb{R}$ and its intervals, and the strips $[a, b] \times \mathbb{R}$. Corresponding numerical illustrations with OCTAVE have been presented in the paper. This might lead to numerical diagonalizations of other psd polynomial matrices in the future.
2019-01-09T14:01:37.423Z
2019-01-05T00:00:00.000
{ "year": 2019, "sha1": "5ae234d7a984b37aa20fe68fc5bd18c81018c0e5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "5ae234d7a984b37aa20fe68fc5bd18c81018c0e5", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
252199890
pes2o/s2orc
v3-fos-license
Targeting high symmetry in structure predictions by biasing the potential energy surface Ground state structures found in nature are in many cases of high symmetry. But structure prediction methods typically render only a small fraction of high symmetry structures. Especially for large crystalline unit cells there are many low energy defect structures. For this reason methods have been developed where either preferentially high symmetry structures are used as input or where the whole structural search is done within a certain symmetry group. In both cases it is necessary to specify the correct symmetry group beforehand. However, it can in general not be predicted which symmetry group is the correct one leading to the ground state. For this reason we introduce a potential energy biasing scheme that favors symmetry and where it is not necessary to specify any symmetry group beforehand. On this biased potential energy surface, high symmetry structures will be found much faster than on an unbiased surface and independently of the symmetry group to which they belong. For our two test cases, a $C_{60}$ fullerene and bulk silicon carbide, we get speedups of 25 and 63. In our data we also find a clear correlation between the similarity of the atomic environments and the energy. In low energy structures all the atoms of a species tend to have similar environments.

Structure prediction methods are an important tool for the discovery of new materials [1]. Such methods can be applied not only to materials at ambient pressure but also under very high pressures that are relevant for geophysical applications but not accessible by experimental methods [2]. For this reason, numerous methods such as simulated annealing [3], basin hopping [4], minima hopping (MH) [5][6][7][8], random structure searches [9], meta-dynamics [10] and various variants of evolutionary genetic algorithms [11][12][13][14][15][16], as implemented in the USPEX [17], CALYPSO [18] and XtalOpt [19] software packages, have been developed.
These advanced global geometry optimisation methods have shown that they can efficiently explore [20,21] the potential energy surface (PES) of different clusters and bulk materials under a variety of external conditions and predict new structures. However, such methods require a high computational effort, because the number of possible meta-stable structures grows exponentially with the number of atoms in the system and the generation and relaxation of a single structure requires many energy and force evaluations. Unless the defects of certain materials are studied explicitly, the ground state and the lowest defect-free meta-stable structures are of greatest interest, since they are the structures that can most likely be synthesized. However, for large cells most structures found in a structure prediction contain defects. Typically these defect structures represent local minima in a funnel whose bottom corresponds to a defect-free, meta-stable or global minimum structure. A structure search that visits a very large number of defect structures in order to find this relatively small number of defect-free structures is inefficient. To favor high symmetry, most crystal structure prediction methods use input guess structures that are of high symmetry. If the correct symmetry is chosen, the most similar low energy structure is found much more rapidly. For basin hopping and genetic algorithms there also exist versions where all the moves of the atoms are constrained to conserve the desired symmetry [4,22]. The inconvenience in all these approaches is that there are more than 200 space groups and it is a priori unknown which one will be adopted by the system.

Traditionally, symmetry is defined by geometric operations such as rotations or reflections that leave the structure invariant. We will use in this work an alternative definition of symmetry: we consider a system to be highly symmetric if all atoms of the same element see only a small number of different environments. Structures with a large number of environments are actually unlikely to exist according to Pauling's rule of structural parsimony, originally established for ionic materials [23,24]. In many cases we will actually try to find systems where all atoms of the same element see the same environment. Evidently this is true for many high symmetry structures such as the C$_{60}$ fullerene or the diamond structure of silicon and carbon. A structure is either invariant or not under certain symmetry operations, so basing a penalty function on the number of possible symmetry operations would give rise to a discontinuous function. Our measure of similarity is, however, a continuous function: it is zero if the environments are identical and grows in a continuous way as the environments become more different. Our definition is thus broader than the traditional one; we also classify a structure as highly symmetric if there are a few distinct environments which are, however, very similar.

The tendency of low energy structures to have similar environments has already been exploited to gain efficiency in the context of evolutionary structure prediction algorithms. In this context, mutation moves were introduced that favor environments whose radial distribution is similar to that of certain selected role-model environments [25,26]. Even in amorphous systems it was observed that structures with similar pair distribution functions were also low in energy [27].

The basic idea of our approach is to perform a structure search on a biased potential energy surface [28],
$$E_B(\mathbf{R}_1, \ldots, \mathbf{R}_N) = E(\mathbf{R}_1, \ldots, \mathbf{R}_N) + w\,P(\mathbf{R}_1, \ldots, \mathbf{R}_N),$$
where $E_B$ is the biased PES, $E$ is the physical PES, $P$ is the penalty function and $w$ is the biasing weight. Since the number of environments is larger for defective structures, the penalty part will push up these defective structures on the biased PES. In this way the downhill barriers are lowered compared to the uphill barriers and the PES acquires a stronger structure-seeker character, which speeds up the search for the global minimum and possibly for other high symmetry structures at the bottom of other funnels.
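As a schematic illustration of such a bias, the following Python sketch (with made-up helper names; the actual implementation uses the analytic derivatives derived in the supplementary information, while central finite differences are used here purely for brevity) evaluates the biased energy and the corresponding conservative forces:

import numpy as np

def biased_energy_and_forces(positions, energy_and_forces, penalty, w, h=1e-5):
    # positions: (n_atoms, 3) array; energy_and_forces(positions) -> (E, F)
    # is the physical calculator; penalty(positions) -> scalar bias P.
    E, F = energy_and_forces(positions)
    grad_P = np.zeros_like(positions)
    for idx in np.ndindex(positions.shape):      # central finite differences
        step = np.zeros_like(positions)
        step[idx] = h
        grad_P[idx] = (penalty(positions + step) - penalty(positions - step)) / (2*h)
    # E_B = E + w*P and the corresponding conservative forces F_B = F - w*dP/dR
    return E + w * penalty(positions), F - w * grad_P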
The basic idea of our approach is to perform a structure search on a biased potential energy surface [28], $E_B(\mathbf{R}_1, \ldots, \mathbf{R}_N) = E(\mathbf{R}_1, \ldots, \mathbf{R}_N) + \alpha\, S(\mathbf{R}_1, \ldots, \mathbf{R}_N)$, where $E_B$ is the biased PES, $E$ is the physical PES, $S$ is the penalty function and $\alpha$ is the biasing weight. Since the number of environments is larger for defective structures, the penalty part will push up these defective structures on the biased PES. In this way the downhill barriers are lowered compared to the uphill barriers and the PES acquires a stronger structure seeker character, which speeds up the search for the global minimum and possibly other high symmetry structures at the bottom of other funnels. We quantify the similarity of environments with the overlap matrix (OM) fingerprint [29] based on s- and p-type orbitals, which was shown to be able to detect different atomic environments in a highly reliable way [30]. In particular this fingerprint has both radial and angular resolution. In the OM method the eigenvalues of a localized overlap matrix, centered on the atom whose environment has to be characterized, are assembled into an atomic environment fingerprint vector $\mathbf{f}_i$. This environment characterization is done for all atoms in the system. If all atomic environments are identical, the rank of the matrix formed by all these vectors $\mathbf{f}_i$ is one, if there are two distinct elemental environments the rank is two, etc. The rank can most easily be calculated from the eigenvalues of the Gram matrix $D_{ij} = \langle \mathbf{f}_i | \mathbf{f}_j \rangle$, constructed from these fingerprint vectors. The number of non-zero eigenvalues of this matrix gives the rank of the fingerprint vectors. So the penalty function that favours one single environment for a certain element is $S_1 = \mathrm{Tr}(D) - \lambda_1$. In case we want to allow for up to two environments, the penalty becomes $S_2 = \mathrm{Tr}(D) - \lambda_1 - \lambda_2$, where Tr is the trace of the matrix, i.e. the sum over all eigenvalues $\lambda_i$. As usual, we have assumed in all the above formulas that the eigenvalues are sorted in decreasing order. For a multi-component system, each element contributes its own penalty function and the total penalty function is the sum of all the elemental contributions. For highly symmetric structures where all local environments are equivalent, e.g. the ground state of C$_{60}$, the bias function will be exactly zero. If the environments get more distinct, the bias function grows due to the positive semi-definiteness of the Gram matrix. Since the Gram matrix gives essentially the effective dimension of the vector space spanned by the local descriptor vectors, it is called the dimensionality matrix in this paper. To test our method we selected two systems of quite different nature. The first one, silicon carbide, is a crystalline system with two elements that have to mix in the right way to find low energy structures, and the second, the C$_{60}$ fullerene, is a molecular cluster. Its global minimum is just one structure out of a huge number of meta-stable structures with varying structural motifs such as planar structures, chains and bowls. For the exploration of the PES the minima hopping (MH) algorithm was used, but our biasing scheme is in principle applicable to any structure prediction method. The MH algorithm is not based on thermodynamic principles like simulated annealing or basin hopping but uses a combination of molecular dynamics, local geometry optimization and a history of previously found local minima to escape quickly from already known regions and hence efficiently explore the entire PES.
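To make the construction above concrete, the following sketch computes the per-element penalty from a set of precomputed environment fingerprint vectors and assembles the biased energy. It is a minimal illustration under the reconstruction of the formulas given here, not the authors' code; the function and variable names and the use of NumPy are our own assumptions, and the fingerprints themselves are taken as given inputs.

```python
import numpy as np

def symmetry_penalty(fingerprints: np.ndarray, n_env: int = 1) -> float:
    """Penalty S_q = Tr(D) - sum of the q largest eigenvalues of the Gram matrix.

    fingerprints: array of shape (n_atoms, n_features), one environment
    descriptor per atom of a given element (one descriptor per row).
    n_env: number of distinct environments that remain unpenalised.
    """
    D = fingerprints @ fingerprints.T          # Gram ("dimensionality") matrix, D_ij = <f_i|f_j>
    eigvals = np.linalg.eigvalsh(D)            # ascending order, non-negative up to round-off
    return float(eigvals.sum() - eigvals[-n_env:].sum())

def biased_energy(e_physical: float, fingerprints_by_element: dict,
                  alpha: float, n_env: int = 1) -> float:
    """E_B = E + alpha * (sum over elements of the per-element penalty)."""
    s_total = sum(symmetry_penalty(f, n_env) for f in fingerprints_by_element.values())
    return e_physical + alpha * s_total

# Toy usage: three identical environments give a zero penalty,
# a perturbed environment gives a positive penalty.
f0 = np.array([1.0, 0.5, 0.2])
identical = np.stack([f0, f0, f0])
perturbed = np.stack([f0, f0, f0 + np.array([0.0, 0.1, -0.05])])
print(symmetry_penalty(identical))   # ~0.0
print(symmetry_penalty(perturbed))   # > 0.0
```

As the toy example shows, the penalty vanishes exactly when all environments coincide and grows continuously as they diverge, which is the property exploited by the bias.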
For the geometry optimization of C$_{60}$ with free boundary conditions the conjugate gradient method was used. In the case of periodic boundary conditions (PBC) the highly efficient and stable vc-SQNM method developed by Gubler et al. [31,32] was used. The molecular dynamics (MD) simulation was implemented using the velocity Verlet algorithm for the non-periodic case and the variable cell shape MD [33] for PBC. The latter method allows atoms as well as cell vectors to move dynamically during the MD simulation. For the calculation of the PES of C$_{60}$ the transferable tight binding potential for carbon from Xu et al. [34] was used. For the silicon carbide simulations in PBC, DFTB+ [35] was used with the Slater-Koster parameterisation set pbc-0-3 [36]. The MD and an initial local geometry optimisation are performed on the biased PES, followed by a local geometry optimisation on the unbiased PES. This avoids falling into potentially existing spurious local minima on the biased PES. To obtain conservative forces on the biased PES the derivative of the symmetry bias needs to be added to the physical forces. The same is true for the derivative of the symmetry bias with respect to the lattice vectors, which needs to be added to the lattice derivatives in the case of PBC. The derivations of these two quantities can be found in the supplementary information. Figure 1. Changes in the characteristics of the disconnectivity graphs of silicon carbide (top row) and C$_{60}$ (bottom row) induced by a bias. The left column shows the disconnectivity graphs of the PES without a bias and the right column the disconnectivity graphs of the PES with a bias. The graph was constructed with the disconnectionDPS software [37]. The character of a PES can best be deduced from the appearance of its disconnectivity graph [38] (Fig. 1). For a structure seeker [39], the downhill barriers are much lower than the uphill barriers. As a consequence, any algorithm that crosses preferentially lower barriers will experience some driving force toward the minimum at the bottom of the funnel and therefore find it faster. This driving force will of course depend on the strength of the bias. While on the one hand it is desirable to choose a large $\alpha$, the penalty should on the other hand only induce some weak perturbation that does not completely deform the physical PES. In particular there should remain in most cases a one-to-one mapping between the local minima on the physical and the biased PES. As already noted by Zwanzig in the context of protein folding [40], a relatively small bias can have a large effect on the dynamics of the system and reduce the folding time by several orders of magnitude. We were indeed always able to find a range of values for $\alpha$ that sped up the search for high symmetry structures considerably without destroying the overall character of the PES. With our weight $\alpha$ the uphill barriers are typically twice as large as the downhill barriers and the penalty difference between high and low symmetry structures is a few times the difference of their physical energy. This latter criterion can be used to find suitable values of $\alpha$. Fig. 1 shows the differences of the disconnectivity graphs for the unbiased and biased system. The changes in the appearance of the disconnectivity graphs indicate that the biased PES has a much stronger structure seeker character, which should make the search for the lowest high symmetry structures considerably faster.
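Because the bias must yield conservative forces, an implementation of the bias gradient should be consistent with the penalty itself. The sketch below is our own illustration of such a sanity check (not part of the original work): it compares a user-supplied analytical gradient against central finite differences. The callables `penalty` and `penalty_gradient` are hypothetical placeholders for whatever bias implementation is being tested.

```python
import numpy as np

def finite_difference_check(penalty, penalty_gradient, positions, step=1e-5):
    """Compare an analytical bias gradient with central finite differences.

    penalty(positions) -> float, the symmetry penalty S.
    penalty_gradient(positions) -> array like positions, dS/dR.
    Returns the maximum absolute deviation between the two gradients.
    """
    analytic = penalty_gradient(positions)
    numeric = np.zeros_like(positions)
    for i in range(positions.shape[0]):
        for j in range(positions.shape[1]):
            shifted = positions.copy()
            shifted[i, j] += step
            e_plus = penalty(shifted)
            shifted[i, j] -= 2.0 * step
            e_minus = penalty(shifted)
            numeric[i, j] = (e_plus - e_minus) / (2.0 * step)
    return float(np.max(np.abs(analytic - numeric)))

# Example with a trivial stand-in penalty (sum of squared coordinates),
# whose exact gradient is 2 * positions.
pos = np.random.default_rng(0).normal(size=(5, 3))
err = finite_difference_check(lambda r: float(np.sum(r**2)), lambda r: 2.0 * r, pos)
print(f"max gradient deviation: {err:.2e}")   # should be tiny (~1e-9 or below)
```

The same pattern can be applied to the lattice-vector derivative by displacing the cell matrix entries instead of the atomic coordinates.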
To investigate the effect of the symmetry bias on the speed of the global geometry optimization in a systematic way, statistical tests were conducted for C$_{60}$ and 16-atom silicon carbide cells. One hundred global geometry optimizations were started from different initial configurations until the ground state structure, or in the case of silicon carbide a polytype of the ground state, was found on the unbiased and the biased PES. For C$_{60}$ the carbon atoms were randomly placed on a plane and for silicon carbide the carbon and silicon atoms were randomly placed in spatially separated sub-cells that formed the crystalline cell. To avoid nonphysical structures a minimum and maximum distance between the randomly placed atoms was enforced. Since there are no phase separated low energy structures in a cell of this size, the silicon and carbon atoms always had to mix to find the low energy structures. As a measure for the computational cost of the runs we used the number of required local geometry optimizations. As can be seen from Table I, the biasing reduces the average number of geometry optimizations by a factor of 25 for C$_{60}$ and by a factor of 63 for silicon carbide. It can also be seen from Table I that other statistical markers like the standard deviation (std), quantiles and the number of geometry optimizations for the fastest as well as the slowest simulations decreased by about the same magnitude. Table I. Statistical markers and biasing parameters of 100 global geometry optimisations started from randomly generated structures for C$_{60}$ and 16-atom silicon carbide cells on the unbiased PES as well as on the biased PES. The statistical markers always relate to the required number of local geometry optimisations. The biasing parameters for C$_{60}$ were $\alpha = 0.3$ with a fingerprint parameter of 6.0 [29] and for silicon carbide $\alpha = 3.5$ with 4.5. The numbers in parentheses give the speedup with respect to the unbiased runs for the corresponding quantities. All simulations were successfully carried out until the ground state structure or, in the case of 16-atom silicon carbide, a polytype of the ground state structure was found. As expected, and as shown in Fig. 2 for C$_{60}$, the distribution of the found structures with respect to their physical energy and their degree of symmetry is also quite different. For the biased MH runs, the fraction of high symmetry structures is considerably higher and the average physical energy of low symmetry structures is higher since many low energy defects were not found. Many of these high symmetry structures are quite interesting. Searching for structures where all atoms of a certain species have the same environment, we found for instance several SiC structures where all the carbon atoms were 3-fold coordinated, whereas all the silicon atoms were 4-fold coordinated. Such a structure is shown in Fig. 3. Since our penalty function goes smoothly to zero when the environments get more similar, we actually also found most low energy structures of silicon carbide with up to 4 different environments per atom with a penalty function that favours a single environment. It turned out that in these cases, the environments tend to be quite similar and thus result in a small but not strictly zero penalty function (see Fig. 3). This finding is related to a strong correlation between the structural environment diversity as measured by our penalty function and the total energy, as shown in Fig. 4.
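The statistical markers reported in Table I can be reproduced directly from the raw run lengths. The short sketch below is our own illustration with synthetic numbers (not the study's data): it computes mean, standard deviation, quantiles and extrema of the number of local geometry optimisations per run, and the corresponding speedups between an unbiased and a biased set of runs.

```python
import numpy as np

def run_statistics(n_minima_per_run):
    """Summary statistics for the number of local geometry optimisations per run."""
    a = np.asarray(n_minima_per_run, dtype=float)
    return {
        "mean": a.mean(),
        "std": a.std(ddof=1),
        "q25": np.quantile(a, 0.25),
        "median": np.quantile(a, 0.50),
        "q75": np.quantile(a, 0.75),
        "min": a.min(),
        "max": a.max(),
    }

def speedups(unbiased_stats, biased_stats):
    """Ratio of each unbiased marker to the corresponding biased marker."""
    return {k: unbiased_stats[k] / biased_stats[k] for k in unbiased_stats}

# Synthetic example: 100 unbiased and 100 biased runs with log-normal run lengths.
rng = np.random.default_rng(1)
unbiased = rng.lognormal(mean=8.0, sigma=0.6, size=100)
biased = rng.lognormal(mean=4.8, sigma=0.6, size=100)
print(speedups(run_statistics(unbiased), run_statistics(biased)))
```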
In summary, based on a non-conventional measure of symmetry that is motivated by Pauling's rule of parsimony, we construct a penalty function that measures the dissimilarity between different atomic environments in a structure. To construct the penalty function, no guesses of which symmetry will be adopted by the system are required. Figure 3. Two high symmetry structures found by a biased run. The structure (a) is only 517 meV/atom higher in energy than the ground state structure, even though the bonding character is completely different from the ground state where all atoms are 4-fold coordinated. In this structure all carbon atoms are 3-fold coordinated. Structure (b) is a 4-fold coordinated silicon carbide structure with an energy of 6 meV/atom above the ground state. Four different environments exist for each carbon and silicon atom, but the environments are so similar that the differences cannot be detected by eye. Carbon atoms are displayed by smaller spheres than silicon atoms, for which each environment has its own colour. Figure 4. Correlation between our penalty function $S_1$, which is a measure of the structural diversity, and the total energy $E$. Only if all atoms in the C$_{60}$ (a) or silicon carbide (b) structure have similar environments will the energy of the structure be low. Adding this penalty function to the physical PES gives a biased PES where disordered structures are pushed up in energy relative to high symmetry structures. This leads to a lowering of the downhill barriers compared to the uphill barriers. This stronger structure seeker property of the biased potential energy surface allows for much faster searches for high symmetry ground states. The penalty function also allows us to find high symmetry structures of higher energy rapidly. This feature opens the way to perform structure prediction on a PES that was constructed with a cheap but not very accurate method to find high symmetry structures in low as well as moderately higher energy regions. These high symmetry structures can then be reranked by calculating their energies with a more accurate but also more expensive electronic structure method. In this way structures that were higher in energy with the cheap method can become low in energy with the accurate method. This procedure would not be possible without the bias because in that case the overwhelming majority of higher energy structures are typically all defect structures, which are unlikely to become low energy structures when reranked. Our results also clearly show the general validity of Pauling's rule that in low energy structures the variability of the atomic environments is quite limited. Financial support from SNF and computing time from CSCS (project s963) and sciCORE (http://scicore.unibas.ch/) are acknowledged. We thank Prof. Alireza Ghasemi and Prof. Andris Gulans for useful comments on the manuscript. Appendix A: Symmetry Bias The goal of the symmetry bias is to find a measure for the structural symmetry of the system and to use it as a bias on the PES to drive the system faster to the global minimum during a minima hopping simulation. As a measure for the structural symmetry of a system we quantify the differences between the individual atomic environments. First a matrix $F$ is formed containing the overlap matrix (OM) fingerprints [29] of every atom of the cluster or cell as column vectors. In the OM fingerprint method the eigenvalues of a localized overlap matrix are assembled into a vector.
All entries of each fingerprint vector need to be sorted before forming the matrix $F = (\mathbf{f}_1, \ldots, \mathbf{f}_N)$, with $f_i(j)$ being the j-th entry of the OM fingerprint of atom $i$ and $L$ being the length of the fingerprint vectors. The Gram matrix $D = F^T F$ can now be formed. If all atomic environments are identical, the rank of the matrix formed by the OM fingerprint vectors is one, when there are only two different environments the rank is two, etc. The rank can most easily be calculated from the eigenvalues of the matrix $D$, constructed from the OM fingerprint vectors. The eigenvalues $\lambda_i$ of matrix $D$ are sorted in descending order, i.e. $\lambda_1$ is the largest eigenvalue. The matrix element $D_{ij}$ equals $\langle \mathbf{f}_i | \mathbf{f}_j \rangle$ for the atom pair $(i, j)$. The number of non-zero eigenvalues of this matrix gives the rank of the fingerprint vectors. So the penalty function that favours one single environment for a certain element is $S_1(\mathbf{R}_1, \ldots, \mathbf{R}_N) = \mathrm{Tr}(D) - \lambda_1$. In case we want to allow for up to $q$ environments, the penalty becomes $S_q(\mathbf{R}_1, \ldots, \mathbf{R}_N) = \mathrm{Tr}(D) - \sum_{i=1}^{q} \lambda_i$, where $\mathbf{R}_k$ is the position of atom $k$ in the system in Cartesian coordinates, $N$ equals the number of atoms in the system and $\mathrm{Tr}(D)$ is the trace of matrix D. For a multi-component system, each element contributes its own penalty function and the total penalty function is the sum of all the elemental contributions. The derivative of $\lambda_1$ with respect to the atomic positions follows from the Hellmann-Feynman theorem, $\partial \lambda_1 / \partial \mathbf{R}_k = \mathbf{v}_1^T \left( \partial D / \partial \mathbf{R}_k \right) \mathbf{v}_1$, with $\mathbf{v}_1$ being the eigenvector belonging to the largest eigenvalue $\lambda_1$ of matrix $D$. The derivative of the dimensionality matrix depends on the derivatives of the OM fingerprints, $\partial D_{ij} / \partial \mathbf{R}_k = \sum_m \left[ \frac{\partial f_i(m)}{\partial \mathbf{R}_k} f_j(m) + f_i(m) \frac{\partial f_j(m)}{\partial \mathbf{R}_k} \right]$, with $m$ counting over all entries in the atomic fingerprint vectors $\mathbf{f}_i$. The derivative $\partial \mathbf{f}_i / \partial \mathbf{R}_k$ is formed with the help of publication [29]. Now the negative gradient of the penalty can be added to the physical forces to obtain the biased forces belonging to the biased PES. Symmetry bias derivative with respect to lattice vectors: analogously to the derivative with respect to the atom positions we can find the derivative with respect to the lattice vectors, $\frac{\partial S_1}{\partial \mathbf{h}} = \frac{\partial \mathrm{Tr}(D)}{\partial \mathbf{h}} - \frac{\partial \lambda_1}{\partial \mathbf{h}}$, with $\mathbf{h}$ being the matrix of the lattice vectors. Like before we can use the Hellmann-Feynman theorem for the term $\partial \lambda_1 / \partial \mathbf{h}$. This results in $\frac{\partial \lambda_1}{\partial \mathbf{h}} = \mathbf{v}_1^T \frac{\partial D}{\partial \mathbf{h}} \mathbf{v}_1$. The derivative of the matrix entries $D_{ij}$ with respect to the lattice vectors is $\frac{\partial D_{ij}}{\partial \mathbf{h}} = \sum_m \left[ \frac{\partial f_i(m)}{\partial \mathbf{h}} f_j(m) + f_i(m) \frac{\partial f_j(m)}{\partial \mathbf{h}} \right]$, where we now need the derivative of the OM fingerprints with respect to the lattice vectors. To calculate the derivative of the OM fingerprints with respect to the lattice vectors we can apply the chain rule so that we can use the already known derivative of the OM fingerprint with respect to the atomic positions. It is important to note that the OM fingerprint for atom $i$ in the system is formed by putting Gaussian type orbitals only on all atoms within a given cutoff radius around the central atom $i$ and then forming an overlap matrix from them. Therefore, we only need to consider the atomic positions $\tilde{\mathbf{r}}_k$ of all atoms in the sphere around the central atom $i$. This leads to the fact that we now have two counting schemes, one for the atoms in the sphere and one for the atoms in the main cell. To deal with this we introduce a function $\mathrm{index}(i, k)$ that maps atom number $k$ from the sphere counting scheme of the central atom $i$ to the main cell counting scheme, i.e. it gives back the index of the corresponding atom in the main cell counting scheme. This results in $\frac{\partial \mathbf{f}_i}{\partial \mathbf{h}} = \sum_{k=1}^{N_s(i)} \frac{\partial \mathbf{f}_i}{\partial \tilde{\mathbf{r}}_k} \frac{\partial \tilde{\mathbf{r}}_k}{\partial \mathbf{h}}$, with $N_s(i)$ being the number of atoms in a sphere that is formed by the cutoff radius around the central atom $i$. Since the derivative of the overlap matrix fingerprint is invariant under the change of the counting scheme we get $\frac{\partial \mathbf{f}_i}{\partial \tilde{\mathbf{r}}_k} = \frac{\partial \mathbf{f}_i}{\partial \mathbf{R}_{\mathrm{index}(i,k)}}$. One needs to be aware of the fact that in periodic boundary conditions it is possible that multiple images of the same atom from the main cell can be inside the cutoff radius.
The position of the atoms in the sphere around atom $i$ can then be described as $\tilde{\mathbf{r}}_k = \mathbf{R}_{\mathrm{index}(i,k)} + \mathbf{h}\,\mathbf{n}_k = \mathbf{h}\left(\mathrm{frac}_{\mathrm{index}(i,k)} + \mathbf{n}_k\right)$, where $\mathrm{frac}_j$ denotes the fractional coordinates of atom $j$ in the main cell and $\mathbf{n}_k$ is the integer lattice translation selecting the periodic image.
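As a rough illustration of how such an environment fingerprint can be assembled in practice, the sketch below builds, for one central atom, an overlap matrix of normalised s-type Gaussians placed on all neighbours inside a cutoff sphere and returns its sorted eigenvalues as a fixed-length vector. This is a deliberate simplification of the OM fingerprint of [29] (single fixed Gaussian width, s-orbitals only, no cutoff damping function), so it should be read as a conceptual stand-in for the published descriptor, not a reimplementation of it.

```python
import numpy as np

def s_gaussian_overlap(r_a, r_b, alpha=1.0):
    """Overlap of two normalised s-type Gaussians with identical width alpha."""
    d2 = np.sum((np.asarray(r_a) - np.asarray(r_b)) ** 2)
    return np.exp(-0.5 * alpha * d2)

def om_like_fingerprint(positions, center_index, cutoff=4.0, alpha=1.0, length=16):
    """Sorted eigenvalues of the local overlap matrix around one atom, zero-padded."""
    positions = np.asarray(positions, dtype=float)
    center = positions[center_index]
    # Neighbours inside the cutoff sphere (the central atom is included, distance 0).
    neigh = [p for p in positions if np.linalg.norm(p - center) <= cutoff]
    n = len(neigh)
    S = np.empty((n, n))
    for a in range(n):
        for b in range(n):
            S[a, b] = s_gaussian_overlap(neigh[a], neigh[b], alpha)
    eig = np.sort(np.linalg.eigvalsh(S))[::-1]      # descending eigenvalues
    fp = np.zeros(length)
    fp[: min(length, n)] = eig[: min(length, n)]    # fixed-length, zero-padded vector
    return fp

# Two interior sites of a finite square lattice have identical environments,
# whereas a corner site does not.
square = np.array([[x, y, 0.0] for x in range(4) for y in range(4)], dtype=float)
f_interior_a = om_like_fingerprint(square, 5, cutoff=1.5)   # site (1, 1)
f_interior_b = om_like_fingerprint(square, 6, cutoff=1.5)   # site (1, 2)
f_corner = om_like_fingerprint(square, 0, cutoff=1.5)       # site (0, 0)
print(np.allclose(f_interior_a, f_interior_b))  # True: equivalent environments
print(np.allclose(f_interior_a, f_corner))      # False: the corner sees a different environment
```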
2022-09-13T01:16:00.395Z
2022-09-12T00:00:00.000
{ "year": 2022, "sha1": "6bce6eb70e0b63f05a8a0d27c2f5aff2e656d7a9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6bce6eb70e0b63f05a8a0d27c2f5aff2e656d7a9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
5744382
pes2o/s2orc
v3-fos-license
Influence of lean and fat mass on bone mineral density and on urinary stone risk factors in healthy women Background The role of body composition (lean mass and fat mass) in urine chemistries and bone quality is still debated. Our aim was therefore to determine the effect of lean mass and fat mass on urine composition and bone mineral density (BMD) in a cohort of healthy females. Materials and methods 78 female volunteers (mean age 46 ± 6 years) were enrolled at the Stone Clinic of Parma University Hospital and underwent 24-hour urine collection for a lithogenic risk profile, DEXA, and a 3-day dietary diary. We defined two mathematical indexes derived from body composition measurements (index of lean mass, ILM, and index of fat mass, IFM) and the cohort was split using the median value of each index, obtaining groups differing only in lean or fat mass. We then analyzed differences in urine composition, dietary intakes and BMD. Results The women with high values of ILM had significantly higher excretion of creatinine (991 ± 194 vs 1138 ± 191 mg/day, p = 0.001), potassium (47 ± 13 vs 60 ± 18 mEq/day, p < 0.001), phosphorus (520 ± 174 vs 665 ± 186 mg/day, p < 0.001), magnesium (66 ± 20 vs 85 ± 26 mg/day, p < 0.001), citrate (620 ± 178 vs 807 ± 323 mg/day, p = 0.002) and oxalate (21 ± 7 vs 27 ± 11 mg/day, p = 0.015), and significantly better BMD values in the limbs, than the women with low values of ILM. The women with high values of IFM had a urine composition similar to that of the women with low values of IFM, but significantly better BMD in axial sites. No differences in dietary habits were found in either analysis. Conclusions Lean mass seems to significantly influence urine composition both in terms of lithogenesis promoters and inhibitors, while fat mass does not. Lean mass influences bone quality only in the limb skeleton, while fat mass influences bone quality only in axial sites. Introduction The role of body weight, body mass index and body composition in the evaluation of lithogenic risk is still controversial. Although many studies show an increase in the risk of developing nephrolithiasis with higher levels of BMI, the exact contribution of lean mass and fat mass in determining this risk is still unclear. There are some large epidemiologic studies recording a rise in the risk of kidney stones as body weight, BMI and abdominal circumference increase [1,2]. The rise in the risk is however associated with a change in the type of nephrolithiasis, with the prevalence of calcium stones decreasing and the prevalence of uric acid stones increasing [3][4][5]. A rise in lean body mass has been linked to an increase in the incidence of nephrolithiasis only in males [6]. Moreover, a loss of weight is not associated with a decline in the risk [2]. On the other hand, if we consider urinary factors of lithogenic risk, an inverse correlation between pH and BMI and between pH and fat mass has been reported [7]. Moreover, the excretion of oxalate has been linked to body weight, body surface area and lean mass [8]. The excretion of oxalate, uric acid, sodium, phosphate and calcium rises when BMI increases [9][10][11]; however, the calcium excretion loses significance after correction for sodium and phosphate [9]. Some recent studies have also shown a positive relation between urinary lithogenic risk factors, overweight and obesity [12][13][14]. Nevertheless, relative supersaturations are not altered, since inhibitor excretion and water intake also increase as body weight and/or BMI percentiles grow.
A common limit of many of these studies is the lack of a precise evaluation of dietary habits, particularly of protein intake [15,16]. Even the relation between body composition and bone mineral density is debated. It has been demonstrated that an increase in body weight improves bone mineral density, but the specific role of lean mass and fat mass remains uncertain, as do possible different effects in men and women [17]. There is a positive relation, proven in many studies, between fat mass and vertebral bone mineral density, while lean mass seems to be related to a higher bone mineral density only in some areas and is highly influenced by age and physical exercise [18][19][20]. On the other hand, a link between urine chemistries, body composition and bone mineral density has been described [21,22]. In this paper, based on a cohort of healthy women, we identified two new mathematical indexes pointing out the role of lean body mass and fat body mass separately, trying to eliminate possible confounding factors (e.g. height) present in already validated indices such as the Fat-Free Mass Index and Fat Mass Index [23]. Therefore we verified: 1) whether the urinary excretion of lithogenic risk factors is influenced by the whole body weight or by its composition in lean mass and fat mass; 2) how the bone mineral density is related to body composition; 3) which are the body areas where lean mass or fat mass most influence bone mineral density. Participants We studied 78 healthy female volunteers at the Stone Clinic of the Clinical and Experimental Medicine Department, Parma University Hospital, Italy. Approval by the Ethical Committee of Parma Province was obtained, as well as written informed consent from the participants for the publication of this report and any accompanying images. The study was carried out in compliance with the Helsinki Declaration. All women carried out: 1) 24-hour urine collection for the laboratory determination of urinary lithogenic risk factors; 2) bone mineral density and body composition measurement through Dual-Energy X-Ray Absorptiometry with a fan-beam Hologic QDR 4500A densitometer (Hologic, Bedford, Mass., USA); 3) a 3-day dietary diary on non-consecutive days, with one day corresponding to the day of the urinary collection, subsequently analyzed by a dietitian and interpreted with specific software (Dietosystem, DS Medica, Milano, Italy). Densitometry Body composition was measured by DEXA with a fan beam densitometer (Hologic QDR 4500 A) and dedicated software (rel. 8.2). A trained physician performed the DEXA measurements on the women. DEXA measurements were performed following standard procedures, according to the manufacturer's guidelines, while the participant was lying in a supine position. The trunk was considered as the region delimited by a horizontal line passing under the chin, two vertical lines passing through the medial margin of the head of the humerus, excluding all of the upper limbs, and two oblique lines at the groin cutting midway through the neck of the femur and crossing below the pubis. Intra-site repeatability was a mean of 2-3% for FM. The coefficients of variation for the method were assessed by repeated measurements. Index of lean mass (ILM) We defined an index in order to obtain two groups of women not differing in body weight and BMI but only in lean mass. Body weight (BW), lean mass (LM) and fat mass (FM) are not independent variables because we can assume that total body weight is the sum of lean mass and fat mass.
ILM has been conceived for not being influenced by body weight, i.e. ILM = LM² − FM². Since BW = LM + FM, we can also write this as ILM = (LM − FM) × BW. Since every woman studied has a lean body mass higher than the fat body mass, elevating their values to the power of two we obtain a difference that is larger the heavier the lean mass is. Therefore we calculated the value of ILM for every woman involved in the study, found its median and split our population into two groups (group A with ILM values lower than the median and group B with ILM values higher than the median). These two groups were characterized by a strong difference in lean body mass (p < 0.0001), but were not significantly different for fat mass, body weight and BMI. Index of fat mass (IFM) We defined an index in order to obtain two groups not differing for lean mass: IFM = BW / (LM − FM). Since an increase in body weight is generally associated with an increase both in lean mass and in fat mass, but the extent of the increase is higher for fat mass, the higher the body weight, the higher the numerator and the lower the difference between lean mass and fat mass, and, subsequently, the higher the value of IFM. Thus, we calculated the value of IFM for every subject studied, found its median and subsequently split our population into two groups (group C with IFM values lower than the median and group D with IFM values higher than the median). These two groups are characterized by a strong difference in fat mass, body weight and BMI (p < 0.0001), while there are no significant differences for the lean mass composition. Statistical analysis Data distribution was assessed using Shapiro-Wilk's test. Data were reported as mean and standard deviation (SD). Data with deviations from normality were shown as median and range. Differences between the two groups in all tables were calculated using independent Student's t-test or Mann-Whitney's U-test. A p value lower than 0.05 was considered significant for the t-test or U-test. Holm's test [24] was applied to adjust p values for multiple comparisons. Holm's procedure rejected a hypothesis only if its p-value was less than the corresponding critical value. Holm's test was supported also by a discriminant analysis reported in Tables 1, 2 and 3. Pearson's correlation coefficient (r) was reported for all parameters quantified. The data were statistically analysed using SPSS 20.0 (SPSS Inc., Chicago, IL, USA). Results The average age of the 78 women studied was 46 ± 6 years (range 31-59). 24% of them (19 women) had been menopausal for at least one year. ILM and IFM validation Index validation was carried out as follows. Weight, total lean mass and total fat mass are parameters strongly correlated to each other (weight vs total lean mass, r = 0.839 and p < 0.0001; weight vs total fat mass, r = 0.909 and p < 0.0001; total fat mass vs total lean mass, r = 0.538 and p < 0.0001, where r is Pearson's correlation coefficient). Multiple regression can be performed using the least-squares method: total lean mass is the dependent variable and weight is the predictor, with ILM (Index of Lean Mass) included in the model as a covariate. This multiple regression is highly significant, with p < 0.0001, R² = 0.989 and R = 0.995. A simple linear regression with only weight as the independent variable resulted in R² = 0.70 and R = 0.84. So ILM is very important in the explanation of the model. Total lean mass adjusted for weight correlates significantly with different parameters of urinary excretion and bone density.
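To make the index construction and group assignment concrete, the sketch below computes ILM and IFM for a small synthetic cohort and performs the median splits described above. The formulas follow the reconstruction given in this section, and the synthetic data, column names and use of NumPy/pandas are our own assumptions rather than the study's actual code.

```python
import numpy as np
import pandas as pd

def add_body_composition_indexes(df: pd.DataFrame) -> pd.DataFrame:
    """Add ILM, IFM and the median-split group labels used in the analysis."""
    out = df.copy()
    out["BW"] = out["lean_kg"] + out["fat_kg"]                  # total body weight
    out["ILM"] = out["lean_kg"] ** 2 - out["fat_kg"] ** 2       # index of lean mass
    out["IFM"] = out["BW"] / (out["lean_kg"] - out["fat_kg"])   # index of fat mass
    out["ILM_group"] = np.where(out["ILM"] <= out["ILM"].median(), "A (low ILM)", "B (high ILM)")
    out["IFM_group"] = np.where(out["IFM"] <= out["IFM"].median(), "C (low IFM)", "D (high IFM)")
    return out

# Synthetic cohort for illustration only.
rng = np.random.default_rng(42)
cohort = pd.DataFrame({
    "lean_kg": rng.normal(42, 4, size=20).round(1),
    "fat_kg": rng.normal(22, 5, size=20).round(1),
})
cohort = add_body_composition_indexes(cohort)
print(cohort.groupby("ILM_group")[["lean_kg", "fat_kg", "BW"]].mean().round(1))
print(cohort.groupby("IFM_group")[["lean_kg", "fat_kg", "BW"]].mean().round(1))
```

With a split of this kind, the subsequent group comparisons (t-tests or U-tests with Holm adjustment, as described in the statistical analysis) can be run on any urinary or densitometric parameter of interest.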
The Pearson correlations of weight-adjusted total lean mass with these parameters provide results equal to the values obtained from the correlations of ILM with the same parameters. ILM is independent of weight and also of BMI. For example, two subjects may have the same weight and height (same BMI), but completely different total lean mass. ILM is a parameter more specific for total lean mass. It is also clear that subjects with high total lean mass do not necessarily have low total fat mass. The second index, IFM, correlates highly significantly with total fat mass (r = 0.689 and p < 0.0001) and is not correlated with total lean mass. These correlations are confirmed by Discriminant Function Analysis, with standardized canonical discriminant function coefficients reported in Tables 1 and 2 for ILM and in Table 3 for IFM. Table 4 shows the values of body composition and Table 5 shows the urinary lithogenic risk factors after partition of the women according to the median (1296) of the Index of Lean Mass. Groups A and B did not differ in body weight and BMI, but women in group B showed significantly higher height and lean mass (159 ± 6 vs 163 ± 5 cm and 40 ± 4 vs 45 ± 5 kg, p < 0.0001). Moreover, the group with high ILM showed a bone mineral density significantly higher in both upper and lower limbs and in ribs (Table 4). The subjects of group B (with high lean mass) also showed urinary excretion of creatinine, potassium, phosphorus, magnesium, citrate and oxalate significantly higher than that of group A (Table 5). Table notes: Data were reported as mean ± standard deviation (SD), unless otherwise specified. °Data were reported as number of subjects (frequency). +Data were reported as median and range. *p value was calculated with independent Student's t-test, unless otherwise specified. **χ2 test was applied to evaluate the p value. ***Mann-Whitney's U-test was applied to evaluate the p value. ###Significant differences with p adjusted by Holm's test. BMD: Bone Mineral Density. The analysis of three-day dietary diaries did not show differences in the intake of water. Potential Renal Acid Load (PRAL, calculated) [25], proteins, carbohydrates, lipids, sodium, potassium, calcium, phosphorus, and magnesium did not show differences between the two ILM groups (Table 6). The percentage of subjects that regularly performed physical exercise (according to WHO guidelines [26]) was not significantly different either (Group A 31% vs Group B 49%, p = 0.105). Table 7 shows the values of body composition and Table 8 the urinary lithogenic risk factors after subdivision of the women according to the median (3.28) of the Index of Fat Mass. The two groups did not differ in height, but the group with a higher IFM showed significantly greater values of BMI, total trunk mass, total leg mass and total body fat mass. The bone mineral density of the pelvis, lumbar vertebrae and femur, and the respective T and Z scores, were significantly better in Group D, the one with high IFM (Table 7). The urinary lithogenic risk factors (Table 8) showed no differences between Group C (subjects with low IFM) and Group D (subjects with high IFM); moreover, dietary intakes did not reveal significant differences (Table 9). The percentage of subjects regularly performing physical activity appeared instead significantly higher in women with a low Index of Fat Mass (Group C 54% vs Group D 26%, p = 0.01). Discussion Our data demonstrate that the urine composition in our cohort of healthy female volunteers is significantly influenced by the lean mass component of body composition.
A high lean mass promotes a high excretion both of some lithogenesis promoters, such as phosphate and oxalate, and of some lithogenesis inhibitors, such as magnesium, potassium and citrate. A positive trend also seems to occur with other urinary analytes such as sodium, chloride, uric acid and sulphate, although at the limit of statistical significance, perhaps because of the relatively low number of subjects studied. It seems plausible to argue that these findings were not due to differences in dietary intake, as demonstrated by a nutritional analysis performed through a 3-day dietary diary. We can therefore assume that lean mass plays an active role in determining urine composition, while fat mass seems to act as a metabolically inactive bystander. This hypothesis partially conflicts with the current paradigm that considers nephrolithiasis as a systemic disorder strongly linked to metabolic syndrome. There are data showing that high insulin resistance, a possible expression of a high fat mass, leads to lower urinary pH and to a high acid load and ammonium excretion [27]. This would expose subjects with a high fat mass to a higher risk of uric acid stones, although there are also data linking various features of the metabolic syndrome to calcium nephrolithiasis as well [28]. These findings may explain the strong epidemiologic correlation between obesity and kidney stones [3][4][5]. On the other hand, there are also some reports indirectly suggesting that fat mass does not affect lithogenic risk until BMI rises to the range of morbid obesity. For example, Taylor et al. found that lithogenic risk does not rise for a body weight up to 67.7 kg and a BMI up to 27.7 kg/m². Moreover, some recent data show that obesity does seem to determine a higher risk of nephrolithiasis in a cohort of children, but surprisingly does not influence urine chemistries at all [22]. Finally, another recent report shows that in obese stone formers body composition does not influence stone chemistry until very high levels of BMI (> 40 kg/m²) are reached [29], thus indirectly supporting our finding that urine chemistry is poorly influenced by fat mass. In fact, it is remarkable to point out that in our research there was an average difference in body weight of about 13 kg between group C (low fat mass) and group D (high fat mass) (Table 7). The relationship between lean mass and urine composition has indeed been poorly investigated in the literature. However, our findings partially match those of Lemann Jr et al., who demonstrated that oxalate and calcium excretion in males is directly related to creatinine excretion, an index of lean mass composition, in a cohort of healthy subjects [8]. Thus, the increase of lean mass might cause a rise in the risk for calcium nephrolithiasis. We may speculate that a high lean mass leads to higher protein catabolism, thus influencing the differences in urine composition we found in our research.
We must also point out that subjects in group B, the ones with a high lean mass, also had a higher prevalence of physical exercise, although the difference was not statistically significant. It is plausible that physical exercise may promote a more active muscular metabolism, thus causing a higher excretion of metabolites such as oxalate, phosphate and citrate. The analysis of bone mineral density in our subjects confirmed the assumption, already well established in the literature [18], that the higher the body mass, the better the quality of the bone, particularly in the spine (Table 7). The women with high IFM had a significantly higher bone mineral density in lumbar vertebrae, pelvis and femur. This group also shows a low percentage of subjects regularly performing physical activity (26% vs 54%). This tallies with published data showing that in premenopausal sedentary women bone mineral density correlates with fat mass [30]. However, we have to consider that in our model total body weight increases as fat mass rises, suggesting a non-linear dose-response relationship of fat mass on BMD, as previously suggested [17]. On the other hand, the group with high lean mass shows better mineral density in upper and lower limbs and ribs. This group also includes subjects taller than the ones with low lean mass. It has already been demonstrated that height correlates with lean mass and mineral density of extra-axial bones [30,31]. We can also suppose that a better bone mineral density in limbs and ribs is, at least partially, due to physical exercise, with a subsequent increase in muscle mass and a mechanical anabolic stimulus on the bone [32,33]. We are aware of some limitations of our study. First, the number of subjects studied is rather low. Secondly, the groups were split on the basis of mathematical indexes built to highlight lean mass and fat mass and not on the basis of direct measures. Moreover, we did not carry out an analysis distinguishing pre-menopausal and post-menopausal women. Finally, the analysis of a three-day dietary diary may not exhaustively capture the real dietary habits of a subject; nevertheless the diaries were interpreted by a dietitian during a meeting and the results do not change even after correction for body weight. Conclusions This paper suggests that in healthy women with a similar dietary intake, fat mass does not seem to influence the urinary excretion of lithogenic risk factors, which on the other hand seems to be much more dependent on the level of lean mass. Moreover, bone mineral density seems to be influenced by fat mass, while lean mass might play a positive role particularly on the extra-axial skeleton, as a possible result of the muscular activity. However, the field of interactions between body composition and mineral metabolism is far from being fully understood. Further research on larger cohorts both of healthy subjects and of kidney stone formers or people with osteoporosis will clarify the specific role of lean mass and fat mass.
2014-10-01T00:00:00.000Z
2013-10-07T00:00:00.000
{ "year": 2013, "sha1": "3c7221b6e1d7756e4f29107ef22a0072567d9307", "oa_license": "CCBY", "oa_url": "https://translational-medicine.biomedcentral.com/track/pdf/10.1186/1479-5876-11-248", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3c7221b6e1d7756e4f29107ef22a0072567d9307", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
34679066
pes2o/s2orc
v3-fos-license
Pure-transvaginal natural orifice transluminal endoscopic surgery (NOTES) ovariohysterectomy in bitches: a preliminary feasibility study Natural orifice transluminal endoscopic surgery (NOTES) is a relatively new surgical access for minimally invasive surgery, which is being widely studied in human medicine. However, few studies focusing on its applicability in small animal practice have been performed so far. The aim of the current study was to evaluate the feasibility of pure-NOTES transvaginal ovariohysterectomy in bitches. Five bitches were evaluated. The abdomen was accessed through an 11 mm trocar inserted through a vaginal incision. Using a rigid endoscope with a working channel, the ovarian pedicles were coagulated and sectioned using bipolar diathermy. The uterine horn was pulled into the trocar and exteriorized along with the cannula. The uterine body and vessels were coagulated or ligated. The uterine stump was replaced into the abdominal cavity and the pneumoperitoneum drained. Pure-NOTES OHE was successfully accomplished in four out of five bitches. In the first try, it was converted to a hybrid-NOTES technique due to instrument failure. Mean surgical time was 52.1 minutes (SD ± 11.5) for the pure-NOTES technique. Pure-NOTES OHE is feasible in bitches and may result in no major complications and excellent surgical recovery. INTRODUCTION Ovariohysterectomy (OHE) is one of the most routinely performed surgeries in veterinary practice. Furthermore, several surgical techniques of OHE and their benefits and complications have been widely discussed for decades (PEARSON, 1973; SPAIN et al., 2004). The conventional OHE is usually accessed through a median incision, which frequently encompasses half or the middle third of the umbilicopubic distance. Pain is one of the most common early postoperative complications of laparotomy and frequently requires rescue analgesia (DEVITT et al., 2005). In the past 10 years, several laparoscopic and video-assisted spay techniques were developed in small animals (AUSTIN et al., 2003; DEVITT et al., 2005; FREEMAN et al., 2009). Natural orifice transluminal endoscopic surgery (NOTES) is a relatively new concept of surgical access, mainly for abdominal surgery. The main natural orifices used in NOTES include the oral cavity (transgastric approach), vagina and colon (KAVIC, 2006). Although accessing the abdomen through natural orifices may be challenging, several advantages of NOTES have been highlighted in human medicine and animal models. Absence of cutaneous scars, short hospital stay, short convalescence period and less postoperative pain are the main benefits of NOTES (PEARL & PONSKY, 2008). A technique of hybrid-NOTES OHE in dogs has been described, which led to fast recovery and minimal surgical trauma (BRUN et al., 2009; BRUN et al., 2011). However, to our knowledge, the technique of pure-NOTES OHE has not been published to date. Therefore, the aim of the present study was to evaluate the feasibility of pure-NOTES ovariohysterectomy in bitches.
MATERIAL AND METHODS Five young bitches, weighing 12.6 kg (SD ± 2.7), were admitted for elective ovariohysterectomy. A physical exam was performed and blood samples were collected for routine preoperative examination. The bitches were fasted for 12 hours and their abdomen and perineum were clipped. The patients were premedicated with acepromazine (0.02 mg kg-1), midazolam (0.4 mg kg-1) and morphine (0.5 mg kg-1), intramuscularly. Anesthesia was induced with propofol (6 mg kg-1 IV) and maintained with isoflurane in 100% oxygen. Additionally, an epidural block was performed with lidocaine (0.13 ml kg-1) and bupivacaine (0.13 ml kg-1). The animals were positioned in dorsal recumbency. The abdomen was prepared aseptically and the vagina was rinsed with 0.1% PVP-I solution in normal saline (10 ml kg-1). For the transvaginal access to the abdominal cavity, the mucosa of the vaginal fornix was grasped with two curved Kelly forceps and pulled caudally. A stab incision was performed in the mucosa (Figure 1A) and the submucosal layer was dissected with curved Metzenbaum scissors. An 11 mm disposable trocar with a blunt tip was blindly inserted into the abdominal cavity through the vaginal mucosa incision (Figure 1B). Complete insertion of the trocar was confirmed by abdominal palpation. CO2 insufflation was performed until an intra-abdominal pressure of 10 mmHg was achieved (1 L min-1 gas flow). A 10.5 mm operative laparoscope with a 5.5 mm working channel was inserted into the abdominal cavity for initial inspection. The animal was turned to the left in order to expose the right ovarian pedicle. A 42 cm laparoscopic Babcock forceps was inserted through the working channel of the laparoscope and the right ovary was grasped and raised to the abdominal wall for a trans-abdominal suspension suture (Figure 1C). A 42 cm laparoscopic bipolar forceps was used to coagulate and cut the ovarian pedicle simultaneously (Figure 1D). The bipolar coagulation was set to 40 watts on the electrosurgical generator. The animals were turned to the right. The same surgical approach was performed on the opposite ovarian pedicle. The left ovary was then grasped with the Babcock forceps, the tacking suture was released and the ovary was pulled into the trocar (Figure 1E). The trocar was then withdrawn from the vaginal canal along with the left ovary. Gentle traction was performed to exteriorize the left uterine horn, the uterine body and the right uterine horn and ovary. The uterus was ligated with a double circulating/transfixing ligature with polyglactin 910 2-0 thread and resected in bitches nº 1 and 2, and coagulated and cut with the bipolar forceps (Figure 1F) in bitches nº 3, 4 and 5. The uterine stump was checked for bleeding and repositioned within the abdominal cavity using gentle digital pressure. Careful inspection of the vaginal incision was carried out for bleeding. No suture was applied at the vaginal incision and the bitches were ready to convalesce. The procedure could be converted to hybrid-NOTES with insertion of an 11 mm trocar at the midline, 2 cm caudally from the umbilicus, if needed. In case of massive/uncontrollable bleeding, laparotomy would be carried out to manage the hemorrhage adequately. Cephalexin (30 mg kg-1 VO BID for 6 days), tramadol (2 mg kg-1 VO BID for 3 days) and meloxicam (0.1 mg kg-1 VO SID for 3 days) were given postoperatively.
The surgical procedure was divided into seven stages: transvaginal access, establishment of the pneumoperitoneum, approach to the right/left ovarian pedicle, exteriorization of the uterus, hemostasis of the uterine stump and vessels, and inactive/inoperative time. The time of each surgical stage was analyzed descriptively and expressed as a mean value (±SD), in minutes (min). At the end of the surgical procedure, the ovaries and uterus were fixed in 10% buffered formalin for histological evaluation. After standard histological preparation, the slides were stained with the hematoxylin and eosin (H&E) technique. The slides were evaluated under optical microscopy. The histological findings were analyzed descriptively. RESULTS The mean overall surgical time was 52.1 minutes (SD ± 11.5). Approximately 8 minutes (SD ± 1.9) were spent in order to reach the abdominal cavity through the vagina. The blind insertion of the trocar was safe and easy to perform. The first attempt of pure-NOTES OHE (animal nº 1) was not successful due to instrument failure. The disposable bipolar coagulation/cut forceps was bent and broke inside the working channel of the endoscope during the coagulation of the left ovarian pedicle. The procedure was then converted to a hybrid-NOTES technique. The rigid endoscope was introduced through the abdominal port and a 10 mm clip applier was inserted through the transvaginal port. The left ovarian pedicle was triple-ligated with titanium clips and the pedicle was resected with 5 mm Metzenbaum scissors, inserted through the transvaginal port. The same technique was used to reach hemostasis of the right ovarian pedicle. The uterus was exteriorized transvaginally and double-ligated with 2-0 polyglactin 910. In bitch nº 1, hybrid-NOTES OHE was accomplished in 92.5 minutes. Animal nº 1 was excluded from the assessment of the surgical time. The time spent on each surgical stage is shown in Table 1 and the linear distribution of surgical time is expressed in Figure 2. No complications regarding transoperative bleeding, postoperative pain or infection were noted. Minor complications included mild vaginal bleeding, which stopped spontaneously within 24 hours in all bitches. According to the owner, bitch nº 5 presented vaginal bleeding on the 21st day post-op, which persisted for six hours and ceased without adjuvant treatment. It was not possible to estimate the blood loss precisely. Moreover, the cause of this finding could not be determined, since the owner did not take the animal to the Veterinary Hospital for examination. There was slight vaginal swelling in all bitches. Vaginal swelling was absent in all patients four days after the surgical procedure. Hypothermia (core body temperature <36ºC) occurred in all bitches and resolved within the first hour post-op. The histological assessment revealed that the ovaries were completely resected with margins in all bitches. Three out of five uteri presented a normal histological pattern of the anestrus phase. The uterus of bitch nº 1 presented severe cystic endometrial hyperplasia (CEH), which was histologically characterized by a great number of large cysts in the endometrium. There was no local inflammatory response or fibroblast proliferation. One of the bitches was spayed about 45 days postpartum. Histologically, there were large masses of collagen fibers detaching from the placental sites.
DISCUSSION The pure-transvaginal NOTES OHE technique was feasible in small or medium-size bitches. The surgical procedures presented operative times similar to those of other laparoscopic techniques for OHE. A three-portal laparoscopic ovariohysterectomy with a harmonic scalpel in bitches has been described; the mean surgical time was 55.7 minutes (HANCOCK et al., 2005). In a surgical trial involving transgastric NOTES, a laparoscopic approach and open oophorectomy, the mean surgical time was 76, 44 and 35 minutes, respectively (FREEMAN et al., 2011). The shortest mean surgical time for endoscopic OHE described in the currently available literature was 20.8 minutes (SD ± 4.0) (DEVITT et al., 2005). A two-port video-assisted technique was used and the hemostasis of the ovarian pedicles was carried out with a simultaneous bipolar coagulation/cut forceps (DEVITT et al., 2005). It is believed that the surgical time obtained using the pure-NOTES OHE technique proposed in the current study was fairly good. However, it is truly believed that optimal surgical time will be achieved as soon as the learning curve has been reached. The development of the pure-NOTES OHE was based on the principles of the single-port video-assisted OHE technique (SILVA et al., 2011), associated with the principles of hybrid-NOTES OHE (BRUN et al., 2009; BRUN et al., 2011). Based on the results of a previous assessment of the learning curve of single-port video-assisted OHE in bitches (SILVA et al., 2011), it was hypothesized that between 20 and 30 pure-NOTES OHE should be performed in order to reach optimal surgical time. The use of a standard operative laparoscope for the NOTES OHE in bitches turned this technique into an attractive option for surgical contraception. However, the endoscope reached only the caudal part of the abdomen, at the umbilical level. It is believed that the use of recently developed NOTES flexible endoscopes and instruments would substantially increase the cost-effectiveness, technical difficulties and the surgical time of the vaginal access for NOTES, as mentioned in another study (FREEMAN et al., 2009). Table 1 - Intra-operative time (min) of each stage of the transvaginal pure-NOTES ovariohysterectomy in bitches, by animal Id (nº). It is believed that the pure-NOTES technique could be one of the least invasive surgical approaches for OHE in bitches, which requires no wound care and may produce less painful stimuli during the surgery and in the early post-op period. Furthermore, the absence of an abdominal incision can reduce to zero the possibility of postsurgical herniation or evisceration, as the only incision required is performed within the vagina. The pure-NOTES technique required no additional port and eliminated the possibility of abdominal herniation, emphysema and seroma in the present study. One of the disadvantages of the pure-NOTES OHE is the lack of compatibility between the size of the instruments and the animals' biometry. The working length of the operative laparoscope was 27 cm and the total reach of the 42 cm bipolar forceps inside the working channel of the endoscope was 32 cm. Thus, it was observed that bitches presenting more than 30 cm in length between the ovarian pedicles and the vulva are not good candidates for the pure-NOTES technique. However, the technique should be tested in larger animals in order to verify this hypothesis.
Another limitation of the pure-NOTES technique is inherent to the amount of fat tissue surrounding the ovarian bursa. In such cases, the exteriorization of the ovaries may be more difficult, increasing the risk of rupture of the uterine horn or loss of ovarian tissue within the abdominal cavity, as reported in another study (HANCOCK et al., 2005). However, the ovary and adjacent tissues entered the 11 mm vaginal port and vaginal exteriorization was possible in all animals of the current study. It was hypothesized that the antibiotic therapy was effective in avoiding post-operative infection. The literature regarding the harmfulness of the canine vaginal microflora to the peritoneal cavity and the prophylaxis of post-surgical infection following NOTES procedures in dogs is sparse. Moreover, the vaginal defect was not sutured, which could leave the vaginal canal in communication with the abdominal cavity for hours or days. However, this possibility has not been assessed so far. Therefore, the use of post-operative antibiotics for six days in the current trial was coherent. The vaginal swelling possibly lasted longer in bitch nº 3 due to the smaller diameter of its vagina. It was observed that a tight vaginal canal led to greater trauma during insertion of the 11 mm trocar. Therefore, pure-NOTES OHE may not be applicable in bitches whose vagina does not fit an 11 mm or thicker trocar. In the hybrid transvaginal NOTES OHE technique, the authors found that it was possible to insert an 11 mm trocar through the vagina in one bitch weighing 4.2 kg (BRUN et al., 2009). Vaginal bleeding in animal nº 5 had no major clinical relevance and was treated conservatively. It is truly believed that a simple modification of the technique presented in the current study would prevent early or late vaginal wound bleeding. Histological findings of bitch nº 4 were within the normal patterns of uterine involution in the canine species (AL-BASSAM et al., 1981). The histological findings of the uterus of bitch nº 1 were compatible with severe cystic endometrial hyperplasia (DE BOSSCHERE et al., 2001), which was successfully managed with the hybrid-NOTES OHE technique. The pure-NOTES OHE seems to be feasible in animals with uterine disorders. Moreover, the present study confirmed the viability of conversion from the pure-NOTES to the hybrid-NOTES technique (BRUN et al., 2011). Figure 1 - Transvaginal pure-NOTES ovariohysterectomy in bitches. (A) Incision of the vaginal mucosa (arrow); (B) placement of a blunt-tip trocar into the abdomen through the vagina; (C) placement of a transabdominal suture (arrow) through the mesosalpinx (m) to keep the right ovary (o) raised against the abdominal wall; (D) coagulation of the ovarian pedicle (p) and suspensory ligament (arrow) using the Lina Tripol Powerblade™ forceps; kidney (k). (E) The left ovary (o) is pulled into the trocar; (F) coagulation of the uterine body (u) and vessels. Figure 2 - Linear distribution of the surgical time of the first five attempts of transvaginal pure-NOTES ovariohysterectomy (OHE) in bitches. OHE of the patient nº 1 was converted to the hybrid-NOTES technique. Table 1 notes: *Including time spent for cleaning the lens of the endoscope, repositioning the patient and inspecting the abdominal cavity and ovarian pedicles. §Animal nº 1 was excluded from the assessment of the global surgical time.
2017-09-07T20:19:13.727Z
2012-07-01T00:00:00.000
{ "year": 2012, "sha1": "672acf9dc3a9a65cdbbb0040f2bdd2de64788827", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/cr/a/gDV7d84mLDvqKLMvyCjn9Cw/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "672acf9dc3a9a65cdbbb0040f2bdd2de64788827", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247888842
pes2o/s2orc
v3-fos-license
A Platform Approach to Protein Encapsulates with Controllable Surface Chemistry The encapsulation of proteins into core-shell structures is a widely utilised strategy for controlling protein stability, delivery and release. Despite the recognised utility of these microstructures, however, core-shell fabrication routes are often too costly or poorly scalable to allow for industrial translation. Furthermore, many scalable routes rely upon emulsion techniques implicating denaturing or environmentally harmful organic solvents. Herein, we investigate core-shell protein encapsulation through single-feed, aqueous spray drying: a cheap, industrially ubiquitous particle-formation technology in the absence of organic solvents. We show that an excipient's preference for the surface of the spray dried particle is well-predicted by its hydrodynamic diameter (Dh) under relevant feed buffer conditions (pH and ionic strength) and that the predictive power of Dh is improved when measured at the spray dryer outlet temperature compared to room temperature (R2 = 0.64 vs. 0.59). Lastly, we leverage these findings to propose an adaptable design framework for fabricating core-shell protein encapsulates by single-feed aqueous spray drying. Core-Shell Particles for Protein Encapsulation The encapsulation of proteins within amorphous dried particles has become a ubiquitous paradigm across the pharmaceutical and food industries. In comparison to bulk drying methods, particle forming technologies afford enhanced control over the end product with respect to bulk material homogeneity, protein release kinetics, aerosolisability, and handling properties (i.e., powder flow) [1][2][3][4]. Additionally, these technologies may be coupled with particle engineering methods to access a vast array of advanced particle structures. Core-shell structured particles are amongst the most desirable, particularly with regard to applications involving protein encapsulation. In these cases, biphasic segregation of formulation components enables incorporation of multi-functional materials that might otherwise compromise protein stability in the bulk phase. Protein core-shell particles typically consist of an inner protein/stabilising excipient 'core' surrounded by an outer 'shell' layer, often comprised of a polymer or wax, which forms a protective encasing of the labile cargo. Core-shell structures have been used in protein formulations to introduce advanced functionalities such as high precision controlled/triggered release, selective gas/solvent permeability, in vivo targeting capabilities, enhanced bio-absorption [5], improved dissolvability, reduced particle agglomeration, and increased stability in the presence of various stress vectors: humidity, heat, light, oxidants, etc. A diverse array of benefits is thus associated with these formulations. According to the diffusional (solute segregation) picture of droplet drying, the extent to which a dissolved species can keep pace with the receding droplet surface is governed by its diffusion coefficient, $D = \mu k_B T = k_B T / (6 \pi \eta r_u)$, wherein D is the particle diffusion coefficient, k_B is Boltzmann's constant, T is the absolute temperature, µ is the particle mobility, η is the solvent viscosity, and r_u is the particle hydrodynamic radius. Solute species with larger hydrodynamic radii (r_u), and in turn slower diffusion coefficients (D), lag behind their smaller counterparts. As a result, the larger species more quickly reach their saturation limits and precipitate at the droplet-air interface, forming a particle shell enriched with the larger solute species [9,[11][12][13][14]. In the water-evaporation flux and surface activity theories, in contrast, the surface becomes enriched with hydrophilic and surface active compounds, respectively [15,16].
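To put rough numbers on the diffusional argument, the sketch below estimates diffusion coefficients via the Stokes-Einstein relation at room temperature and at a typical spray-dryer outlet temperature, together with a simple droplet Péclet number as an indicator of surface enrichment. The temperature and viscosity values, the example excipient sizes, and the use of the Péclet number as the enrichment criterion are illustrative assumptions on our part, not measurements or methods from this study.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

# Approximate dynamic viscosity of water, Pa*s (assumed literature-style values).
WATER_VISCOSITY = {25.0: 0.89e-3, 70.0: 0.40e-3}

def stokes_einstein_D(d_h_nm: float, temp_c: float) -> float:
    """Diffusion coefficient (m^2/s) of a sphere of hydrodynamic diameter d_h_nm at temp_c."""
    eta = WATER_VISCOSITY[temp_c]
    r = 0.5 * d_h_nm * 1e-9
    return K_B * (temp_c + 273.15) / (6.0 * math.pi * eta * r)

def peclet(d_h_nm: float, temp_c: float, kappa_m2_s: float = 1e-9) -> float:
    """Droplet Peclet number Pe = kappa / (8 D); Pe >> 1 suggests surface enrichment.

    kappa is the droplet evaporation rate (d^2-law constant); 1e-9 m^2/s is a placeholder.
    """
    return kappa_m2_s / (8.0 * stokes_einstein_D(d_h_nm, temp_c))

for name, d_h in [("protein (~5 nm)", 5.0), ("silica excipient (~120 nm)", 120.0)]:
    for t in (25.0, 70.0):
        print(f"{name:28s} T={t:4.0f} C  D={stokes_einstein_D(d_h, t):.2e} m^2/s  Pe={peclet(d_h, t):.1f}")
```

In this toy comparison the larger species carries a much higher Péclet number and would therefore be expected to enrich at the drying droplet surface, while evaluating D at the elevated outlet temperature changes the absolute values, which is one plausible reason why Dh measured under drying-relevant conditions can be a better predictor than a room-temperature value.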
These theories are not mutually exclusive and, in fact, it is likely a combination of all three that dictates the final morphology of a spray dried particle. Moreover, it is important to note that the relative contribution of each predictor should not be regarded as a constant, but rather as a complex function of system conditions (e.g., drying speed) and the degree of variance encompassed by the system components. Nevertheless, these theories provide useful frameworks for the rational design of self-assembled core-shell structures by spray drying. For example, Chen et al. reported the single-step assembly of highly uniform core-shell structures from an aqueous two-component system consisting of common biocompatible excipients: nanoparticles of Eudragit RS (ethyl acrylate-methyl methacrylate copolymer with a low content of methacrylic acid ester bearing quaternary ammonium groups) and silica sol (hydrolysed tetraethyl orthosilicate) [17]. The final microparticles exhibited a core-shell morphology comprising a silica shell and a Eudragit RS core. The authors attributed this segregation to the disparity in component hydrodynamic diameters (D_h); the D_h of hydrolysed TEOS and Eudragit RS were 1 and 120 nm, respectively. While this study was preceded by a number of reports demonstrating enrichment of larger solutes on the surface of the spray-dried particle, it served as the first example of true core-shell particle formation from an aqueous single-feed spray drying set-up [11][12][13][14]. EISA of Core-Shell Protein Encapsulates by Spray Drying Despite these developments, EISA of core-shell particles by single-feed spray drying remains challenging for even simple binary systems; these challenges are exacerbated in complex formulations wherein the encapsulated 'core' species is a metastable biomacromolecule with surfactant character, i.e., a protein. In fact, the preferential migration of proteins to the droplet-air interface during the drying process makes them common choices as shell-forming encapsulation agents [18]. Nonetheless, when proteins are the active compound, there are a number of proven approaches for limiting their surface adsorption in spray dried particles. The simplest and most common of these is the incorporation of surfactants. Indeed, Pinto et al. performed an analysis of literature trends in spray dried protein pharmaceuticals and found that 10% of all feed solutions incorporated a surfactant of some kind. Furthermore, of the four commercially approved spray dried protein pharmaceuticals, two incorporate surfactant excipients [19]. These additives, however, do not generally provide stabilisation alone and in fact may compromise the long-term stability of the protein powder. It is therefore necessary to incorporate surfactants alongside additional stabilising excipients [19]. Moreover, this approach is unfit for applications wherein a functional shell is desired, as the chemical properties of the particle surface are then determined only by those of the surfactant. Limiting the surface adsorption of costly protein pharmaceuticals during spray drying has also been achieved by the addition of 'sacrificial' protein species. These protein excipients competitively adsorb at the droplet-air interface, thereby displacing the more precious protein [20]. This method was employed by 3% of all reports of spray dried protein pharmaceuticals within the past 30 years [19].
The difference in preference for two proteins to adsorb at the air-water interface is typically small; as such, high loadings of the excipient protein tend to be required for near-complete competitive displacement of the protein of interest, significantly increasing the overall cost of formulation [21]. Furthermore, it should be noted that the incorporation of protein excipients in spray dried formulations has been shown to drastically affect the bioavailability of the pharmaceutical, often detrimentally [22]. The benefits of reduced surface adsorption must therefore be weighed against the potential drawbacks associated with each unique formulation scenario. Study Aims We aimed to develop a scalable platform for the fabrication of core-shell protein encapsulates by simple, single-feed spray drying (Scheme 1). The principles of EISA were adapted to an industrially representative system. A semi-pure, commercially relevant protein was used, whilst organic solvents, expensive and/or toxic chemicals, and specialised spray drying equipment were avoided to maintain the industrial relevance and translatability of our findings [9]. Scheme 1. Study overview. Our investigation was designed with the intention of relating readily tunable feed solution parameters to the core-shell morphology of dried protein encapsulates. To achieve this, we applied a modified fractional factorial Design of Experiment (DoE) to a series of sixteen feed solutions investigating six factors. Feed solutions were systematically modified to isolate the effects of (1) pH, (2) ionic strength, (3) excipient D_h, (4) excipient surface functionality, (5) total dissolved solids, and (6) the ratio of excipient to protein. To enhance the tunability of our system, we worked exclusively with silica nanoparticle excipients. These nanoparticles could be readily altered in terms of size and surface functionality, enabling the effects of the excipient sterics (D_h) and electronics (polarity, charge, etc.) to be studied directly. Moreover, the true size and surface chemistry of the nanoparticle excipients could be compared to the effective D_h and zeta potential observed within the buffered feed solution. Relating these values to the obtained morphology gave insight into the extent to which aqueous 'solvent engineering' could influence particle microstructure. Overall, our work assesses the feasibility of using the principles of EISA to access core-shell microparticles in an industrially representative system: namely, the single-feed, aqueous spray drying of a semi-pure protein. We identify parameters with high predictive power and show how these can be tuned to control the surface preference of excipients. Furthermore, we discuss how these predictors can be manipulated via both ex situ and in situ approaches. Our work provides insight into how tunable morphologies can be accessed in sensitive systems such as those containing biologics. Finally, we propose a highly adaptable and simple platform approach to enhance the extent of encapsulation for a wider array of bioactive compounds. System Design A series of twenty-two feed solutions were designed to investigate the effects of (1) pH, (2) ionic strength, (3) excipient D_h, (4) excipient surface functionality, (5) total dissolved solids, and (6) the ratio of excipient to protein on the morphology and surface composition of spray dried protein formulations (Table 1). pH and CaCl2 concentration are both rounded to one significant figure.
The subscript is eliminated for samples wherein [Excipient]/([Protein] + [Excipient]) = 20 wt%. The superscript is eliminated for samples wherein the concentration of total dissolved solids is 3 wt%. For example, the sample denoted as 5-0[small-OH] is a powder prepared from feed buffer at pH 5.5 with 0% (w/v) CaCl2, containing unfunctionalised small silica nanoparticles (D_h = 16 ± 1 nm in water) incorporated at 20% (w/w) relative to the total mass of excipient + protein (3% (w/w) solution). b Excipient size is indicated as small, medium, or large. All excipients are silica nanoparticle based. The D_h of small nanoparticles was measured to be 16 ± 1 nm in water. Medium nanoparticles refer to either unfunctionalised or functionalised derivatives of silica nanoparticles with D_h = 38 ± 1 nm in water. Large nanoparticles are characterised by D_h = 97 ± 2 nm in water. The intensity-weighted hydrodynamic size distribution for the small nanoparticles in water is provided in Figure A1. D_h distributions for medium and large nanoparticles in water are provided in Figure A2. Of the formulations included, sixteen contained excipients; fourteen of these were studied by scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), and elemental analysis (EA) to assess both the morphology and the surface composition of the obtained particles. The two remaining formulations were designed to probe the influence of the total dissolved solids content and the ratio of excipient to protein on particle morphology (5-0[med-OH]_15 and 5-0[med-OH]_50). These were characterised only by SEM (Figures A6 and A9). The five feed solutions containing only protein and buffer were included as controls to isolate the effect of the excipient itself on the morphology of the spray dried particles (Figure A1). Nanoparticle size and surface functionality were studied to probe the effect of directly modifying excipient molecular size and hydrophilicity. In contrast, feed solution pH and salt (CaCl2) concentration were investigated as indirect methods of controlling the effective D_h and colloidal stability of the excipient. Two additional factors unrelated to excipient properties (the total concentration of dissolved solids in the feed buffer ([Excipient + Protein]) and the excipient loading ratio ([Excipient]/[Excipient + Protein])) were also studied to understand their effect on particle morphology. Parameter levels, selected taking into consideration synthetic feasibility/commercial availability, protein stability, and industrial relevance, are discretely defined in Figure 2. Characterisation of Synthesised Nanoparticles Three sizes of silica nanoparticles were compared. Particles of D_h = 16 ± 1 nm (small) and D_h = 38 ± 1 nm (medium) were purchased as commercially available Ludox suspensions (Figure A1). A third particle size, 97 ± 2 nm (large), was synthesised by a seed-growth method using AS40 Ludox silica as the precursor seed (Figure A2). Characterisation of Functionalised Nanoparticles To test the influence of surface functionality on spray dried morphology and nanoparticle surface adsorption, nanoparticles with three different surface functionalities (hydroxyl (SiOH/SiO⁻), aminopropyl, and octyl) were studied. Unfunctionalised Ludox silica nanoparticles (AS40) carried a hydroxyl surface. Aminopropyl- and octyl-functionalised silica nanoparticles were prepared by modification of Ludox (AS40) as described in the Methods section.
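As a small aid to reading the sample codes described above, the following sketch (ours; the underscore and caret used to encode the paper's typographic sub- and superscripts are our own plain-text convention) decodes a code such as 5-0[small-OH] into its factor levels, applying the stated defaults of 20 wt% excipient ratio and 3 wt% total dissolved solids.

```python
# Hedged helper: decode the sample naming convention, e.g.
# "5-0[small-OH]" -> pH code 5, 0% (w/v) CaCl2, small unfunctionalised SiNP,
# default 20 wt% excipient ratio and 3 wt% total dissolved solids.
import re

PATTERN = re.compile(
    r"(?P<ph>\d+)-(?P<cacl2>\d+)\[(?P<size>small|med|large)-(?P<surf>OH|NH2|Octyl)\]"
    r"(?:_(?P<ratio>\d+))?(?:\^(?P<solids>\d+))?"
)

def parse_sample_code(code: str) -> dict:
    m = PATTERN.fullmatch(code)
    if m is None:
        raise ValueError(f"unrecognised sample code: {code}")
    return {
        "pH_code": int(m["ph"]),                 # rounded pH (5 stands for 5.5)
        "CaCl2_wv_pct": int(m["cacl2"]),
        "excipient_size": m["size"],
        "surface": m["surf"],
        "excipient_ratio_wt_pct": int(m["ratio"] or 20),  # default 20 wt%
        "total_solids_wt_pct": int(m["solids"] or 3),     # default 3 wt%
    }

print(parse_sample_code("5-0[small-OH]"))
print(parse_sample_code("5-0[med-OH]_50"))
```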
Particle functionalisation was confirmed by zeta potential (ζ) measurements in MilliQ water (Figure A3). The negatively charged (−40 mV) hydroxylated (SiOH/SiO⁻) surface and the positively charged (+35 mV) aminopropyl functionalisation exhibited good colloidal stability. Functionalisation with octyl groups yielded a near-neutral (+3 mV) zeta potential, indicative of an uncharged surface. Nanoparticle functionalisation also influenced particle size (Figure A4). Both surface functionalities induced nanoparticle aggregation. Aggregation was more extensive when particles were functionalised with aminopropyl moieties, likely indicative of electrostatic interactions between functionalised (positively charged) and residual unfunctionalised (negatively charged) surface domains. Characterisation of Colloidal Feed Solution Nanoparticle excipients were also characterised under feed buffer conditions prior to spray drying. The measured D_h and ζ values indicated the effective in situ size of the excipient nanoparticles under relevant processing conditions. To more accurately simulate the conditions during particle formation, characterisation was performed at both room temperature (RT) and the mean spray dryer outlet temperature (70°C). The intensity-weighted D_h and ζ for buffered excipients at RT and T_outlet are tabulated in Table 2. Table 2. Hydrodynamic diameter (D_h) and zeta potential (ζ) values for excipients in feed buffer at room temperature (RT, 25°C) and outlet temperature (T_outlet, 70°C). Characterisation of Spray Dried Particles 2.5.1. General Morphology The morphologies of spray dried particles were assessed by SEM. Whilst the extent of core-shell structure could not be observed from the microstructure alone, several general trends were found to characterise the morphologies of the systems studied. First, it was found that buffer composition (i.e., ionic strength and pH-modifying components) strongly governed particle morphology in the absence of excipient (Figure 3). In particular, high salt concentrations tended to induce needle-like crystal formation and particle fusion. Upon the incorporation of nanoparticle excipients, however, these morphological changes could be counteracted (Figure 3). Further, it was found that the nature of the excipient, i.e., nanoparticle size (Figure A7) and/or surface functionality (Figure A8), did not significantly influence the morphology of the obtained particles. These results suggest that the counteractive effect of nanoparticle excipients on buffer-induced particle morphology perturbations is likely attributable to the 'dilution' of buffer components in the dried particle, an effect largely indifferent to the chemical and physical properties of the excipient. Core-Shell Structure The extent to which the obtained particles exhibited core-shell morphology was assessed by the procedure described in Figure 4. The representation of protein and nanoparticle excipient was tracked by measuring the abundance of sulphur and silicon, respectively. Bulk compositions were determined by elemental analysis, whilst surface compositions were measured via XPS. From the measured surface and bulk compositions of sulphur (a proxy for protein) and silicon (a proxy for nanoparticle excipients), the percent preferential surface adsorption expressed by the excipient and the protein could be readily calculated. These calculations, as well as the raw elemental compositions for Si and S in the bulk and at the surface of the dried material, are reported in Table 3.
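The paper defines its enrichment metric in its Figure 4, which is not reproduced in this text; the sketch below therefore assumes a common definition, namely the relative excess of an elemental proxy at the surface (XPS) versus the bulk (EA), and the example readings are hypothetical.

```python
# Hedged sketch of a surface-enrichment metric (assumed form, not verified
# against the paper's Figure 4):
#   preferential surface adsorption (%) = 100 * (surface - bulk) / bulk
def preferential_adsorption(surface_wt_pct: float, bulk_wt_pct: float) -> float:
    """Positive: enriched at the particle surface; negative: depleted from it."""
    if bulk_wt_pct <= 0:
        raise ValueError("bulk composition must be positive")
    return 100.0 * (surface_wt_pct - bulk_wt_pct) / bulk_wt_pct

# Hypothetical XPS (surface) and ICP-OES/EA (bulk) readings, in wt%:
si_surface, si_bulk = 1.2, 5.0  # silicon -> excipient proxy
s_surface, s_bulk = 0.9, 0.6    # sulphur -> protein proxy
print(f"excipient: {preferential_adsorption(si_surface, si_bulk):+.0f} %")
print(f"protein:   {preferential_adsorption(s_surface, s_bulk):+.0f} %")
```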
Investigation of Predictive Parameters Diffusion-controlled self-assembly has been shown to yield core-shell structures wherein the shell layer is formed by components with slow diffusion coefficients (D) and, in turn, large hydrodynamic size (D_h) (Figure 1). To test whether we could harness this phenomenon to control the surface composition of spray-dried particles containing protein, we compared the D_h of three sizes of nanoparticle excipients (5-0[small-OH], 5-0[med-OH], and 5-0[large-OH]) in water against the surface preference exhibited by these excipients (Figure 5). The trend in size did not follow the trend in surface preference for the three samples studied, although the difference in preference between 5-0[small-OH] and 5-0[med-OH] was relatively small (−82% vs. −90%) compared to that calculated for 5-0[large-OH] (−24%). This changed, however, when the D_h of the excipient in the feed buffer was plotted against surface preference; in this case, the D_h did predict the excipient surface preference. A plot of the D_h in buffer vs. the excipient surface preference for all three formulations yielded a straight line with R² = 0.999 (Figure 5). Contrary to expectation, the surface preference of the excipient in formulations 5-0[small-OH], 5-0[med-OH], and 5-0[large-OH] was negative in all three cases. This could be the result of competition from buffer salts (sodium acetate) precipitating at the particle surface. To test this theory, the sodium content was measured for each surface and was indeed found to be high (27, 24 and 6 wt% for samples 5-0[small-OH], 5-0[med-OH], and 5-0[large-OH], respectively). We next decided to study more broadly the relationship between an excipient's in situ D_h and its preferential adsorption at the particle surface. We plotted the in situ D_h of excipients in thirteen formulations against the measured preferential surface adsorption (Figure 6a). The results indicated a moderate linear correlation (R² = 0.59). Interestingly, this correlation improved (R² = 0.64) when the D_h was measured at the spray dryer outlet temperature (T_out) instead of room temperature (Figure 6b). From these results, we may draw two key conclusions. First, the D_h of an excipient is moderately predictive of its preference for the droplet-air interface during drying, and in turn for the surface of the spray dried particle. Second, it is important to consider the properties of an excipient under in situ operating conditions (i.e., pH, ionic strength, temperature), as the D_h (and the diffusion coefficient, D) is not an intrinsic property of the material but rather a function of both the material and its environment. Stated differently, the D_h and concomitant surface preference of an excipient can be strategically manipulated by tuning the feed solution properties and drying conditions; moreover, the effects of fine-tuned parameters on the D_h of an excipient can be screened prior to drying by DLS. Figure 6. Preferential adsorption of the excipient on the particle surface increases with size. One sample (7-1[med-OH]) was statistically eliminated by the Grubbs outlier test [23]. The fit was improved when D_h at T_outlet (70°C) was used to predict preferential surface adsorption of the excipient (R² = 0.64 vs. 0.59). In addition to excipient D_h, we investigated excipient ζ as a possible predictive measure of preferential surface adsorption.
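Before turning to ζ, the sketch below replays the statistical recipe used for Figure 6: screen the response for a single two-sided Grubbs outlier at α = 0.05, then regress preferential surface adsorption on the in-situ D_h. The arrays are placeholders of our own, not the paper's measurements; only the workflow is illustrated.

```python
# Hedged reanalysis sketch: Grubbs outlier screen followed by linear regression.
import numpy as np
from scipy import stats

def grubbs_index(x: np.ndarray, alpha: float = 0.05):
    """Index of a single two-sided Grubbs outlier in x, or None if none rejected."""
    n = len(x)
    dev = np.abs(x - x.mean())
    g = dev.max() / x.std(ddof=1)
    t2 = stats.t.ppf(1 - alpha / (2 * n), n - 2) ** 2
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t2 / (n - 2 + t2))
    return int(dev.argmax()) if g > g_crit else None

d_h = np.array([40.0, 55.0, 75.0, 90.0, 120.0, 160.0, 300.0])  # nm (placeholder)
ads = np.array([-20.0, -10.0, 0.0, 5.0, 15.0, 25.0, -95.0])    # %  (placeholder)

out = grubbs_index(ads)
if out is not None:
    d_h, ads = np.delete(d_h, out), np.delete(ads, out)  # drop the flagged sample

slope, intercept, r, p, se = stats.linregress(d_h, ads)
print(f"R^2 = {r**2:.2f}; surface preference rises {slope:.2f} % per nm of D_h")
```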
The excipient ζ was manipulated both directly, by functionalising the surfaces of the silica nanoparticles, and indirectly, by tuning the pH and ionic strength of the solvent. The effect of directly functionalising the particle surface was studied by direct comparison of samples 5-0[med-OH], 5-0[med-NH2], and 5-0[med-Octyl], which contained medium-sized silica NPs decorated with hydroxyl (unfunctionalised), aminopropyl, or octyl surface moieties, respectively. The ζ values for these excipients in buffer (pH 5.5, no CaCl2) ranged from −8.7 mV at minimum (unfunctionalised) to 19.9 mV at maximum (aminopropyl). The ζ of the octyl-functionalised silica nanoparticles showed no significant difference from that of the unfunctionalised silica (−7.1 mV). The size of the octyl-functionalised particles, however, was significantly larger than that of the unfunctionalised particles (111 nm vs. 40 nm), suggesting agglomeration of the nanoparticles induced by hydrophobic interactions between the alkyl side chains. To understand the relationship between ζ and surface adsorption for these samples, the percent preferential surface adsorption was plotted against ζ; the trend in preferential surface adsorption roughly followed the trend in ζ at room temperature (Figure 7). Despite these initial results, a more rigorous analysis of this relationship across all thirteen samples revealed no discernible correlation between excipient ζ and preferential adsorption at the particle surface (Figure A10). The relationship between the absolute value of ζ (a measure of the magnitude of electrostatic repulsion between particles) and excipient surface adsorption was also found to be ambiguous (Figure A11). It can therefore be concluded that the ζ of an excipient does not significantly influence its extent of enrichment at the particle surface within the range of ζ values studied (−13 to 26 mV) (Tables 1 and 2). It should, however, be noted that this range of ζ values was relatively narrow (largely due to buffer shielding effects), and the lack of a relationship between ζ and surface enrichment may therefore be attributable to insignificant differences among the ζ values compared. Finally, we investigated the ability of the excipient to competitively displace the protein at the droplet surface. To achieve this, we plotted the enrichment of the protein at the particle surface against that of the excipient (Figure 8). Initially, we observed no significant correlation between the two (R² = 0.28). Limiting the dataset to only those samples with excipient-enriched surfaces, however, revealed a striking improvement in the correlation between excipient and protein surface enrichment; as hypothesised, a strong inverse correlation (R² = 0.95) was found to describe the relationship between the two features. From these data it may reasonably be concluded that excipient preferential adsorption competes with that of the protein; the stronger the excipient's preference for the air-droplet interface, the more protein-depleted the interface becomes. When the excipient demonstrates a preference for the droplet interior, however, protein surface adsorption is not determined by competitive adsorption from the excipient but by other factors. Conclusions and Outlook Work towards the development of a scalable platform for the spray drying of core-shell structures with labile cargo was presented here.
Our proposed approach circumvents industrially undesirable emulsion methods and complicated drying techniques, demonstrating that the surface affinity of a biologic can be curbed by tuning the preferential surface adsorption of excipients in the feed solution. We validate this approach by showing that positive preferential adsorption by the excipient competitively displaces protein from the air-droplet interface and, in turn, from the dried particle surface (Figure 8). Our results suggest that the hydrodynamic diameter (D_h) of an excipient can be used to predict the degree to which it adsorbs at the air-droplet interface; excipients with higher D_h showed higher enrichment at the particle surface. The D_h of a nanoparticle excipient could be tuned through buffer properties (ionic strength, pH) as well as through the particle surface functionality and the size of the SiO2 core. Interestingly, the core size of the SiNP did not always predictably alter the D_h under the relevant buffered conditions (in contrast to water) (Table 2). Rather, it seemed to be the degree of aggregation amongst SiNPs under the spray drying conditions that most significantly influenced the in situ D_h. In fact, it was shown that measuring D_h under conditions that best simulated the drying process (i.e., T = T_outlet) marginally improved the predictive power of D_h, increasing the correlation with excipient preferential adsorption (%) from R² = 0.59 to 0.64 (Figure 6). From these results it is clear that the utility of predictive parameters in spray dried systems depends not solely on the behaviour or properties of components in isolation, but rather on the behaviour of these components in the context of the whole system and its associated conditions. Unlike D_h, the ζ of excipients did not predict their preference to localise at the particle surface (Figure A10). This suggests that excipient chemistry could be altered solely for the purpose of modifying aggregation (and, in turn, surface adsorption) without introducing confounding effects from changes in ζ. This conclusion, however, is bound by the scope of this study; excipient zeta potentials varied only narrowly, from −13 to 26 mV. Future studies investigating the influence of a broader range of ζ values could be useful to increase the generalisability of these findings. Given the strong predictive power of the D_h parameter in determining excipient surface preference, we propose the use of the Trojan horse principle for controlled core-shell assembly by spray drying, as depicted in Figure 9. By covalently tethering or non-covalently adsorbing low-D_h excipients onto the surface of high-D_h nanoparticles, one can effectively 'hitch-hike' the secondary component to the spray dried particle surface. The nanoparticle thus serves as a Trojan horse; the secondary component effectively assumes the large D_h of the nanoparticle, resulting in surface enrichment as predicted by the diffusion theory of core-shell self-assembly (Figure 1). By this approach, not only can one create preference for a core-shell architecture wherein the labile biologic is effectively encapsulated (and, in turn, protected), but one can furthermore control the chemical composition of the shell without being limited by the intrinsic properties of the isolated excipient. As such, this strategy is amenable to applications wherein it is desirable to introduce a specific molecular entity or functionality (e.g., gas/solvent permeability, wettability, targeting, etc.)
to the particle surface (in contrast to those where the main aim is simply to limit protein surface adsorption). In conclusion, this paper systematically investigates the relationship between the colloidal properties of nanoparticle excipients in a protein-containing feed solution and their relative enrichment at the surface of the spray dried particle. The hydrodynamic size (D_h) of the nanoparticle excipients studied was a clear predictor of their surface enrichment. On the other hand, ζ was not indicative of excipient surface representation within the obtained dry material. The use of high-D_h nanoparticles is shown to be a viable strategy for limiting protein adsorption at the air-droplet interface in single-feed aqueous spray drying. Finally, a platform approach employing high-D_h nanoparticles as 'Trojan horses' to carry low-D_h excipients to the droplet-air interface (and the surface of the dried particle) is proposed. Materials and Methods All materials were purchased from Merck/Sigma-Aldrich (Darmstadt, Germany) unless otherwise specified. Semi-pure phytase was kindly gifted by AB Enzymes (Darmstadt, Germany). Seed-Growth Synthesis of Silica NP Silica nanoparticles of 96 nm were synthesised via a seeding method using Ludox AS40 as the starting material. To 4.73 mL MilliQ water, 3.67 mL of 30% ammonium hydroxide solution was added slowly with stirring (400 rpm). To this, 0.226 g of Ludox AS40 suspension was added. Finally, 2.9 mL tetraethyl orthosilicate (TEOS) was added to the round-bottom flask via a syringe pump at a rate of 0.2 mL/h. A 19G needle was necessary to resist clogging. The reaction was allowed to proceed for 12 h before centrifuging in water to remove residual ammonium hydroxide and TEOS (5 washes at 13,000 rpm, 4°C, 30 min). Solvent Exchange of Ludox Silica A solvent exchange was performed to redisperse Ludox AS40 silica nanoparticles ('medium-sized', D_h,water = 16 ± 1 nm) in ethanol. Ludox NP solutions were diluted (5×) in de-ionised water and centrifuged at 13,000 rpm (4°C) for 30 min. At this point, a sedimented pellet (clear gel) was collected and redispersed in ethanol, washed another two times under the same conditions (13,000 rpm, 4°C) and finally diluted in ethanol to achieve a final concentration of roughly 40 mg/mL. Aminopropyl-Functionalised Silica NP Aminopropyl-functionalised SiNPs were obtained via an adapted literature procedure [24]. To 120 mL of Ludox AS40 redispersed in ethanol, 10 mL of APTES was added dropwise. A plastic round-bottom flask was used to avoid functionalisation of the glass surface. The solution was refluxed at 80°C for 80 min, allowed to cool, and subsequently centrifuged for 30 min at 13,000 rpm and 4°C to remove unreacted APTES. Finally, the particles were dialysed against de-ionised water using a membrane with an 8,000 g/mol molecular weight cutoff for a period of two days. Samples were analysed by DLS and ζ prior to spray drying. Octyl-Functionalised Silica NP Octyl-functionalised SiNPs were obtained via an adapted literature procedure [24]. To a plastic round-bottom flask containing 10 mL of Ludox AS40 redispersed in ethanol, 1.6 mL of triethoxy(octyl)silane was added. The solution was refluxed under nitrogen at 85°C for 80 min. The solution was allowed to cool and subsequently centrifuged for 30 min at 13,000 rpm and 4°C to remove unreacted triethoxy(octyl)silane. Finally, the particles were dialysed against de-ionised water using a membrane with an 8,000 g/mol molecular weight cutoff for a period of two days.
Samples were analysed by DLS and ζ prior to spray drying. Spray Drying All spray drying was conducted on a BUCHI Mini Spray Dryer B-290 fitted with a small cyclone. The inlet temperature was consistently between 137 and 138°C, and the pump was kept at 10% for all runs. The measured outlet temperature varied from 71 to 77°C, with a mean temperature of 75°C across all runs. The system was cleaned extensively between runs to prevent cross-contamination. X-ray Photoelectron Spectroscopy XPS was used to determine the weight percent of silicon and sulphur on the surface of the spray dried particles (ca. 5 nm depth). Measurements were obtained using an Escalab 250Xi XPS instrument (Thermo Scientific, Waltham, MA, USA). Samples were prepared by mounting on double-sided copper tape. Elemental Analysis Bulk compositional analysis was performed by elemental analysis. The relative abundances of sulphur (S) and silicon (Si) in the spray dried powder were measured by inductively coupled plasma optical emission spectroscopy (ICP-OES). Field Emission Gun Scanning Electron Microscopy Particle morphology was characterised by SEM using a TESCAN MIRA3 FEG-SEM. Samples were prepared by direct deposition of freeze-dried powder on black carbon adhesive. The deposited sample was coated with Pt using a Quorum Technologies Q150T ES turbo-pumped sputter coater prior to imaging. Spray dried particle size analysis was performed over 200 particles per sample, using the Fiji open-source image-processing package, ImageJ software [25], version 1.53b, and Origin Pro 2018 software, version b9.5.1.195. Dynamic Light Scattering and Zeta Potential Dynamic light scattering (DLS) and zeta potential (ZP) measurements were performed using a Malvern Panalytical Zetasizer Nano ZS90 instrument fitted with a He-Ne laser (λ = 663 nm). Samples in feed buffer media were measured at the concentrations relevant to the spray drying process. Measurements performed in water were made at ca. 0.1 mg/mL. Figure A5. Effect of CaCl2 concentration on particle morphology. All formulations were spray dried from buffer at pH 5.5 containing medium-sized silica NP functionalised with aminopropyl moieties. (a) 0 wt% (b) 20 wt% (c) 50 wt% Figure A6. Effect of excipient loading on particle morphology. All formulations were spray dried from buffer at pH 5.5 containing no CaCl2. The percent values below each image are the weight percent concentration of medium-sized unfunctionalised silica excipient relative to the total dry mass of protein + excipient. (a) Small SiNP (b) Medium SiNP (c) Large SiNP Figure A7. Effect of excipient size on particle morphology. All formulations were spray dried from buffer at pH 5.5 containing no CaCl2. (a) Hydroxyl (b) Aminopropyl (c) Octyl Figure A8. Effect of excipient surface functionalisation on particle morphology. All formulations were spray dried from buffer at pH 5.5 containing no CaCl2. Medium-sized SiNP were used in all cases (D_h,water = 38 ± 1 nm). Note that the scale of images (a,b) differs from that of (c). Figure A10. Preferential surface adsorption of the excipient is not predicted by zeta potential. One sample (7-1[med-OH]) was statistically eliminated by the Grubbs outlier test [23]. (a) Preferential surface adsorption vs. zeta potential at room temperature (25°C); (b) preferential surface adsorption vs. zeta potential at T_outlet (70°C). Figure A11.
Relationship between the absolute value of zeta potential (ζ) and excipient preferential surface adsorption at (a) room temperature and (b) T_outlet (70°C).
2022-04-03T16:31:05.503Z
2022-03-28T00:00:00.000
{ "year": 2022, "sha1": "38e4161987bf9ae922fd06a0724982e164db0fee", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/27/7/2197/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d3aab9fb82a56010dcfa21d4787b572f68e6393a", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
234833088
pes2o/s2orc
v3-fos-license
Structural insights into the binding of nanobodies LaM2 and LaM4 to the red fluorescent protein mCherry Red fluorescent proteins (RFPs) are powerful tools used in molecular biology research. Although RFP can be easily monitored in vivo, manipulation of RFP by suitable nanobodies binding to different epitopes of RFP is still desired. Thus, it is crucial to obtain structural information on how different nanobodies interact with RFP. Here, we determined the crystal structures of the LaM2-mCherry and LaM4-mCherry complexes at 1.4 and 1.9 Å resolution. Our results showed that LaM2 binds to the side of the mCherry β-barrel, while LaM4 binds to the bottom of the β-barrel. The distinct binding sites of LaM2 and LaM4 were further verified by isothermal titration calorimetry, fluorescence-based size exclusion chromatography, and dynamic light scattering assays. Mutation of the residues at the LaM2 or LaM4 binding interface with mCherry significantly decreased the binding affinity of the nanobody to mCherry. Our results also showed that LaM2 and LaM4 can bind to mCherry simultaneously, which is crucial for recruiting multiple operation elements to the RFP. The binding of LaM2 or LaM4 did not significantly change the chromophore environment of mCherry, which is important for fluorescence quantification assays, while several GFP nanobodies significantly altered the fluorescence. Our results provide atomic resolution information on the binding of nanobodies LaM2 and LaM4 to mCherry, which is important for developing detection and manipulation methods for RFP-based biotechnology. | INTRODUCTION Fluorescent proteins (FPs) are the most extensively studied and widely used genetic tools in molecular biology research. FPs can be easily expressed in almost all kinds of cells, and the fusion of FPs generally does not affect the function of other proteins. [1][2][3] Compared to jellyfish-derived green FPs (GFPs), red FPs (RFPs) have several advantages when applied in imaging due to their long-wavelength excitation, lower light scattering, and decreased autofluorescence. [4][5][6][7][8][9] Although many genetically encoded RFP animal strains have been established to facilitate live observation, the manipulation of RFPs is still desired, which may be improved by the development of RFP-specific nanobodies. 10 Nanobodies, first discovered by Hamers-Casterman in 1993, 11 are single-domain antibodies derived from the heavy chain variable regions (VHH) of Camelidae atypical immunoglobulins. Nanobodies are the smallest functional fragments derived from a naturally occurring immunoglobulin. Unlike monoclonal antibodies, nanobodies can be easily produced in prokaryotic expression systems. Because of their small size (12-15 kDa) and high stability and solubility, nanobodies are widely used for industrial, 12 in vitro diagnostic, and clinical applications. 13 The small size also allows nanobodies to be genetically encoded as chimeric proteins and delivered to cells by fusion plasmids. Typically, the long CDR3 region enables nanobodies to bind to antigens with high specificity and affinity similar to those of traditional antibodies. 14,15 The small size of nanobodies also gives them advantages over traditional IgG antibodies in several specific applications, including binding to the smooth PD-L1 protein surface, 16 inserting into canyons on the HIV envelope that are not accessible to IgG 17 to neutralize a broad range of HIV-1 strains, and effectively blocking the entry of the SARS-CoV-2 spike protein. [18][19][20]
Kirchhofer et al. first developed a series of GFP nanobodies that can induce subtle opposing changes in the chromophore environment. 21 The GFP-specific nanobodies GBP1 (GFP enhancer) and GBP4 (GFP minimizer) were suitable for monitoring protein expression, subcellular localization, and translocation. Our previous work also showed that the chimeric GFP nanobody GFP-enhancer-(GGGGS)4-LaG16 increased the binding affinity to GFP and was suitable for GFP-tagged target protein purification. 22 Tang et al. developed a GFP nanobody-based system for the selective manipulation of diverse GFP-labeled cells across transgenic lines. 23 Later, Tang et al. achieved direct optogenetic control of GFP expression in neurons by Cre/loxP recombination through the binding of a GFP-specific nanobody-Cre chimera protein to GFP. 24 Herce et al. designed a cell-permeable nanobody system to label and manipulate intracellular antigens in living cells. 25 Simpson performed PROTAC degradation of a GFP fusion protein with an anti-GFP nanobody conjugated to the Halo-tag. 26 Prole and Taylor developed methods to visualize and manipulate intracellular signaling through GFP and GFP nanobodies. 27 Among the solved GFP nanobody structures, most of the nanobody binding epitopes on GFP are different. GFP-enhancer, 21 GBP-minimizer, 21 and Sb44 28 bind to different epitopes around GFP's β-barrel. While Nb2 29 and LaG16 22 bind to the same epitope of GFP, they share only 29.7% identical CDR sequences. These complex structures provide important structural information for the further development of GFP manipulation tools. Although many GFP nanobody-related protein visualization and manipulation applications have been introduced, few RFP nanobodies have been reported. Fridy et al. generated a series of nanobodies (named LaMs) that bind specifically to mCherry through a high-throughput screening method. 10 To develop an in vivo RFP manipulation system, two or more nanobodies fused with other manipulating components, able to interact with different epitopes of the RFP surface at the same time, must be designed. However, the lack of structural information on the detailed interaction interfaces between RFP and specific nanobodies hinders the design and application of tools that manipulate RFP or RFP fusion proteins with high-affinity antibodies. Here, we determined the crystal structures of the LaM2-mCherry and LaM4-mCherry complexes and clarified the details of the binding of these two nanobodies to mCherry. We also verified the simultaneous binding of LaM2 and LaM4 to RFP by a series of orthogonal molecular biology assays. Our results provide crucial atomic resolution interaction information for the further development of methods to manipulate RFP or RFP fusion proteins in vivo. | RESULTS 2.1 | The overall structure of the LaM2-mCherry and LaM4-mCherry complexes To gain insight into the binding sites of nanobodies on RFPs, we purified recombinant LaM2, LaM4 and the RFP mCherry and then determined the crystal structures of the LaM2-mCherry and LaM4-mCherry complexes. Each crystal contains mCherry and LaM2 or LaM4 at a 1:1 stoichiometry. The overall structure of LaM2-mCherry was refined to 1.39 Å resolution and that of LaM4-mCherry to 1.92 Å resolution. The crystallographic data are shown in Table 1. The binding interfaces of CDRs 1-3 of LaM2/LaM4 with mCherry were well defined.
LaM2-mCherry crystallized in the space group P2₁2₁2₁, and the asymmetric unit contained one LaM2 nanobody and one mCherry molecule. The Matthews coefficient was approximately 2.11 Å³/Da, and the solvent content was 41.58%. LaM4-mCherry crystallized in space group C121, and the asymmetric unit contained one LaM4 nanobody and one mCherry molecule. The Matthews coefficient was approximately 2.14 Å³/Da, and the solvent content was 42.42%. Figure 1a shows the overall structure of the LaM2-mCherry complex, and Figure 1b shows the overall structure of the LaM4-mCherry complex. The binding sites of LaM2 and LaM4 on mCherry were different. Figure 1c shows the superposed structures of LaM2-mCherry and LaM4-mCherry. LaM2 binds to the side of the β-barrel (the fourth and fifth β-strands of the 11 total β-strands), while LaM4 binds to the bottom of the β-barrel (both the amino and carboxyl termini of RFP are at the bottom). The binding modes of the nanobodies are also very different. Figure 1d compares the binding of LaM2 and LaM4. Although the constant domains of the nanobodies are similar, the CDRs are totally different. The CDR3 of nanobodies is longer than that of IgG; therefore, while typically only a loop of the IgG secondary structure interacts with the antigen, in a nanobody an α-helix may also emerge and provide an additional mode of interaction with the antigen. CDR3 and CDR1 of LaM2 contain two α-helices: residues 123-126 (Ser-Glu-Asn-Asp) and residues 42-45 (Thr-Phe-Ser-Asp), respectively. CDR3 of LaM4 contains an α-helix consisting of residues 109-111 (Gln-Arg-Leu). Additionally, the surface potentials of LaM2 and LaM4 are quite different; LaM2 has a large negative patch in CDR1 that contributes to ionic interactions with mCherry, while the binding of LaM4 to mCherry does not involve similar ionic interactions (Figure 1e). | Details of the binding sites of LaM2/LaM4 on mCherry Since the resolution of both nanobody-mCherry complex crystals was high, the binding sites between LaM2/LaM4 and mCherry were clearly defined. The detailed interaction interfaces of LaM2 and LaM4 with mCherry are shown in Figure 2. In the LaM2-mCherry complex, all three CDRs of LaM2 contributed to the binding to mCherry. | Validation of the thermodynamics and binding affinity of the nanobody to mCherry by site-directed mutagenesis To further clarify the detailed driving forces of the binding between the nanobodies and mCherry, we performed structure-guided site-directed mutagenesis and studied the binding affinity of the mutated nanobodies to mCherry. We first used isothermal titration calorimetry (ITC) to measure the binding affinity and thermodynamic parameters because it is a label-free, in-solution method regarded as the gold standard for protein-protein interactions (Figure 3, Table 2). Both LaM2 and LaM4 showed high binding affinity to mCherry; the K_D of LaM2-mCherry was 3.02 nM and that of LaM4-mCherry was 22.5 nM (Figure 3a,b). Then, we mutated some residues that contributed to the binding of mCherry. When the two residues of LaM2 CDR1 (Ser44) and CDR2 (Ser68) that form hydrogen bonds with mCherry were individually replaced by Ala, the binding affinity for mCherry was only slightly reduced (Figure 3a). The side chain of LaM2 Trp67 is inserted into a hydrophobic hole in mCherry, and the W67A mutation abolished this hydrophobic interaction and significantly reduced the binding affinity to mCherry.
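As a quick consistency check (ours, not the authors'), the reported solvent contents follow from the quoted Matthews coefficients via the standard approximation relating V_M to solvent fraction:

```python
# Sanity-check sketch: solvent content from the Matthews coefficient using the
# standard approximation  solvent % ~= 100 * (1 - 1.23 / V_M).
def solvent_content_pct(v_m_A3_per_Da: float) -> float:
    return 100.0 * (1.0 - 1.23 / v_m_A3_per_Da)

for name, v_m in [("LaM2-mCherry", 2.11), ("LaM4-mCherry", 2.14)]:
    print(f"{name}: V_M = {v_m} A^3/Da -> solvent ~ {solvent_content_pct(v_m):.1f} %")
# Output falls within rounding of the reported 41.58 % and 42.42 %.
```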
When Trp119 and Tyr120 in CDR3 were replaced by Ala simultaneously, the binding to mCherry was totally abolished (Figure 3a), indicating that this region is crucial for mCherry binding. For LaM4, the surface of Asn103 appeared to be complementary to the surface near mCherry Lys84, and a hydrogen bond appeared to form between Asn108 and mCherry's Glu10. To confirm which interaction was dominant, we constructed the LaM4 N103D, N103K, N108D, and N108K point mutants and tested their binding affinity to mCherry by ITC. The N103D, N103K, and N108D mutations abolished the interaction with mCherry, while N108K retained high binding affinity (Figure 3b). These results suggest that Asn103 binds mCherry mainly through van der Waals forces complementary to the protein surface: if Asn103 interacted with mCherry mainly through hydrogen bonds or salt bridges, the binding affinity should have remained strong upon mutation to Asp; however, the LaM4 N103D mutation totally abolished the interaction, as did the N103K mutation. When Asn108 was mutated to Lys, the interaction was only slightly weakened, while the Asp mutation totally abolished the interaction with mCherry, indicating that the interaction between Asn108 and mCherry's Glu10 occurs mainly through the specific hydrogen bond. | Validation of the simultaneous binding of LaM2 and LaM4 to mCherry The crystal structures of the LaM2-mCherry and LaM4-mCherry complexes showed that the binding regions of LaM2 and LaM4 on mCherry do not overlap, so we assumed that LaM2 and LaM4 could bind to mCherry simultaneously. We confirmed this assumption by ternary ITC and F-SEC experiments. The K_D of LaM2 titrated into the LaM4-mCherry complex obtained by gel filtration was 8.33 nM, similar to that obtained when LaM2 was directly titrated into mCherry (Figure 4a, Table 3), indicating that the binding of LaM4 did not significantly affect LaM2. The K_D of LaM4 titrated into the LaM2-mCherry complex was 261 nM, a 10-fold decrease compared to titration into mCherry alone (Figure 4b, Table 3), indicating that the binding of LaM2 induces an allosteric change in mCherry's binding interface with LaM4. Analysis of the crystal structure data supported this interpretation: the binding of LaM2 slightly shifted the positions of the two large bottom loops of the mCherry β-barrel, which are crucial for binding to LaM4's CDR1 (around Arg28) and CDR3 (around Leu101). We also observed the formation of a ternary LaM2-mCherry-LaM4 complex by fluorescence-based size exclusion chromatography (F-SEC), 30 which can directly show the size of a biological macromolecule complex under physiological conditions. The F-SEC results (Figure 4c) also confirmed that stable 1:1:1 LaM2-mCherry-LaM4, 1:1 LaM2-mCherry, and 1:1 LaM4-mCherry complexes formed when these proteins were mixed in the proper ratios. It is worth noting that although LaM2 and LaM4 are similar in size, there was a certain difference in the peak positions after binding with mCherry, which may be due to the different 3D shapes of the LaM2-mCherry and LaM4-mCherry complexes. We determined the size distributions of mCherry alone, LaM2-mCherry, LaM4-mCherry, and LaM2-mCherry-LaM4 by dynamic light scattering (DLS) experiments. The results showed that mCherry alone was highly uniform in size, centered at approximately 6 nm, and that when complexes were formed with the respective nanobodies, the size increased to approximately 10-11 nm (Figure 5a).
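For readers who prefer energies over dissociation constants, the sketch below converts the wild-type K_D values quoted above into binding free energies via ΔG = RT·ln(K_D); the 25 °C temperature is our assumption, as the ITC temperature is not restated in this passage.

```python
# Hedged sketch: binding free energy from a dissociation constant,
# dG = R * T * ln(K_D), with T = 298.15 K assumed.
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # assumed temperature, K

def dG_kJ_per_mol(kd_molar: float) -> float:
    return R * T * math.log(kd_molar) / 1000.0

for name, kd_nM in [("LaM2-mCherry", 3.02), ("LaM4-mCherry", 22.5)]:
    print(f"{name}: K_D = {kd_nM} nM -> dG ~ {dG_kJ_per_mol(kd_nM * 1e-9):.1f} kJ/mol")
```

The roughly 5 kJ/mol gap between the two complexes corresponds to the ~7-fold difference in K_D, a useful intuition when comparing the mutant affinities discussed above.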
In contrast to some GFP nanobodies (GFP enhancer and minimizer), 21 the binding of LaM2 and LaM4 did not significantly affect the chromophore environment of mCherry, resulting in few changes to mCherry's fluorescence properties (Figures 5b and S1). This feature ensures that the optical activity of mCherry will not change significantly upon the binding of LaM2/LaM4. Thus, quantification by RFP fluorescence remains accurate when the protein is manipulated through the binding of LaM2/LaM4 chimeric operators. | DISCUSSION While the molecular weight of a nanobody is only approximately one tenth that of IgG, nanobodies still provide a relatively large binding interface. We calculated and compared the buried surface areas of LaM2 and LaM4 on mCherry, of five GFP nanobodies (GBP1 enhancer, PDB ID: 3K1K; GBP4 minimizer, PDB ID: 3G9A; LaG16, PDB ID: 6LR7; Nb2, PDB ID: 7E53; and Sb44, PDB ID: 6LZ2) on GFP, and of a representative PD-L1 nanobody entering clinical trials, KN035 (PDB ID: 5JDS), using PISA 16 (Table 4). All of these complexes have similar buried surface areas of approximately 600-850 Å², which is comparable to that of IgG and provides high affinity and specificity. We also compared the buried surface areas of two hapten nanobodies (CorNb-Cortisone, PDB ID: 6ITQ, 31 and MTX Nb-MTX, PDB ID: 3QXV 32 ). Since these hapten antigens are relatively small and cannot provide a large surface for binding, the buried surface areas are relatively small, between 300 and 400 Å²; however, in contrast to the small buried fraction of a protein antigen's total surface, over 50% of the total hapten surface is buried, showing the effectiveness of their interactions with specific antigens. In addition to delivery via plasmids encoding nanobodies, and unlike IgG, nanobodies can easily cross the cell membrane through a nonendocytic delivery system using a poly-Arg tag 25 and thus may have additional advantages over IgG-based chimeric manipulation systems. Simulations based on the crystal structures show that LaM4 can bind to the DsRed tetramer and that the binding sites are not on the DsRed self-multimerization interface; thus, the binding of LaM4 does not affect tetramerization. Therefore, it is possible to design chimeric proteins linking functional operation components with LaM4 and to develop a self-assembling macromolecular machine based on the RFP tetramer. | CONCLUSION In summary, we have obtained atomic resolution details of how the nanobodies LaM2 and LaM4 bind to different epitopes of mCherry via structural biology techniques. Additionally, our thermodynamic and molecular biology assays verified the residues crucial for the nanobody-RFP interaction. The binding of LaM2 or LaM4 did not significantly change the fluorescence of mCherry, which is important for fluorescence quantification assays. LaM2 and LaM4 can bind simultaneously to mCherry, which is crucial for recruiting multiple operation elements to the RFP. These results provide important basic information for the development of a LaM2/LaM4-based RFP manipulation system and provide strategies to further optimize the binding affinity of nanobodies to RFP. | Protein expression, purification, and characterization The coding sequences of LaM2 and LaM4 were optimized based on favored codon usage in Escherichia coli and were synthesized by Genewiz (Suzhou, China).
For crystallization and binding assays, DNA encoding LaM2 and LaM4 was subcloned into the pET28a-SUMO vector with an N-terminal 6xHis tag followed by a SUMO tag, or into a pET21a-derived vector with an N-terminal 10xHis tag, respectively. The plasmids were transformed into E. coli strain BL21 (DE3) for expression. The bacteria were cultured in LB medium at 37 °C until the OD600 reached 0.8. Recombinant protein expression was induced by the addition of 0.2 mM isopropyl β-D-1-thiogalactopyranoside and incubation for an additional 18 hr at 18 °C. The cells were harvested and resuspended in NiA buffer containing 20 mM imidazole, 5% glycerol, 150 mM NaCl, and 100 mM Tris-HCl, pH 7.5. The His10-tagged recombinant LaM2/LaM4 and their respective mutants were initially purified by Ni-NTA affinity purification using a HisTrap HP column (Qiagen) and eluted with NiB buffer containing 300 mM imidazole, 5% glycerol, 150 mM NaCl, and 100 mM Tris, pH 7.5. For crystallization, the His-SUMO tag was removed by incubation with recombinant ULP1 overnight at 4 °C. The cleaved tag fragment and ULP1 were removed by passage through a HisTrap HP column. LaM2/LaM4 was further purified by SEC on a Superdex 75 Increase column (Cytiva), and the buffer was exchanged to gel filtration buffer (10 mM HEPES, pH 7.4, and 100 mM NaCl). The purity and molecular weight of the target proteins were verified by SDS-PAGE. | Site-directed mutagenesis Site-directed mutagenesis was carried out using a PCR-based site-directed mutagenesis method (2x Phanta Master Mix, Vazyme Biotech Co., Ltd.) with His10-LaM2 and His10-LaM4 as templates. The sequences of the primers used to generate these mutants are displayed in Table S1. All site-directed mutagenesis constructs were confirmed by DNA sequencing (RuiDi, Shanghai, China). Cryoprotection was performed by adding glycerol to the reservoir buffer at a 20% concentration. X-ray diffraction data were collected at 100 K at beamlines BL17U1 33 and BL19U1 34 of the Shanghai Synchrotron Radiation Facility, Chinese Academy of Sciences. | Determination and refinement of protein structure Diffraction images were indexed and processed with HKL2000. 35 The structures of LaM2-mCherry and LaM4-mCherry were solved by molecular replacement using the Phaser program from the CCP4 crystallography package 36 with mCherry (PDB ID: 2H5Q 5 ) and a GFP nanobody (PDB ID: 3K1K 21 ) as the search models. Structure refinement was performed with Refmac 37 and Phenix. 38 The model was adjusted manually in COOT. 39 The crystallographic parameters of LaM2-mCherry (PDB ID: 6IR2; 1.39 Å) and LaM4-mCherry (PDB ID: 6IR1; 1.92 Å) are listed in Table 1. The related figures were drawn with PyMOL. 40 | Isothermal titration calorimetry The thermodynamic parameters of the binding of LaM2/LaM4 and their respective mutants to mCherry were determined by ITC using a VP-ITC or ITC200 calorimeter (MicroCal, Malvern). In a typical experiment, each titration was performed by injecting a 12 μl aliquot of protein sample into the cell containing the other reactant (detailed concentration information is listed in Table S2) at a time interval of 120 s to ensure that the titration peak returned to baseline. Altogether, 23 aliquots were titrated in each individual experiment. The stoichiometry of binding (n), the association constant (Ka), and the binding enthalpy (ΔH) were evaluated using MicroCal Origin 7.0 software with a one-site binding model. | Fluorescence-based SEC The oligomeric states of the tested samples in buffer were recorded by F-SEC.
We used 100 μl of 0.1 mg/ml mCherry as a control. For the LaM2-mCherry and LaM4-mCherry complexes, 50 μl of 0.2 mg/ml mCherry (approximately 7 μM) and 50 μl of 0.2 mg/ml (approximately 14 μM) LaM2 or LaM4 were mixed in equal volumes (the final concentration of mCherry was 3.5 μM, and the final concentration of LaM2/LaM4 was 7 μM) and incubated on ice for 1 hr. For the LaM2-LaM4-mCherry complex, 50 μl of 0.2 mg/ml mCherry (approximately 7 μM) was mixed with 25 μl of 0.4 mg/ml LaM2 and 25 μl of 0.4 mg/ml LaM4 and incubated on ice for 1 hr. After high-speed centrifugation, the supernatants were loaded onto a Superdex 200 Increase size-exclusion column (Cytiva) equilibrated with SEC buffer (20 mM HEPES, pH 7.0, 150 mM NaCl). The fluorescence of each sample was recorded with a fluorometer (excitation, 587 nm; emission, 610 nm for mCherry fluorescence). The data were processed and normalized with FSEC plotter software. | Emission spectrum measurements The emission spectra of mCherry (0.1 mg/ml) and LaM2/LaM4-mCherry (0.1 mg/ml mCherry with excess nanobody) were recorded using a fluorescence spectrophotometer (Varian Cary Eclipse). The excitation wavelength was 587 nm. The emission spectrum was recorded between 550 and 700 nm. The spectral data were analyzed with Origin. | DLS assay The particle size distributions of mCherry, the LaM2-mCherry complex, the LaM4-mCherry complex, and the LaM2-LaM4-mCherry complex were measured with a nano-size and zeta potential analyzer (Malvern Instruments, ZS90-2026). The test temperature was 25 °C, and the scattering angle was 90°.
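The mass-to-molarity conversions used in the F-SEC mixing above follow a simple relation, sketched below; the molecular weights are our assumptions (the mCherry construct's MW is back-calculated from the stated "0.2 mg/ml ≈ 7 μM", and ~14 kDa is a typical nanobody size), not values given in the paper.

```python
# Hedged helper: molarity from mass concentration,
#   c (uM) = 1000 * (mg/mL) / MW (kDa)
def mg_per_ml_to_uM(conc_mg_ml: float, mw_kda: float) -> float:
    return 1000.0 * conc_mg_ml / mw_kda

# Assumed MWs: ~28.6 kDa (mCherry construct, back-calculated), ~14 kDa (nanobody).
print(f"mCherry  0.2 mg/mL -> {mg_per_ml_to_uM(0.2, 28.6):.1f} uM")
print(f"nanobody 0.2 mg/mL -> {mg_per_ml_to_uM(0.2, 14.0):.1f} uM")
```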
2021-05-21T16:56:29.420Z
2021-04-21T00:00:00.000
{ "year": 2021, "sha1": "801aa167677f1dff57bcfd73eed4d18f5b833ff2", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-424438/latest.pdf", "oa_status": "GREEN", "pdf_src": "Wiley", "pdf_hash": "e9e1fc0676a783b3a96b3ded434f4908d025e169", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
91025099
pes2o/s2orc
v3-fos-license
Hypoglycemic effect of Berberis microphylla G. Forst root extract Purpose: To evaluate the effect of the root extract of Berberis microphylla on glucose uptake and AMP-activated protein kinase (AMPK) activity in non-insulin-resistant and insulin-resistant HepG2 cells. Methods: B. microphylla root was extracted with absolute ethanol, filtered, concentrated and lyophilized. Subsequently, liver (HepG2) cells, both insulin-resistant and non-insulin-resistant, were exposed for 24 h to different concentrations of the extract (10, 5, 2.5 and 1.25 × 10⁻³ μg/μL) to determine the stimulation of glucose uptake and the phosphorylation of AMPK. Results: In HepG2 cells without resistance exposed to B. microphylla root extract, glucose uptake ranged from 34 to 59 % of the available glucose, while AMPK phosphorylation was 1.9 to 3.6 times that of the control. In insulin-resistant HepG2 cells, glucose uptake ranged from 68 to 95 % of the available glucose, while AMPK phosphorylation was 1.8 to 3.3 times that of the control. Conclusion: The root extract of B. microphylla possesses hypoglycemic effects and stimulates glucose uptake in HepG2 cells with and without insulin resistance by activating the AMPK protein. INTRODUCTION Type 2 diabetes mellitus (T2DM) is a highly prevalent pathology, which has become a major public health problem worldwide [1]. Currently, it is considered a global epidemic, since its prevalence has tripled during the last 30 years [2]. T2DM is a metabolic disorder characterized by high levels of blood glucose due to a deficiency in the action and secretion of insulin [3]. In recent years, AMP-activated protein kinase (AMPK) has been shown to be involved in regulating energy balance by controlling the metabolism of glucose and lipids [4]. At present, several drugs that target AMPK are available for the treatment of T2DM [5], foremost among them metformin, the first-line drug for the treatment of T2DM [6]. Unfortunately, 30 % of patients do not respond favorably to this treatment because they develop digestive disorders (diarrhea and vomiting), which can cause discontinuation of the drug [7]. Thus, the search for new alternative medicines for the treatment of this pathology is necessary. The Berberis genus has emerged as a phytotherapeutic alternative, as several species of this genus are described as having hypoglycemic potential, such as B. lycium, B. aristata, B. asiatica, B. vulgaris, B. integerrima, B. ceratophylla, B. moranensis and B. crataegina [8]. B. microphylla G. Forst is a South American species that has been utilized in ethnomedicine for the treatment of febrile states, gastric pain and colds, among others [9]. At present, it is used as an alternative medicine for the treatment of T2DM. However, no scientific study assessing the antidiabetic activity of this plant has been reported. Therefore, the objective of this study was to evaluate the effect of B. microphylla root extract on glucose uptake and AMPK activity in non-insulin-resistant and insulin-resistant HepG2 cells. EXPERIMENTAL Extract preparation The roots of B.
The roots of B. microphylla were collected in the settlement of Bahía Mansa (53°36'39.38" S, 70°55'50.56" W), near the city of Punta Arenas, Chile. A sample of the species was identified by Dr Juan Marcos Henríquez, botanist and taxonomist at the Instituto de la Patagonia, Universidad de Magallanes, Chile (voucher no. 012837). The collected roots were cut up and dried at room temperature for 30 days; the pieces were then ground, and 100 g of the dried root was extracted with 1000 mL of absolute ethanol for 72 h at room temperature. The extract was filtered and concentrated in a rotary evaporator at 40 °C. Finally, it was lyophilized and stored at 4 °C until use.

HepG2 cell line

HepG2 cells were purchased from the American Type Culture Collection (ATCC). Cells were maintained at 37 °C in a 5% CO₂ atmosphere in low-glucose DMEM (1 mg/mL glucose), supplemented with 10% fetal bovine serum (FBS), penicillin (100 U/mL) and streptomycin (100 μg/mL). Prior to each experiment, the cells were plated in 96-well plates at a density of 10⁴ cells/well. The growth medium was replaced with medium supplemented with 1% FBS, and the cells were incubated for 24 h with different concentrations of the root extract of B. microphylla, Metformin® (Mt) or Berberine® (Bb).

Glucose consumption was quantified using the enzymatic-colorimetric GOD-PAP method (Glucose Liquicolor, Germany). Glucose uptake was calculated as the difference between the initial glucose content (t = 0 h) and the final glucose content (t = 24 h) in the medium.

Statistical analysis

All data are expressed as mean ± standard deviation (SD). Statistical analysis was performed using one-way analysis of variance (ANOVA), followed by Duncan's multiple range test. Values were considered statistically significant at p < 0.05.

RESULTS

HepG2 cell viability after exposure to B. microphylla root extract, Mt and Bb

The percentage viability of HepG2 cells exposed to different concentrations of B. microphylla root extract, Mt and Bb was determined by the MTT test, using untreated cells as control. Table 1 shows that 100% viability of HepG2 cells exposed to B. microphylla root extract occurs at a concentration of 10 × 10⁻³ μg/μL; subsequent tests were therefore carried out at this and lower concentrations. On the other hand, as shown in Table 2, 100% cell viability after exposure to Mt occurs at a concentration of 4 × 10⁻³ μg/μL, and with Bb at a concentration of 0.25 × 10⁻³ μg/μL. For comparative purposes, this latter concentration was therefore used for both compounds in the following assays.

Glucose consumption stimulation

The percentage of glucose consumption in non-insulin-resistant and insulin-resistant HepG2 cells stimulated with B. microphylla root extract, Bb and Mt was determined over a period of 24 h. B. microphylla root extract, Bb and Mt significantly stimulated glucose consumption at all concentrations tested with respect to the control in both cell models, establishing a positive correlation between the dose of B. microphylla root extract and glucose uptake.

In the non-resistant HepG2 cells, B. microphylla root extract increased glucose uptake in a statistically significant manner (different letters) by 59, 53, 45 and 34% at concentrations of 10, 5, 2.5 and 1.25 × 10⁻³ μg/μL, respectively, as shown in Figure 1. The responses to Bb and Mt were likewise significant with respect to the control.
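Glucose uptake as defined above is just the drop in medium glucose over 24 h, expressed as a percentage of the glucose initially available. A minimal sketch of that calculation, with invented GOD-PAP readings and a plain one-way ANOVA in place of the full ANOVA-plus-Duncan workflow (Duncan's test is not in SciPy), might look like this:

```python
import numpy as np
from scipy import stats

def glucose_consumption_pct(glc_t0, glc_t24):
    """Percent of available glucose consumed over 24 h, from medium
    glucose measured at t = 0 h and t = 24 h (same units, e.g. mg/mL)."""
    g0 = np.asarray(glc_t0, dtype=float)
    g24 = np.asarray(glc_t24, dtype=float)
    return 100.0 * (g0 - g24) / g0

# Invented GOD-PAP replicate readings for control vs. one extract dose.
control = glucose_consumption_pct([1.00, 0.98, 1.02], [0.82, 0.80, 0.85])
extract = glucose_consumption_pct([1.00, 1.01, 0.99], [0.45, 0.42, 0.47])

print(f"control: {control.mean():.1f} +/- {control.std(ddof=1):.1f} %")
print(f"extract: {extract.mean():.1f} +/- {extract.std(ddof=1):.1f} %")
# One-way ANOVA across groups (Duncan's post hoc test is not in SciPy).
print(stats.f_oneway(control, extract))
```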
AMPK activation

To determine the stimulation pathway of B. microphylla root extract, the activation of AMPK in HepG2 cells (non-insulin-resistant and insulin-resistant) was assessed. As shown in Figure 2, the stimulation of AMPK by B. microphylla root extract in both cell models was statistically significant at all tested concentrations (different letters) relative to the control (untreated cells). A dose-response relationship was observed: as the concentration of B. microphylla root extract increases, the stimulation of AMPK grows as well. In non-resistant HepG2 cells, B. microphylla root extract significantly increased AMPK phosphorylation to 3.6, 2.7, 2.3 and 1.9 times that of the control at the doses tested, 10, 5, 2.5 and 1.25 × 10⁻³ μg/μL, respectively. Bb and Mt also significantly stimulated AMPK phosphorylation, to 2.9 and 1.7 times that of the control.

In insulin-resistant HepG2 cells exposed to the B. microphylla root extract, the stimulation of AMPK at doses of 10, 5, 2.5 and 1.25 × 10⁻³ μg/μL was 3.3, 2.5, 2.2 and 1.8 times that of the control, respectively. The AMPK responses stimulated by Bb and Mt were 2.8 and 1.7 times that of the control. All responses differed significantly from the control.

DISCUSSION

The Berberis genus has been used by several indigenous peoples around the world as an alternative therapy for the treatment of diabetes or as a hypoglycemic agent, and it has therefore emerged as a potential source of drugs to combat this pathology [8]. The literature contains several reports, developed mainly in animal models, highlighting the hypoglycemic activity of the Berberis genus; these studies show that this plant genus lowers blood glucose levels in different murine species [11-19]. Berberine, the active compound of this genus, is also described as a stimulator of glucose uptake in HepG2 cells [20]. Similarly, the root extract of B. microphylla stimulates glucose uptake in HepG2 cells from the lowest tested concentration of 1.25 × 10⁻³ μg/μL; the root extract also exerts this same effect on insulin-resistant HepG2 cells at the same concentration and, in both cases, generates a dose-dependent response.

With respect to possible mechanisms of action, B. julianae extract has been shown to increase the translocation and expression of the glucose transporter GLUT4 in L6 muscle cells, causing increased glucose uptake, as well as increasing the phosphorylation of AMPK in the hepatic and muscular tissue of mice [21]. In the same manner, berberine has been shown to be involved in the activation of AMPK and p38 MAPK in mouse muscle cells [22].

When exposing HepG2 cells with and without insulin resistance to different concentrations of B. microphylla root extract, increasing AMPK phosphorylation was observed in both experiments, generating a positive correlation: as the extract concentration increased, the phosphorylation of AMPK also grew. This may occur because increased AMPK activity in hepatic cell lines leads to decreased expression of glucose-6-phosphatase (G6Pase), mediated by post-translational silencing of its transcription factor, FOXO1a [23,24]. However, further studies are still needed to clarify this mechanism of repression.
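The dose-response claim (more extract, more AMPK phosphorylation) can be checked with a rank correlation between dose and fold change. A sketch using the non-resistant-cell fold changes reported above:

```python
from scipy import stats

# Doses (x10^-3 ug/uL) and AMPK phosphorylation fold changes vs. control,
# taken from the non-resistant HepG2 results reported in the text.
dose = [1.25, 2.5, 5.0, 10.0]
fold_change = [1.9, 2.3, 2.7, 3.6]

rho, p = stats.spearmanr(dose, fold_change)
print(f"Spearman rho = {rho:.2f}")  # rho = 1.00: strictly monotone increase
```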
AMPK acts as a master metabolic switch in response to alterations in the cellular energy load and plays an important role in energy homeostasis by coordinating adaptive responses in low-energy metabolic states [25]. A number of AMPK activators have been reported in the literature, such as AICAR, metformin, rosiglitazone and leptin, as well as natural products including berberine, caffeic acid phenethyl ester (CAPE), epigallocatechin-3-gallate (EGCG), nicotine, β-sitosterol and corosolic acid, which have been used as potential drugs in the treatment of type 2 diabetes [26].

CONCLUSION

The root extract of B. microphylla would appear to have beneficial therapeutic effects against diabetes, as it is capable of increasing glucose consumption under conditions of both normal insulin sensitivity and insulin resistance. The likely mechanism of action is the stimulation of AMPK. This mechanism explains, in part, why B. microphylla shows hypoglycemic activity and may be useful in the treatment of type 2 diabetes mellitus. However, further research is needed for a complete understanding of the underlying actions of its different phytochemical components.

Figure 1: Glucose uptake in insulin-resistant and non-resistant HepG2 cells exposed to different concentrations of B. microphylla root extract, Mt and Bb. Different letters (A, B, C, D for insulin-resistant cells; a, b, c, d for non-insulin-resistant cells) indicate significant differences by Duncan's test (p < 0.05).

Figure 2: Activation of AMPK protein phosphorylation by exposure to B. microphylla root extract, Mt and Bb in insulin-resistant and non-resistant HepG2 cells. Different letters (A, B, C, D for insulin-resistant cells; a, b, c, d for non-insulin-resistant cells) indicate significant differences by Duncan's test (p < 0.05).

Table 1: Cell viability of HepG2 cells treated with B. microphylla root extract
Semi-Supervised Fuzzy Clustering with Feature Discrimination

Semi-supervised clustering algorithms are increasingly employed for discovering hidden structure in data with partially labelled patterns. In order to make the clustering approach useful and acceptable to users, the information they must provide should be simple, natural and limited in number. To improve recognition capability, we apply an effective feature enhancement procedure to the entire data set, weighting and discriminating the information provided by the user to obtain a single set of feature weights. Taking pairwise constraints into account, we propose a semi-supervised fuzzy clustering algorithm with feature discrimination (SFFD) incorporating a fully adaptive distance function. Experiments on several standard benchmark data sets demonstrate the effectiveness of the proposed method.

Introduction

As one of the most important techniques in pattern recognition, machine learning, data mining and knowledge discovery, clustering is widely used in many application areas to understand and reveal the hidden structure of given patterns. Structure is discovered by partitioning similar patterns into the same clusters and dissimilar patterns into different clusters based on a certain distance metric, density measure, agglomerative or divisive process, and so on. In general, standard clustering is an unsupervised algorithm that may not obtain results closely matching the user's expectations, while classical supervised learning often needs a large amount of labelled data to ensure its generalization performance. Semi-supervised clustering integrates the advantages of both, requiring less human effort and offering appropriate interaction and adaptable accuracy by taking class labels, prior membership degrees or pairwise constraints into account [1-10].

Research into semi-supervised clustering can be broadly divided into two approaches: hard-constraint-based and fuzzy-based methods. In semi-supervised hard c-means clustering methods [1-7], the clustering process is controlled by class labels or pairwise constraints so that each instance belongs to only one cluster. More specifically, hard clustering approaches are crisp methods whose binary membership values limit their ability to describe real-world data [11]. In semi-supervised fuzzy c-means clustering methods [8-16], alongside class labels and membership degrees, pairwise constraints can also be used to guide the unsupervised clustering process and ultimately help improve the accuracy of the algorithm. Recent studies indicate that the family of fuzzy c-means approaches partitions data into groups in a better and more meaningful way than hard approaches [10,12,13]. To overcome the limitations of existing clustering algorithms, various computational methods with partial supervision have been adopted, ranging from the expectation-maximization algorithm for maximum-likelihood parameter estimation [17-19], the integration of an incremental algorithm for updating classifier parameters [20], and the optimization of an objective or learnable distance function [21], to classifier retraining to integrate newly labelled points [22].
In many situations (web documents, images, biological data, etc.), the amount of data is too large to label completely, and pre-processing becomes essential to reduce the complexity of the problem. Such pre-processing consists of feature extraction and feature selection. Feature extraction [19] searches for the smallest possible set of distinguishing or typical features among the feature vectors, whilst feature selection [23] selects and weights the best subset of the features identified by feature extraction. Most feature weighting and selection approaches assume that feature relevance is invariant across real-world tasks, and hence a single set of weights is used for the whole data set. However, feature relevance may vary widely within the domain of a data set. Following previous work by Frigui [24,25] and Grira [8], we use the user's experience and the relevance between features and prototype centroids to guide the process. This requires different weights for relevant and irrelevant features; continuous feature weights are obtained and the feature-relevance representation of each cluster is learned while clustering is in progress. Carrying out the clustering and feature selection (weighting) steps simultaneously can clearly speed up the learning system's clustering process, especially when the constraints provided by users are taken into account.

In this paper, we address the problem of semi-supervised clustering based on both feature discrimination and objective function optimization with an adaptive distance norm. The feature discrimination process attempts to reduce the complexity of the clustering task by eliminating the effect of irrelevant features, whilst the objective function includes two components reflecting the pairwise constraints and the feature weights.

The paper is organized as follows. Section 2 outlines existing algorithms for semi-supervised clustering. Section 3 describes the proposed semi-supervised fuzzy clustering algorithm with feature discrimination. Section 4 presents our experimental setting and the results of comparisons with other semi-supervised algorithms. Finally, Section 5 draws conclusions and gives pointers for future research.

Related Work

Existing research into semi-supervised clustering has focused on various formulations of constraints, on converting classical clustering algorithms into partially supervised ones, and on different applications. We classify the studies relevant to our proposed method into three categories: semi-supervised clustering, semi-supervised fuzzy c-means clustering, and clustering with feature discrimination. In this section, we briefly review selected examples of the existing literature in these categories.

Different approaches can be used to guide the clustering procedure in a semi-supervised manner. In [26], Wagstaff et al. introduced a modified version of clustering with pairwise constraints, namely 'must-link' and 'cannot-link', to improve clustering performance. Pairwise-constraint methods have also been addressed with probabilistic models [27], fuzzy clustering models [10], and hierarchical clustering [28,29]. Later, Basu et al. [30] proposed a seeding-based k-means algorithm to deal with partly labelled data.
A variant of the fuzzy c-means algorithm based on seeding was then proposed by Bensaid and Bezdek [31]. These two approaches share the same idea: simply taking the mean of the labelled data as seeds to initialize the cluster prototypes. Grira et al. [8] proposed an active fuzzy constrained clustering algorithm (AFCC) that minimizes a competitive agglomeration cost function together with fuzzy terms corresponding to the pairwise constraints provided by the user.

In addition, since fuzzy c-means (FCM) is one of the most classical algorithms, related work has been presented as variants of semi-supervised FCM. Yasunori et al. [10] described a semi-supervised clustering algorithm (sSFCM) based on fuzzy c-means that introduces prior membership degrees to improve clustering performance. Pedrycz and Waletzky [21] applied a modified FCM algorithm that considers labelled and unlabelled data in an augmented objective function. Luis et al. [9] proposed a novel semi-supervised fuzzy c-means algorithm that employs Gene Ontology annotations as prior knowledge to guide the partitioning of related genes into groups. Kernel-based FCM methods [15,32], known as SSKFCM, combine semi-supervised learning techniques with the kernel method to enhance the quality of the fuzzy partition; they extend semi-supervised clustering to a kernel space in order to separate clusters with nonlinear boundaries in the input space.

Moreover, efforts have been made to identify and weight the relevant patterns during the whole clustering procedure. In the work of Pedrycz, Kira and Wettschereck [33-36], several methods were proposed for feature selection and weighting, i.e., for selecting and weighting the best subset of features to improve generalization performance. Further work has addressed unsupervised, supervised and especially semi-supervised feature selection. Unsupervised feature selection [37,38] evaluates feature relevance by preserving certain properties of the data, while supervised feature selection evaluates the correlation between features and class labels. In many real-world tasks, such as image retrieval [39], semi-supervised feature selection methods [40], especially those based on pairwise constraints [32,39], are more practical than obtaining true class labels, because it is easier to decide whether a pair of instances belongs to the same class or not. In this paper, we concentrate on developing a novel and more effective semi-supervised approach based on an active fuzzy clustering algorithm that uses few constraints to improve performance on various data sets.

Semi-Supervised Fuzzy Clustering with Feature Discrimination

As mentioned above, the simultaneous clustering and attribute discrimination (SCAD) algorithm [24,25] performs clustering and feature weighting simultaneously to solve unsupervised problems, and it has been indicated that SCAD offers several advantages when used in conjunction with a supervised learning system. On the other hand, most clustering algorithms use the Euclidean distance to reflect the relationship between instances, but this distance favours clusters of spherical shape and performs poorly in practice when the features of an instance are mutually dependent.
In this section, we develop a novel algorithm named semi-supervised fuzzy clustering with feature discrimination (SFFD) that attempts to address these issues.

Model Formulation

The SFFD approach searches for the optimal prototype parameters and the optimal set of feature weights under pairwise constraints. The underlying idea of SFFD is to integrate a fully adaptive distance function, feature weights and pairwise constraints in a unified objective function.

FCM clustering with adaptive distance norm

In the Gustafson-Kessel (GK) algorithm, each cluster $i$ is allowed its own norm-inducing matrix $A_i$, which yields an inner-product norm able to detect clusters of different geometrical shapes in one data set. Letting $d_{ij}$ denote the partial distance between data vector $x_j$ and cluster $i$, we obtain

$$d_{ij}^2 = (x_j - c_i)^T A_i (x_j - c_i) \quad (1)$$

The matrices $A_i$ are used as optimization variables; let $A$ denote the c-tuple of norm-inducing matrices, $A = (A_1, A_2, \ldots, A_C)$. Let $v_{ik}$ denote the feature weights of cluster $i$, $N$ the number of samples, and $n$ the number of features per instance. The objective function of the GK algorithm, weighted by the constrained memberships, is defined by

$$J = \sum_{i=1}^{C} \sum_{j=1}^{N} u_{ij}^{m}\, d_{ij}^{2} \quad (2)$$

The parameter $m$ is the weighting exponent that controls the fuzziness of the clustering algorithm. According to the clustering-validity experiments of Pal et al. [41], the optimal value of $m$ should be chosen between 1.5 and 2.5, and the median value 2 is the most appropriate choice when no special preconditions are required. Moreover, typical semi-supervised clustering algorithms such as AFCC [8] and SSKFCM [15] also take 2 as the value of $m$, so $m$ is set to 2.

In Eq 2, $J$ could be minimized trivially by making $A_i$ less positive definite, so $A_i$ must be constrained to prevent uncontrolled growth of the clusters. The usual way is to constrain the determinant of $A_i$; allowing $A_i$ to vary with its determinant fixed corresponds to optimizing the shape of the cluster while keeping its volume constant:

$$\det(A_i) = \rho_i, \qquad \rho_i > 0 \quad (3)$$

Using the Lagrange multiplier method, the expression for $A_i$ is obtained as

$$A_i = \left[\rho_i \det(F_i)\right]^{1/n} F_i^{-1} \quad (4)$$

where $F_i$ is the fuzzy covariance matrix of the $i$-th cluster, defined as

$$F_i = \frac{\sum_{j=1}^{N} u_{ij}^{m} (x_j - c_i)(x_j - c_i)^T}{\sum_{j=1}^{N} u_{ij}^{m}} \quad (5)$$

Note that Eq 1 describes a generalized squared Mahalanobis distance between $x_j$ and the cluster mean $c_i$, with the covariance weighted by the membership degrees in $U = \{u_{ij}\}$. This component constitutes the first term of SFFD and allows us to obtain compact clusters. Considering feature relevance, this term is minimized when only one feature is completely relevant in each cluster while all the other features are irrelevant.
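A minimal NumPy sketch of the GK machinery above, covering the fuzzy covariances $F_i$ (Eq 5), the norm-inducing matrices $A_i$ with $\det(A_i) = \rho_i = 1$ (Eq 4), and the squared adaptive distances of Eq 1, assuming the data matrix X and a membership matrix U are already available:

```python
import numpy as np

def gk_distances(X, U, m=2.0):
    """Gustafson-Kessel centres and squared adaptive distances.

    X: (N, n) data matrix; U: (C, N) fuzzy membership matrix.
    Returns centres (C, n) and squared distances d2 (C, N), using
    F_i (Eq 5), A_i with det(A_i) = rho_i = 1 (Eq 4), and Eq 1."""
    C, N = U.shape
    n = X.shape[1]
    W = U ** m                                          # u_ij^m
    centres = (W @ X) / W.sum(axis=1, keepdims=True)
    d2 = np.empty((C, N))
    for i in range(C):
        diff = X - centres[i]                           # (N, n)
        F = (W[i, :, None] * diff).T @ diff / W[i].sum()  # fuzzy covariance
        A = np.linalg.det(F) ** (1.0 / n) * np.linalg.inv(F)  # det(A) = 1
        d2[i] = np.einsum("jk,kl,jl->j", diff, A, diff)
    return centres, d2

# Toy usage: random data and random column-stochastic memberships.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
U = rng.random((2, 100))
U /= U.sum(axis=0)                                      # sum_i u_ij = 1
centres, d2 = gk_distances(X, U)
print(centres.shape, d2.shape)                          # (2, 3) (2, 100)
```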
Fuzzy clustering with feature discrimination

The feature weight is the key quantity in feature discrimination, and the feature weights are constrained by

$$v_{ik} \in [0, 1] \ \ \forall i, k, \qquad \sum_{k=1}^{n} v_{ik} = 1 \quad (6)$$

This constraint must be included through the second term of the augmented objective function. With $m$ set to 2, taking the adaptive distance norm of Eq 1 and the feature weight constraint of Eq 6 into account, and applying the Lagrange multiplier method, Eq 2 becomes

$$J_1 = \sum_{i=1}^{C} \sum_{j=1}^{N} u_{ij}^{2} \sum_{k=1}^{n} v_{ik}\, d_{ijk}^{2} + \sum_{i=1}^{C} \delta_i \sum_{k=1}^{n} v_{ik}^{2} - \sum_{i=1}^{C} \lambda_i \Big(\sum_{k=1}^{n} v_{ik} - 1\Big) \quad (7)$$

where $d_{ijk}$ denotes the contribution of feature $k$ to the adaptive distance $d_{ij}$. Since the rows of $v_{ik}$ are independent of each other, Eq 7 can be rewritten as $C$ independent problems,

$$J_1^{i} = \sum_{j=1}^{N} u_{ij}^{2} \sum_{k=1}^{n} v_{ik}\, d_{ijk}^{2} + \delta_i \sum_{k=1}^{n} v_{ik}^{2} - \lambda_i \Big(\sum_{k=1}^{n} v_{ik} - 1\Big), \qquad i = 1, \ldots, C \quad (8)$$

where $V_i$ is the $i$-th row of $v_{ik}$. Setting the derivative of $J_1^i$ to zero gives a stationarity condition from which $v_{ik}$ and $\lambda_i$ follow (Eqs 9 and 10); eliminating $\lambda_i$ yields the weight update

$$v_{ik} = \frac{1}{n} + \frac{1}{2\delta_i}\left[\frac{1}{n}\sum_{l=1}^{n}\sum_{j=1}^{N} u_{ij}^{2}\, d_{ijl}^{2} - \sum_{j=1}^{N} u_{ij}^{2}\, d_{ijk}^{2}\right] \quad (11)$$

It should be noted that $v_{ik}$ has two parts. The first, $1/n$, is the default value when all features are equally relevant to the cluster. The second is a bias that reflects the compactness of a feature compared with the others; it can be positive or negative depending on the choice of $\delta_i$, so $\delta_i$ can be thought of as a balance between the two parts of $v_{ik}$. This is achieved by updating $\delta_i$ at iteration $t$:

$$\delta_i^{(t)} = K\, \frac{\sum_{j=1}^{N} \big(u_{ij}^{(t-1)}\big)^{2} \sum_{k=1}^{n} v_{ik}^{(t-1)} \big(d_{ijk}^{(t-1)}\big)^{2}}{\sum_{k=1}^{n} \big(v_{ik}^{(t-1)}\big)^{2}} \quad (13)$$

where $K$ is a constant and $u_{ij}$, $v_{ik}$ and $d_{ijk}$ carry the superscript of iteration $(t-1)$.

To minimize $J_1$ with respect to the centres $c_{ik}$, we again set the derivative of $J_1$ to zero (Eq 14). Two cases arise, depending on the product of $v_{ik}$ and $A_i$, and mainly on the value of $v_{ik}$: if $v_{ik} = 0$, the value of $c_{ik}$ is set to zero; otherwise the centre is given by the membership-weighted mean

$$c_{ik} = \frac{\sum_{j=1}^{N} u_{ij}^{2}\, x_{jk}}{\sum_{j=1}^{N} u_{ij}^{2}} \quad (15)$$

Taking pairwise constraints into account

Since we are aiming for a new search-based semi-supervised algorithm, pairwise constraints are considered, given their wide use in guiding the clustering process towards an appropriate partition. For this purpose, we define an objective function based on Eq 7 that takes the pairwise constraints into account. Let $M$ denote the set of must-link constraints and $Z$ the set of cannot-link constraints. Using the fuzzy clustering formulation of the previous section, the objective function of SFFD can be rewritten as

$$J_2 = J_1 + \alpha \left(\sum_{(x_p, x_q) \in M} \sum_{i=1}^{C} \sum_{l=1,\, l \neq i}^{C} u_{pi}\, u_{ql} + \sum_{(x_p, x_q) \in Z} \sum_{i=1}^{C} u_{pi}\, u_{qi}\right) \quad (16)$$

In Eq 16, the first part is the augmented FCM objective function with fully adaptive distance and feature weights. The second part penalizes violated pairwise constraints and is weighted by $\alpha$, a constant factor that expresses the relative importance of the supervision. The choice of $\alpha$ depends on the relative sizes of the constrained set and the unlabelled data; it is defined in terms of the data set size $N$ and the number of pairwise constraints $M_0$ (Eq 17).

To minimize $J_2$ with respect to $U$ under the membership constraint, we set the derivative of $J_2$ to zero, and likewise with respect to the Lagrange multiplier $\varepsilon$; substituting the resulting expression for $\varepsilon$ back into the stationarity condition (Eqs 18-22) yields the membership update of SFFD (Eq 23). The first component of Eq 23 is the membership term of the weighted FCM algorithm with adaptive distance norm, which focuses on the weighted distances between feature points and prototypes. The second component accounts for the available supervision: memberships are reduced gradually by the pairwise-constraint penalty until the optimal values are reached.

Algorithm Description

The proposed algorithm iteratively searches for the optimal prototype parameters and the optimal set of feature weights by locally minimizing the sum of weighted intra-cluster distances while respecting all the pairwise constraints provided by the user. SFFD updates the relevance weights and the partition matrix step by step to reach the optimal result. After the initialization step, we compute the factor $\alpha$ that balances the influence of the constrained data and the unlabelled patterns, calculate the adaptive distances, and then $\delta_i$, the factor that balances the feature weights. Afterwards, the relevance weights and the partition matrix are updated until the maximum difference between the partition matrices of consecutive iterations falls below a specified threshold.

Algorithm 1.
SFFD algorithm

Fix the number of clusters C;
Initialize the relevance weights v_ik to 1/n;
Initialize the fuzzy partition matrix U;
Repeat
    Calculate the cluster centres c_ik using Eq 15;
    Update δ_i using Eq 13;
    Compute α using Eq 17;
    Compute d²_ijk for 1 ≤ i ≤ C and 1 ≤ k ≤ n;
    Update the relevance weights v_ik using Eq 11;
    Update the partition matrix U^(t) using Eq 23 and the pairwise constraints;
Until ||U^(t) - U^(t-1)|| < ε

As for most fuzzy algorithms, every instance is assigned to the cluster in which it has the highest membership. Finally, we check the resulting partition against the pairwise constraints. If the instances of a must-link pair have been separated into different groups, we place both in the cluster with the higher membership; if the instances of a cannot-link pair have been grouped into the same cluster, we divide them into two classes.
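A sketch of the relevance-weight step (Eq 11) from the loop above, assuming per-feature squared distances d2k[i, j, k] have already been computed. Clipping negative weights and renormalizing is one common way of keeping Eq 6 satisfied; the source does not spell out its own rule, so that part is an assumption:

```python
import numpy as np

def update_feature_weights(U, d2k, delta, m=2.0):
    """Relevance-weight update of Eq 11.

    U: (C, N) memberships; d2k: (C, N, n) per-feature squared distances;
    delta: (C,) per-cluster bias/discrimination trade-off."""
    W = U ** m                                     # u_ij^m, shape (C, N)
    wd = np.einsum("cj,cjk->ck", W, d2k)           # sum_j u_ij^m d_ijk^2
    n = d2k.shape[2]
    bias = (wd.mean(axis=1, keepdims=True) - wd) / (2.0 * delta[:, None])
    v = 1.0 / n + bias
    v = np.clip(v, 0.0, None)                      # assumed: clip negatives
    return v / v.sum(axis=1, keepdims=True)        # re-impose Eq 6
```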
Methodology and data sets

To evaluate the proposed method, we ran a series of experiments comparing SFFD with several typical clustering algorithms (one traditional unsupervised algorithm and four semi-supervised algorithms). Two popular measures, accuracy and normalized mutual information (NMI) [42], were used to assess performance throughout. In addition, a comparison with SFFD without feature weights allowed us to evaluate the effect of feature discrimination on classification accuracy. Various data sets (see S1 Table) were employed to provide a relatively comprehensive evaluation of the proposed approach. All comparisons were performed on data sets (see S3 Table) taken from the UCI repository (http://archive.ics.uci.edu/ml/). Since the algorithms use different kinds of side information to guide partitioning, we used labelled instances to generate pairwise constraints and prior memberships for each class in each data set. Throughout the experiments, the parameter m was set to 2 and ε to 0.001.

First, we compared accuracy against FCM, AFCC [8], sSFCM [10] and SSKFCM [15]. FCM is an unsupervised clustering algorithm and serves as the unsupervised baseline. SSKFCM is a semi-supervised kernel-based fuzzy c-means algorithm that uses labelled instances to guide clustering. sSFCM applies prior memberships to steer the partitioning. We also compared against AFCC, a typical semi-supervised clustering algorithm that relies on pairwise constraints.

Second, an accuracy comparison between SFFD with and without feature weights was designed to test the contribution of the weights on four data sets. For SFFD without weights, a uniform average weight replaced the weights calculated during clustering, eliminating the influence of the weighting. The data sets chosen here have clusters of various geometric shapes (see S1 Fig), allowing a relatively fair evaluation of the algorithm.

Finally, an NMI comparison among the algorithms above was carried out on four data sets with various proportions of labelled data to obtain a more complete assessment of clustering quality. As before, FCM is used as a baseline and the other algorithms as references. NMI is a commonly used measure of clustering performance based on the obtained clustering results; a larger NMI value implies better clustering.

Evaluation results

For each data set, 40% of the data in each class was randomly selected as labelled data. Each algorithm was run 50 times with this labelled data to obtain average performance with error estimates. Mean accuracy values for all algorithms are given in S2 Table, with FCM as the baseline and the semi-supervised algorithms for comparison. S2 Table shows that, as refinements of FCM, AFCC, sSFCM, SSKFCM and SFFD can all significantly improve partition accuracy given the corresponding partial supervision, though the algorithms differ in the accuracy they achieve. For example, as a prior-membership-based approach, sSFCM replaces $u_{ik}^m$ in the FCM objective with $(u_{ik} - \bar{u}_{ik})^m$, where $\bar{u}_{ik}$ is the prior membership degree; consequently, it outperforms FCM on every data set with the help of the side information. Given the same pairwise constraints, AFCC merely minimizes the sum of intra-cluster distances and ignores the weights of individual features. As a result, it is weaker than SFFD on all the data sets except Dermatology. This is because the feature weights pull points of the same cluster closer together and push points of different clusters further apart. As S2 Table shows, SFFD achieves the best performance on all data sets except Dermatology, and its classification accuracy is above the corresponding mean value on every data set. Thus, the SFFD algorithm produces results that come much closer to our expectations.

To visualize the gain in accuracy that the weights bring to SFFD, we tested the effect of various numbers of constraints (see S1 Fig). For every number of constraints, 50 runs were carried out with randomly selected pairwise constraints to obtain a relatively fair result and to keep the error low. In S1 Fig, SFFD with weights consistently yields better clustering performance than the variant without weights; on the Wine data set, the best case improves clustering accuracy by nearly 7%. The results show that feature discrimination (the weights) is a necessary aid for assigning instances to the right group, especially for data sets with regular cluster shapes such as Waveform and Wine.

In addition, to obtain a comprehensive evaluation, we varied the number of labelled instances to trace the trend in NMI on four data sets (see S2 Fig). According to the number of instances in each data set, four series of labelled-data sizes were chosen for the analysis. S2 Fig shows that SFFD achieves its best performance once enough instances are labelled. On the Vowel data set, SFFD improves NMI by more than 10% compared with AFCC at 270 labelled instances.
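Accuracy and NMI can be computed with standard tools; since cluster ids are arbitrary, clustering accuracy needs a best matching of clusters to classes, here via the Hungarian algorithm. A minimal sketch with scikit-learn and SciPy on invented labels:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import confusion_matrix, normalized_mutual_info_score

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])   # ground-truth classes
y_pred = np.array([1, 1, 0, 0, 2, 2, 0, 0])   # hypothetical cluster ids

print("NMI:", normalized_mutual_info_score(y_true, y_pred))

# Clustering accuracy: match cluster ids to classes so as to maximize
# agreement (Hungarian algorithm on the negated confusion matrix).
cm = confusion_matrix(y_true, y_pred)
rows, cols = linear_sum_assignment(-cm)
print("accuracy:", cm[rows, cols].sum() / cm.sum())
```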
Since SSKFCM is a kernel-based method, it obtained better performance than AFCC and sSFCM on both the Sonar and Wine data sets. Clearly, a suitable kernel parameter as well as sufficient side information about the data is important for SSKFCM in applications. As a pairwise-constraint-based algorithm that additionally takes the weights into account, SFFD outperforms AFCC on both the Vowel and Wine data sets, while AFCC performs better with relatively little labelled data on the Scale and Sonar data sets.

Across S1 Fig and S2 Fig, the accuracy and NMI results disagree on some data sets. On the Scale data set, both sSFCM and SSKFCM perform well in terms of accuracy, yet sSFCM obtains a better NMI than SSKFCM. This implies that more than one evaluation measure is necessary for a comprehensive assessment of clustering performance. The results show that SFFD can help users improve classification quality by providing possible constraints.

Conclusion

In this paper, we have presented a semi-supervised approach that performs clustering and feature weighting simultaneously. Unlike typical algorithms such as AFCC, SSKFCM and sSFCM, the proposed SFFD algorithm learns a Mahalanobis distance metric instead of using the original Euclidean distance during fuzzy clustering. Thus, following the same strategy as existing representative supervised algorithms, SFFD adapts the distances among samples to make the data more separable. With pairwise constraints, SFFD can categorize partially labelled data by determining the best feature weights within each cluster. Moreover, since the objective function of SFFD is based on that of FCM, it inherits most of the advantages of the FCM family of clustering algorithms. In particular, the proposed SFFD algorithm concentrates on calculating proper feature weights and making the best use of pairwise constraints to improve the separability of the data. By taking the constraints provided by the user into account, data sets of different shapes can produce results that more closely match our expectations. In future work, we shall continue to evaluate its performance on other real-world data sets, including image databases, and further investigate how to make it more suitable for real-world clustering applications.

Supporting Information

S1 Fig. Clustering performance variances in accuracy on four data sets of different shapes with respect to the weights (Variance = SFFD with weights − SFFD without weights). The data sets are the Dermatology, Ionosphere, Waveform and Wine data sets from the UCI repository. For SFFD without weights, the weights were fixed to the average value 1/n, for comparison with the algorithm using the weights calculated by Eq 11. Variances of classification accuracy between SFFD with and without weights were calculated as the variable V.

S1 Table. All the data sets used in our experiment. (XLS)

S2 Table. Comparison of classification accuracy on eight data sets. (XLS)

S3 Table. All the data sets listed in S1 Table. (RAR)
Trends in COVID‐19 death rates by racial composition of nursing homes

See related letter by Gilman et al.

INTRODUCTION

Coronavirus disease 2019 (COVID-19) has taken a severe toll on US nursing homes, which accounted for more than one-third of all COVID-19 deaths in the United States in 2020. 1 Nursing homes with low proportions of white residents accounted for a disproportionate share of these deaths through mid-September 2020. 2 We examine trends in COVID-19 death rates by racial composition of nursing homes through mid-April 2021.

METHODS

On May 25, 2020, the Centers for Medicare & Medicaid Services (CMS) began requiring nursing homes to report the weekly number of residents with suspected or laboratory-positive COVID-19 who died in the facility or another location, and the weekly number of occupied beds. This weekly information is included in CMS's COVID-19 Nursing Home Dataset. 3 We used this dataset to determine weekly COVID-19 deaths per 1000 residents for each facility from May 25, 2020 to April 18, 2021. This information was then merged with data on facility characteristics from LTCfocus, 4 quality data from Nursing Home Compare, 5 and county data from USAFacts. 6 Using these data, we examined the census region in which each facility was located, the number of certified beds, and the overall CMS star rating (which measures quality on a scale of 1-5, with 1 indicating the lowest quality and 5 the highest). 7 We also examined trends in community spread, defined as the weekly average number of confirmed COVID-19 cases per 1000 people in the county. Our analysis included 13,820 nursing homes that passed CMS data quality checks and had no missing data; this sample represents over 90% of all nursing homes in the United States.

RESULTS

Nursing homes were categorized into quintiles based on the percentage of residents who were white, with quintile 1 (low white) indicating 0%-58.1% and quintile 5 (high white) indicating 97.7%-100%. As shown in Figure 1, although the high-white quintile initially had a lower death rate than the low-white quintile, it had a substantially higher death rate from mid-September 2020 to late February 2021. By the time of the vaccine rollout to nursing homes in late December 2020, the high-white quintile had experienced 3 months of higher community spread, and its death rate had ballooned to nearly three times that of the low-white quintile (8.8 deaths per 1000 residents vs. 3.0 deaths per 1000 residents). After the vaccine rollout, death rates declined substantially for both groups. Overall, the high-white quintile had more total deaths than the low-white quintile (18,974 vs. 18,019) despite having fewer beds and higher star ratings.
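The rate and quintile computations described in the Methods reduce to a few lines of pandas. A sketch with hypothetical column names (the CMS dataset's actual field names differ):

```python
import pandas as pd

# Hypothetical columns for a cleaned weekly facility file:
# facility_id, week, covid_deaths, occupied_beds, pct_white.
df = pd.read_csv("nursing_home_weekly.csv")

# Weekly COVID-19 deaths per 1000 residents, per facility.
df["deaths_per_1000"] = 1000 * df["covid_deaths"] / df["occupied_beds"]

# Quintiles of racial composition (1 = low white, 5 = high white).
pct_white = df.groupby("facility_id")["pct_white"].first()
quintile = pd.qcut(pct_white, 5, labels=[1, 2, 3, 4, 5])
df = df.merge(quintile.rename("white_quintile").reset_index(),
              on="facility_id")

# Weekly mean death rate by quintile: the series plotted in Figure 1.
trend = (df.groupby(["week", "white_quintile"])["deaths_per_1000"]
           .mean().unstack())
print(trend.head())
```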
DISCUSSION

In this nationally representative study, we found that COVID-19 death rates by racial composition of nursing homes have changed in striking ways over the course of the pandemic. Although nursing homes with high proportions of white residents initially had fewer deaths per 1000 residents, they had substantially more deaths per 1000 residents after COVID-19 began to surge in their (primarily Midwestern) communities. By late December 2020, their death rate was three times that of nursing homes with low proportions of white residents. This gap was so large that, even after the vaccine rollout to nursing homes, it took an additional 2 months for the gap to effectively close.

One possible explanation for the surge in death rate and community spread for nursing homes with high proportions of white residents in the months leading up to the vaccine rollout is the low level of concern about COVID-19 among many white Americans. Throughout the pandemic, Pew Research polls have consistently shown that the level of concern about COVID-19 varies greatly across racial groups, with whites notably less likely to view the coronavirus as a major threat both to their personal health and to the health of the US public. 8 Similarly, these polls have also shown that, even before vaccines became available to the general public, whites reported lower levels of mask use 9 and less concern about unknowingly spreading the disease to others. 10 We hope our findings help disabuse people of the false and misguided notion that COVID-19 does not need to be taken seriously or only affects certain racial groups.

FIGURE 1 Trends in coronavirus disease 2019 death rates and community spread by racial composition of nursing homes. Nursing homes were categorized into quintiles based on the percentage of residents who were white, with quintile 1 (low white) indicating 0%-58.1% and quintile 5 (high white) indicating 97.7%-100%. Source: Authors' analysis of data from the Centers for Medicare & Medicaid Services, LTCfocus, and USAFacts
A Simplified Screening Model to Predict the Risk of Gestational Diabetes Mellitus in Pregnant Chinese Women

Introduction This study aimed to develop a simplified screening model to identify pregnant Chinese women at risk of gestational diabetes mellitus (GDM) in the first trimester. Methods This prospective study included 1289 pregnant women in their first trimester (6-12 weeks of gestation) with clinical parameters and laboratory data. Logistic regression was performed to extract coefficients and select predictors. The performance of the prediction model was assessed in terms of discrimination and calibration. Internal validation was performed through bootstrapping (1000 random samples). Results The prevalence of GDM in our study cohort was 21.1%. Maternal age, prepregnancy body mass index (BMI), a family history of diabetes, fasting blood glucose levels, the alanine transaminase to aspartate aminotransferase ratio (ALT/AST), and the triglyceride to high-density lipoprotein cholesterol ratio (TG/HDL-C) were selected for inclusion in the prediction model. The Hosmer-Lemeshow goodness-of-fit test showed good consistency between prediction and actual observation, and bootstrapping indicated good internal performance. The area under the receiver operating characteristic curve (ROC-AUC) of the multivariate logistic regression model and the simplified clinical screening model was 0.825 (95% confidence interval [CI] 0.797-0.853, P < 0.001) and 0.784 (95% CI 0.750-0.818, P < 0.001), respectively. The performance of our prediction model was superior to that of three other published models. Conclusion We developed a simplified clinical screening model for predicting the risk of GDM in pregnant Chinese women. The model provides a feasible and convenient protocol to identify women at high risk of GDM in early pregnancy. Further validations are needed to evaluate the performance of the model in other populations. Trial Registration ClinicalTrials.gov identifier: NCT03246295. Supplementary Information The online version contains supplementary material available at 10.1007/s13300-023-01480-8.

Gestational diabetes mellitus (GDM) is a common complication of pregnancy, and its prevalence has increased in the past few years. However, a feasible method for screening GDM risk in early pregnancy is lacking. We aimed to develop a convenient clinical prediction model to identify pregnant women at high risk of GDM in early pregnancy that would be applicable in most areas of China.

What was learned from the study? Three clinical characteristics (maternal age, prepregnancy body mass index, and a family history of diabetes) and three laboratory parameters (fasting blood glucose level, the triglyceride to high-density lipoprotein cholesterol ratio, and the alanine transaminase to aspartate aminotransferase ratio) in the first trimester were selected and used to develop a simplified clinical screening model. The model showed good discrimination (ROC-AUC 0.784, 95% confidence interval 0.750-0.818, P < 0.001) and calibration. The simplified prediction model in our study provides a simple and feasible tool to predict the risk of GDM in early pregnancy. Its performance was superior to that of three other published models, and it should be applicable to pregnant Chinese women.
INTRODUCTION

Gestational diabetes mellitus (GDM) is a common complication of pregnancy, defined as glucose intolerance first diagnosed during pregnancy [1]. The prevalence of GDM has increased globally in the past few years, possibly because of rapid societal transitions in nutrition and lifestyle. GDM affects up to 15% of pregnant women worldwide, and 18.3-25% of pregnant women in Southeast Asia, demonstrating the higher prevalence of GDM in China [2-4]. Accumulating evidence indicates that GDM not only increases the risk of perinatal complications (pregnancy-induced hypertension, preeclampsia, stillbirth, etc.) but can also lead to chronic health problems for offspring later in life, including diabetes mellitus, metabolic syndrome, and cardiovascular disease [5,6].

According to the International Association of Diabetes and Pregnancy Study Group (IADPSG) criteria, the diagnosis of GDM is based on the results of a 2-h, 75-g oral glucose tolerance test (OGTT) between 24 and 28 weeks of gestation [7]. However, pregnant women with GDM may have hyperglycemia for a longer period, even during the first trimester; thus, a diagnosis of GDM at 24-28 weeks of gestation is essentially retrospective and may not completely reverse the adverse effects on mothers and their offspring [2]. Therefore, it is essential to predict the risk of GDM in early pregnancy so that the hyperglycemic environment can be improved.

Several risk factors, including advanced maternal age, prepregnancy body mass index (preBMI), a family history of diabetes mellitus, and glucose and lipid profiles in early pregnancy, have been applied for the early identification of GDM [8-10]. Based on our previous work, the triglyceride (TG) to high-density lipoprotein cholesterol (HDL-C) ratio (TG/HDL-C), the alanine transaminase (ALT) to aspartate aminotransferase (AST) ratio (ALT/AST), and the hepatic steatosis index (HSI) are independent risk factors for GDM [11,12]. In recent years, other novel biomarkers have been reported as potential predictors, including angiopoietin-like protein 8 and plasma fatty acid-binding protein 4 [13,14]. The use of individual biochemical markers has shown relatively poor sensitivity and specificity, and combinations of risk factors have therefore been considered for predicting GDM risk. Several studies explored preBMI combined with first-trimester fasting blood glucose (FBG) as risk factors for predicting GDM [15-17]; however, cutoff values were not unified across studies, which limited the practicability of these combined risk factors. Because of the similar pathogenesis of GDM and type 2 diabetes mellitus (T2DM), several genetic variants related to insulin secretion (including glucokinase [GCK] and melatonin receptor 1B [MTNR1B]) and insulin resistance (including insulin receptor substrate 1 [IRS1] and peroxisome proliferator-activated receptor gamma [PPARG]) have been found to be associated with GDM [18]. Although the role of genetic variants in predicting GDM risk has been discussed, the conclusions are inconsistent [19,20]. To achieve early identification of GDM risk, prediction models based on sociodemographic characteristics and laboratory data have developed rapidly. However, the predictors are mostly evaluated during the second trimester (after 12 weeks of gestation), and it is uncertain whether models developed in other regions are applicable to
Chinese women [21,22]. In addition, some prediction models are too complex, and the variables they include are not routinely tested during pregnancy [23].

The aim of the present study was to develop a convenient clinical prediction model to identify pregnant women at high risk of GDM in early pregnancy. A mathematical formula was first established by logistic regression analysis, and a simplified screening model was then derived from it. The diagnostic utility of our prediction model was compared with that of other published GDM prediction models.

Ethical approval

Written informed consent was obtained from each participant, and the study was performed in accordance with the Declaration of Helsinki as revised in 2013.

Participants

Singleton pregnant women aged > 18 years were recruited at their first prenatal visit during the first trimester of pregnancy (between 6 and 12 weeks). The inclusion criteria were: (1) < 12 weeks of gestation and the ability to attend regular follow-up; (2) natural conception; (3) no medication use before or during pregnancy, except vitamins; and (4) agreement to participate in the study and provision of a signed consent form. The exclusion criteria were: (1) twin or multiple pregnancy; (2) impaired glucose tolerance or diabetes mellitus before pregnancy; (3) severe chronic or infectious diseases (e.g., liver disease, kidney failure, cardiovascular disease, autoimmune disease, hematological disease, AIDS, and other diseases present before pregnancy); and (4) inability to understand and complete the study. The enrollment flow chart is shown in Electronic Supplementary Material (ESM) Fig. 1. Because a previous study revealed that an FBG level ≥ 6.1 mmol/L in early pregnancy could predict the risk of GDM with a specificity of 100%, participants with an FBG level ≥ 6.1 mmol/L at the first visit were excluded from our study [10]. Baseline anthropometric and sociodemographic characteristics of the eligible women were collected at the first visit.

Clinical and Laboratory Measurements

Body height and weight were measured, and BMI was calculated as weight (kg)/height (m)². Body weight, systolic blood pressure, and diastolic blood pressure were measured at each follow-up visit. Blood pressure was measured twice at 5-min intervals using an automatic blood pressure monitor and averaged.

Laboratory tests were performed at the first visit. Homeostasis model assessment of insulin resistance (HOMA-IR) was calculated as FBG (mmol/L) × fasting insulin (µU/mL)/22.5 [24]. All participants were offered a 2-h, 75-g OGTT between 24 and 28 weeks of gestation for GDM screening. GDM was diagnosed according to the 2010 IADPSG criteria [25]. Overall, 1289 pregnant women were included in the present study. All available data were recorded and verified simultaneously by two investigators.

Data on the following pregnancy outcomes were collected from electronic medical records: gestational age at birth, type of delivery, infant birth weight, and the 10-min Apgar score. Preterm delivery was defined as delivery before gestational week 37 [26]. Large for gestational age (LGA) and small for gestational age (SGA) were defined as birth weights above the 90th percentile and below the 10th percentile, respectively, of the mean weight for gestational age and sex [27]. Delivery data were available for 1064 of the 1289 participants.
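The HOMA-IR arithmetic is simple enough to show directly; a one-line sketch with an invented first-trimester measurement pair:

```python
def homa_ir(fbg_mmol_l: float, insulin_uu_ml: float) -> float:
    """HOMA-IR = fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5."""
    return fbg_mmol_l * insulin_uu_ml / 22.5

# Invented first-trimester values: FBG 4.8 mmol/L, insulin 9.0 uU/mL.
print(f"HOMA-IR = {homa_ir(4.8, 9.0):.2f}")  # 1.92
```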
Statistical Analysis

Missing data accounted for < 10% of all data and were handled by multiple imputation (five imputed data sets). Continuous variables are presented as mean ± standard deviation if normally distributed and as median (interquartile range) if non-normally distributed; categorical variables are presented as percentages. Categorical variables were evaluated using the Pearson chi-squared (χ²) test. Comparisons between outcome groups for continuous variables were assessed by the two-sample Student's t-test or the Mann-Whitney U-test, as appropriate.

Univariate and multivariate logistic regression analyses were performed to identify risk factors for GDM by computing odds ratios (ORs) and their 95% confidence intervals (95% CIs). A backward stepwise procedure with a statistical significance cutoff of P = 0.05 was used to preliminarily select the variables retained in the multivariate logistic regression model. The variables included in the predictive model were selected on the basis of the Akaike information criterion. The coefficient estimates in the prediction model were normalized to construct a simplified GDM screening model. The diagnostic accuracy of the GDM prediction model and the simplified screening model was evaluated by receiver operating characteristic (ROC) analysis. Optimal cutoff values were defined at the maximum Youden index, calculated as (sensitivity + specificity) - 1 [28]. The area under the curve (AUC) with its 95% CI, sensitivity, specificity, positive likelihood ratio (LR+), and negative likelihood ratio (LR-) were used as measures of overall performance. Calibration was evaluated by the Hosmer-Lemeshow goodness-of-fit test and internally validated with bootstrapping (1000 random samples) to reduce overfitting bias. Statistical analyses were performed using IBM SPSS (version 26.0; IBM Corp, Armonk, NY, USA), GraphPad Prism (version 9.5.1; GraphPad Software, San Diego, CA, USA), and R (version 4.3.1; packages Hmisc, rms, and caret; R Foundation for Statistical Computing, Vienna, Austria). A P value < 0.05 (two-tailed) was considered statistically significant.

Similar methodologies have been described in our previous work [29].
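A sketch of the discrimination analysis just described, covering the logistic fit, the ROC-AUC, and the Youden-index cutoff, using scikit-learn on placeholder data (the real analysis was run in SPSS/R on the study variables):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

# Placeholder data standing in for the six predictors (age, preBMI,
# family history, FBG, ALT/AST, TG/HDL-C) and the GDM outcome.
rng = np.random.default_rng(42)
X = rng.normal(size=(1289, 6))
y = rng.binomial(1, 0.211, size=1289)          # ~21.1% prevalence

model = LogisticRegression(max_iter=1000).fit(X, y)
p = model.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, p))

# Optimal cutoff at the maximum Youden index J = (sens + spec) - 1.
fpr, tpr, thresholds = roc_curve(y, p)
j = tpr - fpr
print("cutoff at max J:", thresholds[np.argmax(j)])
```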
Clinical and Laboratory Characteristics

Of the 1289 participants enrolled in the present study, 272 (21.1%) developed GDM. The maternal and pregnancy characteristics of all participants are shown in Table 1. Compared with the normal glucose tolerance (NGT) group, women in the GDM group were older and heavier (P < 0.05). A family history of diabetes and a history of adverse pregnancy did not differ significantly between the two subgroups. The majority of participants were nulliparous (64.0% and 72.4% in the GDM and NGT groups, respectively), but more women with GDM were multiparous (P = 0.024). Women who developed GDM had significantly higher FBG and HOMA-IR levels in the first trimester of pregnancy (P < 0.01); in addition, other metabolic measures, including ALT, the ALT/AST ratio, and lipid profiles (TC, TG, HDL-C, and low-density lipoprotein cholesterol [LDL-C] levels, and the TG/HDL-C ratio), also differed significantly between the two groups (P < 0.05). Regarding pregnancy outcomes, most participants had a term delivery, and the incidence of preterm delivery did not differ significantly between the GDM and NGT groups. However, the proportion of LGA infants was higher in the GDM group than in the NGT group (5.4% vs. 2.0%; P = 0.006).

Predictors of GDM

The potential predictors of GDM were included in the logistic regression analysis. All clinical variables were included, and the laboratory variables in early pregnancy were screened to simplify the prediction model (FBG was substituted for HOMA-IR, the ALT/AST ratio for the individual ALT and AST levels, and the TG/HDL-C ratio for the other lipid measures). After preliminary predictor selection with the backward (LR) method, five variables remained in the model: two clinical and three laboratory variables. Although a family history of diabetes did not differ significantly between the GDM and NGT subgroups in our cohort, it has been reported as an important risk factor for GDM in previous studies [9]; we therefore added a family history of diabetes to the prediction model. The univariate and multivariate logistic regression analyses for the final six variables are presented in Table 2. ROC analysis of this prediction model showed an area under the curve (AUC) of 0.825 (95% CI 0.797-0.853, P < 0.001), with a sensitivity of 76% and a specificity of 72% (Fig. 1). The prediction model was assessed by the Hosmer-Lemeshow goodness-of-fit test and internally validated by bootstrapping. The Hosmer-Lemeshow test indicated good consistency between the predicted and actual data (χ² = 9.756, P = 0.283) (Fig. 2a). The calibration curve after bootstrapping indicated good internal performance in terms of discrimination, with an adjusted C-statistic of 0.821 (Fig. 2b).

Simplified Clinical Screening Model for GDM

In accordance with the CHARMS recommendations [30], we extracted coefficients from the multivariate logistic regression and used them to calculate the GDM risk score; the fitted model and the simplified scores are reported in Table 3. ROC curves were used to analyze the performance and discrimination of the simplified screening model (Fig. 1). The simplified screening model had an AUC of 0.784 (95% CI 0.750-0.818, P < 0.001), demonstrating well-accepted predictive and discriminative performance. The optimal cutoff of the scoring model was 5.5, with a sensitivity of 71% and a specificity of 74%; the LR+ was 2.73 and the LR- was 0.39. As shown in ESM Table 1, at a cutoff of ≥ 12.5 the specificity of GDM prediction exceeded 95%, and at a cutoff of ≥ 18.5 it exceeded 99%. The diagnostic capacity of this prediction model at different cutoff points is described in Fig. 3 and ESM Table 1.
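Turning regression coefficients into an integer points score is typically done by dividing each coefficient by a reference coefficient and rounding; the source normalizes its coefficients but does not spell out the scheme, so the sketch below uses one common approach with hypothetical coefficients, not the published estimates:

```python
# Hypothetical coefficients for binary/categorized versions of the six
# predictors; illustrative values only, not the published estimates.
coefs = {
    "age >= 35 y":          0.55,
    "preBMI >= 24 kg/m2":   0.80,
    "family history of DM": 0.62,
    "elevated FBG":         1.30,
    "elevated ALT/AST":     0.48,
    "elevated TG/HDL-C":    0.71,
}

ref = min(coefs.values())                  # smallest effect scores 1 point
points = {k: round(v / ref) for k, v in coefs.items()}
print(points)

# A woman's risk score sums the points of the factors she presents with;
# scores at or above the chosen cutoff (5.5 in the text) flag high risk.
present = ["preBMI >= 24 kg/m2", "elevated FBG", "elevated TG/HDL-C"]
print("score =", sum(points[k] for k in present))
```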
Of the 886 records retrieved through the database search, we selected three published clinical risk models to compare with our model [34-36]. As shown in Fig. 1 and ESM Table 2, our current model was superior to the other established GDM prediction models, with AUCs of 0.752 (95% CI 0.721-0.784) for Gao et al.'s model [34], 0.672 (95% CI 0.636-0.708) for Zheng et al.'s model [35], and 0.736 (95% CI 0.704-0.768) for Guo et al.'s model [36]. For two of the three published models (those of Gao et al. and Guo et al.), the performance in our participants was better than in the original reports, whereas Zheng et al.'s model performed worse.

DISCUSSION

In the present study, we developed a simplified clinical screening model for predicting the risk of GDM in early pregnancy. Using three clinical characteristics (maternal age, preBMI, and a family history of diabetes) and three laboratory parameters (FBG, the ALT/AST ratio, and the TG/HDL-C ratio) measured in the first trimester, the model showed good discrimination (a sensitivity of 71% and a specificity of 74%, with an AUC of 0.784) and calibration (as shown in Fig. 2a, b). This prediction model provides earlier screening for the risk of GDM and is applicable to pregnant Chinese women. Pregnant women with GDM have an increased risk of pregnancy complications. A systematic review and meta-analysis including 156 studies revealed that women with GDM had increased odds of cesarean section, preterm delivery, macrosomia, and LGA infants. Among pregnant women with GDM requiring insulin therapy, the odds of having an infant with respiratory distress syndrome were also higher [37]. Based on these findings, it is necessary to identify the risk of GDM as early as possible. Although numerous risk factors for GDM have been reported, the ability to precisely identify women at high risk for GDM before or early in pregnancy remains limited. The IADPSG recommended using an FBG range of 5.1-6.9 mmol/L before 24 weeks of gestation to define early GDM, and pregnant women with FBG levels in this range should be referred for immediate intervention [25]. However, it has been reported that FBG is related to gestational age and body weight, and several women with GDM have normal FBG levels in early pregnancy [38].
Fig. 1 The performance of our prediction model compared to that of the other published models for gestational diabetes mellitus prediction within our cohort. ROC, receiver operating characteristic

In addition, one study reported that even among pregnant women with FBG levels > 5.6 mmol/L before 24 weeks of gestation, > 50% did not develop GDM, indicating that predicting the risk of GDM by FBG levels alone is inaccurate [39]. Heterogeneity of the physiological processes underlying hyperglycemia has been revealed among women with GDM [40]. In a proportion of pregnant women with GDM, the pathophysiological mechanism is dominated by insulin secretion defects without impaired insulin sensitivity, whereas other patients have predominant insulin sensitivity defects with hyperinsulinemia and are more likely to develop altered adipokine profiles. The association of lipid profiles and liver function in early pregnancy with GDM has gradually been elucidated, but the diagnostic ability reported by each study differed, with disparate cutoff points [41,42]. Our previous work identified clinically useful biomarkers in early pregnancy for the prediction of GDM risk, which were used as variables in the prediction model reported in the present study and to determine cutoff values [11,12].

The parameters included in our scoring model have been reported in previous studies, providing the theoretical basis of the model.

Race is one of the risk factors for GDM [9]. The incidence of GDM in Chinese individuals is significantly higher than that in white individuals; thus, prediction models based on European or North American populations are not applicable to Chinese women.

Note: Participants with an FBG level ≥ 6.1 mmol/L in the first trimester were excluded from this prediction model because of probable impaired glucose tolerance before pregnancy.

Fig. 3 The diagnostic capacity of this prediction model at different risk scores. Sensitivity, specificity, and the positive likelihood ratio (LR+) are plotted against the y-axis on the left, and the negative likelihood ratio (LR-) against the y-axis on the right

Several GDM prediction models have been established in China. Wu et al. developed a clinical model for women in the first trimester of gestation by selecting seven variables via advanced machine learning, which demonstrated promising predictive value [43]. However, that model is too complicated for routine clinical care, especially in rural areas. Wang et al. found that FBG and TG levels during gestational weeks 14-20 were independent predictors of GDM and built a risk score using these two variables [44]. A prediction model based only on laboratory data, however, ignores the relationship between sociodemographic characteristics and GDM. Further studies devoted to predicting the risk of GDM using novel biomarkers, including genetic variants and proteomic analyses, have been implemented at institutions across China [45,46]. The aim of this study was to establish a practical and generalizable method to identify the risk of GDM in Chinese women in early pregnancy, and the simplified screening model presented herein achieved high accuracy. Three published models with variables similar to ours were contrasted with our prediction model, but none of them had better predictive values than our model, in terms of either the original AUC values or the derived ones [34-36].
The diagnostic utility of our prediction model was satisfactory, with an AUC of 0.784 (95% CI 0.750-0.818, P < 0.001). The optimal cutoff value of the model was 5.5, with a sensitivity of 71% and a specificity of 74%, which indicates that it could serve as a simplified, low-cost screening tool for clinical use. As shown in ESM Table 1, when the cutoff point was ≥ 12.5, the specificity was > 95%; when the cutoff point was ≥ 18.5, the specificity was > 99%. Therefore, we recommend that if the score is > 12.5, intervention measures should be taken immediately because of the high probability of GDM. In addition, women with FBG levels ≥ 6.1 mmol/L in the first trimester were excluded from our prediction model. Patients with FBG levels ≥ 6.1 mmol/L were defined as having impaired fasting glucose (IFG), which indicates that they may already have abnormal glucose metabolism. Zhu et al. found that a fasting plasma glucose cutoff value of 6.1 mmol/L at the first prenatal visit had a specificity of 1 for predicting the risk of GDM [10]. On this basis, we recommend that pregnant women with an FBG level ≥ 6.1 mmol/L in the first trimester be managed as women with GDM and receive lifestyle intervention or even insulin treatment.

There are several limitations to our study. First, some data were missing during early pregnancy in this prospective cohort. However, the proportion of missing data was < 10%, and multiple imputation was conducted to develop the prediction model. Second, as our study was derived and internally validated only in pregnant Chinese women, it may not be applicable to other populations. Performing external validation in other populations and different settings would have been the optimal approach, but this was not feasible in this cohort. Moreover, although the screening model showed good discrimination, it could not identify all women at high risk of GDM in the first trimester. When the cutoff point was 5.5, the screening model failed to identify 78 of the 272 (28.6%) pregnant women with GDM in this study. Further studies on GDM risk factors are needed to establish more accurate prediction models.

Fig. 4 The performance of our gestational diabetes mellitus prediction model stratified by different prepregnancy body mass index (preBMI) cutoff values. ROC, receiver operating characteristic

CONCLUSIONS

In conclusion, we developed a simplified screening model that can predict the risk of GDM in early pregnancy in the Chinese population based on sociodemographic characteristics and laboratory data; this model is easy to implement in most medical centers in China. Our prediction model showed better discrimination than other published models using similar biomarkers, with an ROC-AUC of 0.784 (95% CI 0.750-0.818). This model could help identify women at high risk of GDM earlier than the 75-g OGTT, which may reduce the rate of perinatal complications in pregnant women as well as the economic burden on society.
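The decision rules stated above can be collected into a single triage sketch. The function name and the returned labels are ours, not the paper's; the thresholds (FBG ≥ 6.1 mmol/L exclusion, score > 12.5 for immediate intervention, optimal cutoff 5.5) come from the text.

```python
# Illustrative triage combining the stated rules; not clinical guidance.
def triage(fbg_mmol_l: float, risk_score: float) -> str:
    if fbg_mmol_l >= 6.1:
        # Impaired fasting glucose: excluded from the model, managed as GDM.
        return "manage as GDM: lifestyle intervention or insulin treatment"
    if risk_score > 12.5:
        # At this cutoff the reported specificity exceeds 95%.
        return "immediate intervention: high probability of GDM"
    if risk_score > 5.5:
        # Above the Youden-optimal cutoff of the simplified score.
        return "high risk: confirm with 75-g OGTT at 24-28 weeks"
    return "low risk: routine screening"

print(triage(6.3, 2))    # excluded by FBG rule
print(triage(4.8, 14))   # score above 12.5
print(triage(4.8, 7))    # score above 5.5
```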
Medical Writing and Editorial Assistance

English editorial assistance was provided by Hannah S of Springer Nature Author Services, and we express our gratitude for their help. The English editorial assistance was funded by the first author, Yanbei Duo.

Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/.

Fig. 2 Assessment of the multivariate logistic model. a Hosmer-Lemeshow goodness-of-fit test (χ² = 9.756, P = 0.283). b Bootstrap-validated calibration curve (B = 1000 bootstrap repetitions, mean absolute error = 0.018, n = 1289). The x-axis represents the predicted probability of the multivariate logistic model, and the y-axis represents the actual probability of gestational diabetes mellitus. Perfect prediction would correspond to the 45° dashed line. The red line represents the entire cohort, and the orange line indicates bias correction by bootstrapping.

Table 3 (abbreviations and footnotes). NGT, normal glucose tolerance; GDM, gestational diabetes mellitus; preBMI, prepregnancy body mass index; FBG, fasting blood glucose; HOMA-IR, homeostasis model assessment of insulin resistance; ALT, alanine aminotransferase; AST, aspartate aminotransferase; ALT/AST, ALT-to-AST ratio; TC, total cholesterol; TG, triglyceride; HDL-C, high-density lipoprotein cholesterol; LDL-C, low-density lipoprotein cholesterol; TG/HDL-C, TG-to-HDL-C ratio; OGTT, oral glucose tolerance test; SBP, systolic blood pressure; DBP, diastolic blood pressure; LGA, large for gestational age (defined as a birth weight > 90th percentile of the mean weight for gestational age); SGA, small for gestational age (defined as a birth weight < 10th percentile of the mean weight for gestational age). *, ** Significant difference between subgroups at *P < 0.05 and **P < 0.01. a Defined as embryo damage, spontaneous abortion, or preterm delivery in a previous pregnancy. b Defined as delivery at < 37 completed weeks of gestation. TG/HDL-C ratio < 0.676 scores 0 points; TG/HDL-C ratio ≥ 0.676 scores 3 points.
Table 2 Potential predictors of gestational diabetes mellitus in the logistic regression analysis. GDM, gestational diabetes mellitus; OR, odds ratio; preBMI, prepregnancy body mass index; FBG, fasting blood glucose; ALT, alanine aminotransferase; AST, aspartate aminotransferase; TG, triglyceride; HDL-C, high-density lipoprotein cholesterol. ** Independent factors significantly associated with GDM at P < 0.01.

Table 3 Simplified clinical screening model for gestational diabetes mellitus. GDM, gestational diabetes mellitus; preBMI, prepregnancy body mass index; FBG, fasting blood glucose; ALT, alanine transaminase; AST, aspartate aminotransferase; TG, triglyceride; HDL-C, high-density lipoprotein cholesterol. a Defined as at least one family member having been diagnosed with diabetes.

Contributions. Yanbei Duo, Tao Yuan, Weigang Zhao, Wei Sun, and Ailing Wang conceptualized the study. Yuemei Zhang, Shuoning Song, Jiyu Xu, Yan Chen, Xiaorui Nie, Qiujin Sun, Xianchun Yang, and Zechun Lu performed the investigation. Yanbei Duo, Xiaolin Qiao, Zhenyao Peng, Jing Zhang, Tao Yuan, Yong Fu, and Yingyue Dong determined the methodology. Yanbei Duo, Shuoning Song, Yuemei Zhang, Xiaolin Qiao, Jiyu Xu, and Yan Chen collected the clinical data. Yanbei Duo wrote the original draft. Yanbei Duo, Tao Yuan, Weigang Zhao, Wei Sun, and Ailing Wang edited the manuscript. Weigang Zhao supervised the study. All authors approved the final draft of the manuscript.

Funding. This study was supported by the 13th Five-Year National Science and Technology Major Project for New Drugs (Grant No. 2019ZX09734001 to WZ). The Rapid Service Fee was funded by the authors.

Data Availability. The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.

Declarations

Conflict of Interest. Yanbei Duo, Shuoning Song, Xiaolin Qiao, Yuemei Zhang, Jiyu Xu, Jing Zhang, Zhenyao Peng, Yan Chen, Xiaorui Nie, Qiujin Sun, Xianchun Yang, Ailing Wang, Wei Sun, Yong Fu, Yingyue Dong, Zechun Lu, Tao Yuan, and Weigang Zhao have nothing to disclose.

Ethical Approval. Written informed consent was obtained from each participant, and the study was performed in accordance with the Declaration of Helsinki as revised in 2013. This study was part of an ongoing prospective double-center observational cohort study started in 2019, conducted at Haidian District Maternal and Child Health Care Hospital and Chaoyang District Maternal and Child Health Care Hospital (Beijing, China) (ClinicalTrials.gov: NCT03246295). The Ethical Review Committee of the National Center for Women and Children's Health, Chinese Center for Disease Control and Prevention, Beijing, China, approved this study on 3 April 2019 (approval number: FY2019-01).
Role of indocyanine green-guided near-infrared fluorescence imaging in identification of the cause of neonatal cholestasis

To evaluate the efficacy and safety of indocyanine green (ICG)-guided near-infrared fluorescence (NIRF) imaging during surgery to diagnose the cause of neonatal cholestasis (NC). Data on NC patients who underwent both NIRF with ICG and conventional laparoscopic bile duct exploration (the gold standard) at our institute from January 2022 to December 2022 were retrospectively analyzed. The patients' baseline characteristics and liver function outcomes were collected and analyzed, and the diagnostic consistency was compared between the 2 methods. In total, 16 NC patients were included in the study, comprising 8 (50%) male and 8 (50%) female patients, ranging in age from 42 to 93 days, with a median age of 54.4 ± 21 days. During surgery, all the patients underwent NIRF with ICG, followed by conventional laparoscopic bile duct exploration. Finally, 15 of the patients were diagnosed with biliary atresia (BA) (1 with type-I BA and 14 with type-II BA). The other patient was diagnosed with cholestasis. The diagnostic results from fluorescence imaging with ICG were consistent with those from conventional laparoscopic bile duct exploration. ICG-guided NIRF involves a simple operation, causes less trauma, and has a good safety profile; its diagnostic accuracy is also similar to that of conventional laparoscopic bile duct exploration.

Introduction

An accumulation of bile products in the liver, blood, and other organs can cause cholestasis, which is defined as an anatomical or functional restriction of biliary flow, regardless of the origin and site of obstruction. Neonatal cholestasis (NC) is the designation for cholestasis that begins within the first 3 months of life. Although anicteric newborns or infants with normal feces may exhibit NC, it is typically characterized by jaundice, hypocholic stools, and choluria [1]. NC is relatively common, with an incidence of 1:2500 in live births. It can result in serious outcomes and requires prompt intervention [2]. Biliary atresia (BA), an infantile hepatobiliary condition that includes both extrahepatic bile duct blockage and intrahepatic fibrosing cholangiopathy, is the most common cause of NC. Surgically releasing the extrahepatic obstruction by portoenterostomy may prevent the cirrhosis that develops with BA [3]. The patient's age at the time of surgery affects the postoperative prognosis of portoenterostomy [4,5]. Most patients receive the surgery within 60 days of age [6], and the importance of performing the portoenterostomy as early as possible is widely acknowledged [7]. Therefore, early diagnosis of BA is crucial to improving the prognosis. However, differentiating NC from other disorders in a timely manner remains challenging. Although many noninvasive diagnostic modalities have been proposed under current guidelines, intraoperative cholangiography is still the gold standard for diagnosing BA. In intraoperative cholangiography, the gallbladder is freed and brought outside the abdominal wall for angiography to be performed; however, this can cause surgical and radiation trauma.
[8] After an intravenous injection of indocyanine green (ICG), the biliary anatomy can be identified intraoperatively using near-infrared fluorescence (NIRF) imaging. Near-infrared light has low autofluorescence and can penetrate tissues to a depth of up to 1 cm [11]. ICG is a relatively nontoxic near-infrared fluorescent iodide dye that can be rapidly taken up by liver cells and excreted through the biliary system [12]. When exposed to near-infrared light, ICG binds to plasma proteins and produces light with a peak wavelength of around 830 nm [13]. However, the efficacy and safety of the technique in diagnosing neonatal BA remain unclear. Consequently, we conducted this retrospective study to evaluate whether ICG injection followed by NIRF imaging can accurately and safely diagnose neonatal BA during surgery.

Patient selection

NC patients who underwent both fluorescence imaging with ICG and conventional laparoscopic bile duct exploration (the gold standard) from January 2022 to December 2022 at our institute were included in the study. The indications for performing laparoscopic bile duct exploration were as follows: after birth, the serum bilirubin did not decrease, or it decreased but increased again; the serum bilirubin level rose to ≥ 300 μmol/L, or the direct bilirubin level accounted for more than 50% of the total bilirubin and was accompanied by a persistent increase in the glutamyl transpeptidase level; jaundice did not improve after 2 weeks of conservative medical treatment [14]; transabdominal ultrasound examination indicated poor gallbladder development, no significant change in gallbladder volume before and after feeding, or a hepatic hilar fibrous plaque [15]. This study was approved by the ethics committee of Guangzhou Women and Children Medical Center, Guangzhou Medical University (2023-234A01), and all methods were carried out in accordance with relevant guidelines and regulations. Written informed consent to participate was obtained from all the patients' parents or legal guardians.

Procedure used to perform NIRF imaging guided by ICG

All the included children received intravenous ICG 12 hours before surgery. The ICG was dissolved in sterilized water for injection, diluted to a standard concentration (5 mg/mL), and then 0.3 mg/kg was injected slowly through a peripheral vein [16]. NIRF was used to detect the ICG and show the anatomy of the biliary tract.

After that, conventional laparoscopic bile duct exploration was performed. The gallbladder was freed from the gallbladder bed and then presented through the abdominal incision. The base of the gallbladder was cut open and catheterized, and then injected with Ultravist. Bedside radiography (X-ray) was performed to delineate the extrahepatic biliary tract.

Observation parameters

The key observation parameters for this study were the final diagnoses made according to the 2 methods.

Baseline characteristics

In total, 16 neonates were included in our study, comprising 8 males and 8 females. The median age at the time of surgery was 54.4 ± 21 days (range, 42-93 days) (Table 1).

Final diagnoses

According to conventional laparoscopic bile duct exploration, BA was diagnosed in 15 cases (Fig. 1) and cholestasis in 1 case (Fig. 2). Among the BA cases, 1 was type-I BA and 14 were type-II BA. The results were also confirmed by ICG-guided NIRF, and the diagnostic accuracy of the technique was 100%.
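As a small worked example of the dosing arithmetic above (5 mg/mL standard dilution, 0.3 mg/kg injected 12 hours before surgery), the sketch below computes the dose and injection volume for a given body weight. The weights are plausible values for infants in the study's age range, not patient data, and this reproduces only the arithmetic, not clinical guidance.

```python
# Dose and volume of the 5 mg/mL ICG dilution at 0.3 mg/kg.
STOCK_MG_PER_ML = 5.0
DOSE_MG_PER_KG = 0.3

def icg_injection_volume_ml(weight_kg: float) -> float:
    """Volume of the 5 mg/mL ICG solution delivering 0.3 mg/kg."""
    return DOSE_MG_PER_KG * weight_kg / STOCK_MG_PER_ML

for w in (3.5, 4.2, 5.0):  # illustrative infant weights in kg
    print(f"{w:.1f} kg -> {DOSE_MG_PER_KG * w:.2f} mg ICG "
          f"= {icg_injection_volume_ml(w):.2f} mL")
```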
Discussion

The etiology of NC is complex and is related to many factors, including hereditary and environmental factors. The main manifestations of cholestasis are jaundice, white stools, hepatosplenomegaly, and elevation of the serum bilirubin. BA is the most common cause of NC. The main pathological features of BA are progressive inflammation and fibrotic atresia in both the intrahepatic and extrahepatic biliary systems. The vast majority (80%) of BA cases are perinatally acquired, and it is believed that immune abnormalities may be the main cause of its occurrence [17]. Increasing evidence suggests a deeper involvement of intricate mechanisms of innate immunity from the onset of the disease: oxidative stress, altered metabolism, and the induction of long-term/abnormal epigenetic changes. However, the full etiology of the disease remains unknown.

BA can lead to severe obstructive cholestasis and eventually cholestatic cirrhosis, portal hypertension, and even liver failure. These are the main reasons for liver transplantation in children [18]. Therefore, early diagnosis and intervention are important to improve the prognosis of BA patients. When BA cannot be diagnosed by existing noninvasive diagnostic methods but is highly suspected, a definite diagnosis should be made by radiography. At present, laparoscopic exploration and intraoperative cholangiography are the gold standard for diagnosing BA [19].

The procedure for laparoscopic bile duct exploration requires freeing the gallbladder from the abdominal wall and injecting a contrast agent through an external tube; X-ray fluoroscopy is then used to show whether the bile duct is unobstructed. This method is highly accurate, but it often requires a mobile C-arm X-ray machine to be prepared in advance and the cooperation of radiologists. The operation is complicated, cumbersome, and time-consuming, and there is a risk of radiation exposure. In addition, the surgical trauma is relatively large, as the biliary system needs to be dissociated, and the procedure carries a postoperative risk of adhesive intestinal obstruction and biliary fistula [20]. If ICG-guided NIRF is performed instead, these risks can be avoided in neonatal patients with BA.

The application of ICG in laparoscopic cholecystectomy is still debated. According to a number of studies, an injection given before surgery into the elbow vein can benefit imaging of the liver and biliary system [21,22]. However, since ICG accumulates in the liver during surgery, liver fluorescence occurs, which can interfere with imaging of the gallbladder and biliary system. This makes it harder to recognize Calot's triangle and the biliary anatomy. In NC patients, it is often difficult to inject into the elbow vein, which can affect the fluorescence and may raise concern about the analysis. However, this negative effect was not identified in our study. A prospective study with a large sample size is needed to verify our results.

ICG is a near-infrared fluorescent dye that can be rapidly taken up by liver cells and excreted in a free form through the biliary system into the intestine, and then excreted in feces without intermediate metabolites.
[23] When ICG combines with the protein in bile, it emits infrared light with a wavelength of about 840 nm, which can be captured by an infrared camera to show the condition of the liver and biliary duct. Owing to this characteristic, ICG is increasingly applied in clinical studies, especially those involving hepatobiliary and oncologic surgery. The intraoperative application of ICG can help locate the positions and boundaries of tumors precisely, supporting precise treatment [24]. However, to the best of our knowledge, the application of the technique in diagnosing NC has not been reported before. The results of our study suggest that NIRF guided by ICG is accurate and safe, and has value in clinical application.

Studies have shown that a high concentration of ICG can affect the liver function of children, while a low concentration of ICG can achieve a good fluorescence imaging effect without adverse effects on liver function [12,25]. The risk of adverse events caused by ICG injection is considered minimal, occurring mainly when the injected dose is > 0.5 mg/kg, with an incidence of about 0.003% [26]. The safety profile has also been satisfactory in pediatric patients [27]. Therefore, an ICG dose of 0.3 mg/kg was selected for NIRF in this study, and the imaging effect was optimal. Moreover, liver function showed no significant worsening between before and after surgery, which indicated that ICG-guided NIRF was safe. If a patient is diagnosed with cholestasis, the biliary tract can be irrigated with a medical syringe to prevent ICG retention. We can conclude that the technique is safe and reliable, greatly reduces surgical trauma, and is beneficial for the postoperative recovery of NC patients.

There are a number of limitations of this study to note, including the fact that the surgeon and the individual performing the quantitative assessment could not truly be blinded. Second, it may be necessary to optimize the dose of ICG by further adjusting it based on the participants' liver function and their desired and actual body weights. Third, the sample size of our study was relatively small, and so the diagnostic yield could not be determined. Prospective studies with a larger sample size are needed to further verify our findings.

Conclusion

In summary, the application of the ICG-guided NIRF technique in biliary exploration offers good safety, reliability, and accuracy. A future prospective study with a large sample size is recommended to confirm our results before the technique is applied in clinical practice.

Figure 1. Intraoperative situation of a child with cholestasis. (A) Extrahepatic biliary tract; (B) indocyanine green fluorescence staining of the extrahepatic biliary tract; (C) original condition of the extrahepatic biliary tract; (D) extrahepatic cholangiography showing the intrahepatic biliary tract.

Figure 2. Intraoperative situation of a child with biliary atresia. (A) Extrahepatic biliary tract; (B) indocyanine green fluorescence staining of the extrahepatic biliary tract. Note, the gallbladder and extrahepatic biliary tract were not observed; (C) original condition of the extrahepatic biliary tract. Note, the gallbladder and extrahepatic biliary tract were not observed; (D) extrahepatic cholangiography not showing the intrahepatic and extrahepatic biliary tracts.

Table 1 Baseline characteristics and preoperative liver function outcomes.
MIMO Radar Parallel Simulation System Based on CPU/GPU Architecture

The data volume and computational load of MIMO radar are huge, so very high-speed computation is necessary for real-time processing. In this paper, we mainly study the time division MIMO radar signal processing flow and propose an improved MIMO radar signal processing algorithm that raises processing speed relative to previous algorithms; on this basis, a parallel simulation system for MIMO radar based on the CPU/GPU architecture is proposed. The outer layer of the framework applies coarse-grained OpenMP acceleration on the CPU, and the inner layer of fine-grained data processing is accelerated on the GPU. Its performance is significantly faster than that of serial computation, and satisfactory acceleration has been achieved in the CPU/GPU architecture simulation. The experimental results show that the MIMO radar parallel simulation system with CPU/GPU architecture greatly improves on the computing power of the CPU-based method. Compared with the serial sequential CPU method, the GPU simulation achieves a speedup of 130 times. In addition, the MIMO radar signal processing parallel simulation system based on the CPU/GPU architecture delivers a performance improvement of 13% compared to the GPU-only method.

Introduction

Multiple-input multiple-output (MIMO) radar is defined broadly as a radar system employing multiple transmit waveforms and having the ability to jointly process signals received at multiple receive antennas [1]. Elements of MIMO radar transmit independent waveforms, resulting in an omnidirectional beampattern, or create diverse beampatterns by controlling the correlations among the transmitted waveforms [2]. In [3], it is observed that MIMO radar has more degrees of freedom than systems with a single transmit antenna. These additional degrees of freedom support flexible time-energy management modes [4] and lead to improved angular resolution [5,6]. MIMO radar handles slow-moving targets by exploiting Doppler estimates from multiple directions, which allows MIMO radar to have a low probability of intercept (LPI) [7] and supports high-resolution target localization [8]. In addition, MIMO radar significantly improves radar speed resolution and search ability, anti-jamming and anti-clutter performance, and angular resolution [9]. Owing to these advantages, MIMO radar is widely used in remote sensing, navigation, weather forecasting, resource detection, and other fields [10,11]. In terms of working mode, MIMO radar is mainly divided into three categories: time division multiplexing (TDM), frequency division multiplexing (FDM), and code division multiplexing (CDM). Time division multiplexing MIMO radar transmits signals from one element per time slot. The algorithms of time division MIMO radar include the windowing algorithm, moving target indication (MTI), the moving target detection (MTD) algorithm, transmit and receive beamforming algorithms, and the constant false alarm rate algorithm. Frequency division multiplexing MIMO radar detects the target by assigning different signals to the different transmit array elements. The transmitted signals of the array elements are orthogonal to each other, and matched filtering of the received signals separates the different transmitted signals.
Frequency-division MIMO radar algorithms include the digital beamforming (DBF) algorithm, the pulse synthesis algorithm, and the MTI and MTD algorithms [12]. CDM offers more degrees of freedom than TDM, which results in a more flexible design of the transmission sequences. On the other hand, a time division multiplexing MIMO radar requires less complex hardware. This is especially important for radars applied to automobiles, as they have to be produced at low cost [13,14]. The transmission waveform of the time division MIMO radar can use the chirp signal, and the signals transmitted by different transmit array elements can be distinguished according to time. The algorithm processing is simple, so the time division MIMO radar was adopted for processing in this paper.

With the development of MIMO radar technology, MIMO radar is used for high resolution, multitarget tracking, virtual array elements, and multimode operation. For example, MIMO synthetic aperture radar (SAR) is a fascinating research field. The concept of MIMO SAR was first introduced in [15]. With the increased resolution of SAR systems and the demand for 3D applications [16], the objects of SAR raw data simulation have changed from point targets or surface targets into natural 3D terrain, which causes rapid growth of the computational time. Therefore, it is necessary to improve computational efficiency for the wide application of MIMO SAR. A combination of programmable logic gate arrays and digital signal processing on hardware boards has been adopted [17]. These two methods are expensive and have poor scalability. Another method is to use the central processor for parallel computing [18]. The central processing unit (CPU) can handle huge amounts of data, but it has many shortcomings, and this method is difficult to develop [19]. Among published studies, the parallelization effect of the field programmable gate array (FPGA) module can be better than that of the CPU [20,21]. However, the FPGA has high design complexity, and its absolute calculation speed is not high. Although the methods described in these papers are optimal for general data volumes, they may not be the method of choice otherwise. Therefore, a more efficient method to simulate MIMO radar signal processing is desired. As GPU computing power continues to grow, using GPUs to accelerate radar signal processing algorithms improves computing speed and reduces development costs. Therefore, in view of the extremely strong computing power of the GPU, it will be applied more and more commonly to radar signal processing algorithms of all kinds. In order to improve the efficiency of MIMO radar signal processing simulation, a CPU-oriented method and a GPU-oriented method are used in this paper. The CPU-oriented approach mainly refers to parallel simulation on the CPU platform, such as multicore open multiprocessing (OpenMP) [22], multi-CPU message passing interface (MPI), and multimachine grid computing. The acceleration effect of these methods is proportional to the number of CPUs. A CPU hardware cluster uses multiple CPUs for parallel calculation, but the cost of deploying such a cluster is high, resulting in high parallel simulation costs. Acceleration strategies may rely more on computing platforms than on fast algorithms. As is well known, the CPU clock frequency is no longer increasing significantly, and multicore CPUs have become the new development direction.
A GPU-oriented method realizes massively parallel simulation on the GPU platform, and general-purpose computing on GPUs in particular has become a very attractive development direction in recent years [23]. Early GPUs were mainly used for image processing, while modern GPUs are equipped with general-purpose programming interfaces such as NVIDIA's CUDA (which does not require programmers to master much graphics knowledge), making them especially suitable for massively parallel numerical calculation [24]. How to make full use of the GPU to improve the efficiency of MIMO radar signal processing has become a hot topic in recent years. Compared with the CPU method, the GPU method achieves a speedup of dozens to hundreds of times, and it is an efficient, low-cost solution for massive-data MIMO radar signal processing. However, in GPU-based MIMO simulation, the CPU is often ignored as a computing resource: the CPU cores typically remain idle while the GPU cores are busy with calculations. Heterogeneous CPU/GPU computing seems to be the best solution to further improve simulation efficiency [25], because an increasing trend is to use multiple (usually heterogeneous) computing resources as the computing resources of the system. Heterogeneous simulation is a hybrid simulation that implements multicore CPU parallelism together with many-core parallelism on the GPU. Because almost all ordinary computers today are shared-memory multicore systems, they can easily be upgraded to GPU/CPU platforms. Therefore, it is very meaningful to implement the heterogeneous parallel signal processing algorithm flow of the time division MIMO radar on a GPU/CPU platform [26]. Reference [27] proposed a hybrid CPU-GPU multilevel preconditioner with a moderate memory footprint for solving a sparse system of equations resulting from the finite element method (FEM) using higher-order elements. Reference [28] presented a parallel high-efficiency video coding (HEVC) intraprediction algorithm for heterogeneous CPU+GPU systems. Reference [29] presented a full realization of the higher-order method of moments (HMoM) with a parallel out-of-core LU solver on a GPU/CPU platform. An SAR raw data simulation method based on a multicore SIMD processor and multi-GPU deep collaborative computing was also proposed [26]. In this paper, a GPU-based time division MIMO radar signal processing algorithm flow is proposed. Compared with previous MIMO radar signal processing algorithms, this method has the following characteristics. (1) We propose a time division MIMO radar signal processing algorithm based on GPU parallel acceleration. (2) To improve processing performance, we propose an improved GPU-based time division MIMO radar signal processing algorithm, which simplifies the signal processing flow and speeds up processing remarkably. (3) To enable the CPU to participate in the computation, we propose a CPU parallel acceleration method based on OpenMP technology combined with CUDA stream operations. The method uses pipelined operation so that the time division MIMO radar signal processing algorithm meets real-time requirements, further improving calculation efficiency. The simulation results show that, compared with the traditional CPU-based time division MIMO radar signal processing flow, the proposed GPU-based algorithm performs better, and each GPU-accelerated stage outperforms its CPU counterpart.
The processing efficiency relative to a single-core CPU has been increased by more than 50 times, and GPU computing with the improved algorithm achieves a 130-fold acceleration. The rest of the article is structured as follows. Section 2 briefly introduces the basic principles of MIMO radar, the echo model, and the signal processing flow of time division MIMO radar. Section 3 mainly introduces time division MIMO radar algorithm optimization and GPU acceleration. The experimental results and optimization analysis are discussed in Section 4, and finally a conclusion is drawn.

Basic Principles of MIMO Radar

The basic principles of MIMO radar are shown in Figure 1. The transmitting terminal of the MIMO radar is composed of M transmit array elements. By controlling each digital transceiver unit at the transmitting terminal, the transmit arrays emit mutually orthogonal or partially orthogonal signal waveforms. The transmitted waveforms cannot be superimposed in the air to synthesize high-gain narrow beams; instead, they synthesize low-gain wide beams in space [30]. The MIMO radar receives the target's echo signal at the antenna receiving terminal, then uses digital beamforming (DBF) technology to accumulate gain in the spatial domain, finally forming multiple high-gain narrow beams simultaneously. Meanwhile, it can synthesize receive and transmit beams with different directions by changing the digital beam coefficients.

MIMO Radar Echo Model

Under far-field conditions, it is assumed that the angle between the target and the antenna array is θ. Taking the rightmost array element as a reference, and accounting for transmission attenuation and time delay, the signal s_m(t) sent by the m-th element arrives at the target as

x_m(t) = α1 s_m(t - τ - τ_m) ≈ α1 s_m(t - τ) e^(-jφ_m),

where α1 indicates the amplitude attenuation of the signal reaching the target, τ = R/c represents the time required for the signal emitted by the reference element to propagate to the target, τ_m is the delay of the m-th element relative to the reference element in reaching the target [32], and φ_m is the phase delay corresponding to τ_m; c is the speed of light, and R is the distance between the reference element and the target. From the uniform linear array structure, between two adjacent array elements the right element is d sin θ farther from the target than the left element, so that

τ_m = (m - 1) d sin θ / c,  φ_m = 2π f_c τ_m,

where d is the element spacing and f_c the carrier frequency. Assuming that the amplitude attenuation of each element's transmitted signal at the target is α1 and that all transmitted signals are narrowband, the combined signal at the target can be written as

x(t) = α1 Σ_{m=1}^{M} s_m(t - τ) e^(-jφ_m).

The combined signal at the target propagates to each receiving element after being reflected by the target. Similarly, the amplitude attenuation of the reflected signal reaching each receiving element is α2, and the echo received by the n-th element is

y_n(t) = α2 x(t - τ) e^(-jφ_n) + n_n(t),

where n_n(t) is the Gaussian white noise received by the n-th element, and the phases across the N receive elements are collected in the receive steering vector ϑ_r [33,34].

Time Division MIMO Radar Processing Flow

The time division multiplexing MIMO radar has M transmit antenna elements and N receive antenna elements; its corresponding virtual antenna array is a uniform linear array of MN elements, numbered from the first to the MN-th element according to their virtual positions in space. It transmits signals from one element per time slot.
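The steering-vector and virtual-array relations above can be made concrete with the sketch below. The carrier frequency, target angle, and element spacings are illustrative assumptions (the paper gives none); the transmit spacing is chosen as N times the receive spacing so that the Kronecker product yields the filled MN-element virtual uniform linear array described in the text.

```python
# Build the transmit/receive steering vectors of a uniform linear array and
# the 200-element TDM virtual array response via their Kronecker product.
import numpy as np

fc = 24e9                      # assumed carrier frequency, not from the paper
lam = 3e8 / fc
theta = np.deg2rad(20.0)       # illustrative target angle

def steering(n_elem, spacing, theta):
    # Phase ramp from the per-element path difference spacing * sin(theta).
    m = np.arange(n_elem)
    return np.exp(-1j * 2 * np.pi * spacing * m * np.sin(theta) / lam)

d_r = lam / 2                  # receive element spacing
d_t = 50 * d_r                 # transmit spacing giving a filled virtual ULA
a_r = steering(50, d_r, theta) # N = 50 receive elements
a_t = steering(4, d_t, theta)  # M = 4 transmit elements
a_virtual = np.kron(a_t, a_r)  # MN = 200 virtual-element response
print(a_virtual.shape)         # (200,)
```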
Through the specific time division multiplexing timing of the transmit and receive antenna elements, the virtual antenna array receives signals as the first element, the second element, the third element, the fourth element, ..., the MN-th element, which makes it possible to distinguish the signals emitted by the different transmit elements according to time. The time division MIMO radar signal processing flowchart is shown in Figure 3. Under the time division MIMO working mode, the basic process of echo signal processing is as follows:

1. Dechirp processing is performed on the received echo signal. While preserving the same resolution, this greatly reduces the signal bandwidth and converts the wideband signal into a single-frequency signal.

2. To suppress clutter and improve detection performance, moving target indication (MTI) is applied before detection, mainly to eliminate clutter and stationary targets. Moving target detection (MTD) processing then performs a fast Fourier transform (FFT) or FIR filtering on the data of the same range units across different pulse repetition periods to eliminate the effects of clutter.

3. Simultaneous digital multibeam forming (receive beamforming) is applied to the M echo signals of the receiving channels to obtain high-gain receive beams pointing in k directions. By setting the weighting factors of the receive steering vectors appropriately, the receive beam direction can be controlled flexibly so that the beams point toward the spatial detection area of interest.

Improved Time Division MIMO Radar Processing Flow

Meeting real-time requirements is difficult even with GPU processing, so the algorithm needs to be improved. In the improved flow, windowing, MTI, the MTD algorithm, and Doppler phase compensation are combined into one module for processing. Windowing minimizes spectrum leakage, resulting in low sidelobes: the MTI window improves the main-to-sidelobe ratio in the range dimension, and the MTD window improves it in the Doppler dimension. The two FFTs can be carried out at once as a single two-dimensional FFT. To reduce the amount of calculation, the range window and the MTD window can be merged into one window so that windowing is performed only once (a numerical check of this equivalence is sketched below). Doppler phase compensation is processed along the pulse dimension, and MTI is also carried out along the Doppler dimension; the MTI filter and the phase-compensation window function can therefore be synthesized together, so that MTI filtering and phase compensation are performed at the same time. The improved time division MIMO radar signal processing flowchart is shown in Figure 4. In the time division MIMO working mode, the basic flow of echo signal processing is as follows:

1. The received echo signal is subjected to dechirp, moving target indication (MTI), and moving target detection (MTD) processing, mainly to eliminate clutter and stationary targets.

2. Simultaneous digital multibeam forming (receive beamforming) is performed on the M echo signals of the receiving channels to obtain high-gain receive beams pointing in k directions. We can flexibly control the direction of the receive beam by setting the weighting factor of the receive steering vector reasonably, so that the receive beam points toward the spatial detection area of interest.
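The window-merging step above is a simple algebraic identity: applying the range window and the Doppler (MTD) window as two separate element-wise products equals applying their outer product once. The NumPy check below verifies this on illustrative shapes (pulses × range gates); the Hamming windows are placeholders for whichever windows the system actually uses.

```python
# Verify: two windowing passes == one pass with the fused (outer-product) window.
import numpy as np

rng = np.random.default_rng(1)
pulses, gates = 16, 500
echo = rng.normal(size=(pulses, gates)) + 1j * rng.normal(size=(pulses, gates))

w_rng = np.hamming(gates)             # range-dimension window
w_dop = np.hamming(pulses)            # Doppler-dimension (MTD) window

two_pass = (echo * w_rng) * w_dop[:, None]
fused = echo * (w_dop[:, None] * w_rng[None, :])   # single fused window
assert np.allclose(two_pass, fused)

rd_map = np.fft.fft2(fused)           # one 2D FFT: range + Doppler together
print(rd_map.shape)                   # (16, 500)
```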
MIMO Algorithm GPU Parallel Processing Flow

The CUDA programming model is based on a heterogeneous system composed of a CPU and a GPU, so we not only optimize the execution efficiency of the device-side code but also take into account the efficiency of the collaboration between the host and the device. The running process of a CUDA program is as follows:

1. Allocate memory on the host and device and prepare the data.
2. Copy the data from the host to the device.
3. Launch the kernel function for calculation.
4. Transfer the calculation result from the device back to the host.

Time division MIMO radar signal processing simulation is a serial time process in which the different algorithms are processed sequentially according to the signal processing flow; within each stage, however, the data are independent, so the GPU can be used to perform parallel calculations. The fine-grained parallel strategy of time division MIMO radar signal processing simulation treats each algorithm as a computing node: the nodes execute serially, while the data within each node are processed in parallel, improving the processing speed of the signal processing algorithm. The data in each channel of the time division MIMO radar are divided among threads and processed at the same time. Figure 5 shows the CUDA implementation framework for accelerating the improved time division MIMO radar signal processing flow on the GPU. The fine-grained parallel simulation of time division MIMO radar signal processing based on CUDA not only conforms to the physical process of time division MIMO radar signal processing, but also makes full use of the hardware resources and computing power of the GPU. This method is suitable for time division MIMO radar signal processing flows with large data volumes. For a MIMO radar with 4 transmit elements and 50 receive elements, the echo signals received after the four transmit array elements transmit are T1, T2, T3, and T4. The traditional time division MIMO radar processing flow first performs windowing, the MTI and MTD algorithms, and Doppler phase compensation, and then beamforms the data of T1, T2, T3, and T4. The signal processing is greatly complicated, and it is difficult to meet real-time requirements even with GPU processing, so the algorithm needs to be improved. In this paper's flow, windowing, MTI, the MTD algorithm, and Doppler phase compensation are combined into one module for processing, which meets real-time requirements.
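Before the per-module details below, the four-step CUDA host/device workflow can be sketched with CuPy as a Python stand-in for the paper's CUDA C implementation. The array shape (channels × pulses × range gates) is illustrative, and cp.fft.fft2 stands in for the CUFFT calls used in the actual system.

```python
# The four-step CUDA workflow: prepare on host, copy H2D, compute, copy D2H.
import numpy as np
import cupy as cp

# 1. Allocate and prepare data on the host (CPU).
host_echo = (np.random.randn(200, 16, 500)
             + 1j * np.random.randn(200, 16, 500)).astype(np.complex64)

# 2. Copy the data from host memory to device (GPU) memory.
dev_echo = cp.asarray(host_echo)

# 3. Launch the computation on the device; every (channel, pulse, gate)
#    element is handled by GPU threads in parallel.
dev_rd = cp.fft.fft2(dev_echo, axes=(1, 2))

# 4. Transfer the result from the device back to the host.
host_rd = cp.asnumpy(dev_rd)
print(host_rd.shape)   # (200, 16, 500)
```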
Firstly, preprocessing is performed on the CPU: the number of transmit elements, the number of receive elements, the number of range units, the number of pulse points, and other parameters are set; memory is allocated on the host; and the T1, T2, T3, and T4 received data and the various window-function data are loaded into memory for initialization. Memory is then allocated on the device, and the data on the host are copied to the device; the downsampled data are copied from the CPU side to the GPU side. The echo signals T1, T2, T3, and T4 copied to the GPU are each replicated four times, yielding 16 copies of data. The main reason is that the velocity measurement range of the original four data sets is too small; after replication, the velocity range can be expanded fourfold following compensation. Then the main algorithms are processed in parallel. The parallel processing of MIMO radar under the CPU/GPU architecture is mainly fine-grained parallel processing on the GPU. The T1 data are processed on the GPU, passing first through the windowed MTI/MTD phase compensation module; T2, T3, and T4 undergo the same operation. Next is the beamforming module, which is also processed on the GPU. Finally, the data output operation is performed: the processed data are copied from the device back to the host and output, and the data processed by the entire time division MIMO radar signal chain are obtained for plotting and comparison.

Preprocessing on the CPU

The time division MIMO radar signal processing algorithm is first preprocessed on the CPU, including setting the number of transmit elements, the number of receive elements, the number of range units, the number of pulse points, the received data, and the various window data. Memory is then allocated on the CPU and GPU, the data are stored on the CPU, and the preprocessing is completed on the GPU. The flow of preprocessing on the CPU is shown in Figure 6.

Windowed MTI/MTD Phase Compensation Module

For a MIMO radar with 4 transmit antenna elements and 50 receive elements, 200 (4 × 50) virtual antenna elements are generated, and all of these signals need to be windowed. The windowing operation can be implemented as a frequency-domain multiplication, so the CUFFT library provided by CUDA C is used to compute the Fourier transforms. The sampled data of the same range unit over several adjacent pulse repetition periods within each frame are canceled in turn by MTI. MTD applies an FFT to all the sampled data of the same range unit in each frame, and the same cancellation and FFT processing are carried out for all range-unit sampling points, so MTI and MTD processing involve correlation between periodic data. However, the sampled data of different range units are unrelated. In this way, the MN two-dimensional matrices obtained after pulse compression can be divided by the number of sampling points into N data blocks to achieve data-level parallelism between the echo data of different range units, with each data block containing the echo data of M pulses of the same range unit. To make the MTI cancellation more effective, a filter is used to implement MTI. Since the target is a moving target with a certain velocity, Doppler phase compensation must be performed before beamforming. Figure 7 shows the block diagram of the windowed MTI/MTD phase compensation module.

The specific operation can be divided into the following steps:

1. Use CUDA C to read the MIMO radar echo data, extract all echo snapshot data within a given CPI, and adjust the data format in line with the usage specifications of the CUFFT library. Note that the radar transmit signal data are stored in array form to complete the data preparation; the data preparation stage is mainly data replication, which can be realized with kernel functions.

2. Apply the filter window and the MTD window to the echo data, mainly by element-wise (dot) multiplication of the T1, T2, T3, and T4 data with the window functions already transferred to the GPU.

3. Perform a two-dimensional FFT on the windowed data over the azimuth dimension and the sampling-point dimension to complete MTI and MTD, using the CUFFT library functions to execute the two-dimensional fast Fourier transforms.

4. Perform the phase compensation operation on the data after the two-dimensional FFT, i.e., element-wise multiplication of the data with the prepared phase-compensation window functions. Note that the window function applied differs for T1, T2, T3, and T4 because of their time delays.
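A hedged CuPy sketch of the fused module described in the four steps above: window the echo of one transmit slot, take the 2D FFT over both dimensions, then multiply element-wise by a precomputed compensation window. The compensation term below is a placeholder; in the actual system it embeds the MTI response and a per-slot (T1-T4) Doppler phase that depend on the transmit timing.

```python
# Fused windowed MTI/MTD + phase-compensation module, sketched in CuPy.
import cupy as cp

pulses, gates = 16, 500

def mti_mtd_phase(dev_echo, dev_window, dev_comp):
    """dev_echo: (pulses, gates); both windows are precomputed on the device."""
    windowed = dev_echo * dev_window   # fused range/Doppler window (step 2)
    rd = cp.fft.fft2(windowed)         # 2D FFT over both dimensions (step 3)
    return rd * dev_comp               # MTI + Doppler phase compensation (step 4)

echo = (cp.random.standard_normal((pulses, gates))
        + 1j * cp.random.standard_normal((pulses, gates)))
window = cp.outer(cp.hamming(pulses), cp.hamming(gates))
comp = cp.exp(-1j * 2 * cp.pi * cp.arange(pulses) / pulses)[:, None]  # placeholder
print(mti_mtd_phase(echo, window, comp).shape)   # (16, 500)
```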
The specific operation can be divided into the following steps: 1. Use CUDA C to read MIMO radar echo data, extract all echo snapshot data in a certain CPI, and adjust the data format in conjunction with the usage specifications of the CUFFT library. It should be noted that the radar transmission signal data is stored in an array form to complete the data preparation; the data preparation stage is mainly data replication, which can be realized by using kernel functions. 2. Add filter window and MTD window to the echo data, mainly to perform dot multiplication on the data of T1, T2, T3, and T4 with the window function that has been passed to the GPU. 3. Perform two-dimensional FFT transformation on the data after adding the MTD window function in the azimuth dimension and the number of sampling points, complete the MTI and MTD, and use the CUFFT library function to perform twodimensional fast Fourier transform (FFT) on the data respectively. 4. Perform the phase compensation operation on the data after the two-dimensional FFT transformation, that is, perform the dot multiplication on the data and the preparation phase compensation window function, respectively. It should be noted that the added window function is different due to time delay of T1, T2, T3, and T4. Beamforming Module The DBF module needs to perform spatial filtering on the echo signal incident in a certain direction to obtain the echo data of the corresponding channel, such as the sum channel, the difference channel, and the auxiliary channel. For the antenna beam pointing at a certain moment, the weight vector of the spatial filter is multiplied by the incident echo signal to complete the DBF processing. When realizing full-wave position scanning, the calculation is performed in a circular manner. Each wave position is calculated once, and then the wave positions are changed in turn to ensure all wave positions are calculated cyclically. After completing the MTD of all virtual array elements, there are 200 virtual array elements. Each of them corresponds to 500 signals. At the same time, the digital beamforming operation can be regarded as the weighted summation of signals by 200 virtual array elements. Therefore, the realization of wave position digital beamforming can be regarded as the multiplication operation of the signal matrix and the array element weight matrix. The block diagram of the beamforming module is shown in Figure 8. The specific operation can be divided into the following processes: 1. First, the data of T1, T2, T3, and T4 are respectively compensated for the azimuth dimension Doppler phase, and the Doppler phase is canceled out. 2. Then, the data of the four channels T1, T2, T3, and T4 are rearranged, and the data is transposed and rearranged. 3. Secondly, the data is multiplied by a two-dimensional DBF window function. 4. Finally, the data is subjected to a two-dimensional FFT to achieve a weighting effect, thus completing the entire beam-forming process. Stream Acceleration Based on OpenMP Due to the difference in computing power between CPU and GPU, CPU is used as the serial part, and GPU is used for the parallel part of real signal processing in traditional CPU/GPU computing. In this sense, this calculation is actually a GPU-based method. The main calculation task is executed by a large number of GPU threads, while the CPU threads are in a waiting state. 
Stream Acceleration Based on OpenMP

Because of the difference in computing power between the CPU and GPU, in traditional CPU/GPU computing the CPU handles the serial part while the GPU handles the parallel part of the actual signal processing. In this sense, such computation is effectively a GPU-based method: the main calculation task is executed by a large number of GPU threads while the CPU threads wait. The CPU can also participate in the parallel algorithm: multicore parallelism based on OpenMP can be combined with the GPU stream processing mechanism, so that the CPU and GPU work in parallel at the same time to improve processing speed and meet real-time requirements.

The OpenMP fork-join model works as follows: (1) OpenMP uses a fork-join model to achieve parallelization. (2) All OpenMP programs start from the main thread, which executes serially until it encounters the first parallel region. (3) Fork: the main thread then creates a group of parallel threads. (4) The code in the parallel region, surrounded by curly braces, is executed in parallel on the multiple threads. (5) Join: after the parallel threads execute the code in the parallel region, they synchronize and terminate, leaving only the main thread. (6) The number of parallel regions and the number of parallel threads can be arbitrary. The fork-join model of OpenMP is shown in Figure 9.

A CUDA stream represents a GPU operation queue, and the operations in the queue are executed in the specified order. Operations such as kernel launches and memory copies can be added to a stream, and the order in which they are added is the order in which they are executed. Each stream can be viewed as a task on the GPU, and these tasks can be executed in parallel. When using CUDA streams, a device that supports the device-overlap capability must first be selected: a GPU that supports device overlap can execute a CUDA kernel while simultaneously performing a data copy between the host and the device. In general, CPU memory is much larger than GPU memory, so for large amounts of data it is impossible to transfer the whole CPU buffer to the GPU at once; the data must be transferred in blocks. To run kernel functions on the GPU while transferring blocks, such asynchronous operations require the device-overlap capability to improve computing performance. There is, however, no concept of a stream in hardware; instead, the device contains one or more engines that perform memory copies and an engine that executes kernels. The stream-acceleration parallel framework based on OpenMP is shown in Figure 10.

Operations should be enqueued into the streams breadth-first rather than depth-first. In other words, instead of adding all the operations of the first stream and then all the operations of the second stream, operations are added to the two streams alternately. Assuming that the copy operation takes time a and the execution of the kernel takes time b, then:

• When a ≈ b, the length of the timeline is about 4a.
• When a > b, the length of the timeline is 4a.
• When a < b, the length of the timeline is 3a + b.

Stream parallelism can execute different kernel functions, or pass different parameters to the same kernel function, to achieve task-level parallelism. cudaMemcpy is synchronous with CPU operations; to achieve device overlap, CUDA provides cudaMemcpyAsync for data copies. It is asynchronous and executes the next step of the program without waiting for the copy to complete. Coarse-grained parallel processing of the time division MIMO radar signal processing flow on the multicore CPU uses OpenMP to handle the copying of data T1, T2, T3, and T4, the windowed MTI/MTD modules, and the DBF, for a total of five parts processed in parallel.
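The breadth-first stream schedule above can be sketched with CuPy streams as a Python stand-in for the CUDA C stream code; the chunk sizes are illustrative. Each chunk's host-to-device copy and kernel are enqueued into its own stream before any stream is synchronized, so copies for one chunk can overlap compute for another. Note that true copy/compute overlap on real hardware additionally requires pinned host buffers, which this sketch omits for brevity.

```python
# Breadth-first enqueue across CUDA streams, sketched with CuPy.
import numpy as np
import cupy as cp

chunks = [np.random.randn(16, 500).astype(np.float32) for _ in range(4)]
streams = [cp.cuda.Stream(non_blocking=True) for _ in chunks]
results = [None] * len(chunks)

# Breadth-first: issue every copy+kernel pair into its own stream first,
# instead of finishing one chunk end-to-end (depth-first).
for i, (chunk, s) in enumerate(zip(chunks, streams)):
    with s:                                # make s the current stream
        dev = cp.asarray(chunk)            # H2D copy queued on this stream
        results[i] = cp.fft.fft2(dev)      # kernel queued behind the copy

for s in streams:
    s.synchronize()                        # join: all pipelines complete
print([r.shape for r in results])
```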
There are three main processing steps in the CPU/GPU co-simulation framework:

1. First, copy the data T00, T01, T02, and T03 from the CPU to the GPU and perform MTI/MTD processing.
2. Open five parallel threads on the CPU via OpenMP: the first parallel thread is responsible for the DBF algorithm on the previous data, while the second to fifth threads copy the data T10, T11, T12, and T13 that will enter the GPU next from the CPU to the GPU and run MTI/MTD processing on it.
3. After step 2 completes, the first parallel thread runs the DBF algorithm on the data T10, T11, T12, and T13, while the second to fifth threads copy the next data T20, T21, T22, and T23 from the CPU to the GPU and run MTI/MTD processing on it.

The processing continues cyclically in this manner. While the data of one step is being processed, the next step is already loading data; this ping-pong processing reduces the time impact of copying data from the CPU to the GPU, so that the final signal processing meets the real-time requirements.

Experimental Results

In this paper, the data simulation of the time division MIMO radar signal processing algorithm based on CPU/GPU parallel computing includes three improvements: the parallelization of the GPU-based radar processing algorithm, the improvement of the time division MIMO radar signal processing algorithm, and the stream-acceleration processing based on OpenMP. CPU and GPU acceleration together with algorithm optimization improve the overall efficiency of the system. Four types of simulation experiments for the time division MIMO radar signal processing algorithm are designed: the time analysis of the data copy, the parallelization of the GPU-based radar processing algorithms, the improvement of the time division MIMO radar signal processing algorithm, and the impact of the OpenMP-based stream-acceleration processing on the simulation; the accuracy and error of the three methods are also discussed.

To evaluate the experimental results, this article considers five groups of experiments with different data volumes: 960 × 500 × 200, 480 × 500 × 200, 240 × 500 × 200, 120 × 500 × 200, and 60 × 500 × 200. These experimental data are obtained via actual measurement with the data acquisition card (DAQ). One Intel Xeon Gold 5122 CPU (56 threads) and one NVIDIA Tesla T4 GPU (2560 cores) are used in the experiment. The simulation parameters and hardware specifications are shown in Tables 1 and 2. The software environment consists of four parts: the operating system is Windows Server 2018, the C++ development environment is Visual Studio 2013, CUDA 10.1 is selected to drive the GPU parallel computing, and OpenMP is used for thread-level parallel processing, with the number of threads set according to the number of CPU cores in the specific device. There are five CPU task-level threads for parallel processing, and each time division MIMO radar signal processing algorithm opens GPU threads for parallel computing. In the collaborative computing mode, the running time of the Matlab code considers only the time of the core time division MIMO radar algorithm. The running result of the GPU code takes into account the input and output time, including GPU memory allocation and data transmission between the CPU and GPU, as well as the core GPU time division MIMO radar algorithm time [25].
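Since the reported GPU runtimes include memory allocation, transfers, and kernel time, one common way to record the individual phases is with CUDA events. This is an illustrative sketch (not the paper's measurement code); the kernel is a stand-in for the radar algorithm:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void radarKernel(float* d, int n) {        // stand-in for the algorithm
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] = d[i] * d[i];                     // placeholder work
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float* h = (float*)calloc(n, sizeof(float));
    float* d;
    cudaMalloc(&d, bytes);

    cudaEvent_t start, afterH2D, afterKernel, stop;
    cudaEventCreate(&start); cudaEventCreate(&afterH2D);
    cudaEventCreate(&afterKernel); cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // input transfer
    cudaEventRecord(afterH2D);
    radarKernel<<<(n + 255) / 256, 256>>>(d, n);       // computation
    cudaEventRecord(afterKernel);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // output transfer
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float msIn, msKernel, msOut;
    cudaEventElapsedTime(&msIn, start, afterH2D);
    cudaEventElapsedTime(&msKernel, afterH2D, afterKernel);
    cudaEventElapsedTime(&msOut, afterKernel, stop);
    printf("H2D %.3f ms, kernel %.3f ms, D2H %.3f ms\n", msIn, msKernel, msOut);

    cudaFree(d); free(h);
    return 0;
}
```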
Data Copy Time Analysis

From the execution of the CUDA kernel function, it can be seen that executing a kernel first requires storing data from host memory into the video memory, which takes data transmission time; the data is then read from the video memory and processed by multiple threads, which takes calculation time. Therefore, the time consumed to execute the algorithm is the sum of the data transmission time and the calculation time. Before analyzing the signal processing algorithm, it is necessary to analyze the data transmission time. Figure 11 shows the time required for the data copy under different data volumes. As the data volume increases, the copy time grows linearly with the data volume. It is worth noting that although this article studies the parallel acceleration of MIMO radar echo signal processing module by module, it is not necessary for each module to copy data from the video memory back to host memory after execution. When the video memory is large enough to store the input data of all the operations, the data does not need to be taken out of the video memory after each module executes; it is only necessary to release unneeded video memory variables in time.

GPU-Based Time Division MIMO Radar Signal Processing Algorithm Analysis

The accuracy of the simulation analysis and the runtime are two important factors for GPU acceleration. Therefore, we analyze the MIMO radar signal processing flow in two respects. The first is the analysis of the accuracy of the GPU-accelerated simulation: consider the data results at each algorithm node, compare the GPU calculation results with the original Matlab simulation results, and verify the correctness of the simulation via error analysis. The second is the analysis of the performance improvement from GPU acceleration, mainly by processing data of different volumes and analyzing the speedup ratio of the GPU from its runtime results.

Analysis of the Accuracy of GPU Accelerated Simulation

A receiving channel is selected at random; take the 45th receiving channel as an example. Figures 12-14 show the final output of the GPU-based time division MIMO radar signal processing algorithm, the output of Matlab, and their error statistics, respectively. As can be seen from these figures, the Matlab results are consistent with the GPU results. The expected value of the multiple data error statistics in Figure 14 is 106.56, and the mean square error (MSE) is 128.69; that is, the error magnitude is about 10^2. The specific cause of the error is that using float to store the data introduces an error between the true value and the stored value: single-precision float stores a 23-bit binary fraction, giving roughly 6-7 significant decimal digits of precision. Since the magnitude of the data is about 10^8 and the magnitude of the error is about 10^2, dividing the error magnitude by the data magnitude gives roughly 10^-6. The error statistics thus fall within the float error range, which verifies the correctness of the GPU acceleration results.
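A small self-contained check (not from the paper) illustrates why single-precision storage is consistent with errors of order 10^2 on data of magnitude 10^8:

```cpp
#include <cstdio>
#include <cmath>
#include <cfloat>

int main() {
    float x = 1.0e8f;
    // Spacing between adjacent representable floats near 1e8:
    float ulp = std::nextafterf(x, 2.0e8f) - x;        // equals 8 at this magnitude
    printf("ulp near 1e8: %g\n", (double)ulp);
    printf("relative precision: %g (cf. FLT_EPSILON = %g)\n",
           (double)(ulp / x), (double)FLT_EPSILON);
    // A single rounding costs up to ~ulp/2; accumulated over the many
    // multiply-add steps of a large FFT, absolute errors of order 1e2
    // (relative error ~1e-6) are consistent with float precision.
    return 0;
}
```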
Analysis of GPU Accelerated Performance Improvement

The MIMO radar signal processing algorithm mainly includes three parts: windowing processing, MTI/MTD processing, and beamforming processing. We run these three algorithms with different data volumes and analyze the impact of the data volume on each of them; the speedup of GPU processing is analyzed by comparing the Matlab runtime with the GPU runtime at each data volume. Table 3 shows the influence of different data volumes on the three algorithms. It can be seen that the simulation time increases as the data volume increases. The data copying time is fixed and difficult to accelerate. The windowing and MTI/MTD algorithms account for about half of the total time, so their optimization can be considered. Due to the limited number of Tesla T4 cores, the acceleration is limited once the amount of data reaches a certain level. Because the data needs to be rearranged in the DBF algorithm, which includes the time for data movement, the DBF step takes a long time. Table 4 shows the time used by the GPU processing flow and by Matlab under different data volumes, and Figure 15 is the corresponding time comparison chart. The simulation results show that as the amount of data increases, the simulation time for Matlab to process the data increases significantly. When the data volume is about 200 million, the GPU-based MIMO radar signal processing algorithm is 100 times faster than the traditional CPU processing method.

Improved Time Division MIMO Radar Signal Processing Algorithm Analysis Based on GPU

The simulation analysis of the GPU-based improved time division MIMO radar signal processing is the same as that of the ordinary GPU-based time division MIMO radar simulation: it is divided into the analysis of the accuracy of the GPU-accelerated simulation and the analysis of the performance improvement from GPU acceleration.

Analysis of the Accuracy of GPU Accelerated Simulation

A receiving channel is selected at random; take the 75th receiving channel as an example. Figures 16-18 show the final output of the GPU-based time division MIMO radar signal processing algorithm, the output of Matlab, and their error statistics, respectively. The expected value of the multiple data error statistics in Figure 18 is 120.87, and the mean square error (MSE) is 158.12; that is, the error magnitude is about 10^2. Since the magnitude of the data is about 10^8, dividing the error magnitude by the data magnitude gives roughly 10^-6, and the error statistics fall within the float error range. As can be seen from the figures, the Matlab results and the GPU results are almost identical, which verifies the correctness of the GPU acceleration results.

Analysis of GPU Accelerated Performance Improvement

The improved MIMO radar signal processing algorithm mainly includes two parts: windowed MTI/MTD processing and beamforming processing. Table 5 shows the impact of different data volumes on the two algorithms. It can be seen that the DBF time is similar to that in Table 4, while the windowed MTI/MTD time is reduced compared with the time of the first two algorithms in Table 4. Figure 19 compares the time of the improved GPU processing flow with that of the traditional GPU flow. The simulation results show that the improved GPU-based MIMO radar signal processing algorithm is about 50 ms faster than the traditional GPU processing method. When the data volume is about 200 million, the processing time of the improved GPU processing flow is about 150 ms, while the entire Matlab processing flow takes 19.807 s. Compared with Matlab processing, the improved GPU algorithm is 130 times faster (the arithmetic is sketched below).
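As a quick check on the reported speedup, dividing the quoted Matlab time by the improved GPU time gives

\[
\text{speedup} = \frac{T_{\text{Matlab}}}{T_{\text{GPU}}} = \frac{19.807\ \text{s}}{0.150\ \text{s}} \approx 132,
\]

consistent with the reported figure of about 130 times.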
Stream Acceleration Based on OpenMP

The OpenMP-based stream-acceleration processing builds on the improved algorithm, processing the data copy time and the data calculation time in parallel. The results of this run are the same as those shown in Figures 16-18, verifying the correctness of the simulation results, so we mainly analyze the improvement in processing speed. Figure 20 shows the total time (data copy time plus data calculation time) used by the OpenMP-based stream-acceleration processing under different data volumes. When the data volume is about 200 million, the total time is reduced by 20 ms, which verifies the performance improvement of the stream-acceleration processing. Figure 21 plots the runtime of the three methods (GPU processing without the improvement, improved GPU processing, and stream-acceleration processing) under different data volumes. When the amount of data is relatively small, the benefit of stream acceleration is not well reflected, but as the data volume increases, the advantages of stream-acceleration processing appear. When the data volume is about 200 million, the CPU+GPU stream-acceleration processing method is about 20 ms faster than the improved GPU processing method. The entire Matlab processing flow takes 19.807 s, so the CPU+GPU stream-acceleration processing method is about 150 times faster than Matlab processing.

Conclusions

This paper uses CPU/GPU computing technology to solve the calculation bottleneck of the traditional time division MIMO radar signal processing algorithm, proposing a parallel simulation method for time division MIMO radar signal processing based on CPU/GPU parallelism. Specifically, this article introduces three improvements. The first is the GPU-based time division MIMO radar signal processing method, which greatly improves the computing power of time division MIMO radar signal processing and makes real-time processing of time division MIMO radar signals feasible. The second is an improved time division MIMO radar signal processing algorithm that merges parts of the algorithm and thereby speeds up processing relative to the original algorithm. The third is the addition of the OpenMP-based stream-parallel processing method, which distributes the data copies and data calculations into different streams for parallel computation; this further improves the simulation efficiency and, on top of the original GPU method, achieves a desirable acceleration effect that basically meets the requirements of real-time processing. The experimental results show that the GPU-based time division MIMO radar signal processing method improves processing efficiency by more than 50 times over the classic single-core CPU Matlab method, and the GPU computation of the improved algorithm reaches a speedup of 130 times. In addition, compared with classic GPU processing, the performance of the OpenMP-based stream-acceleration processing increases by 20%. This method improves the simulation efficiency and has the advantages of energy savings and low hardware cost; it is suitable for low-altitude time division MIMO radar signal processing simulation. Because MIMO radar signal processing involves large data volumes, the method is expected to find further application in multiantenna target detection.
Future work on this research will form a complete set of methods covering time division MIMO radar signal processing, target detection, and target information extraction, and will apply multi-GPU and CPU/GPU collaborative computing methods to MIMO radar signal processing and target recognition, as a preliminary step toward the real-time processing and widespread application of MIMO radar products.
2022-01-12T06:18:25.045Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "fe954eb339d21a65d005a4ca6a597e0434c0d06d", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "7723ff88d9c0cd2a7b201121c8314e151eb9caa5", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
264044242
pes2o/s2orc
v3-fos-license
International Journal of Human-Computer Studies

We investigate how gender-anonymous voice avatars influence women's performance in online computing group work. Female participants worked with two male confederates. Voices were filtered according to four voice gender anonymity conditions: (1) All unmasked, (2) Male confederates masked, (3) Female participant masked, and (4) All masked. When only male confederates used masked voices (compared to all unmasked), female participants spoke for a longer period of time and scored higher on computing problems. When everyone used masked voices (compared to all unmasked), female participants spoke for a longer period of time, spoke more words, and scored higher on computing problems. Effects were not significant on subjective measures and one behavioral measure. We discuss the implications for virtual interactions between people.

Introduction

Teams working together remotely became commonplace during the COVID-19 pandemic (Brynjolfsson et al., 2020), driven by factors such as cost reduction and flexibility in work-life balance (Ferreira et al., 2021), with many companies now offering positions that are permanently remote (Smith, 2022). Online education is also growing: the global market of $269.87 billion USD in 2021 is expected to increase to $585.48 billion USD in 2027 (Renub Research, 2022). Communication platforms such as Zoom, Microsoft Teams, Cisco Webex, Slack, and Discord are now common in businesses, education, and entertainment (Blagojević, 2022). However, communication on such platforms may reinforce social inequities in the workplace and online education. For example, multiple studies have found that women experience significantly higher Zoom fatigue than men (Fauville et al., 2021; Ratan et al., 2022). The present research is concerned with the potential for online teamwork to augment the already high level of disparity for women in computing fields, where fewer than 20% of computer science students are women (Sax et al., 2017; Higher Education Statistics Agency, 2018; Joint Council for Qualifications, 2018), in part due to historical (Henn, 2014) and cultural biases (Abbate, 2012). Further, stereotypes that women do not belong in technological fields engender hostile and sexist environments that hinder equitable participation (Dean, 2007; Cohoon). One study found that African-American participants using race-anonymized avatars persisted significantly longer in a cognitive task with other ostensible participants than those using race-revealing avatars (Lee, 2009). Another study found that both men and women participants who used a female avatar in a threat condition (i.e., in a group with two ostensible male avatars) performed worse on a math task than participants who used a male avatar (Lee et al., 2014). Another study found that using a female avatar in VR harmed math task performance and math confidence, while using a male avatar buffered against these stereotype threat effects (Peck et al., 2020). In all of these studies, a visual avatar was assigned to the participants, and although a few studies have examined the role of avatar customization in these effects (Fordham et al., 2020; Ratan and Sah, 2015), no studies of which we are aware have examined the role of non-visual avatars in stereotype threat effects. Addressing this gap, the present research focuses on mediated representations of user voices, which can be considered voice avatars, and which carry important cues of user identity that are potentially stereotyped, such as gender.
Although voice avatars are a novel technology in the context of online teamwork, there is significant evidence that voice-based communication in online gaming environments triggers hostility and sexism toward women (Kuznekoff and Rose, 2013; McLean and Griffiths, 2019). Further, although participants in an online meeting can use a visual avatar on many platforms, few if any offer the option for voice avatars, and the effects of using such avatars are largely unknown. Hence, the lack of research on voice avatars and stereotype threat represents a significant gap.

The technology required to mask or modify user voices is advancing rapidly (Kao et al., 2021), so we may expect that team communication platforms will offer these tools in the future, but the best approaches to mitigating stereotyping and stereotype threat through such tools are unclear. As a first step toward exploring this technology and the role of stereotype threat in online team communication, we focus on a simple comparison of gender-revealing and gender-masking voice avatars. Hence, for the purposes of this study, we created a faux online meeting platform that allowed our self-identifying female participants to use either a gender-revealing or gender-masking voice avatar to interact with two ostensible male teammates (actually recorded confederates). Results suggest that gender-masking voice avatars buffered the effects of stereotype threat as reflected by the amount participants spoke and by performance on computing problems, though subjective measures were not found to be affected. Together, this large-scale preregistered study suggests that stereotype threat is likely rampant in online technical meetings and that voice (and visual) avatars can likely be used to promote gender equity in such teamwork across a variety of contexts (virtual classes, business meetings, and entertainment).

Stereotype threat

Stereotype threat is a type of psychological discomfort triggered when people with a negatively stereotyped social identity feel at risk of confirming the negative stereotype and consequently experience adverse effects in the stereotype-relevant domain (Steele, 1997; Steele and Aronson, 1995). In the seminal study on stereotype threat (Steele and Aronson, 1995), African-American students underperformed on the verbal Graduate Record Examination compared to their White counterparts when the test was framed as an assessment of intellectual ability or students were asked to record their race/ethnicity in a questionnaire prior to completing the task. However, no racial performance differences were found when the test was framed as a problem-solving exercise or the race/ethnicity question was administered after the test. This performance difference likely occurred because African-Americans are often negatively stereotyped as inferior to White students in intellectual domains, and thus the test framing that increased the negative stereotype's salience induced a psychological threat that inhibited their performance. These findings provide evidence that stereotypical views and beliefs about a group can negatively influence the intellectual functioning of individual group members.
Furthermore, the type of diagnosticity makes a difference: e.g., women suffered more from stereotype threat when they were led to believe their leadership ability (a stereotypically masculine skill) was being evaluated than when told their relationship maintenance ability (a stereotypically feminine skill) was being evaluated (McGlone and Pfiester, 2015). Stereotype threat is also triggered when a stereotyped identity is made salient (Beilock et al., 2006, 2007; Blascovich et al., 2001; Brown and Pinel, 2003; Delgado and Prieto, 2008; Johns et al., 2005; Keller, 2002; McGlone and Pfiester, 2015; McIntyre et al., 2003; O'Brien and Crandall, 2003; Osborne, 2007; Rosenthal et al., 2007; Smith et al., 2007; Spencer et al., 1999; Stone and McWhinnie, 2008). For instance, women who were told that men are better than women in mathematical domains exhibited stereotype threat effects during a subsequent computer science programming task (Smith et al., 2007) and math problems (Beilock et al., 2007; Keller, 2002). Implicit cues to stereotype relevance can also trigger stereotype threat, for example, by telling female participants that their performance on a math test would be compared to men's (Delgado and Prieto, 2008; Rosenthal et al., 2007) or that the objective of an experiment was to investigate differences between men and women in athletics (Stone and McWhinnie, 2008) or mathematics (Brown and Pinel, 2003; Johns et al., 2005; McIntyre et al., 2003; Spencer et al., 1999). Similarly, job descriptions utilizing more masculine-themed words compared to feminine or neutral words predicted lower belonging for potential female candidates (Stout and Dasgupta, 2011) and caused women to report lower expected belongingness and job appeal (Gaucher et al., 2011). Perhaps even more implicitly, being asked to indicate gender, race/ethnicity, or age in questionnaires (Danaher and Crandall, 2008; McGlone and Aronson, 2006; Schmader and Johns, 2003; Shih et al., 1999; Steele and Aronson, 1995), being in physical (Cheryan et al., 2009) or virtual (Cheryan et al., 2011) environments decorated with stereotypically masculine objects, or simply being numerically underrepresented (e.g., women taking a test in a majority-male room) increases the likelihood of stereotype threat effects (Beaton et al., 2007; Inzlicht and Ben-Zeev, 2000, 2003; Inzlicht and Good, 2006; Johns et al., 2008; Murphy et al., 2007; Sekaquaptewa and Thompson, 2003).
Relatedly, the presence or behavior of others, such as someone who makes sexist remarks (Logel et al., 2009) or simply a male experimenter (McGlone et al., 2006), can also trigger stereotype threat (Maass et al., 2008; Palomares, 2009; Stone and McWhinnie, 2008). Similarly, women were more likely to experience stereotype threat after being led to believe they were competing against a man in chess (Maass et al., 2008) or a shooter video game (Fordham et al., 2020). Because people possess multiple social identities (e.g., gender, age, race/ethnicity, socioeconomic status, etc.), stereotype threat also occurs when a social identity relevant to the stereotyped context is made more salient than other social identities (Rosenthal and Crisp, 2006). For example, Asian women suffered from stereotype threat during a math test after filling out a questionnaire about their gender identity, but not when they answered a questionnaire about their ethnic identity (Shih et al., 1999). Similarly, female students primed to be more self-aware about their gender identity performed worse on a standardized spatial reasoning test compared to women who were made aware of their identity as students (McGlone and Aronson, 2006).

Mitigating stereotype threat online

Given the overwhelming evidence that stereotype threat occurs widely in society, especially among students, researchers should focus on developing approaches to combat it. Small effective steps to mitigate stereotype threat improve overall well-being, reduce worries about being devalued based on group membership (e.g., ethnicity, gender), facilitate meaningful engagement with peers, instructors, and classroom learning activities, and consequently reduce achievement gaps by 30%-40% (Cohen et al., 2012) in the short and long term (Cohen et al., 2006, 2009). Approaches to mitigating stereotype threat generally require altering the psychological environment of the individual (Steele et al., 2002b). For example, just as framing a test as diagnostic triggers stereotype threat, framing a test as generic will counteract it (Steele and Aronson, 1995). In contrast, the present research focuses on stereotype threat mitigation through social-identity mechanisms. Namely, just as increasing a stereotyped social identity's salience triggers stereotype threat, reducing cues to social identity mitigates it (Garcia and Cohen, 2013). The present research focuses on a context of performance where such social identity cues are particularly poignant and also malleable: online group collaborations.
Group interactions and collaborations are increasingly taking place online across many contexts, from undergraduate education (Parker and Lenhart, 2011) to academic research conferences (Wu et al., 2022) to work meetings in general (Karl et al., 2022). In all of these contexts, participants create and share content and engage with others socially through digital self-representations that often reflect their social identities (Boyd and Ellison, 2007; Pegg et al., 2018). Similar to social identity offline, online social identity can be threatened by social cues in the digital environment, and so approaches that mitigate stereotype threat offline can also work in online contexts (Chang et al., 2019). However, digital environments are different because individuals are afforded greater control over how other users perceive them by selectively presenting (or hiding) aspects of their personal identity as part of their online identity, something that is more difficult to achieve in offline, face-to-face communication (Walther, 1996). Hence, we focus on how digital self-representations online may be used to mitigate stereotype threat.

Avatars influence stereotype threat

In online communication environments, avatars are the fundamental vehicle of users' social identities. Drawing from a well-respected definition that applies broadly across media modalities (Nowak and Fox, 2018), we define avatars here as mediated representations of humans used to interact with others, objects, or environments in real time. Avatars are mediated, not the users themselves but representations of the users, and hence afford a potential for anonymity. In other words, even if an avatar appears (or sounds) very lifelike and realistic, it may not represent the user's identity characteristics accurately, so inferences drawn about the user from the avatar's depiction may be inaccurate. Despite this rationale, people still treat avatars' artificially anthropomorphic characteristics as representative of actual human characteristics (Kao, 2019) and stereotype them accordingly (Kaye et al., 2018; Ratan and Sah, 2015). Such misattribution of human traits to non-human entities likely occurs because people tend to obliviously respond to social cues, including those exhibited by machines, following the social norms developed through human-human interaction (Nass and Moon, 2000; Gambino et al., 2020), a phenomenon referred to as the media equation (Reeves and Nass, 1996). Further, when people use avatars, they tend to adopt the avatars' identity characteristics into their self-perception and then behaviorally conform to associated stereotypes (Yee and Bailenson, 2007; Ratan et al., 2020). This phenomenon, referred to as the Proteus effect, helps explain why people are also susceptible to stereotype threat induced with respect to their avatars' identity characteristics. For example, one study found that both men and women participants who used a female avatar in a threat condition (i.e., in a group with two ostensible male avatars) performed worse on a math task than participants who used a male avatar (Lee et al., 2014). Another study found that using a female avatar in VR harmed math task performance and math confidence, while using a male avatar buffered against these stereotype threat effects (Peck et al., 2020). And another study with women participants found that those who customized and used a female avatar (compared to a male avatar) in a sword-fighting Wii game performed worse on a competitive math task after the game, but only if perceived
avatar embodiment was low (Ratan and Sah, 2015). These previous studies of avatars and stereotype threat focused on visual avatar characteristics (i.e., the avatar's appearance on the screen). No studies of which we are aware have examined the role of non-visual avatars in stereotype threat effects. This is a major gap in the research because voices, and voice avatars, are a highly stereotyped aspect of social identity in online interactions.

Stereotyping voice (avatars)

Voice has a unique capacity to reflect social identity, acting as an "auditory face" that others evaluate (Belin et al., 2011) to ascertain the speaker's gender (Strand, 1999; Trent, 1995), age (Sebastian and Bouchard Ryan, 2018), ethnicity (Trent, 1995), personality (McAleer et al., 2014), and even social status (Ko et al., 2006; Sebastian and Bouchard Ryan, 2018). Hence, voice can trigger "linguistic profiling" (Gray, 2012), racism, and sexism from others (Chan and Gray, 2020; McLean and Griffiths, 2019). Unlike with visual self-representations (i.e., avatars), most popular online communication platforms, including Zoom, Microsoft Teams, Cisco Webex, and Discord (Blagojević, 2022), do not presently offer native tools for users to modify or customize the representation of their voices (i.e., voice avatars). Instead, such platforms simply detect audio information from the microphone input and replicate it through speaker outputs with little or no computational processing affecting how identity cues are presented (Byeon et al., 2022; Johns et al., 2008). Such systems hinder privacy and agency in voice communication, which helps explain why women and minority groups are often reluctant to use their voices online, seeking to avoid negative attention and delegitimization of their abilities (Cote, 2017; Kuznekoff and Rose, 2013; McLean and Griffiths, 2019). This is especially problematic because voice is growing in prominence as a medium of communication not only in online groups, but also in human-computer interaction, such as when communicating with virtual assistants and intelligent agents (Cherif and Lemoine, 2017; Clark et al., 2019; Seaborn et al., 2022; Divekar et al., 2019a,b; Xu et al., 2021; Xu and Warschauer, 2020; Zierau et al., 2020), dictating emails (Shah et al., 2021), navigating websites (Anon, 2020), and online learning (Khan et al., 2022; Miyazoe and Anderson, 2011; Paule-Ruiz et al., 2013).

Voice avatars present a novel opportunity to address this issue by allowing users to decouple the identity characteristics presented to others through their voices from their offline identity characteristics. Although some researchers have examined voice avatars or similar concepts related to voice modification or customization in computational settings (e.g., Kao et al. (2021, 2022); Okano et al.
(2022)), research on voice avatars in online group communication is practically nonexistent. As an early exploratory step in this direction, we focus on masking gender identity through voice avatars in a way that facilitates anonymity for the users, meaning their personal identity cannot be readily ascertained through the information shared online (Wallace, 1999, 2008). Studies suggest that such anonymity can mitigate stereotype threat. For example, masking status cues during a choice-dilemma task led to more equitable group discussion between high- and low-status participants (Dubrovsky et al., 1991). Similarly, masking gender cues during group discussions among executives led to more equitable decision making between men and women participants (Sproull et al., 1991). Research on visual avatars suggests the same pattern. One study found that African-American participants using race-anonymized avatars persisted significantly longer in a cognitive group task than those using race-revealing avatars (Lee, 2009). Another study found that African-American participants performed worse on a cognitive task when competing (instead of cooperating) with race-revealing avatars (Lee and Nass, 2012).

Hypotheses

Our hypotheses are based on an assumed context of online interaction in which an individual (the potential target of stereotype threat) interacts with group members, potentially using voice avatars designed to anonymize social identity. In order to test our general expectation that voice avatars designed to anonymize users should mitigate stereotype threat, we pose a series of hypotheses (all included in our preregistration) about the mechanisms and about the stereotype threat performance outcomes. Regarding mechanisms, we draw from Self-Determination Theory (SDT) (Ryan and Deci, 2020) to hypothesize that stereotype threat will have a negative effect on competence (e.g., from worse performance (Steele, 2010)), autonomy (e.g., as a correlate of lower motivation (Deci and Ryan, 1987; Harter, 1981)), and relatedness (e.g., from a lower sense of belonging (Thoman et al., 2013)). Just as racial anonymization has been found to mitigate stereotype threat effects (Lee, 2009), we hypothesize that gender anonymization may similarly positively affect competence, autonomy, and relatedness. Regarding outcomes, we focus on proportion of speaking contribution, an outcome studied in previous research on group dynamics and stereotypes (Dubrovsky et al., 1991; Sproull et al., 1991), as well as on facets related to success on the group task.

Additionally, studies have shown that the perception of being in the minority amplifies stereotype threat effects, e.g., Lee and Nass (2012). In this context, when all group members are not using gender-anonymized voices, this should reinforce identity salience and the perception of being in the minority, augmenting stereotype threat effects. Therefore, we hypothesize that there will be an interaction effect such that when all group members are not using gender-anonymized voices, stereotype threat effects will be greater than the summed stereotype threat effects of individual group subsets not using gender-anonymized voices.

Competence

H1.1: Participant voice masking will lead to higher competence.
H1.2: Groupmate voice masking will lead to higher competence.
H1.3: Interaction effect: Participant unmasking will see a greater reduction in competence with respect to participant masking when faux participants are unmasked compared to when faux participants are masked.
Autonomy

H2.1: Participant voice masking will lead to higher autonomy.
H2.2: Groupmate voice masking will lead to higher autonomy.
H2.3: Interaction effect: Participant unmasking will see a greater reduction in autonomy with respect to participant masking when faux participants are unmasked compared to when faux participants are masked.

Relatedness

H3.1: Participant voice masking will lead to higher relatedness.
H3.2: Groupmate voice masking will lead to higher relatedness.
H3.3: Interaction effect: Participant unmasking will see a greater reduction in relatedness with respect to participant masking when faux participants are unmasked compared to when faux participants are masked.

Stereotype Threat Scores

H7.1: Participant voice masking will lead to lower stereotype threat scores.
H7.2: Groupmate voice masking will lead to lower stereotype threat scores.

Voice avatar creation software

To develop voice avatar creation software, we first sought to leverage voice-changing software that could facilitate the creation of a gender-anonymous voice. Therefore, we (1) reviewed existing voice-changing software; (2) determined that none of the existing voice-changing software was suitable; and (3) developed and validated our own custom voice changer. Details of this process can be found in Supplementary Materials.

Online meeting platform

We developed an online meeting platform compatible with any modern browser. We built our own platform because we wanted fine-grained control over the application, e.g., embedding our voice avatar creation software and recording meeting analytics. See Figs. 1 and 2. Details of the platform and the development process can be found in Supplementary Materials.

Study preregistration

Our study was preregistered on the Open Science Framework (OSF). Hypotheses, experiment design, data collection, sample size, measures, and analyses are contained in our preregistration.

Conditions

The study used a 2 × 2 factorial design. Participants worked in groups of 3 (participant + 2 male confederates). All participants self-identified as female. Participants were led to believe that the two other group members were real participants. In reality, the two other group members were prerecorded male confederates. Participants were not told explicitly what the gender of the other group members was. We manipulated the participant's voice (gender-unmasked vs. gender-masked) and the two group members' voices (gender-unmasked vs. gender-masked). Voice anonymity conditions were as follows:

• All unmasked: None of the group members were gender anonymous.
• Male confederates masked: Only the two male group members were gender anonymous.
• Female participant masked: Only the female participant was gender anonymous.
• All masked: All group members were gender anonymous.

Additional details on the specific manner in which stereotype threat was induced, how the male confederate voices were created, and the visual avatar used by meeting participants can be found in Supplementary Materials.

Measures

Full details and justification for each measure can be found in Supplementary Materials.

Competence

We measured competence using the perceived competence subscale of the Intrinsic Motivation Inventory (IMI) (McAuley et al., 1989).

Autonomy

We measured autonomy using the perceived choice subscale of the IMI (McAuley et al., 1989).

Relatedness

We measured relatedness using the relatedness subscale of the IMI (McAuley et al., 1989).
Stereotype threat measure: Duration of speaking

Duration of speaking was measured by manually inspecting the duration of speech in each participant voice recording, omitting blank silence at the beginning and end.

Stereotype threat measure: Speed in responding

Speed in responding was automatically recorded by the meeting platform. This was the duration of time between when the participant had the ability to begin recording their answer and when the participant actually began recording their answer.

Stereotype threat measure: Correctness of responses

Correctness for each computing problem was determined through audio recordings of participants' responses. Details on how correctness was assessed can be found in Supplementary Materials.

Stereotype threat measure: Stereotype threat survey

We used the "negative stereotype concerns" survey from a previous study on stereotypes in computing (Master et al., 2016).

Stereotype threat measure: Respect from others

We measured "respect from others" (Bartel et al., 2012; Tyler and Blader, 2002). Additional details can be found in Supplementary Materials.

Sample size determination

We calculated the a priori sample size. For the 2 × 2 ANOVAs, we used G*Power 3.1 to conduct a power analysis using a small effect size (0.1), α = 0.05, and 95% power. This power analysis found that a sample size of N = 1302 would be necessary.

For the mediation analyses, we used Monte Carlo power analysis (Schoemann et al., 2017). We used a model with three parallel mediators, 95% power, 1000 replications, 20,000 Monte Carlo draws per replication, and a 95% confidence level. Correlations between variables were set to 0.2 based on the literature available to us; e.g., a validation of the IMI found moderate correlations between subscales (McAuley et al., 1989). This power analysis found that a sample size of N = 808 would be necessary.

We take the upper bound across both power analyses of N = 1302.

Participants

1362 participants were recruited from Prolific, with 17 participants removed during data screening (see data screening in Supplementary Materials for the rationale). Participants were paid $11 USD per hour. There was no limitation on geographic location. Recruitment details, data screening, participants' demographics, and participants' prior computing experience can be found in Supplementary Materials.

Design

We used a between-subjects factorial design. There were four possible conditions, and each participant was randomly assigned to one condition. Across conditions, there were approximately equal numbers of participants (M = 336.3, SD = 9.3).

Overview

Here, we provide a brief overview of the procedure. Additional details, including verbatim participant instructions, can be found in Supplementary Materials. Participants first opened the online meeting link in their browser. After an audio test and microphone test, participants chose their visual avatar (see Section 5.2). After instructions, the participant was paired with two other "participants" (male confederates). Participants were told they would act as judges, while the other two participants were brainstormers.

During the meeting, the group members were presented each of 5 computing problems, one at a time. We took the problems from the "CS4FN Puzzle Book"; specifically, we chose puzzles #7, #10, #13, #15, and #17. Each puzzle was presented one at a time and shown on the screen (see Fig.
2). See the section Brainstormer Dialogue Snippet and Example Participant Solutions for the specific dialogue for each problem. Each problem was discussed by the two brainstormers, after which the participant was prompted to record an audio clip of their answer. During recording, participants could hear the output from the voice avatar creation software after a delay, in order to reinforce the manipulation.

Afterward, participants filled out a post-survey and were debriefed. Full details of the procedure and justification for study choices can be found in Supplementary Materials.

Brainstormer dialogue snippet and example participant solutions

A complete listing of brainstormer dialogue for all problems and example participant solutions can be found in Supplementary Materials.

Manipulation check

The manipulation check consisted of six questions. All manipulation check questions validated the effectiveness of our manipulation. Detailed results of the manipulation check can be found in Supplementary Materials.

H4-H8: Analyses of direct and mediated effects

6.3.1. Assumption checks. Assumption checks for the mediation analyses can be found in Supplementary Materials.

Analysis extension: Speech word count

Stereotype threat has been shown to increase anxiety, which may influence duration of speech without actually influencing the number of words spoken, e.g., nervously speaking quickly (Spieler, 2015). Therefore, we performed an additional analysis on the actual number of words spoken by each participant during the meeting. This was a small oversight of our preregistration; it is the only analysis not part of the preregistration. The purpose of this analysis is to confirm whether the voice conditions affect not only the duration of speaking, but also the actual number of words spoken.

First, a researcher blind to conditions manually listened to and recorded the number of words spoken in each participant audio recording. Filler sounds (e.g., "ah", "uh", "um") were excluded (Bortfeld et al., 2001). The researcher made two separate passes over all recordings to ensure an accurate word count. Next, the average number of words spoken per problem was calculated for each participant. This average number of words was used as the outcome variable (Y). We again tested the same mediation model in Fig. 3 with parameters identical to Section 6.3.2 (the estimated model is written out below). From Table 5, we can see that competence, autonomy, and relatedness did not mediate the effects of voice condition on number of words spoken. However, there was a direct effect of voice condition on number of words spoken (c′) for the comparison "All unmasked" vs. "All masked". Therefore, we conclude that the voice condition "All masked" had a significantly positive effect on number of words spoken. Similar to H4-H8, an interaction effect was tested separately using a factorial 2 × 2 ANOVA (participant masking × group member masking). The interaction effect was not significant (F[1, 1341] = 0.016, p = 0.899, η² = .000). Descriptives can be found in Table 4.
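For readers unfamiliar with the notation in the mediation tables that follow, a standard way to write the three-mediator parallel model used in these analyses (per Hayes and Preacher (2014); our notation, not reproduced from the paper) is:

\[
M_j = i_{M_j} + a_j X + e_{M_j}, \qquad j = 1, 2, 3,
\]
\[
Y = i_Y + c' X + b_1 M_1 + b_2 M_2 + b_3 M_3 + e_Y,
\]
\[
\text{total effect: } c = c' + \textstyle\sum_{j} a_j b_j ,
\]

where the mediators M_1, M_2, M_3 are competence, autonomy, and relatedness, X is the voice-condition contrast, and Y is the outcome (e.g., number of words spoken).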
Discussion

Masking visual identity cues in online avatars (e.g., race for African-American participants in anagram-solving (Lee, 2009), gender for female participants in mathematics (Lee and Nass, 2012)) positively influences problem-solving performance in stereotype-relevant contexts (Lee, 2009; Lee and Nass, 2012). However, it is not known if masking identity cues in voice avatars can have a similarly positive effect.

Table note: regression coefficients a (X → M), b (M → Y), c′ (direct X → Y), and c (total X → Y) are reported for a multicategorical X with 4 conditions: (1) "All unmasked," (2) "Male confederates masked," (3) "Female participant masked," and (4) "All masked." For each outcome variable, the first line presents results for the comparison between (1) and (2), the second line (1) and (3), and the third line (1) and (4). These specific comparisons (and not others) are made because we use indicator coding, in which all conditions are compared to the reference condition (here, "All unmasked"); see Hayes and Preacher (2014) for more details on this approach. All presented effects are unstandardized; significant results are in bold.

When male confederates used gender-anonymous voices, female participants spoke longer and scored higher on computing problems compared to when no one used gender-anonymous voices. When everyone in the meeting used gender-anonymous voices, female participants spoke longer, spoke more words, and scored higher on computing problems compared to when no one used gender-anonymous voices. However, the effects of the conditions on subjective measures (competence, choice, relatedness, perceived stereotype threat, and respect from others) and on one behavioral measure (speed in responding) were not significant. There were no significant mediation or interaction effects. The outcomes of our study show that gender-masked voice avatars in online group work have the potential to promote women's participation (i.e., longer duration of speaking and higher spoken word count) and correctness (i.e., higher scores on problems). However, none of these outcomes were significant when female participants alone used a gender-masked voice avatar. Group members using a gender-masked voice avatar resulted in longer duration of speaking and higher problem scores, but not higher spoken word count. Significant positive effects on all three outcomes (speaking duration, spoken word count, problem scores) occurred only when everyone used gender-masked voice avatars. Therefore, while there may be benefits when a subset of meeting participants use a gender-masked voice avatar, our results suggest that everyone in the meeting should use gender-masked voice avatars to maximize outcomes. (The only statistically significant difference between "Male confederates masked" and "All masked" is spoken word count. However, there is another reason to use voice avatars that mask everyone: in a real virtual group, having only some group members' voices gender-masked, from every user's perspective, would create asymmetries in what each user hears, which could be confusing and presents a usability issue.)

The benefits of greater participation and performance to diversity

Male-dominated organizations in computing fields can be discouraging for women (e.g., startups with fraternity-like cultures and sexism) (Steinberg, 2014). Repeatedly experiencing stereotype threat can lead to a negative cycle of diminished performance, confidence, and interest, ultimately leading to disidentification from a career (Gilovich et al., 2006). However, our study suggests that voice gender anonymity can have a positive effect on both participation and performance in solving computing problems. Therefore, voice avatars may be one path forward toward improving diversity.
Applications of identity-masked voice avatars online

The results of this study suggest that identity-masked voice avatars can mitigate stereotype threat in group-based problem solving. When all group members used an identity-masked voice avatar, female participants participated more actively in the online collaborative activity. We view identity-masked voice avatars as applicable for institutions and corporations where meetings between employees are increasingly virtual (Pearlman and Gates, 2010) and where stereotype threat may be present in group decision-making processes wherein stakeholders differ in status (Dubrovsky et al., 1991; Tan et al., 1999). In these instances, identity-masked voice avatars (coupled with visual anonymity) may empower more individuals to voice their opinions and share critical information. As a consequence, decision-making processes can be made more robust and foster more equitable discussions.

In addition to applications in online collaborative work and employee meetings, identity-masked voice avatars may also have significant benefits in online learning environments. For instance, while studies within educational games have explored the effects of avatars on educational outcomes (Kao and Harrell, 2015, 2017, 2018), the specific benefits of identity-masked voice avatars remain largely unexplored. Gender bias is rampant in online learning, manifesting in lower peer ratings of female students' work (Morales-Martinez et al., 2020; Brooke, 2021), lower instructor grades of female students' work (Hofer, 2015), and lower student evaluations of female instructors (MacNell et al., 2015; Ayllón, 2022). Gender anonymization within these online learning environments, including identity-masked voice avatars, may help mitigate long-standing gender biases. Within the specific context of computing, the underrepresentation of women may be influenced by a perceived lack of similarity with those in the field (Cheryan and Plaut, 2010). Even subtle gender cues, such as objects in a classroom, can affect female participation (Cheryan et al., 2009). Given that voice carries rich identity cues, identity-masked voice avatars present a promising avenue for exploration in addressing gender inequities in computing.

Anonymous voice avatars for everyone in the group maximize benefits

Research has shown that viewing oneself as being in the minority amplifies stereotype threat effects (Lee and Nass, 2012). Within this context, our results suggest that everyone should use gender-anonymous voice avatars to maximize outcomes, since this minimizes the potential sources of stereotype threat. There is, however, a broader ethical question of whether we should use anonymous voice avatars at all.

Should we use anonymous voice avatars?

This study may suggest that people should be encouraged to use gender-anonymous avatars in online learning platforms. However, the social implications of anonymizing gender online to combat stereotype threat are fraught: this techno-solutionist approach does not address the underlying cultural problem. Hence, we suggest that this study's results be interpreted as support for a solution that utilizes avatars (both visual and audial) to deemphasize gender cues and instead encourages users to express other aspects of their identities that are more relevant to the context of interaction.
Anonymous voice avatars did not affect all outcomes

Voice gender anonymity did not have a significant impact on competence, autonomy, relatedness, speed in responding, perceived stereotype threat scores, or autonomous respect scores.

With the exception of speed in responding, these were all self-reported subjective outcomes. Interestingly, even though we see an actual difference in levels of competence between conditions (i.e., scores on computing problems), we do not see a corresponding difference in self-reported competence. Stereotype threat, through increased rumination, arousal, and efforts to self-regulate, causes cognitive depletion (Beilock, 2008), and this cognitive depletion has been shown to make people poor judges of their own competence (Tellhed and Adolfsson, 2018). As such, one potential explanation is that stereotype threat undermined participants' self-assessments. Research into the other measures used in our study (i.e., autonomy, relatedness, autonomous respect scores) may shed light on whether other self-reported measures might likewise be inaccurate in some cases. Since SDT was used as a theoretical lens to hypothesize that stereotype threat would have a negative effect on competence, autonomy, and relatedness, future research could utilize alternative measurement approaches for each of these facets to further elucidate the effects of anonymous voice avatars, especially for autonomy and relatedness, for which there were no non-subjective measures in the present study.

Because subjective stereotype threat scores did not differ significantly across conditions, one might argue that stereotype threat was not the cause of the differences between conditions. However, given the highly controlled nature of our study and the validation of the voice avatar creation software, we believe this is unlikely. Variation between conditions was minimized except for the presence or absence of stereotypical gender cues in the voice-changing output, which was further confirmed through manipulation checks. We believe a more likely explanation is that stereotype threat was a subconscious factor that influenced participants' performance and participation but was not salient enough to manifest in the self-reported scale. The vast majority of previous research on stereotype threat has focused on actual performance effects (Steele, 2010) rather than self-reported measures. Furthermore, previous research has shown that, for individuals under stereotype threat conditions, there is a significant divergence between self-reports of stereotype threat outcomes and objective stereotype threat outcomes (Bosson et al., 2004). For example, in one study, stereotype-threatened individuals demonstrated significantly more anxiety, but this did not manifest as significantly more self-reported anxiety (Bosson et al., 2004). As such, divergences between objective and self-reported stereotype threat measures are supported by previous studies.
Regarding speed in responding, our hypothesis that voice gender anonymity would lead to faster response times may be incorrect. Although we expected participants who are less anxious to also be less tentative (i.e., less hesitant in communicating), it is also possible that such participants would have more confidence and therefore take their time to think carefully about their responses before answering. The study design could also have played a role in this result: because participant responses were recorded and participants were not engaged in a real-time conversation, participants may have been encouraged to take more time to think about their responses. Here, we did not find any effect of our voice avatar manipulation on speed in responding. Nevertheless, our manipulation had a significant effect on women's participation and performance on computing problems.

Voice avatar application areas: Education, entertainment, work

We envision several potential application areas of voice avatars. In education, this might include teaching through virtual reality (Bailenson et al., 2008), online meeting platforms (de Oliveira Dias et al., 2020), and massive open online courses (Uchidiuno et al., 2016). In entertainment, this might include online multiplayer games; for example, anonymous voice avatars may help reduce toxic behavior between players (Vella et al., 2020; Wadley et al., 2009; Türkay et al., 2020). In work, this might include virtual meetings between employees (Pearlman and Gates, 2010). Although we are optimistic about the potential of voice avatars, there are a few caveats.

Firstly, our results on voice avatars may not generalize to all situations. For example, in a virtual organizational meeting in which the identity of all meeting participants is known in advance, it is not clear how effective anonymous voice avatars would be.

Secondly, our meeting platform did not contain a video feed of participants' faces. This absence allowed us to study voice-only avatars in isolation, an important first step. Voice-only avatars are highly applicable in audio calls (e.g., Zoom call-in, regular phone calls, Discord voice chat), social virtual reality, and online gaming. Here, we have shown that the effects of voice avatars alone are impactful. Nevertheless, video feeds are commonplace in many online meetings (Baym et al., 2021). As such, an appropriate next research direction is to study the combination of visual and audial avatars, i.e., visual avatars that align with their audial avatar counterparts, e.g., identity-masked visual avatars.

Limitations

Despite the carefully controlled nature of our experiment, there are a few limitations that should be considered in future studies.

Participants were not engaged in live interaction with the other two group members (i.e., the two male confederates). However, according to the manipulation check, participants perceived the interaction as an online meeting. Future smaller-scale studies might consider live interaction.

According to our manipulation checks, participants in self-unmasked conditions still felt their voice was somewhat masked. As a result, the effects of self-masking might be understated in our study (note that our conditions were still successful at inducing higher or lower levels of perceived self-masking; see Manipulation Checks. Furthermore, our validation study in Section 3 confirms that the unmasked voice avatar is of a discernible stereotypical gender).
Our study recruited only participants self-identifying as female. More research is needed on non-binary gender identities and anonymous voice avatars. Furthermore, investigating diverse group configurations (smaller vs. larger groups, variations in group demographics) would help researchers understand relevant contextual factors.

Conclusion

In a large-scale preregistered study, we investigated gender-anonymous voice avatars in online computing group work. Female participants worked with two ostensible male participants (actually recorded confederates). There were four voice gender anonymity conditions: (1) All unmasked, (2) Male confederates masked, (3) Female participant masked, and (4) All masked. When only male confederates used gender-anonymous voices (compared to all unmasked), female participants spoke for a longer period of time and scored higher on computing problems. When everyone used gender-anonymous voices (compared to all unmasked), female participants spoke for a longer period of time, spoke more words, and scored higher on computing problems. We did not find significant effects on subjective measures or on one behavioral measure that we deemed to poorly reflect stereotype threat. Our findings demonstrate that anonymous voice avatars are a potential avenue for supporting diversity in virtual group work. We discussed several potential application areas of anonymous voice avatars (e.g., virtual classrooms, online multiplayer games, business meetings). This study presents a first step in understanding the effects of anonymous voice avatars.

[Table notes: * significant at p < 0.05; ** significant at p < 0.01; *** significant at p < 0.005; significant based on 95% CI. Descriptives are given for the outcomes in H4 through H8 and the number of words spoken; duration of speaking, speed in responding, and number of words spoken are averages per problem.]
Spatio-Temporal Abnormal Behavior Prediction in Elderly Persons Using Deep Learning Models

The ability to identify and accurately predict abnormal behavior is important for health monitoring systems in smart environments. Specifically, for elderly persons wishing to maintain their independence and comfort in their living spaces, abnormal behaviors observed during activities of daily living are a good indicator that the person is more likely to have health and behavioral problems that need intervention and assistance. In this paper, we investigate a variety of deep learning models, such as Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), CNN-LSTM and Autoencoder-CNN-LSTM, for identifying and accurately predicting the abnormal behaviors of elderly people. The temporal information and spatial sequences collected over time are used to generate models, which can be fitted to the training data, and the fitted models can be used to make predictions. We present an experimental evaluation of these models' performance in identifying and predicting elderly persons' abnormal behaviors in smart homes, via extensive testing on two public data sets, taking into account different model architectures and tuning the hyperparameters for each model. The performance evaluation is focused on the accuracy measure.

Introduction

The emerging Internet of Things (IoT) promises to create a world in which all the objects around us are connected to the Internet and communicate with each other with minimal human intervention. The crucial goal is to create a better world for human beings, in which the objects around us are context-aware, allowing them to respond to questions such as what we want, what we need and where we are. Smart homes, one of the main application domains of the IoT, have received particular attention from researchers [1]. Smart homes provide a safe, secure environment for dependent people. They offer the ability (1) to track residents' activities without interfering in their daily life; and (2) to track residents' behaviors and monitor their health by using sensors embedded in their living spaces [2]. The data collected from smart homes need to be deeply analyzed and investigated, in order to extract useful information about residents' daily routines, in particular regarding specific activities of daily living. According to Reference [3], the training process can be distinguished into trained, training-free and trained-once; this paper is concerned with the trained approach. Activity recognition [4], a core feature of the smart home, consists of classifying data recorded by the different integrated environmental and/or wearable sensors into well-defined, known movements. The main contributions of this paper are as follows:

1. Investigating a variety of deep learning models, including model hybridization, for automatic prediction of abnormal behaviors in smart homes.
2. Managing the problem of imbalanced data by oversampling minority classes, for the LSTM model in particular.
3. Conducting extensive experiments based on two public datasets to validate the proposed models.

The paper is organized as follows: Section 2 presents an overview of anomaly detection models and related work on machine learning algorithms. Section 3 presents the materials and methods used to carry out our work. Section 4 shows the obtained results for each method/dataset. Finally, Section 5 discusses and highlights the obtained results.
Related Work Tracking user behavior for abnormality detection has attracted considerable attention and is becoming a primary goal for some researchers [16]. Abnormal behavior detection approaches are based mainly on machine learning algorithms and specifically on supervised learning techniques [17]. Supervised classification techniques need labelled data points (samples) for the models to learn. This kind of classification entails training a classifier on the labelled data points and then evaluating the model on new data points. Thus, in the case of normal and abnormal classes, the model learns the characteristics of the data points and classifies them as normal or abnormal. Any data point that does not fit the normal class will be classified as an anomaly by the model. Various classification techniques have been applied for abnormal behavior detection. Pirzada et al. [18] explored the k-nearest neighbors algorithm (KNN), which works well to classify data into categories. Their method performs a binary classification in which activities are classified as good or bad, to distinguish anomalies in user behavior. The proposed KNN is applied to predict whether an activity belongs to the regular (good) or irregular (bad) class. Their method allows an unobtrusive use of sensors to monitor the health condition of an elderly person living alone. Aran et al. [19] proposed an approach to automatically observe and model the daily behavior of the elderly and detect anomalies that could occur in the sensor data. In their proposed method, anomalies can be relied on to signal health-related problems. They therefore created a probabilistic spatio-temporal model to summarize daily behavior. Anomalies, defined as significant changes from the learned behavioral model, are detected and performance is evaluated using the cross-entropy measure. When an anomaly is detected, caregivers are informed accordingly. Ordonez et al. [20] presented an anomaly detection method based on Bayesian statistics that identifies anomalous human behavioral patterns. Their proposed method automatically assists elderly persons with disabilities who live alone, by learning and predicting standard behaviors to improve the efficiency of their healthcare system. The Bayesian statistics are chosen to analyze the collected data and the estimation of the static behavior is based on the introduction of three probabilistic features: sensor activation likelihood, sensor sequence likelihood and sensor event duration likelihood. Yahaya et al. [21] proposed a novelty detection algorithm, known as the one-class Support Vector Machine (SVM), which they applied to the detection of anomalies in activities of daily living. Specifically, they studied an anomaly in sleeping patterns which could be a sign of mild cognitive impairment in older adults or other health-related issues. Palaniappan et al. [22] were interested in detecting abnormal activities of individuals by ruling out all possible normal activities. They define abnormal activities as randomly occurring, unexpected events. The multi-class SVM method is used as a classifier to identify the activities in the form of a state transition table. The transition table helps the classifier avoid states which are unreachable from the current state. Hung et al. [23] proposed a novel approach that combines SVM and Hidden Markov Model (HMM) in a homecare sensory system. 
Radio Frequency IDentification (RFID) sensor networks are used to collect the elder's daily activities; an HMM is used to learn the data and SVMs are employed to estimate whether the elder's behavior is abnormal or not. Bouchachia et al. [24] proposed a Recurrent Neural Network (RNN) model to address the problem of activity recognition and abnormal behavior detection for elderly people with dementia. Their proposed method suffered from the lack of data in the context of dementia.

All of the aforementioned methods suffer from one or more of the following limitations:

1. The presented methods focus on spatial and temporal anomalies in user assistance; abnormal behavior is not addressed in the smart home context.
2. These methods require feature engineering, which is difficult, particularly as data become larger.
3. The accuracy of abnormality identification and prediction is not sufficient.

These points motivate us to propose methods which seek to overcome these limitations and to be useful for assistance in the smart home context.

Proposed Method

This section sets out the problem of abnormal behavior identification and prediction, describes different Neural Network (NN)-focused architectures and presents various hyperparameters for tuning the developed models.

Problem Description

Abnormality detection is an important task in health care monitoring, particularly for monitoring the elderly in smart homes. Abnormality detection consists of finding unexpected activities, variations in normal patterns of activities or patterns in data that do not conform to the expected behavior [25]; humans usually perform their ADLs in a sequential manner. According to Zhu et al. [26], abnormalities can be classed as temporal, spatial or behavioral. Our work focuses on the behavioral class, because this kind of abnormality depends equally on time (when the activity is performed) and location (where the activity is performed). Each activity is defined by a sequence of sub-activities, and if the person violates the expected sequence, that constitutes an abnormality.

Deep Learning for Abnormal Behavior Detection

Abnormal behavior detection is treated as a classification problem, in which a time series model is used to predict future values based on previously observed values. It takes the order of the observations into account and uses models like Long Short-Term Memory (LSTM) recurrent neural networks, which have memory and can learn any temporal dependence between observations; the CNN model, which has convolutional hidden layers that operate over a 1D sequence; and the Autoencoder, which requires a dataset of sequences and is configured to read, encode, decode and recreate the input sequence.

LSTM

LSTM [27] is a recurrent neural network architecture whose principal characteristic is a memory extension that can be seen as a gated cell, where gated means that the cell decides whether or not to store or delete information based on the importance it assigns to the information. The assignment of importance operates through weights, which are also learned by the algorithm. Simply put, this means that it learns over time which information is important. The LSTM architecture utilizes three types of layers: input, hidden and output. The hidden layers are fully connected to the input and output layers. A layer in LSTM is composed of blocks, and each block has three gates: input, output and forget, which are all interconnected.
These gates decide whether to let new input in (input gate), delete the information because it is not important (forget gate) or allow it to impact the output at the current time step (output gate). As mentioned previously, our rationale for using LSTM is its ability to remember inputs over a long period, making it possible to remember data sequences. Abnormality detection aims to identify a small group of samples which deviate markedly from the existing data. That is why we have chosen LSTM to identify and accurately predict abnormal behavior from what is likely to be a long series of sequential data, given that people perform their ADLs in a sequential manner. Less human intervention is thus required in the identification and prediction process. The data must be reshaped to feed the LSTM input layer, which needs the input data to be 3-dimensional: (training samples, time steps, features). For this layer, we added an activation function (ReLU). The dropout method [28] was used to avoid the overfitting problem in LSTM architectures and improve model performance. In our proposed model, dropout is applied between the two hidden layers and between the last hidden layer and the output layer. We set the dropout at 20%, as recommended in the literature [29]. The last layer (dense layer) defines the number of outputs, which represent the different activities and anomalies (classes). The output is considered as a vector of integers, which is converted into a binary matrix. The anomaly prediction is formulated as a multi-class classification problem which requires the creation of as many output values as there are classes, one for each class. Softmax is used as the activation function and categorical cross-entropy as the loss function. Figure 1 depicts the development of the LSTM architecture.
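The following is a minimal Keras sketch of an LSTM classifier of the kind described above; the dataset dimensions (time steps, features, classes) are hypothetical placeholders rather than values from the paper, and the exact layer count should be read from the tuning tables rather than from this sketch.

```python
# Minimal LSTM classifier sketch (assumed shapes; not the authors' exact code).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

n_timesteps, n_features, n_classes = 30, 29, 7   # hypothetical dimensions

model = Sequential([
    # Input must be 3-D: (samples, time steps, features)
    LSTM(20, activation='relu', return_sequences=True,
         input_shape=(n_timesteps, n_features)),
    Dropout(0.2),                              # 20% dropout between hidden layers
    LSTM(20, activation='relu'),
    Dropout(0.2),                              # dropout before the output layer
    Dense(n_classes, activation='softmax'),    # one output per class
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',  # labels one-hot encoded
              metrics=['accuracy'])
```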
CNN

CNNs are a class of neural networks generally used for image recognition and object classification. Our aim is to use CNNs to identify abnormalities in time series, an area which is attracting attention, as they can learn directly from the raw time series data and extract features from sequences of observations without domain expertise or manually engineered input features [30]. CNN development entails adapting the time series (temporal multidimensional 1D readings) by forming a virtual image, in a two-stage process. The first stage is a feature extractor, which learns features from raw data automatically. The second is a trainable fully connected stage, which performs classification based on the features learned in the previous stage. We develop our CNN architecture based on a feature extractor which comprises a convolution layer, an activation layer, a pooling layer and a fully connected layer, each of which requires a feature map as input and produces a feature map as output [31], as described in Figure 2. Our convolution layer creates a feature map used to predict the class probabilities for each feature by applying a filter (64) that scans the whole image, a few pixels at a time. The shape of the input to the convolution layer is (number of samples, number of timesteps, number of features per timestep). We add an activation function (ReLU) that introduces non-linearity into the neural network and allows it to learn a more complex model. We use two convolutional hidden layers followed by a max pooling layer, where max pooling (2) enables the CNN to detect the image when presented with modifications. The convolution and pooling layers can be repeated to obtain deeper variants (Conv3 or Conv4). The advantage of this approach is that we treat the 1D sensor reading as a 1D image, which is simple and easy to implement. After that, the fully connected stage "flattens" the outputs generated by the previous layers to turn them into a single vector that can be used as an input for the next layer, applies weights over the input generated by the feature analysis to predict an accurate label, and generates the final probabilities to determine a class for the 1D sequence array. The output of these networks is often one or more fully connected layers that interpret what has been read and map this internal representation to a class value. Once the model is defined, it can be fitted to the training data and the fitted model can be used to make a prediction.
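As a companion to the description above, here is a minimal Keras sketch of such a 1D CNN classifier; the input dimensions reuse the hypothetical placeholders from the LSTM sketch, and the width of the dense layer is an assumption of this sketch, not a value reported by the authors.

```python
# Minimal 1D CNN classifier sketch (assumed shapes and dense-layer width).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

n_timesteps, n_features, n_classes = 30, 29, 7   # hypothetical dimensions

model = Sequential([
    # Two convolutional hidden layers with 64 filters each
    Conv1D(64, kernel_size=5, activation='relu',
           input_shape=(n_timesteps, n_features)),
    Conv1D(64, kernel_size=5, activation='relu'),
    MaxPooling1D(pool_size=2),           # max pooling (2) as described above
    Flatten(),                           # single vector for the dense stage
    Dense(100, activation='relu'),       # assumed width of the dense layer
    Dense(n_classes, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```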
Autoencoder-CNN-LSTM

An autoencoder [32] is a multi-layer neural network in which the desired output is the input itself. The aim of an autoencoder is to learn a more advanced feature representation, in compressed form, that captures the most significant features of the training data [33]. The architecture is constructed on three layers: an input layer, a hidden layer and an output layer. To attain valuable features from the autoencoder, the dimension of the hidden units is regularized to be smaller than the dimension of the input units. The framework usually includes encoding and decoding processes. Given an input x, the autoencoder first encodes it to one or more hidden layers through several encoding processes, then decodes the hidden layers to obtain an output that reconstructs x. In this work, CNN and LSTM are integrated into the autoencoder so that it can act as a classifier; the proposed framework is described in Figure 3. The developed architecture described in Figure 3 has a CNN as encoder, a RepeatVector layer (used as the first layer of the decoder), LSTM layers as decoder, and a TimeDistributed (Dense) output layer. The encoding, performed by the CNN, requires a 1D vector as input, followed by pooling and flatten layers. The output of this stage is an encoded feature vector of the input data, which can be used as compressed data. The encoding is followed by a RepeatVector layer, whose role is to replicate the feature vector, an LSTM layer with a number of nodes, and a TimeDistributed (Dense) layer.
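A minimal Keras sketch of this encoder-decoder arrangement is given below; it mirrors the CNN-encoder / RepeatVector / LSTM-decoder / TimeDistributed(Dense) chain just described, with the same hypothetical dimensions as in the earlier sketches.

```python
# Minimal Autoencoder-CNN-LSTM sketch (assumed shapes; illustrative only).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv1D, MaxPooling1D, Flatten,
                                     RepeatVector, LSTM, TimeDistributed, Dense)

n_timesteps, n_features, n_classes = 30, 29, 7   # hypothetical dimensions

model = Sequential([
    # Encoder: CNN that compresses the input sequence
    Conv1D(64, kernel_size=5, activation='relu',
           input_shape=(n_timesteps, n_features)),
    MaxPooling1D(pool_size=2),
    Flatten(),                                   # encoded feature vector
    # Decoder: replicate the code, then unroll it with an LSTM
    RepeatVector(n_timesteps),                   # first layer of the decoder
    LSTM(20, activation='relu', return_sequences=True),
    # Per-timestep targets are assumed for this formulation
    TimeDistributed(Dense(n_classes, activation='softmax')),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```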
Experiments

For our experimental study, to test our method's ability to identify abnormalities, we selected two public datasets involving different types of abnormality. Because these datasets generally exhibit a problem of imbalanced classes, a Synthetic Minority Over-Sampling TEchnique (SMOTE) was used to oversample our data. We then evaluated the classification methods using hyperparameter tuning.

SIMADL Dataset

This research uses the SImulated Activities of Daily Living (SIMADL) [34] dataset generated by OpenSHS [35], an open-source simulation tool that offered the flexibility needed to generate residents' data for classification of ADLs. OpenSHS was used to generate several synthetic datasets that include 29 columns of binary data representing the sensor values, where each binary sensor has two states, on (1) and off (0). The sensors can be divided into two groups: passive and active. The passive sensors react without the participant interacting explicitly with them; instead, they react to the participant's movements and positions. The sampling was done every second. Seven participants were asked to perform their simulations using OpenSHS. Each participant generated six datasets, resulting in forty-two datasets in total. The participants self-labelled their activities during the simulation. The labels used by the participants were: Personal, Sleep, Eat, Leisure, Work, Other and Anomaly. The simulated anomalies are behavioral and are described in Table 1. Note that each participant simulated his/her own behavioral abnormality, where the kind of abnormality is forgetting, as shown in Table 1.

MobiAct Dataset

MobiAct is a public dataset (version 2) [36], collected using a smartphone placed in the participant's pocket. The participants were asked to perform different types of activities (such as walking, sitting, standing, ascending and descending stairs, jumping, jogging and biking). Table 2 shows the requested abnormalities and the different kinds of falls.

Imbalanced Data

The distribution of the classes representing the different ADLs is not uniform, leading to imbalanced classes. This situation arises because of the rarity of abnormal behavior. This can be clearly seen in Figure 4, where the class "anomaly" constitutes a minority. We decided to tackle this problem in order to improve our classification performance. Dealing with imbalanced datasets requires strategies such as the use of oversampling techniques before providing the data as input to the LSTM model. The oversampling strategy involves augmenting the minority class samples to reach a balanced level with the majority class.
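A minimal sketch of this augmentation step, using the imbalanced-learn implementation of SMOTE (described further in the next subsection), is shown below; the reshape to and from 2-D is an assumption needed because SMOTE operates on flat feature vectors, and the variable names are placeholders.

```python
# Oversampling the minority (anomaly) classes with SMOTE (sketch).
from imblearn.over_sampling import SMOTE

# X: (samples, timesteps, features) array, y: integer class labels (assumed)
X_flat = X.reshape(len(X), -1)                    # SMOTE needs 2-D input
X_res, y_res = SMOTE(random_state=42).fit_resample(X_flat, y)
X_res = X_res.reshape(-1, n_timesteps, n_features)  # back to 3-D for the models
```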
Oversampling

We treat the abnormality (anomaly) detection problem as supervised learning that involves correctly classifying rare class samples as compared to majority samples. Anomalies constitute a minority in the whole set of behaviors, which creates an imbalanced data problem. Therefore, we have to oversample our data, after which we can classify correctly. To this end, a subset of data is taken from the minority samples as an example, and new, synthetic, similar data points are created. These synthetic data points are then added to the original dataset, and the resulting new dataset is used to train the classification models. The main approach to balancing classes is either to increase the samples of the minority class or to decrease the samples of the majority class. In oversampling, we increase the minority class samples. This is done in order to obtain approximately the same number of instances for both classes, as demonstrated in Figure 4. Our rationale in using this strategy is to avoid overfitting. We used the SMOTE statistical method [37] to oversample our classes, as illustrated in Figure 4. We note that the x-axes indicate the classes and the y-axes indicate the number of input data points.

Network Architectures and Hyper-Parameter Tuning of the Models with Different Datasets

The experiments were implemented in the Python language using the Keras library [37] with Tensorflow [38] to create the different LSTM, CNN and Autoencoder model architectures. Deep learning models are full of hyper-parameters, and finding the best configuration for these parameters in such a high-dimensional space is not a trivial challenge, but there are some parameters which are fixed for all architectures, as shown in Table 3. Many experiments were run, varying the LSTM network architecture according to the hyperparameters shown in Table 4, to find suitable values. To improve the LSTM performance, it is important to vary the nodes, layers and epochs. To compile and fit the model, we experimentally used the hyperparameters indicated in Table 3. Note that the datasets are sensitive to imbalanced classes, as described in the sections above. The most suitable architecture for the two datasets has 20 nodes, 4 layers and 10 epochs. Following the CNN architecture described in Section 3.2.2, we experimented with the framework on the two datasets; tuning the CNN model requires varying the number of filters, the kernel size, the pooling size, the number of layers and the number of epochs, as indicated in Table 5. Varying these hyperparameters according to Table 5 clearly improves the CNN and yields the appropriate architecture, which is attained with 64 filters, kernel size 5, pooling size 5, 2 to 4 layers and 10 epochs. The CNN-LSTM model, a hybridization of the LSTM and CNN models presented in the sections above, is used to identify and extract significant temporal and spatial features from multivariate time series, taking advantage of the strength of CNN in feature extraction from raw data and the excellent time series processing ability of LSTM. In order to find the best configuration of CNN-LSTM, a hyperparameter tuning process is required, but in such a high-dimensional space this is not trivial.
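To make the hybrid structure concrete, here is a minimal Keras sketch of a CNN-LSTM of the kind being tuned; the layer counts follow the configuration reported in the next paragraph (3 CNN layers, 1 LSTM layer of 20 nodes), while the padding mode and input dimensions are assumptions of this sketch.

```python
# Minimal CNN-LSTM sketch: CNN extracts spatial features, LSTM models time.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

n_timesteps, n_features, n_classes = 30, 29, 7   # hypothetical dimensions

model = Sequential([
    Conv1D(64, kernel_size=5, padding='same', activation='relu',
           input_shape=(n_timesteps, n_features)),
    Conv1D(64, kernel_size=5, padding='same', activation='relu'),
    Conv1D(64, kernel_size=5, padding='same', activation='relu'),
    MaxPooling1D(pool_size=5),
    LSTM(20),                                    # one LSTM layer, 20 nodes
    Dense(n_classes, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```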
Table 6 lists the different hyperparameters. It was very challenging to find the suitable architecture, where the aim is to capture both the temporal and the spatial features. Finally, we reached the suitable architecture with these hyperparameters: 64 filters, kernel size 5, pooling 5, 3 CNN layers, 20 nodes, 1 LSTM layer and 10 epochs. Tuning the Autoencoder-CNN-LSTM model proceeds in the same way as for the CNN-LSTM model, following the architecture shown in Figure 3.

Performance Metrics Analysis

As stated in the Introduction, the experimental study was carried out in order to identify and predict abnormal behavior. To highlight the performance of the proposed methods, we consider accuracy, precision and recall as performance measures for the different LSTM, CNN, CNN-LSTM and Autoencoder-CNN-LSTM models. The results of each method on each dataset are summarized in Table 7; the presented results are interesting in several ways. In this section, we analyze our objective in terms of abnormality detection, where the captured abnormality differs from one model to another. LSTM aims to capture temporal abnormal behavior sequences by incorporating a memory cell to store temporal dependencies and information. As stated in the Introduction, the most important characteristic of deep learning is that it does not need any manual feature extraction and can easily learn a hierarchical feature representation directly from the raw data. The LSTM model clearly has an advantage in temporal information identification and prediction. The metrics shown in Table 7 indicate that the LSTM model adequately captures the important features to boost abnormality detection accuracy. LSTM performs well on each dataset, with an accuracy of 94% and 93%, respectively. We also report precision and recall measures, which are shown in Table 7. A comparison of LSTM with classic machine learning models was done in Reference [39]. CNN aims to capture spatial abnormal behavior sequences in time series based on automatic feature extraction. We report in Table 7 the results obtained for the SIMADL and MobiAct datasets, where the CNN was tested with an increasing number of layers (3-CNN and 4-CNN). The model performs well even when the number of layers is increased. The accuracy obtained by the CNN models on the two datasets was 93% and 91%, respectively. As shown in Table 7, the hybridization of CNN with LSTM is interesting: it has a strong ability to extract temporal and spatial features automatically at the same time. According to the obtained accuracy, precision and recall, this model gives the best performance in terms of abnormality detection. The hybridization of CNN with LSTM achieves an accuracy of 98% on the SIMADL dataset and 93% on the MobiAct dataset. This could be explained by the fact that temporal and spatial features are two important types of features in detecting abnormal behaviors. Autoencoder-CNN-LSTM provides additional support and a clear improvement for our problem in a compressed manner, and it can be seen from Table 7 that it gives the best accuracy, precision and recall for the MobiAct dataset only. In contrast, when testing the model on SIMADL, an accuracy of 84% was obtained. Therefore, we cannot generalize its usefulness. In order to check that the obtained accuracy and precision were not misleading, we use a confusion matrix for each model and both datasets.
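The per-class check described above can be reproduced with a few lines of scikit-learn; the sketch below assumes a trained Keras model and one-hot encoded test labels, both hypothetical names.

```python
# Per-class evaluation with a confusion matrix (sketch; assumed variables).
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

y_pred = np.argmax(model.predict(X_test), axis=1)   # predicted class indices
y_true = np.argmax(y_test, axis=1)                  # y_test is one-hot encoded

print(confusion_matrix(y_true, y_pred))             # rows: true, cols: predicted
print(classification_report(y_true, y_pred))        # per-class precision/recall
```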
LSTM, CNN and CNN-LSTM perform well when the datasets are oversampled using the SMOTE method, as shown in Figure 5. Another interesting way to extract the features is to compress them in an unsupervised manner with the Autoencoder-CNN-LSTM model, which we still have to improve in order to generalize it. In future work, an analysis of users' outdoor behavior could provide a fuller understanding of elderly people's health and thereby improve their well-being.
Bounded turning circles are weak-quasicircles

We show that a metric Jordan curve $\Gamma$ is \emph{bounded turning} if and only if there exists a \emph{weak-quasisymmetric} homeomorphism $\phi\colon \mathsf{S}^1 \to \Gamma$.

Theorem 1.3 ([Hei01, Theorem 10.19]). If $X$ is connected and both $X, Y$ are doubling, then every weak-quasisymmetry $\phi \colon X \to Y$ is quasisymmetric.

Definition (1.3) for quasisymmetry appears in [TV80]. In earlier work (for example in [AB56], [Ahl63]) quasisymmetry is defined by (1.2); it is however only applied to maps where the two notions agree by the theorem cited above. A quasicircle is the image of the unit circle $S^1$ by a quasisymmetric map. Ahlfors has given in [Ahl63] the following geometric characterization for planar quasicircles. For a Jordan curve $\Gamma \subset \mathbb{C}$ it holds that $\Gamma$ is a quasicircle $\Leftrightarrow$ $\Gamma$ is bounded turning. Tukia and Väisälä generalize this characterization to all metric Jordan curves in [TV80]; namely, for a metric Jordan curve $\Gamma$ it holds that $\Gamma$ is a quasicircle $\Leftrightarrow$ $\Gamma$ is bounded turning and doubling. If we call the weak-quasisymmetric image of the unit circle $S^1$ a weak-quasicircle, then Theorem 1.1 may be expressed as follows. For a Jordan curve $\Gamma$ it holds that $\Gamma$ is a weak-quasicircle $\Leftrightarrow$ $\Gamma$ is bounded turning. It is easy to see that the quasisymmetric image of a doubling space is doubling (see [Hei01, Theorem 10.18]). Thus one recovers from Theorem 1.1 together with Theorem 1.3 the Tukia-Väisälä characterization of quasicircles. The first example of a bounded turning circle that is not a quasicircle was given by Tukia-Väisälä in [TV80, Example 4.12]. A simple catalog $\mathcal{S}$ of bounded turning circles that includes a bi-Lipschitz copy of any bounded turning circle is given in [HM]. A curve $S \in \mathcal{S}$ from this catalog is doubling, i.e., a quasicircle, if and only if a simple condition is satisfied.

1.2. Organization of the paper. The "if"-part of Theorem 1.1 is trivial. Namely let $\phi \colon S^1 \to \Gamma$ be $H$-weak-quasisymmetric. Consider arbitrary points $a, b \in S^1$, and let $[a, b] \subset S^1 = [0,1]/\{0 \sim 1\}$ be the arc between $a$ and $b$ of smaller diameter. Then for points $x, y \in [a, b]$ it holds that
$$|\phi(x) - \phi(y)| \le |\phi(x) - \phi(a)| + |\phi(a) - \phi(y)| \le 2H\,|\phi(a) - \phi(b)|, \tag{1.4}$$
since $\lambda(a,x), \lambda(a,y) \le \lambda(a,b)$; thus $\Gamma$ is $2H$-bounded turning.

The rest of this paper concerns the construction of a weak-quasisymmetry $\phi \colon S^1 \to \Gamma$, for a given bounded turning circle $\Gamma$. In Section 2 we show that we can restrict our attention to the case when $\Gamma$ is 1-bounded turning. Also an elementary lemma about dividing arcs into subarcs of equal diameter is proved. In Section 3 we divide $\Gamma$ into arcs $\Gamma^n_1, \dots, \Gamma^n_{N_n}$ (for each $n \in \mathbb{N}$). Two arcs $\Gamma^n_i, \Gamma^n_j$ have roughly the same diameter. Each arc $\Gamma^{n+1}_i$ is contained in a (unique) arc $\Gamma^n_j$; thus the sets $\boldsymbol{\Gamma}^n = \{\Gamma^n_j \mid j = 1, \dots, N_n\}$ form subdivisions of $\Gamma$. In Section 4 we divide the unit circle $S^1$ into intervals $I^n_1, \dots, I^n_{N_n}$. Neighboring intervals $I^n_j, I^n_{j+1}$ have roughly the same diameter. Furthermore the combinatorics of the subdivisions of $\Gamma$ and $S^1$ is the same, namely $\Gamma^{n+1}_i \subset \Gamma^n_j$ if and only if $I^{n+1}_i \subset I^n_j$. The map $\phi \colon S^1 \to \Gamma$ is defined in Section 5, by mapping endpoints of intervals $I^n_j$ to endpoints of corresponding arcs $\Gamma^n_j$. Section 6 and Section 7 are preparations to prove the weak-quasisymmetry of $\phi$. Namely we show that the diameter of any interval in $S^1$ can be estimated in terms of the subdivision intervals $I^n_j$. Then we show that if $I^n_i, I^m_j$ are the largest subdivision intervals contained in adjacent intervals of the same length, then $|m - n|$ is bounded. Section 8 finishes the proof of Theorem 1.1.

1.3. Notation. The unit circle is denoted by $S^1$, which we identify with $[0,1]/\{0 \sim 1\}$.
The unit circle is thus equipped with the orientation inherited from the real line. We always assume that $S^1$ is equipped with the arc-length metric denoted by $\lambda(s,t)$, i.e., if $0 \le s \le t \le 1$, then
$$\lambda(s,t) = \min\{\,t - s,\; 1 - (t - s)\,\}. \tag{1.5}$$
The diameter with respect to this metric of an interval $I \subset S^1 = [0,1]/\{0 \sim 1\}$ is denoted by $|I|$. Note that $|I|$ equals the Lebesgue measure of $I$ in the case when $|I| \le |S^1 \setminus I|$.

Preliminaries

We first show that we can restrict our attention to 1-bounded turning circles. More precisely, we show that any bounded turning circle is bi-Lipschitz equivalent to a 1-bounded turning circle. Then we prove that any arc can be divided into subarcs of equal diameter.

2.1. Diameter distance. Given any metric Jordan curve or Jordan arc $\Gamma$ we define the diameter distance on $\Gamma$ by
$$dd(x,y) := \operatorname{diam} \Gamma[x,y]$$
for all $x, y \in \Gamma$, where $\Gamma[x,y] \subset \Gamma$ is the arc of smaller diameter between $x, y$. We record some properties of $dd$. (3) For any arc $A \subset \Gamma$ it holds that $\operatorname{diam}_{dd} A = \operatorname{diam} A$. Here $\operatorname{diam}_{dd}$ denotes the diameter with respect to $dd$. It is elementary that postcomposing an $H$-weak-quasisymmetry with an $L$-bi-Lipschitz map yields an $HL^2$-weak-quasisymmetry. Assume that, for a given bounded turning circle $\Gamma$, we have constructed a weak-quasisymmetry $\phi \colon S^1 \to (\Gamma, dd)$. Since the identity map $(\Gamma, dd) \to \Gamma$ is bi-Lipschitz, the composition $S^1 \xrightarrow{\phi} (\Gamma, dd) \xrightarrow{\mathrm{id}} \Gamma$ is the desired weak-quasisymmetric parametrization of $\Gamma$. Thus to prove Theorem 1.1 it is enough to construct a weak-quasisymmetry $\phi \colon S^1 \to \Gamma$ for any 1-bounded turning circle $\Gamma$.

2.2. Dividing arcs. Here we prove that any metric Jordan arc can be divided into any given number of subarcs each having exactly the same diameter. The problem of finding points on a metric Jordan arc such that consecutive points are at the same distance is a non-trivial problem. In 1930 Menger gave a proof [Men30, p. 487] that is short, simple, and natural; but wrong. It was proved for arcs in Euclidean space in [AB35], and in the general case (indeed in more generality) in [Sch40, Theorem 3]; see also [Väi82]. For the case at hand, i.e., for bounded turning arcs, it suffices to find subarcs that have equal diameter. We give the following elementary proof for this problem.

Lemma 2.2. Let $A$ be a metric Jordan arc and $N \ge 2$ an integer. Then we can divide $A$ into $N$ subarcs of equal diameter.

Proof. We may assume that $A$ is the unit interval $[0,1]$ equipped with some metric $d$. We claim that there are points $0 = s_0 < s_1 < \dots < s_{N-1} < s_N = 1$ such that
$$\operatorname{diam}[s_0, s_1] = \operatorname{diam}[s_1, s_2] = \dots = \operatorname{diam}[s_{N-1}, s_N],$$
where $\operatorname{diam}$ denotes diameter with respect to the metric $d$. When $N = 2$ this follows by applying the intermediate value theorem to the function $s \mapsto \operatorname{diam}[0,s] - \operatorname{diam}[s,1]$. According to Lemma 2.1 (3), we may measure the diameter with respect to the diameter distance. Thus, using Lemma 2.1 (4), we may assume that $A$ is 1-bounded turning, i.e., that for any $[s,t] \subset [0,1]$
$$\operatorname{diam}[s,t] = d(s,t). \tag{2.2}$$
Next we modify $d$ to get a metric $d_\epsilon$ that is strictly increasing in the sense that
$$[s,t] \subsetneq [s',t'] \implies \operatorname{diam}_\epsilon[s,t] < \operatorname{diam}_\epsilon[s',t']. \tag{2.3}$$
The crucial point here is the strict inequality, which need not hold in general. To this end, fix $\epsilon > 0$ and for all $s, t \in [0,1]$ set
$$d_\epsilon(s,t) := d(s,t) + \epsilon\,|s - t|.$$
Then from (2.2) it follows that
$$\operatorname{diam}_\epsilon[s,t] = d(s,t) + \epsilon\,(t - s),$$
where $\operatorname{diam}_\epsilon$ denotes diameter with respect to $d_\epsilon$. This immediately implies (2.3). We now show that $[0,1]$ can be divided into $N$ subintervals of equal $d_\epsilon$-diameter. Consider the compact set $S := \{\,s = (s_1, \dots, s_{N-1}) \mid 0 \le s_1 \le \dots \le s_{N-1} \le 1\,\}$, on which the function $\varphi$, measuring the deviation of the subdivision diameters from one another, assumes a minimum. If this minimum is zero, we are done. Otherwise, there are adjacent intervals of different $d_\epsilon$-diameter. Applying this procedure to all subintervals of maximal $d_\epsilon$-diameter we obtain a strictly smaller minimum for the function $\varphi$, which is impossible. Thus the minimum must be zero, and so we can subdivide $[0,1]$ into $N$ subintervals of equal $d_\epsilon$-diameter. Consider now a sequence $\epsilon_n \searrow 0$ as $n \to \infty$.
Let $s^n_1 < \dots < s^n_{N-1}$ be the points that divide $[0,1]$ into $N$ subintervals of equal diameter with respect to $d_{\epsilon_n}$. We can assume that for all $1 \le j < N$, the points $s^n_j$ converge to $s_j$ as $n \to \infty$. It follows that for all $1 \le i, j \le N$,
$$\operatorname{diam}[s_{i-1}, s_i] = \operatorname{diam}[s_{j-1}, s_j].$$
The previous lemma is also true for metric Jordan curves $\Gamma$. In this case we are free to choose any point in $\Gamma$ to be an endpoint of one of the subarcs.

Dividing $\Gamma$

Consider a 1-bounded turning metric Jordan curve $\Gamma$. We fix a point $a_0 \in \Gamma$, and an orientation of $\Gamma$. For each $n \in \mathbb{N}$ we will divide $\Gamma$ into arcs $\Gamma^n_1, \dots, \Gamma^n_{N_n}$, labeled consecutively on $\Gamma$, such that $a_0$ is the common endpoint of $\Gamma^n_1, \Gamma^n_{N_n}$. The set of these arcs is denoted by $\boldsymbol{\Gamma}^n$. Here and in the following the upper index $n$ will denote the order of the subdivision. In particular $N_1, N_2, \dots, N_n, \dots$ will be some (increasing) sequence of positive integers, not a geometric sequence.

Lemma 3.1. There are divisions $\boldsymbol{\Gamma}^n$ of $\Gamma$ as above with the following properties. (1) $\boldsymbol{\Gamma}^{n+1}$ is a subdivision of $\boldsymbol{\Gamma}^n$. This means that every $\Gamma^{n+1} \in \boldsymbol{\Gamma}^{n+1}$ is contained in a (unique) $\Gamma^n \in \boldsymbol{\Gamma}^n$. (2) The diameters of the arcs of the $n$-th subdivision are comparable. (3) The diameters of the $n$-th and the $(n+1)$-th subdivision are comparable; in particular
$$\operatorname{diam} \Gamma^{n+1} \le \tfrac{1}{4} \operatorname{diam} \Gamma^n$$
for all $\Gamma^{n+1} \in \boldsymbol{\Gamma}^{n+1}$ and $\Gamma^n \in \boldsymbol{\Gamma}^n$ with $\Gamma^{n+1} \subset \Gamma^n$. The last property implies that each arc $\Gamma^n \in \boldsymbol{\Gamma}^n$ is subdivided into at least four arcs $\Gamma^{n+1} \in \boldsymbol{\Gamma}^{n+1}$.

Before we construct these divisions of $\Gamma$, i.e., prove the previous lemma, we need some preparation.

Lemma 3.2. Let $A$ be a 1-bounded turning arc, and let $0 < \delta \le \operatorname{diam} A$. For each $n$ we divide $A$ into $n$ arcs $A_1, \dots, A_n$ of equal diameter (see Lemma 2.2). Let $n$ be the smallest integer such that $\operatorname{diam} A_1 = \operatorname{diam} A_2 = \dots = \operatorname{diam} A_n \le \delta$. Then $\operatorname{diam} A_j \ge \delta/2$ for all $j = 1, \dots, n$.

Proof. Let $n$ be as in the statement. If $n = 1$, then $\delta = \operatorname{diam} A$, and there is nothing to prove. Assume now that $n \ge 2$. Assume that the statement is false. Then the subarcs of equal diameter $A_1, \dots, A_n$ have common diameter $\operatorname{diam} A_j < \delta/2$.

Claim. Suppose $A$ is subdivided into $k$ subarcs $A'_1, \dots, A'_k$ of equal diameter greater than $\delta$. Then $2k + 1 \le n$.

Assuming the $A_i$ and the $A'_j$ are ordered in the same order along $A$, we see that one needs at least $A_1, A_2, A_3$ to cover $A'_1$. Similarly, at least the first five arcs $A_1, \dots, A_5$ are needed to cover $A'_1 \cup A'_2$. Inducting over the arcs $A'_1, \dots, A'_k$ proves the claim. We obtain a contradiction when we set $k = n - 1$. Thus the arcs $\Gamma^n_1, \dots, \Gamma^n_{N_n}$ have been constructed for all $n$.

Dividing the unit circle

For each $n \in \mathbb{N}$ we divide the unit circle $S^1 = [0,1]/\{0 \sim 1\}$ into intervals $I^n_1, \dots, I^n_{N_n}$, labeled consecutively on $S^1$. The common endpoint of $I^n_1$ and $I^n_{N_n}$ is $0$. The set of these intervals is denoted by $\mathbf{I}^n$.

Lemma 4.1. There are divisions $\mathbf{I}^n$ of the unit circle $S^1$ as above satisfying the following. (1) $\mathbf{I}^{n+1}$ is a subdivision of $\mathbf{I}^n$. This means that every $I^{n+1} \in \mathbf{I}^{n+1}$ is contained in a (unique) interval $I^n \in \mathbf{I}^n$.

Two adjacent intervals $I, I' \in \mathbf{I}^n$ are called neighbors (i.e., $I = I^n_j$, $I' = I^n_{j+1}$). Note that neighbors are always elements of the same subdivision $\mathbf{I}^n$. To simplify the discussion we assume that $|I| = 1$. For the general case, if we write in the following "length of a subinterval is $1/4$", it has to be replaced by "length of a subinterval is $1/4 \cdot |I|$" and so on.

Case 1. $N$ is even. Starting from the left endpoint of $I$, we divide the left half of $I$ into intervals of length $1/4, 1/8, \dots$
$\dots, 2^{-N/2}$ (times the length of $I$). There is one remaining interval of length $2^{-N/2}$, which is the last interval of the left half of $I$. The right half of the interval is divided in a symmetric fashion, meaning starting from the right endpoint, we divide the right half into intervals of length $1/4, 1/8, \dots, 2^{-N/2+1}, 2^{-N/2}, 2^{-N/2}$. See the bottom of Figure 1. This finishes the division of $I$, thus of all $I^n_i$, into intervals. Thus all $I^n_j$ have been constructed for all $n \in \mathbb{N}$. It is clear that they satisfy the properties of Lemma 4.1. In Case 1 there are two subintervals of $I$ containing the midpoint of $I$; in Case 2 there is a single subinterval of $I$. Such a subinterval is called a middle subinterval of $I$.

The weak quasisymmetry

Let $s^n_0, \dots, s^n_{N_n - 1}$ be the endpoints of the intervals $I^n_j$, ordered increasingly on $S^1 = [0,1]/\{0 \sim 1\}$, with $s^n_0 = 0$ for all $n \in \mathbb{N}$. Let $a^n_0, \dots, a^n_{N_n - 1}$ be the endpoints of the arcs $\Gamma^n_j$. Then we define $\phi(s^n_j) = a^n_j$. From Lemma 3.1 (1) and Lemma 4.1 (4) it follows that $\phi$ is well defined, i.e., if $s^n_i = s^m_j$ then $\phi(s^n_i) = a^n_i = a^m_j = \phi(s^m_j)$. We show uniform continuity of $\phi$ on the set $\mathbf{s} = \{s^n_j \mid n \in \mathbb{N},\ j = 0, \dots, N_n - 1\}$. Let $\delta_n := \min_j |I^n_j|$. If $\lambda(s,t) \le \delta_n/2$ for two points $s, t \in \mathbf{s}$ (recall from (1.5) that $\lambda$ is the metric on $S^1$), then $s, t$ are contained in adjacent intervals $I^n_j, I^n_{j+1}$. Thus $\phi(s), \phi(t)$ are contained in adjacent arcs $\Gamma^n_j, \Gamma^n_{j+1}$. Thus
$$|\phi(s) - \phi(t)| \le \operatorname{diam} \Gamma^n_j + \operatorname{diam} \Gamma^n_{j+1} \le 2 \cdot 4^{-n} \operatorname{diam} \Gamma$$
by Lemma 3.1 (3), showing uniform continuity of $\phi$ on $\mathbf{s}$. Since this set is dense in $S^1$, $\phi$ extends continuously to $S^1$. The surjectivity is clear, since the set $\{a^n_j \mid n \in \mathbb{N},\ j = 0, \dots, N_n - 1\}$ is dense in $\Gamma$. Injectivity follows from the fact that disjoint sets $I^n_i, I^n_j$ are mapped to disjoint arcs $\Gamma^n_i, \Gamma^n_j$. Thus $\phi \colon S^1 \to \Gamma$ is a homeomorphism.

Estimating intervals

Given an interval $[x,y] \subset S^1$ we define in (6.1) the maximal length of a subdivision interval contained in it; the maximum is taken over $n \in \mathbb{N}$ and all intervals $I^n_j \in \mathbf{I}^n$ as defined in Section 4. Furthermore, if the maximum in equation (6.1) is attained for an interval $I = I^n_j \in \mathbf{I}^n$, then there are two intersecting (possibly identical) intervals $\hat I, \hat J \in \mathbf{I}^{n-1}$ such that $I \subset [x,y] \subset \hat I \cup \hat J$.

Case 1. $|I| = |\hat I|/4$. This can happen in three instances: either $I$ is the left- or rightmost interval in $\hat I$ (i.e., $I, \hat I$ share a boundary point); or $N$ is equal to 4 or 5, and $I$ contains the midpoint of $\hat I$.

Case 2. $N \ge 6$ is even, and $I = I^n_j$ is a middle subinterval of $\hat I$ (i.e., contains the midpoint of $\hat I$). Then either both $I^n_{j-2}, I^n_{j+3}$ or both $I^n_{j-3}, I^n_{j+2}$ have diameter strictly bigger than $|I|$. Note that the total length of these intervals is $8|I|$. This finishes the claim in this case.

Case 4. Remaining case. One of the neighbors of $I = I^n_j$, without loss of generality the left neighbor $I^n_{j-1}$, has twice the length of $I$. Furthermore, there is a subinterval $I^n_{j+k} \in \mathbf{I}^n$ of $\hat I$ that has the same length as $I$. It is symmetric to $I$ with respect to the midpoint of $\hat I$. Then $I^n_{j-1}, I^n_{j+k+1}$ have twice the length of $I$, thus are not contained in $[x,y]$. Thus $[x,y] \subset I^n_{j-1} \cup I^n_j \cup \dots \cup I^n_{j+k} \cup I^n_{j+k+1}$. The total length of the right-hand side is $8|I|$, finishing the claim. Note that in Case 2 to Case 4 the subintervals that cover $[x,y]$ are all contained in the parent $\hat I$; we then set $\hat J := \hat I$.

Estimating order

Consider now two adjacent intervals (in $S^1$) of the same length, i.e., $[x-t, x], [x, x+t]$ for some $x \in S^1$ and $0 < t \le 1/2$.
Consider the largest subdivision intervals contained in $[x-t, x]$ and $[x, x+t]$, meaning we consider intervals $J^m \in \mathbf{I}^m$, $I^n \in \mathbf{I}^n$ of maximal length such that $J^m \subset [x-t, x]$ and $I^n \subset [x, x+t]$.

Claim. If $I, I' \in \mathbf{I}^n$ satisfy $|I'| \ge 2^{i+1} |I|$, then $\operatorname{dist}(I, I') \ge 2^i |I|$. This is clear, since the interval between $I, I'$ has to contain one of size $2^i |I|$ by Lemma 4.1 (2).

From Lemma 6.1 it follows that $|[x-t, x+t]| \le 24 |I^n|$. Thus it follows from the previous claim that $|J^n| / |I^n| \le 2^5$. Indeed $|J^n| / |I^n| \ge 2^6$ implies by the claim that $\operatorname{dist}(J^n, I^n) \ge 2^5 |I^n| = 32 |I^n|$, which is impossible. Thus by Lemma 6.1 we obtain a contradiction if we choose $k_0$ such that $4^{-k_0} 2^5 < 1/12$, i.e., $k_0 \ge 5$. This finishes the proof.

Proof of the Theorem

After these preparations, we are ready to prove the main theorem.

Proof of Theorem 1.1. Recall from Section 2.1 that it is enough to prove the theorem in the case when $\Gamma$ is 1-bounded turning. This means that for any two points $x, y \in \Gamma$, the arc of smaller diameter $\Gamma[x,y] \subset \Gamma$ between $x, y$ satisfies $\operatorname{diam} \Gamma[x,y] = |x - y|$. This finishes the proof.

Concluding remarks

It is natural to ask how small the involved constants can be chosen. In particular, how small can the constant $H \ge 1$ of the weak-quasisymmetric parametrization $\phi \colon S^1 \to \Gamma$ for a given $C$-bounded turning circle be chosen? Recall from (1.4) that the image of the unit circle by an $H$-weak-quasisymmetry is $C$-bounded turning, where $C = \min\{2H, H^2\}$. Thus it is natural to ask if any $C$-bounded turning circle admits an $H$-weak-quasisymmetric parametrization, where $H = \max\{C/2, \sqrt{C}\}$. As a starting point one may ask if any 1-bounded turning circle admits a 1-weak-quasisymmetric parametrization.
Development and validation of women's environmental health scales in Korea: severity, susceptibility, response efficacy, self-efficacy, benefit, barrier, personal health behavior, and community health behavior scales

Purpose This study aimed to develop the following scales on women's environmental health and to examine their validity and reliability: severity, susceptibility, response efficacy, self-efficacy, benefit, barrier, personal health behavior, and community health behavior scales. Methods The item pool was generated based on related scales, a wide literature review, and in-depth interviews on women's environmental health according to the revised Rogers' protection motivation theory model. Content validity was verified by three nursing professionals. Exploratory factor analysis, convergent validity, and internal consistency reliability were examined. Results The scales included 10 items on severity, 11 on susceptibility, 10 on response efficacy, 14 on self-efficacy, 8 on benefits, 10 on barriers, 17 on personal health behavior, and 16 on community health behavior. Convergent validity with the environmental behavior scale for female adolescents was supported. The Cronbach's α values for internal consistency were good for all scales: severity, .84; susceptibility, .92; response efficacy, .88; self-efficacy, .90; benefits, .91; barriers, .85; personal health behavior, .90; and community health behavior, .91. Conclusion The evaluation of the psychometric properties shows that these scales are valid and reliable measures of women's environmental health awareness and behaviors. These scales may be helpful for assessing women's environmental health behaviors, thereby contributing to efforts to promote environmental health.

Introduction

Although evidence is accumulating that women's environmental health problems are caused by environmental pollution [1][2][3][4][5], few studies have investigated health behaviors that promote women's environmental health [6]. Liu et al. [5] measured levels of exposure to environmental pollutants but did not address lifestyle changes, which is a limitation of that study. Therefore, it is necessary to investigate various aspects of health behavior. Women's environmental health problems may affect the reproductive organs from birth to old age, reflecting the need to protect women's health in advance from contaminants to which women are repeatedly exposed during the course of their life [7]. A useful tool to measure environmental health behavior is required to inform efforts to improve women's reproductive health. However, the existing tools for measuring women's environmental health behavior have limitations, such as being restricted to specific behaviors (i.e., not including awareness) [8], having adolescent participants [9], and dealing with unrelated health behaviors [10]. This study developed a tool based on Rogers' [11] revised protection motivation theory to explain the mechanism of environmental health awareness, which affects environmental health behavior. When humans have fears regarding environmental health, they adopt protective behavior through threat appraisal and coping appraisal. Threat appraisal involves subtracting the rewards of maladaptive responses from perceived severity and perceived vulnerability, and coping appraisal involves subtracting the costs of adaptive responses from response efficacy and self-efficacy [11].
Severity is defined as one's evaluation of fear of a severe negative outcome, vulnerability as perceptions regarding the mortality or morbidity of the disease, response efficacy as the effect that a behavior would have on disease prevention, and self-efficacy as an evaluation of the individual's ability to engage in a certain behavior [12]. Rogers [11] extended this theoretical framework to emphasize the rewards of maladaptive responses and adaptive responses. The rewards of maladaptive responses are defined as the benefits of continuing a risky behavior and the costs of adaptive responses as the losses induced by maintaining a protective behavior [12].

Summary statement

• What is already known about this topic? According to the revised protection motivation theory, environmental fear stimulates internal cognitive processes including an assessment of severity, vulnerability, response efficacy, self-efficacy, the rewards of maladaptive responses, and the costs of adaptive responses for health behavior.

• What this paper adds: Severity, susceptibility, response efficacy, self-efficacy, benefit, barrier, personal health behavior, and community health behavior scales were developed and showed valid psychometric properties; these scales will help measure not only threat appraisal and coping appraisal but also women's environmental health behavior in light of personal and communal aspects.

• Implications for practice, education, and/or policy: Women's environmental health research and education may improve by utilizing the developed scales to test factors affecting environmental behavior and improving environmental health.

Fear of environmental risks stimulates women's awareness of the severity of environmental harm, their susceptibility to environmental diseases, the response efficacy of preventive behavior, their self-efficacy, the rewards of continuing environmental behavior, and the costs of maladaptive behaviors that are dismissive of environmental health. Rogers' revised theory has been verified in various fields such as chronic pediatric diseases [13] and sexually transmitted infections [14]. Rogers' theory can also be adopted in environmental health-related fields because it provides insights into the inner decision-making mechanism for coping with threats [12]. Therefore, the revised protection motivation theory was applied in this study to measure women's internal perceptions and actions regarding environmental threats (Figure 1). This study aimed to develop eight scales to measure environmental health awareness and health behavior (severity, susceptibility, response efficacy, self-efficacy, benefits, barriers, personal health behavior, and community health behavior) by applying the method developed by DeVellis [15] for women residing in local communities, according to Rogers' revised protection motivation theory [11]. It is hoped that these scales will be used to measure the effectiveness of interventions for women's environmental health awareness and health behavior. The specific purposes were as follows: (1) to develop severity, susceptibility, response efficacy, self-efficacy, benefit, barrier, personal health behavior, and community health behavior scales for women's environmental health; and (2) to confirm the validity and reliability of the measurement tools.

Methods

Ethics statement: This study was reviewed by the Institutional Review Board of Kongju National University (KNU-IRB-2020-34) and adhered to the Declaration of Helsinki. Informed consent was obtained from participants.
Study design
This is a methodological study to develop and validate the following eight scales for women’s environmental health in Korea: severity, susceptibility, response efficacy, self-efficacy, benefit, barrier, personal health behavior, and community health behavior.

Participants
The inclusion criteria were Korean women over the age of 19 years who lived in local communities, could speak, write, and read Korean, and agreed with the purpose and process of the study. The exclusion criteria were women currently hospitalized for health problems and those who had difficulty understanding the purpose and content of the study.

Development of the preliminary items
To develop the preliminary items, existing tools, a review of the related literature, and interview data from 10 women in the local community were analyzed. The literature review was done from September 9 to 13, 2020 using PubMed, CINAHL, Education Resources Information Center (ERIC), and the Research Information Sharing Service of Korea. For each database, an advanced search based on the term "Environment" was performed. The searches of the four databases yielded 27, 10, one, and three results, respectively. Four articles were also retrieved through a manual search. Finally, three tools were used [8][9][10]. The interviews were held from September 26 to October 29, 2020. The researcher interviewed two women face-to-face and eight women by phone. The interviews took an average of 40 minutes per person, and two interviews were conducted for each participant. Participants were recruited through convenience sampling, and the face-to-face interviews were conducted at the office of the health center. The women’s ages ranged from 23 to 43 years, with an average of 37.4 years. The interviewees comprised four homemakers, two freelancers, one bank clerk, one researcher, and two educators. Nine of the women had no health problems, while one had diabetes. The interviews were guided by a semi-structured questionnaire, with prompts such as "Please tell me about environmental pollutants that pose a threat to health." In total, 106 meaningful statements were derived, listed, and allocated to the scales. In addition, 101 items from the preceding tools [8][9][10] were modified according to Rogers’ revised protection motivation theory [16]. The conceptual framework of the tool was organized into eight scales: severity, susceptibility (adapted from vulnerability), response efficacy, self-efficacy, benefits (adapted from rewards of maladaptive responses), barriers (adapted from costs of adaptive responses), personal health behavior, and community health behavior (Figure 1). A Likert scale was used, with responses from 1 (not at all) to 5 (very much). In the item extraction process, the researchers extracted items independently and then decided through a meeting whether to include items on which they disagreed; items with disagreements were included in the request for expert review of content validity. Fifty overlapping items were removed from the interview data, and the final 157 preliminary items were developed. The content validity of the preliminary items was verified by the head of a women’s hospital, a professor of women’s health nursing, and a maternal and child health expert at a public health center. Their average age was 52.2 years, and their average professional experience was 26.7 years.
A request was made via e-mail for them to review content validity using a 5-point Likert scale from 1 (not very valid) to 5 (very valid). The item-level content validity index (I-CVI) and the average scale-level content validity index (S-CVI/Ave) were tested. For the I-CVI, the proportion of 'valid' and 'very valid' ratings for each item was required to be .78 or more, and for the S-CVI/Ave, the average of the I-CVI values was required to be .90 or more [16].

Preliminary survey
The preliminary survey was done from November 19 to 23, 2020. Two women who met the inclusion criteria (22 and 61 years old) read the questions one by one to confirm that they understood the meaning of each item. The degree of comprehension was rated from 1 ('I do not understand at all') to 5 ('I understand very well'). The average comprehension score was 4.5 points, and no item was rated as difficult to understand. The average time required to respond was 15 minutes.

Measurement tools for convergent validity
The 43-item Environmental Health Perception for Female Adolescents (EHP-FA) tool, developed to evaluate the environmental health awareness of female adolescents aged 18 to 22 years [9], was used for convergent validity. This tool comprises four subscales according to Rogers’ original theory [17]: sensitivity (17 items), vulnerability (8), response efficacy (9), and self-efficacy (9). At the time of development, Cronbach’s α was .94, .95, .88, and .90 for these subscales; in this study, the corresponding values were .85, .76, .87, and .86, respectively. The 32-item Environmental Health Behavior for Female Adolescents (EHB-FA) tool [9] was also used. The EHB-FA was developed to evaluate the environmental health behaviors of female adolescents aged 18 to 22 years and has two subscales: personal health behavior (19 items) and community health behavior (13). Permission for use was obtained. At the time of development, Cronbach’s α was .94 and .88 for the two subscales, respectively; in this study, the values were .92 and .89. The intention-related measurement used for the validity test was rated on a 7-point Likert scale, with responses ranging from 1 ('I am not familiar with environmental health behavior') to 7 ('I regularly practice environmental health behavior') [8].

Data collection
From November 27 to December 3, 2020, survey data were collected by two researchers and three research assistants at schools, welfare centers, academies, libraries, public health centers, and homes in Daejeon, Gongju, and Sejong, Korea. The research assistants met with potential participants, explained the research purpose, and received signed informed consent forms. Based on the sample size of 200 to 400 persons recommended for exploratory factor analysis to verify construct validity [18], the required number of participants was set at 200. Considering a possible drop-out rate of 10%, the questionnaire was distributed to 220 women. After excluding 10 inappropriate responses, data from 210 (95.5%) women were analyzed.

Data analysis
The collected data were analyzed using SPSS ver. 25.0 (IBM Corp., Armonk, NY, USA). Exploratory factor analysis was performed using principal axis factoring with promax rotation, given the expected correlations between factors. To confirm the appropriateness of the sample, the Kaiser-Meyer-Olkin (KMO) and Bartlett sphericity tests were performed. The criterion for item selection in factor extraction was an eigenvalue greater than 1 and a communality of .40 or more for each item [19]. Subscale intercorrelations and the item-total correlation (ITC) were examined [20]. Pearson’s correlation coefficients with the EHP-FA and EHB-FA were used for the convergent validity analysis. A Cronbach’s α of .70 or higher was considered to indicate acceptable reliability [21].
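The item-screening and analysis criteria described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors’ code: the `ratings` and `items` arrays are hypothetical placeholders, and the factor_analyzer package is assumed as one common implementation of the KMO test, Bartlett’s sphericity test, and principal axis factoring with promax rotation.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

# --- Content validity (I-CVI and S-CVI/Ave) ---
# ratings: experts x items on a 5-point scale; 4-5 counts as valid/very valid.
ratings = np.array([[5, 4, 2], [4, 5, 3], [5, 5, 2]])   # hypothetical data
i_cvi = (ratings >= 4).mean(axis=0)    # proportion of experts rating item valid
s_cvi_ave = i_cvi.mean()               # scale-level average of I-CVI values
keep = i_cvi >= .78                    # retention threshold used in the study [16]

# --- Sampling adequacy and exploratory factor analysis ---
items = pd.DataFrame(np.random.randint(1, 6, size=(210, 10)))  # placeholder responses
chi2, p = calculate_bartlett_sphericity(items)   # Bartlett's sphericity test
_, kmo_total = calculate_kmo(items)              # overall KMO (around .80 or higher desired)

fa = FactorAnalyzer(n_factors=3, method='principal', rotation='promax')
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()            # retain factors with eigenvalue > 1
communalities = fa.get_communalities()           # drop items with communality < .40

# --- Cronbach's alpha for internal consistency (threshold .70) ---
def cronbach_alpha(x: pd.DataFrame) -> float:
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()       # sum of item variances
    total_var = x.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

print(f"KMO = {kmo_total:.2f}, alpha = {cronbach_alpha(items):.2f}")
```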
Results

Demographic characteristics of the participants
The average age of the participants was 36.14 years (standard deviation, 13.76; range, 19-70 years), and 54.3% of the participants had a spouse. The proportion of high school graduates was 46.7%, and 51.0% did not have a job. The most common range of monthly household income was between 4.5 million Korean won (approximately 4,000 US dollars) and 6 million Korean won (5,300 US dollars), reported by 30.5% of the participants. Some participants (35.7%) had previously been treated for a disease, and 22.9% had a disease at the time of the survey (Table 1).

Content validity
The I-CVI was .80-1.00 for 152 of the 157 items, and five items had an I-CVI below .78. Upon review by the researchers, the five items with an I-CVI below .78 in the expert group review were deleted. The deleted items were 'Ask for health information,' 'I have a habit of exercising,' 'I decide on my own health behavior,' 'Move to a place with less pollution,' and 'Endometriosis may occur.' The S-CVI/Ave of the final 152 items was .92. The researchers held a meeting to ensure that the final items conveyed the intended meaning. The final 152 preliminary items included 24 on severity, 12 on susceptibility, 13 on response efficacy, 14 on self-efficacy, 10 on benefits, 18 on barriers, 33 on personal behavior, and 18 on community behavior.

Factor analysis
Exploratory factor analysis was performed by applying principal axis factoring and promax rotation to each of the eight conceptual scales of the 152 preliminary items selected according to their content validity.

Severity of environmental health risks
The KMO value was .83, exceeding the standard value of .80, and Bartlett’s sphericity test showed an approximate chi-square value of 743.82 (degrees of freedom [df] = 45, p < .001). In the factor analysis, items were retained if they had an eigenvalue of 1 or more, a communality of .40 or more, a subscale intercorrelation coefficient between factors of .30-.80, and an ITC of .40 or more, with each factor required to include three or more items. Four items were eliminated from the severity scale. As a result, this scale consisted of four items for the first factor ('chemicals'), three items for the second factor ('electromagnetic waves'), and three items for the third factor ('harmful food'). The correlations between the three factors ranged from .43 to .48 (p < .001), and the correlations between the scale and its subscales were .88, .79, and .76, respectively (p < .001). The ITCs ranged from .58 to .74 (p < .001), and the explained variance was 65.4% (Table 2, Supplementary Table 1).

Response efficacy related to environmental health behaviors
Suitable values were found for the KMO test (.88) and Bartlett’s chi-square value (896.72; df = 45, p < .001). After factor analysis of the 13 items, seven items for the first factor ('avoiding toxicants') and three items for the second factor ('pursuit of health') were selected.
Three items, related to 'vegetable consumption,' 'migrating to a low-pollution area,' and 'inquiry to medical staff,' were eliminated because they had ITCs of less than .40. The correlation between the final factors was .59 (p < .001), and the correlations between the scale and its subscales were .97 and .86, respectively (p < .001). The ITCs ranged from .50 to .73 (p < .001), and the explained variance was 60.3% (Table 2, Supplementary Table 1).

Self-efficacy related to environmental health behaviors
The KMO value was .87, and Bartlett’s chi-square value was 874.90 (df = 91, p < .001). In the factor analysis of the 14 items, all items were retained: five items for the first factor ('preventive efficacy'), five items for the second factor ('judgment efficacy'), and four items for the third factor ('control efficacy'). The correlations between the factors ranged from .40 to .48 (p < .001), and the correlations between the scale and its subscales were .84, .86, and .73, respectively (p < .001). The ITCs ranged from .48 to .72 (p < .001), and the explained variance was 67.2% (Table 2, Supplementary Table 1).

Benefits of environmental health behaviors
This scale was found to be suitable, with a KMO value of .88 and a Bartlett’s chi-square value of 1,074.58 (df = 595, p < .001). After the factor analysis of the 10 items, eight items were selected: five items for the first factor ('psychological benefits') and three items for the second factor ('physical benefits'). The correlation between the final factors was .50 (p < .001), and the correlations between the scale and its subscales were .82 and .92, respectively (p < .001). The ITCs ranged from .47 to .80 (p < .001), and the explained variance was 75.8% (Table 2, Supplementary Table 1).

Barriers to environmental health behaviors
The barriers scale was found to be suitable, with a KMO value of .85 and a Bartlett’s chi-square value of 764.68 (df = 45, p < .001). After the factor analysis of the 18 items, five items for the first factor ('negative atmosphere') and five items for the second factor ('burden') were selected. The correlation between the factors was .45 (p < .001), and the correlations between the scale and its subscales were .87 and .84, respectively (p < .001). The ITCs ranged from .40 to .70 (p < .001), and the explained variance was 58.5% (Table 2, Supplementary Table 1).

Personal health behavior
Suitable results were found for the KMO value (.88) and Bartlett’s chi-square value (2,154.69; df = 153, p < .001). After the factor analysis of the 33 items, 17 items were selected: seven items for the first factor ('lifestyle'), four items for the second factor ('personal goods'), three items for the third factor ('food'), and three items for the fourth factor ('dust'). The correlations between the factors ranged from .40 to .48 (p < .001), and the correlations between the scale and its subscales were .87, .77, .82, and .61, respectively (p < .001). The ITCs ranged from .48 to .76 (p < .001), and the explained variance was 67.7% (Table 2, Supplementary Table 1).

Community health behavior
For the community health behavior scale, the KMO value was .87 and Bartlett’s chi-square value was 1,788.85 (df = 120, p < .001). After the factor analysis of the 18 items, 16 items were selected: five items for the first factor ('reduction'), five items for the second factor ('involvement'), three items for the third factor ('recycling'), and three items for the fourth factor ('reuse').
The correlations between the factors ranged from .40 to .55 (p < .001), and the correlations between the scale and its subscales were .85, .84, .68, and .77, respectively (p < .001). The ITCs ranged from .55 to .77 (p < .001), and the explained variance was 68.8% (Table 2, Supplementary Table 1). The final eight scales included 10 items on severity, 11 on susceptibility, 10 on response efficacy, 14 on self-efficacy, 8 on benefits, 10 on barriers, 17 on personal health behavior, and 16 on community health behavior (Table 2).

Reliability
Cronbach’s α (95% confidence interval [CI]), as a measure of internal consistency, was good for all scales and subscales (Supplementary Table 1).

Discussion
The scales developed in this study were based on the revised protection motivation theory [11]. This theory explains changes in health behavior by adding the concepts of self-efficacy, rewards of maladaptive responses, and costs of adaptive responses to the original theoretical form [17]. When a person feels, based on threat appraisal and coping appraisal, that an action’s reward exceeds its cost, they may form an intention to act and change their behavior [11]. In this study, all concepts of the revised protection motivation theory were substituted with corresponding environmental health concepts, and the psychological rationale for the composition of the tool was confirmed [15]. Based on a review of existing environmental health behavior measurement tools [11,12,15], a literature review, interviews, and an expert content validity test, the I-CVI and S-CVI/Ave values were above the corresponding standards, thereby establishing content validity [16]. Construct validity was confirmed through empirical tests of the pattern matrix, the structural matrix, the correlation coefficients between all items and each factor, and the correlation coefficients between factors [19]. This study also attempted to grasp the meaning of the factors underlying each scale. The severity scale comprised three subscales ('chemicals,' 'electromagnetic waves,' and 'harmful food'). A difference between this scale and existing tools is that the severity of electromagnetic waves emerged as a separate factor. The items related to microplastics and light pollution reflect recent environmental pollution problems. Severity is an important concept in environmental health in the United States, where a previous study assessed whether people had been exposed to environmental toxicity in clinical settings [22]. Furthermore, the present scale can be administered easily using straightforward questions. The susceptibility scale included two subscales ('reproductive health problems' and 'general health problems'), indicating that women were aware of their own reproductive health problems as well as the health problems of the fetus and their children. As the relationship between female reproductive health problems and environmental pollutants has recently been established [2,23], this scale evidently reflects the environmental health perceptions of women residing in local communities. Response efficacy contained two subscales ('avoiding toxicants' and 'pursuit of health'). The fact that actions to avoid environmental pollutants had higher explanatory power than actions taken to pursue health is consistent with the precautionary principle used in environmental health [7,24]. Self-efficacy contained three subscales ('preventive efficacy,' 'judgment efficacy,' and 'control efficacy').
This classification does not exist in existing tools. Since self-efficacy is a strongly influential variable within theories of health behavior change [25], the developed scale will be valuable as a measurement tool. Benefits consisted of two subscales ('psychological benefits' and 'physical benefits'). This semantically coincides with the concept of rewards for actions, recently proposed in the revised protection motivation theory [11]. In addition, the benefits scale makes it possible to measure rewards for actions that existing tools cannot measure. Barriers had two subscales ('negative atmosphere' and 'burden'). The classification of items was appropriate in terms of content and semantically coincides with the concept of the costs of action, recently proposed in the revised protection motivation theory [11]. According to theories of health behavior change, people are more sensitive to the barriers to a health behavior than to its benefits [12]. Therefore, the barriers scale should be included in studies using the measurement tools developed here. Personal health behavior consisted of four subscales ('lifestyle,' 'personal goods,' 'food,' and 'dust'). Compared with other tools [8], the personal behavior scale contains health behaviors that are easy for women to practice in daily life; measuring health behaviors with this tool is therefore straightforward, which is advantageous. Community health behavior consisted of four subscales ('reduction,' 'involvement,' 'recycling,' and 'reuse'). This scale reflects the community’s commitment to creating an environment that is not harmful to health by preventing environmental pollution [26]. The measurement tools developed in this study utilized all the constituent factors of the revised protection motivation theory model, and their validity and reliability were tested. The tools reflect a comprehensive array of information on the environmental awareness and behavior of women residing in local communities in Korea, gathered through interviews and surveys; they are therefore distinct from existing tools and can be used to measure women’s environmental health awareness and to strengthen environmental health behavior. A limitation of this study is the difficulty of generalizing the results to women in Korea as a whole or in other countries, because the data were collected from a local community setting in Korea. The convergent validity coefficients may also have been inflated by the lack of a gold-standard tool, since the scales were developed by applying a revised theory to existing tools. Additionally, confirmatory factor analysis was not conducted; we suggest conducting confirmatory factor analysis in further research to test the fit of the theoretical framework. In the exploratory factor analysis, the variance explained for each scale ranged from 58.5% to 75.8%; further efforts to identify the remaining sources of variance are needed. Additional analyses of women from a wider variety of regions are required to address the limited generalizability. It is also worth investigating whether women with environmental health problems score high on these scales. In conclusion, this study developed the following scales for measuring the environmental health of women residing in local communities in Korea: severity, susceptibility, response efficacy, self-efficacy, benefit, barrier, personal health behavior, and community health behavior.
Research on women’s environmental health has attracted increasing attention, not only in Korea but also worldwide. As the scales’ validity and reliability were verified from multiple angles, they are well suited for use in future research on women’s environmental health.
Ammonia desorption from fly ash

The main source of ammonia in fly ash is residual unreacted NH3 from the denitrification process, from either SCR or SNCR systems. The paper discusses the standards for NH3 content in fly ash and presents the most commonly used methods of removing excess ammonia from fly ash. In the next part of the work, the results of laboratory tests on NH3 desorption are presented. Desorption was performed on samples of fly ash taken from the electrostatic precipitator of an operating PC boiler. Removal of NH3 from the ash was carried out in a heating chamber at 130°C and 150°C and detected by an analyser equipped with an NDIR sensor. Additionally, at the temperature of 130°C, the NDIR and analytical methods (in accordance with the BN-75/0541-05 procedure) were compared and the measurement uncertainty of both methods was estimated.

Introduction
The utilization of fly ash for construction purposes, i.e. for the production of concrete, requires that the ash meet certain criteria for physical and chemical properties. Fly ash obtained from boilers equipped with a flue gas denitrification system contains a certain amount of ammonium compounds due to the so-called "slip" of ammonia. Ammonia binds directly to ash as an unreacted substrate of the denitrification reactions. In SCR and SNCR installations operated with a significant NH3/NO molar excess, this leads to a substantial slip of NH3 into the exhaust gas and ash. Depending on the method used, the sources of ammonia are SCR (selective catalytic reduction) and SNCR (selective non-catalytic reduction) systems, in which the reagent is usually NH3 [1,2]. In an alkaline environment (concrete, mortar, etc.), ammonia is released to the atmosphere in molecular form. The presence of ammonium compounds in concrete does not unfavorably affect its properties; however, it is an undoubted trade disadvantage, especially when the concrete is used in closed rooms. Currently, there are no official standards for the ammonia content of fly ash in Poland. Depending on the specification of the test site (including ventilation and air conditioning), it has been found that the smell of ammonia is not perceptible for fly ash containing no more than 100 to 200 mg NH3/kg. It has been assumed that the maximum ammonia content in ash should not exceed 100 mg NH3/kg (U.S.A.); lower limits, down to 50 mg NH3/kg, also appear (Germany). Meanwhile, ash from an SCR plant can contain up to 2,500 mg NH3/kg, which makes it unsuitable for use in the construction industry. Removing ammonia from fly ash would therefore be beneficial from the point of view of both the producer and the recipient. The amount of ammonia and ammonium salts present in fly ash is related to the amount of ammonia in the exhaust gas. According to the literature, up to 80% of the ammonia from SCR is absorbed on fly ash [3]. Ammonia in fly ash occurs mostly in the form of ammonium salts (mainly ammonium sulphate and ammonium bisulfate), with smaller amounts of other salts, such as ammonium chloride. In addition to forming ammonium salts, ammonia can also be adsorbed on the surface of ash particles, especially if the ash contains some unburned carbon. The reaction of carbon with ammonia takes place at 200-400°C and leads to the binding of a functional group on the carbon surface [3]. The studies presented in [4] show that a carbon-containing ash sample is characterised by greater ammonia adsorption than a sample without carbon.
In addition, it can be noticed that the carbon in ash shows decreasing ammonia adsorption with increasing temperature, i.e. at a lower temperature more ammonia passes into the ash and less into the gas phase [4].

Thermal methods
Thermal methods of removing ammonia from ash are characterized by lower investment costs than chemical methods. Typically, these methods are based on desorption of ammonia at 300-450°C [5].

ERC method (Energy Research Center)
In this method, ash from a silo is placed in a fluidized bed reactor with air as the fluidizing medium. During the process, heated air flows continuously through the ash bed. Any ash agglomerates are broken up acoustically, by waves generated by acoustic generators. The method is used for ash with an ammonia concentration of 500-1000 ppm. The release of ammonia starts at 150°C, and the steady process temperature is 343-398°C. The process allows up to 90% of the ammonia to be removed. This technology is not currently used industrially and remains in the testing phase. An unquestionable disadvantage is the need for acoustic systems to break up the ash agglomerates in the reactor. The ERC method has been tested on ash with an ammonia content above 500 ppm, and there are no data on its efficiency at lower concentrations [6].

Carbon burnout
In this method, ash is fed into a fluidized bed reactor where the unburned carbon in the ash is burned out. The process takes place in the reactor at a temperature of about 700°C, with an ash residence time of 45 minutes. During high-temperature combustion, the ammonium compounds decompose. The process yields ash with an ammonia content below 5 ppm. In addition, the carbon contained in the ash is burned off (reducing TOC), which improves its properties for use in construction. This method is used on a large scale in the U.S.A. [7].

STI method
In solutions of ammonium salts with pH above 7, ammonia is released into the gas phase. This is the basis of most chemical methods for removing ammonia from fly ash. Injecting water into the ash increases its pH and thus releases ammonia into the gas phase by neutralizing the acidic residues that form salts with ammonia. This reaction also takes place in the process of producing cement from ash [8]. In the STI method, a small amount of water with alkaline compounds is injected into the ash. The ammonia released from the ash is then catalytically reduced or removed in wet absorbers. The STI process can reduce the mass fraction of ammonia in ash below 100 ppm, which is the accepted detection limit in construction products. The technology involves the use of water, which by itself causes the release of some of the ammonia into the gas phase, but the addition of alkaline compounds increases the efficiency of the process. According to [9], the water addition is about 5% (<20%) and the alkaline compounds about 5% (<10%). In addition, a 0.25-1% solution of CaO or Ca(OH)2 is dosed. The process takes place at a temperature of 15-65.5°C for 15 to 30 minutes.

ASM Technology
The ASM Technology method uses calcium hypochlorite, Ca(ClO)2, as a strong oxidizing agent. The efficiency of this process is up to 95%. Ammonia contained in the ash is oxidized to molecular nitrogen, and chloride ions are released. The degree of ammonia removal depends on the pH of the ash, the time and temperature of the reaction, and the amount of reagent used.
In practice, a 1.0-1.5-fold molar excess of calcium hypochlorite to ammonia is used [10].

Ozonation
The laboratory tests reported in [11] present results for ammonia removal from ash by ozone oxidation. The ash may have catalytic properties in the oxidation of ammonia with ozone at ambient temperature (23°C) [12]. In this process, air with high humidity is supplied to the ash in order to keep the ash moisture at a level of 1-5%. Ozone, a mixture of air and ozone, or a mixture of oxygen and ozone is then supplied to the semi-dry ash as the oxidizing agent. Table 1 shows the results of the experiment for an initial NH3 content in ash of 1200 ppm. In each case, ozone reduced the ammonia content, and the best result was obtained for a 2% ozone mixture at 150°C.

Catalytic methods
These methods are based on the selective catalytic oxidation (SCO) of ammonia according to reaction (1):

4 NH3 + 3 O2 → 2 N2 + 6 H2O (1)

Different catalysts can be used in this process, for example CuO. The best efficiency obtained is 60% conversion of ammonia to N2 with nitrogen selectivity over 90%. The process temperature is 300-450°C. The advantage of catalytic methods of ammonia removal is the ability to conduct the process in a stream of dusty exhaust gas, without the need to separate the ash. The disadvantage is the necessity of installing the catalyst and the possibility of its deactivation ("poisoning") by SO2 or heavy metals contained in the exhaust gas. Studies carried out so far assume the use of water-vapour-resistant catalysts [12][13].

Wet methods
Studies on the solubility of ammonium compounds show that after 10 minutes of aqueous extraction, about 85% of the ammonia is released from ash into water. In this process, ammonia is removed in a fluidized reactor in contact with high-humidity air, or by rinsing dust-laden exhaust in a reactor with strongly turbulent flow. This allows ash particles down to 0.5 μm to be removed from the flue gas, together with water-soluble gaseous components, including NH3 [14][15][16].

Methodology
For the experiments, fly ash was collected from the electrostatic precipitator of a coal-fired pulverized coal boiler. Because the initial ammonia content of the ash was below 50 mg NH3/kg, the sample was additionally enriched with gaseous NH3 supplied from a cylinder, raising the content to 453 mg NH3/kg. An ash sample of 200 g was placed in a flat crucible in the constant-temperature zone of a closed heating chamber. The air flow through the chamber was set at 2 dm3/min and controlled by a flow regulator, as shown in Figure 1. The tests were conducted at two temperatures: 130°C and 150°C. The NH3 content at the outlet of the heating chamber was measured by two methods. The first used the Siemens U6 gas analyser equipped with an NDIR (non-dispersive infrared) NH3 sensor. This analyser is dedicated to NH3 measurements in two ranges: 0-100 ppm vol. and 0-1000 ppm vol. The gas sample flowing through the analyser must be adequately dedusted (particles dz < 1 μm) and dried. For this purpose, an exhaust gas conditioning system with a Peltier cell can be used, which allows quick condensation of water from the gas and its immediate removal. However, one should keep in mind the possibility of NH3 being absorbed in the removed condensate, which may lower the measured NH3 values.
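A conversion of this kind presumably underlies the released-mass values reported later in Table 3: the analyzer’s ppm-level concentration trace, together with the known air flow, can be integrated to a mass of NH3. The sketch below is illustrative only; the molar volume assumes ideal-gas behavior at roughly ambient sampling conditions, and the concentration trace is a hypothetical placeholder, not the recorded data.

```python
import numpy as np

M_NH3 = 17.03    # g/mol, molar mass of NH3
V_M = 24.055     # dm3/mol, ideal-gas molar volume at ~20 degC, 1 atm (assumption)
FLOW = 2.0       # dm3/min, air flow through the heating chamber

# Hypothetical analyzer trace: time in minutes, NH3 concentration in ppm vol.
t = np.array([0, 5, 10, 20, 30, 60, 120])
c_ppm = np.array([710, 700, 640, 520, 400, 210, 60])

# ppm vol -> mg NH3 per dm3 of gas, then integrate the mass flow over time.
c_mg_per_dm3 = c_ppm * 1e-6 * (M_NH3 / V_M) * 1000   # mg/dm3
mass_flow = c_mg_per_dm3 * FLOW                       # mg/min
released_mg = np.trapz(mass_flow, t)                  # trapezoidal integration

initial_mg = 0.453 * 200    # 453 mg NH3/kg in a 200 g sample = 90.6 mg
print(f"released: {released_mg:.1f} mg "
      f"({100 * released_mg / initial_mg:.0f}% of initial content)")
```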
The second, analytical method followed the BN-75/0541-05 procedure: the outlet gas was bubbled through scrubbers containing 60 cm3 of 0.01 N sulfuric acid solution and 1 cm3 of an alcoholic methyl red solution. The apparatus was set up so that the gas flow path to the scrubbers was as short as possible, and all connections were sealed. After the gas had been bubbled through the scrubbers, the solution from scrubbers 1 and 2 was quantitatively transferred to a flask, and the excess 0.01 N sulfuric acid was titrated with 0.01 N sodium hydroxide solution. The ammonia content of the gas, X, expressed in g NH3/100 m3 of gas, is calculated according to:

X = (a - b) · 0.01713 · 100 / V0 (2)

where a is the volume of the 0.01 N sulfuric acid solution (cm3), b is the volume of the 0.01 N sodium hydroxide solution (cm3), V0 is the volume of gas (cm3), and 0.01713 is the amount of ammonia corresponding to 1 cm3 of 0.01 N sulfuric acid solution.

Measurement uncertainty
An account of measurement uncertainties should be part of every correctly performed experiment, especially when comparing different indirect measurement methods for the same quantity. The determination of measurement uncertainty has a significant impact on formulating correct and reliable measurement conclusions [17]. In the power industry, the determination of measurement uncertainty may also be related to compliance with emission standards [18]. Expanded uncertainty is the approach most widely used for measurements on industrial installations. In the present study, the expanded uncertainty was calculated with a coverage factor k = 2 for the 95% confidence level. The uncertainty of the NH3 measurement by the analytical method comprises all the solution volume measurements, the subjective assessment of the color change of the solution, and the measurement uncertainty of the gas sampling device. For the Siemens Ultramat U6 analyzer, Table 2 presents the parameters influencing the measurement result, as provided by the manufacturer [19].

Experimental results
Table 3 shows the amount of NH3 released from the ash after 10, 20, 30, 60 and 120 minutes of desorption at 130°C and 150°C. The amount of NH3 (in mg) was calculated from the Siemens U6 analyzer measurement and is also presented as a percentage of the initial NH3 content in the ash sample. The NH3 concentration (ppm vol.) as a function of experiment time is shown in Figure 2. A clear effect of temperature on the release rate of NH3 from the ash can be observed. The initial NH3 concentration at 130°C was 710 ppm vol., whereas at 150°C it was 858 ppm vol. Stabilization of desorption was reached at 130°C after 3 hours and 12 minutes, while at 150°C the time was shorter: 2 hours and 37 minutes. Keeping the samples in the heating chamber for longer did not affect the NH3 concentration in the outlet gas. For a more accurate analysis of the ammonia desorption process, the rate of decrease of the NH3 concentration (ppm/min) was calculated; the data are presented in Figure 3. They show that at 150°C the maximum rate of 43 ppm NH3/min was reached after about 3 minutes, whereas at 130°C a maximum of 36 ppm NH3/min was reached after about 4 minutes. At the higher temperature (150°C) the process proceeds more rapidly. The significant drop in NH3 concentration (≥10 ppm/min) at 150°C takes place during the first 36 minutes of the experiment, and at 130°C during the first 31 minutes. In the further part of the study, the analytical method of NH3 measurement (in accordance with the BN-75/0541-05 procedure) was compared with the U6 analyzer.
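For this comparison, both the back-titration result and its expanded uncertainty reduce to a short calculation. The sketch below follows the form of Eq. (2) as reconstructed above (the original typeset equation was not preserved, so the scaling should be checked against BN-75/0541-05 before reuse); the titration volumes and the component standard uncertainties are illustrative placeholders, not values from the study.

```python
from math import sqrt

def ammonia_content(a_cm3: float, b_cm3: float, v0_cm3: float) -> float:
    """Eq. (2): ammonia content X in g NH3/100 m3 of gas, from the acid
    volume a, the back-titrated NaOH volume b, and the gas volume V0."""
    return (a_cm3 - b_cm3) * 0.01713 * 100 / v0_cm3

# Hypothetical titration of one scrubber pair
x = ammonia_content(a_cm3=60.0, b_cm3=41.5, v0_cm3=20.0)

# Expanded uncertainty with coverage factor k = 2 (95% confidence level):
# combine the relative standard uncertainties of the independent inputs
# (burette readings, gas volume meter, endpoint judgment) in quadrature.
u_rel = sqrt(0.03**2 + 0.02**2 + 0.04**2)   # illustrative component values
U_rel = 2 * u_rel                            # expanded relative uncertainty
print(f"X = {x:.2f} g NH3/100 m3, U = +/-{100 * U_rel:.1f}% (k = 2)")
```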
The values were converted into the amount of NH3 released into the air and the percentage of the initial NH3 content in the ash sample. Figure 3 compares the amounts of NH3 released from the ash after 10, 20, 30 and 60 minutes of desorption at 130°C. For a more detailed analysis, an uncertainty budget was prepared using the expanded uncertainty method with a coverage factor k = 2 for the 95% confidence level. The uncertainty for the analytical method was ±18.9% of the released amount of NH3, while for the U6 analyzer it was ±9.25%. It can be concluded that the U6 analyzer measurement is almost twice as accurate as the analytical method; however, this result is influenced by the small number of samples for both methods. In all cases, the lower values obtained by the analytical method indicate that the absorption of NH3 in the condensed water (moisture from the air and ash) removed in the exhaust conditioner had little effect on the result of the U6 analyzer. Only in the lowest measured range, after 10 minutes, were similar values obtained at 130°C (14 and 15 mg NH3) within the limits of measurement uncertainty. In the remaining time ranges, the values of released NH3 do not differ significantly.

Conclusions
Desorption of ammonia from fly ash is an important process that can be applied in industry. It is mainly related to meeting the fly ash quality standard required by customers in the civil engineering industry. In the presented research, NH3 desorption was carried out on fly ash taken from the electrostatic precipitator of a coal-fired power plant. The experimental part of the research presents the results of ammonia desorption from ash placed in a heating chamber. The desorption was carried out at two temperature levels: 130°C and 150°C. At 130°C, two measuring methods were compared: the analytical method (according to the Polish standard BN-75/0541-05) and the Siemens U6 gas analyzer with an NDIR sensor. The uncertainty analysis shows nearly twice the accuracy for the NDIR analyzer compared with the analytical method. Ammonia desorption from the ash is a rapid process in the first phase (30 minutes). The release of gaseous NH3 can be dangerous for people working with the power installation, as well as those who collect and transport the ash. The measurements made with the U6 analyzer indicate that in the first 30 minutes, 41-42% of the initial NH3 contained in the ash is released; after 60 minutes it is 59-65%, and after 120 minutes 78-88%, depending on the process temperature (130-150°C). Stabilization of the NH3 concentration was observed after 3 hours and 12 minutes at 130°C, and after 2 hours and 37 minutes at 150°C. Further research should take into account the influence of other ash components, the elemental composition, and the particle size on the NH3 desorption process. SEM-EDS analysis of the tested samples would be a useful addition.
Improving the Performance of Polysulfone-nano ZnO Membranes for Water Treatment in Oil Refinery with Modified UV Irradiation and Polyvinyl Alcohol

Refinery wastewater is generated from the process of refining large amounts of oil. Refinery wastewater contains micron-scale emulsion droplets and sub-micron droplets that are difficult to remove from water, which poses problems for researchers. Membrane technology is widely used in water treatment because it is very selective and effective in the filtration process. This research focuses on oil refinery water treatment using a polysulfone (PSF)-nano ZnO composite membrane modified with ultraviolet (UV) irradiation and polyvinyl alcohol. The membrane was prepared using the dry/wet phase inversion method, then modified using UV irradiation and coated with polyvinyl alcohol (PVA). The PSF-nano ZnO modifications have an impact on membrane performance: UV irradiation increased the pure water flux of the membrane from 4.5 L m⁻² h⁻¹ to 5.7 L m⁻² h⁻¹. However, the rejection value decreased after UV irradiation, whereas the presence of PVA as a coating agent increased the rejection to 77.2% for total dissolved solids (TDS), 76% for chemical oxygen demand (COD), and 65.3% for ammonia. These values are higher than those obtained for membranes without the PVA coating, namely only 47.3% for TDS, 51.1% for COD, and 29.4% for ammonia rejection. The UV irradiation and PVA modifications provide interrelated effects that improve membrane performance.

Introduction
Oil companies use large quantities of water during the process of refining crude oil. Refineries use toxic organic compounds, such as oils and fats, which contaminate the water [1]. Therefore, oil companies must have water treatment management processes, such as optimizing water usage, water recycling, water reuse, and maximizing the effectiveness of the water treatment system [2]. Treating refinery water with traditional methods such as adsorption and absorption cannot effectively eliminate emulsified droplets, micro-droplets, and sub-micron droplets [3]. Separating the contaminants by gravity settling is time-consuming, even after adding chemicals to break the emulsion system, as reported by Munirasu et al. [4]. Membrane technology is a rapidly developing method of water treatment today. Membrane technology enables better separation and purification, and water-soluble and dispersible materials can be retained [5]. Thus, membrane technology is the best way to eliminate micro- and sub-micron-sized oil droplets in wastewater [6]. Kiss et al. [7] used a cross-flow membrane filtration system in refinery water treatment; in a series of experiments, they demonstrated that polyvinyl alcohol (PVA) ultrafiltration membranes had superior permeate flux. Safari et al. [8] conducted a cross-flow microfiltration process for oil refinery wastewater using a ceramic membrane material (Al2O3) with a pore size of 50 nm. Oily wastewater can also be purified using flocculation combined with microfiltration [9]. However, foulant deposition on the membrane, called fouling, blocks the pores of the membrane and results in a substantial reduction of the permeate flux over the operating time. It consequently limits the membrane’s full application in wastewater treatment [10].
Many studies have investigated modifications to reduce membrane fouling and increase flux values, such as modifying hydrophilicity, pore size, porosity, and surface charge [11]. Kemal et al. [12] modified the PSF membrane with nano graphene oxide (nano-GO); these modifications resulted in 97% pollutant rejection and a flux of 219.1 L m⁻² h⁻¹. Furthermore, Anand et al. [13] modified PSF using nano ZnO. Such modifications increase the hydrophilicity and permeability of the PSF membrane; the membrane permeability increased from 2.83 to 5.11 L m⁻² h⁻¹ bar⁻¹. Chung et al. [14] also modified the PSF membrane using nano ZnO and ethylene glycol as additives. Their results showed that nano ZnO increases the membrane porosity as well as the permeability, from 1 L m⁻² h⁻¹ bar⁻¹ to 5 L m⁻² h⁻¹ bar⁻¹. Therefore, this research focuses on incorporating nano ZnO into the PSF membrane matrix. However, incorporating nanoparticles into the membrane matrix often causes aggregation and the formation of unselective microvoids in the membrane. Kusworo et al. [15] modified a polyethersulfone-nano silica membrane by ultraviolet irradiation and thermal annealing. The results show that ultraviolet irradiation can increase the polymer chain density and the hydrophilic properties of the membrane; these modifications cause the formation of hydroxyl and carbonyl groups, reducing the formation of microvoids. Ultraviolet irradiation is a type of membrane modification that offers simplicity, usefulness, versatility, and low cost, which makes it widely utilized [5]. Ultraviolet irradiation can change the surface properties of the membrane without affecting the bulk properties [16]. Another approach to increase membrane hydrophilicity and reduce aggregation is to add a hydrophilic material such as PVA. Park et al. [17] applied PVA to a PVDF membrane by dip-coating. This modification increases the hydrophilic nature of the membrane, with a flux value of 33 L m⁻² h⁻¹. Refinery wastewater treatment using PSF-nano ZnO modified by UV irradiation and PVA has not yet been reported in the literature. The focus of this research is the modification of the PSF-nano ZnO composite membrane by UV irradiation and a PVA dip-coating technique. The modification is expected to produce a membrane with higher separation performance for refinery water processing compared with an unmodified PSF-nano ZnO membrane.

Materials and methods

2.1 Materials
Polysulfone (PSF) (UDEL® PSU) as the membrane material was obtained from Solvay Advanced Materials, USA. N-Methyl-2-pyrrolidone (NMP), used as the solvent for the membrane polymer, was purchased from Merck, USA. Polyvinyl alcohol (PVA) as a surface-modifying additive and polyethylene glycol (PEG) of 6000 Da and 4000 Da as porogen agents were obtained from Merck, USA. ZnO nanoparticles as an inorganic nano-filler for the membrane were obtained from Nano Center Indonesia, with the specifications given in Table 1. To remove absorbed water vapor, the nano ZnO was dehydrated at 300 °C for 3 hours before use. Waste oil from PT Pertamina Oil & Gas Company Ltd. was used as the feed, and pure water from the Integrated Laboratory of Diponegoro University, Semarang, Indonesia was used as the non-solvent.

Membrane preparation
The first step in this study was the fabrication of the PSF-nano ZnO membranes. PSF-nano ZnO was made by mixing nano ZnO with the PSF polymer. Following this, the PEG additive was added to the PSF-nano ZnO membrane matrix to improve membrane performance.
The PSF-nano ZnO composite membrane was prepared by dry/wet phase inversion, with N-methyl-2-pyrrolidone as the solvent and water as the coagulant [18]. First, the PSF-nano ZnO solution was prepared by mixing nano ZnO (0.1, 0.5, or 1% by weight) into 19 wt% PSF. The solution was stirred until homogeneous and then sonicated for 1 hour. Following Sulaiman et al. [19], the solution was left for 1 day to remove air bubbles trapped during stirring. The solution was cast on a glass plate with a membrane thickness of 0.1 mm, and the polymer film was then immersed in the coagulation bath. The thin membrane layer was subsequently soaked in water for 24 hours to remove the solvent trapped in the membrane matrix [19]. The membrane sheets were dried at room temperature for 24 hours [20].

Modification of the membrane
Modification of the PSF-nano ZnO membrane was carried out in two sequential steps. The first modification was UV irradiation of the pre-coagulated membrane; a type C UV lamp with a wavelength of 254 nm was used in this work [8]. The dope solution was made with 19 wt% PSF, the optimal nano ZnO concentration, and the optimal PEG molecular weight, based on the membrane performance (flux and rejection) evaluated in the previous procedure. The solution was poured on the glass holder and then exposed to UV for 1, 3, or 5 minutes. Subsequently, the glass plate was put in a coagulation bath for 1 hour, and the membrane sheet was soaked for 24 hours. The remaining steps followed the membrane fabrication method described above. The UV-treated PSF-nano ZnO membrane sheets were evaluated for their performance in filtering refinery wastewater. The UV-irradiated membrane with the best performance was then modified using a PVA coating [22]. PVA solutions were prepared at concentrations of 1, 3, and 5% by weight; the PVA was dissolved in distilled water at 90 °C with stirring for 120 minutes. After coating, the PVA-coated membrane sheet was allowed to stand for 8 hours and dried at room temperature.

Characterization and performance of PSF-nano ZnO membranes

2.4.1 PSF-nano ZnO membrane performance
The membrane was made by pouring the PSF-ZnO dope solution onto a glass surface; the dope was made by mixing 19% PSF, 1% nano ZnO, and 5% PEG with the NMP solvent. The solution was stirred for half a day until it was stable and homogeneous, then left to stand for 1 day so that dissolved air bubbles could escape and a defect-free membrane could be obtained. Casting was done using the dry/wet phase inversion technique, after which the membrane was immersed in the coagulation bath and then dried at room temperature. The membrane was prepared for evaluation of flux and rejection. For the UV-modified membranes, the dope solution was poured on a glass plate and exposed to UV.
The UV lamp used was type C, with a wavelength of 254 nm and a power of 10 watts, and the exposure duration was 2 minutes. The membrane film was then immersed in the coagulation bath. Furthermore, PVA was applied to the PSF-nano ZnO membrane by the dip-coating method, dissolving 1, 3, or 5 wt% PVA in distilled water at 90 °C. The membranes were prepared for the performance test based on flux and rejection. The permeate flux, rejection, and flux reduction ratio were evaluated using Eqs. (1), (2), and (3), respectively:

J_w = V / (A · t) (1)

where J_w is the flux (L m⁻² h⁻¹), V is the permeate volume (L), A is the active membrane area (12.57 cm²), and t is the operating time interval (hour).

R = (1 - C_p / C_f) × 100% (2)

where C_p (mg/L) and C_f (mg/L) are the concentrations in the permeate and feed, respectively; rejection is thus calculated from the reduction of contaminants in the permeate.

FRR = ((J_0 - J_t) / J_0) × 100% (3)

where J_0 and J_t are the fluxes at the beginning and end of the filtration run, respectively. In this study, the wastewater parameter total dissolved solids (TDS) was analyzed using a TDS kit (HM original), and ammonia was analyzed using a UV spectrophotometer (Shimadzu BioSpec-mini UV-Vis spectrophotometer).

Characterization of the PSF-nano ZnO membrane
The surface morphology and cross-sectional structure of the PSF-nano ZnO membrane were analyzed by SEM (JEOL JSM6510LA series, Japan). The membrane sample was frozen in liquid nitrogen and fractured with a clamp, then placed on a gold-coated sample holder fixed with tape. The membrane morphology was examined at magnifications of 1000× and 5000× with an accelerating voltage of 20 kV. Membrane functional groups were analyzed using FTIR; the spectra of the PSF-nano ZnO membrane were recorded in the wavenumber range of 4000-400 cm⁻¹. The surface hydrophilicity of the membrane was determined from the water contact angle, measured using the sessile droplet method at constant (room) temperature. After the water droplets stabilized for 30 seconds, observations were made over a 4-minute period. Membrane porosity was determined using a gravimetric method, as reported by [24]. The membrane was immersed in de-ionized water for 1 day, then wiped and weighed; it was subsequently dried at 60 °C and weighed again to obtain the dry weight [25]. The porosity was calculated using Eq. (4):

ε = (wt_i - wt_0) / (ρ_water · A · l) (4)

where ε is the porosity of the membrane, wt_i and wt_0 are the weights of the wet and dry membranes (g), respectively, ρ_water is the density of pure water at 25 °C (0.997 g cm⁻³), A is the active area of the membrane (cm²), and l is the membrane thickness (cm).
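The performance and porosity metrics in Eqs. (1)-(4) reduce to a few arithmetic steps. The Python sketch below reproduces those formulas using the active area stated above (12.57 cm²); the function names are ours and all sample input values are hypothetical, chosen only to land near the magnitudes reported later.

```python
CM2_PER_M2 = 1e4

def flux_Lm2h(volume_L: float, area_cm2: float, time_h: float) -> float:
    """Eq. (1): permeate flux J_w in L m^-2 h^-1."""
    return volume_L / ((area_cm2 / CM2_PER_M2) * time_h)

def rejection_pct(c_feed: float, c_permeate: float) -> float:
    """Eq. (2): rejection R in %, from feed/permeate concentrations (mg/L)."""
    return (1.0 - c_permeate / c_feed) * 100.0

def flux_reduction_pct(j_initial: float, j_final: float) -> float:
    """Eq. (3): percentage decrease in flux over the filtration run."""
    return (j_initial - j_final) / j_initial * 100.0

def porosity(wet_g: float, dry_g: float, area_cm2: float,
             thickness_cm: float, rho_water: float = 0.997) -> float:
    """Eq. (4): gravimetric porosity as a fraction of total volume."""
    return (wet_g - dry_g) / (rho_water * area_cm2 * thickness_cm)

# Hypothetical run on the 12.57 cm2 coupon
j0 = flux_Lm2h(volume_L=0.0072, area_cm2=12.57, time_h=1.0)   # ~5.7 L m^-2 h^-1
r_tds = rejection_pct(c_feed=1200.0, c_permeate=274.0)        # ~77.2 %
frr = flux_reduction_pct(j_initial=j0, j_final=0.58 * j0)     # 42 %
eps = porosity(wet_g=0.50, dry_g=0.41, area_cm2=12.57,
               thickness_cm=0.01)                              # ~0.72 (72 %)
```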
Results and discussion

3.1 Effect of PVA modification and UV on membrane morphological structure
SEM analysis was used to study the membrane morphology, surface condition, and the relation between the surface and sublayer of the membrane. The SEM results reveal the effect of the PVA coating on the PSF-nano ZnO composite membrane matrix; the results of the membrane characterization are shown in Fig. 1. Fig. 1(a) shows the surface of the PSF-nano ZnO composite membrane without modification. The surface looks cleaner, and the pores are not clearly visible compared to Fig. 1(b). Fig. 1(b) shows a membrane surface modified by adding the PEG 6000 Da additive and 1 minute of UV irradiation. PEG is used as a pore-forming agent in the polymer, creating more pores; the high-molecular-weight PEG (6000 Da) formed more voids in the membrane matrix than PEG 4000 Da [26]. UV irradiation of the membrane matrix has a similar effect, because it causes degradation of the polymer chains. The degradation forms radicals that take part in repolymerization [27]. At longer UV irradiation times, polymer chain degradation results in increasingly larger pores [28]. The combined effect of PEG and UV irradiation also creates surface hollows which extend, fingerlike, into the sublayers, and large holes in the lower surface, as seen in Fig. 1(c). In Fig. 1(d), many white spots show PVA spread on the membrane surface. Furthermore, the pores in Fig. 1(d) are less clear than in Fig. 1(b), because the membrane surface is coated by PVA, resulting in cross-linking between PVA and the PSF-nano ZnO composite membrane [29]. Fig. 1(e) is a cross-section image of the PSF-nano ZnO membrane coated with PVA. The cross-section structure is neater and denser compared to Fig. 1(c); the PVA in the membrane improves the membrane structure and forms a thin layer, so that the membrane surface is denser than that of the membrane without PVA. The PSF-nano ZnO membrane was also characterized by SEM after filtration. These SEM results illustrate the interaction between refinery contaminants and the membrane surface; the results are presented in Fig. 2. Fig. 2(a) shows the PSF-nano ZnO membrane with 6000 Da PEG and 1 minute of UV treatment after filtration. Cake foulant can be seen on the membrane surface, because contaminants in the wastewater accumulated there. Fig. 2(b) is an SEM image of a PVA-coated PSF-nano ZnO membrane surface, which shows less accumulation of pollutants. A cake layer forms as in Fig. 2(a), but its extent differs between the two membranes: Fig. 2(b) shows only slight accumulation of pollutants on the surface, which does not form a cake layer as extensive as in Fig. 2(a).

Effect of PVA modification on contact angle values
The stability and selectivity of membranes often depend on the surface properties of the membrane used in the filtration process. These properties can be assessed from the contact angle; the contact angle values of the PSF-nano ZnO membranes are shown in Table 2. Table 2 shows that the contact angle of the PSF-nano ZnO membrane reaches 72.33° when 0.1 wt% of nano ZnO is added. This indicates that nano ZnO mixed into the membrane matrix yields a smoother membrane surface, particularly at 0.5 wt% nano ZnO. The PEG effect is attributed to PEG 4000 Da and its water solubility, which make the membrane surface rougher than that of the membrane without PEG [30]. A rough membrane surface and the hydrophilic properties of PEG can increase the hydrophilicity of the PSF-nano ZnO membrane. Furthermore, prolonged exposure of the PSF-nano ZnO membrane matrix to UV light causes the contact angle to decrease to 55.17°. This shows that longer UV irradiation of the PSF-nano ZnO membrane degrades matrix bonds and forms radicals, as described by Sarihan et al. [31]. These values indicate that modification by UV irradiation can increase membrane hydrophilicity. The response of the contact angle to a PVA coating on the membrane’s top layer, as the final modification, is similar to the response to UV irradiation: the contact angle of the coated membrane reaches 36.42°, lower than that of the uncoated membrane. A similar result was also reported by Zangeneh et al.
[32], showing that low PVA concentrations can penetrate into the membrane matrix [33]. At a higher concentration, the PVA can form a thin layer on the surface, which results in a lower contact angle and thus increased membrane hydrophilicity.

FTIR characterization of the modified membrane: UV irradiation and PVA coating
FTIR is used to measure the membrane spectrum so that the functional groups on the membrane can be determined. The FTIR spectra of the prepared membranes are shown in Fig. 3, which presents the spectra of the PSF-nano ZnO membrane with multiple modifications. Intense absorption occurs in the membrane samples at wavenumbers of 1290 and 1325 cm⁻¹, representing the symmetric stretching vibrations of O=S=O from pure PSF as the main chain in the membrane matrix. The two peaks at 1365 and 1488 cm⁻¹ correspond to the symmetric and asymmetric deformation vibrations, respectively. Stretching of the conjugated C=C bonds of the benzene ring leads to the absorption observed at 1585 cm⁻¹, while the band near 3000 cm⁻¹ shows the asymmetric stretching vibration of -CH [34]. Furthermore, the 1756 cm⁻¹ region changed after adding 1 wt% nano ZnO. The broad absorption indicates strong water adsorption due to the presence of hydroxyl groups in the PVA chain [35]. Nevertheless, the modification of the PSF-nano ZnO membrane did not change the main membrane matrix chain. This shows that the membrane produced is stable and that this modification method does not damage the membrane backbone.

Fig. 3. FTIR spectra of the PSF-nano ZnO membrane with various modifications.

The effect of PVA modification on the porosity of the PSF-nano ZnO membrane
Membrane porosity is defined as the ratio of the membrane pore volume to the total volume. Porosity can affect membrane performance (flux and rejection). The porosity values of the PSF-nano ZnO membranes with different modifications are shown in Fig. 4.

Fig. 4. Porosity values of various PSF-nano ZnO membranes.

Fig. 4 shows that the addition of nano ZnO to the PSF membrane increased its porosity, reaching 64%. The increase in porosity in the presence of nano ZnO resulted from the formation of microvoids in the PSF-nano ZnO membrane matrix [36]. However, a further increase in nano ZnO concentration resulted in a decrease in porosity to 62% at 1 wt% nano ZnO. As reported by Kuvarega et al. [37], this phenomenon suggests that high concentrations of nano ZnO reduce the distance between pores compared with low concentrations, meaning that the membrane density increases. The reduced distance between the pores occurs because the nano ZnO is scattered throughout the PSF membrane matrix. PEG modification and UV irradiation have the same effect on the PSF-nano ZnO membrane matrix, increasing the porosity. The porosity reaches 86% after the addition of PEG, and the porosity of membranes exposed to UV light is higher than that of membranes without UV exposure. These results are consistent with the SEM images in Fig. 1, where the pores on the treated membrane surface are visible. Before the coagulation process, UV irradiation allows the demixing process to be delayed, which causes the formation of voids. The molecular weight of PEG causes the membrane matrix to let water pass through efficiently and leaves pore traces in the membrane.
Large numbers of pores improve the performance (flux and rejection) of the membrane. The last modification, PVA, when attached to the surface, can decrease the porosity value to 65 %. This occurs because solutions with low concentrations of PVA (1 %, 2 % and 3 %) can enter the membrane matrix and reduce the pore volume. Higher levels of PVA can form a coating on the membrane surface that affects the rejection of the PSF-nano ZnO membrane [38].

The effect of PVA modification on the performance of the PSF-nano ZnO membrane
3.5.1 PSF-nano ZnO membrane performance
Membrane performance can be assessed from permeability and selectivity. Permeability (flux) is determined by the amount of permeate volume that passes through the membrane per unit area per unit time. The amount of permeate produced depends on the porosity and hydrophilicity of the membrane. Selectivity (rejection) is defined by the ability of the membrane to retain pollutants. The flux and rejection values of the PSF-nano ZnO membrane can be seen in Fig. 5. The ZnO-embedded membranes have higher flux values than the membrane without nano ZnO: the flux value was 1.85 L m^-2 h^-1, higher than that of the pristine PSF membrane, which reaches only 1.6 L m^-2 h^-1. This effect correlates with the increased porosity in the presence of nano ZnO; the nano ZnO has a higher reactive surface area and is well dispersed in the structure [39]. The flux value then decreases with an increasing concentration of nano ZnO, in accordance with the decreasing porosity value. The molecular weight of PEG also affects the flux value: the higher molecular weight PEG (6000 Da) gives a higher flux. The rejection value of the PSF-nano ZnO membrane is higher than that of PSF membranes without nano ZnO, especially at 1 wt% nano ZnO with PEG 6000 Da. This is due to the decreased porosity after the addition of 1 wt% nano ZnO, which increases the membrane density. However, the presence of PEG decreased the rejection values of the membrane to 48.4 % for TDS, 52.4 % for Chemical Oxygen Demand (COD) and 29 % for ammonia when PEG 6000 Da was added during membrane preparation. This is caused by the increase in porosity, pore size, and hydrophilicity due to the addition of PEG, so that contaminants may not be retained. Based on these results, the nano ZnO concentration with the best performance was 1 wt%, while the best molecular weight of polyethylene glycol (PEG) in this study was 6000 Da, whose use increased the flux value significantly relative to the accompanying decrease in rejection [40].

Performance of the PSF-nano ZnO membrane with PVA modification
The modification of the PSF-nano ZnO membrane is intended to improve the membrane flux and rejection. Based on the characterization results, the modifications affect the porosity and hydrophilicity of the PSF-nano ZnO membrane. The performance of the UV-irradiated and PVA-coated PSF-nano ZnO membranes is shown in Fig. 6. Fig. 6 shows that the flux of the UV-irradiated PSF-nano ZnO membrane with PEG 6000 Da is 5.7 L m^-2 h^-1, higher than that of an unirradiated membrane with the same ZnO and PEG 6000 Da treatment, which reaches only 4.5 L m^-2 h^-1. This effect is related to the porosity value, which increases with UV irradiation. Nevertheless, PVA modification has a different impact on separation performance: PVA lowers the flux value to 3.3 L m^-2 h^-1 with the addition of 1 wt% of PVA.
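The flux and rejection definitions just given translate directly into two one-line formulas. The sketch below is a hedged illustration: the function names and sample numbers are assumptions, not data from the paper:

```python
# Hedged sketch of the flux and rejection definitions given above.
# All sample numbers are illustrative assumptions.

def flux_L_m2_h(permeate_volume_L, area_m2, time_h):
    """Permeate flux J = V / (A * t), in L m^-2 h^-1."""
    return permeate_volume_L / (area_m2 * time_h)

def rejection_percent(feed_conc, permeate_conc):
    """Rejection R = (1 - Cp/Cf) * 100, in %."""
    return 100.0 * (1.0 - permeate_conc / feed_conc)

# Example: 0.0285 L of permeate collected over 1 h on a 0.01 m^2 coupon,
# with a COD of 1000 mg/L in the feed and 240 mg/L in the permeate.
print(flux_L_m2_h(0.0285, 0.01, 1.0))      # 2.85 L m^-2 h^-1
print(rejection_percent(1000.0, 240.0))    # 76.0 % (COD)
```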
The flux value decreases further to 2.9 L m^-2 h^-1 as the PVA concentration increases to 3 wt%. The flux of a PVA-coated membrane is lower than that of an uncoated membrane. However, the flux of the PVA-coated membrane is more stable from the beginning of filtration to the end than that of uncoated membranes. Such results indicate that the presence of PVA decreases membrane fouling, consistent with the SEM characterization of the PSF-nano ZnO membranes after filtration in Fig. 2. The rejection value decreases with increased UV irradiation time. This is because longer UV irradiation contributes to the degradation of the polymer membrane chain; degradation of the membrane matrix increases the porosity of the membrane, allowing waste pollutants to pass through the membrane matrix. PSF-nano ZnO membrane rejection after UV irradiation decreased to 46.6 % for TDS, 48.8 % for COD, and 15 % for ammonia, lower than for the membrane without UV irradiation. The rejection of the PVA-coated PSF-nano ZnO membrane, by contrast, is enhanced significantly compared to the unmodified and the UV-irradiated PSF-nano ZnO membranes: rejection values reached 77.2 % for TDS, 76 % for COD and 65.3 % for ammonia. This is because PVA increases the hydrophilicity of the membrane, in agreement with the contact angle results. The hydrophilicity of the membrane is essential during the refinery water filtration process because the hydrophobic pollutants in the refinery water are retained and do not pass through the hydrophilic membrane. In particular, hydrophilicity is essential to reduce TDS and COD, as stated by Meng et al. [41]. The best UV irradiation time in this analysis is 1 minute, because the gain in flux is significant compared to the decrease in rejection. The best PVA concentration in this experiment was 3 wt%, because at this concentration the gain in rejection outweighed the decrease in flux.

The effect of PVA modification on anti-fouling properties of PSF-nano ZnO membranes
The anti-fouling behaviour can be explained based on the SEM results for the PSF-nano ZnO membrane before and after filtration in Figs. 1 and 2. The FTIR characterizations of the PSF-nano ZnO membrane before and after filtration are shown in Fig. 7. Fig. 7 shows the FTIR spectra of the PSF-nano ZnO membrane with UV irradiation and PVA modification (before and after filtration). After filtration, changes occur in the FTIR spectrum: the spectrum changed in the range of 3300 cm^-1, which represents the stretching of O-H, after 2.5 hours of filtration. These changes indicate an interaction between the surface of the membrane and the contaminants in the refinery wastewater. Nonetheless, there was no change at the wavenumbers of 1290 and 1325 cm^-1, which represent the symmetric stretching vibrations of O=S=O from pure PSF as the main chain.

Effect of PVA modification on PSF-nano ZnO membrane stability
The stability of the membrane against fouling on the surface can be analyzed based on the flux reduction ratio, calculated as the percentage decrease of the membrane flux relative to its initial value (a numerical sketch of this calculation follows below). These results are confirmed by the flux decrease ratios shown in Fig. 8. Fig. 8 indicates that the application of UV irradiation and PVA has a significant influence on the flux value.
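Here is a minimal sketch of the flux reduction ratio formula assumed above (the paper does not spell it out); the flux series is invented for illustration:

```python
# Hedged sketch of the flux reduction ratio: the percentage drop of the
# flux relative to the initial (clean-membrane) flux. This standard
# formula is an assumption; the numbers below are illustrative only.

def flux_reduction_ratio(initial_flux, flux_at_t):
    """FRR(t) = (J0 - J(t)) / J0 * 100, in %."""
    return 100.0 * (initial_flux - flux_at_t) / initial_flux

# Illustrative flux series (L m^-2 h^-1) over 2.5 h of filtration:
times = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
fluxes = [3.30, 3.26, 3.24, 3.23, 3.22, 3.21]
for t, j in zip(times, fluxes):
    print(f"t = {t:.1f} h  FRR = {flux_reduction_ratio(fluxes[0], j):.2f} %")
```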
The flux of the unmodified membrane decreased by 39 % within 0.5 hours of filtration; its flux was unstable and continued to decline for up to 2.5 hours of filtration. However, UV irradiation of the membrane increases the flux value and its stability during the 2.5 h filtration process. The flux reduction ratio of the UV-irradiated PSF-nano ZnO membrane is 1.97 % lower than that of a similar membrane without UV irradiation. This shows that fouling of membranes treated with UV irradiation occurs more slowly [42]. The last modification of the PSF-nano ZnO membrane was the PVA coating. This modification makes the flux reduction ratio another 1 % lower than that of the UV-irradiated PSF-nano ZnO membrane. These results suggest that the presence of PVA increases the anti-fouling capability of the membrane. The flux reduction ratio in the graph is also more stable from the beginning of filtration to the end, implying that there is no accumulation on the surface of the membrane.

Conclusion
Modification of the PSF-nano ZnO membrane by UV irradiation and PVA coating changes the membrane's properties and performance. UV irradiation increases the membrane flux but reduces the rejection value, while PVA decreases the flux value and increases the rejection value. For refinery water treatment, the flux of the modified PSF-nano ZnO membrane is 2.9 L m^-2 h^-1 at 6 bar for 2.5 hours, with rejection values of 77.2 % for TDS, 76 % for COD, and 65.3 % for ammonia. The results suggest that a membrane composition with 19 wt% PSF, 1 wt% nano ZnO, 6000 Da PEG, 1 minute of UV irradiation and 3 wt% PVA performs best out of the investigated variations for refinery water filtration.
2021-10-29T15:13:59.200Z
2021-10-27T00:00:00.000
{ "year": 2021, "sha1": "0cfdd80a28a47373fc5275f1fc530e121d2795fa", "oa_license": "CCBY", "oa_url": "https://pp.bme.hu/ch/article/download/17029/9230", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "02d48431d24c0e5fe7dff986475537de15cd23de", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
119512347
pes2o/s2orc
v3-fos-license
The"optical lever"intracavity readout scheme for gravitational-wave antennae An improved version of the ``optical bar'' intracavity readout scheme for gravitational-wave antennae is considered. We propose to call this scheme ``optical lever'' because it can provide significant gain in the signal displacement of the local mirror similar to the gain which can be obtained using ordinary mechanical lever with unequal arms. In this scheme displacement of the local mirror can be close to the signal displacement of the end mirrors of hypothetical gravitational-wave antenna with arm lengths equal to the half-wavelength of the gravitational wave. Introduction All contemporary large-scale gravitational-wave antennae are based on common principle: they convert phase shift of the optical pumping field into the intensity modulation of the output light beam being registered by photodetector [1]. This principle allows to obtain sensitivity necessary to detect gravitational waves from astrophysical sources. However, its use in the next generations of gravitational-wave antennae where substantially higher sensitivity is required, encounters serious problems. An excessively high value of optical pumping power which also depends sharply on the required sensitivity, is likely to be the most important one. For example, at the stage II of the LIGO project the light power circulating in the interferometer arms will be increased to about 1 MWatt, in comparison with about 10 KWatt being currently used [2]. In particular, so high values of the optical power can produce undesirable non-linear effects in the large-scale Fabry-Perot cavities [3]. This dependence of pumping power on sensitivity can be explained easily using the Heisenberg uncertainty relation. Really, in order to detect displacement ∆x of test mass M it is necessary to provide perturbation of its momentum ∆p ≥ /2∆x. The only source of this perturbation in the interferometric gravitational-wave antennae is the uncertainty of the optical pumping energy ∆E. Hence, the following conditions have to be fulfilled: ∆E ∝ (∆x) −1 . If pumping field is in the coherent quantum state then ∆E ∝ √ E, and therefore E ∝ (∆x) −2 . Rigorous analysis (see [4]) shows that pumping energy stored in the interferometer have to be larger than where Ω is the signal frequency, ∆Ω < Ω is the bandwidth where necessary sensitivity is provided, ω p is the pumping frequency, L = cτ is the length of the interferometer arms, ξ < 1 is the ratio of the amplitude of the signal which can be detected to the amplitude corresponding to the Standard Quantum Limit. This problem can be alleviated by using optical pumping field in squeezed quantum state [5], but can not be solved completely, because only modest values of squeezing factor have been obtained experimentally yet. Estimates show that usage of squeezed states allows to decrease ξ by the factor of ≃ 3 for the same value of the pumping energy (see [6]), and the energy still remains proportional to ξ −2 . In the article [7] the new principle of intracavity readout scheme for gravitational-wave antennae has been considered. It has been proposed to register directly redistribution of the optical field inside the optical cavities using Quantum Non-Demolition (QND) measurement instead of monitoring output light beam. The main advantage of such a measurement is that in this case a non-classical optical field is created by the measurement process automatically. 
Therefore, the sensitivity of these schemes does not depend directly on the circulating power and can be improved by increasing the precision of the intracavity measurement device. The only fundamental limitation in this case is a condition on the number $N$ of optical quanta in the antenna.

In the articles [8, 9] two possible realizations of this principle have been proposed and analyzed. Both of them are based on the ponderomotive QND measurement of the optical energy proposed in the article [10]. In these schemes the displacement of the end mirrors of the gravitational-wave antenna caused by the gravitational wave produces a redistribution of the optical energy between the two arms of the interferometer. This redistribution, in its turn, produces a variation of the electromagnetic pressure on some additional local mirror (or mirrors). This variation can be detected by a measurement device which monitors the position of the local mirror(s) relative to a reference mass placed outside the pumping field (for example, a small-scale optical interferometric meter can be used here). The optical pumping field works as a passive medium which transfers the signal displacement of the end mirrors to the displacement of the local one(s) and, at the same time, transfers the perturbation of the local mirror(s) due to the measurement back to the end mirrors.

In this article we consider an improved version of the "optical bar" scheme considered in the article [8]. We propose to call this scheme "optical lever" because it can provide a gain in the displacement of the local mirror similar to the gain which can be obtained using an ordinary mechanical lever with unequal arms. This scheme is discussed in Section 2. In Section 3 we analyse an instability which can exist in both the "optical bar" and "optical lever" schemes (namely, in the so-called X-topologies of these schemes) and which was not mentioned in the article [8]. We suppose in this article for simplicity that all optical elements of the scheme are ideal. This means that the reflectivities of the end mirrors are equal to unity, and all internal elements have no losses. We presume that the optical energy has been pumped into the interferometer through a very small transparency of one of the end mirrors, and that on the time scale of the gravitational-wave signal duration the scheme operates as a conservative one. It has been shown in the article [8] that losses in the optical elements limit the sensitivity at a level set by the optical relaxation time $\tau^*_{\rm opt}$. Taking into account that the value of $\tau^*_{\rm opt}$ can be as high as 1 s, one can conclude that the optical losses do not affect the sensitivity if $\xi \gtrsim 10^{-1}$.

The optical lever
One of the possible "optical lever" scheme topologies (the L-topology) is presented in Fig. 1 (another variant, the X-topology, is considered in the next section). It differs from the L-topology of the "optical bar" scheme [8] by two additional mirrors A' and B' only. These mirrors together with the end mirrors A and B form two Fabry-Perot cavities with the same initial lengths $L = c\tau$, coupled by means of the central mirror C with small transmittance $T_C$. Exactly as in the case of the "optical bar" scheme, due to this coupling the eigenmodes of such a system form a set of doublets with frequencies separated by the value $\Omega_B$, which is proportional to $T_C$ and can be made close to the signal frequency $\Omega$.
Let the distances between the mirrors be adjusted in such a way that the Fabry-Perot cavities AA' and BB' are tuned in resonance with the upper frequency of one of the doublets, and the additional Fabry-Perot cavities A'C and B'C are tuned in antiresonance with this frequency. It is supposed here that (i) the distances $l$ between the mirrors A', B' and the coupling mirror C are small enough to neglect values of the order of $\Omega l/c$, and that (ii) it is possible to neglect the motion of the mirrors A', B'. For example, they can be attached rigidly to the platform where the local position meter is situated. Let only the upper frequency of this doublet be excited initially. In this case most of the optical energy is concentrated in the cavities AA' and BB' and distributed evenly between them. A small differential change $x$ in the cavities' optical lengths will redistribute the optical energy between the arms and hence will create a difference in the ponderomotive forces acting on the mirrors. In other words, an optical ponderomotive rigidity exists in such a scheme. In the article [8] the analogy with two "optical springs", one of which was situated between the mirrors A and C and another one (L-shaped) between the mirrors B and C, has been used. It has been shown in that article that if the optical energy exceeded a certain threshold value, then these springs became rigid enough to transfer a displacement $x$ of the mirrors A, B to the same displacement $y$ of the mirror C.

It is rather evident that if the additional mirrors A', B' are present, then a displacement $x$ of the end mirrors A, B produces an approximately $F$ times greater displacement of the mirror C, where $F$ is the finesse of the Fabry-Perot cavities AA' and BB' (one can imagine, for simplicity, that these Fabry-Perot cavities are replaced by delay lines). Therefore, one can expect that the scheme presented in Fig. 1 provides a gain in the mirror C displacement relative to its displacement in the original "optical bars" scheme, and this gain has to be close to $F$. The analysis shows that this is true. The equations of motion of the mechanical degrees of freedom in spectral representation have the form (we omit here some very lengthy but rather straightforward calculations devoted to eliminating the electromagnetic degrees of freedom and reducing the full set of equations for the system to mechanical equations only):
$$\left(-M_x\Omega^2 + K_{xx}(\Omega)\right)x(\Omega) = K_{yx}(\Omega)\,y(\Omega) + F_{\rm grav}(\Omega),$$
$$\left(-M_y\Omega^2 + K_{yy}(\Omega)\right)y(\Omega) = K_{xy}(\Omega)\,x(\Omega) + F_{\rm fluct}(\Omega).$$
Here $M_x$ is the mass of the mirrors A, B, $M_y$ is the mass of the mirror C, $x$ is the displacement of the mirrors A, B, $y$ is the displacement of the mirror C, $F_{\rm grav}(\Omega)$ is the signal force acting on the mirrors A, B, and $F_{\rm fluct}$ is the fluctuational back-action force produced by the device which monitors the variable $y$. (Figure 2: mechanical model of the "optical lever" scheme.) We consider here only the differential motion of the mirrors A, B, when their displacements have the same absolute value and the directions shown in Fig. 1. This motion corresponds to a gravitational wave with optimal polarization. It has been shown in the article [8] that the symmetric motion of the mirrors A, B (when both mirrors approach or move away from the mirror C) is not coupled with the degrees of freedom $x$ and $y$ and can be excluded from consideration. The factors $K_{xx}$, $K_{yy}$, $K_{xy}$, $K_{yx}$, which form the matrix of the ponderomotive rigidities, are given by expressions in which $\varphi = \arcsin T_C$ and $R$ is the reflectivity of the mirrors A', B'.
It has to be noted that these rigidities exactly satisfy a symmetry condition (referred to below as condition (11)). Suppose now that $\Omega\tau \ll 1$ (for contemporary terrestrial gravitational-wave antennae $\tau \sim 10^{-5}$ s and $\Omega \sim 10^{3}$ s$^{-1}$, so $\Omega\tau \sim 10^{-2}$). In this case we obtain that $K_{xx}(\Omega) = \digamma K_{xy} = \digamma^2 K_{yy}$.

There is a very simple mechanical model which can also be described by the equations (5, 11), putting aside for a while the particular spectral dependence (10). This is an ordinary mechanical lever with arm lengths ratio $\digamma$ (see Fig. 2). The rigidities $K_{xx}$, $K_{yy}$ and $K_{xy}$ in this case are proportional to the bending rigidity of the lever bar. It is evident that if the motion is sufficiently slow, and it is therefore possible to neglect bending, then the $y$-arm tip displacement will be $\digamma$ times greater than the $x$-arm tip displacement. Consequently, if the observation frequency $\Omega$ is sufficiently small, then in the "optical lever" scheme presented in Fig. 1 the motion of the mirror C will repeat the motion of the end mirrors A, B with the gain factor $\digamma$.

In all other aspects this scheme is similar to the "optical bars" scheme. As follows from the symmetry conditions (11), if one replaces in the equations (5) $y$ by $y/\digamma$, $F_{\rm fluct}$ by $\digamma \times F_{\rm fluct}$, $M_y$ by $\digamma^2 \times M_y$, and then replaces all rigidities by their values corresponding to the "optical bars" scheme (with $\digamma = 1$), then these equations still remain valid. It means that if in the "optical bars" scheme one (i) replaces the mass $M_y$ by a $\digamma^2$ times smaller one; (ii) decreases the back-action noise of the meter by a factor of $\digamma$ and increases proportionally its measurement noise (for example, by decreasing the pumping power in the interferometric position meter by a factor of $\digamma^2$); and (iii) inserts the additional mirrors A', B' with reflectivity defined by equation (7), then the signal-to-noise ratio (relative to the local meter noises) and the dynamical properties of the scheme will remain unchanged, with only the evident replacement of $y$ by $\digamma y$.

Two characteristic regimes of the "optical lever" scheme are similar to the quasistatic and resonant regimes of the "optical bars" scheme described in the article [8], and therefore we consider them here only briefly. The characteristic equation of the equation set (5) is equation (12). The root $\Omega = 0$ of this equation corresponds to the quasistatic regime. If the pumping energy is sufficiently high, then the solution of the equation set (5) for this regime can be written in closed form; the maximal value of the signal response is obtained here for an optimal choice of the parameters. Taking into account that the gravitational-wave signal force is proportional to the mass of the end mirrors, $F_{\rm grav} \propto M_x$, we can conclude that in gravitational-wave experiments this regime can provide a wide-band gain in signal displacement proportional to $M_x/M_y$. It is necessary to note that this gain by itself does not allow one to overcome the standard quantum limit, because the value of the standard quantum limit for the test mass $M_y$ rises exactly in the same proportion. But it does allow one to use a less sensitive local position meter, and it does increase the signal-to-noise ratio for miscellaneous noises of non-quantum origin and therefore makes it easier to overcome the standard quantum limit using, for example, a variational measurement in the local position meter [11, 12]. Another two roots of the equation (12) correspond to the more sophisticated resonant regime of the scheme.
By placing these two roots evenly in the spectral band of the signal it is possible to obtain a sensitivity a few times better than the standard quantum limit for a free mass in a relatively wide band, as proposed in the article [13]. Using an appropriate value of the pumping power it is possible to implement a second-order-pole test object and obtain a sensitivity substantially better than the standard quantum limit for both a free mass and a harmonic oscillator in a narrow band near the frequency $\Omega_B/\sqrt{2}$ [14]. In both cases the "optical lever" allows one to increase the signal displacement of the local mirrors and therefore makes the implementation of the local position meter easier.

3 X-topologies of the "optical bars" and the "optical lever" schemes
In the article [8] two possible topologies of the "optical bars" scheme have been considered: the L-topology discussed in the previous section and the X-topology similar to the Michelson interferometer topology. The latter one can also be converted to the "optical lever" scheme using additional mirrors A' and B', as shown in Fig. 3. (Figure 3: X-topology of the "optical lever" scheme.) Here C is the coupling mirror with transmittance $T_C = \sin\varphi$. In this topology one optical "spring" exists between the mirrors A and A", and another one between the mirrors B and B". In the article [8] the L- and X-topologies were considered as identical, with the only difference that in the case of the X-topology the value of $\Omega_B$ was about two times greater. More rigorous analysis shows, however, that this is not the case. Indeed, the rigidities which appear in the equations (5) differ in the case of the X-topology; in particular,
$$K_{xy}(\Omega) = \frac{2\omega_p E}{c^2\tau}\,\digamma\,\cos\Omega\tau\,\cos 2\varphi .$$
It means that the low-frequency mechanical mode, which in the article [8] was considered as a free-mass mode (see the factor $p^2$ in equation (C.3) of the above-mentioned article) and which does represent a free mass in the case of the L-topology, has a non-zero rigidity in the case of the X-topology. Moreover, if $\Omega < \Omega_B$ then this rigidity is negative, and therefore an asynchronous instability exists in the system. Taking into account condition (14) and supposing that $\Omega \sim \Omega_B$, one can obtain that the characteristic time $\tau_{\rm instab}$ for this instability is rather large ($\sim 10^2\,\Omega^{-1}$ if, say, $\Omega \sim 10^3$ s$^{-1}$ and $\tau \sim 10^{-5}$ s) in the case of the pure "optical bars" scheme ($\digamma = 1$). Therefore, this instability can be easily damped by a feed-back system in this case. On the other hand, in the case of the "optical lever" scheme one can have $\tau_{\rm instab} \sim \Omega^{-1}$ if one attempts to use too large a value of $\digamma$. In the article [15] it has been shown, however, that even such a strong instability can in principle be damped by a feed-back scheme without any loss in the signal-to-noise ratio.

Conclusion
The properties of the "optical bars" intracavity scheme [8] can be substantially improved by converting the arms of the antenna into Fabry-Perot cavities similar to the ones used in traditional topologies of gravitational-wave antennae with extracavity measurement. This new "optical lever" scheme allows one to obtain a gain in the signal displacement of the local mirror approximately equal to the finesse of the Fabry-Perot cavities. This gain by itself does not allow one to overcome the standard quantum limit in the wide-band regime.
But it allows one to use a less sensitive local position meter, and it increases the signal-to-noise ratio for miscellaneous noises of non-quantum origin, making it easier to overcome the standard quantum limit using, for example, a variational measurement in the local position meter. The value of this gain is limited, in principle, only by formula (8). As follows from this formula, $\digamma$ cannot exceed the value $(\Omega_B\tau)^{-1}$. If $\Omega_B \sim \Omega \sim 10^3$ s$^{-1}$ and $\tau \sim 10^{-5}$ s (which corresponds to the arm lengths of the LIGO and VIRGO antennae) then $\digamma \lesssim 10^2$. If $\tau \sim 10^{-6}$ s (GEO-600 and TAMA) then this limitation is about one order of magnitude less strong, $\digamma \lesssim 10^3$. It is interesting to note that if $\digamma$ is close to its limiting value $(\Omega_B\tau)^{-1} \sim (\Omega\tau)^{-1}$, then the signal displacement of the local mirror is close to the signal displacement of the end mirrors of a hypothetical gravitational-wave antenna with arm lengths equal to the half-wavelength of the gravitational wave.
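As a recap of Section 2, the quasistatic gain can be read off directly from the rigidity relations $K_{xx} = \digamma K_{xy} = \digamma^2 K_{yy}$; the following one-line derivation is our own restatement, valid under the assumption $\Omega \to 0$ (inertial terms $M\Omega^2$ negligible, $F_{\rm fluct}$ ignored):
$$K_{yy}\, y = K_{xy}\, x \;\Longrightarrow\; y = \frac{K_{xy}}{K_{yy}}\, x = \digamma\, x ,$$
which is precisely the statement that the local mirror C repeats the motion of the end mirrors with the gain factor $\digamma$.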
2019-04-14T02:21:36.385Z
2002-03-01T00:00:00.000
{ "year": 2002, "sha1": "e1b622f071822cd114bb4e6bca03aab83b26a9c1", "oa_license": null, "oa_url": "http://arxiv.org/pdf/gr-qc/0203002", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e7070575226fea2fdd8d5d26fd4c0e771d16c7d6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
53230808
pes2o/s2orc
v3-fos-license
Heterologous expression of a Glyoxalase I gene from sugarcane confers tolerance to several environmental stresses in bacteria

Glyoxalase I belongs to the glyoxalase system that detoxifies methylglyoxal (MG), a cytotoxic by-product produced mainly from triose phosphates. The concentration of MG increases rapidly under stress conditions. In this study, a novel glyoxalase I gene, designated SoGloI, was identified from sugarcane. SoGloI had a size of 1,091 bp with one open reading frame (ORF) of 885 bp encoding a protein of 294 amino acids. SoGloI was predicted to be a Ni2+-dependent GLOI protein with two typical glyoxalase domains at positions 28-149 and 159-283, respectively. SoGloI was cloned into an expression plasmid vector, and the Trx-His-S-tagged SoGloI protein produced in Escherichia coli was about 51 kDa. The recombinant E. coli cells expressing SoGloI grew faster than the control and tolerated higher concentrations of NaCl, CuCl2, CdCl2, or ZnSO4. SoGloI was ubiquitously expressed in various sugarcane tissues. The expression was up-regulated under treatments with NaCl, CuCl2, CdCl2, ZnSO4 and abscisic acid (ABA), or under simulated biotic stress conditions upon exposure to salicylic acid (SA) and methyl jasmonate (MeJA). SoGloI activity steadily increased when sugarcane was subjected to NaCl, CuCl2, CdCl2, or ZnSO4 treatments. Sub-cellular observations indicated that the SoGloI protein was located in both the cytosol and the nucleus. These results suggest that the SoGloI gene may play an important role in sugarcane's response to various biotic and abiotic stresses.

INTRODUCTION
Ubiquitously occurring in nature, the glyoxalase pathway involves a two-step catalytic reaction. In the first step, glyoxalase I (GLOI, lactoylglutathione lyase; EC 4.4.1.5) catalyzes the isomerization of the hemithioacetal, formed spontaneously between methylglyoxal (MG) and reduced glutathione (GSH), to S-D-lactoylglutathione (S-LG) (Thornalley, 1993). In the second step, S-LG is hydrolyzed by glyoxalase II (GLOII, hydroxyacylglutathione hydrolase; EC 3.1.2.6) to produce GSH and D-lactate (Yadav et al., 2005a; Yadav et al., 2005b). Under normal physiological conditions, MG is produced primarily through glycolysis at the triose-phosphate step (Phillips & Thornalley, 1993), and to a much lesser extent, through catabolism of amino acids (threonine and glycine) and acetone (Yadav et al., 2005a; Yadav et al., 2005b; Yadav et al., 2007). Under abiotic stresses, however, the concentration of MG in plants can increase significantly, by 2-6 fold (Yadav et al., 2005a). High-level accumulation of MG is toxic to cells, as it can react with DNA to form modified guanylate residues (Papoulis, Al-Abed & Bucala, 1995). MG can also react with proteins to form glycosylamine derivatives of arginine and lysine, and hemithioacetals with cysteine residues (Lo et al., 1994). Apart from the direct effect of MG, its intermediate compound S-LG, a substrate for glyoxalase II, is also cytotoxic at higher concentrations by inhibiting DNA synthesis (Thornalley, 1996). Therefore, glutathione-based detoxification of harmful metabolites is one of the main roles of both glyoxalase enzymes (Thornalley, 1990). The glyoxalase I enzyme is broadly categorized into Zn2+- or Ni2+-dependent classes of metal activation.
Previous studies have shown that the Zn2+-dependent GLOI enzymes are thought to be of eukaryotic origin (Frickel et al., 2001; Ridderström & Mannervik, 1996), while Ni2+-dependent GLOI enzymes are thought to be of prokaryotic origin (Sukdeo et al., 2004). The coexistence of both Zn2+-dependent and Ni2+-dependent GLOI enzymes in Pseudomonas aeruginosa (Sukdeo & Honek, 2007) and the characterization of a Ni2+-dependent GLOI enzyme from rice (Mustafiz et al., 2014) have challenged the view that Zn2+-dependent GLOI belongs to eukaryotes and Ni2+-dependent GLOI exists only in prokaryotes (Jain et al., 2016). In plants, the metal specificity of each member of the GLOI family is an important determinant of its catalytic efficiency (Kaur et al., 2017; Mustafiz et al., 2014).

Sugarcane (Saccharum spp. hybrids) is a cash crop mainly used for sugar, biofuel and other food industries such as industrial alcohol in tropical and subtropical regions, and is one of the world's largest crops. According to the FAO, sugarcane was cultivated in 101 countries on about 26.1 million hectares of land in 2012 (Que et al., 2014). However, sugarcane yields are often affected by many diseases and various environmental stresses, such as smut, rust, ratoon stunting disease (RSD), salt, heavy metals and drought. Sugarcane is reportedly susceptible to salt and shows toxicity symptoms, low sprout emergence, nutritional imbalance, and overall biomass reduction (Akhtar, Wahid & Rasul, 2003; Plaut, Meinzer & Federman, 2000; Wahid, Rao & Rasul, 1997). Though sugarcane plants can overcome a short period of water deficit during the late, sucrose-accumulating growth stage, an extended period of drought can cause a significant loss in cane and sugar yields (Begcy et al., 2012). RSD causes significant yield losses, 12%-37% under normal conditions and up to 60% under drought conditions; moreover, it may also lead to variety deterioration (Bailey & Bechet, 1997; James, 1996; Que et al., 2008). Sugarcane smut also causes serious losses in cane and sugar yields (Hoy et al., 1986; Padmanaban, Alexander & Shanmugan, 1998; Que et al., 2012).

In our previous study, we constructed a sugarcane cDNA library from Sporisorium scitamineum-infected buds (Wu et al., 2013b). An expressed sequence tag of 613 bp (GenBank Accession Number: CA140600.1) had a high similarity to the GLOI gene of Zea mays (GenBank Accession Number: EU966885.1) (Wu et al., 2013b). To study the stress response of GLOI in sugarcane, we cloned the entire sugarcane Glyoxalase I gene, designated SoGloI. We determined the sub-cellular location of the SoGloI protein using tobacco protoplasts and investigated the growth patterns of Escherichia coli Rosetta cells producing the recombinant SoGloI protein in response to salt and heavy metal ion stresses. We also assessed SoGloI expression and glyoxalase I enzyme activity in sugarcane in response to simulated biotic and abiotic stresses. The results provide valuable information for the improvement of stress resistance in sugarcane.

Plant material
Sugarcane genotype YCE 05-179 was used in this study. Plants were maintained in a genetic nursery at the Key Laboratory of Sugarcane Biology and Genetic Breeding, Ministry of Agriculture, Fujian Agriculture and Forestry University, Fuzhou, China. In addition, tissue culture-derived young, healthy plantlets of YCE 05-179 were also used in the study.
Expression of SoGloI in field-grown sugarcane plants
Five tissue samples, white young roots, leaf (+1), leaf sheath (+1), buds (6th-8th from the base), and internodes (6th and 7th from the base), were collected from 7- to 8-month-old plants in the field nursery. All samples except buds were cut into small pieces, wrapped in tinfoil, and immediately flash-frozen in liquid nitrogen.

Spot assays were performed to assess the response of pET32a-SoGloI transformed E. coli cells to NaCl, CdCl2, CuCl2 or ZnSO4 treatments. When the E. coli culture reached OD600 = 0.6, 1 mM IPTG was added to the LB medium and the culture was incubated for 12 h at 28 °C. The cultures were then first adjusted to OD600 = 0.6, and further diluted to two levels (10^-3 and 10^-4) (Guo et al., 2012). Thereafter, 10 µL of each diluted culture was spotted on LB plates containing 170 µg mL^-1 chloramphenicol and 80 µg mL^-1 ampicillin, along with each test chemical. The concentrations of the chemicals used were NaCl at 250, 500 and 750 mM, CdCl2 at 250, 500 and 750 µM, CuCl2 at 250, 500 and 750 µM, and ZnSO4 at 250, 500 and 750 µM, respectively (Guo et al., 2012; Su et al., 2013). All plates were incubated overnight at 37 °C.

Assay of sugarcane glyoxalase I enzyme activity
Entire flash-frozen 4-month-old plantlets (100 mg wet weight) were pulverized in liquid N2 in a mortar. Protein was extracted with an extraction buffer containing 0.1 M potassium phosphate buffer (PPB, pH 7.5), 50% (v/v) glycerol, 16 mM MgSO4, 0.2 mM phenylmethanesulfonyl fluoride (PMSF) and 0.2% (v/v) polyvinylpyrrolidone (PVP40). The extract was centrifuged twice at 13,000 rpm at 4 °C for 30 min to obtain the crude protein extract from the supernatant (Zeng et al., 2016). The supernatant was used as the cytosolic extract for the assessment of glyoxalase activity, and protein concentration was determined by the Bradford method (Bradford, 1976) using bovine serum albumin (BSA) as the standard. The SoGloI activity assay was carried out following Hossain et al. (2009) and Hasanuzzaman, Hossain & Fujita (2011). Briefly, the assay mixture contained 100 mM K-phosphate buffer (PPB, pH 7.0), 15 mM magnesium sulfate, 1.7 mM GSH, and 3.5 mM MG in a final volume of 0.8 mL. Thioester formation was measured by the increase in absorbance at 240 nm for 1 min. The enzyme activity was calculated using an extinction coefficient (ε) of 3.37 mM^-1 cm^-1.

Sub-cellular localization
The SoGloI gene was sub-cloned into the Xcm I/BamH I restriction sites of pCXSN to construct a fusion protein expression vector, 35S::SoGloI::GFP. The GFP-containing pCXSN vector was a gift of Songbiao Chen, Institute of Biotechnology, Fujian Academy of Agricultural Sciences. The pCXSN-SoGloI recombinant plasmids were transformed into Agrobacterium tumefaciens cells, strain GV 3101 (Chen et al., 2006). The transgenic GV 3101 cells were inoculated into LB medium containing kanamycin (50 µg mL^-1) and rifampicin (34 µg mL^-1). The culture was incubated overnight at 28 °C with shaking at 200 rpm, then centrifuged at 5,000× g to harvest the Agrobacterium cells, followed by re-suspension in 10 mM MgCl2 and 10 mM MES (2-(N-morpholino)ethanesulfonic acid) buffer. The concentration of the bacterial suspension was measured and adjusted to OD600 = 0.6 with Murashige and Skoog (MS) liquid medium supplemented with 200 µM acetosyringone. The resulting culture was incubated at 28 °C for 3 h (Yang et al., 2014).
Then, 1 mL of the bacterial culture was infiltrated into 4-week-old tobacco leaves with disposable syringes, and the injection sites were marked. Injected plants were incubated under a 12 h-light/12 h-dark cycle at 28 °C for three days (Su et al., 2013). The protoplasts were then isolated from well-expanded leaves following the rice protoplast isolation protocol of Chen et al. (2006). Briefly, the leaves were cut into 1-mm strips and placed in a dish containing 12 mL of K3 medium (3 mM MES, 7 mM CaCl2, 0.35 M mannitol, 0.7 mM NaH2PO4, 0.35 M sorbitol, 20 mM KCl, pH 5.6) supplemented with 0.4 M sucrose, 1.5% cellulase R-10 (Yakult Honsha, Japan) and 0.3% macerozyme R-10 (Yakult). The leaf tissue was vacuum-infiltrated for 30 min at 20 mm Hg and digested at room temperature with gentle shaking for 4 h to produce protoplasts. Then, the K3 medium was replaced with 12 mL of W5 solution (2 mM MES, 154 mM NaCl, 125 mM CaCl2, 5 mM KCl, pH 5.8). The protoplasts were collected by centrifugation at 300× g for 4 min at 4 °C and re-suspended in 1 mL WI solution (4 mM MES, 0.5 M mannitol, 20 mM KCl, pH 5.7). The sub-cellular location of the SoGloI protein was observed using fluorescence microscopy (Ci-L; Nikon, Tokyo, Japan).

Expression of SoGloI in E. coli
Upon IPTG induction, the recombinant SoGloI gene was expressed well in Rosetta cells (Fig. 2, Lanes 5-8), yielding a SoGloI protein that was 51 kDa in size and carried a Trx-His-S-tag of 18.3 kDa. Moreover, gradually increasing amounts of SoGloI protein were observed when the IPTG induction was extended from 2 h to 8 h.

Expression patterns of SoGloI in sugarcane tissues
RT-qPCR was conducted to detect both tissue-specific and stress-related expression of SoGloI (a sketch of the underlying fold-change calculation is given further below). The SoGloI gene was ubiquitously expressed in the five tissues of 7- to 8-month-old plants collected from the field. The highest level was detected in buds, followed by leaves, roots, leaf sheaths, and internodes (Fig. 5A). SoGloI expression patterns in healthy 4-month-old plantlets under NaCl, CuCl2, CdCl2, ZnSO4, SA, MeJA, and ABA treatments are shown in Figs. 5B and 5C. Under NaCl, CuCl2, CdCl2, and ZnSO4 treatments, SoGloI expression was up-regulated steadily from 0 to 48 hpt. The peak level of SoGloI expression was about 3.1-, 2.9-, 2.8- and 1.9-fold of the control level, respectively (Fig. 5B). In contrast, under SA and MeJA treatments, SoGloI expression decreased after peaking at 6 hpt. The maximum level of SoGloI expression was detected at 6 hpt, at about 2.6- and 2.1-fold of the control level, respectively (Fig. 5C). Similarly, the peak level of SoGloI expression under ABA treatment was detected at 12 hpt, at 2.4-fold of the control level (Fig. 5C). Thus, the SoGloI gene responds to multiple abiotic stresses.

Glyoxalase I activity in sugarcane under NaCl, CuCl2, CdCl2 or ZnSO4 treatment
As shown in Fig. 5D, under NaCl, CuCl2, CdCl2, and ZnSO4 treatments, glyoxalase I activity increased steadily from 0 to 48 hpt. Under the 250 mM NaCl treatment, glyoxalase I activity was about 1.8-, 2.2-, and 2.3-fold of the control level at 12, 24, and 48 hpt, respectively; at 48 hpt it reached 0.3230 µmol min^-1 mg^-1 protein. Under the 750 µM CuCl2 treatment, glyoxalase I activity was about 2.0-, 2.7-, and 3.0-fold of the control level, reaching 0.4128 µmol min^-1 mg^-1 protein at 48 hpt.
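To show how specific-activity values such as 0.4128 µmol min^-1 mg^-1 protein follow from the assay described in the Methods (increase in A240 per minute, ε = 3.37 mM^-1 cm^-1, 0.8 mL assay volume), here is a hedged sketch via the Beer-Lambert law; the 1 cm path length and the input numbers are assumptions, not measurements from the paper:

```python
# Hedged sketch of the glyoxalase I specific-activity calculation.
# The assay constants come from the Methods; the path length and the
# example inputs below are assumptions for illustration only.

EPSILON_mM_cm = 3.37   # extinction coefficient of S-D-lactoylglutathione
PATH_CM = 1.0          # assumed cuvette path length
ASSAY_VOL_ML = 0.8     # final assay volume from the Methods

def gloI_specific_activity(delta_A240_per_min, protein_mg):
    """Return activity in umol min^-1 mg^-1 protein (Beer-Lambert)."""
    # dA/min / (eps * l) = mM/min = umol mL^-1 min^-1 of thioester formed
    umol_per_ml_min = delta_A240_per_min / (EPSILON_mM_cm * PATH_CM)
    return umol_per_ml_min * ASSAY_VOL_ML / protein_mg

# Example: dA240 = 0.087 min^-1 with 0.05 mg protein in the cuvette:
print(round(gloI_specific_activity(0.087, 0.05), 4))  # ~0.4131
```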
Similarly, under the 750 µM CdCl2 treatment, glyoxalase I activity was 2.1-, 3.1- and 4.2-fold of the control level, with the highest activity, 0.5730 µmol min^-1 mg^-1 protein, at 48 hpt. Under the 750 µM ZnSO4 treatment, glyoxalase I activity at 12, 24, and 48 hpt was about 1.3-, 2.6-, and 3.1-fold of the level at 0 hpt, reaching 0.2883 µmol min^-1 mg^-1 protein at 48 hpt. Thus, glyoxalase I activity increased to varying degrees under salt and heavy metal ion stress conditions.

Determination of the subcellular localization of SoGloI
To further understand the function of the SoGloI gene, its subcellular localization was determined. The SoGloI gene was inserted into the plant expression vector pCXSN between the 35S promoter and GFP. The recombinant pCXSN-SoGloI-GFP construct was then introduced into tobacco leaves through Agrobacterium-mediated transformation. As shown in Fig. 6, green fluorescence signals were observable in the cytosol and nucleus of both pCXSN-SoGloI-GFP and pCXSN-GFP transformed tobacco protoplasts.

DISCUSSION
Glyoxalase I functions to detoxify the potent cytotoxic compound MG (Thornalley, 1993). In response to stress conditions, cells undergo active metabolism and produce more MG through leakages in glycolysis and the TCA cycle (Umea et al., 1994). GlyI, the first enzyme of the glyoxalase system, plays a critical role in controlling MG levels and cytotoxicity (Wu et al., 2013a). The GloI gene has been cloned and characterized from several plant species; however, the glyoxalase I gene had never been cloned and characterized in sugarcane. In the present study, a full-length GloI gene, designated SoGloI, was isolated from the smut-resistant sugarcane cultivar YCE 05-179.

The GloI enzyme requires Ni2+ and/or Zn2+ for its catalytic activity (Sukdeo et al., 2004). Sukdeo & Honek (2007) reported that Pseudomonas aeruginosa, a gammaproteobacterium, encodes both Ni2+ and Zn2+ forms of the enzyme: GloA1, GloA2 (both Ni-binding), and GloA3 (Zn-binding). Jain et al. (2016) also found three active GLYI enzymes (AtGLYI2, AtGLYI3 and AtGLYI6) belonging to different metal activation classes coexisting in Arabidopsis thaliana. AtGLYI2 was found to be Zn2+-dependent, whereas AtGLYI3 and AtGLYI6 were Ni2+-dependent. Ni2+-dependent GloI is present as a two-domain protein in all eukaryotes. Among the early branching eukaryotes, the group of algae appears to be the first to encode this gene (Kaur et al., 2013). In this study, the sugarcane SoGloI gene was found to encode two glyoxalase domains as well (Fig. S2). Besides, the multiple protein sequence alignment of SoGloI with GLOI proteins from other species indicated that SoGloI is a Ni2+-dependent enzyme (Fig. 1). The result was similar to OsGLYI-11.2 (Mustafiz et al., 2014), whose expression was substrate inducible. However, unlike other eukaryotic Zn2+-dependent glyoxalases, OsGLYI-11.2 is a Ni2+-dependent monomeric enzyme.

The plant glyoxalase system plays an important role in different tissues at various vegetative and reproductive stages (Mustafiz et al., 2011). The GLOI gene is required for cell division and proliferation; higher enzyme activity has been found in rapidly dividing cells of cell suspensions, seedlings, and root tips (Lin et al., 2010; Wu et al., 2013a). In this study, SoGloI was constitutively expressed in various tissues of sugarcane genotype YCE 05-179, with the highest level in buds, followed by leaves, roots, leaf sheaths, and internodes (Fig. 5A).
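The fold-of-control expression values reported above come from RT-qPCR. A common way to compute such fold changes is the 2^-ΔΔCt (Livak) method, sketched below; the paper does not state which method was used, so this — and all Ct values shown — are assumptions for illustration:

```python
# Hedged sketch of a relative-expression calculation behind fold-change
# statements such as "3.1-fold of the control level". The 2^-ddCt
# (Livak) method is assumed; all Ct values below are invented.

def fold_change_ddct(ct_target_t, ct_ref_t, ct_target_c, ct_ref_c):
    """Relative expression by 2^-ddCt.

    t = treated sample, c = control sample; ref = reference gene.
    """
    d_ct_treated = ct_target_t - ct_ref_t
    d_ct_control = ct_target_c - ct_ref_c
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Example: SoGloI vs a reference gene, treated vs control:
print(round(fold_change_ddct(22.4, 18.0, 24.8, 18.8), 2))  # ~3.03-fold
```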
To date, only a few reports have shown that the GLOI gene is associated with disease resistance in plants. For instance, a maize Glx-I gene enhances the host defense against Aspergillus flavus through the detoxification of MG, a major product of A. flavus (Chen et al., 2004). The expression of wheat TaGly I is up-regulated 2.3-fold upon infection by Fusarium graminearum (Lin et al., 2010). In our previous study, SoGloI expression was up-regulated during infection with S. scitamineum, the pathogen of sugarcane smut (Wu et al., 2013b). In the present study, we used SA and MeJA to simulate biotic stress. Consistently, under SA and MeJA treatments, SoGloI expression peaked at 6 hpt, reaching 2.6- and 2.1-fold of the control level, respectively (Fig. 5C). These results suggest that SoGloI expression can increase significantly under pathogenic stresses; however, the exact role of SoGloI in the pathogen resistance process needs further investigation.

Glyoxalase I genes have also been implicated in enhancing plant tolerance to salt stress. The expression of Gly, a glyoxalase I gene of B. juncea, is up-regulated after exposure to a high concentration of salt (Veena, Reddy & Sopory, 1999). The mRNA and polypeptide levels of GLX1, a glyoxalase I gene of tomato, increased two- to three-fold in roots, internodes and leaves when the plants were treated with 10 g/L NaCl (Espartero, Sanchez-Aguayo & Pardo, 1995). Two other glyoxalase I genes, Bv M14-glyoxalase I of sugar beet (Wu et al., 2013a) and TaGly I of wheat (Lin et al., 2010), also significantly enhanced host tolerance to salt stress. In this study, SoGloI-expressing Rosetta cells grown on agar plates tolerated concentrations of NaCl up to 250 mM (Fig. 3B) and grew faster in LB liquid medium containing 250 mM NaCl (Fig. 4A). SoGloI expression in sugarcane increased steadily from 0 to 48 hpt under salt stress (Fig. 5B), and glyoxalase I activity was also elevated (Fig. 5D). Taken together, the results indicate that the expression level of SoGloI can be significantly up-regulated under salt stress; however, more research is needed to reveal the underlying mechanism.

Zinc (Zn2+), a micronutrient, is necessary for plant growth, but an excessive amount of Zn2+ can inhibit plant growth (Sun et al., 2006; Zarcinas et al., 2004). A few studies have demonstrated that plant GloI genes enhance host tolerance to Zn2+. Singla-Pareek et al. (2006) showed that GlyI from B. juncea enhanced the tolerance of transgenic tobacco to toxic levels of Zn2+. The expression of TaGly I, a glyoxalase I gene of T. aestivum, is induced continuously under 20 mM ZnCl2 treatment; compared to the control, the increase in TaGly I expression is nearly 1.5-fold at 24 h (Lin et al., 2010). In the present study, SoGloI-expressing E. coli Rosetta cells were able to tolerate high concentrations of ZnSO4 up to 750 µM (Fig. 3E) and also grew faster in LB liquid medium containing 750 µM ZnSO4 (Fig. 4D). Consistently, under ZnSO4 stress, SoGloI expression in sugarcane was up-regulated steadily from 0 to 48 hpt, when its level and the enzyme activity were 1.9-fold and 3.1-fold of the control, respectively (Figs. 5B, 5D). These results show that the SoGloI gene can enhance tolerance to excessive zinc stress even in a heterologous host system. Over-expression of glyoxalase I has also been shown to confer tolerance to other heavy metals, such as cadmium or lead (Singla-Pareek et al., 2006).
The growth of SoGloI-expressing E. coli under CdCl2 treatment (Figs. 3D, 4C), together with SoGloI expression and glyoxalase I activity in sugarcane (Figs. 5B, 5D), also supported this notion of tolerance to cadmium. Our work further showed that SoGloI expression and enzyme activity increased significantly under CuCl2 treatment (Figs. 5B, 5D). All these findings suggest that SoGloI may be a good candidate gene for engineering heavy-metal-resistant sugarcane cultivars. As is known, sugarcane is a polyploid and aneuploid crop (Scortecci et al., 2012), in which low transformation efficiency remains one of the major limiting factors for transgenic sugarcane production (Dal-Bianco et al., 2012; Gómez-Merino, Trejo-Téllez & Sentíes-Herrera, 2014; Scortecci et al., 2012). This has also limited the functional analysis of isolated sugarcane genes; nonetheless, a model plant species (Arabidopsis thaliana, Nicotiana benthamiana or Brachypodium distachyon) with a shorter life cycle and simpler genome can be explored as an alternative host for transforming and assessing the functional properties of isolated sugarcane genes, such as SoGloI.

CONCLUSIONS
This is the first report on the cloning and characterization of a glyoxalase I (SoGloI) gene in sugarcane. We isolated and characterized the SoGloI gene and demonstrated the enzyme activity of the glyoxalase I protein. We found that SoGloI expression and SoGloI enzymatic activity were elevated significantly when sugarcane tissues were subjected to simulated biotic and abiotic stress conditions, such as high concentrations of salt or heavy metal ions. The findings open up a new research avenue for enabling sugarcane to grow in polluted or saline environments via genetic engineering and breeding with SoGloI to enhance host resistance.
2018-11-15T17:36:51.789Z
2018-10-31T00:00:00.000
{ "year": 2018, "sha1": "eef73d3f7d3cbc34ea4d523e533c35cc0e5e99a1", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7717/peerj.5873", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "eef73d3f7d3cbc34ea4d523e533c35cc0e5e99a1", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
39599936
pes2o/s2orc
v3-fos-license
On the spectral theory of groups of affine transformations of compact nilmanifolds

Let $N$ be a connected and simply connected nilpotent Lie group, $\Lambda$ a lattice in $N$, and $X=N/\Lambda$ the corresponding nilmanifold. Let $Aff(X)$ be the group of affine transformations of $X$. We characterize the countable subgroups $H$ of $Aff(X)$ for which the action of $H$ on $X$ has a spectral gap, that is, such that the associated unitary representation $U$ of $H$ on the space of functions from $L^2(X)$ with zero mean does not weakly contain the trivial representation. Denote by $T$ the maximal torus factor associated to $X$. We show that the action of $H$ on $X$ has a spectral gap if and only if there exists no proper $H$-invariant subtorus $S$ of $T$ such that the projection of $H$ on $Aut(T/S)$ has an abelian subgroup of finite index. We first establish the result in the case where $X$ is a torus. In the case of a general nilmanifold, we study the asymptotic behaviour of matrix coefficients of $U$ using decay properties of metaplectic representations of symplectic groups. The result shows that the existence of a spectral gap for subgroups of $Aff(X)$ is equivalent to strong ergodicity in the sense of K. Schmidt. Moreover, we show that the action of $H$ on $X$ is ergodic (or strongly mixing) if and only if the corresponding action of $H$ on $T$ is ergodic (or strongly mixing).

Introduction
Let $H$ be a countable group acting measurably on a probability space $(X, \nu)$ by measure preserving transformations. Let $U : h \mapsto U(h)$ denote the corresponding Koopman representation of $H$ on $L^2(X, \nu)$. We say that the action of $H$ on $X$ has a spectral gap if the restriction $U_0$ of $U$ to the $H$-invariant subspace
$$L^2_0(X, \nu) = \Big\{\xi \in L^2(X, \nu) : \int_X \xi \, d\nu = 0\Big\}$$
does not have almost invariant vectors, that is, there is no sequence of unit vectors $\xi_n$ in $L^2_0(X, \nu)$ such that $\lim_n \|U_0(h)\xi_n - \xi_n\| = 0$ for all $h \in H$.

A useful equivalent condition for the existence of a spectral gap is as follows. Let $\mu$ be a probability measure on $H$ such that the support of $\mu$ generates $H$. Let $U_0(\mu)$ be the convolution operator defined on $L^2_0(X, \nu)$ by
$$U_0(\mu)\xi = \sum_{h \in H} \mu(h)\, U_0(h)\xi .$$
Observe that we have $\|U_0(\mu)\| \le 1$ and hence $r(U_0(\mu)) \le 1$ for the spectral radius $r(U_0(\mu))$ of $U_0(\mu)$. Assume that $\mu$ is aperiodic (that is, $\mathrm{supp}(\mu)$ is not contained in a coset of a proper subgroup of $H$). Then the action of $H$ on $X$ has a spectral gap if and only if $r(U_0(\mu)) < 1$, and this is equivalent to $\|U_0(\mu)\| < 1$.

Ergodic theoretic applications of the existence of a spectral gap (or of the stable spectral gap; see below for the definition) to random walks (such as the rate of $L^2$-convergence in the random ergodic theorem, the pointwise ergodic theorem, analogues of the law of large numbers and of the central limit theorem, etc.) are given in [CoGu11], [CoLe11], [FuSh99], [GoNe10] and [Guiv05]. Another application of the spectral gap property is the uniqueness of $\nu$ as an $H$-invariant mean on $L^\infty(X, \nu)$; for this as well as for further applications, see [BeHV08], [Lubo94], [Popa08], [Sarn90].

Recall that a factor $(Y, m, H)$ of the system $(X, \nu, H)$ is a probability space $(Y, m)$ equipped with an $H$-action by measure preserving transformations together with an $H$-equivariant measurable mapping $\Phi : X \to Y$ with $\Phi_*(\nu) = m$. Observe that $L^2(Y, m)$ can be identified with an $H$-invariant closed subspace of $L^2(X, \nu)$. By a result proved in [JuRo79, Theorem 2.4], no action of a countable amenable group by measure preserving transformations on a non-atomic probability space has a spectral gap.
As a consequence, if there exists a non-atomic factor $(Y, m, H)$ of the system $(X, \nu, H)$ such that $H$ acts as an amenable group on $Y$, then the action of $H$ on $X$ has no spectral gap. Our main result (Theorem 1) shows in particular that this is the only obstruction to the existence of a spectral gap when $H$ is a countable group of affine transformations of a compact nilmanifold $X$.

Let $N$ be a connected and simply connected nilpotent Lie group. Let $\Lambda$ be a lattice in $N$; the associated nilmanifold $\Lambda\backslash N$ is known to be compact. The group $N$ acts by right translations on $\Lambda\backslash N$: every $n \in N$ defines a transformation $\rho(n)$ on $\Lambda\backslash N$ given by $\Lambda x \mapsto \Lambda x n$. Denote by $Aut(N)$ the group of continuous automorphisms of $N$ and by $Aut(\Lambda\backslash N)$ the subgroup of continuous automorphisms $\varphi$ of $N$ such that $\varphi(\Lambda) = \Lambda$. The group $Aut(N)$ is a linear algebraic group defined over $\mathbf{Q}$ and $Aut(\Lambda\backslash N)$ is a discrete subgroup of $Aut(N)$. An affine transformation of $\Lambda\backslash N$ is a mapping $\Lambda\backslash N \to \Lambda\backslash N$ of the form $\varphi \circ \rho(n)$ for some $\varphi \in Aut(\Lambda\backslash N)$ and $n \in N$. The group $Aff(\Lambda\backslash N)$ of affine transformations of $\Lambda\backslash N$ is the semi-direct product $Aut(\Lambda\backslash N) \ltimes N$. Every $g \in Aff(\Lambda\backslash N)$ preserves the translation invariant probability measure $\nu_{\Lambda\backslash N}$ induced by a Haar measure on $N$. The action of $Aff(\Lambda\backslash N)$ on $\Lambda\backslash N$ is a natural generalization of the action of $SL_n(\mathbf{Z}) \ltimes \mathbf{T}^n$ on the torus $\mathbf{T}^n = \mathbf{R}^n/\mathbf{Z}^n$. In fact, let $T = \Lambda[N, N]\backslash N$ be the maximal torus factor of $\Lambda\backslash N$. Then the nilsystem $(\Lambda\backslash N, H)$ can be viewed as the result, starting with $T$, of a finite sequence of extensions by tori, with induced actions of $H$ at every stage. Actions of higher rank lattices by affine transformations on nilmanifolds arise in Zimmer's programme as one of the standard actions for such groups (see the survey [Fish]). The action of a single affine transformation (or a flow of such transformations) on a nilmanifold has been studied by W. Parry from the ergodic, spectral or topological point of view (see [Parr69], [Parr70-a], [Parr70-b]; see also [AuGH63] for the case of translations).

Let $V$ be a finite dimensional real vector space and $\Delta$ a lattice in $V$. As is well-known, $T = V/\Delta$ is a torus and $\Delta$ defines a rational structure on $V$. Let $W$ be a rational linear subspace of $V$. Then $S = W/(W \cap \Delta)$ is a subtorus of $T$ and we have a torus factor $\overline{T} = T/S$. Let $H$ be a subgroup of $Aff(T)$ and assume that $W$ is invariant under $p_a(H)$, where $p_a : Aff(\Lambda\backslash N) \to Aut(\Lambda\backslash N)$ is the canonical projection. Then $H$ leaves $S$ invariant and the induced action of $H$ on $\overline{T}$ is a factor of the action of $H$ on $T$. We will say that $\overline{T}$ is an $H$-invariant factor torus of $T$.

Here is our main result.

Theorem 1 Let $\Lambda\backslash N$ be a compact nilmanifold with associated maximal torus factor $T$. Let $H$ be a countable subgroup of $Aff(\Lambda\backslash N)$. The following properties are equivalent:
(i) The action of $H$ on $\Lambda\backslash N$ has a spectral gap.
(ii) The action of $H$ on $T$ has a spectral gap.
(iii) There exists no non-trivial $H$-invariant factor torus $\overline{T}$ of $T$ such that the projection of $p_a(H)$ on $Aut(\overline{T})$ is a virtually abelian group (that is, it contains an abelian subgroup of finite index).

To give an example, let $T = \mathbf{R}^d/\mathbf{Z}^d$ be the $d$-dimensional torus. Observe that $Aut(T)$ can be identified with $GL_d(\mathbf{Z})$. Let $H$ be a subgroup of $Aff(T) = GL_d(\mathbf{Z}) \ltimes T$. Assume that $p_a(H)$ is not virtually abelian and that $p_a(H)$ acts $\mathbf{Q}$-irreducibly on $\mathbf{R}^d$ (that is, there is no non-trivial $p_a(H)$-invariant rational subspace of $\mathbf{R}^d$). Then the action of $H$ on $T$ has a spectral gap. For more details, see Corollary 6 and Example 7 below.
The result above is new even in the case where Λ\N is a torus; see however [FuSh99, Theorem 6.5.ii] for a sufficient condition for the existence of a spectral gap for groups of torus automorphisms. Our result shows, in particular, that the spectral gap property for a countable subgroup H of Aff(Λ\N) is equivalent to the spectral gap property for its automorphism part p_a(H). The proof of Theorem 1 breaks into two parts. We first establish the result in the case where Λ\N is a torus (see Theorem 5 below). Our proof is based here on the existence of appropriate invariant means on finite dimensional vector spaces. A crucial tool will be (a version of) Furstenberg's result on stabilizers of probability measures on projective spaces over local fields. In the case of a general nilmanifold Λ\N with associated maximal torus factor T, we show that (ii) implies (i) by studying the asymptotic behaviour of matrix coefficients of the Koopman representation U of H restricted to the orthogonal complement of L^2(T) in L^2(Λ\N); for this, we will use decay properties of the metaplectic representation of symplectic groups due to R. Howe and C. C. Moore [HoMo79]. The equivalence of (i) and (ii) was proved in [BeHe10] in the special case of a group of automorphisms of Heisenberg nilmanifolds. Actions of countable amenable groups on a non-atomic probability space fail to have a property which is weaker than the spectral gap property. Recall that the action of a countable group H by measure preserving transformations on a probability space (X, ν) is said to be strongly ergodic in Schmidt's sense (see [Schm80], [Schm81]) if every sequence (A_n)_n of measurable subsets of X which is asymptotically invariant (that is, such that lim_n ν(gA_n △ A_n) = 0 for all g ∈ H) is trivial (that is, lim_n ν(A_n)(1 − ν(A_n)) = 0). It is easy to see that if the action of H on X has a spectral gap, then the action is strongly ergodic (see, for instance, [BeHV08, Proposition 6.3.2]). The converse does not hold in general (see Example (2.7) in [Schm81]). As shown in [Schm81], no action of a countable amenable group by measure preserving transformations on a non-atomic probability space can be strongly ergodic. An interesting feature of strong ergodicity (as opposed to the spectral gap property) is that this notion only depends on the equivalence relation on X defined by the partition of X into H-orbits. Our result shows that the existence of a spectral gap for subgroups of Aff(Λ\N) is equivalent to strong ergodicity.

Corollary 2 The action of a countable subgroup of Aff(Λ\N) on a compact nilmanifold Λ\N has a spectral gap if and only if it is strongly ergodic.

We suspect that the previous corollary is true for every countable group of affine transformations of the quotient of a Lie group by a lattice. In fact, the following stronger statement could be true. Let G be a connected Lie group and Γ a lattice in G. Let H be a countable subgroup of Aff(Γ\G). Assume that the action of H on Γ\G does not have a spectral gap. Is it true that there exists a non-trivial H-invariant factor Γ′\G′ of Γ\G such that the closure of the projection of H on Aff(Γ′\G′) is an amenable group? As our result shows, this is indeed the case if G is a nilpotent Lie group; it is also the case if G is a simple non-compact Lie group with finite centre (see Theorem 6.10 in [FuSh99]). It is worth mentioning that the corresponding statement in the framework of countable standard equivalence relations has been proved in [JoSc87].
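The easy implication mentioned above (spectral gap implies strong ergodicity) can be checked by a direct computation, which we sketch for the reader's convenience (following the idea of [BeHV08, Proposition 6.3.2]; the normalization is ours). Given an asymptotically invariant sequence $(A_n)_n$, set

\[
\xi_n:=1_{A_n}-\nu(A_n)1_X\in L^2_0(X,\nu);\qquad
\|U_0(g)\xi_n-\xi_n\|^2=\nu(gA_n\,\triangle\,A_n)\to 0,\qquad
\|\xi_n\|^2=\nu(A_n)(1-\nu(A_n)).
\]

If $(A_n)_n$ were non-trivial, then along a subsequence the normalized vectors $\xi_n/\|\xi_n\|$ would be almost invariant, contradicting the spectral gap.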
Let again H be a countable group acting by measure preserving transformations on a probability space (X, ν). The following useful strengthening of the spectral gap property has been considered by several authors ([Bekk90], [BeGu06], [FuSh99], [Popa08]). Following [Popa08], let us say that the action of H has a stable spectral gap if the diagonal action of H on (X × X, ν ⊗ ν) has a spectral gap (see Lemma 3.2 in [Popa08] for the rationale of this terminology). The following result is an immediate consequence of Theorem 1 above and of the corresponding result for groups of torus automorphisms obtained in [FuSh99, Theorem 6.4].

Corollary 3 If the action of a countable subgroup of Aff(Λ\N) on a compact nilmanifold Λ\N has a spectral gap, then it has a stable spectral gap.

Next, we turn to the question of the ergodicity or mixing of the action of a (not necessarily countable) subgroup H of Aff(Λ\N) on Λ\N. As a consequence of our methods, we will see that this reduces to the same question for the action of H on the associated torus. Recall that an action of a group H on a probability space (X, ν) is weakly mixing if the Koopman representation U of H on L^2(X, ν) has no finite dimensional subrepresentation, and that the action of a countable group H is strongly mixing if the matrix coefficients g ↦ ⟨U(g)ξ, η⟩ vanish at infinity for all ξ, η ∈ L^2_0(X, ν).

Theorem 4 Let H be a group of affine transformations of the compact nilmanifold Λ\N. Let T be the maximal torus factor associated to Λ\N.
(i) If the action of H on T is ergodic (or weakly mixing), then its action on Λ\N is ergodic (or weakly mixing).
(ii) Assume that H is a subgroup of Aut(Λ\N). If the action of H on T is strongly mixing, then its action on Λ\N is strongly mixing.

Part (i) of the previous theorem has been independently established in [CoGu11] with a different method of proof. In the case of a single affine transformation (that is, in the case H = Z), the result is due to W. Parry (see [Parr69], [Parr70-a]). Also, [CoGu11] gives an example of a group of automorphisms H acting ergodically on a nilmanifold Λ\N for which no single automorphism from H acts ergodically on Λ\N, showing that the previous theorem does not follow from Parry's result. Sections 1-7 are devoted to the proof of our main result, Theorem 1, in the case where Λ\N is a torus. The proof of the extension to general nilmanifolds is given in Sections 8-14. Theorem 4 is treated in Section 15.

Let T = V/∆ be a torus as before. The group Aff(T) of affine transformations of T is the semi-direct product Aff(T) = Aut(T) ⋉ T. The aim of this section is to state the following result, which will be proved in the next two sections. Recall that p_a denotes the canonical homomorphism Aff(T) → Aut(T).

Theorem 5 Let H be a countable subgroup of Aff(T). The following properties are equivalent:
(i) The action of H on T does not have a spectral gap.
(ii) There exists a non-trivial H-invariant factor torus T′ such that the projection of p_a(H) on Aut(T′) is amenable.
(iii) There exists a non-trivial H-invariant factor torus T′_0 such that the projection of p_a(H) on Aut(T′_0) is virtually abelian.

The following corollary is an immediate consequence of the implication (i) ⇒ (iii) in the previous theorem.

Corollary 6 Let T = V/∆ be a torus. Let H be a countable subgroup of Aff(T) such that p_a(H) ⊂ Aut(T) is not virtually abelian. Assume that the action of H on V is Q-irreducible for the rational structure on V defined by ∆. Then the action of H on T has a spectral gap.
This last result was proved in [FuSh99, Theorem 6.5.ii] for a subgroup H of Aut(T) under the stronger assumption that the action of H on V is R-irreducible. We give an example of a group H of automorphisms of a 6-dimensional torus T = V/∆ which acts Q-irreducibly but not R-irreducibly on V and which has a spectral gap on T.

Example 7 Let q be the quadratic form on R^3 given by

q(x_1, x_2, x_3) = x_1^2 + x_2^2 − √2 x_3^2.

Let σ be the non-trivial automorphism of the field Q[√2] and let H = SO(q, Z[√2]) be the group of matrices with entries in Z[√2] preserving q. For every g ∈ H, the matrix g^σ, obtained by conjugating each entry of g, preserves the conjugate form q^σ of q under σ. The mapping x ↦ (x, x^σ) induces an isomorphism between Z[√2]^3 and a lattice ∆ in R^3 × R^3. It induces also an isomorphism γ ↦ (γ, γ^σ) between H and a lattice Γ in SO(q, R) × SO(q^σ, R). Moreover, H leaves Z[√2]^3 invariant and Γ leaves ∆ invariant. We obtain in this way an action of H on the torus T = R^6/∆. Since SO(q^σ, R) ≅ SO(3) is compact, H is a lattice in SO(q, R). This implies (Borel density theorem) that the Zariski closure of H in SL_3(R) is the simple Lie group SO(q, R), so that the action of H on R^3 is R-irreducible and hence Q-irreducible for the usual rational structure on R^3. It follows that the action of H on R^6 is Q-irreducible for the rational structure defined by the lattice ∆ of R^6. Observe that the action of H on R^6 is not R-irreducible since Γ leaves invariant each copy of R^3 in R^6 = R^3 ⊕ R^3. Moreover, H is not virtually abelian as it is a lattice in SO(q, R) ≅ SO(2, 1). As a consequence of the previous corollary, the action of H on T has a spectral gap.

Concerning the proof of Theorem 5, we will first treat the case of groups of toral automorphisms. Choosing a basis for the Z-module ∆, we identify V with R^d and ∆ with Z^d. By means of the standard scalar product on R^d, we identify the dual group V̂ of V (that is, the group of unitary characters of V) with V. The dual action of an element g ∈ GL(V) on V̂ corresponds to the action of (g^{-1})^t on V. Since T = V/∆, the dual group of T can be identified with ∆. Let W be a rational linear subspace of V. The dual group of the quotient V/W corresponds to the orthogonal complement W^⊥ of W, which is also a rational linear subspace of V. The dual group of the torus factor T′ = (V/W)/((W + ∆)/∆) corresponds to W^⊥ ∩ ∆. The discussion above shows that Theorem 5, in the case of a group of toral automorphisms, is equivalent to the following theorem.

Theorem 8 Let H be a subgroup of GL_d(Z). The following properties are equivalent:
(i) The action of H on T = R^d/Z^d does not have a spectral gap.
(ii) There exists a non-trivial rational subspace W of R^d which is invariant under the subgroup H^t of GL_d(Z) and such that the image of H^t in GL(W) is an amenable group.
(iii) There exists a non-trivial rational subspace W of R^d which is invariant under H^t and such that the image of H^t in GL(W) is a virtually abelian group.

Observe that the implication (iii) =⇒ (ii) is obvious and that the implication (ii) =⇒ (i) follows from the result in [JuRo79] quoted in the introduction. Therefore, it remains to show that (i) implies (ii) and that (ii) implies (iii).

A canonical amenable group associated to a linear group

Let V be a finite-dimensional real vector space. (Although we will consider only real vector spaces, the results in this section are valid for vector spaces over any local field.) Let g ∈ GL(V) and W a g-invariant linear subspace of V. We denote by g_W ∈ GL(W) the automorphism of W given by the restriction of g to W.
If W′ is another g-invariant subspace contained in W, we will denote by g_{W/W′} ∈ GL(W/W′) the automorphism of W/W′ induced by g. Also, if H is a subgroup of GL(V) and W′ ⊂ W are H-invariant subspaces of V, we will denote by H_W and H_{W/W′} the corresponding subgroups of GL(W) and GL(W/W′), respectively. For a subgroup H of GL(V), we denote by H̄ its closure for the usual locally compact topology on GL(V). The aim of this section is to prove the following result.

Proposition 9 Let H be a subgroup of GL(V) and let V(H) denote the linear span of all H-invariant linear subspaces W of V for which H̄_W is amenable. Then H̄_{V(H)} is amenable; thus V(H) is the largest H-invariant linear subspace of V with this property.

A more explicit description of V(H) will be given later (Proposition 15). For the proof of the proposition above, we will need the following elementary lemma.

Lemma 10 Let H be a subgroup of GL(V) and W an H-invariant linear subspace of V. If H̄_W and H̄_{V/W} are amenable, then H̄ is amenable.

Proof Assume that H̄_W and H̄_{V/W} are amenable. Let L be the closed subgroup consisting of the elements g ∈ GL(V) leaving W invariant and for which g_W belongs to H̄_W and g_{V/W} belongs to H̄_{V/W}. The mapping L → H̄_W × H̄_{V/W}, g ↦ (g_W, g_{V/W}), is a continuous homomorphism whose kernel, the group of automorphisms inducing the identity on both W and V/W, is abelian. Hence L is amenable, and so is its closed subgroup H̄.

Invariant means supported by rational subspaces

Let G be a locally compact group. There is a well-known relationship between weak containment properties of the trivial representation 1_G and the existence of invariant means on appropriate spaces (see below). We will need to make this relationship more precise in the case where H is a group of toral automorphisms. By a unitary representation (π, H) of G, we will always mean a strongly continuous homomorphism π : G → U(H) from G to the unitary group of a complex Hilbert space H. Recall that, for every finite measure µ on G, the operator π(µ) ∈ B(H) is defined by the integral

π(µ) = ∫_G π(g) dµ(g).

Assume that G is a discrete group and that π and ρ are unitary representations of G; then π is weakly contained in ρ if and only if ‖π(µ)‖ ≤ ‖ρ(µ)‖ for every finite measure µ on G (see Section 18 in [Dixm69]). Recall also that, given a probability measure µ on G which is aperiodic, the trivial representation 1_G is weakly contained in a unitary representation π if and only if ‖π(µ)‖ = 1 (see [BeHV08, G.4.2]). Let X be a topological space and C_b(X) the Banach space of all bounded continuous functions on X equipped with the supremum norm. Recall that a mean on X is a linear functional m on C_b(X) such that m(1_X) = 1 and such that m(ϕ) ≥ 0 for every ϕ ∈ C_b(X) with ϕ ≥ 0. A mean is automatically continuous. We will often write m(A) instead of m(1_A) for a subset A of X. Observe that the means on a compact space X are the probability measures on X. Let H be a group acting on X by homeomorphisms. Then H acts naturally on C_b(X), and a mean m on X is said to be H-invariant if m(ϕ ∘ h) = m(ϕ) for all ϕ ∈ C_b(X) and h ∈ H. Let Y be another topological space and f : X → Y a continuous mapping. For every mean m on X, the push-forward f_*(m) of m is the mean on Y defined by ϕ ↦ m(ϕ ∘ f) for ϕ ∈ C_b(Y). We will consider invariant means on two kinds of topological spaces:
• X is a set with the discrete topology and endowed with an action of a group H. It is well-known (see Théorème on p. 44 in [Eyma72]) that there exists an H-invariant mean on X if and only if the natural unitary representation U of H on ℓ^2(X) almost has invariant vectors (that is, if and only if U weakly contains the trivial representation 1_H of H).
• X is the projective space P(V) of a finite dimensional vector space V, endowed with the action of a subgroup of GL(V).

The following result is a version of Furstenberg's celebrated lemma (see [Furs76] or [Zimm84, Corollary 3.2.2]) on stabilizers of probability measures on projective spaces. We will need later (in Section 5) the more precise form we give for this lemma (see also the proof of Theorem 6.5 (ii) in [FuSh99]).

Lemma 11 Let ν be a probability measure on P(V) and let H be a closed subgroup of GL(V) leaving ν invariant, with connected component H_0. Let W be the linear span of π^{-1}(supp(ν)), where π : V \ {0} → P(V) is the canonical projection. Then [H_0, H_0] acts isometrically on W, with respect to an appropriate norm on W.

Proof We can find finitely many positive measures (ν_i)_{1≤i≤r} on P(V) with ν = Σ_{1≤i≤r} ν_i such that ν(V_i ∩ V_j) = 0 for i ≠ j and such that supp(ν_i) ⊂ π(V_i)
for every i ∈ {1, ..., r}, where V_i is a linear subspace of V of minimal dimension with ν_i(π(V_i)) > 0. The H-orbit of V_i, and hence the H-orbit of ν_i, is finite (see the proof of Corollary 3.2.2 in [Zimm84]). Since stabilizers of probability measures on P(V) are algebraic (see Theorem 3.2.4 in [Zimm84]), it follows that H_0 stabilizes each V_i and each ν_i. Now ν_i, viewed as a measure on P(V_i), is zero on every proper projective subspace of P(V_i). Hence (see Corollary 3.2.2 in [Zimm84]), the image of the restriction of H_0 to V_i is bounded in PGL(V_i); it follows that [H_0, H_0] acts on each V_i, and hence on W, isometrically with respect to a suitable norm.

Remark 12 The conclusion of the previous lemma does not hold in general if we replace H_0 by an arbitrary subgroup of finite index of H. For example, let V = Re_1 ⊕ Re_2 and let H ⊂ GL_2(R) be the stabilizer of the measure ν = (δ_{π(e_1)} + δ_{π(e_2)})/2 on P(V). Then [H, H] is not bounded; however, H_0 is the subgroup of index two consisting of the diagonal matrices in H, and [H_0, H_0] is trivial.

Proposition 13 Let ∆ be a lattice in V and let H be a subgroup of GL(V) leaving ∆ invariant. If there exists an H-invariant mean on ∆ \ {0}, then there exists a non-trivial H-invariant rational linear subspace W of V such that the image of H in GL(W) has amenable closure.

Proof Let m be the H-invariant mean on V \ {0} obtained by pushing forward the given mean under the inclusion ∆ \ {0} ⊂ V \ {0}. Let π : V \ {0} → P(V) be the canonical projection and ν = π_*(m). Then ν is an H-invariant probability measure on P(V). Let W be the linear span of π^{-1}(supp(ν)); W is rational, being spanned by vectors of ∆. Then W is non-trivial and ν is not supported on a proper projective subspace of π(W). It follows from Lemma 11, applied to the closed subgroup of GL(W) generated by the image of H, that the image of H in GL(W) has amenable closure.

At this point, we can give the proof of the fact that (i) implies (ii) in Theorem 5 (or, equivalently, in Theorem 8) in the case of groups of automorphisms.

Proof of (i) =⇒ (ii) in Theorem 8 Let H be a countable subgroup of GL_d(Z). Assume that the action of H on T = R^d/Z^d does not have a spectral gap. Then the unitary representation of the transposed subgroup H^t on ℓ^2(Z^d \ {0}) weakly contains the trivial representation; hence there exists an H^t-invariant mean on Z^d \ {0}, and Proposition 13 yields a non-trivial H^t-invariant rational subspace W of R^d such that the image of H^t in GL(W) is amenable.

For the proof of (ii) =⇒ (iii) in Theorem 8, we will need a precise description of the subspace V(H) associated to a subgroup H of GL(V) and introduced in Proposition 9. For this, we will use the following result which appears as Lemma 1 and Lemma 2 in [CoGu74]. Since the arguments in [CoGu74] are slightly incomplete, we give the proof of this lemma.

Lemma 14 Let V be a finite-dimensional real vector space and let H be a subgroup of GL(V) such that the action of H on V is completely reducible.
(i) Assume that the eigenvalues of every element in H all have modulus 1. Then H is relatively compact.
(ii) Assume that there exists an integer N ≥ 1 such that the eigenvalues of every element in H are all N-th roots of unity. Then H is finite.

Proof Write V as a direct sum of irreducible H-submodules V_i. The action of H on each V_i extends to a representation of H on the complexification V_i^C which either is irreducible or decomposes as a direct sum of two irreducible (mutually conjugate) representations of H. It suffices therefore to prove the following Claim: Let H be a subgroup of GL_d(C) acting irreducibly on C^d. Then conclusions (i) and (ii) hold. For every h ∈ H, we consider the linear functional ϕ_h on the algebra M_d(C) of complex (d × d)-matrices defined by ϕ_h(x) = Tr(hx). Since H acts irreducibly, it follows from Burnside's theorem that the algebra generated by H coincides with M_d(C). Hence, there exists a basis {h_1, ..., h_{d^2}} of M_d(C) consisting of elements of H. Assume that the eigenvalues of every element in H all have modulus 1. Then the ϕ_{h_i}'s are bounded on H by d. It follows that the matrix coefficients of the elements in H are bounded; hence H is a relatively compact subset of M_d(C), which proves (i). Assume that, for a fixed N ≥ 1, the eigenvalues of every element in H are N-th roots of unity. Then the ϕ_{h_i}'s take only a finite set of values on H; since the functionals ϕ_{h_1}, ..., ϕ_{h_{d^2}} separate the points of M_d(C), the group H is finite, which proves (ii).
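The hypothesis of complete reducibility in Lemma 14 cannot be dropped; a minimal counterexample (ours):

\[
g=\begin{pmatrix}1&1\\0&1\end{pmatrix},\qquad g^n=\begin{pmatrix}1&n\\0&1\end{pmatrix}.
\]

Every element of $H=\langle g\rangle$ has all eigenvalues equal to $1$, yet $H$ is unbounded and infinite; here the invariant line $\mathbb{R}e_1$ admits no $H$-invariant complement, so the action on $\mathbb{R}^2$ is not completely reducible.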
Proposition 15 Let V be a finite-dimensional real vector space and H a subgroup of GL(V).

Proof Since H̄_{V(H)} is amenable, there exists an H-invariant probability measure ν on the compact space P(V(H)). Let W be the smallest H-invariant subspace such that ν is supported on P(W). It follows from Lemma 11 that [H_0, H_0] acts isometrically on W, with respect to an appropriate norm on W. We can apply the same argument to the group H_{V(H)/W} acting on the quotient space V(H)/W. Hence, by induction, we obtain a flag

{0} = W_0 ⊂ W_1 ⊂ · · · ⊂ W_r = V_1

of H-invariant subspaces of V_1 := V(H). Let N be the unipotent subgroup of GL(V_1) consisting of the elements in GL(V_1) which act trivially on every quotient W_{i+1}/W_i. We can choose a scalar product on V_1 such that the closure of the image of H in GL(V_1) is contained in KN for a compact subgroup K of GL(V_1) preserving each W_i.

We will need the following corollary of (the proof of) the previous proposition.

Corollary 16 Let Γ be a subgroup of GL_d(Z). Assume that the eigenvalues of every γ ∈ Γ all have modulus 1. Then Γ contains a unique maximal unipotent normal subgroup Γ_0 of finite index. In particular, Γ_0 is a characteristic subgroup of Γ.

Proof As in the proof of the previous proposition, we consider a Jordan-Hölder sequence {0} = W_0 ⊂ W_1 ⊂ · · · ⊂ W_r = R^d for the Γ-module R^d and let N be the subgroup of all g ∈ GL_d(R) which act trivially on every W_{i+1}/W_i. We choose a scalar product on R^d such that Γ embeds as a subgroup of KN, where K is a compact group preserving each W_i. Let γ ∈ Γ. For every l ≥ 1, the l-th powers of the eigenvalues of γ are roots of the same monic polynomial with integer coefficients and of degree d. Since the eigenvalues of γ are all of modulus 1, the coefficients of this polynomial are bounded by a number only depending on d. By a standard argument (see e.g. the proof of Lemma 11.6 in [StTa87]), it follows that all the eigenvalues of γ are roots of unity of a fixed order N which only depends on d. Let Γ̄ be the projection of Γ in K. The action of Γ̄ is completely reducible, since the quotients W_{i+1}/W_i are irreducible, and it follows from Lemma 14.ii that Γ̄ is finite. Hence, Γ ∩ N is a unipotent normal subgroup of finite index in Γ. We have therefore proved that Γ contains a unipotent normal subgroup of finite index. We claim that Γ_0 := Γ ∩ Zc(Γ)_0 is the unique maximal unipotent normal subgroup of finite index in Γ, where Zc(Γ)_0 denotes the connected component of the Zariski closure of Γ. Indeed, let Γ_1 be a unipotent normal subgroup of finite index in Γ. Set U := Zc(Γ_1). Observe that the connected component of U coincides with Zc(Γ)_0, since Γ_1 has finite index in Γ. On the other hand, as is well-known, U is connected since it is a unipotent algebraic group. (Indeed, the Zariski closure of the subgroup generated by a unipotent element u ∈ GL_d(R) contains the one-parameter subgroup through u; see e.g. 15.1 Lemma C in [Hum81].) It follows that Zc(Γ)_0 = U is unipotent. Moreover, since Γ_1 ⊂ U, we have Γ_1 ⊂ Γ_0 and the claim is proved.

We can now complete the proof of Theorem 8. Let W be a non-trivial H^t-invariant rational subspace of R^d such that the image Γ of H^t in GL(W) is amenable, as in (ii). It follows from Corollary 16 that Γ contains a unipotent subgroup Γ_0 of finite index which is moreover characteristic. Let W_1 be the space of the Γ_0-fixed vectors in W. Then W_1 is a rational and non-trivial linear subspace of W. Moreover, W_1 is H-invariant, since Γ_0 is characteristic. We claim that H_{W_1} is virtually abelian. For this, denoting by G the image of H in GL(W_1), it suffices to show that Zc(G)_0 is abelian. One checks, using the preceding results, that this is indeed the case; the subgroup G ∩ Zc(G)_0 then has finite index in G and is abelian, which proves the claim and completes the proof of Theorem 8.

Herz's majoration principle for induced representations

Unitary representations of a separable locally compact group G induced by a closed subgroup H will appear several times in the sequel. We review their definition when the homogeneous space H\G has a G-invariant measure. This will always be the case in the situations we will encounter.
(Induced representations are still defined in the general case, after appropriate changes; see [Mack76] or [BeHV08].) Let ν be a non-zero G-invariant measure on H\G. Let (σ, K) be a unitary representation of H. We will use the following model for the induced representation Ind_H^G σ. Choose a measurable section s : H\G → G for the canonical projection G → H\G. Let c : (H\G) × G → H be the corresponding cocycle, defined by

c(x, g) = s(x) g s(x · g)^{-1}

for all x ∈ H\G, g ∈ G. The Hilbert space of Ind_H^G σ is the space L^2(H\G, K) of all square-integrable measurable mappings ξ : H\G → K and the action of G on L^2(H\G, K) is given by

(Ind_H^G σ)(g)ξ(x) = σ(c(x, g)) ξ(x · g).

In the sequel, we will use several times a well-known strengthening of Herz's majoration principle from [Herz70] concerning norms of convolution operators under an induced representation. For an even more general version, see [Anan03, 2.3.1]. For the convenience of the reader, we give the short proof.

Proposition 17 (Herz's majoration principle) Let H be a closed subgroup of G such that H\G has a G-invariant Borel measure ν and let (σ, K) be a unitary representation of H. For every probability measure µ on the Borel subsets of G, we have

‖(Ind_H^G σ)(µ)‖ ≤ ‖λ_{G/H}(µ)‖,

where λ_{G/H} denotes the quasi-regular representation of G on L^2(H\G).

Proof Let c : (H\G) × G → H be the cocycle defined by a Borel section of the projection G → H\G. For ξ ∈ L^2(H\G, K), let ϕ_ξ ∈ L^2(H\G) be defined by ϕ_ξ(x) = ‖ξ(x)‖, and observe that ‖ϕ_ξ‖ = ‖ξ‖. Using the Cauchy-Schwarz inequality, we have, for ξ, η ∈ L^2(H\G, K),

|⟨(Ind_H^G σ)(µ)ξ, η⟩| ≤ ∫_G ∫_{H\G} ‖ξ(x · g)‖ ‖η(x)‖ dν(x) dµ(g) = ⟨λ_{G/H}(µ)ϕ_ξ, ϕ_η⟩.

Since Ind_H^G 1_H is equivalent to λ_{G/H}, the claim follows.

We will also need (in Section 10) a precise description of the kernel of an induced representation.

Lemma 18 With the notation as in the previous proposition, let π = Ind_H^G σ. Then Ker(π) = ⋂_{g∈G} g Ker(σ) g^{-1}, that is, Ker(π) coincides with the largest normal subgroup of G contained in Ker(σ).

Proof Let c : (H\G) × G → H be the cocycle corresponding to a measurable section s : H\G → G with s(H) = e. Let a ∈ Ker(π). Then, for every ξ ∈ L^2(H\G, K), we have

σ(c(x, a)) ξ(x · a) = ξ(x)

for almost all x ∈ H\G. Taking for ξ mappings supported on a small neighbourhood of Ha, we see that a ∈ H; hence c(H, a) = a. Taking for ξ continuous mappings with ξ(H) ≠ 0 and evaluating at H, we obtain that a ∈ Ker(σ). Since Ker(π) is normal in G, it follows that gag^{-1} ∈ Ker(σ) for all g ∈ G. Conversely, let a ∈ G be such that gag^{-1} ∈ Ker(σ) for all g ∈ G. Since s(x) a s(x)^{-1} ∈ Ker(σ) ⊂ H for every x ∈ H\G, we have x · a = x and c(x, a) = s(x) a s(x)^{-1}. Hence, for every ξ ∈ L^2(H\G, K) and x ∈ H\G, we have

π(a)ξ(x) = σ(c(x, a)) ξ(x · a) = ξ(x).

This shows that a ∈ Ker(π) and the claim is proved.

Proof of Theorem 5

Let T = V/∆ be a torus and H a countable subgroup of Aff(T) = Aut(T) ⋉ T. The implication (iii) =⇒ (ii) is obvious and the implication (ii) =⇒ (i) follows from [JuRo79]. The fact that (ii) implies (iii) has been proved in Theorem 8. Therefore, it remains to show that (i) implies (ii). Again by Theorem 8, it suffices to show that if the action of H on T has no spectral gap, then the same is true for the action of p_a(H) on T, where p_a is the projection from Aff(T) to Aut(T). This will be an immediate consequence of the next proposition. For a probability measure µ on Aff(T), we denote by p_a(µ) the probability measure on Aut(T) which is the image of µ under p_a. Let U_0 be the Koopman representation of Aff(T) on L^2_0(T).

Proposition 19 For every probability measure µ on Aff(T), we have

‖U_0(µ)‖ ≤ ‖U_0(p_a(µ))‖,

where, on the right-hand side, U_0 also denotes the Koopman representation of Aut(T) on L^2_0(T).

Proof Set Γ = Aut(T). Let T̂ ≅ Z^d be the dual group of T. The Fourier transform sets up a unitary equivalence between U_0 and the representation V of Aff(T) on ℓ^2(T̂ \ {1_T̂}) given by

V(γ, a)δ_χ = (χ ∘ γ^{-1})(a)^{-1} δ_{χ∘γ^{-1}}   (*)

for (γ, a) ∈ Aut(T) ⋉ T and χ ∈ T̂ \ {1_T̂}. Choose a set of representatives S for the Γ-orbits in T̂ \ {1_T̂}. Then ℓ^2(T̂ \ {1_T̂}) decomposes as the direct sum

⊕_{χ∈S} ℓ^2(O_χ)

of Aff(T)-invariant subspaces, where O_χ is the orbit of χ ∈ S under Γ.
It follows from Formula (*) above that the restriction V_χ of V to ℓ^2(O_χ) is equivalent to the induced representation Ind^{Γ⋉T}_{Γ_χ⋉T} χ̃, where Γ_χ is the stabilizer of χ in Γ and where χ̃ is the extension of χ to Γ_χ ⋉ T given by

χ̃(γ, a) = χ(a)

for all γ ∈ Γ_χ, a ∈ T. The proposition will be proved if we can show that, for all χ ∈ S, we have

‖V_χ(µ)‖ ≤ ‖λ_{Γ/Γ_χ}(p_a(µ))‖.   (**)

Now, the restriction of V_χ to Γ is equivalent to the natural representation of Γ on ℓ^2(O_χ), which is the induced representation Ind^Γ_{Γ_χ} 1_{Γ_χ}. Observe that Ind^{Γ⋉T}_{Γ_χ⋉T} 1_{Γ_χ⋉T} is equivalent to (Ind^Γ_{Γ_χ} 1_{Γ_χ}) ∘ p_a = λ_{Γ/Γ_χ} ∘ p_a. Hence, Inequality (**) follows from Herz's majoration principle (Proposition 17), and the proof of Theorem 5 is complete.

The following corollary gives more precise information about the spectral structure of the Koopman representation associated to the action on T of a countable subgroup of Aff(T).

Corollary 20 Let H be a countable subgroup of Aff(T) and Γ = p_a(H). There exists a Γ-invariant torus factor T′ of T such that the projection of H in Aff(T′) is an amenable group and which is the largest one with this property: every other Γ-invariant torus factor S of T for which the projection of H in Aff(S) is amenable is a factor of T′. Moreover, the torus factor T′ has the following properties: (i) the projection of Γ on Aut(T′) is a virtually polycyclic group; (ii) the restriction to L^2(T′)^⊥ of the Koopman representation of H does not weakly contain the trivial representation 1_H.

Proof As for the proof of Theorem 5, we proceed by duality, using Fourier analysis and identifying V and ∆ with their dual groups. Let V_rat(Γ) be the subspace generated by the union of the Γ-invariant rational subspaces W of V for which Γ̄_W is amenable. Then V_rat(Γ) is a Γ-invariant rational subspace and, by Proposition 9, Γ̄_{V_rat(Γ)} is amenable. We claim that the natural unitary representation of Γ on ℓ^2(∆ \ (V_rat(Γ) ∩ ∆)) does not weakly contain 1_Γ. Indeed, assume by contradiction that this is not the case. Then there exists a Γ-invariant mean m on ∆ \ (V_rat(Γ) ∩ ∆). We consider the vector space V′ = V/V_rat(Γ) with the lattice ∆′ = p(∆), where p : V → V′ is the canonical projection. Then p_*(m) is a Γ-invariant mean on ∆′ \ {0}. Hence, by Proposition 13, there exists a non-trivial Γ-invariant rational subspace W′ of V′ such that the image of Γ in GL(W′) is amenable. The preimage p^{-1}(W′) is then a Γ-invariant rational subspace of V which strictly contains V_rat(Γ) and on which Γ̄ acts amenably (Lemma 10), contradicting the definition of V_rat(Γ).

Let N be a connected and simply connected nilpotent Lie group with Lie algebra n. Kirillov's theory provides a parametrization of the unitary dual N̂ of N in terms of the co-adjoint orbits in the dual space n* = Hom_R(n, R) of n. We will review the basic features of this theory. We have to recall a few general facts about decay of matrix coefficients of unitary group representations, following [HoMo79] and [Howe82]. Let (π, H) be a unitary representation of the locally compact group G. The projective kernel of π is the normal subgroup P_π of G defined by

P_π = {g ∈ G : π(g) = λ_π(g)I for some λ_π(g) ∈ C}.

Observe that the mapping g ↦ λ_π(g) defines a unitary character λ_π of P_π. Observe also that, for ξ, η ∈ H, the absolute value of the matrix coefficient

C^π_{ξ,η} : g ↦ ⟨π(g)ξ, η⟩

is constant on cosets modulo P_π. For a real number p with 1 ≤ p < +∞, the representation π is said to be strongly L^p modulo P_π if there is a dense subspace D ⊂ H such that, for every ξ, η ∈ D, the function |C^π_{ξ,η}| belongs to L^p(G/P_π). Observe that then π is strongly L^q modulo P_π for any q > p, since C^π_{ξ,η} is bounded.
Moreover, if π is strongly L^2 modulo P_π, then π is contained in an infinite multiple of Ind^G_{P_π} λ_π (this can be shown by a straightforward adaptation of Proposition 1.2.3 in Chapter V of [HoTa92]). We will also use the notion of a projective representation. Recall that a mapping π : G → U(H) from G to the unitary group of the Hilbert space H is a projective representation of G if the following holds:
• π(e) = I,
• for all g_1, g_2 ∈ G, there exists c(g_1, g_2) ∈ C such that π(g_1 g_2) = c(g_1, g_2)π(g_1)π(g_2),
• the function g ↦ ⟨π(g)ξ, η⟩ is measurable for all ξ, η ∈ H.
The mapping c : G × G → S^1 is a 2-cocycle with values in the unit circle S^1. The projective kernel of π is defined in the same way as for an ordinary representation. Every projective unitary representation of G can be lifted to an ordinary unitary representation of a central extension of G (for all this, see [Mack76] or [Mack58]).

Decay of extensions of irreducible representations of nilpotent Lie groups

Let N be a connected and simply connected nilpotent Lie group with Lie algebra n. The group Aut(N) of continuous automorphisms of N can be identified with the group Aut(n) of automorphisms of the Lie algebra n of N, by means of the mapping ϕ ↦ d_eϕ, where d_eϕ : n → n is the differential of ϕ ∈ Aut(N) at the group unit. In this way, Aut(N) becomes an algebraic subgroup of GL(n). Therefore, the group Aff(N) = Aut(N) ⋉ N of affine transformations of N is also an algebraic group over R. Set G := Aff(N). In the following, we view N as a normal subgroup of G. The group G acts by inner automorphisms on N and hence by automorphisms on n, n*, and N̂; observe that, for g ∈ G and l ∈ n*, we have

(Ad*(n)l)^g = Ad*(gng^{-1})(l^g)

for all n ∈ N. This shows that g permutes the orbits of the co-adjoint representation, mapping the orbit of l onto the orbit of l^g. Let π ∈ N̂ with corresponding coadjoint orbit O. The representation π^g ∈ N̂, defined by π^g(n) = π(gng^{-1}), corresponds to the orbit O^g. For a co-adjoint orbit O in n*, we denote by G_O the stabilizer of O in G. Similarly, G_π = {g ∈ G : π^g is equivalent to π} is the stabilizer in G of π ∈ N̂. Observe that, if π is the representation corresponding to the co-adjoint orbit O in Kirillov's picture, then G_π = G_O. Observe also that N is contained in G_π. The following elementary fact will be crucial for the sequel.

Proposition 21 Let π be an irreducible unitary representation of N. The stabilizer G_π of π is an algebraic subgroup of G. Moreover, for every l in the co-adjoint orbit corresponding to π, we have G_π = G_l N, where G_l is the stabilizer of l in G.

Proof The co-adjoint orbit O associated to π is an algebraic subvariety of n* (see Theorem 3.1.4 in [CoGr89]). It follows that G_π = G_O is an algebraic subgroup of G. Moreover, since N acts transitively on O, it is clear that G_π = G_l N.

Let π be an irreducible unitary representation of N, with Hilbert space H. It is a well-known part of Mackey's theory of unitary representations of group extensions that there exists a projective unitary representation π̃ of G_π on H which extends π. Indeed, for every g ∈ G_π, there exists a unitary operator π̃(g) on H such that

π(gng^{-1}) = π̃(g)π(n)π̃(g)^{-1}

for all n ∈ N. One can choose the operators in such a way that g ↦ π̃(g) is a projective unitary representation of G_π which extends π (see Theorem 8.2 in [Mack58]).
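The prototypical instance of this extension is the Heisenberg group, which we sketch with our own normalizations (it is also the model for the metaplectic representation used below). For $N=H_3(\mathbb{R})=\mathbb{R}^3$ with product $(a,b,c)(a',b',c')=(a+a',b+b',c+c'+ab')$ and a parameter $\lambda\neq 0$, the Schrödinger representation $\pi_\lambda$ on $L^2(\mathbb{R})$ is given by

\[
(\pi_\lambda(a,b,c)f)(x)=e^{i\lambda(c+bx)}f(x+a).
\]

Modulo inner automorphisms, the automorphisms of $N$ fixing the centre pointwise form the symplectic group $Sp(2,\mathbb{R})=\mathrm{SL}_2(\mathbb{R})$ of the plane $N/Z(N)$; for such an automorphism $g$, the representation $\pi_\lambda^g$ has the same central character and is therefore equivalent to $\pi_\lambda$ by the Stone-von Neumann theorem, and a choice of intertwining operators $g\mapsto\widetilde{\pi}_\lambda(g)$ is exactly a projective extension of the kind described above.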
The following proposition, which will play a central rôle in our proofs, is a consequence of arguments from [HoMo79] concerning decay properties of unitary representations of algebraic groups.

Proposition 22 Let π be an irreducible unitary representation of N on H and let π̃ be a projective unitary representation of G_π which extends π. There exists a real number p ≥ 1, only depending on the dimension of G, such that π̃ is strongly L^p modulo its projective kernel.

Proof Since π is irreducible, π̃(g) is uniquely determined up to a scalar multiple of the identity operator I for every g ∈ G_π. In particular, all projective unitary representations of G_π which extend π have the same projective kernel. We will need to give an explicit construction of a projective representation of G_π extending π. This representation will lift to an ordinary representation of a two-fold cover of G_π. We denote by O the co-adjoint orbit associated to π and we fix throughout the proof a linear functional l in O. Set H = Aut(N) so that G = H ⋉ N. Let H_l be the stabilizer of l in H. As shown in Proposition 21, G_π is an algebraic subgroup of G and G_π = H_l N. It is clear that H_l is also an algebraic subgroup of G. Let U_l be the unipotent radical of H_l. Then U = U_l N is the unipotent radical of G_π.

• First step: We claim that π can be extended to an ordinary unitary representation σ of U. Indeed, let u_l be the Lie algebra of U_l. We extend l to a linear functional l̃ on the Lie algebra u = u_l ⊕ n of U by defining l̃(X) = 0 for all X ∈ u_l. Let m ⊂ n be a polarization for l. We claim that m̃ := u_l ⊕ m is a polarization for l̃. Indeed,

l̃([X, Y]) = 0 for all X, Y ∈ m̃,

since u_l fixes l and m is a polarization for l. Let M̃ be the closed subgroup of U corresponding to m̃. The unitary character χ_{l̃} of M̃ given by l̃ coincides with χ_l on M. Since a fundamental domain for M\N is also a fundamental domain for M̃\U, we see that Ind^U_{M̃} χ_{l̃} can be realized on the Hilbert space of Ind^N_M χ_l and that σ := Ind^U_{M̃} χ_{l̃} extends π = Ind^N_M χ_l.

• Second step: We claim that G_σ = G_π. It is obvious that G_σ ⊂ G_π. Let H_l = RU_l be a Levi decomposition of H_l, where R is a reductive subgroup of G_l. In order to show that G_π ⊂ G_σ, it suffices to prove that R ⊂ G_σ, since G_π = RU. Now, R leaves u_l and n invariant and fixes l. Hence, R fixes the extension l̃ of l defined above and the claim follows.

• Coda: As a result, upon replacing N by U, we can assume that N is the unipotent radical of G_π. Since the connected component of G_π has finite index in G_π, we can also assume that G_π is connected. As shown above, we have a Levi decomposition G_π = RN with R a reductive subgroup contained in G_l. According to [Howe73], we can find in N algebraic subgroups K_1 ⊂ P_1 ⊂ N_1 with the following properties:
• K_1, P_1, and N_1 are normalized by R;
• K_1 and P_1 are normal in N_1 and N_1/K_1 is a Heisenberg group with centre P_1/K_1;
• there exists a unitary character λ of P_1/K_1 such that π is equivalent to the induced representation Ind^N_{N_1} π_1, where π_1 is the lift to N_1 of the unique irreducible representation of the Heisenberg group N_1/K_1 with central character λ.
The action of R on N_1/K_1 defines a homomorphism from R to the symplectic group Sp(N_1/P_1) of the vector space N_1/P_1; as a result, we have a homomorphism ϕ : RN_1 → Sp(N_1/P_1) ⋉ (N_1/K_1).
The representation π_1 of N_1/K_1 extends to a projective representation ω of Sp(N_1/P_1) ⋉ (N_1/K_1), called the metaplectic (or oscillator, or Shale-Weil) representation; more precisely, there exists a two-fold cover S̃p of Sp(N_1/P_1) and a unitary representation ω̃ of S̃p ⋉ (N_1/K_1) on the Hilbert space of π_1 which extends π_1. We can lift ϕ to a homomorphism ϕ̃ : R̃N_1 → S̃p ⋉ (N_1/K_1) for a two-fold cover R̃ of R. Then ρ := ω̃ ∘ ϕ̃ is a unitary representation of R̃N_1 on the Hilbert space of π_1 which extends π_1. Set π̃ := Ind^{R̃N}_{R̃N_1} ρ. Then π̃ is a unitary representation of the two-fold cover G̃_π := R̃N of G_π = RN; moreover, π̃ extends π, since π is equivalent to Ind^N_{N_1} π_1 and ρ extends π_1. Observe that G̃_π is in general not an algebraic group. Let p : G̃_π → G_π be the covering map. Let us say that a connected subgroup H of G̃_π is reductive if p(H) is a reductive subgroup of G_π. We claim that G̃_π has no non-trivial reductive normal subgroup. Indeed, let H be a reductive normal subgroup of G̃_π. Since G_π = RN is a Levi decomposition of G_π, the normal subgroup p(H) of G_π is conjugate to a subgroup of R and therefore p(H) ⊂ R. Hence, p(H) centralizes N. It follows that p(H) is trivial, since p(H) ⊂ Aut(N). Now, the same arguments as those on pages 87-93 in [HoMo79] show that there exists an integer k such that the k-fold tensor power π̃^{⊗k} of π̃ is square integrable modulo the projective kernel P_π̃ of π̃. For instance, let us check how the first step in [HoMo79] towards this claim carries over to our situation. For an integer k, we are interested in the tensor power π̃^{⊗k}. In order to apply Mackey's tensor product theorem (see [Mack76, Theorem 3.6]), we have to show that (R̃N_1)^k and the diagonal subgroup ∆G̃_π of G̃_π^k are regularly related. Now, the quotient space G̃_π^k/(R̃N_1)^k can be canonically identified with G_π^k/(RN_1)^k, and the action of ∆G̃_π on G̃_π^k/(R̃N_1)^k corresponds, via the covering mapping p : G̃_π → G_π, to the action of ∆G_π on G_π^k/(RN_1)^k. Since ∆G_π and (RN_1)^k are algebraic subgroups of G_π^k, the claim follows.

Rational unitary representations of a nilpotent Lie group

As in the previous section, let N be a connected and simply connected nilpotent Lie group and G := Aff(N) = Aut(N) ⋉ N. Let π be an irreducible unitary representation of N and G_π the stabilizer of π in G. Let π̃ be a projective unitary representation of G_π extending π. In the following proposition, we describe the projective kernel P_π̃ of π̃.

Proposition 24 Let L_π be the connected component of Ker(π). Set N′ = N/L_π and let p : N → N′ be the canonical projection. For g = (h, n) ∈ G_π with h ∈ Aut(N) and n ∈ N, the following conditions are equivalent:
(i) g belongs to the projective kernel P_π̃;
(ii) h leaves L_π invariant and the automorphism of N′ induced by h coincides with the inner automorphism Ad(p(n)^{-1}).

Proof Assume that g = (h, n) ∈ P_π̃. By definition of P_π̃, we have π̃(h) = λ_π̃(g)π(n^{-1}). It follows that, for every x ∈ N,

π(h(x)) = π̃(h)π(x)π̃(h)^{-1} = π(n^{-1})π(x)π(n) = π(n^{-1}xn),

that is, h(x)(n^{-1}xn)^{-1} ∈ Ker(π) for all x ∈ N. Since N is connected, this is equivalent to

h(x)(n^{-1}xn)^{-1} ∈ L_π for all x ∈ N.

As L_π is normal in N, this shows that L_π is invariant under h and that the automorphism induced by h on N′ is Ad(p(n)^{-1}). Conversely, suppose that L_π is invariant under h and that the automorphism h̄ induced by h on N′ coincides with Ad(p(n)^{-1}). Observe that π factorizes to a representation σ of N′. Let σ̃ be an extension of σ to the stabilizer of σ in Aut(N′) ⋉ N′. Then

σ̃(h̄)σ(p(x))σ̃(h̄)^{-1} = σ(p(n)^{-1}p(x)p(n))

for all x ∈ N, that is, σ(p(n))σ̃(h̄) commutes with σ(p(x)) for all x ∈ N.
Since π is irreducible, it follows that σ(p(n))σ̃(h̄), and hence π(n)π̃(h), is a scalar operator. This means that g = (h, n) ∈ P_π̃.

Recall first that a lattice Γ in a locally compact group G is a discrete subgroup such that the translation invariant measure induced by a Haar measure of G on the homogeneous space Γ\G is finite. The Lie algebra n (or the corresponding nilpotent Lie group N = exp(n)) has a rational structure if there is a Lie algebra n_Q over Q such that n ≅ n_Q ⊗_Q R. If n has a rational structure given by n_Q, then N contains a cocompact lattice Λ such that log Λ ⊂ n_Q. Conversely, if N contains a lattice Λ, then Λ is cocompact and n has a rational structure given by n_Q = Q-span(log Λ). Assume from now on that N has a rational structure n_Q and let Λ be a lattice inducing this rational structure. We say that an R-subspace h of n is rational if h = R-span(h ∩ n_Q). All subalgebras in the ascending or descending central series, as well as the centre of n, are rational. A connected closed subgroup H of N is said to be rational if the corresponding Lie algebra h is rational. This is equivalent to the fact that H ∩ Λ is a lattice in H. Let H be a rational connected normal closed subgroup of N with Lie algebra h. Then N/H has a canonical rational structure (n/h)_Q induced by the lattice ΛH/H of N/H. There is a unique rational structure n*_Q on the dual space n* defined as follows: a functional l ∈ n* belongs to n*_Q if and only if l(X) ∈ Q for all X ∈ n_Q. An important role will be played later (in Section 12) by irreducible unitary representations of N which are rational in the sense of the following definition.

Definition 25 An irreducible unitary representation π of N is rational if its co-adjoint orbit O_π is rational, that is, if O_π ∩ n*_Q ≠ ∅.

We fix for the rest of this section a rational irreducible unitary representation π of N. We first establish the rationality of the kernel of π.

Proposition 26 The connected component L_π of Ker(π) is a rational normal subgroup of N. As a consequence, Λ′ = ΛL_π/L_π is a lattice in N′ = N/L_π.

Proof Since π is rational, the corresponding co-adjoint orbit in n* contains a functional l ∈ n*_Q. The representation π is unitarily equivalent to Ind^N_M χ_l, where m is a polarization for l, M = exp(m), and χ_l is the unitary character of M corresponding to l. Recall from Lemma 18 that Ker(π) coincides with the largest normal subgroup of N contained in Ker(χ_l). For the Lie algebra l_π of Ker(π), we have therefore

l_π = ⋂_{n∈N} Ker(Ad*(n)l) = ⋂_{X∈n_Q} Ker(Ad*(exp X)l).

Since Ker(Ad*(exp X)l) is rational for all X ∈ n_Q, it follows that l_π is rational. Thus, the connected component L_π of Ker(π) is rational, by definition.

The set Aut(Λ\N) consisting of the automorphisms γ ∈ Aut(N) with γ(Λ) = Λ is a discrete subgroup of the algebraic group Aut(N). Let G_π be the stabilizer of π in G and π̃ a projective unitary representation of G_π extending π. Set Γ_π = G_π ∩ Aut(Λ\N). The projective kernel P_π̃ of π̃ was determined in Proposition 24. We will need to have a precise description of P_π̃ ∩ (Γ_π ⋉ N). As before, let L_π be the connected component of Ker(π), N′ = N/L_π, p : N → N′ the canonical projection, and Λ′ = p(Λ). Observe that g(L_π) = L_π for all g ∈ G_π ∩ Aut(N). Consider the induced continuous homomorphism ϕ : G_π → Aff(N′).

Proposition 27 Let Norm(Λ′) be the normalizer of Λ′ in N′.

Proof (ii) In view of (i), it suffices to prove that the subgroup Λ′Z(N′) has finite index in Norm(Λ′).
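As an illustration of Definition 25 (our example, with the conventions of this section): let $N=H_3(\mathbb{R})$ with the lattice $\Lambda=H_3(\mathbb{Z})$, and let $X,Y,Z$ be the standard basis of $\mathfrak n$ with $[X,Y]=Z$, so that $\mathfrak n_{\mathbb{Q}}$ is the $\mathbb{Q}$-span of $X,Y,Z$. The coadjoint orbit of the infinite dimensional representation with central parameter $\lambda\neq 0$ is the affine plane

\[
O=\{\,l\in\mathfrak n^{*}\;:\;l(Z)=\lambda\,\},
\]

which meets $\mathfrak n^{*}_{\mathbb{Q}}$ if and only if $\lambda\in\mathbb{Q}$; such a representation is thus rational exactly when its central parameter is rational.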
The next proposition will allow us to deduce decay properties of representations of G_π restricted to Γ_π ⋉ N.

Proposition 28 The subset P_π̃(Γ_π ⋉ N) is closed in G_π.

Proof Using Proposition 24, we see that the image of P_π̃ under ϕ is contained in Ad(N′). It therefore suffices to show that ϕ(Γ_π)Ad(N′) is closed in Aut(N′). Let (γ_i)_i and (x_i)_i be sequences in Γ_π and in Ad(N′) such that lim_i ϕ(γ_i)x_i = g for some g ∈ Aut(N′). Since Ad(Λ′) is a cocompact lattice in Ad(N′), there exists a compact subset D of Ad(N′) such that x_i = δ_i d_i for some δ_i ∈ Ad(Λ′) and d_i ∈ D. As D is compact, we can assume that lim_i d_i = d for some d ∈ D. Then lim_i ϕ(γ_i)δ_i = gd^{-1}, and ϕ(γ_i)δ_i belongs to the discrete group Aut(Λ′\N′). It follows that gd^{-1} ∈ ϕ(Γ_π)Ad(Λ′), that is, g ∈ ϕ(Γ_π)Ad(N′). Hence, ϕ(Γ_π)Ad(N′) is closed in Aut(N′).

Corollary 29 There exists a real number p ≥ 1, only depending on the dimension of G, such that the restriction of π̃ to Γ_π ⋉ N is strongly L^p modulo P_π̃ ∩ (Γ_π ⋉ N).

A general estimate for norms of convolution operators

Let G be a locally compact group. For a unitary representation (π, H) of G, the contragredient (or conjugate) representation π̄ acts on the conjugate Hilbert space H̄. Recall that, for an integer k ≥ 1, the k-fold tensor product π^{⊗k} of π is a unitary representation of G acting on the tensor product Hilbert space H^{⊗k}. We will need in a crucial way the following estimate, which appears in the proof of Theorem 1 in [Nevo98].

Proposition 30 Let (π, H) be a unitary representation of G. For every probability measure µ on G and every integer k ≥ 1, we have

‖π(µ)‖ ≤ ‖(π ⊗ π̄)^{⊗k}(µ)‖^{1/2k}.

Proof Denote by µ̃ the probability measure on G defined by µ̃(A) = µ(A^{-1}) for every Borel subset A of G. Using Jensen's inequality, we have

⟨π(µ̃ * µ)ξ, ξ⟩^{2k} ≤ ∫_G |⟨π(g)ξ, ξ⟩|^{2k} d(µ̃ * µ)(g) = ⟨(π ⊗ π̄)^{⊗k}(µ̃ * µ)(ξ ⊗ ξ̄)^{⊗k}, (ξ ⊗ ξ̄)^{⊗k}⟩

for every unit vector ξ ∈ H. Since π(µ̃ * µ) = π(µ)*π(µ) is a positive operator, taking the supremum over ξ yields ‖π(µ)‖^{4k} ≤ ‖(π ⊗ π̄)^{⊗k}(µ)‖^2, and the claim follows.

Analysis of the Koopman representation of the affine group of a nilmanifold

Let N be a connected and simply connected nilpotent Lie group, Λ a lattice in N. There is a unique translation invariant probability measure ν_{Λ\N} on Λ\N and it is induced by a Haar measure on N. This measure is also invariant under Aut(Λ\N). We fix throughout this section a subgroup Γ of Aut(Λ\N). The Koopman representation U of Γ ⋉ N associated to the action of Γ ⋉ N on Λ\N is given by

U(g)ξ = ξ ∘ g^{-1}, g ∈ Γ ⋉ N, ξ ∈ L^2(Λ\N).

In particular, we have

U(γ)U(n)U(γ)^{-1} = U(γ(n))   (1)

for all γ ∈ Γ, n ∈ N. Recall that T = Λ[N, N]\N is the maximal factor torus associated to Λ\N. The action of Aff(Λ\N) on Λ\N induces an action of Aff(Λ\N) on T. We identify L^2(T) with a closed subspace of L^2(Λ\N). More generally, let L be a connected closed subgroup of N which is both rational and invariant under Γ. Then Λ ∩ L is a lattice in L and Λ′ = ΛL/L is a lattice in N′ = N/L. There is an induced action of Γ ⋉ N on the subnilmanifold L/(Λ ∩ L) and on the factor nilmanifold Λ′\N′. The canonical mapping p : Λ\N → Λ′\N′ is Γ ⋉ N-equivariant and presents Λ\N as a fibre bundle over Λ′\N′ with fibres diffeomorphic to L/(Λ ∩ L). The Hilbert space L^2(Λ′\N′) can be identified, as Γ ⋉ N-representation, with the Γ ⋉ N-invariant closed subspace of L^2(Λ\N) consisting of the square-integrable functions on Λ\N which are constant on the fibres of p. We write

L^2(Λ\N) = L^2(T) ⊕ H,

where H is the orthogonal complement of L^2(T) in L^2(Λ\N), and observe that H is invariant under Aff(Λ\N). We are going to show that the restriction of U to H has a canonical decomposition into a direct sum of induced representations from the stabilizers in Γ ⋉ N of certain representations π ∈ N̂; this decomposition can be viewed as a generalization of the decomposition of L^2(T) which appears in the proof of Proposition 19. Since Λ is cocompact in N, we can consider the decomposition of H into its N-isotypical components: we have

H = ⊕_{π∈Σ} H_π,

where Σ is a certain set of infinite-dimensional pairwise non-equivalent irreducible unitary representations of N; for every π ∈ Σ, the space H_π is the union of the closed U(N)-invariant subspaces K of H for which the corresponding representation of N in K is equivalent to π. According to [Moor65, Corollary 2], every π ∈ Σ is rational in the sense of Section 10.
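For the Heisenberg nilmanifold, this isotypical decomposition is classical and completely explicit (see e.g. [Foll89]; the following schematic formula is ours and is only meant to fix ideas): for $N=H_3(\mathbb{R})$ and $\Lambda=H_3(\mathbb{Z})$,

\[
L^2(\Lambda\backslash N)=L^2(\mathbb{T}^2)\;\oplus\;\bigoplus_{\lambda\in\mathbb{Z}\setminus\{0\}}\mathcal H_\lambda,
\]

where $\mathbb{T}^2$ is the maximal torus factor and, on $\mathcal H_\lambda$, the group $N$ acts as $|\lambda|$ copies of the irreducible representation with central parameter $\lambda$; in particular, $\Sigma$ consists exactly of these (rational) infinite dimensional representations.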
Every H_π is a direct sum of finitely many irreducible unitary representations; therefore, the restriction of U(N) to H_π is unitarily equivalent to a tensor product

π ⊗ I acting on K_π ⊗ L_π,   (2)

where K_π is the Hilbert space of π and where L_π is a finite dimensional Hilbert space. (For a precise computation of the dimension of L_π, see [Howe71] and [Rich71]; the fact that L_π is finite-dimensional will not be relevant for our arguments.) Let γ be a fixed automorphism in Γ. Let U^γ be the conjugate representation of U by γ, that is, U^γ(g) = U(γ^{-1}(g)) for all g ∈ Γ ⋉ N. On the one hand, for every π ∈ Σ, the subspace H_{π^{γ^{-1}}} is the isotypical component of U^γ|_N corresponding to π. On the other hand, relation (1) shows that U(γ^{-1}) provides a unitary equivalence between U|_N and U^γ|_N. It follows that

U(γ^{-1})H_π = H_{π^{γ^{-1}}} for every π ∈ Σ.

In summary, we see that Γ permutes the H_π's among themselves according to its action on N̂. Write Σ = ⊔_{i∈I} Σ_i, where the Σ_i's are the Γ-orbits in Σ, and set

H_{Σ_i} = ⊕_{π∈Σ_i} H_π.

Every H_{Σ_i} is invariant under Γ ⋉ N and we have an orthogonal decomposition

H = ⊕_{i∈I} H_{Σ_i}.

Fix i ∈ I. Choose a representation π_i in Σ_i and set Γ_i := Γ_{π_i}, the stabilizer of π_i in Γ, and H_i := H_{π_i}. Choose a set S_i of representatives for the cosets in Γ/Γ_i ≅ (Γ ⋉ N)/(Γ_i ⋉ N), with e ∈ S_i. Then Σ_i = {π_i^s : s ∈ S_i} and the Hilbert space H_{Σ_i} is the sum of mutually orthogonal subspaces:

H_{Σ_i} = ⊕_{s∈S_i} H_i^s, where H_i^s := H_{π_i^s}.

Moreover, H_i^s is the image under U(s) of H_i for every s ∈ S_i. This exactly means that the restriction U_i to H_{Σ_i} of the Koopman representation U of Γ ⋉ N is equivalent to the induced representation Ind^{Γ⋉N}_{Γ_i⋉N} V_i, where V_i denotes the representation of Γ_i ⋉ N on H_i. As we have seen above, we can assume that H_i is the tensor product K_i ⊗ L_i of the Hilbert space K_i of π_i with a finite dimensional Hilbert space L_i, the group N acting by π_i ⊗ I_{L_i}. Let g ∈ Γ_i ⋉ N. By (1) and (2) above, we have

V_i(g)(π_i(n) ⊗ I_{L_i})V_i(g)^{-1} = π_i(gng^{-1}) ⊗ I_{L_i}   (3)

for all n ∈ N. On the other hand, let G_i be the stabilizer of π_i in Aff(N); then π_i extends to an irreducible projective representation π̃_i of G_i (see the remark just before Proposition 22). Since π̃_i(g)π_i(n)π̃_i(g)^{-1} = π_i(gng^{-1}) for all n ∈ N, it follows from (3) that the operator (π̃_i(g^{-1}) ⊗ I_{L_i})V_i(g) commutes with π_i(n) ⊗ I_{L_i} for all n ∈ N. Since π_i is irreducible, there exists a unitary operator W_i(g) on L_i such that

V_i(g) = π̃_i(g) ⊗ W_i(g).

It is clear that W_i is a projective unitary representation of Γ_i ⋉ N, since V_i is a unitary representation of Γ_i ⋉ N.

Proof of Theorem 1: first step

We summarize the discussion from the previous section. We have a first orthogonal decomposition into Aff(Λ\N)-invariant subspaces

L^2(Λ\N) = L^2(T) ⊕ H,

where T is the maximal torus factor of Λ\N. Let Γ be a subgroup of Aut(Λ\N). There exists a sequence of Γ-invariant sets (Σ_i)_{i∈I} of rational infinite dimensional unitary irreducible representations of N such that we have a decomposition into mutually orthogonal Γ ⋉ N-invariant subspaces

H = ⊕_{i∈I} H_{Σ_i}

with the following property: for every i, the representation U_i of Γ ⋉ N defined on H_{Σ_i} is equivalent to

Ind^{Γ⋉N}_{Γ_i⋉N} (π̃_i ⊗ W_i),

where π_i is a representation from Σ_i, where π̃_i is the restriction to Γ_i ⋉ N of an extension of π_i to the stabilizer G_i of π_i in G = Aff(N), and where W_i is some finite dimensional projective unitary representation of Γ_i ⋉ N. We need to recall the decomposition of the representation U_tor of Γ on L^2_0(T) from Section 7. Let T̂ ≅ Z^d be the dual group of T and let S be a set of representatives for the Γ-orbits in T̂ \ {1_T̂}. Then

U_tor ≅ ⊕_{χ∈S} λ_{Γ/Γ_χ},   (4)

where Γ_χ is the stabilizer of χ in Γ and λ_{Γ/Γ_χ} is the natural representation of Γ on ℓ^2(Γ/Γ_χ). In the following result, we establish a link between the restrictions to H and to L^2_0(T) of the Koopman representation of Γ.
This result, which is a consequence of the discussion above and of results from Section 10, is a major step in our proof of Theorem 1. Recall that p_a denotes the canonical projection Aff(Λ\N) → Aut(Λ\N). For a probability measure µ on Aff(Λ\N), let p_a(µ) be the probability measure on Aut(Λ\N) which is the image of µ under p_a.

Proposition 31 There exists an integer k ≥ 1, only depending on dim N, with the following property. Let Γ be a subgroup of Aut(Λ\N) which stabilizes some π ∈ Σ appearing in the decomposition H = ⊕_{π∈Σ} H_π of H into isotypical components under N. For every probability measure µ on Γ ⋉ N, we have

‖U_π(µ)‖ ≤ ‖U_tor(p_a(µ))‖^{1/2k},

where U_π and U_tor are the restrictions of the Koopman representation of Γ ⋉ N to H_π and L^2_0(T) respectively.

Proof Let G_π be the stabilizer of π in G = Aff(N). Let π̃ be a projective representation of G_π extending π. As we have seen above, U_π is equivalent to (π̃|_{Γ⋉N}) ⊗ W for some finite dimensional projective unitary representation W of Γ ⋉ N. Let P denote the projective kernel of U_π. Observe that P = P_1 ∩ P_2, where P_1 and P_2 are the projective kernels of π̃|_{Γ⋉N} and W. Denote by L_π the connected component of Ker(π) and N′ = N/L_π. As in Section 10, let ϕ : G_π → Aff(N′) be the corresponding homomorphism, let Λ′ be the lattice ΛL_π/L_π in N′ and Z(N′) the centre of N′, and set

Q := {g ∈ Γ ⋉ N : ϕ(g) = (Ad(x), x^{-1}z) for some x ∈ Λ′, z ∈ Z(N′)}.

Then Q is a subgroup of finite index of P_1 (Proposition 27). By Corollary 29, there exists a real number p ≥ 1, only depending on the dimension of Aut(N) ⋉ N, such that π̃|_{Γ_π⋉N} is strongly L^p modulo Q. We claim that Q is contained in P. Indeed, for g ∈ Q, we have ϕ(g) = (Ad(x), x^{-1}z) for some x ∈ Λ′ and z ∈ Z(N′). Hence ϕ(g) acts as the right translation by z on L^2(Λ′\N′). Observe that H_π is contained in L^2(Λ′\N′) and that g acts as ϕ(g) on H_π. Since N acts as a multiple of the irreducible representation π on H_π, it follows that g ∈ P and the claim is proved. As a consequence, we see that Q is a subgroup of finite index in P. Observe that Q is also contained in P_2. It follows that U_π = (π̃|_{Γ⋉N}) ⊗ W is strongly L^p modulo Q and hence U_π is strongly L^p modulo P. Let k be an integer with k ≥ p/4. Then the tensor power (U_π ⊗ Ū_π)^{⊗k} is strongly L^2 modulo P. Hence, as discussed in Section 8, (U_π ⊗ Ū_π)^{⊗k} is contained in an infinite multiple of the induced representation Ind^{Γ⋉N}_P λ_π, for the associated unitary character λ_π of P. It follows that, for every probability measure µ on Γ ⋉ N, we have

‖(U_π ⊗ Ū_π)^{⊗k}(µ)‖ ≤ ‖Ind^{Γ⋉N}_P λ_π(µ)‖,

and hence, using Proposition 30,

‖U_π(µ)‖ ≤ ‖Ind^{Γ⋉N}_P λ_π(µ)‖^{1/2k}.

On the other hand, observe that PN = p_a^{-1}(p_a(P)) is closed in Aff(Λ\N), as Aut(Λ\N) is discrete. Since, by induction by stages, Ind^{Γ⋉N}_P λ_π = Ind^{Γ⋉N}_{PN}(Ind^{PN}_P λ_π), we have, using Herz's majoration principle (Proposition 17),

‖Ind^{Γ⋉N}_P λ_π(µ)‖ ≤ ‖λ_{(Γ⋉N)/PN}(µ)‖.

Now, λ_{(Γ⋉N)/PN} = λ_{Γ/p_a(P)} ∘ p_a and hence ‖λ_{(Γ⋉N)/PN}(µ)‖ = ‖λ_{Γ/p_a(P)}(p_a(µ))‖. As a consequence, the proposition will be proved if we establish the following inequality:

‖λ_{Γ/p_a(P)}(p_a(µ))‖ ≤ ‖U_tor(p_a(µ))‖.   (5)

To show this, recall (see (4) above) that U_tor is equivalent to the direct sum ⊕_{χ∈S} λ_{Γ/Γ_χ}, where S is a set of representatives for the Γ-orbits in T̂ \ {1_T̂}. As a consequence, Inequality (5) will be proved if we can show that there exists χ ∈ T̂ \ {1_T̂} such that

‖λ_{Γ/p_a(P)}(p_a(µ))‖ ≤ ‖λ_{Γ/Γ_χ}(p_a(µ))‖.

By Herz's majoration principle again, it suffices to show that there exists χ ∈ T̂ with χ ≠ 1_T̂ such that p_a(P) ⊂ Γ_χ. For this, recall that, for every g ∈ P ⊂ P_1, there exists x ∈ N′ such that γ = p_a(g) acts as Ad(x) on N′ (Proposition 27). For every unitary character χ of N′, we have

χ(ϕ(γ)(y)) = χ(xyx^{-1}) = χ(y)

for all y ∈ N′. Thus, p_a(P) fixes every unitary character of N′.
Observe that N′ is non-trivial, since π ≠ 1_N. Choose a non-trivial unitary character of N′ which is constant on the cosets of Λ′ and denote again by χ its lift to N. Then χ ∈ T̂ \ {1_T̂} and χ is fixed by p_a(P).

Example 33 Let N = H_{2n+1}(R) be the (2n+1)-dimensional Heisenberg group (over R) and let Λ be a lattice in N. Then Aut(Λ\N) contains a subgroup of finite index Γ consisting of automorphisms which fix every infinite dimensional representation π ∈ N̂ (see [Foll89]).

Proof of Theorem 1: completion of the proof

We are now in a position to give the proof of Theorem 1. In view of Theorem 5, we only need to show that (ii) implies (i). Let H be a countable subgroup of Aff(Λ\N). Assume, by contraposition, that the action of H on Λ\N does not have a spectral gap. We have to prove that the action of H on T does not have a spectral gap. Set Γ = p_a(H). By Theorem 5, it suffices to prove that the action on T of some subgroup of finite index in Γ does not have a spectral gap. Let U_H be the representation of Aff(Λ\N) on the orthogonal complement H of L^2(T) in L^2(Λ\N) and U_tor the representation on L^2_0(T). Our theorem will be proved if we can show the following Claim: Let µ be an aperiodic measure on H. Assume that ‖U_H(µ)‖ = 1. Then there exists a subgroup ∆ of finite index in Γ and an aperiodic probability measure ν on ∆ such that ‖U_tor(p_a(ν))‖ = 1. To prove this claim, we proceed by induction on the dimension of the Zariski closure Zc(Γ) of Γ in Aut(N). If dim Zc(Γ) = 0, then Γ is finite and there is nothing to prove. Assume that dim Zc(Γ) ≥ 1 and that the claim above is proved for every countable subgroup H_1 of Aff(Λ\N) for which dim Zc(p_a(H_1)) < dim Zc(Γ). Recall from Sections 12 and 13 that, as Γ ⋉ N-representation, U_H is equivalent to a direct sum

⊕_{i∈I} Ind^{Γ⋉N}_{Γ_i⋉N} V_i,

where Γ_i is the stabilizer in Γ of a rational representation π_i ∈ N̂ and V_i is a unitary representation of Γ_i ⋉ N. Let I_fin ⊂ I be the set of all i ∈ I such that Γ_i has finite index in Γ and set I_∞ = I \ I_fin. Let

U_fin = ⊕_{i∈I_fin} Ind^{Γ⋉N}_{Γ_i⋉N} V_i  and  U_∞ = ⊕_{i∈I_∞} Ind^{Γ⋉N}_{Γ_i⋉N} V_i,

and denote by H_fin and H_∞ the corresponding subspaces of H defined respectively by U_fin and U_∞. Since ‖U_H(µ)‖ = 1, two cases can occur.

• First case: we have ‖U_∞(µ)‖ = 1. By Herz's majoration principle, we have

‖U_i(µ)‖ ≤ ‖λ_{Γ/Γ_i}(p_a(µ))‖ for every i ∈ I.

Let ε > 0. We can choose i ∈ I_∞ such that

‖λ_{Γ/Γ_i}(p_a(µ))‖ ≥ 1 − ε.

We claim that dim Zc(Γ_i) < dim Zc(Γ). Indeed, otherwise Zc(Γ_i) and Zc(Γ) would have the same connected component C_0, since Zc(Γ_i) ⊂ Zc(Γ). As the stabilizer of π_i in Aut(N) is Zariski closed (Proposition 21), C_0 would stabilize π_i. Therefore, Γ ∩ C_0 would be contained in Γ_i. But Γ ∩ C_0 has finite index in Γ. Hence, Γ_i would have finite index in Γ and this would be a contradiction, since i ∈ I_∞. Since this is true for every ε > 0, we obtain that ‖U_tor(p_a(µ))‖ = 1.

• Second case: we have ‖U_fin(µ)‖ = 1. Set ∆ := ⋂_{i∈I_fin} Γ_i. Then ∆ has finite index in Γ, since every Γ_i, i ∈ I_fin, has finite index in Γ. From Sections 12 and 13, we have a decomposition of H_fin into ∆ ⋉ N-invariant subspaces H_i, where H_i is the isotypical component corresponding to π_i under the action of N. Let ν be a probability measure with support equal to (∆ ⋉ N) ∩ H. Considering as above the aperiodic measure (µ + ν)/2 on H, we have ‖U_fin(ν)‖ = 1, since ‖U_fin(µ)‖ = 1. On the other hand, by Proposition 31, there exists an integer k ≥ 1, which is independent of i, such that

‖U_i(ν)‖ ≤ ‖U_tor(p_a(ν))‖^{1/2k} for all i ∈ I_fin,

where U_i is the representation of ∆ ⋉ N on H_i. As a consequence, we have

‖U_fin(ν)‖ ≤ ‖U_tor(p_a(ν))‖^{1/2k},

and it follows that ‖U_tor(p_a(ν))‖ = 1.
Since the support of p_a(ν) is the subgroup ∆ of finite index in Γ, this completes the proof of Theorem 1.

Remark 34 The proof of Theorem 1 given above is not effective: it does not provide, for a probability measure µ on Aut(Λ\N), a bound for the norm of U_H(µ) in terms of the norm of U_tor(µ) and/or of other "known" representations of the group generated by the support of µ, such as the regular representation. In the following example, such an explicit bound is given. The crucial tool we use is Mackey's tensor product theorem. This approach succeeds here because of the special features of the example, and we could not use it to get explicit bounds in the general case.

Example 35 Let n = n_{3,2} be the free 2-step nilpotent Lie algebra on 3 generators and let N = N_{3,2} be the corresponding connected and simply connected nilpotent Lie group. As is well-known, n is a 6-dimensional Lie algebra which can be realized as follows. Set V_1 = V_2 = R^3 and define a Lie bracket on the vector space n = V_1 ⊕ V_2 by

[(X_1, Y_1), (X_2, Y_2)] = (0, 2 X_1 ∧ X_2),

where X_1 ∧ X_2 denotes the usual cross-product on R^3. (The factor 2 appears here just for computational ease.) The centre of n is V_2 and the Lie group N is V_1 ⊕ V_2 with the product

(x_1, y_1)(x_2, y_2) = (x_1 + x_2, y_1 + y_2 + x_1 ∧ x_2)

for all x_1, x_2, y_1, y_2 ∈ R^3, so that the exponential mapping exp : n → N is the identity. Observe that, for a matrix A ∈ GL_3(R), we have

Ax ∧ Ay = det(A)(A^{-1})^t(x ∧ y) for all x, y ∈ R^3.

The automorphism group Aut(N) of N is the subgroup of GL_6(R) of matrices g_{A,B} of the form

g_{A,B}(x, y) = (Ax, Bx + det(A)(A^{-1})^t y),

with A ∈ GL_3(R) and B ∈ M_3(R), so that Aut(N) is isomorphic to the semi-direct product GL_3(R) ⋉ M_3(R) for the action of GL_3(R) by left multiplication on the vector space M_3(R) of 3 × 3 real matrices. We will identify n with n* by means of the standard scalar product (X, Y) ↦ ⟨X|Y⟩ on R^6. For (x, y) and (X_0, Y_0) in V_1 ⊕ V_2, we compute that

Ad*(x, y)(X_0, Y_0) = (X_0 + x ∧ Y_0, Y_0).

It follows that the coadjoint orbit of (X_0, 0) is {(X_0, 0)} and, for Y_0 ≠ 0, we have

Ad*(N)(X_0, Y_0) = (X_0 + (RY_0)^⊥) × {Y_0}.

The orbits which are not reduced to singletons are therefore the two-dimensional affine planes

O_{λ_0,Y_0} = {(X, Y_0) : ⟨X|Y_0⟩ = λ_0}, λ_0 ∈ R, Y_0 ∈ R^3 \ {0}.

The subgroup Λ = Z^3 ⊕ Z^3 is a lattice in N. The group Aut(Λ\N) is the subgroup of Aut(N) of automorphisms g_{A,B} as above given by matrices A ∈ GL_3(Z) and B ∈ M_3(Z). Fix (λ_0, Y_0) ∈ R × (R^3 \ {0}). The irreducible unitary representation π_{λ_0,Y_0} of N corresponding to the coadjoint orbit O_{λ_0,Y_0} appears in the decomposition of L^2(Λ\N) into N-isotypical components if and only if Y_0 ∈ Z^3 and λ_0 ∈ ∆_{Y_0}, where ∆_{Y_0} is the subgroup of Z consisting of the integers m for which mY_0 ∈ (RY_0)^⊥ + ‖Y_0‖^2 Z^3. Let Γ be a subgroup of Aut(Λ\N). For simplicity, we assume that Γ consists only of automorphisms g_{A,0} with A ∈ SL_3(Z). We identify Γ with a subgroup of SL_3(Z). For A ∈ SL_3(Z), the automorphism g_{A,0} permutes the coadjoint orbits O_{λ_0,Y_0}, and the stabilizer Γ_{λ_0,Y_0} of π_{λ_0,Y_0} in Γ is isomorphic to a subgroup of the semi-direct product SL_2(Z) ⋉ Z^2. The projective kernel P_{λ_0,Y_0} of V_{λ_0,Y_0} (where V_{λ_0,Y_0} denotes, as in Section 12, the representation of the stabilizer Γ_{λ_0,Y_0} on the isotypical component of π_{λ_0,Y_0}) coincides with the subgroup of Γ of all automorphisms which fix every point (X, Y) ∈ O_{λ_0,Y_0}; hence, P_{λ_0,Y_0} = {I} if λ_0 ≠ 0, while for λ_0 = 0 it is a group of integral shear transformations isomorphic to a subgroup of Z^2. Every π_{λ_0,Y_0} factorizes to a representation of a quotient of N of dimension 3 or 4, which is isomorphic to the Heisenberg group H_3 or to the direct product H_3 ⊕ R. It follows that the representation V_{λ_0,Y_0} of Γ_{λ_0,Y_0} is strongly L^{6+ε} modulo P_{λ_0,Y_0} for every ε > 0 (see [BeHe10] and [HoMo79]). We claim that U_H^{⊗4} is weakly contained in the regular representation λ_Γ of Γ on ℓ^2(Γ). Let µ be a probability measure on Γ.
It follows from what we have seen that where U H is the Koopman representation of Γ on H = L 2 (T ) ⊥ . As a consequence, we have U 0 (µ) ≤ max{ λ Γ (µ) 1/4 , U tor (µ) }, where U 0 and U tor are the Koopman representations of Γ on L 2 0 (Λ\N) and L 2 0 (T ). The same estimate was established in [BeHe10, Corollary 3] in the case where N is the Heisenberg group H 3 . Proof of Theorem 4 Let H be a subgroup of Aff(Λ\N). The following elementary proposition shows that ergodicity of H on T is inherited by every subgroup of finite index in H. Proposition 36 Let H be a subgroup of Aff(T ) and H 1 a subgroup of finite index in H. Assume that L 2 0 (T ) contains a non-zero H 1 -invariant function. Then L 2 0 (T ) contains a non-zero H-invariant function. Proof By standard arguments involving Fourier series, there exists a unitary character χ in T \ {1 T } with a finite orbit under p a (H 1 ) and such that H 2 := H 1 ∩ p −1 a (Γ χ ) fixes χ, where Γ χ is the stabilizer of χ in Aut(T ). Then H 2 has finite index in H and s∈H/H 2 U tor (s)χ is a non-zero H-invariant function in L 2 0 (T ). Proof of (i) in Theorem 4 As is well-known, the action of a group H on a probability space (X, ν) is weakly mixing if and only if the diagonal action of H on (X × X, ν ⊗ ν) is ergodic. Since T × T is the maximal factor torus of (Λ\N) × (Λ\N), we only have to prove the statement about ergodicity. So, let H be a (not necessarily countable) subgroup of Aff(Λ\N) acting ergodically on T. We have to prove that H acts ergodically on Λ\N. We can assume that N is not abelian, otherwise there is nothing to prove. Set Γ = p a (H). Recall from Sections 12 and 13 that we have orthogonal decompositions into Γ ⋉ N -invariant subspaces L 2 (Λ\N) = L 2 (T ) ⊕ H and such that the representation U i of Γ ⋉ N on H Σ i is equivalent to an induced representation Ind Γ⋉N Γπ i ⋉N V i , where Γ π i is the stabilizer in Γ of some π i ∈ Σ i . In view of the previous proposition, it suffices to prove the following Claim: Assume that, for some i, the subspace H Σ i contains a non-zero Hinvariant function. Then L 2 0 (T ) contains a non-zero H 1 -invariant function for some subgroup H 1 of finite index in H. To show this, set π = π i , Σ π = Σ i , U π = U i , and V π = V i . Let S be a set of representatives for the cosets in Γ/Γ π ∼ = (Γ ⋉ N)/(Γ π ⋉ N) with e ∈ S. Then, by the definition of an induced representation, H Σπ is an orthogonal sum where K carries the Γ π ⋉ N-representation V π and where K s = U π (s)K. It follows from this that there exists a non-zero function in K which is invariant under H ∩ (Γ π ⋉ N) and that Γ π has finite index in Γ. Upon replacing H by the subgroup of finite index H ∩ (Γ π ⋉ N), we can assume that H is contained in Γ π ⋉ N. Let L π be the connected component of Ker(π) and N = N/L π . Observe that N is not abelian, since π is not a unitary character of N. As seen in Section 10, the action of Γ π ⋉ N on H π factorizes through the quotient nilmanifold Λ\N. Hence, we can assume that L π is trivial. By the proof of Proposition 31, there exists a real number p ≥ 1 such that the representation V π of Γ π ⋉ N is strongly L p modulo ∆, where ∆ is the normal subgroup ∆ = {(Ad(x), x −1 z) : x ∈ Λ, z ∈ Z(N)}. We claim that H ∩ ∆ has finite index in H. Indeed, let R = H∆ be the closure of H∆ in Γ π ⋉N. Then the restriction of V π to R is strongly L p modulo ∆. Observe that (Ad(x), x −1 z) ∈ ∆ acts as multiplication with λ π (z) on H π , where λ π is the central character of π. 
Let ξ a non-zero V π (H)-invariant function in K. The function x → | V π (x)ξ, ξ | is non-zero, belongs to L p (R/∆), and is R invariant. It follows that R/∆ is a compact group. Let R 0 be the connected component of R. Since R is a Lie group, R 0 is open in R. It follows that R 0 ∆/∆ is an open (and hence closed) subgroup of R/∆. Since R/∆ is compact, we conclude that R 0 ∆/∆ ∼ = R 0 /(R 0 ∩ ∆) is a subgroup of finite index in R/∆. On the other hand, observe that R 0 ⊂ N, since R ⊂ Γ π ⋉ N and since Γ π is discrete. Observe also that R 0 ∩ ∆ = R 0 ∩ Z(N), since Z(N) is connected (as N is simply connected). It follows that R 0 ∩ ∆ is a connected subgroup of the nilpotent simply connected Lie group R 0 . But R 0 /(R 0 ∩ ∆) is compact. Hence, R 0 /(R 0 ∩ ∆) is trivial. As a consequence, we see that R/∆ is finite. This shows that H ∩ ∆ has finite index in H. Therefore, upon replacing H by H ∩ ∆, we can assume that H ⊂ ∆. The centre Z(N) being a rational subgroup of N, the subgroup Λ = ΛZ(N) of the nilpotent Lie group N = N/Z(N) is a lattice. Observe that N is non-trivial, since N is non-abelian. The group ∆ acts trivially on the factor nilmanifold Λ\N and hence on the associated torus T . Since T is a ∆-invariant factor torus of T, it follows that the action of H on T is not ergodic. Proof of (ii) in Theorem 4 Let H be a subgroup of Aut(Λ\N) with a strongly mixing action on T. We have to prove that the action of H on Λ\N is strongly mixing. With the notation as in the proof of Part (i) above, the Koopman representation U of H on H decomposes as a direct sum U ∼ = ⊕ i U i , where U i equivalent to an induced representation Ind H Hπ i V i . It suffices to prove that, for every i, the matrix coefficients of U i belong to c 0 (H). This will follow if we show that the matrix coefficients of V i belong to c 0 (H π i ). Set π = π i and V π = V i . Let L π be the connected component of Ker(π) and Λ\N the corresponding H π -invariant factor nilmanifold. Since H π is contained in Aut(Λ\N), the projective kernel P of V π coincides with the kernel of the homomorphism ϕ : H π → Aut(Λ\N), by Proposition 27. We claim that P = Ker(ϕ) is finite. Indeed, otherwise the matrix coefficients of the Koopman representation of H π on the maximal factor torus T of Λ\N would not belong to c 0 (H π ) and this would imply that the action of H π and hence of H on T is not strongly mixing. Since P is finite, V π is strongly L p for some p ≥ 1. It follows that the matrix coefficients of V π belong to c 0 (H π ). This finishes the proof of Theorem 4.
2011-06-14T07:03:11.000Z
2011-06-14T00:00:00.000
{ "year": 2011, "sha1": "56046b1ebb7fdf4add2414da6271c39ce514d124", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1106.2623", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "13a961e28cd82225ab243f5b161b7b2316dad597", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
233415218
pes2o/s2orc
v3-fos-license
Generative Adversarial Networks–Enabled Human–Artificial Intelligence Collaborative Applications for Creative and Design Industries: A Systematic Review of Current Approaches and Trends The future of work and workplace is very much in flux. A vast amount has been written about artificial intelligence (AI) and its impact on work, with much of it focused on automation and potential job losses. This review will address one area where AI is being added to creative and design practitioners' toolbox to enhance their creativity, productivity, and design horizons. A designer's primary purpose is to create, or generate, the optimal artifact or prototype, given a set of constraints. We have seen AI encroaching into this space with the advent of generative networks, and generative adversarial networks (GANs) in particular. This area has become one of the most active research fields in machine learning over the past several years, and a number of these techniques, particularly those around plausible image generation, have garnered considerable media attention. We will look beyond automatic techniques and solutions and see how GANs are being incorporated into user pipelines for design practitioners. A systematic review of publications indexed on ScienceDirect, SpringerLink, Web of Science, Scopus, IEEExplore, and ACM DigitalLibrary was conducted from 2015 to 2020. Results are reported according to the PRISMA statement. From 317 search results, 34 studies (including two snowball sampled) are reviewed, highlighting key trends in this area. The studies' limitations are presented, particularly a lack of user studies and the prevalence of toy examples or implementations that are unlikely to scale. Areas for future study are also identified. INTRODUCTION The emergence of artificial intelligence (AI) and machine learning (ML) as a crucial tool in the creative industries' software toolbox has been staggering in scale. It is also one of the most active research areas in computer science (Murphy, 2012). Recent progress in generative deep learning (DL) techniques has led to a wealth of new solutions in the fields of computer graphics, vision, and user-aided design (Pan et al., 2019). ML applications that directly interface with everyday users are increasingly pervasive. However, these applications are still largely designed and deployed by ML engineers alone. Data collection, feature selection, preprocessing, model development, parameter tuning, and final assessment of the resulting model's quality are all carried out without consulting end-users on how they will interact with the resulting system. Typically, this has led to systems where the end-user's involvement consists of little more than providing some input and hoping for a good result. In this survey, we look at research where end-users', specifically design practitioners', involvement is deeper and more collaborative in nature. That is, we examine systems that function as design support tools rather than simple automatic synthesis tools. One of the main challenges facing DL today is its "black-box" nature. Data are fed to a trained neural network, which then outputs a classification, decision, action, sample, etc. Despite recent advances in the field of explainable artificial intelligence (XAI) (Biran and Cotton, 2017), these algorithms' inner workings often remain mysterious to the user and even to the model's engineers.
While the architecture and mathematics involved are well-defined, interpreting what is happening in the neural network's inner state remains a very challenging problem (Zeiler and Fergus, 2013). This opaque nature can also lead to a fundamental mistrust between end-users and the systems with which they are interacting. The emergence of the family of generative models has created another potential avenue for the erosion of trust, with much-publicized examples such as systems to hide from facial detection systems (Mirjalili et al., 2017;Johnson, 2020) or the generation of highly realistic fake images (Zhu et al., 2017a) having drawn mixed public reaction. Exploring these issues is an essential and complex research area. One way to address trust is to give the user real-time or interactive feedback, allowing them to visually explore and develop a relationship with the underlying system. Finally, there is the ethical perspective (Whittle, 2019; Fjeld et al., 2020). To minimize potential harm to society, there needs to be a strong commitment from both government and society to provide oversight and regulation concerning how and where AI systems are used. One of the most decisive steps forward in DL synthesis has been the development of the family of algorithms known as generative adversarial networks (GANs). First proposed by Goodfellow et al. (2014) in 2014, GANs are a type of generative model with a specific architecture in which two networks, a generator and a discriminator, compete with one another to produce increasingly plausible generated samples ( Figure 1 shows the original architecture). In practice, a GAN is not dissimilar to any other convolutional neural network (CNN). The discriminator's core role in a GAN is similar to an image classifier, and the generator also operates similarly to other CNNs, just operating in reverse. GANs have several advantages over other members of the deep generative model family of algorithms. They produce higher quality output (Goodfellow, 2017) than other models. When compared with variational autoencoder (VAE), the images produced by GANs tend to be far sharper and realistic (Goodfellow, 2017). Autoregressive models (van den Oord et al., 2016) have a very simple and stable training process, but they are relatively inefficient during sampling and do not easily provide simple lowdimensional codes for images. The GAN framework is flexible and can train any type of generator network. Other models have constraints for the generator (e.g., the output layer of the generator is Gaussian (Kodali et al., 2017)). There is no restriction on the latent variable's size. These advantages have led to GANs leading performance in generating synthetic data, especially image data . An important step toward integrating GANs into design tools was developing methods to add a level of control over the generated outputs. Conditional GANs (Mirza and Osindero, 2014;Lee and Seok, 2017) allow the user to add additional input values to the generator and discriminator for categorical image generation. The InfoGAN (Chen et al., 2016) algorithm can extract latent features in an unsupervised manner by introducing a latent code, which is fed as an additional input to the generator. The latent code can then capture the generated images' structure by adding an additional regularization term to the loss function of GAN between the latent code and the generated image. 
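To make this conditioning mechanism concrete, the following is a minimal, hypothetical PyTorch sketch of a conditional GAN in the spirit of Mirza and Osindero (2014): a class label is embedded and concatenated with the generator's latent vector and with the discriminator's image input. The layer sizes, placeholder data, and single training step are illustrative assumptions, not the architecture of any system reviewed below.

```python
import torch
import torch.nn as nn

NUM_CLASSES, LATENT_DIM, IMG_DIM = 10, 100, 28 * 28  # illustrative sizes only

class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),
        )

    def forward(self, z, labels):
        # The class condition is simply concatenated to the latent code.
        return self.net(torch.cat([z, self.label_emb(labels)], dim=1))

class ConditionalDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + NUM_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, img, labels):
        # The discriminator judges real/fake *given* the same condition.
        return self.net(torch.cat([img, self.label_emb(labels)], dim=1))

# One adversarial step (binary cross-entropy form of the original GAN loss).
G, D = ConditionalGenerator(), ConditionalDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_imgs = torch.rand(64, IMG_DIM)               # placeholder batch
labels = torch.randint(0, NUM_CLASSES, (64,))
z = torch.randn(64, LATENT_DIM)

# Discriminator update: real -> 1, generated -> 0.
fake_imgs = G(z, labels).detach()
loss_d = bce(D(real_imgs, labels), torch.ones(64, 1)) + \
         bce(D(fake_imgs, labels), torch.zeros(64, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator update: try to make the discriminator output 1 on fakes.
loss_g = bce(D(G(z, labels), labels), torch.ones(64, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Replacing the hard label with a learned continuous code, and adding the regularization term described above so that the code remains recoverable from the generated image, is roughly the step InfoGAN takes to obtain controllable factors without labels.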
Research into user control over generative networks is a very active area, but still in early development (Carter and Nielsen, 2017) but maturing rapidly (Pan et al., 2019). A full breakdown of the current state of the art concerning GANs is outside this study's scope. There have, however, been many excellent recent surveys on the state of the art (Alqahtani et al., 2019;Hong et al., 2019;Pan et al., 2019;Khursheed et al., 2020), performance (Kurach et al., 2018), advances in image synthesis (Wu et al., 2017;Wang et al., 2019), and approaches to improving stability (Wiatrak et al., 2019). These reviews showcase the prevalence of GANs in research and indicate it as a growing area of importance. While there have been individual case studies into interactive systems that look at collaborative design with generative models (Kato et al., 2019;Noyman and Larson, 2020), there has not been a systematic review looking at the area more broadly. In the next section, we will qualify the reasoning for selecting human-AI collaborative design with GANs as an area for further investigation and describe the work's guiding research questions. Following this, the systematic literature review methodology will be described, and the results of the review will be presented. MOTIVATION The families of generative models can be broadly categorized into two distinct categories: explicit density and implicit density models. Explicit density models are those models that assume some kind of prior distribution about the data. Prevalent examples of these approaches are those based around recurrent neural networks (RNNs) (van den Oord et al., 2016), autoencoders (VAEs) (Dayan et al., 1995), and their variants (Kingma and Welling, 2013;Oord et al., 2016;Higgins et al., 2017). These approaches have produced excellent results and are widely studied . However, they do have some drawbacks that limit their current adoption in creative tools. Models based around RNNs operate sequentially; therefore, output generation is comparatively slow. Autoencoders do not exhibit this problem but have not been shown to produce the output quality of competing models. GANs are a very prevalent example of an implicit density model, models that do not explicitly define a density function. Despite the drawbacks associated with GANs, such as training stability, they currently exhibit several advantages over competing models. Most importantly, they do not have the performance issues exhibited by RNN-based models while generating best-in-class output quality. For this reason, we focus on research that utilizes GANs and investigate their increasing prevalence as a tool in the process of design. As mentioned previously, much of the current research around GANs has focused on automatic synthesis Khursheed et al., 2020), where end-user interaction with the system is minimal (e.g. image-to-image translation). These systems have also been improving swiftly and are now capable of some impressive results (Wu et al., 2017). Despite this, however, interaction with AI technology as a design tool is still a relatively immature and challenging problem (Dove et al., 2017), so we take a more targeted look at the current research that has a more collaborative approach to human-AI interaction. Benefits Generative design has a broader history and scope beyond the ML space (Krish, 2011). Traditionally, these tools have been used by engineers or design experts who input design goals and parameters (e.g., performance, mass, spatial, and cost requirements). 
From these inputs, these tools explore the solution space, generating design options and alternatives. AI technology has been making inroads in the area 1 (Kazi et al., 2017), and we are now beginning to see generative models coming into the fold at research and application levels. Figure 2 illustrates how Zeng et al. (2019) saw AI being integrated into the design cycle. It shows how the human-AI relationship can be collaborative; in this case, their system generates design variety from user input, which can then be explored by the user and incorporated into the next iteration. Other examples we will discuss include systems that can generate landscape paintings (Sun L. et al., 2019) or terrains (Guérin et al., 2017) from quick sketches, thus allowing users to more efficiently iterate over their designs than if they had to realize their design at each step fully. ML has been an active research area for a long time, but its adoption as a mainstream technology in the creative/design space is a relatively new phenomenon. Much of the research has been directed into creating systems capable of performing tasks, and while many powerful systems and applications have emerged, the user experience has not kept pace (Dove et al., 2017;Yang, 2018). Interaction design aims to create interfaces that are easy to use, effective, and enjoyable. In designing user interfaces that interact with AI systems, there has been a general lack of focus on the enduser. This review will look at many examples where interaction design and end-user needs have been considered to varying degrees, highlighting good practice, current limitations, and avenues of interest for further research. Challenges In the public mind, the use of AI and ML is seen as a new, innovative technology. While this notion has some truth to it (Cearley et al., 2019), it is also true that ML is quite a mature field. ML has been around a long time (Samuel, 1959). There are countless textbooks, academic courses, and online resources dedicated to the topic. With this said, user experience design for ML systems and human-AI collaboration remains relatively rudimentary (Dove et al., 2017;Yang, 2018;Yang et al., 2020). There may be several reasons for this, but one that stands out is simply that ML is a fundamentally more difficult design material. ML systems have a "black-box" quality to them that is fundamentally different from heuristics driven systems. The outputs of these systems are often not easily explained, particularly when errors occur. Therefore, designers have a challenging task designing systems that bridge the ML-human perspective, with deep collaboration with engineers being critical. The research challenges associated with improving human-AI collaboration in the generative space do not have the easily digestible outcomes associated with the most well-known work in the DL field. To evaluate the work, one asks the question: "Is the technology helping humans think and create in new ways?" rather than whether the technology outperforms previous methods on a well-defined task. This can be a more difficult question to ask. There is an outstanding question as to whether creativity might be limited by using tools based on GAN architecture. An optimally trained GAN generator should recreate the training distribution and therefore cannot directly generate an image based on new governing principles because such an image would not be similar to anything like it has seen in its training data. 
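This limitation is built into the adversarial objective itself. In the original formulation of Goodfellow et al. (2014), the generator G and discriminator D play the minimax game
\[
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big],
\]
whose global optimum is reached precisely when the generator distribution matches the data distribution, $p_g = p_{\mathrm{data}}$.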
Therefore, one must ask if users would be prevented or discouraged from exploring more exciting directions. While GANs show tremendous promise in allowing people to create and explore, this fundamental question remains. Aside from human-AI interaction challenges, many technical challenges exist despite the powerful results demonstrated by GANs. One issue is mode collapse, which is one of the most common failures in GANs. It occurs when the generator maps multiple distinct inputs to the same output, which means that the generator produces samples with low diversity. There are many proposed solutions (Arjovsky et al., 2017;Kurach et al., 2018) to mitigate the problem, but it remains an area of research. Another problem is training convergence. As the generator improves with training, discriminator performance naturally decreases because it becomes increasingly more difficult to distinguish between real and fake. This progression poses a problem for convergence of the GAN as a whole: The discriminator feedback gets less meaningful over time. If the GAN continues training past the point when the discriminator is giving completely random feedback, then the generator starts to train on junk feedback, and its quality may collapse. Finally, it is worth mentioning that the training of a simple neural network takes some computational effort. There is an added level of effort required in training GANs due to the networks' dueling nature, requiring both more time and computational horsepower. While these technical challenges are not the central focus of this paper, they represent a significant factor in how the GAN-enabled user interfaces are developed and deployed. METHODOLOGY With ML, more specifically GANs, becoming increasingly important for a range of reasons previously described, and work in this area beginning to grow, it is important to take stock of the current approaches to find similarities, themes, and avenues for further research. As such, the guiding research questions for this review are as follows: • What approaches exist around GAN-enabled human-AI collaborative design tools? • What are the limitations of studies and approaches around GAN-enabled human-AI collaborative design tools? • What subareas are understudied in the domain of GAN-enabled human-AI collaborative design tools? Given these research questions, the following section describes the methodology for searching the extant literature for information to address them. LITERATURE SELECTION CRITERIA A systematic literature review was performed using the PRISMA (Shamseer et al., 2015) reporting methodology to examine the current state of the literature. Searches were conducted on the ScienceDirect, SpringerLink, Web of Science, Scopus, IEEExplore, and ACM digital libraries, using the following Boolean search queries: • ("Generative Adversarial Network" OR "GAN") AND ("Art Design" OR "Sketch" OR "Computer Art" OR "Artist" OR "Creative Arts" OR "Computer Aided Design") • ("Generative Adversarial Network" OR "GAN") AND ("Architecture" OR "Urban Design" OR "Urban Planning") • ("Generative Adversarial Network" OR "GAN") AND ("Design Process" OR "Computer Aided Design" OR "Human Computer Interaction" OR "Human-AI" OR "Collaboration") • ("Machine Learning") AND ("Art Design" OR "Sketch" OR "Computer Art" OR "Artist" OR "Creative Arts" OR "Computer Aided Design") • ("Machine Learning") AND ("Architecture" OR "Urban Design" OR "Urban Planning") • ("Machine Learning") AND ("Design Process" OR "Computer Aided Design" OR "Human Computer Interaction" OR "Human-AI" OR "Collaboration") FIGURE 2 | The AI-augmented creative design cycle as described by Zeng et al. (2019). The authors describe how AI can be used to augment the design process by introducing variety to the users' input, allowing them to quickly expand their solution space. The core creative design cycle of continuous interaction between creation and reflection remains unchanged. In addition, articles were restricted using criteria common in systematic reviews in the area of ML. The criteria used were as follows: • Recent article: articles had to be published within the last 5 years (i.e., since 2015 at the time of writing); • Relevancy: articles had to be relevant to the topic of AI (articles which spoke about general AI learning from a human psychology perspective were excluded) and future of work (i.e., articles which did not describe approaches or techniques for advancing the future of work were excluded); • Accessibility: articles needed to be accessible via the portals previously described; • Singularity: duplicate articles were excluded; • Full paper: abstracts and other short papers were excluded (extended abstracts were included). Figure 3 illustrates the filtering process used to produce the final set of literature. Using the above research parameters, combined with a year filter (≥2014), a total of 317 articles were gathered, which were reduced to 262 after filtering out duplicate results using the JabRef software "Remove Duplicates" feature. The titles and abstracts of these articles were reviewed for relevance to the domain of generative networks and design, of which 188 were deemed relevant using the relevancy measure described above. These articles were then read in full to determine relevance to the domain. The remaining 34 articles after this stage of filtering constitute the primary analysis of this article. The collapse from 317 to 34 works was due to the search terms' broad scope. Many of the articles returned outlined automatic methods or algorithms. The criteria for this survey require the method to be user-guided or have iterative user involvement, so these articles were excluded from the final literature. Second, several articles simply mentioned the search terms for describing AI systems generally for the reader. Such passive use of the search terms could not be determined until the full paper was examined. Additionally, two articles were added to the review, using a snowball sampling technique (Greenhalgh and Peacock, 2005), where if a reviewed article cited a relevant sounding article, it was subsequently assessed, and if deemed relevant, added to the pool of articles for review (14 articles were examined during this stage). Before discussing the methodologies, the following section explores at a high level the core themes in the 34 articles reviewed, in terms of example domains and scope, to paint a picture of the current state of the research space. SUMMARY OF LITERATURE Selected articles were categorized and analyzed based on domain space, publication type, year, user-interface modality, and operation method. A full list of selected articles and values for each of these is provided in the appendix. Publication Type The reviewed articles' largest outlet was conference proceedings (18), with 15 articles published in journals.
One extended abstract (Noyman and Larson, 2020) was included due to its scope and critical relevancy. Year In 2019, 13 articles were published. Eight were published in 2020 (so far), nine in 2018, three in 2017, and one in 2016 ( Figure 4). This indicates that research into attempting to incorporate GANs with user-guided design is an area that is young and developing. The slight decrease in articles in 2020 may be due to the current difficulties in performing both experiments and user testing. Given the sudden increase in publications, there is a reasonable amount of cross-over between some research streams. Ideally, these researchers may consolidate their work and progress together, rather than in parallel, into the future. Domain Space Articles were categorized based on the featured subject domain(s) they focused on (either in their implementation or theoretical domain). Work could exist across multiple categories. The distribution of articles across the categories is summarized in Figure 4 and expanded upon in this section. The largest cohort of articles (17 articles) focused primarily on art-design tasks (e.g., generating paintings, and terrains). Six articles are situated in the fashion-design space. An area that was expected to have greater representation was urban planning/ design; however, this area was the focus in only three articles reviewed. There were four articles addressed in the graphic design space and three in game design. Finally, one article addressed sports-play design. This categorization helps understand the areas where this research is currently being deployed and aids in identifying areas currently under-served. Human-Computer Interface Modality We are focused on researching how GANs are being integrated into design support tools. One of the critical human-computer interaction (HCI) considerations when creating these tools is how the end-user will communicate or interact with the system. The reviewed articles present various interface modalities, with the most common being sketch-based interfaces (21 articles) and what we choose to call "landmark-based," which is where an area or point of interest is marked by the user in some manner (12 articles). In addition, two works each featured node-graph, parameter-based, and language-based interaction modalities, respectively. Figure 5 illustrates the breakdown of the UI modalities. As most articles are situated within the art-design space, it is somewhat unsurprising that most interface modalities were sketch-based. Sketch-based UI systems are familiar, comfortable, and intuitive to use for artists. There is some cross-over between sketch and landmark modalities also, as sketches can be used to provide information, or as commonly referred to as "hints," to the network to constrain or guide the output. Node-based interfaces are another common feature of modern digital content creation (DCC) tools. This type of interface may become more prevalent in the future. Natural-language user interfaces (NLUIs) have become a feature of everyday life (Shneiderman et al., 2016). It remains a challenging problem and a highly active research area. Despite this, NLUIs represent a highly intuitive way to communicate with a system. Two of the articles reviewed took this approach. Disentangled representation learning (Locatello et al., 2019) is an exciting, emerging research topic within the ML space. 
It is an unsupervised learning approach that seeks to encode meaningful feature representations in the learned space, with each dimension representing a symmetrically invariant feature. In practical terms, this allows for the extraction of parameters that correspond to desirable features that facilitate control over the system. If we take the example of a data set of simple shapes, the approach may allow for the extraction of parameters such as rotation and color. This is not a trivial task within the GAN space, as there is no predefined distribution over which we can exercise control. Two articles adopt current approaches (Chen et al., 2016) to the problem to present users with controllable parameters to aid in the design process. Method of Operation In examining the surveyed work, two fundamental modes of operation became apparent: variation and beautification. Design horizon expansion through variation is not a new paradigm. Many interesting new tools have been coming online to allow designers to explore machine-generated variations. The basic workflow is that a designer provides a design, possibly alongside a specified set of constraints (e.g., variance from example and structural constraints). The machine then generates a selection of design variants. The designer can then examine the variants and select one or adapt their design, taking inspiration from the generated examples. This process can be iterated over until a final desired design is found. Seven articles fall into this category. The other primary mode of operation was "beautification," or elaboration based on coarse user input. This mode is perhaps the most straightforward mode of operation, in that designers provide the system with coarse-level input (e.g., sketches, graphs, and language-based instruction), and the system outputs a more fully realized design (e.g., image, landscape, and game level). This review outlines various examples of this approach, and despite differences in interaction mode, inputs, and so on, the basic principle remains the same. This category represents the largest cohort of works, with 26 articles. A single outlier, BasketballGAN (Hsieh et al., 2019), operates by generating a predicted simulation result given a user design. DISCUSSION 6.1 Research Question 1: What Approaches Exist Around Generative Adversarial Networks-Enabled Human-Artificial Intelligence Collaborative Design Tools? Architecture and Urban Planning Graph editors are a common interface paradigm within the DCC landscape, so it is somewhat interesting that the work by Nauata et al. (2020) presented one of only two graph editor interfaces in the reviewed literature. The work describes a framework for a node-graph-based floor plan generation tool. A user constructs a simple graph representing the rough desired layout (Figure 6), with nodes representing rooms of various categories and edges representing adjacency. The method uses the Conv-MPN (convolutional message passing networks) architecture, but here, the graph structure is explicitly passed to the generator. The Conv-MPNs are used to update feature volumes via message passing, which are later up-sampled and propagated to a final CNN that converts a feature volume into segmentation masks. In this way, the generator produces output that resembles a floor layout, a segmented image with axis-aligned rectangles for each room and corridor. The user can then select preferred outputs and manually adjust them as required.
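To make the interaction model concrete, the snippet below sketches the kind of room-adjacency graph a user of such a tool might supply. The graph construction uses the standard networkx API, but the generate_floorplan call is a hypothetical stand-in for a trained graph-conditioned generator, not the authors' actual interface.

```python
import networkx as nx

# Rough desired layout: nodes are rooms (with a category), edges mean "adjacent".
layout = nx.Graph()
layout.add_node("living",  category="living_room")
layout.add_node("kitchen", category="kitchen")
layout.add_node("bed1",    category="bedroom")
layout.add_node("bath",    category="bathroom")
layout.add_edges_from([("living", "kitchen"),
                       ("living", "bed1"),
                       ("bed1", "bath")])

# Hypothetical call into a trained graph-conditioned generator: it would return
# one segmentation mask (an axis-aligned room region) per node in the graph.
# masks = generate_floorplan(layout, num_samples=8)
# The user then inspects the candidate layouts, picks one, and edits it by hand.
```

The important point is that the user edits a small, structured object (the graph) while the network handles the pixel-level layout.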
The work notes some limitations to be addressed in future work, such as the current requirement that rooms be rectangular and allowing for further parameterization (i.e., room size and corridor length). The FrankenGAN (Kelly et al., 2018) framework allows users to generate high-fidelity geometric details and textures for buildings. The name is due to its nature as a patchwork of networks rather than an end-to-end system. By adopting this approach, intermediate regularization steps could be performed, leading to higher quality results. In addition, it offers users the chance to interact with the system at several stages. A user initially provides a coarse building shape, and the geometry generation network then adds high-fidelity details, such as doorways, windows, and sills. At this stage, the user can edit the generated geometry through a sketch-based system before passing new geometry to the texture generation network. The user can then specify the desired style for the resulting texture as well as variance parameters. The authors present some impressive results across a wide range of building styles, and a perceptual study indicated that their system produced significantly better results than competing models. The DeepScope project (Noyman and Larson, 2020) presents a real-time, generative platform for immersive urban-design visualization. In this work, the authors used a tangible user interface (TUI) to allow designers to iterate over urban designs quickly and observe generated images of the emergent street scene in real time (Figure 7: top left). The TUI takes the form of a tabletop with a grid layout, with physical cell blocks of varying colors representing different classes of tile (street, green space, and construction) and an observer (a standard LEGO figurine), from whose perspective the final street scene is rendered. A scanner overlooks the tabletop and relays changes being made to the grid-cell layout, which updates the system's internal virtual 3D layout. This 3D environment is procedurally decorated with cityscape elements (e.g., lamppost and vegetation) and then passed to the DC-GAN, which generates a street scene from the observer's perspective. The system was designed with intuitiveness as a design principle, allowing experts and nonprofessionals to experiment collaboratively with urban design scenarios with real-time feedback. The platform can augment the early stages of cityscape design with vivid streetview visuals. Unlike traditional CAD tools, the complexity of creating a 3D urban scene is carried out by the pretrained neural network. Finally, the lack of high-resolution visual fidelity, currently a drawback with GAN output, allows designers and regulators to focus on the overall "Image of the City" instead of undecided details. In this case, rather than training a discriminator to recognize whether an icon is man-made or machine-generated, two discriminators determine whether paired images are similar in structure and color style, respectively. With this system, humans and machines cooperate to explore creative designs, with human designers sketching contours to specify the structure of an icon, then the system colorizing the contours according to the color conditions. To improve usability and to not overwhelm the user with all possible varieties, the user is asked to specify a "style." The system then randomly selects a selection of icons labeled with that style which are fed to the network as the color condition. 
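A minimal sketch of this dual-discriminator arrangement is shown below. The module definition and tensor shapes are hypothetical placeholders; the aim is only to illustrate how one discriminator can score structural agreement with the user's contour while a second scores stylistic agreement with the chosen exemplar.

```python
import torch
import torch.nn as nn

class PairDiscriminator(nn.Module):
    """Scores whether a (reference, candidate) image pair 'goes together'."""
    def __init__(self, in_channels=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, reference, candidate):
        return self.net(torch.cat([reference, candidate], dim=1))

d_structure = PairDiscriminator()  # judges (contour sketch, colorized icon) pairs
d_colour = PairDiscriminator()     # judges (style exemplar, colorized icon) pairs

contour = torch.rand(8, 3, 64, 64)    # user-drawn outlines (placeholder batch)
exemplar = torch.rand(8, 3, 64, 64)   # icons sampled from the requested style
fake_icon = torch.rand(8, 3, 64, 64)  # stand-in for generator(contour, exemplar)

bce = nn.BCEWithLogitsLoss()
real_label = torch.ones(8, 1)

# Generator-side adversarial loss: both discriminators should be fooled, i.e.
# the output should look structurally faithful *and* stylistically consistent.
loss_g = bce(d_structure(contour, fake_icon), real_label) + \
         bce(d_colour(exemplar, fake_icon), real_label)
```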
Even giving for the fact that output are relatively simple icons, the results of this method are quite impressive and a subjective evaluation study conducted by authors confirmed that their method performed best among a selection of other methods representing the state of the art. Content layout is a core skill of graphic designers, being a key component in guiding attention, esthetics, etc. Zheng et al. () presented a system for generating user-guided high-quality magazine layouts. The network was trained on a large data set of fine-grained semantic magazine layout annotations with associated keyword-based summaries of textual content. Users can exercise control over the layout generation process by roughly sketching out elements on the page to indicate approximate positions and sizes of individual elements. The work of (Zeng et al., 2019) investigated whether AI can be used to augment design creativity. They based their approach on fundamental design principles (Dorst and Cross, 2001;Preece et al., 2015) and adapted the existing design cycle to incorporate AI tools (Figure 2). Addressing a particularly difficult design problem, that of typeface design for Chinese characters, the authors noted the historically stylistically broad yet unified nature of Chinese character design. Modern Chinese character fonts do not exhibit this level of variation, and so the authors attempt to use AI to augment typeface designers' creative abilities. The network was trained on a selected number of standardized Chinese typefaces. The final model was then used to generate many typefaces, and the designers examined the generated fonts to find ones that matched their desired features. Input fonts that adversely affected the resulting sets were then removed, and the network was retrained. This cycle was repeated until the designer's criteria were met. The study shows how the design cycle is not fundamentally altered, but simply augmented. Design professionals are still responsible for putting forward questions, formulating rules, and providing the starting point. The AI can then take over some responsibility, to generate a more diverse set of typeface forms than would be feasibly possible by a design team. The study also demonstrates how this collaboration can continue indefinitely to meet design goals. Chen et al. (2019) presented a system to aid design ideation. The framework consists of two separate networks: a semantic ideation network and a visual concepts synthesis network. In the initial design session, the users interact with a visual semantic network graph (the network is based on ConceptNet, with a filter to increase concept diversity). The users can choose how far they would like to venture, conceptually, from the initial idea and also view and filter the resulting network (i.e., in one example, a user starts with the concept "spoon," steps forward, and lands on "straw" via "soup": the participant then combined the two ideas to develop a spoon that incorporated a straw into the handle). In the second phase, the system uses a GAN to generate novel images that attempt to synthesize a set of visual concepts. A case study was performed, with participants developing some interesting results, combining concepts together (e.g., one design team blended a spoon with branching leaves, each of which had a vessel for condiments shaped like seeds) in unusual and interesting ways. Game Design In the field of content creation for game design, Volz et al. 
(2018) presented a controllable level creator for the ever-popular Mario Brothers video game. Many, including modern level descriptions for tile-based 2D platformers, boil down to simple text files. In this work, a DC-GAN was trained on a large set of text-based Mario Brother levels. To provide designers a level of control over the final levels, the space of levels encoded by the GAN is further searched using the covariance matrix adaptation evolutionary strategy (CMA-ES) (Hansen et al., 2003). This algorithm makes it easy to specify vectors that correspond to various desirable features, such as the number of enemies and bonuses. One issue that arose was that some of the generated levels were unsolvable and unreachable by the player, given the control restrictions. This was solved through a simulation approach. An AI agent ran through each of the generated levels to identify those levels that did not have a solution. In this way, level elements with undesirable features were eliminated. The authors note the fast speed of level generation and suggest that the levels could be generated on the fly, dynamically adapting to playstyle, difficulty, or designer input. Schrum et al. (2020) extended this work with a focus on providing a set of design tools that would give level designers a greater level of control over the finished levels. They again used a GAN to generate a latent space, from where level segments can be drawn. The designer can explore a series of level segments, highlight segments they like and then apply a latent variable evolution algorithm that presents them with the selected segments and their offspring for further iteration. The designer can also select an individual-level segment and alter the individual latent vector values, allowing further control. The designer can also select two segments and interpolate between them by walking the line in high-dimensional space between the two latent vectors (Figure 8). The authors then performed a user study, with 22 participants, to evaluate their system. Three groups participated, two with evolution controls and one with the full feature set. The results showed that exploration controls were preferred to evolution, but the full feature set was most desirable. The work of Gutierrez and Schrum (2020) took this idea a step further. They use a similar automatic GAN-based approach to the generation of individual dungeon rooms, as the previously described work, but combine this with designer-specified graphs describing the high-level dungeon layout for the game The Legend of Zelda. This work blends the abilities of GANs nicely to generate level segments, whose parameters can be controlled similarly to the two previous works, with an interface that allows a game designer to easily specify the desired high-level features. Fashion Several works centered around fashion design, with solutions presented spanning across many different problems. The work of Kato et al. (2019) examined whether GANs can be used to generate novelty while preserving the inherent brand style. Their training set consisted of samples of a single brand design released over a 3-year span; this choice was made to maximize style consistency. The progressive growing of GANs (P-GAN) algorithm (Karras et al., 2017) was applied to produce a set of generated designs of varying resolutions outputted at three discrete training epochs. They then performed a user study and evaluation with professional pattern makers. 
The professionals were asked to evaluate the difficulty in creating a physical pattern from each design they were presented to evaluate the importance of both resolution and training time. Interestingly, neither factor had a significant impact. Of far greater importance was the professional's experience with the underlying brand. Pattern makers are quite familiar with elaborating designs from rough sketches, but those designs are generally informed by brand design philosophy and principles. The authors noted that the professional's impression of pattern-making difficulty was reduced when given rough dimensions and material suggestions and suggested that results could improve dramatically with much higher quality images. Sbai et al. (2019) presented a system that generates novel fashion designs while allowing a fair degree of user control over several key features, importantly texture and color. The system builds on existing approaches to the generation of novelty by GANs (Elgammal et al., 2017) and facilitates a degree of fine control over the output and the degree of novelty introduced. In this way, the final tools are more useful to a potential designer than being presented with a set of novel designs. The authors performed some user studies to determine the degree to which their system produced preferred designs over state-of-the-art approaches and found a very significant improvement in likability scores for the garments produced by their system. Cui et al. (2018) presented a tool for designers to visualize more complete fashion designs quickly. Users provide both an input sketch and a material. The system then applies the material to the sketch in an intelligent manner. In contrast to the previous work, the training data were broadly sourced, containing diverse designs, styles, and brands. The network architecture adapts BicycleGAN (Zhu et al., 2017b) by using fabric pattern samples to train the encoder so that only the material and color information are contained within the latent vector. A user's sketch constrains the shape of the final generated image, and the color and material are constrained by the input pattern. The final sketch-based UI is straightforward to use, and one could imagine it being used in both recreational and professional settings. From a professional's perspective, one could imagine the benefit of quickly visualizing different patterns on the same design, saving valuable production time. This work, unfortunately, omits a user study. A study may have yielded interesting findings as the visual fidelity of the produced designs is very impressive and among the best found among all papers reviewed. As we saw from the previously discussed paper, practitioners perceived difficulty in creating physical patterns from the generated designs were mitigated through material specification. Investigating how professionals perceived this tool with material specification and improved visual fidelity would be highly interesting. Zhao and Ma (2018) described an in situ augmented reality (AR)-enhanced fashion design system powered by AI. In this case, the authors detailed the thought process behind their design decisions. They consulted with design professionals in determining the feature set for their interface and considered their common practices and working habits. They determined that modern fashion designers often look to street style, taking inspiration from spontaneous urban fashion trends. 
The authors decided on an AR system to empower designers to sketch and design garments in situ quickly. One of the issues with generative networks is that they are bound to the data set upon which they are trained, but designers have different styles and techniques. To compensate for this and create a more generalized system, the authors decided on a two-step compensation method. The author first marks up the captured images with familiar landmarks (e.g., hemline and sleeve end). These landmarks are then used as a compensation signal for the second network to cater to sketch styles that lie outside of the trained network's representation. While the system results lack the previous example's visual quality, the interface and design processes are much improved (Figure 9: left). The authors kept the end-users in mind throughout, and the building blocks are in place for a viable, usable system. The AR component is somewhat limited, in that there is little essential difference between the desktop and AR versions in practice. However, a natural extension would be to use AR to dynamically map the generated designs to the model. FIGURE 9 | Left: The interface designed by Zhao and Ma (2018) in action, an AR system for in situ fashion design. Right: When designing, it can be cumbersome to iterate over all the possible material options. Cui et al. (2018) presented a system that attempts to optimize the process. Cheng et al. (2020) introduced a novel approach for language-based interactive image editing. A database of images of fashion items alongside textual descriptions was created and used in network training. During a session, a virtual agent takes natural-language directions from the user as the input. Based on the directions, the agent modifies the current image accordingly. In this way, the user can arrive at their desired design. Dong et al. (2020) presented a system very similar to the method of Jo and Park (2019) (see section 6.1.5) but applied in the fashion space. The authors conditioned their network on data containing full-body models for better performance when working on clothing. Two-Dimensional Art Considered a significant early work in the GAN space, the iGAN system developed by Zhu et al. (2016) was among the first systems that facilitated interactive feedback with a GAN-based model (it also heavily influenced the architecture around which many of the examples in this section are based). A user selects an image, which is projected into a low-dimensional latent representation using a GAN. The user then uses various brush tools to achieve the rough desired shape and color requirements, visualized in the low-dimensional model in real time. At the final step, the same series of transformations is applied to the original image to generate a result. The system can also be used to design from scratch, as even from a few simple strokes, the generator will do its best to generate a plausible result. The work of Chen et al. (2018) presents a similar painting interface to the previous example, with the intent this time to translate rough user sketches into more esthetically pleasing results. Their work builds on the VAE-GAN model (Larsen et al., 2016), generating far crisper images than merely using an AE model while maintaining many of its benefits. Park et al. (2019) presented a system, commonly known as GauGAN, capable of turning rough sketches into photorealistic pictures (Figure 10: left).
Their system is built on top of the pix2pixHD algorithm, introducing the SPADE (SPatially ADaptivE Normalization) normalization technique. Traditionally, normalization attempts to learn the affine layers after the normalization step, and so semantic information from the input tends to be "washed away." SPADE learns the affine layer directly from the semantic segmentation map so that the input's semantic information can be kept and will act across all layer outputs. Users provide the system with a sketch, in effect a semantic map, and a style image. The resulting image is generated in real time and highly responsive to user input. The SmartPaint system (Sun L. et al., 2019) presents a system and interface that closely mirrors the previous example. From an interface perspective, the main point of difference is that the system recommends a set of reference material from the dataset (representing the most similar examples to the user input) based on the user's input sketch. In this way, the system attempts to guide the user toward more realistic, known examples while still allowing a large degree of creative flexibility. There has been a considerable amount of recent research around image generation based on an input image and set of controllable parameters (Lee and Seok, 2017;Alqahtani et al., 2019). The recent work of Jo and Park (2019) builds upon this research and presents a sketch-based image-editing tool that allows users to alter images in a variety of ways (altering facial geometry, adding makeup, changing eye-color, adding jewelry, etc.). During an editing session, the user masks off, via sketch, areas of the image they want to edit and then sketch in the changes they want using their free-form artist license (Figure 10: right). They can also use color brushes to sketch features of that color (e.g., hair color). The system is highly performant, and the generated image adapts in real time to user input. The system also operates on relatively large images, 512 × 512 pixels, which increases real-world usability, as does the feel of its interface, which is much like any professional painting tool. Ho et al. (2020) presented a novel sketch-based generation network for full-body images of people. The authors used semantic key points corresponding to essential human body parts as a prior for sketch-image synthesis. The authors demonstrate some impressive results, even given very course input sketches. The art of inking is a refining process that builds on artists' sketches. Inking refines the sketch, drawing emphasis on certain areas and lines of the sketch, and is a crucial tool in creating depth and perspective. Several image-editing suites 4 provide automatic inking tools and features. Building upon their prior work, Simo-Serra et al. (2018) presented a system that uses GANs to transform a user sketch into an inked (a process the authors refer to as sketch-simplification) image. The network was trained jointly on both a supervised (a series of professionally drawn sketches and corresponding inked image pairs) and unsupervised (rough sketches and line drawings) datasets by employing an auxiliary discriminator network. By combining supervised and unsupervised data in this way, the system can more easily handle a wide variety of artist styles. The human-AI collaborative loop is not explored in depth, but the system does provide real-time feedback, and a user study could further validate the impressive results. After inking, the next stage in developing a finished comic artwork is coloring. 
Ci et al. (2018), Hati et al. (2019), and Ren et al. (2020) all presented frameworks for user-guided line-art colorization. Automatic colorization of line art is a challenging task, as the colorization process must achieve a pleasing result while keeping the texture and shading of the original work. Line art, by its nature, does not provide any semantic information. To overcome this problem, the authors developed systems where the user sketches on the image providing information to the system about desired colors, location, etc. All articles used large datasets of colorized anime images, corresponding line art, and a pixel hint mask to train the network. One of the key improvements of the Hati et al. work was stroke simulation rather than simple pixel sampling to provide the color hints during training. Unfortunately, the works did not perform end-user evaluation studies, as the PaintsTorch system is one of the most feature-rich among the reviewed literature. Architecturally, the work presented by Zhang et al. (2017) is quite similar. Here, the authors trained their network on grayscale photographs and their corresponding color images. The collaborative system allows users to add color landmarks and adjust them with real-time feedback to the gray image, and the system generates plausibly colorized images. The authors note that it is not always easy for users to select colors in an esthetically pleasing or realistic manner. To mitigate this problem, the system gives the user feedback about the colors they may wish to use, based on a predicted codistribution, guiding them toward a good result. The authors did perform a user study. A group of nonexpert users was given 1 min to colorize a series of photographs, and the results were then passed through a real vs. fake Amazon Mechanical Turk (AMT) test. 5 The automatic colorization performed reasonably well, but with user input, the number of images passing as real examples significantly increased and increased again when user color recommendations were used. While the user interface presented by Ren et al. (2020) is similar to the previous work, the internal architecture differs. An innovative twostage interactive colorization based on superpixel color parsing was used to generate better results. The authors also proposed metrics for quantitative result evaluation. All the design tasks that we have covered till now have been aimed at expert or semi-expert users. Zou et al. (2019) presented an example of human-AI collaboration primarily designed for children. The system is trained on a set of scene sketches and cartoon-style color images with text descriptions. The system allows users to progressively colorize an image, via simple natural language-based instructions. Users can refine the result interactively, specifying and colorizing specific foreground objects to match their requirements. An extensive series of validation experiments were run, looking at criteria such as performance and generalization. A more in-depth look at how children interacted with the system would be of real benefit, but we acknowledge the difficulty in performing such studies. 3D Art Normal maps are a commonly used tool in efficiently representing complex 3D shapes, adding depth and lighting to otherwise flat images. Su et al. (2018) presented a human-AI collaborative tool to generate normal maps from user sketches in real time. 
The authors used a slightly modified version of the popular pix2pix (Isola et al., 2016) algorithm and trained the network on a database of sketches with corresponding normal maps and a single-channel point mask (user-defined hints). At runtime, the user can sketch and watch in real time as the normal map is generated. The user can select points on the image and manually adjust them (each point is adjusted in the point mask), allowing for fine-grain adjustment of the normals as needed. The final system is intuitive, with simple but responsive interactive controls, and the generated maps are of high quality, superior to those achieved by a selection of the other state-of-the-art algorithms. The authors conducted a small pilot study to look at perceptual loss for the rendered normal maps against several different methods, and their system performed significantly better than the other algorithms. One of the most impressive systems reviewed was the terrain authoring tool presented by Guérin et al. (2017). The authors trained several GANs, or terrain synthesizers, corresponding to different sets of topological features. The training set was assembled through automatic conversion of example patches of landscape into user-like sketches and contours. During terrain authoring, the artist provides a rough sketch. The sketch defines features such as rivers, ridges, some altitude cues, or a combination of them. The input is given to the sketch-to-terrain synthesizer, which generates a plausible terrain from it in real time. If the result is not satisfactory, the user can re-edit the sketch and rerun the synthesis, or remove parts of the terrain that will then be completed by the eraser synthesizer. After the coarse sketch is finished, the user can erode the terrain by running the erosion synthesizer. It should be noted that this level of performance and interactivity had not been seen before, even in professional tools. To evaluate and validate their system, the authors conducted a user study with both expert and nonexpert groups. After generating a landscape with specified features, the participants were asked to rate the system according to three criteria: (1) Does the generated terrain follow the sketch? (2) Is the system reactive? And finally, (3) is it easy to express one's intent? The system scored very highly on all criteria. The work of Zhao et al. (2019) focused on the task of adding high-fidelity detail to landscapes. Rather than outputting a landscape from a sketch, it amplified detail on an existing coarse landscape. A novel approach to embedding landscape "themes" into a vector space is described, giving artists and end-users control over the result's look. The system is also performant enough for interactive editing, a crucial criterion for artists. The nature of the embedding space also allows interpolation between themes, enabling exploration outside the example space. Another system we reviewed presents an interactive 3D modeling tool, assisting users in designing real-world shapes using a simple voxel-based interface. The system builds on the 3D-GAN model (Wu et al., 2016) by adding a projection operator that maps a user-defined 3D voxel input to a latent vector in the shape manifold of the generator, one that both maintains similarity to the input shape and avoids areas of the latent space that generate unrealistic results.
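As a rough illustration of the projection idea just described, the sketch below searches a generator's latent space for a code whose decoded voxels match the user's rough input, while a discriminator score discourages implausible regions. The optimizer, loss terms, weights, latent dimensionality, and the assumption that the generator emits occupancy probabilities in [0, 1] are all illustrative choices, not the published method.

```python
import torch
import torch.nn.functional as F

def project_to_shape_manifold(G, D, user_voxels, latent_dim=200,
                              steps=200, lr=0.05, realism_weight=0.1):
    """Optimize a latent code so G(z) resembles the user's voxel sketch while the
    discriminator keeps the result in a plausible region of the latent space."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        fake = G(z)                                      # generated voxel occupancy grid
        similarity = F.binary_cross_entropy(fake, user_voxels)
        realism = -D(fake).mean()                        # higher score = more plausible
        (similarity + realism_weight * realism).backward()
        opt.step()
    return G(z).detach()                                 # the projected, realistic shape
```

The balance between the similarity and realism terms is exactly the trade-off described above: too much weight on similarity reproduces the user's crude input, too much on realism snaps to an existing training example.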
The training set consisted of a collection of shapes within a broad object category represented by voxel grids (the authors demonstrate results for planes, chairs, and tables). The method avoids "bad" or unrealistic areas of the latent space by training a projection model that balances similarity between input and output against staying very close to an existing sample. During a session, the user quickly builds up a simple voxel shape representing a rough approximation of the desired output in a voxel editor. Once finalized, the user hits the "SNAP" button, which triggers the generator and produces a result. In this way, the user interacts with the system to finalize their design. The system represents work in progress. The final output of a user session would need significant editing before being used in a production environment. However, the interaction between the AI and the user is intuitive and straightforward, and a similar approach may become more relevant as the quality of 3D-generated results improves. The work of Wu et al. (2018) presented a novel system for generating paint strokes using GANs. Traditionally, to generate realistic brush and natural-media behavior (e.g., watercolors and oils), a fluid or physical simulation approach is adopted (Chen et al., 2015). Here, the authors replaced the paint simulation with a neural network (the brush strokes were still simulated). The model was trained on data generated by a physically based oil painting simulation engine (the inputs being corresponding height fields, color fields, and stroke information). During a live painting session, the network's input consists of the existing paint on the canvas and the new stroke drawn by the user, and it outputs a predicted height map and color map of the new stroke. The system is highly interactive, and examples of some stunning user creations are presented. The system significantly outperforms their previous, simulation-based work and presents a new avenue for exploration with other natural-media painting simulations such as watercolor or pastels.

Sport
In BasketballGAN (Hsieh et al., 2019), the authors present a novel approach to human-AI collaborative play design. Basketball has a long history of coaches using clipboard sketches as a tool for play design and to convey those plays to their players. One need only turn on any high-level televised basketball game to see this in practice. One of the drawbacks of this design methodology is that it is static: it does not explicitly cater for how opposition players may react. Having an instinct for how the opposition will behave is purely down to the coach's skill in understanding both the game and the skill sets of the opposing players. BasketballGAN gives the play designers the same primary tool they are used to and augments it with AI. The network was trained on a player-movement dataset released by the NBA. The system takes as input a sketch from the designer and outputs a dynamic play simulation (Figure 7: top right). To maintain the realism of the resulting simulations, several loss functions were described for dribbling, defending, passing, and player acceleration to guide the network; a sketch of one such penalty is given below. These heuristics prevent abnormal player behaviors on the court. The resulting system produces very plausible 2D simulations, and in this way, a coach can analyze their play designs and get an instant prediction of how the opposition may counter them.
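The following is a minimal sketch of what one such domain-specific loss term might look like: a penalty on frame-to-frame player accelerations that exceed a plausible bound. The tensor layout, the bound, and the weighting are illustrative assumptions and not the published BasketballGAN losses.

```python
import torch

def acceleration_penalty(trajectories, max_accel=0.5):
    """trajectories: (batch, time, players, 2) court positions per frame.
    Penalize accelerations beyond a plausible bound so generated players
    do not move in physically impossible ways."""
    velocity = trajectories[:, 1:] - trajectories[:, :-1]   # per-frame displacement
    accel = velocity[:, 1:] - velocity[:, :-1]               # change in velocity
    excess = torch.clamp(accel.norm(dim=-1) - max_accel, min=0.0)
    return excess.mean()

# In training, such penalties would be added to the adversarial objective, e.g.
# loss_G = adversarial_loss + 0.1 * acceleration_penalty(generated_plays)
```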
Using this information, they can iterate over the play to improve it or avoid passing to a player likely covered by a skillful defender, and so on. A small user study was performed to examine the plausibility of the generated results. Three groups, with varying levels of basketball knowledge, were asked to answer whether they thought a sample of generated and real plays was real or fake. Only the most expert group could distinguish the generated plays above the chance level, suggesting that the system could become a viable real-world tool with further refinement.

6.2 Research Question 2: What Are the Limitations of Studies and Approaches in Generative Adversarial Networks-Enabled Human-Artificial Intelligence Collaborative Design Tools?
Given the early stages of human-AI collaboration research in the generative space, many of the articles reviewed presented effectively "toy" examples or case studies that were deliberately scoped to smaller examples to avoid the combinatorial explosion problem. The applicability of some of the examples presented in this article will be tested as further research is conducted on more complex examples, but as the work stands now, very few of the systems described would be fit for a production environment. Due to the complexity of AI and ML from an algorithmic and architectural perspective, there is a gap in knowledge between interaction/user-experience designers and ML engineers when it comes to understanding ML's limits, what it can and cannot achieve (Yang, 2018;Yang et al., 2020). Barring two examples (Guérin et al., 2017;Zhao and Ma, 2018), none of the other reviewed works discussed the gathering of end-user requirements or user-experience design in any degree of detail. This may reflect the fact that ML engineers are driving the technology at this nascent stage. If AI technology is going to bridge the gap between algorithmic and human concerns, then HCI and UX designers have a vital role to play. From a research perspective, a stronger initial focus on requirements gathering and human concerns would significantly improve the final systems and ground them in real-world practical problems. Related to the previous point, the lack of systematic, robust user studies presents a significant limitation of the studies presented in this review. Close to half of the systems were not tested with users (n = 10), or, when they were, few details of the testing were published (n = 6). Participant counts varied greatly, from 6 to 26. Some studies used experts, and some used novice users. Optimally, both groups' performance would be examined during a study, but only two articles adopted this approach. We are currently living in a 4K, soon to be 8K, world when considering consumers' expected image resolution. Beyond consumer displays, there has been increasing adoption of immersive environments with massive resolutions (Lock et al., 2018;Bourke and Bednarz, 2019). Due to the computational cost, architectural complexity, and difficulty of training GAN models, the current state of the art outputs images at 512 × 512 px. While some of these results are very impressive and have garnered much media attention, how much real-world value and penetration will these systems achieve without a marked increase in visual fidelity? There is no doubt that the quality of results will continue to improve as architectures evolve, but right now, it remains a major limiting factor.
The "Double Diamond" design process model (UK, 2005) is among the most well-known and cited extant design process visualizations. It is referenced widely in the HCI/ML literature, and authors have attempted to adapt it to cater to ML systems (Yang et al., 2020). ML presents some fundamental problems that make its incorporation into such a design process challenging. First, rapid prototyping of ML systems can be very difficult to achieve in practice. Networks can take a long time to train and iterate over. Second, the results of the developed system are fundamentally constrained by the available data. Working with artists and designers to investigate how current processes can be adapted to cater to ML remains an essential avenue for research. One fascinating piece of work was the BasketballGAN framework (Hsieh et al., 2019). It was notable as the only work that sought to visualize not simply a result, given a proposed design, but also how that result would evolve over time. While this is not a genuinely novel concept, since many simulation-based approaches exist to solve similar problems, it does represent a novel approach in the GAN space. Taking crowd simulation as an example, we see several GAN-based solutions for modeling behavior (Gupta et al., 2018;Amirian et al., 2019), but these models only take current agent states into account. A wide range of factors affect crowd behavior, for example, cultural factors (Fridman et al., 2013), density (Hughes et al., 2015), and group goals (Bruneau et al., 2014). Combining an approach similar to BasketballGAN with current methods could greatly improve crowd-behavior modeling and allow exploration of semi-scripted scenarios. Similarly, this concept could be extended to many problems and research fields. One of the notable aspects of many of the works that we have reviewed is that users get instantaneous visual feedback based on their input (Keim et al., 2008). This allows the user to develop a relationship with the system and understand its features and limitations. As we mentioned in the previous section, ML solutions can fail in highly unpredictable ways. These failures can lead to a loss of user confidence and trust. One emerging field of research that seeks to mitigate this problem is XAI (Biran and Cotton, 2017;Hughes et al., 2020). XAI aims to look within the black box and extract information or explanations for the algorithm's output. In addition to providing tools to assist with trust and accountability, XAI can assist with debugging and bias in ML. The inputs, outputs, and network design of ML algorithms are ultimately still decided with human input (human-in-the-loop) and, as such, are often subject to human errors or bias. Explanations from XAI-enabled algorithms may uncover potential flaws or issues with this design. Bau et al. (2019) presented GAN Dissection, a framework designed to examine the extent to which GANs learn image composition. To build their system, the authors first generated a series of images and then identified neurons within the generator whose activations correlated with meaningful object concepts in those images. A user can switch these neurons on or off using their system, and the corresponding objects will be added or deleted. In this way, the system extracts meaning from the network and relays it to the user in a useful manner. The LogicGAN system, presented by Graves et al. (2020), adapts recent advances in XAI (Lundberg and Lee, 2017) to the GAN space.
Ordinarily, the discriminator network of a GAN simply reports one real-numbered value of corrective feedback to the generator network. LogicGAN incorporates an explanation network that feeds additional information back to the generator about which features were important or unimportant in the discriminator's decision, in effect "explaining to the AI." The explanations can also be explored by a user or ML engineer. These recent examples represent important steps forward in improving user trust and potential avenues for further research. There is a fundamental question around a generative model's ability to navigate outside its example space, generating more than simply re-combinations of the input. In section 6.1.4, we discussed some examples from the fashion field; one nonacademic example of note was the work by the cross-discipline team responsible for the online series of case studies, "How to generate (almost) anything" (Cameron and Yanardag, 2018). Their fashion design case study closely matched the work of Kato et al. (2019), but they trained their model on a database of cover art of vintage sewing patterns. Due to the restricted training time, the authors noted that the AI made some interesting mistakes, such as combining standard sleeves and bell sleeves within the same dress. Also, it tended to blend elements from the background into the final design. These "mistakes" were inspirational for the pattern makers and led to final patterns that would probably not exist but for the training restrictions. In essence, this poses a critical question around the power of generative models. Often, the model is merely generating re-combinations of existing ideas. This is a limitation of an ideal GAN, since a perfectly trained GAN generator will reproduce the training distribution. Such a model cannot directly generate an image based on new fundamental principles, because such an image would not look anything like what it has seen in its training data. Other artists are explicitly exploring this idea (Olszewska, 2020), using GANs to create interesting new artworks. It may be the case that an imperfect GAN can be more artistically interesting than its ideal counterpart. Due to the criteria we imposed, the final sample size of articles is relatively small, limiting the breadth of conclusions or models that can be elaborated at this point. However, it does represent the state of the research in the area at the moment. This would indicate that there is a large space for researchers to examine and exploit.

CONCLUSION
Leveraging the power of generative networks to create interfaces and systems that add to the creative toolbox of design practitioners is still in its early stages. This review has explored the current literature on human-AI collaboration involving GANs in the design space. We have shown that while the work in the area is still nascent, some powerful tools are starting to emerge. Trends are beginning to appear in the areas that researchers are focusing on: sketch-based interfaces, in situ design, and end-user-driven interface design. This article has described current approaches while also identifying a range of limitations in this field of research, primarily a lack of focus on the end-user when developing training sets and designing interfaces, and limited outcomes in terms of scalability or professional usability. If this technology is going to make the breakthrough to mainstream adoption, a stronger focus on collaboration and the end-user is needed.
ETHICS STATEMENT
Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

AUTHOR CONTRIBUTIONS
RTH selected and studied the sources, designed the structure of the manuscript, and wrote the first draft of the manuscript. LZ and TB contributed supervision of the literature study and the writing of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
Tracheal epithelium cell volume responses to hyperosmolar, isosmolar and hypoosmolar solutions: relation to epithelium-derived relaxing factor (EpDRF) effects

In asthmatic patients, inhalation of hyperosmolar saline or D-mannitol (D-M) elicits bronchoconstriction, but in healthy subjects exercise causes bronchodilation. Hyperventilation causes drying of airway surface liquid (ASL) and increases its osmolarity. Hyperosmolar challenge of airway epithelium releases epithelium-derived relaxing factor (EpDRF), which relaxes the airway smooth muscle. This pathway could be involved in exercise-induced bronchodilation. Little is known of ASL hyperosmolarity effects on epithelial function. We investigated the effects of osmolar challenge maneuvers on dispersed and adherent guinea-pig tracheal epithelial cells to examine the hypothesis that EpDRF-mediated relaxation is associated with epithelial cell shrinkage. Enzymatically-dispersed cells shrank when challenged with ≥10 mOsM added D-M, urea or NaCl with a concentration-dependence that mimics relaxation of isolated perfused tracheas (IPT). Cells shrank when incubated in isosmolar N-methyl-D-glucamine (NMDG) chloride, Na gluconate (Glu), NMDG-Glu, K-Glu and K2SO4, and swelled in isosmolar KBr and KCl. However, isosmolar challenge is not a strong stimulus of relaxation in IPTs. In previous studies amiloride and 4,4′-diisothiocyano-2,2′-stilbenedisulfonic acid (DIDS) inhibited relaxation of IPT to hyperosmolar challenge, but had little effect on shrinkage of dispersed cells. Confocal microscopy in tracheal segments showed that adherent epithelium is refractory to low hyperosmolar concentrations that induce dispersed cell shrinkage and relaxation of IPT. Except for gadolinium and erythro-9-(2-hydroxy-3-nonyl)adenine (EHNA), actin and microtubule inhibitors and membrane permeabilizing agents did not affect ion transport by adherent epithelium or shrinkage responses of dispersed cells. Our studies dissociate relaxation of IPT from cell shrinkage after hyperosmolar challenge of airway epithelium.

The effect of elevations in ASL osmolarity on respiratory epithelium physiology has been studied on a limited basis and is not understood. Willumsen et al. (1994), using cultured human nasal epithelium, reported that the application of D-mannitol (D-M) or NaCl to create hyperosmolar conditions (50-430 mOsM) at the apical surface resulted in decreased thickness of the epithelium and alterations in Na+, Cl−, and K+ transport. Inhalation of hyperosmotic saline or D-M aerosols also elicits pulmonary obstruction in asthmatic patients (Holzer et al., 2003;de Meer et al., 2004) that is thought to result from elevation in ASL osmolarity and involve EIB mediators. Inhaled hyperosmotic saline and D-M aerosols are efficacious agents for identifying bronchial hyperreactivity in asthmatic patients (Brannan et al., 2005;Anderson, 2010;Wood et al., 2010). Inhalation of hyperosmotic solutions and D-M by cystic fibrosis patients reduces exacerbations, and improves pulmonary function and hydration of sputum (Elkins et al., 2006;Daviskas et al., 2010;Aitken et al., 2012). Hogg and Eggleston (1984) asked, "Is asthma an epithelial disease?" in relation to the effects of inhaled isosmolar and non-isosmolar aerosols in asthmatic patients. A corollary question, "what are the effects of raised osmolarity of the ASL on airway function?," has not been addressed and, therefore, has been investigated in our laboratory.
In the guinea-pig isolated, perfused trachea (IPT) preparation, hyperosmolar challenge of the epithelium induces relaxation of the airway smooth muscle (Munakata et al., 1988;Fedan et al., 1999, 2004a;Johnston et al., 2004;Wu et al., 2004;Jing et al., 2008a) that is inhibited by the Na+ channel blocker, amiloride, and the Cl− channel blockers, 4,4′-diisothiocyano-2,2′-stilbenedisulfonic acid (DIDS) and 5-nitro-2-(3-phenylpropylamino)benzoic acid (NPPB). Ionic and non-ionic, permeant and impermeant osmolytes have similar relaxant potencies (∼9-25 mOsM). Relaxation is elicited with as little as 3-5 mOsM increments. The osmotic relaxant effect is very powerful. For example, 120 mM KCl added to the serosal surface of the trachea elicits depolarization of the smooth muscle and contraction, but, applied to the lumen of the trachea, it causes relaxation, thereby overwhelming any effect that KCl might have had on the muscle after diffusing across the epithelium. The relaxations are dependent upon the presence of the epithelium and mediated via the release of epithelium-derived relaxing factor (EpDRF). EpDRF resembles, in part, carbon monoxide; it is not nitric oxide or a prostanoid. p38 is involved in EpDRF-mediated relaxation (Jing et al., 2008a). Relaxation responses are not inhibited by cytoskeleton/microtubule-interfering agents. EpDRF release occurs in response to incremental increases in osmolarity rather than sensing of the absolute osmolarity. Functional evidence was obtained to suggest that the EpDRF release initiated by hyperosmolar challenge is unrelated to cell shrinkage; this evidence was indirect. Hyperosmolar challenge evokes electrophysiological responses that are complex, osmolyte-specific and concentration-dependent, polarized across the epithelium and involve activation of JNK, PKC and phosphatases (Wu et al., 2004;Jing et al., 2008b). The osmosensor which triggers these responses is undescribed. Lipopolysaccharide treatment in vivo (Dodrill and Fedan, 2010) or exposure to cytokines in vitro (Ismailoglu et al., 2009) potentiated hyperosmolarity-induced relaxation. Lipopolysaccharide treatment in vivo also increased transepithelial potential difference (Vt) and potentiated depolarization responses to elevations in osmolarity. These findings suggest that the EpDRF system is regulated dynamically in these models and might occur in lung diseases beyond asthma. Hyperosmolar and hypoosmolar solutions applied to airway epithelium also induce vasodilation and vasoconstriction, respectively, of submucosal blood vessels (Prazma et al., 1994), implying that the epithelium is involved in regulation of blood flow and that this axis is modulated by ASL tonicity. Mammalian cells shrink when exposed to a hyperosmolar environment (Strange, 1994;Wehner et al., 2003;Lang, 2006;Hoffmann et al., 2009). Previously, our hypothesis that EpDRF release is not attributable to epithelial bioelectric events or cell shrinkage was supported indirectly by functional experiments in the IPT using osmolar maneuvers known to affect volume in other cell types. In the present investigation we evaluated this hypothesis further by measuring cell volume responses of dispersed and adherent tracheal epithelial cells to solutions of varying composition and osmolarity, and examined the effects of blockers of ion transport, cytoskeleton/microtubule reorganization, signaling, mediator formation, and membrane permeabilizing agents.
We utilized experimental conditions and protocols similar to those that had been employed in IPT experiments to enable comparisons between the two investigative approaches. Our findings dissociate cell shrinkage from EpDRF release in response to hyperosmolar challenge.

ANIMALS
These studies were conducted in facilities accredited fully by the Association for the Assessment and Accreditation of Laboratory Animal Care International, and the research protocol was approved by the Institutional Animal Care and Use Committee. Male Hartley guinea pigs (Crl:Ha; 600-700 g) from Charles River Laboratories (Wilmington, MA), monitored free of endogenous viral pathogens, parasites, and bacteria, were used in all experiments. The animals were acclimated before use and were housed in filtered ventilated cages on Alpha-Dri virgin cellulose chips and hardwood Beta chips as bedding, provided HEPA-filtered air, Teklad 7906 diet and tap water ad libitum, under controlled light cycle (12 h light) and temperature (22-25°C) conditions.

PREPARATION OF EPITHELIAL CELL SUSPENSIONS
Guinea-pigs anesthetized with sodium pentobarbital (65 mg/kg, i.p.) were sacrificed by thoracotomy and bleeding, and 4.2 cm long tracheal segments were removed. After cleaning in modified Krebs-Henseleit (MKH) solution (composition below) the tracheas were cut longitudinally through the smooth muscle band, and incubated with 2 ml 0.2% protease in EMEM at 37°C for 1 h. The digestion was stopped with 10% FBS/EMEM solution at 4°C. The epithelial cells were scraped off with a scalpel blade; clumps were rinsed and triturated in 10 ml of EMEM solution containing 0.1% DNase I. The digest was centrifuged (800 rpm) for 4-5 min at 10°C. Cells pooled from several animals, the number of which was determined by the particular experiment, were re-suspended in 5 ml of MKH solution, filtered (Falcon 40 μm nylon filter) and centrifuged. The cells were suspended in 1 ml of gassed MKH solution and incubated for 1 h at 37°C to allow for re-establishment of ion gradients. Cell suspensions were divided into aliquots for the various experimental conditions. Cell integrity was assessed microscopically after adding 0.4% trypan blue solution. A typical ciliated cell in the suspension is shown in Figure 1A.

CELL VOLUME MEASUREMENT OF DISPERSED CELLS
Cell volume was calculated from diameter measured with a cell sizer (Coulter Multisizer, Beckman Coulter, Inc.; Fullerton, CA). Approximately 12 s was required for volume measurements; thus, volume was decreasing during the early, ∼30 s time point readings. Challenge of the cells with agents being investigated for their hyperosmolar effects on cell volume involved rapid pipetting of cell suspension (5-50 μl) into 20 ml vials containing solutions (37°C) of interest, and mixing the vials with gentle inversion. Cell size readings were begun 3-5 s later. Challenge of cells with hypoosmolar solution was accomplished by first suspending cells in 10 ml of MKH solution, followed by rapid mixing in the vial with 10 ml of added distilled water (37°C) in order to halve the osmolarity, before volume measurements were made. To examine the effects of isosmolar solutions, the cells in MKH were allowed to settle to the bottom of a conical tube. All the MKH solution except that trapped between the cells was aspirated. Isosmolar solution (1 ml; gassed; 37°C) was added to the cells, a 20 μl sample was mixed into a vial of isosmolar solution of the same composition, and measurements were made.
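As a numerical aside on the measurement just described, cell volume follows from the sizer's diameter readings by the sphere formula, and relative shrinkage is the quantity used to compare challenges. The short sketch below shows both conversions; it is only an illustration under the simplifying assumption of spherical cells and is not part of the original protocol.

```python
import numpy as np

def volume_pl(diameter_um):
    """Spherical volume in picoliters from a diameter in micrometers (1 pl = 1000 um^3)."""
    return (np.pi / 6.0) * diameter_um ** 3 / 1000.0

def percent_shrinkage(control_pl, challenged_pl):
    """Relative volume loss used to compare osmotic challenges."""
    return 100.0 * (control_pl - challenged_pl) / control_pl

# Example: a ~9.5 um cell corresponds to ~0.45 pl, in the range reported for
# unstimulated dispersed cells; a drop to ~0.34 pl is roughly 25% shrinkage.
print(volume_pl(9.5), percent_shrinkage(0.45, 0.34))
```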
To examine the effects of a transition from isosmolar solution to hyperosmolar solution (37°C; gassed), referred to as "hyperosmolar jump," cells (20 μl) from the isosmolar suspension were placed in a vial of hyperosmolar solution, mixed, and measurements were made.

IPT PREPARATION
The IPT (Munakata et al., 1988;Fedan and Frazer, 1992;Jing et al., 2008a) is a novel preparation that permits agents to be applied separately to the mucosal (intraluminal or IL) or serosal (extraluminal or EL) surfaces of the trachea while monitoring contractile or relaxant responses of the airway smooth muscle from changes in diameter. It allows assessment of the role of the epithelium in integrated responses of the organ (Jing et al., 2008b) and has been used to demonstrate that both the apical and basolateral membranes of airway epithelial cells respond to hyperosmolar challenge (Fedan et al., 2004a). After sacrifice, a 4.2 cm section of trachea was excised, cleaned in gassed MKH solution, and mounted on a perfusion holder. When mounted, indwelling cannulae became inserted into the tracheal lumen at either end. The cannulae contained side holes for measurement of pressure at the inlet (positive) and outlet (negative) ends of the trachea, and changes in tracheal diameter were detected as changes in the inlet minus outlet pressure difference (ΔP, in cm H2O) using a differential pressure transducer, while the lumen was perfused with gassed MKH solution from a separate intraluminal bath of MKH solution (IL bath) at a rate of 34 ml/min. The device was placed into an extraluminal bath containing gassed MKH solution. Transmural pressure was set to zero. Both baths were maintained at 37°C. The preparation was allowed to equilibrate for 1 h with MKH solution changes at 15-min intervals.

HYPEROSMOLAR JUMP IN IPT
The trachea was contracted with MCh [3 × 10−7 M; ∼extraluminal EC50 (Fedan and Frazer, 1992)]. At the response plateau, the IL perfusion solution was changed abruptly from MKH solution to isosmolar K2SO4 or KBr solutions. Upon establishing a stable response, the perfusing solution was abruptly changed to hyperosmolar solution (120 mOsM) of the same osmolyte.

Confocal imaging and transepithelial Vt measurement
A custom chamber (RC-50 Imaging Chamber; Warner Instruments; Hamden, CT) was used to image the height of living epithelium in tracheal segments while simultaneously measuring Vt. A 3.5-mm section of trachea was removed as described above, cleaned in MKH solution, slit longitudinally through the smooth muscle, and mounted onto the chamber. Apical and basolateral chambers were perfused (1 ml/min) independently with gassed MKH solution (37°C). Using "T" junctions, the inflow lines for the apical and basolateral chambers, containing MKH solution, were in continuity with silver/silver chloride-agar bridge voltage electrodes containing 0.9% NaCl to measure Vt under open-circuit conditions with a voltage/current clamp amplifier (DVC 1000; World Precision Instruments, Inc., Sarasota, FL). The chamber was mounted on a Zeiss LSM 510 laser confocal microscope. A confocal microscopy palette was applied to the image stacks to indicate the intensity of the fluorescent cellular stain, with a color scale of red/white-yellow-green-blue representing highest to lowest intensity, respectively.
Epithelial cell thickness was measured using the orthogonal view/measurement function, and 3-D projections of the cell layer were constructed about the z-axis using Zeiss image software. Following perfusion with MKH solution for ∼60 min, the fluorescent dye, calcein (15 μM), was added to the apical perfusate for 30 min to load the epithelial cells. After a 30-min washout with MKH solution to remove extracellular calcein, control images of the un-stimulated trachea were taken. The remaining procedures were done in such a way as to mimic the conditions used in the IPT preparation (see above). The basolateral chamber was perfused with MKH solution containing MCh (3 × 10−7 M); from this point onward delivery of MCh was continuous. Confocal images were taken after 15-20 min. In one series of experiments, the apical bath was perfused with MKH solution while making cumulative additions of D-M to elevate osmolarity. In a second series of experiments, isosmolar solutions of D-M or urea dissolved in distilled water were delivered to the apical bath, followed by D-M or urea dissolved in distilled water to create a hyperosmolar jump. In another series of experiments the effects of selected pharmacological cytoskeleton/microtubule-interfering blockers on epithelial cell height were investigated. These included colchicine, erythro-9-(2-hydroxy-3-nonyl)adenine (EHNA), cytochalasins B and D, nocodazole and latrunculin B. In these experiments the tracheal segments were exposed to basolateral MCh before and during hyperosmolar challenge.

BIOELECTRIC MEASUREMENTS IN TRACHEAL SEGMENTS: USSING CHAMBER
An Ussing chamber (World Precision Instruments) was used to measure changes in Vt and transepithelial resistance (Rt) in response to various solutions and agents. A tracheal segment was prepared as described above, reflected open and anchored across an aperture of 0.125 cm2 to separate the apical and basolateral hemi-chambers. Both hemi-chambers (5 ml each) were perfused separately with recirculating, gassed MKH solution (37°C). Two silver/silver chloride-agar bridge voltage electrodes containing 0.9% NaCl, and two silver/silver chloride-agar bridge current electrodes containing 0.9% NaCl, were placed, one of each type in each hemi-chamber, to monitor Vt and deliver current, respectively. Isotonic NaCl-containing bridge electrodes were used instead of 3 M KCl-containing bridges to prevent possible changes in osmolarity arising from KCl diffusion from the electrodes. The preparations were allowed to equilibrate with MKH solution changed at 15-30 min intervals. Vt was measured under open-circuit conditions (DVC 1000 or EVC 3000; World Precision Instruments, Inc.). Square-wave pulses (5 μA, 5 s) were delivered at 50-s intervals in order to obtain Rt from Ohm's law.

Preparations of isosmolar and anisosmolar solutions
A freezing point depression osmometer (Osmette A Osmometer; Precision Systems Inc.; Natick, MA) was used to determine the osmolarity of solutions (±2 mOsM standard error). The osmometer was calibrated before use with reference solutions (100 and 500 mOsM). Isosmolar solutions were matched to the osmolarity of MKH solution prepared for each experiment.

STATISTICAL ANALYSIS
The results are expressed as means ± SE. All data were normally distributed. ANOVA for repeated measures was utilized to detect differences when multiple measurements were made using a single sample.
In experiments in which two measurements were made using a single sample, and readings were taken before (control) and after an experimental manipulation, Student's t-test for paired samples was employed to detect significant differences. Student's t-test for non-paired data was employed to detect significant differences when appropriate for single comparisons made between two unpaired samples. P < 0.05 was considered significant. n is the number of separate experiments. n-values for perfused trachea and confocal experiments represent results obtained using tracheas from separate animals; n-values for dispersed cells represent separate experiments in which cells pooled from several tracheas were employed.

INITIAL CHARACTERIZATION OF DISPERSED EPITHELIAL CELLS
The cell suspension consisted primarily of single cells or doublets of ciliated and non-ciliated cells (Figure 1A). In ciliated cells, cilia were clustered and beating was evident. The volume of unstimulated cells was ∼0.42-0.48 pl. The cells excluded trypan blue (85-93%) for at least 5 h, during which cilia continued to beat. The volume of control cells did not change over a 1-h period (Figure 1B) but decreased significantly after 120 min, reaching a value of ∼0.36 pl after 5 h. In response to 120 mOsM D-M or 240 mOsM NaCl, volume decreased rapidly, reaching a maximum by ∼30 s to 1 min. Volume then increased somewhat at ∼10-20 min, reflecting modest regulatory volume increase (RVI), and remained constant over 60 min. By 120 min cell volume declined similarly to the control cells. Based on these results, experiments with dispersed cells lasted no longer than 30 min. Responses to hypoosmolar challenge (Figure 1C) were examined. After exposing cells to half-osmolar MKH, an immediate swelling, maximal by 1 min, was followed by regulatory volume decrease (RVD) over a 30-min period. To examine relaxant effects following exposure of epithelium to isosmolar and hyperosmolar solutions in previous studies, the IPT preparations were first contracted with EL MCh (3 × 10−7 M; see Figure 4). Cell volume was unaffected by MCh (3 × 10−7 M) during a 15-min incubation (n = 7; not shown). MCh had no effect on D-M-induced cell shrinkage responses (Figure 1D); therefore, MCh was omitted in many of the remaining experiments.

OSMOLAR CONCENTRATION DEPENDENCE OF CELL SHRINKAGE IN DISPERSED CELLS
To compare the reactivity of dispersed epithelial cells to hyperosmolar challenge with that observed previously in the IPT, we investigated the osmolar time- and concentration-dependencies of cell shrinkage using D-M (a nonionic, impermeant osmolyte), urea (a nonionic, permeant osmolyte) and NaCl (an ionic osmolyte; Figure 2). For D-M and NaCl, significant cell shrinkage was stimulated at 10 mOsM and was concentration-dependent up to 120 mOsM. Urea showed comparable reactivity. Urea and NaCl caused ∼15% shrinkage, whereas D-M caused ∼25% shrinkage, at the highest osmolyte concentrations. There was little evidence of RVI in these experiments, except at 80 and 120 mOsM NaCl. At 30 mOsM D-M and urea, a concentration reported earlier to approximate the EC50 of the osmolytes for relaxation of the IPT, the reduction in volume was less than half of the maximal amount of shrinkage.
ISOSMOLAR SOLUTION EFFECTS IN DISPERSED EPITHELIAL CELLS
Previously (Fedan et al., 2004a), we reasoned that if EpDRF release in response to hyperosmolar challenge were triggered by cell shrinkage per se, then a relaxation response could also be triggered by shrinkage under isosmolar cell shrinkage conditions. Therefore, whether isosmolar challenge of dispersed cells elicits shrinkage was examined. Isosmolar NaCl did not significantly affect cell volume (Figure 3A), although a small decrease was seen consistently. Isosmolar NMDG-Cl (Figure 3C), containing the impermeant cation, produced a comparable cell shrinkage that was significant. Replacement of Cl− with the impermeant anion Glu in Na-Glu (Figure 3E) caused a greater shrinkage response than NaCl or NMDG-Cl. Replacement of Na+ with NMDG along with substitution of Cl− with Glu (Figure 3G) resulted in a large shrinkage response (∼35%). (D-M and urea could not be examined because cell sizing is dependent on solute conductivity.) Isosmolar KCl (Figure 3H) initiated a large cell swelling response, but in the IPT did not cause relaxation (Fedan et al., 2004a). The swelling effect could have resulted from accumulation of intracellular Cl− from a solution containing 144 mM Cl− (greater than the 122.8 mM in MKH solution) in a less negative cytoplasm resulting from depolarization of the membrane by 144 mM K+. We explored this notion using K+ salts with Cl− substitutions. Isosmolar KBr (Figure 3F) initiated swelling that was approximately half that produced by isosmolar KCl; RVD was not evident. In contrast to KCl and KBr, both K-Glu and K2SO4 caused shrinkage without RVI (Figures 3B,D). No RVI or RVD was observed during responses to these osmolytes under isosmolar conditions.

HYPEROSMOLAR CHALLENGE OF IPT FOLLOWING PERFUSION WITH ISOSMOLAR SOLUTION (HYPEROSMOLAR JUMP PROTOCOL)
We investigated the effects of isosmolar KBr and K2SO4 in the IPT inasmuch as these two osmolytes affected cell volume oppositely (see Figure 3). Both agents initiated contractions (Figure 4) or had no effect (not shown) when applied to the IL bath, in the manner seen earlier for KCl. K+, which is present in high concentration (144 mM) in these isosmolar solutions (compared to MKH solution), would be expected to diffuse across the epithelium in sufficient quantities to cause depolarization of the smooth muscle and contraction. But adding 120 mOsM of KBr and K2SO4 triggered large and long-lasting relaxations.

FIGURE 4 | Responses of IPT to perfusion with isosmolar (IO) K2SO4 (Top) or KBr (Bottom), followed by hyperosmolar (HO; 120 mOsM) challenge with the same osmolyte (hyperosmolar jump). These results are representative of n = 4 experiments for K2SO4 and n = 6 for KBr, in which contraction (shown) to isosmolar K2SO4 or KBr, or no effect (not shown), was observed. The discontinuities in the responses after the isosmolar additions occurred during perfusion solution changeover. Vertical bar, 5 cm H2O; horizontal bar, 5 min.

HYPEROSMOLAR CHALLENGE OF DISPERSED EPITHELIAL CELLS FOLLOWING INCUBATION IN ISOSMOLAR SOLUTION (HYPEROSMOLAR JUMP PROTOCOL)
For comparison, the hyperosmolar jump protocol used in the IPT was applied to dispersed cells. The cells were first incubated with isosmolar KCl, KBr, or NaCl for 10 min, after which they were challenged with added 120 mOsM of KCl, KBr, or NaCl. KCl and KBr were chosen for these experiments because under isosmolar conditions they had caused cell swelling but not relaxation of
the trachea, and NaCl was chosen because it also did not consistently cause relaxation under isosmolar conditions (Fedan et al., 2004a). As expected, placement of cells in isosmolar KCl or KBr initiated cell swelling, and isosmolar NaCl led to cell shrinkage which was reproducible but not significant (Figure 5). After addition of KCl, KBr, or NaCl to elevate osmolarity, the cells immediately shrank to levels that were less than both the t = 0 min and t = −10 min values. RVI was swift; volume was gained by t = 5 min and returned to the t = −10 min values in the cases of KCl and KBr. Hyperosmolar NaCl-challenged cells lost 40% of their volume and did not volume regulate to the t = −10 min values. In contrast, hyperosmolar solution addition to tracheas perfused with isosmolar or MKH solution stimulated a relaxation that remained at a stable plateau (Figure 4).

EFFECTS OF Na+ AND Cl− CHANNEL INHIBITORS ON DISPERSED CELLS AND THEIR RESPONSES TO MCh AND HYPEROSMOLAR SOLUTIONS
Because relaxation of the MCh-contracted preparations in response to hyperosmolar challenge was inhibited by amiloride, DIDS and NPPB but not by bumetanide (see Introduction), experiments were conducted using dispersed cells to examine the effects of these blockers on responses to MCh and hyperosmolar solutions. Osmolarity was raised using D-M, rather than NaCl, to avoid changes in ion gradients. The IPT protocol was mimicked: cells were incubated with an inhibitor for 30 min, MCh (3 × 10−7 M) was applied, and 15 min later D-M was added to elevate osmolarity while MCh remained. Cells from separate preparations were used to obtain control data and to evaluate the effects of the ion transport blockers, MCh in the presence of ion transport blockers, and D-M in the presence of the ion transport blockers and MCh. The results are depicted in Figure 6. In control cells volume was stable over 30 min. Amiloride had no effect on cell volume, and MCh, either in the absence or presence of amiloride, also had no effect. D-M (30 mOsM) evoked significant shrinkage both in the absence and presence of amiloride. Compared to the control cells, the shrinkage was attenuated at most time points when amiloride was present. Bumetanide (10−5 M) had no effect on cell volume, but, in the presence of bumetanide, MCh induced a significant volume decrease at early time points. In cultured guinea-pig tracheal epithelial cells (Fedan et al., 2007) basolaterally-applied MCh stimulates Cl− efflux. Coupled with inhibition by bumetanide of Cl− influx via Na+,K+,2Cl−-cotransport, it is possible that MCh-stimulated Cl− efflux resulted in a decrease in intracellular Cl− level that promoted shrinkage. Bumetanide had negligible effects on shrinkage responses to D-M, and, if anything, had a small potentiating effect. Cell volume was decreased in the presence of DIDS (10−4 M); NPPB (10−5 M) did not have this effect (not shown). It is difficult to explain the cell shrinkage in the context of inhibited Cl− efflux, which would evoke cell swelling, and the fact that DIDS and NPPB differed in their effects. Neither blocker affected volume in the presence of MCh (NPPB not shown; n = 5), and both agents inhibited D-M-induced cell volume reduction, DIDS to a greater degree (NPPB not shown; n = 5).

LACK OF EFFECT OF CALCEIN ON EPITHELIAL ION TRANSPORT
Before the fluorescent intracellular dye, calcein, was used in the bioelectric and confocal microscopy experiments described below, we first investigated whether it had any effects on ion transport.
Both calcein (1.5 × 10−5 M) dissolved in DMSO (n = 4) and DMSO alone (n = 4) caused small hyperpolarizations (∼1 mV; P > 0.05). After 30 min of incubation, MCh (3 × 10−7 M) was applied to the serosal bath; the resulting ∼1 mV hyperpolarization was not affected by calcein (P > 0.05). After 15 min, 120 mOsM D-M applied to the apical bath caused depolarization; subsequently-added 240 mOsM D-M elicited a further depolarization. There were no differences in the two responses to D-M in the absence and presence of calcein (P > 0.05). Calcein, DMSO and MCh had no effects on Rt. D-M increased Rt in a concentration-dependent manner, but calcein had no effect on these responses (P > 0.05). It was concluded that calcein would not affect Vt responses in confocal microscopy experiments.

EFFECTS OF HYPEROSMOLAR AND ISOSMOLAR SOLUTIONS ON IN SITU, ADHERENT EPITHELIUM: BIOELECTRIC RESPONSES AND CONFOCAL MICROSCOPY
For comparison to dispersed cells, cell volume changes of adherent epithelial cells were measured in relation to electrophysiological changes using protocols employed in IPT experiments. The goal of comparing the time-courses of bioelectric and volume responses in "real time" proved to be infeasible, as the 2-3 min required for processing images was too long to permit moment-to-moment comparisons with bioelectric changes. First, we validated the preparation. Serosally-applied MCh (3 × 10−7 M) elicited hyperpolarization (5.6 ± 2.5 mV; n = 6). The subsequent addition of 120 mOsM and 267 mOsM D-M to the mucosal chamber evoked concentration-dependent depolarization responses (Figure 7). Occasionally 120 mOsM D-M triggered hyperpolarization. These results are consistent with our previous findings from IPT and Ussing preparations. Serosal MCh (3 × 10−7 M) addition had no effect (5.8 ± 5.6%) on cell height (P > 0.05), nor did mucosal D-M in concentrations less than 120 mOsM (not shown). This is in contrast to the finding that 10 mOsM D-M caused shrinkage of dispersed cells (above) and EpDRF release and ion transport alterations (previous studies). Nevertheless, after 15-20 min of exposure, 120 and 267 mOsM D-M caused concentration-dependent shrinkage (Figure 8), up to ∼35% at 267 mOsM, over the same range in which it caused depolarization (Figure 7). The effects of isosmolar D-M and urea on cell volume could be investigated in the confocal apparatus. Neither osmolyte affected cell height (Figure 9; compare to Figure 3). Upon addition of 120 mOsM D-M or urea to the isosmolar solutions, D-M caused a decrease in cell height but urea was without effect (not shown; n = 4). Hyperosmolar urea, in contrast, was equiactive with D-M and other osmolytes in relaxing the trachea (Fedan et al., 2004a).

EFFECTS OF CYTOSKELETON/MICROTUBULE-INTERFERING INHIBITORS ON EPITHELIAL BIOELECTRIC RESPONSES: USSING CHAMBER
Cytoskeletal re-arrangements accompany volume change in cells. EHNA, colchicine, nocodazole, cytochalasins B and D, and latrunculin B did not inhibit relaxation responses of the IPT to D-M, whereas latrunculin B potentiated the responses (Fedan et al., 2004b). Little is known of the effects of these agents on airway epithelial ion transport. Therefore, we investigated their effects on Vt and Rt, bioelectric responses to MCh and D-M, and cell volume responses (Figures 10, 11). A change in Vt by EHNA that was not accompanied by a change in Rt suggests a decrease in transcellular ion transport. The remaining agents variously decreased Rt.
The decreases in Rt caused by the cytochalasins and latrunculin B may explain the depolarization responses they initiated. The depolarization caused by jasplakinolide (∼35%) was greater than the change in Rt (∼15%), suggesting that transcellular ion transport was inhibited. Colchicine (0.2 mM) and nocodazole (2.5 × 10−5 M) (both inhibit microtubule polymerization) had no effect on Vt and Rt. These findings indicate that the cytoskeleton and microfilaments regulate ion transport variously via transcellular and paracellular pathways. MCh (3 × 10−7 M; Figure 11) elicited hyperpolarization without affecting Rt; this effect was not due to the DMSO vehicle [see also Figure 7 and Johnston et al. (2004)]. EHNA and the cytochalasins appeared to inhibit the response to MCh, but only cytochalasin B produced a significant inhibitory effect. This observation agrees with the finding that cytochalasin B inhibited contractions of IPT to MCh (Fedan et al., 2004b). The remaining agents did not affect responses to MCh. Only latrunculin B potentiated 120 mOsM D-M-induced relaxation of MCh-contracted IPT (Fedan et al., 2004b). Therefore, we investigated the effects of these blockers on Vt and Rt responses to D-M. In the presence of MCh, none of the agents affected depolarization or Rt responses to D-M (Figure 11). Collectively these findings indicate that the bioelectric response of airway epithelium to D-M is not regulated by the cytoskeleton.

EFFECTS OF CYTOSKELETON/MICROTUBULE-INTERFERING INHIBITORS ON DISPERSED EPITHELIAL CELL VOLUME RESPONSES
In preliminary experiments, DMSO, the solvent for most of these agents, reduced cell volume even at the lowest concentration (0.1%) needed to dissolve the inhibitors. Therefore, a DMSO control was included in every experiment. EHNA dissolved in DMSO produced less cell shrinkage than DMSO itself (Figure 12); the other agents had no effect compared to control (DMSO or vehicle control; not shown). After incubation with DMSO or agent dissolved in DMSO, 120 mOsM D-M in DMSO vehicle-containing MKH solution was added and cell volume was measured. EHNA inhibited responses to D-M (Figure 12). In these experiments colchicine, cytochalasins B (n = 6) and D (n = 6), nocodazole (n = 4) and latrunculin B (n = 4) had no effect (not shown). These findings agree with the lack of effect of these inhibitors on cell volume responses to hyperosmolar challenge in other cells (Foskett and Spring, 1985;Hallows et al., 1991, 1996).

EFFECTS OF HYPERTONICITY-INDUCED CATION CHANNEL (HICC) INHIBITION ON EPITHELIAL BIOELECTRIC AND MECHANICAL RESPONSES
Inhibition of relaxation of the IPT to hyperosmolar challenge by amiloride suggests that EpDRF release is linked to epithelial Na+ channels. Two of the three HICCs are sensitive to amiloride (Hoffmann et al., 2009;Numata et al., 2012) and could have been affected by amiloride. To evaluate this possibility, the effects of the HICC blockers, gadolinium and flufenamic acid, were investigated in Ussing chambers (Figure 11). The effects of the two agents were different. Apically-applied gadolinium (10−4 M) had no effect on Vt and did not affect MCh-induced hyperpolarization. It did, however, inhibit D-M-induced depolarization without affecting Rt. However, flufenamic acid (10−4 M) elicited a strong depolarization but was without effect on MCh- and D-M-induced responses; Rt also was not affected.
In separate IPT experiments, mucosal flufenamic acid did not evoke a response (in cm H2O; DMSO control: −0.2 ± 0.1; flufenamic acid: 0.0 ± 0.1; n = 5; P > 0.05), nor were relaxant responses to mucosally-applied 120 mOsM D-M affected (% relaxation of the MCh-induced contraction: DMSO control: 84.1 ± 16.1%; flufenamic acid: 67.8 ± 10.9%; P > 0.05). These findings suggest that while bioelectric responses to hyperosmolarity may involve HICCs, relaxant responses mediated by EpDRF do not.

FIGURE 11 | Effects of apically-applied cytoskeleton/microtubule-interfering agents, pore-forming agents and hypertonicity-induced cation channel blockers on Vt and Rt (A), Vt and Rt responses to MCh (B), and Vt and Rt responses to 120 mOsM D-M (C).

DISCUSSION
Small elevations in osmolarity, comparable to those that activate forebrain osmosensory neurons (Bourque, 2008;Ciura et al., 2011), are detected by the airway epithelium, alter ion transport, and elicit EpDRF release and airway smooth muscle relaxation. Earlier functional studies using the IPT indicated indirectly that the stimulus to EpDRF release after hyperosmolar challenge of epithelium results not from cell shrinkage, but from the incremental increase in osmolarity. The focus of the present study was to employ parallel strategies and protocols used in earlier functional studies in order to investigate whether EpDRF release is linked to cell shrinkage. A new characterization of some cell volume regulation properties of the epithelium also was obtained. This information will be of use for understanding the consequences of elevations in ASL osmolarity during exercise and in response to therapies developed to raise the osmolarity of the ASL with osmolar agents, such as saline and D-M. The main conclusion of this investigation is that EpDRF release from adherent epithelial cells in response to hyperosmolar challenge is unrelated to volume changes in the cells. Another conclusion is that dispersed epithelial cells share many volume regulation properties reported in other cell types, but their sensitivity to the volume effects of hyperosmolar challenge is substantially greater than that of adherent epithelial cells. Finally, the reactivity of dispersed cells to hyperosmolar challenge is paradoxically comparable to that of adherent cells in relation to EpDRF release, but not to volume change in adherent cells. The relevance of these novel findings to regulation of submucosal blood flow by apical osmolarity warrants further investigation. The volume responses of epithelium upon hyper- and hypoosmolar challenge with ionic and non-ionic osmolytes were similar to those reported in other cells (Foskett and Spring, 1985;Hallows et al., 1991;Nielsen et al., 2007;Hua et al., 2010;Numata et al., 2012; and references in Introduction). During hyperosmolar exposures of dispersed cells RVI was evident in some but not most preparations, and it was not as vigorous as that observed in other cell types [(Hallows et al., 1991, 1996;Pedersen et al., 1998;Nielsen et al., 2007;Numata et al., 2012) and reviews above]. Willumsen et al. (1994) did not find evidence of RVI in cultured human airway cells during hyperosmolar challenge. However, RVI was evident and consistent in hyperosmolar jump experiments performed on dispersed cells.
Exposure of dispersed cells to halved osmolarity led to cell swelling accompanied by RVD, as observed in other cell types (Hallows et al., 1991;Pedersen et al., 1998;Nielsen et al., 2007), including Calu-3 cells (Harron et al., 2009). Swelling of dispersed cells occurred in response to hypoosmolar challenge and during incubation with isosmolar KCl and KBr. The swelling stimulated by KCl was observed to be halide-dependent. The anion permeability sequence for the cells (Cl > Br > SO4 > Glu) is somewhat distant from the permeability sequence of volume-activated Cl− channels of parotid gland and HL-60 cells [Br > Cl > Glu (Arreola et al., 1995a,b)] but comparable to swelling-induced currents in Ehrlich ascites cells (Cl > Glu; Pedersen et al., 1998). MCh had no effect on cell volume. Except for their role in long-term cell regulation (Vazquez-Juarez et al., 2008), little is known of the role of G-protein coupled receptors in volume responses of cells to hypo- and hyperosmolar challenge. This finding may suggest that basal ion transport and secretory function of airway epithelium is not associated with muscarinic receptor control of volume. There are few similarities between the effects of isosmolar solutions in the IPT vis-à-vis dispersed cells that tie cell shrinkage to EpDRF release. Whereas neither isosmolar D-M, NMDG-Glu nor urea elicited relaxation of the IPT, isosmolar NMDG-Glu caused extensive shrinkage in dispersed cells. In the IPT these agents elicited relaxation when applied in hyperosmolar concentrations either to MKH solution or in hyperosmolar jump maneuvers. Isosmolar NaCl, Na-Glu, and NMDG-Cl elicited relaxation in some preparations, but hyperosmolar additions of these agents always caused relaxation. Isosmolar NaCl had no effect, while NMDG-Cl and Na-Glu elicited cell shrinkage in dispersed cells. Under isosmolar conditions, no K+ salt caused relaxation, whereas hyperosmolar concentrations of K+ salts did cause relaxation. On the other hand, isosmolar KCl and KBr caused swelling of dispersed epithelial cells. In dispersed cells, isosmolar K+ salts gave rise to cell swelling or shrinkage, with a halide-dependence in the direction of the response: K-Glu and K2SO4 evoked shrinkage, while KCl and KBr initiated swelling. The shrinkage caused by NMDG-Glu was greater than that caused by either NMDG-Cl or Na-Glu. A Na+-dependence of the volume response also was evident: NMDG-Glu caused greater shrinkage than Na-Glu. Despite the diversity of effects seen in the responses of dispersed cells to isosmolar solutions, the dissociation between the effects of isosmolar solutions on dispersed cells, where shrinkage was encountered, and adherent epithelium, where relaxation was largely not initiated by these solutions, is a second line of evidence that argues against the view that EpDRF release is initiated by cell shrinkage. Differences between IPT and dispersed cells were also evident in the context of RVI. Isosmolar KCl and KBr, but not NaCl, increased cell volume, but neither condition initiated relaxation of IPT, and yet hyperosmolar jump caused rapid cell shrinkage followed by RVI. In contrast, IPT preparations remained relaxed as long as hyperosmolar conditions were present, either after hyperosmolar addition alone or hyperosmolar jump, suggesting that EpDRF release is a prolonged and not transient phenomenon. If RVI occurred in the adherent epithelium of the IPT, it was unrelated to the release of EpDRF.
Amiloride, DIDS and NPPB, but not bumetanide, inhibited hyperosmolar solution-induced relaxation of IPT and bioelectric responses (Fedan et al., 1999;Wu et al., 2004;Jing et al., 2008a). In unstimulated, dispersed cells, amiloride and bumetanide had little or no effect on cell volume, while DIDS (but not NPPB) reduced cell volume. Amiloride and DIDS inhibited modestly cell shrinkage in response to hyperosmolar challenge with D-M. DIDS had no effect on RVD in Calu-3 cells (Harron et al., 2009). Because Na + and Cl − transport are associated with EpDRF release and inhibit cell volume decrease, these findings are evidence that cell shrinkage could be required for EpDRF release. But in the face of other lines of evidence obtained in this study, this view is not tenable. Confocal microscopy was utilized to evaluate the effects of hyperosmolar solutions on volume and V t of adherent cells. Several differences between dispersed and adherent cells came to light as a result of these experiments: whereas as little as 10 mOsM increase induced shrinkage in dispersed cells, shrinkage and depolarization of adherent cells occurred at ≥120 mOsM D-M. That is, adherent cells are ∼10-20-fold less sensitive to hyperosmolarity than dispersed cells. A second difference between the two preparations was neither isosmolar D-M nor isosmolar urea affected cell volume, and the subsequent addition of hyperosmolar D-M in the hyperosmolar jump protocol led to shrinkage, whereas the addition of hyperosmolar urea did not. Neither isosmolar D-M nor isosmolar urea evoked EpDRFmediated relaxation responses in the IPT (Fedan et al., 2004a). In considering these findings and others in this study, it would appear that attachment to the airway wall influences the effects of isosmolar and hyperosmolar solutions on epithelial cell volume. The adherent cells apparently utilize basolateral ion transport to compensate so as to resist volume changes that would otherwise occur 1 . Nevertheless, low levels of hyperosmolarity trigger EpDRF release by mechanisms that involve changes in ion transport which do not affect cell volume. Freed of the basement membrane, dispersed cells now respond as non-polarized cells confronted with an altered osmolar milieu on their entire surface. Cytoskeleton/microtubule-interfering inhibitors had no effect on relaxation responses of IPT to hyperosmolar challenge (Fedan et al., 2004b). In the present investigation five of these inhibitors evoked bioelectric responses of adherent cells involving both electrogenic (i.e., EHNA) and paracellular (i.e., latrunculin) pathways. Cytochalasin B alone affected V t responses to MCh, which suggests that actin is somehow involved in muscarinic regulation of ion transport. Generally, none of the blockers affected bioelectric responses to D-M. These findings are in general agreement with those made in HL-60 cells (Hallows et al., 1991(Hallows et al., , 1996 but not PC12 cells (Fernandez and Pullarkat, 2010). In the absence of inhibitor effects on relaxation, and V t and cell shrinkage responses, it would appear that structural elements in epithelium play little role in EpDRF release. EHNA affected volume responses in dispersed cells, inhibiting responses to DMSO vehicle and decreasing the shrinkage response to hyperosmolar D-M. 
It is not known whether EHNA produced these effects by counteracting DMSO-stimulated increase in water permeability (Ellis et al., 1987;Zelenina and Brismar, 2000), redistributing of aquaporin 2 (Vossenkamper et al., 2007), or inhibiting dynein, phosphodiesterase 2 (Chambers et al., 2006) or adenosine deaminase. Under conditions in which we have established that nystatin caused membrane permeabilization in the IPT, i.e., the short-circuited apical membrane revealed a basolateral Na + ,K +pump-driven V t (Dodrill and Fedan, 2010), the drug caused hyperpolarization and potentiated MCh-induced V t responses, whereas α-hemolysin did not. It is surprising that V t responses to D-M were unaffected by either agent. In the IPT nystatin evoked contractions and potentiated relaxation responses to D-M, whereas α-hemolysin did not influence relaxation responses to hyperosmolar challenge (Fedan et al., 2004b). We investigated the possibility that HICCs could be involved in responses of tracheal epithelial cells to hyperosmolar challenge because amiloride might inhibit hyperosmolar-induced relaxations of the IPT by an action at HICCs as well as at Na + channels. Gadolinium had no effect itself on V t or on responses to MCh, but it blocked the depolarizing responses to D-M. Inasmuch as these responses were also inhibited by amiloride, amiloride-sensitive HICCs appear to be involved in both types of responses. In contrast, flufenamic acid provoked a strong bioelectric response and did not influence D-M-induced V t responses. Collectively, these results suggest provisionally that the HICCs (Hoffmann et al., 2009;Koivusalo et al., 2009) are involved in the response to hyperosmolar challenge, and that the HICCs are of the type that are amiloride-and gadolinium-, but not flufenamic acid-, sensitive. The depolarizing effect of flufenamic acid is of interest. Prevailing models of airway epithelial ion transport generally do not consider HICCs (Toczylowska-Maminska and Dolowy, 2012), although apical volume-sensitive outwardly-rectifying Cl − channels (VSOC) that are sensitive to DIDS, NPPB and flufenamic acid and activated by swelling have been characterized in human airway epithelium (Okada et al., 2006;Toczylowska-Maminska and Dolowy, 2012). It is difficult to understand how the depolarization by flufenamic acid was mediated at the level of HICCs or VSOCs. A degree of promiscuity exists in its actions: serosal flufenamate depolarized colon, perhaps by inhibiting K + conductance, Na + ,K + -ATPase and cAMP-dependent Cl − currents (Schultheiss et al., 2000). In conclusion, adherent airway epithelial cells are very sensitive to small elevations in osmolarity at their apical surface and respond by releasing EpDRF, which relaxes airway smooth muscle. Dispersed airway epithelial cells also are very sensitive to the effects of raised osmolarity with an osmolar concentrationdependence which mimics that for EpDRF release, and respond with cell shrinkage. However, adherent epithelial cells are less sensitive to hyperosmolar solutions than dispersed cells in terms of the cell shrinkage response, with a concentration-dependence different from that of EpDRF release. The results buttress the earlier hypothesis that cell shrinkage per se is not a trigger of EpDRF release. The release of EpDRF by hyperosmolar solution is another role of airway epithelial cells: it serves to alter the function of airway smooth muscle.
The Physical Conditions, Metallicity and Metal Abundance Ratios In a Highly Magnified Galaxy at z = 3.6252 We present optical and near-IR imaging and spectroscopy of SGAS J105039.6$+$001730, a strongly lensed galaxy at z $=$ 3.6252 magnified by $>$30$\times$, and derive its physical properties. We measure a stellar mass of log(M$_{*}$/M$_{\odot}$) $=$ 9.5 $\pm$ 0.35, star formation rates from [O II]$\lambda$$\lambda$3727 and H-$\beta$ of 55 $\pm$ 20 and 84 $\pm$ 17 M$_{\odot}$ yr$^{-1}$, respectively, an electron density of n$_{e} \leq$ 10$^{3}$ cm$^{-2}$, an electron temperature of T$_{e} \leq$ 14000 K, and a metallicity of 12+log(O/H) $=$ 8.3 $\pm$ 0.1. The strong C III]$\lambda$$\lambda$1907,1909 emission and abundance ratios of C, N, O and Si are consistent with well-studied starbursts at z $\sim$ 0 with similar metallicities. Strong P Cygni lines and He II$\lambda$1640 emission indicate a significant population of Wolf-Rayet stars, but synthetic spectra of individual populations of young, hot stars do not reproduce the observed integrated P Cygni absorption features. The rest-frame UV spectral features are indicative of a young starburst with high ionization, implying either 1) an ionization parameter significantly higher than suggest by rest-frame optical nebular lines, or 2) differences in one or both of the initial mass function and the properties of ionizing spectra of massive stars. We argue that the observed features are likely the result of a superposition of star forming regions with different physical properties. These results demonstrate the complexity of star formation on scales smaller than individual galaxies, and highlight the importance of systematic effects that result from smearing together the signatures of individual star forming regions within galaxies. INTRODUCTION The accumulation of stellar mass and metallicity in galaxies at high redshift involves complex interactions between several astrophysical processes, including star formation, winds and outflows, and gas accretion. In the current prevailing paradigm metal-poor gas is accreted from the intergalactic medium (IGM) and fuels star formation, which enriches the interstellar medium (ISM) with metals and generates galaxy-scale winds. This general picture provides a broad framework for understanding how galaxies build up their mass and metal content, but the details of how accretion, star formation, enrichment, and outflows are physically regulated remain poorly understood. The rest-frame ultraviolet spectra of star forming galaxies include a wealth of diagnostics that constrain the properties of massive stars, the elemental abundances and physical properties of the nebular gas that those stars ionize, and the galaxy-scale outflows that they power. Rest-frame near-infrared spectra provide complementary measurements of the physical properties of the ionized nebular line-emitting regions. However, the observational data at z > 3 remains extremely limited; the current challenge is to obtain good data for faint, distant galaxies. Photometry is relatively inexpensive, but reveals only limited constraints on the internal properties of galaxies. Spectroscopy provides significantly more information, but high S/N spectra of individual high redshift field galaxies are an expensive use of current instrumentation on larger-aperture telescopes (e.g., Erb et al. 2010), and are limited to sampling the brightest (i.e., atypical) field galaxies. 
Stacking analyses of low signalto-noise (S/N) spectra are useful for understanding the average properties of galaxies (e.g., Shapley et al. 2003;Jones et al. 2012), but sample neither the variations between galaxies nor the variations between regions within individual galaxies. Strong gravitational lensing provides a means by which the detailed physical properties of distant galaxies can be studied at high S/N. Galaxies that are highly magnified by massive foreground structures -typically galaxy groups/clusters -have their flux boosted by factors of 10. Observations of these lensed sources with current facilities provide data that will only be matched, at best, by future generations of 30m class telescopes. The exploitation of strong lensing to conduct high S/N studies of intrinsically faint galaxies at high redshift dates back to MS 1512-cB58 ("cB58" Yee et al. 1996), a z = 2.73 Lyman Break Galaxy (LBG) that is magnified by a foreground galaxy cluster by a factor of ∼ 30 (Seitz et al. 1998). Follow-up observations of cB58 yielded a high S/N spectra that revealed a tremendous level of detail about the chemical composition and state of the ISM (Pettini et al. 2000;Teplitz et al. 2000;Pettini et al. 2002). In recent years several "cB58-like" strongly lensed z ∼ 2-3 galaxies have been discovered (Fosbury et al. 2003;Cabanac et al. 2005;Allam et al. 2007;Belokurov et al. 2007;Smail et al. 2007;Diehl et al. 2009;Lin et al. 2009;Koester et al. 2010;Wuyts et al. 2010), and efforts to target these apparently bright (but intrinsically faint) galaxies for optical and NIR spectroscopy with large groundbased telescopes are accelerating. The lensing magnification allows for high quality spectroscopy from which the detailed physical properties and abundances can be derived (Hainline et al. 2009;Quider et al. 2009;Yuan & Kewley 2009;Bian et al. 2010;Dessauges-Zavadsky et al. 2010;Erb et al. 2010;Jones et al. 2010;Quider et al. 2010;Dessauges-Zavadsky et al. 2011;Rigby et al. 2011;Wuyts et al. 2012a,b;Jones et al. 2013a;Stark et al. 2013;Shirazi et al. 2013;James et al. 2014). Observations of emission line properties of fainter strongly lensed galaxies have also enabled measurements that push farther down the faint end of the luminosity function, deeper into the mass function, and out to higher redshifts (Bayliss et al. 2010;Richard et al. 2011;Christensen et al. 2012b,a;Wuyts et al. 2012a,b;Jones et al. 2013b). In this paper we present a multi wavelength analysis of SGAS J105039.6+001730, a LBG at z = 3.6252 that is strongly lensed by a foreground galaxy cluster at z = 0.593 ± 0.002; the giant arc is highly magnified and has an apparent AB magnitude of F606W = 21.48 ( Figure 1). This is the most distant galaxy for which a detailed study of the properties of massive stars and the inter-stellar medium (ISM) has been performed. SGAS J105039.6+001730 was discovered as a part of the Sloan Giant Arcs Survey (SGAS; M. D. Gladders et al. in prep), an on-going search for galaxy group and cluster scale strong lenses in the Sloan Digital Sky Survey (SDSS; York et al. 2000). The full SGAS sample includes hundreds of strong lenses, many of which have been published in a variety of cosmological and astrophysical analyses (Oguri et al. 2009;Koester et al. 2010;Bayliss et al. 2010Bayliss et al. , 2011aGralla et al. 2011;Oguri et al. 2012;Bayliss 2012;Dahle et al. 2013;Wuyts et al. 2012a,b;Gladders et al. 2013;Blanchard et al. 2013). 
SGAS J105039.6+001730 appears near the core of a strong lensing cluster that was first published by Oguri et al. (2012). This paper is organized as follows: in § 2 we summarize the follow-up observations of SGAS J105039.6+001730 that inform this work, including both space- and ground-based imaging, as well as extensive optical and NIR spectroscopy. In § 3 we review the analysis methods that we apply to the available data: spectral line profile measurements, systemic redshifts, strong lens modeling of the lens-source system, and photometry. § 4 includes the derivation of the detailed physical properties of SGAS J105039.6+001730, including stellar mass constraints from spectral energy distribution (SED) fitting, as well as the internal reddening/extinction, star formation rates, electron density & temperature, metallicity, and abundance ratios. In § 5 we discuss the physical picture that emerges for SGAS J105039.6+001730 given the available data, and we place it in the context of other studies of high redshift galaxies. Finally, § 6 contains a summary of our analysis and its implications. All magnitudes presented in this paper are in the AB system, based on calibration against the SDSS. We use a solar oxygen abundance, Z ⊙ : 12 + log(O/H) = 8.69 (Asplund et al. 2009). All cosmologically dependent calculations are made using a standard flat Λ cold dark matter (ΛCDM) cosmology with H 0 = 70 km s −1 Mpc −1 and matter density Ω M = 0.27, as preferred by observations of the Cosmic Microwave Background, supernovae distance measurements, and large scale structure constraints (Komatsu et al. 2011; Reichardt et al. 2013). OBSERVATIONS Subaru/Suprime-Cam The field centered on SGAS J105039.6+001730 was observed with the Subaru telescope and the Suprime-Cam instrument on UT Apr 7, 2011. The resulting data consist of g-, r- and i-band imaging, with total integration times of 1200 s, 2100 s, and 1680 s in g, r, and i, respectively. These observations were optimized to take advantage of the large field of view of Suprime-Cam (∼34′ × 27′), and were used in a combined strong+weak lensing analysis of the foreground lensing cluster, SDSS J1050+0017; for details of the image reductions and resulting lensing analysis we direct the reader to Oguri et al. (2012). HST/WFC3 SGAS J105039.6+001730 was observed with the Hubble Space Telescope and the Wide Field Camera 3 using both the IR and UVIS channels on UT Apr 19-20, 2013 as a part of the GO 13003 (PI: Gladders) program. Total integration times are 1212 s, 1112 s, 2400 s, 2388 s in the F160W, F110W, F606W and F390W filters, respectively. A post-flash was applied to the individual F390W exposures to reduce the impact of charge-transfer inefficiency in the WFC3 UVIS camera. The image processing was performed using the DrizzlePac software package. Images were rescaled by re-drizzling the corrected flats with the astrodrizzle routine to a scale of 0.03″ pixel −1 to take advantage of the finer grid made possible by our dithering pattern. The resulting images were then aligned across filters with the image taken in the F606W filter using the tweakreg function. The astrometric solutions provided by tweakreg were then propagated back to the corrected flat-fielded images using tweakback. Using astrodrizzle, the F606W flat field images were drizzled onto a new grid with a scale of 0.03″ pixel −1 , once with a drop size of 0.8 pixels, and separately again with a drop size of 0.5.
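A schematic of this redrizzling step might look like the sketch below. The file names and output labels are placeholders, and only the output scale and drop size (the final_scale and final_pixfrac parameters) are meant to mirror the values quoted above; the exact call signature follows the public AstroDrizzle interface and should be treated as an assumption rather than the actual commands used.

```python
# Hedged sketch: redrizzling WFC3 frames to a 0.03"/pixel grid with DrizzlePac.
# File names are placeholders; only final_scale and final_pixfrac are meant to
# reflect the values quoted in the text, and other parameters are defaults.
from drizzlepac import astrodrizzle

# UVIS reference image (F606W), drizzled with a 0.8-pixel drop size:
astrodrizzle.AstroDrizzle('f606w_*_flt.fits', output='f606w_drz',
                          final_wcs=True, final_scale=0.03, final_pixfrac=0.8)

# IR filters drizzled to the same 0.03" scale with a 0.5-pixel drop size:
for filt in ('f110w', 'f160w'):
    astrodrizzle.AstroDrizzle(f'{filt}_*_flt.fits', output=f'{filt}_drz',
                              final_wcs=True, final_scale=0.03, final_pixfrac=0.5)
```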
We found that drop sizes of 0.5 for the IR camera and 0.8 for the UVIS camera provide the best sampling of the point spread function onto our common pixel scale of 0.03″. The resulting image was used as a reference grid for the redrizzling of the images taken in the three remaining filters using the same scale and drop size and the updated astrometric solutions. The WFC3 images were further processed to correct for IR 'blobs' not removed by the standard WFC3 pipeline flat-fields. (Figure 2 shows the WFC3 image of SGAS J105039.6+001730 with slits over-plotted for each of the follow-up spectroscopic observations described in Section 2: the three smaller green slits indicate the positions of slits placed on the arc in the GMOS nod-and-shuffle mask, of which the shorter GMOS slit was observed at both the pointing and nod positions of the N&S observation and therefore received twice the integration time of the other two tilted slits, while also covering the brightest part of the arc and hence dominating the signal in the final stacked GMOS spectrum; the longest cyan slit (dashed lines) indicates the position of the MagE slit; and the single long, tilted red slit indicates one of the two AB nod positions of the FIRE slit.) These artifacts appear as small regions of reduced sensitivity due to dust particles contaminating the steering mirror that directs light into the WFC3 IR channel. We used object-masked individual frames in each IR filter, from the entire large HST GO program, to generate a sky flat for each filter. Though the IR blob artifacts are apparent in each flat, the raw flats are significantly contaminated from residual flux from real objects, and so we used GALFIT (Peng et al. 2010) to create a model of each IR blob as the sum of a few Gaussian components. This model is then used to flat-field the artifacts on each individual IR frame. UVIS flats were corrected for charge transfer inefficiencies using the CTE correction tool provided by STScI. Spitzer/IRAC Observations of SGAS J105039.6+001730 were obtained in the 3.6µm and 4.5µm channels as a part of Spitzer program #70154 (PI: Gladders). Total integration times were 1200 s, taken during the warm Spitzer/IRAC mission. The data were reduced with the MOPEX software distributed by the Spitzer Science Center and drizzled to a pixel scale of 0.6″ pixel −1 . Spectroscopy Magellan/FIRE SGAS J105039.6+001730 was observed with the Folded-port InfraRed Echellette (FIRE) spectrograph at Magellan on UT 2011-06-11 and 2011-06-12. A pair of frames with individual integration times of 602 s was obtained on each of the two nights, for a total integration of 2410 s. The echelle grating and the 1.0″ slit were used, resulting in a spectral resolution of R = 3600 (83 km s −1 ). The position of the FIRE slit is shown in Figure 2. The A0 V star HD 96781 was observed as a standard star for purposes of fluxing and telluric correction. The star was observed immediately before each pair of science frames; the star was located 14 degrees from the science target, with sec(Z) airmass that differed by less than 0.2.
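The velocity resolutions quoted for the various spectrograph setups follow directly from their resolving powers, Δv ≈ c/R; a minimal sketch of that conversion is given here (the GMOS value uses a representative R from the range quoted later in the text).

```python
# Hedged sketch: convert spectral resolving power R to velocity resolution,
# dv ~ c / R, for the instrument setups quoted in the text.
C_KM_S = 299_792.458  # speed of light in km/s

for name, R in [("FIRE echelle", 3600),
                ("MagE", 4100),
                ("GMOS N&S, representative", 900)]:   # GMOS quoted as R ~ 700-1100
    print(f"{name}: R = {R} -> dv ~ {C_KM_S / R:.0f} km/s")
```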
FIRE data were reduced using the FIREHOSE data reduction pipeline tools, which were written in IDL by R. Simcoe, J. Bochanski, and M. Matejek, and kindly provided to FIRE users by the FIRE team. The FIRE pipeline uses lamp flat-fields to correct the pixel-to-pixel variation and sky flats to correct the illumination. The wavelength solution is fit using OH sky lines, and a two-dimensional model of the sky is iteratively fit and subtracted following Kelson (2003). The object spectrum is extracted using a spatial profile fit to the brightest emission line, and a flux calibration and a correction for telluric absorption are applied using the method of Vacca et al. (2003) as implemented in an adapted version of the xtellcor routine from the SpeX pipeline. The final FIRE spectrum is a weighted average of the spectra that were extracted from the four individual exposures. 2.2.2. Gemini/GMOS-N Nod-and-Shuffle SGAS J105039.6+001730 was observed with the Gemini-North telescope and the Gemini Multi-Object Spectrograph (GMOS; Hook et al. 2004) on UT Mar 29 2012 in macro nod-and-shuffle (N&S) mode and in clear conditions with seeing ≤0.75″ as a part of queue program GN-2011A-Q-19. We used the R400 G5305 grating in first order with the G515 G0306 long pass filter, and the detector binned by 2 in the direction of the spectral dispersion and unbinned spatially. The N&S cycle length was 120 s, chosen to reduce the number of shuffles in a given integration to mitigate charge trap effects. These observations were carried out after November 2011, and therefore used the new e2vDD detectors on GMOS-North, which provide significantly improved quantum efficiency relative to the older detectors that they replaced. A single multi-object slit mask was used to primarily target strongly lensed background sources in the core of SDSS J1050+0017; the approach is identical to the data described in detail by Bayliss et al. (2011b) and we refer the reader to that paper for an in-depth description of the mask design. Four micro-slits on the mask were placed on SGAS J105039.6+001730, two of which covered the brightest knot in the arc at both the pointing and nod positions of the N&S observations, and two slits which covered the length of the arc at only the pointing position (see Figure 2). The total spectroscopic integration consisted of two 2400 s exposures, half of which was spent at each of the pointing and nod positions. The resulting spectra include 4800 s of total integration time on the brightest knot of SGAS J105039.6+001730, and 2400 s of total integration time on the fainter parts of the arc extending to the southwest. All science slits were 1″ wide, with the two slits extending along the length of the arc tilted 30 degrees relative to the dispersion axis to capture as much flux as possible. Slit placements for all of our spectroscopic observations are also shown in Figure 2. The resolution of the data varies from R ≈ 700-1100 (270-430 km s −1 ). Sky subtraction of N&S data is simply a matter of differencing the two shuffled sections of the detector. The GMOS data were then wavelength calibrated, extracted, stacked, flux normalized, and analyzed using a custom pipeline that was developed using the XIDL package (http://www.ucolick.org/~xavier/IDL/index.html). Flux calibration was performed using an archival observation of a single standard star and is therefore subject to pedestal offsets in the absolute flux calibration. The pipeline is almost identical to what was used by Bayliss et al. (2011b), with some updates made to account for the new e2vDD detectors. Magellan/IMACS We also observed the field centered on SGAS J105039.6+001730 with Magellan-I and the Inamori-Magellan Areal Camera & Spectrograph (IMACS) using the long (f/2) camera on UT Mar 17 2013 in stable conditions with seeing ranging from ∼0.8-1.0″, with the airmass ranging from 1.26 to 1.15.
Two multi-slit masks were each observed for 3×2400 s, and included several slits placed on potential counter-images of SGAS J105039.6+001730, as well as on other faint candidate strongly lensed background objects that were not targeted by the GMOS N&S observations. We used the 200 l/mm grism and the spectroscopic (i.e., no order blocking) filter to allow for the broadest possible wavelength coverage and sensitivity. The detector was unbinned, resulting in spectral resolution R ≈ 500-1000 (300-460 km s −1 ) and sensitivity over the wavelength range ∆λ = 4800-9800Å. The data were wavelength calibrated, bias subtracted, flat-fielded, and sky subtracted using the COSMOS reduction package designed specifically for IMACS and provided by Carnegie Observatories. The data were then extracted and stacked using custom IDL code; because the goal of these observations was redshift measurements, the extracted spectra were not precisely flux calibrated. Magellan/MagE We observed SGAS J105039.6+001730 with the Magellan II (Clay) telescope and the Magellan Echellette (MagE) spectrograph (Marshall et al. 2008). The observation started at 02:30:28 UT on May 6 2013, and the integration time was 3600 s; the MagE slit was placed on the brightest knot of the arc, which had also previously been targeted with FIRE and GMOS (Figure 2). The weather was clear, and the seeing as measured by wavefront sensing during the integration varied from 0.6″ to 1.1″. The airmass rose from 1.30 to 1.57 during the observation. The target was acquired by blind-offsetting from a nearby brighter object; target acquisition was verified via the slit-viewing guider camera. The slit was 1″ by 10″, and the spectra were collected with 1×1 binning, resulting in a resolution of R = 4100 (70 km s −1 ). The spectra were reduced using the LCO MagE pipeline written by D. Kelson. The pipeline produces an extracted, one-dimensional, wavelength-calibrated spectrum for each echelle order. The sensitivity function was computed using the IRAF tools onedspec.standard and sensfunc, using at least two observations each of the standard stars LTT 3864, EG 274, and Feige 67, at airmasses ranging from 1.02 to 1.53. We scaled the sensitivity functions to the star with the highest throughput to create a composite sensitivity function. To flux calibrate, these sensitivity functions were applied to the spectrum using the IRAF tool onedspec.calibrate. The uncertainty in the flux calibration is ±20%, resulting primarily from uncertainties in the slit losses due to variable seeing over the course of long spectroscopic integrations. Overlapping orders of the echelle were combined with a weighted average to make a continuous spectrum, and then corrected to vacuum barycentric wavelengths. (Figure 4, top: the flux-calibrated spectrum with line identifications over-plotted; nebular emission lines are marked, short solid red lines mark stellar photospheric absorption features, medium-length blue lines mark ISM absorption lines, and long purple lines indicate transitions that could be either stellar photospheric or ISM in origin, or more likely a blend of the two. The error array is over-plotted as a black dotted line, and the fit to the continuum level across the spectrum is plotted as a thin green line. The apparent emission feature at ∼6290Å is the result of a pernicious sky subtraction residual, and lines resulting from intervening absorption systems are indicated with downward-facing arrows. Bottom: the GMOS spectrum covering the rest-frame wavelength range ∆λ = 1600-1950Å, with lines indicated according to the same scheme as the top panel; the N III]λ1750 emission line is only detected at ∼2σ, but its location is indicated because it is used later to constrain the relative nitrogen abundance.)
The signal-to-noise ratio, per pixel, of the continuum is low (rising from ∼0.1 at 3500Å to ∼1 at 7500Å). GMOS vs MagE Flux Calibration The MagE spectrum was flux-calibrated more carefully, with several standard stars during the night, whereas the GMOS flux calibration was based on a non-contemporaneous (archival) standard star observation. We therefore extract the GMOS spectrum corresponding to only the slit which covers approximately the same region that was targeted by both the MagE and FIRE observations (see Figure 2), and compare the resulting fluxed spectra against the MagE flux spectrum. In the wavelength range where both spectra have reasonable S/N (i.e., ∆λ ∼ 6000-7000Å) the two datasets have the same continuum flux level to within ∼5%. Having performed this "boot-strap" flux calibration we feel secure in using the GMOS spectrum to measure line fluxes with a calibration that is accurate to within the uncertainty in the more carefully quantified MagE flux calibration (±20%). Line Profile Measurements We fit gaussians to all spectroscopic features of interest; the fits use a single gaussian profile with three free parameters: the normalization, width and centroid. For the FIRE data this process is performed directly on the extracted one-dimensional spectrum, in which any continuum emission is consistent with zero flux to within the uncertainties of the data. The regions in the FIRE spectrum in which emission lines appear are shown in Figure 3. The GMOS spectrum for SGAS J105039.6+001730 exhibits strong features in both absorption and emission (Figure 4). We begin our analysis of this spectrum by fitting a polynomial model to the continuum. The continuum fitting process begins by identifying regions of continuum emission with good S/N (e.g., λ obs ∼ 6650-6750Å, 7250-7400Å, 7500-7550Å, 7800-8200Å, and 8900-9000Å) and then iteratively adding new wavelength ranges to the fit. Our final continuum fit is robust to the exact model parameterization, and is plotted on top of the GMOS data in Figure 4. The residuals of this continuum fit are consistent with the uncertainties in the extracted GMOS spectrum. The continuum fit is subtracted from the GMOS spectrum prior to the measurement of individual line positions and fluxes. We then fit gaussian profiles to the continuum-subtracted spectrum to measure the line wavelength centroids, as well as -in the case of emission features -their fluxes. We incorporate both statistical (measurement) and systematic uncertainties into the emission line flux measurements. We determined the systematic uncertainty contribution from the continuum fit empirically by comparing the line flux measurements that result from different continuum models. The different continuum fits agree well, and the magnitude of the systematic contribution to the total uncertainty is typically 20% that of the measurement uncertainty. The MagE spectrum also includes continuum emission, though at lower S/N than in the GMOS data. We use the same procedure to fit a continuum model to the MagE data before measuring line profiles. All line flux measurements in the optical are made using the GMOS data, which has much higher S/N.
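A minimal version of this single-Gaussian line fitting, applied to a continuum-subtracted spectrum, might look like the sketch below; the wavelength grid, line parameters, and noise level are invented purely for illustration and do not correspond to any line in the actual data.

```python
# Hedged sketch: fit a single Gaussian (normalization, centroid, width) to a
# continuum-subtracted emission line, as described in the text. The synthetic
# line used here is purely illustrative.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(wave, amp, cen, sigma):
    return amp * np.exp(-0.5 * ((wave - cen) / sigma) ** 2)

wave = np.linspace(8810.0, 8840.0, 200)                  # toy wavelength grid [Angstrom]
true = gaussian(wave, amp=5.0, cen=8824.0, sigma=2.0)    # toy emission line
flux = true + np.random.default_rng(1).normal(0.0, 0.3, wave.size)

popt, pcov = curve_fit(gaussian, wave, flux, p0=[3.0, 8825.0, 3.0])
perr = np.sqrt(np.diag(pcov))

amp, cen, sigma = popt
line_flux = amp * sigma * np.sqrt(2.0 * np.pi)           # integrated line flux
print(f"centroid = {cen:.2f} +/- {perr[1]:.2f} A, flux = {line_flux:.2f}")
```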
We do measure some line profiles in the MagE spectra, and find centroids that agree well with the GMOS measurements of the same features. All measured line centroids are reported in Table 2. For measurements and analyses that span different spectra (e.g., comparing line fluxes between the GMOS and FIRE data), we restrict the analysis to the GMOS spectrum extracted only from the slit targeting the same bright knot as the MagE and FIRE observations. The total integration time for this slit was twice as long as the other slits, and the knot in question is the brightest part of the arc, so that the GMOS data from this slit alone provide a spectrum with only marginally lower S/N than a stack of all the GMOS slits. Comparing spectral features measured from this knot minimizes the geometric corrections between the different spectral datasets. Geometric corrections do not account for the effects of differential refraction, but the differential refraction effects across the GMOS spectrum should be minimal given the wavelengths covered by the observations. Atmospheric dispersion effects are also not a significant problem in spectra in the NIR. Systemic Redshift Measurements Given the broad wavelength coverage (rest frame UV to optical) and good S/N of the spectra, it is possible to measure systemic redshifts for emission and absorption features with different astrophysical origins within SGAS J105039.6+001730. Typical uncertainties in individual line redshifts for well-detected transitions are σ z = 0.0006, 0.0002, and 0.0002 in the GMOS, MagE, and FIRE spectra, respectively (some low S/N lines have larger uncertainties, e.g., σ z = 0.001-0.002). These values include the uncertainties in the individual line centroids, as well as the (negligible) rms uncertainty in the wavelength calibration. The first line system that we examine is the family of nebular emission lines that appear in both the optical (rest-UV) and NIR (rest-optical) spectra. These lines originate from ionized regions within the galaxy, i.e. HII regions around massive stars. From 14 measurements of 14 separate nebular emission features we measure a systemic redshift z neb = 3.6253 ± 0.0008 (O III]λλ1661,1666 and HeIIλ1640 are detected in both the GMOS and MagE data). We exclude the [O II]λλ3727,3729 doublet lines from inclusion in this systemic redshift due to their coincidence in wavelength with a bright sky line. The sky subtraction residuals from the bright sky line seem to cause an over-subtraction on the blue side of the sky line and an under-subtraction on the red side, which seems to result in a redshift measurements for the [O II]λλ3727,3729 lines that is slightly biased low (see also § 4.5 below). There are also numerous absorption lines in the optical spectra that originate from ionized metals in the ISM of SGAS J105039.6+001730. These include a significant P Cygni absorption/emission profile for the CIVλλ1448,1450 doublet, and similar P Cygni features are also apparent at lower significance for the SiIVλλ1393,1402 doublet. From 20 measurements of 14 individual line features we measure a systemic ISM absorption redshift z ISM = 3.6236 ± 0.0011. We also compute the systemic redshift for the 7 detected P Cygni absorption features alone, and find z P −Cyg = 3.6232 ± 0.0011 -this is marginally more blueshifted than the complete set of ISM lines, which would be consistent with the P Cygni features tracing regions with stronger outflows, though the offset is not statistically significant. 
There are also several spectral features that can arise from a blend of stellar and nebular P Cygni features, including N Vλλ1238,1242 and O Vλ1371, O Vλ1417, and S Vλ1501 -where O Vλ1417 can also be blended with Si IVλ1417. We lack the S/N and spectral resolution to disentangle these features in the GMOS and MagE spectra, so we do not use them to compute any systemic redshifts, and flag them as possibly being both stellar and P Cygni (i.e., likely a blend of the two) in origin (see Table 2). The emission parts of the P Cygni features are difficult to fit because of significant asymmetry due to the neighboring absorption. These features may also originate, in part, from nebular line emission from these transitions (see § 5.3). Given the difficulties we refrain from measuring line centroids for the P Cygni emission, as it is not clear how to interpret such measurements. Additionally, in the GMOS spectrum we note the presence of O IVλ1343, C IIIλ1427, and N IVλ1718 in absorption. All of these lines are associated with photospheric absorption in the atmospheres of massive stars (e.g., Pettini et al. 2000), and are therefore likely tracing the stellar content of SGAS J105039.6+001730. These lines are all weak relative to the much stronger ISM absorption lines, and have a mean redshift z stars = 3.6258 ± 0.0005. All three systemic redshift measurements agree within 2σ given the measurement uncertainties, but we note that the ISM absorption line redshift is formally blueshifted by 100 ± 70 km s −1 . The nebular and stellar photospheric redshifts agree within 1σ, as is expected given that both of these features should trace structures that are gravitationally bound within the galaxy. To obtain the best possible systemic redshift constraint for SGAS J105039.6+001730 we restrict ourselves to using nebular emission lines in the FIRE spectrum. The FIRE spectrum is high resolution and includes many high S/N lines, whereas the nebular lines in the GMOS and MagE data are much lower spectral resolution and S/N, respectively. Computing the systemic redshift using just the remaining five lines in the FIRE spectrum results in a statistically consistent redshift measurement but reduces the uncertainty significantly. The systemic nebular emission line redshift derived from the FIRE data is z sys = 3.6252 ± 0.0001. The telluric A band absorption feature is prominent in the GMOS spectrum, but at the redshift of SGAS J105039.6+001730, it fortuitously falls between the He IIλ1640 and O III]λ1661 emission lines without significantly affecting the flux of either line. Because we use archival standard star observations to flux calibrate the GMOS spectrum, we cannot apply a reliable telluric absorption correction, and so instead we refrain from using the affected parts of the GMOS spectrum. All individual line redshifts are presented in Table 2, and the resulting systemic redshift measurements are given in Table 3 (object names in the tables track back to the objects labeled in Figure 6).
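The systemic redshifts quoted in this section are combinations of many individual line redshifts; a sketch of an inverse-variance weighted mean of per-line redshifts is shown below, with made-up line values standing in for the actual measurements in Table 2.

```python
# Hedged sketch: combine individual emission-line redshifts into a systemic
# redshift via an inverse-variance weighted mean. The example line list is
# illustrative, not the actual measurements from Table 2.
import numpy as np

def systemic_redshift(z_lines, z_errs):
    z = np.asarray(z_lines, dtype=float)
    w = 1.0 / np.asarray(z_errs, dtype=float) ** 2
    z_sys = np.sum(w * z) / np.sum(w)
    z_sys_err = 1.0 / np.sqrt(np.sum(w))
    return z_sys, z_sys_err

# Toy nebular-line measurements (redshift, uncertainty):
z_lines = [3.6251, 3.6253, 3.6252, 3.6254, 3.6250]
z_errs  = [0.0002, 0.0002, 0.0002, 0.0002, 0.0002]
z_sys, z_err = systemic_redshift(z_lines, z_errs)
print(f"z_sys = {z_sys:.4f} +/- {z_err:.4f}")
```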
Intervening Absorption Systems In addition to the lines that are associated with SGAS J105039.6+001730, we also identify several foreground intervening features. An absorption doublet at λ ∼ 6105Å is Mg IIλλ2796,2803 from an intervening galaxy at z = 1.1820 ± 0.0002. There is also an absorption doublet at λ ∼ 6330Å that we identify as the Al IIIλλ1854,1862 doublet, along with an absorption line at 5687Å that we identify as Al IIλ1670; these Al lines originate from a second intervening galaxy at z = 2.4034 ± 0.0011. Interestingly, both of these intervening galaxies are also strongly lensed by the foreground cluster, SDSS J1050+0017, and have redshifts confirmed from spectroscopic data that are not presented in this paper. Several other absorption features that are present in the GMOS spectrum remain unidentified -at least some of these likely result from intervening absorption by the foreground galaxy labeled "Gal 1" in Figure 6, for which we do not yet have a spectroscopic redshift measurement. Lines from intervening absorbers are included in Table 2 and Table 3. Spatially Extended Spectral Emission and Lack of AGN Features From the GMOS observations of SGAS J105039.6+001730 we have optical spectra that extend along the length of the arc, which allows us to test for spatial variations in the spectrum. There are several emission lines that are strong enough to be well-detected in spectra that are extracted from sub-apertures along the GMOS slits covering the arc, and we can look for variations in the relative strengths of these lines as a function of spatial position. C III]λλ1907,1909 and He IIλ1640 are the two highest S/N such lines, and we find no evidence for spatial variation in their equivalent widths in the GMOS data. The seeing during the GMOS observations was 0.65″, and we also note that the emission line features are clearly extended along the GMOS slits, which are between 1.5″ and 2″ long (Figure 5). (Figure 5 shows the two-dimensional GMOS spectrum, zoomed in on the region where the C III] doublet appears and smoothed with a gaussian kernel with σ = 1 detector pixel; the vertical axis is the direction of dispersion, the horizontal axis is the spatial direction along the detector, and vertical green lines indicate the approximate edges of the three N&S slitlets plotted in Figure 2, with the spatial scale indicated by a yellow bar. The C III] emission feature appears in all three N&S slitlets, and is clearly extended along the length of each slitlet.) This spatially extended emission rules out an active galactic nucleus (AGN) as the dominant source of ionizing photons in SGAS J105039.6+001730. There is also a notable lack of emission lines that would be associated with the extremely hard ionizing spectrum of an AGN, such as N V in the rest-UV and [Ne V]. In the relevant line-ratio diagnostic space (where the measured value is 0.9), SGAS J105039.6+001730 resides well within the region occupied by star forming objects. Based on all of the available information we therefore conclude that AGN activity is not powering a significant fraction of the ionized gas emission, and proceed with the assumption that AGN contributions to the spectrum of SGAS J105039.6+001730 are negligible. PSF Matched Photometry Photometry of the giant arc is performed using custom IDL code that lets us construct apertures which follow the ridge-line of the arc. Images are convolved to a common point spread function (PSF) so that the apertures can be defined and applied to the same regions on the sky. This process is described in more detail in Wuyts et al. (2010) and Bayliss (2012). SGAS J105039.6+001730 has a total magnitude of F606W = 21.48 ± 0.05 and i = 21.18 ± 0.06. 3.6. Lensing Analysis 3.6.1. Strong Lens Model In addition to SGAS J105039.6+001730, we identify several different strongly lensed background galaxies around the core of the foreground galaxy cluster; these are labeled in Figure 6. Galaxy A, at z = 2.404, is lensed into a giant tangential arc (A1) and a radial arc (A2), both with similar morphology and colors in the HST and Spitzer data. Galaxy B is a faint tangential arc at unknown redshift. Galaxy D is lensed into four images (D1-4), spectroscopically confirmed to be at z = 4.867 from Gemini/GMOS (D1) and Magellan/IMACS (D2,3,4). Two images of galaxy C form the giant arc SGAS J105039.6+001730. One counter image (C3) at (α,δ) = 10:50:41.336, +00:17:23.35 (J2000) was spectroscopically confirmed by IMACS based on the presence of emission line features coincident with He IIλ1640, C IVλλ1448,1450 and C III]λλ1907,1909 at the same redshift and in the same approximate relative strengths as observed in the GMOS spectrum of the main arc.
Another image (C4) is predicted by the lens model and visually confirmed in the HST data, but lacks spectroscopic confirmation. The positions and redshifts of the spectroscopically confirmed galaxies are used as constraints in the lens modeling process. With the exception of system C (SGAS J105039.6+001730), we use one position per image. For system C, we use the positions of the four brightest knots in C1 and C2, and the two brightest knots in C3. The lens model is computed using the publicly-available software Lenstool (Jullo et al. 2007), utilizing a Markov Chain Monte Carlo minimizer both in the source plane and the image plane. The lens is represented by several pseudo-isothermal ellipsoidal mass distribution (PIEMD) halos, described by the following parameters: position x, y; a fiducial velocity dispersion σ; a core radius r core ; a cut radius r cut ; ellipticity e = (a 2 − b 2 )/(a 2 + b 2 ), where a and b are the semi-major and semi-minor axes, respectively; and a position angle θ. The PIEMD profile is formally the same as the dual Pseudo Isothermal Elliptical Mass Distribution (dPIE, see Elíasdóttir et al. 2007). All the parameters of the cluster PIEMD halo were allowed to vary except for r cut , which is not constrainable by the lensing evidence and was thus set to 1.5 Mpc. We selected cluster member galaxies in the ∼30′ Suprime-Cam field of view as red-sequence galaxies in a color-magnitude diagram. All galaxies within 6.5 of the BCG and brighter than i = 23 mag were included in the model, with positional parameters (x, y, e, θ) that follow their observed measurements, r core fixed at 0.15 kpc, and r cut and σ scaled with their luminosity (for a description of the scaling relations see Limousin et al. 2005). Four foreground galaxies have a more direct influence on the lensed galaxies due to their proximity to the lensed images. We thus allowed their velocity dispersions to be solved for in the lens modeling process. In particular, we note a galaxy at (α,δ) = 10:50:39.704, +00:17:29.14 -identified as "Gal1" in Figure 6 -that is not on the cluster red sequence, but whose perturbation of the lensing potential is key to the lensing configuration of SGAS J105039.6+001730. Since the deflection is linear both with the distance term and the velocity dispersion, we included this galaxy in the same lens plane as the cluster, but note that its best-fit fiducial velocity dispersion scales with its dynamical mass and with its distance term. This approach is described in more detail by Johnson et al. (2014). We ignore the second-order effects that result from halos residing in different lens planes (D'Aloisio et al. 2013; McCully et al. 2014), including other projected structure which exists along the line-of-sight toward the cluster lens, SDSS J1050+0017, but these corrections contribute insignificantly to the uncertainty in the magnification derived from our lens modeling of this cluster.
The distribution of cluster galaxies shows a secondary concentration ∼290 north of the BCG. Thus another PIEMD halo is included at the position of the brightest galaxy at that position (see Table 4). The inclusion of that structure is also supported by the weak lensing analysis of Oguri et al. (2012). (Figure 6: the critical curve of the best-fit lens model is plotted in red; multiply-lensed galaxies are marked with ellipses, with their IDs and redshifts labeled; thick ellipses mark confirmed arcs, and thin ellipses mark arc candidates that were predicted by the lens model but not spectroscopically confirmed; the foreground galaxies that were individually modeled as contributing to the strong lensing are indicated, as are other objects in the background of the cluster for which we measured spectroscopic redshifts. See § 3.6.1 for a complete description of the strong lens model.) The results of preliminary lens models indicated that several parameters are not well constrained by the lensing evidence. These include all the parameters of the secondary cluster-scale halo (a large range in σ is allowed), and the scaling relation parameters for cluster member halos; these parameters were fixed in subsequent models. Table 4 lists the best-fit parameters and uncertainties, and the values of fixed parameters. (Table 4 notes: the ellipticity is expressed as e = (a 2 − b 2 )/(a 2 + b 2 ); θ is measured north of West; error bars correspond to the 1σ confidence level as inferred from the MCMC optimization; values in square brackets are for parameters that were not optimized; the locations and ellipticities of the matter clumps associated with the cluster galaxies and the BCG were kept fixed according to their light distributions, with the fixed parameters determined through the scaling relations described in the text.) The model uncertainties were determined through the MCMC sampling of the parameter space, and 1σ limits are given. The image plane RMS of the best-fit lens model is 0.31″. The lensing cluster SDSS J1050+0017 was first published by Oguri et al. (2012), which included a simplified strong lens model. That model was based on constraints from only one lensed galaxy (A), and assumed z = 2 ± 1 as a best-guess for its redshift, which was not measured at the time. Oguri et al. (2012) report an Einstein radius of 16.1″ for the fiducial arc redshift, with large error bars due to the redshift uncertainty. For the same arc, we find that the Einstein radius (defined here as the radius of a circle with the same area as the critical curve) is 17.5″, well in line with the initial model presented by Oguri et al. (2012). The Magnification of SGAS J105039.6+001730 The lensing magnification depends strongly on the location along the arc. Areas in close proximity to the critical curve (plotted as a red line in Figure 6) have the highest magnification, and the magnification decreases farther from the critical curve. To convert the measurements in this paper to their intrinsic, unmagnified values, we calculate the total magnification inside the relevant aperture within which each measurement was made. The boundaries of the aperture are ray-traced to the source plane; we then measure the area covered by the aperture in the image plane, and divide it by the area covered by that aperture in the source plane, thus averaging over magnification gradients within the aperture. This approach overcomes the problem of pixels very close to the critical curve with extremely high magnification artificially driving the average magnification to a higher value.
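The aperture-averaged magnification described above reduces to the ratio of the aperture's area in the image plane to its area in the source plane. The sketch below illustrates that arithmetic; the trace_to_source function is a hypothetical stand-in for ray-tracing the aperture boundary through the actual lens model, and the toy numbers are chosen only so the output is of the same order as the magnifications quoted in the text.

```python
# Hedged sketch: aperture-averaged magnification as an image-plane to
# source-plane area ratio. trace_to_source is a hypothetical stand-in for
# ray-tracing aperture boundary points through the actual lens model.
import numpy as np

def polygon_area(x, y):
    """Shoelace formula for the area of a closed polygon."""
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def aperture_magnification(x_img, y_img, trace_to_source):
    """Average magnification inside an aperture defined by its boundary points."""
    x_src, y_src = trace_to_source(x_img, y_img)   # hypothetical ray-tracing call
    return polygon_area(x_img, y_img) / polygon_area(x_src, y_src)

# Toy example: a lens that shrinks the aperture by ~5.5x along each axis
# gives mu ~ 30, comparable to the values quoted in the text.
demo_trace = lambda x, y: (x / 5.5, y / 5.5)
x = np.array([0.0, 1.0, 1.0, 0.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
print(f"mu ~ {aperture_magnification(x, y, demo_trace):.1f}")
```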
To derive the uncertainties in the magnification, we compute many models with parameters drawn from the MCMC sampling which represent a 1σ range in the parameter space. The total magnifications are 27.3 +11.4 −6.5 in the aperture used for the FIRE spectroscopy, 31.9 +10.7 −7.8 in the GMOS aperture, and 31.4 +10.8 −3.5 in the aperture used for the photometric measurement of the arc. We take the magnification factors that correspond to the slit apertures for the FIRE and GMOS spectra to correct the line flux measurements described in § 3.1. The resulting magnification-corrected (i.e. intrinsic) emission line fluxes from the brightest knot (observed with FIRE, GMOS and MagE) are reported in Table 5. RECOVERING PHYSICAL QUANTITIES In this section we constrain various physical properties of the lensed source, SGAS J105039.6+001730. The following subsections describe our methods for recovering parameter values, and the resulting values are shown in Table 6 (stellar mass, extinction, star formation rate, electron density & temperature, and ionization parameter) and Table 7 (metallicity and abundance indicators). Broadband SED Fitting We model the observed spectral energy distribution using the fitting code FAST (Kriek et al. 2009) at fixed spectroscopic redshift with Bruzual & Charlot (2003) stellar population synthesis models, a Chabrier (2003) IMF and a Calzetti et al. (2000) dust extinction law. We adopt exponentially decreasing star formation histories with a minimum e-folding time of log(τ/yr) = 8.5 (e.g., Wuyts et al. 2011), and the metallicity is allowed to vary from 0.2 Z ⊙ to Z ⊙ . This results in a stellar mass estimate of log(M * /M ⊙ ) = 11.0 ± 0.15 (statistical) ± 0.2 (systematic). We combine this with the magnification acting on the arc as computed from the strong lens models described above (µ = 31.4 +10.8 −3.5 ) to recover the intrinsic stellar mass of SGAS J105039.6+001730: log(M * /M ⊙ ) = 9.5 ± 0.15 (statistical) ± 0.2 (systematic). Because of the relatively large point spread function (PSF) of the IRAC bands there is possible contamination of the measured flux of SGAS J105039.6+001730 in the IRAC 3.6 and 4.5µm bands from a nearby foreground galaxy that is separated from the arc by ∼1.5″. As a test we have explored SED fits with the IRAC flux reduced by a factor of 2× (an extreme case); we find that this factor of 2 contamination case only weakly affects the best-fit stellar mass, and that the possible contamination is subdominant to other systematic uncertainties in the SED-derived stellar mass (Shapley et al. 2005; Wuyts et al. 2011; Conroy et al. 2009). Reddening Constraints from Balmer Lines The H-β and H-γ Balmer lines are detected in the FIRE spectra, which allows us to place a constraint on the reddening due to interstellar dust in the rest frame. H-β is well-detected in the FIRE data, but unfortunately the H-γ line falls in a region of poor atmospheric throughput and on top of sky lines, which degrades our ability to precisely measure the line flux. SGAS J105039.6+001730 appears from all indications to be a relatively low-mass galaxy with active ongoing star formation, so we assume a Calzetti et al. (2000) extinction law. The measured H-β/H-γ ratio indicates an internal extinction of E(B-V) = 0.14 +0.38 −0.14 , which is effectively an upper limit of E(B-V) < 0.52. This agrees with the results of the best-fit SED model, which prefers A V = 1.0 ± 0.2. From here on out we proceed with an extinction value of A V = 1.0 and the Calzetti et al. (2000) extinction law, and correct all reddening-sensitive measurements accordingly.
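The Balmer-decrement reddening estimate above amounts to comparing the observed H-β/H-γ ratio to its Case B value through the Calzetti et al. (2000) curve. A sketch of that calculation follows; the observed ratio used here is a placeholder rather than the measured value, and both the intrinsic ratio of ≈2.14 and the Calzetti k(λ) polynomial are standard values assumed for the example, not numbers taken from this paper.

```python
# Hedged sketch: E(B-V) from the H-beta / H-gamma Balmer decrement with the
# Calzetti et al. (2000) attenuation curve. The observed ratio is a placeholder;
# the intrinsic Case B ratio (~2.14 at Te ~ 1e4 K) and the k(lambda) polynomial
# are standard values assumed here, not quantities quoted in the text.
import numpy as np

def calzetti_k(wave_um):
    """Calzetti et al. (2000) k(lambda) for 0.12-0.63 micron."""
    x = 1.0 / wave_um
    return 2.659 * (-2.156 + 1.509 * x - 0.198 * x**2 + 0.011 * x**3) + 4.05

k_hb, k_hg = calzetti_k(0.4861), calzetti_k(0.4340)
intrinsic = 2.14                      # Case B H-beta/H-gamma
observed = 2.29                       # placeholder observed ratio

ebv = 2.5 / (k_hg - k_hb) * np.log10(observed / intrinsic)
print(f"E(B-V) ~ {ebv:.2f}, A_V ~ {4.05 * ebv:.2f}")
```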
Achieving better constraints on the internal reddening will be challenging due to H-α being redshifted out of the K band, and H-γ falling into a region of poor atmospheric transmission. Damped Lyman-α Absorption From the GMOS and MagE spectra it also appears as though SGAS J105039.6+001730 is a damped Lyman-α absorption (DLA) system. The GMOS spectrum is higher S/N and we use it to fit a Voigt profile using the XIDL procedure x_fitdla. Even in the GMOS data the S/N falls off rapidly in the region of the damped Lyα absorption due to the combination of 1) the throughput of the grating decreasing at bluer wavelengths and 2) the continuum emission being suppressed by the Lyα absorption. We use ∆λ ∼ 5600-6200Å -corresponding to ∆λ ∼ 1210-1340Å in the rest frame -to fit the DLA profile (see Figure 4). In this region of the spectrum the data fall off from a S/N per spectral pixel of ∼6 at 6200Å to <1 at 5600Å. The low S/N limits our ability to precisely constrain the centroid of the DLA profile, but we find that the data prefer a high column density, log(N HI ) > 21.5 cm −2 , independent of the precise redshift centroid of the DLA feature. The width of the velocity broadening component of the profile is also unconstrained. Higher S/N observations of the spectrum blueward of 6000Å will be necessary to make a precise measurement of the DLA feature, but the available data strongly indicate the presence of a large quantity of neutral hydrogen. Star Formation Rate Estimates Following the calibrations of Kennicutt (1998), we compute the star formation rate (SFR) from the FIRE observations of the brightest knot of SGAS J105039.6+001730 using the H-β and [O II]λλ3727,3729 emission lines. Both the H-β and [O II]λλ3727,3729 SFR estimates measure the instantaneous star formation, because they probe the integrated luminosity of massive stars blueward of the Lyman limit (i.e. ionizing photons). We correct these SFR measurements by using the strong lens model described in § 3.6.1 to compute the magnification factor that applies to the region of the arc covered by the FIRE slit. In this case, the FIRE slit measures a region of the arc with a total magnification µ = 27.3 +11.4 −6.5 , and after correcting for this magnification we compute SFRs (for the knot covered by FIRE only) of SFR Hβ = 84 ± 24 M ⊙ yr −1 and SFR OII = 55 ± 25 M ⊙ yr −1 , where the large uncertainties reflect a 20% uncertainty in the absolute flux calibration of the FIRE data, as well as measurement errors and, in the case of SFR OII , a 30% scatter in the calibration between [O II] luminosity and SFR.
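A sketch of the H-β based SFR calculation is given below: it applies a Calzetti dust correction, converts the flux to a luminosity for the adopted cosmology, removes the lensing magnification, and then uses the H-α calibration commonly quoted for Kennicutt (1998) together with a Case B H-α/H-β ratio of 2.86. The observed flux is a placeholder rather than the measured value, and the calibration constant and Case B ratio are standard values assumed for the example.

```python
# Hedged sketch: dust-corrected, demagnified SFR from an H-beta flux, using the
# Kennicutt (1998) H-alpha calibration, SFR = 7.9e-42 * L(Ha) [erg/s], with
# Case B H-alpha/H-beta = 2.86 and a Calzetti et al. (2000) correction.
# The observed flux below is an illustrative placeholder, not the measured value.
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.27)
z, mu = 3.6252, 27.3                      # redshift and FIRE-aperture magnification
f_hb_obs = 1.0e-16                        # observed H-beta flux [erg/s/cm^2], placeholder
k_hb, E_BV = 4.6, 1.0 / 4.05              # Calzetti k(H-beta) and E(B-V) for A_V = 1

f_hb_corr = f_hb_obs * 10 ** (0.4 * k_hb * E_BV)          # dust-corrected flux
d_L = cosmo.luminosity_distance(z).to(u.cm).value
L_hb = 4.0 * np.pi * d_L ** 2 * f_hb_corr / mu            # demagnified luminosity
L_ha = 2.86 * L_hb                                        # Case B conversion
print(f"SFR ~ {7.9e-42 * L_ha:.0f} Msun/yr (for the placeholder flux)")
```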
Electron Density Our spectra include three distinct doublet lines that provide a measurement of the electron density, n e , in the H II regions that are responsible for the observed nebular line emission (Osterbrock 1989). The GMOS spectrum contains both C III]λλ1907,1909 and Si III]λλ1882,1892, where the Si III] lines are well-separated and the C III] lines are blended but resolved. The FIRE spectrum resolves the [O II]λλ3727,3729 lines. All of these line pairs are located very close to one another in wavelength, so that the uncertainty regarding the internal reddening does not factor into the electron density determination. We use the curves from Osterbrock (1989) to convert line ratios into electron density; the reported uncertainties are those associated with the measurement uncertainties in the line ratios and assume no additional systematic uncertainties in the line ratio vs. n e curves of Osterbrock (1989). We compute initial estimates of the electron density beginning with an assumed electron temperature, T e = 10,000 K, and then iterate the computations described in this section and the following section to arrive at the converged values presented here. The cleanest lines that we can measure in the available data are Si III]λλ1882,1892, which are well-separated in wavelength and unaffected by strong sky subtraction residuals. We also measure the line ratios for C III] and [O II], though systematic uncertainties from sky lines make them both less robust measurements than the Si III] ratio. The C III] and Si III] lines both probe a relatively higher density range and therefore cannot precisely constrain n e values below ∼10 3 cm −3 ; the [O II]λλ3727,3729 line ratio provides constraints below n e ∼ 10 3 cm −3 but is unfortunately limited by a bright sky line residual in our data. The specific n e constraints from our data are summarized as follows: 1. Si III]λλ1882,1892 - We measure this line ratio to be 1.6 ± 0.2 in the GMOS spectrum. This value corresponds to an electron density n e = 10 3 cm −3 for the nebular line emitting regions within SGAS J105039.6+001730. Incorporating the uncertainty in the Si III] line ratio results in an upper limit, n e ≤ 2 × 10 3 cm −3 . This is fully consistent with SGAS J105039.6+001730 being in the low-density regime, though the Si III] doublet ratio transitions at higher densities and therefore does not provide as powerful a constraint on low values of n e as the [O II] doublet. 2. [O II]λλ3727,3729 - We fit the two lines of the doublet with gaussian profiles (see Table 3). This line profile fitting method allows us to recover an estimate of the true line strengths, ignoring contamination from the bright sky line residuals. We measure a line ratio of 1.0 ± 0.4; the central value implies an electron density of n e ∼ 2-3 × 10 2 cm −3 , though the large uncertainty effectively encompasses all physically plausible density values. 3. C III]λλ1907,1909 - The C III]λλ1907,1909 line ratio is fit in the same way as the [O II]λλ3727,3729 ratio, and we find a line ratio of 1.65 ± 0.14, which is nonphysical but less than 1σ from the maximum value for the line ratio in the low-density limit (∼1.55), and implies a 2σ limit of n e ≲ 3 × 10 3 cm −3 . We also note that the line measurements for C III] suffer from a similar source of uncertainty as the [O II]λλ3727,3729 doublet due to the lines being redshifted to fall on to a bright sky line. However, the nod-and-shuffle strategy employed with the GMOS data helps significantly in limiting the sky line subtraction uncertainties to the Poisson minimum. The two-gaussian profile fits to each of the [O II], C III], and Si III] doublets are shown in Figure 7. From the three electron density indicators, there is a strong preference for the low-density regime, with n e ≤ 10 3 cm −3 . We also note that the apparent offset in density values preferred by the [O II] and C III] doublets may simply reflect the fact that these doublets trace the electron density of the low and medium ionization zones, respectively (Quider et al. 2009; Christensen et al. 2012a; James et al. 2014).
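The iterate-to-convergence procedure described above (assume T e , infer n e , update T e , and repeat) can be sketched as follows; density_from_ratio and temperature_from_ratio are hypothetical stand-ins for the Osterbrock (1989) diagnostic curves rather than real library calls, and the toy functions at the bottom exist only to show the loop converging.

```python
# Hedged sketch: iterate electron density and temperature to convergence, as
# described in the text. density_from_ratio / temperature_from_ratio are
# hypothetical placeholders for the Osterbrock (1989) diagnostic curves.
def iterate_ne_te(si3_ratio, o3_ratio, density_from_ratio, temperature_from_ratio,
                  te_init=10_000.0, tol=1.0, max_iter=50):
    te = te_init
    ne = density_from_ratio(si3_ratio, te)
    for _ in range(max_iter):
        te_new = temperature_from_ratio(o3_ratio, ne)
        ne_new = density_from_ratio(si3_ratio, te_new)
        if abs(te_new - te) < tol and abs(ne_new - ne) < tol:
            return ne_new, te_new
        ne, te = ne_new, te_new
    return ne, te

# Toy stand-ins with weak cross-dependence, purely to show the loop converging:
demo_ne = lambda ratio, te: 1.0e3 * (1.6 / ratio) * (te / 1.0e4) ** 0.5
demo_te = lambda ratio, ne: 1.1e4 * (1.0 + 1.0e-6 * ne)
print(iterate_ne_te(1.6, 0.01, demo_ne, demo_te))
```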
Electron Temperature

The non-detection of the auroral [O III]λ4364 line in the FIRE spectrum places an upper limit on the electron temperature, T e ≤ 1.4 × 10 4 K. The electron temperature can also be measured from the ratio of the rest-UV O III]λλ1661,1666 lines to the rest-optical [O III]λλ4960,5008 lines (Villar-Martín et al. 2004; Erb et al. 2010). We have good detections of all the relevant lines, so this method allows us to measure a precise temperature rather than a limit. However, this measurement has its own caveats, primarily a large sensitivity to the intrinsic extinction and a large uncertainty between the absolute flux calibrations of the optical (GMOS) and NIR (FIRE) spectra. With these caveats in mind, we measure T e = 11300 +1400 −1000 K (measurement uncertainties only), which agrees well with the T e constraint from the FIRE spectrum alone. When we fold in the additional uncertainty between the relative flux calibrations of the GMOS and FIRE data, as well as the uncertainty in our best-fit value for the dust extinction (A V = 1 ± 0.2), we find that the T e constraint from the O III]λλ1661,1666 to [O III]λλ4960,5008 line ratio is quite broad: 4000 < T e < 15000 K. This is less constraining (for physically plausible values of T e) than the limit derived from the non-detection of [O III]λ4364. From here on out we proceed with the T e ≤ 1.4 × 10 4 K constraint on the electron temperature.

Oxygen Abundance Indicators

In the next two subsections we apply several different oxygen and ionic abundance indicators to our observations of SGAS J105039.6+001730. The results are summarized in Table 7. For all of these calculations we include a systematic uncertainty term that results from the uncertainty in the extinction correction (A V = 1 ± 0.2) that is applied to the oxygen lines.

T e Direct Oxygen Metallicity

From the detected oxygen lines we can use the direct T e method to constrain the metallicity of SGAS J105039.6+001730 using the prescription outlined by Izotov et al. (2006). This metallicity measurement accounts for the singly (O + /H + ) and doubly ionized (O ++ /H + ) oxygen atoms, but ignores triply ionized oxygen, which only contributes significantly to the measured abundance in regions of extremely high ionization. The metallicity constraint from this method is 12 + log(O/H) T e ≥ 8.05 (≥ 0.22 Z⊙) when we use the upper limit on the electron temperature from the [O III]λ4364 non-detection.

R 23

The R 23 index combines the reddening-corrected [O II]λλ3727,3729 and [O III]λλ4960,5008 fluxes relative to H-β (R 23 = ([O II] + [O III])/H-β). This index is problematic because it is double valued, with both high metallicity and low metallicity branches. We measure log(R 23 ) = 1.05 ± 0.06. The high value for R 23 places SGAS J105039.6+001730 in the transition zone between the upper and lower branches of the observed R 23 -metallicity relation. There are several different calibrations for the R 23 index in the literature for one or both branches; we compute the R 23 -based 12 + log(O/H) for these different calibrations.

Ne3O2

The ratio of [Ne III]λ3869 to [O II]λλ3727,3729 has also been calibrated as an indicator of the log(O/H) metallicity by Shi et al. (2007). We measure log(3869/(3727+3729)) = −0.41, which corresponds to 12 + log(O/H) = 7.5 ± 0.2. The error bar reported here does not include the uncertainty in the zero point of the Ne3O2 calibration, which is quite large (∼ 0.7 dex), and while we report the Ne3O2 metallicity estimate in the spirit of being thorough, we do not use it to compute the mean metallicity of SGAS J105039.6+001730 (see Table 7).

Average Oxygen Abundance Metallicity

In addition to the results of each of the individual oxygen abundance metallicity indicators, we also show in Table 7 the average metallicity value of all of the oxygen abundance indicators calculated in the previous sections, excluding the Ne3O2 method due to the extremely large uncertainty in the zero point of the Ne3O2 metallicity diagnostic.
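A minimal sketch of the index arithmetic, with placeholder dereddened fluxes (normalized to H-β = 1) chosen so the indices land near the quoted values; the branch-dependent calibration coefficients that turn R 23 into 12 + log(O/H) differ between authors and are deliberately omitted here.

```python
import numpy as np

# Dereddened line fluxes normalized to F(Hbeta) = 1.0 (placeholder values chosen
# so the indices land near the measurements quoted in the text).
f_oii   = 3.6    # [O II] 3727,3729 (sum)
f_oiii  = 7.6    # [O III] 4960,5008 (sum)
f_neiii = 1.4    # [Ne III] 3869
f_hb    = 1.0

R23 = (f_oii + f_oiii) / f_hb
Ne3O2 = f_neiii / f_oii

print(f"log(R23)   = {np.log10(R23):.2f}")    # ~1.05, in the double-valued turnover zone
print(f"log(Ne3O2) = {np.log10(Ne3O2):.2f}")  # ~ -0.41
```

The double-valued nature of R 23 is exactly why a value in the turnover zone, as measured here, cannot by itself pick the metallicity branch.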
4.8. Ionic Abundance Ratios

4.8.1. Ionization Parameter, log(U)

From the available data we can follow Kewley & Dopita (2002) and Kobulnicky & Kewley (2004) and estimate the ionization parameter from the rest-frame optical emission lines; the resulting value is log(U) = −2.22 ± 0.15. This ionization parameter is not nearly so extreme as has been found in a few extreme starbursting galaxies at z ∼ 2-3.3, where log(U) is measured to be as high as ∼ −1 (e.g., Villar-Martín et al. 2004; Erb et al. 2010). We also use the Ne3O2 line ratio diagnostic for recovering the ionization parameter as recently proposed by Levesque & Richardson (2014), which we agree is likely a much better use of this line ratio measurement than the Ne3O2 metallicity indicator. Ne3O2 yields an ionization parameter estimate of log(U) = −2.7 +0.3 −0.2. The uncertainty here includes contributions from the line flux measurements, the reddening correction, and the ∼0.1 dex uncertainty in the average metallicity measurement (see Table 7). It is not clear which of the two diagnostics above provides the more reliable estimate of log(U). There are reasons to suspect, however, that both of the preceding ionization parameter estimates are unreliable, or at least incomplete in their description of the physical conditions within SGAS J105039.6+001730. Specifically, the detection of strong He IIλ1640 emission and significant excess emission in the P Cygni line profiles of Si IV and C IV both prefer larger values of log(U). We discuss these features in more detail in § 4.10 and § 5.3.

4.8.2. Ne ++ Abundance

From the strong [Ne III]λ3869 emission line we can compute the abundance of Ne ++ /H + using equation 7 from Izotov et al. (2006). We find 12 + log(Ne ++ /H + ) = 7.13 ± 0.23. In the Lynx arc, Villar-Martín et al. (2004) approximate that all of the nebular Ne atoms are doubly ionized, so that Ne/H ∼ Ne ++ /H +, but it is not clear that we can make the same assumption given that we measure an [O II] to [O III] ratio that is consistent with an ionization parameter considerably lower than the log(U) ∼ −1 found in the Lynx arc. We can at least note the lack of observable flux from known Ne IV and Ne V lines in both the rest-UV and rest-optical, which qualitatively argues against these states representing a large fraction of the total Ne.

4.8.3. C ++ /O ++ Abundance Ratio

Garnett et al. (1995b) used HST spectroscopy of dwarf galaxies to measure the relative abundances of C ++ /O ++ and N ++ /O ++ from rest-UV emission lines. We detect the same families of lines in the GMOS spectrum of SGAS J105039.6+001730 and can therefore measure the ion abundance ratios of this galaxy at z = 3.6252. We use the ratio of the line intensities of O III]λλ1661,1666 and C III]λλ1907,1909 (Garnett et al. 1995b) to measure log(C ++ /O ++ ) = −0.79 ± 0.06. This abundance measurement is relatively insensitive to extinction due to the similar wavelengths of the lines. Kobulnicky & Skillman (1998) have also shown that it is possible to measure C ++ /O ++ from the relative line strengths of C III]λλ1907,1909 and [O III]λλ4960,5008. From this method we measure log(C ++ /O ++ ) = −0.77 ± 0.32, where the larger uncertainty is driven by the uncertainty in the reddening correction due to the fact that the lines used with this method span a large range in wavelength. Comparing the two C ++ /O ++ measurements we see remarkable agreement, which could be interpreted as a confirmation that A V = 1 is, in fact, an appropriate extinction value for SGAS J105039.6+001730.
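The quoted ±0.06 is dominated by line-flux errors because the two lines lie close in wavelength. A hedged sketch of that propagation follows: the real Garnett et al. (1995b) conversion runs through T e -dependent emissivity ratios, which we collapse here into a single placeholder constant K, and the flux values and fractional errors are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Measured line fluxes with invented fractional uncertainties (arbitrary units).
f_oiii_uv = rng.normal(1.00, 0.08, N)    # O III] 1661,1666
f_ciii    = rng.normal(3.10, 0.12, N)    # C III] 1907,1909

# The actual conversion multiplies this flux ratio by a T_e-dependent emissivity
# ratio (Garnett et al. 1995b); we collapse that into one placeholder constant K.
K = 0.052
log_co = np.log10(K * f_ciii / f_oiii_uv)
print(f"log(C++/O++) = {np.median(log_co):.2f} +/- {np.std(log_co):.2f}")
```

Because K cancels out of the scatter, the width of the resulting distribution is set entirely by the fractional flux errors, which is why the extinction-insensitive UV ratio yields such a tight error bar.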
However, the values reported here do not reflect the uncertainty that results from the lack of a precise electron temperature constraint, and it is possible that the agreement results in part from different sources of error fortuitously canceling out. Previous investigations have shown that no appreciable ionization correction factor (ICF) is necessary for converting the C ++ /O ++ ion ratio into the total C/O ratio given log(U) values in line with what we find above for SGAS J105039.6+001730 (Garnett et al. 1995b; Erb et al. 2010). If our estimate of log(U) is correct, then the log(C ++ /O ++ ) measurements above are effectively telling us the total log(C/O) elemental abundance ratio in SGAS J105039.6+001730.

4.8.4. N ++ /O ++ Abundance Ratio

Similar to the C/O ratio measurement, Garnett et al. (1995b) use the ratio of the O III]λλ1661,1666 lines and the N III]λ1750 multiplet to measure the N/O abundance. Nitrogen has ionization potentials that are closer to those of oxygen than of carbon, and so an ionization correction factor should also not be necessary for inferring the total N/O ratio from the N ++ /O ++ ion ratio. Applying this method we measure log(N ++ /O ++ ) = −1.6 ± 0.2, which, assuming a negligible ICF and combining with our C ++ /O ++ measurement, provides an estimate of the relative C/N enrichment: log(C/N) = 0.8 ± 0.2.

4.8.5. Si ++ /C ++ and Si ++ /O ++ Abundance Ratios

Garnett et al. (1995a) demonstrate how the Si/O relative abundance can be computed from the rest-UV Si III]λλ1882,1892 and C III]λλ1907,1909 lines. As shown by Garnett et al. (1995a), the ICF for using the ratio of Si ++ to C ++ as an approximation of the total Si/C abundance is somewhat sensitive to the ionization parameter. Our estimate of log(U) for SGAS J105039.6+001730 implies a fraction of doubly ionized oxygen of X(O ++ ) ∼ 0.8-0.85 based on the models from Erb et al. (2010). From Garnett et al. (1995a) this implies X(Si ++ )/X(C ++ ) ∼ 0.65-0.9, i.e., an ICF in the range ∼1.1-1.5 with a central value of 1.4 (see, e.g., Kobulnicky & Skillman 1998); the ICF alone contributes a 25% uncertainty to the abundance estimate. We find log(Si ++ /C ++ ) = −1.2 ± 0.3. Provided our assumptions about the ICF are reasonable, this measurement can be combined with our measurement of the log(C/O) abundance to yield log(Si ++ /O ++ ) = −2.0 ± 0.4.
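The Si ++ /O ++ combination above is simple log arithmetic; a quick check, adding the quoted measurement errors in quadrature (the extra ICF scatter is what widens the result toward the quoted ±0.4):

```python
import math

si_c, e_si_c = -1.2, 0.3     # log(Si++/C++) from the text
c_o,  e_c_o  = -0.79, 0.06   # log(C++/O++) from the text

si_o = si_c + c_o
e_si_o = math.hypot(e_si_c, e_c_o)   # quadrature sum of independent errors
print(f"log(Si++/O++) = {si_o:.2f} +/- {e_si_o:.2f}")
# -> -1.99 +/- 0.31; folding in the ICF uncertainty pushes this toward the quoted +/-0.4
```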
Starburst99 Model Comparisons

The P Cygni ISM absorption/emission features that we observe in the rest-UV are the byproduct of powerful winds that are typically associated with Wolf-Rayet (W-R) stars. To try to understand the physical implications of the wind features in the spectrum of SGAS J105039.6+001730, we compare our data against Starburst99 models (S99; Leitherer et al. 2010). Both our GMOS and MagE data include each of the N Vλλ1238,1242, Si IVλλ1393,1402, and C IVλλ1548,1551 doublets, though the N V feature is never detected at high S/N. The MagE spectrum has much higher spectral resolution than the GMOS spectrum and would be the ideal dataset for comparison against synthetic spectra. However, we were limited to collecting 1 hr of integration time with MagE, taken at relatively high airmass and in highly variable seeing. As a result, the MagE data do not provide a better comparison against the S99 model spectra than the GMOS data, and so we focus on S99 model-GMOS comparisons from here on out. We generate an array of S99 models assuming both continuous and instantaneous star formation scenarios. The continuous star formation models span a range in metallicities from solar to 2% solar, and a range in ages from 2 Myr to 40 Myr. Our instantaneous models all assume a metallicity of 0.4 Z⊙ and span a range in ages (2-100 Myr), and also allow for variation in the maximum stellar mass formed, ranging from 30-120 M⊙. In Figure 8 we plot two of the continuous star formation model synthetic spectra on top of the GMOS data in the regions surrounding the three strong P Cygni features noted previously (N V, Si IV and C IV). The specific models plotted have metallicities of 0.4 Z⊙ -in line with the metallicity we measure from nebular line diagnostics -and ages of 2 Myr and 40 Myr. The S99 synthetic spectra assuming an instantaneous burst of star formation are shown in Figure 9, also plotted for two stellar population ages (2 Myr and 10 Myr). None of the models with either continuous or instantaneous star formation can reproduce the combination of narrow and deep absorption features, the shape of the absorption trailing off blueward of the C IV line center, or the narrow and strong emission from Si IV and the broader strong emission from C IV. The strength of the observed Si IV and C IV emission could imply that these features are at least partly nebular in origin, rather than resulting entirely from the winds and atmospheres of massive stars.

He II Emission

He IIλ1640 emission appears in the GMOS and MagE spectra of SGAS J105039.6+001730. This feature appears in composite spectra of z ∼ 3 galaxies (Shapley et al. 2003) and in the spectra of individual strongly lensed z ∼ 2-4 galaxies (e.g., Cabanac et al. 2008; Dessauges-Zavadsky et al. 2011), but typically appears as a broad emission feature that is associated with the winds of W-R stars and exhibits velocity widths of ∼1000 km s −1. However, the He IIλ1640 detected from SGAS J105039.6+001730 has a width of ∼330 km s −1 in the GMOS spectrum, which matches the resolution of those data. The feature is too low S/N in the MagE spectrum to inform a multiple-component fit to the velocity profile, but a simple gaussian fit to the line prefers a FWHM that is consistent with the resolution of the MagE spectra (∼75 km s −1 ), implying that a significant fraction of the emission does indeed originate from a narrow distribution in velocity space and therefore likely results from nebular emission rather than W-R winds. Explaining such strong, narrow emission from He II requires extreme ionization. The He IIλ1640 line is extremely bright in SGAS J105039.6+001730, with a total flux ratio of He IIλ1640/Hβ ∼ 0.17 after applying the extinction correction, and likely originates in part from nebular emission. Erb et al. (2010) note the presence of significant nebular He IIλ1640 emission in a bright field-selected LBG at z = 2.3, and find that high ionization parameter values are required to explain the ratio that they measure for He IIλ1640/Hβ of 0.3, and an equivalent width of 2.7 Å. We can also compare the equivalent width, W 1640, of He IIλ1640 in SGAS J105039.6+001730 against predictions for the W-R wind He IIλ1640 emission strength in the S99 models described above; we measure W 1640 = 1.5 ± 0.15 Å. The S99 models only generate values this high for an extremely short-lived stretch (t age ∼ 5-7 Myr) and only in models with solar metallicity (in strong disagreement with all of the metallicity diagnostics measured in § 4.7). As noted previously, He IIλ1640 emission originating from W-R winds would also be much broader than the unresolved line that we see in the GMOS spectrum. Lastly, we see no evidence of a blueshifted P Cygni absorption feature in the He IIλ1640 line, which further argues against this line originating entirely from W-R winds.
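A sketch of the equivalent-width measurement quoted above, on a synthetic continuum-normalized spectrum: the observed-frame integral of (F/F cont − 1) is divided by (1 + z) to land in the rest frame. The line amplitude, width, and noise level are placeholders tuned to give W 1640 near 1.5 Å.

```python
import numpy as np

Z = 3.6252
rng = np.random.default_rng(3)

# Continuum-normalized observed-frame spectrum around He II 1640 (all values invented).
lam = np.linspace(7540.0, 7630.0, 600)          # 1640.4 * (1 + Z) ~ 7587 A
center, sigma, amp = 1640.4 * (1 + Z), 2.9, 0.9
norm = 1.0 + amp * np.exp(-0.5 * ((lam - center) / sigma) ** 2)
norm += 0.02 * rng.standard_normal(lam.size)

w_obs = np.trapz(norm - 1.0, lam)               # observed-frame emission EW
print(f"W_1640 (rest frame) = {w_obs / (1 + Z):.2f} A")
```

The (1 + z) division matters: at z = 3.6252 the observed-frame equivalent width is ∼4.6 times larger than the rest-frame value being compared against the S99 predictions.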
Morphology

HST imaging enables us to examine the morphology of SGAS J105039.6+001730 via both the internal structure of the arc itself and the much less distorted counter image. The counter image appears extremely irregular and contains several distinct emission knots. The giant arc also includes multiple images of three distinct knots of emission, one of which is considerably redder than the rest of the arc. The central emission knot in the counter image also appears notably redder. While the knots have different IR-optical colors, all are extremely bright in the optical (rest-UV). This suggests that there is active star formation throughout SGAS J105039.6+001730, but that the redder central knot likely has a larger underlying population of older stars. It is possible that the redder knot corresponds to the core of the galaxy, which possibly hosts the early stages of an assembling bulge, and that the bluer emission from the outskirts of the galaxy is preferentially tracing small but intense star forming regions.

Strength of the C III] Feature

Strong C III]λλ1907,1909 emission is not ubiquitous in star forming galaxies at moderate redshift (e.g., the z ∼ 3 composite; Shapley et al. 2003). It is therefore interesting to compare SGAS J105039.6+001730 against individual low redshift star forming galaxies with good UV spectra to try to understand the astrophysical conditions that are conducive to producing strong C III]λλ1907,1909. The galaxy sample compiled by Leitherer et al. (2011) (hereafter L11) is an excellent comparison set, with HST UV spectra of 46 star forming regions within 28 galaxies at z < 0.06. We examine the individual spectra that were analyzed in L11 and measure the equivalent width, W 1909, of the C III]λλ1907,1909 line in the L11 spectra. In Figure 10 we show the relationship between W 1909 and metallicity in the L11 sample; there is a dearth of galaxies with both a large W 1909 and high metallicity, indicating that C III]λλ1907,1909 emission may be suppressed in high metallicity star forming galaxies. Interestingly, SGAS J105039.6+001730 has a larger W 1909 than all but 2 of the 25 L11 galaxies, yet it has a metallicity, 12 + log(O/H) = 8.3, which coincides almost exactly with the apparent turn-off of strong C III] emission in the L11 sample. The C III] lines are forbidden and semi-forbidden transitions, so any such suppression must be the result of one or more indirect mechanisms, unlike the suppression of resonant lines such as Lyman-α. A similar relationship has also recently been observed in low-mass, high-redshift galaxies (Stark et al., in preparation; private communication), where W 1909 appears to be correlated with the strength of Lyman-α emission. In this context SGAS J105039.6+001730 is puzzling, in that it exhibits a strong DLA feature, which is the opposite of what one would expect given a correlation between W 1909 and W Ly−α. The strength of C III] emission in SGAS J105039.6+001730 is difficult to understand in light of this galaxy's other observable properties and the apparent tendency for strong C III] emission to coincide with strong Ly-α emission and low metallicity.
It would seem that the physics which dictate strong C III] emission cannot be summarized by a simple correlation against a single fundamental physical quantity (e.g., metallicity).

Relative C/O, N/O, Si/O Enrichment

With measurements of the relative abundances of C/O, N/O and Si/O we can compare the enrichment of the ISM in SGAS J105039.6+001730 against well-studied star forming galaxies at low redshift. Garnett et al. (1995b) examine the relative C/O, C/N and N/O abundances of irregular/HII galaxies at low redshift. SGAS J105039.6+001730 fits nicely onto the sequences of C/O vs. 12 + log(O/H) and C/N vs. 12 + log(O/H) that Garnett et al. (1995b) observe. The relative abundances of Si and O were also studied by Garnett et al. (1995a). Si and O should be produced in the same stars, and therefore their ratio is not expected to vary strongly with metallicity. Variations can occur, however, if Si depletes onto dust grains in the ISM, rendering it unobservable via observation of line emission from ionized HII regions. Our measurement of the Si/O abundance in SGAS J105039.6+001730 is somewhat lower than the stable value observed by Garnett et al. (1995a), but the large uncertainties prevent us from making a strong statement about whether or not Si depletion by the formation of silicate dust grains may truly be taking place in SGAS J105039.6+001730. Our abundance measurements indicate that SGAS J105039.6+001730 has elemental enrichment properties that are generally in line with observations of irregular star forming galaxies at z ∼ 0.

P Cygni Features

Based on the strength of the P Cygni features in SGAS J105039.6+001730, we expect the extremely low metallicity models to do a poor job of reproducing the observed features, and this holds true. Models with metallicities of 0.4 Z⊙ result in the best agreement with our data, which is encouraging given that we measure the metallicity to be ∼0.4 Z⊙ from nebular emission line diagnostics. However, none of the synthetic spectra that we generate using S99 reproduce the combination of P Cygni features that we observe in SGAS J105039.6+001730. The most challenging features to explain are the narrow shape and exceptional strength of the blueshifted absorption features, as well as the strength of the redshifted C IV emission and the odd, strong emission from Si IVλ1393. Qualitatively, these features indicate the presence of an extremely young population of massive stars. Some recent work has also explored the ways in which stellar rotation can affect the spectra of massive stars, finding that including rotation effects in the modeling of stellar spectra can result in an increase in the amount of ionizing radiation, hardening of the ionizing radiation, and stronger profiles for some lines, including the UV Si IV and C IV P Cygni features (Levesque et al. 2012; Leitherer et al. 2014). Our results here are reminiscent of the difficulty that other studies have encountered comparing synthetic spectra against observations of galaxies at z ≳ 2 (Pettini et al. 2000; Shapley et al. 2003; Quider et al. 2009; Erb et al. 2010). As suggested by Erb et al. (2010), the excess emission features could be explained, at least in part, by nebular emission. The presence of nebular emission from these transitions would also agree with the properties of the He IIλ1640 emission line discussed above, and would imply an extremely hard ionizing radiation field from massive O stars.
One possible explanation for what we observe in SGAS J105039.6+001730 is that the integrated spectrum that we measure across a wide range of wavelengths is a blend of the properties of several different star forming regions embedded within the galaxy. Different star forming regions could be generating extreme spectral features that dominate the observable signal in some regions of the spectrum (e.g., the rest-UV) while only contributing in part to other parts of the spectrum (e.g., the rest-optical). The superposition of a "normal" star forming galaxy with small, highly magnified regions of intense and short-lived star formation could be responsible for generating the complex spectral features that defy reproduction by a simple monolithic synthetic population of stars. There is, in fact, a body of published work that uses real (Kobulnicky et al. 1999; James et al. 2013a,b) and simulated (Pilyugin et al. 2012) observations of well-studied low-redshift star forming galaxies to explore the biases that can result from simple analyses of the integrated spectra of distant galaxies. Recent observations of different lines of sight within a single lensed star forming galaxy also show clear evidence of different physical conditions in different star forming regions. There is clear evidence to suggest that spatially resolved spectroscopy will be essential for constraining some of the physical properties of high redshift star forming galaxies.
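A toy model of this magnification weighting, with invented numbers: two regions with different intrinsic [O III]/[O II] ratios are blended, once with flux weights alone and once with lensing magnifications folded in, to show how the integrated ratio skews toward the highly magnified extreme region.

```python
import numpy as np

# Two star forming regions inside one lensed galaxy (all values illustrative).
# Region A: small, extreme, sitting on a high-magnification ridge.
# Region B: larger, "normal", modestly magnified.
flux_hb = np.array([1.0, 4.0])    # intrinsic H-beta fluxes
o3_o2   = np.array([12.0, 1.5])   # intrinsic [O III]/[O II] ratios
mu      = np.array([40.0, 5.0])   # magnifications

w = mu * flux_hb                  # observed weighting of each region
blended   = np.sum(w * o3_o2) / np.sum(w)
intrinsic = np.sum(flux_hb * o3_o2) / np.sum(flux_hb)

print(f"flux-weighted ratio without lensing:  {intrinsic:.1f}")   # 3.6
print(f"magnification-weighted blended ratio: {blended:.1f}")     # 8.5
```

Even this crude two-region blend more than doubles the integrated line ratio, which is the sense of the bias one should expect when the most magnified region is also the most extreme.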
Interpreting High Ionization Features in the Rest-UV

It is physically unlikely that all of our measured He IIλ1640 line flux is nebular in origin, as that would require an ionization parameter log(U) > −1 according to the models explored by Erb et al. (2010). Even assigning only ∼25% of the flux to nebular emission requires an ionization parameter log(U) > −2 for metallicities in the range that we measure for SGAS J105039.6+001730, according to the models explored by Erb et al. (2010). A physical picture in which there are one or more extreme star forming regions within SGAS J105039.6+001730 is also consistent with evidence that unresolved starbursts have a maximum ionization parameter, log(U) ∼ −2.3 (Yeh & Matzner 2012), implying that diagnostics indicating significantly larger values of log(U) are likely to be sampling individual extreme star forming regions within galaxies (e.g., Snijders et al. 2007; Indebetouw et al. 2009). Another possible explanation for the strong He IIλ1640 line is the presence of an exceptionally hard ionizing spectrum. The ionization parameter, U, only describes the intensity of ionizing radiation; a very hard ionizing spectrum with relatively low intensity can generate a large amount of nebular He II emission even given a low ionization parameter. An initial mass function (IMF) that generates more of the hottest, most massive stars, for example, could account for a harder ionizing spectrum than considered by the models of Erb et al. (2010). However, arguments that are based on the vague, qualitative implications of varying the IMF are distasteful and presumptive when one considers that the spectral properties of the most massive stars are not well understood, especially at low metallicity (e.g., Rigby & Rieke 2004). The strong, narrow He IIλ1640 emission that we detect in SGAS J105039.6+001730 presents a puzzle. It either casts serious doubt on the ionization parameter diagnostics that we compute and rely on throughout § 4.8, or it suggests a hard ionizing spectrum that is difficult to reconcile with the electron temperature constraints available from rest-frame optical emission lines. Returning to the explanation that we put forth in the previous subsection, it seems likely that the integrated spectrum of SGAS J105039.6+001730 is too complex to be described by a simple stellar population or a single list of parameters (i.e., a single ionization parameter, electron temperature, metallicity, etc.). Rather, it is possible that the [O III]λλ4960,5008, [O II]λλ3727,3729, [Ne III]λ3869, and He IIλ1640 lines are in fact blends of multiple components originating from different physical regions within the galaxy. A relatively small but extreme region of recent star formation with a large population of hot massive stars could, for example, produce strong He II emission, while other more "normal" regions within the galaxy could be responsible for weighting the measured [O II] to [O III] ratio toward a lower inferred ionization parameter.

In cases where we are studying the physical properties of strongly lensed star forming galaxies, it is easy to imagine that differential magnification effects across the surface of the background galaxy can generate integrated observable quantities that are weighted toward the highest surface brightness regions within the lensed galaxy, which would naturally tend to be the regions of most intense unobscured star formation (e.g., Er et al. 2013). This issue is not, of course, unique to strongly lensed galaxies at high redshift. Studies of the integrated properties of any distant galaxy must, ultimately, account for the fact that properties such as metallicity, SFR and ionization are not uniform throughout a given galaxy. The fundamental scale of star formation in the universe is much smaller than a galaxy, and individual galaxies can (and should) therefore host a range of star forming regions with physically different properties. This point is emphasized by the few published studies of lensed galaxies observed with NIR IFUs (e.g., Stark et al. 2008; Wuyts et al. 2014). Spatially resolved/IFU spectroscopy of distant sources is a powerful tool for confronting some of the dangers that result from analysis (or over-analysis) of integrated spectra of these galaxies. It is also plausible that the diversity of properties within an individual galaxy could vary more dramatically in the earlier universe, when star formation was still ramping up to its peak and galaxies had yet to assemble the majority of their stars relative to galaxies in the present epoch.

SUMMARY AND CONCLUSIONS

We present a detailed analysis of optical and NIR imaging and spectroscopy of an exceptionally bright strongly lensed galaxy at z = 3.6252. SGAS J105039.6+001730 is among the best-characterized star forming galaxies at z > 2, and the highest redshift galaxy with its properties measured from high S/N rest-frame UV and optical spectra. The observations and analyses presented here are a step toward improving observational constraints on the internal astrophysics of high redshift, vigorously star-forming galaxies. Our primary results are:

-SGAS J105039.6+001730 is a moderate-metallicity (Z = 0.4 Z⊙), moderately low-mass (log(M * /M⊙) = 9.5) galaxy, with star formation rates of 55 ± 25 and 84 ± 24 M⊙ yr −1 measured from nebular [O II]λλ3727,3729 and H-β emission, respectively, implying that vigorous starbursts are taking place in one or more regions within the galaxy.
-Several derived physical characteristics of SGAS J105039.6+001730, including estimates of relative elemental abundances and the strong C III]λλ1907,1909 emission, are in line with well-studied star forming galaxies at z ∼ 0. Other features, such as the strong damped Lyman-α absorption, are surprising given the strong C III] emission.

-Some features in the UV spectrum -the P Cygni lines and strong He II emission -indicate a strong ionizing field and/or a very high ionization parameter, in conflict with rest-frame optical diagnostics. Attributing the strong He II emission in the UV to a high ionization parameter requires log(U) > −2, whereas the rest-frame optical nebular lines prefer a more "normal" value of log(U) < −2.05.

-Our work here is a reminder that as the quality of data improves for high-redshift galaxies, it is essential that the subsequent analyses are aware of and account for the systematic effects that result from measuring properties from spectral features that are a combination of emission originating from different regions within galaxies. The fundamental mode of star formation in the universe operates on scales much smaller than individual galaxies. Studies of strongly lensed galaxies, therefore, provide a unique opportunity to probe the spatial variance of star formation in galaxies during the era of peak star formation.

SGAS J105039.6+001730 is a prime target for more extensive follow-up observations. High resolution optical spectra, in particular, would help to answer many of the outstanding questions about the nature of the star formation, the ISM, and the properties of the population of massive and W-R stars within this galaxy.
Whole-exome sequencing associates novel CSMD1 gene mutations with familial Parkinson disease

Objective: Despite the enormous advancements made in deciphering the genetic architecture of Parkinson disease (PD), the majority of PD is idiopathic, with single gene mutations explaining only a small proportion of the cases.

Methods: In this study, we clinically evaluated 2 unrelated Spanish families diagnosed with PD, in which known PD genes were previously excluded, and performed whole-exome sequencing analyses in affected individuals for disease gene identification.

Results: Patients were diagnosed with typical PD without relevant distinctive symptoms. Two different novel mutations were identified in the CSMD1 gene. The CSMD1 gene, which encodes a complement control protein known to participate in complement activation and inflammation in the developing CNS, was previously shown to be associated with the risk of PD in a genome-wide association study.

Conclusions: We conclude that the CSMD1 mutations identified in this study might be responsible for the PD phenotype observed in our examined patients. This, along with previously reported studies, may suggest the complement pathway as an important therapeutic target for PD and other neurodegenerative diseases.

In this study, we clinically evaluated 2 different families suffering from late-onset PD (LOPD) without mutations in the known genes 18 and performed whole-exome sequencing (WES) analyses in 3 affected family members to identify the genetic causes of disease and to enhance our knowledge of the genetic architecture of PD. We identified 2 different mutations in a novel gene, previously reported to be associated with the risk of PD, as possible disease-causing mutations in both unrelated PD families.

METHODS

Standard protocol, approvals, registrations, and patient consents. Two individual families with LOPD of Basque origin and 1 isolated familial PD (fPD) case were clinically examined. Patients of the 3 families were diagnosed and treated by a group of movement disorder specialists (J.R.-M., A.B., and J.F.M.-M.) at the University Hospital Donostia and were included as having fPD without mutations in the known genes. Mutations in known genes were previously excluded through custom targeted sequencing. 18 Patients were diagnosed according to the UK PD Brain Bank Society and Gelb criteria. 19,20 Written informed consent, fully approved by the local ethics committee of the Hospital Universitario Donostia, was obtained from all participants. DNA samples from 115 Spanish patients with PD and 94 DNA samples belonging to ethnicity-matched neurologically normal individuals (45 men and 49 women) without a family history of any movement disorders were also available for genetic screening. The age at sample collection of the control individuals ranged from 60 to 93 years, with an average of 69.1 years. 17 The NDPT102 Parkinson panel from the Coriell Institute for Medical Research (coriell.org/), which contains DNA from 92 unique and unrelated Caucasian individuals with idiopathic PD (59 men and 33 women), was also used for mutational screening. All available DNA samples were isolated from whole blood using standard procedures.

Whole-exome sequencing. Four DNA samples, belonging to 2 different families with LOPD (cases A1, A2, and B1) and 1 isolated familial case, were subject to WES analyses (figure, A), which were conducted as previously described.
21 The SureSelect Human All Exon 50Mb exon-capture kit was used for library enrichment (Agilent Technologies Inc., Santa Clara, CA), and captured libraries were sequenced on the HiSeq2000 according to the manufacturer's instructions for paired-end 100-bp reads (Illumina Inc., San Diego, CA), using a single flow cell lane per sample. Sequencing data were put through a computational pipeline for WES data processing and analysis following the general workflow adopted by the 1000 Genomes Project, 22 where raw sequence reads were aligned to the human reference genome sequence (NCBI GRCh37.p13) using the fast lightweight Burrows-Wheeler Alignment Tool (BWA), 23 followed by base-quality recalibration and local realignment through the Genome Analysis Toolkit (GATK v1.5-16-g58245bf). Single nucleotide substitutions (single nucleotide polymorphisms/single nucleotide variants) and short insertions/deletions (INDELs) were called using the GATK Unified Genotyper tool, where calls were filtered based on mapping quality (q30 or higher) and depth of coverage (d10 or higher). Last, the AnnTools kit was used for annotation of the resulting calls, 24 and PICARD was used to compute exome statistics (picard.sourceforge.net/). Genomic variations observed as common mutations (frequency > 5%) in the latest dbSNP149 build, 1000 Genomes Project Phase 3, the Exome Variant Server of the National Heart, Lung, and Blood Institute (NHLBI) Exome Sequencing Project (evs.gs.washington.edu/EVS/), 25 and the Exome Aggregation Consortium (exac.broadinstitute.org/) were excluded from subsequent analyses, as were genomic variations mapped to intergenic, intronic, and noncoding exonic regions. Given the low frequency of pathogenic autosomal dominant PD mutations in public databases, which are either not present (SNCA, VPS35) or identified with very low frequency (LRRK2, 1E-06), with the exception of the LRRK2 p.Gly2019Ser mutation, which is identified with a higher frequency (1E-04) because of its prevalence in the European population, novel genomic variations and those with very low frequency were prioritized in follow-up analyses.

Candidate gene screening. To amplify all coding CSMD1 exons, genomic primers were designed using the ExonPrimer script (ihg.gsf.de/ihg/ExonPrimer.html) (primer sequences available on request), and PCR products were purified using the ExoSap-IT reagent (Applied Biosystems, Foster City, CA). These were sequenced in both forward and reverse directions using Applied Biosystems BigDye terminator v3.1 sequencing chemistry as per the manufacturer's instructions, resolved on an ABI3730 genetic analyzer (Applied Biosystems), and analyzed using Sequencher 5.4.1 software (Gene Codes Corporation, Ann Arbor, MI).

RESULTS

Clinical history. Family A: Patient A1. This patient began feeling tired, clumsy, and very slow in movements at the age of 72 and was diagnosed with PD 1 year later. Initially, he did not take any treatment, but then came back to the clinic with slowness, clumsiness, and a rest tremor in his right hand. On examination, he showed an amimic face, hypophonia, and dysarthria. He had a resting tremor (1/4) in the right hand as well as rigidity in the neck (3/4 neck stiffness), right arm (2/4), and left arm (1/4). He had moderate global akinesia (3/4) with axial disturbance. Reflexes were normal. He was diagnosed with PD (score 2 according to the Hoehn and Yahr [H&Y] scale).
This condition was treated with carbidopa/levodopa (75/300 mg/d) with improvement, and a year later cabergoline (2 mg/d) and rasagiline (1 mg/d) were administered. Later, cabergoline was replaced with rotigotine (8 mg/d). At the age of 78, he was autonomous in daily life, but a year later his postural instability increased, with more clumsiness when turning in bed and hypophonia. At the age of 80, he had an important gait disturbance with freezing but no falls, motor fluctuations, and moderate dyskinesias without cognitive impairment. He died at the age of 86 of a respiratory infection. He had a family history of PD, with a paternal aunt with a diagnosis of PD, a father with a possible resting tremor in old age, a brother who died without PD at the age of 77, a healthy sister (87 years old), a brother with cognitive impairment (86 years old), a daughter with PD (58 years old; figure, A; patient A2), and a daughter without PD (53 years old).

Family A: Patient A2. This patient was first seen at 49 years of age with a 6-month evolution of bradykinesia in her right hand affecting her daily activities, including writing, moving, dressing, and so on. She also noticed when writing notes that her letters were smaller. She reported trouble sleeping because of symptoms compatible with REM sleep behavior disorder (RBD), associated with anxiety precipitated by a family conflict, and restless legs syndrome (RLS) in the last 2 years. On examination, she showed mild amimia as well as neck and right arm rigidity. She had a slight decrease in arm movements in her walk, micrographia, and bilateral Babinski signs with mild hyperreflexia. A brain MRI study was normal, and DaTSCAN showed bilaterally reduced striatal uptake, mainly on the left side. She was diagnosed with PD, and initially began treatment with rasagiline (1 mg/d) and later with carbidopa-levodopa (75/300 mg/d). In the following years, she has been presenting with increasing motor fluctuations and generalized dyskinesias that have progressed to severe fluctuations, difficulty in walking with some degree of ataxia, and severe instability with falls over the last year. Her cognitive state remains normal.

Family B: Patient B1. This patient was examined for the first time at the age of 70, when he presented with tremor in his right hand of 6 months of duration. At examination, he showed rest tremor and akinesia with mild rigidity in his right hand. He was diagnosed with PD (score 1 according to the H&Y scale) and was treated with carbidopa-levodopa (75/300 mg daily) and selegiline with good response. In the following 3 years, his symptoms progressed to bilateral involvement, and he developed dyskinesias 6 years after levodopa initiation. He was autonomous in daily life until the age of 83, and he could go outside without assistance. He subsequently had postural instability, hypophonia, dysarthria, and dysexecutive mild cognitive impairment, but never became demented. At the age of 93, he suffered a lateral bulbar infarction related to embolism because of atrial fibrillation, with minor sequelae. Two months later, he had a fall with a chest contusion and fracture of the 10th rib. During his hospital admission, he had acute delirium and dyspnea, and finally died. He had a family history of PD. His mother, who died at the age of 75, had suffered from PD and rest tremor since the age of 70. Her brother died of pancreatic cancer at the age of 79.
This brother had 4 children: a son who was diagnosed with PD at the age of 51 (he is now 59 years old) as well as 2 other sons and 1 daughter who died without neurodegenerative disease. The patient (patient B1) had 3 children: a son who died of sepsis at the age of 53, a 68-year-old healthy daughter, and a 63-year-old daughter with postural tremor (figure, A; patient B2).

Family B: Patient B2. This patient was first examined at the age of 57, when she presented with a 2-year history of tremor in the hands. She feared having fPD related to her father's PD. She had postural tremor of small amplitude and a frequency of 8 Hz in the hands and was diagnosed with enhanced physiologic tremor aggravated by anxiety. At the age of 61, she came back to the clinic because of her tremor. She had shaking hands without cephalic postural tremor, but neither rigidity nor slowness was observed. Her tremor increased with nervousness. She reported symptoms suggestive of RLS. No constipation, hyposmia, dizziness, or symptoms suggestive of RBD were observed. Thyroid hormone levels and other analytic studies were normal.

Genetic results. WES approaches were performed in 3 different family members (cases A1, A2, and B1) belonging to 2 PD families (families A and B) and in 1 isolated familial case. Between 96.38% (case A2) and 93.56% (case A1) of the target exome was captured at 20-fold coverage or higher for all sequenced samples. This led to the identification of 979 coding genetic variations for case A1, 1,560 for case A2, 1,132 for case B1, and 907 for the isolated familial case. After filtering, and including only novel genomic variations or those with very low frequency (1E-04 to 1E-06), 26 SNVs (12 novel and 14 with low frequency) were found to be shared by the 2 affected members of family A. We then searched whether any of the genes harboring the shared SNVs were also mutated in family B and in the isolated case, and identified 2 novel heterozygous mutations, not previously reported in public databases, in the CSMD1 gene (MIM# 608397) in both affected families A and B. No novel genetic variation was found to be shared between the 2 families (A and B) and the isolated fPD case. A G-to-A transition (c.5885G>A) resulting in the p.Arg1962His amino acid substitution was identified in both affected members of family A, while a G-to-A transition (c.8959G>A) resulting in the p.Gly2987Arg amino acid substitution was identified in the only member of family B subject to WES (table 1). Analysis of this second mutation (p.Gly2987Arg) in additional family members revealed that it segregated with the disease status (figure, A and B). We also found that both mutated CSMD1 amino acids are highly conserved across different species (figure, B) as well as in the CSMD2 and CSMD3 proteins (data not shown), and that both mutations are located in different complement control protein (CCP) domains of the translated CSMD1 protein (figure, C). The CCP domains, containing approximately 60 amino acid residues, have been identified in several proteins of the complement system, which is part of the innate immune system. The CSMD1 gene is also found to be weakly expressed in most tissues, except in the brain, where it is expressed at an intermediate level in the cerebellum, substantia nigra, hippocampus, and fetal brain. 27 Both novel CSMD1 mutations were predicted to be highly pathogenic by various computational methods.
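A hedged sketch of the shared-variant logic just described, using toy per-sample tables (only the two CSMD1 changes are real; every other gene and variant ID is hypothetical): variants must be identical between the two affected members of family A, after which the surviving genes are intersected, at the gene level, with the variants found in family B.

```python
# Hypothetical per-sample variant tables: {gene: set of variant IDs}, already
# filtered to novel / very-low-frequency coding changes.
a1 = {"CSMD1": {"c.5885G>A"}, "GENE2": {"c.101C>T"}, "GENE3": {"c.77A>G"}}
a2 = {"CSMD1": {"c.5885G>A"}, "GENE2": {"c.350G>A"}, "GENE4": {"c.12T>C"}}
b1 = {"CSMD1": {"c.8959G>A"}, "GENE5": {"c.904del"}}

# Step 1: variants shared (identical) by both affected members of family A.
shared_a = {g: a1[g] & a2[g] for g in a1.keys() & a2.keys() if a1[g] & a2[g]}

# Step 2: genes carrying a shared family-A variant that are also mutated
# (by any rare variant, not necessarily the same one) in family B.
candidates = shared_a.keys() & b1.keys()
print(sorted(candidates))   # -> ['CSMD1']
```

Note that GENE2 drops out at step 1 because the two family-A members carry different variants in it, mirroring the requirement that the 26 SNVs be identical within family A while the family-B comparison is made at the gene level.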
We then screened both CSMD1 mutations in 372 control chromosomes, including 188 ethnicity-matched control chromosomes, and did not identify any additional mutation carrier. No mutation carrier was identified among 115 DNA samples of Spanish patients with PD tested through Sanger sequencing. However, a genome-wide significant association between a CSMD1 nucleotide variation (rs12681349; intron 2) and PD has recently been reported using the NeuroGenetics Research Consortium (NGRC) data set, which includes 435 fPD, 1,565 sporadic PD, and 1,986 control cases (table 2). 28 This association remained highly significant when including all PD (familial and sporadic) and sporadic PD cases. Although the association was also observed when considering only fPD, it was not statistically significant, probably because of power limitations.

DISCUSSION

Here we described the identification of a novel gene (CSMD1) mutated in unrelated Basque families with LOPD in which known PD genes were previously excluded. 18 Although patients from both families mainly presented with classical symptoms of PD, the disease course was observed with variable age at onset, ranging from 49 to 72 years, and variable phenotypic heterogeneity, with 2 patients presenting with resting tremor, akinesia, and rigidity, and later on postural instability (patients B1 and A1); 1 patient (patient B2) presenting with tremor of the hands aggravated by anxiety and RLS; and the youngest patient presenting with severe bradykinesia affecting her daily activities, RBD, and RLS, as well as atypical symptoms such as bilateral Babinski signs and ataxic gait (patient A2). We do not know whether patient B2, currently diagnosed with postural tremor, will manifest PD in the near future, but we believe that she might develop PD at a more advanced age, as her father did. Despite the advanced age of some of the patients, only 1 patient presented with dysexecutive mild cognitive impairment (patient B1). However, we cannot rule out that the other 2 reported mutation carriers, who are still relatively young, might develop some kind of cognitive dysfunction at an advanced age. Because the examined patients shared the same ethnicity and geographic region, we first searched for novel and rare genetic variations shared among all affected individuals, and although we did not find any mutation shared by all examined patients, we found the same gene, CSMD1, to be mutated in both families. Both CSMD1 mutations segregated with disease status, were not previously reported in public databases, were absent in a large number of neurologically normal individuals, affected residues that are highly conserved among orthologs, and were predicted to be pathogenic by various computational methods. This, along with the previously reported CSMD1 association with the risk of PD, 28 led us to believe in a possible role of CSMD1 genetic variability in the pathogenesis of PD. The CSMD1 gene contains 70 coding exons and encodes a large protein (3,564 aa) that contains multiple CUB and Sushi, also known as CCP, domains. It is primarily synthesized in the developing CNS and epithelial tissues, and its encoded protein is known to act as a regulator of complement activation and inflammation in the developing CNS. 29 Complement activation is essential for synaptic pruning and plasticity and has recently been implicated in several brain-related disorders and functions, including schizophrenia, Alzheimer disease (AD), immediate episodic memory, and information processing.
[30][31][32] In particular, based on mouse models of AD, both microglia and the complement pathway might act as early mediators of hippocampal synapse loss and dysfunction before plaque formation and neuroinflammation. 32 In addition, copy number variations within the CSMD1 locus have been reported in patients with AD vs controls, 33 further supporting a role of CSMD1 genetic variability and function in the pathogenesis of AD. Therefore, taking into consideration the previous associations of CSMD1 and the HLA region with the risk of PD and the identification of disease-segregating CSMD1 mutations in familial LOPD, we hypothesize that CSMD1 genetic variability might also contribute to PD pathogenesis through mechanisms implicated in immune-related synaptic dysfunction. Although it will be important to address these pathologic functions experimentally, this study may highlight the complement pathway as an important therapeutic target in PD, as has been suggested in AD.
Can a UV-C box help the cinema industry by disinfecting video cameras?

Summary

Introduction. UV-C has proven to be an effective virucide and microbicide, and its cost-effectiveness has allowed it to spread as a disinfecting procedure in different environments.

Methods. The study aims to determine the microbicidal activity of the UV-C Boxer by Cartoni S.p.A. on Staphylococcus aureus, Escherichia coli and SARS-CoV-2. Three separate experiments were performed to assess the effectiveness of the UV-C disinfection device on different materials, directly on the surfaces of a video camera, and on a specific carrier for SARS-CoV-2.

Results. In all three experiments, a significant abatement of bacterial and viral contamination was reached after 60 seconds on carriers and after 3 minutes on all examined surfaces of the video camera, with a higher reduction on glass carriers.

Conclusions. UV-C devices may be a valuable tool to implement in the working routine to achieve a higher level of safety in work environments.

Introduction

In recent years, studies on the ability of microbes to colonize the environment have increased considerably, as it has been shown that surfaces can be a source of infection for humans [1]. Any inanimate object that can carry infectious agents on its surface and, thus, spread them is called a fomite. It has been shown that the contamination of fomites in health facilities can be a means of infection, from the surfaces of patients' rooms to healthcare workers' tools [2]. Staphylococcus aureus, for example, is a pathogen associated with a broad spectrum of infections in both nosocomial environments and community settings [3]. Despite being ubiquitous on the skin of healthy individuals [4], it has become a relevant global health issue due to the development of antibiotic resistance. Methicillin-resistant S. aureus (MRSA) infections, in particular, are steadily growing in incidence and prevalence [4,5].
The World Health Organization (WHO) global report on antimicrobial resistance describes how MRSA represents at least 20% of all S. aureus isolates in all WHO Regions, with some areas reporting an 80% peak [7], making MRSA a global threat and its control a main challenge for global health. Alongside hospital-acquired MRSA (HA-MRSA), which is an important cause of mortality in nosocomial environments [5], community-associated MRSA (CA-MRSA) has recently come into the spotlight in medical research due to its incidence among people who have had no contact with healthcare environments [6]. CA-MRSA can be transmitted by direct contact between people and via shared objects and surfaces, considering that it has been shown to survive on surfaces for many days [7]. This has led to fomites being an important means of MRSA infections and outbreaks [8,9], favoring the spread of antimicrobial resistance. Gram-negative bacteria have also been shown to persist on surfaces and fabrics in hospital environments. Escherichia coli, in particular, is a very common cause of HAI [10], and there is particular focus on this microbe due to the recent rise of multi-drug resistant (MDR) strains carrying New Delhi metallo-β-lactamase-type carbapenemases [11]. In 2020, the sudden rise of the SARS-CoV-2 pandemic urged scientists to study the virus' characteristics, among them its transmission routes. The virus, counting more than 750 million confirmed cases and almost 7 million deaths as of the 22nd of March 2023 [12], is mainly transmitted via respiratory droplets and direct contact [13]; however, it is possible for the virus to contaminate high-contact surfaces and dry surfaces in hospitals [14]. In fact, Belluco et al. proposed a classification of the risk of a SARS-CoV-2 infection from surfaces based on three factors (virus source, time of exposure, and location of the surface), dividing the risk into "High, Medium, Low and Very Low" [15]. And while on the 5th of May 2023 the WHO declared that the pandemic no longer constituted a public health emergency of international concern [16], the need to control and study the virus has led to massive restrictions, including business shutdowns that have resulted in the loss of as many as 33 million jobs worldwide and, according to the International Labour Organisation's report, 'The most serious crisis since World War II: Job losses are increasing rapidly worldwide' [17].
To avoid these kinds of contamination, the disinfection of objects and surfaces is one of several precautions needed in various settings. As seen in nosocomial environments, good stethoscope disinfection practice is necessary to avoid MRSA contamination, but lax and unreliable cleaning habits have been reported among physicians and other healthcare professionals [18,19]. New technologies, like UV light devices, have been proven effective in disinfecting various healthcare environments and surfaces [20,21]; only recently has the scientific literature started exploring the potential of UV-C devices in home and work environments [22]. The correct use of UV-C technology takes the following parameters into account: distance from the light source (m), spatial light distribution, radiant power (W), irradiance (W/m 2 ), which is inversely proportional to the square of the distance, and irradiation times (min). This allows more accurate disinfection of objects that are exposed to an adequate dose of UV-C, where the dose (J/m 2 ) is the product of the irradiation time and the irradiance [23]. Simulation models that take into account the parameters described above make it possible to estimate the disinfection capacity of systems based on UV-C technology. In particular, once the dose corresponding to a specific reduction in microbial load has been established, they enable the relative UV-C irradiation times to be evaluated for each distance, and vice versa [24]. However, the literature about surface contamination and control with this type of technology in non-healthcare environments is scarce, and every surface in every work environment can be a fomite.

For the purposes of this study, the focus is shifted to the cinema industry, which was forced to halt its production by the COVID-19 pandemic, especially in the first half of 2020: movie theaters and production studios had to close for months, heavily impacting the market [25]. As described by the 2020 THEME Report [26], prepared by the Motion Picture Association, the global box office market was $12 billion in 2020, 72% lower than in 2019. The same report highlights that only 46% of the U.S./Canada population went to the cinema at least once in 2020, compared to 76% of the population in 2019. Video cameras, in particular, are tools that are shared among the crew and have frequent contact with different parts of the body: these factors result in video cameras being a potential route of transmission via fomite colonization. And while a protocol for the protection of workers in this sector was developed in 2020 [27], the experiments discussed in our study might be the first involving the cinema industry and the disinfection of commonly shared work tools, such as video cameras. This study aims to evaluate the microbicidal efficacy of a new UV-C device for the disinfection of cameras and cinema equipment. Equipment like this is often contaminated by hand contact and proximity to the nose, mouth, ears and conjunctivae. The performance of the device will be analyzed by placing carriers contaminated with selected microbes at sensitive spots on the camera.
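A minimal sketch of the dose-time relation and inverse-square scaling just described; the reference irradiance, reference distance, and target dose are placeholders, not measured values for the Cartoni device.

```python
def irradiance_at(distance_m: float, e_ref: float, d_ref: float) -> float:
    """Scale a reference irradiance (W/m^2 at d_ref metres) by the inverse-square law."""
    return e_ref * (d_ref / distance_m) ** 2

def exposure_time_s(target_dose_j_m2: float, irradiance_w_m2: float) -> float:
    """Dose (J/m^2) = irradiance (W/m^2) x time (s), solved for time."""
    return target_dose_j_m2 / irradiance_w_m2

E_REF, D_REF = 10.0, 0.25      # placeholder: 10 W/m^2 measured at 25 cm
for d in (0.23, 0.30, 0.50):   # carrier distances used in the experiments (m)
    e = irradiance_at(d, E_REF, D_REF)
    t = exposure_time_s(100.0, e)   # hypothetical 100 J/m^2 target dose
    print(f"d={d:.2f} m  E={e:5.2f} W/m^2  t={t:6.1f} s")
```

The sketch makes the trade-off explicit: doubling the distance quarters the irradiance and therefore quadruples the exposure time needed for the same dose.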
Materials and methods

First experiment

In this first experiment, two different bacteria were used: S. aureus ATCC 43300 and E. coli ATCC 8739. A 0.5 McFarland inoculum for each bacterial strain was prepared, and from each inoculum several scalar dilutions were performed. Then 100 µl of each dilution was spread on a 20 cm 2 carrier with a sterile spatula and left to dry inside the laminar flow hood. Three different materials were selected: metal carriers, glass carriers and plastic carriers. Carriers were then positioned horizontally in the UV box, 50 cm from the upper light sources of the device.

Second experiment

The contaminated device used for this experiment was a Sony Ampex CVR (BVW) 400P video camera (Sony, Tokyo, Japan), which was placed on the sliding container of the box. To conduct this study, it was necessary to locate selected spots on the video camera on which to place the test microbial samples, following two criteria: 1) spots with a high frequency of contact with human skin and, thus, very likely to be contaminated in everyday use of the device; 2) spots that the UV-C light might not reach directly, to test the microbicidal effectiveness of reflected light on the camera. Five spots were identified: Spot A, 23 cm from the light sources (handle position, facing the light sources); Spot B, 30 cm from the light sources (ocular position, not facing the light sources); Spot C, 33 cm from the light sources (lateral position, not facing the light sources); Spot D, 34 cm from the light sources (keypad position, not facing the light sources); Spot E, 50 cm from the light sources (shoulder pad position, opposite the light sources) (Fig. 2). The test microorganism for this experiment was S. aureus ATCC 43300. On each spot, a 20 cm 2 plastic carrier was placed, and the S. aureus inoculum was spread on each carrier with a sterile spatula and left to dry inside a laminar flow hood. A positive control was also prepared with another 20 cm 2 plastic carrier, which was left in the lab during the experiment, out of range of the UV radiation. The concentration of the inoculum in the treated samples and positive controls was 1.5x10 7 CFU/mL for each spot. The video camera was exposed to the UV-C light inside the closed box for 3 minutes. After the treatment, the samples were prepared following the same procedure used in the first experiment. This experiment was conducted in triplicate.

Third experiment

In the last experiment, SARS-CoV-2 was tested (Lot: SARS-CoV-2_COV2019 ITALY/INMI1) using the VERO E6 C1008 (ATCC CRL-1586) cell line as the host cell. We designed a support made of polylactic acid (PLA) and then printed it with an Anycubic FDM 3D printer (Shenzhen Anycubic Technology Co., Hong Kong, China). At both ends of the PLA support, two quartz carriers (UV-C permeable) were placed, and between them a plastic cap with the inoculum drop placed inside it. The PLA support was positioned at the centre of the sliding grid of the box (Fig. 3). The inoculum consisted of 100 µL of viral suspension. The titer of the viral suspension used was 10 6.88 TCID50/mL (6.88 expressed as Log 10 ). The device irradiated the surface for 3 minutes. Three samples inoculated with the virus were subjected to the action of the UV-C box as per protocol. In comparison, three samples were inoculated but not treated with UV, to determine the viral titer after recovery, and were examined immediately after inoculation. The collected suspensions were inoculated into a multi-well plate in which the VERO E6 cell cultures were fixed. Plates were incubated for three days at 37°C ± 2°C and 5% CO 2 in a humidified atmosphere.
After the exposure time, we tested the residual virus activity by evaluating the 50% Tissue Culture Infective Dose (TCID50).

Statistical analysis

In the database, the variables collected were the Petri dish ID, CFU/mL, microorganism species and inoculum concentrations. Preliminary statistical evaluations of the empirical data were performed using Microsoft Excel (ver. 16), and the statistical analysis was performed using Stata (ver. 16). For the experiments involving bacteria, the results of each experiment in triplicate were expressed as the mean CFU/mL for each test. The mean logarithmic reduction and its 95% confidence interval were calculated from the replicate data for each microorganism and compared with the positive controls.

First experiment

This experiment showed that the highest bacterial inactivation is reached for both strains at 60 seconds, although at 30 seconds there is already a significant reduction in the bacterial load. With a starting concentration of 1.5x10^7 CFU/mL, the highest reduction was seen on glass carriers, whereas the smallest reduction was seen on plastic carriers. The complete data can be seen in Table I.

Second experiment

These experiments showed that after 3 minutes of UV-C exposure of the video camera inside the Cartoni UV-C BOXER, there is a significant reduction in the bacterial load. After the 3-minute exposure to the UV-C light inside the box, the mean bacterial inactivation on the plastic carriers was 6.33 Log10 on Spot A, 4.74 Log10 on Spot B, 4.83 Log10 on Spot C, 4.89 Log10 on Spot D and, finally, 5.00 Log10 on Spot E (Tab. II). The results are similar to those obtained in the first experiment, despite the different exposure times. The findings also highlight the direct and indirect (reflected) effect of UV-C light on target objects.

Third experiment

The tests showed that for the carriers located on the device grids, a 5.37 Log10 reduction (>99.999%) was reached when tested against SARS-CoV-2, with an irradiation time of 3 minutes, for all three repetitions (Tab. III).
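For reference, the mean log10 reduction and its 95% confidence interval described in the Statistical analysis section can be computed along the following lines. This is a minimal Python sketch with made-up triplicate CFU counts, not the study's data or code.

# Minimal sketch (illustrative data): mean log10 reduction and its 95% CI
# from triplicate CFU/mL counts of treated samples vs. positive controls.
import numpy as np
from scipy import stats

control_cfu = np.array([1.5e7, 1.4e7, 1.6e7])   # positive controls (made up)
treated_cfu = np.array([8.0e1, 1.2e2, 6.0e1])   # UV-C treated (made up)

log_reduction = np.log10(control_cfu) - np.log10(treated_cfu)
mean_lr = log_reduction.mean()
sem = stats.sem(log_reduction)
ci_low, ci_high = stats.t.interval(0.95, df=len(log_reduction) - 1,
                                   loc=mean_lr, scale=sem)

# A 5 log10 reduction corresponds to a 99.999% kill, and so on.
pct_reduction = (1.0 - 10.0 ** (-mean_lr)) * 100.0
print(f"mean log10 reduction = {mean_lr:.2f} "
      f"(95% CI {ci_low:.2f}-{ci_high:.2f}), i.e. {pct_reduction:.5f}% kill")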
Discussion

We first wanted to test whether there is any significant difference in the microbicidal activity of the UV-C lamps between different types of surfaces. In the first experiment, the greatest reduction was observed on glass carriers, with total abatement for E. coli and between 6 and 7 log10 (between 99.9998% and 99.99999% reduction) for S. aureus at one minute of exposure. In contrast, the smallest reduction was observed on plastic carriers. A possible explanation for the different abatements on the carriers is a dissimilar hydrophobicity of the materials, which does not allow the same dispersion of the drop on each carrier; this may cause a superposition of the microbes exposed to the UV-C rays. Coughenour et al. showed that MRSA survives longer on plastic and vinyl, hypothesizing that these materials have a microscopically coarse structure, which provides more protection from dehydration, compared with glass, a smooth surface with the shortest survival time [28]. The next step was to see how the UV-C box performed on actual equipment rather than on carriers alone. As previously stated in the experiment setup, while selecting the spots we considered not only the direct or reflected exposure to the UV light but mainly areas of high contact with different body parts. While utilizing a video camera, the operator makes direct or close contact with several body sites such as the eyes, hands, mouth and ears.

The microbiological results showed a significant reduction in all five spots after the 3-minute irradiation inside the UV-C box. This experiment showed that the logarithmic reduction also depends on the carrier's position: the light, whether direct or reflected off the walls, irradiates the selected spots differently. The highest decrease was observed in Spot A (handle position), with a 99.99995% reduction of the bacterial load after 3 minutes of exposure, while the smallest logarithmic reduction was observed in Spot B (ocular position), with an abatement of 99.998%. Spot B was selected because it is in close contact with the human eye, with possible contamination by tears and proximity to the conjunctival mucosa.

As previously stated, MRSA can be pathogenic when transmitted via inanimate surfaces. While MRSA keratitis and post-operative endophthalmitis leading to poor visual outcomes have been reported, these kinds of infections are still very uncommon; not only is the percentage of MRSA eye disease quite low, but such cases generally present with a mild clinical history and a good response to first-line therapy [29]. Spot C, where the box reached a 99.998% reduction, was identified as a surface in contact with the ear, while Spot D is crucial because the presence of buttons and a display makes it a high-contact zone. Bacteria can widely contaminate computer keyboards [34] and mobile phone surfaces [35] because of their frequent use, and the 99.998% abatement obtained here is in line with other studies performed in different settings [30,31]. Of note is the interesting result obtained in Spot E (shoulder pad position), with a microbial reduction of 99.999%, where the light could reach the plastic carrier only via the reflective wall under the positioning grid.

We lastly tested the virucidal activity against SARS-CoV-2. A mean reduction of 5.37 Log10 across all three repetitions of the same test was reached with a 3-minute exposure.

Regarding SARS-CoV-2, a significant number of studies have shown the persistence of the virus on different types of surfaces and materials. Gonçalves et al. in 2021 showed that, while SARS-CoV-2 can be found on a wide range of surfaces, of different materials and in different environments, the presence of infectious virus on them is yet to be demonstrated, so it is not yet clear whether a COVID-19 infection from fomites is possible [32]. Considering the results obtained, the same considerations discussed in the previous paragraphs about the different camera spots also apply to SARS-CoV-2: the presence of the virus in the conjunctival sac can be a source of spread, and ocular manifestations may be part of the early symptoms of the disease, as stated in the meta-analysis by Zhong et al. [33].
The same study highlighted that conjunctival swab tests for viral RNA were positive in 3.9% of all patients. The study could neither confirm nor exclude the eye as a potential source of SARS-CoV-2 infection, also considering that the reported positivity rates of such swabs vary widely in the literature [34-36]. In all three experiments, the UV-C box managed to reduce the contamination of the different samples in a short time. These results confirmed the efficacy of UV-C disinfection against microbes such as MRSA and SARS-CoV-2, in line with other studies. The interest in UV-C disinfection comes from the possibility of designing devices that are easy to use in the everyday routine and from the reported resistance of some bacterial strains to common chemical disinfectant agents [37,38].

In the film industry, where expensive devices are used and shared every working day, it is crucial to preserve the integrity of the materials the devices are made of. UV-C, after long and repeated exposures, can irreversibly damage irradiated surfaces [23]. From the tests performed, we believe that the duration of the disinfection cycle is not sufficient to alter the physical properties of the camera and of the film recorded inside, even with consecutive cycles of irradiation. It must also be considered that chemical disinfectants may stiffen plastics if not used appropriately, and that the appropriate chemical must be chosen for each machine. Moreover, UV-C disinfection can represent a more environmentally friendly alternative to chemical disinfection. Although the lamps used in the Cartoni UV-C box do contain mercury, which represents a costly waste to dispose of, there is an increasing focus on LED UV-C lamps, which may become a solution for avoiding toxic waste and lowering the energy demands of disinfection devices. One of the possible limitations of this study is that there are no data on the energy doses at every spot on the camera, which prevents a thorough assessment of the dose/abatement relationship for each spot area. While we can expect a lower dose at sites that only reflected light could reach (consistent with the higher microbial reduction on Spot A, which directly faces the UV-C lamps), identifying a technical dose/reduction relationship can be a point of interest for future studies. Another limitation of this study is that it does not report any information about the potential transmission of microbes from the treated surfaces to the camera operators and vice versa, as in hand-to-surface contamination. Although there is plenty of evidence of the persistence of microbes on different surfaces and in different environments [39,40], the evidence for transmission of SARS-CoV-2 via fomites is weak [41] and needs further research. In addition, the opportunity to expand current knowledge in the field of UV disinfection, even at frequencies other than UV-C [42], could add information on the resistance mechanisms of microbes that persist for long periods on treated surfaces.
Conclusions

The microbicidal activity of the UV-C boxer was effective on three different types of materials within a short time of exposure to the UV light. Effective disinfection can be obtained with UV-C regardless of the position of the surface, with direct or reflected rays. Further engineering and research applications of this technology could encourage companies and workers outside the healthcare context to use this type of device to maintain a safe working environment. In combination with complementary disinfection techniques (e.g. chemical disinfectants) and adherence to established best practices, the use of this innovative tool has the potential to improve the overall safety standards of working environments, in particular by effectively reducing the risk of microbial contamination of various cinema equipment and surfaces.

The experiment was conducted between December 2020 and February 2021 at the Department of Molecular and Developmental Medicine, University of Siena, Italy. The UV-C device is a "Cartoni UV-C BOXER number BX0002", provided by Cartoni S.p.A. (Fig. 1). The UV-C boxer has a large sliding box-like container for the safe loading and disinfection of multiple pieces of gear at the same time. There are 10 UV-C lamps, "OSRAM PURITEC HNS UV-C", at 255 nm (0.9 Watt each) (OSRAM GmbH, Munich, Germany), equally distributed on the top of the internal chamber. All six internal walls are reflective, to allow the UV rays to reach every surface of the device to be disinfected. If the box chamber door is not safely locked, a switch sensor placed directly on the device door does not allow the UV-C lamps to be turned on. The UV-C lights are activated by closing the box and pressing the switch button. A timer control can be used to program the switching on and off of the device to set disinfection cycles. Three different types of experiments were conducted. The first was a test of inactivation of selected bacterial isolates at a fixed distance, with two exposure times and different carrier materials. The second experiment consisted of a disinfection test of a video camera with contaminated carriers attached at different spots on its surface. The third experiment involved an inactivation test for the SARS-CoV-2 virus placed in a plastic cap inside a polylactic acid support with two UV-C permeable quartz walls (on the upper and bottom parts).

Fig. 2. The video camera and the position of the selected spots for experiment 2.

Fig. 3. (a) The PLA support (grey part) used for the test. Inside the support, the viral inoculum has been placed in a plastic test tube cap (blue cap). (b) Placement of the PLA support on the metal grid of the device (view from above).

Carriers were exposed for 30 seconds and for 60 seconds to the UV-C rays. Additional carriers were placed out of reach of the UV-C radiation, covered with an aluminium shell outside the device (positive controls). After the treatment, exposed and non-exposed carriers were transferred to 90 mm Petri dishes, and 10 mL of Dey and Engley (D/E) neutralizing broth medium was added (Liofilchem S.r.l., Teramo, Italy). Subsequently, the D/E medium was transferred to a 50 mL Falcon centrifuge tube and spun for 40 minutes at 4500 rpm. Next, the supernatant was eliminated and the pellet re-suspended in 1 mL of D/E medium. Finally, 100 µL was transferred to a Mannitol Salt Agar Petri dish (Oxoid Limited, Hampshire, United Kingdom) for S. aureus, or a Brilliance E. coli/Coliform Selective Agar Petri dish (Oxoid Limited, Hampshire, United Kingdom) for E.
coli, and incubated at 36°C for 48 h. This experiment was conducted in triplicate.

This pandemic represented a challenge to step up technologies and techniques to keep every work environment safe. We conducted this test to see whether devices like the Cartoni UV-C box can be a practical solution for controlling fomite-borne infection in a peculiar work environment like movie studios. Cinema studios work in different environments, from open spaces to small rooms, where maintaining a safe distance can be problematic and equipment is shared.

Tab. I. CFU/mL logarithmic reduction of S. aureus and E. coli on plastic, metal and glass carriers after UV-C irradiation inside the box, experiment 1.

Tab. II. S. aureus ATCC 43300 CFU/mL logarithmic reduction on plastic carriers after UV-C irradiation inside the box, experiment 2.

* The value of Log TCID50 = 1.5 means total viral inactivation.
2023-09-02T06:18:08.373Z
2023-06-01T00:00:00.000
{ "year": 2023, "sha1": "dd8aedafd261cd7bff843ca8fdf9c04730ce214a", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "3e2b52058520162c00d200c852ef1764d0027446", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
264820732
pes2o/s2orc
v3-fos-license
A sebaceous cyst resolved by individualized homoeopathic medicine: A case report

A sebaceous cyst is a benign, encapsulated, subepidermal nodule filled with keratin. These cysts can occur anywhere on the body except the palms and soles. A 33-year-old patient presented with a cystic swelling of 6 months' duration. After a thorough assessment, an individualized treatment plan was created, and the most suitable homeopathic medicine, Calcarea carbonica, was prescribed. The sebaceous cyst resolved over the following months, with repetition of the dose as needed. The progress was documented with photographs taken from the same angle and under similar lighting conditions at each follow-up.

Introduction

A sebaceous cyst, also known as an epidermoid cyst, is a non-cancerous, enclosed nodule beneath the skin's surface filled with keratin. These cysts can appear in various places, including the scrotum, genitalia, fingers, and occasionally the mouth lining, but are most commonly found on the face, neck, and trunk, excluding the palms and soles. The cyst's development is gradual, and it can persist for years. These cysts are typically observed between the ages of 30 and 40 and are rare before puberty. They are more prevalent in males than in females, with a ratio of 2:1. Approximately 1% of these cysts have the potential to transform into squamous cell carcinoma (SCC) or basal cell carcinoma (BCC) [1,2]. Sebaceous cysts may be singular or multiple, spherical, and smooth. They have a border that gives way under pressure. Usually, there is a black spot on the swelling, known as a punctum, which is a blocked opening. The cyst contains greasy, thick, greyish material that can be expressed under pressure. In larger cysts, an indentation may be seen at the punctum. Usually, these cysts are asymptomatic until they rupture, causing an inflammatory reaction as keratin spills into the surrounding tissue [2]. The differential diagnosis for a sebaceous cyst depends on its location and may include pilar cyst, lipoma, abscess, neuroma, benign growths, skin cancer, metastatic cutaneous lesions, ganglion cyst, neurofibroma, dermoid cyst, branchial cleft cyst, pilonidal cyst, and calcinosis cutis [2]. Complications can include infection, ulceration, rupture, sinus formation, calcification, carcinomatous change, Cock's peculiar tumor, and sebaceous horn [2]. The conventional treatment involves complete excision of the cyst through an elliptical incision. If infected, drainage is performed first, followed by excision after the infection subsides [1]. Homeopathic medicines are reported to be effective in treating skin abscesses and boils, with a lower chance of recurrence after treatment [3,4]. This case study aims to demonstrate the effectiveness of individualized homeopathic medicines in treating sebaceous cysts.

Case report

A 33-year-old female patient presented with a lump on her back of six months' duration. The lump was initially hard and non-painful, and it gradually increased in size, eventually becoming soft. Upon clinical examination, the swelling appeared oval and uniform and had a regular outline with a sebaceous punctum. The patient stated she had not experienced any trauma or undergone any surgery in that region.

History of present complaints

The patient's complaint started six months prior with a hard swelling that gradually grew. She tried allopathic medicines for four months without significant relief. When the swelling did not respond to the medication and surgical intervention was suggested, she opted for homoeopathic treatment.
Past history

The patient's past history included a bout of chickenpox during childhood.

Family history

There were no noteworthy aspects in the family history.

Mental generals

Mentally, she exhibited a high level of anxiety about her health and had a fear of darkness.

Physical generals

Physically, she tended to feel cold and was prone to catching colds easily, and she had a good appetite, with a thirst for 2-3 liters of water per day. She had a preference for hot food and sweets. Her tongue was slightly coated and moist, her bowel movements were irregular and hard, and her urine was normal. She also had a tendency to sweat excessively, especially on the face and neck.

Local and systemic examination

The systemic examination was normal. On local examination, the swelling maintained its oval, uniform, and regular appearance with the sebaceous punctum. There was no tenderness, redness, or abnormal temperature in the area.

Analysis of the case

After analyzing the case, taking into account the mental and physical symptoms, a totality was constructed. The patient's mental state, thermal reaction, desire for sweets, hard and irregular stool, profuse perspiration, and the particular symptom were all included in this totality.

Miasmatic analysis of the case

As detailed in Table 1, a miasmatic analysis was also performed, indicating a predominantly Psoric miasm with elements of Sycotic manifestations [5].

Repertorization

Repertorization was carried out using the Repertory of the Homoeopathic Materia Medica by J.T. Kent and the Hompath Firefly software, indicating the medicines for the case shown in Fig. 1 [6,7]. The scores from highest to lowest are as follows: Calc carb > Kali-c > Kali-ar > Nit-ac > Sulph > Bar-c, etc. This process suggested potential medicines, with Calcarea carb being the highest-scoring remedy. Finally, Calcarea carb 1M, two doses, was prescribed after consulting the Materia Medica.

Table 3: Modified Naranjo criteria [8], with the score assigned for this case (response options: Yes / No / Not sure or N/A)

1. Was there an improvement in the main symptom or condition for which the homeopathic medicine was prescribed? +2
2. Did the clinical improvement occur within a plausible time frame relative to the drug intake? +1
3. Was there an initial aggravation of symptoms? (need to define in glossary) 0
4. Did the effect encompass more than the main symptom or condition, i.e., were other symptoms ultimately improved or changed? +1
5. Did overall wellbeing improve? (suggest using validated scale) 0
6a. Direction of cure: did some symptoms improve in the opposite order of the development of the symptoms of the disease? +1
6b. Direction of cure: did at least two of the following aspects apply to the order of improvement of symptoms: from organs of more importance to those of less importance; from deeper to more superficial aspects of the individual; from the top downwards? +1
7. Did "old symptoms" (defined as non-seasonal and non-cyclical symptoms that were previously thought to have resolved) reappear temporarily during the course of improvement? 0
8. Are there alternate causes (other than the medicine) that, with a high probability, could have caused the improvement? (consider known course of disease, other forms of treatment, and other clinically relevant interventions) +1
9. Was the health improvement confirmed by any objective evidence? (e.g., lab test, clinical observation, etc.) +2
10. Did repeat dosing, if conducted, create similar clinical improvement? 0

Total score: 9

As the total score is 9, the causal attribution is definite, and the improvement was attributed solely to the homoeopathic medicine.
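To make the scoring arithmetic explicit, here is a minimal sketch (ours, not the authors') that totals the item scores above and applies the attribution thresholds used in the Discussion (definite: 9 or more; probable: 5-8; possible: 1-4; doubtful: 0 or less).

# Minimal sketch: total the Modified Naranjo item scores reported in Table 3
# for this case and classify the causal attribution.
item_scores = [2, 1, 0, 1, 0, 1, 1, 0, 1, 2, 0]

def attribution(total: int) -> str:
    if total >= 9:
        return "definite"
    if total >= 5:
        return "probable"
    if total >= 1:
        return "possible"
    return "doubtful"

total = sum(item_scores)
print(total, attribution(total))  # -> 9 definite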
Discussion

The conventional treatment of a sebaceous cyst is surgical excision alone, but homeopathic medicines are also available for treating cystic growths. In this specific case, the homeopathic medicine Calcarea carbonica was prescribed based on homeopathic principles, and the cyst completely resolved within a few months. The final causal attribution score in this case was assessed using the Modified Naranjo Criteria, as proposed by the HPUS Clinical Data Working Group, June 2014. The total score was 9, thus suggesting a 'definite' association between the medicine and the positive outcome (definite: ≥ 9; probable: 5-8; possible: 1-4; and doubtful: ≤ 0). Further case studies and randomized trials are recommended to evaluate the effectiveness of individualized homeopathic medicines in treating sebaceous cysts.

Conclusion

Homeopathic medicine, chosen based on a comprehensive assessment of symptoms, demonstrated a promising treatment effect on a sebaceous cyst. This case also highlights the importance of tailoring treatment to each individual in homeopathy. While a study of a single case may not establish a definitive conclusion, the results are encouraging. Further research on such cases could generate more interest in, and consideration of, homeopathic medicine as a treatment option for these patients.

Acknowledgement

The authors are grateful to the patient for her active cooperation and participation.

Declaration of patient consent

The patient has provided consent for her images and other clinical information to be included in the journal. The patient is aware that her name and initials will not be disclosed, and every attempt will be made to protect her identity. However, complete anonymity cannot be assured.

Financial support and sponsorship

None.

Table 2: Follow-up schedule. Follow-ups are presented in a tabular format along with the photographs.

First visit: Hard lump on the back, non-painful; swelling appeared oval and uniform, with a regular outline and a sebaceous punctum. The patient had irregular, hard stool.
Second visit: Swelling decreased compared with before. No pain or tenderness was noted. The patient's anxiety had decreased. Stool was hard and irregular.
Third visit: Swelling gradually became soft; suppuration occurred, and pain and tenderness also appeared. Stool became regular. Placebo for 1 month.
Fourth visit (28/08/2021): The cyst started healing. No tenderness or pain left. Placebo for 1 month.
2023-11-01T15:17:53.025Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "b4872a75e309f95f163f13077a0ab74ef879e895", "oa_license": null, "oa_url": "https://www.homoeopathicjournal.com/articles/980/7-4-30-165.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "42b9f0453f5df18702752d3b751a97807a25f965", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
201650208
pes2o/s2orc
v3-fos-license
Impact of Strong Anisotropy on Phase Diagram of Superfluid $^3$He in Aerogels

Recently, an analog of Anderson's Theorem for the $s$-wave superconductor has attracted much interest in the context of the $p$-wave polar pairing state of superfluid $^3$He in a model aerogel in the limit of strong uniaxial anisotropy. We discuss to what extent the theorem is satisfied in the polar phase in real aerogels by examining the normal to polar transition temperature $T_c$ and the low temperature behavior of the superfluid energy gap under an anisotropy of moderate strength, and by comparing the obtained results with experimental data. The situation in which Anderson's Theorem clearly breaks down is also discussed.

Recent observations on superfluid 3He in anisotropic aerogels have clarified the profound roles of anisotropy for the superfluid phase diagram and properties. The polar pairing state [1] has been discovered in nematic aerogels with a nearly one-dimensional structure [2]. It has been found that this polar pairing state does not occur when the magnetic scattering effect due to the solid 3He localized on the surface of the aerogel structure is active [3]. This high sensitivity of the superfluid phase diagram to the type of "impurity" scattering is not easily explained within the original theoretical model assuming a weak global anisotropy of the aerogel structure [1,4]. It has been pointed out that, when the aerogel structure is in the limit of strong anisotropy, so that the scattering is specular along the anisotropy axis, the normal to polar transition temperature T_c(P) should be insensitive to the (nonmagnetic) impurity scattering strength [5]. Recently, this result, analogous to Anderson's Theorem in the s-wave superconductor [6], has attracted much interest [7,8] in relation to the low temperature behavior of the energy gap in the polar phase and to the robustness of the p-wave superfluid polar phase in relatively dense nematic aerogels.

Previously, various features seen in superfluid 3He in nematic aerogels [2,7] have been discussed based on the model assuming weak anisotropy [1]. Here, the anisotropy is measured by the size of the correlation length L_z of the random scattering potential. Once the puzzling result [3] brought about by the magnetic impurities is taken into account, the approach starting from the side of strong anisotropy may be more appropriate. Further, the polar phase has been detected so far only in nematic aerogels. One might then wonder whether the polar phase occurs only in the limit of strong anisotropy. However, L_z in nematic aerogels seems to be finite, taking into account the fact that splayed strands and crossings between straight strands are seen in real images of the nematic aerogels [2,8].

In this communication, the consequences of strong anisotropy for the phase diagram of superfluid 3He in aerogels with no magnetic scattering effect are studied in detail. It is found that, in the weak-coupling BCS approximation, the impurity-scattering independence of T_c approximately holds even in a scattering potential model with a finite correlation length L_z along the anisotropy axis, implying that Anderson's Theorem is apparently satisfied over a wide range of anisotropy strengths. Thus, we argue that, consistently with the original argument [1], the polar phase may be realized in aerogels with a global anisotropy of moderate magnitude.
Further, the dependences of the superfluid energy gap |∆(T)| on the strength of the impurity scattering and on the anisotropy are also examined, and the T^3 behavior arising from the horizontal line node of |∆(T)| in the polar pairing symmetry is found to be robust against changes of the impurity strength and the anisotropy. Further, the situation in which T_c(P) is also reduced, so that Anderson's Theorem is not satisfied, will be discussed as well.

First, let us describe how Anderson's Theorem arises in the context of the p-wave superfluid phase in an environment with nonmagnetic elastic impurity scattering. The starting model of our analysis is the BCS Hamiltonian, eq. (1), for a spatially uniform equal-spin paired state in zero magnetic field, where g is the strength of the attractive interaction, V is the system volume, |∆| is the maximum of the quasiparticle energy gap, and ξ_p is the quasiparticle energy measured from the Fermi energy µ. The total Hamiltonian H is the sum of eq. (1) and the nonmagnetic impurity potential term, eq. (2), where n(r) is the particle density operator. As usual, the impurity scattering can be modelled by the correlator, eq. (3), or its Fourier transform w(k) = ∫ d^3r W(r) e^{i k·r}, with ⟨u⟩_imp = 0, where ⟨...⟩_imp denotes the random average, N(0) is the density of states on the Fermi surface per spin in the normal state, and τ is the relaxation time of the normal quasiparticle in the case with no anisotropy. For simplicity, the Born approximation will be used to incorporate the impurity-scattering effect in the Green's functions for the quasiparticles in an equal-spin paired superfluid state. Then, we have a mean field problem for spinless Fermions, and solving the corresponding gap equation can be performed in quite the same manner as in the s-wave paired case [9]. The resulting gap equation can be expressed in the form of eq. (4), where ε = πT(2m + 1) with integer m, T_c0(P) is the superfluid transition temperature of the bulk liquid, and ⟨...⟩_p denotes the angular average over the unit vector p̂ on the Fermi surface. Further, in eq. (4), the functions given by eqs. (5) and (6) are the impurity-averaged Matsubara Green's functions [9].

As a model of the impurity correlator (3) in the presence of a stretched anisotropy favoring the polar phase, in which ∆_p = ∆ p̂_z, we will use the expression in eq. (7), where L_z is the correlation length defined along the anisotropy axis of the random distribution of the potential u(r), and the z-axis is chosen here as the anisotropy axis, or stretched direction. The size of the anisotropy is measured by |δ_u| = k_F^2 L_z^2, while the measure of the impurity strength is 1/(τ T_c0) [10], where k_F is the Fermi wave number. Then, the Fourier transform w(k) of W(r) becomes eq. (8), which has the following limiting cases. For weak anisotropy, |δ_u| < 1, this model reduces to the expression w(k) ≃ 1 − |δ_u| k_z^2 introduced in Ref. [1]. The opposite limit of infinite |δ_u| corresponds to the case in which the impurity scattering is persistent along the stretched direction. In this case, w(k) reduces to eq. (9), implying that, as sketched in Fig. 1(a), the scattering is specular along the z-axis. The present model (7), interpolating between the two limits mentioned above, has been used to study how the half-quantum vortex (HQV) pair, which should appear in the polar phase, survives in the PdB phase at lower temperatures [11].
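Before turning to the anisotropic calculation, a minimal numerical sketch of the isotropic baseline may be useful. In the limit w(k) = 1 (no anisotropy), nonmagnetic scattering is fully pair breaking for a p-wave state, and the linearized gap equation at T = T_c is expected to take the standard Abrikosov-Gorkov form ln(T_c0/T_c) = ψ(1/2 + 1/(4πτT_c)) − ψ(1/2), with ħ = k_B = 1. The following Python sketch (ours, not the authors' code) solves this relation; it is this strong suppression that the Anderson's-Theorem-like behavior at strong anisotropy removes.

# Minimal sketch: Abrikosov-Gorkov T_c suppression in the isotropic limit.
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def tc_ratio(x):
    """T_c/T_c0 for a dimensionless impurity strength x = 1/(tau*T_c0)."""
    if x == 0.0:
        return 1.0
    def f(t):  # t = T_c/T_c0
        return np.log(t) + digamma(0.5 + x / (4.0 * np.pi * t)) - digamma(0.5)
    try:
        return brentq(f, 1e-8, 1.0)
    except ValueError:  # pair breaking strong enough to destroy superfluidity
        return 0.0

for x in (0.0, 0.1, 0.5, 1.0, 1.5):
    print(f"1/(tau*T_c0) = {x:4.2f}  ->  T_c/T_c0 = {tc_ratio(x):.3f}")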
For any value of the anisotropy δ_u and the impurity strength 1/(τ T_c0), the polar to normal transition temperature T_c(P) and the superfluid gap |∆(T)| in the polar phase can be numerically obtained using eqs. (4) and (8). Now, it is easy to verify Anderson's Theorem for the polar pairing state with ∆_p = ∆ p̂_z in the limit of strong anisotropy. In fact, by applying eq. (9) to eq. (5), any 1/(τ T_c0) dependence in the last term of eq. (4) is cancelled between the denominator and numerator of the term, and, as in the s-wave pairing case, the gap equation (4) reduces to its expression in the clean limit, i.e., for the bulk liquid.

Next, let us determine the polar to PdB transition temperature T_PB(P). In the present weak-coupling approach, the polar-distorted A (PdA) phase [1,2,8] does not appear, and, on cooling, the polar phase is transformed into the PdB phase through a continuous transition. Since the real polar to PdA transition is also continuous [1,2], however, the T_PB line obtained here is expected to be qualitatively comparable with the polar to PdA transition line. The T_PB(P) line is easily obtained according to the diagrams sketched in Fig. 2, representing the gap equation linearized with respect to the order parameter of the PdB state, using the quantities characterizing the polar pairing state. Then, T_PB is given by the temperature T satisfying eq. (10).

Examples of the T_c(P) and T_PB(P) curves obtained numerically from eqs. (4) and (10) are presented in Fig. 3, where the experimental data on T_c0(P) [12] were used. As is seen in Fig. 3(a), where a moderately large anisotropy |δ_u| = 30 is used, T_c(P) depends only weakly on the impurity strength τ^{-1}. In general, for a stronger anisotropy, the τ^{-1}-dependence of T_c becomes weaker, while the corresponding dependence of T_PB becomes stronger. At higher pressures, the pressure dependence of T_c/T_c0(P) is quite weak, reflecting the proximity to the limit of strong anisotropy in which Anderson's Theorem is exact, while T_c/T_c0 is lowered at low enough P-values because of an increase of the dimensionless impurity strength 1/(τ T_c0(P)). In contrast to T_c, however, T_PB is quite sensitive to the impurity strength and rapidly decreases with increasing 1/(τ T_c0). Thus, the temperature range of the polar phase is wider for lower P. Further, as Fig. 3(b) shows, an increase of the anisotropy extends the region of the polar phase: with increasing anisotropy |δ_u|, T_c increases and approaches T_c0, while T_PB decreases and approaches its finite value in the limit of strong anisotropy (see Fig. 3(b)). In any case, the temperature range of the polar phase at a fixed P becomes wider with increasing anisotropy and/or impurity strength.

The results on T_c(P) in Fig. 3 will be compared with the corresponding curves determined in experiments [2,3,7,13]. In aerogels, an effective decrease of the porosity leads to an enhancement of the "impurity" scattering via the aerogel structure [3]. In fact, Fig. 1(a) and Fig. 2(a) and (c) in Ref. [13] have shown a slight decrease of T_c and a drastic decrease of the transition line to the PdA phase due to a reduction of the porosity. This tendency of the two transition curves is consistent with the features seen with increasing 1/(τ T_c0) in Fig. 3(a).
By combining this observation with the T_c curve under a large enough anisotropy, |δ_u| = 3 × 10^3 in Fig. 3(b), it is natural to regard the nematic aerogels in which the polar phase of superfluid 3He is realized as random media with a finite correlation length of the scattering potential. Nevertheless, examining the superfluid polar phase in nematic aerogels by starting from the limit of strong anisotropy, where Anderson's Theorem is satisfied, is a proper description.

Next, as another quantity related to Anderson's Theorem, let us examine the temperature dependence of the energy gap |∆(T)| of quasiparticles in the polar phase. As indicated elsewhere [8], the energy gap difference |∆(0)| − |∆(T)| estimated from the NMR frequency data in the polar phase at 30 (bar) is proportional to T^3, reflecting the presence of a line node in |∆(T)|. Since the relevant energy scale at low T is not T_c but |∆(0)|, we will express the T^3 behavior in the form of eq. (11). This relation, if satisfied in the polar phase in aerogels, would indicate that, irrespective of the presence of the impurity scattering effect, the line node of |∆(T)| in the polar phase remains well defined. According to the calculation [8] in the weak-coupling approximation and clean limit, the coefficient a takes the value 8.49, while the a-value estimated from NMR data in a nematic aerogel at 30 (bar) was 0.38 (∆(0)/T_c)^3 [8]. According to Ref. [8], this estimated coefficient may become comparable with the weak-coupling value 8.49 in the limit of strong anisotropy if the strong-coupling effect [12] enhancing |∆(0)| is taken into account. In a strongly anisotropic case, |δ_u| = 3 × 10^3, we have obtained the value a = 8.97, comparable with the weak-coupling value mentioned above. On the other hand, as presented in Fig. 4, our results for the moderately strong anisotropy, |δ_u| = 30, clearly show an effect of the impurity scattering on the coefficient of the T^3 term. Here, a stronger scattering strength, (2πτ)^{-1} = 1 (mK), than those used in Fig. 3 was used, to obtain a wider polar region at lower temperatures. Although the T^3 behavior is still well defined for T < 0.65 T_c0 irrespective of the pressure value, the coefficient a is, as mentioned in the caption of Fig. 4, enhanced especially at lower pressures. If the strong-coupling effect is taken into account, according to Ref. [8] the coefficient ā ≡ a (T_c/∆(0))^3 at 30 (bar) would remarkably decrease, so that the estimated value ā = 0.38 [8] may be explained. At zero pressure, however, the strong-coupling effect is not effective, so that the coefficient ā of the T^3 behavior at lower pressures may show a large value of order unity. Examining the T^3 term of the energy gap at low pressures may thus become a test of the present theory.

It is valuable to point out that the p-wave Anderson's Theorem on the superfluid transition temperature is also satisfied in the case of a normal to (distorted) A phase transition under plane-like defects with no two-dimensional momentum transfer (see Fig. 1(b)), if the l-vector of this A phase is oriented along the normal of the plane of the defects. In fact, when the bare pairing vertex is p_k (δ_{j,k} − ẑ_j ẑ_k), and eq. (9) is replaced by the form proportional to δ(k_x) δ(k_y), the superfluid transition temperature resulting from eq. (4) becomes T_c0 irrespective of the strength of the impurity scattering.
In principle, such a situation can be realized in planar aerogels and would result in an extension of the temperature width of the planar-distorted A phase region at lower pressures and hence, according to Ref. [15], in a realization of HQVs in the chiral A phase.

As is well known in the context of dirty s-wave superconductors, Anderson's Theorem breaks down in systems with strong enough impurity scattering, due to the impurity effect in the repulsive channels of the quasiparticle interaction [16]. In fact, the T_c(P) curve reported in Fig. 4 of Ref. [3] shows a remarkable deviation from T_c0(P). Further, it is possible that the τ^{-1} dependence of T_c(P) obtained under a finite anisotropy, of the type seen in Fig. 3(a), occurs even in the limit of strong anisotropy, due to the above-mentioned mechanism associated with Anderson localization, because 1/(τ T_c0(P)) is the pressure-dependent strength of the impurity scattering. To clarify to what extent this Anderson localization effect is effective in real systems, further comparison between the theoretical results and new data will be necessary in the future.

In conclusion, we have investigated to what extent Anderson's Theorem is satisfied in the polar phase by assuming the correlation length of the random potential in nematic aerogels to be long but finite. It has been found that the low temperature behavior of the superfluid energy gap stemming from the presence of the horizontal line node is robust against the impurity scattering and that the resulting phase diagram is qualitatively consistent with the available experimental data.

One of the authors (R.I.) is grateful to Vladimir Dmitriev and Bill Halperin for useful discussions. The present work was supported by JSPS KAKENHI (Grant No. 16K05444).
2019-08-29T11:06:34.000Z
2019-08-28T00:00:00.000
{ "year": 2019, "sha1": "7a776778ce39b037372724a4a52da1c37309842d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1908.10712", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f3fe9b9ed1f4809450759b0bdb92280c1b2bf886", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
219331577
pes2o/s2orc
v3-fos-license
A pilot open-label feasibility trial examining an adjunctive mindfulness intervention for adolescents with obesity

Background Obesity in adolescence is predictive of obesity in adulthood and risk for chronic disease. Traditional behavioral approaches to addressing obesity in adolescence rarely yield meaningful changes in body mass index (BMI), suggesting that adjunctive treatments are necessary. Herein, we describe a study examining whether it is feasible to integrate a brief mindfulness intervention with the usual recommended care for adolescent obesity in a pediatric weight management clinic. Methods We conducted a single-arm open-label trial with 11 adolescent patients with obesity. Participants received the recommended standard of medical management of obesity (usual care) plus a six-week mindfulness intervention. To assess our primary aim of feasibility, we examined recruitment, retention, and satisfaction rates. Participants also completed measures of mindfulness, emotion regulation, disordered eating, quality of life, and executive functioning, and had their BMI and blood pressure measured. Results We recruited 11 adolescents to participate in the intervention, with 8 (73%) completing the entire program. Attendance rates (85%) and satisfaction rates (100%) were promising for a larger trial. While preliminary analyses of changes in health outcomes should be examined with caution, effect sizes ranged from small to large, with some promising trends in eating behaviors. Discussion It might be feasible to augment existing behavioral interventions for adolescents with obesity with brief mindfulness; however, some adaptations are needed to enhance recruitment and retention. The lessons learned in this feasibility study can inform an adequately powered efficacy trial. Trial registration This research is registered on ClinicalTrials.gov (NCT03874377).

Introduction

Obesity in adolescence is directly related to risk for cardiovascular disease and type 2 diabetes mellitus [1,2]. Traditional behavioral approaches to treating obesity assume that risk can be reduced through adolescents' adoption of healthy lifestyle behaviors [3]. At present, standard-of-care lifestyle management interventions for adolescents with obesity do not reliably yield significant weight loss [4], suggesting that adjunctive treatment components are necessary [5]. This paper describes a pilot open-label trial to examine the feasibility of a brief, adjunctive mindfulness intervention tailored to the needs of adolescents with obesity.

Adolescence is a time of dramatic change as youth go through puberty and experience a number of neurobiological changes that influence their behavior and decision-making [6]. Adolescents face stressors unique to their age group, such as initiating romantic relationships and gaining autonomy from caregivers, that can lead to maladaptive coping strategies if not addressed properly [7]. Mindfulness appears to be a particularly promising strategy for helping adolescents navigate these challenges and cope with them effectively. Mindfulness refers to the experience of paying attention to the present moment in a nonjudgmental manner [8]. Existing literature regarding the receptiveness of adolescents to mindfulness-based interventions (MBIs) indicates that adolescents are generally accepting of mindfulness practice [9].
When analyzed for efficacy, MBIs designed for adolescent populations have been shown to improve self-regulation [10], exercise levels [11], depressive symptoms [12], and resilience [13]. Conducting MBIs with adolescents also has some potential drawbacks, including poor attendance when programming is optional [14], participants' difficulty adhering to MBI regimens [15], and a lack of standard adaptations for MBIs in adolescent populations [16].

Mindfulness-based strategies have been successfully incorporated into weight loss and weight management interventions with adults [17,18], grounded in the understanding that mindfulness training enhances emotion regulation and one's awareness of decision-making processes and internal experiences [19]. Emotion regulation difficulties in adolescents are associated with increased emotional eating and binge eating [20,21]. Longitudinal research suggests that emotional eating influences risk for adiposity and metabolic disorders as adolescents move into adulthood [22-24]. Adolescents with higher impulsivity, or the tendency to act rashly when experiencing distressing emotions, are most likely to engage in emotional eating and binge eating [25]. Previous research indicates that impulsivity and emotion regulation, key drivers of disordered eating, are not necessarily stable traits and can be modified through behavioral interventions [26,27]. Mindfulness is posited to reduce impulsivity and improve emotion regulation skills by allowing individuals to become more comfortable with negative emotions without hastily reacting to them [28]. Mindfulness has also been associated with changes in brain regions involved in emotion regulation and inhibition that are relevant to weight management [29]. MBIs have shown particular promise in improving obesity-related eating behaviors in adults, including binge eating and emotional eating, with effect sizes ranging from medium to large [18,30].

At present, there is a scarcity of studies examining the effects of MBIs in adolescents with obesity (defined as a body mass index > 30 kg/m^2), and it is unclear whether such interventions are feasible for this population in tandem with standard medical management of obesity. Further, although research suggests that mindfulness can improve insulin resistance in adolescents with obesity [31], it is unknown whether mindfulness can influence other comorbidities in this population, including markers of cardiovascular disease (CVD) risk. Kumar et al. [32] conducted four 90-min family-based mindful eating sessions with adolescents with obesity and their parents; while this program yielded promising retention and attendance rates, it did not yield significant changes in blood glucose, BMI, or total cholesterol. In contrast, a mindful eating intervention for Latina girls yielded reductions in BMI, but retention rates were only 57% [33]. Therefore, in addition to examining the feasibility of this intervention in a hospital weight management clinic setting, we will also examine the preliminary effects of the MBI on health outcomes, including quality of life and CVD biomarkers associated with obesity, in order to establish the potential for change.
The proposed study had two specific aims: (1) to examine the feasibility of conducting an open-label trial of the mindfulness intervention in a sample of 15 adolescents with obesity, including recruitment, retention, and satisfaction rates; and (2) to establish the potential for change through the measurement of clinical outcomes including BMI, emotion regulation, eating- and weight-related behaviors, quality of life, impulsivity, and blood pressure at baseline and post-intervention.

Study setting

This study took place at a multidisciplinary, research-oriented weight management clinic located within a large children's hospital in the Mid-Atlantic, from February through August of 2019. This clinic accepts patients with obesity aged 6 months to 21 years. Approximately 100 new patients are seen annually. Approximately 39% of the patients seen are Black or African American, 28% are Hispanic American, and 20% are White American. Sixty-five percent of the patients have Medicaid, and 34% are privately or commercially insured. The clinic follows the recommended standard of medical management of overweight and obesity. The clinic team conducts a comprehensive evaluation to assess the dietary and activity behavior change needs of each patient and family, as well as obesity-associated comorbidities. In addition to management of comorbidities, goals for improved physical activity and dietary behaviors are set with the patient and family at each visit.

Inclusion and exclusion

Adolescents were eligible if they were between the ages of 12 and 17, a current patient of the affiliated obesity clinic, and had a BMI greater than 30 kg/m^2. Adolescents were ineligible if they had a known genetic cause of obesity, had been diagnosed with a severe intellectual or learning disability, had been diagnosed with an autism spectrum disorder or current psychosis, or were currently in psychotherapy. These exclusion criteria were selected because these factors might influence the degree to which an adolescent responds to behavioral weight loss treatment and/or their ability to actively take part in a standard mindfulness intervention.

Recruitment

Medical providers and other clinic staff were informed about the study and referred interested patients' parents/caregivers to the research team. Interested patients were screened for eligibility, and eligible patients were consented in the presence of a parent/caregiver and scheduled to complete a baseline assessment.

Participant assessment schedule and measures

Participants completed a series of assessments within a private clinic room at the obesity clinic. These measures were completed at baseline and post-intervention, unless otherwise stated.

Feasibility outcomes

Assessing recruitment feasibility

Recruitment feasibility was examined through the research staff's detailed tracking of recruitment processes (e.g., referrals from physicians, patients approached in the waiting room), the number of interested patients, and the number of patients eligible after the initial screening. We documented all cases of ineligibility and the reason for disqualification.

Assessing retention feasibility

Retention feasibility was tracked via attendance; research staff took attendance during each assessment period and intervention session.
Any enrolled participants who dropped out (defined as participants who either explicitly stated that they would like to leave the program or who missed two consecutive sessions without contacting the research team) were contacted by phone to inquire about their reasons for ending the program, as a method of assessing any barriers to intervention completion.

Assessing participant satisfaction

We assessed participants' satisfaction with the mindfulness intervention by having them complete a satisfaction survey at the end of the intervention. This survey evaluated the following: (1) reactions to the topics discussed and skills reviewed, (2) comfort with the facilitators, (3) opinions of the materials used, and (4) overall satisfaction with the intervention at that point in time. Ten of these items were scored on a Likert scale of 1 (strongly disagree) to 5 (strongly agree) (e.g., "Participating in this program helped me to better manage my eating habits"), and eight of these items invited open-ended responses (e.g., "What were the challenges of participating in this study?"). Participants were also asked to report their favorite and least favorite components at the end of each session. This feedback will inform potential adaptations made to the intervention prior to a larger randomized controlled trial (RCT).

Participant self-report measures

Demographic questionnaire

The demographic questionnaire assessed age, gender, and race/ethnicity. This measure was completed only at the baseline assessment.

Mindful attention awareness scale-adolescent

The Mindful Attention Awareness Scale-Adolescent (MAAS-A) is a 15-item measure of dispositional mindfulness. Participants rated how frequently they experience episodes of mindless behavior (e.g., "I find myself doing things without paying attention.") on a scale of 1 (almost always) to 6 (almost never). This measure has established psychometric properties in diverse adolescent samples [34-36].

Eating disorders examination-questionnaire

Participants reported instances of overeating, loss-of-control eating, and binge eating over the last 4 weeks via 3 items of the Eating Disorders Examination-Questionnaire (EDEQ) [37]. This measure was selected to examine potential changes from baseline to post-assessment in eating behaviors that are associated with emotion regulation and risk for obesity and that might be influenced by mindfulness training [22-24]. The EDEQ yields reliable and valid scores [38]. Adolescent responses to the EDEQ correspond strongly with clinical interviews assessing disordered eating [39].

Difficulties in emotion regulation scale-short form

The Difficulties in Emotion Regulation Scale-Short Form (DERS-SF) is an 18-item, widely used self-report measure of emotion regulation problems that has been validated in adolescent samples [40]. This measure was selected to examine potential changes from baseline to post-assessment in emotion regulation, which is theorized to be a key driver of disordered eating (e.g., binge eating, overeating) in adolescents with obesity [41].

Youth quality of life instrument-short form

The 15-item Youth Quality of Life Instrument-Short Form (YQOL-SF) measures generic quality of life in youth with and without chronic conditions, ages 11-18 years. It has established psychometric properties in adolescent samples [42,43].

Executive function measure

Go/No-Go task

The Go/No-Go Task examines inhibitory control via a computerized program.
This measure was selected to examine potential changes from baseline to post-assessment in impulse control, which is associated with binge eating [25] and obesity [44], and to provide evidence of the cognitive mechanisms through which MBIs might improve eating behaviors. Participants are instructed to press a button (or "Go") when a certain image is shown on the screen (i.e., an image of food). They are instructed not to respond (or "No-Go") when another image is shown on the screen (i.e., an image of a toy) [45]. The entire task takes approximately 15 min, and each image is shown on the screen for approximately 500 ms. Poor impulse control is evident in more failures to inhibit responses in the No-Go condition (e.g., a false alarm). An omission occurs when a participant fails to respond to a Go stimulus. Reaction time is the processing speed for correct Go trials. This task demonstrates reasonable reliability and validity in adolescent samples [46,47].
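As an illustration of the scoring just described, the following minimal sketch (ours; the trial data are invented) computes the three Go/No-Go indices: false alarms, omissions, and mean reaction time on correct Go trials.

# Minimal sketch (invented data): scoring a Go/No-Go run. Each trial records
# the stimulus type ("go" or "nogo"), whether the participant responded, and
# the response time in ms (None when there was no response).
trials = [
    {"kind": "go",   "responded": True,  "rt_ms": 412.0},
    {"kind": "go",   "responded": False, "rt_ms": None},   # omission
    {"kind": "nogo", "responded": True,  "rt_ms": 380.0},  # false alarm
    {"kind": "nogo", "responded": False, "rt_ms": None},   # correct inhibit
]

false_alarms = sum(t["kind"] == "nogo" and t["responded"] for t in trials)
omissions = sum(t["kind"] == "go" and not t["responded"] for t in trials)
go_rts = [t["rt_ms"] for t in trials if t["kind"] == "go" and t["responded"]]
mean_rt = sum(go_rts) / len(go_rts) if go_rts else float("nan")

print(f"false alarms={false_alarms}, omissions={omissions}, mean RT={mean_rt:.0f} ms")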
Anthropometric and CVD biomarker measures

Anthropometrics

Height and weight were assessed in order to calculate BMI. Height was measured to the nearest 1/8 in. using a wall-mounted stadiometer. Weight was measured in indoor clothing, without shoes, to the nearest 0.1 lb using a calibrated digital scale.

Blood pressure measurement

Blood pressure was measured using an automatic upper arm cuff while the participant was seated. Participants were instructed to sit quietly, with both feet uncrossed on the floor and their arm in a still position, to increase the accuracy of each measurement.

Intervention

The mindfulness intervention consisted of 6 weekly sessions, blending material from the evidence-based Learning to BREATHE [48] and Mindfulness-Based Eating Awareness Training [19] manualized interventions. Participants met individually with a therapist for each 60-min session. Sessions focused on the following: experiential mindfulness exercises, such as mindful eating, loving-kindness practices, breath awareness, and mindful movement; hunger and satiety awareness; improving responses to emotions; practicing acceptance and being non-judgmental; and tolerating negative feelings and sensations, including those related to hunger and cravings. Participants were assigned brief homework exercises (approximately 10 min) daily between appointments. Sessions were offered via telemedicine to improve attendance.

Adverse events and criteria for discontinuation

Expected risks to participants in this study were generally mild. There was the unlikely but possible risk that participants could experience negative emotions during the practice of mindfulness. Participants were informed about this risk during the consent process and encouraged to inform a member of the research team if they had a strong negative reaction. Mindfulness facilitators were trained to provide adaptations to the mindfulness exercises in order to reduce the intensity of these experiences and ground the participant in the present moment. If an individual had continued to experience intense negative emotions that interfered with their ability to participate in the program despite these adaptations, they would have been withdrawn from the study and provided with appropriate referrals. The investigators and research staff met regularly to discuss participants' reactions to the assessments and intervention and any study withdrawals. No adverse events were reported.

Facilitators and training

The mindfulness components of the intervention were facilitated by two interventionists with established mindfulness practices: one graduate-level clinical psychology student and one first-year medical student. Each facilitator received extensive supervision from the first and second authors (both licensed clinical psychologists) via weekly meetings. Participants met with the same facilitator throughout the course of the intervention. The usual care components were facilitated by medical providers within the obesity clinic.

Adequacy of sample size

Our study's primary aim was to examine feasibility and refine aspects of the research approach; therefore, a formal sample size calculation was not warranted [49]. Given budgetary and logistical constraints, our aim was to recruit 15 adolescents over a 6-month period through a single clinic site, which is sufficient to provide useful information about the feasibility of the protocol [50].

Data analysis

IBM SPSS Statistics version 24 was used to complete all quantitative data analyses. Frequencies were used to examine feasibility, including recruitment and eligibility rates, rates of attendance at each session and assessment appointment, and satisfaction with the intervention. Paired t tests were conducted to examine pre-post differences in the health outcomes of interest, and Cohen's d effect sizes and confidence intervals were calculated to examine the size of any changes in BMI, mindfulness, emotion regulation, eating behaviors, quality of life, impulsivity, and blood pressure [51,52]. For participants who did not complete the post-assessment (n = 3), we used last observation carried forward imputation.

Data security

All participant data were kept in a secure, locked location in the PI's research lab. Participant data were also stored electronically on an encrypted data "cloud" that was accessible only from university servers, which are firewalled and password protected to guard against data loss or theft and to avoid any potential breach of subjects' privacy and confidentiality. Participant names were replaced with identification numbers to maintain confidentiality. Only study staff had access to identifiable information, and all of them were required to complete the Collaborative Institutional Training Initiative (CITI) course in human subjects protection.
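To illustrate the analysis plan above, here is a minimal Python sketch (ours, with invented scores rather than study data) of last-observation-carried-forward (LOCF) imputation followed by a paired t test and a Cohen's d computed on the difference scores, one common convention for paired designs.

# Minimal sketch (invented data): LOCF imputation, paired t test, Cohen's d.
import numpy as np
from scipy import stats

baseline = np.array([4.0, 3.5, 5.0, 2.0, 4.5, 3.0, 4.0, 5.5, 3.5, 2.5, 4.0])
post     = np.array([3.0, 3.0, 4.5, 1.5, 4.0, 2.5, np.nan, np.nan, 3.0, 2.0, np.nan])

# LOCF: dropouts (NaN at post-assessment) keep their baseline value.
post_locf = np.where(np.isnan(post), baseline, post)

t, p = stats.ttest_rel(baseline, post_locf)
diff = post_locf - baseline
d = diff.mean() / diff.std(ddof=1)  # Cohen's d on paired difference scores

print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")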
Results
Figure 1 depicts the flow of study participation. Twenty-one adolescent patients of the obesity clinic were screened for eligibility. Five of these patients were ineligible for the study because they were currently in mental health counseling (n = 4) or had a BMI under 30 (n = 1). Sixteen (76%) of the screened adolescents were eligible and 11 enrolled in the study (73% female; 64% Black/African American, 18% Hispanic/Latino, 18% White; M age = 14.36, SD = 1.90 years; M BMI = 35.70, SD = 5.28; BMI range 31.37 to 50.30). Eight of the 11 enrolled participants (73%) completed post-assessment. Table 1 presents recruitment, retention, attendance, and satisfaction rates. Recruitment was slower than expected, and we did not reach our goal of 15 adolescents (final enrollment was 11 adolescents). Retention rates were slightly below expectations (73% versus an expected 80%), while attendance and satisfaction rates were promising (85% and 100%, respectively).

Table 1. Recruitment, retention, attendance, and satisfaction (expected vs. observed).
Recruitment (goal: 15 enrolled; observed: 11 enrolled): Twenty-one adolescent patients were screened for eligibility; sixteen (76%) were eligible, and 11 enrolled in the study over a 6-month period.
Retention (expected: 80%; observed: 73%): Three enrolled participants dropped out of the program, due to scheduling difficulties (n = 2) or for family reasons (n = 1).
Attendance (expected: 85%; observed: 85%): On average, participants completed 85% of the 6 intervention sessions. Fifty-four percent (n = 6) attended all of the required sessions.
Satisfaction (expected: 90%; observed: 100%): Participants were asked to rate their satisfaction with various facets of the program on a scale of 1-5. All participants who completed the satisfaction scale averaged a score of 4 or higher across these items, indicating they were satisfied with the program.

Regarding preferred modality of intervention delivery, 8 participants opted to complete the intervention via telemedicine, 2 participants completed the sessions using a mix of in-person and telemedicine appointments, and 1 participant completed the intervention completely face-to-face. Participants were asked to rate their satisfaction with various aspects of the program on a scale of 1 to 5, with higher scores representing greater satisfaction. Participants responded most favorably to items assessing comfort with program staff (M = 5, SD = 0), improved ability to explain mindfulness to others (M = 4.50, SD = .76), and improved ability to be mindful on one's own (M = 4.50, SD = .53). Participants responded least favorably to an item assessing their use of mindfulness in their daily life (M = 3.88, SD = .64). Table 2 presents participant open-ended responses regarding self-reported benefits of participation, challenges to participation, and changes noticed in oneself post-participation. Additional file 1 includes all participant responses to open-ended satisfaction items.

Table 2. Participant open-ended responses.
Benefits of the program: "Helped control eating and portions"; "Learning when I'm actually hungry and when I'm stress eating"; "Learning about mindfulness and sharing it with others"; "Help[ed] manage weight loss and emotions"; "Losing weight and watching what I eat".
Challenges of the program: "Coming every Tuesday"; "Wishing good to people who have been bad to me"; "Remembering homework"; "Finding a good time [for appointments]"; "Freeing up time each week".
Changes noticed in self: "I've become more peaceful and able to control my emotions"; "Being able to calm down easier"; "Thinking about strategies before healthy choices"; "Falling asleep easier"; "Mindfulness and meditation in my daily routine".

Examination of health-related outcomes
Paired samples t tests were used to examine potential differences from baseline to post-assessment in BMI, mindfulness, emotion regulation, eating behaviors, quality of life, impulsivity, and blood pressure, with the understanding that we were severely underpowered to detect significant differences. Table 3 presents each outcome, the means at baseline and post-assessment, the effect size, confidence intervals for the mean change, and the p value. The largest changes observed according to Cohen's d calculations were in overeating (Cohen's d = -.69, a large effect size) and Go/No-Go reaction time (Cohen's d = .78, a large effect size). The only change that reached significance was Go/No-Go reaction time (p = .05); the decrease in overeating approached significance (p = .07).

Discussion
Traditional behavioral weight management programs have limited efficacy in treating obesity in adolescence [4,5]. MBIs might enhance the efficacy of existing behavioral interventions by enhancing adolescents' cognitive functioning and emotion regulation processes [29], which in turn could improve eating and weight-related behaviors [17]. Our study examined whether such an intervention is feasible for adolescent patients with obesity, in preparation for a potentially larger efficacy trial. We also examined preliminary changes in impulsivity in adolescents (as measured through the Go/No-Go task), thus providing potential evidence of the cognitive mechanisms through which MBIs might improve eating behaviors. An additional innovation of our study was the focus on cardiovascular outcomes, although an adequately powered RCT is needed in order to understand whether MBIs might improve risk reduction for cardiovascular disease in adolescents with obesity regardless of weight reduction.

Our primary aim was to determine the feasibility of implementing this intervention in a pediatric outpatient weight management clinic. Although prior research with adolescents suggests that recruitment is generally feasible for adjunctive interventions in weight loss clinic settings [53], and for MBIs specifically [31,33], we encountered some difficulty in meeting our target recruitment of 15 participants. Systematic reviews suggest that the majority of clinical trials do not meet their recruitment goals [54,55], with nearly 20% of trials ending prematurely due to unsuccessful recruitment [56]. Adolescents often have to balance school and after-school activities with medical treatment, as well as rely on parents or caregivers for transportation to appointments. Given these constraints, families might have been wary to allow their adolescent to participate in an intervention extending beyond usual care. We believe developing more realistic eligibility criteria per the Clinical Trials Transformation Initiative [57] might have enhanced our ability to recruit patients. In particular, one of our exclusionary criteria was concurrent mental health treatment. We learned that the majority of adolescent patients were also working with a psychologist or counselor (and indeed, these patients were often the most interested in our study), but physicians could not refer them to our pilot study. We originally decided to exclude these adolescents from the pilot study because some of the content that was practiced during the intervention (e.g., stress reduction, emotional regulation skills) overlaps with techniques and practices that frequently occur in counseling. Therefore, to limit external influence on the effectiveness of the intervention, we included concurrent mental health treatment as an exclusionary criterion. Although maintaining this as an exclusionary criterion in a larger RCT would uphold internal validity, inclusion of adolescents in current mental health treatment would enhance the feasibility of recruitment, along with enhancing the generalizability of the findings to the average adolescent seeking obesity treatment. Other strategies to enhance recruitment might include conducting outreach beyond clinics in order to maximize the number of potential participants.

Other feasibility outcomes were more promising. Our attendance rate was 85% and consistent with prior mindfulness interventions with adolescents [31,58]. Anecdotally, participants rarely kept their originally scheduled appointment times, and interventionists had to remain flexible to reschedule participants (often at the last minute) for another appointment time each week.
Notably, the participants in our study were given the option between in-person and telemedicine sessions, and the majority chose the latter, suggesting that future work in this area should offer telemedicine delivery of mindfulness sessions to counteract scheduling and transportation barriers.

Regarding satisfaction, adolescent feedback was generally quite positive. Adolescents evaluated the program, staff, and benefits of participation highly, perhaps because our intervention included aspects of mindfulness that have been identified as most preferred by adolescents, including hands-on exercises, tools to manage stress in the moment, and techniques to improve the quality of interpersonal relationships [59-61]. Notably, in this pilot we had no method of examining whether the benefits observed post-intervention might have been related to participants' positive relationship with the facilitator, as opposed to the intervention itself, and no way to control for effects between facilitators. A larger efficacy trial should include an attention-matched control group and appropriate modeling of facilitator effects [62].

Participants reported having the most difficulty completing homework and engaging in mindfulness activities on a daily basis due to competing responsibilities like school work or part-time jobs. Prior research with adolescents indicates that a lack of incentives and/or consequences for homework completion reduces motivation to complete it, particularly when it is perceived as competing with their schoolwork [58]. Future programming might consider briefer mindfulness exercises that adolescents can take part in anywhere (e.g., three mindful breaths) as opposed to more formal, time-consuming practices (e.g., body scan recordings). Programs might also consider incorporating mindfulness apps (e.g., Calm, Insight Timer) to facilitate daily practice between appointments and to allow objective tracking of time spent engaging in mindfulness. Overall, we believe these feasibility outcomes support progression to a larger-scale efficacy trial, pending the adaptations to exclusion criteria, recruitment methods, and intervention delivery described herein.

Encouraging trends were observed in some of the health outcomes of interest, although causality cannot be assumed given the lack of a control group. While the change was not statistically significant, BMI was reduced on average by nearly three points, a promising outcome for reducing youth's risk for chronic disease [63,64]. This might have been related to youth's reports that mindfulness increased their ability to monitor their eating habits and pay attention to cues for hunger and fullness. Indeed, youth's reduction in overeating approached significance, as they reported nearly two fewer overeating episodes per month. A previous mindfulness intervention with adolescents [33] and several mindfulness interventions with adults [65,66] have yielded reductions in BMI, although the majority of studies suggest that mindfulness alone is not sufficient to lead to weight loss [17]. In contrast to mindfulness intervention research with adults [67,68], we did not observe the hypothesized reductions in blood pressure. This could have been due to the brief timeframe of the intervention, variation in the time of day at which blood pressure was measured, or the inability to control for medications known to affect blood pressure.
All findings must be interpreted with extreme caution given that the confidence intervals of the effect sizes included zero and that there was no control group. Regarding Go/No-Go task performance, we observed a significant difference in reaction time, in which participants took longer to react on "go" trials after completing the intervention. The significant increase in reaction time might reflect increased processing and awareness of the "go" stimuli by participants before responding, suggesting a less impulsive response [69]. No significant changes were observed in the false alarm and omission rates; however, it is worth noting that the false alarm rate (a primary measure of inhibitory control) exhibited a reduction following the intervention with a medium effect size. The lack of significant changes in the false alarm and omission rates might also indicate that the Go/No-Go task is not the most sensitive measure of inhibitory control. Prior mindfulness research has suggested that changes in neural indicators of response inhibition following a mindfulness intervention have not corresponded with improvements in behavioral performance on the Go/No-Go task [70].

Considering our intervention in the context of broader adolescent obesity treatment recommendations, MBIs seem to align well with established expert committee recommendations for pediatric obesity, including aspects of the seminal model of care put forth by Barlow and colleagues [71,72]. Stage one of this model addresses prevention of obesity through healthy lifestyle habits (e.g., a nutritious diet, physical activity). Research shows that mindfulness can modify behavioral habits, such as emotional eating and binge eating [17]. Reduction in these habits can help prevent the progression of weight gain in patients at risk for obesity. Stage two of this model includes providing more support and structure for the adolescent with the goal of achieving healthy eating and physical activity-related behaviors. Guided mindfulness practice provides a structured, goal-oriented intervention for adolescents. MBIs also seem to align with the third tier of this model, which emphasizes a comprehensive multidisciplinary approach with an emphasis on behavioral modification through the use of frequent office visits and specialists. Scheduled mindfulness sessions provide consistency as well as opportunities for clinicians to follow up on the patient's care management. Weekly mindfulness sessions require patients to make more frequent visits to the office as well as expand the scope of their treatment. This was demonstrated directly in our pilot study, as patients scheduled mindfulness sessions either right after their visits with their physician or on separate occasions.

Limitations of our study include the lack of a control group, which precludes our ability to determine whether any improvements observed in health outcomes were the result of the mindfulness intervention as opposed to outside variables, including usual care. Further, the small sample size limited our statistical power to identify significant changes when they occurred, and any preliminary results must be interpreted with extreme caution. Finally, our sample was primarily African American and female, with limited representation of other racial/ethnic groups and boys; therefore, the findings may not be generalizable to other samples of adolescents.
Despite these limitations, examining the efficacy of this intervention in a larger and more rigorous trial (pending adaptations to recruitment methods and exclusion criteria) should improve outcomes for adolescents with obesity and provide a better understanding of how mindfulness might influence eating behaviors.

Conclusions
In summary, this study aimed to examine whether a brief adjunctive mindfulness intervention was feasible to conduct with adolescent patients with obesity in a pediatric weight management clinic. While participant attendance and satisfaction rates were promising, recruitment and retention proved more challenging. The lessons learned will inform a larger efficacy trial involving a more extensive examination of CVD functioning and including an attention-matched control group. Should this intervention prove successful, it could be easily translated into other weight management clinics, given that the intervention is manualized and does not need to be implemented by clinicians with extensive psychological training. Mindfulness may be a successful tool for improving emotional regulation and decision-making in adolescents with obesity, leading to improved weight loss, health outcomes, and quality of life.

Additional file 1: Responses to open-ended satisfaction questions for all program completers
Texture methods for evaluating meat and meat analogue structures: A review

Meat analogue products are considered to help consumers reduce their meat consumption. Their key success factor is their high similarity in sensory properties compared to meat. Even though the structure and texture characteristics of meat are well documented, dedicated methods for analysing meat analogues are still limited. This review summarises texture and structure analysis methods for meat and meat analogues: mechanical testing (for example, Texture Profile Analysis), spectroscopy (for example, NMR) and imaging techniques (for example, hyperspectral imaging). Furthermore, the advantages and limitations of each texture and structure method are described. Finally, characterization aspects specific to meat analogues are discussed, and promising methods for future research are described that have the potential to provide more insight into the fibers of meat analogues and into structure development during thermomechanical processing of meat analogues.

Industrial relevance: To be commercially successful for large groups of consumers, alternatives for meat should be highly similar to meat. That is why meat analogues should resemble existing meat in their texture. It is thus important to understand texture properties with the help of relevant techniques, such as mechanical, spectroscopy and imaging techniques. In this manuscript, we describe promising texture methods for the characterization of properties specific to meat analogues. The development of novel techniques to quantify meat analogue properties will stimulate the development of meat analogues that satisfy the values and wishes of consumers.

Introduction
Plant protein-based meat analogues that mimic the sensory properties of meat could be a route to help consumers reduce their meat consumption (Elzerman, Hoek, van Boekel, & Luning, 2011; Hoek et al., 2011; Michel, Hartmann, & Siegrist, 2021). A reduction of meat consumption might lower the environmental footprint of the diet, because meat production involves intensive use of land, water and energy (Tilman & Clark, 2014; Weinrich, 2019). However, the different nature of plant materials compared to meat renders the imitation of meat texture a challenge. For example, plant proteins do not naturally occur in a fibrillar orientation (Fuhrmeister & Meuser, 2003; Sun & Arntfield, 2010; Taherian et al., 2011). Although meat products differ widely in their properties, they share many characteristics that they do not share with plant proteins. For example, at very small length scales, meat muscle consists of myofibrillar proteins and myoglobin organized into a hierarchical fibrillar structure that is not easily replicated in plant-based meat analogues. The unique juiciness of meat is also a result of this hierarchical structure (Frank, Oytam, & Hughes, 2017). Moreover, many of the unique meat properties depend strongly on the internal structure of the meat, which spans length scales from 100 nm to 100 μm. To be commercially successful in the short term, and for large groups of consumers, alternatives should not deviate too much from the current meal and should thus resemble existing meat in their texture (Elzerman et al., 2011; Hoek et al., 2011; Michel et al., 2021). Two approaches exist to make meat analogues: top-down and bottom-up. The latter approach aims at mimicking the full hierarchical structure of meat, but these methods are laborious and require more resources than the top-down approach.
Examples of the top-down approach are shear cell technology and extrusion. Extrusion is widely used industrially to make currently available meat analogues. However, the fibrousness of meat analogues from plant proteins created via a top-down approach is typically less hierarchical. An important question, though, is whether similarities at a larger length scale are already sufficient for similar sensory properties. The first step towards such insights is a characterization of the structures at different length scales for both meat and meat analogues.

The texture of meat has been widely studied. Many analytical techniques and methods are established for meat and fish, including sensory evaluation and mechanical methods. While the existing methods are quite adequate for meat, it is not clear whether they would also be sufficient to characterize the differences between meat and plant-based matrices. The objective of this paper is therefore to understand the potential of the analytical methods developed for meat to be used for meat analogues as well. To investigate this, we review the available methods on their suitability for analysing plant-based meat analogues. We then assess whether they cover the complete parameter space and describe the need for new techniques specifically for those properties of plant materials that differ from meat products.

Instrumental techniques for texture of meat and meat analogues
Although texture is 'the combination of the rheological and structure (geometrical and surface) attributes of a food product perceptible by means of mechanical, tactile, and where appropriate, visual and auditory receptors', as defined in 2008 by the International Standards Organization (ISO, 2008), most techniques are focused on instrumental testing. Instrumental techniques to measure the texture of meat and meat analogues are often used instead of sensory experiments, as the latter are expensive, time-consuming and difficult to make quantitative. Instrumental techniques provide objective information on different structural parameters. Meat texture is characterized by different methods, each of which analyses meat products at a certain length scale. Typical approaches to study the texture and structure of meat and meat analogues include mechanical, spectroscopy and imaging characterization methods. This paper summarizes the basic technologies and the most recent advances of those technologies for different types of meat (i.e. beef, pork, and poultry) and meat analogues (i.e. shear cell structures and extruded products) (Fig. 1).

Mechanical techniques
Traditionally, texture is evaluated with mechanical methods. Such methods are used to analyse the mechanical properties of a product through compressing, shearing and/or pulling. Mechanical methods are applied to all kinds of food products, such as cheese, candy and pasta, but also meat and meat analogues. A limitation of the mechanical methods is that they are destructive; tested products cannot be used for other applications afterwards.

A folding test is often performed as the first mechanical test. The test assesses the structural failure of both meat and meat analogue products based on a five-point grading system (Kamani, Meera, Bhaskar, & Modi, 2019). It is an easy and fast method to obtain basic information about the texture of a product, but it is not fully quantitative. After performing the folding test, one or more of the following tests are done.
Figure 1. Destructive and non-destructive texture and structure methods used for meat (M) and meat analogues (MA). Abbreviations: WB, Warner-Bratzler; TPA, Texture Profile Analysis; NIR, Near-infrared; MIR, Mid-infrared; SA(X)S, Small-angle (X-ray) scattering; (SE)SANS, (Spin-echo) Small-angle neutron scattering; CLSM, Confocal laser scanning microscopy; SEM, Scanning electron microscopy; TEM, Transmission electron microscopy; AFM, Atomic force microscopy; MRI, Magnetic resonance imaging; XRT, X-ray tomography.

The Warner-Bratzler test measures the maximum shear force as a function of knife cutting movement through a meat product (Novaković & Tomašević, 2017). It is difficult to give a precise physical meaning to the Warner-Bratzler shear force because it measures a combination of shearing, compression and tensile stress, making it more a measurement of overall quality attributes (Voisey, 1976). Nevertheless, the Warner-Bratzler test is used to analyse the texture of different types of meat products, in particular whole muscle products and sausages (Table 1). The probe of the Warner-Bratzler test consists of a single blade with a V-shaped notch (Morey & Owens, 2017). This blade is used to cut through the meat product, usually perpendicular to the longitudinal positioning of the muscle fibers, but some studies additionally measure the parallel direction (Cierach & Majewska, 1997). Furthermore, previous studies suggested that differences in the device, blade, product diameter or settings used influence the results (Novaković & Tomašević, 2017; Pool & Klose, 1969; Voisey & Larmond, 1974; Wheeler, Shackelford, & Koohmaraie, 1996). Thus, standardization will be important to obtain results with the Warner-Bratzler method that allow comparison between studies.

A few studies use the Kramer Shear Cell test to measure meat texture in addition to the Warner-Bratzler test (Table 1). This test simulates a single bite into a piece of food. The principle is similar to the Warner-Bratzler test, but it has multiple blunt blades arranged in parallel that correspond to specific slots in the base of the cell (Barbut, 2015; Morey & Owens, 2017). Products, often multiple at once, are placed in the cell; the products are compressed and sheared when the blades push the products through the slots. The resulting parameters are averages of the forces required to shear the full product (Morey & Owens, 2017). This makes it possible to measure products with an uneven surface, for example. Similar to the Warner-Bratzler test, the Kramer Shear Cell test does not evaluate a single mechanical property. Instead, it measures a combination of the effects of compression and shear, which could be seen as a limitation of the method. Xiong, Cavitt, Meullenet, and Owens (2006) compared the potential of the Kramer Shear Cell and the Warner-Bratzler method for the prediction of sensory tenderness of chicken breast, and found that the shear values correlated well with descriptive sensory attributes as well as consumer sensory attributes. Another study also indicated that both methods were successful in evaluating rabbit meat tenderness and presented similar levels of correlation with sensory scores (Bianchi, Petracci, Pascual, & Cavani, 2007). For both methods, the products need to have a specific thickness. This means that these methods can only be used on meat and meat analogues (extruded products, sheared products, patties, sausages, etc.) that fulfil these requirements.
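Both tests ultimately reduce a recorded force-deformation curve to a few summary values. A generic reduction might look like the following sketch (illustrative only; commercial texture-analyser software performs this step internally, and exact definitions vary between instruments and studies).

```python
import numpy as np

def shear_test_summary(displacement_mm, force_n):
    """Peak shear force (N) and cutting work (N*mm, trapezoidal rule)
    from one force-displacement curve of a blade cutting through a sample."""
    d = np.asarray(displacement_mm, dtype=float)
    f = np.asarray(force_n, dtype=float)
    peak_force = float(f.max())  # the classic Warner-Bratzler value
    work = float(np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(d)))  # area under the curve
    return peak_force, work

# Toy curve: force rises to a peak and falls as the blade passes through.
d = np.linspace(0.0, 20.0, 201)
f = 60.0 * np.exp(-(((d - 8.0) / 4.0) ** 2))
print(shear_test_summary(d, f))
```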
While the methods are therefore suitable within a pre-defined range of similar products with a limited variation of parameter values, it is not clear yet whether these methods would also allow comparison with plant-based meat analogues, which can have quite different properties. The Kramer Shear Cell has not yet been used to measure textural properties of meat analogues, as far as the authors are aware.

Another mechanical test is the tensile test, which measures the resistance of a product against tearing. A product is mounted between two grips and extended in the tensile direction at a fixed speed until failure. Tensile parameters such as maximum rupture force, breaking strength and energy to fracture can be calculated from the obtained stress and strain values. In general, tensile specimens have a dumbbell (dog-bone) shape to direct the stress towards the middle of the product and induce failure at the intended location. Tensile tests are used on a wide product range, such as sausages, frankfurters, ham and whole muscle products (Table 1) and, in the past, also meat patties (Beilken, Eadie, Griffths, Jones, & Harris, 1991; Spadaro & Keeton, 1996). Tensile tests have also been applied to meat analogues (Dekkers, Nikiforidis, & van der Goot, 2016; Schreuders et al., 2019). The ratio between the tensile strengths parallel and perpendicular to the (muscle-)fiber orientation provides insight into the anisotropy of the product (Barbut, 2015; Dekkers, Nikiforidis, & van der Goot, 2016). For both meat and meat analogues, a few studies calculate this anisotropic index (Dekkers, Hamoen, Boom, & van der Goot, 2018; Krintiras, Göbel, Van Der Goot, & Stefanidis, 2015; Schreuders et al., 2019). Christensen, Purslow, and Larsen (2000) studied the tensile properties of whole beef meat as well as single muscle fibers and perimysial connective tissue. The use of a mechanical testing method on single muscle fibers is unique and not feasible with other mechanical testing methods. Therefore, the tensile test might be able to probe the texture of products at a smaller length scale than the other mechanical testing methods mentioned. This would allow measuring the tensile strength of single meat analogue fibers from, for example, calcium-caseinate materials.

Another mechanical method to quantify food texture is the single compression test, which is often performed as an axial compression between two flat plates (Barbut, 2015). The products have to be smaller than the contact area of the probe in use. Products can be compressed until failure, or to a certain level of deformation. Single compression tests are not used often (Table 1), as a double compression test, often called Texture Profile Analysis (TPA), can provide more information within a single experiment. The same reliability considerations apply to single compression tests as to TPA tests (Lepetit & Culioli, 1994). TPA is a compression technique that combines multiple textural parameters such as hardness, chewiness, adhesiveness, cohesiveness and springiness in a single measurement. The TPA parameters can be divided into primary parameters (hardness, springiness, adhesiveness and cohesiveness) and secondary parameters (gumminess, chewiness, resilience) (Novaković & Tomašević, 2017). Primary parameters can be determined directly from the obtained force/time graph, while secondary parameters are derived from the primary parameters. The test is based on simulating the biting action of the mouth by a two-cycle compression series (Barbut, 2015). TPA tests are widely applied to meat analogues and meat products ranging from whole muscle products to emulsified sausage products (Table 1).
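One common set of conventions for deriving these parameters from the two-cycle force/time curve is sketched below. Definitions differ between instruments and authors (springiness, for instance, is sometimes computed from distances rather than times), and adhesiveness, which requires the negative force area after the first stroke, is omitted for brevity.

```python
import numpy as np

def tpa_parameters(force_cycle1, force_cycle2, time_to_peak1, time_to_peak2):
    """force_cycle1/2: positive force samples (constant sampling rate) from the
    two compression strokes; time_to_peak1/2: time from contact to peak force.
    Because cohesiveness is a ratio of areas, summing equally spaced samples
    is sufficient here."""
    f1 = np.asarray(force_cycle1, dtype=float)
    f2 = np.asarray(force_cycle2, dtype=float)
    hardness = float(f1.max())                   # peak force of the first bite
    cohesiveness = float(f2.sum() / f1.sum())    # area ratio: resistance to a second bite
    springiness = time_to_peak2 / time_to_peak1  # recovery of height between bites
    gumminess = hardness * cohesiveness          # secondary parameter
    chewiness = gumminess * springiness          # secondary parameter
    return {"hardness": hardness, "cohesiveness": cohesiveness,
            "springiness": springiness, "gumminess": gumminess,
            "chewiness": chewiness}
```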
A puncture test is similar to a compression test, but the probe contact area is much smaller than the size of the product, for example through use of a needle-shaped probe. During a puncture test, the material is compressed to a certain strain by a probe to quantify properties such as maximum force, breaking strength and penetration depth. According to Barbut (2015), it is commonly used for restructured products and emulsified meat products. However, the literature only shows the use of a puncture test on chicken breast, meat patties and meat analogue sausages (Table 1). Penetration force, as measured with the puncture test, was found to be lower in sausages based on plant proteins than in those based on poultry (Kamani et al., 2019). This indicated that the breaking force required to penetrate the outer skin of plant protein sausages is lower than in chicken sausages. In addition, the penetration depth of plant-based sausages was used as a measure for the strength of binding agents (Arora, Kamal, & Sharma, 2017). As meat and meat analogues are often heterogeneous in structure, it can be hard to obtain compression-type measurements that are representative of the whole product. A recent technique, multi-point indentation, characterizes the local mechanical texture of meat and meat analogues by mapping the elastic moduli as measured with a spherical probe of radius 1 mm (Boots et al., 2021).

Romero de Ávila, Cambero, Ordóñez, de la Hoz, and Herrero (2014) studied the mechanical properties of commercial cooked meat products by both TPA and tensile tests. They showed that the parameters from the TPA could be used to construct models to predict tensile test parameters such as breaking strength and energy to fracture, removing the need for tensile tests. Furthermore, Ruiz De Huidobro, Miguel, Blázquez, and Onega (2005) recommended the TPA method over the Warner-Bratzler method to predict meat texture, on the basis of a better correlation with sensory data and a higher accuracy. Similar conclusions were drawn by Caine, Aalhus, Best, Dugan, and Jeremiah (2003), who showed that TPA parameters correlated better with variations in sensory results of beef tenderness than the Warner-Bratzler test. Similar to the Warner-Bratzler test, the TPA test requires standardized testing methods for trustworthy comparison between studies, and such tests can probably only make reliable correlations in a limited parameter space.
For both meat and meat analogues, textural elements can be studied with the Warner-Bratzler test, the tensile test, the TPA test and other compression techniques. The Kramer Shear Cell has only been used to quantify the texture of meat products but offers several benefits, such as the possibility to measure uneven products; it might therefore be a future direction for texture analysis of meat analogues. Furthermore, the recently developed multi-point indentation technique shows high potential to characterize heterogeneous meat and meat analogue structures. All described mechanical techniques analyse texture at the macroscale, except for the tensile test, which can be used to analyse single muscle fibers at a smaller length scale. Therefore, we believe that the tensile test may also be used to analyse single fibers from meat analogues in the future. Standardized testing methods are of great importance for all mechanical tests described in this review, to enable comparison of different products (meat and meat analogues) and to translate the quantitative analysis into sensory properties.

Spectroscopy
Spectroscopy (infrared, Raman, fluorescence polarization, NMR and light scattering) provides insight into the local composition (mostly at the surface of the product), intermolecular interactions and anisotropy of meat and meat analogues (Table 2). Proteins, lipids, water and other substances may be localised and quantified simultaneously. Spectroscopy is direct and non-invasive and usually requires only small samples.

Table 2. Overview of spectroscopy techniques used in studies on meat and meat analogues from 2005 onwards. (In the original table, red and green indicate that the method is used for meat and meat analogues, respectively.)

Infrared (IR) spectroscopy provides information on the chemical composition by measuring infrared absorption spectra. The spectrum can be used to characterize specific chemical bonds in products and can yield information about the composition, but also about the state of individual substances. In meat products, Fourier Transform IR spectroscopy (FTIR) was used to monitor conformational changes of myofibrillar proteins and connective tissue (Kohler et al., 2007; Perisic, Afseth, Ofstad, & Kohler, 2011). In meat analogues, FTIR was used to identify structural changes (such as in α-helix and β-sheet content) after processing of zein, pea and spirulina/lupin proteins (Beck, Knoerzer, & Arcot, 2017; Mattice & Marangoni, 2020; Palanisamy, Töpfl, Berger, & Hertel, 2019).

A near-infrared (NIR) spectrum is often divided into two sections: the short-wave near-infrared spectral region (SW-NIR) of 780-1100 nm and the long-wave near-infrared spectral region (LW-NIR) of 1100-2526 nm (Cheng et al., 2013). The spectrum shows broad overlapping peaks and large baseline variations, which requires mathematical processing to extract compositional information (Subramanian & Rodriguez-Saona, 2009). In meat products, NIR spectra were used to predict the chemical composition (such as crude protein, intramuscular fat, moisture/dry matter, ash, gross energy, myoglobin and collagen), technological parameters (water holding capacity, Warner-Bratzler and slice shear force) and sensory attributes (juiciness, tenderness or firmness) (Prieto, Roehe, Lavín, Batten, & Andrés, 2009). This would fully eliminate the need for destructive analysis methods like mechanical measurements. However, prediction is limited to a small range of products and is further hindered by the heterogeneity of intact meat products and inconsistent product preparation. Another study showed the analysis of food raw materials (such as skimmed milk powder, chicken meat powder, soy protein isolate, pea protein isolate and wheat flour) for the presence of several potential food adulterants (nitrogen-rich compounds, foreign proteins and bulking agents) (da Costa Filho, Cobuccio, Mainali, Rault, & Cavin, 2020).
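The chemometric step behind such NIR predictions is typically a multivariate regression from the spectrum to a reference measurement. The sketch below illustrates the idea with a partial least squares (PLS) regression; the spectra, reference values and model settings are synthetic and purely illustrative, not taken from any of the cited studies.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))  # 60 samples x 200 spectral variables (synthetic)
y = 2.0 * X[:, 50] - X[:, 120] + rng.normal(scale=0.1, size=60)  # toy reference values

# In practice the number of latent variables is chosen by cross-validation.
pls = PLSRegression(n_components=5)
r2 = cross_val_score(pls, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f}")
```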
Raman spectroscopy provides information on secondary protein conformation (i.e. α-helix and β-sheets) as well as on the amino acid composition (Overman & Thomas, 1999). In meat products, Raman spectroscopy has been successfully correlated with quality parameters such as protein solubility, apparent viscosity, water holding capacity, instrumental texture methods and fatty acid composition (Herrero, 2008). Furthermore, Raman spectra could be correlated with sensory attributes (i.e. juiciness and chewiness) of pork loins (Wang, Lonergan, & Yu, 2012) and could identify structural changes of muscle food components (proteins, lipids and water) due to handling, processing and storage (Pérez-Santaescolástica et al., 2019).

Fluorescence polarization spectroscopy analyses the natural fluorescence of a product. In meat, tryptophan is the major intrinsic fluorophore. It is a constituent of the proteins, which have two preferential directions of alignment, parallel and perpendicular to the muscle fiber direction. Fluorescence polarization was used to characterize the structural organization and modifications related to sarcomere length in meat caused by processing (Luc, Clerjon, Peyrin, Lepetit, & Culioli, 2008) and for in-line detection of cold shortening in bovine muscle (Luc et al., 2008). In meat analogues, fluorescence polarization can be used to characterize the anisotropy in high-moisture extruded soy protein (Yao, Liu, & Hsieh, 2004). This method is based on the theory that the polarization state of fluorescent light is affected by the structure of a product. It was found that products with a higher degree of fiber formation showed a higher polarization degree (Ranasinghesagara, Hsieh, & Yao, 2005).

Nuclear magnetic resonance spectroscopy (NMR) provides insight into the interactions between molecules (for example water-protein interactions) and thus into the structural features of meat and meat analogues. Several studies reviewed the application of (¹H, ¹³C and ³¹P) NMR in meat (Bertram & Ersten, 2004; Renou, Bielicki, Bonny, Donnat, & Foucat, 2003). NMR is also used to study water-protein interactions and correlate these with macroscopic properties such as water holding capacity, cooking loss, water and fat content and distribution, and changes during processing and storage (such as slaughtering, salting and frozen storage) (Marcone et al., 2013; Micklander, Peshlov, Purslow, & Engelsen, 2002). In plant-based materials, Time Domain (TD-)NMR gives an indication of the water-binding capacity of different proteins (gluten, soy protein isolate, pea protein isolate and lupin protein concentrate) (Peters, Vergeldt, Boom, & van der Goot, 2017). In addition, the water distribution was studied in a soy protein-gluten blend (Dekkers, de Kort et al., 2016) and a pea protein-gluten blend (Schreuders, Bodnár, Erni, Boom, & van der Goot, 2020).

Small-angle scattering (SAS) methods provide structural information over a size range from nanometer to micron length scales: 0.2-100 μm using light, 1-100 nm using X-rays and 1-20 nm using neutrons (Larson, 1999, p. 150). In small-angle X-ray scattering (SAXS), an X-ray beam passes through a product and encounters structural obstructions (like collagen or myofibrils).
SAXS provides insight into repetitive structures in a product, such as the structure of the fibrils of actin, myosin (Ranasinghesagara, Hsieh, Huff, & Yao, 2009; Ranasinghesagara, Hsieh, & Yao, 2006) and collagen, and potentially provides estimates of the intramuscular fat (Goh et al., 2005; Hoban et al., 2016; Hughes, Clarke, Li, Purslow, & Warner, 2019). Small-angle neutron scattering (SANS) is used to investigate the structure at smaller scales and was used to study the internal structure of a fibrous calcium caseinate material (Tian et al., 2020). Spin-echo small-angle neutron scattering (SESANS), based on neutron diffraction, can distinguish structures over three orders of magnitude, from 10 nm up to 10 μm. SESANS quantified the thickness (±138 μm), the number of fiber layers (±36) and the orientation of fibers in soy protein-gluten blends that were subjected to heat and shear deformation in a Couette Cell (Krintiras, Göbel, Bouwman, van der Goot, & Stefanidis, 2014). SESANS was also used to study the size and shape of the air bubbles in meat analogues of calcium caseinate (Tian, Wang, van der Goot, & Bouwman, 2018).

The continuous-time random walk (CTRW) theory of light transport has been used to study the spatial distribution of light reflectance on the surface of a (fibrous) product (Weiss, Porrà, & Masoliver, 1998). According to this theory, optical scattering depends on the transitional properties of scattering. The pattern of the scatter, recorded in transmission or backscatter, contains information on the internal structure of a material, such as meat (Ranasinghesagara & Yao, 2007) and meat analogues (Ranasinghesagara, Hsieh, Huff, & Yao, 2009; Ranasinghesagara, Hsieh, & Yao, 2006). In meat analogues, this method visualizes the degree of fiber formation and the fiber orientation, which shows potential as a fast, non-destructive way to monitor fiber formation in meat analogues (Ranasinghesagara et al., 2009; Ranasinghesagara et al., 2006). An extension of light scattering is diffusing wave spectroscopy (DWS), with which products with strong multiple scattering can be measured. In this novel DWS technique, the transport of photons through turbid products is treated as a diffusion process (Niu et al., 2019). In meat, DWS has been used to study the gelation process of myofibrillar protein extracted from squid (Niu et al., 2019).

In summary, spectroscopy can yield important information about the overall composition of both meat and meat analogues, as well as about intermolecular interactions and even conformational changes of substances like proteins. The spectra of meat and meat analogues can be expected to be quite different, because spectra contain information about molecular properties; this limits their use for a direct comparison of the two types of materials. However, prediction models could be built from correlations between spectra and mechanical properties to make indirect comparisons between the materials. Spectroscopy can also give some information on anisotropy. Light reflectance and SAS are promising methods to explore further for meat analogues to quantify fiber formation, as they are relatively simple and easily incorporated into processing equipment, which will help to investigate the formation of the mesoscopic structure. SAXS and (SE)SANS methods typically yield information at smaller scales, but can also help in understanding how anisotropy is created from smaller-scale associations. However, these techniques require very large infrastructure and will thus remain limited to research purposes.
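Of the anisotropy measures above, the fluorescence polarization approach is perhaps the simplest to quantify: the degree of polarization reduces to an intensity ratio. The sketch below shows this computation with made-up intensities; exact sign and normalization conventions vary between studies.

```python
def polarization_degree(i_parallel, i_perpendicular):
    """Degree of polarization from intensities measured parallel and
    perpendicular to the polarizer; higher values indicate stronger
    alignment of the fluorophores (and thus of the fibers)."""
    return (i_parallel - i_perpendicular) / (i_parallel + i_perpendicular)

# Made-up intensities: a strongly fibrous region versus a nearly isotropic one.
print(round(polarization_degree(800.0, 400.0), 3))  # 0.333
print(round(polarization_degree(600.0, 580.0), 3))  # 0.017
```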
Imaging
Imaging techniques can be used to reveal the structure of meat and meat analogues (Table 3). Visual inspection, through splitting open a meat or meat analogue product, is commonly used by product developers (Ranasinghesagara, 2008). Visual inspection is fast but destructive, not quantitative and prone to subjectivity. Microscopic characterization (SEM, TEM, CLSM, AFM) is used to construct images at different length scales, ranging from macro- to nanostructure. The main drawback of these techniques is that they are destructive. Imaging using spectroscopic methods, such as MRI, ultrasound, hyperspectral and X-ray imaging, does not require sample destruction.

Image processing can be used to quantify the colour, shape, size, porosity and surface texture features of meat (Chmiel, Słowiński, & Dasiewicz, 2011; Du & Sun, 2006a, 2006b; Jackman, Sun, & Allen, 2011; Li, Kutsanedzie, Zhao, & Chen, 2016; Li, Tan, & Shatadal, 2001; Ruedt, Gibis, & Weiss, 2020; Taheri-Garavand, Fatahi, Omid, & Makino, 2019). In meat analogues, edge detection, Hough transformation and region-of-interest analysis are used to quantify the fiber index value, which has been shown to be strongly correlated with the polarization index (Ranasinghesagara et al., 2005).

Confocal laser scanning microscopy (CLSM) is a fluorescence technique to acquire 2D and limited 3D images of meat and meat analogue products. In meat, CLSM was used to visualize the connective tissue, myofibers and myofilaments and to monitor differences in structure between fresh and cooked meat of pork muscle, comminuted meat gels and beef (Du & Sun, 2009; Liu & Lanier, 2015; Straadt, Rasmussen, Andersen, & Bertram, 2007). A combination of CLSM and NMR yielded information about microstructural changes and water distribution in meat (Straadt et al., 2007). In meat analogues, CLSM has been used to visualize the effect of deformation on proteinaceous domains by comparing a sheared and a non-sheared pea protein-gluten blend (Schreuders et al., 2019). After staining with Rhodamine B, the domains were found to be aligned along the shear direction in these blends (Schreuders et al., 2019), as well as in soy protein concentrate and soy protein-gluten blends (Dekkers, Emin, Boom, & van der Goot, 2018). The soy, pea and gluten all showed fluorescence; differences in intensity were used to indicate differences in protein concentration in different parts of the products.

Scanning electron microscopy (SEM) produces a surface image with a resolution down to ~0.5 nm. In meat products, SEM has been used to reveal process-related changes in meat structure (Cheng & Parrish, 1976; Hearne, Penfield, & Goertz, 1978; Wu, Dutson, & Smith, 1985). However, extensive sample preparation is needed for materials containing water or fat. These preparations can significantly change the original structure and may cause artefacts. Several techniques have been developed to overcome the disadvantages of high-vacuum SEM, in most cases at the cost of resolution. In cryo-SEM, water is frozen and may remain in that state in the product. Cryofixation has been used to observe changes in the microstructure of beef steaks under different cooking conditions (temperature, time and treatment) (García-Segovia, Andrés-Bello, & Martínez-Monzó, 2007) and of pork as a function of freezing rate and frozen storage time (Ngapo, Babare, Reynolds, & Mawson, 1999). Variable pressure scanning electron microscopy (VP-SEM) is used to examine the microstructure of meat products, such as the distribution of protein and fat phases (Liu & Lanier, 2015).
Environmental scanning electron microscopy (ESEM) allows observation of wet products at normal vapour pressures. This technique has been successfully used to investigate microstructural changes of muscle meat in various meat types upon heat treatment (Yarmand & Baumgartner, 2000; Yarmand & Homayouni, 2010). The shrinkage of pressure-treated and cooked pork meat structure was also observed by ESEM; these observations were used to provide evidence for a higher shear force as measured with the Warner-Bratzler test (Duranton, Simonin, Chéret, Guillou, & de Lamballerie, 2012). SEM combined with Energy-Dispersive X-ray spectroscopy (EDX) can identify the spatially resolved elemental composition of a surface and therefore the distribution of different components over the material surface (Ozuna, Puig, García-Pérez, Mulet, & Cárcel, 2013).

SEM has also been used to study the microstructure of meat analogues. High-moisture extruded soy protein isolate-wheat starch blends revealed a fine and tightly connected network structure (Lin, Huff, & Hsieh, 2002). In soy protein isolate-pectin blends, alignment along the shear direction was observed (Dekkers, Nikiforidis, & van der Goot, 2016). Soy protein with increasing levels of iota carrageenan showed a more compact network, correlated with changes in cooking yield and expressible moisture (Palanisamy, Töpfl, Aganovic, & Berger, 2018). SEM of high-moisture extruded lupin protein concentrate and isolate showed that a denser microstructure and a higher number of fibrous layers were created by increasing temperature and screw speed along with decreasing water feed (Palanisamy, Franke, Berger, Heinz, & Töpfl, 2019).

Like SEM, transmission electron microscopy (TEM) requires extensive sample preparation. As samples are prepared by microtoming, TEM provides information about the inner structure of meat, such as changes in the myofibrillar structure of beef upon cooking (Zhu, Kaur, Staincliffe, & Boland, 2018), and the degradation of the myofibrillar structure of lean meat by proteolytic action (Gerelt, Ikeuchi, & Suzuki, 2000) and calcium chloride addition (Gerelt, Ikeuchi, Nishiumi, & Suzuki, 2002).

Atomic force microscopy (AFM) explores the local 3D structure of a surface at the nanometer scale. AFM has been widely used to analyse the morphology and mechanical properties of meat proteins for understanding structure and tenderness/toughness (Soltanizadeh & Kadivar, 2014) and to investigate the effects of processing and preservation conditions (ultrasound, CaCl₂ and sodium tripolyphosphate) on meat proteins (goat muscle fiber) (Gao et al., 2016). AFM-based infrared spectroscopy (AFM-IR) combines the spatial resolution of AFM with chemical analysis using infrared (IR) spectroscopy (Dazzi & Prater, 2017). For meat analogues, AFM-IR was used to determine the phase distribution of protein and lipids during high-moisture extrusion of peanut protein at nanoscale resolution (10 nm) (Zhang et al., 2019a).

Ultrasound imaging can be divided into low-power ultrasound (LPU) and high-power ultrasound (HPU) (Awad, Moharram, Shaltout, Asker, & Youssef, 2012). The latter uses frequencies that are disruptive to the physical, mechanical or chemical properties of food products and is therefore promising in food preservation. LPU has been used as a non-invasive analysis method for monitoring food materials during processing or storage. In LPU, sound waves propagate through food materials, which leads to absorption and/or scattering of the waves.
Different components have specific local acoustic impedances, which is the basis for image production. In the meat industry, LPU is used most often for compositional analysis, as quality control of carcasses or live animals (Awad et al., 2012; Silva & Cadavez, 2012). Ultrasound imaging has also been successfully used for measuring the composition of chicken meat (Chanamai & McClements, 1999), the carcass composition of pigs (Ayuso, González, Hernández, Corral, & Izquierdo, 2013) and dry-cured meat products (Corona et al., 2013). Ultrasound imaging of meat and meat products has even been reported to provide estimates of localised viscoelastic properties of meat tissues (Biswas & Mandal, 2020, pp. 3-17). Ultrasound imaging has also been used to follow the ripening kinetics of tofu (Ting, Kuo, Lien, & Sheng, 2009).

Hyperspectral imaging (HSI) combines spectral information at multiple wavelengths with other localised information. Infrared spectroscopy can be combined with microscopy, providing spatially resolved compositional analysis (Dazzi & Prater, 2017; Zhang, Liu, et al., 2019). NIR combined with HSI provides both spectral (NIR spectrum) and localised (per-pixel) detail in the scanned region. This was reviewed for meat and fish to predict, quantitatively and qualitatively, chemical, textural and structural characteristics of meat such as tenderness, water content, water holding capacity, and fat and protein content (Reis et al., 2018; Wu & Sun, 2013a, 2013b). By combining direct identification of different components with their spatial distribution in the tested product, hyperspectral imaging has the potential for objective quality evaluation of both meat and meat analogues. NIR HSI is already used for the detection and quantification of plant-based (texturized vegetable protein and gluten) and animal-based (chicken) adulterants in minced beef and pork (Rady & Adedeji, 2018).

Scattering techniques such as X-ray tomography, SAS or light reflectance provide 3D structural insight. X-ray tomography (XRT) is based on variations in the attenuation of penetrating X-rays. The degree of X-ray attenuation is determined by local density and compositional differences, which provides the locally resolved density with a spatial resolution down to 1 μm on a time scale of minutes. Micro-computed tomography (μCT) is used to study the structure of small products with a resolution from mm to μm. In meat products, XRT is used for microstructural characterization, prediction of salt, water and (intramuscular) fat content and distribution, and the relationship with hardness (Schoeman, Williams, du Plessis, & Manley, 2016). Micro-computed tomography (Mathanker, Weckler, & Wang, 2013) was used to characterize microstructure, as well as to quantify and predict composition and hardness in meat and fish (Schoeman et al., 2016) and intramuscular fat level and distribution in beef muscles (Frisullo, Marino, Laverse, Albenzio, & Del Nobile, 2010). In meat analogue products, XRT reveals the porosity of the structure. Elongated and entrapped air pockets were studied in soy protein-pectin blends, soy protein-gluten blends and pea protein-gluten blends (Schreuders et al., 2019). In extruded products, expansion of the material (due to water evaporation) was visualized in two directions in extruded rice starch-pea protein (Philipp, Oey, Silcock, Beck, & Buckow, 2017). Air bubbles in a composite meat analogue made of calcium caseinate may contribute to fibrous properties. In general, XRT depends on differences in density and is therefore not well suited for obtaining information on the distribution of components that have similar densities. Advanced contrast modalities such as phase-contrast X-ray tomography describe both the meat structure and the different meat components (i.e. water, fat, connective tissue and myofibrils) qualitatively and quantitatively (Miklos, Nielsen, Einarsdóttir, Feidenhans'l, & Lametsch, 2015). Dual X-ray absorptiometry shows a moderately good correlation with meat tenderness and fat content in pork and beef (Brienne, Denoyelle, Baussart, & Daudin, 2001; Kröger, Bartle, West, Purchas, & Devine, 2006). The grating-based multimodal X-ray tomography method (including absorption, phase-contrast and dark-field tomograms) was used to quantify the composition (i.e. meat matrix, fat, salt, oil droplets) and to visualize the microstructural changes of a meat emulsion induced by heat treatment (Einarsdóttir et al., 2014).
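As an illustration of how such porosity values are typically quantified, the sketch below computes the air-volume fraction of a segmented tomogram. The segmentation step (thresholding air versus matrix) is assumed to have been done beforehand, and the array here is synthetic.

```python
import numpy as np

def porosity(segmented_volume):
    """Air-volume fraction of a 3D array where 0 marks air and 1 marks matrix."""
    v = np.asarray(segmented_volume)
    return 1.0 - v.mean()

# Tiny synthetic volume with one 2x2x2 air pocket in a 4x4x4 matrix.
vol = np.ones((4, 4, 4), dtype=np.uint8)
vol[1:3, 1:3, 1:3] = 0
print(porosity(vol))  # 8/64 = 0.125
```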
As can be concluded from the information described above, imaging reveals important information about the intermolecular interactions, anisotropy and nano- to macrostructure of meat and meat analogues. While SEM and CLSM are used to reveal the structure of both meat and meat analogues, TEM and AFM have only been used to analyse meat, not yet meat analogues. The fibrousness of meat analogues from plant proteins created via a top-down approach is typically less hierarchical than that of meat. This implies that meat analogues are structured on larger scales than are explored with TEM and AFM. Nevertheless, fibrous proteinaceous materials, such as those based on calcium caseinate, may have a finer structure, which could justify further analysis. Given the ubiquity of water in meat analogues, we expect that ESEM and CLSM will be major methods for further structural analysis. CLSM can provide 3D information by combining a stack of 2D pictures and also yields information on differences in composition, which could provide better insight into the orientation and length of the structural elements in meat analogues. An important limitation of the microscopy methods is that they require extensive sample preparation, making them less suitable for further analysis. Non-destructive imaging methods like MRI and HSI, already used for meat, can simultaneously provide information on intermolecular interactions and spatially resolved composition for meat analogues. For both meat and meat analogue products, structural changes have been analysed with XRT. For meat analogue products, XRT was used to quantify and visualize air, while for meat more structural aspects were studied with different types of XRT (like phase-contrast X-ray tomography or grating-based multimodal X-ray tomography).

Characterization aspects specific to meat analogues
This review has focused on the texture and structure of meat and meat analogues as finished products (Fig. 2). But in contrast to meat, the fibrous structure of meat analogues has to be created in a production process. Therefore, we are interested not only in the final structure of meat analogues, but also in the mechanism behind the creation of the fibrous structure. As the fibrous structure of meat analogues is often created during thermomechanical processing, it is important to understand the behaviour of the different components during processing. The high temperatures and pressures often used during the production of meat analogues limit the methods of analysis.
However, a combination of different methods could be a route to gaining information about the structure formation process. Mechanical methods cannot be used during processing, but spectroscopy and imaging techniques show potential. ESEM and XRT could be promising for studying the changes in the structural elements during thermomechanical processing of meat analogues. The application of in-line light reflectance, SAS, NIR, or Raman spectroscopy during the processing of meat analogues could provide insight into the structural elements. In-line ultrasound imaging is expected to be a promising method for studying air bubbles and the mechanical properties of meat analogues during processing, as it was previously used for the analysis of dough (Koksel, Scanlon, & Page, 2016). Another challenge in characterizing meat analogues is the fibrous structure itself. To understand how to create a fibrous structure, knowledge of the fibers in meat analogues is required. So far, it is not completely clear what the geometry, size, binding pattern and adhesion or cohesion of the fibers in meat analogues look like. The simultaneous use of different mechanical and imaging methods can provide a more holistic view of the fiber properties in meat analogue products. It can also be instructive to draw on studies of meat and of non-food products containing fibers. Meat from different origins, such as poultry or beef, consists of fibers with very different shapes and physical characteristics, as was revealed with multi-point indentation, which was used to spatially map the local elastic modulus (Boots et al., 2021). In non-food products, such as thermoplastics, the adhesion and cohesion of fibers in a matrix are studied; a combination of fiber-matrix wetting analysis and interfacial adhesion analysis was found to give a good understanding of the fiber-matrix interface (Tran et al., 2015). Such methods could also be promising for understanding the fibrous structure of meat analogues.

Conclusion

An important step towards the development of next-generation meat analogues is a better insight into the texture properties of those products. To quantify these, analytical techniques are necessary. This review summarizes and discusses methods typically used to characterize the properties and quality of meat products and discusses the feasibility of applying them to meat analogues. At present, the range of methods used for meat analogues is smaller than that available for meat. However, we conclude that a broad range of methods could be employed readily, or with slight modification, to analyse meat analogues. Several techniques elucidate structural features. Mechanical methods allow a direct comparison of texture attributes between meat and meat analogues: tensile analysis, the Warner-Bratzler test and compression techniques provide information about the strength of the product and can be applied to both meat and meat analogue products. Spectroscopy methods are non-destructive and fast, but more expensive. Most imaging techniques are of interest for comparing the structure of meat and meat analogues. CLSM and XRT reveal 3D information. NIR, MIR, NMR and MRI provide quantitative information about both the structural elements and the composition. Tensile analysis, image analysis, fluorescence spectroscopy, SAS and light reflectance have shown promise as methods to quantify the properties of the individual fibers and their formation process.
TEM and AFM are interesting for nanoscale structure analysis, but have so far been applied only to meat. Specifically for meat analogues, there is also a need to study texture and structure under processing conditions. In-line NIR or ultrasound imaging could be promising for studying the changes in the structural elements during thermomechanical processing of meat analogues. Furthermore, future research should focus on characterizing the fibers present in meat analogues with regard to geometry, size, adhesion and cohesion. This approach could help optimize the conditions used during the processing of meat analogues, with the final purpose of resembling meat products in terms of texture and structure.

Declaration of competing interest

The authors report no conflicts of interest relevant to this article.

Acknowledgements

This research is part of the project PlantPromise, which is co-financed by the Top Consortium for Knowledge and Innovation Agri & Food of the Dutch Ministry of Economic Affairs; the project is registered under contract number LWV-19027. This study was financially supported by the Good Food Institute.
Mathematical Games Using Real-World Approaches Increasing Kindergarten Students' Learning Creativity

Many parents think that games are a waste of time and not useful, so they forbid their children to play; as a result, the child's spirit becomes depressed and labels such as naughty or passive follow. This assumption arises because parents do not fully understand their child's personality, characteristics, tendencies, and nature. Given conditions like this, kindergartens are expected to strive to implement the function of play in line with children's growth and development by giving each child freedom, accompanied by the cultivation of constructive values. Research objective: to determine how the game method, using a real-world approach, is implemented to increase children's learning creativity at TK Ma'arif 22 Trimurjo. The approach used for this research is qualitative. The subjects were students who took part in mathematical games with a real-world approach; data were collected through in-depth interviews, observation, and documentation. In the implementation of playing activities, students are given the freedom to develop their potential and creative power according to their talents and interests. This can be seen in the games at TK Ma'arif 22 Trimurjo: mathematical games using a real-world approach increase the learning creativity of kindergarten students.

INTRODUCTION

Education is essential in human life. In a nation, education is the main factor determining the country's progress (Widodo 2016). Education has a significant influence on development; it can produce globally competitive human resources (Richman 2015:122). One way to achieve this is to improve the quality of teaching and learning in Kindergarten (TK). Students are taught three necessary abilities at this level of education: cognitive, affective, and psychomotor skills (Hoque 2017). If students are less able to master these three abilities, they will experience difficulties at higher levels of education. Learning in Kindergarten has a vital role because it is the foundation for further instruction. Kindergarten is one of the educational institutions that have an essential role in children's future growth and development: besides being the child's first step out of the family environment, it is also an initial effort to bring children to the stable mental preparation needed to step into the educational process. Kindergarten is a pre-school educational institution that aims to develop children's abilities (Loukatari et al. 2019). As experts in child education have expressed it, Deutsch and Hechinger stated that it is not valid simply to stay silent and wait for development to come. Both the development of the intellect and the development of children's behavior need to be guided and stimulated, which means that sensitive periods need to be used so that all children can learn as early as possible (Brezinka 2012). Playing is a common phenomenon among animals, children, youth, and adults. Games are chosen freely, without any element of coercion and without the pressure of a sense of responsibility. According to Kartini Kartono (in Adiebah 2020), playing is an essential means of socializing children, introducing them to membership in society and teaching them to know and respect human society. In the atmosphere of the game, a sense of harmony grows, which is very important for forming the social spirit that underlies human culture.
Improving the quality of human resources is an absolute prerequisite for achieving educational goals. As a determining factor for educational success, the quality of human resources is enhanced through educational programs carried out systematically and in a directed way, based on interests that refer to advances in science and technology (IPTEK) and grounded in faith and devotion (IMTAQ) (Mulyasa 2015). However, many parents think that games are a waste of time and are not useful, so they often forbid their children to play. As a result, the child's spirit becomes depressed, and traits such as naughtiness, disruptiveness, and passivity arise. This assumption is caused by the fact that parents do not fully understand the child's personality, characteristics, tendencies, and nature. Kindergarten is expected to strive to implement the function of play in line with children's growth and development by giving each child freedom, accompanied by the cultivation of religious values as provision for the future, so that children become people who believe in and are devoted to Allah SWT. To find out children's mathematical cognitive abilities, the researchers observed 20 students at TK Ma'arif 22 Trimurjo. In practice, students memorize numbers and perform addition operations just by writing in books, so a special approach is needed to increase children's creativity. The survey results are corroborated by a student expression reported by Schindler and Lilienthal: "Mathematics problems are challenging. I did not know how to do it. That's why I did not finish it. I don't like Maths" (Schindler and Lilienthal 2020). The students struggled with mathematical problem solving, which manifested itself in various difficulties in gaining knowledge and skills. To present engaging learning, teachers should build educational interaction on the understanding-by-doing principle; this educational interaction process applies the principle of education through play. Game-based learning is considered an innovative approach to education and has been shown to influence students' learning behavior, producing more active knowledge construction and greater motivation to engage in learning (Hsieh, Lin, and Hou 2016:178-79). According to Shoba Dewey Chugani (Chugani 2010:12), a child can gain useful experiences in playing activities. A teacher has many opportunities to teach various things through games, including forming mathematical cognitive abilities. Learning and playing are two activities that are essential and complementary. Playing makes children happy to learn, and by learning through play, children can master more challenging lessons (Triharso 2013:6). When children play, they are also learning. Combining learning time with games is one way to make every child's play a place to learn (Murniati 2012:23). One step educators can take is to determine the type of game to be used; the emphasis in learning while playing places priority on learning over the game. The game is only a means, not an end. Some worry that the presence of games in a learning process makes learning ineffective for children, thinking that children will later want to play more than learn. In fact, games have a significant influence on children's mental development.
Moreover, if the game is appropriately designed, playing is also a useful learning tool, combining recreational, creative, and educational aspects to create the right learning environment (Licorish et al. 2018). Abramovich stated that when teachers construct real and critical events, model creativity for students, and use space creatively, creative learning is likely to occur (Abramovich, Grinshpan, and Milligan 2019). When teachers provide fundamental and essential education for students, creative learning can be expected to occur. Games are activities that aim to develop skills in an exciting way (Mujib and Rahmawati 2013:19). In using the game method, the teacher should use the right approach to spur creativity in learning. One approach offered here is a game with a real-world system. Using a real-world approach is essential in schools to help children build creativity in education. This concept can encourage students to think and talk about the world around them. Teachers can help connect mathematics to everyday life. For many students, mathematics seems too abstract; when it can be related to things they see and do in everyday life, the concepts become real and meaningful. The activities here all involve doing something. It is not enough to think about things: when you do something in the real world, there is usually a reaction, someone or something responds (Nonesuch 2008:1). According to Ishaq et al., learning gains are more significant with simulation games than with conventional teaching methods (Ishaq et al. 2019). Elizabeth Hanson-Smith emphasized that the game method appears to be more effective in increasing students' interest in learning (Hanson-Smith 2016:231). Through games, this approach leads students from creativity to achievement in education. As for creativity, Barron (in Munandar 2012:21) defines it as producing or creating something new. Creativity draws on four capabilities that children have: psychological, intellectual, cognitive, and personality-related; these four aspects help us understand what makes a creative individual. Mathematical creativity plays an essential role in learning mathematics. Creativity in mathematics can be characterized in several ways, such as divergent and flexible thinking, or "unusual" and in-depth solutions to a given problem (Katz and Stupel 2015:68). We are developing a game method using a real-world approach focused on the children's own lives to develop cognitive, affective, and psychomotor values while creating communicative situations, enabling them to convey authentic messages about interesting information.

Mathematical Games with a Real-World Approach

The real-world approach began to grow out of a desire to reform mathematics education that seemed less meaningful to learners (Mujib and Rahmawati 2013). In this view, mathematics must be related to reality, close to children's experiences, and relevant to society in order to be part of human values. Rather than viewing mathematics as a subject to be transferred, Freudenthal emphasized mathematical ideas as human activities. Mathematics lessons should allow students to be "guided" to "rediscover" mathematics by doing it. In mathematics education, the main objective is mathematics as an activity and not as a closed system (Tedja 2012:45). Real-world mathematics learning is an approach to mathematics teaching that takes students' realities and experiences as the starting point of learning. Furthermore, students are allowed to apply mathematical concepts to solve daily problems in other fields.
This learning is very different from conventional mathematics learning, which has tended to be oriented towards providing information and using ready-made mathematics to solve problems. In their research, childhood education experts have stated that play is the most effective way for children to learn in their learning activities (Ismail 2010:25). Kindergarten education will be more meaningful if carried out through educational methods that are fun and suited to children's interests, talents, and personal needs. Therefore, children need games as an educational medium in learning. Play tools do not have to be expensive; the educational element must take precedence. This becomes more apparent if learning material is delivered with a learning-by-playing approach (Ismail 2010). According to Dewey (Chugani 2010), a child can gain useful experiences in playing activities. Through games, educators have many opportunities to teach various things, including mathematics. Learning and playing are two essential and complementary things. Playing makes children learn happily and lets them master more challenging lessons (Triharso 2013). When children play, they are learning. Observing learning time with games is one way each child's play can become a place for them to learn (Murniati 2012). The steps that educators can take include determining the types of games they want to use. The game is only a means, not an objective. Some argue that play in the learning process makes learning ineffective for children, thinking that children will later want more games than education (Triharso 2013). On the contrary, games have a significant effect on the mental development of children. Especially if the game is well designed, playing is a useful learning tool that combines recreational, creative, and educational aspects and creates the right learning environment (Tok 2015). In using the game method, the teacher must use the right approach to stimulate motivation and creativity in learning; the approach offered here is a game with a real-world system of play, toys, and fun. The play method using the real-world approach is essential in schools to help children build motivation and creativity in learning, especially in mathematics. This concept can encourage students to think and talk about the world around them. Teachers can help connect school mathematics with everyday life. To many students, mathematics seems too abstract; when it can be linked to the things they see and do in their daily lives, the concepts become real and meaningful (Giganti 2010). All the activities here involve doing something. It is not enough just to think about things: when you do something real, there is usually a reaction from someone or something (Nonesuch 2008).

Learning Creativity

Creativity is the ability to create new combinations based on existing data, information, or elements. Usually, people define creativity as the ability to create new things. The more experience and knowledge a person has, the more likely he or she is to use all of this experience and expertise to engage creatively. To be able to make something meaningful, preparation is needed (Munandar 2012). Many activities can be designed by the educator, all of them intended to enhance children's creativity. Developing children's creativity always requires children to think of various possible answers and to solve problems.
This is called divergent thinking, thinking in multiple directions, in contrast to convergent thinking, where the child is directed to give the single most appropriate answer to a problem (Munandar 2012). Thus, it can be concluded that creativity is the ability of a person to produce a composition, product, or idea that is fundamentally new and previously unknown to its author. It can be an imaginative activity or a synthesis of thoughts whose results are not mere summaries. It may include forming new patterns and combining information gleaned from previous experiences, transplanting old relationships into new situations, and forming new correlations. It must have a defined purpose, not mere open fantasy, even if it is not a perfect and complete result. It may take the form of an artistic, literary, or scientific product, or be procedural or methodological in nature (Hurlock 2010:4). Creativity brings joy and satisfaction to children. For example, nothing gives children greater satisfaction than creating something themselves, whether it is a house made of an upside-down chair covered in a blanket or a picture of a dog. And nothing detracts more from self-esteem than criticism or ridicule of their creations, or questions about what they are supposed to be. Being creative is also essential for young children because it adds spice to their play, the center of their lives. One of the critical values of creativity that is often overlooked is its contribution to leadership. Besides the personal satisfaction that children get from imagination, if that creativity increases the sense of pride in playing the role of leader, this will support good social and personal adjustment. The value of creativity is also evident in the case of less creative children: Spock says very literal-minded people are of limited use to the world and have a limited ability to derive joy (Hurlock 2010). Why is it essential to nurture and develop creativity in children? First, because by creating, humans can realize themselves, and self-realization is one of the basic needs of human life. Second, creativity, or creative thinking as the ability to see various possible solutions to a problem, is a form of thinking that has not been given much attention in formal education. Third, creative activity is not only beneficial but also satisfying for the individual, as is evident if we observe children engrossed in playing with wooden blocks or other constructive play materials. Fourth, creativity is what enables humans to improve their quality of life (Munandar 2012). Several things can jeopardize the development of creativity, among others (Keong 2006): a) Failure to stimulate creativity. A lack of stimulation can be caused by the ignorance of parents and others in the child's environment about the importance of creativity. It may also be caused by the assumption that creativity is innate, so that nature will regulate its development and stimulation is therefore not needed. b) Inability to detect creativity at the right time. Under such conditions, it is not surprising that the stimuli for the development of creativity are neglected. By the time there is evidence that a child has creative potential, it may be too late to provide the incentives that could fully develop that potential.
Unless tests or other methods can be designed to detect creativity at an early age, the only way to overcome this danger is to assume that every child has the potential to be creative, albeit to varying degrees, and to provide the necessary stimulation from an early age. c) Unfavorable social attitudes toward creativity. This inhibiting factor is manifested in two general forms: first, a negative attitude towards creative children, and second, a lack of social appreciation for creativity. Although these children have many great ideas, they are quickly said to have strange, irrational, or naughty thoughts, which makes it difficult for their personalities and creative talents to develop in the future. These characteristics may make their behavior more difficult to predict, and may make their presence in a group troublesome. d) Unfavorable school conditions. In many schools creativity is inhibited by a heavy emphasis on discipline, the stress of memorization, the prohibition of anything that departs from the model, rigidly scheduled classroom activities, strict classroom rules, and teachers' belief that creative children are more difficult to manage in completing their work.

METHODOLOGY

The approach used for this research is qualitative. Qualitative research deals with the ideas, perceptions, opinions, or beliefs of the people being studied, none of which can be measured by numbers. Qualitative research aims to obtain a complete picture of something from the perspective of the humans being studied; in qualitative research, the researcher is the primary research instrument. Moleong explains qualitative research as research intended to understand the phenomena experienced by research subjects, such as behavior, perception, motivation, and action, holistically, using descriptions in the form of words and language, in a specific natural context, and by making use of various natural methods (Moleong 2014). Sugiyono likewise argues that qualitative research is a research method based on the philosophy of postpositivism, used to examine the conditions of natural objects, in which the researcher is the key instrument, data are collected by triangulation, data analysis is inductive or qualitative, and the results emphasize meaning rather than generalization (Sugiyono 2017). According to Nana Syaodih Sukmadinata, qualitative descriptive research aims to describe and portray existing phenomena, both natural and human-made, paying attention to characteristics, quality, and the links between activities (Syaodih Sukmadinata 2011:73). Moreover, descriptive research does not apply treatments, manipulation, or alteration to the variables under study but describes a condition as it is; the only treatment given is the research itself, carried out through observation, interviews, and documentation. The type of research conducted is a case study, a form of qualitative research in which researchers conduct an in-depth exploration of programs, events, processes, or activities involving one or more people. A case is bound by time and activity, and the researcher collects detailed data over a continuous period using various data collection procedures (Sugiyono 2017).

RESULTS AND DISCUSSION

The purpose of playing mathematics is to optimize the child's overall development and interactive communication. For playing mathematics at TK Ma'arif 22 Trimurjo to run well,
a child-oriented learning strategy is needed, covering objectives, materials, methods, media or game tools, and evaluation appropriate to the child's development, based on the results of observation, interviews, and documentation. In the children's opening activities, arithmetic exercises are carried out outside the room before entering class: remembering the sequence of numbers, counting the numbers 1-10, and naming various land vehicles or animals. Mathematics playing activities in class are carried out by clapping in two patterns, saying the number of friends who are not present in class, telling stories about personal experiences, naming the day, date, month, and year, counting using finger arithmetic, and doing motor activities.

Figures 1 & 2. Playing maths: finger arithmetic and motor activities.

The main activities outside the classroom are playing with line charts, naming the vehicles on the highway, playing counting steps, playing "where is my fish", and observing the boat. Mathematics playing activities in the classroom include tracing geometric shapes, playing with toy cars, working on student worksheets, sorting objects from largest to smallest, folding fish shapes, building boats, and forming trains from geometric pieces. In mathematics playing activities, the teacher uses playing, singing, storytelling, field trips, chatting, question and answer, and dramatization. All of these methods are play activities, and when children engage in play activities, they are happy. The essence of play, feeling happy, democratic, active, unforced, and free, is the soul of every activity. In terms of children's participation, math play activities include individual and group activities. Individual play is play performed by a single child, such as counting in sequence, counting the numbers 1-10 on the child's own fingers, naming land vehicles, and telling stories about personal experiences. Through these activities, the teacher aims to determine the child's knowledge from studying at school and the understanding the child can share with the teacher and friends. Group play is performed by several children, for example clapping in two patterns, saying the number of children present and absent in class, counting using finger arithmetic, and motor activities. These activities are carried out classically or in groups in the classroom, because the teacher aims to gauge the children's enthusiasm through their participation in groups and to observe children's development through their harmonious interaction with friends and teachers. In individual activities, the teacher aims to know each child's abilities as a report on the development of each child's learning outcomes. Learning through play, children can manipulate the objects they see, and they can explore, imagine, and create on their own with the game tools, teaching resources, and media used in math play activities. Mathematics playing activities can also be viewed in terms of the tools and materials used by children at TK Ma'arif 22 Trimurjo.
According to the results of interviews with class teachers, children were taught mathematical concepts through play activities using tools and play materials found around the child and using concrete (real-world) objects, for example the child's own body through clapping, the number of family members, the child's house number, the number of friends, and so on. The teachers therefore consider the child's environment to be the largest laboratory and a source of learning. Through the environment, children can learn many things, imagine, explore through play, and feel happy. Mathematics play activities use concrete (real-world) objects and are carried out directly by the child, with the child's five senses directly involved, so that the child gains knowledge instantly from interaction with the environment; play activities are also adjusted to the children's developmental stage according to their age. A conducive and varied classroom atmosphere is needed so that children do not become bored with learning in the classroom. The principle of playing mathematics has several functions, including developing all aspects of child development according to their developmental stages, introducing children to the world around them, developing children's socialization, introducing simple rules in play and instilling discipline, and providing opportunities for children to enjoy playing. In the mathematics playing activities at TK Ma'arif 22 Trimurjo, the mathematics taught is organized as follows. The content of playing mathematics must be rich, varied, concept-oriented, and goal-focused on understanding mathematics, and children are taught mathematical concepts through play activities. Playing math activities must provide opportunities for children to solve simple problems. The physical environment must cover concrete (real-world) media that can be manipulated (blocks, base-ten blocks, patterned blocks, various other manipulatives, tangrams and counting stuffed animals, plastic toy balls, etc.), symbolic media (dice, dominoes, number lines, graphics, computer programs, and other visual media), and abstract media (plastic numbers, lists of foodstuffs, hundred charts, building plans, calculators, computers, and so forth). The teachers also consider the abilities of each child in the class. The design of playing mathematics requires planning, interacting with children, creating a conducive environment (moving class), schools establishing cooperative relationships with parents, and assessing all aspects of child development to be reported to parents. Playing, as an implementation of learning in early childhood, provides various benefits for developing children's potential. As stated by Freeman and Munandar, the benefits obtained from play are: a channel for children's excess energy, a means of preparing for future life, a continuation of the human image, a way to rebuild lost strength and to obtain compensation for things not otherwise acquired; play also allows children to release feelings and emotions and stimulates personality formation (Munandar 2012). Based on this concept, the study results show that the teachers at TK Ma'arif 22 Trimurjo have applied the idea of playing while learning, using familiar real objects from the students' surroundings (real-world). Learning is structured in a fun and democratic way to attract children to be directly involved in learning activities.
Children do not merely sit quietly listening to the teacher's lecture; they actively interact with various objects and people in their environment, both physically and mentally. For this reason, the learning process in early childhood requires a conducive and varied learning environment. In terms of place, mathematics playing activities can be carried out both indoors and outdoors.

Figures 3 & 4. Outdoor playing activities.

Observations at TK Ma'arif 22 Trimurjo show that the class atmosphere is designed to be conducive: the classroom layout is varied, the decorations are suitable for children, and the play equipment is placed neatly, so that the children are free to carry out learning activities in the classroom, such as clapping hands, tracing geometric shapes, arranging geometric pieces into a train, drawing, folding, and so on. The study's findings show that math play activities can also be carried out outside the classroom, where children are freer to play, express themselves, explore, and imagine while interacting with the surroundings, their friends, and the people around them. In terms of children's participation at TK Ma'arif 22 Trimurjo, children can play individually or in groups, in accordance with early childhood learning principles: children learn through play, children learn with peers, learning follows children's needs, and teaching is integrated. Children also learn through interaction, that is, through play tools and interactions with the people around them. The early childhood learning environment needs to be enriched with learning experiences. Play equipment should be selected based on several criteria: appropriate to the child's age and level of development, of quality design and suited to the characteristics of the child, durable, flexible, and multifunctional in use, safe for children (non-toxic paint, no sharp edges or corners), and with attractive colors and shapes. Whether or not children are active in learning begins with the emergence of interest in the children themselves in following the lessons. The achievement of learning objectives is seen not in fulfilling the target material that must be delivered, but in how interested the child is in knowing and understanding the teaching material; for that, we need a useful, engaging, and fun learning approach for students, one of which is playing using a real-world method. Children learn through play because playing is an activity that contains a sense of pleasure and is more concerned with the process than with the result. Types of games must be adapted to the level of development, age, and abilities of the child, so that all kinds of games can be developed gradually, from playing while learning (where the element of play is greater) to learning while playing (where the educational element is greater). This is in line with the characteristics of early childhood, who like to play, and it requires kindergarten teachers to carry out learning activities that contain games, especially for early childhood. Kindergarten teachers should design learning models that allow for an element of play. Play is an essential instrument for children's social, emotional, and cognitive development, and it is also a reflection of children's development. Playing can also foster children's creativity.
During play, a child learns to cope with emotions, interact with others, deal with conflicts, and gain a feeling of competence. Through play, children can develop their imagination and creativity. In other words, playing is an essential requirement for children. The world of children is a world identical with play, especially at an early age. Playing can support the growth of students' cognitive, affective, and psychomotor aspects. Through playing while learning, abstract mathematical concepts can be more easily understood by students. Therefore, teachers' initiative and support in designing and implementing learning that accommodates children's play needs are essential for learning practices appropriate to the child's developmental stages. Playing mathematics activities have several functions, including developing all aspects of child development according to their developmental stages, introducing children to the world around them, developing children's socialization, introducing simple rules in play and instilling discipline, and providing opportunities for children to enjoy playing.

CONCLUSION

Based on the results of research on the mathematical game method using the real-world approach to increase children's learning creativity at TK Ma'arif 22 Trimurjo, it can be concluded as follows. In the implementation of play activities, students are given the freedom to develop their potential and creative power according to their talents and abilities. The games at TK Ma'arif 22 Trimurjo have proved to be very well suited to children's psychological condition: through them, children find happiness, channel and develop their excess energy, and achieve psychological harmony, becoming aware of the essence of their humanity, their honour, pride, and strength. Furthermore, mathematics games using a real-world approach are effective in increasing kindergarten students' learning creativity.
U-Shaped and Surface Functionalized Polymer Optical Fiber Probe for Glucose Detection

In this work we show an optical fiber evanescent wave absorption probe for glucose detection in different physiological media. High selectivity is achieved by functionalizing the surface of an only-core poly(methyl methacrylate) (PMMA) polymer optical fiber with phenylboronic groups, and enhanced sensitivity is obtained by using a U-shaped geometry. Employing a supercontinuum light source and a high-resolution spectrometer, absorption measurements are performed over the broadband visible light spectrum. Experimental results suggest the feasibility of such a fiber probe as a low-cost and selective glucose detector.

Introduction

The attention of the scientific community has focused on low-cost, sensitive, and specific alternatives for the detection and analysis of different substances in low-concentration solutions, driven by demand in many fields such as clinical diagnosis, drug discovery, and food safety [1]. In this sense, glucose sensing and monitoring is regarded as a critical indicator, so that fabricating reliable solutions would be a step forward [2]. Biosensors interact with a target molecule, cell, or any other biological element and produce a calibrated signal that is used as a parameter transducer [3]. The combination of optical fiber devices with chemically sensitive coatings offers a universal platform for the development of highly sensitive and selective sensors [4]. The use of polymer optical fibers (POF) fulfils the requirements of biosensors perfectly and provides many advantages over other types of sensors, mainly owing to the long interaction lengths between the sample and the transmitted light as well as the simplicity of the sensors [5]. POFs are usually made of poly(methyl methacrylate) (PMMA), a cheap and transparent polymer. Sensors based on POFs are in general low-cost and easy to use compared with other, time-consuming traditional methods [6-8]. For instance, fiber-optic evanescent wave (FOEW) absorption sensors are widely used; they rely on detecting a change in the transmitted spectrum due to the coupling of the light to the surrounding medium. This phenomenon is enhanced either by tapering the fiber or by bending it into a U-shape [9]. In most cases, these FOEW sensors depend on indirect measurements to obtain the concentration of glucose, such as measuring the change in the refractive index of the solution induced by the presence of glucose. In contrast, the sensor presented in this work selectively binds the glucose and releases a reporter, making the sensor media-independent and straightforward. In order to create a sensor with high selectivity, capable of attaching a specific target while excluding other, undesired substances, the outer surface of the POF must be specifically functionalized. For that purpose, the PMMA POF probe surface is functionalized with phenylboronic acid (PBA) groups. The glucose sensing behavior of these groups is based on the interaction between boronic acid and glucose, which forms a cyclic boronate ester. The measurement principle relies on the release of an optical reporter called Alizarin Red S (ARS), which is bonded to the fiber by a reversible interaction with the boronic acid groups. The sensing system is composed of a 500-µm uncladded POF bent into a U-shape with a 2.5 mm diameter.
By immersing the sensitive area of the probe in a solution containing glucose, the ARS is released and, therefore, a change in the absorption spectra is observed. Employing an ad hoc experimental setup, these absorption spectra are measured and recorded. The obtained results demonstrate that our sensor is able to measure glucose in many different media at biological pH levels.

Working Principle of the Sensor

Light is launched from one end of the fiber and the spectrum of the transmitted light is measured at the other end. This transmission spectrum depends on the absorption of the evanescent field penetrating into the fluid that acts as a cladding. On the one hand, the U-shaped fiber probe gives the proper sensitivity and, on the other hand, the surface functionalization ensures the required selectivity. The principles behind both are explained below.

Evanescent Wave Sensing

Evanescent waves are well documented in the bibliography [9,10] and have been used in many biosensors [4,6,11-14]. They are confined between the core and the cladding of the optical fiber and are associated with a loss or leakage of the transmitted signal. During light transmission through an optical fiber, the evanescent wave decays exponentially with the distance from the core-cladding interface until its intensity is negligibly small [3]; this parameter is defined as the penetration depth (d_P) (Figure 1). This depth defines the distance within which molecules may have a discernible effect on the evanescent wave [15]. For meridional rays it takes the form

d_P = λ / (2π n_1 (sin²θ − sin²θ_c)^(1/2)),

where λ is the vacuum wavelength of the light launched into the fiber, n_1 is the refractive index of the core, θ_c is the critical angle in the sensing region with respect to the normal to the core-cladding interface, and θ is the angle of the wave with the normal to the core-cladding interface; for skew rays the expression also involves the skewness angle θ_φ [15], which is π/2 for a meridional transmission mode.

The d_P of an evanescent wave is very small in a straight fiber, but it can be notably increased by bending the fiber. Thus, a U-shaped bend enhances the sensitivity of the fiber probe. Furthermore, the analysis of the skewness can be split depending on whether the light interacts with the outer or the inner surface. In the former case, the skewness angle changes from an initial value δ_1 that depends on R, ρ, and h, where R is the bending radius of the probe, ρ is the radius of the fiber core, and h is the height at the entrance of the bent region from the inner core-cladding interface. At the inner surface, the angle goes to δ_2 = π/2. Using these relations, it can be proved that d_P is much higher for U-shaped bent fibers than for straight fibers (R = ∞) [16]. Moreover, the absorbance is higher for smaller diameters and for lower numerical apertures [12].
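To get a feel for the magnitudes involved, the following short Python sketch evaluates the meridional penetration depth; the refractive indices for a PMMA core surrounded by water and the chosen wavelength are assumed round values for illustration, not measurements from this work:

import numpy as np

lam = 500e-9            # vacuum wavelength in m (assumed round value)
n1, n2 = 1.49, 1.33     # PMMA core and water "cladding" (assumed round values)

theta_c = np.arcsin(n2 / n1)                 # critical angle, ~63 degrees here
for theta_deg in (70.0, 75.0, 80.0, 85.0):   # incidence angles above theta_c
    theta = np.radians(theta_deg)
    d_p = lam / (2 * np.pi * n1 * np.sqrt(np.sin(theta)**2 - np.sin(theta_c)**2))
    print(f"theta = {theta_deg:.0f} deg -> d_P = {d_p * 1e9:.0f} nm")

# d_P grows sharply as theta approaches theta_c; bending the fiber redirects rays
# toward the critical angle, which is one way to picture the U-shape enhancement.

The sharp growth of d_P near θ_c is consistent with the enhancement obtained by bending the probe.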
Phenylboronic Acid-Diol Interaction

Boronic acids bind compounds containing diol moieties with high affinity, forming reversible boronate esters [17]. Consequently, boronic acid compounds have been widely used for the synthesis of artificial receptors for sugars with great success [18]. The scheme in Figure 2 depicts a substrate containing 3-aminophenylboronic acid (APBA), a synthetic molecule capable of forming reversible boronates with 1,2-diol, 1,3-diol, or multi-hydroxyl groups, including glucose [19]. Boronic acid-diol binding reactions are highly pH-dependent [20], and pH values above the pKa of the boronic acid are required, so this study makes use of buffers usually employed for biological applications (Phosphate Buffered Saline (PBS) and Tris base (TRIS)) to control the pH of the media [21]. However, it can be difficult to monitor the binding without using any fluorophore.

ARS has been used as a reagent for the fluorimetric determination of boronic acid concentrations. Free ARS is an organic dye with very poor fluorescence. However, when ARS interacts with boronic acid groups, the active protons responsible for the fluorescence quenching are removed (Figure 3), leading to a dramatic increase in the fluorescence intensity of ARS [22,23].
In this work, ARS was used as an optical reporter. In the presence of glucose, ARS is displaced from the boronic acid complex, so the change in the absorption of the reporter allows glucose detection by UV-Vis spectroscopy. In summary, a PMMA fiber containing PBA was developed [19]. Firstly, the fiber surface was functionalized with PBA groups and bound with ARS. Secondly, the fiber was immersed in a solution containing 1,2-diol analytes. The competitive nature of boronic acid/diol complexes favors the breaking of the ARS-PBA bond and the formation of the glucose-boronic acid bond. The careful design of the experiment ensures that only the interaction of glucose and boronic acid can cause the ARS-boronic acid disruption.

PMMA of optical quality (Plexiglass®) was used for the fabrication of the POF and the plain samples. Rods with a diameter of 15 mm and sheets with a depth of 1 mm were obtained from Evonik (Essen, Germany).

POF Fabrication

The POF employed in this work was fabricated in our facilities. We annealed the Plexiglass® extrusion rod for 7 days in an oven with low humidity. Afterwards, we drew it directly into a 500 µm diameter only-core fiber using our POF drawing tower. This fabrication method allows us to fabricate only-core POFs directly. This way, we have complete control over the fiber diameter, and the core surface roughness is much lower compared with the results obtained by other methods, such as stripping a commercial fiber. After fabricating the fiber, we bent it using the following procedure: first of all, 30 cm of fiber was cut. Then, using a 2.5 mm wide glass tube as a guide, a hot-air gun set at 120 °C was directed at the section of the fiber subjected to bending, and the U-shape was carefully formed. After that, the fiber probes were washed in isopropanol for 1 h and dried in a vacuum chamber at 60 °C overnight in order to remove internal stresses. Finally, both ends of the fiber were carefully polished. The resultant sensor probe is shown in Figure 4.
Surface Functionalization and ARS Aggregation

The functionalization of the PMMA fiber surface was carried out by slightly modifying the method described by Fortin and Klok [24]. Briefly, starting with the U-shaped probe of Figure 5a, 2 cm of the probe were hydrolyzed by immersion in a 3 M sulphuric acid solution in deionized water at 60 °C for 15 min; the probes were then washed in deionized water and left in a vacuum chamber overnight (see Figure 5b). Afterwards, the carboxylic groups were activated by immersing the hydrolyzed probes in 0.1 M EDC and 0.2 M NHS in deionized water for 4 h at room temperature. Subsequently, they were rinsed in ethanol and left overnight in a solution of 16 mg·mL−1 APBA in a PBS buffer (pH 7.2). After 12 h, the probes were washed by rinsing in deionized water (20 min, three times) and, finally, they were dried in a vacuum chamber (see Figure 5c).
Once the fibers were functionalized with PBA groups, they were charged with ARS following the procedure described by Chen et al. [25]. Firstly, the tips were immersed for 3 h at room temperature in a 0.1 mg·mL−1 ARS solution prepared in a buffer of pH 7.4 with 50 mM TRIS and 44.7 mM HCl in deionized water. Secondly, they were washed with the buffer solution and dried in a vacuum chamber (Figure 5d).

Experimental Setup

The experimental setup employed to carry out the measurements is shown in Figure 6, together with a chemical illustration of the disaggregation of the ARS produced by the glucose. Light from the supercontinuum source (EQ-99-FC LDLS, Energetiq) was launched into the fiber probes using the minimum number of components. The light emitted from the source was first collimated with a collimating lens, then filtered with a band-pass filter (390-750 nm) to remove the UV emission, and then attenuated by an OD3 attenuator to avoid saturation of the detector. In order to cancel power fluctuations of the light source, the light power was monitored using a beam splitter and a photodiode connected to a power meter. The pump light was focused on the end face of the fiber probe using an objective (40×, 0.65 NA). The output light from the other end face was captured and focused through another collimator onto the spectrometer (USB Flame 390-750 nm, Ocean Optics® Inc., Largo, FL, USA).
The XPS spectra of Figure 7a show the general spectra of unmodified PMMA and PBA-functionalized PMMA plain sheets. At the characteristic binding energy of the boron atom (187.2 eV), a marked peak can be observed in the functionalized samples. The surface functionalization with PBA groups was confirmed by high-resolution spectra of the boron binding energy (Figure 7b), with 0.67% of the surface being covered by boron atoms (Table 1). Once the functionalization was confirmed and quantified in plain samples (Table 1), the absorption spectra of the U-shaped probes were obtained at each step of the functionalization process detailed above. As the transmitted signal power varies between steps, all of the obtained spectra were normalized to the minimum absorption value of each measurement. Regarding the absorption spectra shown in Figure 8a, there is a significant difference between the unmodified fiber (black line) and the PBA-modified fiber (red line). In the functionalized probe, the light entering the fiber suffers absorption caused by the PBA groups on the surface, so, compared with the unmodified probe, a strong absorption curve appears at the low wavelengths. In order to highlight the absorption caused by the functionalization process, the spectra were normalized with respect to the unmodified probe, Figure 8b (red line). The absorption maximum at 440 nm indicates the presence of the PBA groups. In the second step, when the U-shaped probes are charged with ARS, we observe in Figure 8b (blue line) that the ARS-charged fiber shows a wider absorption caused by the attached ARS.
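The two normalizations used in Figure 8 (referencing each spectrum to its own minimum absorption, and then to the unmodified probe) reduce to simple array operations. Below is a minimal NumPy sketch under the assumption that "normalized with the minimum absorption value" means subtracting each spectrum's minimum; variable names are illustrative, not taken from the authors' code.

```python
import numpy as np

def baseline_normalize(absorbance):
    """Reference a spectrum to its own minimum absorption value,
    compensating for step-to-step changes in transmitted power."""
    a = np.asarray(absorbance, dtype=float)
    return a - a.min()

def normalize_to_probe(absorbance, reference_absorbance):
    """Subtract the unmodified-probe spectrum so that only the absorption
    added by a given step (e.g. the PBA band at 440 nm) remains."""
    return baseline_normalize(absorbance) - baseline_normalize(reference_absorbance)
```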
Glucose Detection

For the detection of glucose, we recorded and compared the absorption spectra of different fibers before and after their immersion in different glucose solutions for 10 min. The probes were immersed in 6 mL of solution using polycarbonate cuvettes, and only 1.5 cm of the sensitive area of the probes was immersed. The probes were then left in air until the transmission signal was stable before spectra were recorded; afterwards, these spectra were compared with the initial spectra. The measurements were made using the same glucose concentration, 0.1 M, in three different media, two of them physiological, i.e., with a pH similar to that of the human body, 7.2-7.4. More specifically, these three media were deionized water (H2O), PBS buffer, and TRIS buffer. Notice that these media have pH values well above the pKa of boronic acid (these buffers are also used for physiological media), making the interaction with the glucose possible. By testing the method in these different media, we intend to prove the suitability of the fiber probe regardless of the selected medium and its hypothetical application to a biological medium. For the H2O solvent, results are shown in Figure 9.
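Computationally, the before/after comparison described above amounts to differencing two absorbance spectra and reading out the change around the ARS band. A minimal sketch follows, assuming spectra sampled on a common wavelength grid; the band limits are an assumption placed around the ~533 nm feature discussed below.

```python
import numpy as np

def ars_release_signal(wavelengths, abs_before, abs_after,
                       band=(500.0, 560.0)):
    """Mean absorbance drop across an assumed ARS band (around 533 nm)
    after the probe has been immersed in the glucose solution; a larger
    drop indicates more ARS displaced from the boronic acid sites."""
    wavelengths = np.asarray(wavelengths)
    mask = (wavelengths >= band[0]) & (wavelengths <= band[1])
    delta = np.asarray(abs_before)[mask] - np.asarray(abs_after)[mask]
    return float(delta.mean())
```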
From the absorption spectra (Figure 9a), it can be noticed that the absorbance of the ARS-charged fiber probe (blue line) is higher at lower wavelengths than in the subsequent step, i.e., after the same probe has been immersed in the glucose solution and has released ARS (green line). Ideally, if the glucose were able to disrupt all the ARS-boronic bonds, we would recover the original absorption curve corresponding to the functionalized fiber (Figure 9a, red line). To see the effect clearly, the right-hand side graph plots the absorption after immersion normalized to the ARS-charged state, showing a dramatic decrease of the absorbance around the ARS absorption wavelength. Regarding the physiological media, PBS and TRIS, their results are shown in Figure 10. For both physiological media, the charged probe released the ARS due to the bonding of glucose on the boronic-acid-functionalized surface of the probe. For the case of PBS (Figure 10a), we can also observe a decrease in the absorption around 533 nm, which is highlighted in the right-hand side graph. In the case of TRIS (Figure 10b), the probe behaves qualitatively in the same way as in the other media. Indeed, we verified that no other effect in the reaction influences the absorption of the fiber probe. All in all, we can conclude that, irrespective of the medium, the performance of the sensor is similar, even though the results of Figure 9 suggest a larger amount of ARS released in H2O.

Conclusions

In this paper, we have reported a potential glucose detection platform. The surface modification of a low-cost U-bent fiber is enough for the detection of glucose regardless of the surrounding medium, in contrast to other sensors based on evanescent wave sensing whose detection method relies on measuring the change in the refractive index. The functionalization of the PMMA with phenylboronic groups, which have high affinity to the diol groups of the glucose, allows glucose detection in physiological media. Measurement of the disaggregation of ARS in the visible range, together with the simplicity of the probe, paves the way for low-cost solutions for glucose detection.
2018-01-08T01:18:09.871Z
2017-12-25T00:00:00.000
{ "year": 2017, "sha1": "6b7eed90678d620f54fd952690a89e0afc71388c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/s18010034", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1bd25021c7da45e230ce1ec222c12a721cce3443", "s2fieldsofstudy": [ "Chemistry", "Engineering", "Materials Science", "Medicine" ], "extfieldsofstudy": [ "Materials Science", "Computer Science", "Medicine" ] }
207888757
pes2o/s2orc
v3-fos-license
A randomised single-centre trial of inhaled liposomal cyclosporine for bronchiolitis obliterans syndrome post-lung transplantation

Introduction No proven treatments exist for bronchiolitis obliterans syndrome (BOS) following lung transplantation. Inhaled liposomal cyclosporine (L-CsA) may prevent BOS progression. Methods A 48-week phase IIb randomised clinical trial was conducted in 21 lung transplant patients with BOS assigned to either L-CsA with standard-of-care (SOC) oral immunosuppression (L-CsA group) or SOC (SOC-alone group). Efficacy end-points were BOS progression-free survival (defined as absence of ≥20% decline in forced expiratory volume in 1 s (FEV1) from randomisation, re-transplantation or death) and BOS grade change. Results BOS progression-free survival was 82% for L-CsA versus 50% for SOC-alone (p=0.1) and BOS grade worsened in 18% for L-CsA versus 60% for SOC-alone (p=0.05). Mean changes in ΔFEV1 and forced vital capacity, respectively, stabilised with L-CsA: +0.005 (95% CI −0.004–+0.013) and −0.005 (95% CI −0.015–+0.006) L·month−1, but worsened with SOC-alone: −0.023 (95% CI −0.033–−0.013) and −0.026 (95% CI −0.039–−0.014) L·month−1 (p<0.0001 and p=0.009). Median survival (4.1 versus 2.9 years; p=0.03) and infection rate (45% versus 60%; p=0.7) improved with L-CsA versus SOC-alone; creatinine and tacrolimus levels were similar. Conclusions L-CsA was well tolerated and stabilised lung function in lung transplant recipients affected by BOS without systemic toxicity, providing a basis for a global phase III trial using L-CsA.

Introduction

Outcomes after lung transplantation are poor due to bronchiolitis obliterans [1]. Since bronchiolitis obliterans is not readily demonstrated by lung biopsies, the term bronchiolitis obliterans syndrome (BOS) is applied, defined as a sustained decline in forced expiratory volume in 1 s (FEV1) [2]. Treatments for bronchiolitis obliterans are poorly efficacious [3-6]. When higher dosages of calcineurin inhibitors are given for improved immunosuppression, nephrotoxicity and opportunistic infections are limiting [7]. This trial is the first randomised controlled study using L-CsA, a liposomal formulation of aerosolised cyclosporine A tailored for fast and targeted drug aerosol delivery with a high-performance nebuliser (eFlow), given in addition to standard-of-care (SOC) oral immunosuppression for the treatment of BOS following lung transplantation.

Patient characteristics

This open-label randomised trial was conducted at the University of Maryland (Baltimore, MD, USA) with Institutional Review Board approval. This study is registered at ClinicalTrials.gov with identifier number NCT01650545. The trial was conducted by way of the primary author's (A.I.) Investigational New Drug (IND) application. Enrolment was from September 2012 to January 2015. Follow-up for lung function was for 1 year and survival until September 2017. Patients ⩾18 years of age were eligible if they were recipients of a single or bilateral pulmonary allograft, had clinically diagnosed BOS grade 1 or 2 [2] within 4 weeks of study entry and were receiving tacrolimus-based immunosuppression. Exclusion criteria are listed in the supplementary material. No patient had restrictive chronic lung allograft dysfunction or antibody-mediated rejection prior to or at randomisation, or thereafter [29,30].
Patients randomised to the L-CsA arm were scheduled to receive L-CsA twice daily for 24 weeks at doses of 5 mg (single allograft) or 10 mg (double allograft), in addition to SOC. After the initial 24-week treatment period, patients in the L-CsA arm continued on SOC during a subsequent 24-week follow-up. Patients randomised to the SOC-alone arm received standard immunosuppression only.

Trial design and evaluations

The objective of the study was to evaluate the safety and efficacy of L-CsA for grade 1 and 2 BOS. Because single lung recipients have a worse outcome, randomisation was stratified according to single and bilateral status. Patients were then randomly assigned to groups according to block randomisation in a 1:1 ratio to receive either L-CsA or SOC-alone. Study treatment began as soon as possible after randomisation, typically within 7 days. If SOC-alone patients met a primary end-point of ⩾20% decline in FEV1 from randomisation and still met initial study entry criteria, L-CsA was permitted as "rescue" crossover. Additionally, if this efficacy end-point occurred during the second 24-week follow-up period after L-CsA administration in that arm, L-CsA could then be re-initiated for a second 24-week period. Crossover patients in both arms were followed clinically, but their data were included in the study end-point analyses only up until they met a primary study end-point.

End-points

There were two primary end-points: 1) a composite of BOS progression-free survival, defined as time from randomisation to ⩾20% decline in FEV1, re-transplantation or death, whichever occurred first (prolonged mechanical ventilation and irreversible respiratory failure were considered equivalent to a ⩾20% decline of FEV1), and 2) BOS grade progression, assessed by grade changes from randomisation to study completion. A decline in FEV1 was validated for absence of concurrent illness by measurements taken at intervals ⩾3 weeks apart.

Safety

Patient and graft survival and adverse events, including infections and symptoms related to L-CsA, were quantified as an index of safety and compared between study arms. An Outcomes and Safety Committee adjudicated events.

Statistical analysis

As the first phase IIb trial using L-CsA for BOS treatment, the number of patients to be randomised was determined by the availability of L-CsA and other resources. The IND study specified the end-points, safety measures and a 3-year enrolment period of 30 patients. No modifications were made after trial initiation. 15 patients per group was deemed appropriate, as absence of the desired outcomes for L-CsA would discourage further drug development. Enrolment of qualifying recipients was discontinued after 3 years, after accrual of 21 subjects. The target enrolment goal was not met due to lower than anticipated enrolment rates. Outcome data collection continued until either a primary outcome event occurred or patients without events completed the study at 48 weeks. Patient and graft survival were monitored until September 2017 as an assessment of safety, independent of continuation or discontinuation of L-CsA. Patients were analysed according to the intention-to-treat principle. No patient was lost to follow-up. End-point events were compared by Kaplan-Meier survival analyses and log-rank testing, as specified a priori by our protocol. A p-value of <0.05 indicated statistical significance. Since the patient survival analysis showed nonproportionality, the Renyi statistic was also used. Data are presented with hazard ratios and 95% confidence intervals.
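For readers wishing to reproduce this style of analysis, the sketch below performs a Kaplan-Meier comparison with log-rank testing using the Python lifelines package. The patient-level data shown are placeholders (the trial's individual data are not reproduced here), so the snippet illustrates the method rather than the paper's actual computation.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Placeholder data: time to event (months) and event flags,
# where event = 1 means >=20% FEV1 decline, re-transplant or death.
df = pd.DataFrame({
    "months": [12, 11, 9, 12, 12, 7, 5, 12, 12, 10],
    "event":  [0,  0,  1, 0,  0,  1, 1, 0,  1,  1],
    "arm":    ["L-CsA"] * 5 + ["SOC"] * 5,
})

kmf = KaplanMeierFitter()
for arm, grp in df.groupby("arm"):
    kmf.fit(grp["months"], event_observed=grp["event"], label=arm)
    print(arm, "progression-free at 12 months:",
          float(kmf.survival_function_at_times(12).iloc[0]))

a, b = df[df.arm == "L-CsA"], df[df.arm == "SOC"]
res = logrank_test(a["months"], b["months"], a["event"], b["event"])
print("log-rank p-value:", res.p_value)
```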
For lung function analyses, multivariate linear mixed effects statistical models (PROC MIXED in SAS version 9.1.3; SAS Institute, Cary, NC, USA) were utilised [31]. Secondary end-points included lung function changes, infection rates and survival. For lung function, as pre-specified for single and bilateral lungs, one mixed model was based only on post-randomisation lung function data using a longitudinal regression model, while a second model accounted for intragroup values pre-randomisation, adjusting for within-patient trends that could potentially influence post-randomisation function. Changes in cytokine measurements from pre- to post-randomisation were compared using two-way ANOVA from 42 BAL collections (21 in each group). Sirolimus and tacrolimus levels and routine laboratory values were compared using a mixed effects model. A total of 243 pulmonary function tests (122 L-CsA and 121 SOC-alone) and 603 blood samples were analysed.

Patient characteristics

Of 43 patients screened, 17 failed to meet BOS grade criteria and 21 were randomised (11 to L-CsA and 10 to SOC-alone) (figure 1). Baseline characteristics and clinical management of the two groups were similar, although more cytomegalovirus mismatches were randomised to L-CsA (table 1). Mean±SD time to BOS confirmation for L-CsA was comparable to SOC-alone (1391±859 versus 1061±796 days; p=0.41). Forced vital capacity (FVC) decline prior to randomisation for both L-CsA and SOC-alone was similar (−0.025 (95% CI −0.034–−0.015) versus −0.021 (95% CI −0.030–−0.012) L; p=0.69). Azithromycin use, induction cycles, BOS grades and absolute FEV1 decline rates prior to randomisation were all similar (supplementary material). All randomised patients reached the efficacy end-point or completed 48 weeks of follow-up. Both the L-CsA and SOC-alone groups received similar cycles of augmentation of immune suppression after randomisation (three steroid pulses and one ATG cycle). Five patients had positive donor-specific antibody results post-transplantation: two patients in L-CsA and three in SOC-alone (three with human leukocyte antigen class 2 reactivity).

FIGURE 1 Study enrolment. L-CsA: liposomal cyclosporine; SOC: standard of care; FEV1: forced expiratory volume in 1 s; BOS: bronchiolitis obliterans syndrome. 43 patients were assessed for eligibility for this study. 17 screened patients did not meet BOS grade 1 or 2 criteria and three patients met exclusion criteria. 23 patients met eligibility criteria. One patient died and one patient withdrew prior to randomisation. 21 patients were randomised: 11 patients to the inhaled L-CsA treatment arm given in addition to conventional oral immunosuppression (SOC) and 10 patients to the SOC-alone arm. Patients were followed until an efficacy end-point occurred (a ⩾20% FEV1 decline, re-transplantation or death) or until week 48. If the efficacy end-point event occurred before week 48 in the SOC-alone arm, crossover to L-CsA was permitted. If the efficacy end-point occurred in the L-CsA group during the 24-week observation interval only, re-treatment with L-CsA was possible if patients still fulfilled eligibility criteria. Outcomes included re-transplantation (n=2), successful L-CsA crossover# (n=1), unsuccessful L-CsA crossover# followed by re-transplantation (n=1), and BOS progression with mechanical ventilation (n=1). One SOC-alone patient developed protracted respiratory failure (>3 weeks duration) due to progressive BOS.
#: the mean duration of L-CsA crossover or re-therapy was 156 days (a successful L-CsA "crossover" or "re-therapy" was defined as the absence of a ⩾20% FEV1 decline relative to the time of initiation, following the end-point definition, over the on-treatment period). Lung function stabilised after L-CsA was resumed (a "rescue" crossover).

Of the five end-point occurrences in the SOC-alone group, two patients were re-transplanted, one developed respiratory failure and one out of two patients who crossed over from SOC-alone responded to L-CsA without further interventions. BOS grade progression from randomisation occurred three-fold less commonly in patients receiving L-CsA versus SOC-alone (p=0.05) (figure 2b).

Lung function changes prior to randomisation and after L-CsA

(Table 1 footnote: data are presented as n, mean±SD or n (%), unless otherwise stated. COPD: chronic obstructive pulmonary disease; HLA: human leukocyte antigen; FEV1: forced expiratory volume in 1 s; BOS: bronchiolitis obliterans syndrome.)

Histopathology before and after randomisation to L-CsA

Transbronchial biopsies were performed when indicated by the University of Maryland protocol. Prior to randomisation, histology demonstrated four cases with airway rejection among L-CsA patients (two with grade B1 and two with histological changes suggestive of bronchiolitis obliterans) and only one among those receiving SOC-alone (one patient with changes suggestive of bronchiolitis obliterans). Post-randomisation, four patients receiving SOC-alone had airway rejection (two B2 and two B1), whereas two patients receiving L-CsA had B1 airway rejection. One patient receiving L-CsA and two patients receiving SOC-alone experienced grade 1 acute rejection. No patient had histopathological changes consistent with antibody-mediated rejection or positive C4d staining prior to or after randomisation.

Pharmacokinetics

Cyclosporine blood sampling was done for all patients randomised to L-CsA and one crossover patient. The mean±SD maximum cyclosporine blood concentration (Cmax) was 57.42±34.26 ng·mL−1, achieved after 15−30 min (tmax), and the half-life (t1/2) was ∼2 h. At 24 h, the mean±SD cyclosporine blood concentration was 1.42±4.91 ng·mL−1.

Adverse events

No adverse event required withdrawal from L-CsA or permanent drug discontinuation. No patient was lost to follow-up. Peak expiratory flow (PEF) at the first dosing was 367.7 L·min−1 prior to inhalation and 327.7 L·min−1 after inhalation (a 10.9% decrease). No patient met the pre-specified 20% PEF decline criterion for discontinuing L-CsA. Three adverse events were related to L-CsA: conjunctivitis, pharyngitis and productive cough.

Discussion

Lung transplant survival is limited and has failed to improve substantially during the past two decades. Bronchiolitis obliterans is a leading cause of death [3]. Inhalation of cyclosporine provides high bronchiolar concentrations and may arrest BOS progression [21]. This initial exploratory randomised controlled open-label trial of L-CsA provides evidence for improvement of BOS as defined by the composite end-point, BOS progression-free survival, with a clinically meaningful benefit at 48 weeks (82% L-CsA versus 50% SOC-alone; p=0.1) and a three-fold arrest of BOS grade progression (p=0.05). Although the difference in BOS progression-free survival was not statistically significant in 21 cases, the clinical magnitude of the benefit, i.e. an absolute difference of 32%, was large.
Moreover, the comparison of change in BOS grade from randomisation was also impressive, with a two-thirds reduction in L-CsA patients. L-CsA significantly stabilised FEV1 and FVC compared with SOC-alone. In addition to the intergroup differences in FEV1 decline, the intragroup change in FEV1 slopes differed before and after randomisation in the L-CsA group, converting from a negative to a positive slope; in contrast, SOC-alone controls declined functionally post-randomisation at rates similar to pre-randomisation, demonstrating the characteristic inexorable decline of FEV1 from BOS despite current SOC management without L-CsA [32,33]. Crossover patients who were off L-CsA but started L-CsA because of ongoing deterioration showed similar FEV1 improvements. Prior investigations of bronchiolitis obliterans using aerosol cyclosporine in propylene glycol, as well as a recent study using inhaled cyclosporine for BOS following haematopoietic stem cell transplantation, have shown similar pulmonary function benefits [20,26,34], as have other nonrandomised studies using immunosuppressive therapies for bronchiolitis obliterans [35]. L-CsA resulted in improved long-term patient and graft survival (4.1 versus 2.9 years with SOC-alone), a finding nearly matching an observational cohort study performed at the University of Pittsburgh in the USA in 2005, which compared histological bronchiolitis obliterans patients treated with inhaled cyclosporine to SOC-alone controls (median survival 4.5 versus 2.4 years) [18]. An L-CsA survival benefit was also noted in a randomised double-blind placebo-controlled trial showing that bronchiolitis obliterans could be prevented by the addition of aerosolised cyclosporine-propylene glycol [25]. Allograft histology demonstrated reduced severity and frequency of bronchiolar inflammation after L-CsA randomisation but not before randomisation, and synchronous levels of IL-2 in BAL were lower in L-CsA cases after, but not prior to, randomisation [36]. Elevated cyclosporine concentrations in the rejecting lung would explain these findings [21,28]. Increases in the BAL cytokines IFN-γ and IL-10 were also observed in L-CsA patients; IFN-γ regulates cellular proliferation and collagen synthesis, while IL-10 can induce immune tolerance [37,38]. Although the L-CsA and SOC-alone groups were similar with reference to baseline and subsequent BOS treatments, including tacrolimus exposure, immunosuppressive augmentation cycles and azithromycin use [39], the SOC-alone cohort did have significantly higher sirolimus blood levels, consistent with physician-directed attempts to control progressing BOS and lung failure. L-CsA was given twice a day to ensure that the beneficial dose of 5 mg would be deposited in the lung allograft. Pharmacokinetic studies demonstrated a low vascular concentration of cyclosporine. Infections and respiratory infections were similar between groups, and L-CsA offered greatly improved tolerability and reduced treatment time with the eFlow nebuliser (10-15 min) compared with cyclosporine-propylene glycol formulations [25]. With increased experience using L-CsA in larger scale trials, systemic immunosuppressive requirements could lessen, as witnessed by the reduced sirolimus exposure in L-CsA patients. In this small, single-centre trial, the addition of inhaled L-CsA offered a substantial functional benefit without additional toxicity. Due to the exploratory nature of this study, further experience is needed to confirm the magnitude and duration of the observed effects.
Patient enrolment for a phase III international multicentre trial using L-CsA for BOS has begun (ClinicalTrials.gov identifiers NCT03657342 and NCT03656926).
2019-10-31T09:13:02.104Z
2019-10-01T00:00:00.000
{ "year": 2019, "sha1": "72ccdf4c814be0e9a1a3ea129fe68d5175e75fad", "oa_license": "CCBYNC", "oa_url": "https://openres.ersjournals.com/content/erjor/5/4/00167-2019.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "812d00c3a31fdb59281d05c7b063fce0d7fd4c6a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246677067
pes2o/s2orc
v3-fos-license
Metabolic snapshot of plasma samples reveals new pathways implicated in SARS-CoV-2 pathogenesis

Despite the scientific and human efforts to understand COVID-19, there are questions still unanswered. Variations in the metabolic reaction to SARS-CoV-2 infection could explain the striking differences in the susceptibility to infection and the risk of severe disease. Here, we used untargeted metabolomics to examine novel metabolic pathways related to SARS-CoV-2 susceptibility and COVID-19 clinical severity using capillary electrophoresis coupled to a time-of-flight mass spectrometer (CE-TOF-MS) in plasma samples. We included 27 patients with confirmed COVID-19 early after symptom onset who were prospectively followed and 29 healthcare workers heavily exposed to SARS-CoV-2 but with low susceptibility to infection ('nonsusceptible'). We found that the metabolite profile was predictive of the study group. We identified a total of 55 metabolites as biomarkers of SARS-CoV-2 susceptibility or COVID-19 clinical severity. We report the discovery of new plasma biomarkers for COVID-19 that provide mechanistic explanations for the clinical consequences of SARS-CoV-2, including mitochondrial and liver dysfunction as a consequence of hypoxemia (citrulline, citrate, and BAIBA), energy production and amino acid catabolism (L-glycine, L-alanine, L-serine, L-proline, L-aspartic acid and L-histidine), and endothelial dysfunction and thrombosis (citrulline, L-ADMA, 2-AB, and Neu5Ac), and we found interconnections between these pathways. In summary, in this first report of the metabolomic profile of individuals with severe COVID-19 and SARS-CoV-2 susceptibility by CE-MS, we define several metabolic pathways implicated in SARS-CoV-2 susceptibility and COVID-19 clinical progression that could be developed as biomarkers of COVID-19.
Introduction

Despite the effective response to the worst pandemic that humanity has faced in recent decades, the metabolic and biochemical processes during SARS-CoV-2 infection remain poorly understood. Most studies that have thus far investigated the biochemical pathways affected by SARS-CoV-2 rely on powerful bioanalytical techniques. Using untargeted and targeted metabolomics, other groups have identified disruption of lipid and amino acid metabolism, such as the kynurenine pathway, as potentially relevant pathways associated with COVID-19 pathogenesis (1-5). Other candidate pathways that could be involved in clinical progression include pyrimidine (1,2) and purine (1,6-8) metabolism, fructose and mannose metabolism (1,7) and carbon metabolism (1,2,9), although the specific mechanisms remain unclear. Overall, the effort to elucidate the global snapshot of the biochemical processes behind SARS-CoV-2 infection is still in progress. Metabolomic profiling can be performed by mass spectrometry (MS) coupled to a separation technique such as liquid chromatography (LC-MS), gas chromatography (GC-MS) or capillary electrophoresis (CE-MS). CE-MS is used to study polar and ionizable compounds such as free modified amino acids (MAAs) and "epimetabolites", which are side products of enzyme reactions. These MAAs and the appearance of epimetabolites have been associated with important alterations in cellular, physiological, and pathological processes (10-13). While CE-MS is a powerful method to characterize unknown mechanisms of disease progression, to our knowledge, it has not been used in individuals with COVID-19. Here, we investigated novel metabolic pathways of SARS-CoV-2 susceptibility and COVID-19 clinical progression using CE-MS in longitudinal plasma samples from patients with COVID-19 with different disease severities and in a population of healthcare workers heavily exposed to SARS-CoV-2 but with low susceptibility to infection.

General characteristics of the study population

We included 63 adults, of whom 27 were in the COVID-19+ group and 36 were in the COVID-19− group, of whom 24 were nonsusceptible. COVID-19+ and susceptible patients were older and had a higher prevalence of comorbidities than COVID-19− and nonsusceptible patients. The general characteristics of the study population are described in Table 1.

(Fig 1 caption, fragment: (B) Plot B represents the comparison of susceptible and nonsusceptible participants, with R2 = 0.902, Q2 = 0.817, and CV-ANOVA p-value = 4.20 × 10−17. Models were validated by permutation testing and CV-ANOVA (14,15). Hydroxychloroquine, initially found to be significant, was removed from all statistical analyses, as it was used empirically to treat COVID-19 at the time of sample collection.)
Metabolic profile differences associated with COVID-19 clinical severity

We then performed subgroup analyses separating COVID-19+ participants by clinical severity. While no differences in untargeted metabolomic profiles were found in the PCA (Fig S1 C), inspection of the PLS-DA score plots (Fig S2 C) showed clear clustering that did not meet the prespecified validation criteria. Pairwise comparisons of OPLS-DA models of all 3 categories fulfilled the validation criteria, indicating that there were statistically significant differences in the metabolomes of mild vs. severe and of moderate vs. severe cases (Fig 2). Similar to other COVID-19 and susceptibility studies, a total of 8 metabolites, including creatine, citrulline and 6 unknown features, were identified as predictors of greater disease severity (VIP ≥ 1 and │p(corr)│ ≥ 0.5) (Table S1).

Longitudinal metabolic changes during COVID-19

We then sought to assess the effect of time on the metabolomes of participants with COVID-19 following a similar strategy. A clear separation between baseline and day 8 was found for mild and moderate cases (Fig S1 D1-D3; Fig S2 D1-D2). For severe cases, the PLS-DA model could not be fitted due to the limited availability of paired samples. Validated OPLS-DA models (Fig 3) showed that the longitudinal differences detected for mild and moderate cases were statistically significant (CV-ANOVA p-value < 0.05 and R2 − Q2 < 0.3). We found 10 metabolites whose abundance differed from baseline to day 8 in mild cases and 7 in moderate cases (VIP ≥ 1 and │p(corr)│ ≥ 0.5) (see Table S1).

Complementary characterization of metabolomic predictors of COVID-19 disease status and susceptibility

To visually summarize the metabolite fingerprint associated with COVID-19 disease and SARS-CoV-2 susceptibility, we represented the abundance of the metabolites identified by univariate analysis followed by multivariate statistical analysis as predictors of each condition in heatmaps with hierarchical clustering (Fig 4). ANOVA-simultaneous component analysis (ASCA) identified age as the only factor significantly associated with the outcome. Thus, we further assessed the metabolites previously identified as predictors of COVID-19 disease severity or susceptibility controlling for age using ANCOVA (Tables S2 and S3). Of them, NG,NG'-dimethyl-L-arginine (L-SDMA), L-cystine and L-carnitine lost statistical significance. L-Kynurenine and citric acid remained significantly predictive of COVID-19 disease and SARS-CoV-2 susceptibility, respectively. The selection of metabolites that could be fully characterized and their effect sizes are summarized in Table 2.
Discussion

To our knowledge, this study is the first to evaluate the plasma metabolomic profile of individuals with severe COVID-19 and SARS-CoV-2 susceptibility by CE-MS. Our work demonstrates the potential of CE-MS to unveil new plasma biomarkers of COVID-19 and SARS-CoV-2 susceptibility and allows a deeper understanding of the metabolic consequences of SARS-CoV-2 infection (Fig 5).

Fig 5. A model of the metabolic pathways implicated in COVID-19 pathogenesis. Impairment of blood oxygenation following SARS-CoV-2 damage results in 1) inefficient mitochondrial metabolism in the liver, resulting in dysregulation of the urea cycle (citrulline decreases, phenylalanine increases); 2) dysregulation of energy metabolism and amino acid metabolism, resulting in decreased amino acids such as L-serine and L-alanine; 3) activation of the oxidative stress response, resulting in BAIBA accumulation, L-ADMA upregulation, and induction of the kynurenine pathway, which impairs mucosal immunity, allowing bacterial superinfections. Figure generated using biorender.com.

Among the significant metabolites, we found that the citrulline concentration decreases over the course of COVID-19 disease, and that low levels early in the course of the disease are associated with greater clinical severity. This finding is consistent with those reported in a recent work, where carbamoyl phosphate levels, a substrate for citrulline biosynthesis in the mitochondria of liver cells, decreased with greater disease severity (1). Because citrulline is an intermediate in the urea cycle and a byproduct of the enzymatic production of nitric oxide from arginine (16), these findings point to either dysregulation of the urea cycle or liver dysfunction as the underlying mechanism explaining the links between this metabolite and COVID-19. Furthermore, increased levels of circulating phenylalanine, which were found to be associated with COVID-19 in our study, have also been reported in patients with hepatic fibrosis, acute hepatic failure and hepatic encephalopathy, as well as in COVID-19 disease (5). Apart from phenylalanine, other amino acids (AAs) were found to be significantly different between the groups (Table 2). Among them, L-glycine, L-alanine, L-serine, L-proline, L-aspartic acid and L-histidine were downregulated in patients. Previous studies have revealed that SARS-CoV-2 infection dysregulates pathways linked to energy production and amino acid catabolism (17,18). In a murine model of SARS-CoV-2, Li et al.
found several genes commonly downregulated in multiple organs that led to significant enrichment in pathways related to oxidative phosphorylation and the electron transport chain (17). As the tricarboxylic acid (TCA) cycle is connected to the electron transport chain, they also analyzed genes associated with the TCA cycle. They found that several TCA cycle genes were downregulated and that TCA cycle metabolites were decreased in animal serum (17). Apart from the AAs that lead to intermediates of the TCA cycle, which were downregulated in the COVID-19+ group, the significant downregulation of citrate also suggested that SARS-CoV-2 results in inefficient mitochondrial metabolism (18,19), which can be interpreted as the metabolic response to impaired oxygenation secondary to lung damage (9). Citrate is a direct TCA cycle metabolite obtained by the action of citrate synthase on oxaloacetate, and the gene encoding this enzyme exhibits decreased expression (17). Different genes, proteins and/or metabolites involved in the TCA cycle have been found to be suppressed or downregulated in individuals with COVID-19 (18,19). An intriguing finding in our study is the upregulation of 3-aminoisobutyric acid (BAIBA) associated with COVID-19. BAIBA is a catabolite of thymine and valine metabolism that has been proposed as a novel regulator of carbohydrate and lipid metabolism associated with aerobic exercise (20). Although little is known about the implications of BAIBA in pathogenesis, the fact that the two enantiomers of BAIBA (R-BAIBA and S-BAIBA) are ultimately metabolized in mitochondria further supports the idea that mitochondrial and TCA cycle abnormalities are a metabolic hallmark of COVID-19 pathogenesis, as also indicated by the abnormalities detected in amino acid and citrate metabolism (21). As BAIBA is primarily metabolized by mitochondria, the accumulation of BAIBA in patients with COVID-19 could be explained by a reduction in mitochondrial functionality and TCA cycle suppression following impairment of blood oxygenation. To our knowledge, BAIBA has never been proposed as a putative metabolite involved in COVID-19 disease. This result is of special interest not only for further investigating BAIBA as a novel biomarker for COVID-19 disease but also for elucidating its role in metabolism under physiological stress conditions or hypoxemia. We also found evidence that SARS-CoV-2 affects metabolic pathways implicated in endothelial dysfunction, thrombosis, and cardiovascular disease. First, nitric oxide synthase (NOS) is an enzyme that catalyzes the production of citrulline and nitric oxide (NO) from arginine. This enzyme is inhibited by asymmetric dimethylarginine (L-ADMA), which is upregulated in COVID-19 patients and is an endogenous competitor of arginine, the nitric oxide precursor (22). L-ADMA has been associated with elevated oxidative stress (23). The higher L-ADMA concentrations found in individuals with COVID-19 suggest inhibition of NOS activity, which would ultimately result in decreased levels of NO. Because NO is among the principal redox molecules exploited by the immune system as a defensive mechanism, NO has been implicated in the control of viral replication, including that of HIV, influenza A and B, and vaccinia virus (24,25). Because it remains unexplained how SARS-CoV-2 produces severe endothelial injury, widespread thrombosis and microangiopathy (26), our findings offer a new mechanistic explanation for this hallmark of SARS-CoV-2 pathogenesis and point to the nitric oxide synthesis pathway as a potential
therapeutic target. Second, 2-aminobutyric acid (2-AB) and N-acetylneuraminic acid (Neu5Ac) were also upregulated in the COVID-19+ group. 2-AB appears to be a marker of a compensatory mechanism against oxidative stress (27) and has been implicated in the modulation of glutathione metabolism in the myocardium (28). This finding indicates that 2-AB deserves further attention as a biomarker of the myocardial dysfunction associated with COVID-19 (29). Finally, Neu5Ac is the most widespread form of the sialic acids, a family of compounds with a broad range of implications in human physiology (30). Because Neu5Ac concentrations have been correlated with the development of cardiovascular disease via RhoA signaling pathway activation (31,32), the higher Neu5Ac concentrations we found associated with COVID-19 provide a new pathway possibly linked to the excess risk of cardiovascular disease associated with SARS-CoV-2. Inflammation gained early attention as a crucial mechanism of SARS-CoV-2 pathogenesis (33). Indoleamine-2,3-dioxygenase-1 (IDO1), which is involved in tryptophan catabolism via the kynurenine pathway, is correlated with epithelial barrier disruption, bacterial translocation and inflammation in other viral infections (34). Induction of IDO1 results in the production of kynurenine derivatives with immunosuppressive effects, impairing mucosal immunity and promoting bacterial translocation and higher mortality (35). Impairment of the kynurenine pathway, resulting in reduced tryptophan (Trp) and elevated kynurenine (Kyn) levels associated with COVID-19, has previously been reported (3,7,36). Our data reveal not only the same tendency for Trp and Kyn but also an increasing tendency of the Kyn/Trp ratio with severity. This ratio has previously been associated with renal insufficiency in patients with SARS-CoV-2 and in many other diseases, such as inflammatory lung disease (5,37). Strikingly, IDO activity is induced by interferon-gamma (IFN-γ), as well as other cytokines and mediators (38,39), and it is inhibited under oxidative stress conditions by NO (39,40). Considering the reduction in NO synthesis mentioned previously, the alterations observed in the kynurenine pathway could be a result of the aforementioned metabolic abnormalities and could result in further impairment of mucosal immunity, providing an explanation for the significant rates of bacterial pneumonia associated with COVID-19 (35). The major strengths of our study include 1) the inclusion of COVID-19 cases in an early phase since the onset of symptoms, 2) the assessment of a special population of nonsusceptible individuals, 3) the high-throughput CE-MS method used to characterize the metabolome of the study participants, and 4) the inclusion of follow-up samples to assess the longitudinal variations of the plasma metabolites in a subset of participants. Our study is also subject to some limitations. First, the samples were collected during the first COVID-19 wave in Madrid. It is not yet known whether the emerging SARS-CoV-2 variants could lead to different metabolic consequences. Second, as expected, cases in the severe group were older and had more comorbidities than milder cases, so we considered potential confounders in our statistical approach. Third, in the subgroup analyses separated by clinical severity, the statistical power to detect differences in metabolite abundances was lower due to the smaller sample sizes. In summary, in this work examining for the first time the metabolic changes associated with COVID-19 by CE-MS, we report
the discovery of new plasma biomarkers for COVID-19 that provide mechanistic explanations for the clinical consequences of SARS-CoV-2, including mitochondrial and liver dysfunction as a consequence of hypoxemia (citrulline, citrate and BAIBA), energy production and amino acid catabolism (L-glycine, L-alanine, L-serine, L-proline, L-aspartic acid and L-histidine), and endothelial dysfunction and thrombosis (citrulline, L-ADMA, 2-AB, and Neu5Ac), and we found interconnections between these pathways (Figure 5). These biomarkers deserve further attention as biomarkers of SARS-CoV-2 susceptibility and COVID-19 clinical severity and as potential targets for interventions.

Reagents

All reagents, solvents and standards used for sample treatment and subsequent analysis are described in the Supporting Information.

Patient enrollment and sample collection

We analyzed data from adults recruited at Hospital Universitario Ramón y Cajal, Madrid, Spain. Participants had confirmed SARS-CoV-2 infection (COVID-19+ group) by PCR from nasopharyngeal swabs, sputum, or lower respiratory tract secretions within the first 7 days from the onset of symptoms and were classified according to clinical severity as follows: mild disease, defined as those without a need for supplemental oxygen and who were asymptomatic one week after diagnosis; moderate disease, defined as the presence of bilateral radiologic infiltrates or opacities and a clinical assessment requiring supplemental oxygen; and severe disease, defined as the development of acute respiratory distress syndrome (41). Hospitalized participants provided samples at baseline and 8 days later. Participants without SARS-CoV-2 (COVID-19− group) were asymptomatic subjects with a negative PCR from nasopharyngeal swabs. We considered adults to be "susceptible" when they had positive IgG for SARS-CoV-2 or previous COVID-19 confirmed by polymerase chain reaction (PCR) from nasopharyngeal exudate. Nonsusceptible adults were healthy healthcare workers who had been on duty for at least three months in COVID-19 wards or intensive care units and reported at least three high-risk exposures to SARS-CoV-2 (42) without having experienced symptoms suggestive of SARS-CoV-2 infection, were persistently negative on SARS-CoV-2 PCR testing and did not have SARS-CoV-2 IgM or IgG in plasma. The most frequent exposure was largely unprotected exposure to aerosol-generating procedures or patient secretions and close contact without face masks with other confirmed cases of COVID-19. We measured SARS-CoV-2 antibodies by indirect chemiluminescence immunoassay (Vircell, Granada, Spain). Cryopreserved plasma was processed for virus inactivation by adding 1500 µL of cold methanol:ethanol (MeOH:EtOH) in a 1:1 (v/v) proportion to 500 µL of plasma. Then, samples were vortex-mixed for 1 min, incubated on ice for 5 min and centrifuged at 16,000 × g for 20 min at 4 °C to precipitate and remove proteins. The clean upper layer or supernatant, which contained the metabolites of interest, was transferred to Eppendorf tubes and stored at −80 °C until analysis.
Sample treatment

Two hundred microliters of frozen supernatant was thawed on ice and evaporated to dryness using a SpeedVac Concentrator System (Thermo Fisher Scientific, Waltham, MA). It was then resuspended in 100 µL of 0.2 mM methionine sulfone (MetS) in 0.1 M formic acid. Samples were vortex-mixed for 1 min, transferred to a Millipore filter (30 kDa protein cutoff) and centrifuged for 40 min at 2,000 × g at 4 °C. Finally, the ultrafiltrate was transferred to a CE-MS vial for analysis. Quality control (QC) samples were prepared by pooling equal volumes of plasma supernatant from each sample and were treated as previously described. Finally, blank solutions were also prepared with MeOH:EtOH (1:1, v/v).

Nontargeted metabolomics by CE-MS

The plasma metabolome was analyzed using a 7100 capillary electrophoresis (CE) system coupled to a 6230 time-of-flight mass spectrometer (TOF-MS) from Agilent Technologies equipped with an electrospray ionization (ESI) source. The analysis was performed using a previously developed method (43) with the analytical conditions described in detail in the Supporting Information. The prepared QCs were analyzed at the beginning of the run to condition the CE system and then every seven randomized samples to reduce any time-related effect. The QCs were used not only to assess the reproducibility, stability and performance of the system but also to correct any signal deviation within the analytical sequence. A pair of blanks were injected at the beginning and end of the run to remove metabolites coming from the extraction solvent.

Data processing

CE-MS raw data were checked using MassHunter Qualitative software (version 10.0) to determine the data quality, the system mass accuracy and the reproducibility of the QC sample and IS injections. Then, raw data were aligned and processed with MassHunter Profinder software (version 10.0 SP1). The molecular feature extraction (MFE) and batch recursive feature extraction (RFE) algorithms, both included in MassHunter Profinder software, were used to obtain the list of mass-to-charge ratios (m/z) and their corresponding abundances (43). The resulting list was imported into Microsoft Excel, and the data matrix was filtered before statistical analysis by removing metabolites with a percentage coefficient of variation (%CV) greater than 30% in the QC samples. All the data processing steps are described in detail in the Supporting Information.
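The QC-based filter described above is a routine metabolomics step and is easy to reproduce once the feature matrix is in tabular form. The pandas sketch below assumes a hypothetical layout with features as rows and injections as columns, QC injections being identifiable by a column-name prefix; it illustrates the %CV > 30% rule and is not the authors' actual script.

```python
import pandas as pd

def filter_by_qc_cv(features: pd.DataFrame, qc_prefix: str = "QC",
                    max_cv_percent: float = 30.0) -> pd.DataFrame:
    """Drop features whose abundance varies too much across pooled-QC
    injections (%CV = 100 * std / mean greater than max_cv_percent)."""
    qc = features[[c for c in features.columns if c.startswith(qc_prefix)]]
    cv = 100.0 * qc.std(axis=1) / qc.mean(axis=1)
    return features.loc[cv <= max_cv_percent]

# Toy example (rows = features, columns = injections):
toy = pd.DataFrame(
    {"QC_1": [100, 50], "QC_2": [105, 90], "QC_3": [98, 20],
     "P01": [80, 60], "P02": [120, 70]},
    index=["citrulline", "unstable_feature"],
)
print(filter_by_qc_cv(toy))  # keeps only the reproducible feature
```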
Statistics

Multivariate (MVDA) and univariate (UVDA) statistical analyses were carried out to determine differences among groups. Different comparisons were performed to evaluate COVID-19 disease, disease severity, disease progression, and susceptibility. For this purpose, samples were labeled according to the comparison: infected or noninfected for disease diagnosis; susceptible or nonsusceptible for disease susceptibility; mild, moderate or severe at day 0 (d0) for disease severity; or day 0 and day 8 for disease progression. Then, the filtered matrix obtained in the previous step was processed with SIMCA-P version 15.0.2 (Umetrics, Umea, Sweden), MATLAB software (The MathWorks, Natick, MA, USA), MetaboAnalyst 5.0 and SPSS version 24 (IBM SPSS Statistics) for different purposes. When needed, the intensity drop was corrected with the QC correction function included in the toolbox freely available online at https://github.com/Biospec/cluster-toolbox-v2.0. Statistical analysis is described in more detail in the Supporting Information. Briefly, unsupervised PCA was performed to visualize tendencies, determine the presence of outliers, and assess data quality via the explained variance (R2) and the predicted variance (Q2), considering a difference between them of lower than 0.3 as appropriate (15). Then, the supervised methods PLS-DA and OPLS-DA were applied, followed by model validation. In the validated OPLS-DA models, variable selection was performed using the variable influence on projection (VIP) and the absolute value of p(corr), with thresholds greater than 1.0 and 0.5, respectively (14). Afterwards, UVDA was performed simultaneously to assess the significance of each metabolite separately. In short, nonparametric tests were applied for the comparisons previously mentioned as follows: a) the Kruskal-Wallis test for disease severity (mild, moderate, and severe patients at d0), followed by a multiple comparison test; b) the Wilcoxon signed-rank test for disease progression; and c) the Mann-Whitney U test for COVID-19 disease and susceptibility. In all cases, the p-value had to be less than 0.05, and the false discovery rate was controlled at a level of α = 0.05 by the Benjamini-Hochberg correction. Finally, ASCA was applied to study the influence associated with sex and age (44). When the ASCA model was not validated by permutation testing, analysis of covariance (ANCOVA) was carried out to eliminate the variability associated with age, sex or both (45).
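As an illustration of the univariate branch of this pipeline, the following sketch runs a Mann-Whitney U test per metabolite and applies Benjamini-Hochberg FDR control with SciPy and statsmodels. The data frame and group labels are hypothetical placeholders; only the test choices mirror the comparisons listed above.

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def univariate_screen(abundances: pd.DataFrame, groups: pd.Series,
                      alpha: float = 0.05) -> pd.DataFrame:
    """Mann-Whitney U per metabolite (columns = metabolites,
    rows = samples) with Benjamini-Hochberg FDR correction."""
    g0, g1 = groups.unique()[:2]
    pvals = []
    for met in abundances.columns:
        a = abundances.loc[groups == g0, met]
        b = abundances.loc[groups == g1, met]
        pvals.append(mannwhitneyu(a, b, alternative="two-sided").pvalue)
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return pd.DataFrame({"p_raw": pvals, "p_fdr": p_adj,
                         "significant": reject}, index=abundances.columns)

# Hypothetical usage with simulated abundances:
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.lognormal(size=(20, 3)),
                 columns=["citrulline", "citrate", "BAIBA"])
y = pd.Series(["COVID19+"] * 10 + ["COVID19-"] * 10)
print(univariate_screen(X, y))
```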
Metabolite identification

The features selected in the statistical step by UVDA or MVDA were tentatively identified based on the m/z of the metabolites and the relative mobility time (RMT) (RTmetabolite/RTMetS) using the CEU Mass Mediator (http://ceumass.eps.uspceu.es/mediator) (46), an in-house tool that is useful for identification. This tool joins several databases that are available online, such as METLIN (47), LIPIDMAPS (48), and KEGG (49), making the identification task faster and easier. Features assigned to metabolites have to fulfill an appropriate mass accuracy (maximum mass error of 15 ppm), as well as a comparable isotopic pattern distribution. Once metabolites were identified, confirmation was performed by injecting commercial standards, samples, and samples spiked with standards. Finally, for fragmentation pattern recognition, the QC sample was analyzed under the same analytical conditions as in the previous analysis but applying different voltages in the MS fragmentor (150, 175 and 200 V) (50). It is important to point out that any drug associated with COVID-19 treatments that was identified among the significant metabolites was excluded from both the MVDA and UVDA statistical analyses.

Study approval

The study was carried out at the Ramón y Cajal University Hospital in Madrid (Spain) and was approved by the local Research Ethics Committee (ceic.hrc@salud.madrid.org, approval number 095/20). Subjects unable to provide informed consent, or witnessed oral consent with written consent by a representative, were excluded.

Declaration of Interest

The authors declare that no competing interests exist.

Fig 4. Heatmap with group averages of the statistically significant metabolites detected in human plasma samples by CE-MS that are modified by SARS-CoV-2 infection. In green, metabolites involved in the TCA cycle; in purple, those involved in the kynurenine pathway; in blue, compounds of the nitric oxide pathway or related to NO regulation.

Table 2. Fold change of metabolite abundance in plasma samples associated with COVID-19 disease status and susceptibility. Columns: Compound; COVID-19+ vs. COVID-19−; Susceptible vs. Nonsusceptible; Severe vs. Mild; COVID-19+ day 8 vs. baseline. Colour indicates the direction of the fold change, with red representing decreases in metabolite abundance (see Table S1 for additional information).
BestOf: an online implementation selector for the training and inference of deep neural networks Tuning and optimising the operations executed in deep learning frameworks is a fundamental task in accelerating the processing of deep neural networks (DNNs). However, this optimisation usually requires extensive manual efforts in order to obtain the best performance for each combination of tensor input size, layer type, and hardware platform. In this work, we present BestOf, a novel online auto-tuner that optimises the training and inference phases of DNNs. BestOf automatically selects at run time, and among the provided alternatives, the best performing implementation in each layer according to gathered profiling data. The evaluation of BestOf is performed on multi-core architectures for different DNNs using PyDTNN, a lightweight library for distributed training and inference. The experimental results reveal that the BestOf auto-tuner delivers the same or higher performance than that achieved using a static selection approach. Introduction Artificial intelligence, and, in particular, machine learning via deep neural networks (DNNs) have experienced explosive growth due to the appearance of new algorithmic techniques, vast amounts of computer power, and an increased amount of training data [1][2][3][4][5]. This scenario has pushed the industry to design customised architectures for deep learning (DL), e.g. NVIDIA's Tensor Cores or Google's TPUs, as well as to develop frameworks such as Google's TensorFlow or Facebook's PyTorch. Tuning and optimising DL frameworks on these customised platforms are fundamental to reducing the overall training and inference costs [6]. For instance, the realisation of the forward and backward passes for the training of a convolutional layer may deliver distinct performance results depending on the selected algorithmic variant and the problem (layer) size. Similarly, the configurations to conduct individual tensor operations, such as paddings, shrinks or transpositions, may also affect the overall run time depending on their specific tensor size. A naive approach is to manually optimise the execution of DNN layers by selecting the best implementation according to post mortem profiling data. However, auto-tuners have been demonstrated to provide a better solution in these scenarios by selecting the algorithm for each problem that obtain the best performance [7,8]. Following this trend, in this work, we present a novel online implementation selector for DL frameworks which automatically selects the best possible implementation at run time. In particular, this work makes the following contributions: -We present BestOf, an online auto-tuner that selects the best algorithm for each problem according to their previous performance profiles within the same program execution. BestOf has been designed as a Python module and its interface can be easily used to replace actual calls in the original code for making selections. This auto-tuner is able to deal with grouped selections, where all routines in a group must be selected together due to implementation dependencies. Moreover, it can automatically manage and discover nested selections, allowing recursive decisions when inner functions also present alternative implementations. 
-We integrate BestOf as a module on PyDTNN, a lightweight framework for distributed training and inference of DNNs [9], and instrument it to permit the selection of (i) algorithms to perform the forward-backward passes in convolutional neural networks (CNNs), via either im2row+gemm (lowering convolution to gemm or General Matrix Multiplication [10]), convgemm (gemm with implicit im2row, see [11]), or variants of the Winograd algorithm [12]; and (ii) implementations to conduct 4D tensor transpositions. -We evaluate the performance obtained by BestOf for training and inference with VGG16 and inference with ResNet34 using two multi-core nodes equipped with Intel Xeon Skylake processors. This study is completed with a per-layer analysis that assesses the performance gains, as well as the throughput attained along with the training steps. The rest of the paper is organised as follows. In Sect. 2, we revisit some related work on auto-tuning tools and frameworks and compare them against the approach presented in this work. In Sect. 3, we describe the user interface and the internals of BestOf. In Sect. 4, we briefly introduce PyDTNN and detail how BestOf was integrated to select different implementation alternatives. In Sect. 5, we evaluate the benefits of BestOf by comparing its throughput with native versions. Finally, in Sect. 6, we close the paper with a summary and a collection of concluding remarks. Related work Current software libraries in general deploy distinct computational kernels depending on the underlying hardware. Typically, once the user selects the processor type (or specification) from within a limited list, the optimum computational kernel is selected [13]. Usually, this approach does not take into account other considerations that could affect the kernels performance, such as the problem dimensions. Conversely, several automatic selections have been applied for decades in order to extract the maximum computational power of the hardware. The automatic selection of the best implementation for a computational kernel dates back to the ATLAS dense algebra library [14]. This library was probably the first popular BLAS implementation to execute benchmarks during its installation phase in order to select the best algorithm parameters. Among others, the main parameter in ATLAS is the matrix multiplication block size which depends heavily on the memory cache properties. This automatic selection has been extended recently to accelerator platforms, selecting not only the algorithm implementation but also which hardware to use (for example, CPU or GPU) [15,16]. Nevertheless, as the selection is performed offline, the adopted decision cannot be changed afterwards. The main drawback of offline selectors is the potentially very large search space for all possible input sizes of an algorithm. Typically, libraries with offline selectors use heuristics or some form of optimisation to limit the number of tests performed during the installation process. This is particularly difficult for the convolution in neural networks which have a large number of parameters, exponentially increasing the search space. For instance, the work by Anderson et al. [17] uses partitioned boolean quadratic programming (PBQP) for selecting the optimal configuration after benchmarking all possible combinations of convolution implementations and layer sizes. 
In contrast, an online approach traces the execution time during the actual computation, cycling over the different alternatives, to make a decision after sufficient performance data are collected [18][19][20]. This technique requires the repeated application of an algorithm with the same parameter set, a condition that is met by the "iterative" nature of DNNs training. The selection of a proper convolution algorithm has a major impact on DNN training performance as shown in [6] for GPUs. Popular toolkits, such as cuDnn (up to version 7) and openvino, employ heuristics to predict which implementation will be faster given the specific set of parameters of the convolutions at hand. Offline selection methods have appeared in recent literature [7,8], but even though the majority of neural network toolkits have provisions for benchmarking, as far as we know the latest version of cuDnn is the only one to provide a run-time selector for alternative convolution implementations. The BestOf online selector presented in this work differs from other state-of-theart alternatives in the following aspects: (i) it is implemented as a Python module, allowing an easy integration into the PyDTNN DNN training/inference framework, which is developed in the same programming language; (ii) it presents a very simple interface that permits making selections by simply replacing the actual calls in the original source code with calls to BestOf instances; (iii) it supports grouped selections and can automatically manage and discover nested selections for recursive decisions; and (iv) it is open source. Unfortunately, as we have not found comparable online selection tools that could be easily applied to our target application, it was not possible to experimentally compare BestOf with other solutions. BestOf: An Online Implementation Selector In this section, we present BestOf, an online auto-tuner developed in Python that is able to automatically execute a set of alternative algorithms and eventually select, after a given number of rounds, the best performing option for each problem type. The selections made by BestOf occur at run time according to the execution time data gathered from previous executions of the considered routine/algorithm for each problem size. Application programming interface The BestOf API is detailed in the example shown in Listing 1, where we have declared a BestOf object for selecting the best implementation for the transposing of a Numpy 4D array. There, the constructor receives the transposition alternatives as a list of pairs, where each pair is formed by a name and a pointer to the function that should be called when that alternative is selected. BestOf requires all the alternatives to receive the same parameters in the same order. However, if this was not the case, it could easily be solved by wrapping the non-conformant functions. In the example, the first two alternatives of the operation are developed in Cython and present different loop orderings for transposing the tensor dimensions, while the last invokes the native transpose routine from Numpy. The constructor also requires a pointer to a function that returns the problem size as a hashable object (get_problem_size parameter in Line 8). For the case of the transpose, this parameter corresponds to the array shape and is key to enabling BestOf to identify all the transpose calls that share the same problem size. 
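Listing 1 itself is not reproduced in this excerpt; the sketch below only approximates the usage it describes. The import path is an assumption, and the two "loop ordering" variants are replaced by equivalent NumPy stand-ins, since the Cython code is not shown.

```python
# Approximate, hypothetical reconstruction of a Listing-1-style declaration.
import numpy as np
from best_of import BestOf   # assumed import path; the real module name may differ

def transpose_0231_numpy(a):
    # native NumPy alternative
    return a.transpose(0, 2, 3, 1).copy()

def transpose_0231_moveaxis(a):
    # stand-in for one of the Cython loop orderings mentioned in the text
    return np.ascontiguousarray(np.moveaxis(a, 1, 3))

def get_problem_size(a):
    # hashable object that identifies all calls sharing the same problem size
    return a.shape

transpose_0231 = BestOf(
    alternatives=[("moveaxis", transpose_0231_moveaxis),
                  ("numpy", transpose_0231_numpy)],
    get_problem_size=get_problem_size)

x = np.random.rand(64, 3, 32, 32).astype(np.float32)
y = transpose_0231(x)   # times the evaluated alternative for arrays of this shape
```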
Other parameters of the constructor are the rounds value, which specifies the number of times that all the alternatives have to be executed until a decision is made (see Line 9); and the pruning_speedup factor, which aims to accelerate the decisionmaking by pruning those alternatives that are slower than any other by the specified factor. The pruning is performed only after all the alternatives have been evaluated a minimum given number of rounds, according to the prune_after_round parameter (see Lines 10-11). Finally, the example also shows how the created BestOf object can be called on to perform the transpose (see Lines 13 and 14), while it will silently evaluate the alternative used on that occasion (or will call on the selected alternative if a decision has already been made for that problem size). Internals The BestOf auto-tuner is defined as a Python class implementing the constructor, a series of auxiliary member methods, and the __call__ method, which permits calling the instantiated object as if it were a function. In fact, the __call__ method is the function in charge of measuring the execution times and making a selection of the best performing alternative when appropriate. This procedure is repeated until a specific number of rounds is reached. At that point, BestOf selects the alternative that delivers on average the best performance. Functionality The BestOf auto-tuner is characterised by supporting the following two features: grouping and nesting. Grouping One of the requirements for using this auto-tuner is that all the alternatives should work interchangeably, that is, they should not present any side effects or dependencies among them. In some cases, however, the alternatives may perform a series of optimisations that assume the state left from a previously called function. A practical example is the use of the im2row transform in the forward and backward propagation methods in a convolution layer. As the same computed im2row transformation is used in both methods, the forward method stores it in a temporary variable, so that the backward method does not need to re-compute it. This optimisation trades memory for execution speed and forces the use of the same algorithm for both the forward and backward phases. To tackle such dependencies, BestOf can evaluate grouped implementations, which consists of a set of algorithms that have to be executed in conjunction. Listing 2 declares a BestOf object for selecting the best group of alternatives for executing the forward and backward phases of a convolutional layer using either: (i) im2row+gemm; (ii) convgemm; or (iii) the Winograd algorithm. To leverage this feature, each of the alternatives in the list has to be defined as a tuple containing the name given to that group and a list with the function pointers that constitute the group. Internally, BestOf keeps track of the execution times for each group and problem size, eventually executing the best performing group after completing the number of rounds specified. Note that, the problem size in the example has to be defined to include the input parameters of all functions in the group. In this case, we use a tuple that combines the shapes of the input and weight tensors, which serves to univocally determine the problem size in the group. From the user's perspective, calling on a BestOf object that uses the grouping feature requires passing an index for identifying which function of the group has to be executed in the user's code (see Lines 12-13 in Listing 2). 
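The grouped declaration of Listing 2 is likewise not shown here; the sketch below mirrors the description above with placeholder convolution kernels. The import path and the exact keyword names are assumptions, and only rounds, pruning_speedup and prune_after_round follow the parameter names given in the text.

```python
# Hedged sketch of a Listing-2-style grouped selection: the forward and
# backward implementations of a convolutional layer must be chosen together.
import numpy as np
from best_of import BestOf   # hypothetical import path

def fwd_i2r(x, w):  return np.zeros(1)   # stand-in: im2row + gemm forward
def bwd_i2r(dy, w): return np.zeros(1)   # stand-in: im2row + gemm backward
def fwd_cg(x, w):   return np.zeros(1)   # stand-in: convgemm forward
def bwd_cg(dy, w):  return np.zeros(1)   # stand-in: convgemm backward
def fwd_wg(x, w):   return np.zeros(1)   # stand-in: Winograd forward
def bwd_wg(dy, w):  return np.zeros(1)   # stand-in: Winograd backward

def conv_problem_size(t, w):
    # combines the shapes of the input and weight tensors, as described above
    return (t.shape, w.shape)

conv_fwd_bwd = BestOf(
    alternatives=[("i2r+gemm", [fwd_i2r, bwd_i2r]),
                  ("convgemm", [fwd_cg, bwd_cg]),
                  ("winograd", [fwd_wg, bwd_wg])],
    get_problem_size=conv_problem_size,
    rounds=10, pruning_speedup=2.0, prune_after_round=4)

x  = np.zeros((64, 3, 32, 32), dtype=np.float32)
w  = np.zeros((16, 3, 3, 3), dtype=np.float32)
dy = np.zeros((64, 16, 30, 30), dtype=np.float32)
y  = conv_fwd_bwd(0, x, w)    # index 0 selects the forward member of the group
dx = conv_fwd_bwd(1, dy, w)   # index 1 selects the backward member
```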
Nesting The second feature of this auto-tuner is the support for making nested selections, i.e. an alternate implementation that internally contains other calls to BestOf objects. In such cases, the selection proceeds by exploring the different branches of a decision tree that is evaluated at run time. To build the decision tree, the auto-tuner uses the traceback Python module, which reports the function calls made in the code at a specific point by retrieving the stack. Using the stack frames, BestOf checks whether the current object has been invoked from another instance in a previous frame and, in such cases, registers it as the parent. To select the best-performing branch in the tree, the auto-tuner makes decisions from the leaves to the root nodes. This is because each node is required to know the selection made in all its children prior to measuring the execution time of its alternatives. For this, the implementation of BestOf delays the evaluation of the parents until all their children have determined their best alternative. A practical example is shown in Fig. 1. In this case, the im2col forward version is among the forward options being evaluated by a BestOf instance. When the im2col forward version is invoked, a 4D transposition must be performed. As there are different possible implementations for this transposition, an additional BestOf instance will evaluate which implementation is faster. While the different transpositions of a given size are being compared, the corresponding BestOf parent will be locked, i.e. it will pause its own time comparisons. Another example is when the Winograd algorithm is among the different forward alternatives being evaluated by BestOf . In this case, as the Winograd algorithm also selects among different variants, BestOf will automatically discover and manage these nested selections. Apart from the aforementioned functionalities, the auto-tuner also provides, as a result, the collected performance metrics and the associated decision trees, which can be analysed postmortem by users to gain insights into the best performing implementations in different problem sizes. Integration in PyDTNN In this section, we briefly describe PyDTNN, a framework for distributed training and inference of DNNs, as the BestOf auto-tuner has been integrated into it. Next, we list the operations in PyDTNN that offer different implementation alternatives in order to leverage our implementation selector. Overview of the DL framework. PyDTNN 1 is a lightweight framework for distributed training of DNNs on clusters of computers that has been designed as a research-oriented tool with a low learning curve. PyDTNN presents the following appealing properties: -Flexible PyDTNN regards extensibility (and, to a certain extent, simplicity) as a first-class citizen to allow users to customise the framework to prototype research ideas. -Ample functionality PyDTNN covers DL training and inference for a significant part of the most common DNN models: multi-layer perceptrons (MLPs), convolutional neural networks (CNNs), and transformers for natural language processing. In practice, PyDTNN provides training and validation accuracies on par with those attained by Google's TensorFlow [9]. -High performance PyDTNN exploits data parallelism [21], relying on specialised message-passing libraries for efficient communication, and kernels from high performance multi-threaded libraries for the major computational operations in CPUs and GPUs. 
While PyDTNN lacks the level of maturity and the complete functionality of production-level frameworks, such as TensorFlow or PyTorch, we believe that PyDTNN offers a more accessible and easier-to-customise solution for the efficient training and inference of DNN models. All these reasons have motivated us to accommodate BestOf within this framework and to evaluate the performance gains that can be obtained in both the training and inference stages of DL models while selecting the best alternative for the three previous operations.

Experimental Results

In this section, we evaluate the performance of the BestOf auto-tuner within PyDTNN against the static selection of the different algorithms previously described. In particular, we evaluate the training and inference phases of the VGG16 model using different configurations of threads, datasets, and multi-core architectures, and the inference phase of the ResNet34 model. For that, we measure the overall training and inference throughput of PyDTNN with each statically selected variant and with BestOf using the platforms listed in Table 1. The selected parameters for running the experiments in PyDTNN are shown in Table 2. The per-layer evaluation analyses the time spent by PyDTNN on the convolutional layers appearing in the VGG16 [22] and ResNet34 [23] models for the different convolution algorithms shown in Table 3. Note that, as all filters in the VGG16 model and part of those in the ResNet34 model are of dimension 3 × 3, each time the Winograd alternative is called to perform the corresponding convolution, BestOf will also evaluate the two possible Winograd variants that can be applied for this filter size. Figure 2 reports the throughput obtained by the different convolutional algorithms and the BestOf auto-tuner using 1, 4, 8, and 12 threads on Altec and voltA. The results show that the im2row transform followed by a gemm is consistently the best option in all cases. Even in this scenario, BestOf achieves the same performance when using the CIFAR-10 dataset (see the left-hand side plots), and nearly the same performance as the best option in the case of ImageNet. Note that these results have been obtained by training during a single epoch with 120 steps. Under a more realistic scenario, i.e. over 40 epochs with more than 240 steps per epoch, the BestOf overhead due to the evaluation of non-optimal variants would be mostly diluted. Figure 3 shows, for each VGG16 convolutional layer, the average time of the different forward-backward algorithms. For simplicity, the convolution layers of VGG16 that use the same input and kernel sizes are grouped in the plot. As reported there, the im2row transform followed by a gemm achieves the best performance in nearly all the layers for any combination of dataset and platform. Nevertheless, it is interesting to note that the relative performance among the different algorithms varies depending on the target architecture. Figure 4 depicts the throughput obtained by the different convolutional algorithms and the BestOf alternative when using 1, 4, 8, and 12 threads. The best algorithm for inference depends on the number of threads, the dataset, and the node architecture. This behaviour differs from that observed in the training scenario. For example, the best option for VGG16 and 8 threads on Altec is the Winograd algorithm, while for 12 threads, im2row+gemm is the best option.
Likewise, the preferred option for VGG16 and 8 threads on Altec is the Winograd algorithm, while for the same scenario on voltA, convgemm is the best alternative. It is worth noting that BestOf not only achieves the performance of the best algorithm in each case, but also outperforms the other algorithms in all scenarios. This is because BestOf does not select the same algorithm for all the VGG16 layers. Figure 5 shows the same information as the previous figure, but for the ResNet34 model. As can be observed, the results are similar to those for VGG16: the best choice on Altec in all cases but one corresponds to the Winograd algorithm, and the best alternative on voltA in all cases is the convgemm algorithm. Figure 6 shows, for each VGG16 convolutional layer, the average time of the distinct forward algorithms with 12 threads. As shown, the best forward algorithm depends on the layer (or problem size), the dataset, and the target node architecture. The same effect can be observed when the ResNet34 model is used (see Fig. 7).

Evolution of the training and inference performance

To gain insights into the behaviour of BestOf, we have also analysed its throughput over time. Figure 8 depicts the performance evolution over time of the different algorithms for training and inference with VGG16 using 12 threads on Altec. As expected, all the PyDTNN variants performing a static selection perform quite uniformly during their entire execution. In contrast, the BestOf variant starts at a given performance that steadily increases until the best alternative is identified. For the training experiment, the achieved performance is similar to that of the im2row+gemm variant, while for the inference scenario, the BestOf selection outperforms all other variants, as it individually selects the best algorithm for each VGG16 layer.

In this work, we have presented BestOf, a novel online implementation selector that is capable of selecting at run time, among different alternatives, the best performing one. Two important features of BestOf are the ability to evaluate groups of alternatives as a whole and to make nested decisions. The experimental results on the VGG16 and ResNet34 models demonstrate that our auto-tuner is able to improve the overall training and inference times when different algorithms are used to process the convolutional layers. We also observed that, when the preferred algorithm depends on the target architecture, BestOf easily identifies the best alternative, avoiding manual efforts to profile each available alternative. With this in mind, we can conclude that the benefits of the BestOf auto-tuner highly compensate for the negligible costs in terms of overheads and lines of code that have to be introduced in the original application. As part of future work, we plan to apply the auto-tuner to other fields in order to test its applicability and to find out which additional requirements should be incorporated. As part of this effort, we plan to implement the possibility of retrieving the decisions made in a previous execution so that BestOf could be useful even for short-lived applications.
Taming graphs with no large creatures and skinny ladders

We confirm a conjecture of Gartland and Lokshtanov [arXiv:2007.08761]: if for a hereditary graph class $\mathcal{G}$ there exists a constant $k$ such that no member of $\mathcal{G}$ contains a $k$-creature as an induced subgraph or a $k$-skinny-ladder as an induced minor, then there exists a polynomial $p$ such that every $G \in \mathcal{G}$ contains at most $p(|V(G)|)$ minimal separators. By a result of Fomin, Todinca, and Villanger [SIAM J. Comput. 2015] the latter entails the existence of polynomial-time algorithms for Maximum Weight Independent Set, Feedback Vertex Set and many other problems, when restricted to an input graph from $\mathcal{G}$. Furthermore, as shown by Gartland and Lokshtanov, our result implies a full dichotomy of hereditary graph classes defined by a finite set of forbidden induced subgraphs into tame (admitting a polynomial bound on the number of minimal separators) and feral (containing infinitely many graphs with an exponential number of minimal separators).

Introduction

For a graph G, a set S ⊆ V(G) is a minimal separator if there are at least two connected components A, B of G − S with N(A) = N(B) = S (so that S is an inclusion-wise minimal set that separates a vertex of A from a vertex of B). Around the year 2000, Bouchitté and Todinca presented a theory of minimal separators and related objects called potential maximal cliques and showed their usefulness for providing efficient algorithms [2]. In particular, the Maximum Weight Independent Set problem (given a vertex-weighted graph, find a subset of pairwise nonadjacent vertices of maximum total weight) can be solved in time bounded polynomially in the size and the number of minimal separators of the graph. This result has been generalized by Fomin, Todinca, and Villanger to a large range of problems that can be defined as finding an induced subgraph of constant treewidth with some CMSO_2-expressible property [3]; this includes, for example, Longest Induced Path or Maximum Induced Forest, which is by complementation equivalent to Feedback Vertex Set. When do these meta-algorithmic results give efficient algorithms? In other words, which restrictions on graphs guarantee a small number of minimal separators? On the one hand, it is easy to see that an n-vertex chordal graph has O(n) minimal separators. On the other hand, consider the following two negative examples. For k ≥ 3, the (k, 1)-prism consists of two k-vertex cliques with vertex sets X = {x_1, ..., x_k} and Y = {y_1, ..., y_k} and a perfect matching {x_i y_i | i ∈ [k]}. It is easy to see that the (k, 1)-prism has 2^k − 2 minimal separators: any choice of one endpoint of each edge x_i y_i gives a minimal separator, except for the choices X and Y. The (k, 3)-theta consists of k independent edges {x_i y_i | i ∈ [k]}, a vertex x adjacent to all vertices x_i, and a vertex y adjacent to all vertices y_i (the intuition behind the notation is that the graph consists of k paths of length 3 joining x and y). Again, any choice of one endpoint of each edge x_i y_i gives a minimal separator. Thus, both the (k, 1)-prism and the (k, 3)-theta have an exponential (in the number of vertices) number of minimal separators. In 2019, Milanič and Pivač initiated a systematic study of the question which graph classes admit a small bound on the number of minimal separators of their members [5, 6]. A graph class G is tame if there exists a polynomial p_G such that for every G ∈ G the number of minimal separators of G is bounded by p_G(|V(G)|).
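The 2^k − 2 count for the (k, 1)-prism discussed above can be checked mechanically for small k. The brute-force sketch below is purely illustrative and not part of the paper; it is exponential in the number of vertices and uses exactly the characterisation of a minimal separator given above.

```python
# Brute-force sanity check: S is a minimal separator iff G - S has at least
# two components C with N(C) = S. Only suitable for toy-sized graphs.
from itertools import combinations

def components(vertices, adj):
    """Connected components of the subgraph induced by `vertices`."""
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] & (vertices - comp))
        seen |= comp
        comps.append(comp)
    return comps

def minimal_separators(n, adj):
    V = set(range(n))
    seps = []
    for r in range(1, n - 1):
        for S in map(set, combinations(V, r)):
            full = [C for C in components(V - S, adj)
                    if set().union(*(adj[v] for v in C)) - C == S]
            if len(full) >= 2:
                seps.append(frozenset(S))
    return seps

def prism(k):
    """(k,1)-prism: vertices 0..k-1 form clique X, k..2k-1 form clique Y,
    plus the perfect matching x_i y_i."""
    adj = {v: set() for v in range(2 * k)}
    for i in range(k):
        adj[i].add(k + i); adj[k + i].add(i)                  # matching edge x_i y_i
        for j in range(i + 1, k):
            adj[i].add(j); adj[j].add(i)                      # clique X
            adj[k + i].add(k + j); adj[k + j].add(k + i)      # clique Y
    return adj

k = 3
print(len(minimal_separators(2 * k, prism(k))), 2 ** k - 2)   # prints: 6 6
```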
Clearly, if G is tame, then Maximum Weight Independent Set and all problems captured by the formalism of [3] are solvable in polynomial time when the input graph comes from G. On the opposite side of the spectrum, G is feral if there exists c > 1 such that for infinitely many graphs G ∈ G it holds that G has at least c^{|V(G)|} minimal separators. Following the previous examples, the class of chordal graphs is tame, while the class of all (k, 1)-prisms and/or all (k, 3)-thetas (over all k) is feral. Milanič and Pivač provided a full tame/feral dichotomy for hereditary graph classes (i.e., closed under vertex deletion) defined by minimal forbidden induced subgraphs on at most 4 vertices [5, 6]. A subsequent work of Abrishami, Chudnovsky, Dibek, Thomassé, Trotignon, and Vušković [1] indicated that the main line of distinction between tame and feral graph classes should lie around the notion of a k-creature. A k-creature in a graph G is a tuple (A, B, X, Y) of pairwise disjoint nonempty vertex sets such that (i) A and B are connected, (ii) A is anti-adjacent to Y ∪ B and B is anti-adjacent to A ∪ X, (iii) every x ∈ X has a neighbor in A and every y ∈ Y has a neighbor in B; and (iv) |X| = |Y| = k and X and Y can be enumerated as X = {x_1, ..., x_k}, Y = {y_1, ..., y_k} such that x_i y_j ∈ E(G) if and only if i = j. We say that G is k-creature-free if G does not contain a k-creature as an induced subgraph. As in the examples of the (k, 1)-prism and the (k, 3)-theta, any choice of one endpoint of every edge x_i y_i gives a minimal separator in the subgraph induced by the creature (which, in turn, can be easily lifted to a minimal separator in G). Hence, if G contains a k-creature as an induced subgraph, it contains at least 2^k minimal separators. In fact, the notion of a k-creature is a common generalization of the examples of the (k, 1)-prism and the (k, 3)-theta. Indeed, the (k, 3)-theta contains a k-creature with A = {x} and B = {y}, while the (k, 1)-prism contains a (k − 2)-creature with A = {x_{k−1}}, B = {y_k}, X = {x_1, ..., x_{k−2}}, and Y = {y_1, ..., y_{k−2}}. In particular, Abrishami et al. conjectured that if for a hereditary graph class G there exists k such that no G ∈ G contains a k-creature as an induced subgraph, then G is tame. (Observe that a presence of arbitrarily large creatures in a hereditary graph class does not immediately imply that the graph class is feral, as the sets A and B can be of superpolynomial size in k.) A counterexample to the conjecture of [1] has been provided by Gartland and Lokshtanov in the form of a k-twisted ladder [4]. They observed that, despite the fact that the conjecture of [1] is false, every example they can construct "looks like a twisted ladder", which indicates that the tame/feral boundary for hereditary graph classes should not be far from the said conjecture. To support this intuition, they introduced the notion of a k-skinny-ladder (a graph consisting of two induced anti-adjacent paths P = (p_1, ..., p_k) and Q = (q_1, ..., q_k) and an independent set R = {r_1, ..., r_k}, with each r_i adjacent to p_i and q_i), noted that a k-skinny-ladder is an induced minor of every counterexample they constructed, and proved the following.

Theorem 1. For every k there exists a constant c_k such that if a graph G is k-creature-free and does not contain a k-skinny-ladder as an induced minor, then the number of minimal separators in G is bounded by c_k · |V(G)|^{c_k log |V(G)|}, that is, quasi-polynomially in the size of G.
Gartland and Lokshtanov conjectured that this dependency should in fact be polynomial. The main result of this paper is a proof of this conjecture.

Theorem 2. For every k ∈ N there exists a polynomial q of degree O(k^3 · (8k^2)^{k+2}) such that every graph G that is k-creature-free and does not contain a k-skinny-ladder as an induced minor contains at most q(|V(G)|) minimal separators.

That is, every hereditary graph class G for which there exists k such that no member of G contains a k-creature nor a k-skinny-ladder as an induced minor is tame. As proven in [4], Theorem 2 implies a dichotomy into tame and feral graph classes for all hereditary graph classes defined by a finite list of forbidden induced subgraphs. (For the exact definitions of the graphs in the statement, we refer to [4].)

Theorem 3. Let G be a graph class defined by a finite number of forbidden induced subgraphs. If there exists a natural number k such that G does not contain all k-theta, k-prism, k-pyramid, k-ladder-theta, k-ladder-prism, k-claw, and k-paw graphs, then G is tame. Otherwise G is feral.

Our proof builds upon the proof of Theorem 1 of [4] and provides a new way of analysing one of the core invariants. For a graph G and a set S, define ζ_G(S) = max{|I| : I ⊆ S is an independent set and for every v ∉ S we have |N(v) ∩ I| ≤ 1}. That is, we want a set I ⊆ S of maximum possible size that is not only independent, but also such that no vertex outside S is adjacent to more than one vertex of I. In the proof of Theorem 1 of [4], an important step is to prove that a minimal separator S with huge ζ_G(S) gives rise to a large skinny ladder as an induced minor. Our main technical contribution is an improved way of analysing minimal separators S with small ζ_G(S).

Theorem 4. For every k, L ∈ N there exists a polynomial p of degree O(k^3 · L) such that the following holds. For every k-creature-free graph G, the number of minimal separators S satisfying ζ_G(S) ≤ L is at most p(|V(G)|).

After brief preliminaries in Section 2, we prove Theorem 4 in Section 3. We show how Theorem 4 implies Theorem 2 (with the help of some tools from [4]) in Section 4.

Preliminaries

Let G be a graph, v a vertex of G, and S a subset of vertices. By N_G(v) we denote the set of neighbors of v. Similarly, by N_G(S) we denote the set ⋃_{x∈S} N_G(x) \ S. If the graph G is clear from the context, we simply write N(v) and N(S). For sets A, B, C, whenever we write A \ B \ C, the set difference operation associates from the left, meaning that A \ B \ C is equivalent to (A \ B) \ C (and, alternatively, to A \ (B ∪ C)). By G − S we denote the graph obtained from G by deleting all vertices of S along with incident edges, and by G[S] we denote the graph induced by the set S, i.e., G − (V(G) \ S). By CC(G) we denote the set of connected components of G, given as vertex sets. A matching in G is a set of pairwise disjoint edges. For vertices u, v, a set S ⊆ V(G) \ {u, v} is a u-v-separator if u and v are in different connected components of G − S. We say that S is a minimal u-v-separator if it is a u-v-separator and no proper subset of S is a u-v-separator. A set S is a minimal separator if it is a minimal u-v-separator for some u, v. Equivalently, S is a minimal separator if there are at least two components A, B ∈ CC(G − S) such that N(A) = N(B) = S. Any component A ∈ CC(G − S) with N(A) = S is called full to S; a minimal separator has at least two full components. For a vertex v, we define S^v_G = {N(v) ∩ S : S is a minimal separator of G and v ∉ S}.
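For intuition, ζ_G(S) can be evaluated by brute force on small instances; the helper below reuses the adjacency-dictionary convention of the earlier sketch and is only illustrative, as the enumeration is exponential in |S|.

```python
# zeta_G(S): the largest I ⊆ S that is independent and such that no vertex
# outside S has more than one neighbour in I (brute force over subsets of S).
from itertools import combinations

def zeta(adj, S):
    V, S = set(adj), set(S)
    for r in range(len(S), 0, -1):                     # try large candidates first
        for I in map(set, combinations(S, r)):
            independent = all(u not in adj[v] for u in I for v in I if u != v)
            if independent and all(len(adj[v] & I) <= 1 for v in V - S):
                return r
    return 0
```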
The following result of Gartland and Lokshtanov will be a crucial tool in our argument.

Lemma 5 (Gartland and Lokshtanov [4]). If G is a k-creature-free graph, then for every v ∈ V(G) we have |S^v_G| ≤ |V(G)|^{k+1}.

Let us also recall the crucial definition. For a set S ⊆ V(G) we define ζ_G(S) = max{|I| : I ⊆ S is an independent set and for every v ∉ S we have |N(v) ∩ I| ≤ 1}.

Proof of Theorem 4

We prove the theorem by induction on L with the exact bound of n^{L(4+(k^2+2)(k+2))} minimal separators. Note that if S ≠ ∅, then ζ_G(S) ≥ 1, since for any u ∈ S, the set I = {u} satisfies the required properties. Thus, in the base case, when L = 0, the only candidate for S is the empty set, and therefore the claim holds vacuously. Also, the claim is immediate for n = 1, so we assume n > 1. Let S be a minimal separator of G, and let A and B be two connected components of G − S that are full to S. If there is a vertex v ∈ V(G) \ S such that N(v) ⊇ S, then S ∈ S^v_G. There are at most n^{k+2} such separators S by Lemma 5; we may therefore assume that no such vertex exists. Let B̂ be a minimal connected subset of B that still dominates S, i.e., such that N(B̂) ⊇ S. Let u ∈ B̂ be such that B̂ \ {u} is still connected. Such a vertex u can be found, for instance, as a leaf of a spanning tree of B̂. We define the following sets that will be important throughout the proof (see Figure 1): [definitions of S_u, S_v, S_A, and S_B, involving N(B̂ \ {u}), missing in this excerpt]. In words, v is a private neighbor (with respect to B̂) of u in S. Such a vertex v exists by the minimality of B̂. Our goal is now to identify a small set that dominates S* = S_u ∪ S_v ∪ S_A ∪ S_B. We will repeatedly use Lemma 5 on the vertices of this set in order to bound the number of choices for S*. We then show that we can find a minimal separator S_0 in S \ S* such that A is a full component in G − (S* ∪ S_0) and there is a component containing B̂ \ {u}. We will be able to show that ζ_{G−S*}(S_0) < ζ_G(S), which allows us to conclude using the induction hypothesis on G − S*. Let Z = {u} ∪ Z_A ∪ ⋃_{D∈D} Z_D, where Z_A, D, and Z_D for D ∈ D are as defined in Claims 1 to 3, respectively. For all z ∈ Z, let Q_z = N(z) ∩ S. Let Q = ⋃_{z∈Z} Q_z. Note that Q contains S_u since u ∈ Z, that Q contains S_A since Z_A ⊆ Z, and that Q contains S_B; the latter is due to the fact that the vertices in [...].

Claim 4. There are at most n^{k^2(k+2)} choices for Q, and at most n^{k+2} choices for R.

Proof of Claim. We already observed the second statement of the claim above. For the first statement, by Claims 1 to 3 we know that |Z| < k^2, so there are at most n^{k^2} choices for Z. For each z ∈ Z, Q_z ∈ S^z_G, so by Lemma 5 there are at most n^{k+1} choices for each Q_z, hence at most n^{k^2(k+1)} choices for all of them, and therefore at most n^{k^2(k+2)} choices for Q.

[...] We conclude that S_0 is a minimal separator of G_0, with A and B_0 being connected components of G_0 − S_0 that are full to S_0. We now show that we can use the induction hypothesis to bound the number of choices for S_0. [...] Let I_0 ⊆ S_0 be an independent set such that for all y ∈ V(G_0) \ S_0, |N_{G_0}(y) ∩ I_0| ≤ 1. Let I = I_0 ∪ {v}; I is still an independent set since S_0 ⊆ S \ N_G(v). We argue that for all y ∈ V(G) \ S, |N_G(y) ∩ I| ≤ 1. Suppose that y ∈ N_G(v). Since S_0 ∩ (S_A ∪ S_B) = ∅, we have that N_G(y) ∩ S_0 = ∅ and therefore |N_G(y) ∩ I| = 1. We may now assume that y ∉ N_G(v). Suppose that |N_G(y) ∩ I| > 1. Since y ∉ N_G(v), we conclude that y ∉ V(G_0) \ S_0, as otherwise y would have at least two neighbors in I_0, a contradiction with the choice of I_0 in S_0 in the graph G_0.
This means that y ∈ R \ S, and therefore y ∈ N_G(v) ∩ B, which is a contradiction with our assumption that y ∉ N_G(v). This completes the proof.

Wrapping up the proof of Theorem 2

To conclude the proof of Theorem 2, we observe that the following statement essentially follows from the combination of Lemma 9 and the proof of Lemma 15 of [4].

Lemma 6 ([4]). If G is a k-creature-free graph that contains a minimal separator S with ζ_G(S) > (8k^2)^{k+2}, then G contains a k-skinny-ladder as an induced minor.

Proof (sketch). Let G and S be as in the lemma statement. Let I_0 ⊆ S be an independent set of size ζ_G(S) such that no vertex v ∈ V(G) \ S is adjacent to more than one vertex of I_0. Let L_0 and R_0 be two full sides of S. Lemma 9 of [4] asserts that there exist an induced path L in L_0, an induced path R in R_0, and a set I ⊆ I_0 of size at least |I_0|/k^2 > (8k^2)^{k+1} such that L dominates I and R dominates I. This is exactly the situation at the end of the first paragraph of the proof of Lemma 15 of [4]. A careful inspection of that proof shows that the remainder of the proof (as well as the invoked Lemmata 8, 13 and 14) does not use other assumptions of Lemma 15. Hence, we obtain the conclusion: a k-skinny-ladder as an induced minor of G.

By combining Theorem 4 and Lemma 6, we obtain Theorem 2.

Conclusion

In Theorem 2 we showed that if a graph class G excludes k-creatures as induced subgraphs and k-skinny-ladders as induced minors, then G is tame. However, note that while k-creatures have an exponential (in k) number of minimal separators, this is not the case for k-skinny-ladders: the class of k-skinny-ladders (over all k) is tame. Thus the reverse of the implication in Theorem 2 does not hold. Observe that the full tame/feral dichotomy for arbitrary hereditary graph classes is simply false due to some very obscure examples. Let H_k be the (k, 2^k + 1)-theta graph: k paths of length 2^k + 1 with common endpoints. Note that H_k has 2^{k^2} + 2^{O(k)} minimal separators (2^{k^2} of them choose one internal vertex on each path) and k·2^k + 2 vertices, so the number of minimal separators of H_k is around |V(H_k)|^{log |V(H_k)|}. Hence, the hereditary class of all induced subgraphs of all graphs H_k for k ∈ N is neither tame nor feral. However, it is still interesting to try to obtain a tighter classification between tame and feral graph classes for some more "well-behaved" hereditary graph classes. As discussed in Conjecture 4 of [4], a good restriction that excludes artificial examples as in the previous paragraph is to focus on induced-minor-closed graph classes.
Performance of Elite Genotypes of Tomato (Solanum lycopersicum Mill) for Yield and Quality Traits under Hisar Condition, Haryana, India

The experiment was carried out at the research farm of the Department of Vegetable Science, CCS Haryana Agricultural University, Hisar, during the spring-summer season of 2014-15 to study the performance of twenty-three tomato genotypes for yield and quality traits. Among all the genotypes, AVT-1-2 had the greatest plant height (140.33 cm), and the maximum number of branches per plant was observed in AVT-2-4 (7.60). The highest number of flowers per cluster was recorded in genotype AVT-2-2 (6.67). The genotype Hisar Arun had the highest number of fruits per plant (38.33) and AVT-2-7 the maximum number of fruits per truss (4.23). The maximum fruit yield per plant was recorded in genotype DVRT-3 (1540.00 g). The maximum polar and equatorial diameters of fruit were recorded in genotypes AVT-2-6 (5.10 cm) and Punjab Kesari (6.24 cm), respectively. The maximum number of locules was registered in genotype DVRT-3 (6.20) and the maximum fruit weight in H-86 (64.03 g). The genotype PKM-1 had the highest TSS (8.43 °Brix) and the highest acidity was recorded in H-86 (0.90%). The fewest days to ripening were taken by genotype Hisar Arun (79.00).

Introduction

Tomato (Lycopersicon esculentum Mill.), a member of the Solanaceae family, is native to the Andean region that includes parts of Colombia, Ecuador, Peru, Bolivia and Chile (Rick 1973, Taylor 1986). It is one of the most popular vegetable crops grown widely all over the world, as it is a very versatile vegetable, ranking second in importance to potato in many countries. It is often referred to as a luxury crop because of its high consumption rate in developed and developing countries. In England, it is popularly known as Love Apple and is grown in home, market and truck gardens. Whatever the early history of its cultivation, the popularity of tomato has increased rapidly from the middle of the nineteenth century to the present time. It is also a forcing crop, grown in greenhouses in the off-season, and has thus become a good source of income for small and marginal farmers. In India, it ranks second among vegetables in area and production, occupying an area of 0.88 million hectares with a production of 18.70 million tonnes and an average yield of 21.2 tonnes per hectare (Anonymous, 2015). In Haryana, its area and production during 2012-2013 were 22606 ha and 38232 tonnes, respectively, representing eight major tomato-growing districts of the state: Karnal, Yamuna Nagar, Mewat, Kurukshetra, Gurgaon, Ambala, Sonipat and Faridabad.
The ripe tomato fruits are consumed fresh as salad or after cooking. A large proportion of the tomato crop is utilized in the preparation of various value-added durable products such as puree, paste, powder, ketchup and sauce. The fully ripened whole fruits are canned, while the green unripe fruits are used for making pickles and chutney. In fact, tomato tops the list of processed vegetables and occupies a distinct place in the realm of vegetables because of its large-scale utilization and high nutritive value, as it supplies lycopene, ascorbic acid and β-carotene (potent antioxidants) and adds colour and flavour; therefore, in many countries, it is considered the poor man's orange (Singh et al., 2004). The production of tomato is highly influenced by environmental factors such as temperature, light, relative humidity and the carbon dioxide level in the atmosphere. Being a warm-season crop that is reasonably resistant to heat and drought, it can be grown under a wide range of soil and temperature conditions, but the optimum temperature range for high yield is 20 to 24ºC, and there should be a 5 to 8ºC difference between day and night temperature to obtain higher yields. Mean temperatures below 16ºC and above 27ºC are not desirable for its cultivation. Lycopene, responsible for its red colour, is synthesized best in the temperature range of 21 to 24ºC.

Materials and Methods

An experiment was carried out at the Research Farm and Laboratory of the Department of Vegetable Science, CCS Haryana Agricultural University, Hisar, during the spring-summer of 2014-15. Hisar is located at a latitude of 29º 10' North, a longitude of 75º 46' East and an altitude of 215.2 meters above mean sea level, on the south-western border with Rajasthan, at a distance of about 175 km west of the national capital New Delhi, with connectivity to National Highway Number 10. The climate of Hisar is semi-arid and subtropical, with hot and dry winds during the summer months, warm and humid conditions in the monsoon, and cold, dry weather in winter. Both winter and summer are usually harsh. The mean minimum and maximum temperatures exhibit a wide range: maximum temperatures of 44 to 48ºC in summer, and temperatures dipping to freezing point, accompanied by chilling frost, in winter are common. The seeds of twenty-three germplasm lines, including released varieties, were procured from different sources (Table 1). The seedlings were raised in outdoor nursery beds in the field. The seeds of all twenty-three lines, after treatment with Captan @ 2 g/kg, were sown in rows 10 cm apart in the last week of November 2014. After sowing, the beds were covered with fine compost, and water was sprinkled regularly with a fine rose-can. The beds were kept moist until the seedlings emerged. The nursery beds were covered with a transparent polythene sheet to protect the seedlings from frost and cold waves. After germination, proper care was taken to ensure the proper growth of the seedlings in the nursery. Seedlings became ready for transplanting in the last week of January 2015. The experimental design followed for the analysis of data was a randomized block design with three replications. Five plants in each entry were selected randomly and tagged.
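As an illustration of the randomized block design analysis mentioned above, the sketch below fits a two-way (genotype plus replication) ANOVA with statsmodels on synthetic data; the column names, genotype labels and values are invented for the example and do not reproduce the study's measurements.

```python
# Hedged sketch of an RBD ANOVA for one trait on made-up data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
genotypes = [f"G{i}" for i in range(1, 6)]          # placeholder genotype names
rows = [{"genotype": g, "replication": r,
         "plant_height": 90 + 5 * i + rng.normal(0, 3)}
        for i, g in enumerate(genotypes) for r in (1, 2, 3)]
df = pd.DataFrame(rows)

# Genotype as treatment, replication as block (three replications).
model = ols("plant_height ~ C(genotype) + C(replication)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F-tests for genotype and block effects
```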
These tagged plants were used for recording observations on plant height (cm), number of branches per plant, days to 50% flowering, number of flowers per cluster, number of trusses per plant, number of fruits per truss, number of fruits per plant, total fruit yield per plant (g), average fruit weight (g), polar diameter (cm), equatorial diameter (cm), number of locules per fruit, total soluble solids (%), acidity (%), ascorbic acid (mg/100 g) and days to ripening.

Results and Discussion

The analysis of variance indicated a significant amount of variability among the genotypes for all the characters studied, viz., plant height, number of branches per plant, days to 50% flowering, number of flowers per cluster, number of trusses per plant, number of fruits per truss, number of fruits per plant, equatorial diameter of fruit, polar diameter of fruit, number of locules per fruit, total soluble solids, average fruit weight, total fruit yield per plant, ascorbic acid content, acidity and days to ripening (Table 2). The mean performance of the different genotypes for the different characters, together with the grand means, is presented in Table 3.

Plant height (cm)

Plant height ranged from 59.73 to 140.33 cm, with an average of 90.14 cm (Table 3). The maximum plant height was recorded in genotype AVT-1-2 (140.33 cm) and the lowest in Punjab Upma (59.73 cm).

Number of branches per plant

The number of branches per plant ranged from 3.43 to 7.60, with a mean value of 5.25 (Table 3). The maximum number of branches per plant was observed in AVT-2-4 (7.60) and the minimum in . The other varieties with more than six branches per plant were Arka Abha, Arka Sourabh, PKM-1 and Pusa Sadabahar, while the remaining genotypes had 3 to 6 branches per plant.

Days to 50% flowering

A significant difference was recorded among the entries with respect to days to 50% flowering (Table 3). The average number of days taken to 50% flowering was 49.33, with a range from 40.00 to 61.00 days. The minimum number of days to 50% flowering was taken by the genotypes Arka Sourabh and AVT-2-7 (40.00 days), whereas the maximum was taken by the genotype DVRT-2 (61.00 days).

Number of flowers per cluster

The number of flowers per cluster ranged between 3.63 and 6.67, with a mean value of 5.33 (Table 3). The highest value was recorded in genotypes AVT-2-2 and H-86 (6.67) and the lowest in Pusa Gaurav (3.63). The highest number of flowers per cluster was closely followed by AVT-2-3, AVT-2-6, Arka Abha, Arka Meghali, PKM-1 and Pusa Sadabahar.

Number of fruits per plant

A wide variation was found among the genotypes for the number of fruits per plant, which varied significantly from 12.33 to 38.33, with an overall mean of 21.78 (Table 3). The genotype Hisar Arun had the highest number of fruits per plant (38.33) and Arka Sourabh the lowest (12.33). The other genotypes with more than 25 fruits per plant were AVT-2-4, AVT-2-7, Arka Vikas and Punjab Kesari. Twelve genotypes had fewer fruits per plant than the general mean, while the remaining 11 genotypes were above it.

Number of fruits per truss

The number of fruits per truss varied significantly among the genotypes investigated (Table 3), ranging from 1.33 to 4.23 with a mean value of 2.62.
The maximum number of fruits per truss was recorded in genotype AVT-2-7 (4.23) and the minimum in genotype Punjab Upma (1.33). The other genotypes with a good number of fruits per truss were Hisar Arun, Punjab Ratta and DVRT-3. These results are in accordance with those of Ahirwar and Prashad (2013).

Number of trusses per plant

The tomato genotypes studied in the present investigation showed a wide range of variation, from 5.90 to 15.50 trusses per plant, with a mean value of 9.84 trusses (Table 3). The genotype AVT-2-4 (15.50) recorded the highest number of trusses per plant, whereas the genotype AVT-2-1 (5.90) showed the minimum. Among all the genotypes studied, ten genotypes had a number of trusses per plant above the mean value and the remaining thirteen were below it.

Polar diameter of fruit (cm)

There were significant differences among genotypes for polar diameter of fruit, which ranged from 2.33 to 5.10 cm, with a mean value of 3.86 cm (Table 3). The maximum polar diameter of fruit was recorded in genotype AVT-2-6 (5.10 cm) and the minimum in AVT-2-1 (2.33 cm).

Equatorial diameter of fruit (cm)

Significant differences were observed among the genotypes for equatorial diameter of the fruit, which ranged from 3.17 to 6.24 cm, with a mean value of 4.56 cm (Table 3). The maximum equatorial diameter of fruit was recorded in genotype Punjab Kesari (6.24 cm) and the minimum in genotype Hisar Arun (3.17 cm). The other genotypes with equatorial diameters above the mean were AVT-2-6, AVT-2-7, Arka Vikas, Arka Alok, Arka Abha, PKM-1, Pusa Sadabahar, H-86, Punjab Upma and Pusa Gaurav.

Average fruit weight (g)

The average fruit weight varied significantly among the genotypes, ranging from 25.27 to 64.03 g, with a mean value of 45.82 g (Table 3). The maximum fruit weight was recorded in the genotype H-86 (64.03 g) and the minimum in AVT-1-2 (25.27 g). The other genotypes above the mean were AVT-2-1, AVT-2-3, Arka Vikas, Arka Alok, Arka Sourabh, Punjab Upma, Punjab Ratta, Punjab Kesari, Pusa Gaurav and DVRT-3. The remaining 12 genotypes had an average fruit weight below the mean.

Total soluble solids content of fruit (TSS, °Brix)

A significant difference was noticed among genotypes for the total soluble solids content of fruit at the marketable stage. TSS of fruit ranged from 3.53 to 8.43 °Brix, with a mean value of 5.53 °Brix (Table 3). The highest TSS content of fruit was recorded in the genotype PKM-1 (8.43 °Brix), while the genotype Punjab Upma (3.53 °Brix) showed the lowest TSS content. The genotypes that showed TSS greater than the mean value were AVT-2-2, AVT-2-3, AVT-2-4, Arka Vikas, Arka Meghali, Hisar Arun, DVRT-2, DVRT-3 and Pusa Gaurav, whereas the remaining thirteen genotypes had a total soluble solids content of fruit below the general mean. Similar results were also obtained by Dar et al. (2012).

Ascorbic acid (mg/100 g)

A significant difference was observed among the genotypes for the ascorbic acid content of fruit at the marketable stage, which ranged from 19.33 to 28.67 mg/100 g. The general mean for ascorbic acid content of fruit was 23.53 mg/100 g (Table 3). Among the genotypes studied, eleven genotypes had an ascorbic acid content of fruit above the general mean, and the remaining twelve were below it.

Days to ripening

Significant differences were recorded among the entries with respect to days to ripening.
The average number of days taken to ripen was 96.20, with values ranging from 79.00 to 125.33 days. The minimum number of days to ripening was recorded in genotype Hisar Arun (79.00) and the maximum in genotype Pusa Gaurav (125.33).

Total fruit yield per plant (g)

The fruit yield per plant varied significantly among the 23 genotypes evaluated, ranging from 460.00 to 1540.00 g, with a general mean of 980.00 g. The maximum fruit yield per plant was recorded in genotype DVRT-3 (1540.00 g) and the minimum in genotype DVRT-2 (460.00 g). The most promising genotypes, with fruit yields greater than the general mean, were AVT-2-3, AVT-2-4, Arka Vikas, Arka Alok, H-86, Punjab Ratta, Punjab Kesari, Hisar Arun and Pusa Gaurav, whereas the remaining genotypes had yields lower than the general mean. These results are supported by the findings of Sharma and Thakur (2008) and Prajapati et al. (2015).

From the results obtained, it can be concluded that the genotypes studied in the present investigation exhibited a wide range of variation for the various yield and yield-contributing characters observed. The most promising genotypes based on fruit yield and fruit quality were DVRT-3, AVT-2-3, AVT-2-4, Arka Vikas, Arka Alok, H-86, Punjab Ratta, Punjab Kesari, Hisar Arun and Pusa Gaurav, which can be further subjected to selection for desired traits or utilized in different breeding programmes to exploit heterosis.
A Revolutionary Rural Agricultural Participatory Sensing Approach Using Delay Tolerant Networks To provide sustainable digital agro-advisory services to farmers, seamless flow of information from the farmers to the experts/expert systems, and vice versa is required. The query generated by the farmers, which may contain multimedia data regarding disease or pest attack in the crops is required to be transmitted to the experts for analysis. Further, after analyzing the query, an alert or advice from the expert system is required to be communicated back to the farmers within some tolerable delay. However, in a country like India, network connectivity is extremely poor in several agricultural regions which makes the end-to-end connectivity between the farmers and the expert system intermittent. Therefore, providing agro-advisory services to farmers in a reasonable time becomes a challenge. In this paper, we propose a Delay Tolerant Network (DTN) based relay application model which enables agro-advisory services to farmers located in \emph{No-network} or \emph{Poor-network} zones. In the proposed model, end-to-end communication has been enabled with the help of Device to Device (D2D) communication and by introducing mobile relay nodes which can carry the queries (responses), from (to) the poor or no network zones to (from) the zones where communication is possible. Implementation of this model has been presented for the tea farmers of West Bengal and Assam, which can be extended for various other applications in future. I. INTRODUCTION In India, cellular network connectivity is very poor in several villages. It has been observed that rural and tribal folks travel for several kilometers to make a voice call. Data connectivity of 2G/3G is even worse in these areas. From our extensive involvement in Digital Farming initiatives [1], the communication barrier due to the poor cellular network connectivity has been felt in several deployment locations. For instance, in remotely located tea farms and tea estates of Assam, telecommunication network is intermittent and very poor. Further, in the Araku constituency of Andhra Pradesh, which is known for its organic cultivation of coffee, rubber and spices, network is only available in selected pockets. Due to this, farmers of these regions face difficulty to get the relevant advice on their cultivation queries. In the recent past, we have seen increasing applications of the Human Participatory Sensing in urban scenarios [2]- [5] such as traffic control, disaster information flow, epidemic monitoring, etc. In that direction, we have proposed a Rural Participatory Sensing (RuralSense) framework for agricultural applications [6] and have deployed the same for tea crop in West Bengal and Assam. A RuralSense mobile application is provided to small and marginal tea growers of these areas in order to facilitate them to report various geo-tagged events from their farms. In addition, access to the web-based dashboard has also been provided to the experts to analyze the reported events and to advise back to the growers. These events serve multiple purposes such as (a) asking queries to the experts, (b) digitization of the farm diaries, (c) self certification or tracing of the chemical applications and (d) generating contextual data for developing pest/disease/ yield/ water models for the cultivated crops. Fig. 
1 illustrates a model digital farming platform; dotted lines indicate actual/physical communication between the farmers and the expert system either through cellular or any other communication mode. The continuous lines indicate the logical flow of messages between the sensors placed in the farm and the expert system. This framework for connecting farmers with the experts warrants a good communication network, which is the main focus of this paper. To overcome the limitations of poor cellular networks as discussed before, we propose DTN-RuralSense, a novel Delay Tolerant Network (DTN) approach for agriculture applications. Note that, throughout this paper we have used query and event interchangeably. Delay Tolerant Networks [7] has gained the interest of many researchers in the last decade. It is designed for harsh and challenging conditions where continuous Internet connectivity can not be guaranteed. To compensate data loss due to frequent disconnections, a relay node (DTN node) is required to store data packets for long time periods until connection to a forwarding node is established again. The use of this paradigm becomes critical in challenging scenarios like satellite, military, and rural applications. In countries like India, where the lack of infrastructure makes seamless end-to-end connectivity difficult, DTN can be used as an alternative. In this paper, we have introduced a novel DTN-based agroadvisory framework to farmers working in a remote location with limited network connectivity. Using DTN framework, we have proposed a mobile relay-based architecture which can be deployed in remote villages and provide expert advice to farmers' queries in a time bound manner. We have also provided an Android-based deployment framework to realize this framework. The rest of the paper is organized as follows. In Section II, we analyze the related work and in Section III, we explain the problem and our approach towards a practical solution. We then discuss the deployment framework in Section IV, and conclude this paper in Section V. II. RELATED WORKS Participatory Sensing [2]- [4] has gained tremendous growth in the recent past and has created various innovative applications using crowd-sensing and crowd-sourcing technologies. In these technologies, huge data from the crowd workers is generated and processed for appropriate actions. We have explored crowd-sensing in our previous work on Rural Participatory Sensing [6] and Distributed Crowd Sourcing [8]. In [6], events are captured and reported with the help of a mobile application while in [8], tasks are assigned to the farmers to diagnose the plant diseases and pest classification. It is believed that for rural applications, crowd assisted sensing is an important source of data generation and the analytics performed on this big data shall serve the society for the years to come. DTN was initially intended for Inter Planetary Networks (IPNs) [9] with a low network dynamic of satellites and rovers. In the DTN paradigm, communications can be intermittent and hence the nodes need to store the data till connection to a forwarding node is available again. By doing this, data communication can be guaranteed, even though the delay is substantial. In mobile wireless networks, the applications of DTNs have been envisioned for Vehicular Ad-hoc NETworks (VANETs) [10], Under Water Networks (UWNs) [11], Pocket Switched Networks (PSNs) [12], and suburb networks for developing regions [13], etc. 
To provide Internet connectivity in rural India, the Sustainable Access in Rural India (SARI) programme [9] initiated a DTN-like framework in which Internet kiosks were distributed across different villages. However, providing and maintaining kiosks in every village is not feasible. Therefore, another project called Computers on Wheels (COW) [14] was started in 2006, in which motorcycles equipped with Internet hardware act as mobile kiosks and travel to remote villages to collect data from users. In this paper, we explore the feasibility of DTNs in enabling value-added services to rural farmers in a time-bound manner. We use smartphones/mobiles with the mKRISHI® application instead of kiosks and provide DTN-based end-to-end connectivity to the farmers.

III. PROBLEM STATEMENT AND SOLUTION APPROACH
Providing time-bound agro-advisory services to farmers located in remote areas with either no or limited network connectivity is the challenge considered in this paper. The key goal is to remove the communication barriers caused by the unavailability of cellular networks. In our Rural Participatory Sensing (RuralSense) framework, farmers can collect (sense) various farming-related information such as disease, pest, nutrient disorder and other activities. mKRISHI® is a Personalized Services Delivery Platform that enables two-way information exchange between farmers and expert systems, which include virtual knowledge banks, Agriculture Experts and Procurement Officers (PO), etc. At the farmers' end, it uses participatory sensing (RuralSense) to collect and digitize the field data and a mobile-based framework for the data communication. Each event/query generated at the farmers' end may consist of geo-tagged photographs, a voice clip, meta-data, etc., and has a unique Event ID (EID) bound to the User ID (UID), which is assigned by the server. Moreover, it uses an application which can store the captured events/queries in local memory for communication at a later point in time. At the expert system's end, it uses analytic tools and expert advisory services to evaluate the sensed data obtained from the farmers before communicating actionable advice back to them. This enables digitization of the farm and of farming-related events. Note that the availability of this information is extremely critical for decision makers in responding to the farmers' queries and in routing agricultural inputs to the right location at the right time. In poor-network zones, however, farmers find it difficult to communicate this information seamlessly. Therefore, a solution is desired which can be deployed in the mKRISHI® [1] framework such that seamless, time-bounded communication can be realized. Moreover, the new solution should be easy to integrate with the existing mKRISHI® framework, and easy for the farmers to use.

A. Our Solution - a DTN Approach
In this section, we propose a novel solution which can be included in the mKRISHI® framework without major changes at the end points. The aim here is to provide network connectivity to the farmers in an alternative way. To realize this, we propose to use relay-based communication operating on the principle of DTN: an opportunistic mode that becomes critical in challenging scenarios like satellite applications and rural communications in emerging regions such as India or Africa, where the lack of infrastructure makes regular data communications almost impossible.
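To make the event/query structure described above concrete, the following is a minimal sketch of a farmer query as it might be held in the App's local storage until it can be transmitted. It is written in Java (the platform of the Android Apps discussed later); the class name Query and all field names are illustrative assumptions, not identifiers from the actual mKRISHI® code.

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal sketch of a farmer query/event kept in local storage until it can be transmitted. */
public class Query {
    public enum State { STORED, SENT, ACKED, RESPONDED, EXPIRED }

    private final String eventId;         // EID, assigned by the server
    private final String userId;          // UID of the reporting farmer (logical identity)
    private final double latitude;        // geo-tag of the observation
    private final double longitude;
    private final String label;           // predefined textual label (e.g. pest/disease category)
    private final byte[] voiceClip;       // optional voice annotation
    private final List<byte[]> photos = new ArrayList<>();
    private final long createdAtMillis;   // query generation time; start of the response window
    private State state = State.STORED;

    public Query(String eventId, String userId, double latitude, double longitude,
                 String label, byte[] voiceClip, long createdAtMillis) {
        this.eventId = eventId;
        this.userId = userId;
        this.latitude = latitude;
        this.longitude = longitude;
        this.label = label;
        this.voiceClip = voiceClip;
        this.createdAtMillis = createdAtMillis;
    }

    public void addPhoto(byte[] jpegBytes) { photos.add(jpegBytes); }

    public String getEventId()       { return eventId; }
    public String getUserId()        { return userId; }
    public long getCreatedAtMillis() { return createdAtMillis; }
    public State getState()          { return state; }
    public void setState(State s)    { state = s; }
}
```

Keeping the EID/UID pair and the creation timestamp on the object is what later allows the DTN layer to deduplicate retransmissions and to enforce the response deadline.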
DTN communications are thus the natural choice for a networking paradigm in which nodes can be disconnected from the regular network for the majority of the time and data exchange can take a long time. To provide a DTN-based solution, we introduce a new entity into the mKRISHI® framework called the Relay Node. A relay node can be any other smartphone, used by fellow farmers, farm agents or field executives, that comes into close proximity of the farmers at remote locations and has the following capabilities: (i) collect the farmers' sensed data or queries using Device to Device (D2D) services, (ii) transmit the collected queries from one or more farmers to the expert system using the available network technologies (Wireless Fidelity (WiFi), cellular or any other) in a time-bound manner, and (iii) collect and communicate the acknowledgements (ACKs) and advisories from the expert system back to the farmers. Fig. 2 illustrates the possible modes in which multi-hop relay communication can be introduced to eliminate the network connectivity problem at the farmers' end.

B. Building Blocks of our Solution
The various modules of our architecture and the communication possibilities are explained through Fig. 3. The basic building blocks of our solution are as follows:
Farmer Node: In the present setup of the mKRISHI® framework, the farmer node is an Android application (App) targeted at users with Android devices running OS version 3.0 or later and with basic features such as a camera, Global Positioning System (GPS) and WiFi. The application empowers users (farmers) situated in remote locations to send ground truth in the form of 'events' that fundamentally consist of an image of the situation or surroundings, predefined textual labels and/or a voice clip. In cases where a particular farmer has a basic mobile phone instead of a smartphone, he can still use any other farmer's smartphone with his own authentication to upload his queries. In this case, there will be two different identities for the farmer: one physical identity, i.e., his own phone number, and one logical identity, which is the phone number of the smartphone he uses to log in. These events need to be uploaded to the server over the Internet so that users can receive advice or suggestions. The application allows users to store these events for later submission in case of poor or no network connectivity. While uploading, the system follows the DTN approach and relays the data to an agent or relay node that can reach a good network zone.
Relay Node: It employs the aforementioned farmer node Android App with the additional privilege of aggregating queries from farmer nodes which are incapable of uploading events. The relay node user presses a button in the application to create a WiFi Protected Access 2 (WPA2) secured WiFi hotspot in the region. A limited number of farmer nodes (depending on the device hardware) can connect to the relay node simultaneously and send their events to it. The relay node then uploads all the events to the server, and acknowledgements are sent back to the respective farmer nodes.
Server or Expert System: It is a Linux-powered, 16-core Xeon processor rack with 32 GB RAM, hosted in a virtualized environment and capable of handling large loads by applying standard techniques of distributed computing. It hosts the mKRISHI® server application which handles the 'events' coming from remote locations.
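The relay node's store-carry-forward behaviour (accept queries over the local hotspot, hold them while disconnected, and flush them to the server once any uplink becomes usable) could look roughly like the sketch below. It reuses the illustrative Query class from the earlier sketch; RelayStore, Uplink and the method names are likewise invented for illustration and are not part of the actual mKRISHI® application.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Illustrative store-carry-forward buffer for a DTN relay node. */
class RelayStore {

    /** Abstraction over whatever uplink (cellular or WiFi) the relay currently has. */
    public interface Uplink {
        boolean isAvailable();
        /** Returns true if the server accepted the query and returned an ACK. */
        boolean upload(Query query);
    }

    private final Deque<Query> pending = new ArrayDeque<>();

    /** Called when a farmer node delivers a query over the D2D hotspot link. */
    public synchronized void accept(Query query) {
        pending.addLast(query);
    }

    /** Called whenever the relay regains connectivity, or periodically while it moves. */
    public synchronized void forwardAll(Uplink uplink) {
        if (!uplink.isAvailable()) {
            return;                               // stay in carry mode, keep queries buffered
        }
        int remaining = pending.size();
        while (remaining-- > 0) {
            Query q = pending.pollFirst();
            if (uplink.upload(q)) {
                q.setState(Query.State.ACKED);    // ACK will be carried back to the farmer
            } else {
                pending.addLast(q);               // keep it for the next connectivity window
            }
        }
    }

    public synchronized int backlog() { return pending.size(); }
}
```

The buffer is deliberately first-in first-out, so the queries collected earliest are also offered to the server first once the relay reaches a network zone.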
The application is designed to maintain data integrity and has the capacity to send instant acknowledgements to the farmer nodes. Communication between the Farmer node and the Relay node is realized through low powered ad-hoc D2D communication, such as WiFi Direct, WiFi Hotspot and Bluetooth, etc. It uses the licence free spectrum (ISM band) and selfinitiated discovery and signalling techniques for the D2D communication initialization. At present, it uses a very low power (fixed) for the D2D communication; transmission power can be further reduced using sophisticated signalling techniques. IV. DEPLOYMENT AND EVALUATION To evaluate our proposed architecture, we have extended the mKRISHI ® framework and included the DTN functionalities. The system model is implemented to work seamlessly under all communication scenarios. The details of our implementation framework is explained as follows: A. Scanning of Access Networks This module is implemented in the farmer node as well as in the relay node using Android App. Once the event is sensed, the farmer node creates a query message and scans for an access network to communicate the same to the central expert system. The communication can be direct through the cellular network or indirect through the mobile relay node. B. Communication Modes Based on the availability of access networks we have implemented two modes for access network selection and communication. Mode-1 belongs to direct communication whereas Mode-2 is Relay-based communication. In this paper we focus on Mode-2 communication only. • Mode-1(a) -Direct: When the Cellular or WiFi connectivity is available at the farmers' end, it can directly transmit the query to the central expert system. In such situations, if the channel is bad then most likely it will remain same for a long time. Therefore, for such situations a relay agent with a DTN enabled relaying application module will visit the farmer place and collect farmers' queries to its own device and then relay the collected queries to the server. The relay node can also collect the ACK/responses for the queries from the server and deliver to the farmer nodes through this communication mode. In this mode, the query won't be deleted from the farmer node till an ACK/response is received from the server. In case of no ACK/response received from either the server (directly) or through the relay node within a pre-define time frame, the farmer node will restart the access scanning process for retransmission. C. Relay Node Communication For query collection from the farmer nodes, the relay node creates a WPA2 secured WiFi Hotspot in the region. Farmer nodes can register with this WiFi Hotspot and send their queries to the relay node. Once registered, the farmer's authentication details are saved in the relay node and need not to register again with the same relay node. In this way, secure D2D communication is realized between the farmer and the relay node. The relay node would then transmit all the queries to the server using its own network or by physically moving to a available network zone. Multi-hop relaying is also possible but currently out of the scope of this paper. Similar to the forward direction flow of queries, the relay node can also collect the ACK/responses for the queries from the server and deliver to the farmer nodes within a permissible time bound. D. Reverse flow of ACK/Response The ACK/response can be sent directly to the farmers using Short Message Services (SMS) or Hyper Text Transfer Protocol (HTTP) based messaging. 
In case of non-availability of the network, the ACK/response can also be sent through designated relay nodes. If the query came from a farmer who has both a physical and a logical identity, the ACK/response will be routed to the logical identity only, and it is the farmer's job to log into that smartphone and retrieve the message. The forward and reverse flow of messages between the farmer node and the server can also happen through different relay nodes. To enable this multiple-relay-node option, the server maintains a map of relay nodes and their registered farmer nodes. Note that responses are required to be received by the farmer within a specified time period (T_r) starting from the query generation time (24 hours in this paper). Otherwise, the farmer node discards the query; new queries can be generated again. Similarly, ACKs are to be received by the farmer node within a smaller time period T_d (T_d << T_r). It is generally assumed that the relay node can go online fairly quickly (< T_d) since it is mobile. To handle the situation where the relay node, due to some exigency, is unable to connect to the server within this time period, we use a timeout and a retransmission to a possibly different relay node to ensure reliable message delivery to the server. We now explain the control flow of all the events required for end-to-end communication between the farmer node and the server. Fig. 4 illustrates the different phases and modes of the communications. Under Mode-1 communication, queries can be sent directly to the server node and ACKs/responses can be delivered directly to the farmer node. Upon receiving the ACK, the query associated with the event is deleted from the farmer node. In Mode-2, two cases are possible. In the first case, i.e., when the relay node is online, both the query and the ACK/response can be received before the expiry of the desired time; the event can be deleted after the ACK is received at the farmer node. Note that the ACKs/responses can also be received directly using SMS. In the second case, i.e., when the relay node is offline, even after collecting the queries the relay node may not be able to transmit them to the server immediately due to network issues. In this case, the relay node needs to physically move to a network zone to transmit the queries. This can delay the ACK/response, and hence a timeout can occur at the farmer node, resulting in retransmission of the queries (and possibly deletion of the query). Note that retransmission of the farmer queries can be made through another relay node; multiple retransmission attempts can be made until the response timeout.

E. Proposed DTN-based Server Application
As discussed before, Android Apps have been designed for the farmer and relay nodes. Appropriate Java-based applications have also been designed at the server end. These applications provide the necessary platforms to visualize the queries, analyze them and communicate back responses or general information (advice/alerts) to selected farmers (unicast or multicast). An SMS gateway has been interfaced with the server in order to facilitate the communication of ACKs, alerts or responses in the form of SMSs. Fig. 5 shows the screenshots of the Android Apps designed and implemented on the farmer and relay nodes. These are simple, easy-to-use applications, which can be used by the farmers without much difficulty.
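Stepping back to the timing rules described earlier in this section (ACK within T_d, response within T_r, retransmission through a possibly different relay after a timeout), the farmer node's behaviour can be summarized in a small decision routine. The sketch below reuses the illustrative Query class from earlier; the T_d value of 30 minutes is an assumption for illustration only, since the paper fixes only T_r = 24 hours and requires T_d << T_r. For brevity, the ACK window is measured here from the query creation time, whereas a real implementation would time it from the last transmission attempt.

```java
import java.util.concurrent.TimeUnit;

/** Illustrative ACK/response timeout handling on the farmer node. */
class FarmerNodeTimers {

    static final long RESPONSE_TIMEOUT_MS = TimeUnit.HOURS.toMillis(24);   // T_r, as in the paper
    static final long ACK_TIMEOUT_MS      = TimeUnit.MINUTES.toMillis(30); // T_d, illustrative value

    /** Decide what to do with a stored query at wall-clock time 'now' (milliseconds). */
    static String nextAction(Query q, long now) {
        long age = now - q.getCreatedAtMillis();

        if (age > RESPONSE_TIMEOUT_MS) {
            q.setState(Query.State.EXPIRED);
            return "discard";                 // response window T_r elapsed: drop the query
        }
        switch (q.getState()) {
            case STORED:
                return "transmit";            // scan for direct or relay access and send
            case SENT:
                // No ACK yet: after T_d, retransmit, possibly via a different relay node.
                return (age > ACK_TIMEOUT_MS) ? "retransmit" : "wait";
            case ACKED:
                return "wait-for-response";   // keep the query until the advisory arrives
            case RESPONDED:
                return "delete";              // advisory delivered: the event can be removed
            default:
                return "discard";
        }
    }
}
```

Driving the behaviour from the query's creation timestamp keeps retransmissions bounded: however many relays are tried, the query is abandoned once the 24-hour response window closes.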
Local language support is also provided in the Apps so that they can be adopted easily. Appropriate care has been taken to keep the security and privacy settings of the smartphones intact while installing these Apps.

F. Practical Deployment
At present, laboratory prototypes are being tested for (i) end-to-end data communication, (ii) ACK delivery through SMSs and through data communications, (iii) event creation, deletion, transmission and retransmission, (iv) the effect of D2D communication on interference and its impact on throughput, (v) scalability, i.e., the maximum number of farmer nodes one relay node can support, and (vi) privacy and security threats that can result from the relay-based communication, etc. After the laboratory tests, practical deployment will be considered in the tea gardens of Assam, followed by other areas in the North-East, Andhra Pradesh and Maharashtra.

V. CONCLUSIONS
In this paper, we have observed the need for a delay-tolerant framework for remote farmers with poor network connectivity and have proposed a novel DTN-based architecture for end-to-end communication. In our model, a mobile relay node visits the farmer nodes in poor-channel zones, collects their queries and transmits them to the server. This paper has presented the first implementation of a DTN-based framework in a Rural Participatory Sensing application. Simultaneous transmission of data through WiFi Direct and cellular is also possible and can be tried out in the future. Further, estimating the availability of relay users at a certain location can help trigger other participatory sensing applications, enabling the sensing operation just before the relay nodes are expected to arrive within communication range of the farmer nodes. We believe that this model can benefit the farming community to a large extent and can support the digitization of the agriculture domain in India.
OPA1 mutations induce mitochondrial DNA instability and optic atrophy ‘plus’ phenotypes. Brain 131: 338–351 Mutations in OPA1, a dynamin-related GTPase involved in mitochondrial fusion, cristae organization and control of apoptosis, have been linked to non-syndromic optic neuropathy transmitted as an autosomal-dominant trait (DOA). We here report on eight patients from six independent families showing that mutations in the OPA1 gene can also be responsible for a syndromic form of DOA associated with sensorineural deafness, ataxia, axonal sensory-motor polyneuropathy, chronic progressive external ophthalmoplegia and mitochondrial myopathy with cytochrome c oxidase negative and Ragged Red Fibres. Most remarkably, we demonstrate that these patients all harboured multiple deletions of mitochondrial DNA (mtDNA) in their skeletal muscle, thus revealing an unrecognized role of the OPA1 protein in mtDNA stability. The five OPA1 mutations associated with these DOA 'plus' phenotypes were all mis-sense point mutations affecting highly conserved amino acid positions and the nuclear genes previously known to induce mtDNA multiple deletions such as POLG1, PEO1 (Twinkle) and SLC25A4 (ANT1) were ruled out. Our results show that certain OPA1 mutations exert a dominant negative effect responsible for multi-systemic disease, closely related to classical mitochondrial cytopathies, by a mechanism involving mtDNA instability. The large majority of mutations in the OPA1 gene described to date are predicted to lead to a truncated protein and to haploinsufficiency (see http://lbbma.univangers.fr)(Ferre et al., 2005).These mutations are invariably associated with a non-syndromic, slowly progressive form of optic neuropathy, as originally described by Kjer (1959).Classic DOA usually begins before 10 years of age, with a large variability in the severity of clinical expression, which may range from non-penetrant unaffected cases up to very severe, early onset cases, even within the same family carrying the same molecular defect (Delettre et al., 2002;Carelli et al., 2004;Olichon et al., 2006;Cohn et al., 2007).However, there is at least one clear example standing out of this paradigm.This is a mutation in the OPA1 gene, i.e. the c.1334G4A leading to p.R445H amino acid change, being associated with a syndromic form of optic neuropathy and sensorineural deafness (Amati-Bonneau et al., 2003;Shimizu et al., 2003), and in some of the reported cases with chronic progressive external ophthalmoplegia (CPEO), ptosis and myopathy (Treft et al., 1984;Meire et al., 1985;Payne et al., 2004). CPEO, isolated or variably associated with a wider syndromic clinical expression, is the most frequent feature of mitochondrial myopathy and has a heterogeneous genetic basis, again driven by both primary defects in the mtDNA, i.e. 
single deletions and point mutations (DiMauro and Schon, 2003), or by mutations in nuclear genes resulting in multiple deletions of mtDNA (Zeviani et al., 1989). At least four nuclear genes are now known to be involved in CPEO associated with mtDNA multiple deletions and autosomal recessive or dominant inheritance. These are POLG1 (Van Goethem et al., 2001), the enzyme replicating mtDNA, the mitochondrial replicative DNA helicase Twinkle (PEO1) (Spelbrink et al., 2001), the heart/muscle-specific adenine nucleotide translocator ANT1 (SLC25A4) (Kaukonen et al., 2000), and finally the thymidine phosphorylase (TP) involved in the nucleoside pool maintenance (Nishino et al., 1999). Among these genes, mutations in at least two of them, i.e. POLG1 and TP, may present with a combination of deletions and depletion of mtDNA in skeletal muscle (Hirano et al., 2004; Hudson and Chinnery, 2006). The association of CPEO and mitochondrial myopathy with optic atrophy is not frequent (Treft et al., 1984; Meire et al., 1985) and never reported as due to mutations in the above-mentioned genes. Thus, the clinical phenotype associated with the OPA1/R445H mutation is somehow a novel combination bridging autosomal-dominant CPEO and DOA (Payne et al., 2004). Recent studies showed that the biochemical phenotype of the OPA1/R445H mutation consists in a defective OXPHOS in fibroblasts (Amati-Bonneau et al., 2005). A defect in muscle bioenergetic efficiency was also documented by MR spectroscopy (MRS) in patients with the c.2708delTTAG microdeletion and classic DOA (Lodi et al., 2004). Furthermore, slight reduction of mtDNA copy number was reported in blood cells from DOA patients (Kim et al., 2005), overall supporting the notion that OPA1 may be involved in control of mtDNA content and ultimately in OXPHOS efficiency. We here report the association of different mis-sense point mutations in the OPA1 gene in six families affected with 'plus' phenotypes of optic atrophy and wider neuromuscular involvement including sensorineural deafness, cerebellar ataxia, axonal sensory-motor polyneuropathy and mitochondrial myopathy frequently complicated by CPEO. Most remarkably we provide evidence that multiple deletions of mtDNA are accumulated in the skeletal muscle of these patients, thus revealing an unrecognized role of the OPA1 protein in maintaining mtDNA integrity.

Case reports
We here present the clinical histories of eight patients belonging to the six families investigated (Fig. 1) and a summary of the clinical and laboratory findings is reported in Table 1.
Case 1 (Family 1, I-2) This family is fully described elsewhere and we here detail again the clinical histories of the two affected subjects (Liguori et al., in press).The proband is a 38-year-old man from Italy who was noted for poor vision at 4 years of age.At 6 years of age a rapid deterioration of his visual acuity led this patient to legal blindness.At 9 years of age he also suffered a progressive hearing loss needing acoustic prosthesis.At 30 years of age he developed gait difficulties with frequent falls.We observed this patient for the first time when he was 38 years old and his neurological exam Case 2 (Family 1, II-1) Case 1 (Family 3, II-1) The case of this 39-year-old French woman has already been reported (Amati-Bonneau et al., 2003, 2005) but at that time we had no information on the muscular pathology.Briefly, she presented with optic atrophy at the age of 6 years and with moderate sensorineural hearing impairment since the age of 15 years.Neurophysiological studies (BAEPs and evoked otoacustic emissions) suggested that deafness was caused by auditive neuropathy.At a recent neurological examination she showed severe optic atrophy and deafness and mild ataxia with positive Romberg sign.She underwent muscle biopsy that showed evidence of RRFs and COX negative fibres (Fig. 2, panels G, H and I). Case 1 (Family 4, I-2) This patient and her daughter were also previously reported (Amati-Bonneau et al., 2005).Briefly she is a 57-year-old Spanish woman who has been diagnosed with optic atrophy at age 13 years and a moderate sensorineural hearing loss was found when she was 30 years old.Case 2 (Family 4, II-1) The daughter of the proband of Family 4 was also diagnosed with optic atrophy and hearing loss at age 9 years.Her clinical condition is now slowly progressing. Case 1 (Family 5, II-1) This 43-year-old man from southern France suffered visual impairment since childhood and diffuse myalgia in both legs since adolescence.At age 39 years he experienced gait difficulties and ataxia was reported at neurological examination.At age 42 years, he was admitted to a gastroenterology unit for an episode of colic occlusion without any evident mechanical cause and was thereafter admitted in the neurology unit.At this time, his visual acuity was reduced to counting fingers in both eyes and fundus examination showed bilateral optic atrophy.At neurological examination he also had bilateral ptosis and ophthalmoplegia, impaired sensation to all modalities predominantly in the lower limbs, marked gait ataxia and positive Romberg sign. Neurophysiological studies (BAEPs and evoked otoacustic emissions) showed auditive neuropathy.EMG revealed an axonal sensory-motor neuropathy without clear evidence of myopathy.Muscular biopsy evidenced RRFs and COX negative fibres.Brain MRI revealed an atrophy of the corpus callosum and brainstem and a mild cerebellar atrophy (Fig. 3, panel D).Furthermore, bilateral basal ganglia hypointensity was detected at T2-weighted MRI scan (Fig. 3, panels E and F).Interestingly, brain MRS was normal, notably the lactate content (data not shown).The family history of this proband was remarkable for optic atrophy in his mother and brother, but we did not obtain further details from these cases. 
Case 1 (Family 6, II-8) and ophthalmoplegia, mild optic atrophy and areflexia at four limbs.Muscular CPK as well as blood and CSF lactate levels were normal.EMG revealed an axonal sensory-motor neuropathy and myopathic involvement of orbicularis oculi muscle.Muscular biopsy evidenced numerous RRFs and COX negative fibres and electron microscopy revealed paracristalline inclusions in mitochondria.The family history of this patient was remarkable for ptosis in his father, sister and brother but no further clinical details were available on these patients. Muscle histopathology and ultrastructure Quadriceps, deltoid, or tibialis anterior muscle biopsies, either by needle or open surgery, were performed under local anaesthesia and after informed consent of the patient.Muscle specimens were frozen in cooled isopentane and stored in liquid nitrogen for histological and histoenzymatic analysis including Gomori modified trichrome staining, COX activity, succinate dehydrogenase (SDH) activity and double COX/SDH staining according to standard protocols.A fragment was also fixed in 2% glutaraldehyde and processed for ultrastructural analysis. Fibroblasts culture Fibroblasts culture was established from skin biopsies, having obtained informed consent of the patient.Fibroblasts were grown in DMEM medium supplemented with 10% fetal bovine serum (FBS), 2 mM l-glutamine and antibiotics.For the experiments, fibroblasts were grown in DMEM glucose medium or DMEM glucose-free medium containing 5 mM galactose, 5 mM pyruvate (DMEM galactose medium).Mitochondrial morphology was assessed after cell staining with 10 nM Mitotracker (Molecular Probes) for 30 min at 37 C. Fluorescence was visualized with a digital imaging system using an inverted epifluorescence microscope with 63Â/1.4 oil objective (Diaphot, Nikon, Japan).Images were captured with a back-illuminated Photometrics Cascade CCD camera system (Roper Scientific, Tucson, AZ, USA) and Metamorph acquisition/analysis software (Universal Imaging Corp., Downingtown, PA, USA). Molecular investigations Informed consent for genetic investigations was obtained from all patients after approval of the study by the board of the local ethical committee in the different institutions participating to this project.Total DNA was extracted from the platelet/lymphocyte fraction and skeletal muscle by the standard phenol/chloroform method. Sequencing of the OPA1 gene For the OPA1 gene analysis genomic DNA was amplified by PCR with specific primers designed to amplify all exons and flanking intronic regions as previously described (Pesch et al., 2001).PCR reactions were carried out in a 50 ml volume with 50-100 ng genomic DNA, 10 mM Tris-HCL pH 8.9, 50 mM KCL, 1,5-3 mM MgCl 2 and 200 mM of each dNTP, 10 pmol of primers and 1 U AmpliTaq polymerase (Applied Biosystems, Weiterstadt, Germany).PCR products were purified by ExoSAP treatment (Amersham) and sequenced employing BigDye Terminator chemistry (Applied Biosystems). Analysis of mtDNA deletions Southern blot analysis was performed, as previously reported (Moraes et al., 1989), on the linearized mtDNA molecule after digestion with the restriction enzyme PvuII, separated by agarose electrophoresis (0.8%), transferred onto nitrocellulose membranes and hybridized with the entire human mtDNA probe labelled with digoxigenin-alkaline phosphatase (Roche Diagnostics, Switzerland). Long-range PCR on mtDNA was also performed by two different protocols.One method is essentially as reported by Nishigaki et al. 
(2004).The set of primers used is as follows: F1482-1516 and R1180-1146 (wild-type mtDNA fragment of 16.267 bp) F3485-3519 and R14820-14786 (wild-type mtDNA fragment of 11.335 bp), F5459-5493 and R735-701 (wild-type mtDNA fragment of 11.845 bp).The PCR conditions were: one cycle at 94 C for 1 min; 30 cycles at 98 C for 10 s and 68 C for 11 min; a final superextension cycle at 72 C for 10 min.The PCR was performed using Takara LA Taq DNA polymerase for the first pair of primers, and Takara Ex Taq DNA polymerase for the other set of primers (Takara Shuzo Corp., Japan).The PCR products were separated by a 0.8% agarose gel.The second method is just similar to the one previously described, the PCR being performed by using Takara LA Taq DNA polymerase (Takara Shuzo Corp., Japan) and two set of primers: F8285-8314 and R15600-15574 (wild-type mtDNA fragment of 7315 bp); F8285-8314 and R13705-13677 (wild-type mtDNA fragment of 5420 bp).The PCR conditions were one cycle at 94 C for 2 min; 30 cycles at 98 C for 5 s and 68 C for 15 min; a final superextension cycle at 72 C for 10 min. Sequencing of mtDNA The complete mtDNA was amplified in 24 overlapping PCR fragments using specifically designed primers (available upon request) based on the revised human mtDNA Cambridge reference sequence (www.mitomap.org/mitoseq.html)(Andrews et al., 1999).The PCR fragments were sequenced in both directions using a dye terminator cycle sequencing kit (Applera, Rockville, MD).Assembling and identification of variations in the mtDNA was carried out using the Staden Package (Staden et al., 2000). Sequencing across the junction points of some mtDNA deletions was achieved by amplifying specific mtDNA fragments to detect the 5 kb deletion, using the set of primer F8287-8306 and R13590-13571, and the 8.1 and 7.6 kb deletion using the set of primer F5651-5670 and R14268-14249.The PCR conditions were: one first cycle at 94 C for 5 min; 30 cycles at 94 C for 1 min, 55 C for 1 min, 72 C for 1 min; a final superextension cycle at 72 C for 7 min.The PCR products, isolated from the agarose gels by QIAquick gel extraction kit (Qiagen, Valencia, CA), were sequenced in an ABI Prism 310 Genetic Analyzer using Big Dye Terminator Cycle Sequencing Reaction Kits (Applied Biosystems). Evaluation of mtDNA copy number Quantitation of mtDNA relative to nuclear DNA (nDNA) was performed by two real-time PCR-based different methods. Both were multiplex assays based on hydrolysis probe chemistry.In the first method the target genes were the 12S ribosomal gene of mtDNA (primers and probe sequences and PCR reaction conditions are available on request) and the RNAseP nuclear gene (TaqMan RNAseP Control Reagent Kit, Applied Biosystems, Foster City, CA, USA).Calibration curves were used to quantify mtDNA and nDNA copy number, which were based on the linear relationship between the crossing points cycle values and the logarithm of the starting copy number. The second method was as previously described (Cossarizza et al., 2003).Briefly, a mtDNA fragment (nt 4625-4714) and a nuclear DNA fragment (FasL gene) were co-amplified by multiplex polymerase chain reaction.PCR reaction conditions, primers and probes are as previously detailed (Cossarizza et al., 2003).A standard curve for mtDNA and nuclear DNA was generated using serial known dilutions of a vector (provided by Genemore, Modena, Italy) in which the regions used as template for the two amplifications were cloned tail to tail, to have a ratio of 1:1 of the reference molecules. 
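As a purely illustrative aside, the calibration-curve step common to both assays can be expressed as a short calculation: a measured crossing point Ct is mapped back to a starting copy number through a line of the form Ct = slope × log10(copies) + intercept fitted on the serial dilutions, and the mtDNA and nuclear values are then combined into a ratio. The sketch below, in Java for consistency with the other code sketches above, uses invented calibration constants and crossing points; none of the numbers, class names or assay details are taken from the study.

```java
/** Illustrative conversion of real-time PCR crossing points into an mtDNA/nDNA copy-number
 *  ratio via calibration lines of the form Ct = slope * log10(copies) + intercept.
 *  All numeric constants are invented for the example and are not taken from the study. */
public class MtDnaCopyNumberExample {

    /** Starting copy number corresponding to a measured crossing point on a calibration line. */
    static double copiesFromCt(double ct, double slope, double intercept) {
        return Math.pow(10.0, (ct - intercept) / slope);
    }

    public static void main(String[] args) {
        // Hypothetical calibration lines (a slope near -3.32 corresponds to ~100% PCR efficiency).
        double mtSlope = -3.35, mtIntercept = 38.0;   // mitochondrial (e.g. 12S) assay
        double nSlope  = -3.40, nIntercept  = 39.5;   // nuclear reference assay

        // Hypothetical crossing points measured in one muscle DNA sample.
        double ctMito    = 16.2;
        double ctNuclear = 27.9;

        double mtCopies  = copiesFromCt(ctMito, mtSlope, mtIntercept);
        double nucCopies = copiesFromCt(ctNuclear, nSlope, nIntercept);

        // mtDNA copies per nuclear genome equivalent in the reaction.
        double ratio = mtCopies / nucCopies;
        System.out.printf("mtDNA %.0f copies, nDNA %.0f copies, ratio %.0f%n",
                mtCopies, nucCopies, ratio);
    }
}
```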
For both methods, the data are means of at least three independent measurements.

Sequencing of nuclear genes involved in mtDNA multiple deletions
Direct sequencing of the complete coding region and the exon/intron boundaries of the genes POLG1, PEO1 (Twinkle) and SLC25A4 (ANT1) was carried out as previously described (González-Vioque et al., 2006) on an ABI 3730 sequencer (Applied Biosystems, Foster City, CA, USA).

Statistics
Data were analysed by one-way ANOVA, using the software SigmaStat Ver. 3.5 (Systat Software Inc.). Data were considered significantly different when P-values were <0.05.

Mutation analysis of the OPA1 gene
All six probands with optic atrophy 'plus' clinical phenotypes underwent complete sequence analysis of the OPA1 gene and in each case a mis-sense pathogenic mutation was found. Sequencing of the mutated exon was performed in other members of the family, when available, and revealed full penetrance of the mutated alleles in the other affected patients. The mutation c.1316G>T (p.G439V) in exon 14, found in Family 1, was previously reported, and molecular analysis of multiple family members demonstrated that the mutation was present only in the proband and her daughter, suggesting a de novo event (Liguori et al., in press). The mutation of Family 1 is close to the previously reported mutation c.1334G>A (p.R445H) in exon 14 (Treft et al., 1984; Meire et al., 1985; Shimizu et al., 2003; Amati-Bonneau et al., 2003; Payne et al., 2004), which was found in Family 3 and Family 4 of our series. Both these mutations, as well as the others identified in Family 5 (c.1635C>G, p.S545R, exon 17) and Family 6 (c.1069G>A, p.A357T, exon 11), introduce amino-acid changes in highly conserved positions of the GTPase domain and were not found in a panel of 460 control chromosomes. The mutation found in Family 5, which is of French ancestry, has also been detected in an unrelated pedigree from the United Kingdom displaying a similar clinical phenotype (Hudson et al., in this issue). Furthermore, this same mutation has also previously been found in a study on a series of DOA patients from Japan (Nakamura et al., 2006). Finally, the c.2729T>A (p.V910D) mutation in exon 27, found in Family 2, stands out because it affects the GTPase effector domain and is associated with a milder phenotype compared to all other mutations (Table 1).

Muscle histochemistry and ultrastructure
All probands from the six families reported here underwent skeletal muscle biopsy and on all occasions, except for case 1 from Family 2, RRFs and/or COX negative fibres were detected at Gomori-modified trichrome and double COX/SDH staining (Table 1 and Fig. 2, panels A, B, G and H). SDH staining also showed different degrees of mitochondrial proliferation, ranging from increased subsarcolemmal staining to fully SDH positive fibres (Fig. 2, panels C and I). Case 1 from Family 2 again stands out because he did not display any clear sign of mitochondrial dysfunction at muscle histochemistry, even if a frequent prevalence of the SDH stain was observed at double COX/SDH staining, possibly indicative of a partially depleted COX activity (Fig. 2, panel E). However, this patient had non-specific histological changes, such as marked variability of fibre size with both hypotrophic and hypertrophic changes, splitting fibres, central nuclei and sporadic subsarcolemmal rimmed vacuoles, all suggestive of myopathy (Fig. 2, panels D and F).
He underwent muscle biopsy because of pathological serum lactate levels after standardized muscle exercise (see case report). Overall, the patients of our case series ranged between age 38 and 67 years and, with the exception of case 1 from Family 2, RRF were 2.5 to 8%, SDH hyperintense fibres were 6 to 35%, and finally COX negative fibres were 7 to 35%. In all cases who underwent electron microscopy there was ultrastructural evidence of mitochondrial pathology, including the proband from Family 2, most frequently mitochondrial proliferation, altered morphology of mitochondria and cristae organization, and paracristalline inclusions (Fig. 4).

Mitochondrial DNA analysis
The finding of clear signs of mitochondrial myopathy, with a mosaic distribution of RRFs and COX negative/SDH hyperintense fibres, pointed to the possible occurrence of mtDNA defects and prompted various molecular investigations on mtDNA. Initially, due to the apparent maternal inheritance, the index case of Family 4 underwent a complete sequence analysis of muscle mtDNA, which did not reveal any candidate pathogenic mutation. All changes detected were well established population-specific polymorphic variants defining haplogroup J1, and the complete sequence has been deposited in GenBank (http://www.ncbi.nlm.nih.gov/Genbank/GenbankOverview.html; accession number EU151466). None of the other probands underwent complete mtDNA sequence analysis, mostly because of unequivocal evidence of dominant transmission of the disease, as in Families 1, 2 and 6. All probands underwent mtDNA analysis to screen for the presence of large-scale rearrangements, either by Southern blot analysis followed by long PCR, or directly by long-range PCR when the availability of muscle mtDNA was limited. All probands who underwent Southern blot analysis (Families 3, 4, 5 and 6) showed variable levels of mtDNA multiple deletions (one example is given in Fig. 5, panel A). The presence of multiple deletions was confirmed in all probands by long-range PCR performed with different sets of primers (Fig. 5, panels B and C). We did not perform quantitative analysis, but the abundance of multiple deletions was quite variable, ranging from patients with very low levels (Fig. 5, lane 1 in panel C) to cases with higher abundance (Fig. 5, lane 4 in panel C). To confirm that the bands detected by long-range PCR were truly deleted molecules of mtDNA, we specifically re-amplified by PCR with appropriate primers some of the putative deletions, purified the band and performed sequence analysis detecting the junction point of the deleted molecule (Fig. 6). The proband from Family 4 was not investigated by this approach, because muscle DNA was no longer available. We detected the ΔmtDNA 4.9 kbp (common deletion) in all the other probands investigated (the same as in panel C of Fig. 5). The ΔmtDNA 8.1 kbp was particularly abundant in proband 1 from Family 2, but present only at low level in all other cases, whereas the ΔmtDNA 7.6 kbp was easily detected in all probands except the one from Family 2. The absolute quantitation of mtDNA copy number was performed on DNA samples extracted from skeletal muscle of the six probands and of 14 normal individuals (Fig. 7).
As negative and positive controls we used, respectively, total DNA extracted from 143B.TK-derived 209 ρ0 cells (a kind gift of Giuseppe Attardi), completely devoid of mtDNA, and from skeletal muscle of a patient with mitochondrial encephalomyopathy, lactic acidosis and stroke-like episodes (MELAS) syndrome with abundant RRFs and over 80% heteroplasmy of the 3243A>G tRNA Leu mtDNA point mutation (Valerio Carelli, data not shown). The mean values of mtDNA copy number in the control groups were 1949 ± 948 (age-matched controls) and 2008 ± 927 (total controls). We found similar values in affected individuals, except for the probands of Families 3 and 6, which showed a non-significant increase of mtDNA copy number compared to the control group, respectively 3185 ± 1431 and 2954 ± 1033 (Fig. 7).

Figure legend (electrophoresis lanes): Variably abundant extra bands due to deleted mtDNA molecules from muscle DNA of all probands except the one from Family 4 are present. Lane 1 is the proband from Family 2; Lane 2 is the proband from Family 1; Lane 3 is the proband from Family 5; Lane 4 is the proband from Family 3; Lane 5 is the proband from Family 6 in both the left and right electrophoresis; in the right electrophoresis, Lane 6 is a positive control (a CPEO patient previously diagnosed with mtDNA multiple deletions) and Lane 7 is a negative control. The molecular weight marker is marker X (Roche) and the size of some reference bands is indicated. The presence of mtDNA deletions in all these probands has been further confirmed using the other set of primers described in the methods (not shown).

Mitochondrial network in fibroblasts
Considering that OPA1 function is thought to be relevant for mitochondrial network organization, we investigated the mitochondrial morphology of fibroblasts bearing the OPA1 mis-sense mutation c.1316G>T (p.G439V) in exon 14 (proband from Family 1) (Liguori et al., in press) compared with fibroblasts obtained from normal individuals, in glucose medium and after 24 h incubation in glucose-free medium containing galactose. Under this latter condition cells are forced to rely predominantly on oxidative phosphorylation for ATP synthesis, given the low efficiency of this carbon source to feed the glycolytic pathway (Ghelli et al., 2003). After loading with Mitotracker Red and examination by fluorescence microscopy, fibroblasts from control subjects displayed a typical filamentous interconnected network in both growth conditions, in glucose medium and after the 24 h switch to galactose medium (Fig. 8). The OPA1 mutant fibroblasts presented with filamentous mitochondria, but less interconnected and occasionally with balloon-like enlargements, in glucose medium (Fig. 8). After 24 h growth in galactose medium the mitochondrial network of most OPA1 mutant fibroblasts underwent a complete fragmentation, resulting in only non-interconnected discrete organelles (Fig. 8).

Fig. 8 Mitochondrial network in fibroblasts. The mitochondrial network of fibroblasts, as visualized by the Mitotracker Red dye, is shown comparing a control subject with the proband from Family 1 at time 0 in glucose medium, and after 24 h growth in galactose medium. The two conditions do not influence the interconnected tubular organization of the filamentous mitochondrial network in the control fibroblasts. The patient's fibroblasts present less interconnected filamentous mitochondria in glucose medium, and after 24 h growth in galactose medium the mitochondrial network is completely fragmented.
Fibroblast cells from Families 3 and 4 patients (Table 1) were previously investigated and shown to display similar propensity to hyperfragmentation of mitochondrial network (Amati-Bonneau et al., 2005). Exclusion of nuclear genes involved in mtDNA multiple deletions To rule out the contribution of the genes known to be involved in mtDNA multiple deletion formation we performed their sequence analysis.No sequence changes of pathogenic significance were identified in the coding regions and flanking exon-intron boundaries for POLG1, PEO1 and SLC25A4, excluding any involvement of these genes in the pathogenic mechanisms. OPA1 protein modelling Sequence searches indicate that OPA1 is a 961 amino acid residue protein belonging to a family of highly conserved GTPases related to Dynamin and the closest structural homologue identified to date is the bacterial dynamin-like protein (BDLP) (Low and Lowe, 2006).The BDLP structure provides an OPA1 homology model for the C terminal region of the transmembrane helix and the PARL cleavage site (residues 220-960).Currently, OPA1 is considered a mechano-enzyme that uses GTP hydrolysis to switch between distinct conformations implicated in membrane fusion.Except one, all OPA1 mis-sense mutations found in the DOA patients here investigated reside in the highly conserved GTPase domain (Fig. 9).GTPase activity is critical for OPA1 function and these miss-sense mutations (A357T, G439V, R445H, S545R) affect the GTPase domain just adjacent to its active site potentially impairing GTP hydrolysis by locking the protein in an 'on' or 'off ' state.Thus, these mutations may interfere with nucleotide binding and alter the affinity and hydrolysis rate of the GTPase domain.Overall, these mutations possibly impair the fine tuned conformational states of the active-inactive balance of OPA1, directly impacting on its properties.The only mis-sense mutation differently located (V910D) resides outside the GTPase domain, at the interface of the two effector domains performing the conformational change (Fig. 9).This mutation replaces a hydrophobic valine with a negatively charged aspartate and may impact the integrity of the interface by destabilizing the 'off ' state leading to an activated conformation of the protein. 
Discussion In this study we show the unprecedented finding that mutations in the OPA1 gene, not predicted to produce protein truncation and haploinsufficiency as patho-mechanism for DOA, are associated with mtDNA instability and result in complicated clinical phenotypes that we propose to define as OPA1 'plus' syndromes.These clinical phenotypes seem to be invariably defined at least by the association of optic atrophy and muscular involvement, which may range from non-specific myopathy to classical mitochondrial myopathy with RRFs and COX negative fibres and CPEO, and in all cases except for Family 2 also by the occurrence of sensorineural deafness.Central and peripheral nervous system may also be variably involved, with frequent occurrence of cerebellar or spinocerebellar ataxia and peripheral axonal neuropathy.The common molecular feature we have found in all cases is the accumulation of multiple mtDNA deletions in the skeletal muscle from these patients.The age of the patients we have investigated ranged between 38 and 67 years, thus excluding that the amount of RRF/COX negative fibres and the levels of mtDNA multiple deletions we observed could be ascribed to their age-related somatic accumulation (Johnston et al., 1995;Bua et al., 2006).Our findings link for the first time OPA1 protein function with mtDNA integrity maintenance, making OPA1 the fifth gene involved in mtDNA multiple deletion pathologies, together with POLG1, PEO1 (Twinkle), SLC25A4 (ANT1) and TP. OPA1 is a 960 amino acid residue protein that belongs to a family of highly conserved GTPases related to Dynamin (Praefcke and McMahon, 2004;Hoppins et al., 2007).OPA1 is anchored to the mitochondrial inner membrane and has an important role in the mitochondrial fusion process and in protection from apoptosis.Indeed, downregulation of OPA1 using specific small interference RNA leads to fragmentation of the mitochondrial network concomitantly to dissipation of the mitochondrial membrane potential and to a drastic disorganization of the cristae (Olichon et al., 2003).Moreover, OPA1 is also involved in protection from and regulation of the apoptotic process, by dealing with cytochrome c storage and release (Olichon et al., 2003;Frezza et al., 2006).Recent studies point to a mounting evidence that OPA1 is also involved in OXPHOS efficiency (Lodi et al., 2004;Amati-Bonneau et al., 2005), even if details of its mechanism and role in mitochondrial respiratory functions are lacking.One possibility is the involvement of OPA1 in regulating the amount of mtDNA, as suggested by a study showing that DOA patients may have slightly reduced mtDNA copy number in blood lymphocytes (Kim et al., 2005).It is also known that mutant Mgm1 protein, the homologous protein of OPA1 in yeast, may lead to loss of mtDNA and petit phenotype (Herlan et al., 2003).However, our current results on the possible involvement of OPA1 also in mtDNA maintenance in human subjects failed to reveal mtDNA depletion in the skeletal muscle of our probands, besides the documented occurrence of mtDNA instability with multiple deletions.On the contrary, in two cases the mtDNA copy number was increased compared with controls, even if without reaching statistical significance, in accordance with the presence of RRFs and compensatory enhancement of mitochondrial biogenesis. 
It is remarkable that, contrary to the majority of the OPA1 mutations associated with DOA to date, all the mutations investigated in the present study are mis-sense point mutations changing amino acid residues in the highly conserved GTPase domain, with only one exception (V910D).This rules out haploinsufficiency as a pathomechanism in our cases, and suggests more likely that gain or loss of function of the protein activity is responsible for mtDNA instability.The current hypothesis states that OPA1 is a mechano-enzyme that uses GTP hydrolysis to switch between distinct conformations that either facilitate membrane fusion directly or recruit machinery for it (Praefcke and McMahon, 2004;Olichon et al., 2006;Hoppins et al., 2007).GTPase activity is critical for OPA1 function and DOA miss-sense mutations found in the GTPase domain adjacent to its active site (A357T, G439V, R445H, S545R) may impair GTP hydrolysis locking the protein in an 'on' or 'off ' state.Thus, these mutations have the potential to interfere with nucleotide binding and affinity, possibly affecting the hydrolysis rate of the GTPase domain.Hence, these mutations may alter the finely tuned conformational states of the active-inactive balance of OPA1 protein and have a direct impact on its properties.The other DOA miss-sense mutation, i.e.V910D in the GTPase effector domain, points to the existence of further such critical residues in the OPA1 protein.Knowing the possible involvement of dNTP pools in the mtDNA deletion formation in TP (Nishino et al., 1999), and possibly in ANT1 (Kaukonen et al., 2000) related syndromes, a scenario we may hypothesize, based on the currently reported OPA1 defects, is that differences in GTPase activity of OPA1 may affect the dGTP pool, ultimately leading to mtDNA instability. However, OPA1 is attached to the inner mitochondrial membrane pointing towards the intermembrane space with a crucial, well-established role in cristae conformation (Olichon et al., 2003(Olichon et al., , 2006;;Frezza et al., 2006).mtDNA nucleoids (Malka et al., 2006) are also anchored to the same inner mitochondrial membrane but on the matrix side.Thus, the other possible mechanism through which OPA1 mis-sense mutations may lead to mtDNA instability is either the indirect interaction through changes in cristae morphology or the direct interaction of OPA1 protein through its N-terminal matrix-tail with mtDNA nucleoids and its possible role in stabilizing them.It is reasonable to predict that shifting the mitochondrial network organization towards a more fragmented conformation, as shown by our investigation on fibroblasts from four patients reported in the current and previous studies (Amati-Bonneau et al., 2005), may also imply changes in the cristae organization and stabilization of mtDNA nucleoids, as well as their actual amount.Comparatively to other tissues, the skeletal muscle mitochondrial network is differently organized and specific studies investigating the fission/fusion activity in this tissue are lacking.Muscle cells are a syncytium with mitochondria being intercalated among the myofibres and abundantly located subsarcolemmally, the site where mitochondria increase in numbers when compensatory proliferation occurs in mitochondrial disorders.Our electron microscopy images of muscle mitochondria show some of the classical changes previously reported in mitochondrial myopathies, but these are most probably due to the accumulation of mtDNA deletions and not primarily generated by the OPA1 mutations. 
The clinical presentation of our patients is similar to what is usually seen in other mitochondrial encephalomyopathies, and more specifically in syndromes related to mtDNA multiple deletions. These cases widen the spectrum of phenotypes associated with molecular defects in the OPA1 gene. The association of CPEO, mitochondrial myopathy with RRFs and COX negative fibres, cerebellar or spino-cerebellar involvement, and peripheral neuropathy has all previously been seen in patients with mutations in the POLG1 gene (Hudson and Chinnery, 2006). The only remarkable difference from these clinical phenotypes is the consistent presence of optic atrophy as the core clinical manifestation of any OPA1-related phenotype. We propose that screening of the OPA1 gene in families with dominantly inherited CPEO and optic atrophy is mandatory; only further screening of familial CPEO inherited as a mendelian trait without optic atrophy will provide evidence as to whether optic atrophy is a pathognomonic manifestation of OPA1-related disorders. It is of note that the only patient with a mutation lying outside the GTPase domain (V910D, Family 2) had the milder phenotype, characterized essentially by optic atrophy alone as in classic DOA, but with evidence of myopathy. The latter showed no clear morphologic signs of mitochondrial dysfunction at muscle histoenzymatic staining, such as RRFs or COX negative fibres, but was accompanied by pathologically increased lactic acid after exercise, mitochondria with paracrystalline inclusions at electron microscopy, and the lowest amount of mtDNA multiple deletions. Thus, contrary to the truncative mutations in the OPA1 gene predicted to lead to haploinsufficiency, which do not present a tight genotype-phenotype correlation, we suggest that with OPA1 mis-sense mutations the genotype may be associated with specific clinical phenotypes. However, it must be noted that a number of other mis-sense mutations in the OPA1 gene have been described and listed on the eOPA1 website (http://lbbma.univ-angers.fr) (Ferre et al., 2005), including in the exons building up the GTPase domain. For these latter mutations there are no reports of 'plus' features; the patients seem to be affected by classic DOA. Thus, it is reasonable to assume that mis-sense mutations in the GTPase domain do not necessarily lead to 'OPA1 plus' phenotypes, which may be strictly dependent on the amino acid location and/or change.

Overall, the involvement of OPA1 in mtDNA stability opens a wide and unexpected scenario, in which all the other proteins involved in the machinery of mitochondrial fission/fusion may be implicated (Chen and Chan, 2005). The field of human disorders related to molecular defects in such proteins is rapidly growing, with at least three other examples besides OPA1 mutations in DOA: mutations in the Mfn2 gene causing Charcot-Marie-Tooth (CMT) type 2A dominant peripheral neuropathy (Zuchner et al., 2004), mutations in the GDAP1 gene causing CMT4A (Niemann et al., 2005), and the most recently reported first mutation in the DLP1 gene associated with a lethal infantile neurological syndrome (Waterham et al., 2007), all of these disorders being frequently if not always associated with optic atrophy. A further confirmation that the fission/fusion machinery may be very relevant for both mtDNA maintenance and integrity comes from recent studies on Mfn1 and Mfn2, as well as OPA1 null cells, suggesting loss of mtDNA nucleoids and defective oxidative phosphorylation (Chen et al., 2007).
In conclusion, we report the novel implication of specific mis-sense mutations in the OPA1 gene in the maintenance of mtDNA integrity and their association with optic atrophy 'plus' syndromes, similar to classic mitochondrial encephalomyopathies. The mechanism leading to mtDNA multiple deletions is unclear and needs further investigation. However, the obvious consequence of this report is to consider OPA1 gene analysis in patients with unexplained mitochondrial diseases, in particular if optic atrophy is present and mtDNA multiple deletions are recognized in the skeletal muscle.

Neurological examination showed bilateral ophthalmoplegia and optic atrophy, severe deafness, pes cavus, hypopallesthesia at the lower limbs, weak deep tendon reflexes, a positive Romberg sign and ataxic gait. Laboratory investigations showed mild elevation of AST (45 U/l, normal value ≤38) and ALT (65 U/l, normal value ≤41). Serum lactic acid after standardized exercise was abnormally elevated (54.5 mg/dl, normal value ≤22). Muscle biopsy was positive for ragged red fibres (RRFs) and cytochrome c oxidase (COX) negative fibres (Fig. 2, panels A, B and C). Electron microscopy of skeletal muscle showed mitochondria with morphologically abnormal cristae and an accumulation of lipid droplets. Nerve conduction studies revealed a mild sensory-motor axonal neuropathy. Somatosensory evoked potentials (SEPs) showed absent cortical responses from the lower limbs and increased latencies from the upper limbs, suggestive of posterior column involvement. Motor evoked potentials (MEPs) were normal. Pattern visual evoked potentials (PVEPs) showed absent cortical responses bilaterally, whereas the electroretinogram was unremarkable. Brainstem auditory evoked potentials (BAEPs) showed absent responses on the left ear and increased latencies of the IV and V responses with absence of the II and III responses on the right ear. Audiometric examination showed a severe bilateral sensorineural hearing loss. Brain MRI showed variable degrees of atrophy affecting the cerebral cortex, brainstem and cerebellum (Fig. 3, panel A). Bilateral hypointensity of the basal ganglia was detected on the gradient echo MRI scan, which on CT scan was compatible with bilateral calcifications (Fig. 3, panels B and C). The electrocardiogram (EKG) was normal.

Fig. 1 Genealogical trees of the families investigated. Arrows indicate the probands of each pedigree, for which the geographic origin and the OPA1 mis-sense point mutation, exon, and amino acid change are also provided. Asterisks indicate the individuals for which a clinical history has been provided in the text. With the exception of Family 3, which is a sporadic case with a de novo mutation, all the other families showed a pattern compatible with autosomal-dominant inheritance.

Fig. 2 Muscle histopathology (Gomori modified trichrome, COX/SDH and SDH stain). (A), (B) and (C) refer to the proband of Family 1. In panel A two fibres displaying increased eosinophilic material with subsarcolemmal distribution, which resemble RRFs, are shown (asterisks). In panel B, at the double COX/SDH stain, some COX-deficient fibres are recognized by the prevalent violet SDH stain (arrows), and one hyperintense SDH fibre is also shown (asterisk). In panel C, a section serial to the previous one in panel B shows numerous fibres with increased SDH stain, in particular in the subsarcolemmal region (arrows). (D), (E) and (F) refer to the proband of Family 2.
In panel D a hypertrophic fibre with numerous centralized nuclei is shown (arrows), whereas this patient did not present RRFs. Panels E and F also show the great variability of fibre size, but no clear COX-deficient or hyperintense SDH fibres were present. However, a prevalent SDH stain was frequent in some fibres at the COX/SDH double stain, and a patchy increase of the SDH-only stain was evident in a few fibres. (G), (H) and (I) refer to the proband of Family 3. In panel G a typical RRF is shown (asterisk). In panel H frequent COX-deficient fibres are seen (arrows), and in panel I increased subsarcolemmal SDH staining is present in numerous fibres (arrows).

Fig. 3 Brain MRI and CT scan. (A), (B) and (C) refer to the proband from Family 1. In panel A a mid-sagittal T1-weighted brain MRI scan shows variable degrees of atrophy affecting the cerebral cortex, brainstem and cerebellum. In panel B the axial gradient echo MRI scan shows bilateral hypointensity within the globi pallidi (arrows), which is detected as depositions of calcium in the CT scan (arrows) shown in panel C. (D), (E) and (F) refer to the proband from Family 5. In panel D a mid-sagittal T1-weighted brain MRI scan shows a thin corpus callosum as well as brainstem and cerebellar atrophy. In panel E the axial T2-weighted scan shows bilateral hypointensities within the globi pallidi, which are also detected in the coronal scan (arrows) shown in panel F.

Fig. 4 Ultrastructure of skeletal muscle. At electron microscopy, a collection of aberrant subsarcolemmal mitochondria with 'parking lot'-like paracrystalline inclusions is recognizable in the muscle biopsy from the proband of Family 6.

Fig. 5 Molecular investigation. (A) Southern blot analysis (proband from Family 4). Lanes 1 and 2 are muscle DNAs from two age-matched healthy controls, which show a single band corresponding to wild-type mtDNA. Lane 3 is muscle DNA from the proband of Family 4, showing multiple bands: the wild-type mtDNA and at least two other lighter bands of very low intensity corresponding to deleted mtDNA molecules. Lane 4 is muscle DNA from a patient with a single large-scale mtDNA deletion. Arrows indicate deleted mtDNA molecules. (B) Long PCR (proband from Family 4). This panel shows an agarose electrophoresis separation of the wild-type mtDNA long-PCR amplified fragment of 5420 bp on the left, and the fragment of 7315 bp on the right (see 'Methods' section for details). Lane 1 shows wild-type and deleted molecules of muscle mtDNA from a patient known to carry mtDNA multiple deletions as detected by Southern blot (positive control); Lane 2 shows a single wild-type mtDNA band amplified from muscle DNA of a healthy individual (negative control); Lane 3 shows a single wild-type mtDNA band amplified from fibroblast DNA of the proband from Family 4; Lane 4 shows wild-type and deleted molecules in the muscle mtDNA of the proband from Family 4.
(C) Long PCR (probands from all other families). This panel shows an agarose electrophoresis separation of the wild-type mtDNA long-PCR amplified fragment of 11,335 bp on the left, and the fragment of 11,845 bp on the right (see 'Methods' section for details). Variably abundant extra bands due to deleted mtDNA molecules from the muscle DNA of all probands except the one from Family 4 are present. Lane 1 is the proband from Family 2; Lane 2 is the proband from Family 1; Lane 3 is the proband from Family 5; Lane 4 is the proband from Family 3; Lane 5 is the proband from Family 6 in both the left and right electrophoreses; in the right electrophoresis, Lane 6 is a positive control (a CPEO patient previously diagnosed with mtDNA multiple deletions) and Lane 7 is a negative control. The molecular weight marker is Marker X (Roche), and the size of some reference bands is indicated. The presence of mtDNA deletions in all these probands has been further confirmed using the other set of primers described in the 'Methods' section (not shown).

Fig. 6 mtDNA junction points. The deletion junctions of three different mtDNA deletions (4.9 kbp, or common deletion, in the proband from Family 1; 8.1 kbp in the proband from Family 2; 7.6 kbp in the proband from Family 5) amplified from muscle DNA and directly sequenced are shown. The arrows on the sequence pherograms delimit the repeats located at the boundaries of each mtDNA deletion, and the corresponding nucleotide positions are also indicated.
2014-10-01T00:00:00.000Z
2008-02-01T00:00:00.000
{ "year": 2008, "sha1": "564086bb96d00eede2ac2482c5f89cd1809dd295", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/brain/article-pdf/131/2/338/1127951/awm298.pdf", "oa_status": "HYBRID", "pdf_src": "CiteSeerX", "pdf_hash": "564086bb96d00eede2ac2482c5f89cd1809dd295", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
10772932
pes2o/s2orc
v3-fos-license
An Exercise Health Simulation Method Based on an Integrated Human Thermophysiological Model

Research on healthy exercise has garnered keen interest in the past few years. It is known that participation in a regular exercise program can help improve various aspects of cardiovascular function and reduce the risk of illness. However, exercise accidents such as dehydration, exertional heatstroke, and even sudden death need to be brought to attention. If these exercise accidents can be analyzed and predicted before they happen, it will be beneficial in alleviating or avoiding disease or mortality. To achieve this objective, an exercise health simulation approach is proposed, in which an integrated human thermophysiological model consisting of a human thermal regulation model and a nonlinear heart rate regulation model is reported. The human thermoregulatory mechanism as well as the heart rate response mechanism during exercise can be simulated. On the basis of the simulated physiological indicators, a fuzzy finite state machine is constructed to obtain the possible health transition sequence and predict the exercise health status. The experimental results show that our integrated exercise thermophysiological model can numerically simulate the thermal and physiological processes of the human body during exercise, and that the exercise health transition sequence predicted by the finite state machine can be used in healthcare.

Introduction

There is evidence that healthy exercise can minimize the physiological effects of an otherwise sedentary lifestyle and increase active life expectancy by limiting the development and progression of chronic disease and disabling conditions [1]. Research on healthy exercise is important and has been a focus for the past few years. During exercise, the human body exchanges energy with the clothing system and environmental conditions through different forms of heat transfer; a coupled thermoregulatory system is determined by the Human-Clothing-Environment (HCE) interaction [2,3]. In particular, the thermoregulatory responses of the body and the sensory responses of skin nerve endings follow the laws of physiology [4]. Active tissues produce additional metabolic heat, which must be intricately offset by heat loss to the environment [2,4]. The core temperature increases, and several physiological reactions in the internal temperature regulating system are automatically activated to accelerate body heat dissipation, including sweating by stimulation of the sweat glands and automatic adjustment of the cardiovascular system [5]. During cardiovascular adjustment, blood is redistributed from the core organs to the skin to facilitate heat dissipation, and the active muscles require blood supply to deliver oxygen for the maintenance of activity. The heart rate increases correspondingly to sustain cardiac output and blood supply to the working muscles and the skin [6]. With the dynamic changes of physiological indicators during exercise, many health phenomena such as thirst, breathing disorders, and dizziness can appear. Without effective preventive measures adopted in time, health accidents (dehydration, exertional heatstroke, syncope, and even sudden death) may happen [4].
At the Standard Chartered Hong Kong Marathon 2013, 55 runners were reported to have fallen unconscious, been rendered comatose, or suffered collapse because of heatstroke; more than 100 athletes have died from excessive heat stress because of exertional heat stroke during competitions in the last 20 years [6]. If these health accidents can be analyzed or predicted before they happen, it will be beneficial in alleviating or avoiding disease and mortality [7]. Hence, research on exercise physiological performance is significant for health monitoring, analysis, and accident precaution.

Some technologies have been used to obtain the body's physiological performance and predict health states. Wearable health monitoring systems (WHMS) usually exploit advanced sensing technology to obtain instantaneous physiological values and then process these values for real-time health judgment and risk prediction [8,9]. For example, a large variety of laboratory prototypes, test beds, and industrial products of WHMS [10,11] have already been produced. The Nike+ FuelBand is an activity tracker worn on the wrist to track wearers' physical activity, heart rate, and amount of energy burned [12]. The MyHeart project [13] and the SmartVest project [11] are smart clothes, where the sensing modules are either garment-integrated or simply embedded in the piece of clothing. All of them require participants to wear various devices at all times to collect continuous physiological data. They are costly and sometimes inconvenient for daily exercise. Data mining (DM) methods take advantage of historical exercise data and personal health data to assess or predict health status [14]. Various data mining methods have been adopted to process physiological information and predict human health status. Li and Clifford applied a multilayer perceptron neural network to estimate the quality of the pulses in PPG [15]. Pantelopoulos and Bourbakis presented a health prognosis methodology based on fuzzy regular languages [16]. Calderon and de Brito introduced data mining models, such as decision trees, k-nearest neighbors (kNN), and support vector machines (SVM), for analyzing electrocardiograms (ECG) in order to identify heart attack and the probability of incidence [17]. However, if there is not enough historical physiological data for a participant to analyze, prediction accuracy may be a big problem.

Computer simulation modeling in exercise healthcare is an attractive proposition. Obtaining a mathematical model that describes the human physiological regulation mechanisms can improve our understanding of exercise physiology and is helpful for the prediction of health accidents during exercise. Significant research results have been developed around human thermal behavior simulation as well as human heart rate response simulation. As reviewed by Cheng et al. [18,19], all models of the human body can be characterized in terms of their viewpoints of development. They are (1) the one-node model [20], (2) the two-node model [21], (3) the multinode model [22-24], and (4) the multielement model [25,26]. All these models can simulate the thermal performance of the human body, while their mechanisms such as heat conduction, sweating, vasoconstriction, and vasodilatation can be implemented from simple to complex. In the one-node model, the human body is regarded as a single node, and it is only applicable to simple thermal environments.
In the two-node model, the human body is divided into core and skin; basic thermoregulation mechanisms such as heat conduction, sweating, vasoconstriction, and vasodilatation can be simulated. This model is easy to understand and implement. In the multinode and multielement models, the division of the human body is customized to the requirements of researchers. In these two models, a series of complex mathematical equations is used to describe further physiological mechanisms (e.g., the blood perfusion phenomenon and the negative feedback control process). These two models require complex simulation settings and higher computational capacity, and they can provide local physiological performance.

Physiological models of the cardiovascular system in the human body have increasingly received attention in recent years. Cheng et al. proposed a series of nonlinear heart rate models to simulate the heart rate regulation process during exercise [27,28]. Ataee et al. developed a low-order lumped-parameter model to describe autonomic-cardiac regulation behaviors [29]. Buller et al. presented a quadratic regression model to implement heart rate regulation by controlling the human core temperature [30,31].

From the literature review performed above, we have found that the fundamental knowledge of human thermoregulation mechanisms has been established. Several physiological indexes can be numerically computed by mathematical models. However, the existing models have different emphases; thermal performance and human physiological performance are simulated individually. The relationship between these performances has not been established in the existing work. Besides, some problems, such as what method can be used to predict the exercise health status and how to alleviate or avoid exercise accidents before they happen, remain unresolved. Therefore, it is important to develop a comprehensive simulation model integrating various human regulation mechanisms to obtain human thermal performance and physiological performance during exercise. Further, based on these exercise simulation results, research on healthy exercise can be conducted.

In this study, we propose an integrated human exercise physiological model, in which a two-node human thermal physiological model and a nonlinear heart rate response model are coupled together to simulate the human physiological regulatory mechanism; a series of thermal and physiological performances can be computed according to the numerical computation model. Both the human thermal sensation (temperature of the skin, relative humidity of the skin) and the physiological status (core temperature, sweat rate, skin blood flow, and heart rate) are obtained. They are important in understanding, analyzing, predicting, and preventing health problems (accidents, disease, etc.) during exercise. Then, a fuzzy logic method is employed to use our simulated results for exercise health prediction. Specifically, a special fuzzy finite state machine is defined to describe the health state transitions in the exercise process. Finally, two different cases are designed to evaluate the proposed approach. Compared with existing approaches, our approach can be used to predict the health status before the exercise starts. Further, the research results can be used in healthcare services, which may also be beneficial in predicting and reducing cardiovascular disease mortality [32].
This may also lead to improvements in developing training protocols for athletes and more efficient weight loss protocols for the obese, and in facilitating the evaluation of the physical fitness and health of individuals [33]. To clarify, we noted the importance of computer simulation techniques in the study of human sports and proposed a method to assess human exercise comfort in 2016 [34]. Different from the work in this paper, the previous one employs a human physiological model to obtain physiological indicators and defines a set of fuzzy rules to measure human comfort, while this work applies the obtained physiological indicators as input to a more sophisticated fuzzy finite state machine and then quantifies the human exercise health status.

Figure 1 shows the flow chart of the exercise health simulation method, in which the various parameters on the left side are input and the predicted health state list is output. From Figure 1 we can see that the important issues are exercise thermophysiological modeling and exercise health prediction. In exercise thermophysiological modeling, two important simulation models must be considered: the human thermal regulation model and the heart rate regulation model. Using the simulated results, an exercise health prediction model is developed, which the exerciser can use to assess the health effects of a planned exercise.

An Integrated Exercise Thermophysiological Simulation Model. During exercise, active tissues in the body produce a large amount of heat; this breaks the body's thermal balance and affects human physiological performance. Hence, the human body's thermophysiological regulation mechanisms, which speed up body heat dissipation, are activated to keep the body in a proper thermal state. Such regulation mechanisms mainly include sweating through stimulation of the sweat glands and automatic adjustment of the cardiovascular system. Modeling these regulation mechanisms (especially the thermoregulatory mechanism and the heart rate regulation mechanism) is significant. The literature review shows that the heat and moisture performance of the human body can be simulated by mathematical models. Basic thermoregulation data such as heat conduction, sweating, vasoconstriction, and vasodilatation, as well as physiological indicators such as human core temperature, human skin temperature, sweat rate, and so on, can be simulated. On the other hand, individual physiological regulation models have been presented. In this paper, a nonlinear heart rate regulation model is integrated into the human heat and moisture transfer model to simulate exercise thermophysiological performance. Compared with a purely thermal simulation model, the main characteristic of the integrated simulation model is its focus on human exercise thermophysiological properties; in particular, the human heart rate during exercise can be simulated.

Two-Node Thermal Regulation Model. Considering the complexity and efficiency of numerical simulation, a two-node thermal regulation model is used to simulate the thermal behaviors and represent the thermoregulatory mechanisms of the human body [21]. Core temperature and dehydration amount are the main parameters used in the exercise health prediction model. A set of mathematical equations is used to calculate these two parameters.
The heat balance equation of the two-node thermal regulatory model per unit skin area is as follows:

$$S = M - W - R - C - E, \qquad E = E_{res} + E_{diff} + E_{rsw},$$

where $S$ is the rate of heat storage, $M$ is the rate of metabolic heat production, $W$ is the heat loss through the exercise work accomplished, $R$ is the heat gained or lost by radiation, $C$ is the heat gained or lost by convection, and $E$ is the total evaporative heat loss, which includes the heat of moisture vaporized from the lungs during respiration ($E_{res}$), the heat of water vapor diffusing through the skin layer ($E_{diff}$), and the heat of vaporized sweat necessary for the regulation of body temperature ($E_{rsw}$). It should be noted that there is a positive correlation between $M$ and exercise intensity. Therefore, when exercise intensity increases, the rate of metabolic heat production increases. In detail, the storage terms can be split between the two nodes as follows:

$$S_{cr} = M - W - E_{res} - (K_{min} + c_{bl}\,\dot{m}_{bl})(T_{cr} - T_{sk}),$$
$$S_{sk} = (K_{min} + c_{bl}\,\dot{m}_{bl})(T_{cr} - T_{sk}) - R - C - E_{diff} - E_{rsw},$$

where $S_{sk}$ is the rate of heat storage in the skin, $S_{cr}$ is the rate of heat storage in the core, $T_{sk}$ is the skin temperature, $T_{cr}$ is the core temperature, $K_{min}$ is the minimum heat conductance of the skin tissue, $c_{bl}$ is the specific heat of blood, and $\dot{m}_{bl}$ is the rate of skin blood flow. As the heat storage changes, the values of the skin and core temperatures at any simulation time can be calculated as follows:

$$\dot{T}_{sk} = \frac{S_{sk}\,A_D}{m_{sk}\,c_{sk}}, \qquad \dot{T}_{cr} = \frac{S_{cr}\,A_D}{m_{cr}\,c_{cr}}, \qquad T_{sk}(t) = T_{sk,ini} + \int_0^t \dot{T}_{sk}\,d\tau, \qquad T_{cr}(t) = T_{cr,ini} + \int_0^t \dot{T}_{cr}\,d\tau,$$

where $\dot{T}_{sk}$ is the skin temperature change rate, $\dot{T}_{cr}$ is the core temperature change rate, $T_{sk,ini}$ is the initial temperature of the skin, $T_{cr,ini}$ is the initial temperature of the core, $m_{cr}$ is the core mass, $c_{cr}$ is the core specific heat capacity ($m_{sk}$ and $c_{sk}$ are defined analogously for the skin node), and $A_D$ is the body surface area, a function of body height and weight proposed by Schlich et al. [35].

Sweating is usually caused by temperature stimuli from both the skin and the core. An effective sweating mechanism can remove the additional heat and help the human body work well during exercise. The sweat rate $\dot{m}_{rsw}$ is used to measure the performance of the sweating mechanism in our model, and it is written as follows:

$$\dot{m}_{rsw} = c_{sw}\left[(T_{cr} - T_{cr,ini}) + \alpha\,(T_{sk} - T_{sk,ini})\right],$$

where $T_{sk,ini}$ and $T_{cr,ini}$ are the initial values of $T_{sk}$ and $T_{cr}$, $T_{sk} - T_{sk,ini}$ and $T_{cr} - T_{cr,ini}$ can be seen as the temperature control signals (they are responsible for the thermoregulatory control actions), $\alpha$ is the coefficient of the additional sweat amount during activities, and $c_{sw}$ is the coefficient of the sweating rate model. The sweating accumulation

$$DA = \int_0^t \dot{m}_{rsw}\,d\tau$$

can be used to diagnose whether the body is dehydrated or not; it is defined as the dehydration amount (DA) in this paper.
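To make the numerical treatment concrete, the following is a minimal sketch of one explicit Euler step of the two-node balance as reconstructed above. All numeric parameter values (conductance, heat capacities, sweating coefficients), the DuBois-style surface-area stand-in, and the helper names are illustrative assumptions, not the paper's calibrated constants.

```python
# Minimal Euler-step sketch of the two-node model; all numbers are assumed.
K_MIN = 5.28       # minimum skin tissue conductance, W/(m^2 K) (assumed)
C_BL = 4186.0      # specific heat of blood, J/(kg K) (assumed)
C_SW = 4.7e-5      # sweating-rate coefficient, kg/(m^2 s K) (assumed)
ALPHA = 0.1        # weight of the skin control signal (assumed)
H_VAP = 2.43e6     # latent heat of vaporisation of sweat, J/kg

def body_surface_area(height_cm, weight_kg):
    """DuBois power law, used here as a stand-in for the Schlich formula [35]."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

def two_node_step(state, heat, m_bl, body, dt):
    """Advance T_sk, T_cr and the dehydration amount DA by one step of dt seconds.

    state: dict with T_sk, T_cr (deg C) and DA (kg/m^2 of accumulated sweat)
    heat:  dict with per-area terms M, W, R, C, E_res, E_diff (W/m^2)
    m_bl:  skin blood flow, kg/(m^2 s)
    body:  dict with A_D (m^2), m_sk, m_cr (kg), c_sk, c_cr (J/(kg K)),
           T_sk_ini, T_cr_ini (deg C)
    """
    # Core-to-skin heat transport through tissue conductance and blood flow
    q_cr_sk = (K_MIN + C_BL * m_bl) * (state["T_cr"] - state["T_sk"])

    # Regulatory sweating driven by the two temperature control signals
    m_rsw = C_SW * max(0.0, (state["T_cr"] - body["T_cr_ini"])
                            + ALPHA * (state["T_sk"] - body["T_sk_ini"]))
    e_rsw = H_VAP * m_rsw                       # evaporative heat of sweat, W/m^2

    # Storage rates for the two nodes (W/m^2)
    s_cr = heat["M"] - heat["W"] - heat["E_res"] - q_cr_sk
    s_sk = q_cr_sk - heat["R"] - heat["C"] - heat["E_diff"] - e_rsw

    # Convert per-area storage to temperature change and integrate
    state["T_cr"] += s_cr * body["A_D"] / (body["m_cr"] * body["c_cr"]) * dt
    state["T_sk"] += s_sk * body["A_D"] / (body["m_sk"] * body["c_sk"]) * dt
    state["DA"] += m_rsw * dt                   # accumulated sweat loss
    return state
```

Iterating this step at a fixed time step yields the core temperature and dehydration amount trajectories that later feed the health prediction stage.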
Heart Rate Regulation Model. Heart rate regulation behaviors are important in maintaining a physiological balance during exercise. During exercise, a large amount of blood is required to facilitate heat dissipation and deliver oxygen to the muscles. The human heart rate increases to sustain cardiac output and blood supply to the working muscles and the skin. A nonlinear heart rate regulation model aiming to simulate heart rate behaviors and represent the heart rate regulation mechanisms of the body can be introduced [27]. In this model, the neuroregulation mechanism reflects well the dramatic change of heart rate, especially in strenuous exercise. The thermal regulation mechanism, combined with some other mechanisms, is usually utilized to describe the slow-acting effects on HR. The mathematical equations of the nonlinear heart rate regulation model are presented as follows:

$$\dot{x}_1(t) = -a_1 x_1(t) + a_2 x_2(t) + a_2 u^2(t), \qquad \dot{x}_2(t) = -a_3 x_2(t) + \Phi(x_1(t)),$$
$$\Phi(x_1(t)) = \frac{a_4 x_1(t)}{1 + e^{-(x_1(t) - a_5)}}, \qquad HR(t) = a_6\,x_1(t) + HR_{rest},$$

where $x_1(t)$ describes the change of HR mainly due to the neural response to exercise (the effects comprise the sympathetic and parasympathetic systems), $x_2(t)$ describes the change of HR due to the peripheral effects comprising the human thermoregulation system, the hormonal system, and other physiological phenomena, $u$ is the exercise intensity, which directly affects the human metabolic rate, $a_i$ ($i = 1, \ldots, 6$) are positive parameters that depend on the specific individual performing the various exercises, $HR_{rest}$ is the heart rate at rest, with a default value of 74, and $HR$ is the output we need.

Exercise Health Prediction. Applying the various indicators obtained from the proposed thermophysiological model to predict potential exercise health risks is a worthwhile approach. Among the various simulated physiological indicators, core temperature, dehydration amount, and heart rate are the most important ones in diagnosing exercise symptoms, and they are chosen as the health prediction variables.

Fuzzification. Instead of characterizing the simulated physiological indicators in a crisp manner, we can employ fuzzy logic [36] to describe the degree of occurrence of a certain indicator. In particular, the trapezoidal function in fuzzy logic is selected to define the membership function of every input indicator. With the guidance of medical experts, severity intervals for health symptoms are divided and the corresponding fuzzy symptoms are obtained. Figure 2 shows the fuzzy symptoms extracted from the simulated indicators. In the heart rate membership function in particular, THR is the target heart rate, which indicates the recommended optimal heart rate [37], and MHR is the maximum heart rate that the human body can tolerate [38]. These two thresholds are directly related to the participant's age and exercise intensity; their values are calculated as follows:

$$MHR = 163 + 1.16 \cdot age - 0.018 \cdot age^2, \qquad THR = (MHR - HR_{rest}) \cdot EIP + HR_{rest},$$

where $HR_{rest}$ is the resting heart rate (its default value is 74) and $EIP$ is the exercise intensity percentage.

As shown in Figure 2, concerning the core temperature, human health is commonly classified into three states: hypothermia, normothermia, and hyperthermia. Hypothermia usually shows the symptom of low temperature (lt). Normothermia shows the symptom of normal temperature (nt). The symptoms of hyperthermia include slightly high temperature (sht), moderately high temperature (mht), and high temperature (ht). The dehydration amount is classified into two states: normal and dehydration. The normal state shows the symptom of non-dehydration (nd). Dehydration shows three symptoms, namely mild dehydration (mid), moderate dehydration (mod), and severe dehydration (sd). The heart rate is also classified into three states: bradycardia, normal, and tachycardia. Each state corresponds to one symptom, namely low heart rate (lhr), normal heart rate (nhr), and high heart rate (hhr), respectively.

Finite State Machine Definition. Once the fuzzy symptoms are generated, we need to predict the health transition states based on the obtained fuzzy data. Traditional rule-based health judgment methods are widely used to calculate the health state at discrete times [39], but the degree of health and the health tendency during the whole process remain unknown. Therefore, a finite state machine (FSM) [40] is introduced and applied to exercise health prediction.
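As a sketch of how these pieces could be wired together, the snippet below integrates the heart rate state-space model with forward Euler and evaluates a trapezoidal membership function. The state equations above are themselves a reconstruction, so their exact form, the parameter values a1-a6, and the membership breakpoints in the example are illustrative assumptions; only the MHR and THR formulas are taken from the text.

```python
import math

def hr_model_step(x1, x2, u, a, dt):
    """One Euler step of the nonlinear HR model (reconstructed, assumed form)."""
    phi = a[3] * x1 / (1.0 + math.exp(-(x1 - a[4])))   # Phi(x1), slow dynamics
    dx1 = -a[0] * x1 + a[1] * x2 + a[1] * u ** 2       # fast neural component
    dx2 = -a[2] * x2 + phi                             # peripheral component
    return x1 + dx1 * dt, x2 + dx2 * dt

def heart_rate(x1, a6, hr_rest=74.0):
    """Model output: HR(t) = a6 * x1(t) + HR_rest."""
    return a6 * x1 + hr_rest

def max_heart_rate(age):
    """MHR formula quoted in the text [38]."""
    return 163.0 + 1.16 * age - 0.018 * age ** 2

def target_heart_rate(age, eip, hr_rest=74.0):
    """Karvonen-style THR from the stated variables [37]."""
    return (max_heart_rate(age) - hr_rest) * eip + hr_rest

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Example: 30 minutes at constant intensity u = 0.6 with dt = 1 s, then the
# DOM of 'slightly high temperature' (sht) for a core temperature of 37.7 C.
a = [0.05, 0.03, 0.02, 1.0, 10.0]      # a1..a5, illustrative values only
x1 = x2 = 0.0
for _ in range(1800):
    x1, x2 = hr_model_step(x1, x2, 0.6, a, 1.0)
hr_now = heart_rate(x1, a6=4.0)        # a6 = 4.0 is an assumed gain
dom_sht = trapezoid(37.7, 37.2, 37.5, 38.0, 38.3)   # -> 1.0 (breakpoints assumed)
```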
A finite state machine is useful in situations where behavior is driven by many different types of events and the response to a particular event depends on the sequence of previous events. In this case, the changes of CT, DA, and HR can be used as trigger events, and a specific finite state machine is defined to simulate the health transition sequence during exercise. The FSM is represented as a 4-tuple whose components include the following:

(1) Σ denotes the set of all possible health symptoms extracted from the simulated physiological data. The total number of symptoms in the current FSM is 12 (5 + 4 + 3). All the symptoms and their corresponding notations are listed in Table 1. Each symptom has a degree of membership (DOM) $0 \le \mu(i, j, v) \le 1$, which denotes the certainty or strength of the corresponding symptom, where $i$ belongs to the set of all indicators simulated by our physiological model, $j$ belongs to the set of all symptoms that can be extracted from the $i$-th indicator, and $v$ is the simulated value. For example, $\mu(1, 3, 37.7)$ means that the current core temperature is 37.7°C and the symptom of the core temperature is slightly high temperature; the corresponding membership degree $\mu(1, 3, 37.7)$ is 1.

(2) S denotes the set of all possible health states. These states signify the various possible combinations of the health symptoms presented in Σ. The total number of health states in the current FSM is 18 (3 × 2 × 3). These health states and their possible syndromes [41,42] are summarized in Table 2, where the first letter signifies the state of the core temperature, the second letter the percentage of the dehydration amount, and the third letter the heart rate. The state NNN is usually regarded as the beginning state; any state can be regarded as the final state when the exercise ends.

(3) W denotes the weighting function. It associates a weight with every transition rule in the FSM and represents the causal associations between symptoms and unhealthy/healthy states. This function is commonly based on medical knowledge [41] and is helpful in determining the occurrence of a health state. In the current FSM, all transition weights are set equal to 1; for example, every transition triggered by an HR sign carries weight 1.

The defined fuzzy finite state machine is depicted in Figure 3, which graphically shows all possible transition paths, health judgment rules, health symptoms, and states.

Health State Transition Metrics. In order to derive the health state transition sequence during exercise and assess the degree of health, it is necessary to calculate the state transition probabilities as well as the state probabilities [16]. For each input fuzzy symptom $\sigma$, its state transition probability at every time step is given by

$$P_T(\sigma) = \mu(\sigma) \cdot w, \qquad \sigma \in \Sigma_i,$$

where $\Sigma_i$ denotes the symptom set of the $i$-th indicator (CT, DA, or HR), $\mu(\sigma)$ signifies the DOM of $\sigma$, obtained from the membership functions in Figure 2, and $w$ denotes the transition weight between the current state and the state we are transitioning to. The equation means that when a new symptom is acquired, we look for the most plausible transition state, i.e. the transition with the highest $P_T$. In general, the initial state probabilities of the health variables as well as the initial transition probabilities are assumed to be 1.0.

Figure 3: Fuzzy finite state machine (FSM) used in health prediction.
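A compact reading of this transition metric is sketched below: for each indicator, the new symptom's weighted DOM picks the most plausible target state. Treating all weights as 1 follows the text; the dictionary layout, symptom labels, and example numbers are illustrative assumptions.

```python
def transition_probability(doms, weights=None):
    """Pick the most plausible transition for one indicator.

    doms:    {symptom: degree_of_membership} over the indicator's symptom set
    weights: optional {symptom: transition_weight}; all equal to 1 in the
             current FSM, so it may be omitted
    Returns (best_symptom, P_T).
    """
    weights = weights or {}
    best = max(doms, key=lambda s: doms[s] * weights.get(s, 1.0))
    return best, doms[best] * weights.get(best, 1.0)

# Example: a fuzzified core temperature straddling 'nt' and 'sht'
symptom, p_t = transition_probability({"nt": 0.25, "sht": 0.75})
# -> ('sht', 0.75): the CT letter of the health state moves from N to F
```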
The state probability $P_i(t)$ of the $i$-th indicator at discrete time $t$ is then updated as follows: when the current state is unchanged, $P_i(t)$ is calculated as the average of the previous state probability and the newly computed transition probability; when the current state changes to a new state, the complement of the previous state probability is averaged with the transition probability instead:

$$P_i(t) = \begin{cases} \dfrac{P_i(t-1) + P_T}{2}, & \text{state unchanged}, \\ \dfrac{\left(1 - P_i(t-1)\right) + P_T}{2}, & \text{state changed}. \end{cases}$$

In order to evaluate the whole health status under the three input health indicators, an overall probability $P_{overall}(t)$ at discrete time $t$ is deduced for the current health state:

$$P_{overall}(t) = \frac{1}{n+m}\left[\sum_{i=1}^{n} P_i(t) + \sum_{j=1}^{m}\left(1 - P_j(t-1)\right)\right],$$

where $n$ is the number of indicators that did not change, $m$ is the number of indicators that did change, and $P_j(t-1)$ is the state probability of the $j$-th indicator at time $t-1$.

Thermophysiological Model Validation. To validate the integrated thermophysiological simulation model, five adult male subjects were selected to exercise in two different scenes [6]. On average, the subjects were 21.7 years old, 176.8 cm tall, and weighed 72.2 kg. The detailed settings of the two exercise scenes are shown in Table 3. Figure 4 shows the comparison curves of the core temperature in measurement and simulation in the walking and running scenes. The pink dotted line represents the measured values and the blue line the simulated values. The range of the error bars is ±0.3°C. It can be seen that the simulated core temperature curves in both scenes are in good agreement with the experimental ones, and the errors between the simulated and measured values are acceptable [43]. The weight loss before and after exercise is usually considered to be the amount of sweating. As weight loss is easy to measure, we adopt it to validate the effectiveness of the sweating mechanism in our thermophysiological model. Table 4 lists the measured average weight loss and the simulated water loss for the two scenes. It can be seen that the simulated values are slightly below the experimental values, and the dehydration percentages, which were used to determine whether there was dehydration or not in measurement and simulation, are very close. Figure 5 shows the measured and simulated heart rate during walking and running, respectively. The purple dotted line represents the measured values and the blue line the simulated values. The range of the error bars is ±10 bpm. In Figure 5, the heart rate values in measurement and simulation increase sharply in the first few minutes and then slowly. The errors between the simulated and measured values are within 10 bpm, which is acceptable [27]. Through the comparison analysis in the model validation experiments, it can be concluded that the integrated thermophysiological model can well simulate the physiological mechanisms as well as the dynamic changes of the body's physiological indicators under different ambient conditions and exercise intensities. That is, our integrated thermophysiological model is effective, and it is feasible to apply this model to exercise health prediction.

Exercise Health Prediction Cases. After validation of the human thermophysiological model, two exercise health prediction cases with different subjects, clothes, external environments, and exercise intensities are designed in Table 5. Figure 6 shows the simulation results of the thermophysiological model. In Figure 6(a), the core temperature increases rapidly in the first twenty minutes, and then it remains at approximately 38.8°C. At the same time, a large amount of sweat is secreted; the changes of sweat accumulation (dehydration amount) are shown in Figure 6(b). Figure 6(c) shows the change curve of the simulated heart rate.
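The state sequences and probabilities discussed next (Tables 6 and 7) follow from the update rules of equations (9) and (10) reconstructed above. A minimal sketch of that bookkeeping, with hypothetical data, is given below.

```python
def update_state_probability(p_prev, p_trans, changed):
    """Equation (9): average with the previous probability, or with its
    complement when the indicator's state switches."""
    base = (1.0 - p_prev) if changed else p_prev
    return (base + p_trans) / 2.0

def overall_probability(indicators):
    """Equation (10): combine the current probabilities of unchanged
    indicators with the complements of the previous probabilities of
    those that changed.

    indicators: list of dicts with keys p_now, p_prev and changed (bool)
    """
    total = sum(ind["p_now"] if not ind["changed"] else 1.0 - ind["p_prev"]
                for ind in indicators)
    return total / len(indicators)

# Example: CT just switched (N -> F) while DA and HR stayed put
inds = [
    {"p_now": 0.68, "p_prev": 0.40, "changed": True},   # core temperature
    {"p_now": 0.95, "p_prev": 0.95, "changed": False},  # dehydration amount
    {"p_now": 0.90, "p_prev": 0.90, "changed": False},  # heart rate
]
print(round(overall_probability(inds), 2))  # -> 0.82
```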
By analyzing the simulated indicator values, a series of fuzzy symptoms along with their probabilities is extracted, and the assessed health states are listed in Table 6. In Table 6, the values following the states are the probabilities of the current state. When a new set of fuzzy symptoms is extracted, the current state probabilities are updated by equation (10). The greater the probability value, the greater the likelihood that the subject is in the current state. For example, from the 6th minute to the 10th minute, the probability of FNN increases from 0.68 to 0.98. That is to say, while the core temperature increases and reaches slightly high temperature, the body health state of subject A is FNN with high probability. As listed in Table 6, the user is initially in state NNN with a corresponding probability of 1. The end state is FDT with a corresponding probability of 0.89. In the 5th minute of the simulation, the fuzzy symptom of the core temperature changes from nt to sht and the health state changes from NNN to FNN; at this time, subject A's core temperature is higher than the normal body temperature. As the core temperature continues to increase, the fuzzy symptom of the core temperature changes to mht and lasts until the end of the simulation. With the heart rate increasing during running, the fuzzy symptom of HR changes from nhr to hhr at the 49th minute. The health state correspondingly changes from FNN to FNT. Besides, the subject becomes dehydrated at the 114th minute: the fuzzy symptom of DA changes from nd to mid and the health state from FNT to FDT. It is known that a high body temperature sustained for a long time is harmful to human organs and physiological functions. Symptoms like dehydration and heatstroke usually appear at the same time. Hence, when the current state is FNN, and especially when the fuzzy symptom of CT is mht, a health warning should be given to users and the heat dissipation of the body should be enhanced. When the fuzzy symptom of DA is mid, people must drink more water to stay hydrated and remain in a good physiological condition. Moreover, prolonged tachycardia can also cause poor physical fitness. The sports plan should be adjusted when the hhr symptom of HR arises. In short, during the whole simulation process, the body goes through four states: NNN, FNN, FNT, and FDT. This health state tendency agrees with the real physiological changes. Based on the simulated results, we can take reasonable actions to avoid potential health risks.

The tendency curves of the core temperature, dehydration amount, and heart rate of subject B are shown in Figure 7 [28,35]. Compared with the jogging of subject A, the physiological values of subject B increase more quickly. In particular, the core temperature increases to 40°C and the heart rate increases to 170 bpm almost immediately. The corresponding health state transition sequence is shown in Table 7. In Table 7, the health state goes through three states: NNN, NNT, and FNT. At the 2nd minute, the fuzzy symptom of HR changes from nhr to hhr and the current health state changes from NNN to NNT. Immediately following that, the fuzzy symptom of the core temperature changes from nt to sht, mht, and ht; the health state changes to FNT. During this case, the sweat accumulation is in the normal range; its related symptom is nd during the whole simulation process. As the heart rate sharply increases to the MHR, the exercise performed by subject B is risky. That is, subject B is not suited to this running plan.
We should adjust the running intensity or the running time.

Discussion. The experiments show that our approach can simulate the physiological changes of the human body and predict the health states in different exercises. Furthermore, important exercise health warnings can be given to participants when the human body enters a risky health state [45]. This is very helpful for individuals who are not sure how long they should run at a specific environmental temperature while maintaining a healthy state. Appropriate exercise suggestions can also be given, according to the simulated health states, before the exercise starts. Case 1 shows that jogging for a long time may cause mild dehydration, even in a pleasant environment. This is because sweating takes effect in the thermoregulation system and a lot of sweat is secreted over the whole exercise process. Therefore, water should be replenished in time during a long jog, and the exercise duration should be arranged reasonably (e.g., not more than 2 hours) [44,46]. Case 2 simulates the physiological changes of the human body during fast running. When fast running lasts more than 15 minutes at an environmental temperature of 28°C, the human body reaches a high-load state (as reflected in the core temperature and heart rate). Therefore, our simulation results suggest that fast running should not last more than 15 minutes when the environmental temperature exceeds 28°C [47,48]. Also, fast running is not suitable for people with heart disease, since the heart rate increases sharply in the first minutes of running.

Conclusion. During exercise, the physiological changes of the human body are caused by various physiological regulation mechanisms such as thermoregulation and cardiovascular regulation. These physiological mechanisms are directly related to health evaluation and prediction. For the purpose of obtaining human exercise health, we propose a novel exercise health simulation approach, which comprises an integrated thermophysiological model and a fuzzy finite state machine. Common physiological indicators like core temperature, dehydration amount, and heart rate used in exercise health prognosis can be well simulated by our thermophysiological model. Then a fuzzy finite state machine is defined to describe the health state transitions during exercise, and the health status can be obtained at an early stage. Further work is discussed as follows: (1) the exercise health simulation and analysis in this paper are aimed at healthy people; similar research on specific populations (such as cardiac patients or other unhealthy people) should be analyzed and discussed; (2) real-time exercise monitoring is a hot research topic. We have proposed a real-time exercise monitoring framework based on the given thermophysiological model, and its corresponding real-time exercise monitoring app has been implemented. However, with an increasing number of client users, problems such as simulation efficiency, load balancing, and the analysis and storage of the growing volume of physiological data remain to be solved.

Conflicts of Interest. The authors declare that they have no conflicts of interest.
2018-04-03T01:28:20.688Z
2017-06-15T00:00:00.000
{ "year": 2017, "sha1": "2fb72142757403d76d4552fd5420e61b06acbd04", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/cmmm/2017/9073706.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6ed4ed6827a2eee824d2e206351155aaf4ff09b1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
207823923
pes2o/s2orc
v3-fos-license
Synthetic terpenoids in the world of fragrances: Iso E Super® is the showcase

The history of fragrances is closely associated with the chemistry of terpenes and terpenoids. For thousands of years mankind mainly used plant extracts to collect ingredients for the creation of perfumes. Many of these extracts contain complex mixtures of terpenes that show distinct olfactory properties as pure compounds. When organic synthesis appeared on the scene, the portfolio of new scents increased, either to substitute natural fragrances without changing their olfactory properties or to broaden the scope of scents. This short review describes the story of the most successful synthetic fragrance ever, called Iso E Super®: it is an ingredient in a large number of perfumes at varying percentages and is the first example of a synthetic used as a pure fragrance. Structurally, it is related to natural terpenes, like many other synthetic fragrances. And indeed, the story began with a classic in the field of fragrances, the natural product ionone.

Introduction - classical terpenes in perfumes

Perfumes (Latin 'per fumus', which means 'through smoke') have accompanied mankind for thousands of years, dating back well before biblical times [2,3]. Plants and resins served as sources for perfumes after alcoholic extraction. These extracts were not only used as fragrances but also as medicine (aqua mirabilis), aphrodisiacs and elixirs of life (aqua vitae). Obviously, nature served as the starting point and guideline for creating scents, and these lists reveal that terpenes, particularly mono- and sesquiterpenes, have played a rather dominant role in the fragrance industry [6]. Over the last decades, the demand for fragrances has grown dramatically, so that plantations serve to provide the raw materials. In parallel, synthetic efforts have also expanded dramatically to fulfil the huge demand of the consumer markets, new olfactory experiences included. Indeed, synthetic compounds were not introduced until the dawn of the 19th century, and first and foremost coumarin played a key role, first synthesised by Perkin in 1868 [7,8]. Following this breakthrough, many other perfumes were created based on synthetic molecules born from the newly established discipline of synthetic organic chemistry. This made odorants available to the broad masses and allowed perfumes to be worn according to one's daily mood [2,3]. A key enabling milestone was musk ketone, accidentally discovered in 1894 and an important compound not derived directly from nature. Other musky compounds, such as (−)-(3R)-muscone, isolated in small yields from the glandular secretion of the musk deer, and 15-pentadecanolide, were utilised too [9].

The discovery and modern applications of Iso E Super®

It has to be stressed that musky odours were not the only scents of interest; so too was the spectrum of fragrances from violet flower oils. In fact, these were the most expensive of all available essential oils. Exorbitant quantities of flower petals were extracted to collect the oil, which was used directly in cosmetic formulations or spread on laundry to generate a characteristic smell. As for musk fragrances, there was a quest in the perfume industry to find a synthetic solution to create scents that mimic violet flower oils. First, the similarly smelling but more affordable orris root oil (Iris pallida Lam., fam. Iridaceae) was chosen for structural analysis. Tiemann and Krüger isolated irone (27, Figure 3), whose molecular formula was at first falsely assigned as C13H20O [10].
In an attempt to recreate this compound by condensation of acetone with citral (28), a compound with 'a strange but not very characteristic odour' was formed, later named pseudoionone (29, Scheme 1). It turned out not to be suited for further investigation. However, after cleaning the glassware with sulfuric acid, a distinctive scent of violets was noted, which was later linked to ionone (30) being created in the acidic medium. Thus, the category of synthetic ionone (30) and woody-smelling compounds was born in 1893 and investigated further in the following years [10-13].

Following this invention, many derivatives were produced to find new viable targets. These studies mostly focused on Diels-Alder cycloadditions to create structures that resemble terpenoids, starting from easily accessible and affordable materials like myrcene (1). One of the newly found products was Ambrelux (32, Scheme 2), which was further cyclised in a fashion similar to that previously mentioned for the ionone compounds. This process yielded Isocyclemone E® (33), later rebranded with the famous name Iso E Super® (33) that is valid until today [9,14,15]. Indeed, myrcene (1) is one of the most versatile monoterpenes to be used as a starting material for generating products in various industries. These include polymers, insect repellents, vitamins, flavours and fragrances [16]. Commercially, it is obtained from turpentine, a side product of paper manufacturing, whose main constituents are α-pinene (3) and β-pinene (8), 3-carene (20), limonene (6) and camphene (21). Since no large-scale source of myrcene (1) was available, a short route from readily available monoterpenes was established: under pyrolytic conditions, β-pinene (8), as a constituent of turpentine, undergoes a rearrangement to myrcene (1) (Scheme 3) [17-21].

To produce Ambrelux (32), myrcene (1) is reacted with the dienophile (31) in a Diels-Alder cycloaddition promoted under Lewis-acidic conditions. In order to obtain Iso E Super® (33), a Brønsted acid-mediated cyclisation, similar to the one utilised for the first synthesis of ionone (30), proved feasible on large scale. As it turned out, not only the product depicted but several other cyclisation products formed, the main constituent being Iso E Super® (33). A minor byproduct is now referred to as Iso E Super Plus® (34, Scheme 4). Small modifications of the reaction conditions yielded other geometric isomers. In 2007, a thorough study published by Fráter et al. disclosed how such variations of parameters affect product formation and composition [22].

Interestingly, Iso E Super® (33) itself shows a comparably high odour threshold of 500 ng L−1, as was reported in the original patent [15]. An impurity of ca. 5%, now called Iso E Super Plus® (34), was made responsible for the characteristic smell, having an odour threshold as low as 5 ng L−1 [23].

Scheme 4: First synthesis of Iso E Super® (33), Iso E Super Plus® (34) and Georgywood® (35) as a mixture of isomers [15].

Naturally, this impurity was thoroughly analysed in the laboratories of Givaudan SA and finally secured in a patent as Iso E Super Plus® (34). Later, the second impurity Georgywood® (35), with a higher odour threshold of 15 to 30 ng L−1 but better odour characteristics, was also patented [17-20]. Further details on the individual components of this complex mixture are listed in Table 1. It must be noted that the conditions for the synthesis of all Iso E Super®-related compounds vary slightly.
The main difference lies in a prolonged isomerisation process of the Diels-Alder product 32 before and after the second cyclisation step. Georgywood® (35), named after Georg Fráter, is industrially produced with, e.g., methanol as an additive to enforce isomerisation and suppress premature cyclisation [24-26]. A famous example is the perfume Molecule 01 (Escentric Molecules, 2006), which consists solely of Iso E Super® (33). Orb_ital from Nomenclature (75% Iso E Super®) followed in the year 2015. This fragrance collection has set itself the task of using a range of synthetic fragrances as 'overdoses' in perfumes. The name Orb_ital derives from Orbitone, a brand name for the olfactorily active (2R,3R)-Iso E Super® (33) [27].

It has to be stressed that all compounds related to Iso E Super® are not handled as single isomers but rather as varying mixtures, because none of the industrial syntheses is very stereo- and regioselective, as shown by the GC analysis in Figure 4. So far, efforts in industrial production have been directed towards product mixtures that are dominated by one isomer with favourable olfactory properties. What seems counterintuitive to purely synthetically oriented or medicinal chemists can be rationalised when briefly considering the biochemical mechanism of smell and the operation of scents. The odour impression is created by olfactory receptor neurons inside the nose. Since olfaction is a very complicated and broad field, it is hard to predict how molecules and mixtures of different molecules affect the perception. This is especially complex since odour impressions may change when concentrations are altered. On the lowest level, compounds of interest interact with so-called G-protein-coupled receptors consisting of seven transmembrane domains [28]. The quaternary structure, including the membrane, sets up the active site. Approximately 370 different G-type proteins are known that are linked with odour perception. Because molecules can bind to an array of olfactory receptors, generating a complex odour impression, determining exactly which proteins are linked to which smells or molecules is a very ambitious task. Hence, studies towards understanding these interactions led to a Nobel Prize in 2004 [29-31]. Even today, correct modelling and protein crystallisation are immense challenges to be solved. Hydrophilic and hydrophobic interactions with the unpolar lipid layer make the tendency to yield suitable crystals even more difficult. Nevertheless, Palczewski and co-workers were able to crystallise the first GPCR (G-protein-coupled receptor) in 2000, confirming the previously described structure [28,32].

In contrast, the enantiomer (+)-Georgywood® (35) was found to possess a relatively weak odour, which was described as distinctly unpleasant and acrid-musty by several members of the Corey group [33]. The same approach led to the discovery of (+)-Iso E Super Plus® (34) as a highly active component (Scheme 6). Fráter et al. confirmed these findings after isolation of the olfactorily active compounds of Iso E Super Plus® (34) and Georgywood® (35). Racemic resolution provided a crystalline material that served to obtain an X-ray structure of the oxime derivative of (−)-(1R,2S)-Georgywood® ((−)-35) [33,34]. Decarboxylation and elimination yielded the lactone 43. A series of functional group manipulations provided enone 44, which underwent a cuprate-mediated Michael addition and liberation of the aldehyde 46 upon ozonolysis.
After an intramolecular aldol condensation, the resulting enone 47 was transformed into cyclohexene 48, with a shifted olefinic group, by means of a reductive variant of the Wolff-Kishner deoxygenation. A straightforward four-step sequence finally yielded Iso E Super Plus® ((+)-34).

Industrially pursued syntheses do not involve a specific stereoinducing step. In fact, it is mentioned in the patents that the standard industrial process for Iso E Super® (33) utilises technical grade chemicals in both synthetic steps. The mixture of resulting isomers is then used in perfumes, provided the smell meets the standard criteria of quality control [14]. As encountered earlier, the second step of the production is the most important one for product formation and composition. Therefore, several patents exist describing the isomerisation and cyclisation steps involved. In the first step, both olefinic double bonds of the primary Diels-Alder product 32 can isomerise, thereby creating several precursors 49-52 that, except for 52, are suited to undergo a second cyclisation, as depicted in Scheme 7. After the following cyclisation step, the double bond of the racemic products obtained isomerises between the α, β and γ positions [25].

Furthermore, Erman and co-workers from Millennium Specialty Chemicals Inc. described a process which involves methanol and other alcohols, or alternatively organic acids, as nucleophilic additives that can be reversibly introduced and removed again (Scheme 8). Typically, methanol, ethanol, isopropanol and 2-methoxyethanol served as suitable alcohols. According to the patent information, di- or polyols can also serve as 'dummy' additives. Alternatively, acetic acid was also suggested. Using this method, the desired Iso E Super Plus® (34) concentration ranged from 5% to 7% as judged by GC analysis [26].

Scheme 8: Isomerisation using additives such as alcohols or carboxylic acids. The product with the γ-positioned double bond is the desired Iso E Super Plus® (34). Product 58 (α double bond) and product 53 (β double bond) are not desired [26].

Fráter and Schröder discovered that Iso E Super Plus® (34) can undergo an additional cyclisation through compound rac-53 (Scheme 9). This is initiated by the acid employed in the second step of the synthesis. Thus, the ketone is protonated and the highly electrophilic carbon atom reacts with the alkene moiety. The resulting tertiary carbocation undergoes a 1,2-methyl shift to yield a new cation, which in turn is nucleophilically trapped by the carbinol moiety. The resulting tetrahydrofuran 59 is chemically stable, and this observation was used as a rationale for the erosion of the isomeric ratio observed during prolonged reaction times. In the same piece of work, Fráter et al. investigated the influence of Brønsted and Lewis acids on the formation of Georgywood® (35). It was found that Lewis acids such as AlCl3 shift the equilibrium towards Georgywood® (35)-type products, especially when employed in over-stoichiometric amounts. Using different Brønsted acids, the ratios between the products obtained can change drastically [22].

Conclusion and Outlook

Here, we presented a short story of Iso E Super® and the derivatives formed during its synthesis, a group of molecules that has changed the perfume industry but has its roots in the terpenoid ingredients of the classical essential oils geranium and bergamot (Figure 5).
Starting from ionone (30), an "evolutionary process" towards synthetic products with similar olfactory properties led to Iso E Super ® (33), Iso E Super Plus ® (34) and Georgywood ® (35), a development that took almost a hundred years and saw koavone and timberole as intermediates. An analysis of today's fine fragrances reveals that almost all of them combine synthetic scent molecules with traditional essential oils, despite the fact that the ongoing consumer trend is towards natural ingredients. Avoiding synthetics like Iso E Super ® (33) would rule out many favourite scents. In fact, about 100 natural fragrance ingredients are known, but perfumers have more than 3,000 synthetic molecules at hand, of which several examples 60-66 with terpene-like structures are listed in Figure 6. Notably, the fragrance properties of synthetically derived unnatural compounds commonly mimic those of natural products. Enzymatic derivatisation of terpenes by means of biocatalysis is another opportunity to create new fragrance molecules or to achieve chiral resolution of racemates. The former process is commonly associated with oxidation reactions, while the latter is often based on the action of lipases. Very recently, a new concept was disclosed that probed sesquiterpene cyclases for their ability to accept unnatural farnesyl pyrophosphates and generate unnatural cyclisation products with unusual backbones. Thus, in the presence of presilphiperfolan-8-β-ol synthase (Bot2), a novel tricyclic product 70 was obtained from the unnatural farnesyl diphosphate ether 69. The olfactory analysis revealed an ethereal, peppery and camphoric scent (Scheme 10) [39]. Future prospects of the fragrance industry will be linked with a bouquet of methods to broaden the platform of molecules with favourable olfactory properties. These include chemical synthesis, microbiology and molecular biology associated with biotechnology, and combinations based on these methods. Hence, the most recent developments in synthetic biology will also appear on the stage of the world of fragrances [40,41].
A systematic literature review of university-industry partnerships in engineering education

ABSTRACT
Over the last few decades, a wide range of works have featured studies documenting successful pedagogic collaborations in the form of university-industry partnerships in engineering education. In light of this, we conducted a systematic literature review of these studies centred around five key research questions: (a) purposes of university-industry collaborations, (b) theories used to guide such work, (c) types of methods employed, (d) evidence-based best practices identified and (e) areas of future work to be explored. Publications were selected for inclusion by screening and appraising results obtained from databases and keywords refined through a scoping study. We conclude from our findings that future studies would benefit from better alignment with literature or theoretical frameworks and specific robust methods. Additionally, early and middle years of undergraduate engineering programs offer underutilised opportunities for partnership, in line with designing a more futures-focused educational curriculum.

Introduction
Developing university-industry partnerships aligns well with current US workforce development goals calling for broadening participation in Science, Technology, Engineering and Mathematics (STEM) (National Science Board 2021). Likewise, the UK Royal Academy of Engineering has made it clear in the past that industry requires more involvement with undergraduate education (Educating Engineers for the 21st Century 2006). Forging partnerships between industry and universities is a global phenomenon and has long been touted as a way to achieve excellence through strategic changemaking at universities (e.g. Graham 2012). However, there is a need to bridge this ideal with the more conceptual study of collaboration from other fields if we are to gain a better understanding of exactly what makes collaborations work in engineering education. Some newer work has begun to bridge this gap (e.g. Gillen et al. 2021), but in order to continue to make theoretical strides and find gaps and new avenues for scholarship, it has become necessary to now map the landscape of literature around university-industry partnership in engineering education.

To start, it is necessary to briefly explore the fundamental research around collaboration across organisations in general. This has been studied for decades in a variety of contexts. There are a few highly cited works that come close to foundational pieces in interorganizational collaboration from Barbara Gray and others (e.g. Gray 1989; Gray and Purdy 2018; Gray and Wood 1991). While Gray and Wood (1991) acknowledge that a comprehensive cross-contextual theory of collaboration may not be possible, these conceptualisations are arguably the closest we have. The general principles build on negotiated order theory (Day and Day 1977; Strauss 1978). Later and more taxonomised works branching off what came before give us processes around organisational interactions, such as the tension between organisational interests and collaborative interests as described by public administration scholars (Thomson and Perry 2006; Thomson, Perry, and Miller 2007). While these works are arguably of the most robust categorisation and have been applied within engineering education (Gillen et al. 2021), collaboration has also been characterised across a continuum, for instance considering superficial partnerships all the way to fully collaborative ones (Kernaghan 1993).
While these efforts from public administration, organisational behaviour, and other fields begin to articulate a strong background for the study of collaborating across organisations, there is a need to see to what extent engineering education takes this into account in the study of university-industry partnerships. Moreover, if researchers in engineering education are not utilising this rich history of interorganisational collaboration, what do their studies look like? Thus, while the relevance of university-industry partnership is clear, the landscape of research guiding the practice has not been clearly articulated. To this end, the purpose of our systematic literature review of university-industry partnerships in engineering education is to map five key areas:

RQ1: What are the purposes/goals of university-industry collaborations for education?
RQ2: What theories/lenses have been used to guide the study?
RQ3: What are the methods that have been used in the study of university-industry partnerships?
RQ4: What are major findings/conclusions from such studies and what evidence-based best practices have been identified?
RQ5: What are the areas of future work that need to be explored further?

Inclusion criteria for a systematic review of this kind must be guided by conceptual as well as operational definitions, with the latter undergoing continual iterative refinement (Cook and West 2012). They must also seek to minimise bias (i.e. they should not intentionally or unintentionally exclude undesirable or inconclusive results).

With this in mind, we developed the following set of inclusion criteria for a source to be selected for review. It must be (a) written in English and from a peer-reviewed source, a common practice adopted in systematic reviews (e.g. Abdul Jabbar and Felicia 2015; Brown et al. 2015); (b) relevant to one or more of the research questions outlined in Section 1 (as endorsed by EPPI-Centre 2010); (c) published within the period 1980-2020 (sources earlier than 1980 were not considered to be as relevant or up to date, as per the guidelines from Cook and West (2012); it is important to note that the early 1980s were a time at which engineering industry was starting to become more vocal about workforce skills in conversation with universities (Jørgensen 2007)); (d) focused on a university-industry partnership dedicated exclusively to teaching or pedagogic research within engineering education (studies solely on research-focused partnerships were excluded); (e) documenting US/UK-based university-industry partnerships (this geographical restriction was necessary in order to narrow the context of our work in conjunction with the tight scope required, for which there is a precedent, for example in Holloman et al. (2021), who scoped to a US context to make their work more feasible); (f) concerned with partnerships dedicated to undergraduate education (sources targeted at graduate students were only included if studies were also conducted in conjunction with undergraduate students).
As a consequence of the above criteria, the following types of sources were excluded: (a) studies focusing on school/K-12/pre-college/pre-university/postgraduate education (as we wanted to focus our study on undergraduate education); (b) studies documenting outreach work, community partnerships, distance learning, faculty professional development and workplace training for practising engineers (as we are primarily concerned with intracurricular university-industry partnerships); (c) sources primarily featuring outputs of symposiums/workshops/conferences, as well as perspective articles and opinion pieces (as these are typically devoid of some form of research or evaluation); (d) studies within the disciplines of software engineering/computer science/information technology/engineering entrepreneurship (as we wish to limit our focus to the traditional engineering sub-disciplines); (e) studies featuring case studies highlighting non-US/UK university-industry partnerships (these were necessary to omit in order to constrain the large number of relevant works obtained, including those from Australia, Ireland and Brazil).

Contexts outside our scope, such as non-US/UK partnerships and studies focusing primarily on graduate education, merit their own reviews. This is based on our assessment of the quantity of literature available in these areas during our scoping review. Limiting ourselves was necessary to protect the feasibility of our review and the transferability of our findings.

Scoping study, databases and search terms
We conducted a scoping review to initially test preliminary sets of databases and search terms and to survey the breadth of literature around university-industry partnerships in engineering education. During the course of this, we iteratively refined search terms and database selections to eliminate sources that did not satisfy the inclusion criteria listed in Section 2.2. The final search terms used were:

(University OR College) AND (Industry OR Business) AND (Partnership OR Collaboration) AND Engineering Education

The final selection of subject-specific databases, adopted from those suggested by Borrego, Foster, and Froyd (2014), were: The first three of these, (a), (b) and (c), are authoritative databases containing records of indexed and full-text education-related literature and resources, while the last two, (d) and (e), constitute definitive scientific and technical databases within the engineering disciplines.

More general databases such as Scopus, PsycINFO, Journal Storage (JSTOR), ScienceDirect and Wiley were excluded as they yielded too many results, as were the databases Communication Abstracts (EBSCO), Communication and Mass Media Complete (EBSCO), Academic Search Complete and Directory of Open Access Journals, which were not easily accessible. The focus on subject-specific, as opposed to more general, databases was guided by similar methodologies adopted by other systematic literature reviews, such as those by Morelock (2017) and Holloman et al. (2021), whose approaches served as useful models for our work. Moreover, this decision was also endorsed by an experienced external colleague in systematic reviews, whom we consulted during the process.
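To make the search and screening steps concrete, the sketch below shows one way the Boolean search string and the first-pass eligibility checks could be encoded in Python. It is an illustration only, not part of the review protocol: the Record fields, the sample entries and the passes_initial_screen helper are hypothetical.

```python
# Illustrative sketch only: the review ran this query through the listed
# databases; the Record fields and sample entries here are hypothetical.
from dataclasses import dataclass

SEARCH_QUERY = ("(University OR College) AND (Industry OR Business) AND "
                "(Partnership OR Collaboration) AND Engineering Education")

@dataclass
class Record:
    title: str
    abstract: str
    year: int
    peer_reviewed: bool
    language: str

def passes_initial_screen(rec: Record) -> bool:
    # Inclusion criteria (a) and (c): English, peer-reviewed, 1980-2020.
    # Ambiguous cases are retained for full-text appraisal, not excluded here.
    return rec.language == "en" and rec.peer_reviewed and 1980 <= rec.year <= 2020

records = [
    Record("Capstone partnerships with industry", "...", 2014, True, "en"),
    Record("Industrie-Hochschul-Kooperation", "...", 2009, True, "de"),
]
retained = [r for r in records if passes_initial_screen(r)]
print(f"Query: {SEARCH_QUERY}")
print(f"{len(retained)} of {len(records)} records retained for appraisal")
```

A predicate of this kind only automates the mechanical criteria; relevance judgments against the research questions still require the manual title/abstract screening described in the next section.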
We also considered expanding our scoping review by performing citation searching or snowball sampling (i.e. reviewing works cited by already identified sources), as recommended by Borrego, Foster, and Froyd (2014), in case insufficient results were obtained through database searching. However, since our database searches yielded an adequate number of relevant studies, we did not need to pursue this option. Table 1 presents the final list of databases, search strings and additional details that may be useful for replicating the search.

Results and filtering
After obtaining the search results used for the final review, we filtered the 668 resulting articles using the search-screen-appraise method adopted by Morelock (2017), in which results were filtered using a combination of title and abstract screening, after which the remaining studies were appraised for inclusion via full-text analysis. In light of the limitations identified there by the author with regard to filtering articles solely through title screening, we decided to employ a mixed title/abstract screening procedure to improve the robustness of our method.

A result was therefore excluded if its title or abstract was specific enough to suggest the study's irrelevance; in more ambiguous cases, however, the study was retained for appraisal in the next step. For instance, one of the works, entitled 'The Role of Collaborative Capstone Projects - Experiences from Education, Research and Industry' by Hess et al. (2013), was deemed relevant in the initial title screening phase but was subsequently omitted during the abstract and full-text analysis stage, in line with exclusion criterion (d) described in Section 2.2, as the study pertained to collaborative university-industry capstone projects within the software engineering curriculum.

During the final stage involving full-text appraisal, only studies that satisfied all of the inclusion criteria listed in Section 2.2 were included as part of the synthesis. Figure 1, adapted from Morelock (2017) in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, depicts a flowchart documenting the removal of studies at each filtering stage. Table 2 provides some examples of studies that were excluded for not satisfying various inclusion criteria during the full-text appraisal stage.

The final selection comprised a total of 28 papers included in the systematic review, a complete list of which can be found in the appendix (Table A1). To increase the reliability of the process, screening was first performed by one author and then audited by the second author. In addition, the authors consulted an experienced external colleague in systematic reviews during the process.

Demographics of selected studies
In order to classify the 28 selected papers, we categorised them based on year of publication, methods used, publication source (journal or conference) and geographical location of the university-industry partnerships.

Year of publication
During our scoping study, we enforced a lower-bound date restriction by searching for studies ranging from the 1980s through to 2020. This enabled us to limit our results to more relevant works featuring university-industry partnerships, considering that such collaborations only began to take place relatively recently. Our earliest included source appeared in 1996 (Tener 1996), though we found that a large proportion of our selected papers were published from 2010 onwards.
Methods used
The majority of our selected studies (11 in total) employed qualitative methods such as feedback surveys, questionnaires and thematic content analysis techniques. Two of our selected studies (Burns et al. 2018; Na Zhu 2018) made use of statistical analysis tools to examine data procured from student assessments and survey questionnaires. Eight of the papers performed mixed methods research, defined by Tashakkori and Creswell (2007) as a combination of qualitative and quantitative methods in a single study. It is also worth noting that some studies (7 in total) adopted unnamed and non-specific methods of data collection and analysis, the implications of which are explored further in Section 4.

Geographical location of partnerships
In accordance with our selection criteria, the majority of our studies (21 in total) featured US-based case studies highlighting university-industry partnerships, with five studies documenting collaborations between UK-based universities and companies. The remaining two studies (Conradie et al. 2016; Shaul Norback et al. 2014) were more general in nature, in that they did not explicitly describe any specific case study examples of partnerships between universities and industry.

Thematic analysis
In addition to recording demographic information, we coded our selected studies around the five research questions pertaining to university-industry partnerships stated in Section 1. These questions essentially mirror the expected structure of our selected research studies, thereby facilitating an easier transfer of common characteristics as part of the content analysis, and of key findings to practitioners as part of the emerging recommendations.

For each research question, we identified various common features shared across multiple studies and used these as codes to categorise each study, in accordance with the content analysis process described by Borrego, Foster, and Froyd (2014). It is these codes that were used to answer our research questions, and they represent the focus of the results presented in this review.

It is worth mentioning that our content analysis methodology differed from the usual analysis procedures for grounded data, which often combine codes into 'concepts' and subsequently combine these into 'categories', such that the categories and concepts interrelate to form theory (Corbin and Strauss 2014). Since the purpose of this review is to capture existing literature rather than to develop theory from it, the use of codes was sufficient to organise the data.

Research quality
'Consistency and transparency' are drivers of quality in systematic literature reviews (Borrego, Foster, and Froyd 2014, 63). Working towards these goals, we carefully detail our methodological approach within this paper, including inclusion criteria, search terms and databases used. A complete catalogue of the papers included in this study is also available upon request. In addition, we held regular debriefing meetings to review ongoing work and add validity to the process (Creswell 2014). Borrego, Foster, and Froyd (2014) also state that collaboration improves reliability in literature reviews. Throughout the analysis process, we compared interpretations between researchers (Creswell 2014).
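As a minimal illustration of this kind of code-based categorisation, a simple tally over one code dimension reproduces counts of the kind reported above. The study labels and code assignments below are hypothetical placeholders, not the review's actual coding frame.

```python
# Illustrative sketch: tally included studies along one code dimension
# (method type). Study labels and code assignments are hypothetical.
from collections import Counter

method_codes = {
    "Study A": "qualitative",
    "Study B": "qualitative",
    "Study C": "quantitative",
    "Study D": "mixed methods",
    "Study E": "unnamed/non-specific",
    # ... one entry per included study (28 in total)
}

for code, count in Counter(method_codes.values()).most_common():
    print(f"{code}: {count} studies")
```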
Limitations
This review is limited by the biases introduced as part of the inclusion criteria for our selected works. Firstly, by considering only peer-reviewed sources written in English, we excluded potentially contributive theses and dissertations, non-English studies, non-academic reports, perspective articles and opinion pieces, as well as all forms of grey literature not published by commercial publishers.

Secondly, owing to the large number of results generated, we had to narrow our scoping study by only considering subject-specific databases, thereby excluding more general databases such as Wiley, Scopus and JSTOR.

As an additional consequence of the extensive results obtained, we also had to limit our geographical focus to papers documenting only US/UK-based university-industry partnerships. Despite their relevance to our research questions, several studies featuring university-industry collaborations from countries such as Norway, South Africa, Brazil, China, Germany, Spain, India, Ireland, Denmark and Australia had to be omitted. While the focus on works featuring case studies at US/UK universities may narrow the scope of the work, it enabled us to complete the review, since widening the scope would have made the number of relevant results unfeasible for analysis. This was a scoping decision we made following the large number of results we obtained across a wider demographic when deciding on our inclusion criteria and research questions.

Finally, since our work focused exclusively on teaching and education-related forms of university-industry partnership for undergraduate engineering students, we did not consider the various forms of research-based collaboration that exist between universities and companies, particularly those involving academic faculty and graduate students. Based on our scoping study, we realised that focusing on university-industry collaborations for graduate student education alone constitutes enough data to merit a separate systematic literature review of its own and represents a valuable source of potential future work in this area.

While it could be argued that these limitations impact the quality of the research produced, they were also a necessary part of scoping the process. Moreover, such shortcomings often go unacknowledged in published reviews. In being transparent about our limitations, we hope to instil further confidence in our results.

Findings
The appendix table lists all 28 selected studies, along with the codes used to categorise each of them for each of the specific research questions. The findings subsections below provide a detailed analysis to answer each of our proposed research questions.

What are the purposes/goals of university-industry collaborations for education?
Twenty-five of the 28 papers identified specific purposes for educational partnerships between universities and industry within the context of their case studies. The remaining three studies (Burns et al.
2018; Shooter and Buffinton 1999; Tener 1996) did not explicitly discuss any overarching goals motivating such forms of collaboration. Some of the general benefits of such partnerships for the various stakeholders involved, as noted by the majority of our studies, comprised the following: solutions to complex projects with the help of additional resources at low cost (for industrial companies), acquisition of real-world problem-solving skills and professional experience (for students) and the potential to keep up to date with disciplinary knowledge from an industrial perspective (for academic faculty). The specific purposes governing these types of collaboration are described in further detail in the subsections below.

3.1.1. Promoting industrial involvement in senior/final-year capstone design project courses (11 studies)
Over a third of our studies cited increasing the participation of industrial companies in the development and implementation of senior/final-year capstone design projects as one of the primary motivators behind university-industry partnerships. Collaborations of this nature were found to be mutually beneficial in fulfilling the needs of both students and industrial partners (Trent Jr and Todd 2014), through industry involvement as curriculum advisors and project mentors and through guest lectures offered within final-year capstone design courses (Goldberg et al. 2014).

We found several instances of industry participation within final-year undergraduate capstone design courses documented in the form of case studies among our selected papers. These featured the inclusion of an integrated product development (IPD) component for bioengineering students (Herz et al. 2011), an evaluation of industrial and business mentorship in mechanical engineering projects (Abu-Mulaweh and Abu-Mulaweh 2019; Demetry 1997; Na Zhu 2018), the implementation of a collaborative problem-based learning (PBL) framework through the execution of Lean Six Sigma (LSS) projects in industrial engineering programmes (Martínez León 2019) and the development of a new aluminium engineering design course for mechanical engineering students (Pai and DeBlasio 1997).

A more non-traditional form of industry involvement within project-based design courses, through the less demanding route of podcasting and the use of multimedia content, was discussed in Ruikar and Demian's (2013) study. Alexander et al. (2015) identified best practices for administering capstone programmes, while Shaul Norback et al. (2014) captured a snapshot of students' experiences and perspectives of industry involvement in such courses.

Preparing graduating students with employability skills (6 studies)
Several of the works also considered the goal of university-industry partnerships to be centred around providing engineering graduates with the necessary skills required to be successful in the workplace. This was achieved through integrating elements of design, manufacturing and business as part of a practice-based engineering curriculum known as the learning factory (Lamancusa, Jorgensen, and Zayas-Castro 1997), incorporating cooperative education practices within electrical and computer engineering programmes (Duwart et al.
1997) and creating a common standard design framework across multiple senior capstone projects (Estell and Hurtig 2014). Some of the case studies highlighted how industry involvement led to undergraduate students acquiring a host of authentic learning skills relevant to current industrial practices. These arose from establishing a learning environment for advanced energy storage technology within laboratory-based engineering courses (Gene Liao, Young, and Moss 2013), providing students in project-based design courses with opportunities to create tangible user interfaces (TUIs) with local small and medium-sized enterprise (SME) companies (Conradie et al. 2016) and using building information modelling (BIM) and IPD concepts in architectural engineering courses (Solnosky, Parfitt, and Holland 2014).

Providing students with short-term industrial internships and work placements (4 studies)
A few authors focused on the short-term internships and work placement opportunities offered by sponsoring companies to university students as extracurricular activities taking place beyond the classroom and outside the standard academic terms. Durkin (2016) presented a case study on the implementation of experiential learning techniques, within which students were able to apply their existing knowledge through summer industrial projects, while Murray, Hendry, and McQuade (2020) showcased how students achieved the same through co-curricular evening workshops established in conjunction with practising civil engineers.

The efficacy of such internship programmes was measured by assessing alignment with the programme criteria set out by the Accreditation Board for Engineering and Technology (ABET) (Haag, Guilbeau, and Goble 2006) and by documenting industrial work placement statistics to ascertain the engagement of civil engineering undergraduate students (Tennant et al. 2018).

Bespoke goals not aligned to a common theme (4 studies)
We noted that there were some studies whose identified purposes for university-industry partnerships were uniquely suited to the context of their individual case studies and consequently did not fit any of the common themes mentioned above. Examples of the motivating factors driving industry involvement included promoting the retention of female students in STEM and technology-related careers (Wasburn and Miller 2007) as well as enhancing student knowledge of, and attitudes towards, corporate social responsibility (CSR) (Smith et al. 2018). Wade (2013), for instance, noted instances of strategic university-industry partnerships in which companies provided technical support to universities to help manage their resource and technology platforms for engineering education. Industrial companies have also been known to provide sponsorship funding to undergraduate students to complete their degree studies, as a form of financial support designed to assist in the initial training of future engineers (Soltani, Twigg, and Dickens 2012).

What theories/lenses have been used to guide the study?
The case studies from 20 of the 28 papers were guided by a sound theoretical foundation comprising references to existing learning frameworks as well as to past literature sources on university-industry partnerships. The remaining eight studies (Demetry 1997; Estell and Hurtig 2014; Gene Liao, Young, and Moss 2013; Haag, Guilbeau, and Goble 2006; Herz et al.
2011; Shooter and Buffinton 1999; Trent Jr and Todd 2014; Wade 2013) were characterised by the absence of any such theoretical backbone underpinning their work. This was often because such foundations were never explicitly mentioned or delved into in sufficient detail by the authors. Consequently, this raised an important concern about the prevalence of studies documenting university-industry collaborations devoid of any theoretical lens whatsoever (discussed further in Section 4). The subsections below highlight the specific sources of the theories that guided the majority of the studies.

Guidance from existing theoretical learning frameworks (10 studies)
Over a third of our papers featured case studies that were largely guided by a variety of existing learning theories, which are systematically listed alongside each corresponding paper in Table 3.

Guidance from prior literature calling for greater university-industry collaboration (7 studies)
A quarter of our studies featured case studies that were guided by several prior literature sources emphasising the need for increased collaboration between universities and industry; these have been compiled and listed in Table 4.

Bespoke theoretical guidance not aligned to a common theme (3 studies)
There were also a few studies whose work was guided by literature sources citing theoretical concepts that did not identify with any of the common themes presented above. Na Zhu's paper (Na Zhu 2018), for instance, evaluated the effectiveness of mentoring by industry and business professionals within a senior mechanical engineering capstone design course. The author discusses how the development of such capstone courses by universities is based on different methods, such as the iterative model of continuous improvement (Mirzamoghadam and Harding 2013), the impact of group projects and teamwork (Stettina et al. 2013; Wilbarger and Howe 2006) and the importance of capstone projects in facilitating a smooth transition from academic study to practical engineering (Hanna and Sullivan 2005; Magleby et al. 2001).

Smith's study (Smith et al. 2018), focusing on CSR arising within industry-university partnerships, was guided by engineering students' sense of social responsibility (Layton 1986; Noble 1979; Wisnioski 2012) and by discussions in previous sources highlighting the importance of CSR in the engineering workplace (Blowfield and Frynas 2005; Ekwo 2013; Loureiro, Dias Sardinha, and Reijnders 2012).

Finally, the case study by Solnosky, Parfitt, and Holland (2014) outlined the implementation of an architectural engineering capstone course designed to address the needs of the architecture, engineering and construction (AEC) industry. This made use of BIM and IPD in educational settings to simulate an integrated industry process in academia, and drew on the differences between educational objectives and educational outcomes (Jestrab, Jahren, and Walters 2009) and aspects of team-based learning (Fong 2010).

Table 3. Theoretical learning frameworks guiding the study of university-industry partnerships.
Ruikar and Demian (2013):
• Accommodation of several learning styles and abilities (Fry et al. 2009; Horgan 2009; Ramsden 2003) by a multimedia podcasting approach based on Gardner's theory of multiple intelligences (Gardner 1983)
• Adoption of an active learning-based approach (Gibbs, Habeshaw, and Habeshaw 1998)
• Adult learning philosophies such as the pillars of adult learning (Knowles 1980), used to design the course in which students had substantial input through self-evaluation activities, with professors acting as facilitators

Durkin (2016):
• Guided by experiential learning theory (ELT), in which students acquire and apply knowledge gained through prior experiences (Dewey 1963; Kolb 1984)
• Integration of ELT with teamwork and peer interaction through reflective conversation, team learning and functional leadership (Kayes, Kayes, and Kolb 2005)

Lamancusa, Jorgensen, and Zayas-Castro (1997):
• References to cognitive processes and behavioural psychology citing limitations of lecture-based teaching approaches (Koen 1994; Mestre 2001; Wankat and Oreovicz 1994)
• Allusions to visual learning and the importance of practical hands-on experience in engineering education (Felder and Silverman 1988)

Wasburn and Miller (2007):
• Design of intervention programmes based on theoretical concepts from the literature explaining the male-female gender gap, such as testing-based, biological determination and cognitive/learning differences and socio-psychological theories (Clewell and Campbell 2002)
• Female retention-enhancing strategies based on the theoretical framework by Tinto (1975), whereby students' decision to remain in or withdraw from a course is based on their academic and social experiences within the university

3.3. What are the methods that have been used in the study of university-industry partnerships?
Investigating the specific methods of data collection and data analysis employed by each paper to study university-industry partnerships helped us propose changes in methodology which future works on this topic could take into consideration (discussed further in Section 4). Seven studies in particular (Conradie et al. 2016; Duwart et al. 1997; Goldberg et al. 2014; Lamancusa, Jorgensen, and Zayas-Castro 1997; Shooter and Buffinton 1999; Tener 1996; Wade 2013) failed to either adopt or explicitly mention the concrete methodological approach used to derive the conclusions of their work. Consequently, this raised an important consideration for future studies, with regard to including a specific methods section within their work as well as documenting their techniques of data collection and analysis in sufficient detail. The subsections below explore, in more detail, the different types of methods used by the remaining set of studies.

Qualitative methods (11 studies)
The majority of papers made use of qualitative methods of data collection comprising surveys, questionnaires and feedback assessment forms provided to each of the key stakeholders (students, faculty, industry sponsors) in order to gauge the effectiveness of university-industry partnerships within the context of their own case studies. These are summarised in greater detail in Table 5.
It is also worth mentioning, however, that while most of the studies listed in Table 5 stated their methods of data collection, they often did not mention the specific qualitative data analysis techniques employed within their work. The few studies that did so primarily used thematic analysis techniques inspired by Braun and Clarke (2006) to analyse the results from surveys and questionnaires. Moreover, while Table 5 principally records the various qualitative data collection methods comprising surveys, interviews and questionnaires employed by the selected works, some of these data collection methods also contained quantitative aspects, but on the whole they can still be categorised as qualitative.

Table 4. Prior literature calling for greater university-industry collaboration.
References made to studies by Hamelink (1994), Karimi (2003) and Todd, Sorenson, and Magleby (1993) highlighting the importance of industry involvement in senior design projects
Soltani, Twigg, and Dickens (2012): Need for closer collaboration between industry and university engineering departments, as stated in The Lambert Review (Lambert 2003)

Quantitative methods (2 studies)
We found only two studies from our selection set that made exclusive use of quantitative methods to study university-industry partnerships. For data collection, Na Zhu (2018) designed two modes of assessment (course materials and a capstone project) to measure and compare student outcomes with and without industrial or business mentorship involvement. On the other hand, Burns et al. (2018) developed a questionnaire-based survey using a seven-point Likert-type scale (Finstad 2010), conducted using the online software Qualtrics, to gauge student perceptions of different industry engagement activities. Both of the works above made use of statistical methods to analyse the quantitative data obtained, with Na Zhu (2018) calculating mean and standard deviation scores for different groups of students and Burns et al. (2018) adopting a respondent selection technique to choose the key sampling group and evaluating the hypotheses using the multivariate analysis of variance (MANOVA) method to compare student perception scores across different activities.

Mixed methods (8 studies)
Several of our studies also made use of mixed methods, consisting mainly of qualitative as well as quantitative data collection tools such as surveys and questionnaires, combined with quantitative, metric-based statistical data analysis methodologies. Within their case studies, Estell and Hurtig (2014) and Soltani, Twigg, and Dickens (2012), for instance, employed both qualitative (surveys, interviews, document reviews) and quantitative (course evaluation questionnaires) methods to capture feedback and reflections from students, alumni, academic staff and industry partners. Demetry (1997) and Haag, Guilbeau, and Goble (2006) made use of similar types of surveys to ascertain the fulfilment of the goals of university-industry partnerships from the viewpoint of each of the key stakeholders. To analyse their data, all of the works mentioned above utilised statistical analysis techniques, such as conservative Mann-Whitney non-parametric tests (Haag, Guilbeau, and Goble 2006), to determine whether the differences in responses between the various stakeholder groups were statistically significant or not.

Table 5. Qualitative data collection methods used.
Y. Gene Liao, Young, and Moss (2013): End-of-semester surveys used to gauge students' mastery of learning outcomes in new courses, and face-to-face focus group interviews conducted by external evaluators asking students to respond to specific questions
Trent Jr. and Todd (2014): Surveys used to collect data to understand the needs and expectations of industry with regard to new engineering graduates, also sent to industry sponsors for feedback to make necessary course adjustments
Alexander et al. (2015): Web surveys and literature review used as data collection methods to sample a baseline of current administrative practices and identify challenges in implementing capstone programmes
Murray, Hendry, and McQuade (2020): Free-text questionnaires used to gain verbatim student feedback
Abu-Mulaweh and Abu-Mulaweh (2019): Feedback assessment forms provided to industry sponsors, faculty and students to evaluate the effectiveness of the university-industry partnership
Smith et al. (2018): Surveys provided to students both before and after a petroleum engineering field session course to assess changes in their knowledge, skills and attitudes towards CSR and the engineering profession
Solnosky, Parfitt, and Holland (2014): Verbal/written grades feedback from faculty used to evaluate course objectives, compare them with student performance and confirm whether course outcomes were met
Martínez León (2019): Student feedback surveys from completion of an engineering and management capstone design course used to assess the bridging of the gap between theory and practice
Tennant et al. (2018): Peer-reviewed placement questionnaires using a five-point Likert scale provided to students as a means of gathering data
Pai and DeBlasio (1997): Formative and summative course evaluations carried out by students used to improve course content for future sessions, with the summative evaluation performed using four levels of evaluation guidelines (Kirkpatrick 1994)
Durkin (2016): Student essays and feedback used to evaluate learning experiences and project success from the university-industry partnership

In order to evaluate the benefits of, and assess students' learning from, the use of podcasting in final-year design projects featuring industry involvement, Ruikar and Demian (2013) made use of quantitative metrics to analyse data collected from qualitative questionnaires and interactive discussions. Similarly, Herz et al. (2011) provided surveys to students to gauge their response to a new bioengineering programme, as well as to employers to assess the performance of students in industrial summer internships; these were subsequently analysed by assigning rubric score metrics.

We also found that many of our selected papers, barring two of the studies, failed to use literature-informed measures of quality, such as triangulation, to improve the credibility and validity of their research findings. The work by Shaul Norback et al. (2014) proved particularly notable for corroborating its results by using the classical content analysis methodology (Krippendorff 2012), which consisted of human analysts coding transcriptions of the responses from student panel discussions into content themes using a meta-thematic framework, in conjunction with analysis performed by a computer-based program (QDA MINER).
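For readers wanting to replicate analyses of the kind described above, the sketch below runs a Mann-Whitney U test and a MANOVA on fabricated Likert-scale data using standard scipy and statsmodels calls. All variable names and values are made up for illustration and are not drawn from the reviewed studies.

```python
# Illustrative sketch: the non-parametric and multivariate tests mentioned
# above, applied to fabricated Likert-scale data (not the studies' data).
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(seed=1)

# Mann-Whitney U: compare 7-point ratings from two stakeholder groups.
students = rng.integers(3, 8, size=40)
sponsors = rng.integers(4, 8, size=25)
u_stat, p_value = mannwhitneyu(students, sponsors, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")

# MANOVA: perception scores for two engagement activities across cohorts.
df = pd.DataFrame({
    "cohort": rng.choice(["sophomore", "senior"], size=60),
    "internship_score": rng.integers(1, 8, size=60).astype(float),
    "guest_speaker_score": rng.integers(1, 8, size=60).astype(float),
})
manova = MANOVA.from_formula(
    "internship_score + guest_speaker_score ~ cohort", data=df)
print(manova.mv_test())
```

A non-parametric test is the conservative choice for ordinal Likert responses, since it does not assume interval-scaled, normally distributed data.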
As part of their case study, Wasburn and Miller (2007) conducted a statistical analysis of pre- and post-seminar surveys provided to students to evaluate their attitudes and beliefs towards women in technology-related disciplines. They too made use of the triangulation method (Patton 1990), which involved combining multiple methodologies, including a review of the literature on women in technology and on freshman seminars together with comments obtained from end-of-year student feedback forms, to boost the validity of their findings.

What are the major findings/conclusions from such studies and what evidence-based best practices have been identified?
The large majority of our selected studies (24 out of 28 papers) presented critical, overarching findings from their work, which also formed the basis for recommendations of evidence-based best practices for university-industry partnerships. While some authors listed these explicitly, we found that in most cases the identification of best practices to be adopted was only implicitly contained within the findings (discussed further in Section 4). While the remaining four papers (Demetry 1997; Gene Liao, Young, and Moss 2013; Pai and DeBlasio 1997; Shooter and Buffinton 1999) did list their conclusions, these were not deemed relevant for the present research question, as the findings were too specific to the context of the individual case studies. The subsections below present, in more detail, the findings and best practices identified by our chosen works.

Findings related to industry involvement in senior/final-year capstone design project courses (10 studies)
A substantial proportion of the studies contained conclusions dedicated to industry partnerships arising within final-year capstone design projects, which was to be expected, considering that several works identified these as one of the principal purposes of collaboration between universities and industry, as mentioned in Section 3.1.1. These have been collated and summarised together in Table 6, along with the recommended forms of best practice.

Findings related to industry-focused authentic learning opportunities (6 studies)
Some of the studies also generated findings emerging from authentic learning opportunities featuring industry involvement, in which students were able to work on relevant problems motivated by real-world projects and applications. Duwart et al. (1997) received positive feedback from the cooperative education community and division of the American Society for Engineering Education (ASEE) on their curriculum model combining classroom-based education with practical work experience as part of an electrical and computer engineering programme.

Meanwhile, Murray, Hendry, and McQuade (2020) also acquired positive responses from students, industry speakers and workshop facilitators on the establishment of co-curricular learning initiatives featuring evening workshops between practising engineers and civil engineering undergraduate students. Their findings confirmed that, within such settings, relevant learning did indeed take place, as students working in teams on real-world problems were able to identify crucial links and gaps within material presented in the curriculum and in the workshops.
Industry-led internship programmes were found to be beneficial to students, as they led to the attainment of high levels of technical competence, confidence and engagement (Durkin 2016), as well as to industry members, who were extremely satisfied with the performance of student interns (Haag, Guilbeau, and Goble 2006). While Haag, Guilbeau, and Goble (2006) commented on how student interns were able to acquire the majority of the skills from the ABET criteria, Durkin (2016) noted how summer internships enabled the partnering university to achieve its objective of increasing STEM graduates, with all students graduating successfully within their chosen undergraduate degrees.

A study by Tennant et al. (2018), while highlighting positive student satisfaction feedback from their experiences on industrial placements, also emphasised students' lack of structured reflective analysis and thinking, and signposted opportunities for university faculty to prepare and support students better through the placement experience. From their case study introducing the development of a new, practice-based engineering curriculum known as the learning factory, Lamancusa, Jorgensen, and Zayas-Castro (1997) pinpointed several recommendations for best-practice methods to implement these successfully. These included facilitating cross-university development and sharing of course materials, promoting industry-sponsored senior design projects, creating industrial advisory boards and encouraging student participation in course development.

Table 6. Summary of findings for industry involvement in capstone design courses.
Estell and Hurtig (2014):
• capstone design projects led to improvement in students' project management skills, confidence and design experience and provided valuable preparation for alumni for their future careers
• best practices to improve the capstone project experience include the application of a corporate management standard, use of project management documentation, assessment using specialised rubrics and use of project review boards
Goldberg et al. (2014):
• capstone courses resulted in highly positive benefits for the industry, students and faculty involved, though industry sponsorship was identified as the major challenge
• recommendations for best practice include choosing specific projects that meet the needs of both industry sponsors and students, setting clear expectations, deliverables and roles when seeking industry sponsors, recruiting good-quality industrial guest speakers and creating an industrial advisory board
Trent Jr. and Todd (2014):
• working on a capstone project offers great value in preparing students for what to expect in industry
• the evaluation and assessment of a project can be very challenging and difficult for industry sponsors due to the elements of subjectivity involved
• placing students into teams and stressing the importance of collaboration can address the perceived weaknesses of new graduates in industry
Alexander et al. (2015):
• sampling of current baseline practices indicated a lack of consistency in the administration of capstone programmes across institutions
• recommended forms of best practice include enabling transparency in administrative paperwork to help faculty recruit industry sponsors for projects and devoting increased time and attention to drafting externally sponsored capstone programme agreements
Shaul Norback et al. (2014):
• students were found to have a different perception of capstone design projects compared to faculty and industry sponsors
• student reflection and feedback on capstone design projects not captured by the literature is of vital importance and cannot be overlooked
Herz et al. (2011):
• positive reception from students to the incorporation of IPD in a capstone design course led to its being made a mandatory requirement, as opposed to an optional component of the course
• inclusion of IPD within the programme led to an overall improvement in the experiential learning component of the bioengineering curriculum, while also successfully meeting ABET's capstone design requirements
Abu-Mulaweh and Abu-Mulaweh (2019):
• industry sponsors were extremely satisfied with their involvement in the capstone project partnership and keen to continue sponsoring projects
• student feedback indicated that they were well prepared for the job market following their exposure to real-world design problems
Na Zhu (2018):
• engagement with industry professionals acts as a stimulant for students by making them pay more attention to courses
• frequent weekly meetings with industry professionals led to a temporary negative impact on course assessment tests
Solnosky, Parfitt, and Holland (2014):
• implementation of a multidisciplinary pilot programme within an IPD and BIM senior capstone course is an excellent tool for training young engineers entering the workplace
• generation of such a programme in an academic environment is feasible, with its course objectives being both relevant and effective in meeting industry needs
Martínez León (2019):
• students taking the enhanced LSS capstone course had an enhanced learning experience accompanied by growth in their self-confidence, theoretical and practical knowledge and preparedness for work environments
• engaging in meaningful, collaborative industry projects prepared students to solve real-world problems and transition to the workplace more easily

Findings related to other forms of industry engagement (8 studies)
The remaining eight papers contained findings and recommendations pertaining to other, more bespoke modes of industry partnership that did not identify with any of the common themes presented above. These are discussed in more detail in Table 7; however, it is worth mentioning that none of these works explicitly identifies any recommendations for specific forms of best practice.

Table 7. Summary of findings for bespoke forms of industry involvement.
Ruikar and Demian (2013):
• there is great potential for using audio-visual podcasts in project-based learning, as this promotes motivation and learner engagement
• podcasting accommodates most learning styles, facilitates self-paced learning, encourages active student participation and augments synergies between industry and academia
Conradie et al. (2016):
• collaboration with industry for prototyping TUIs through involvement with SME companies was enriching for students working on open-ended real-world design problems
• integration of TUIs into the curriculum was difficult due to the inflexible educational system, the challenges of working in multidisciplinary teams and the wariness of companies in involving users at an early stage of product development
Wasburn and Miller (2007):
• students involved in the first-year freshman seminar for women, formulated as a result of a university-industry partnership, were more favourable to technology careers than those in other freshman courses
• freshman seminars of this kind can make a difference in student attitudes, as evidenced by the universal positive feedback received regarding the course
Smith et al. (2018):
• industry-university partnerships through field-based learning imparted to students a more holistic understanding of CSR
• the increase in students' capacity to understand CSR prepared them better to successfully navigate responsibilities in the industrial workplace
Burns et al. (2018):
• students perceived some industry engagement activities, such as internships, tours, guest speakers and projects, as being the most effective at enhancing their learning
Wade (2013):
• embedding the platform and tools provided by the industry partner in a first-year electrical engineering practical course led to improvements in student engagement and a boost in student satisfaction survey results
• technology support provided by the industry partner for final-year capstone courses resulted in successful industrial projects, with students invited to present their work at global conferences
Soltani, Twigg, and Dickens (2012):
• students found that the industrial sponsorship of their studies resulted in long-term benefits such as the development of skills like project management, leadership, data analysis, communication, teamwork and the application of learning to real-life situations
• employers also agreed that such sponsorship added value and provided them with positive opportunities to influence the curriculum, which in turn would improve the quality of engineering graduates
Tener (1996):
• the general characteristics and elements of a beneficial university-industry partnership within an engineering programme were found to comprise effective joint strategic planning, a committed industrial advisory committee, a student internship programme, faculty with industrial experience and the use of outcome-based indicators of success

3.5. What are the areas of future work that need to be explored further?
Twenty-one of the 28 papers suggested areas of future work for forthcoming studies on university-industry partnerships to explore further. While in most cases we found the recommendations to be broadly generic and easily transferable to other institutions, we also noted that seven papers (Abu-Mulaweh and Abu-Mulaweh 2019; Demetry 1997; Gene Liao, Young, and Moss 2013; Goldberg et al.
2014; Martínez León 2019; Na Zhu 2018; Pai and DeBlasio 1997) identified areas of future work whose scope was limited simply to extending the context of their own case studies. Since these were found to lack meaningfully transferable or generalisable suggestions, they were not included within the areas of future work discussed in detail within the subsections below.

3.5.1. Future work pertaining to industry involvement in senior/final-year capstone design project courses (7 studies)
Considering the sizeable number of works with findings dedicated to industry engagement in final-year capstone design projects, as stated in Section 3.4.1, we expected to find studies identifying avenues for future work within this area. Estell and Hurtig (2014), for example, discuss ways of extending their case study to other universities by adopting best-practice methods such as introducing multi-year projects, incorporating customer-stakeholder relationships and performing more progress reviews within capstone courses. Trent Jr and Todd (2014) emphasised the need for promoting industry partnerships through capstone design courses in order to improve students' learning experience, while Shooter and Buffinton (1999) noted that future projects could be improved by setting realistically attainable goals, establishing clear objectives and engaging in a cycle of continuous iteration for courses.

The recommendation to improve the transparency of administrative paperwork provided by Alexander et al. (2015) within their case study can be put into practice by drafting externally sponsored capstone programme agreements at other institutions to ensure effective execution of project outcomes. Avenues for further work also include encouraging faculty and industry sponsors of such courses to embed student and alumni input gathered through focus groups, panels and conferences (Shaul Norback et al. 2014), as well as focusing on how to incorporate larger teams or student groups comprising multiple disciplines within capstone courses (Solnosky, Parfitt, and Holland 2014). Finally, Herz et al. (2011) also examined the expansion of their ongoing interdisciplinary undergraduate bioengineering programme by fostering additional commercial partnerships and launching a new graduate programme with a similar interdisciplinary focus.

Future work pertaining to industry-focused authentic learning opportunities (6 studies)
Following on from Section 3.4.2, some studies discussed possibilities for exploring future work related to industry-focused authentic learning opportunities such as placements, internships and other cooperative education initiatives. While Duwart et al.
(1997) offered suggestions to apply the concepts and practices of the cooperative education model to curricula within other universities and countries, Murray, Hendry, and McQuade (2020) considered expanding their co-curricular learning initiative featuring evening workshops for civil engineering undergraduate students to the daytime curriculum. The latter also noted how students' exposure to industrial engineering can be enhanced through mentoring by graduate engineers and through the introduction of degree apprenticeship programmes.

Haag, Guilbeau, and Goble (2006) highlighted the need to further examine improving the provision of skills such as planning, preparing, report-writing and presenting, which students from their engineering internship programme were found to lack. This was similarly echoed by Durkin (2016) as part of the author's summer internship case study, which suggested that experiential learning processes should be embedded in engineering technology education. The benefits of short-term industrial placements also motivated Tennant et al. (2018) to emphasise the need to further develop similar academic-industry partnerships by exploring closer collaboration and increased opportunities.

Finally, as part of their future work, Lamancusa, Jorgensen, and Zayas-Castro (1997) concluded that their case study on the manufacturing engineering education partnership featuring the development of a practice-based engineering curriculum (the learning factory) should be continued and accompanied in the future by the reporting of detailed assessment results of the project's outcomes and deliverables.

Future work pertaining to other forms of industry engagement (8 studies)
The remaining papers, similar to those from Section 3.4.3, presented avenues for future work dedicated to other forms of industry engagement that did not identify with the common themes identified thus far; these have been summarised in more detail in Table 8 below.

Table 8. Summary of future work identified for bespoke forms of industry involvement.
Ruikar and Demian (2013): practitioners are recommended to explore podcasting opportunities within project-based learning featuring industry involvement, while pedagogic researchers are encouraged to develop and assess these from a learning context
Conradie et al. (2016): SME companies should be more involved in project-based courses, as this allows students to experience working as professionals under budget and time constraints while being part of multidisciplinary teams
Wasburn and Miller (2007): results from the current study on female-focused freshman seminars led by industry can be made more statistically significant by conducting future research using a larger, randomly drawn sample of both male and female students
Smith et al. (2018): future work can be aimed at comparing the efficacy of the petroleum engineering field sessions with ongoing classroom-based learning in both social science and engineering courses that include content on CSR
Burns et al. (2018): the current study, focused on the perceptions of senior engineering students towards various industry engagement activities, can be expanded to gauge differences in the learning perceptions of sophomore-year and senior-year students
Wade (2013): the case studies presented highlighted the need for more universities to pursue mutually beneficial partnerships with industrial technology providers through involvement in sponsored summer internships, employability workshops and graduate placement opportunities
Soltani, Twigg, and Dickens (2012): further longitudinal and cross-sectional studies of industry sponsorship schemes should be carried out over a longer timescale to include a larger survey sample covering other universities, programmes and industry sectors, and to increase student awareness of such sponsorship schemes through publicity material
Tener (1996): more universities and industry companies should come together to emphasise the stature and prestige of a construction engineering degree by demonstrating the rigour and professional training it provides to its graduates

Opportunities for new areas of focus in university-industry collaborations
Over one-third of the studies identified the development and implementation of senior/final-year capstone design projects as the primary purpose of university-industry partnerships. This is unsurprising, as a main characteristic of capstone design teaching is to promote employability, including forming connections with potential employers in industry (Pembridge and Paretti 2019). While capstone lends itself to industry partnership, this finding demonstrates the need for future work in university-industry partnerships centred around the earlier years of the undergraduate engineering curriculum. As more and more first-year engineering programs crop up that emphasise design thinking, a unique opportunity for industry collaboration is available for motivated educators. Beyond capstone and first-year, the middle years of engineering education have been neglected when it comes to design teaching (Lord and Chen 2014). Research exploring industry connections as they pertain to design teaching in the middle years is another opportunity for development.

The role of theory in partnership studies
Just over a quarter of the studies had no theoretical framework or foundation to serve as the guide behind their work. This was often absent or never explicitly mentioned in sufficient detail for the reader. This could be an indication of a lack of theoretical underpinnings for the study of university-industry partnerships, or that this area of study is still in its relative infancy compared to other areas of engineering education. Future works would benefit from a strong theoretical backbone drawing from other fields, or at least from references to past literature/existing theories. Additionally, there appears to be an opportunity for grounded approaches that seek to develop theoretical frameworks. However, 'no single theoretical perspective provides an adequate foundation for a general theory of collaboration' (Gray and Wood 1991, 3), so any theoretical advancements would lend themselves to being context-dependent.
The value of research methods in the study of educational partnerships

Notably, several studies do not apply a concrete methodology to derive their conclusions, and in particular, data analysis techniques were often not mentioned. As with the absence of a theoretical underpinning, the lack of methods indicates the underdevelopment of research on university-industry partnerships in engineering education. Most studies employed unnamed or non-specific qualitative methods of data collection and analysis, with just a few works exploring quantitative methods. Future studies should include a specific methods section documenting the overarching method type (e.g. a qualitative or quantitative approach) as well as describing their sources of data collection and data analysis. Shaul Norback et al. (2014) model this well by applying content analysis techniques from Krippendorff (2012). More concerning, in most of the studies reviewed there were no obvious measures of research quality (i.e. promoting validity, reliability, trustworthiness, etc.), with the notable exception of Wasburn and Miller (2007), who make use of triangulation (Patton 1990). While there are many resources for promoting quantitative research quality, qualitative research quality in engineering education has been primarily guided by trustworthiness as outlined by Lincoln and Guba (1985) and the newer framework from Walther, Sochacka, and Kellam (2013).

The need to highlight evidence-based best practices

While not all studies explicitly stated the evidence-based practices to be adopted in university-industry partnerships arising from their own findings, many could be inferred from closer scrutiny and interpretation of their conclusions. We recommend that future work make a more direct link from specific findings to key overarching recommendations for practitioners of partnership. Given that many scholars in the field of engineering education are practitioner-researchers, this becomes particularly salient.

The need to emphasise future work beyond study-specific contexts

Several papers identified areas of future work whose scope was limited simply to extending their own case studies by adopting or incorporating a recommended form of best practice. While this is certainly helpful locally, future studies should also comment on the directions their work could take within the larger context of engineering education and on how it might broadly inform the scholarly literature on the subject, preferably by providing recommendations to both practitioners and researchers.

Concluding remarks

Through this systematic literature review, we documented the recent history leading to the current state of research on university-industry partnerships in engineering education. In doing so, we identified purposes for collaborations, theories used, research methods, evidence-based practices identified, and areas of future work. This paper can be used as a starting point for researchers looking to contribute to the growing body of knowledge on educational partnerships, as well as for practitioners looking to implement evidence-based approaches. While a significant body of work is being developed, there is still a major need for more robust research in this area, as evidenced by the limited nature of the theoretical underpinnings, methodologies, and measures of research quality employed. Without this, future work will be limited in the conclusions it can draw.
Disclosure statement

No potential conflict of interest was reported by the author(s).

Dr. Rehan Shah is a Lecturer in Mathematics and Engineering Education at Queen Mary University of London. He has a PhD in Applied Mathematics (Nonlinear Dynamics) from University College London (UCL), an MSc in Applied Mathematics from the University of Oxford (St Anne's College) and a BEng in Mechanical Engineering with Business Finance from University College London (UCL) with the London School of Economics (LSE).

Dr. Andrew L. Gillen is an Assistant Teaching Professor in the First Year Engineering Program at Northeastern University. He has a PhD in Engineering Education from Virginia Tech and a B.S. in Civil Engineering from Northeastern University.

Table 1. Databases and search strings used to locate articles.

Table 2. Examples of studies excluded during the full-text appraisal stage.

Table 3. Theoretical foundations drawn upon by selected studies.
Conradie et al. (2016): the emergence of a new user-experience design paradigm (Moczarny, de Villiers, and van Biljon 2012), together with the notion of empathic design (Kouprie and Visser 2009) and TUIs with the notion of interactivity (Satyanarayanan 2001), requires a PBL approach facilitating contextual and experiential learning (Dahlgren 2003; Frank, Lavy, and Elata 2003) based on Kolb's educational model (Kolb 1984).
Duwart et al. (1997): the purpose of higher education is driven by students' cognitive, behavioural and affective needs (Chickering 1969); the ABET Engineering Criteria (Engineering Criteria 2000, 1995) are used to pinpoint key skills expected from all graduates of engineering programmes.
Murray, Hendry, and McQuade (2020): the need for an authentic curriculum to contextualise student learning, as alluded to by Watts (2006), Lowden et al. (2011) and Pegg et al. (2012); the use of inductive learning through PBL (Prince and Felder 2006) and its alignment with social constructivism through the importance of collaborative and peer-led learning (Ashwin and McVitty 2015).
Martínez León (2019): a PBL approach (Bell 2010; Borror et al. 2012) used to bridge the gap between theory and practice, with LSS methodologies used for curriculum development (Anderson-Cook, Patterson, and Hoerl 2005; Mitra 2004).
Tennant et al. (2018): guided by the benefits of industrial placements with regard to academic and situated cognition (Murray and Tennant 2014), identity formation in a community of practice (Johri and Olds 2011) and contextual learning through authentic work experience (Pegg et al. 2012).

Table 4. Past literature studies highlighting the need for greater university-industry collaboration, including reports published by the Royal Academy of Engineering (Educating Engineers for the 21st Century 2007) and the Department for Education and Skills (The Future of Higher Education 2003).
Programming Language Training With the Flipped Classroom Model

The flipped classroom method, which can be considered one of the crucial new generation teaching approaches, is a rearrangement of the educational activities that are carried out inside and outside of the classroom environment. The main purpose of the present study is to determine the impact of the flipped classroom approach on students' academic achievement, their attitudes toward programming, and their opinions on the method at the higher education level. The current study employed a mixed research method, as findings were drawn from both quantitative and qualitative data sets. An academic achievement test and an attitudes toward programming scale were used to collect quantitative data, whereas a semistructured focus group interview was used to collect the qualitative data set. The findings demonstrated that a statistically significant difference existed between the students in the experimental group and those in the control group regarding their attitudes toward programming and academic achievement. The results of the study showed that the experimental group had more positive attitudes and higher levels of academic achievement when compared with the control group. The advantages of the flipped classroom model include increased teacher-student interaction, greater independence in accessing courses regardless of time and place, the opportunity to save time, particularly during practice, a student-centered structure, and increased motivation. The method also has disadvantages, including its technological requirements, students not watching the videos, poor course attendance, and reduced student-teacher interaction, especially outside the classroom.

Introduction

In today's evolving and changing world, technology plays a crucial role in human life. This development has also taken place in educational settings and has yielded substantial changes. To be more precise, the introduction of the internet into our lives and the spread of mobile devices have created opportunities for people to reach all kinds of information regardless of time and location. Taking all of this into account, it can be stressed that learning and teaching processes have evolved and have thus started to take place in settings outside of the classroom context, usually in the form of online education. Technology and technological advancements have injected new terminology into the relevant fields. Digital natives are one such term derived from technology and technological advancements. Prensky (2001) described the term as denoting members of a new generation who are well informed about technology and prefer to use it in almost all aspects of their lives. It can be articulated that digital natives actively engage with technology, particularly when playing games, building social relationships, completing homework, and conducting research. It is believed that the new generation might experience challenges when developing high-level skills such as critical thinking, coding, digital literacy, commitment, and problem-solving, and that they will fail to generate new ideas through listening, reading, and following teachers' presentations, which are considered the components of the traditional education method.
Furthermore, it may be articulated that these developments have led to the introduction of new methods that are student-centered, with efforts being made to develop students' technological experience. In this context, the "flipped learning classroom method" is considered one of the popular new generation teaching approaches in educational settings (Lo et al., 2018; Ozer et al., 2018; Roehling et al., 2017). The flipped class approach is identified as a combination of in-class and out-of-class activities in which the theoretical instruction traditionally delivered in class is moved outside the classroom, while the activities traditionally done outside class are brought into the classroom setting (Bergmann & Sams, 2012; Strayer, 2012). The flipped learning approach has three crucial dimensions: (a) in contrast to the traditional education approach, the flipped learning education method disseminates theoretical knowledge to the students through course-related videos and infographics in the form of out-of-class activities; (b) high-level teaching-learning activities such as homework and projects that are designed to reinforce the knowledge of the students are carried out in the classroom setting with the guidance of the teacher, along with active interactions among students themselves and with their teachers; and (c) out-of-class activities should be completed to elevate students' success in in-class activities (Abeysekera & Dawson, 2015; Bergmann & Sams, 2012; Limniou et al., 2018; Lo et al., 2018; O'Flaherty & Phillips, 2015; Thai et al., 2017). Furthermore, video creation and monitoring tools, new generation technological elements such as infographics, augmented reality, gamification applications, social media, Object-Oriented Dynamic Learning Environments (Moodle), learning management systems, different Web 2.0 tools, mobile devices, and cloud technologies are actively used in the flipped classroom approach (Lo et al., 2018; Lopes & Soares, 2018; Pellas, 2018; Thai et al., 2017).

Regarding the advantages of flipped learning environments, Serçemeli (2016) stated that the flipped learning classroom model contributes significantly to addressing the negativity, disruptions, and deficiencies experienced in the classroom setting. In addition, it is stated that it facilitates teacher-student and student-student interactions in the classroom (Moore et al., 2014). Also, this method allows the teacher to focus on the students on a one-to-one basis during the learning period, thus giving students time for the implementation of high-level activities in the classroom (Olakanmi, 2017) and providing self-learning opportunities for learners. Therefore, students have the opportunity to participate in active and collaborative learning by taking responsibility for their own learning (Nouri, 2016; Roach, 2014). In addition, since the learning part of the process is carried out in an online environment, this method provides suitable opportunities for students who cannot physically attend the course for various reasons (Gençer et al., 2014). Besides, this method eliminates many problems arising from individual learning differences in traditional settings, as it allows students to learn and repeat as much as they want. The flipped classroom technique, one of the new generation teaching techniques, is now gaining more popularity. This technique has numerous advantages as well as many limitations.
The limitations include difficulties in making the videos, technological needs, students not watching the videos, students being accustomed to traditional methods, and low participation (Bergmann & Sams, 2012; Simonson, 2017; Talbert, 2012; Touchton, 2015). Coding skills are very important in the current digital era. However, programming languages are typically taught using the show-and-apply technique: theoretical lectures given by experts are later practiced by the students (Ersoy et al., 2011). According to studies on this subject, students have reported that their attitudes toward programming are either low or medium (Başer, 2013), which can be expected to affect both success and learning in programming languages. The main purpose of this study is to investigate the impact of using the flipped classroom approach on the academic achievement of students in higher education, their attitudes toward programming languages, and their opinions regarding the approach. This study will answer the following research questions:

Research Question 1 (RQ1): Does the programming success of the students who study with the flipped classroom method differ from that of those learning with the traditional method?

Research Question 2 (RQ2): Is there a significant difference between the attitudes of students in the experimental and control groups toward programming after the study?

Research Question 3 (RQ3): What are the students' opinions on the application of the flipped classroom method in programming teaching?

Related Studies

It is believed that the flipped classroom model, which has recently been applied around the world, can address the deficiencies and disruptions encountered in modern education systems. Due to the increasing use of this method, numerous scholars have conducted research to contribute findings to the relevant field (Lo et al., 2018; Pellas, 2018). In addition, when a search was performed through Google using "inverted or flipped classroom" as the keyword, 33,600,000 results were returned, and a corresponding search through Google Scholar returned 127,000 results. Furthermore, the SCI-EXPANDED, Web of Science, SSCI, AHCI, CPCI-S, CPCI-SSH, and ESCI indexes were searched for articles related to the flipped classroom using the "inverted classroom" or "flipped learning" keywords. The search found 112 studies published in 2013, 243 in 2014, 559 in 2015, 712 in 2016, 895 in 2017, and 656 in 2018, for a total of 3,150 studies.

Yestrebsky (2015) conducted a study to explore the effectiveness of the flipped classroom model for freshman chemistry classes. She employed an experimental research model to draw findings regarding the impact of the flipped classroom on the academic success of students in chemistry. The research took place in two large classes (415 and 320 students, respectively). The results demonstrated that the 415 students who learned via the flipped classroom model obtained better academic grades than the 320 students who were educated through the traditional education model, implying that the flipped classroom model could be more beneficial for students. In addition, the scholar employed a questionnaire to identify the course-related perceptions of both groups. The results revealed that the participants who were educated through the flipped classroom model stated that online instruction was much more beneficial. Chao et al.
(2015) designed a study to investigate the attitudes of students toward their courses. The results indicated that students who were educated through the flipped classroom model had positive attitudes toward their courses. Street et al. (2015) performed a study to investigate the efficacy of the flipped classroom model. The study participants were pre-clinical medical students who were studying physiology. In the study, the participants were divided into two groups: one group was educated through the traditional education model whereas the other group was educated through the flipped classroom model. The results indicated that no statistically significant difference existed between the two groups in the context of academic achievement. Turan and Göktaş (2015) performed a study to explore the views of students toward the flipped classroom model. The scholars applied the case study method and employed both semistructured interviews and a student view questionnaire to obtain data. They listed the advantages and disadvantages of the flipped classroom model on the basis of their findings. To be more precise, the results indicated that the advantages of the model included the retention of learning, prevention of memorization, encouragement of students to prepare themselves before coming to class, reduction in problems related to students' attention, and the opportunity for students to learn the topics at any time. Conversely, the drawbacks of the flipped classroom model included the lack of technological equipment, the time required, challenges faced during the adaptation process, the necessity of watching videos before attending the courses, and, finally, not having instant feedback.

Asiksoy and Özdamli (2016) performed a study to discover the impact of the flipped learning approach on the achievement, motivation, and self-sufficiency of students. The scholars employed Keller's Attention, Relevance, Confidence, and Satisfaction model. The sample of the study consisted of 66 physics students segregated into two classes: the first class learned through the traditional model whereas the flipped classroom model was employed for the second class. The data were collected through a physics concept test, a motivation questionnaire, a physics self-sufficiency scale, and semistructured interviews. The results of the study indicated that the students in the experimental group achieved better grades when compared with the students in the control group. In addition, the results also showed that the students in the experimental group had higher motivation and self-sufficiency levels. Finally, the semistructured interview results indicated that the students in the experimental group had positive attitudes toward the flipped classroom model.

Aydın (2016) performed a study to determine the impact of the flipped classroom method on academic achievement, homework/task stress levels, and the learning transfer of university students. The scholar also aimed to explore the attitudes of the students toward the flipped learning education model. The author employed an experimental research model: the respondents were segregated into experimental and control groups, with the flipped classroom model practiced in the experimental group and the traditional education model in the control group. The results showed that homework/task stress levels decreased for the students in the experimental group.
In addition, the results also indicated that students in the experimental group had better academic achievement when compared with the students in the control group, while no statistically significant difference existed between the two groups in the context of learning transfer. The study concluded that most of the students had positive attitudes toward the flipped classroom model.

Olakanmi (2017) performed a study to explore the impact of the flipped classroom model on academic performance. The study sample comprised 66 first-year secondary school students who were studying chemistry. The scholar employed a pretest and posttest experimental design to collect the data. Students were categorized into two groups: the experimental group was educated through the flipped classroom model, with video lessons and reading materials given to the students to revise at home, while the control group was educated through the traditional education method. The results revealed positive significant differences in all assessments, with the flipped class students performing higher on average. Students in the flipped classroom group benefited by preparing for the course beforehand and had the opportunity to interact with peers and the teacher during the learning processes in the classroom.

Sezer (2017) performed a study to investigate the impact of a flipped classroom environment enriched by technology on students' learning and motivation. The author employed a pretest-posttest experimental model together with a qualitative data technique. The study was conducted in a public middle school in Turkey for 2 weeks (3 course hours) with students attending a science course. Respondents were divided into experimental and control groups. Numerous flipped classroom materials were supplied to the students in the experimental group in the form of electronic materials 3 days prior to the courses. Furthermore, before the normal course hour, the main outline of the topic(s) was discussed with the students, problem statements were constructed, and the most appropriate suggestions were carried out. This provided the opportunity to concentrate on the topics that students had experienced difficulties understanding, and interaction among students and teachers reached optimum levels. The results of the study showed that the flipped classroom model triggered higher levels of academic achievement and motivation when compared with the control group. Chiang (2017) performed a study to investigate the effectiveness of problem-solving strategies merged with flipped learning contexts. The results revealed that problem-solving strategies were more influential when combined with flipped learning contexts.

Tugun et al. (2017) performed a study to discover the impacts of the flipped classroom model on digital game development and the attitudes of the students. Ninth-grade students attending an Information Technologies II course in secondary education constituted the sample of the study. The research design took the form of an experimental study; thus, students were divided into experimental and control groups. Students in the experimental group were educated through the flipped classroom method whereas the traditional education approach was employed in a laboratory context for the students in the control group.
The research concluded that success in digital game development and the students' opinions were more favorable among those in the experimental group who were educated through the flipped classroom model. Lucke et al. (2017) performed a study to investigate the impact of the flipped classroom model on the motivation and participation of students. A total of 44 students taking a Fluid Mechanics course constituted the sample of the study. The results indicated that the participation and motivation of the students were elevated through the flipped classroom model.

Cheng and Weng (2017) performed a study to investigate the key roles that affect the success of a flipped classroom. The research was performed with a questionnaire created from a literature review. Four hundred and twenty-four valid responses (96.14%) were obtained from the 441 teachers sampled. The main results of the study were as follows: (a) main leadership has a positive effect on students' learning achievement; (b) basic leadership has a positive effect on teachers' attitudes toward digital media teaching; (c) a teacher's attitude toward using digital media has a positive effect on students' learning success; (d) a teacher's attitude toward digital media teaching moderates the relationship between basic leadership and student learning success; and (e) parental involvement has a positive effect on student learning success.

Ö. Özyurt and Özyurt (2017) aimed to determine student views on the use of the flipped classroom model in programming and algorithm learning. The qualitative research method was used in the study, with a semistructured interview form developed by the researchers as the data collection tool. The study took place within the scope of an Introduction to Programming and Algorithm course with the participation of 94 students for 14 weeks, and the course was designed and conducted according to the flipped classroom approach. At the end of the semester, 32 volunteering students among those who participated in the study were interviewed. The data obtained were subjected to content analysis. According to the results of the study, the majority of the students expressed a positive opinion about the flipped classroom approach. Based on these findings, it can be said that the flipped classroom approach can be used effectively in programming and coding courses.

Ozer et al. (2018) performed a study to investigate the attitudes of pre-service teachers toward gamification supported by a flipped classroom in the context of a coding course. The scholars employed a mixed research design: the quantitative part of the study consisted of pretests and posttests assessing the attitudes of the participants toward coding, and semistructured interviews were conducted to enhance the understanding of the research topic. It was concluded that the majority of the teacher candidates were satisfied with the activities applied in the gamification-supported flipped classroom method and that there was an increase in their motivation. Steen-Utheim and Foldnes (2018) conducted research on 12 students in a Norwegian higher education institution in which in-depth interviews were used to inquire about their learning experiences in a two-semester-long mathematics course.
The results revealed that the students who were educated in the flipped classroom were more positive toward the learning process and tended to have elevated engagement levels. The results also highlighted seven factors that students reported as being conducive to their learning experience: commitment to peers, being recognized, feeling safe, the instructor relationship, the physical learning environment, learning with peers, and using videos to learn new content. The results indicated that the impact on student engagement was particularly prominent when students reflected upon learning in the flipped classroom method.

Limniou et al. (2018) conducted research to explore students' views about teaching methods practiced by two lecturers from the perspectives of higher-order thinking skills development and their choices in terms of learning materials and activities. First-year psychology students followed either the traditional or the flipped classroom approach, delivered by two different teachers. In total, 81 students assessed their experience in social psychology and 119 students in clinical psychology. Although all students had similar preferences regarding the traditional or flipped classroom approach in both topics, a significant difference existed concerning the students' views of the teachers' contribution to the teaching approach, students' higher-order thinking development, and the choice of learning materials. Lopes and Soares (2018) performed a study to compare the academic achievement levels of students taking a financial mathematics course. The results indicated that the students who were educated through the flipped classroom model obtained higher academic achievement than the students who were educated with the traditional education model.

H. Özyurt and Özyurt (2018) analyzed the effects of the flipped classroom model on students' programming success, attitudes, and self-efficacy. The sample of the study consisted of 46 students who were taking the Introduction to Programming and Introduction to Algorithm courses for the first time. As a result of the study, the use of the flipped classroom model positively affected the academic achievement and programming self-efficacy levels of the students in these courses; however, it did not affect their attitudes toward programming. Tomas et al. (2019) conducted research to discover how the flipped class method influenced learning and engagement in science and sustainability education courses. The study reported a high level of engagement with the videos, and students believed that the videos supported their learning; however, opinions were divided as to whether the flipped classroom was preferred over traditional lectures.

Angelini and García-Carbonell (2019) performed a study to investigate whether simulation-based courses as a component of the flipped classroom model make a substantial contribution to students' progress in their written production in the English language. The results indicated that students in the experimental group, who were educated through simulation-based courses, were more likely to improve their English writing skills than the students in the control group, who were educated through the traditional education model.
In addition, the results also revealed that the experimental group performed better than the control group at organizing and linking ideas in the English course. Durak (2020) aimed to reveal the effect of programming education carried out through the flipped classroom model on student achievement in a study with 149 computer science students. According to the results, the flipped classroom model positively affects academic achievement in programming education. Etemi and Uzunboylu (2020) performed a study to investigate the effect of the flipped classroom model on students' academic achievement and students' perceptions of the model in an introduction to Java programming course. The study was carried out at a university in the Republic of Kosovo for 14 weeks in the fall term of 2018-2019, with 87 students in the experimental group and 87 students in the control group. According to the findings, there was a significant difference between the pretest and posttest achievement scores of the students in the experimental group, who were taught the course according to the flipped classroom model. In addition, a significant difference in favor of the experimental group was observed between the achievement scores of the experimental group and those of the control group. According to the qualitative data collected, it was concluded that the students were mostly satisfied with the flipped classroom model, which gave them autonomy in their learning and led them to cooperate better with teachers and classmates, although they were a little skeptical and afraid at the beginning of the course.

Many studies have been conducted on the use of the flipped classroom model in educational environments, and its effects have been investigated in different courses. Most of the studies show that the flipped classroom model increases academic achievement, affects student motivation and attitudes positively, and elicits positive student opinions. In contrast to these studies, some studies in the relevant literature have also posited that the flipped classroom model has no positive effect on the academic achievement of students. For instance, Yavuz (2016) performed a study to explore the impact of the flipped classroom model on students' academic achievement, dividing students into experimental and control groups. The findings indicated that no statistically significant difference existed between the two groups in terms of academic achievement; however, it was found that the students enjoyed the flipped classroom model and that it positively influenced their motivation levels. Thus, the students stated that the flipped classroom model should be employed for all courses.

Theoretical Background

The flipped classroom method was first applied in 2007 by two high school chemistry teachers, Jonathan Bergmann and Aaron Sams. The first application aimed to record the video courses and publish them online so that students who missed the lesson could watch it later. The flipped classroom method became widespread based on the idea that it could be applied to all students at any time through the publication of courses on downloadable online platforms, and due to its efficiency in allocating the time required for teaching theoretical knowledge in combination with practical study activities in the classroom (Bergmann & Sams, 2012).
When the theoretical foundations are examined, one can observe that the flipped learning model is a kind of blended learning model in which learning takes place according to the students' own learning levels and at their own pace, with responsibility transferred to the student. In the literature, the combined use of traditional teaching and technology is called blended learning; in other words, it is a combination of face-to-face (traditional) and online teaching (Staker & Horn, 2012; Yavuz, 2016). In addition, this model supports problem-based, collaborative, inquiry-based, and active learning theories. It removes the limitations of the learning environment to a certain extent, as in mobile learning theory. Besides, the flipped learning model is based on a social constructivist approach; social constructivism states that the structuring of knowledge is achieved through social and culturally regulated experiences (Brame, 2013; Hung, 2015; Torun & Dargut, 2015). As shown in Figure 1, the flipped class approach was associated with Bloom's Taxonomy by Williams (2013) and Brame (2013). Accordingly, the "remembering" and "understanding" steps are addressed through the activities by which the teacher conveys theoretical knowledge outside the classroom, whereas high-level cognitive learning activities such as "applying," "analyzing," "evaluating," and "creating" are carried out in the classroom.

Research Model of the Study

As previously mentioned, the current research aimed to investigate the impact of the flipped classroom approach on students' academic achievement and their attitudes toward computer programming, as well as the opinions of the students regarding the application. The research employed both quantitative and qualitative designs and therefore used a mixed research model to interpret the findings. Johnson and Onwuegbuzie (2004) defined the mixed research model as the analysis and interpretation of collected quantitative and qualitative data in a single study. Moreover, one of the reasons for choosing mixed research for the current study is that it compensates for the weakness of one research model by applying the strength of another, thereby producing more reliable findings (McMillan & Schumacher, 2010). As postulated in Table 1, in the quantitative design of the study, the impact of the flipped classroom model (as the independent variable) on students' academic achievement was investigated through a quasi-experimental pretest-posttest experimental group design. In addition, a quasi-experimental pretest-posttest control group design was employed to investigate the students' attitudes toward programming languages. One of the main reasons for utilizing the quasi-experimental design is that the participants were assigned on the basis of statistical analysis rather than being selected randomly (McMillan & Schumacher, 2010). Before the study began, a pretest was conducted by the authors to discover whether the experimental and control groups were equivalent in terms of academic achievement. The results of the independent samples t-test indicated that no statistically significant difference existed between the academic achievement scores of the students in the experimental and control groups, t(42) = 0.2, p > .05. The focus group interview method was employed to collect qualitative data for the current study.
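To make the analysis pipeline concrete, the following is a minimal sketch of the normality check and group-comparison test described above, written with SciPy as a stand-in for the authors' SPSS 24.0 workflow; the score arrays are hypothetical placeholders, not the study's data.

from scipy import stats

# Hypothetical pretest scores for illustration only.
experimental = [62, 71, 55, 68, 74, 59, 66, 70, 63, 58]
control = [60, 69, 57, 65, 72, 61, 64, 67, 62, 59]

# Step 1: Shapiro-Wilk normality check decides between parametric
# and nonparametric tests (Razali & Wah, 2011).
for name, scores in (("experimental", experimental), ("control", control)):
    w, p = stats.shapiro(scores)
    print(f"{name}: W = {w:.3f}, p = {p:.3f}")  # p > .05 -> treat as normal

# Step 2: with normality satisfied, compare the groups with an
# independent samples t-test.
t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.3f}")  # p > .05 -> groups equivalent at pretest

The same two-step decision (normality check, then t-test at the 0.05 level) underlies the posttest comparisons reported later in the results.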
Study Group

The present study was carried out with 64 students (24 females, 40 males) who had successfully passed the Programming Languages I course and were taking the Programming II course for the first time in the IT department. The sample consisted of 30 students (11 females, 19 males) in the experimental group, who were taught the programming language course through the flipped classroom approach, and 34 students (13 females, 21 males) in the control group, who took the programming course through the traditional teaching method. The experimental and control groups were two different class groups taking the Programming II course at the same university.

Measurement

In terms of data collection tools, the academic achievement test and the attitudes toward programming scale were used to collect quantitative data, whereas a semistructured focus group interview was used to collect the qualitative data set. Pretests and posttests were conducted to determine whether there was a statistical difference in academic achievement between the students in the experimental group, who were educated through the flipped classroom approach, and the students in the control group, who were lectured through the traditional learning method. In this context, an academic achievement test was administered at the beginning of the course to examine whether the groups were equivalent; it consisted of a set of 15 questions in the form of 6 multiple-choice questions, 4 open-ended questions, and 5 gap-filling questions. The academic achievement test questions prepared for pretesting covered the basic topics related to the introduction to programming languages. The pretest was prepared based on the opinions of five lecturers who were experts in the field; in the light of these opinions, the necessary corrections were made and the academic achievement test was finalized. Aside from these, at the end of the semester, four open-ended practice questions were designed to compare the academic achievements of students in both groups. Learning outcomes and in-class activities performed during the teaching process were taken into consideration while designing these questions, which were likewise prepared based on the opinions of five expert lecturers.

The current study employed the Attitudes toward Computer Programming Scale proposed by Başer (2013) to compare the attitudes of the students in both groups (experimental and control) toward computer programming. Data were collected through a five-point Likert-type scale. Cronbach's alpha was examined to test the reliability of the scale and was computed as 0.947, which indicated that the scale was reliable for data collection.

In terms of the qualitative data collection tool, a focus group interview form was used to collect the opinions of the experimental group students about the flipped classroom approach. The focus group interview method presents various opportunities to participants, such as intensive interactions that can be helpful for drawing conclusions and performing brainstorming. Several scholars have stated that the focus group interview method generates a welcoming and informal atmosphere that can facilitate the collection of respondents' opinions about the research topic (Finch & Lewis, 2003; Yıldırım & Simsek, 2008).
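As a concrete illustration of the scale reliability figure reported above, the following is a minimal sketch of a Cronbach's alpha computation for a five-point Likert-type scale; the response matrix is hypothetical and NumPy stands in for the SPSS procedure the authors used.

import numpy as np

# rows = respondents, columns = scale items (1-5 Likert responses); hypothetical data
responses = np.array([
    [4, 5, 4, 4, 5],
    [3, 4, 3, 4, 4],
    [5, 5, 4, 5, 5],
    [2, 3, 2, 3, 3],
    [4, 4, 5, 4, 4],
])

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)       # sample variance of each item
    total_var = x.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

print(f"alpha = {cronbach_alpha(responses):.3f}")  # values near 1 indicate high internal consistency

An alpha of 0.947, as reported for the Başer (2013) scale, sits well above the conventional 0.70 threshold for acceptable reliability.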
The researchers designed five open-ended questions for the focus group interview form based on the opinions of lecturers who were experts in the relevant field; moreover, a pilot study was conducted to verify that the designed questions were easy to understand and could be answered by the respondents. These questions were: "How did you get used to the flipped classroom method?", "Can you tell us about the adaptation process?", "What would you like to say about the advantages of this method when you compare it with the traditional education method?", "What are your thoughts about the limitations of this method when you compare it with the traditional education method?", and "What was the feature of this method that you liked the most?"

Data Collection

A pretest was conducted to determine the students' level at the beginning of the semester, and a posttest was carried out in the last 2 weeks of the Programming Languages II course to explore the students' attitudes toward the course. In addition, the students who took the Programming Languages II course were invited through social media to participate in a focus group interview. A total of 20 students responded positively to this invitation. Since 20 participants were considered too many for a single focus group interview, the participants were divided into three groups and the interviews were conducted separately, at times convenient for the participants. Interviews with the participants who voluntarily agreed to contribute to the study were held within the classroom environment with the researchers, and audio recordings were made with the students' permission.

Data Interpretation

The SPSS 24.0 program was used to analyze both the qualitative and quantitative data. The Shapiro-Wilk test was employed to determine whether the data regarding academic achievement for both groups were normally distributed, in order to ascertain whether parametric or nonparametric analysis should be implemented (Razali & Wah, 2011). As can be seen in Table 2, the Shapiro-Wilk test results indicated that the students' attitudes toward programming were normally distributed, p(attitude) = .145 > .05. Therefore, the independent samples t-test was employed to compare the pretest-posttest academic achievement and attitude toward programming scores of both groups. The results of the analysis were interpreted at the 0.05 significance level. As can be seen in Table 3, the result of the independent samples t-test, t(62) = 0.34, p > .05, demonstrated that no statistically significant difference existed between the two groups in terms of academic achievement. These results also imply that the two groups were equivalent to each other before the application of the flipped classroom model.

Content analysis, which is considered one of the most appropriate methods for qualitative data analysis, was conducted by converting the voice recordings gathered during the focus group interviews into text format (Barbour & Kitzinger, 1999). The transcribed texts were coded on the basis of the research questions of the study. These codes were then compared with each other on the basis of similarity and relationships, and codes that were related or similar to each other were categorized into themes.
In addition, while the researchers were analyzing the texts, a scholar who did not actively participate in the current study but is considered an expert in the relevant field double-checked the themes to ensure the reliability of the present study. At the end of this process, the formula Reliability = Consensus / (Consensus + Dissensus) was applied to the coding (Miles & Huberman, 1994); for instance, agreement on 30 of 38 codes would yield 30/38 ≈ 79%. The rate of agreement among the coders was calculated as 79%. The categories created after coding were listed in a frequency table. In addition, the qualitative data obtained from the focus interviews were illustrated through quotations of the students' responses.

Used Tools

The tools used in the application process of the research were Camtasia (for recording the course videos), YouTube and Facebook (for sharing them), Kahoot (for gamified quizzes), and Moodle (for distributing course content and collecting activities).

Application Process

For the experimental group, which comprised a total of 30 students, the course videos were prepared by the instructor using the Camtasia application according to the weekly course plan and the theoretical information about the subject to be taught that week. The videos were shared with the experimental group at least 3 days in advance via a channel opened on YouTube and a group created on Facebook. At the beginning of the course, the teacher asked the students whether there was any information they did not understand from the videos, and the necessary answers were given by the teacher. Afterward, to provide motivation and encourage students to watch the videos and review the subject, in-class activities were enriched with media support, and multiple-choice questions were provided through a Kahoot event. The questions prepared by the teacher were projected onto the board during the Kahoot event. The students answered the questions in a gamified process with their tablets, smartphones, or personal computers; fast and accurate responders scored higher and students competed with each other. As shown in Figure 2, the leaderboard consisting of the top five students at the end of the Kahoot event was shared on social media and the winners were recognized. Then, as shown in Table 4, the teacher started the in-class activity of the week, which was either individual or group work. All instructions and other course content related to the activity of the week were shared with the students via Moodle (Figure 3). During the activity, the teacher guided the students by giving them the tips they needed to complete the activities. At the end of the course, the students submitted these activities to the teacher via Moodle, and the teacher graded the students on the basis of these activities. In addition, a discussion platform was created in the Facebook group about the activity to be held in the next course; the students expressed their opinions accordingly, which presented the opportunity to see different perspectives. Taking these discussions into account, the teacher prepared the activity for the next week. Table 5 shows the weekly classroom plan.

The control group students learned the theoretical part of the programming language course through PowerPoint presentations prepared by the teacher according to the traditional method during the course. After the theoretical lecture, the application covering the topics of the week was carried out in the remaining time. The course teacher shared the course contents with the students on the website.
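Before turning to the results, the inter-coder agreement computation from the Data Interpretation section can also be made concrete. The following is a minimal sketch of the Miles and Huberman (1994) formula; the two code lists are hypothetical stand-ins for the researchers' and the expert's codings, not the study's actual data.

def intercoder_reliability(codes_a, codes_b):
    """Reliability = consensus / (consensus + dissensus) over paired codings."""
    consensus = sum(a == b for a, b in zip(codes_a, codes_b))
    dissensus = len(codes_a) - consensus
    return consensus / (consensus + dissensus)

# Hypothetical codings of the same five transcript segments.
researcher = ["motivation", "interaction", "time", "videos", "gamification"]
expert = ["motivation", "interaction", "time", "awards", "gamification"]
print(f"agreement = {intercoder_reliability(researcher, expert):.2f}")  # 0.80 here; the study reports 79%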
The Impact of the Flipped Classroom Approach on Students' Academic Achievement in Programming

As mentioned earlier, students in the experimental group were educated through the flipped classroom model while students in the control group were lectured through the traditional teaching approach. To measure the impact of the flipped classroom approach on academic achievement in the context of programming, a normality test was conducted using the Shapiro-Wilk test for both groups. The Shapiro-Wilk test findings demonstrated that the posttest results of both groups were normally distributed. In addition, to test for a statistically significant difference in the posttest academic achievement results of the two groups, the independent samples t-test was employed; the results are summarized in Table 6. The independent samples t-test results indicate that there was a statistically significant difference between the two groups in the context of their academic achievement in programming. In other words, students in the experimental group were more successful at programming (M = 79.81), t(62) = 3.80, p < .05, when compared with the students in the control group (M = 64.41). As portrayed in Figure 4, the traditional teaching approach also positively influenced the students' academic achievement; however, the study concluded that the flipped classroom approach, which is accepted as one of the effective new generation teaching methods, had a more significant impact on the students' academic achievement. As illustrated by Table 7, a considerable difference existed between the groups in their pretest and posttest measurements. Within this framework, it can be concluded that the students in the experimental group (who were educated through the flipped learning approach) tended to have higher scores when compared with the students in the control group (who were educated through the traditional education method), F(1, 62) = 21.73, p < .01. Thus, it can be stated that the two groups were statistically different from each other.

Students' Attitudes Toward Programming Through the Flipped Learning Approach

The Shapiro-Wilk test was employed to determine whether the attitudes of both groups (experimental and control) toward programming were normally distributed. The Shapiro-Wilk test findings demonstrated that the posttest results of both groups were normally distributed, p(attitude) = .145 > .05. To test for a statistically significant difference in the attitudes of the two groups toward programming, the independent samples t-test was employed; the results are illustrated in Table 8. As seen in Table 8, a statistically significant difference existed between the groups in their attitudes toward programming. In other words, the results of the study indicated that the attitudes of the experimental group toward programming were more favorable when compared with the attitudes of the control group, t(62) = 8.24, p < .01. To be more precise, the results showed that the attitudes of the students in the control group toward programming were moderate, whereas the students who were studying with the flipped classroom approach in the experimental group had more positive attitudes.

Attitudes of the Students Toward the Flipped Classroom Approach

Students' attitudes toward the adaptation process regarding the flipped classroom approach.
The students were asked to describe how the adaptation process took place in the transition from the traditional demonstration approach to the flipped classroom model. Themes were formed in the light of the participants' responses, as portrayed in Table 9. Most of the students expressed that the gamification activities and competitive atmosphere had facilitated the adaptation process (f = 13), while the videos and other course-related materials also facilitated adaptation (f = 5). In addition, two students indicated that they encountered some difficulties since they had not experienced this type of approach before. Some of the responses obtained from the participants on this topic are as follows:

P2: The awards that were provided at the end of in-class and out-of-class activities increased my attention toward the approach. Besides these, I used some other sources which were really helpful for expanding my knowledge and advancing my skills regarding the system.

P5: At the beginning, I had some worries about the approach, but then the awards that were given at the end of the in-class and out-of-class activities provided me with a better understanding of the new approach and helped me to make a smooth transition during the adaptation process.

Attitudes of the students regarding the advantages of the flipped classroom approach.

The students were asked to compare the traditional education method with the flipped classroom approach and then articulate the advantages of the flipped classroom approach. The responses of the participants are displayed in Table 10. The students provided various responses regarding the advantages of the flipped classroom approach. The most pronounced response was "it helped me to build better communication with my friends" (f = 8). Other responses regarding the advantages of the approach included "I have the opportunity to watch the course from anywhere that I can connect to the internet" (f = 6), "We have more time to apply what we have learned" (f = 5), "It is student-centered" (f = 5), "It is fueling my motivation" (f = 4), "My attention towards the course is increasing" (f = 4), "It helps us to build better communication with our teacher" (f = 4), and "It teaches better" (f = 4). Some of the responses obtained from participants on this topic are as follows:

P10: First, it is time-friendly and it stimulates our motivation as we have covered the topics before coming to class. Moreover, when a student realizes that he/she knows at least something regarding the course, his/her attention towards the course will be increased even if he/she previously was not interested in the course.

P8: In the flipped classroom approach, students tend to be more active and since teachers act as counsellors, the students can realize their talents. Besides, the communication that we build with our friends inside the class can continue outside of the class for communicating regarding our courses.

Students' attitudes toward the disadvantages of the flipped classroom approach.

The students were asked to compare the traditional education method with the flipped classroom approach and then list the disadvantages of the flipped classroom approach. The responses of the participants are illustrated in Table 11.
With regard to the flipped classroom approach to the programming course, the need for a technological substructure (f = 5), the necessity of watching videos (f = 3), decreased class participation (f = 1), and the unfamiliarity of the method (f = 1) were accepted as the most significant drawbacks of the flipped classroom approach. Some of the responses gathered from participants on this topic are as follows: P11: Some students might not have a computer or access to the internet, while watching videos may cause them to exceed their internet limits. P13: To perform activities, we have to watch course-related videos before attending the class. In my opinion, watching course-related videos and the reduced time for relaxation at home could be considered as some of the main drawbacks of the approach. The most admired features of the flipped classroom approach. The students were asked which features of the flipped classroom approach they admired the most. The themes are presented in Table 12. The findings indicated that the quizzes that were conducted through the Kahoot application (f = 12), in-class activities (f = 8), group work (f = 8), and the competitive atmosphere were the prominent themes that the participants particularly enjoyed during the implementation of the flipped classroom approach. Some of the responses gathered from the participants on this topic are as follows: P13: I really liked the Kahoot activities since they created a competitive atmosphere. I did not really watch the course-related videos in the first weeks of the implementation; however, I and other students began to watch the course-related videos during the following weeks. P4: I really enjoyed the group work, videos, and in-class activities. P1: I liked having the opportunity to watch the course contents regardless of space and time, and since you have an opportunity to watch course-related videos, you can automatically prepare yourself for the course and reinforce your knowledge. Discussion According to the findings obtained in this study, the academic success of students in the experimental group, who were trained with the flipped classroom model, was observed to be higher than that of those who were taught using traditional methods. This finding corresponds to the findings of Durak (2020), H. Özyurt and Özyurt (2018), and Etemi and Uzunboylu (2020), who conducted studies to investigate the effect of the flipped classroom model on students' academic achievement and students' perceptions of the flipped classroom model in programming. Besides, similar effects were observed in flipped classroom applications performed in different courses (Dill, 2012; Ekmekçi, 2014), whereas the results of the current study were not compatible with Marlowe's (2012) research. To be more precise, Marlowe (2012) designed a study to identify the impact of the flipped classroom approach on the academic achievement of students who were taking an Environmental Systems and Societies course. The findings of that research signified that no statistically significant difference existed between the experimental and control groups in the context of academic achievement. However, the scholar argued that the experimental group tended to have higher academic achievement scores when compared with the control group. From this perspective, it can be determined that the flipped learning approach had a positive impact on the academic performance of the students in the context of programming languages.
The factors that stimulated the academic performance of the students included the opportunity to spend more time on in-class activities, the ability to review the course materials whenever they wished and to shape their studies according to their own learning speed, and the elevated student-teacher interaction. As can be seen in Table 8, a statistically significant difference existed between the two groups in the context of their attitudes toward programming. In other words, the results of the study indicated that the attitudes of the experimental group toward programming were more favorable when compared with the attitudes of the control group, t(62) = 8.24, p < .01. To be more accurate, the results showed that the attitudes of the students in the control group toward programming were moderate, whereas the students who were studying with the flipped classroom approach in the experimental group had more positive attitudes. Therefore, the findings of the present study are in parallel with those reported in Stone's (2012) and Chao et al.'s (2015) studies. However, the results of the current study were not compatible with H. Özyurt and Özyurt's (2018) research. In H. Özyurt and Özyurt's (2018) study, the use of the flipped classroom model positively affected the academic achievement and programming self-efficacy levels of the students in the Introduction to Programming and Introduction to Algorithm courses; however, it did not affect their attitudes in a programming language course. The dynamics of the flipped learning approach, such as the opportunity to access enriched course contents easily regardless of time and space, to learn theoretical knowledge via out-of-class activities while applying that knowledge through in-class activities, and the role of teachers as coaches who encourage collaborative learning, are considered some of the crucial factors that influenced the positive attitudes toward programming. According to the qualitative results of the study, the students stated that it was easy for them to familiarize themselves with the flipped learning method with the help of the awards and videos. From this framework, the findings signaled that the use of the gamification technique played a crucial role in the process of adapting to the flipped learning model that was employed for the programming course. In addition, the students provided various responses regarding the advantages of the flipped classroom approach. The most pronounced responses were that the flipped learning approach increases communication between teachers and students, makes the course contents accessible from anywhere and at any time, saves time for practice, is student-centered, and increases motivation. Several scholars have also articulated that the flipped classroom approach yields better student-teacher communication, facilitates learning, and acts as an influential learning tool (Aşıksoy & Özdamlı, 2016; Sarsar et al., 2015; Toğay et al., 2013; Yavuz, 2016). However, some studies have also documented the disadvantages of the flipped classroom approach. For instance, some researchers have mentioned the lack of opportunities for instant feedback, particularly in courses with theoretical content, and the belief that courses take too much time to complete, which could be accepted as crucial limitations of the flipped classroom approach (Stone, 2012; Turan & Göktaş, 2015).
Moreover, the progressive perspective, opportunities for high-level cognitive learning, chances to combine the approach with various teaching methods, and a structure that is enriched with technology could be considered the major characteristics of the flipped classroom approach that foster positive attitudes toward this education model. Students were also asked to compare the traditional education method and the flipped classroom approach and then list the disadvantages of the flipped classroom approach. Its limitations include the technological requirements, the need for students to watch videos, and the fact that students do not need to physically attend the course. Several scholars have conducted studies to explore the disadvantages of the flipped classroom approach. Their findings revealed that, due to the necessity of having a technological substructure, students may not be able to watch course-related videos before attending their classes, which presents challenges when implementing the flipped classroom approach (Talbert, 2012; Touchton, 2015; Yavuz, 2016). The contents and design of the videos, a lack of technological substructure in the learning context, and/or students' lack of technical competency may also cause students to articulate negative comments about the approach. The students were asked which features of the flipped classroom approach they admired the most, and the majority of students stated that the most admired aspect of the method was the Kahoot application. In a study by Aydın (2016), it was found that the Kahoot application was helpful for increasing the motivation of students who were taught through the flipped classroom approach. Therefore, the findings of the present study are compatible with those of Aydın's (2016) study. The main reasons behind such findings include that the Kahoot application generates an enjoyable, competitive sphere through gaming, and students receive scores for questions that they answer correctly based on timing. Implications for Research and Practice
• This study will shed light on academic studies on the flipped learning class method, gamification, and coding.
• The results of this study can be a guide for studies to be carried out with larger samples.
• The results of the study will assist teachers when selecting methods for providing coding training and will guide trainers who will use this method for the first time.
• In the learning process applied using the flipped learning class method, it can be said that activities planned in advance and supported by gamification and collaborative learning activities increase learner success and attitudes.
• In this study, it was found that gamification activities such as Kahoot encouraged watching lecture videos. Research on supporting the flipped classroom method with different gamification applications and teaching approaches can be conducted with large-scale groups.
The results of this study add to the steadily growing literature of mixed-methods, quasi-experimental studies of post-secondary students' learning in disciplinary fields (computer programming) in terms of "traditional" versus "flipped classroom" approaches. Conclusion Today, programming and coding training for all age groups has become increasingly popular around the world. It is necessary to use new methods to increase students' achievement and attitudes in programming teaching.
In this study, it was found that the attitudes of the students in the experimental group toward programming were more positive than those of the students in the control group. In addition, it was determined that the achievement levels of the students in the experimental group were higher than those of the students who were taught with traditional methods. Continuation of courses with a student-centered model, presentation of rich course contents to the students outside the classroom, students' ability to learn independently of time and place, and spending more time on applications within the classroom all contributed to the students' positive attitudes toward programming and their academic performance. According to the qualitative results of the study, the students stated that it was easy for them to familiarize themselves with the flipped learning method with the help of the awards and videos. In addition, it was found that the flipped learning approach increases communication between teachers and students, makes the course contents accessible from anywhere and at any time, saves time for practice, is student-centered, and increases motivation. Its limitations include the technological requirements, the need for students to watch videos, and the fact that students do not need to physically attend the course. In addition, the majority of students stated that the most admired aspect of the method was the Kahoot application. In-class activities, group work, the competitive learning environment, repetition of course contents, and social media integration were identified as additional favorable features. Furthermore, almost all of the students had positive views regarding the flipped learning classroom method. Recommendations It is necessary to try different teaching methods in programming courses and to do more research on these subjects. The learning differences of the students should be taken into consideration, and the information should be enriched and presented to the students with materials that address these differences. More time should be devoted to applications within the classroom for skills, such as programming, that can be gained entirely through application. As in every study, this study has some limitations. The most important limitation is that the study was carried out solely with the participation of students who were taking a Programming II course. For further studies, it is recommended that the opinions of students and educators in different coding courses be collected. In addition, this study was limited to students at the higher education level. However, programming courses are now being taught even at the primary school level, so further studies should be conducted with students at different levels. Finally, the present study was conducted with an age group of students who carry individual responsibility for their own learning.
2021-07-07T13:10:05.092Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "423ccb1e73227665d994eb21963068a3fb23cac8", "oa_license": "CCBY", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/21582440211021403", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "423ccb1e73227665d994eb21963068a3fb23cac8", "s2fieldsofstudy": [ "Computer Science", "Education" ], "extfieldsofstudy": [] }
232324833
pes2o/s2orc
v3-fos-license
Cellular solitary fibrous tumor in the mental area: a case report and literature review Solitary fibrous tumors (SFTs) are rare benign mesenchymal tumors that occur mainly in the pleura. We herein report the first case of a cellular SFT located in the mental region of the head and neck in a 46-year-old woman. Facial computed tomography revealed a mass measuring 0.8 cm with clear boundaries in the right mental region. After excision of the mass, expert pathologists diagnosed a cellular SFT. To our knowledge, this is the first case of a cellular SFT identified in the subcutaneous tissue of the mental region of the head and neck. Because the postsurgical prognosis of SFTs is unpredictable, long-term follow-up and further studies are necessary to determine the characteristics of cellular SFTs in the head and neck region. Introduction Solitary fibrous tumors (SFTs) are rare benign mesenchymal tumors that were first described by Klemperer and Rabin in 1931. 1 The World Health Organization (WHO) defines SFTs as benign mesenchymal tumors because most originate from the submesothelial cells of the pleura. 2 Although SFTs were originally described in the pleura, they have been reported in almost every anatomic site. 3 Histological features of SFTs include bland, uniform, fibroblast-like spindle cells and branching hemangiopericytoma-like vessels. 2 The most prevalent primary site in the head and neck region is the sinonasal tract, followed by the orbit, oral cavity, salivary glands, deep tissues of the neck, and subcutaneous tissue. [4][5][6][7][8][9] Even when SFTs are pathologically diagnosed as benign, some SFTs may recur or metastasize. 10 Therefore, these tumors are uniquely challenging to diagnose and treat. We herein report the first case of a cellular SFT arising in the subcutaneous tissue of the mental region and discuss the clinical and pathological features of SFTs of the head and neck. Case report A 46-year-old woman with an incidentally detected hard mass in the mental region visited Chonnam National University Dental Hospital in October 2018. The patient had undergone treatment for polyarthralgia 7 years previously, but she had no unusual symptoms during the hospital visit. Physical examination revealed unremarkable findings with the exception of a palpable mass in the mental region. Laboratory parameters were within the reference ranges. Facial computed tomography revealed a mass measuring 0.8 cm with clear boundaries in the right mental region (Figure 1). Based on a clinical diagnosis of fibroma, complete surgical removal of the tumor was performed. On macroscopic examination, the mass had a well-defined oval shape and a diffuse fibrotic appearance. Furthermore, it was white to faint yellow in color and had mild elasticity. Microscopically, the lesion was well circumscribed with a thin-walled capsule and exhibited an SFT pattern including hypercellularity and frequent blood vessels (Figure 2(a), (b)). On high magnification, the lesion displayed high cellularity with bland, ovoid to spindle-shaped cells haphazardly arrayed in a "patternless pattern" with stromal collagen bundles arranged near variably sized ectatic vessels in a characteristic "staghorn" configuration (Figure 2(c)). One cell per 10 high-power fields (HPFs) was observed undergoing mitosis. Immunohistochemistry revealed strong expression of vimentin, signal transducer and activator of transcription-6 (STAT6), and CD34 (Figure 3). The Ki-67 index, which reflects the proliferation of tumor cells, was <2% (Figure 3(f)).
Based on the histological and immunohistochemical results, the tumor was diagnosed as a cellular SFT. After surgical excision of the tumor, no recurrence was observed during the 28-month follow-up (until January 2021). Discussion SFTs have rarely been reported in the subcutaneous area of the head and neck region. Of the 88 cases of SFTs arising in the head and neck region summarized by Smith et al.,11 SFTs in the subcutaneous tissues of the head and neck region were found in only 7 cases, with 3 cases involving the cheek, 2 cases involving the eyelids, 1 case involving the external auditory canal, and 1 case involving the chin. Table 1 summarizes the clinicopathological characteristics of all eight cases of SFTs in the subcutaneous tissue of the head and neck region (including the present case). The median patient age at diagnosis was 45.1 years (range, 17-64 years). The tumor size ranged from 0.6 to 6 cm. Based on the limited number of cases, SFTs appear to occur slightly more frequently in female patients than in male patients (1.7 vs. 1.0, respectively). Among the eight cases of subcutaneous SFTs, five (62.5%) were of the cellular type, three (37.5%) involved atypia, and two had a mitotic count of 4 per 10 HPFs. None of the cases showed epithelioid cytomorphology (excluding one case lacking cytomorphological data). Excluding two cases that were lost to follow-up, recurrence was noted in only one case, a cellular SFT (2 mitotic cells per 10 HPFs) without atypia or epithelioid features, 11 months after the surgery. The median follow-up time in the remaining five patients with no evidence of recurrence was 18.6 months (range, 3.5-43.2 months). The pathological characteristics of SFTs include ovoid or spindle-shaped fibroblastic cells arranged in a storiform or haphazard patternless pattern. These cells are separated by keloid-like collagen bundles and branching staghorn-shaped vessels resembling a hemangiopericytoma-like pattern. 12 SFTs are classified into two pathological categories: classic SFTs and cellular SFTs. Classic SFTs predominantly show low to moderate cellularity of spindle cells interspersed within the collagen matrix. In contrast, cellular SFTs exhibit dense hypercellularity of ovoid to spindle-shaped cells and a patternless distribution with little stroma. 11 In our case, the hypercellular fibrillary neoplasm had ovoid to short spindle-shaped cells arranged in no particular pattern; thus, it was diagnosed as a cellular SFT. The overall differential diagnosis of SFTs is quite broad because such tumors may be misdiagnosed as other spindle cell neoplasms (such as dermatofibrosarcomas or synovial sarcomas), mesenchymal lesions (such as hemangiopericytomas or fibrous histiocytomas), smooth muscle tumors (such as leiomyomas or leiomyosarcomas), neural tumors (such as schwannomas or nerve sheath tumors), or other benign soft tissue tumors (such as perineuriomas or cellular angiofibromas). 13 However, immunohistochemical tests for CD34 and STAT6 may be useful to distinguish SFTs from histological mimics because consistent CD34 expression has been reported in SFTs. 14 In a multi-institutional study by Smith et al.,11 80 of 88 SFT specimens were positive for CD34. In another study, CD34 expression was also identified in approximately 90% to 95% of typical SFTs. 15 The specificity of this marker is low because it is also expressed in other tumor types that may be confused with SFTs, including spindle cell lipoma, soft tissue perineurioma, and dermatofibrosarcoma protuberans.
13 To address the need for a more specific marker, Chmielecki et al. 16 and Robinson et al. 17 introduced NAB2-STAT6, a novel pathognomonic gene fusion, as a genetic hallmark of SFTs. Several clinical studies have shown that the nuclear expression of STAT6 can be useful for distinguishing SFTs from histological mimics in the head and neck, gynecological tract, and prostate. 14,18 Doyle et al. 19 analyzed 231 cases of soft tissue tumors, and 59 of 60 SFTs exhibited nuclear expression of STAT6. However, only strong and diffuse nuclear staining of STAT6 is highly specific for SFTs. More recently, molecular analyses of NAB2-STAT6 have been used to confirm SFT diagnoses; thus, this approach may have prognostic value. STAT6 is a member of the STAT family of cytoplasmic transcription factors regulating gene expression. STAT signaling is critical for normal cellular processes (such as the regulation of cell differentiation, growth, and embryonic development). 20 NAB2 acts as a transcriptional repressor by interacting with the early growth response family of transcription factors. 21 However, NAB2 gains an activation domain when fused to STAT6, and overexpression of the fusion gene NAB2-STAT6 causes translocation to the nucleus, where it acts as a transcriptional activator and increases cell proliferation. 17 This gene fusion is considered the primary pathogenic event in SFT development. 17 Because NAB2 and STAT6 are in close proximity on chromosome 12q13, conventional fluorescence in situ hybridization may produce false-negative results; therefore, it is not considered an ideal diagnostic tool. 13 Nuclear STAT6 expression detected by immunohistochemistry and NAB2-STAT6 fusion detected by reverse transcription-polymerase chain reaction are considered more useful methods to confirm SFTs. In our case, immunohistochemical staining for STAT6 produced intense and diffuse nuclear staining, confirming the diagnosis of an SFT. In addition, the tumor was immunohistochemically positive for vimentin and CD34. Furthermore, the Ki-67 index was <2%; therefore, a benign SFT was diagnosed. Approximately 80% of patients with SFTs have benign SFTs and are asymptomatic; however, a significant fraction of patients may have SFTs that display malignant behavior. 22 The tumor location is highly associated with disease-specific death, and patients with large (≥8 cm) tumors in the chest or abdominal/retroperitoneal cavity have the highest mortality risk. 23 However, it is difficult to accurately predict the prognosis of head and neck SFTs because the criteria used to determine tumor behavior are controversial on account of the rarity of these tumors. According to the WHO, hypercellularity, increased mitotic activity (>4 mitotic cells per 10 HPFs), cytological atypia, tumor necrosis, and infiltrative margins in SFTs can be risk factors for malignancy. 24 Our case had none of the above factors and the mitotic activity was low (1 mitotic cell per 10 HPFs), suggesting benign features. However, Table 1 shows that the tumor in Case 1, which exhibited 7 mitotic cells per 10 HPFs, had no recurrence, whereas the tumor in Case 5, which exhibited cellular features and 2 mitotic cells per 10 HPFs (in accordance with the WHO criteria for benign tumors), recurred. Other studies have also supported the observation that the biological behavior of SFTs is not strictly dependent on tumor size and mitotic count; thus, even SFTs that are considered benign based on histology may aggressively recur.
25 The gold standard treatment for SFTs is surgical resection. 26 Demicco et al. 26 analyzed 110 patients with SFTs after surgery and reported that the overall 5- and 10-year patient survival rates were 89% and 73%, respectively. Our patient was diagnosed with a benign cellular SFT, and no recurrence was observed for 28 months after surgery. It is important to track disease progression in patients, and long-term follow-up is necessary to understand the disease behavior. In conclusion, we have herein reported the first case of a cellular SFT arising in the subcutaneous tissue of the mental region of the head and neck. Further extensive studies are required to fully understand the biological potential and clearly define the clinical behavior of head and neck SFTs. The postsurgical prognosis for this disease is unpredictable, and long-term observation and follow-up are therefore required to determine its nature. Declaration of Conflicting Interests The authors declare that there is no conflict of interest. Ethics The Ethics Committee of the Dental Hospital of Chonnam National University waived the requirement for ethical approval and documentation for this study (IRB No. CNUDH-EXP-2020-008) because the patient in this case report sustained minimal harm, all of the patient's information was de-identified, and none of the patient's genetic information (e.g., DNA data) was used. Written consent for treatment was obtained from the patient before surgery. To increase the accuracy, transparency, and usefulness of this case report, this study was performed in compliance with the CARE guidelines. 27
2021-03-24T06:16:51.028Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "95cc6d08e37250bc498f028a6434d383fb990926", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/03000605211000536", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d77244987cdc4b6b6dbe653d14f07c8530f88f74", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
37614927
pes2o/s2orc
v3-fos-license
Dimensional transitions in small Yukawa clusters We provide a detailed analysis of the structural transitions leading to rapid changes in the dimensionality of small Yukawa clusters. These transformations are induced by variations in the shape of the confinement as well as in the screening strength. We show that, even in the most primitive systems composed of only a few strongly interacting particles, the order parameter exhibits a power-law behavior in the vicinity of the critical point of the continuous transition. The critical exponent γ = 1/2 is found to be universal in all studied cases, which is consistent with the general theory of continuous phase transitions. I. INTRODUCTION The emergence of ordered patterns within complex systems of interacting entities draws the attention of researchers in diverse fields, including physics, biology, mathematics and computer science [1][2][3]. Confined Wigner crystals are among the most primitive systems where the phenomenon of self-organization is observed. A small number of charged particles is placed into a confining potential well, where, in the limit of low temperatures, ordered structures are formed. This crystalline type of matter has been successfully realized experimentally in a variety of well known systems, such as electrons on the surface of liquid helium [4,5], cooled ions in traps [6] or strongly coupled particle clusters in complex plasmas [7]. Systems of strongly coupled particles are of high scientific interest due to various collective phenomena, e.g. cooperative dynamics, waves and phase transitions. First order transitions between the solid and liquid states of matter, namely melting and crystallization, were widely investigated in studies of many-particle dusty plasma crystals, also known as Yukawa clusters [8,9]. Crystalline structures formed by dust particles in complex plasmas turned out to be an extremely handy tool for these studies, as the convenient length and time scales, stability and transparency of these systems allow for direct optical observation and accurate measurements [7]. On the other hand, there is another type of transition, occurring in the simplest few-particle systems confined by asymmetric traps. These are observed when a small change in one of the control parameters causes a sudden change in the dimensionality of the system, and they are therefore called dimensional or zigzag phase transitions [10]. Note that these transitions take place in finite systems and are analogous to the second order phase transitions commonly defined and studied in the thermodynamic limit. Dimensional transitions in small two-dimensional Yukawa clusters have been extensively studied in Ref. 11, both experimentally and numerically. The authors demonstrated an excellent agreement between the computed configurations of particles and the arrangements observed in complex plasma experiments. Structural zigzag transitions were induced by variations in the number of particles, the value of the Yukawa shielding parameter κ or the shape of the confinement potential. A power-law behavior of the order parameter in the vicinity of the phase transition was observed and interpreted as a characteristic feature of second order phase transitions. However, the numerical values of the critical exponents provided in [11] cast some doubt: these values are distinctly different from the classical mean-field value of 1/2, which one would naturally expect in a finite system of just a few particles.
Therefore, one of the main motivations behind the current work is to provide the results of numerical modelling with higher precision and to show the universality of the critical exponents. We also extend the investigation to clusters of different sizes and to dimensional transitions of other types. Our paper is organized as follows. In Section II the model system is described and the procedure of our calculations is presented. Section III presents the results of the numerical modelling of dimensional transitions, grouped by their type in subsections. The main points of the article are summarized in Section IV. Additionally, Appendix A provides details on the analytically solvable cases and exact values of the critical parameters. II. NUMERICAL PROCEDURE We investigate numerically systems of N identical particles of mass m and charge Q, interacting through the Yukawa inter-particle potential. The interaction energy of two charges embedded in a screening environment thus reads V(r_ij) = (Q²/4πε₀) exp(−κ r_ij)/r_ij, (1) where κ stands for the shielding strength (inverse screening length) and controls the range of the interaction. Particles are kept together by a harmonic confinement potential V_c(r) = (1/2)mω₀²(x² + α²y² + z²) in 3D or V_c(r) = (1/2)mω₀²(x² + α²y²) in 2D. The anisotropy parameter α controls the shape of the confinement, which reduces to the symmetric spherical or circular form at α = 1, and takes the shape of an oblate (α > 1) or prolate (α < 1) spheroid in 3D and an ellipse in 2D. In the regime of strong correlations the potential energy dominates over the kinetic one, and the total energy of the model system is given by E = Σ_i V_c(r_i) + Σ_{i<j} (Q²/4πε₀) exp(−κ r_ij)/r_ij. (2) The units of length and energy are conveniently chosen as r₀ = (Q²/4πε₀mω₀²)^{1/3} and E₀ = Q²/4πε₀r₀. Obviously, κ is now measured in units of r₀⁻¹. In two dimensions, the z coordinates of all particles are set to zero, so that all the particles lie within the (xy) plane. Stable arrangements of particles correspond to the local and global minima of the potential energy (2). In the present work, stationary states are located by the method of multiple heating-relaxation cycles, based on Monte Carlo and numerical minimization algorithms. The method has already proven to be efficient and reliable in our previous studies [12]. As it turns out, the potential energy landscape of (2) is rather complex even for small values of N and might be described as a collection of local minima separated by potential barriers of various heights. In the first stage of the algorithm, thermalization takes place; that is, the system is heated to a temperature high enough to overcome all the potential barriers. This stage is accomplished by performing a large number (a few thousand) of Metropolis Monte Carlo steps [13], which leads to a configuration that may be regarded as drawn randomly from the Boltzmann distribution corresponding to the given temperature. Each minimum controls a certain area of coordinate space, called its basin of attraction. The area of attraction varies from basin to basin, which means that different stable states are realized with different probabilities [12,14]. Some of the minima are located within regions with steep walls, while others lie in broad shallow valleys and therefore require a considerable effort to find.
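As an illustration of the model and of the thermalization stage just described, the sketch below evaluates the dimensionless total energy of Eq. (2) and performs a single Metropolis trial move. All names are illustrative choices, not the authors' code; the 2D case is obtained simply by keeping the z coordinates at zero.

```python
# Minimal sketch of the model energy (Eq. 2, dimensionless units r0, E0)
# and one Metropolis trial move; names are illustrative, not the
# authors' actual code.
import numpy as np

def total_energy(pos, alpha, kappa):
    """pos: (N, 3) array; confinement squeezed along y by the factor alpha."""
    x, y, z = pos[:, 0], pos[:, 1], pos[:, 2]
    confinement = 0.5 * np.sum(x**2 + alpha**2 * y**2 + z**2)
    # Screened Yukawa interaction summed over all distinct pairs.
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.sqrt((diff**2).sum(axis=-1))
    iu = np.triu_indices(len(pos), k=1)
    interaction = np.sum(np.exp(-kappa * r[iu]) / r[iu])
    return confinement + interaction

def metropolis_step(pos, alpha, kappa, T, step=0.1, rng=None):
    """Displace one random particle; accept with Boltzmann probability."""
    if rng is None:
        rng = np.random.default_rng()
    trial = pos.copy()
    i = rng.integers(len(pos))
    trial[i] += rng.normal(scale=step, size=3)
    # Full energy recomputation: O(N^2), acceptable for the small N here.
    dE = total_energy(trial, alpha, kappa) - total_energy(pos, alpha, kappa)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        return trial
    return pos
```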
In the second stage of the computational procedure, the temperature of the system is suddenly set to zero and the closest local minimum of the potential energy is located by employing the steepest descent and Newton optimization techniques. As there is frequently more than one local minimum [12], the whole cycle of thermalization and relaxation is repeated a large number of times to ensure that all of the basins are visited and all stationary points of (2) are revealed. Departure of the anisotropy parameter α from unity breaks the spherical (or, in 2D, circular) symmetry and, as a result, prolate or flattened structures are formed. Eventually, at the critical value of the parameter, α_c, dimensional transitions are observed: three-dimensional clusters become planar, while two-dimensional structures are transformed into linear ones. In order to determine the critical values of α with high precision, we repeat our calculations while incrementing α in small steps. As will become evident shortly, the properties of Yukawa clusters, including the critical values of the anisotropy parameter and the critical exponents, depend strongly on the screening parameter κ. Therefore, in most cases we use four different values of the Yukawa screening strength, κ = 0, 1, 2, 3. In the simplest case of κ = 0, the inter-particle potential reduces to the simple unscreened Coulomb interaction. A second order phase transition is marked by a sudden appearance or disappearance of some property of the system, called an order parameter, in response to a small change in a control parameter. We investigate dimensional phase transitions by keeping an eye on the total potential energy and the order parameter ⟨y⟩, which is defined as the root mean square of the coordinate y: ⟨y⟩ = [(1/N) Σ_i y_i²]^{1/2}. Naturally, dimensional transitions are signified by a sudden change of ⟨y⟩ to zero. In particular, ⟨y⟩ is a good choice for an order parameter, since ⟨y⟩ = 0 in the 1D (2D) configuration and ⟨y⟩ > 0 in the 2D (3D) configuration. The state variables that determine the system configuration are then N, κ and α.
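A single heating-relaxation cycle and the order parameter ⟨y⟩ can then be sketched as follows, reusing total_energy and metropolis_step from the previous snippet. Here scipy.optimize.minimize (BFGS) stands in for the steepest-descent/Newton relaxation stage, and all names remain illustrative.

```python
# One heating-relaxation cycle, reusing total_energy / metropolis_step
# from the sketch above; scipy.optimize.minimize stands in for the
# steepest-descent/Newton relaxation stage.
import numpy as np
from scipy.optimize import minimize

def order_parameter_y(pos):
    """Root mean square of the y coordinates of the configuration."""
    return np.sqrt(np.mean(pos[:, 1] ** 2))

def heat_quench_cycle(N, alpha, kappa, T_hot=1.0, n_mc=5000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    pos = rng.normal(scale=1.0, size=(N, 3))           # random start
    for _ in range(n_mc):                              # thermalization
        pos = metropolis_step(pos, alpha, kappa, T_hot, rng=rng)
    # Sudden quench: relax to the nearest local minimum of Eq. (2).
    res = minimize(lambda v: total_energy(v.reshape(N, 3), alpha, kappa),
                   pos.ravel(), method="BFGS")
    minimum = res.x.reshape(N, 3)
    return res.fun, order_parameter_y(minimum)

# Repeating many such cycles and binning the resulting energies reveals
# the distinct stationary configurations at a given (N, alpha, kappa).
```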
A. 2D → 1D transitions We first investigate the simplest few-particle two-dimensional systems, undergoing 2D → 1D structural transitions. In the case of α = 1, the confinement potential is symmetric and the particles form ordered states that were previously observed experimentally and modelled theoretically [15,16]. Systems with N = 3, 4, 7 particles form only one stable configuration (the ground state), while clusters with N = 5, 6 particles in symmetric traps support both a ground state and one metastable configuration. The various states can be represented by listing the occupation numbers of the different shells; the ground state of the 5-particle system is therefore the configuration (0, 5) and the metastable state is (1, 4) (for the arrangements of particles see Figure 1). As the anisotropy parameter α departs from unity, metastable states can become ground states, some states can disappear and new ones appear. In all investigated cases, however, there is only one stationary configuration near the dimensional transition: a zigzag-shaped pattern, which soon becomes a 1D linear chain of particles at α > α_c. In the simplest symmetric case of N = 3, the particles form an equilateral triangle in the (xy) plane. As the value of α increases, the triangular configuration is gradually deformed until the transition occurs at α = α_c ≈ 1.55, as shown in Figure 1 for κ = 0. In fact, the dimensional transition in the three-particle system can be modelled analytically, which gives the value α_c = √(12/5) (see Appendix A). Numerical simulation gives exactly the same value. The structural transition is slightly more intriguing in the case of N = 4 particles. As can be deduced from the evolution of ⟨y⟩ in Figure 1, there is a discontinuity in the first derivative of the order parameter, d⟨y⟩/dα, at the value of the anisotropy parameter α ≈ 1.69. As it turns out, there are two stages of the four-particle cluster compression. In the first, slow stage, the four particles form a rhombus-shaped structure, with the particles located exactly on the x or y axis. Later, the stage of rapid compression takes over, with two particles departing from the line x = 0 and forming a zigzag-shaped pattern. Two typical rhombus- and zigzag-shaped configurations are presented in the insets of Figure 1. As was shown in a previous study, the transition between these two stages is followed by a specific oscillation of the heat capacity [17]. An analogous scenario applies to the other clusters with an even number of particles. The dimensional transition is observed at α_c ≈ 2.04, where ⟨y⟩ suddenly drops to zero. Two competing configurations are first observed in the case of N = 5 particles, namely the states (0, 5) and (1, 4). As depicted in Figure 1, the metastable state (1, 4) exists only in the narrow window of anisotropy 1 < α < 1.05. The pentagonal ground state, on the other hand, undergoes a continuous structural transformation, forms a zigzag-shaped cluster and finally becomes linear at α_c ≈ 2.50. Structural transitions become more complex for the systems with N ≥ 6. Six particles in a symmetric confinement can form two stable states. As α increases, the metastable (0, 6) state vanishes at α = 1.05, only to reappear and become the new ground state later. The former ground state (1, 5) then disappears completely near α = 1.22. Six- and eight-particle clusters both feature the same discontinuity in d⟨y⟩/dα as the four-particle system discussed above. We have already seen in Figure 1 that the critical value of the parameter α_c increases with N when the inter-particle interaction is of the Coulomb type. As might be expected, α_c also grows as the Yukawa screening parameter κ is increased, which is shown in Figure 2 for N = 3-6. The critical value of the anisotropy increases rapidly for κ < 1.5 and almost saturates for high values of screening, i.e. κ > 4.0. The lines in Figure 2 represent the boundaries between the different phases of the clusters: below the corresponding line the cluster of N particles is two-dimensional, and above it the structures form linear configurations. Although we devote most of the present work to transitions induced by deformations of the confinement well, structural changes can actually be caused by variations in any of the three parameters α, κ or N. As the figure shows, a two-dimensional cluster can become linear without any change in α, for example when the value of κ is diminished or when a particle is removed from the system.
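Tracing a phase boundary like those in Figure 2 then amounts to scanning α in small steps and recording where ⟨y⟩ of the ground state vanishes. Below is a minimal sketch under the same assumptions, reusing heat_quench_cycle from above; the threshold and cycle counts are arbitrary illustrative choices.

```python
# Locating the critical anisotropy by scanning alpha in small steps and
# detecting where <y> drops below a small threshold; reuses
# heat_quench_cycle from the previous sketch.
import numpy as np

def find_alpha_c(N, kappa, alpha_grid, tol=1e-4, n_cycles=20):
    for alpha in alpha_grid:
        # Keep the lowest-energy minimum found over several cycles.
        results = [heat_quench_cycle(N, alpha, kappa) for _ in range(n_cycles)]
        _, y_rms = min(results)           # ground state selected by energy
        if y_rms < tol:                   # the cluster has become planar
            return alpha
    return None

# Example: for the N = 4 Coulomb cluster in 3D, the 3D -> 2D transition
# should come out near alpha_c ~= 1.22 (see Section III B and Appendix A).
# alpha_c = find_alpha_c(4, kappa=0.0, alpha_grid=np.arange(1.0, 1.5, 0.01))
```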
We further examine the power-law behavior of the order parameter ⟨y⟩ near its critical point α_c in more detail. The power law is easily identified by plotting the logarithm of the order parameter, lg⟨y⟩, as a function of lg(α_c − α); the function turns out to be linear for small values of (α_c − α). This observation confirms that in the vicinity of the transition point the order parameter demonstrates a power-law behavior, ⟨y⟩ ∝ (α_c − α)^γ, which is a typical property of second-order phase transitions. We determine the values of the exponent γ near the critical point by analyzing the slope of the above-discussed log-log plot. Namely, we take the numerical derivative of the function lg⟨y⟩ = f(lg(α_c − α)). Calculated exactly at the critical point, this derivative yields the exact 'theoretical' value of the critical exponent. However, in an experimental or numerical investigation the precise location of the critical point may not be known. Thus, by calculating the numerical derivative a bit away from the critical point, we are able to mimic the uncertainty and errors present in a realistic experimental situation. It turns out that in all cases γ = 1/2 as long as α is close to its critical value α_c (Figure 3). However, the local value of the exponent (determined as the numerical derivative) is very sensitive to the deviation of the anisotropy parameter from α_c. Figure 3 shows the dependence of the power-law exponent γ on the deviation of α from its critical value for the 2D cluster with N = 3 particles and four different values of the screening strength. We see that γ departs from the value of 1/2 significantly when the deviation from α_c reaches the third decimal, and it attains its minimum near the first decimal. Furthermore, far from α_c the exponent γ attains significantly lower values in the systems with stronger screening. Other than that, there are no qualitative differences in the critical behavior of systems with different values of κ. The exponent γ of the other systems with N > 3 behaves similarly to the case of three particles presented here. A thorough analysis of transitions in a planar cluster of five particles was reported by Sheridan in [11]. The value of the critical parameter given there is α_c = 2.96 and the critical exponent is said to be γ = 0.39 ≠ 1/2, while we find the critical anisotropy parameter to be α_c = 3.01. According to the results of our calculations, a value of α = 2.96 corresponds to the exponent γ = 0.37, which is close to the value reported by Sheridan et al. Therefore, the reason for the differences between the results published in [11] and our findings almost undeniably lies in the extreme sensitivity of ⟨y⟩ to the value of the anisotropy parameter and its distance from the critical value. Thus, the accuracy of the results presented in [11] is probably not sufficient. As was demonstrated in [11] and is shown in Figure 2, continuous transitions can also be induced by variations in the screening strength κ. Keeping N and α constant, we gradually change the value of κ while tracking the changes that the order parameter ⟨y⟩ undergoes. The dimensional transition takes place when the value of ⟨y⟩ suddenly drops to zero, at which point the critical value of κ is obtained (see the inset of Figure 4). It turns out that ⟨y⟩ exhibits the same power-law behavior near the transition, i.e. ⟨y⟩ ∝ (κ − κ_c)^β. Figure 4 shows the dependence of the critical exponent β on the logarithm of the distance from the critical value κ_c for three systems. As opposed to the results presented in [11], we see again that in all cases β = 1/2 close to the transition point. Moving away from the critical point, however, the exponent β departs from the value of 1/2 significantly.
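The slope analysis described above is easy to reproduce numerically. The sketch below estimates the local exponent as the numerical derivative of lg⟨y⟩ with respect to lg(α_c − α) and checks it on synthetic data generated with γ = 1/2; the function names are illustrative only.

```python
# Estimating the critical exponent gamma as the local slope of
# lg<y> versus lg(alpha_c - alpha); np.gradient supplies the
# numerical derivative used in the text.
import numpy as np

def local_exponent(alphas, y_rms, alpha_c):
    """Return lg(alpha_c - alpha) and the local slope gamma at each point."""
    lx = np.log10(alpha_c - np.asarray(alphas))
    ly = np.log10(np.asarray(y_rms))
    gamma = np.gradient(ly, lx)      # d lg<y> / d lg(alpha_c - alpha)
    return lx, gamma

# Synthetic check: data generated with gamma = 1/2 should recover 0.5.
alpha_c = 1.549
alphas = alpha_c - np.logspace(-4, -1, 30)
y_rms = np.sqrt(alpha_c - alphas)    # <y> ~ (alpha_c - alpha)^(1/2)
_, gamma = local_exponent(alphas, y_rms, alpha_c)
print(gamma[:5])                     # ~0.5 near the transition point
```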
B. 3D → 2D transitions We further investigate structural transitions in three-dimensional Yukawa clusters with N = 4 to N = 8 particles and integer values of the screening parameter up to κ = 3. Increased values of the parameter α turn the initially spherical structure into an oblate one; eventually, after the anisotropy parameter reaches its critical value α_c, a dimensional phase transition takes place and familiar two-dimensional clusters are formed. In the three-dimensional transformations of five- and six-particle clusters, two different final states are possible, as opposed to the zigzag transitions in 2D, where only one linear configuration can be formed. Therefore, in 3D → 2D transitions there is a distinct value of α_c for each final configuration, and we are concerned with the properties of the phase transitions of a particular stable state. As the confinement potential well is squeezed in the y direction, it is convenient to label small clusters according to the arrangement of particles in the projection onto the (xz) plane, in a manner similar to the state labeling by shell occupation numbers in two dimensions. Moreover, particles in three-dimensional anisotropic traps frequently organize themselves within layers parallel to the (xz) plane. For the sake of clarity and an unambiguous definition of the configurations, we will also use a list of particle numbers in distinct layers, enclosed within curly brackets. The simplest system undergoing a non-trivial 3D → 2D transition is the cluster composed of four particles. Not surprisingly, four particles in a symmetric three-dimensional trap form a regular tetrahedron, and there is only one possible square-shaped (0, 4) state in two dimensions. Figure 5 shows the dependence of the order parameter ⟨y⟩ and the potential energy E of the system on the anisotropy parameter α. We see that ⟨y⟩ changes continuously and the transition is remarkably similar to the one in the 2D case of N = 3 particles. The potential energy gradually increases as the potential trap is flattened, until a two-dimensional structure is formed at α_c ≈ 1.22. As demonstrated in Appendix A, this symmetric transition can be modelled analytically; the critical value turns out to be α_c = (4√2/(1 + 2√2))^{1/2}, exactly the same as determined in our numerical modelling. Naturally, the value of α_c is sensitive to the range of the inter-particle Yukawa potential. As Figure 8 shows, the critical value of the anisotropy parameter increases rapidly with the strength of screening for κ < 2 and significantly more slowly after that, thus resembling the transitions from two- to one-dimensional configurations. Figure 5. Order parameter ⟨y⟩ and potential energy E of a four-particle 3D Coulomb cluster in the asymmetric potential trap with anisotropy parameter α. As was already pointed out, there are two competing stable states observed in a two-dimensional system with N = 5 particles. Five particles in a spherically symmetric 3D confinement potential, however, can form only one stable configuration. Slightly increased anisotropy leads to the formation of a three-layer structure, with the arrangement of particles within these layers being {2, 2, 1}. As the parameter α increases above the value of α = 1.05, two layers merge, forming a square and thus transforming the configuration into a pyramidal structure {4, 1} with projection (1, 4)_xz. This structural transition from the three-layered cluster to the pyramidal configuration is signified by a discontinuity in the derivative d⟨y⟩/dα (Figure 6).
As demonstrated in Figure 6 for the pure Coulomb interaction, a second stable state appears when the anisotropy parameter reaches the value of α_0 ≈ 1.29. The new pentagonal state undergoes an asymmetric dimensional transition and is soon transformed into the new ground state (0, 5). The metastable configuration (1, 4)_xz, on the other hand, becomes two-dimensional only at α_c ≈ 1.60, through the so-called "pyramidal" transition mechanism. Both the point of appearance of the second state in the five-particle system, α_0, and its critical value α_c depend on the type of the interaction potential and its screening parameter κ. As shown in Figure 8, both parameters grow with the strength of screening. The distance between α_0 and α_c, however, rapidly diminishes. As the screening reaches the value of κ ≈ 4.5, the two lines merge and the new stable state appears already in its two-dimensional pentagonal form (0, 5). A pyramidal configuration might be described as a planar base, composed of n = 4-6 particles lying parallel to the (xz) plane, and a single particle located right above the center of the base, that is, the configuration {N − 1, 1} with projection (1, N − 1)_xz. A pyramidal structural transition takes place when the apex of the polyhedron is pushed into the base, thus forming a two-dimensional configuration with only one particle in the center. The typical behavior of the order parameter ⟨y⟩ during such transitions was already discussed and is presented in Figures 6 and 7. As a matter of fact, dimensional transitions of the pyramidal type can be modelled analytically and the exact values of the critical parameters α_c can be found; see Appendix A. Even more stable configurations are observed in clusters with N = 6 particles, as Figure 7 shows for the Coulomb inter-particle potential. The evolution of the system starts with a single stable state in the symmetric 3D trap: the octahedral configuration (full line in Figure 7). As the parameter α increases, this bipyramid is deformed by pushing two of its particles, lying exactly on the y-axis, towards each other, thus lowering the height and forming a configuration {1, 4, 1} with projection (1, 4)_xz. ⟨y⟩ decreases slowly, until the said two particles start to depart from the y-axis near α ≈ 1.46, at which point a phase of rapid deformation begins. Unfortunately, right after this happens, the stable state disappears. The same scenario of bipyramidal deformation also applies to the larger clusters, e.g. N = 7, 8, 9, and has a specific, well recognizable shape of its ⟨y⟩ = f(α) curve, with segments of slow and rapid changes. Figure 7. Order parameter ⟨y⟩ and potential energy E of the six-particle 3D Coulomb cluster in the asymmetric potential trap with anisotropy parameter α. A new metastable state emerges near α ≈ 1.06: the particles lie on the six vertices of two parallel equilateral triangles, centered precisely on the y-axis and rotated by π/3 with respect to each other, that is, the state {3, 3} with projection (0, 6)_xz. These triangular layers are pushed towards each other by the deformation of the confinement; however, they fail to ever become a truly two-dimensional configuration. Instead, as demonstrated in Figure 7, the configuration ceases to exist at α ≈ 1.52, where the r.m.s. value of the y coordinate is still ⟨y⟩ ≈ 0.05 > 0. However, right before the disappearance, a new, similar, purely two-dimensional state shows up. The new planar configuration is composed of six particles lying on the vertices of two triangles of slightly different sizes (see Figure 7).
Therefore, in a brief range of α values these two states exist simultaneously and there is no continuous transition between them. Finally, a pyramidal configuration {5, 1} appears near α ≈ 1.19 and undergoes the usual pyramidal dimensional transition at α_c ≈ 1.59, the value predicted by our analytical model (Appendix A). Close to the critical point of the continuous transitions from three- to two-dimensional systems, a power-law behavior of the order parameter is detected once again, i.e. ⟨y⟩ ∝ (α_c − α)^γ. In the same manner as in the 2D case, Figure 9 shows the dependence of the power-law exponent γ on the logarithm of α_c − α. It turns out that in a close vicinity of the transition point the critical exponent γ = 1/2 does not depend on the screening strength κ. Deviations from this value occur when the departure of α from its critical value reaches the third decimal. Just as in the two-dimensional case, the value of the critical exponent is lower for systems with stronger inter-particle potential screening, and it drops as low as γ ≈ 0.35 for κ = 3. Essentially the same behavior of the exponent γ is observed in larger three-dimensional systems where dimensional transitions take place, be they pyramidal transitions or transformations of any other type. As the number of particles grows, more and more stable states emerge and, as a consequence, the ⟨y⟩ = f(α) graphs become convoluted and somewhat difficult to study. An illustrative example is given in the inset of Figure 10, where the behavior of the order parameter in the stable states of the 20-particle Coulomb system is presented. We can still see a few distinct continuous phase transitions in the vicinity of α ≈ 2.26; however, the different lines become hardly distinguishable at lower values of α. In very large systems, the values of ⟨y⟩ for all metastable states lie virtually on the same line, as illustrated in Figure 10 with a 100-particle cluster and κ = 0. It is also worth discussing the structural evolution of Yukawa clusters confined by traps with a prolate equipotential surface. In our model, this effect is achieved by lowering the value of the anisotropy parameter α towards zero. In that way, elongated clusters are formed, with low potential energies and high values of ⟨y⟩. Consequently, in order to study this type of structural transition, a new order parameter must be defined. We choose to rely on the root mean square of the distance from the y-axis: ⟨ρ⟩ = [(1/N) Σ_i (x_i² + z_i²)]^{1/2}. As Figure 11 shows for N = 3-6, the dependencies of the order parameter ⟨ρ⟩ on the anisotropy α are not smooth and in some cases feature discontinuities. We conclude that in fact there is no direct transition from three- to one-dimensional configurations. Instead, the system is first transformed into an elongated 2D zigzag pattern, and only later does the 2D → 1D structural transition take place. With that in mind, it is no surprise that the values of the critical parameters α_c found by lowering α are the exact inverses of those determined in Subsection III A. Large three-dimensional clusters in prolate traps become one-dimensional through a mechanism which seems to be universal for all values of N used in our modelling. At first, the system is squeezed and elongated until the particles arrange themselves into the shape of a double helix. As α is lowered further, the number of helical turns decreases, until the helix unwinds and the cluster becomes a two-dimensional zigzag configuration. The 2D system then undergoes the usual zigzag transition with the power-law behavior near the critical point. IV.
CONCLUSION Confined Yukawa clusters are among the physical systems where simple inter-particle interactions lead to the emergence of complicated patterns and spontaneous ordering. In this article, we present our findings from numerical and analytical studies of two- and three-dimensional clusters confined by asymmetric parabolic traps. We confirm that dimensional transitions from oblate three- to two-dimensional systems, as well as from planar to linear configurations, can be induced by changes in the anisotropy of the confinement α and in the screening strength κ. On the other hand, there are no direct transitions from three- to one-dimensional systems in prolate harmonic traps; two-stage transformations take place instead. The critical value of the anisotropy parameter in general grows with the screening strength κ. The growth is steepest for small values of κ and almost saturates for large ones. In the close vicinity of a dimensional phase transition, the order parameter ⟨y⟩ exhibits a power-law dependence on the control parameter, be it α or κ. In all cases studied here, the critical exponent is found to be universal and equal to 1/2, which is consistent with the general theory of second order phase transitions. However, the value of the power-law exponent turns out to be very sensitive to deviations of the control parameter from its critical value. Far from the critical point, the exponent attains lower values in systems with stronger screening and a shorter range of the inter-particle interaction. Appendix A: Analytical values of α_c In a few of the simplest cases of high symmetry, when the total energy of a cluster after a transition depends on a single generalized coordinate, the values of the critical anisotropy parameter α_c can be identified analytically. These are the transitions in the 3-particle 2D system, the 4-particle 3D cluster and all of the pyramidal transitions. Consider the N-particle three-dimensional system undergoing a pyramidal 3D → 2D dimensional transition. Immediately after the transition, the planar cluster consists of n = N − 1 particles positioned on the circumference of a circle with radius R, and a single particle in the center of the confinement. The 2D cluster lies in the (xz) plane. The total potential energy of the system can be expressed as U₀ = ½nR² + nf(n)/R + n/R. The second term here represents the Coulomb interaction energy of the n particles positioned on the circle with radius R, thus forming a regular polygon, while the last term accounts for their interaction with the central particle. The function f(n) depends only on the number of particles and is f(n) = 1/4 + (1/2) Σ_{m=1}^{n/2−1} 1/sin(mπ/n) if n is even, and f(n) = (1/2) Σ_{m=1}^{(n−1)/2} 1/sin(mπ/n) if n is odd. (A7) Values of the function f(n) and the corresponding critical parameters of the pyramidal transitions are collected in Table I for all 3D clusters with this type of structural transformation. We see that, in general, α_c slightly decreases with N in the range of 5-8 particles. In the two-dimensional case of N = 3 particles, the system forms a triangular cluster. During the structural transition, one of its particles is pushed in between the others, thus forming a linear structure. This case is basically a generalization of the pyramidal transitions to two dimensions. Therefore, Eq. (A7) is still valid and we find the critical anisotropy to be α_c = √(12/5), which is exactly the value observed in our numerical modelling. A slightly different mechanism of transformation is observed in the 3D → 2D transition of the highly symmetrical N = 4 particle cluster. The final configuration is a 2D square, with potential energy U₀ = ½NR² + Nf(N)/R. (A8) Minimization of (A8) by solving ∂U₀/∂R = 0 in turn gives R = f(N)^{1/3}.
Right before the transition, the two particles sharing a common diagonal in the final square are elevated by a distance δy above the (xz) plane, while the other two are located at the same distance below it; ergo the energy of the perturbed system is

U_p = \frac{N}{2}\left(R^{2} + \alpha^{2}\delta y^{2}\right) + \frac{4}{\sqrt{2R^{2} + 4\delta y^{2}}} + \frac{1}{R}.

By setting δU = U_p − U_0 = 0 to leading order in δy, we again get

\alpha_c^{2} = \frac{\sqrt{2}}{R^{3}} = \frac{\sqrt{2}}{f(4)} = \frac{4\sqrt{2}}{1 + 2\sqrt{2}}, \qquad \alpha_c \approx 1.22.

This value is exactly the same as that found by our numerical procedure.
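The pyramidal critical anisotropies can also be checked numerically. The short script below is an illustration rather than the authors' code; it assumes the interaction sum f(n) of equation (A7), the equilibrium ring radius R³ = 1 + f(N − 1) that follows from minimizing U_0, and a two-mode (center plus ring) out-of-plane stability condition α_c² = N/R³, which reproduces the values quoted in the text (√(12/5) ≈ 1.55 for N = 3 and ≈ 1.59 for the {5, 1} pyramid).

```python
import numpy as np

def f(n):
    # Interaction sum for n unit charges on a ring of unit radius (regular polygon),
    # normalized so that the ring-ring Coulomb energy is n * f(n) / R; cf. Eq. (A7).
    m = np.arange(1, n)
    return 0.25 * np.sum(1.0 / np.sin(m * np.pi / n))

def alpha_c_pyramidal(N):
    # Equilibrium ring radius with one particle at the trap center: R^3 = 1 + f(N-1),
    # from dU0/dR = 0 with U0 = (N-1)/2 R^2 + (N-1)/R + (N-1) f(N-1)/R.
    R3 = 1.0 + f(N - 1)
    # Assumed instability condition of the softest out-of-plane mode: alpha_c^2 = N / R^3.
    return np.sqrt(N / R3)

# N = 4 is excluded: its square configuration follows the different mechanism above.
for N in (3, 5, 6, 7, 8):
    print(N, round(alpha_c_pyramidal(N), 3))
# -> 3: 1.549 (= sqrt(12/5)), 6: 1.589, and a slow decrease over N = 5-8,
#    matching the trend described for Table I.
```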
2012-05-11T13:54:35.000Z
2012-05-11T00:00:00.000
{ "year": 2012, "sha1": "22e388ec7a34907ac1d2ddb41edd88bdc247d343", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1205.2524", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "22e388ec7a34907ac1d2ddb41edd88bdc247d343", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
5503454
pes2o/s2orc
v3-fos-license
Quantized visual awareness

The proposed model holds that, at its most fundamental level, visual awareness is quantized. That is to say that visual awareness arises as individual bits of awareness through the action of neural circuits with hundreds to thousands of neurons in at least the human striate cortex. Circuits with specific topologies will reproducibly result in bits of visual awareness that correspond to basic aspects of vision like color, motion, and depth. These quanta of awareness (qualia) are produced by the feedforward sweep that occurs through the geniculocortical pathway but are not integrated into a conscious experience until recurrent processing from centers like V4 or V5 selects the appropriate qualia being produced in V1 to create a percept. The model proposed here has the potential to shift the focus of the search for visual awareness to the level of microcircuits, and these likely exist across the kingdom Animalia. Thus establishing qualia as the fundamental nature of visual awareness will not only provide a deeper understanding of awareness but also allow for a more quantitative understanding of the evolution of visual awareness throughout the animal kingdom.

INTRODUCTION

Neuroscience has made great strides in understanding the structure and workings of vertebrate brains. Nowhere is this more evident than in describing the functional architecture of the mammalian, and more specifically the primate, visual sensory pathway and cortices. Over the past century, neuroscience has evolved from rudimentary understandings of neurons to investigating the nature of visual awareness and the neural correlates of consciousness (NCC). The NCC are defined as the minimal neural activities (circumscribed by the neural circuits and centers involved) required to generate a conscious experience (Koch and Braun, 1996; Tononi and Koch, 2008). Although great advances have been made in identifying the circuits and centers that process specific aspects of vision, it is still not clear how the activity of these circuits and centers generates our inner visual experience. Much research indicates that a high level of integration is required to generate our subjective experience of vision, with many of the identified centers prodigiously interacting with each other (Edelman et al., 2011). This level of integration corresponds well with our daily experience of the outer world, since we humans have a holistic experience of the outer world. That is to say that our inner subjective experience is not fragmented but completely integrated, with depth, color, and motion all embedded within our overall visual experience. Indeed, this holistic-level experience is so common that it is hard to conceptualize anything different. I believe our intimate connection with our personal visual experience has biased our approach in thinking about vision and the questions that are currently being asked about visual awareness in neuroscience. For example, there is an assumption that visual awareness only exists at the level of our complete human experience, but few current researchers have asked the question of whether awareness is quantized or can exist at a smaller level independent of our overall visual experience. Many in the past have discussed the idea of quantized visual awareness (perceptual atoms), and it is clear that philosophers as far back as Hume and Descartes were considering concepts related to the questions proposed above (Julesz and Schumer, 1981; Garrett, 1997).
The reason these philosophical assertions have persisted is that the concept of quantized awareness is in line with what we know about nature. Indeed, if we take the wider perspective of scientific discoveries over the past few centuries, it is clear that nature operates on this principle of quanta, and this applies to more abstract forms of nature like energy (Bohr, 1913; Planck, 1914). All forms of energy and matter in the universe appear to be quantized. In other words, there is some smallest unit that still retains the qualities associated with whatever form one studies. Examples of this are found all around us and include atoms (elemental matter), cells (life), and photons (energy). Given that this is a general principle of nature, it stands to reason that the same should apply to the natural phenomenon of visual awareness. Another way of stating the same is "Why should visual awareness be the exception to the rule?"

THEORY

Although we tend to think of information in a symbolic way, in nature information is represented as structure or gradients (e.g., electrochemical). Given that the brain is a natural system, it is likely the information that leads to a visual experience has a structural component. In the field of the philosophy of mind and in psychology, it has become routine to label inner subjective experiences, like seeing the color blue, as qualia (Searle, 1997; Edelman et al., 2011). Thus, any personal experience of the outer world would include a great many qualia describing the various aspects of the scene in which one finds herself/himself. The hypothesis proposed here states that at its most fundamental level, visual awareness is composed of quanta of awareness or qualia 1. Each one of these qualia is produced by neural circuits comprised of hundreds to thousands of neurons, and it is the unique topologies of these circuits that result in distinct, specific, and reproducible qualia 2. Thus, a neural circuit with a specific topology will reliably reproduce the same bit of awareness that corresponds to perceiving the color blue, while another topology results in perceiving the color red. The physical manifestation of a quale is not the neural circuit itself but the electromagnetic field (EMF) produced by the active neural circuits. Each unique neural circuit should produce a distinct EMF pattern, and it is the EMF pattern that is the physical aspect of a quale. A corollary to this hypothesis is that the production of qualia is not dependent on the material making up the circuit itself, and therefore it should be possible to make synthetic qualia in the laboratory by designing circuits that mimic the topology of circuits identified in primate brains that produce qualia. This corollary marks a significant break with philosophical proposals made in the past about perceptual atoms or qualia (Julesz and Schumer, 1981; Llinás and Churchland, 1996). The contribution to our overall visual experience of each quale is small. A metaphor used in a previous communication is that we can consider each quale as a pixel in a computer screen (Escobar, 2011). The big difference is that a quale does not represent a specific color but awareness of the color itself. Moreover, qualia are so small compared to our overall visual field that they are hard to experience individually.
Our subjective visual experience requires the integration of a large number of these qualia, and this integrated state is the level of experience that correlates most closely with our common everyday vision. Although it may appear that this hypothesis requires a multiplicity of different types of qualia, it is likely a few diverse types could be used to create a great number of different visual states. For example, we might only need three color qualia to synthesize all perceived colors, in the same way red, blue, and green pixels are used to create a variety of colors on computer screens. Besides color qualia, there must also be qualia types that correspond to different aspects of vision. For instance, there are likely qualia that produce the sensations of depth, motion, or orientation. Once again, varying the number and combination of these few qualia types could create a large variety of experiences. The ideas described in the previous paragraph parallel the structure and function of biological systems. A countless variety of proteins in biological systems with wide-ranging properties results from covalently linking varying numbers of the same twenty amino acids. The complex and highly structured tissues and organs of our bodies are created from a limited set of approximately 200 cell types. Recombining four distinct nucleotides into diverse nucleic acid sequences creates the immense number of genes found in biological systems.

Footnote 1: The term quale (pl. qualia) is associated with internal perceptions of outer phenomena. For example, the perception of the color blue in your visual field would be considered a quale. Although I maintain this general definition for qualia in this article, I am redefining the scope of a quale to be a quantum of awareness. Included in this redefinition is the assumption that the contribution of a quale is increasingly small compared to our overall visual experience, and that a patch of blue in our field of view would require the integration of many qualia for this percept to be created.

Footnote 2: The estimate for the number of neurons participating in a circuit that creates a quale is based on estimations of the volume of an ocular dominance column. The cerebral cortex is about 2 mm thick (Hubel et al., 1978), and the average dimensions of an ODC are 1 mm (Cheng et al., 2001) by 0.1 mm. The latter dimension is given due to the significant scattering of receptive fields with the movement of electrodes by as little as 0.1 mm. I am assuming neural cells with a cell body diameter of 20 μm (pyramidal cell) and, to simplify, assuming the cells are shaped like cubes.

Footnote 3: It is clear that many of the philosophical proposals that were previously put forth suffered from a dearth of the knowledge of the neuroanatomy and neurophysiology available today about the visual cortices of primates. For instance, the further back one goes in time, the more one can see that philosophers struggled with the question of whether our human awareness was connected to the physiology of the brain or perhaps existed in a state apart from the brain. It has become obvious that the ideas of Patricia Churchland and others are closer to the true state of the matter, in that it appears the functioning of the brain results in awareness. More correctly, it is the activity of certain neural circuits that is visual awareness. The physiological activity and the aware state are one and the same: two different aspects of the same process.
The reason this approach is employed at all levels in biological systems is that it allows for the production of a vast number of complex structures from a limited set of building blocks. This maximizes structural diversity with a minimal investment of energy. Given the constantly changing, highly diverse environments we encounter, it seems likely that the same evolutionary pressures that have produced the complex structures of our bodies would also select for a visual system that could represent a vast number of possible environments from a limited set of qualia building blocks. Not all neural circuits produce qualia. I believe that most do not, with clear examples being the circuits that produce automated responses like the knee-jerk response. Qualia require circuits with specific topologies containing hundreds to thousands of neurons, and even then not all such circuits result in qualia. We are born with a surplus of neurons and circuits in our brains. Circuits that are activated through our experience are retained while those that are not are diminished. Gerald Edelman has described this process as neuronal group selection (Edelman, 1992; Edelman et al., 2011). This concept must apply to qualia, and we can say that neural circuits producing qualia that effectively describe our environment and aid in our survival will be enhanced and maintained, while those circuits that do not will be degraded. Although I do discuss ideas about how qualia are initially integrated in order to understand how this qualia proposal fits in with what is known about the circuitry of primate visual cortices, the process of binding various aspects of vision into an overall visual experience is not the focus of this paper. There are various schools of thought on how different attributes of what we perceive are bound together; for example, the ideas of Crick and Koch on temporal synchronization (Crick, 1995; Llinás and Churchland, 1996; Crick and Koch, 2003; Edelman et al., 2011). This issue of binding, however, is altogether a different question than whether or not visual awareness is quantized or whether individual quanta of awareness can serve as fundamental units of visual awareness. The model proposed here has the potential to shift the focus of the search for visual awareness to the level of microcircuits. Obviously, this would have implications for the fields of neurobiology, psychology, and philosophy of mind, but beyond these the idea of quantized awareness could open the doors to a quantitative discussion of the evolution of awareness. The focus is no longer the maximally complex central nervous systems of mammals or primates but perhaps any organism that displays these microcircuits (no matter how simple the organism). In other words, the discussion of awareness becomes much more expansive since we can look for and identify certain neural circuit topologies throughout the animal kingdom and, in the process, create a phylogeny of visual awareness.

EXPERIMENTAL SUPPORT

Support for the existence of qualia is given by the detailed structure of the cerebral cortex itself. When probed at microscopic scales, we find the cortex is composed of a myriad of local circuits that are communicating with each other. Although many centers have been identified in the visual cortex, each of these is composed of many microcircuits that process information locally and then send off their data to other centers or communicate with other local circuits.
Ramón y Cajal (1899) and other researchers observed this architecture over a century ago. More recently, the work of David Hubel and Torsten Wiesel demonstrated the presence of a large number of highly organized neural circuits they referred to as ocular dominance columns (ODCs; Hubel, 1959; Wiesel, 1959, 1968). These columns are packed tightly together in the part of the visual cortex known as the primary visual cortex (V1 or striate cortex), and each corresponds to a specific location in our visual field (Hubel et al., 1978; Cheng et al., 2001; Adams et al., 2007). V1 is the first cortical structure to receive visual information, and it is believed to operate at a rudimentary level in generating vision. Despite its basic role, however, V1 is thought to process the initial stages of various aspects of vision like color and motion. For example, the complex cells found in ODCs respond most actively when a line of a specific orientation moves through their corresponding receptive field. Moreover, Hubel and Wiesel identified areas of V1 known as blobs that respond strongly to colored stimuli (Hubel, 1983). The qualia model proposed here proposes that these well-known and described ODCs are more than way stations for visual information and serve as the seat of visual awareness. That is to say that activated ODCs found in V1 of several primates (including humans) produce bits of awareness corresponding to color, motion, orientation, or depth, and that the highly organized structure of V1 itself allows for these qualia to be appropriately mapped into our overall visual awareness (Adams et al., 2007). Most importantly, these bits of awareness can exist independently of whether they are integrated into a conscious visual experience. This proposal forces us to think about awareness in a fundamentally different way because we must consider that it is possible to have bits of awareness within our striate cortex that are independent of visual perception. There seems to be a contradiction here: how can there be any visual awareness that is independent of our conscious experience? Most current models that aim to explain visual awareness do not allow for this condition. However, there is a form of visual awareness that is not well understood and seems to demonstrate these properties in humans. This odd case of perception was first described in the 1970s by Weiskrantz and others and is known as blindsight (Weiskrantz et al., 1974; Koch and Braun, 1996; Milner, 1998; Tong, 2003). Blindsight occurs when lesions to V1 completely eliminate conscious visual perception. Individuals with Type 1 blindsight report they can no longer perceive visual stimuli, including seeing motion, color, shapes, or any other visual cues. A typical experiment would be to ask a patient to choose the direction of a right- or left-moving object. Strangely enough, when blindsight patients are probed under forced-choice conditions, they respond with the correct answer regarding visual stimuli at a frequency greater than chance (Azzopardi and Cowey, 1998). Indeed, these patients are often surprised by their success rate in these forced-choice experiments. Thus, it appears that these individuals do retain a form of awareness that is not directly tied to their experienced perception. Although blindsight is most often associated with activity in extrastriate cortices, it nonetheless demonstrates that a form of awareness can exist independent of conscious experience.
Lamme (2001, 2003) has proposed a model for the production of conscious visual experiences. In his model, he separates the feed-forward sweep (FFS) of the geniculocortical pathway from the recurrent processing that follows the activation of higher visual centers. The FFS is the rapid (∼50 ms) progression of neural signals through V1 and onto higher centers like V3, V4, V5, and the inferior temporal cortex (ITC). Recurrent processing occurs only after a region is activated by the FFS (100-150 ms). According to Lamme, the FFS is an unconscious process while recurrent processing (for example, from V4 or V5 to previously activated centers like V1) results in phenomenal consciousness. Phenomenal consciousness relates to an experience that is not fully conscious. An example is given by Block when he describes the sensation one has when the motor of a refrigerator shuts off and one has the impression that it has been on for a while previous to that moment (Block, 1996). In other words, the experience was there but not fully accessible to consciousness (access consciousness). I agree with Lamme's assertion that recurrent processing results in phenomenal awareness, and I would even agree with Lamme that the FFS is unconscious as we tend to think of consciousness. The model I am proposing here, however, states that awareness is produced by the FFS but in the form of individual bits of visual awareness, unintegrated and not experienced as a whole. The model holds that these individual bits of visual awareness (qualia) exist independently of whether they are integrated into a phenomenal conscious experience. Moreover, it is the integration of a subset of all of these independent qualia through recurrent processing that results in phenomenal consciousness. This proposal requires us to think about consciousness in a manner that is bottom-up instead of top-down. Here we start with bits of awareness (qualia) uniting to create a larger, more comprehensive form of awareness that we can think of as a high-level primate visual experience. Cases that support this perspective come from studies of patients with visual agnosia. These patients have lost the ability to recognize familiar objects and, in some cases, lose the ability to recognize even simple shapes. Milner reports the case of patient D.F., who suffered from a severe form of agnosia resulting from carbon monoxide poisoning. D.F. retained an intact V1 and demonstrated an outstanding level of visual acuity, as evidenced by her ability to distinguish between gray patches and a fine pattern of dots (Milner, 1991). Also, Zeki reports the case of agnosia in a stroke victim resulting in a severe lesion of the prestriate cortex (V2) while retaining an intact V1 (Zeki, 1992). This individual could draw local features of objects (corners, line segments of specific orientation) but did not retain the capacity to understand what was drawn. This patient could even draw structures as complex as St. Paul's Cathedral in London and yet not understand the figure he had just rendered. Both of these examples demonstrate that an intact V1 yields awareness of local, small features, as would be expected if activated ODCs were creating qualia associated with specific locations of the visual field. Visual perception did arise in these patients, but this perception seemed to manifest itself as independent bits of visual awareness corresponding to local features.
Since V1 remained intact in these patients, we can infer that the integration of these bits of awareness must occur through the action of extrastriate cortices. It is well established that extrastriate centers of the visual cortex play a role in processing high-level visual information: V5 is associated with motion processing, V4 plays a role in color and form processing, and V3 activity relates most closely to dynamic form processing (Zeki, 1978). In the model proposed here, these centers are synthesizing these high-level experiences by recruiting and integrating individual qualia produced in V1. A simple way of thinking of this is that V1 provides the palette that is used by the extrastriate centers (V5, V4, or V3) to paint our perceptive canvas. In addition, when these same centers emphasize or diminish the contribution of certain V1 qualia, it is possible to focus our perception on different aspects of the scene we see. There is a significant level of recurrent communication that occurs between V1 and areas like V3, V4, and V5. It is likely that the back projections from these extrastriate regions play a role in integrating the qualia produced in V1. V1 activity is modulated by extrastriate centers as monitored by functional magnetic resonance imaging (fMRI; Kosslyn et al., 2001;Tong, 2003). Mehta (2000) has shown that attentional modulation of neural activity occurs after the initial transient response of V1, and that recurrent activation of V1 occurs after attentional effects of V4 and the ITC. An interpretation is that feed-forward pathways from V1 to extrastriate cortical regions supply a surfeit of information and that recurrent pathways play a role in selecting and integrating the bits of awareness used for conscious perception (Figure 1). This ties in well with the work of Logothetis and others demonstrating that cells of the macaque monkey's V1 do not respond well to changes in perception due to binocular rivalry (Leopold and Logothetis, 1996;Sheinberg and Logothetis, 1997). These studies present different and competing images to either eye of primates that have been trained to respond when they see one image or another. The primate pulls a lever or otherwise indicates when it sees one image over the other while electrodes indicate the level of activity of specified cortical cells. In this way, researchers can tell when the primate's perception has shifted and can correlate this shift to changes in the activity of individual neurons. Logothetis and others have shown that ∼90% of ITC cells, ∼40% of the cells of V5, ∼40% of V4 cells, and <20% of V1/V2 cells correlate to changes in perception. Many have interpreted these results to indicate that the macaque V1 is not playing a direct role in perception since it appears that so few cells respond to changes in perception. FIGURE 1 | V1 receives stimuli from the geniculocortical pathway as indicated by the arrows below. V3/V4/V5 corresponds to a generalized higher visual center and q1-5 are qualia produced in V1. (A) Qualia from V1 supply a surfeit of information in feedforward pathways (FFS) to these higher centers. (B) Extrastriate calculations select for output cells indicated by small circles. The choice of output cells is key for the selection and integration of specific qualia from V1. (C) Changing recruitment patterns of extrastriate centers allow for changes in perception -compare to (B). This can be monitored physiologically as shown by electrodes 1 and 2. 
The output of calculations in extrastriate centers manifests as output cells with specific recurrent pathways back to V1. Thus, we would expect the activity of cells in extrastriate centers to change more significantly than in V1, since the choice of output cells changes as calculation outputs and percepts vary [compare electrodes 1 and 2 in (B,C)]. In addition, V1 cells continue to receive input from the geniculocortical pathway, and this would moderate changes in the activity of V1 cells.

In macaques, integration of individual qualia may depend on changing extrastriate recruitment patterns and not on modulating the activity of individual ODCs within V1 (Kosslyn et al., 2001). Thus, ODC activity should not change in response to binocular rivalry, since ODCs only become important to high-level visual experiences as individual qualia are recruited by extrastriate centers that integrate these bits of awareness and create our overall perception. Referring to the painting example above, imagine that a painter is working on two canvases at the same time. The paints on the palette do not appear or disappear in a canvas-dependent manner. The paints remain on the palette even as the painter goes back and forth between canvases. The palette represents V1 activity. The extrastriate centers correspond to what is happening on the canvases. If you monitor activity at one of these canvases, you will notice that the activity comes and goes depending on whether the painter is working on that canvas. However, the activity of the palette remains constant and does not change with the choice of canvas (Figure 1). Binocular rivalry studies in humans using fMRI have shown a modulation of V1 in response to changes in perception. In humans, recurrent pathways may play a greater role in modulating the activity of ODCs, and perhaps a mix of changing recruitment patterns of extrastriate cortices and modulating ODC activity is used to accentuate the contributions of specific ODCs in a given percept (Polonsky et al., 2000; Tong and Engel, 2001).

DISCUSSION AND COMPARISON

In the "neurobiological theory of consciousness," Francis Crick and Christof Koch propose that the ∼40 Hz oscillations observed in the cerebral cortex are the means by which disparate bits of information are bound together and that this oscillatory process contributes to the formation of a conscious experience (Crick and Koch, 2003).
For example, the modulation of neural activity in higher visual centers like V5 or V4 in response to changes in perception seem to indicate an active role in seeing a stimulus while the lack of perceptual modulation in V1 activity in macaques indicates the striate cortex has little to no role in the visual experience (Leopold and Logothetis, 1996). As I have previously proposed, this result could be interpreted as changing recruitment patterns of extrastriate regions with V1 providing the basic units (qualia) for recruitment. A strong argument made by Zeki and Bartels (1999) against the necessity of V1 in direct conscious experience is the welldocumented cases of visual awareness arising in patients with blindsight. Since blindsight patients have significant lesions in V1, this special visual awareness (e.g., sensing fast motion) is thought to bypass V1 through subcortical pathways. I would contend that these studies indicate that some qualia may exist outside of V1 but that this is the exception to the rule and not the general case. For example, it is easy to imagine that a rapid pathway for generating the sensation of fast motion from external stimuli would yield a significant selective advantage to organisms that possessed it. The split second saved in such a case would elicit the fight or flight response all the sooner and could make a difference in surviving a predatory attack. It is possible there are other exceptions to the rule of qualia in V1 that yield other advantages but these are special in nature and not the general case. Unlike many of the studies that try to associate visual experience with extrastriate visual centers and beyond, Lamme has taken a different approach and argued that it is not the specific location as much as the direction of processing that matters. In his model, the rapid flow of information in the feedforward sweep (FFS) of the geniculocortical pathway is an unconscious process while the recurrent activation that occurs after an area is activated by the FFS creates at least phenomenal consciousness. This is in line with Mehta's work, which demonstrates that attentional modulation of neural activity occurs post the V1 FFS and is more closely related to the recurrent activation of V1 by V4 and ITC (Mehta, 2000). In addition, Lamme's model proposes that phenomenal awareness is gated by attention and that attention allows phenomenal consciousness to move into access consciousness (Lamme, 2003). As its name implies, this form of consciousness provides access (verbal and other motor control) to the items contained within it and it is what is most commonly understood as a conscious experience. Lamme's model supports the idea that there are different levels of consciousness. The model I am proposing here holds that there is at least a third level of "conscious" experience or awareness. This form of awareness is very simple and small compared to our overall visual experience but it exists as awareness whether it is integrated into a larger experience or not. These bits of awareness are found in large numbers in V1, although this does not preclude them from existing in extrastriate cortical regions. Each quale is so small in comparison to our usual visual experience that it may not seem to be a form of awareness and this coincides with Lamme's interpretation of the FFS being unconscious. 
Thus the FFS through V1 results in the production of qualia associated with specific aspects of the scene we see: color, depth, motion, and orientation at certain locations in our visual field. These are for the most part independent of each other and remain so until extrastriate centers, through recurrent processing, recruit and integrate them into phenomenal sensory experiences. This is an important point to make again. Individual qualia do not correspond to phenomenal consciousness. It is only after many individual qualia are integrated that phenomenal consciousness arises. Changes in recruitment patterns of V1 qualia by extrastriate centers result in different percepts being produced (e.g., binocular rivalry), and it is these percepts that are competing for entrance into access consciousness. In their theoretical paper, Zeki and Bartels (1999) argue for the existence of microconsciousnesses. This proposal parallels the idea of qualia put forth in this paper in that we are all postulating the existence of smaller forms of visual awareness that come together and contribute to the more complete form of vision we experience. Zeki and Bartels base their proposal on various features of visual processing, including the observation that several perceptual aspects like color, grating orientation, and motion arise at different times. These authors have shown that the asynchronous perception of these visual attributes (30-40 ms between each) causes test subjects to improperly associate visual cues that correspond to these visual attributes. They use these results, along with a number of human studies looking at achromatopsia and akinetopsia, to state that these attributes are being produced and perceived independently of each other. Therefore, each perceived visual attribute corresponds to an independent microconsciousness. Zeki and Bartels believe these microconsciousnesses arise from the processing that takes place within each system devoted to processing these specific attributes. For instance, the system devoted to processing color begins with the blobs of V1 and continues specifically through the thin stripes of V2 and on to V4. Processing within this and other visual systems is known to be hierarchical, in that the cells activated by the FFS implicitly contain the information of the cells feeding into them. One consequence of this multistage integration is that cells further along in the system (compare V4 to V1) have much larger receptive fields.
Another important difference between the two hypotheses is that Zeki and Bartels never specifically define the neural structures associated with producing microconsciousnesses. A core aspect of the qualia hypothesis (as in nature) is that information always corresponds to a definite structure or gradients of some form. Qualia are described here as the bits of awareness produced by neural circuits with specific topologies. These circuits exist in abundance in the striate cortex and are known as ODCs. By defining the basic units of awareness in this way we create a condition that allows for experimentation and a testing of this hypothesis (see the section on proposed experiments). Furthermore, Zeki and Bartels differentiate between the neural pathways of the FFS and the lateral or reentrant circuitry back to V1 (Figure 2). Lateral circuitry being the interconnections between the various centers (e.g., V3, V4, V5) or different areas of the same center. The authors state that the FFS connects cells that process similar types of information they term "like with like" activation. Thus, cells of V1 tuned to movement of a specific direction will activate cells further on in the FFS that are tuned to movement in the same direction. This "like with like" activation clarifies why the FFS does not integrate the disparate qualia produced in V1 but leaves qualia independent at this stage of processing. In contrast, lateral or reentrant circuitry is more diffuse in its activation and results in the physical integration of unlike stimuli (Shipp and Zeki, 1989 -both). The authors call this integrative binding and they propose that this is the means by which microconsciousnesses are brought together. The reentrant activation of V1 allows for the low-level integration of qualia. However, the function of lateral pathways within and between higher visual centers is to calculate which cells of V1 are activated by the higher centers and not for binding microconsciousnesses. Color constancy is a case in point. In this case, the "like with like" activation proceeding through the FFS would need to be modified since the perceived color for a given location of the visual field is different than that indicated by the light entering the eye. The "cross-talk" occurring in the higher centers through lateral pathways allows for these calculations to be made and activates the appropriate reentrant pathways to V1 from V4. To demonstrate these points, I will describe a hypothetical visual stimulus in which a green colored object is darkened by FIGURE 2 | (A) Arrows indicate the feedforward sweep (FFS) moving from the V1 onto extrastriate centers. FFS connectivity is of the "like with like" form. (B) Back arrows indicate recurrent activation of V1 from extrastriate centers: V3, V4, V5, and ITC. Circular arrows indicate lateral connectivity within each center. X corresponds to the cross-talk (lateral circuitry) between the centers. Recurrent and lateral connectivity is of the "diffuse" form as indicated by Zeki and Bartels. a shadow. Imagine that combining just three types of qualia produces all perceived color: red, blue, and green. In the process of visualizing a scene, green wavelength light impinges at a certain location of the visual field and a green quale is produced at the corresponding location of the striate cortex. This results from the activation of the respective areas of V1 (ODCs in blobs) by the visual processing pathway leading from the retinas to the occipital cortex. 
Next, the color information is passed through the FFS up through the thin stripes of V2 and on to V4. In V4, lateral processing indicates that this location is in shadow through calculations of the illumination ratios of the scene (Figure 3). The output of these calculations is the activation of a specific cell or set of cells in V4 with the appropriate reentrant neural pathways leading back to V1. These pathways accentuate the original green qualia in V1 and also modulate the activity of other local circuits that produce green, red, and blue qualia. The extra qualia combine to form the equivalent of white light and, in combination with the original green quale, produce a lighter shade of green 4. The proximity of these qualia within V1 (all corresponding to the same attribute, i.e., color) and the modulation of their activity allow for these qualia to be bound at a low level. As stated above, reentrant pathways are diffuse, and this allows unlike qualia to be physically integrated as part of the same activation process. Due to the well-known structure of V1, the spatial relationship of all qualia is implicitly encoded within each quale produced, and proximal qualia can be integrated at a local level since they represent close points in the visual field. Note that this spatial synchronization of qualia still allows for the temporal asynchrony observed by Zeki and Bartels, since different attributes of vision (motion, grating orientation) can arrive independently at V1.

Each output cell will have a specific feedback pathway back to V1. Activation of this feedback pathway will modify the activity in V1 in specific ways and allow for the local integration of proximal qualia as part of the same feedback process. The modulation of V1 circuit activity results in qualia achieving the correct oscillatory range, or falling out of the correct range, for binding to occur.

Keliris et al. (2010) have recently published a paper studying perceptually correlated modulation of V1 activity. By using binocular flash suppression they were able to monitor the activity of neurons in macaques and correlate these modulations of neural activity to changes in perception. Monkeys in these studies were shown gratings with orthogonal orientations to either eye at the same locations of the visual field. Similar to previous binocular rivalry studies with macaques, the authors found that approximately 20% of V1 cells demonstrated changes in their spiking frequency as a function of perceptual changes. This level of modulation is what we would expect if we were trying to modify a percept and not necessarily create a new one, as described above in the example of color constancy. Reentrant pathways modulate the activity of V1 and either enhance or suppress the spiking frequency of cells. It is possible that these modulatory effects push circuits into or out of the correct oscillatory range (Crick and Koch; see above) for binding to occur. As circuits enter a suitable oscillatory range, their respective qualia are bound to the percept. Conversely, when circuits fall out of the appropriate oscillatory range, their corresponding qualia are removed from the percept. Thus, modulation of activity in V1 leads to the integration and binding of the specific qualia to be included in a percept. At the point that a threshold number of qualia are bound, phenomenal consciousness arises. From this description we can see that phenomenal consciousness is quantitatively and qualitatively distinct from the individual qualia contained within it.
Beyond the large number of qualia that comprise phenomenal consciousness, this form of consciousness also includes an implicit awareness of the spatial and temporal relationships of the qualia that contribute to it. In the 1980s Bernard Baars proposed that consciousness arises through the action of the global workspace. He originally presented this idea in his book A Cognitive Theory of Consciousness (Baars, 1988). As he described it, the global workspace serves as a centralized location from which disparate information processors can retrieve information and to which they can broadcast their output. Although the information processors are not conscious in and of themselves, the global workspace is conscious and it contains all the diverse elements of consciousness. Baars' cognitive psychological approach to describing consciousness has been extended by others (Dehaene and Naccache, 2001; Dehaene et al., 2006) to include models that reproduce some proposed properties of a global workspace. For example, Dehaene et al. (2006) specify that activation of associative areas like the prefrontal and anterior cingulate cortices creates a reverberating neuronal assembly with a long-lasting reverberation that extends temporally past the initial stimulus. In addition, the authors define states (subliminal, preconscious) that are precursors to access consciousness (global workspace) but whose content does not necessarily enter access consciousness. Although the global workspace model addresses many of the resonant, associative, and other properties thought to be part of access consciousness, it does not address what it is about these processes that results in conscious awareness. Consciousness is attributed to the global workspace, and when unconscious information processors supply their contents to the global workspace (access consciousness) this information becomes conscious. But what makes the global workspace conscious? The qualia model avoids this problem because it proposes that the information reaching access consciousness is already aware or conscious (qualia, phenomenal consciousness). The question ceases to be "How does information become conscious as it moves from non-conscious preprocessing centers to access consciousness (the global workspace)?" and becomes "How are smaller, independent forms of consciousness (qualia, phenomenal consciousness) incorporated into access consciousness?" Most importantly, the quality that is changing here is whether the material contained by a conscious state is reportable. Dehaene and others maintain that the belief in the existence of consciousness without the ability to report is based on the illusory "intuition that visual awareness includes a richness of content that goes beyond what we can report" (Dehaene et al., 2006). However, it is clear from cases of locked-in syndrome that conscious states can exist without report. Individuals with total locked-in syndrome have lost all voluntary muscle control but remain conscious (Schnakers et al., 2009). Thus conscious states can exist without the ability to report, and reportability is not a requirement for consciousness. In biological systems, we find basic units coming together to form more complex systems, which in turn contribute to systems of even greater complexity. These systems all contain top-down and bottom-up control mechanisms and at the highest levels result in very sophisticated processes that seem quite different from the properties of their basic units.
All organs of the human body adhere to these principles, and evolution has selected for all of these systems to operate in this way. The qualia model proposes that awareness operates in a similar fashion. Here we have the basic units (qualia) coming together to form the more complex phenomenal consciousness, which itself contributes to access consciousness. Access consciousness has the unique property of reportability due to its access to motor centers, but this does not negate the existence of independent and simpler forms of visual consciousness or visual awareness in previous steps along this pathway (Figure 4).

FIGURE 4 | A conceptual map of the relationship of qualia, phenomenal consciousness, and access consciousness. Individual qualia (small circles) produced in V1 are bound together into phenomenal consciousness (larger light gray circles) by achieving the correct oscillatory range. Note the phenomenal conscious state shown below in this figure excludes many qualia that are not incorporated into this state (small circles to the left). Lamme and others postulate entrance into access consciousness (largest dark gray circle) is gated through attention, and it is likely these phenomenal states (other light gray circles) compete for entrance into access consciousness (e.g., Necker cube). As phenomenal consciousness becomes part of access consciousness, it joins with the many qualia (small circles to the right) that are already part of access consciousness. Models describing the global workspace (Dehaene et al., 2006) include properties (resonance between prefrontal areas and posterior visual centers) that address the mechanism by which visual phenomenal conscious states are integrated into access consciousness.

PROPOSED EXPERIMENTS

If qualia exist, their corresponding circuits must be observable in biological systems. I believe that qualia circuits or parts of qualia circuits have already been detected, but the appropriate theoretical framework has not existed to completely interpret these results. I have argued that qualia exist in the striate cortex, but they may exist in other areas throughout the brain. The microscopic scale that corresponds to the activity of a few hundred to thousands of cells is bordered by technologies used to study the activity of a few cells (electrodes) and larger regions of neural activity: positron emission tomography (PET), fMRI, and magnetoencephalography (MEG). The rate of advancement in the field of detecting neural activity is such that it will become possible in the near future to detect the activation and topology of individual circuits. The first step in this line of research would be to determine if there is indeed a recurring set of ocular dominance column topologies within the human striate cortex. Given recent advances in imaging technology, it should be possible to do this within the next decade if not sooner. For example, the Human Brain Project has produced a detailed map of the human brain by generating 7400 slices (20 μm thick) of a human brain and then imaging each through histological staining and microscopy. Slices were digitized and reassembled in silico to generate a cellular-level map of the brain (Amunts et al., 2013). This massive database is publicly available and might prove suitable for the fine-grained structural analysis required to identify repeating circuit topologies. Another example of a publicly available database is The Human Connectome Project (HCP).
HCP provides diffusion tensor data (DTI) of the axonal connections between centers of the cerebral cortex (Rosen et al., 2010). This project may prove useful in probing the connections between the higher visual centers and V1 in great detail. One desired product from these fine-grained studies is a list of the number and type of unique circuit topologies represented in the striate cortex. Given the existence of a recurring set of neural circuit topologies within the striate cortex, the next step would be to establish a correlation between specific visual stimuli and each of these recurring circuit topologies. An example would be to shine points of light of a certain color (red, blue, or green) at specified locations of a subject's visual field. It should be possible to determine the structure of activated circuits within the striate cortex by using techniques like high-field fMRI. Key to this line of research is the reporting of the observed color that would allow identified neural circuit topologies to be correlated to internal subjective experiences. Of special interest in these studies would be to see if combining the activity of two distinct neural circuits can produce colors. For example, can activating circuits previously associated with the colors red and green create the subjective experience of the color yellow? Although high-field fMRI would allow researchers to correlate specific circuit topologies to reproducible internal experiences (qualia), determining causation would require the use of transcranial magnetic stimulation (TMS) to precisely disrupt certain circuits implicated in producing qualia. This would allow researchers to see if eliminating the activation of certain circuits interrupts the production of the visual experience. Conversely, it would be interesting to investigate if the specific activation of ODC neural circuits (with TMS) of known circuit topologies would result in the subjective experience of exclusively one type of colored light. To demonstrate that these circuit topologies are found throughout the animal kingdom, these studies must be extended into model systems like zebra fish. Muto et al. (2013) have recently demonstrated that it is possible to visualize circuit activation through transgenic expression of GCaMP. This research group observed Ca 2+ transients produced in embryo zebrafish in the absence and presence of their natural prey paramecium. As a result, these researchers identified the specific circuit structure activated by the paramecium. In a similar fashion, it should be possible to determine if circuits with specific topologies are reproducibly activated in response to colored stimuli and to catalog the structures of these circuits. Studies in other model systems like mice are also producing descriptions of the detailed structure of visual circuits (Bock et al., 2011). In their recent paper, Bock et al. (2011) described the structure of activated visual cortex circuits through the combined use of light microscopy with the calcium indicator BAPTA 1-AM (OGB) followed by serial-section transmission electron microscopy (TEM). Through these techniques, Bock et al. (2011) generated a line-orientation preference map with detailed structural information about the neural cells and circuits within this section of cortex. In the same way, it should be possible to generate a color preference map that gives detailed structural information about the circuits activated by color stimuli. 
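One computational step in such a catalog, grouping reconstructed circuits by topology, can be sketched with standard graph tools. The snippet below is purely illustrative: the circuits are random stand-ins rather than reconstructed ODC wiring diagrams, and a Weisfeiler-Lehman graph hash is used only as one possible way to test whether two recovered circuits share the same topology.

```python
import networkx as nx
from collections import Counter

def topology_key(circuit: nx.Graph) -> str:
    # Hash that is identical for isomorphic wiring diagrams (WL hash),
    # so recurring topologies collapse onto the same key.
    return nx.weisfeiler_lehman_graph_hash(circuit)

# Stand-in "reconstructed circuits": small random graphs playing the role of
# ODC-scale wiring diagrams extracted from imaging data.
circuits = [nx.gnm_random_graph(30, 60, seed=s) for s in range(50)]

catalog = Counter(topology_key(c) for c in circuits)
print(f"{len(catalog)} distinct topologies among {len(circuits)} circuits")
```

In a real study the keys would be paired with the stimulus (e.g., red, green, or blue points of light) that reliably activates each circuit, which is the correlation step described above.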
The proposed qualia hypothesis predicts that at least some of the same circuit topologies activated by visual stimuli in humans should be found in various animal models, since these topologies are not species-specific. By creating a catalog of circuit topologies across many species and relating them to specific stimuli, we will create the foundation for understanding visual awareness and perhaps primary sensorimotor consciousness in general. It will also become possible to ask questions relating to the evolution of visual awareness and the distribution of qualia throughout the kingdom Animalia.

CONCLUSION

Over the past few decades, there have been numerous attempts to understand visual awareness through elaborate top-down mathematical models. For example, Giulio Tononi has borrowed concepts from the field of information theory and put forth the idea that we produce specific internal impressions by a process of making a large number of distinctions. "Not this but that" repeated many times results in only one possible answer (Tononi, 2008; Tononi and Koch, 2008). It may be that these ideas accurately describe the means by which we recognize percepts, but I believe this type of computational process must occur at later and higher stages of image processing than the production of qualia. Qualia are produced early and create the fundamental percept first. As these percepts enter access consciousness they can then be compared to previously experienced percepts, and this would require the types of distinctions that Tononi describes. So it may be possible that these two hypotheses are complementary and not mutually exclusive.
Indeed, this may be at the heart of the evolution of visual awareness. Paleobiological studies clearly show that early animals were small and much simpler than the most complex forms that exist today (Butterfield, 2007). Nervous systems arose early in the animal kingdom and likely had roles in movement and mechanical responses to stimuli. It is much easier to imagine how a simple form of awareness like a single quale could have arisen in early animals if we accept the idea of quantized awareness. Although V1 represents one highly structured way of organizing qualia, we should not expect that qualia would be arranged in the same way in other animals. Indeed, the striate cortex, as we have come to understand it, is found in only a subset of mammals. Thus the question of the presence or absence of qualia in any specific organism can be separated from the presence or absence of structures like V1. We have remained tethered to vision in our discussion of awareness because the visual system is the most thoroughly studied sensory system. If qualia exist for vision, however, they likely exist for other sensory modalities. For instance, the gustatory pathway gives us five distinct types of tastes that could easily correspond to individual qualia. These five are the common sweet, bitter, sour, salty, and umami. The primary gustatory cortex is composed of sections of cortex called the frontal operculum on the inferior frontal gyrus of the frontal lobe and the anterior insula of the insular lobe (Bushnell et al., 2007). Although these regions are not as well understood as V1, they may well contain neural circuits that produce qualia. Similarly, the somatosensory system responds to stimuli from various types of receptors: mechanoreceptors, chemoreceptors, thermoreceptors, and nociceptors. One can imagine either a one-to-one correspondence with qualia or perhaps a limited number of variant qualia for each. Although it may seem overly simplistic, a natural consequence of the model proposed here is that the complex, rich, and distinct experiences that we humans share arise through varying the number and combinations of a limited set of qualia types. This leads us to a starting point for the creation of a Table of Qualia (Table 1). The descriptions given in this paper are by necessity a first approximation, a starting point. With time and experimentation, the model described in this communication will be reshaped and modified. The true value of the qualia model proposed here is not that it has the final word in how visual awareness comes into being but that it fundamentally changes the questions that can be asked and the approach taken to study visual and other forms of sensory awareness.
Reliability Analysis for Aviation Airline Network Based on Complex Network

ABSTRACT: In order to improve the reliability of aviation airline networks, this paper presents an empirical analysis of the airline network structure of an aviation company in China from the perspective of complex networks; the calculation results for the statistical features and degree distribution of the network prove that it is a small-world network and a scale-free network. Four indicators, i.e. degree, closeness, vertex betweenness and flow betweenness, are utilized for aviation network centralization so as to distinguish the most appropriate method. The influence of nodes in the local network is measured through these indicators. The results show that vertex betweenness achieves the best aviation network centralization effect; specifically, the centrality degree reaches 95.87%. On this basis, the network reliability is analyzed, and it is discovered that when the two nodes with maximum degree or maximum betweenness are removed, the network performance is reduced by half. Finally, countermeasures are proposed for further improvement according to the results. In other words, the complex network method is feasible for analyzing the topological structure and statistical features of an aviation network. Based on this, a study is conducted on the network reliability and suggestions are proposed for optimizing the aviation network.

INTRODUCTION

An aviation network refers to an airline system constituted by airlines connected in a certain way in a district, serving as a basis for the production and development of the airline company. In studies applying network research methods to aviation networks, it has been found that aviation networks have the relevant statistical features of a "small-world network" (Guimera and Amaral, 2004; Guimera et al., 2005; Barrat et al., 2004). However, most studies (Barrat et al., 2005) focus on the analysis of the physical statistical features of the aviation network structure and the evolution of the overall topological structure, while only a few studies analyze route networks with social network methods (Porta et al., 2006). Since there are significant differences between aviation network nodes, it is particularly necessary to conduct comparative analyses of relevant nodes and studies on the centrality of aviation networks. The concept of network centrality can be traced back to ideas in applied statistics in the 19th century (Gaertler and Wagner, 2001). Typically, different centrality indicators are required for the centralization of different types of networks, and the multiple-centrality study method needs to be applied in combination with parameters. In China, some scholars adopt other theories and methods to study the centrality of aviation networks (Dang and Li, 2011). For instance, some scholars use the rank-size model to measure the air transport concentration degree so as to assess the position of hub airports; some scholars mainly adopt the dominant flow method, supplemented by the squared Euclidean distance method and distance-based cluster method, to analyze the level and change of major cities in China in the domestic passenger aviation network, based on air passenger statistical data; moreover, some scholars (Porta et al., 2006) employ the Geographic Information System (GIS) method to study the spatial pattern of the domestic aviation network airport system on the basis of air flow data.
With the development of the civil aviation industry in China, the air transport network keeps growing in scale. Nevertheless, it still suffers from imperfections, low reliability and low operational efficiency. On the airline company's side, while planning the aviation network, network planners basically conduct decision analysis according to experience. They only take the demand for a single airline as the primary indicator for assessing the necessity of launching that airline. Besides, they often select airlines that essentially duplicate the routes operated by other companies, while neglecting the network reliability and its overall synergistic effect. The safety and reliability of the aviation network exert an important impact on the market competitiveness and economic benefits of an airline company. Therefore, the aviation network should be planned in a systematic manner to improve its overall synergistic effect. A complex network method is proposed to analyze the topological structure and statistical features of the aviation network, taking China Southern Airlines (CSA) as an example. Based on this, a study is conducted on the network reliability and suggestions are proposed for optimizing the aviation network of CSA.

COMPLEX NETWORK PROCESSING METHODS AND RELATED RESEARCH

After the small-world model and the scale-free network model were proposed at the end of the 20th century, complex networks gradually became a research hotspot in different disciplines. In order to facilitate the effective study of complex networks, various research software packages have been introduced, such as Pajek, Ucinet, NetworkX and NetMiner 3. In this paper, Ucinet is used for the airline network. Ucinet is a social network analysis program developed by Steve Borgatti, Martin Everett and Lin Freeman, and is distributed by Analytic Technologies. For network analysis, Ucinet includes programs such as community discovery and region analysis, ego network analysis, structural hole analysis and so on. It also contains a large number of analysis programs, such as cluster analysis, multidimensional scaling, singular value decomposition, factor analysis and correspondence analysis, and role and status analysis, including structural, role and regular equivalence. In this paper, we take airline passenger flow data as samples, with cities as the network nodes, routes between cities as the network edges, and aviation passenger flow between cities as the mapping relationship between node pairs, to construct the air traffic network shown in Fig. 1. Based on the Ucinet software, two kinds of simulation systems were introduced (Hongguang and Liping, 2012): a deliberate targeting system and a random interference system were designed, and some simulation experiments were carried out. The structure diagrams of the air passenger flow network are plotted with the software (Dang and Li, 2010) and analyzed from the perspective of structural characteristics, degree distribution and network centrality. However, traditional research methods fail to properly identify the complexity of the spatial relations between airports in an aviation network. Centrality tests for network nodes are an important means for judging the importance of nodes in the network, adjusting the aviation layout and optimizing resource allocation, which is particularly significant for the safety of the aviation network.
EMPIRICAL DATA ACQUISITION AND PROCESSING

In this paper, the domestic and international flight data from CSA's data centre in 2010 are taken as samples. Usually, cargo flights are arranged at night, and the characteristics of cargo transport flights are different from those of passenger flights; in this article, cargo flights are not considered. According to the model, airports are taken as network nodes, direct airlines as network edges, and the number of navigable flights among airports as edge weights, which constitutes a weighted aviation network of CSA. Furthermore, an adjacency matrix (K_ij)_{n×n} (n refers to the number of nodes) is used to represent the aviation network, where K_ij refers to the number of flights from Airport i to Airport j. Due to data limitations, the network in this paper is an undirected one; that is, out-degree and in-degree are not involved.

STATISTICAL FEATURES AND DEGREE DISTRIBUTION

According to the statistics, there are 187 nodes and 1,245 edges. It is thus evident that the aviation network is concentrated and its structure is relatively complex. To show the network's hierarchical relationships, a backbone aviation network of CSA is built, taking the node weight threshold as 500 in the weighted network. According to the statistics, the backbone network has 99 nodes and 369 edges. In complex network theory (Newman, 2003), the indicators reflecting the statistical features of network structure are mainly node degree, average degree, average path length, density, clustering coefficient, etc. The relation matrix (a_ij) is built for the aviation network, in which a_ij indicates the relation of flight numbers between city i and city j: if there are any flights between city i and city j, a_ij = 1; otherwise, a_ij = 0. Upon calculation, the statistical feature indicators of the 2010 aviation network of CSA are listed in Table 1. The degree of a node in an aviation network refers to the number of airports having direct flights with this airport (node); a greater node degree means greater importance to some extent. The average degree of a network refers to the average value of the degrees of all nodes in the network. According to Table 1, the average degree is 6.684, indicating that, on average, each city node is connected to 6.684 other city nodes. Table 2 shows the ten CSA airports with the top degrees in 2010, among which Guangzhou ranks first in airport degree, as an airline hub of CSA. Generally, the distance between two nodes is defined as the number of edges on the shortest path between the two nodes, and the average path length of a network is the average value of the distances between all node pairs. In an aviation network, the distance describes the path from one airport to another using the minimum number of transits; the shorter the distance, the fewer transits are required. Meanwhile, the average path length stands for the depth of air transport, which is a property of transport shortcuts in the integral network; the shorter the average path length, the fewer transits are required between any two airports, bringing more convenience for the passengers. In 2010, the average path length of the CSA aviation network was 2.558, which means that on average only 1.558 transits are required for transporting from one airport to another, favorably meeting the air transport demand.
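Indicators of the kind listed in Table 1 can be reproduced directly from such an adjacency matrix. Below is a minimal sketch, not the authors' Ucinet workflow: the flight-count matrix K_ij is randomly generated as a stand-in for the (unpublished) CSA data, and the networkx library is assumed to be available.

```python
# Minimal sketch (hypothetical data): build the weighted, undirected aviation
# network from a flight-count matrix K_ij and compute Table 1 style indicators.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
n = 187                                          # number of airport nodes
# Hypothetical flight counts, thinned to roughly the reported density (~0.036)
K = rng.integers(1, 30, size=(n, n)) * (rng.random((n, n)) < 0.036)
K = np.triu(K, 1)
K = K + K.T                                      # undirected: symmetric, no self-loops

G = nx.from_numpy_array(K)                       # one edge per nonzero K_ij, weight = flights
print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("average degree: %.3f" % (2 * G.number_of_edges() / G.number_of_nodes()))
print("density: %.4f" % nx.density(G))
print("average clustering: %.3f" % nx.average_clustering(G))
if nx.is_connected(G):                           # path length is defined only if connected
    print("average path length: %.3f" % nx.average_shortest_path_length(G))
```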
An undirected network's density is defined as the ratio of the actual number of connections to the maximum possible number of connections in the graph. In an aviation network, it describes the ratio of the actual number of opened segments to the number of all possible segments. The density indicates the closeness of the air connections among all cities in the network. The value lies between 0 and 1; the closer the value is to 1, the more complete the network structure and the closer the air transport connections. In 2010, the density of the aviation network was 0.0358, which is relatively small, indicating that the connections among CSA airports are not very strong. The clustering coefficient of a node in the network stands for the ratio of the actual number of connections to the maximum possible number of connecting edges among this node's adjacent nodes. In an aviation network, the clustering coefficient of a node indicates the average cluster degree of the local network comprised of the airport and its adjacent airports. A higher clustering coefficient means a greater cluster degree of the local network and a smaller impact of this node on the adjacent airports; on the contrary, a lower value means more dependence of the adjacent airports on this node. Guangzhou Airport has a clustering coefficient of 0.06, the smallest of all, indicating that the adjacent airports are highly dependent on Guangzhou Airport, and large numbers of flights will be affected if a failure occurs there. The clustering coefficient of the integral network is the average value of the clustering coefficients of all city nodes. As shown in Table 1, the 2010 CSA aviation network has a relatively small average shortest path length and a large clustering coefficient; hence this aviation network is a small-world network. Betweenness is generally defined as the capability of a node to control the connections of other node pairs, i.e., the effect of a node acting as a bridge between other node pairs. Higher betweenness of a node indicates a stronger bridging effect and a more important role in the network. See Table 3 for the betweenness values of the top 10 airports of the CSA aviation network in 2010, based on which Guangzhou Airport, as a CSA hub, has the maximum degree and betweenness, occupying the most important position in the network. Secondly, Urumqi, Beijing Capital and Shenzhen Airports also play prominent roles as bridges. Specifically, Pudong Airport ranks third in betweenness value; although its degree value is not particularly high, it is still a significant transit node in the network. If a failure occurs in the important transit airports mentioned above, the connections between other nodes will be greatly affected. The degree distribution of the nodes in the network can be described by the power-law distribution function:

$$p(k) \propto k^{-\alpha} \qquad (1)$$

The power-law distribution coefficient α shows the degree distribution characteristics of a network (Newman, 2003). The distribution p(k) is the probability that a randomly selected node has degree k.
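A minimal sketch of this estimation follows; it is not the authors' code, the network is a synthetic stand-in, and a single-segment fit is shown, whereas the paper fits two segments. The exponent α is the negative slope of a linear fit of log p(k) against log k.

```python
# Minimal sketch: estimate the power-law exponent alpha of p(k) ~ k^(-alpha)
# by linear fitting in the log-log plot (single segment, synthetic network).
import networkx as nx
import numpy as np
from collections import Counter

G = nx.barabasi_albert_graph(187, 3, seed=1)     # scale-free stand-in for CSA

counts = Counter(d for _, d in G.degree())
k = np.array(sorted(counts))                     # observed degree values
pk = np.array([counts[x] for x in k]) / G.number_of_nodes()

slope, intercept = np.polyfit(np.log(k), np.log(pk), 1)
corr = np.corrcoef(np.log(k), np.log(pk))[0, 1]  # fit quality (correlation)
print(f"power exponent alpha = {-slope:.2f}, correlation = {corr:.2f}")
```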
Power-law distribution is also known as scale-free distribution; a scale-free network is a network whose degree distribution follows a power law. In the log-log plot, a straight line with a negative slope can be obtained by conducting a linear fitting of the degree distribution, and the absolute value of the slope is the power exponent. If the absolute value is relatively small, the network is scale-free. The correlation coefficient indicates the goodness of fit of the curve; a higher correlation coefficient means the curve is more favorably fitted and explains the actual problem more adequately. The degree distribution of the undirected network is discussed here. By applying a double-segment fitting in the log-log plot, we obtained the power exponents and correlation coefficients of the degree distribution (Table 4), and the degree distribution plot (Fig. 2), of the CSA aviation network in 2010. Based on Table 4 and Fig. 2, at the significance level of α = 0.0001, Segment 1 and Segment 2 both show excellent fitting, with correlation coefficients exceeding 0.96. Since Segment 1 and Segment 2 both have relatively small power exponents, the degree distribution of the CSA aviation network is subject to a double-segment power-law distribution, and thus the network is scale-free. Hence, the nodes in the aviation network are heterogeneous: certain nodes (hubs) have large numbers of connections and play the dominant role in the network, while the other, much larger number of nodes have only small numbers of connections and are located on the edge of the network, in line with the Matthew Effect.

COMPARISON OF CENTRALITY DEGREE UNDER DIFFERENT INDICATORS

Relevant network centrality indicators serve as the basis for measuring the centrality degree of the network. On the assumption that a centrality indicator C_A has been defined for network W with n nodes, the centrality degree of the network is defined as follows (Costenbader and Valente, 2003):

$$C_W = \frac{\sum_{i=1}^{n}\left[C_A(x^{*}) - C_A(x_i)\right]}{\max \sum_{i=1}^{n}\left[C_A(x^{*}) - C_A(x_i)\right]} \qquad (2)$$

where W refers to the whole network and C_A(x*) represents the centrality value of the node with the largest centrality degree. According to the equation, if the centrality of all nodes is the same, namely the network has no center, then C_W = 0. In case the centrality degree of only one node is 1 and that of the other nodes is 0, C_W will be greater and the handful of center nodes will be more prominent, which shows that the larger the centrality difference between network nodes, the higher the centrality indicators of the handful of center nodes; thus, the accuracy of the center nodes will be higher, and so will the centrality degree. Different network centrality degrees (Friedkin, 1991; Newman, 2005) can be figured out by substituting degree, closeness, vertex betweenness and flow betweenness into Eq. (2) respectively (Eq. 3), where C_D(x), C_C(x), C_B(x) and C_FB(x) represent the degree, closeness, vertex betweenness and flow betweenness indicator values of the network nodes. The degree indicator is more suitable for measuring the influence of nodes in the local network; in global scope, however, the closeness indicator needs to be referenced. These two indicators are only applicable to static network analysis, while the betweenness indicators are more suitable for the analysis of dynamic networks. When the degree, closeness, vertex betweenness and flow betweenness indicators are used for aviation network centralization based on the formula above, the centrality degrees under the different indicators are as shown in Table 5.
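A minimal sketch of this centralization computation follows, assuming networkx (flow betweenness exists there as current_flow_betweenness_centrality but is omitted for brevity); the denominator of Eq. (2) is approximated by evaluating the same sum on a star graph, the most centralized topology of equal size.

```python
# Minimal sketch: Freeman-style network centralization (Eq. 2) for three of
# the four indicators discussed in the text, on a synthetic stand-in network.
import networkx as nx

def centralization(G, cent_fn):
    """Sum of (c_max - c_i), normalized by the same sum on a star graph."""
    def spread(graph):
        c = cent_fn(graph)
        c_max = max(c.values())
        return sum(c_max - v for v in c.values())
    star = nx.star_graph(G.number_of_nodes() - 1)   # most centralized topology
    return spread(G) / spread(star)

G = nx.barabasi_albert_graph(187, 3, seed=1)
for name, fn in [("degree", nx.degree_centrality),
                 ("closeness", nx.closeness_centrality),
                 ("vertex betweenness", nx.betweenness_centrality)]:
    print(f"{name:20s} centralization = {centralization(G, fn):.4f}")
```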
Obviously, the centrality degree varies with the centrality indicator selected for network centralization. The centrality degree of closeness is relatively low, which indicates that closeness is not suitable for aviation network centralization. Moreover, the top 10 nodes in the aviation network are selected to capture the distribution of indicator values for each node, as shown in Fig. 3. Degree and closeness fail to clearly distinguish the nodes, while the difference in the distributions of betweenness is relatively large. Through calculating the centrality degree of each indicator and comparing the cumulative distribution of the centrality data of the top 10 nodes, vertex betweenness proves to be the most suitable indicator for aviation network centralization.

NETWORK RELIABILITY ANALYSIS

There are many kinds of network attacks in reality (Barrat et al., 2005; Holme et al., 2002; Kai-Quan et al., 2012; Li and Cai, 2012), of which the random attack and the hostile attack are the most representative, with the hostile attack being very destructive. As demonstrated before, the aviation network of CSA is a scale-free one, with the associated scale-free property of being "Stable and Fragile" under attack: it has very strong resilience to random attacks or unexpected malfunctions, while it is very fragile under hostile attacks (Xiaohuan Wu et al., 2013). The most important indicators characterizing the network topological structure are the average path length and the clustering coefficient. The average path length in the aviation network represents the air transport depth, the clustering coefficient represents the air transport width, and the network efficiency represents the overall coordination of the network. A smaller average path length, a bigger clustering coefficient and a higher network efficiency indicate that the network has a better performance and a stronger fault-tolerance capability. Therefore, this paper analyzes the reliability of the current CSA network using these three indicators, whose calculation formulas are as follows:

$$L = \frac{2}{N(N-1)}\sum_{i>j} d_{ij} \qquad (4)$$

In Eq. (4), N stands for the number of network nodes, and d_ij stands for the shortest distance from Node i to Node j.

$$C_i = \frac{2E_i}{k_i(k_i-1)} \qquad (5)$$

In Eq. (5), k_i stands for the number of edges directly connecting with airport i, and E_i stands for the number of existing connecting edges among those k_i neighbouring airports.

$$E = \frac{1}{N(N-1)}\sum_{i \neq j} \frac{1}{d_{ij}} \qquad (6)$$

In Eq. (6), 0 ≤ E ≤ 1; when E = 1, the network is completely connected, and when E = 0, all nodes in the network are isolated. Airport nodes in the CSA aviation network are sorted according to degree and betweenness value in descending order, and then the airports with relatively large degree and those with relatively large betweenness are removed in order; the average path length, the clustering coefficient and the network efficiency are calculated respectively, and their changes corresponding to the decrease in the number of airports are counted and compared. The results are indicated in Fig. 4, Fig. 5 and Table 6.
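A minimal sketch of this removal experiment follows; it is not the authors' Ucinet workflow, the CSA network is replaced by a synthetic scale-free stand-in, and Eq. (4) is evaluated on the giant component once removals disconnect the graph.

```python
# Minimal sketch: remove the top-degree or top-betweenness nodes and track
# the indicators of Eqs. (4)-(6) on a synthetic backbone-sized network.
import networkx as nx

def indicators(G):
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    return (nx.average_shortest_path_length(giant),  # Eq. (4), giant component
            nx.average_clustering(G),                # average of Eq. (5)
            nx.global_efficiency(G))                 # Eq. (6)

def attack(G, rank, n_remove=5):
    G = G.copy()
    for _ in range(n_remove):
        scores = rank(G)
        G.remove_node(max(scores, key=scores.get))   # remove current top node
    return indicators(G)

G0 = nx.barabasi_albert_graph(99, 2, seed=1)         # backbone-sized stand-in
print("baseline (L, C, E):  ", indicators(G0))
print("degree priority:     ", attack(G0, lambda g: dict(g.degree())))
print("betweenness priority:", attack(G0, nx.betweenness_centrality))
```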
As indicated by Figs. 3 and 4, the fluctuation of the average path length is relatively strong when several nodes are removed, and the fluctuation under the degree-priority removal policy is stronger than that under the betweenness-priority policy. Overall, after the node with the greatest degree or the highest betweenness has been removed, the average path length fluctuates widely, which causes network instability. As indicated in Fig. 4, the clustering coefficient decreases after several nodes are removed, and the decrease under the degree-priority removal policy is slightly larger than under the betweenness-priority one; therefore, the clustering coefficient decreases and the clustering degree becomes smaller when the node with the greatest degree or highest betweenness is removed from the network. As shown in Table 6, when the 1-5 nodes with the greatest degree and highest betweenness are removed, the network efficiency suffers a relatively great impact, and the backbone network suffers an even greater one, because both the removed airlines and the flights are of huge numbers in the original network. In particular, when 5 nodes are removed, the efficiency of the original network decreases by more than 30% and that of the backbone network by over 70%, and the whole network is almost paralyzed. In addition, the decrease under the degree-priority removal policy is larger than that under the betweenness priority. In order to understand the network change condition more deeply, taking the test of removing two cities at one time as an example, we conduct a specific analysis of the network performance under the two removal policies before and after the test. Before the test, the backbone network of CSA contains 99 nodes and 369 routes. When the two airports with the greatest degree values (Guangzhou Airport and Shenzhen Airport) are removed according to the first policy, 306 routes disappear, the original network efficiency decreases by 18.36%, the backbone network efficiency decreases by 51.46%, and the network performance is cut by half. When the two airports with the highest betweenness (Guangzhou Airport and Urumqi Airport) are removed according to the second policy, 300 routes disappear, the original network efficiency decreases by 17.54%, and the backbone network efficiency decreases by 48.73%; this decrease is slightly smaller than that of the degree-priority removal policy.
When the two airports with the greatest degree are removed, the number of routes of the whole network in the south-to-north direction decreases substantially; when the two airports with the highest betweenness are removed, most airports in the western area become isolated nodes, holding up flights, and the overall network performance decreases by half or so. By comparison, the network performance decreases by only about 25% (Dang and Li, 2011) when two nodes are removed from the American aviation network, indicating that the CSA aviation network is relatively fragile when facing a selective attack and that its overall reliability needs to be enhanced. Apart from selective attacks, the current network layout of CSA can also suffer a heavy blow to its air transportation when facing random attacks, including natural disasters. Therefore, while enlarging the scale of network development, CSA should also develop its aviation network into a multi-hub system. Judging from the degree and betweenness values of the airport nodes, the number of airlines removed from the network, and the decrease in network efficiency, in addition to taking Guangzhou Airport as the core hub, CSA can further plan and revise its aviation network by defining Beijing as the important hub between Europe, America and inland China, and Urumqi as the regional hub between Central Asia and inland China. On the other hand, the company can promote connections in the east-to-west direction, for example by establishing connections between the west and the east centred on Zhengzhou, to fill the gap in this direction. In this way, the overall performance and reliability of the network can be enhanced, ensuring the fluent running of the air transport system.

CONCLUSION

In this paper, complex network theory is applied, an aviation network structure model is built for CSA, and its structure is analyzed. It is discovered that Guangzhou Baiyun Airport is the hub of the CSA aviation network, that the majority of airlines run in the south and north directions, and that the airlines in the Western Region are distributed radially around Urumqi. Moreover, the statistical features and degree distribution of the network are analyzed, and it is proven that the aviation network of CSA is a small-world network and a scale-free network. On this basis, the reliability of the network is analyzed according to the degree and betweenness of the nodes in the network. The results show that the backbone network performance is reduced by half once 2 nodes are removed, and that it basically breaks down once 5 nodes are removed; the overall reliability of the network is far from high. Therefore, CSA should put more effort into the overall programming of the aviation network as well as the construction and management of aviation hubs, seek to develop a multi-hub system, launch more flights in the west and east directions, reasonably allocate air transport resources, and improve the overall performance of the network so as to meet the demand of the sustained and healthy development of CSA's air transport. The complex network method is used to analyze the topological structure and statistical features of the aviation network, and on this basis suggestions are proposed for optimizing the aviation network from the analysis of network reliability. The application of the methodology (airport planning, fleet sizing, route planning, etc.) is not covered in this paper; it will be discussed in other papers.
Figure 1. Network structure and construction of air traffic network.
Figure 2. Node degree distribution of CSA aviation network in 2010.
Figure 3. The cumulative distribution of centralized data under four indicators of the top 10 nodes.
Figure 4.
Table 1. Statistical features of the 2010 CSA aviation network.
Table 2. Top 10 degree values of CSA airports in 2010.
Table 3. Betweenness values of top 10 airports of CSA aviation network in 2010.
Table 4. Feature values of degree distribution of the domestic flight network.
Table 5. Centrality degree under the different indicators of CSA aviation network.
Table 6. Comparison of network efficiency changes based on two different remove policies.
The Potential of Trichoderma spp. Endophytes as Biological Control Agents Against Ceratocystis sp., the Cause of Acacia Stem Rot Disease, in vitro

Ceratocystis sp. is an important pathogen of acacia plants. This pathogenic fungus causes stem rot disease, which can kill mature plants within 4-7 years, with a disease attack percentage of nearly 80%. Control is commonly carried out using chemical pesticides, so another alternative is needed to reduce their negative impact, namely the use of endophytic Trichoderma spp. This research aimed to determine the potential of Trichoderma spp. endophytes in suppressing the development of Ceratocystis sp. The study was conducted experimentally using a completely randomized design (CRD) consisting of 7 treatments with three replications, namely: without endophytic Trichoderma sp. (E0), two endophytic T. virens isolates from oil palm plants (E1-E2), one endophytic T. harzianum isolate from rubber plants (E4), three endophytic Trichoderma sp. isolates from acacia plants (E5-E7), and one endophytic Trichoderma sp. isolate from a eucalyptus plant (E14). The parameters observed were the antagonistic ability, colony diameter, growth rate of the Trichoderma spp. colonies, and the inhibition mechanism against Ceratocystis sp. Based on the results of this study, the endophytic Trichoderma sp. E074, isolated from 4-year-old Acacia crasicarpa, had the highest antagonistic ability (30.76%), diameter (90 mm) and growth rate (33.33 mm.day⁻¹) compared to the other isolates, with a hyperparasitic entrapment type.

INTRODUCTION

Acacia plantations (Acacia crasicarpa) are an industrial plantation forest commodity of high economic value because they are widely used, especially as a source of raw material for paper (pulp). One of the important diseases found in acacia is stem rot disease caused by Ceratocystis sp., a pathogen that attacks forestry or industrial plants such as acacia with an attack percentage of nearly 80%. Ceratocystis sp. can cause death in mature plants within a period of 4-7 years, marked by yellowing of the leaves, drying up and finally death of the tree, with the bark of the trunk becoming blackish-brown and cracked [1]. According to Accordi, Ceratocystis sp. is a pathogen that does not have an incubation period, so that when a wound occurs the fungus will immediately colonize and infect it; therefore, fast and precise control is needed [2]. Control efforts have been made to overcome attacks by Ceratocystis sp., namely through cultural techniques, the use of resistant varieties and the use of synthetic chemical fungicides; however, the excessive use of synthetic chemical fungicides can cause poisoning in humans, environmental pollution, death of non-target organisms and pathogen resistance. A safer and more environmentally friendly pathogen control technique is biological control utilizing biological agents. One of the biological agents that can be utilized is endophytic fungi. Endophytic fungi are fungi originating from plant tissue that do not harm their host plants [3]. One of the endophytic fungi that is often found and capable of acting as a biological control agent is Trichoderma sp. This fungus can suppress disease development in plants, especially that caused by soil-borne pathogens, through mycoparasitism, competition and antibiosis, and indirectly stimulates plant growth and induces resistance to disease [4].
The endophytic Trichoderma harzianum isolate AK51 from rubber plants has an inhibition power of 70.6% in controlling white root fungal disease in rubber plants [5]. The endophytic Trichoderma virens originating from oil palm roots performs better than that originating from stems and midribs, as shown by the best antagonistic ability of 71.11% and the best antifungal compound activity of 25.74%, with antibiosis as the inhibitory mechanism against the growth of the fungus Rigidoporus microporus [6]. Local isolates of Trichoderma sp. can inhibit Ganoderma sp. in Acacia mangium plants with an inhibition power ranging from 38.09% to 58.06% [7]. Control of Ceratocystis sp. using endophytic Trichoderma sp. has not been widely reported. The purpose of this study was to determine the effect of several endophytic Trichoderma sp. isolates and to obtain the best endophytic Trichoderma sp. isolate for controlling Ceratocystis sp. in vitro.

MATERIALS AND METHODS

The research was conducted at the Laboratory of the Plant Protection Department of PT. Arara Abadi, Pinang Sebatang Village, Tualang District, Siak Regency, Riau. The research was conducted over 4 months, from March to June 2019. The materials used in this study were endophytic Trichoderma sp. isolates from acacia and eucalyptus plants collected by the Laboratory of PT Arara Abadi R&D Perawang, endophytic Trichoderma sp. isolates from oil palm from the collection of Fifi Puspita, endophytic Trichoderma sp. isolates from rubber plants from the collection of Wilda Andini, a pathogenic isolate of Ceratocystis sp. (013C-R1) from the PT Arara Abadi R&D Perawang laboratory collection, potato sucrose agar (PSA), streptomycin, 70% alcohol, 96% alcohol, distilled water, cotton, tissue paper, plastic wrap, aluminum foil and label paper. The observation parameters of this study were the antagonistic ability of several endophytic Trichoderma sp. isolates against Ceratocystis sp., the colony diameter (mm) and growth rate (mm.day⁻¹) of the endophytic Trichoderma sp. isolates, and the type of hyperparasitic interaction of the endophytic Trichoderma sp. isolates with high antagonistic ability against Ceratocystis sp. The data obtained from the observations were analyzed descriptively and by analysis of variance. The descriptive analysis covers the observations of the types of hyperparasitism of the endophytic Trichoderma sp. with high antagonistic ability against Ceratocystis sp., and these data are presented in tables and figures. The analysis of variance covers the inhibition data of the endophytic Trichoderma sp., and the diameter and growth rate of the endophytic Trichoderma sp. isolates with high antagonistic ability against Ceratocystis sp. To compare the averages between the endophytic Trichoderma sp. isolates, further tests were carried out with Duncan's new multiple range test (DNMRT) at the 5% level.

Antagonistic Ability of Endophytic Trichoderma sp. Against Ceratocystis sp. in vitro

The antagonistic ability of the 19 endophytic Trichoderma sp. isolates against Ceratocystis sp. showed a significant effect in the analysis of variance. The results of the DNMRT follow-up test at the 5% level can be seen in Table 1. Table 1 shows that the T5 isolate has a high antagonistic ability to inhibit the growth of the fungus Ceratocystis sp., namely 30.76%; it is significantly different from the other isolates and from the treatment without endophytic Trichoderma sp., but not significantly different from isolate E6.
Isolate T5 has faster growth, as can be seen in the diameter and growth rate measurements (Table 2), so that it can inhibit the pathogen Ceratocystis sp. through competition for space and nutrients. According to Harman et al., fast-growing fungi can win the contest for space and nutrition and in the end suppress the growth of their opponent fungi; the competition lies in obtaining nitrogen and carbon [8]. This is in line with the results of Hutabalian et al., who stated that biological agents with fast growth will have a higher ability to inhibit the growth of pathogens through competition for space and nutrients [9]. According to Sunarwati and Yoza, antagonistic fungi with an inhibitory ability of 26-50% are included in the group of fungi with low antagonistic ability, because the inhibitory ability of antagonistic fungi is influenced by several factors, one of which is the growth of the fungus itself [10]. Amaria states that fungal isolates with high inhibitory ability are antagonistic fungal isolates whose colony growth is faster than that of the pathogenic colonies [11]. This is in line with what was stated by Octriana, namely that antagonistic fungi can be categorized as having high inhibitory activity against pathogen growth if the percentage of inhibition reaches more than 60%, whereas if the percentage of inhibition reaches only 30% the antagonistic fungi are categorized as merely having an inhibitory effect [12]. Based on Figure 4, it can be seen that 7 isolates (T1, T2, T4, T5, T6, T7 and T14) with moderate antagonistic ability did not form an inhibition zone, but the endophytic Trichoderma sp. isolates have a mechanism of competition for space and nutrients in inhibiting the growth of the pathogenic fungus Ceratocystis sp. on PSA medium. Competition between the endophytic Trichoderma sp. and Ceratocystis sp. occurs over the nutrients of the PSA medium, so that the endophytic Trichoderma sp. is superior in the control of space and nutrition and can grow to fill the PSA medium. This causes the growth of Ceratocystis sp. to become obstructed. According to Melysa et al., antagonistic properties arise due to the competition that occurs between two types of fungi grown side by side [13]. This competition occurs due to the same needs of each fungus, namely the need for a place to grow and for nutrients from the growth medium. Purwantisari and Hastuti reported that the faster the growth of the antagonistic fungus, the more suppressed the growth of the pathogenic fungus will be, due to running out of growth space [14].

Diameter and growth rate of Trichoderma sp. isolates with high antagonistic ability (mm.day⁻¹)

The observations of the diameter and growth rate of the endophytic Trichoderma sp. isolates with moderate antagonistic ability showed a significant effect in the analysis of variance. The results of the DNMRT follow-up test at the 5% level can be seen in Table 2. Table 2 shows that the T5 isolate has a diameter and growth rate that tend to be high, namely 90.00 mm and 33.33 mm.day⁻¹, significantly different from isolate T7 (84.00 mm and 31.00 mm.day⁻¹) but not significantly different from the other isolates. The growth of isolate T5 was very fast, so that it was able to fill the growing space by the third day of observation. This is related to the antagonistic ability results of the endophytic Trichoderma sp., which can compete in the struggle for space and nutrition with Ceratocystis sp. Hutabalian et al. also reported that the faster the growth of a biological agent, the higher its ability to suppress pathogen growth due to competition for nutrients and living space [9]. Based on Figure 2, it can be seen that the growth of isolate T5 was faster than that of isolates T4, T14, T1, T2, T6 and T7. This is in line with the opinion of Djafarudin that the growth rate of fungal colonies is an important factor in determining their potential as biological agents against pathogens [15]. According to Gusnawathy et al., a high growth rate is one of the main factors determining the ability of an antagonistic organism to control diseases that attack plants, since with this ability microorganisms can compete for space and nutrients [16].
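The paper reports the antagonistic ability as a percentage and the growth rate in mm per day without stating the formulas used. A plausible reconstruction, assuming the standard dual-culture method (an assumption, not confirmed by the text), is:

$$P = \frac{R_1 - R_2}{R_1} \times 100\%, \qquad v = \frac{d_2 - d_1}{t_2 - t_1}$$

where, under this assumption, P is the percentage inhibition, R₁ is the radius of the Ceratocystis sp. colony growing away from the antagonist, R₂ is its radius towards the antagonist, and v is the colony growth rate computed from the colony diameters d₁ and d₂ (mm) measured on observation days t₁ and t₂. As a consistency check, the reported 33.33 mm.day⁻¹ matches a 90 mm plate being filled in about 2.7 days (90/33.33 ≈ 2.7), in line with isolate T5 filling the growing space by the third day.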
The types of hyphal interaction can be seen in Table 3 and Figure 3, which show that there are three types of hyphal interaction among the Trichoderma sp. isolates. Isolates T1, T2, T4 and T6 interact in the form of hyphal coiling, producing the sticking and winding type that causes damage to the hyphae of Ceratocystis sp. This is suspected to be because the antagonistic fungi produce various chemical compounds that are toxic to Ceratocystis sp. Sunarwati and Yoza stated that the genus Trichoderma is able to inhibit the growth of pathogenic hyphae by producing the antibiotics gliotoxin and viridin [10]. The interaction between isolate T4 and Ceratocystis sp. is attachment, an initial process of hyperparasitic activity in inhibiting pathogenic fungi which results in the cessation of pathogen growth. The results of Alamsyah's research state that hyphal attachment marks the initial interaction between antagonistic and pathogenic fungi; the antagonist's hyphae then pierce and suck out the contents of the pathogen's hyphal cells, resulting in the cessation of growth of the pathogenic fungal hyphae [17]. Isolates T5 and T7 show the entrapment type, which causes the hyphae of Ceratocystis sp. not to develop and to stop growing. This is supported by the opinion of Kurnia et al., who stated that antagonistic hyphae that trap pathogenic hyphae will cause the pathogenic hyphae not to develop, so that their growth stops [3]. Isolate T14 causes the growth of the pathogenic hyphae to break off, and in some parts the hyphae can be seen to become clear. It is suspected that a lysis mechanism is involved, so that the growth of the pathogenic fungal hyphae becomes abnormal. This is supported by the opinion of Sunarwati and Yoza, who state that another way biological agents can inhibit pathogens is by lysis; in lysis, the mycelium of the biological agent destroys or cuts the mycelium of the pathogen, which in turn causes the pathogen's death [10]. The lysis mechanism is characterized by the colour of the pathogenic fungal hyphae changing to clear and empty; some then break off and are eventually destroyed [18]. The endophytic Trichoderma sp. isolate E074 from 4-year-old Acacia crasicarpa (isolate T5) is the best treatment in inhibiting the pathogen Ceratocystis sp., with the highest percentage inhibition (30.76%), the largest colony diameter (90.00 mm) and the highest growth rate (33.33 mm.day⁻¹).

ACKNOWLEDGMENT

This work was supported by the Plant Protection Department Laboratory of PT. Arara Abadi, which allowed this research to be conducted there.
Developing functional requirements for Temporary Housing by integrating Axiomatic Design with the 5 Gaps Model of Service Quality

Temporary Housing (TH) schemes are a controversial component of post-disaster recovery plans, and yet they offer a fundamental service to the homeless population. Their sustainability should be understood and addressed in terms of service quality for all clients, rather than as a matter of product engineering. Since the evaluation of service quality is different from that of goods, value in TH assistance should be measured according to how well it matches clients' expectations. This paper adopts the 5 gaps model of service quality as a framework for TH quality assurance and advances that closing the current performance gap requires tackling issues in the briefing, design, project execution and conformance phases, as well as in communication. Against this background, engineering methods such as Axiomatic Design (AD) can effectively be exploited to reduce the gap between what people want and what they get, considering the needs and objectives of humanitarian actors. Results indicate that AD can reveal conflicts and potential for cooperation between the many "clients" of TH, via the joint analysis of their different needs and the associated Functional Requirements (FRs), and illustrate, via a post-factum analysis, what mechanisms need to be in place to ensure better preparedness for future disasters.

... livelihood [1]. This means TH requires providing culturally and technically adequate housing and neighbourhood conditions to local communities, its main beneficiaries, but also promoting sustainable building construction practices, employing local resources, and responding to governmental objectives related to efficiency and efficacy in project delivery considering its whole life-cycle [2]. These requirements qualify TH as an urban planning service (an intangible component having the distinctive function of supporting a full recovery post-disaster) to the affected communities, and as a critical component of building sustainable cities and society to the PA. In Italy, the TH supply and delivery process involves several actors and stakeholders, among which are the designers and engineers commissioned by the PA, including the National Department of Civil Protection (NDCP), which is responsible for delivering the service ('Client 2'), as well as the TH beneficiaries, i.e., the affected communities displaced by the earthquake ('Client 1') [3]. For experts, reconciling the views and needs of these multiple clients in TH design and site planning is a complex task, which requires careful management of operational decision-making components, especially those with potential for generating conflicts [4], and the consideration of organisational aspects, resources, plans and information on the part of the PA responsible for project delivery. An important part of a service delivery process in the built environment is customer satisfaction, a considerable part of which is related to customer perception (ISO 55002) [5]. Customers' perception of the housing building service is important in housing construction projects, as ad-hoc adaptations to a set of non-standard situations are often needed [6]; this also applies to the construction of TH sites after disasters.
The quality of the TH assistance service depends on how it is perceived by all clients during the design and construction stages and beyond (e.g., if maintenance and/or buy-back options are included in the TH procurement document). Even when time and budget targets are met, several projects fail to realise the intended value, understood as a manifold and multi-domain concept, due to approach inconsistencies, misalignments in decision-making, and a reductive focus on costs and project financing rather than on whole-life performance [7]. Thus, to improve, and possibly even measure, TH assistance value, we suggest linking AD [8] and the 5 gaps model of service quality by [9] within a novel TH delivery framework, as outlined in Section 2. This will support a joint analysis of multiple clients' needs, revealing conflicts as well as potential for cooperation between top-down and bottom-up inputs. Specifically, in the analysis of a selected case study in Section 3, the 2 main clients of TH plans are separated to better administer the gathering of Customers' Needs (CNs) and discuss the associated FRs. The conclusions situate the proposal within the need to support the reconciliation of TH schemes with the context-dependent value drivers of different clients and national disaster recovery policies. This is understood as the first step towards addressing quality assurance in TH planning and controlling gaps in procuring, designing, and delivering sustainable TH solutions, to resolve and/or prevent technical and community clashes associated with TH sites' location and layout choices, as well as TH units' design.

Hypothesis and methods: linking TH, AD and the 5 gaps model

[9] advance that clients' evaluation of service quality is different from that of goods, as their satisfaction depends on how well the perceived value of a service matches their initial expectations. Thus, in the 5 gaps model, they use this consideration to define the TH project performance gap (Gap 5), which is seen as the combined result of four different sub-gaps. Using the terminology of [10], who merged the 5 gaps construct with a "project as an information process" model, these four sub-gaps correspond to the briefing (Gap 1), design (Gap 2), execution and conformance (Gap 3) and communication (Gap 4) problems. According to [11], in each gap service quality can be evaluated according to 5 key determinant dimensions (reliability, responsiveness, assurance, tangibles and empathy), which apply at different stages of service delivery and with different weightings. These coincide with some of the issues identified as critical in [4], respectively: (i) consistency of TH assistance performance; (ii) readiness of the TH service; (iii) competence of humanitarian professionals to perform necessary tasks and/or deliver the service considering its whole lifecycle; (iv) physical outcomes for process tracking; and (v) attention to individual clients' needs. This study advances that AD can be linked to the 5 gaps model and support quality improvements in TH provision by closing Gaps 1, 2 and 3, ultimately reducing the TH performance gap. Figure 2 shows that some conceptual correspondences exist between the AD and 5 gaps models. In fact, both refer to multiple domains in which: people's needs are understood (customer domain); functional requirements are specified (functional domain); design parameters are chosen to meet the brief (physical domain); and the implementation process is set up to adequately reflect the design intention (process domain).
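Although the paper stops at this conceptual mapping, the Gap 5 construct of [9] lends itself to simple quantification. Below is a minimal sketch, not part of the paper's method, that scores the performance gap SERVQUAL-style as perception minus expectation across the five determinant dimensions cited from [11]; all survey scores are hypothetical.

```python
# Minimal sketch (hypothetical data): score Gap 5 as the mean difference
# between perceived and expected service quality over the five dimensions.
DIMENSIONS = ["reliability", "responsiveness", "assurance", "tangibles", "empathy"]

def gap5(expectations: dict, perceptions: dict) -> float:
    """Mean (perception - expectation) across dimensions; negative values
    mean the TH service fell short of what clients expected."""
    gaps = {d: perceptions[d] - expectations[d] for d in DIMENSIONS}
    for d, g in gaps.items():
        print(f"{d:15s} gap = {g:+.1f}")
    return sum(gaps.values()) / len(gaps)

# Hypothetical 1-7 Likert scores from a TH occupants' survey
expected  = dict(zip(DIMENSIONS, [6.5, 6.0, 6.2, 5.5, 6.0]))
perceived = dict(zip(DIMENSIONS, [5.0, 4.5, 5.8, 5.6, 4.2]))
print(f"overall Gap 5 score = {gap5(expected, perceived):+.2f}")
```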
Accordingly, AD could effectively support the mapping of CNs for determining TH functional requirements and their systematic decomposition (Gap 1), the mapping of independent relations between these and candidate Design Parameters (DPs, Gap 2), and the TH supply and delivery process (Gap 3), considering input constraints, by specifying problems and solutions explicitly, in parallel, and in a hierarchical order while maintaining data [8]. This proposal aligns with that of [12], who pose that AD could enhance the reliability of disaster response operations, reliability being one of the 5 key service quality determinants identified by [11]. Albeit originally intended for product design, AD has been successfully transferred to architecture to help problem-solving [13] and to service design [14]-[16]. For instance, in [14], the authors pose that the mirror decomposition through zigzagging between the functional, the physical and the process domains enables designers' attention to be focused on CNs. In this case, AD is used to design web services for remote patient monitoring by systematically detecting the FRs of all the stakeholders, considering also their interactions. [15] use AD to build a framework for optimising the flow of patients in hospitals to improve the efficiency of health services. In [16], AD is adopted to suggest improvements to the service offered to passengers with reduced mobility in airports by linking CNs with process components. These studies address specialised services which require complex process choreographies with several actors, while keeping a focus on customers' needs. The authors of this paper pose that the TH assistance delivery process presents similar characteristics and could hence be effectively approached by adopting AD to close service quality gaps, while taking into account the CNs of all stakeholders. This point is illustrated in what follows, where AD is exploited to address the TH briefing problem in a suitable test case, covering Gap 1 of the service quality framework. The case of the 2016-2017 Central Italy earthquakes is used to illustrate the approach and explore the setting and reconciliation of a selection of high-level FRs from the service deliverer, the PA (Client 2), to the affected community (Client 1).

The two clients: conflicts and overlaps in the Italian case

Identifying high-level FRs starts by independently mapping in detail the needs and aspirations of both clients, so they can be analysed to assess the presence of matches (common FRs), explore the overlap of tolerances when FRs converge (completely or partially), and flag potential conflicts between FRs from Client 1 and Client 2, so they are assessed with regard to their logical inconsistencies. The authors adopt a traffic light system to label these different types of FRs, denoting:
• 'G' (green) to FRs which are either common or completely converge with regards to the tolerances required from Clients 1 and 2;
• 'A' (amber) to FRs which partially converge with regards to the tolerances required from Clients 1 and 2;
• 'R' (red) to FRs from Clients 1 and 2 whose required tolerances conflict.
As pointed out by [8], the writing of FRs is one of the most important parts of a design process and can take several iterations, so that conflicts are avoided and convergences are carefully crafted and the needs of multiple stakeholders are addressed. The authors therefore focus on assessing FRs from Clients 1 and 2 by inferring them from CNs selected from different documentary sources.
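The traffic-light labelling can be read as a tolerance-overlap test. The following is a minimal, deliberately simplified sketch, not from the paper: each client's tolerance for a measurable FR is modelled as a numeric interval, whereas real FRs are largely qualitative, and the example values are hypothetical.

```python
# Minimal sketch: traffic-light labelling of an FR from the overlap of the
# two clients' tolerance intervals (lo, hi).
def label_fr(client1: tuple, client2: tuple) -> str:
    lo = max(client1[0], client2[0])
    hi = min(client1[1], client2[1])
    if lo > hi:
        return "R"  # red: no overlap -> conflicting FR
    full = (lo, hi) == client1 or (lo, hi) == client2
    return "G" if full else "A"  # green: one range contains the other; amber: partial

# Hypothetical example: minimum habitable area per occupant (m^2)
print(label_fr((3.5, 6.0), (4.0, 5.5)))  # G: Client 2's range sits inside Client 1's
print(label_fr((3.5, 5.0), (4.5, 6.0)))  # A: partial overlap
print(label_fr((5.5, 6.0), (3.5, 4.5)))  # R: no overlap
```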
To detect the needs, and in some cases the FRs themselves, of the PA (Client 2), the TH procurement documents used in Central Italy, namely the "Capitolato Tecnico D'Appalto" [17] and [18], are used as core references, together with the Sphere handbook [2], which is a key international standard used by most NGOs and governments in post-disaster situations. These documents also allow extracting system constraints (Cs), i.e., bounds on acceptable solutions, which in this case are identified as regulatory restrictions related to the provision of TH in Italy. In the Italian case, the definition of the TH brief involves multiple experts who, in different phases, contribute to its refinement. The process begins with the loose drafting by the NDCP of the technical document of the strategic framework agreement for TH post-disaster procurement (henceforth referred to as the "Capitolato"). This seeks to translate TH needs into a set of "solution neutral" FRs about TH supply and delivery. Tender participants respond to the brief by presenting a technical and an economic offer (including the design of TH units), detailing the service they will provide. The actual number and type of TH units needed (including, for instance, special accessibility requirements) is decided by the public authorities only after a disaster strikes, following a detailed assessment of the damage to the housing stock and of TH needs. Next, candidate TH sites' locations are identified. When the plots become available for construction, the TH purchase order is raised, and the supplier is contracted by the PA to design the layout of the TH sites. The professionals working for the supplier, i.e., experts in the firm's technical office, need to translate the requests detailed in the supply order into a workable project brief, by refining the lower-level constraints and FRs, considering all the contractual, regulatory, site-specific, planning and design inputs. Within this framework, civil protection actors seek to include in the strategic brief of the "Capitolato", in later executive ordinances, and in the purchase orders their understanding of people's needs, translated into Client 1 FRs. The same is presumably done by town planners and designers, according to their professional ethics, albeit normally within the framework set up by the PA, and possibly through a negotiation process with other stakeholders. Thus, the final list of FRs for TH units and sites is determined following what seems to be a mainly top-down and multi-staged process. Although, in principle, the community should be directly involved in the process of defining the project brief to implement a truly people-centred TH programme, the way the TH assistance delivery process is currently organised makes this task particularly challenging, especially due to its tight time constraints. In addition, organisational and technical constraints may well prevent civil protection actors from delivering what disaster-affected people expect. Thus, to detect the needs of TH occupants (Client 1), this work uses a mix of primary [19] and secondary [20] information from official documents and semi-structured interviews.

Context-independent common FRs

Some important common FRs, which open the door for setting converging tolerances in relation to the DPs required to fulfil them, are illustrated in Table 1 (all assigned to group G).
These mainly come from non-context-specific documents such as [2] (used as guidance by humanitarian agencies) and, albeit to a minor extent, the "Capitolato", which attempt to respond to the needs and aspirations of both clients in coordination.

Table 1 (excerpt):
• G.3 - Locate new settlements at a safe distance from actual or potential threats and/or remove hazards from sites and repair any serious environmental degradation.
• G.4 - Provide acceptable distance and safe travel (or transport) to essential services and facilities.
• G.5 - Provide essential services and livelihood opportunities, including child-friendly spaces, gathering areas, and spaces to respond to religious needs.
Housing design:
• G.6 - Provide living space, toilet and bathing facilities, and spaces for sleeping, food preparation, cooking and eating, socialising, and play areas.
• G.7 - Provide optimal lighting conditions, ventilation, and thermal comfort.

FRs from Client 2 - Anticipating issues

FRs from Client 2, listed in Table 2, are found both in context-independent documents (in [2], Client 2 is not uniquely defined and could be the local PA as well as an international NGO) and in country-specific strategic documents related to the case study under examination [17], where they are stated as duties of the PA. These FRs mainly refer to speed of delivery and to catering for social and environmental sustainability, and can be understood as convergent with community FRs. However, since the tolerances for these might vary between what is expected by the two different clients, they are labelled as 'A' rather than 'G'. These FRs should be carefully analysed with regard to the range of tolerances expected by each client, to ensure they have at least some alignment so that they can be resolved throughout the design process.

Table 2. FRs from Client 2, assessed with regards to how they align with expectations from Client 1 (excerpt):
• G.10 - Ensure that price and quality meet market standards.
• G.11 - Ensure any energy supply options meet user needs, and provide training and follow-up as needed.
• A.3 - Salvage and reuse, recycle or re-purpose available materials, including debris.

In addition to these, and as part of the duties of Client 2, it is not uncommon to find constraints related to the urban context, which imply working within a geographically defined, area-based approach to better understand community dynamics and demands, and to follow standards for safety, protection and dignity in construction and planning, ensuring public health and safety as well as providing physical security, dignity, privacy, and protection from the local weather. For transparency purposes [21], it is also expected that Client 2 follows appropriate and traceable tender, bidding, procurement, contract and construction management processes and codes of conduct.

The reality brought by adding context: mismatch of tolerances and potential conflicts

When examining a real context and the FRs of the PA (Client 2) and the community (Client 1) involved in the Central Italy earthquake of 2016, the situation becomes far more complex than what is assessed in Sections 3.1 and 3.2. FRs in [18] and in the many disaster-specific NDCP ordinances issued from 2016 to 2018 become far more specific, and the pragmatism needed from the PA to deliver TH plans using established resources can easily overwrite community FRs related to demographics and sense of place.
Table 3 uses the same traffic-light system adopted in sections 3.1 and 3.2 to assess commonalities, explore the overlap of tolerances and flag potential conflicts between FRs from Client 1 and Client 2. The reality of the Italian case study shows that when FRs are defined for a given context, special attention needs to be dedicated to their writing at lower levels, considering the range of tolerances expected by each client to ensure minimum alignment for DPs to fulfil them. For instance, some of the disaster-specific FRs from the PA (e.g., A.7, A.8, A.13) present a rather limited range of tolerances, which can well be in conflict with FRs from the community (e.g., R.1, R.4 and R.2) once further decomposed into second-level FRs. In fact, Table 3 shows that most of the conflicting FRs identified (R.2 - R.8) can arise from decomposing high-level FRs into low-level ones, exposing a set of community FRs not featured as part of the TH project objectives.

A.6 - Provide green infrastructure with local species or species suitable to the local climate, inserted in the local landscape but with minimal plants and urban furniture.
R.3 - Provide paved walkways to access TH units, which don't get muddy when it rains.
A.7 - Provide hierarchical internal road and pedestrian path infrastructure with 2 parking spaces per TH unit.
N.4 - Optimize construction speed.

Housing design
G.13 - Connect TH units to the grid for water, energy (electricity), sewage, phone, gas.
R.4 - Give special consideration to accessibility for the elderly.
A.8 - Provide 3 TH unit typologies (40, 60, 80 m2, to host a maximum of 2, 4, 6 people respectively), with 20% of these ensuring disabled accessibility if necessary.
R.5 - Provide spaces to house caregivers for those without easy access to care services.
A.9 - TH units' structure needs to be prefabricated, and units need to be combinable to reach two storeys with external individual access.
R.6 - Ensure energy systems provided are affordable.
A.10 - Use passive design strategies and minimise thermal energy demand.
A.11 - Ensure TH units are flexible, adaptable to change of use on the same site, and removable.
R.8 - Provide a complete design, including finishes, that is easy to maintain.
A.12 - Ensure TH units have a minimum lifespan of 10 years, fully furnished and ready to use.
A.13 - Provide external private spaces for THs equal to 25% of net floor area.

Table 3 shows that as soon as a context is brought into place and FRs need to be further decomposed towards the development of a tangible solution, the lack of community involvement in the process becomes not only evident but also a potential problem. Narrow overlaps of tolerances, as well as conflicts, emerge as potentially unsolvable, whereas they could have been resolved through early consultations and iterative conversations between Clients 1 and 2 throughout the decision-making process. Since time is a tight constraint for emergency management in the aftermath of a disaster, a post-factum analysis of this sort is essential to extract lessons for informing future endeavours. An important one is that community interactions should be featured in preparedness plans in a pragmatic way to ensure a better post-disaster response.
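To make the idea of overlapping tolerances concrete, the following is a small illustrative sketch (ours, not the paper's): it treats each FR as carrying a numeric tolerance range for one shared DP metric and assigns a traffic-light label from the size of the overlap. The ranges, the function names, and the 50% threshold are hypothetical stand-ins for the qualitative assessment in Tables 1-3.

```python
# Illustrative only: classify a pair of FRs (one per client) by how much their
# tolerance ranges for the same DP metric overlap, mimicking the traffic-light
# labels (G = shared, A = partially aligned, R = conflicting).

def classify(fr_client1, fr_client2):
    """Each FR is an acceptable (lo, hi) tolerance range for one DP metric."""
    lo = max(fr_client1[0], fr_client2[0])
    hi = min(fr_client1[1], fr_client2[1])
    if hi < lo:
        return "R (conflict: no common tolerance)"
    width = hi - lo
    full = max(fr_client1[1], fr_client2[1]) - min(fr_client1[0], fr_client2[0])
    # Identical single-point ranges count as fully aligned.
    if full == 0 or width / full > 0.5:
        return "G (aligned)"
    return "A (narrow overlap)"

# Hypothetical example: community accepts 2-4 parking spaces per unit,
# the PA's plan allows 0-2; the ranges touch only at a single point.
print(classify((2, 4), (0, 2)))  # A (narrow overlap)
```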
Conclusions and suggestions for future work

The post-factum application of AD to the analysis of the case of the 2016-2017 Central Italy earthquakes enables lessons to be learnt from a strategic TH procurement, policy, and tactical planning perspective. Results indicate that the Italian PA should revisit its approach to problem definition. They also show that local authorities should raise awareness of disaster risk and engage with communities likely to be affected by future hazards about TH response plans, discussing FRs with them early, as part of preparedness. In this way both clients would already know each other and would therefore be in a better position to ensure FRs from Client 1 are extracted rapidly when needed, so design objectives can be negotiated and reconciled promptly. The relationship between candidate DPs and the resulting FRs could then be systematically mapped on a case-by-case basis using the matrix representation {FR} = [A]{DP}, to check for factors' independence and enhance the completeness, consistency, and plausibility of processed information, enabling the delivery of more sustainable and context-based design proposals. As construction supply chains as well as climate and culture can change from region to region, this approach could support a shift of focus from the design of universal shelter solutions to the proposal of a sound design workflow, without slowing down the process of TH assistance delivery. The aforementioned case can well be used as a prompt for consultations focused on rehearsing mitigating mechanisms to refine and/or re-write conflicting FRs and poorly convergent tolerances, as proposed by [8], so that they can more flexibly accommodate a successful design response at a time of crisis, for instance by:
• Increasing the range of FRs' tolerances.
• Increasing the range of DPs' tolerances.
• Integrating DPs to respond to multiple FRs.
• Searching for independent FRs.
• Negotiating and establishing priorities between FRs so a strategic plan can be in place.
Notably, the analysis simplified the number of clients and related needs to be considered in post-disaster situations, which in reality is rather large and could extend, for instance, to encompass NGOs, volunteers and other local residents in addition to TH occupants and the PA. The latter can also be further subdivided into national and regional governments, different local authorities, professional bodies, the army, different civil protection bodies, etc. Nevertheless, AD does not differentiate according to their number. On the contrary, it enables all clients' needs to be flexibly reconciled within a unique framework, without considering the distinctive characteristics of different clients and their different power relations, which can visibly play a role in the FRs' selection and refinement processes. Future work should therefore focus on investigating and developing mechanisms to ensure all different stakeholders, especially Client 1, are involved from the beginning in the TH decision-making process, so the 5-gaps framework in connection with AD can be effectively applied when a disaster strikes, preventing it from simply paying lip service to the affected community. Better preparedness plans are indeed essential to enhance technical procurement documents for TH supply and delivery services. By embedding in those documents the possibility to include the value drivers of all the clients of TH schemes in different contexts, designers will be better equipped to perform the hard task of rapidly delivering context-sensitive TH solutions which foster the sustainable recovery of settlements impacted by rapid-onset disasters.
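To make the independence check behind {FR} = [A]{DP} concrete, here is a minimal sketch, not taken from the paper: it assumes a binary design matrix in which A[i][j] = 1 means design parameter DP_j influences functional requirement FR_i, and classifies the design as uncoupled, decoupled, or coupled in the usual Axiomatic Design sense. All names are illustrative.

```python
# Hypothetical illustration of checking the AD independence axiom on a small
# design matrix: diagonal -> uncoupled; permutable to triangular -> decoupled;
# otherwise coupled.

def classify_design(A):
    """A: square 0/1 matrix with A[i][j] = 1 iff DP j influences FR i."""
    n = len(A)
    if all(A[i][j] == (1 if i == j else 0) for i in range(n) for j in range(n)):
        return "uncoupled"
    # Decoupled iff some reordering of FR/DP pairs makes A triangular; greedy
    # elimination: repeatedly remove a row with exactly one remaining 1.
    rows, cols = set(range(n)), set(range(n))
    while rows:
        pick = next((i for i in rows if sum(A[i][j] for j in cols) == 1), None)
        if pick is None:
            return "coupled"
        j = next(j for j in cols if A[pick][j])
        rows.remove(pick)
        cols.remove(j)
    return "decoupled"

# Example: FR1 depends on DP1 only, FR2 on DP1 and DP2 -> decoupled.
print(classify_design([[1, 0], [1, 1]]))
```

In a real workflow, the FR/DP pairs and the tolerance analysis above would feed the entries of A before such a check is run.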
Language-specific Differences in Online Reviews – Case of Fine-dining Prague Restaurants

Word-of-mouth is an important factor in all phases of the purchase decision-making process. With the arrival of web 2.0, online review platforms grew and have significantly influenced travel behavior in recent years, gastronomy included. The largest such platform is Tripadvisor.com. Given the worldwide usage and importance of Tripadvisor.com in the tourism sector, it was chosen as the basis for this research on restaurant reviews. This study aims to identify whether there are differences between language groups in the level of satisfaction with restaurants. Prague is the top highlight of the tourism offer of the Czech Republic. The capital is also a leader in quality and luxury services, and gastronomy is no exception. According to Tripadvisor.com, there are almost 120 restaurants offering fine dining in Prague, and most of them offer good quality according to the reviews. This type of restaurant, aimed at high-end customers, was analyzed to identify the most demanding and the least critical language groups. The German, Spanish, Italian, French and Czech languages were analyzed. With some limitations, each language can be assigned to a nationality. The average evaluation for each language is calculated, and statistical tests are run to confirm the findings.

Introduction

Tourism today is strongly influenced by the internet and various mobile applications. The behavior of tourists changes as new possibilities arise. Making the right decision and choosing the product with the best value is one of the most important issues for today's tourists. In this process, word-of-mouth can help. Word-of-mouth (WOM) is informal communication among customers about the usage of particular goods or services or about the sellers (Westbrook, 1987). In tourism we are mostly concerned with services, and word-of-mouth has an important influence on decision-making. Recommendations and reviews of destinations, hotels and restaurants play a role in the whole process, starting with inspiration and ending with a post-stay evaluation, comparison with others' feelings and impressions, and sharing. With web 2.0, the channels of communication changed; however, the importance of electronic word-of-mouth is as high as, or even higher than, offline WOM has ever been. The term electronic WOM (eWOM) is used for online WOM, and reviews are among the most powerful forms of eWOM. User-generated reviews can be defined as peer-generated reviews posted on the company's website or on third-party platforms (Mudambi, Schuff, 2010). The most important review platform is Tripadvisor.com, with more than 702 million reviews of 8 million accommodations, airlines, experiences and restaurants (www.tripadvisor.com). As one of the biggest platforms, Tripadvisor.com is an important source of information, strives to be a reliable partner, and is also a leader in online review fraud prevention. The actual number of reviews changes every day. Visitors and their online reviews are the WOM of today. They have the power to influence future visitors' decisions, and therefore companies pay attention to social media and review platforms and create strategies for online reputation management. Most managers do their best to satisfy customers and obtain better ratings.
The aim of this research is to identify whether all tourists are equally demanding, i.e., whether they will write the same review and give the same rating after being served in the same way. This knowledge could help companies identify and pay more attention to the most demanding groups of customers and so increase their online rating. For this paper, language groups and Prague fine-dining restaurants were chosen for analysis. Language-specific differences in the rating of fine-dining restaurants are analyzed on the basis of 10,497 reviews.

Online reviews in tourism and restaurants

An online review is any positive or negative comment from a potential, actual or former client regarding a product or company that is accessible to many people through the internet (Hennig-Thurau et al., 2004). It is possible to rate almost any service (including teachers, lawyers or doctors); however, in tourism it is already standard to use reviews for decision-making, and nobody hesitates to write one. O'Reilly and Marx (2011) identify three main purposes for reading a review: 1. creating a better image in the community, 2. minimizing the risk related to the purchase of (expensive) goods and services, and 3. confirming the decision to buy a particular product. It is not possible to try or touch the product of hotels, destinations, restaurants or tour operators in advance. The client therefore runs the risk of buying a low-quality product for a higher price. The decision-making process is further complicated by the following facts: 1. the product is non-tangible in return for the money spent, 2. the expenditure is high in comparison to regular earnings, 3. the expenditure involves savings and usually requires long-term planning, and 4. it is not a spontaneous or capricious purchase (Wahab et al., 1976). The role of reviews as a tool for minimizing risk is therefore crucial. Reviews are also an important object of research, most of it dedicated to the hotel industry, e.g., the impact of reviews on sales (Lu et al., 2014). According to Viglia et al. (2016), an increase of 1 point in rating led to an increase of 7.5% in sales. The factors leading to overall satisfaction have also been analyzed (Qu et al., 2000). In the restaurant business, online reviews are accepted as expert opinions (Parikh, 2013). The more reviews a restaurant has, the higher the probability of gaining a new customer (Park et al., 2007), and the more popular it is supposed to be. This process leads to an increase in sales as well. Anderson and Magruder (2012) measured the impact of an increasing rating on Yelp and found that an increase from 3.5 to 4.0 raised restaurant sales by upwards of 19%. According to Gunden's findings, 62% of respondents are influenced by online reviews, and 55.5% check online reviews most of the time or always (Gunden, 2017).

Language-group specifics of ratings and reviews

Several papers analyze the language or national specifics of tourists. Tourists with different cultural backgrounds tend toward different consumption concepts, different values and priorities, and therefore different ratings (Ayoun, Moreo, 2008). The fact that different cultures have different behavioral intentions was confirmed by Wen et al. (2012), where the differences between American and Chinese customers of fast-food restaurants were analyzed.
Pacheco (2016) analyzed the relationship between online hotel reviews and five different language groups and nationalities and found a significant difference in the behavior of Spanish guests, especially in lower-class hotels. Liu et al. (2017) researched the language-specific drivers of satisfaction of hotel guests from different countries and found a substantial difference in the role of various hotel attributes for each language group. In some cases the language is assigned to a nationality (Pacheco, 2016); in some studies the nationality is not important and the results are analyzed at the level of culture (individualistic or collectivistic) (Wen et al., 2012); some papers leave the results at the level of language.

Methodology and data

For this research, a method of web scraping was used. The scraping and data download were performed on the 28th of September 2018. The collected data were manually checked and corrected in case of any problem. In total, 42,422 reviews were collected, and from these the selected languages were analyzed: 10,497 reviews in total; for more details see Table 1. The languages for analysis were chosen according to the tourism statistics in Prague, from the top 10 source countries for tourism in Prague. The criteria for adding a language to the research were the following: 1. a Latin alphabet (therefore South Korean, Chinese and Russian are not included), 2. enough reviews in that language for the fine-dining restaurants (which excluded Slovak), and 3. the possibility of assigning the language to a nationality (therefore English reviews cannot be analyzed, as they are written not only by tourists from Great Britain and the USA but also by tourists from other countries who do not write reviews in their native language). As the subject of the research, fine-dining restaurants were selected as representatives of high-end services for the most demanding clients. On Tripadvisor.com, 120 restaurants tagged as fine-dining restaurants in Prague were found. The minimum number of reviews was set to 100 and the minimum for each language to 5. After applying these limits, 40 restaurants with a minimum of 200 reviews in all languages together were considered for the research. Most reviews are in English (25,881); however, English is not the subject of the research. The next language is French, followed by Italian and Czech. The fewest reviews are in Spanish. If we compare the number of reviews with the number of tourists in Prague, we can see a significant difference in the relative representation of French tourists and French reviews. It would be interesting to investigate whether French tourists visit the restaurants more often or whether they are more active in writing reviews. For German tourists and German reviews, the opposite situation is observed. The data were first analyzed with simple statistical characteristics, mean and standard deviation; for details, see Table 2. The first results indicate a difference in the ratings among the different language groups. A statistical t-test is used to confirm the differences, see Table 3. Each language is compared with the data set collected as "All languages".
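As an illustration only (this is not the authors' code), the comparison reported in Tables 2 and 3 can be reproduced along the following lines in Python, assuming the scraped ratings have been loaded as lists of 1-5 scores per language. SciPy is an assumed dependency, and the rating lists below are placeholders, not the study's data.

```python
# Illustrative sketch: descriptive statistics per language group, plus a
# two-tailed two-sample t-test of each group against the full review set.
from statistics import mean, stdev
from scipy import stats

ratings = {
    "all":     [5, 4, 5, 3, 4, 5, 2, 5, 4, 4],  # placeholder for all 42,422 ratings
    "italian": [3, 4, 4, 5, 3],                 # placeholder samples
    "czech":   [5, 5, 4, 5, 5],
}

for lang in ("italian", "czech"):
    sample = ratings[lang]
    print(f"{lang}: mean={mean(sample):.2f}, sd={stdev(sample):.2f}")
    # Welch's two-tailed t-test against the "All languages" reference set
    t, p = stats.ttest_ind(sample, ratings["all"], equal_var=False)
    print(f"  t={t:.2f}, p={p:.3f} -> significant at the 0.05 level if p < 0.05")
```

Note that, as in the paper, the reference sample "all" contains the tested group itself; a stricter variant would compare each group against all remaining reviews.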
Results and discussion

The downloaded data were first analyzed with means and standard deviations (see Table 2). According to the average rating, the Italian, Spanish and French guests are the most demanding and are difficult to satisfy. The German guests are of average demandingness, and the Czech guests are the most tolerant customers. Spanish and French guests have the highest standard deviations, which may indicate greater diversity in demandingness within these language groups. The statistical significance of the language-specific differences is tested with a two-tailed t-test of mean values, comparing each language group with the set of all 42,422 reviews (see Table 3). The t-test confirmed differences for some language groups at the 0.05 level, and at the 0.01 level as well. A statistically significant difference was confirmed for the Italian, Spanish and French language groups: these groups are substantially more demanding and more difficult customers than average. Statistical significance was also confirmed for the Czech-speaking customers, but in the opposite direction: they are substantially more tolerant and more satisfied with the supplied service and food. The only language group satisfied at an average level is the group of German-speaking customers, for which no significant difference was found. The research confirms the findings of previous authors that cultural and national determinants are important for the satisfaction level. This paper, presenting research on fine-dining restaurants in Prague, interestingly correlates with the findings of Pacheco (2016), where Spanish tourists were also identified as among the most demanding customers in the hotel business of Porto. In the case of this paper, it is possible, with some limitations, to assign a language to a nationality. German-speaking tourists come not only from Germany but from Austria as well, the Spanish language is used in other countries, etc. However, the dominance of the main countries (Germany, Spain, France) as source countries for Prague is so strong that the risk of having guests from Switzerland or South America in the sample can be accepted without influencing the results to a significant degree. For future research, it would be interesting to compare the results with Hofstede's cultural dimensions. After the language specifics have been identified, the differences in the importance of each component of satisfaction could be analyzed. For this purpose, more restaurants should be considered (not only fine dining) to obtain a wider range of ratings, as most fine-dining restaurants have factor ratings of 4.5 or 5.0 and more detailed information is not accessible on Tripadvisor.com. It would also be useful to add other languages, e.g., Chinese and Russian, in future research.

Conclusion

The aim of this paper was to identify whether there are differences between language groups in evaluating the experience and rating high-end restaurants. From the literature review it was evident that reviews play an important role in the decision-making process in the tourism and restaurant business and that some differences between language groups could be expected. As the research sample, the ratings of Prague fine-dining restaurants on Tripadvisor.com were chosen. In this type of restaurant, we expect the most demanding customers and the most critical attitude. According to the statistics of Prague visitors, the important language groups were selected, and after application of the requirements the following languages were analyzed: Czech, German, Spanish, Italian and French.
After application of the limits for restaurants (more than 100 reviews, minimum 5 reviews in each language), 40 restaurants remained in the research, with 42,422 reviews in total and 10,497 reviews in the analyzed languages. After a simple statistical analysis, the first results could be observed, and the statistical t-tests confirmed significant differences for the Italian, Spanish and French languages as the most demanding language groups. Statistical significance was also confirmed for the Czech language, but in the opposite direction: it is the most tolerant language group in the research. For future research, a more detailed analysis of each satisfaction component for the different language groups could be considered.
Some legal aspects of environmental engineering

The article considers a number of legal aspects related to environmental engineering. The levels of legal regulation of this environmental and economic institution are presented, the main problematic aspects are highlighted, and options are proposed for resolving them and for improving the quality of legal regulation as a prerequisite for the successful functioning of environmental engineering mechanisms. The importance of legal regulation, scientific support and human resources for effectively solving the tasks posed to environmental engineering is noted.

Currently, it is obvious that the problems of environmental management, environmental protection, economics and law are closely interrelated: economic growth should not be accompanied by the destruction and depletion of the natural environment, and technical and economic development should not conflict with the interests of the environment. Environmental and economic regulation should be designed so that the producer is economically interested in resource-saving technologies, while the requirements to maintain a favorable environment should not mean the curtailment of production; on the contrary, they should stimulate the development of new resource-saving technologies and ideas. Environmental and economic regulation should combine solutions that are technologically feasible, economically viable, socially desirable and environmentally friendly. This means that the legislation enshrining the mechanisms of the functioning of the ecological and economic system should be balanced, systemic, clear and logical, contributing to, and not hampering, their implementation at the proper level and with the maximum effect corresponding to their goals and objectives.

Since the UN Stockholm Conference on the Human Environment (1972), there has been a steady positive trend towards improving the international and national legislation of different states on environmental protection and environmental management, as well as on the conservation of particularly valuable ecological systems and natural sites. More than 100 UN member states have adopted comprehensive laws on environmental protection, which regulate environmental policies and fundamental legal provisions. Laws and by-laws have been introduced that determine the procedure for planning environmental protection and the use of natural resources, rationing, licensing and standardization, and environmental review and inspection. Nevertheless, numerous problems remain in the field of legal regulation of environmental and economic activity.

Environmental engineering, as an environmental tool of a complex environmental and economic system, is designed to ensure that technological processes and the technology itself at industrial facilities comply with environmental requirements. Its main goal is the feasibility study of a set of such measures. However, all this should be carried out within the framework established by law. Therefore, the legal regulation of specific areas of environmental engineering, the definition of its facilities, the consolidation of the legal status of subjects of environmental engineering activities, the features and guarantees of its implementation, and the grounds and limits of legal liability for violations in this area are all prerogatives of legislation, in which there are several levels of legal regulation of environmental engineering:

1. International.
These are international treaties and agreements on natural resources and objects under the national jurisdiction of the state (for example, on the basis of the Convention for the Protection of the World Cultural and Natural Heritage of 1972, Lake Baikal, the Kamchatka volcanoes and a number of other unique Russian natural complexes and objects have the status of World Natural Heritage territories), and international treaties and agreements for the protection of "shared" natural resources.

2. The Constitution of the Russian Federation, adopted at a popular referendum on December 12, 1993. For example, Part 1 of Art. 9 of the Constitution establishes a basic principle that determines the importance of land and other natural resources as the basis for the life and work of the peoples living in the corresponding territory; Part 1 of Art. 36 establishes that citizens and their associations are entitled to have land in private ownership; Art. 41 provides for the financing of federal programs for the protection and promotion of public health, and activities promoting environmental and sanitary-epidemiological well-being are encouraged; the right of citizens to a favorable environment corresponds to the obligation of citizens to preserve nature and the environment and to take care of its wealth (Art. 58); and others.

Nevertheless, despite the seemingly large number of regulatory legal acts of various levels and legal force, there are many issues in the field of environmental engineering that need to be resolved in a legal manner.

First, given the already universally recognized importance of environmental engineering for both the ecology and the economy of the state, an appropriate regulatory and legal framework is necessary for the legal consolidation and execution of this complex interdisciplinary institution as such.

Second, the legislative consolidation of environmental regulation in relation to individual components of the natural environment (water, soil, forests, aquatic biological resources) falls short: the movement of pollutants from one environment to another, and their accumulation and total concentration in the environment, are not taken into account [2].

Third, the process of developing and approving quality standards lags behind the rate at which new chemicals appear. To address this, in practice, temporary indicative safe exposure levels and approximate permissible levels are used. Their substantiation is carried out using accelerated experimental and calculation methods, as well as by analogy with structurally similar compounds that were previously normalized. Although temporary indicative safe exposure levels and approximate permissible levels are not provided for by applicable law, their usefulness in practice is undeniable: like quality standards, they are used in design and in environmental assessments, as well as in environmental monitoring. Therefore, it is necessary to establish requirements regarding their legal regime at the legislative level. As for the standards of permissible environmental impact, the situation is better, although the literature draws attention to the lack of uniformity of the legal terminology regarding the standards of permissible environmental impact used in the federal laws "On Environmental Protection" and "On Air Protection" ("normative of permissible anthropogenic environmental load" versus "maximum permissible (critical) load"; "emission and discharge limits" versus "temporarily agreed release"; etc.).
Although the differences in the content of these identically purposed standards, referred to in different ways, are not fundamental, agreement on the terms would still be appropriate. One of the most problematic aspects of the legal regulation of rationing is the rationing of the extraction of natural resources: the Law of the Russian Federation "On Subsoil" contains no requirements for rationing the withdrawal of mineral resources. Such a position of the legislator does not comply with the Constitution of Russia, the Declaration on Environment and Development, the Basic Provisions of the State Strategy of the Russian Federation on Environmental Protection and Sustainable Development (approved by Decree of the President of the Russian Federation of February 4, 1994, No. 236), or the Concept of the Russian Federation's transition to sustainable development. Accordingly, one of the directions for improving the legal mechanism for rationing the extraction of natural resources should be the establishment, in the Law "On Subsoil", of scientifically based restrictions on the extraction of subsoil resources [3].

Fourth, a legal update is needed for the various rules in the field of environmental and economic activity that were adopted (often quite long ago) and no longer correspond to the expectations of modern reality. For example, in October 2019 the Ministry of Agriculture of the Russian Federation proposed a new draft of rules for waste management.

Fifth, for the implementation of environmental and economic activities, it is necessary to resolve the personnel issue, and this also requires an appropriate legal settlement [4].

Sixth, it is important to use a comprehensive method in the economic and legal regulation of issues affecting related industries and fields of activity: in the implementation of regional policies (for example, in the Arctic region [5]), in developing priority directions of state policy for the development of the rural economy [6] and the use of land [7] and water [8] resources, in introducing environmental engineering mechanisms into managerial decisions [9], in implementing environmental-economic ideas in the activities of enterprise management [10], etc. The same applies to various kinds of legal institutions, in which it is necessary to fix at the legislative level the specifics of solving certain problems related to environmental and economic activities, for example, in resolving constitutional law problems [11] or criminal law problems of combating crime [12], etc.

Seventh, it is necessary to focus on national projects in the development of environmental engineering systems. One of the main such projects is the digitalization of the national economy in general and of its individual branches [13]. Therefore, the development of medium-term (and especially long-term) environmental engineering programs should include such ideas, which implies the need for their legal consolidation and support.

These are just some of the legal aspects whose resolution is of great importance for the quality functioning and further improvement of the environmental engineering system in the Russian Federation.
A New PVSS Scheme with a Simple Encryption Function

A Publicly Verifiable Secret Sharing (PVSS) scheme allows anyone to verify the validity of the shares computed and distributed by a dealer. The idea of PVSS was introduced by Stadler in [18], where he presented a PVSS scheme based on the Discrete Logarithm. Later, several PVSS schemes were proposed. In [2], Behnad and Eghlidos present an interesting PVSS scheme with explicit membership and disputation processes. In this paper, we present a new PVSS scheme that has the advantage of being simpler while offering the same features.

Introduction

A secret sharing scheme is a cryptographic method for splitting a secret between a set of participants such that only some predefined subsets of participants can recover the shared secret. These qualified subsets are called access structures. A secret sharing scheme proceeds in two phases: a dealing phase, in which a dealer computes shares and gives every participant his own share, and a reconstruction phase, which consists in trying to reconstruct the shared secret by pooling the shares of a qualified subset of participants.

Secret sharing schemes were introduced independently by Shamir [16] and Blakley [3]. The first scheme is based on polynomial interpolation, while the latter is based on hyperplane geometry. Most of the proposed secret sharing schemes [1,11] are based on Shamir's scheme. Despite its efficiency, Shamir's scheme still presents some problems. In fact, it requires absolute trust in the dealer, who can distribute inconsistent shares, leading the participants to recover a secret that differs from the initial one. Verifiable Secret Sharing (VSS) schemes [5,6,13] were proposed to allow participants to verify the validity of the shares they received from the dealer. However, a malicious shareholder can receive a valid share but submit an invalid one in the reconstruction phase. Publicly Verifiable Secret Sharing (PVSS) schemes [2,4], [8-10], [14,15], [17-21] were proposed to solve this problem, i.e., to prevent cheating by the dealer and/or the shareholders. In a PVSS scheme, the validity of the distributed shares can be verified by anyone.

In [2], Behnad and Eghlidos present an interesting PVSS scheme where participants can prove their membership and the validity of their shares, preventing unauthorized parties from participating in the reconstruction process. Moreover, their scheme offers an explicit disputation process which, in conflict situations between the dealer and a participant, proves to a third party who among them is lying.

In this paper, we present a new PVSS scheme providing disputation and membership proof processes. We show that our PVSS scheme is simpler than the PVSS scheme presented in [2] while still being as secure as the mentioned scheme.

This paper is organized as follows. First, PVSS schemes are presented. After that, our new PVSS scheme is introduced. Then, the security of our PVSS scheme is studied and a comparison with previous PVSS schemes is made. Finally, we provide some concluding remarks.

PVSS Schemes

PVSS schemes, as introduced by Stadler in [18], aim to allow anyone, not only participants, to verify that shares were correctly distributed by the dealer. This property was defined by Stadler in [18] and is denoted public verifiability.
Stadler proposed in this paper two PVSS schemes that can be used with general access structures. The first one is used for sharing a discrete logarithm. It requires a non-standard assumption called the Double Discrete Logarithm Assumption (DDLP). In fact, Stadler dealt with expressions of the form y = g^(h^x) (with g a generator of a group of order p, and h a fixed element of high order in Z*_p) such that, given y, it is hard to find x. Under this assumption, his scheme is as secure as the Decisional Diffie-Hellman problem. The second one is based on the RSA root problem. It is used for sharing an n-th root and depends on the RSA assumption. Encryptions are based on a variant of the Diffie-Hellman key-exchange protocol. We should notice here that the security of this scheme was not formally studied. Moreover, the verification in these two schemes requires information exchanges between the verifier and the shareholder; we say that the verification is interactive.

In [8], Fujisaki and Okamoto defined non-interactivity for a PVSS scheme as the fact that the verification of a share can be done without communicating with the dealer or with any other participant. The scheme they proposed in [8] depends on the "modified RSA assumption", which assumes that inverting the RSA function is still hard. This modified RSA assumption allows partial recovery.

Notice that the schemes of [8,18] depend on some non-standard assumptions. Schoenmakers provided in [15] a stronger PVSS scheme by adding the requirement that, when submitting his share, the shareholder must provide a proof of its correctness. His PVSS scheme is simpler than the previous schemes. It uses techniques working in any group for which the Discrete Logarithm Problem is hard. This scheme is as hard to break as the Decisional Diffie-Hellman problem.

In [20], Young and Yung proposed an improvement of Schoenmakers's PVSS scheme. The scheme they proposed for sharing a discrete logarithm is as hard as the Discrete Logarithm Problem itself. They proved in [21] that their scheme is computational zero-knowledge. In addition, while PVSS schemes usually rely on specific secure encryption assumptions, Young and Yung can use any probabilistic encryption function in their scheme.

In [4], Boudot and Traoré proposed new PVSS schemes allowing shareholders to recover their shares quickly (fast recovery) or after a predetermined amount of computation (delayed recovery). In fact, they provide PVSS schemes for sharing a discrete logarithm and for sharing a factorization, each with both a fast-recovery and a delayed-recovery variant.

In most of the existing PVSS schemes, the verification phase is interactive. This is due to the use of the Fiat-Shamir zero-knowledge protocol [7]. In [14], Ruiz and Villar proposed a PVSS scheme with non-interactive verification. It is the first efficient PVSS scheme that does not use the Fiat-Shamir technique. It is based on the homomorphic properties of Paillier's encryption scheme [12], and it is the first known PVSS scheme based on the Decisional Composite Residuosity Assumption (DCRA), which says that given an integer z and a composite n, it is hard to decide whether z is an n-residue modulo n². The verification process in this scheme is simpler than in the other known schemes.
In [9], Heidarvand and Villar proposed a new PVSS scheme based on pairings, revisiting the scheme of Schoenmakers using a pairing. The security of this scheme is based on the Decisional Bilinear Square Diffie-Hellman (DBSDH) problem: given a bilinear map e : G1 × G1 → G2, where G1 and G2 are multiplicative groups of the same order p, a generator g of G1, and a, b, z ∈ Z*_p, it is hard to decide, given g^a, g^b and e(g,g)^z, whether e(g,g)^(a²b) = e(g,g)^z. In [10], Jhanwar proposed a new non-interactive PVSS scheme based on pairings. In this scheme, the dealer does not have to compute and distribute the shares of a given secret; he provides a set of private keys for the participants, and every participant then uses his private key, combined with another public value, to compute his share.

Recently, other PVSS schemes have been proposed. In [21], Yu et al. proposed a publicly verifiable secret sharing scheme with the possibility of enrollment. In [19], Wu et al. proposed a pairing-based PVSS scheme reducing the computation cost while keeping the same security level as existing public key systems.

Behnad and Eghlidos provided, in [2], a PVSS scheme with non-interactive verification and two peculiarities. First, after distributing the shares, and in case of any complaint from any participant, a third party can run a disputation process to identify who is lying; this third party can then vote against the dealer or against the participant. Second, Behnad and Eghlidos added a membership proof process at the beginning of the reconstruction phase, in which a shareholder has to prove his membership and the validity of his share at the same time. In [17], Ben Shil, Blibech and Robbana proposed another PVSS scheme with disputation and membership proof processes. In this scheme, rather than publishing the encrypted coefficients of the polynomial used to compute the shares, the encrypted shares themselves are published. Thus, the set of shares is public, and any insertion or deletion will be detected by all the old participants. This scheme is therefore recommended for applications where the number of participants is limited while the access structure is dynamic, and where it is worth keeping track of any change in the set of participants.

In this paper we introduce a new PVSS scheme providing a non-interactive verification process and presenting explicit disputation and membership processes. We show that our PVSS scheme is simpler than the schemes proposed in [2] and [17] while keeping the same level of security.

A new PVSS scheme

In our scheme, given two large prime numbers p and q such that q | p − 1, the following notations are used: G_q is a subgroup of prime order q in Z*_p, such that computing discrete logarithms in this group is infeasible, and g ∈ G_q is a generator of the group. In our PVSS scheme, we perform all the computations on exponents in Z_q.

Dealing phase

Distribution process

In the distribution process, the dealer sets F(x) = Σ_{j=0}^{k−1} F_j x^j, where the coefficients F_j are randomly chosen in Z_q and F_0 is the secret to share. Moreover:

1. Every participant chooses a private key a_i ∈_R Z_q and publishes g^(a_i) as his public key, for 1 ≤ i ≤ n, where n is the number of participants.
2. The dealer D computes the shares s_i = F(i), for 0 ≤ i ≤ n.
3. He publishes C_j = g^(F_j), for 0 ≤ j ≤ k − 1, and g^(s_i), for 0 ≤ i ≤ n.
4. He sends an encrypted share E_i = s_i ⊕ (g^(a_i))^(s_i) to participant Pr_i, for 1 ≤ i ≤ n. (Notice that s_0 is the secret, and thus there is no associated encrypted share to be sent to anyone.)

Verification process

Every shareholder Pr_i extracts his share by computing s_i = E_i ⊕ (g^(s_i))^(a_i), using the published value g^(s_i) and his private key a_i (note that (g^(s_i))^(a_i) = (g^(a_i))^(s_i)), and then verifies the following equality:

g^(s_i) = Π_{j=0}^{k−1} (C_j)^(i^j),

which holds for a valid share since, given C_j = g^(F_j), we have Π_{j=0}^{k−1} (C_j)^(i^j) = g^(Σ_j F_j i^j) = g^(F(i)) = g^(s_i). Otherwise, the shareholder complains against the dealer.
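As a minimal, non-normative sketch of the distribution and verification steps above, the following Python fragment uses toy parameters (a real instantiation would use a cryptographically sized prime p with q | p − 1 and a generator g of the order-q subgroup). The XOR-based encryption and the commitment check mirror E_i = s_i ⊕ (g^(a_i))^(s_i) and g^(s_i) = Π_j (C_j)^(i^j); all function names are ours.

```python
import secrets

# Toy parameters (assumption): p = 227, q = 113 with q | p - 1; g = 9 = 3^2 is
# a square mod p, hence generates the order-q subgroup G_q of Z_p*.
p, q, g = 227, 113, 9

def keygen():
    """A participant's key pair: a_i in Z_q and the public key g^{a_i} mod p."""
    a = secrets.randbelow(q - 1) + 1
    return a, pow(g, a, p)

def deal(secret, k, n):
    """Dealer: F(x) = sum_j F_j x^j with F_0 = secret; returns (C_j, s_i, g^{s_i})."""
    F = [secret % q] + [secrets.randbelow(q) for _ in range(k - 1)]
    C = [pow(g, Fj, p) for Fj in F]                                   # C_j = g^{F_j}
    s = [sum(Fj * i**j for j, Fj in enumerate(F)) % q for i in range(n + 1)]
    gs = [pow(g, si, p) for si in s]                                  # published g^{s_i}
    return C, s, gs

def encrypt_share(s_i, pub_i):
    """E_i = s_i XOR (g^{a_i})^{s_i}, treating the group element as an integer."""
    return s_i ^ pow(pub_i, s_i, p)

def decrypt_share(E_i, gs_i, a_i):
    """Shareholder side: s_i = E_i XOR (g^{s_i})^{a_i}."""
    return E_i ^ pow(gs_i, a_i, p)

def verify_share(i, s_i, C):
    """Public check g^{s_i} == prod_j C_j^{i^j} (exponents taken mod q)."""
    prod = 1
    for j, Cj in enumerate(C):
        prod = prod * pow(Cj, pow(i, j, q), p) % p
    return pow(g, s_i, p) == prod

# Tiny end-to-end run: secret 42, threshold k = 3, n = 5 participants.
keys = [keygen() for _ in range(5)]
C, s, gs = deal(42, k=3, n=5)
E = [encrypt_share(s[i], keys[i - 1][1]) for i in range(1, 6)]
assert all(decrypt_share(E[i - 1], gs[i], keys[i - 1][0]) == s[i] for i in range(1, 6))
assert all(verify_share(i, s[i], C) for i in range(1, 6))
```

The disputation and membership exchanges below reuse the same ingredients: the published values g^(s_i) and the Diffie-Hellman symmetry (g^(a_i))^(s_i) = (g^(s_i))^(a_i).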
Disputation process

In the case of any complaint, both the dealer D and the shareholder Pr_i try to prove their honesty to a third party R. To do so, D has to publish an encrypted value leading Pr_i to extract g^(s_i) and to verify the validity of the associated share s_i. If D sent an invalid share, Pr_i has to prove this fact to R. This process uses the following protocol:

1. Pr_i chooses his private key a_i and publishes his public key g^(a_i).
2. Pr_i and D independently publish g^([(g^(a_i))^(s_i)]^(−1)). R verifies that D and Pr_i published the same value; otherwise, Pr_i sends a_i to R, and R computes g^(a_i) and g^([(g^(s_i))^(a_i)]^(−1)) in order to discover who is lying. Notice that R can compute g^(s_i) from the published values: g^(s_i) = Π_{j=0}^{k−1} (C_j)^(i^j).
3. D computes and publishes λ = s_i ⊕ (g^(a_i))^(s_i).
4. Pr_i computes α = λ ⊕ (g^(s_i))^(a_i). If α matches his share, he sends a commitment to R and the disputation process is stopped. Otherwise, he sends α to R.
5. R computes g^α and verifies whether g^α = Π_{j=0}^{k−1} (C_j)^(i^j) and whether g^(1/(λ⊕α)) = g^([(g^(a_i))^(s_i)]^(−1)). If the second equality holds while the first does not, D lied; otherwise Pr_i lied.

Membership-only proof

If a verifier wants to verify that Pr_i is an authorized participant, the latter has to prove his membership to the verifier without revealing his share. Our membership proof is the following:

1. The verifier chooses a ∈_R Z_q and sends g^a to the prover.
2. The prover sends R_P = g^([(g^a)^(s_i)]^(−1)) to the verifier.
3. The verifier computes R_V = g^([(g^(s_i))^a]^(−1)), using the published value g^(s_i) and his own a.
4. If R_V = R_P, the prover is the shareholder who possesses the share s_i.

Pooling the shares

The secret is reconstructed from the submitted shares as follows: s = Σ_{i=1}^{k} w_i s_i, where w_i = Π_{j≠i} j/(j − i) are the Lagrange coefficients evaluated at 0 (all computations in Z_q). Notice that the shares can be submitted using the same encryption function as in the distribution process, where a is the private key of the party concerned with the reconstruction of the secret and g^a is its public key. Notice also that this party does not need to run the membership process before the pooling phase, since using this encryption function allows the verification of a share and its extraction at the same time.
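Continuing the illustrative sketch above (same toy parameters and caveats), the pooling phase is plain Lagrange interpolation at x = 0 over Z_q; the helper below is ours, not part of the paper.

```python
def reconstruct(points, q):
    """Recover s = F(0) from k pairs (i, s_i) via Lagrange weights w_i mod q."""
    s = 0
    for i, s_i in points:
        w = 1
        for j, _ in points:
            if j != i:
                # w_i = prod_{j != i} j / (j - i), computed with modular inverses
                w = w * j % q * pow((j - i) % q, -1, q) % q
        s = (s + w * s_i) % q
    return s

# With the values from the previous fragment, any k = 3 of the n shares suffice:
# reconstruct([(1, s[1]), (3, s[3]), (5, s[5])], q) == 42
```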
Security

In this section, we prove the security properties of our PVSS scheme. First of all, we provide our definition of a secure PVSS scheme.

Definition. A PVSS scheme is secure if and only if:
- During the dealing phase, neither can the dealer D cheat by sending an invalid share to a given participant Pr_i, nor can the participant Pr_i claim that he received an invalid share when it was valid.
- During the reconstruction phase, an unauthorized party cannot pretend to be a shareholder.
- During all the stages of the scheme, the secrecy property is verified.

Let us first prove that, in our scheme, the dealer D cannot cheat by sending an invalid share to the participant Pr_i. We show here that Pr_i can prove this fact to the third party R in the disputation phase. Thus, we prove the following lemma:

Lemma 4.1. "The dealer D cannot cheat by sending an invalid share to the participant Pr_i."

Proof. In the disputation phase, an honest dealer has to compute λ = s_i ⊕ (g^(a_i))^(s_i). A malicious dealer, however, can behave differently: he can compute λ using an invalid share or an incorrect value rather than the public key g^(a_i) of the participant Pr_i. Since λ has the form x ⊕ (g^y)^z, with x = s_i, g^y = g^(a_i) and z = s_i for an honest dealer, a malicious dealer can deviate in any non-empty subset of these three positions, so there are seven malformed values λ' that D can use. In each of these cases, Pr_i will compute α = λ' ⊕ (g^(s_i))^(a_i) at step 4 of the disputation process, and since λ' ≠ s_i ⊕ (g^(a_i))^(s_i), he will find α ≠ s_i and will send this value to R. R will then verify that g^α ≠ Π_{j=0}^{k−1} (C_j)^(i^j), while g^(1/(λ'⊕α)) = g^(1/(λ'⊕λ'⊕(g^(s_i))^(a_i))) = g^([(g^(a_i))^(s_i)]^(−1)). So R will conclude that D lied.

We also prove that, in our scheme, the malicious behavior of a participant Pr_i who received a valid share from the dealer D but claims that his share is invalid will be detected. We show here that, in the disputation phase, the dealer D can prove to a third party R that Pr_i cheated. Thus, we prove the following lemma:

Lemma 4.2. "The participant Pr_i cannot claim that he received an invalid share when it was valid."

Proof. In the disputation phase, if a participant Pr_i received a correct share s_i but claims that he received an invalid one, he has to send a fake value to R. In fact, Pr_i computes α = λ ⊕ (g^(s_i))^(a_i) but sends α' ≠ α to R. So, R computes g^(α') and verifies that it does not match the public value Π_{j=0}^{k−1} (C_j)^(i^j). Then, R verifies, at step 5 of the disputation process, whether g^(1/(λ⊕α')) = g^([(g^(a_i))^(s_i)]^(−1)). Since this does not hold, R concludes that Pr_i lied.

In addition, we prove the following lemma:

Lemma 4.3. "Under the Computational Diffie-Hellman assumption, it is infeasible to break the encryption of the shares."

Proof. Breaking the encryption of the shares is equivalent to computing s_i from the encrypted share E_i = s_i ⊕ (g^(a_i))^(s_i). To be able to do that, one has to compute s_i = E_i ⊕ (g^(a_i))^(s_i) from the inputs E_i, g^(a_i), g^(s_i). This implies computing g^(a_i·s_i) given g^(a_i) and g^(s_i). Recall that the Computational Diffie-Hellman assumption states that it is infeasible to compute g^(a_i·s_i) given g^(a_i) and g^(s_i); therefore the unauthorized party is not able to compute the share s_i. Alternatively, to break the encryption of a share s_i, the adversary could try to compute s_i from g^(s_i), which implies solving the Discrete Logarithm Problem. Given that computing discrete logarithms in G_q is infeasible, the unauthorized party is not able to compute s_i from g^(s_i).

Then, we prove the following lemma:

Lemma 4.4. "Under the Computational Diffie-Hellman assumption, an unauthorized party cannot extract the share s_i from g^(a_i), g^(s_i) and the published masked value λ in the disputation process."

Proof. To extract the share s_i, the adversary has to compute s_i from the public masked value λ = s_i ⊕ g^(a_i·s_i). This implies that he needs to compute s_i = λ ⊕ g^(a_i·s_i) given λ, g^(a_i) and g^(s_i). To do so, the adversary should be able to compute g^(a_i·s_i) from the inputs g^(a_i) and g^(s_i). However, this is infeasible under the Computational Diffie-Hellman assumption.

We also prove that:

Lemma 4.5. "Under the Computational Diffie-Hellman assumption, an unauthorized party cannot retrieve the share s_i from g^(a_i), g^(s_i) and g^([(g^(s_i))^(a_i)]^(−1)) in the first two steps of the disputation process."
Proof. Under the assumption that computing discrete logarithms in G_q is hard, an unauthorized party cannot extract (g^(s_i))^(a_i) from g^([(g^(s_i))^(a_i)]^(−1)), and under the Computational Diffie-Hellman assumption, it is not possible to retrieve s_i from g^(s_i) and g^(a_i).

Moreover, we prove that:

Lemma 4.6. "Under the Computational Diffie-Hellman assumption, an unauthorized party cannot pretend to be a shareholder."

Proof. This feature is fulfilled within the membership process. In this process, to pretend to be the shareholder possessing s_i, the unauthorized party should be able to compute (g^(a_i))^(s_i) from the values g^(a_i) and g^(s_i). However, under the Computational Diffie-Hellman assumption, this is infeasible.

Finally, we prove that:

Lemma 4.7. "Under the Computational Diffie-Hellman assumption, it is infeasible to break the encryption of the shares submitted in the reconstruction phase."

Proof. In the reconstruction phase, only the party possessing the private key a can extract the share s_i from the encrypted value E_i = s_i ⊕ (g^a)^(s_i). This party just has to compute s_i = E_i ⊕ (g^(s_i))^a. For a dishonest party knowing only E_i, g^a and g^(s_i), breaking the encryption of the shares means computing (g^a)^(s_i) from the public value g^(s_i) and the public key g^a, which is infeasible under the Computational Diffie-Hellman assumption.

In this section, we proved that neither can the dealer cheat by distributing invalid shares, nor can a dishonest participant cheat by claiming that the share he received was invalid when it was valid. Moreover, we proved that under the Computational Diffie-Hellman assumption, no one can break the encryption of the shares in the distribution process, the disputation process, or the reconstruction phase. We also proved that, under the Computational Diffie-Hellman assumption, an unauthorized party cannot pretend to be a shareholder possessing a valid share.

In the following section, we compare our new PVSS scheme to the PVSS schemes presented in section 2.

Comparison with previous PVSS schemes

In this section, in order to compare our PVSS scheme to the existing PVSS schemes, we first present the security properties of the best-known schemes. We point out that the schemes proposed in [12] and [18] do not appear in this section because we consider that these schemes have a specific context. However, we include the scheme of Feldman [6] in this comparison, since we consider it the first PVSS scheme, although public verifiability had not yet been defined when this scheme was proposed. So, for each studied PVSS scheme, we identify the cryptographic techniques it uses in every process (distribution, verification, ...) and we verify whether they satisfy our definition of security. Since most of the cryptographic techniques used are based on hard problems, we classify these hard problems into four classes:
• Discrete Logarithms: hard problems based on the Discrete Logarithm Problem.
• Factoring: hard problems based on the Factorization Problem.
• Paillier's cryptosystem: hard problems based on the Paillier cryptosystem.
• Pairings: hard problems based on bilinear pairings.
As said before, a comparison is done for every process of PVSS schemes. For the distribution process, we study the security assumptions (DLP, the Discrete Logarithm Problem; CDH, the Computational Diffie-Hellman problem; ...) of the encryption functions used to encrypt the shares before distributing them among the set of participants. Then, we evaluate the problem on which the security of the process is based. The evaluation is based on the following reduction: ElGamal ≤_P CDH ≤_P DLP.
However, for the scheme of Feldman and the scheme of Young and Yung, this evaluation is infeasible, because the cryptographic techniques used in these schemes are not specified. For more details, see Table 1.

For the verification process, we also make explicit the problem on which the security of the process is based. This evaluation rests on the same kind of reductions (ElGamal ≤_P CDH ≤_P DLP). We further classify the verification process into two classes: interactive and non-interactive verification. The verification is interactive if the verifier has to communicate with other participants and/or with the dealer to verify the validity of a share; it is non-interactive if the verifier can verify the validity of a share without any communication with other participants or with the dealer. Obviously, non-interactivity is preferred in order to reduce communications. For more details, see Table 4.

After the verification process, a participant can initiate a disputation process to complain about the validity of the share he received. The disputation process aims to verify whether the dealer is honest. We say that this process is explicit if it leads the dealer to send the share to the complaining participant in the presence of a third party, who has to identify which of the dealer and the participant is lying. Otherwise, the disputation process is considered implicit (the dealer is considered dishonest if the number of participants complaining about the validity of their shares is greater than a given parameter). Notice that only the three schemes of Table 2 offer an explicit disputation process. The security assumptions of this process for these schemes are studied in Table 3.

The membership proof can be implicit (a participant has to submit his share in the reconstruction process to prove that he is an authorized participant) or explicit (a participant can prove to a verifier that he is an authorized participant possessing a valid share without revealing this share). When this process is explicit, it can be interactive or non-interactive. In Table 3, we focus on PVSS schemes with an explicit membership proof process and study the interactivity of each process and its security assumptions.

To summarize, we provide in this paper a new PVSS scheme with the following properties. First, during the distribution process, our scheme uses a simple encryption function to encrypt the shares before distributing them; the encryption of the shares is secure under the CDH assumption. When he receives a share of the secret, a participant can extract it and verify its validity without any communication with any party, even the dealer; we say that our verification process is non-interactive. In case of any complaint against the dealer, the concerned participant, the dealer and a third party R can run a disputation process in order to establish who is cheating; the disputation process is secure under the CDH assumption. Later, an explicit zero-knowledge membership process can be run to allow every participant to prove his membership interactively to a verifier who asks for it; this process is secure under the CDH assumption. Notice here that only three schemes offer an explicit membership proof and an explicit disputation process at the same time: the present scheme, the scheme of Behnad and Eghlidos [2] and the scheme of Ben Shil, Blibech and Robbana [17].
Moreover, notice that in our scheme, when submitting an encrypted share to the party responsible for computing the secret, an implicit membership proof is given, and it is not necessary to run the explicit membership-only proof. Finally, we note that the use of the XOR operator in our scheme makes it less time-consuming than the schemes presented in [2] and [17].

Conclusion

The new PVSS scheme proposed in this paper is very simple while being secure. In fact, thanks to the use of a simple encryption function, we reduce the computations in all the processes of the scheme. In addition, as in the scheme proposed in [2], we added two new processes: a disputation process and a membership proof process. Thanks to these processes, no one can cheat.

Table 1: Evaluation of the distribution process
Table 3: Evaluation of the disputation process
Table 4: Evaluation of the membership proof
2013-07-30T20:28:18.000Z
2013-07-30T00:00:00.000
{ "year": 2013, "sha1": "ad188ab1246158a4604d0033f8a307f402cd1f12", "oa_license": "CCBYNCND", "oa_url": "https://arxiv.org/pdf/1307.8209", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "a74f64d05f3331a2384125873dc635f0106461e9", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
139823487
pes2o/s2orc
v3-fos-license
Stimuli-Responsive Mechanoluminescence in Different Matrices

Here, for the first time, we investigated the effects of matrices of different nature on the stimuli-responsive mechanoluminescence (ML) of incorporated nanoparticles. It turned out that the contraction forces initiated by the polymerization process can have compressive effects that differ by orders of magnitude. This effect was achieved owing to the introduction of ML crystals into an alumina sol–gel system, which has a large surface of coagulation contact. As one particle of boehmite produces a tension of 10^-17 to 10^-16 N per particle of matrix, compared to 10^-19 N for the PDMS matrix, the threshold of mechanoluminescence was reached at 0.04 Pa, a value that the most active materials to date have not attained. Thus, this material is promising for the production of impact detectors, photonic displays of the next generation, and other advanced devices.

■ INTRODUCTION

Nowadays, a steady trend is seen in the transition from electronic components and electric systems to photonic ones. The new direction of research, photonics, has accumulated a large number of achievements, which are increasingly used in the development of information-measuring systems of the new generation, so interest in sensor elements and photon-induced materials based on ML properties is growing. 1,2 Because such sensor elements directly transform mechanical action into an optical signal, they can be integrated into fiber-optic information-measuring systems and networks. Sensor elements may be applied together with devices of integral and fiber optics, 3 impact detectors, 4,5 systems for the registration and monitoring of impulse mechanical loads and vibrations, 4,6 and the production of sensor photonic displays of the new generation. 7 Furthermore, the application of these elements in optical memory cells is highly promising. 8 Mechatronic networks based on mechanoluminescent sensor elements and fiber-optic data transmission lines are insensitive to external electromagnetic interference, which automatically provides galvanic isolation. The use of the amplitude-time parameters of the optical signal, its spatial modulation, spectrum (color), and polarization state raises the information content of the transmitted signal. In this context, both the investigation of known ML materials that are effective in transforming mechanical action into an optical signal and the search for new ones become especially relevant.

A large number of chemical compounds are known for their mechanoluminescence (ML). 9 Despite the common name, the ML mechanism varies among different groups of composites, which is important for the further application of mechanoluminophores in device engineering. At the moment, the ML mechanisms of several groups of compounds, such as metals and quartz, 10,11 semiconductors of the form A(II)B(VI) doped with d metals, 12 and composites MAl2O4 (M = Sr or Ba) doped with rare-earth elements, 13 are relatively well studied. Recent reviews 14,15 have summarized the information about ML materials and described further applications of the composites (bioapplications are also present 16,17) and some perspectives of this field of science. However, it remains hard to achieve light emission, because strong mechanical action is required for an ML effect. In this work, we examine the influence of an alumina sol–gel matrix on mechanoluminescence and compare its effect with that of an Si-organic matrix. The investigation was carried out with SrAl2O4:(Eu2+,Dy3+) nanocrystals.
The ML process in the group of chemical composites MAl2O4 (M = Sr or Ba) doped with rare-earth elements (i.e., europium and dysprosium) is the result of a multistage mechanism, 18 as seen in Figure 1a and originally proposed by Matsuzawa et al. 19 However, this model has raised doubts and is rarely used today. The model of the process described by Dorenbos et al. is considered more suitable. 20 In this case, electrons excited from Eu2+ are released to the 5d level and then to the conduction band, which is situated close to the 5d level. Next, they are caught by Dy3+ ions. Then, when pressure is applied, recombination takes place, which results in light emission (Figure 1b).

Organic (PVC) or inorganic (silica) substrates may be used as matrices for mechanoluminescent crystals. 18 The highest emission has been reached with polydimethylsiloxane (PDMS) matrices. 7 However, a soft PDMS matrix absorbs the basic load and transmits a sufficient amount of energy to the tribocrystals only under significantly high tension. The general advantage of an alumina matrix is its ability to demonstrate contraction. While the gel is desiccated, recombination of the matrix particles occurs, which results in its compression. This provides an additional load on the ML crystals included in the matrix, so the force required for activation of the ML effect decreases. It is known that the state of a luminescent material and its photon emission can be significantly influenced by the matrix that contains the luminescent substance, or by a solvent in contact with this material. 21

The purpose of this study is to investigate the influence of different transparent matrices on the ML properties of Sr0.95Eu0.02Dy0.03Al2O4 nanoparticles, followed by the possibility of creating coatings able to induce photon emission in the visible spectral region under the action of extremely low loads.

■ RESULTS AND DISCUSSION

Contraction Phenomenon in the Composites. The effect of contraction forces, which emerges during condensation of colloidal inorganic systems, is found to exert a favorable action on mechanoluminescence. The most illustrative example of this effect is the transition of the Earth's crust from the liquid to the solid state, which is accompanied by the generation of a developed rough surface, that is, mountains (Figure 2d). The tangential compression forces originating from these formations are considerable, because crustal folding, oscillatory motion, magma rising, and discontinuous dislocations are produced. Translating analogous processes to the micro- and nanoscale leads to the same effects. For instance, drying of hydrogels, regardless of their characteristics, is connected to a drop in the surface tension of the intermicellar liquid. It is obvious that if the surface tension of the intermicellar liquid falls, then the capillary contraction forces, which thicken the structure of a solid body, will decrease. The sol–gel transition, such as the formation of a hard xerogel with inorganic network compaction, represents the most successful example of this phenomenon. At the same time, instances of liquid-phase hardening, when a compressing action on the embedded objects is absent, are known. One example is the production of amber, which is a high-molecular-weight compound of organic acids (Figure 2b). Consequently, it is possible to assume that the mechanisms of contraction force action in organic and inorganic polymers are principally different.
According to the SEM data in Figure 2a,c, the same effects are observed in the studied systems. Namely, Figure 2c demonstrates strontium aluminate formations that stand out on the surface of the boehmite xerogel. These were obtained during the polycondensation process and drying of the xerogel. On the other hand, the organosilicon matrix does not show this effect, in line with the classical behavior of organic polymers (Figure 2a). This comparison allows the determination of the influence of the matrices on the processes of quenching and amplification of the mechanoluminescence of encapsulated objects.

XRD Data. The XRD pattern of the Sr0.95Eu0.02Dy0.03Al2O4 phosphor samples is depicted in Figure 3 as the gray graph. The diffraction peaks match well with JCPDS card no. 74-0794, indicating that the pure monoclinic crystal structure with space group P21 is obtained. Co-doping with Eu2+ and Dy3+ does not cause any significant change in the host structure. Diffractograms of the composite materials based on the alumina and PDMS matrices, which were obtained in the presence of Sr0.95Eu0.02Dy0.03Al2O4 nanoparticles, prove the preservation of the strontium aluminate structure (characteristic peaks of strontium aluminate are present in both composites). Moreover, the structure of the matrices also persists in the composites. The only difference is the appearance of a wide peak in the alumina composite (left graph, wide peak at 20°) due to the presence of polyvinyl alcohol as a stabilizer in this system.

Influence of Solvent on Phosphorescence. As MO·xAl2O3 compounds doped with rare-earth elements are usually produced by the ceramic method, it was convenient to employ a ball mill to obtain an acceptably fine powder of strontium aluminate doped with Eu and Dy. This methodology partially destroys the crystal structure of the powders and increases their reactivity. As soon as the ground powder is in contact with water, the hydrolysis reaction proceeds as

SrAl2O4 + 4H2O → Sr(OH)2 + 2Al(OH)3

The hydrolysis products include aluminum hydroxide and water-soluble strontium hydroxide, which raises the pH of the medium after dissolution of the powder. During hydrolysis, the phosphorescence of the powder is expected to degrade as the crystal structure of the luminophore decays. Polyvinyl alcohol played the role of a modifying additive in this work to avoid the abovementioned effect, since it prevented direct contact with water molecules because of competitive physisorption from the solution, stabilized the formed coatings, and reduced their fragility.

Figure 4 depicts the dynamics of the phosphorescence change in Sr0.95Eu0.02Dy0.03Al2O4 entrapped in the alumina matrix during the drying process. It is shown that the primary emission persists after drying. The excitation graphs of the green-blue region at wavelengths from 400 to 480 nm and the intensity of photoluminescence (PL), measured before drying, are displayed in Figure 4a,b, respectively. PL has two peaks at excitation wavelengths of 400 and 440 nm (the material glows in the interval between the blue and green regions). Upon drying, the photoluminescence of the composite shifted mostly from the blue to the green region (Figure 4c,d). The curves depicted by solid lines in Figure 4c correspond to two states of phosphorescence, where the left line is the photoluminescence before drying, the right line is the photoluminescence after drying, and the intermediate lines are the photoluminescence during drying.
In our previous work, 22 it was demonstrated that water is removed upon drying of the alumina gel, which results in this shift. Influence of the Matrix on Quenching of Phosphorescence. Thermal quenching of crystallophore phosphorescence is connected not only with the impact of heating on a glowing center but also with the appearance of the new processthe filling of excited electron-free centers by electrons rising from the valence band, which is influenced by thermal energy. The resulting neutralized emissive centers can no longer play a role in further recombination processes. The dots, which were formed as a result of escape of electrons, move through the valence band, meet the centers of quenching, and localize to these. For the sustainable localization, it is necessary that the levels of quenching centers situated up on the valence band were noticeably higher than the levels of the glow centers. In this position of the quenching levels, electrons from the valence band cannot rise to the localized dots by the thermal path. The dots recombine with electrons from the conduction band. However, recombination close to the quenching center does not glow. Although it is taken from the point of view that PDMS and alumina matrices have principal differences at their thermal conductivity properties (Table 1), it becomes clear that the activation of these processes proceeds in different ways. As shown in the obtained data (Figure 5a), the mechanism of phosphorescence quenching in the boehmite matrix has the character of a pseudo-first-order kinetic model, whereas the PDMS matrix has a zero-order character. Most likely, this is connected with the appearance of diffusion heat transfer processes in matrices, because their heat-conducting properties are different essentially, as depicted in Table 1. It is clearly seen in Figure 5b that the introduction of ML particles in matrices does not have an essential influence on phosphorescence lifetime: the form of graphs and lifetime values of composites are close to the original. Dependence of Contraction Forces on the Nature of the Matrix. The development from the sol−gel state of boehmite to xerogel is a multistage process, which contains both ion-exchange process and drying. The bonds recombine, and the volume decreases in the alumina matrix upon drying because of solvent removal. Hence, the matrix compresses and squeezes the phase entrapped in it. The setup was constructed for the detection of the value of these forces (Figure 12), which is described in Experimental Details. The dependence of compressing action in time in the boehmite gel, which was measured during the drying process, versus cross-linking of the PDMS matrix is depicted in Figure 6 (left). As illustrated in the left graph of Figure 6, the forces of capillary contraction thicken the structure of the alumina matrix during the drying process (stage I), followed by the convergence of its elements and providing the possibility of the appearance of numerous secondary cohesive and adhesive strengthening bonds (stage II) over the growth of coagulation contact between particles when dried. The intensity of those forces rises during drying and reaches the maximum at the border of conversion gel−xerogel. The last step of viscosity increase is characterized by total liquid removal, so that the forces of capillary contraction gradually disappear (stage III). 
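Returning to the kinetic contrast noted above (pseudo-first-order quenching in the boehmite matrix versus zero-order quenching in PDMS), the following is a minimal numerical sketch of the two model shapes. The decay constants are hypothetical placeholders chosen only to illustrate the qualitative difference; they are not fitted to the measured curves of Figure 5.

import numpy as np

# Hypothetical decay parameters -- chosen only to illustrate the two model
# shapes, not fitted to the measured data.
t = np.linspace(0, 60, 200)          # time, s
I0 = 1.0                             # normalized initial intensity

# Pseudo-first-order kinetics (alumina matrix): dI/dt = -k1 * I  ->  I = I0 * exp(-k1 t)
k1 = 0.08                            # 1/s
I_first = I0 * np.exp(-k1 * t)

# Zero-order kinetics (PDMS matrix): dI/dt = -k0  ->  I = I0 - k0 t (clipped at 0)
k0 = 0.012                           # 1/s
I_zero = np.clip(I0 - k0 * t, 0.0, None)

# A quick diagnostic: ln(I) is linear in t only for the first-order process.
mask = I_first > 1e-6
slope = np.polyfit(t[mask], np.log(I_first[mask]), 1)[0]
print(f"first-order: slope of ln(I) vs t = {slope:.3f} (recovers -k1 = {-k1})")

In practice, plotting ln(I) against t is the usual way to discriminate the two regimes: a straight line indicates first-order behavior, while the raw intensity itself is linear in time for a zero-order process.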
For the PDMS matrix, those phenomena were not observed, which is assumed to result from its independence from the drying stage and from density fluctuations of the particle coagulation contacts. The macroscopic measurements of compression described above refer to a volume V = 0.3 mL of both composites before drying. The compression per one particle of matrix was calculated for alumina (average particle diameter of 15 nm) and PDMS (average molecular weight of 27,000, repeating unit molecular weight of 207.4 g/mol) to estimate the contraction forces. The assumptions of the further calculations are as follows: (1) the compression force is evenly distributed throughout all units of the matrix, so that

F = N·F0 (2)

where F is the compression force of the matrix, F0 is the force per one unit (a particle of alumina or a molecule of PDMS), and N is the number of units in the volume V. Thus, F0 can be found as

F0 = F/N = (F·Vmol/V)·(Vp/Vmol) (3)

where Vp is the volume of one particle and Vmol is the volume of one molecule. For PDMS, one particle is considered as one molecule (Vp = Vmol), so the second fraction in eq 3 transforms to 1. The results of the calculations are shown in Figure 6 (right graph). The force on one particle of alumina (in N) is higher by two orders of magnitude than the force on one PDMS molecule. Consequently, the inorganic matrix should activate triboluminescence more intensively and suppress the absorption of the mechanical load less.

Taking into account the value of the forces acting on the model nanoparticle after entrapment in the alumina matrix, a morphological transformation of the trapped strontium aluminate nanoparticles was discovered. This transformation is expected to be the edge dislocation of crystalline layers relative to each other. The composition of the extruded formations, determined by the EDX mapping method (Figure 7), supports this conclusion. According to the data obtained, outgrowths with a composition similar to strontium aluminate form on the surface of the alumina xerogel during the drying process. Their presence is explained by partial hydrolysis of the trapped strontium aluminate particles and by the action of capillary contraction forces on these objects during the drying process. In this way, the possibility of an excessive compressive load on objects trapped in sol–gel systems during this process is established.

The viscometry graphs of the alumina and PDMS matrices are displayed in Figure 8a. Alumina gelates in about 5 min, whereas PDMS cures in about 10 min. This demonstrates that these materials harden fast enough for production, while their cycle still allows the time required for convenient processing. Fluctuations of the luminescence intensity depending on the applied load were measured to confirm the aforementioned conclusions. The change in emission intensity due to mechanical tension (triboluminescence) is shown in Figure 8b. According to the data obtained, the activation thresholds of triboluminescence for Sr0.95Eu0.02Dy0.03Al2O4 entrapped in matrices of various chemical nature are significantly different. This confirms the presence of residual tension in the system and the residual action of compression forces in inorganic sol–gel matrices, in contrast to organic ones. For instance, the activation threshold of triboluminescence for the PDMS matrix is 0.21 Pa, whereas that for the alumina matrix is only 0.04 Pa. It is obvious that the use of inorganic matrices, which condense with compaction of their skeleton, has a positive effect on the excitation of triboluminescence of the captured agent.
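A rough numerical sketch of this per-unit estimate is given below. The particle diameter, molecular weight, and sample volume are taken from the text; the PDMS density (about 0.97 g/cm3) is an assumption, since it is not stated in the paper. The resulting per-unit force ratio, about 40x (between one and two orders of magnitude), should therefore be read only as an order-of-magnitude check of the argument, not as a reproduction of the reported values.

import math

N_A = 6.022e23                      # Avogadro's number, 1/mol
V = 0.3                             # matrix volume before drying, cm^3 (0.3 mL)

# Alumina (boehmite) particles: 15 nm average diameter (from the text)
r = 7.5e-7                          # particle radius, cm
V_p = (4 / 3) * math.pi * r**3      # volume of one particle, cm^3
N_alumina = V / V_p                 # number of particles in the sample

# PDMS: one "unit" is one molecule; average molecular weight 27,000 g/mol (from the text)
M = 27000.0                         # g/mol
rho = 0.97                          # assumed PDMS density, g/cm^3 (not given in the text)
V_mol = M / (rho * N_A)             # volume of one molecule, cm^3
N_pdms = V / V_mol

# With the same total force F spread evenly over all units, F0 = F / N, so the
# per-unit force ratio reduces to V_p / V_mol and is independent of F itself.
print(f"N_alumina = {N_alumina:.2e}, N_pdms = {N_pdms:.2e}")
print(f"F0_alumina / F0_pdms = {V_p / V_mol:.1f}")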
In particular, this may be applied in photon-induced coatings, which are activated by the pressure of a human finger (Figure 9). Another promising application of the highly sensitive ML composite is an advanced glowing detector on a product package. Namely, if products such as dairy and meat are spoiled, excess gas is generated inside the package. Obviously, gas generation is a result of the vital activity of bacteria, and it is dangerous to consume such products. The additional pressure of this gas results in mechanical deformation of the package (swelling). Thus, because the alumina ML material is very sensitive, we assumed that a triboluminescent label added to the cover of a fermented milk product can serve as a visual detector of its freshness (Figure 10). First, a package with an ML indicator was exposed to UV irradiation to activate the tribocrystals inside the composite material. In the case of a product before its expiration date, only a very dim glow was observed (Figure 10c), because no pressure acted on the label. On the other hand, a package swelled after the expiration date and pressurized the cover with the ML composite, so it glowed rather brightly. The experiment proves that the alumina-based triboluminescent material can help discourage customers from buying spoiled products, because the spoilage becomes visible and impossible to hide.

■ CONCLUSIONS

Overall, this investigation reports how matrices of different nature influence the mechanoluminescent and morphological properties of the corresponding composite materials, which consist of Sr0.95Eu0.02Dy0.03Al2O4 dispersed in a matrix. The alumina xerogel and PDMS were chosen as the inorganic and polymer matrices, respectively. As a result, the threshold of triboluminescence activation of the alumina composite was seen to require a rather weak external tension because of the internal contraction of the matrix, which takes place during the drying process. Because this phenomenon is absent in polymer compounds and these matrices absorb external tension, the boehmite triboluminescent material activates at a load about five times lower than the PDMS composite (0.04 Pa vs 0.21 Pa). Finally, the authors expect that inorganic matrices will expand their application in mechanoluminescent devices, such as impact detectors or photonic displays, because most of the materials developed to date do not exceed these properties.

■ EXPERIMENTAL DETAILS

Alumina Sol Synthesis. The technique described in ref 27 was employed to synthesize the sol–gel boehmite matrix. Aluminum isopropoxide (8.2 g) was added to 50 mL of distilled water, which was heated to 90°C. This step resulted in the immediate formation of a white precipitate. After that, the obtained mixture was thoroughly stirred for 15 min at 90°C. Thus, the formation of boehmite nanoparticles was completed, and the isopropanol liberated by the hydrolysis was evaporated. Finally, 1 mL of concentrated nitric acid was poured into the mixture, which was then cooled down to room temperature under continuous stirring.

PDMS Synthesis. The matrix based on PDMS was produced from Sylgard 184, which consists of two parts: a polymer and a cross-linker. First, these parts were mixed at a weight ratio of 10:1 (base/curing agent). Then, the mixture was placed in a vacuum oven to remove air bubbles and prevent them from forming. After that, this matrix was used to produce the composites.

Preparation of ML Samples.
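As a small consistency check on the sol recipe above, the following sketch computes the nominal boehmite yield from the charged aluminum isopropoxide, assuming complete hydrolysis (Al(OC3H7)3 + 2H2O → AlOOH + 3C3H7OH) and neglecting volume changes. The molar masses are standard values; the "nominal concentration" is an illustrative figure, not one reported in the paper.

# Rough stoichiometry check for the boehmite sol, assuming full hydrolysis:
#   Al(OC3H7)3 + 2 H2O -> AlOOH + 3 C3H7OH
M_aip = 204.25      # g/mol, aluminum isopropoxide (standard value)
M_boehmite = 59.99  # g/mol, AlOOH (standard value)
m_aip = 8.2         # g, as charged (from the text)
V_water = 50.0      # mL of distilled water (from the text)

n_al = m_aip / M_aip
m_boehmite = n_al * M_boehmite
c_al = n_al / (V_water / 1000.0)    # mol/L, ignoring volume changes on reaction

print(f"n(Al) = {n_al:.3f} mol -> ~{m_boehmite:.2f} g AlOOH in {V_water:.0f} mL")
print(f"nominal Al concentration ~= {c_al:.2f} mol/L")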
The Sr0.95Eu0.02Dy0.03Al2O4 powder (0.03 g) was preliminarily ground in a ball mill to an average diameter of 100 nm and immersed in the concentrated boehmite sol (1 mL) with polyvinyl alcohol as a stabilizer (2 mL of 4% aqueous solution) or mixed with the PDMS matrix (3 mL). Thus, the concentration of SrAl2O4:(Eu, Dy) in the solutions was 4.78 × 10^-5 mol/mL. These mixtures were stirred thoroughly and left in air until complete drying and curing. The scheme of the process is shown in Figure 11. In this way, both thin films covering flexible substrates and individual thin films with low mechanical strength were obtained.

Characterization. A Tescan Vega3 electron microscope was employed for surface scanning of the created composites. X-ray structural analysis was performed on a Bruker D8 Advance apparatus using Cu Kα radiation (λ = 1.54 Å); scanning was conducted over 2θ at a speed of 0.5° per minute. Dynamic light scattering measurements were carried out by means of a Photocor Compact-Z analyzer at 25°C. The pH level was regulated with 0.1 M HCl or 0.1 M NaOH. Fluorescence emission spectra and excitation spectra of the colloidal solutions were measured in water. Thermal quenching of phosphorescence was measured in a quartz cell on an Agilent Cary Eclipse fluorometer with ±0.05°C precision. The viscosity gain of the alumina matrix was measured with a Fungilab Expert Series viscometer.

Mechanoluminescence was detected by an Ocean Optics Flame spectrometer (Figure 12a). A tripod UV laser (power < 300 mW; λ = 375 nm), focused through a quartz window, was fixed on the contact point of the examined sample, and a texturometer was traversed to the sample so that the induced mechanoluminescence appeared at the load point. Next, the texturometer traversed down to the sample, producing the tension required for fast initiation of mechanoluminescence at the load point (Figure 12c). Meanwhile, changes in the luminescence intensity were detected by a Flame-S-UV-Vis spectrometer (Ocean Optics) and transmitted to the computer. Thus, the dependence of the luminescence intensity on the degree of loading was recorded.

Contraction forces were measured on an Instron 5943 strain gauge machine. The scheme of the experiment is shown in Figure 12b. The alumina or PDMS matrix (0.3 mL) was set between two planar polished nickel plates (diameter = 1.5 cm), which were fixed in the strain gauge machine at a distance of 1 mm (Figure 12d). After that, the contraction forces, which are conditioned by the compression of the matrix during the condensation process and air drying, were registered.

Figure 10. ML detector of a swollen package. Appearance of (a) the package with an ML indicator, (b) the package in a dark room, (c) the package before the expiration date excited by UV light (no tension affects the cover, so the ML label glows only dimly because of the excitation), and (d) the package after the expiration date excited by UV light (the ML label underwent mechanical load and began to glow brightly).
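The stated concentration can be reproduced with a short calculation, assuming the quoted mass refers to the doped composition Sr0.95Eu0.02Dy0.03Al2O4 and that the total liquid volume is 3 mL in both systems (1 mL sol + 2 mL PVA solution, or 3 mL PDMS); the atomic masses are standard values.

# Verification of the stated concentration, 4.78e-5 mol/mL.
A = {"Sr": 87.62, "Eu": 151.96, "Dy": 162.50, "Al": 26.98, "O": 16.00}

# Molar mass of Sr0.95Eu0.02Dy0.03Al2O4
M = 0.95 * A["Sr"] + 0.02 * A["Eu"] + 0.03 * A["Dy"] + 2 * A["Al"] + 4 * A["O"]

m = 0.03        # g of milled powder (from the text)
V = 3.0         # mL total liquid volume (assumption stated above)

c = m / M / V   # mol/mL
print(f"M = {M:.1f} g/mol, c = {c:.2e} mol/mL")   # ~4.8e-5 mol/mL, matching the text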
2019-04-30T13:08:56.440Z
2018-12-28T00:00:00.000
{ "year": 2018, "sha1": "929c0e1325242fa85a5d359cb20e81e2700ff8a6", "oa_license": "acs-specific: authorchoice/editors choice usage agreement", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.8b02696", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e67ca34b3b3c280b49451f1568cbecf4ef3b4f29", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
261759266
pes2o/s2orc
v3-fos-license
All-Inside Posterior Cruciate Ligament Reconstruction With Remnant Preservation: Anteromedial Portal Technique

The posterior cruciate ligament (PCL) is an important restraint to posterior tibial translation. PCL reconstruction is one of the most challenging procedures, with the literature having described many techniques for reconstruction. Protecting the neurovascular structures, overcoming the "killer turn," preserving bone, and reducing morbidity and postoperative pain are a few of the technical challenges that surgeons often encounter during PCL reconstruction. We describe a technique using a graft-link construct through the anteromedial portal for all-inside PCL reconstruction with remnant preservation that protects the graft from the killer turn of the tibia by smooth passage of the graft over the remnant and improves proprioception, thereby reducing postoperative pain and morbidity and achieving excellent functional outcomes.

Advances in technique allow us to perform reconstruction using sockets instead of tunnels. 7,8 We describe a technique using a graft-link construct through the anteromedial (AM) portal for all-inside PCL reconstruction with remnant preservation that protects the graft from the "killer turn" of the tibia by smooth passage of the graft over the remnant and improves proprioception, thereby reducing postoperative pain and morbidity and achieving excellent functional outcomes.

Surgical Technique

Patient Positioning

The patient is operated on while under spinal anesthesia in the supine position; the affected knee hangs from the side of the operating table and remains in 90° of flexion throughout the surgical procedure (Video 1).

Graft Harvest and Preparation

A padded tourniquet is applied, and PCL laxity is appreciated clinically. Arthroscopic confirmation (sloppy anterior cruciate ligament [ACL] sign) is performed using a 30° arthroscope after standard anterolateral and AM portals are created. An isolated semitendinosus graft is harvested and quadrupled to achieve an 8-mm thickness, and a graft link is created using 2 adjustable loops, one on each side (Fig 1, Video 1). A graft length of 8 cm is achieved. Because remnant preservation is planned, a graft thickness of 8 mm is considered adequate to avoid overstuffing and allow smooth passage of the graft. The planned intraosseous socket length is 15 mm on the femoral side and 30 mm on the tibial side.
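As a quick sanity check on these dimensions, the sketch below subtracts the planned socket depths from the graft length to estimate the free intra-articular segment. This simple subtraction is our illustration, not a calculation from the article, and it ignores the length taken up by the adjustable loops.

# Quick check that the planned socket depths fit the harvested graft length.
# Assumption: the free intra-articular segment is simply graft length minus the
# bone within the two sockets; adjustable-loop length is ignored here.
graft_mm = 80          # quadrupled semitendinosus graft, 8 cm (from the text)
femoral_socket_mm = 15 # planned femoral socket depth (from the text)
tibial_socket_mm = 30  # planned tibial socket depth (from the text)

intra_articular_mm = graft_mm - femoral_socket_mm - tibial_socket_mm
print(f"free intra-articular graft segment ~= {intra_articular_mm} mm")  # 35 mm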
Arthroscopic Approach

For improved visualization, 2 windows are created: one between the ACL and the remnant PCL (lateral window) and the other between the remnant PCL and the medial femoral condyle (medial window) (Fig 2, Video 1). The posteromedial (PM) portal is created in standard fashion by use of a spinal needle and switching stick through the medial window (Fig 3). A shaver is introduced through the PM portal, and adhesions from behind the remnant PCL are released. Similarly, viewing from the PM portal and bringing the shaver and radiofrequency probe from the AM portal, the surgeon removes adhesions from the remnant PCL; thereby, a clear cleavage plane is developed to protect the neurovascular structures during creation of the tibial retro-socket. The endpoint of clearance is achieved when reaching underneath the popliteus muscle (Fig 4).

The FlipCutter retro-reamer is used for socket creation. The tibial exit point of the FlipCutter is marked by identifying the distal-most part of the remnant PCL tissue to avoid damage to the native PCL (Fig 5). The tibial jig is inserted through the lateral window and kept over the mark; the FlipCutter is introduced (Fig 6) and flipped flush with the bone, followed by the creation of a tibial socket 30 mm in size. A Beath pin along with No. 2 FiberWire (Arthrex) is passed through the socket and retrieved through the lateral window in the anterior compartment (Fig 7).

The PCL femoral retro-socket is prepared using an all-inside femoral jig (Fig 8). Initially, the PCL femoral point is marked, the PCL femoral jig is placed over the mark (Fig 9), and the retro-reamer is passed through the jig from outside into the joint. The retro-reamer is flipped, and creation of the retro-socket is achieved once the retro-reamer is flush with the bone. A femoral retro-socket 15 mm in size is created. After clearance of debris, a Beath pin along with suture is passed into the joint. To prevent intertwining of the sutures, a PassPort cannula (Arthrex) is passed. First, the tibial leading suture is retrieved, followed by retrieval of the femoral suture to the outside.

The adjustable-loop graft-link construct is loaded over the tibial leading suture and pulled inside; it is then taken outside through the tibial retro-socket. Through the AM portal, the graft is delivered inside the tibial retro-socket and then into the femoral retro-socket, where it is adequately tensioned. All of these steps are achieved under continuous vision through the PM portal. A strong anterior drawer force is placed on the tibial side until the tibial step-off is re-created, after which final tensioning of the graft is performed (Fig 13). Final patency and tension are assessed with a probe before closure (Fig 14).

Postoperative Care

A knee brace with posterior tibial support is applied for 3 weeks, followed by passive range-of-motion exercises. Toe-touch weight bearing is started immediately postoperatively, with a progressive increment after 3 weeks. Sporting activities are restricted for 6 months.

Discussion

Arthroscopic PCL reconstruction is a technically demanding surgical procedure. Various technical challenges have been described in the past, which have led to the development of different techniques for its management.

Our technique for all-inside PCL reconstruction with remnant preservation has various advantages (Tables 1 and 2), in addition to the benefits of all-inside PCL reconstruction previously described by Adler 3 and further substantiated by Vasdev et al. 8 Because there is a risk of damaging the posterior neurovascular structures during the creation of tibial tunnels, 7 retro-sockets are created using a FlipCutter to minimize this risk. The creation of retro-sockets preserves the intervening bone bridge and helps in achieving cortical fixation of the graft. 8 In our technique, the graft-link construct is passed above the PCL remnant, thereby aiding smooth passage of the graft and avoiding the killer-turn effect. The graft is delivered through the AM portal, which allows continuous visualization of graft passage into the sockets through the PM portal.

The limitations of our technique include inadequacy of graft thickness when using isolated semitendinosus graft, which can be overcome by gracilis supplementation (Table 3). There is a risk of intertwining of the sutures during retrieval, which is prevented by using a PassPort cannula (Video 1). Furthermore, there is a possibility of overstuffing owing to the presence of the remnant PCL; this is avoided by using 8 mm of graft as opposed to previously described techniques.
To summarize, the described technique provides a reproducible way to perform all-inside PCL reconstruction with remnant preservation and graft passage via the AM portal, which has multiple advantages over previously described methods. Passage of the graft over the remnant helps achieve a smooth excursion, thereby negating the killer-turn effect. Only an 8-mm-thick graft is required; hence, semitendinosus graft alone suffices, which helps reduce morbidity. Passage of the graft above the remnant helps achieve a better intra-articular length and preserves graft isometry. The presence of mechanoreceptors in the remnant provides more proprioception, thereby allowing early rehabilitation.

Table 3. Limitations of Technique
A smaller semitendinosus graft can result in a thickness < 8 mm when tripled; this pitfall can be overcome by combining gracilis with the graft.
There is a risk of intertwining of suture during retrieval, which is prevented by using a PassPort cannula.
There is a possibility of overstuffing of graft owing to the presence of the remnant PCL; this is avoided by using 8 mm of graft.

Fig 2. Arthroscopic view of left knee showing creation of medial window (between remnant posterior cruciate ligament [PCL] and medial femoral condyle) and lateral window (between anterior cruciate ligament [ACL] and remnant PCL).
Fig 3. Arthroscopic view of left knee showing creation of posteromedial portal through medial window between remnant posterior cruciate ligament (PCL) and medial femoral condyle.
Fig 4. Identification of popliteus as final level of clearance of adhesions (with posteromedial portal as viewing portal and anteromedial portal as working portal).
Fig 5. Arthroscopic view of left knee showing marking of distal part of remnant posterior cruciate ligament (PCL) as tibial footprint of graft.
Fig 6. Arthroscopic view of left knee showing FlipCutter and tibial jig for creation of tibial retro-socket.
Fig 7. Arthroscopic view of left knee showing retrieval of suture from lateral window between anterior cruciate ligament and remnant posterior cruciate ligament.
Fig 8. Intraoperative view of left knee showing creation of femoral retro-socket by all-inside femoral jig.
Fig 10. (A) Intraoperative view of passage of graft through anteromedial portal by PassPort cannula. (B) Arthroscopic view of left knee showing passage of graft through anteromedial (AM) portal by PassPort cannula.
Fig 12. Arthroscopic image of left knee showing passage of graft in femoral tunnel with outside toggling of femoral adjustable sutures.
Fig 14. Arthroscopic image of left knee showing assessment of final patency of graft with probe.

Table 1. Advantages of All-Inside PCL Reconstruction
Bone-preserving surgery is achieved owing to the creation of sockets instead of tunnels.
Pain and morbidity are decreased.
The technique is ideal for multiligamentous reconstruction because it is a bone-preserving procedure.
The graft-link construct with suspensory fixation helps in tensioning the graft from both sides even after passage into the sockets.
Retrograde reaming with the FlipCutter avoids the risk of injury to the neurovascular bundle.

Table 2. Advantages of Remnant-Preserving PCL Reconstruction
2023-09-14T15:35:44.412Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "775c7cfe8a6076bd33d834d9ac555a9d42190142", "oa_license": "CCBYNCND", "oa_url": "http://www.arthroscopytechniques.org/article/S2212628723001676/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bccb1942e52459ced43e98981f5fc7f5fec58958", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
231774215
pes2o/s2orc
v3-fos-license
An Insight into the Indian Railways COVID-19 Combat Coronavirus pandemic has affected many lives, and several rigid rules and policies had to be implemented across the globe to curb the spread of the disease. A nation-wide lockdown was announced on March 22, 2020, in India to curb the spread of the Coronavirus (COVID – 19) pandemic. The entire nation was brought to a standstill with only the essential services running. The pandemic had put many of the organizations on the moratorium, especially the travel industry. Indian Railways were no exception to it. But they have risen to the occasion to stand strong with the nation. The efficient disaster management planning of the Indian railways has helped them to fight the battle bravely. Though the passenger trains were brought to a halt in the initial lockdown period, the freight services were functional, carrying out essential duties during the lockdown. From manufacturing and distribution of PPE kits, transportation of essential commodities, service from medical warriors, Indian Railways has taken all the possible steps in the nation's fight against the pandemic. This article focuses on the COVID-19 guidelines followed by the Indian Railways and their noble work during the COVID-19 national crisis for the wellbeing of employees, passengers, and the general public by using secondary data such as journals, newspapers, magazines, and memorandums. Keyword: Indian Railways, COVID-19, Pandemic, Disaster Management, Health, and Safety INTRODUCTION : The SARS-CoV-2 Coronavirus was first identified in the Chinese city of Wuhan and has spread throughout China and the world. The World Health Organization has declared COVID-19 a pandemic. India announced its first case of COVID-19 on January 30, 2020. A nation-wide lockdown was announced on March 22, 2020. Lockdown was carried out in several phases bringing many of the organizations on the moratorium. Some of the facilities, however, were operational, including pharmacies, hospitals, banks, grocery stores, and other vital services to keep life going [1]. Indian Railways, which is considered the nation's lifeline, has once again proved to be at its best in combating the pandemic of Coronavirus. It had briefly suspended its passenger transportation services for the first time in its 167 years of existence to avoid the spread of the virus [2]. However, the pandemic did not halt the IR services. In their battle against the pandemic, the freight services continued alongside some of the significant contributions IR has initiated. They used this time of lockdown to train and educate their personnel to handle the pandemic safely. They conducted several pandemic awareness camps for workers, the general public, and the residents of the Railway colony throughout the country. Many of the offices of the government and non-essential departments were requested to operate remotely. The workers have successfully adapted themselves to these changes. The IR used the existing resources efficiently to manufacture PPE required to protect from COVID-19 in large numbers for their staff and distributed to the public. Several temporary hospitals have been established to treat patients affected by COVID-19. Besides, many developments have been promoted that can aid in the period of crisis, and one of them was the manufacture of low-cost ventilators [3]. At the time of crisis, the IR's freight services transported critical goods across the country. 
To accommodate the growing number of COVID-19 cases, many coaches were converted into isolation wards. This is not the first time the IR has faced natural or man-made disasters. In the past, there have been many instances, such as floods, landslides, and other disasters, where the IR played a significant role in helping the country in times of crisis [4][5]. Special trains were scheduled to transport essential agricultural products and milk across the country.

OBJECTIVES :

This paper mainly focuses on identifying the Indian Railways' contribution to fighting the Coronavirus pandemic. The main objectives of the research paper are as follows: (1) To explore Indian Railways guidelines during the COVID-19 pandemic. (2) To identify the contribution of Indian Railways during the COVID-19 pandemic. (3) To identify initiatives taken by Indian Railways for the wellbeing and maintenance of employees. The article highlights some of the steps taken by the Indian Railways for the wellbeing of their passengers and employees during the COVID-19 pandemic, such as awareness camps, distribution of PPE, in-house manufacturing of essentials for COVID-19 precautions, medical aid, etc.

METHODOLOGY :

The paper consists of conclusions deduced through ABCD analysis (which results in an organized list of advantages, benefits, constraints, and disadvantages in a systematic matrix) of the Indian Railways' handling of the COVID-19 pandemic, based on secondary data from varying sources such as peer-reviewed journals, reports, magazines, news websites, circulars, and books.

INDIAN RAILWAYS GUIDELINES DURING THE COVID-19 PANDEMIC :

From March 25, 2020, in the midst of the national lockdown to tackle the spread of the COVID-19 pandemic, the Indian Railways suspended all its passenger, mail, and express trains and began operating the Shramik Special trains from May 1, 2020 to transport stranded migrant laborers and others. However, the freight trains continued to provide their service to meet the transportation of essential goods throughout the country. In line with WHO COVID-19 protection guidelines, many precautionary steps were introduced phase-wise [6].

Management of suspect/confirmed COVID-19 cases: The guidelines for the management of COVID-19 cases in the Railway Coaches-COVID Care Centers were issued on 7th April 2020 [7]. As per the guidance document, the Railway coaches were used as COVID Care Centers for very mild and mild, suspected/confirmed COVID-19 cases. People were screened for symptoms and clinical conditions and accordingly assigned to the coaches. In case of moderate or severe symptoms, they were referred to the designated centers or hospitals for further management. Standard treatment protocols of the ministry were followed by the trained doctors and paramedical staff deployed to the special coaches for the management of cases. Railway Protection Force (RPF) personnel were deployed to monitor the security of the coaches, patients, and staff. Food was arranged by the IRCTC. Proper signage was placed all around the Railway stations. The AC temperature of the coaches was controlled. A Basic Life Support Ambulance was also stationed at the railway stations. Before handing a train back for normal use, the coaches were to be disinfected and cleared of all biomedical waste. The coaches were also to be disinfected regularly by the railways as per the protocol of the Ministry of Health and Family Welfare until further notice [7].
Guidelines for passengers: Passenger trains were gradually restarted with certain criteria, such as an advance reservation period of 30 days. Reservation Against Cancellation (RAC) was permitted for the passengers. Boarding by waiting-list passengers and unreserved coaches were cancelled during this period. Ready-to-eat packed food was provided, and other precautionary measures such as social distancing, temperature screening, and hygiene protocols were followed in trains and stations [8]. The IR made the use of face masks and the Aarogya Setu app mandatory for passengers. It was also mandatory for passengers to reach the station a minimum of 90 minutes prior to the scheduled train. The AC and non-AC coaches were modified to make travel safer under new-normal requirements [9].

Guidelines for Railway employees: As a preventive measure for the containment of COVID-19 in the initial phase, several instructions were issued to the Railway employees. All branch offices were asked to prepare a duty roster for employees providing essential services within each department. The offices were kept open with bare minimum staff. Instructions were issued for employees nominated to work from home on a turn basis in the initial phase. They were asked not to leave their city limits without prior approval. They were instructed not to leave their homes during office hours and to always be available by telephone or electronic means of communication. All staff were instructed not to venture out and to avoid contact with outsiders to safeguard themselves. The offices and surroundings were sanitized regularly. Privilege passes, Privilege Ticket Orders (PTOs), and post-retirement passes were suspended briefly [10].

COVID-19 awareness campaign: The Indian Railways initiated a Coronavirus awareness campaign for its passengers and employees. Educational posters, including graphical representations displaying the necessary measures to curb the spread of the virus, were displayed at prominent locations at the railway stations for passenger information. Regular announcements were made through the public address system, video clips from the health department were shown, and messages were circulated on social media. Besides, Railway hospital staff, divisional health officers, and other employees conducted awareness campaigns for the passengers, employees, and Railway colony residents. The awareness campaigns helped reduce panic among the passengers and the employees and educated them on the precautionary measures. The IR joined the public movement against COVID-19 called the Jan Andolan Campaign, a COVID-appropriate-behavior awareness campaign for the new normal covering protocols such as wearing a mask/face cover and maintaining physical distance. Under the 'Break the Chain' campaign, initiatives such as the removal of curtains and blankets from trains, foot-operated hand-wash soap dispensers, and social-distancing reminders were introduced [5][11].

Manufacturing of PPE kits: The IR initially procured PPE kits for its own use. With the growing need for PPE kits, the Railways started in-house manufacturing of PPE kits. In April 2020, they initially manufactured 1000 PPE kits per day in their 17 workshops spread across the country, which rose to 4000 per day in the subsequent month.
These PPE kits were not just utilized by the doctors and paramedics of Indian railways but were also supplied to other doctors in the country. The design and quality of these PPE kits were approved by DRDO. Apart from PPE kits which include the apron, gloves, face masks, and face shields, sanitizers were also manufactured in huge quantity. Face masks and sanitizers were supplied to all staff, including contract laborers who were on duty. As of 21 st May 2020, 1.2 lakh PPE's, 1.4 lakh liters of sanitizers, and 20 lakh reusable face covers were produced in the workshops of IR [5][12]. Manufacturing of ventilators and other innovations: The Rail Coach Factory (RCF) trained itself well in advance to combat the deadly Coronavirus and produced many of the crucial pandemic fighting equipment. They began investigating the possibility of having an in-house ventilator, as recommended by the Railway Board, New Delhi. A low-cost ventilator prototype 'Jeevan' was successfully developed. Most of its components were manufactured in the factory from pre-existing raw materials. Two of its components were outsourced from a company located in Delhi and Noida. Besides this, low-cost commercial-grade sanitizers and foot-operated hand wash dispensers have also been developed by the Rail Coach Factory. The Southern Railway Signal and Telecommunication Engineers have created a Social Distancing Ensuring Device with a minimal price of Rs. 800, that alerts social distancing by setting the alarm until the people are 3 meters apart when two or more people wearing the device come within 2 to 3 meters range. A contactless ticket checking system was introduced which works using QR code. Besides, few other innovations have been developed mainly the hands-free amenities, disposable linens, plasma air purifiers in the post-COVID coaches [5][13] [14][15]. Cleanliness drive: To curb the spread of coronavirus infection, the Indian Railways took unprecedented safety steps. Train coaches were sanitized with disinfectants as part of the cleanliness drive to maintain the standards of hygiene, making travel safe for passengers. East Central Railways has issued guidelines for disinfecting the seats, washbasins, toilets, doorknobs, etc. to the sanitation workers of major stations. The Railways have removed the providing of blankets in their AC coaches temporarily. To mark Independence Day, the IR observed a cleanliness week from August 10, 2020. The Ministry of Railways also reported that the critical focus would be the intensive cleanliness of stations, trains, water sales points, toilets, and drains [5]. Hospital and Medical services: The IR healthcare facilities are distributed all over the 586 health units and 125 Railway hospitals throughout the country. The IR has mobilized more than 2500 doctors and 35000 paramedic personnel to meet COVID-19 needs in a phased manner. All emergency services are continued, and elective surgeries have been deferred temporarily. More than 50% of the hospital beds were dedicated for treatment of COVID-19 patients. The community centers were turned into quarantine centers, and more physicians and paramedics were hired to manage additional workloads due to increasing COVID-19 cases. Also, a mobile doctor's booth named 'CHARAK' was launched by the national transporter's coach rehabilitation workshop at Bhopal. The portable booth enables zero-contact patient check-ups and guarantees that medical professionals or doctors considering the outbreak of COVID-19 are healthy. 
Online training for doctors and paramedics was provided. The Railway Emergency Cell, a 24x7 COVID helpline, was created as a comprehensive nation-wide unit comprising around 400 officers and staff. During the lockdown, the cell received queries, requests, and suggestions through its five communication and feedback platforms [16][17].

Isolation wards: One of the most impressive and innovative contributions of the IR is that 5231 of its coaches were transformed into COVID Care Centers by the Ministry of Railways. These coaches were used for mild COVID cases that could be clinically allocated to the COVID Care Centers (CCC) as per the Ministry of Health's guidelines. These coaches were adapted for use in areas where state facilities had been exhausted and capacity needed to be expanded to isolate both suspected and confirmed COVID cases. These coaches served as Level 1 COVID Care Centers, with two beds having oxygen supply.

Railway Protection Force (RPF) contribution: Being the loyal protectors of the Indian railways, RPF personnel were deployed to their duties with full PPE at the railway stations. They handled panic/distress calls related to COVID-19 in addition to ensuring and tracking COVID-19 safety measures in the IR stations and offices. The RPF provided free food and sanitizers to the poor and vulnerable. They ensured that the RPF control rooms operated smoothly [5].

Free meals, temporary essential products market, and other services: The Indian Railway Catering and Tourism Corporation (IRCTC) base kitchens, at more than 70 locations across the country, distributed free hot cooked food, especially to stranded people, daily-wage laborers, migrants, the homeless, and the poor. The food was distributed with the help of the RPF, GRP, state government commercial departments, district administrations, and NGOs. More than 30 lakh free meals were distributed within a month. Water bottles were supplied to police personnel who were on COVID duty. The Central Railway permitted vendors to use its vacant land at Mumbai railway stations for selling fruits and vegetables. The approval was granted immediately by the Government of Maharashtra and the Railway administration. The same model was followed in many other parts of the country to run temporary markets supplying essentials during the pandemic. Some of the railway staff came forward generously, lending helping hands to coolies/luggage porters who had lost their jobs during the lockdown. The teams contributed money on a charity basis and purchased and distributed essential commodities to the needy porters [5].

Special Trains: When the whole world was under lockdown, the Indian Railways kept its wheels turning for the transportation of medical and essential commodities throughout the country with freight trains that were operational 24x7. Parcel trains were initiated between various destinations. Drone surveillance was introduced at some of the stations to monitor physical movements during the lockdown. Jai Kisan special freight trains were scheduled for the speedy delivery of farm products and food essentials to different parts of the country. In this new concept, two trains, that is, 84 (42 + 42) covered wagons, are clubbed together to move as a single train carting 5200 tons of food grains to different destinations [20]. Doodh Duronto special trains, which are railway milk tankers, were introduced in the national interest to transport milk to different destinations. A single tank could hold up to 42,000 liters of milk.
The special trains reached their destinations in 36 hours, on par with express trains. After the lockdown, the main challenge was to ensure proper thermal scanning of the large number of passengers arriving at stations from the special trains. Railway officials coordinated with the district administrations to arrange appropriate road transportation to transfer the passengers to their native places and vacate the stations [21].

Shramik Special trains: 3840 Shramik special trains were operational from 12th May from various states across the country, and over 52 lakh passengers were moved. Free meals and drinking water were provided to the passengers. These trains were specially started for migrant workers and others who were stranded at different places during the lockdown [5][22].

Disinfection chamber: Fumigation/disinfection tunnels were set up by the Indian Railways at different locations. The fumigation spray starts and sanitizes the entire body of a person entering the tunnel. It can also be used to sanitize pieces of equipment or other items. A fumigation tunnel was set up at the Indian Railways' Jagadhri Workshop. A disinfection tunnel was made available at the Electric Loco Shed, Bhusawal. At the electric loco sheds at Kanpur and Jhansi, a total of 3 distinct designs of sanitization tunnel prototypes were prepared [5][23].

RailTel: RailTel Corporation, a Mini Ratna (Category-I) public sector undertaking under the Ministry of Railways, is one of the country's largest neutral telecommunications infrastructure providers, holding an exclusive Right of Way (ROW) pan-India optical fiber network along railway routes covering all major cities and towns. RailTel handles crucial communication systems, video conferencing, and e-office platforms, as well as storing essential IR data. RailTel has implemented an e-office for the IR, enabling a paperless office system for working from home using this platform. The users were trained remotely. Most of the meetings took place as video conferences supported by the RailTel HD video conferencing network operation center. Apart from providing essential services, RailTel contributed Rs 12 crores to the Prime Minister's Citizen Assistance and Relief in Emergency Situations fund (PM CARES fund) and additionally volunteered Rs 15.5 lakh, which is one day's salary of all RailTel employees [5].

EMPLOYEE WELL-BEING INITIATIVES OF INDIAN RAILWAYS DURING THE PANDEMIC :

Awareness campaign: Employees were educated on the dos and don'ts to curb the spread of the COVID-19 pandemic. Posters and display boards were put up across the Railway station premises, trains, and other Railway offices and establishments. Dissemination of information related to COVID-19 was done through social media channels. The Railway employees across different zones and divisions took part in the Jan Andolan, a public campaign. Railway officers and staff took the COVID pledge through video conferencing. A collection of steps to be taken by the zonal railways to keep employees safe is described in the Rail Parivar Dekh-Rekh Muhim. The procedure drawn up by the Central Railway to protect its employees from COVID-19 includes the mapping of all its 13 lakh staff and the identification of potential quarantine facilities for each of them. All senior officials concerned were advised to always keep a complete map of the staff with them. They were also told to develop a database of safe staff and volunteers.
The protocol notes that workers and their dependents who have co-morbidities, such as hypertension and diabetes, must be given special consideration. Each employee has been contacted and mapped so that support can reach them quickly in case of an emergency. In addition, details such as the travel history and the presence of any COVID-19 symptoms of employees and their family members were collected through a user-friendly online survey web application.

Healthcare services: All central government employees have access to railway health facilities. Half of the beds in the railway hospitals were reserved for patients with COVID-19. Staff could have their OPD drug bills reimbursed. A 24x7 hospital helpline was introduced.

Mask supply, sanitizers, and PPE kits: Masks and hand sanitizers were distributed to all employees and contract workers. The offices and railway colonies were routinely sanitized.

Safety Counselling: Railway workers are periodically advised about the coronavirus outbreak and are encouraged to download the Aarogya Setu app. Mobile train radio communication was used for safe and efficient freight train operations [5].

ABCD ANALYSIS :

Advantages, Benefits, Constraints, and Disadvantages (ABCD) is a model framework used to analyze the effectiveness of a plan in any organization. In this article, we analyze the contributions of the Indian Railways during the COVID-19 pandemic using the ABCD analysis framework. This analysis will help in understanding the constraints of the current system of the Indian Railways and can help in developing adequate measures for a smooth transition to the new normal [24,25].

Table 2: ABCD Analysis of the Indian Railways contribution and services provided during the pandemic
Advantages: 1. The Indian Railways has good infrastructure and manpower resources for the development of Indian Railways. 2. The medical and healthcare system of the Indian Railways was geared up to accommodate the challenges of the pandemic.

This analysis may help to initiate an action plan for developing the Indian Railways economy, revenue, employment opportunities, and services [25].

CONCLUSIONS :

Being one of the pillars of the nation, the Indian Railways has risen to stand with the country to curb the pandemic. It provided much-needed relief materials to the general public all over the nation. The Railways were used extensively during the lockdown period to transport essential goods. Additionally, the Railways' manufacturing units came up with several innovations to fight the pandemic and manufactured everything from basic hospital furniture, such as stretchers, beds, and medical trolleys, to ventilators. The production units also consistently produced PPE such as masks, aprons, sanitizers, and face shields for railway employees, doctors, and paramedics across the nation. Based on the guidelines issued by the Ministry of Health and Family Welfare, the IR successfully trained its staff to work remotely. Employees' medical insurance was upgraded to cover COVID-19 care, and they were provided with the required medical help during this crisis whenever necessary. The Railways proved to be a model organization through the decisive steps taken during the COVID-19 pandemic.
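For readers who want to extend the analysis, a minimal sketch of the ABCD matrix as a data structure is shown below. Only the Advantages entries recoverable from the paper's truncated table are included; the other factor lists are deliberately left empty rather than invented.

# A minimal sketch of the ABCD framework as a data structure.
abcd_analysis = {
    "Advantages": [
        "Good infrastructure and manpower resources for development",
        "Medical and healthcare system geared up for pandemic challenges",
    ],
    "Benefits": [],       # not recoverable from the source table
    "Constraints": [],    # not recoverable from the source table
    "Disadvantages": [],  # not recoverable from the source table
}

for factor, contents in abcd_analysis.items():
    print(f"{factor}: {len(contents)} item(s) listed")
    for item in contents:
        print(f"  - {item}")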
GIS and RS based analysis of LULCC in the Indian Himalayan region

Land is an essential resource of the total ecological system. Analysing LULCC is important for a vast range of applications such as landslide assessment and land-use planning. In this study, LULCC has been analysed over a period of two decades (2000-2021) using RS and GIS based analysis of Shimla, Himachal Pradesh, India. A supervised classification technique is used to analyse LANDSAT images from the year 2000 to 2021. Changes in land-use pattern were obtained for each successive image, and the final changes were obtained by comparing the 2000 and 2021 USGS data. The results indicate a major change in growth: thickly vegetated land reduced from 95.52% to 20.22% by the year 2021, whereas moderately vegetated land reduced from 60.25% to 10.50%. Urban land increased from 75.65% to 180.50%, agricultural land increased from 70.63% to 190.25%, and barren land increased from 65.25% to 150.23%.

1. Introduction

Land-use/land-cover change (LULCC) is a primary concern because human (anthropogenic) activities keep changing the pattern of land use. People have modified the environment for a very long time to satisfy their desire for products such as food, fibre, timber and medicinal herbs, and this has accelerated to a great extent within the last three decades. As the world population increases rapidly, growing pressure is exerted on the land surface, disturbing the balance between environmental variables. It is therefore essential to examine changes in LULC so that their impact on the global ecosystem can be assessed and sustainable land-use planning can be formulated. Land use and land cover are two different terms that are often used together: land cover refers to the surface cover of the ground, such as forest, water bodies, soil, and vegetation, whereas land use refers to the land used by people and their habitat for different activities. Increasing population pressure and development in various fields, especially urban and industrial development, are changing the land-use pattern drastically and causing land degradation. This accelerating change is a warning sign: it has a huge impact on the local, regional, national and global environment and consequently affects food availability. LULC is continuously changing the surface of the earth, and increasing population pressure adversely affects LULC. Many other factors also drive land-cover dynamics, and a few researchers have concluded that demographic factors strongly accelerate LULC change [1][2][3]. As shown in Figure 1, geographical information systems (GIS), remote sensing (RS) and global positioning systems (GPS) are widely used as powerful and efficient tools for detecting and analysing changes in land-use patterns around the world. RS offers multispectral data that reveal changes in the land through sensors without physical contact, GPS is often used to collect positioning information for reference points used in remote-sensing classification and correction, and GIS is useful for capturing, analysing and storing information on LULC and its changing pattern. Integrative use of these technologies has established their effectiveness in handling spatial change data and, particularly, in providing correct and timely geospatial information illustrating LULC change patterns [4][5][6].
This information then supports urban planners in design and township planning. Many researchers have already studied the importance of LULC: land-use changes for the Mandhala watershed located in Solan, HP, India; the behaviour of LULC of the Vamanapuram geographical area, southern Kerala, India; LULC with special reference to the Mandovi-Zuari water complex in Goa; and the connection between land-use changes and changes in water flux for the Sal watershed in Chamba, Himachal Pradesh, India [7,8].

Figure 1: Relation between Remote Sensing and GIS

The present study focuses on examining LULC modification and monitoring the urban area of Shimla Tehsil, Himachal Pradesh. Shimla Tehsil faces a variety of issues such as fast growth of the urban area, rapid reduction of forest area, and steep slopes. The study has been conducted for a better understanding of LULC modification. The main objectives of this study are to select the study area for LULC change detection, to analyse LULC change using datasets from the previous two decades, and to study the present status of forest land, urban land, and agricultural land using satellite data [9,10].

2. Description of Study Area

Shimla is situated between 30°59'3" to 31°14'10" North latitude and 76°58'19" to 77°19'21" East longitude, covering an area of 418 sq. km, as shown in Figure 2. It has an average altitude of 2206 metres above mean sea level. Shimla is divided into Shimla rural and Shimla urban. The literacy rate of Shimla stands at 83.64%, which is higher than the state rate. Three rivers drain through Shimla, namely the Sutlej, Pabbar and Giri. Most of the area in Shimla comes under agricultural land; the main crop grown in Shimla is apple, whose season generally runs from August to October. The soil is sandy in the plains and nutrient-poor in the mountainous areas. The main trees in the forest are pine, deodar and oak. The climate is moderate in the plains and cooler on the hilltops. The annual precipitation is 999.4 mm, of which 75% occurs during the monsoon period from July to September. The temperature varies from 0°C in winter to 40°C in summer [11,12]; see Figure 2.

Figure 2: Location of the study area

3. Materials and Methods

Topographical maps were prepared at a scale of 1:50,000. The data were downloaded from the United States Geological Survey (USGS) website for eight different years (2000, 2003, 2006, 2009, 2012, 2015, 2018, 2021). A maximum likelihood classifier (supervised classification) was used for the change detection analysis. For the respective periods of downloaded data, LANDSAT 4-5, LANDSAT 7 ETM+, and LANDSAT 8 (OLI) scenes were selected for the years 2000, 2003, 2006, 2009, 2012, 2015, 2018 and 2021, as given in Table 1. To determine the changes in LULC classes over the eight epochs, a post-classification comparison approach to change detection was used [13]: the classified maps were compared using the post-classification technique. The methodology adopted for analysing the LULC pattern of the study area is shown in the flow chart in Figure 3.
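To make the classification step concrete, the short sketch below shows one way to run a Gaussian maximum-likelihood style supervised classification of a stacked Landsat scene in Python. It is only an illustrative sketch, not the exact pipeline used in this study: the file names, class list and training arrays are hypothetical placeholders, and scikit-learn's QuadraticDiscriminantAnalysis is used because fitting one Gaussian per class with its own covariance reproduces the classical maximum-likelihood decision rule.

```python
# Sketch of a Gaussian maximum-likelihood style LULC classification of a
# stacked Landsat image. File names, class labels and training samples are
# hypothetical placeholders, not the exact data used in the study.
import numpy as np
import rasterio
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

CLASSES = ["thick vegetation", "moderate vegetation", "urban", "agriculture", "barren"]

# 1. Read the multi-band image (bands, rows, cols) and flatten to (pixels, bands).
with rasterio.open("shimla_landsat8_2021_stack.tif") as src:   # hypothetical path
    stack = src.read().astype(np.float32)
n_bands, rows, cols = stack.shape
pixels = stack.reshape(n_bands, -1).T

# 2. Training data: spectral vectors sampled from digitised reference polygons
#    for each class (placeholder arrays stand in for real samples).
X_train = np.load("training_spectra.npy")   # shape (n_samples, n_bands), hypothetical
y_train = np.load("training_labels.npy")    # integer class ids 0..4, hypothetical

# 3. One Gaussian per class with its own covariance matrix is the classical
#    maximum-likelihood classifier; QDA implements exactly this decision rule.
clf = QuadraticDiscriminantAnalysis(store_covariance=True)
clf.fit(X_train, y_train)

# 4. Classify every pixel and report per-class shares of the scene.
labels = clf.predict(pixels).reshape(rows, cols)
ids, counts = np.unique(labels, return_counts=True)
for cid, n in zip(ids, counts):
    print(f"{CLASSES[cid]:20s} {100.0 * n / labels.size:6.2f} % of pixels")
```

Repeating this step for each acquisition year yields the series of classified maps that the post-classification comparison then evaluates.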
4. Results and Discussions

The downloaded USGS data for two decades (2000, 2003, 2006, 2009, 2012, 2015, 2018, and 2021) were classified and compared for LULC analysis. Spectral satellite imageries are illustrated in Figures 4, 6, 8, 10, 12, 14, and 16, whereas Figures 5, 7, 9, 11, 13, 15, and 17 show the trend of change in the LULC categories for the respective years; Figure 18 summarizes the overall changes in the form of a bar chart. Thickly vegetated land reduced from 95.52% to 20.22% between 2000 and 2021, whereas moderately vegetated land reduced from 60.25% to 10.50%. A major change has been observed in urban land, which increased from 75.65% to 180.50%, while agricultural land also increased, from 70.63% to 190.25%. Barren land increased from 65.25% to 150.23%. LULCC mapping provides insight for working plans that control natural resources and address environmental issues. Unplanned urban land (UL) may increase land surface temperature, decrease natural water purification, increase air pollution, and so on. Remote sensing and GIS prove to be effective tools in township planning, watershed design, and related applications. This study should therefore help in better understanding the growth pattern of the various LULC classes and suggests that planners design a proper management strategy for economic and sustainable development.
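The post-classification comparison used for the change analysis reduces to cross-tabulating two co-registered classified maps; a minimal sketch is shown below. The input label arrays and file names are hypothetical placeholders (for example, outputs of the classifier sketch above), and the 30 m Landsat pixel size is an assumption used only to convert pixel counts to areas.

```python
# Post-classification change detection: cross-tabulate two classified maps
# (e.g. 2000 vs 2021) to obtain a from-to change matrix and per-class areas.
# Input file names and the 30 m pixel size are assumptions, not study data.
import numpy as np

CLASSES = ["thick vegetation", "moderate vegetation", "urban", "agriculture", "barren"]
PIXEL_AREA_KM2 = (30 * 30) / 1e6    # one Landsat pixel, 30 m x 30 m

labels_2000 = np.load("labels_2000.npy")   # hypothetical classified map
labels_2021 = np.load("labels_2021.npy")   # hypothetical classified map
assert labels_2000.shape == labels_2021.shape, "maps must be co-registered"

n = len(CLASSES)
# change_matrix[i, j] = pixels that were class i in 2000 and class j in 2021
change_matrix = np.zeros((n, n), dtype=np.int64)
np.add.at(change_matrix, (labels_2000.ravel(), labels_2021.ravel()), 1)

print("Area per class (km^2):  2000 -> 2021")
for i, name in enumerate(CLASSES):
    a2000 = change_matrix[i, :].sum() * PIXEL_AREA_KM2
    a2021 = change_matrix[:, i].sum() * PIXEL_AREA_KM2
    print(f"{name:20s} {a2000:8.2f} -> {a2021:8.2f}")
```

The off-diagonal entries of the change matrix identify which conversions (for example, vegetated land to urban land) dominate the observed trend.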
Systemic Inflammatory Response during Laparotomy Background. The aim of this study was to analyze the influence of laparotomy on the systemic inflammatory response in human patients suffering from secondary peritonitis. Study Design. A prospective study investigating the levels of white blood cells, C-reactive protein, platelets, interleukin-six, and tumor necrosis factor-alpha during laparotomy in five patients who suffered from secondary peritonitis. Six venous blood samples were collected perioperatively from each patient. The data were summarized by descriptive statistics and presented in a box plot. The hypothesis was that laparotomy increases the systemic inflammatory response, as has been described in animal models in previous studies. Results. The median age of the patients in this study was 84 years, the male to female ratio was 2 : 3, and the mortality rate was 80%. The most common cause of generalized peritonitis was ischemia of the colon. Analysis of the data showed no significant changes in the level of plasma inflammatory mediators during the surgical procedure, except for the platelet count which showed a significant decrease (P = 0.001). Conclusions. In contrast to experience with animal models, laparotomy in human patients with secondary peritonitis did not significantly increase the systemic inflammatory response. Furthermore, it contributed in significantly decreasing some of the systemic inflammatory mediators. Introduction Acute peritonitis may be classified as primary (spontaneous), where an infection has been raised de novo within the peritoneum, or secondary, where the inflammation involving the peritoneum is the result of an identifiable primary process. Secondary peritonitis is one of the most common indications for urgent abdominal surgery. Despite great advancements in diagnostic tools, surgical equipment, and technique, management of patients with severe secondary peritonitis remains a surgical challenge, with major morbidity and a mortality rate over 50% in most series [1]. The systemic inflammatory response represented by white blood cells count, C-reactive protein levels, platelets count, interleukin-six, and tumor necrosis factor-alpha level was found to be increased during surgical management of peritonitis in an animal model, and when comparing the laparotomy to the laparoscopy approach, the inflammatory response was significantly higher in the first [2]. To the best of our knowledge all the data in the English literature concerning systemic inflammatory response during abdominal surgery for secondary peritonitis are derived from animal models. The objective of our study was to define the characteristics of the systemic inflammatory system during urgent laparotomy due to secondary peritonitis in humans. The levels of white blood cells, C-reactive protein, platelets, interleukin-six, and tumor necrosis factor-alpha were used to assess the systemic inflammatory response. Sample Collection and Analysis. Six venous blood samples were collected from each patient throughout the period before, during, and after surgery. They are indicated here as T0 to T5: one hour before the anesthesia (T0), immediately after the anesthetic induction (T1), immediately after the abdominal wall incision (T2), one hour after the abdominal incision (T3), immediately after the abdominal wall closure (T4), and 48 hours after the abdominal wall closure (T5) ( Table 1). WBC and PLT counting was done using a differential blood smear. CRP amount was determined using Latex tests. 
The plasma was separated from the blood samples using centrifugation at 3000 g at 4 °C for 10 minutes immediately after withdrawal and stored at −20 °C. TNF-α and IL-6 serum levels were determined using the commercially available IMMULITE 1000 Immunoassay System (Siemens, Siemens Medical Solutions Diagnostics, USA).

Statistics. The data were summarized by descriptive statistics and presented in box plots. Differences between measurements at two time points were analyzed by the Wilcoxon Signed Rank Test, and differences between several related samples were tested using the Friedman Test, if appropriate. P < 0.05 was considered statistically significant.

Results. Five patients were included in our study and their clinical details are summarized in Table 2. The median age of the patients in this study was 84 years (range: 22-93), the male to female ratio was 2 : 3, and the mortality rate was 80%. The most common cause of generalized peritonitis was ischemia of the colon (three patients: in two cases it was induced by primary vascular insufficiency, while in the third case distal colon obstruction with extreme proximal bowel dilatation resulted in full thickness ischemia and necrosis). The fourth patient had small bowel ischemia and necrosis due to strangulation within an internal hernia, and the fifth patient had both small and large bowel perforation due to a motorcycle accident.

Plasma Cytokine Levels. In comparison with its levels one hour before anesthesia, plasma levels of TNF-α decreased insignificantly during surgery until immediately after the abdominal wall closure. Median values were 10.4 and 9.1 pg/mL (P = 0.273), respectively. At 48 hours after abdominal wall closure, a mild increase in TNF-α levels was noticed, with a median value of 14.2 pg/mL (P = 0.225) (Figure 1). In contrast, IL-6 increased during surgery from 362 pg/mL to 540 pg/mL (P = 0.5), ending with 400 pg/mL 48 hours after abdominal wall closure (P = 0.686) (Figure 2). The patterns of value distribution concerning TNF-α and IL-6 were not statistically different (P = 0.449 and P = 0.375, resp.).

Acute Phase Reactants. CRP levels decreased insignificantly during surgery until immediately after abdominal wall closure. Median values were 15.27 and 7.25 mg%, respectively (P = 0.893). Then, the CRP levels started to increase, reaching 24.7 mg% at 48 hours after abdominal wall closure (P = 0.08). No statistically significant trend was shown in the CRP level dynamics (P = 0.262) (Figure 4).

Discussion. Secondary peritonitis occurs most often after disruption of the integrity of the gastrointestinal tract. Despite great improvement in standards of diagnosis, antimicrobial therapy, and intensive care support, surgical treatment remains fundamental in the management of secondary peritonitis [3]. The operative approach is based on three basic principles: elimination of the source of the infection, reduction of peritoneal cavity bacterial contamination, and prevention of persistent or recurrent intraabdominal recolonization [4]. Despite optimal treatment, this life-threatening condition remains associated with high morbidity and mortality [5]. When generalized fecal peritonitis exists, the mortality rate varies, ranging from 50% to 100% [6]. Our mortality rate was 80%. The only survivor, out of the five patients, was a young healthy male, who had both small and large bowel perforation due to a motorcycle accident with mild peritonitis.
Peritonitis is defined as inflammation of the peritoneal cavity, where the peritoneal fluids increase in volume with the passage of a transudate rich in polymorphonuclear cells and fibrin [7]. Microbial contamination of the peritoneal cavity initiates the innate immune response, and the balance between pro- and anti-inflammatory cytokines is thought to be related to the severity and outcome of peritonitis [8,9]. The suggested mechanism of morbidity and mortality due to secondary peritonitis is a vicious circle, started by intraperitoneal inflammatory and toxic mediators. These induce vasodilatation and enhance the permeability of the visceral and parietal capillary vessels, thus facilitating the passage of inflammatory mediators and toxins into the systemic circulation, leading to the multiple organ failure syndrome (MOFS) and often ending with death. Intraperitoneal cytokine measurements using an animal model of peritonitis have been suggested as early markers for adverse outcomes in patients with secondary peritonitis [10]. Surgical procedures in animal models with secondary peritonitis increased the systemic inflammatory response, especially when laparotomy was performed [2]. The theoretical explanation is that opening the abdominal layers, and thereby damaging the continuity of the histological tissue, creates a port of entry for inflammatory mediators and microorganisms from the peritoneal cavity into the systemic blood circulation, which accelerates the vicious circle described above. Analysis of our data showed no significant changes in the levels of plasma inflammatory mediators during surgical laparotomy, except for the platelet count. Despite the decreasing trend of WBC, CRP, and TNF-α levels, the changes were statistically insignificant (P = 0.438, P = 0.262, and P = 0.449, resp.). On the other hand, the IL-6 levels showed an increasing trend during surgery, but still without statistical significance (P = 0.375). The only significant change in the systemic inflammatory response identified in our study was attributed to the PLT count, which decreased during surgery (P = 0.001). According to these data, and in contrast to experience with animal models, laparotomy in humans with secondary peritonitis did not significantly increase the systemic inflammatory response. Furthermore, it contributed to significantly decreasing some of the systemic inflammatory mediators. Our study had some limitations which must be considered: a small sample size due to logistic difficulties, no adjustment for the degree of peritonitis (mild, moderate, and severe), and no adjustment for the type of peritonitis (purulent and fecal). In conclusion, the systemic inflammatory response did not significantly change during laparotomy in humans suffering from secondary peritonitis. In order to obtain more solid and statistically significant data, a large, multicenter, and adjusted study is needed.
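For readers who wish to reproduce the kind of paired, non-parametric comparisons used in this study (Wilcoxon signed-rank between two time points, Friedman across all related samples), the sketch below shows the corresponding SciPy calls. The values are invented placeholders laid out like the study design (five patients, time points T0-T5); this is not the authors' analysis code and the numbers are not the study data.

```python
# Paired non-parametric tests for repeated measurements at T0..T5, mirroring
# the statistical approach described above. The values are invented
# placeholders with the same shape as the study design (5 patients x 6 times).
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(42)
# rows = patients, columns = T0..T5 (e.g. platelet counts, 10^3/uL)
plt_counts = rng.normal(loc=250, scale=40, size=(5, 6)).round(1)
plt_counts[:, 3:] -= 60   # artificial intra-operative drop, for illustration only

# Wilcoxon signed-rank test between two time points (here T0 vs T4).
stat, p_pair = wilcoxon(plt_counts[:, 0], plt_counts[:, 4])
print(f"Wilcoxon T0 vs T4: statistic={stat:.2f}, P={p_pair:.3f}")

# Friedman test across all six related samples (one column per time point).
chi2, p_all = friedmanchisquare(*[plt_counts[:, t] for t in range(6)])
print(f"Friedman T0..T5:  chi2={chi2:.2f}, P={p_all:.3f}")

alpha = 0.05
print("significant at P < 0.05" if p_all < alpha else "not significant at P < 0.05")
```

With only five patients, exact small-sample P values are reported and, as in the study, a non-significant Friedman test argues against any systematic trend across the perioperative time points.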
Preeclampsia and COVID-19: the Role of Inflammasome Activation Purpose of Review It is well established that controlled immune activation and balance is critical for women’s reproductive health and successful pregnancy outcomes. Research in recent decades in both clinical and animal studies has demonstrated that aberrant immune activation and inflammation play a role in the development and progression of women’s reproductive health and pregnancy-related disorders. Inflammasomes are multi-protein cytoplasmic complexes that mediate immune activation. In this review, we summarize current knowledge on the role of inflammasome activation in pregnancy-related disorders. Recent Findings Increased activation of inflammasome is associated with multiple women’s health reproductive disorders and pregnancy-associated disorders, including preeclampsia (PreE). Inflammasome activation is also associated with the novel coronavirus disease 2019 (COVID-19) disease caused by the SARS-Cov-2 virus. We and others have observed a positive association between increased PreE incidences with the onset of the COVID-19 pandemic. Here, we present our recent data indicating increased inflammasome activation, represented by caspase-1 activity, in women with COVID-19 and PreE compared to normotensive pregnant women COVID-19. Summary The role of inflammation in pregnancy-related disorders is an area of intense research interest. With the onset of the COVID-19 pandemic and the associated increase in PreE observed clinically, there is a greater need to identify mechanisms of pathophysiology and targets to treat this maternal disorder. Inflammasome activation is associated with PreE and COVID-19 infection and may hold therapeutic potential to improve outcomes associated with PreE and curb the morbidity attributed to PreE. Introduction Inflammation in the female reproductive system contributes to various disorders of pregnancy and reproductive health [1][2][3][4]. Inflammasomes are cytoplasmic, multi-protein complexes that mediate innate and adaptive immune responses and inflammation [5,6]. As members of the innate immune system, inflammasomes recognize patterns indicative of infection or changes in cellular homeostasis and initiate responses by the immune system to eliminate pathogens and repair tissue damage to restore homeostasis. Recent studies have identified associations between various women's reproductive disorders, particularly in pregnancy, and inflammasome expression and activation [2,[7][8][9][10][11]. More recently, the discovery of inflammasomes has peaked research interest in investigating the roles of this novel immune modulating complex in these reproductive and pregnancy-associated disorders and complications. This review will summarize 1 3 the recent studies investigating the role of inflammasome activation in reproductive disorders with a special emphasis on preeclampsia (PreE). The advances in our understanding of how inflammasomes contribute to these women's health-related issues may provide novel targets and therapeutic strategies to improve women's reproductive health and pregnancy-related outcomes. Inflammasomes and the Immune System Normal pregnancy requires a controlled state of inflammation in order for proper placentation and vascular remodeling to occur during the initial stages of pregnancy [12]. These immune changes promote tolerance of the semi-allogenic fetus, while also still protecting the mother from external infectious diseases. 
In pregnancy-related disorders and complications, the immune changes that occur are associated with chronic immune activation and inflammation that contribute to pathological changes in reproductive tissue and pathophysiology in women and their offspring. Both clinical and preclinical studies have identified the roles of various factors of the immune system including innate and adaptive immune cells and cytokines in contributing to these disorders. Inflammasomes are formed by a complex of proteins consisting of a sensor protein, an adaptor protein, and caspase-1. The sensor proteins are named based on their structural domain: (1) the nucleotide-binding domain and leucine-rich repeat containing proteins (NLRs); (2) the absence in melanoma 2 (AIM2)-like receptors (ALRs); and (3) the pyrin receptor. The adaptor protein, apoptosis-associated speck-like protein containing a caspase activation and recruitment domain (CARD), also known as Pycard (ASC) contains the PYD (pyrin domain) and CARD (caspase activation and recruitment domain) which serves as the connector protein to bring the sensor protein and caspase-1 together in the inflammasome complex [5, 6, 13•]. The inflammasomes that have been described to date include NLRP1, NLRP3, NLRC4, NLRP7, NLRP12, AIM2, and PYD. Other sensor proteins that have been proposed include human NLRP2, NLRP7, and IF116, and murine NLRP6 and NLRP9b in mouse [13 •, 14]. These inflammasomes have recently been reviewed elsewhere [13•]. Canonical activation of inflammasomes includes the recruitment of the sensor protein, ASC, and pro-caspase 1 into a single multi-protein complex within the cytoplasm. After the inflammasome is assembled, caspase-1 is autocleaved into enzymatically active caspase-1 p20 and p10 subunits [13•, 15]. The active caspase then goes on to cleave inactive forms of the pleiotropic cytokines interleukin-1 beta (IL-1β) and IL-18 into their active forms to be secreted from cells. More recently, it has been discovered that caspase-1 also cleaves a protein, gasdermin D, which initiates pyroptosis, inflammatory cell death that is induced following inflammasome activation [16]. Inflammasomes and Preeclampsia PreE is a disease defined by new onset hypertension after 20 weeks of gestation in combination with organ dysfunction and is a major contributor to maternal and fetal morbidity worldwide [17,18]. The exact mechanisms underlying the development of PreE are not known. However, endothelial dysfunction and a pro-inflammatory state resulting from placental ischemia caused by shallow trophoblast invasion and insufficient uterine spiral artery remodeling is a wellaccepted mechanism [19,20]. Recent studies are beginning to report elevations in inflammasomes or related mediators among women with PreE or women who are at risk for developing PreE [21][22][23]. As much of the data and studies have focused primarily on NLRP3 inflammasomes, the remainder of this review will also be composed of mostly NLRP3 studies. NLRP3 and PreE NLRP3 inflammasomes are intracellular protein complexes associated with the innate immune system. Overactivation of NLRP3 inflammasomes contributes to a variety of disorders such as atherosclerosis, diabetes, and obesity-induced insulin resistance [24]. NLRP3 inflammasomes can be activated via transcriptional dependent and independent pathways through endogenous or exogenous damage-associated molecular patterns (DAMPs) [25•] (Fig. 1). 
Regardless of the activation pathway, women with PreE have evidence of a number of inflammatory events that can activate each component of the NLRP3 inflammasome, which may provide some foundation as to why immune system dysfunction is prevalent among women with PreE [26]. There have been numerous reports of elevations in placental and systemic NF-κB among women with PreE compared to normotensive women [27]. From these studies one can theorize that with elevated levels of NF-κB, there is also an increase in the translation of NLRP3. Women with PreE have increased levels of circulating cellular debris, protein aggregates, hypoxia factors and more soluble factors such as uric acid and cholesterol; all of which have been reported to activate NLRP3 at a higher rate in comparison to other inflammasomes [28,29]. In regard to women with PreE, studies have reported that cholesterol and uric acid can cause placental and decidual inflammation by activating NLRP3 [28,[30][31][32]. Along similar lines, Weel et al. reported that PreE women had significantly higher placental expression of NLRP3, caspase-1, and IL-1β compared to normotensive women [21]. In addition to uterine and placental tissues expressing NLRP3, a recent study by Ozeki et al. reported that human umbilical vein endothelial cells (HUVECs) isolated directly from women with PreE had evidence of a greater NLRP3 response relative to HUVECs isolated from normotensive women [33], suggesting that circulating factors, in particular S100A9, directly stimulate NLRP3 activity in endothelial cells. HELLP (hemolysis elevated liver enzyme low platelet) syndrome is a severe obstetric complication that affects 15-20% of women with PreE and 1-2% of women without PreE [34]. Unlike PreE, the diagnosis of HELLP requires evidence of significant organ, thrombosis, and coagulopathy via a series of laboratory diagnoses. Even though at the time of this review, there are not any published reports of inflammasome activity among women with HELLP syndrome, there is also evidence to support a role for NLRP3 inflammasome activation in the pathogenesis of this disorder. Heme, which is increased in women with HELLP syndrome, has been found to stimulate activation of HUVEC NLRP3 inflammasomes [35]. Women with HELLP also have a a high degree of apoptosis which is mediated though the death receptor Fas ligand [36]. Activation of this pathway contributes to DAMPs which lead to activation of the NLRP3 inflammasome and eventually pyroptosis [37]. Preclinical Studies Similar to what has been reported among pregnant women, NLRP3 inflammasomes are also activated in rodents due to numerous factors. A variety of PreE animal models have been used to provide more insight into immunological mechanisms associated with PreE [38][39][40]. A study by Zeng et al. which utilized a PreE rat model reported that PreE rats had higher mRNA expression of NLRP3 in uterine tissues compared to normal pregnant rats [41]. This group also found that both uterine protein expression of caspase-1 along with mRNA expression of caspase-1 and IL-1β were increased, leading them to conclude that the NLRP3 inflammasome has a key role in regulating uterine inflammation. Similarly IL-17, which we have previously reported to have a key role in mediating the hypertensive and immune response in PreE was recently reported to also regulate NLRP3 inflammasome activation [42,43]. Chang et al. 
reported that, in a PreE mouse model, inhibition of fatty acid binding protein 4 prevents the Treg (T regulatory)/Th17 (T helper) imbalance, IL-17A production, and NLRP3 inflammasome activation [44].

Inflammasome, Spontaneous Miscarriage, and Preterm Labor

In addition to their roles in pregnancy disorders such as PreE and HELLP syndrome, inflammasomes have also been linked to recurrent pregnancy loss as well as missed abortions and spontaneous preterm labor. It is estimated that 21% of preterm births from women in the USA occur in women with PreE, a risk factor for spontaneous preterm labor [45,46]. Several studies have suggested that inflammasome activation is an important component driving spontaneous preterm labor. Elevated levels of pro-inflammatory mediators that are downstream of inflammasome activation, such as HMGB1, caspase-1, IL-1β, and IL-18, were found in women that underwent spontaneous preterm labor [47][48][49]. Direct evidence of increased NLRP1, NLRP3, and NLRC4 expression has been found in chorioamniotic membranes from women that experienced acute histologic chorioamnionitis in spontaneous preterm labor [47]. The authors of those studies proposed that the inflammation associated with spontaneous preterm labor is driven by inflammasome activation, triggering production of IL-1β and IL-18 [47,48]. These findings suggest that inflammasome-driven inflammation in the chorioamniotic membrane can induce spontaneous preterm labor. Given the amount of immunological crosstalk between reproductive disorders, the role of inflammasomes provides yet another avenue of potential linkage among women affected by reproductive complications.

Coronavirus Disease 2019 (COVID-19), Inflammasome Activation and PreE

Inflammasomes have a role in regulating inflammation, which makes them crucial in inflammatory diseases or diseases where inflammation plays a role [50]. The relevance of this becomes evident when focusing on COVID-19, a disease commonly associated with hyperinflammation. Among the first clues for inflammasome activation in patients with COVID-19 was the noted cytokine storm in patients with severe COVID-19, which is composed of cytokines often activated via NF-κB and IL-1β pathways [51••]. As data began emerging, there was also a direct correlation between high circulating IL-18 levels, COVID-19 severity, and increased mortality [52]. The specific mechanism by which SARS-CoV-2 activates inflammasomes, and NLRP3 in particular, is still unknown. Previous studies examining the SARS-CoV-2 virus have indicated that NLRP3 is assembled and activated in response to changes in plasma membrane permeability to calcium and potassium ions and to increases in mitochondrial reactive oxygen species [53,54]. Pan et al. recently reported that the N protein of SARS-CoV-2 selectively interacts with NLRP3 and not with NLRP1, NLRC4, or AIM2 proteins [55]. This group also reported that the N protein promotes the assembly of the NLRP3 inflammasome through promotion of ASC oligomerization. The N protein is responsible for packaging the viral genome into a nucleocapsid located within the phospholipid bilayer, which is covered by spike proteins [56]. Due to COVID-19, there has been an upsurge of research into this virus and into the N protein. A recent study [62] has suggested that the N protein itself can regulate immune function toward either an immunosuppressive state or an overactive state. Along these same lines, it has been suggested that NLRP3 inflammasome-mediated pyroptosis and the cytokine storm only occur in individuals with an impaired immune system, suggestive of an initial low-grade inflammation prior to the onset of infection [62].
This theory has been used to explain some of the differences in symptoms (asymptomatic vs. symptomatic) that have been reported between patients with COVID-19 [58]. The combination of these studies, along with the fact that COVID-19 is associated with an increased risk of PreE or HELLP syndrome (both published studies [59•] and observations from our own patient population), led us to question whether there was evidence of NLRP3 activation among women with COVID-19 and PreE. To determine whether there was evidence of inflammasome activation among pregnant women with COVID-19, we measured caspase-1 activity in the cell culture media. Previous studies have reported that circulating factors from women with PreE stimulate HUVECs in vitro to produce a variety of vasoconstrictive and inflammatory markers [60]. We conducted this assay to see (1) if pregnant women diagnosed with COVID-19 at term had evidence of inflammasome activation, (2) if PreE in the presence of a COVID-19 diagnosis at term was associated with inflammasome activation, and (3) if there were differences between symptomatic vs. asymptomatic patients. As shown in Fig. 2, serum from women with PreE induced more caspase-1 activity in cultured HUVECs relative to normotensive women. There was also a difference in caspase-1 activity between symptomatic and asymptomatic women regardless of their PreE diagnosis. Specifically, inflammasome activation was increased in women who were symptomatic for COVID-19. Furthermore, only COVID-19 positive symptomatic women diagnosed with PreE had caspase-1 levels higher than PreE women without COVID-19 (mean caspase-1 activity is denoted by the black line). As caspase-1 is activated by inflammasome complex formation via both the non-canonical and canonical pathways, it serves as an essential marker of inflammasome function and activity [61].

Fig. 2 Caspase-1 activity in endothelial cells exposed to sera from COVID pregnant patients. Serum was collected from women consented and enrolled in an IRB-approved study and placed over semiconfluent HUVECs for 24 h, as previously described [60]. Following 24 h of exposure to experimental media (50% Dulbecco modified Eagle's medium (Invitrogen), 50% medium 199 (Invitrogen), 1% antimycotic-antibiotic solution (Invitrogen), and 10% patient serum), basal media (media without any serum) were placed on cells for 24 h before a sample of the media was collected for caspase-1 evaluation. All cell culture experiments were performed in duplicate, and media samples were assayed in duplicate for caspase-1 activity via the Caspase-Glo 1 Inflammasome assay (Promega, Madison, WI). Luminescence was recorded after 90 min. The number of patients whose serum was evaluated is represented in white within the respective bar on the bar graph. All the data are represented as mean ± standard error of the mean. Gestational age at serum collection was 39.1 ± 0.27 weeks (range 37.1-40 weeks) for normotensive women, 34.58 ± 0.99 weeks (range 30.2-38.4 weeks) for COVID-positive preeclamptic women, and 34.8 ± 2.1 weeks (range 30.6-38.4 weeks) for non-COVID-positive preeclamptic women.

Inflammasome Targeted Therapy

Not only has NLRP3 activation been shown to increase inflammation, but several in vivo studies have also shown that inhibition of NLRP3 activation can reduce the abnormal inflammatory cascade seen in PreE [44,[65][66][67][68]].
Currently, there are several potential therapeutic approaches for inhibiting the inflammasome or targeting pro-inflammatory cytokines. However, as many of these therapies pose a potential danger to the growing fetus, there is often no direct treatment that can be safely administered during pregnancy. There is also a large body of evidence indicating a decrease in the maternal immune response shortly after delivery of the placenta, which decreases the need for pharmaceutical therapy. However, there are a few therapies administered during pregnancy that may have an indirect effect on inflammasome activity (Table 1).

Conclusions

The pathophysiology of PreE is still unclear; however, there is a clear consensus on the involvement of the immune system in the disease process. Understanding the role of the inflammasome(s), and how potential inflammasome inhibition could mitigate other immune pathways and pyroptosis, warrants further investigation. New data demonstrating a link between inflammasome activation and COVID-19, and the increased risk for women with COVID-19 to develop PreE, further highlight the need to better understand the role of inflammasomes in PreE, to possibly identify new and novel therapeutic interventions and thereby potentially decrease maternal and fetal mortality and morbidity.

Table 1 Therapies administered during pregnancy that may have an indirect effect on inflammasome activity
- Aspirin: prevented the nuclear translocation of NF-κB in monocytes exposed to serum from women with PreE [62].
- Magnesium sulfate (MgSO4), administered to prevent seizures: shown to inhibit NLRP3 inflammasome activation, IL-1β upregulation, and pyroptosis in a human monocytic cell line [63]; placentas from PreE women administered MgSO4 secreted less IL-1β compared to PreE women not administered MgSO4 [64].
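As a small illustration of how duplicate luminescence readings such as those behind Fig. 2 can be reduced to per-group summaries (mean ± SEM) and compared between groups, a hedged sketch follows. The group names mirror the figure, but the values, sample sizes and the choice of a Mann-Whitney U comparison are placeholders and assumptions rather than the authors' actual data or analysis.

```python
# Reduce per-well caspase-1 luminescence readings (assayed in duplicate) to
# mean +/- SEM per clinical group and compare two groups. All numbers are
# invented placeholders; the statistical test is an assumption, not taken
# from the paper.
import numpy as np
from scipy.stats import sem, mannwhitneyu

# {group: per-patient values, each already averaged over duplicate wells}
groups = {
    "normotensive":        [1.00, 1.10, 0.95, 1.05],
    "COVID-positive PreE": [1.60, 1.85, 1.70, 1.95, 1.55],
    "non-COVID PreE":      [1.30, 1.45, 1.25, 1.50],
}

for name, values in groups.items():
    v = np.asarray(values, dtype=float)
    print(f"{name:22s} n={v.size}  mean={v.mean():.2f}  SEM={sem(v):.2f}")

# Example two-group comparison (non-parametric, suitable for small n).
stat, p = mannwhitneyu(groups["COVID-positive PreE"], groups["normotensive"])
print(f"Mann-Whitney U, COVID-positive PreE vs normotensive: U={stat:.1f}, P={p:.3f}")
```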
Structural properties of molten silicates from ab initio molecular-dynamics simulations: comparison between CaO-Al$_2$O$_3$-SiO$_2$ and SiO$_2$ We present the results of first-principles molecular-dynamics simulations of molten silicates, based on the density functional formalism. In particular, the structural properties of a calcium aluminosilicate $ [$ CaO-Al$_2$O$_3$-SiO$_2$ $ ]$ melt are compared to those of a silica melt. The local structures of the two melts are in good agreement with the experimental understanding of these systems. In the calcium aluminosilicate melt, the number of non-bridging oxygens found is in excess of the number obtained from a simple stoichiometric prediction. In addition, the aluminum avoidance principle, which states that links between AlO$_4$ tetrahedra are absent or rare, is found to be violated. Defects such as 2-fold rings and 5-fold coordinated silicon atoms are found in comparable proportions in both liquids. However, in the calcium aluminosilicate melt, a larger proportion of oxygen atoms are 3-fold coordinated. In addition, 5-fold coordinated aluminum atoms are observed. Finally evidence of creation and anihilation of non-bridging oxygens is observed, with these oxygens being mostly connected to Si tetrahedra. INTRODUCTION Silicate melts are the precursors to industrially and technologically important materials including ceramics and nuclear and industrial waste confinement glasses. They also occur naturally in the form of geologic magmas. Despite their technological relevance and geophysical importance, however, their microscopic characteristics are not well understood [1]. The primary reason for this is that the structure of the melt is far more difficult to characterize than that of crystalline silicates, necessitating a combination of indirect methods. In recent years, theoretical studies based on classical molecular-dynamics (MD) simulations have been able to treat binary or ternary silicate systems with reasonable success. However, accurate and reliable descriptions of systems containing more than three different atomic species by classical MD methods has proved far more difficult, although several studies have been able to predict general trends that are consistent with experimental results [17][18][19][20][21]. This problem is particularly acute when cations such as Na + or Ca 2+ are introduced in silicate glasses, as it is generally difficult to find interatomic potentials that can accurately treat both the covalent and the ionic nature of the interactions and are also capable of describing the bond breaking and forming events that can occur in such chemically diverse environments. These difficulties can be circumvented by employing the ab initio molecular dynamics method, in which internuclear interactions are calculated "on the fly" from electronic structure calculations. Indeed, recent ab initio MD studies of silica glass and of its melt have demonstrated the ability of this approach to describe the local structure and dynamics of such systems with reasonable accuracy. These studies have also highlighted the advantages of employing a method which, additionally, allows direct access to the electronic properties of the system [22,23]. It is well known that many important macroscopic properties of silicate melts, such as the viscosity, the glass transition temperature T g , or the resistance to chemical change, for example via corrosion, are dramatically altered by changes in the composition [1,14,18]. 
For instance, introduction of Na + ions into a SiO 2 melt causes the viscosity at a given T g /T ratio to decrease by several orders of magnitude from that of the pure SiO 2 [1]. The presence of cations such as Na + , K + , Ca 2+ or Mg 2+ , known as network modifier cations, induces such changes by breaking some fraction of the Si-O bonds thereby creating non-bridging oxygens and disrupting the tetrahedral silicate network. Non-bridging oxygens (NBO) are oxygen atoms which do not connect two tetrahedral cations or network-forming atoms, such as Si. Non-bridging oxygens provide relatively weak connections between the network forming atoms and the network modifier cations. However, when other network-forming atoms such as aluminum are introduced into the system, there is a gradual conversion of non-bridging oxygens into bridging oxygens. This arises from the fact that most of the Al atoms are tetrahedrally coordinated (AlO − 4 ), and the resulting negative charge compensates the positive network modifier cation charge. In such cases, non-bridging oxygens can be created and the network broken only if there is an excess of network modifier cations, and it is for this reason that the viscosity of ternary liquids, such as CaSiO 3 or Na 2 SiO 3 , progressively increases as the network modifier oxide (CaO or Na 2 O) is replaced by Al 2 O 3 . The conventional explanation for the increase in viscosity is the transformation of non-bridging oxygens into bridging oxygens as the concentration of Al 2 O 3 is increased. Generally, the number of NBO can be predicted based on a knowledge of the composition by simple stoichiometric arguments. However, it has recently been shown that such simple stoichiometric predictions are not exactly fulfilled in calcium aluminosilicate (CAS) glasses and that a small proportion of NBO can be present even if all the modifier cations should, in principle, exactly compensate the AlO − 4 tetrahedra [12,13]. In this paper, the results of an ab initio molecular-dynamics simulation of a calcium aluminosilicate (CAS) melt are presented and its microscopic characteristics are compared to those of a pure silica melt. To our knowledge, these are the first fully ab initio MD studies of the CAS melt. We have chosen a CaO-Al 2 O 3 -SiO 2 system with a composition as close as possible to the basic composition of the confinement matrix for the nuclear wastes. This system also possesses a local structure close to those of some rapid cooling magmas. The chosen composition contains more Ca 2+ ions than are needed to compensate the AlO − 4 tetrahedra, thus leading to the formation of non-bridging oxygens. By carrying out a comparative study of the CAS and pure silica melts, it is possible to describe the detailed modifications in the network due to the presence of Al and Ca 2+ in the system. This paper is organized as follows: In Sec.2, the ab initio methodology is briefly described and the details of the particular simulations performed here are given. In Sec.3, main results of the comparative study are presented, including structural properties of the CAS and silica systems. These results are discussed in Sec.4 and conclusions are given in Sec.5. SIMULATION DETAILS Equilibrated configurations of the two liquids were generated by classical molecular dynamics runs, the details of which are described in Secs. 2 A and 2 B below, and were subsequently used to initialize the ab initio MD simulations. 
The two systems were then equilibrated within a Car-Parrinello (CP) ab initio MD run [24] performed with the ab initio MD code, CPMD [25]. In the ab initio MD simulations, the electronic structure was treated via the Kohn-Sham (KS) formulation of density functional theory [26] within the local density approximation for the pure silica system and within the generalized gradient approximation for the CAS system employing the B-LYP functional [27,28]. The KS orbitals were expanded in a plane-wave basis at the Γ-point of the supercell up to an energy cutoff of 70 Ry for both systems. Core electrons were not treated explicitly but were replaced by atomic pseudopotentials of the Bachelet-Hamann-Schlüter type for silicon [29] and the Troullier-Martins type for oxygen [30]. A Goedecker-type pseudopotential [31] was employed for aluminum, and a Goedecker-type semi-core pseudopotential was employed for calcium. The choices of the pseudopotentials, exchange and correlation functionals and plane-wave cutoff are justified by previous studies carried out on amorphous SiO 2 [23] as well as total energy calculations carried out on small molecules (see Table I). A. Molten SiO 2 The molten silica system contains 26 SiO 2 units in a cubic box of edge length 10.558Å, which corresponds to a mass density of 2.2 g·cm −3 . The density of the glass was chosen so that the configurations could later be used in quenching runs to generate glass structures. Although the density is, therefore, a little too high compared to the real liquid, it is not expected that this will significantly affect our findings, which are based on the comparison of network and disrupted network systems. The SiO 2 initial configuration was obtained by melting a 216 SiO 2 units β-cristobalite crystal at 7000 K with classical molecular dynamics using the van Beest, Kramer and van Santen (BKS) potential [35,36] and then cooling it to 4200 K using the same potential. At this temperature, a cubic box of edge length 10.558Å and containing 26 SiO 2 was extracted from the 216 SiO 2 system and equilibrated during ∼ 35 ps. The classically equilibrated SiO 2 liquid configuration was further equilibrated within a 6-ps ab initio MD run at 4200 K using a time step of 0.096 fs, then quenched to 3500 K at a quench rate of 3 10 15 K·s −1 with the same time step, and finally equilibrated at 3500 K for 6 ps using a time step of 0.108 fs. In order to achieve rapid equilibration and efficient canonical sampling of the system, a separate Nosé-Hoover chain thermostat [37] was placed on each ionic degree of freedom (known as "massive" thermostatting [38]) and an additional Nosé-Hoover chain thermostat was placed on the electronic degrees of freedom [37,39]. In all cases, a fictitious electronic "mass" parameter, µ (having units of energy×time 2 ) of 600 a.u. was employed. B. Molten calcium aluminosilicate The CAS system contains 22 SiO 2 , 4 Al 2 O 3 and 7 CaO, which gives approximately 67 %, 12 % and 21 % molar percentages of these units, respectively, and a total of 100 atoms. For this particular composition, there are 8 Al atoms which give rise to 8 AlO − 4 tetrahedra under the assumption that all Al atoms form tetrahedra. Four of the Ca 2+ ions then compensate the negative charges of the AlO − 4 , leaving three Ca 2+ that can break the network and create, ideally, 6 non-bridging oxygens. The system is confined in a cubic box with an edge length of 11.3616Å, which corresponds to a mass density of 2.4 g·cm −3 . 
This density has been chosen by extrapolating to 2500 K the data obtained by Courtial and Dingwell for a system of close composition [14]. For this case, classical MD simulations were also carried out on systems containing 100 and 5184 atoms with the same composition described above. By comparing the structural properties obtained from the classical MD simulations at the two system sizes, it was possible to estimate the finite-size effects on the ab initio MD data and to validate the choice of the system size for the ab initio simulations [40]. The initial configuration of the CAS system for the ab initio MD simulation was generated using a Born-Mayer-Huggins potential [19] in a classical MD run to obtain a melt at 2000 K. The CAS liquid was then heated to 2300 K with CPMD and further equilibrated for 2 ps using a time step of 0.12 fs and an electron mass parameter of 800 a.u. It was then heated again to 3000 K and equilibrated for 6.8 ps, during which stronger diffusion effects occurred than at the lower temperature. Again, rapid equilibration and efficient canonical sampling was achieved by coupling a Nosé-Hoover chain thermostat [37,39] to each ionic degree of freedom. PRESENTATION OF RESULTS In this section, the structural properties of the CAS and SiO 2 melts are presented in terms of network pair correlation functions, angle distributions, examination of Al-O-Al linkages, proportion of NBO, and cation pair correlation functions. A full discussion of these results is presented in Sec. 4. A. Network pair correlation functions In this subsection, the pair correlation functions (PCF) corresponding to the networkforming atoms (Si, O and Al for CAS) are presented for the CAS melt and compared to those of the silica melt when appropriate. The Si-O PCF and the corresponding integrated coordination number are almost identical for the two systems, which confirms that the basic tetrahedral unit is conserved between these two liquids ( Fig. 1(b)). Moreover, the similarity in CAS between the Al-O ( Fig. 1(d)) and Si-O PCFs is clear evidence of the fact that the Al atoms can substitute for the Si atoms at the center of the tetrahedra. The ability of Al to replace Si in the network has also been observed in experimental studies of aluminosilicate glasses [7][8][9][10][11]. The slight shift of the Al-O peak to a higher r value is consistent with previously observed and calculated Al-O and Si-O bond lengths in aluminosilicates [2] as well as with the larger covalent radius of Al compared to Si. Comparison of the Si-Si and O-O PCFs between the CAS and SiO 2 liquids also shows a slight shift of the first peak toward higher r values in the CAS case, and the plateau in the running Si-Si coordination number is considerably lower in the CAS system. In order to explain the shift in the Si-Si peak, we first note that in silica, the first Si-Si neighbors correspond to neighboring tetrahedra (i.e. the first Si-Si distances correspond to the Si-O-Si linkages). In the CAS system, however, some of the oxygens, in particular, the NBO, are connected to only one network forming atom (Si or Al) and one network modifier (Ca), thus forming Si-O-Ca or Al-O-Ca linkages. Therefore, some of the first-neighbor Si-Si distances are due to these more complex linkages. The reduction in the Si-Si coordination is due primarily to the fact that some of the Si atoms are replaced by Al atoms. 
As a result of the Al substitution for Si and the presence of NBOs, there are fewer direct Si-O-Si linkages in CAS, leading to an average coordination of 2.4 compared to 4 in the pure silica system. The shift in the position of the first O-O peak in the CAS system is simply a reflection of the fact that the Al-O bond length is larger than that of Si-O. Thus, if Al substitutes for Si, maintaining both the regular tetrahedral coordination and the angles between neighboring tetrahedra as in silica, then the observed shift in the O-O peak is expected. B. Angles distributions Evaluation of the angles distributions and coordination numbers presented in this subsection was based on distance cutoffs determined from the first minimum of the PCFs (2.38 A for Si-O and 2.56Å for Al-O). In , it can be seen that, although Al substitutes for Si in the network, the fraction of 3-fold and 5-fold coordinated Al exceeds that of Si. Recent high-temperature NMR measurements of aluminosilicate melts [16,41] directly revealed the presence of 6-fold coordinated Al and strongly suggested the possibility of 5-fold coordinated Al, although the latter were not directly observed in these experiments. However, MAS NMR experiments [42,43] have provided clear evidence of the existence of small amounts of both 5-and 6-fold Al coordination in binary Al 2 O 3 -SiO 2 and in ternary CaO-Al 2 O 3 -SiO 2 glasses [42,43]. Indeed, the average proportion of 5-fold coordinated Al in the present simulations is also small compared to 4-fold coordinated Al. However, we observe less than 1 % of 6-fold coordinated Al for this particular composition. We also investigated the effect of the Al substitution for Si on the angles between neighboring tetrahedra. The Si-O-Al and Si-O-Si angles distributions were computed for the CAS melt and compared to the Si-O-Si angle distribution for the SiO 2 melt (Fig. 4). The Si-O-Si distributions are very similar in the two systems, only showing a slight difference in the intensity of the shoulder around 90 • -100 • . It can be shown that values of the Si-O-Si angles around 90 • -100 • can be attributed to 2-membered rings and/or to oxygen tri-clusters, oxygens bound to three network-forming atoms (see Sec. 4). A comparison of the proportions of these units in the two different systems is consistent with the differences observed in the angle distributions and will be discussed in more detail in section 4. In CAS, the Si-O-Al angle distribution shows a higher shoulder around 90 • -100 • than the Si-O-Si one. Again the Si-Al 2-membered rings and the oxygen tri-clusters are at the origin of this shoulder. Overall, the introduction of Al and Ca does not significantly affect the Si-O-Si angle distribution in the molten state. C. Al-O-Al linkages Although the Al-Al PCF ( Fig. 1(e)) is somewhat noisy due to the small number of Al atoms in the system, this correlation function exhibits a first peak at ∼ 3.2Å, which indicates that some Al-O-Al linkages are present in the system. This would appear to be in direct violation of the so called Al avoidance principle or Löwenstein's rule [44], an empirical rule which states that two Al tetrahdera are never found linked by an oxygen atom in aluminosilicate crystals and glasses, at low Al content. However this rule has been found experimentally not to be exactly fulfilled in some glasses and melts, in particular when Ca atoms are present, leading to Al/Si alternance disorder [45][46][47]. 
The degree of Al avoidance violation can be quantified by examining the average number of Al-O-Al linkages formed over the course of the simulation. By computing the number of Al atoms around each oxygen atom (using the first minimum of the g(r), 2.56Å, as the Al-O cutoff distance) and counting one Al-O-Al linkage when two Al atoms are found, an average value of 2.26 Al-O-Al linkages is obtained, i.e., ≈ 57 % of Al atoms form Al-O-Al linkages, if they do not form chains. Moreover, half of the oxygen atoms involved in the Al-O-Al linkages are found to be 3-fold coordinated, on average. The proportion of Al-O-Al linkages is greater than that which would be obtained from a purely random model [46]. Thus, the Al atoms appear to favor Al-O-Al linkages in the CAS liquid and adhere only minimally to the Al avoidance principle. That the presence of Ca 2+ cations preferentially favor Al-O-Al linkages and, thus, strong violation of the Al avoidance principle, has also been suggested by static ab initio calculations of clusters [48] and is likely due to the greater aggregation of negative charge around the Al-O-Al linkage. In a disordered (liquid or glass) state, the relatively large Ca 2+ charge is, therefore, able to induce formation of such linkages. D. Non-bridging oxygens In the present CAS system, stoichiometry dictates that four of the Ca 2+ ions should compensate the eight aluminum tetrahedra (assuming all the Al are four-fold coordinated), leaving three Ca 2+ cations to create six non-bridging oxygens. The number of NBOs in the system can be computed by counting the number of oxygen atoms which have only one Si or Al neighbor and one Ca neighbor. The neighbors are defined to be any atoms within a sphere of radius determined by the first minimum of the corresponding PCF (2.38Å for O-Si, 2.56Å for O-Al, and 3.40Å for O-Ca -see Sec. 3 A and Sec. 3 E), centered on each oxygen atom. The distribution of the number of non-bridging oxygens found in the CAS system during the simulation is depicted in Figure 5. The distribution is peaked at 9, while the probability of the system's possessing six NBOs is relatively small. Thus, the average number of NBOs in the system is larger than would be predicted from a simple stoichiometric argument. In a recent experiment of Stebbins and Xu [12], it was found that in a particular CAS glass system in which the Ca 2+ ions perfectly compensate all of the negative Al tetrahdera, some NBOs are present, although stoichiometric arguments would predict that there should be none. Since the present study is concerned with the liquid state, where bond-length fluctuations and other dynamical effects are more significant than in the glass, we cannot directly compare our result with that of Stebbins and Xu. However, given that the trend is toward a larger number of NBOs in the glass than the stoichiometry would predict, the fact that our calculations predict a similar trend in the liquid accords well with the experimental result. In Figure 6, we present the first peaks of the PCFs between the oxygen atoms and the network-forming atoms (Si and Al), evaluated separately for bridging (BO) and non-bridging (NBO) oxygen atoms. The maximum intensity of the X-BO and X-NBO (X=Si,Al) peaks are located at different r values, the X-NBO distances being shorter than the corresponding to those for X-BO. This result is in agreement with experimental results concerning sodium silicates and aluminosilicates [2,49] and with ab initio calculations on clusters [48,50]. 
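The neighbour-counting rules described above (cutoff radii taken from the first minima of the corresponding PCFs, a non-bridging oxygen defined by a single Si/Al neighbour plus a Ca neighbour, and an Al-O-Al linkage counted when an oxygen has two Al neighbours) translate directly into a short analysis script. The sketch below operates on a single hypothetical MD frame with a cubic periodic box; the input arrays and file names are placeholders, and this is an illustration of the bookkeeping rather than the authors' analysis code.

```python
# Count non-bridging oxygens (NBO) and Al-O-Al linkages in one MD frame of the
# CAS melt, using the PCF first-minimum cutoffs quoted in the text
# (O-Si 2.38 A, O-Al 2.56 A, O-Ca 3.40 A). Positions, species labels and the
# input files are hypothetical placeholders.
import numpy as np

CUTOFF = {"Si": 2.38, "Al": 2.56, "Ca": 3.40}   # O-X cutoff distances (Angstrom)
BOX = 11.3616                                    # cubic box edge of the CAS cell (Angstrom)

positions = np.load("cas_frame_positions.npy")   # (N, 3) Cartesian coordinates, hypothetical
species = np.load("cas_frame_species.npy")       # (N,) labels "Si", "Al", "O", "Ca", hypothetical

def n_neighbours(i, kind):
    """Number of atoms of species `kind` within the O-kind cutoff of atom i,
    using the minimum-image convention for the cubic box."""
    d = positions[species == kind] - positions[i]
    d -= BOX * np.rint(d / BOX)                  # minimum-image displacement
    r = np.linalg.norm(d, axis=1)
    return int(np.count_nonzero((r > 1e-6) & (r < CUTOFF[kind])))

n_nbo = 0
n_al_o_al = 0
for i in np.flatnonzero(species == "O"):
    n_si = n_neighbours(i, "Si")
    n_al = n_neighbours(i, "Al")
    n_ca = n_neighbours(i, "Ca")
    # NBO: exactly one network-forming neighbour (Si or Al) and a Ca neighbour
    if n_si + n_al == 1 and n_ca >= 1:
        n_nbo += 1
    # one Al-O-Al linkage is counted when an oxygen has two Al neighbours
    if n_al >= 2:
        n_al_o_al += 1

print(f"non-bridging oxygens in this frame: {n_nbo}")
print(f"Al-O-Al linkages in this frame:     {n_al_o_al}")
```

Averaging these per-frame counts over the trajectory gives quantities directly comparable to the average NBO number and the 2.26 Al-O-Al linkages reported above.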
Returning to Fig. 6, the most probable distances in the simulated CAS melt are r(Si-BO) ≈ 1.64 Å, r(Si-NBO) ≈ 1.58 Å, r(Al-BO) ≈ 1.75 Å, and r(Al-NBO) ≈ 1.70 Å. It is also interesting to note that the Si-O and Al-O distances in the molten state are very close to their respective values in the glass.

E. Calcium pair correlation functions

In this subsection, structural properties involving the calcium atoms in the CAS melt, such as PCFs and coordination numbers, are presented. The calcium PCFs are depicted in Fig. 7. The Ca-O radial distribution function (Fig. 7(b)) exhibits a first peak at approximately 2.33 Å, in agreement with experimental values obtained for glasses and minerals of similar composition [3,9]. The coordination of the calcium atoms is found to be equal to 6.2 ± 1.3 on average (see Fig. 8), while the experimental value in calcium aluminosilicate glasses is estimated to be between 5 and 6 [3,9]. This comparison is only meant as a qualitative one, since a direct comparison between the liquid and glass states is not possible. Indeed, the high temperature of the molten state gives rise to large fluctuations in the Ca-O coordination number. The Ca-Si and Ca-Al PCFs (Fig. 7(a) and (c)) also show well-defined first peaks at ∼3.45 Å, which are due either to direct bonds with non-bridging oxygens, i.e., Si-O-Ca and Al-O-Ca linkages, or to the proximity of Ca2+ ions that compensate the negative AlO4− groups. It can be shown that the Ca-Si peak is due to the former and the Ca-Al peak to the latter. Indeed, we observe mostly Si-NBO bonds and very few Al-NBO bonds (∼91% of the NBOs are connected to Si atoms), a result that is consistent with recent X-ray experiments on glasses of similar composition [9]. In Fig. 6, in which PCFs are evaluated separately for bridging (BO) and non-bridging (NBO) oxygen atoms, we observe that the Si PCFs possess similar characteristics for both BO and NBO. In contrast, the Al-NBO PCF is generally small compared to the Al-BO PCF, which indicates that almost all the NBO atoms are connected to Si tetrahedra. Thus, the first peak in the Ca-Al PCF is due to nearby Ca2+ ions which, in this close proximity, are able to compensate the negative AlO4− groups. Note that the Ca-Ca PCF, shown in Fig. 7(d), does not exhibit a well-defined first peak, which suggests that, at liquid conditions, there is no ordering of the network modifier cations.

DISCUSSION

In order to explain the presence of excess non-bridging oxygens, Stebbins and Xu [12] proposed two possible structural units: AlO5 groups and tri-clusters. Tri-clusters are oxygen atoms bonded to three network-forming atoms, either Al or Si, and they can be of four types: oxygens bonded to three silicon atoms (3 Si), to two silicon and one aluminum atoms (2 Si - 1 Al), to one silicon and two aluminum atoms (1 Si - 2 Al), and to three aluminum atoms (3 Al). In the molten state we observed that the fraction of 3-fold coordinated oxygen atoms is not negligible. A more detailed analysis of these tri-clusters in CAS showed that, due to the small number of aluminum atoms in the system, 3-Al tri-clusters are absent and that the most numerous tri-clusters are the 2 Si - 1 Al and 1 Si - 2 Al types. When added together and averaged over the trajectory, the percentage of oxygen atoms forming tri-clusters is around 6.9% in the CAS system, with large fluctuations of about ±2.8%.
This tri-cluster fraction accords well with the idea of Stebbins and Xu [12] that a certain number of oxygen tri-clusters should be present in the glass in order to compensate for the formation of the non-bridging oxygens. It could be argued that the relatively small number of tri-clusters may be a consequence of thermal fluctuations at liquid conditions. Although thermally induced tri-cluster formation cannot be ruled out, it is worth noting that only 4.7% of the oxygen atoms could be identified as tri-clusters in the silica melt, even at its higher temperature. On the other hand, in CAS we also find a relatively high number of AlO5 units: on average, approximately 1.9 Al atoms (i.e., 23% of the Al) are 5-fold coordinated; however, the fluctuations are such that, in any given configuration, anywhere between 0 and 3 AlO5 units may exist. This result suggests that the presence of highly coordinated aluminum atoms could favor the creation of excess NBO atoms. Most experimental studies [8][9][10][11] only show evidence of 4-fold coordinated aluminum atoms in aluminosilicate glasses. However, as discussed in Sec. 3 B, experimental evidence of higher Al coordination numbers (mainly 5- and 6-fold) in the glass and molten state of calcium aluminosilicate systems exists, and, recently, evidence of high Al coordination in magnesium aluminosilicate glasses has been reported [14,16,41-43]. In Fig. 5, we present the distribution of the number of NBOs found during the simulation of the CAS melt. The relatively large width of the distribution indicates that the identity of the oxygen atoms (BO or NBO) does not remain constant during the simulation. Indeed, bond-breaking events that lead to the creation and the annihilation of non-bridging oxygens, i.e., Q4 and Q3 exchanges, are observed. This mechanism has been suggested as underlying the induction of shear flow and, hence, a decrease in the viscosity in these systems [14-16]. Since almost no NBOs are found to be connected to Al atoms on average during the simulation (see Sec. 3 E), it is highly probable that more Al-O bonds are broken than Si-O bonds. In silica, however, since there are no NBOs, a different mechanism must underlie the flow process. Unfortunately, we cannot extract direct dynamical quantities from the present simulations. This is mainly due to the fact that at high temperature in the molten state, the electronic gap is too small compared to k_B T to ensure the decoupling of the ionic and the electronic degrees of freedom, which is needed for ab initio molecular dynamics in a constant-energy ensemble. The use of thermostats therefore becomes compulsory, and direct access to dynamical properties is no longer available. We are currently developing new techniques to treat this problem [51]. Nevertheless, the structural characteristics of the melt can give some insight into its dynamical properties. For instance, it has been suggested that five-coordinated network cations could act as transition states for the flow process [14-16,43]. This atomic-scale flow step was described in Ref. [13] as an oxygen atom changing from bridging to non-bridging, or vice versa: "A non-bridging oxygen bonded to a modifier cation can approach a silicon atom and make it over-coordinated. Dissociation of the over-coordinated SiO 5 results in an exchange of the roles of oxygen from bridging to non-bridging ...".
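The Q4/Q3 exchanges mentioned above become visible once each Si atom is labelled by its number of bridging oxygens. The following Python sketch shows one way such a Q_n speciation count could be done per configuration; tracking the output over successive frames would then expose the exchange events. It is an illustration only, reusing the assumed cutoffs and data layout of the earlier sketches, and is not a description of the analysis actually performed for the paper.

```python
import numpy as np
from collections import Counter

CUTOFF = {"Si": 2.38, "Al": 2.56}   # assumed X-O bond cutoffs (Å)

def _dist(i, j, pos, L):
    d = pos[j] - pos[i]
    d -= L * np.round(d / L)         # minimum-image convention
    return np.linalg.norm(d)

def _oxygens_of(i, pos, species, L):
    """Oxygen atoms bonded to network former i."""
    return [j for j, s in enumerate(species)
            if s == "O" and _dist(i, j, pos, L) < CUTOFF[species[i]]]

def _formers_of(j, pos, species, L):
    """Network formers (Si/Al) bonded to oxygen j."""
    return [i for i, s in enumerate(species)
            if s in CUTOFF and _dist(i, j, pos, L) < CUTOFF[s]]

def qn_speciation(pos, species, L):
    """Counter of Q_n species (n = bridging oxygens per Si) for one frame."""
    qn = Counter()
    for i, s in enumerate(species):
        if s != "Si":
            continue
        n_bridging = sum(
            1 for j in _oxygens_of(i, pos, species, L)
            if len(_formers_of(j, pos, species, L)) >= 2)
        qn[f"Q{n_bridging}"] += 1
    return qn
```

A Si atom switching between Q4 and Q3 in consecutive frames corresponds exactly to the creation or annihilation of a non-bridging oxygen discussed in the text.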
In the present simulations, we find a significant fraction of 5-fold coordinated silicon atoms (≈9.9% in SiO2 at 3500 K and ≈3.8% in CAS at 3000 K) and a large fraction of 5-fold coordinated aluminum atoms in CAS (see Fig. 3). These numbers show very large fluctuations around their average values, which supports the idea that these units participate in the transition state of the flow steps. The relatively large number of 5-fold coordinated network cations in CAS compared to SiO2, even at a lower temperature, could be a signature of a lower viscosity in the aluminosilicate melt. It is also interesting to note that the number of 5-fold coordinated Si atoms is larger in SiO2 than in CAS. In the latter system, the 5-fold coordinated Al atoms, which are energetically more favorable, replace the 5-fold Si atoms as the transition state of the flow step. In Sec. 3, it was seen that the local order in the two silicate melts is only slightly affected by temperature and by the introduction of a small number of modifier cations. The Si tetrahedral units remain the most probable basic units of the system, with probabilities of 85% in SiO2 and 89% in CAS at 3000 K. From structural characteristics such as pair correlation functions or angle distributions, therefore, it is almost impossible to discern the disruption of the silicate network by the modifier cations at high temperature. The effect can be seen, however, by looking at characteristic patterns in the network. An example is the oxygen-oxygen coordination number, which is a more sensitive probe of the network disruption. We have evaluated these coordination numbers by counting the number of oxygen neighbors around each oxygen atom inside a sphere of radius determined by the first minimum of the O-O radial distribution function. Histograms of the oxygen-oxygen coordinations are depicted in Fig. 9 for the two different systems. For SiO2 at 3500 K, the distribution of the O coordination peaks around 7 or 8 atoms, which indicates that the oxygen atoms have more oxygen neighbors than would be expected from two connected tetrahedra. In the SiO2 glass, in which the network is not broken, the maximum of the distribution occurs at 6, which corresponds to the number of oxygen neighbors belonging to two connected tetrahedra. The fact that, at 3500 K, the maximum is located between 7 and 8 in silica shows that part of the network has been broken. One of our hypotheses is that 2-membered rings and highly coordinated network formers are the cause of the increased number of oxygen neighbors. Indeed, the percentage of 2-membered rings in silica is not negligible at 3500 K (∼12.4%). It could be argued that our liquid silica system is too compressed (its density is equal to the experimental density of the silica glass at room temperature) and that the high pressure induces the shift in the O-O correlation and the large number of edge-sharing tetrahedra. However, results of classical molecular dynamics on liquid silica at several densities and temperatures show that this is not the case [52,53]. At ≈3200 K and zero pressure, the O-O distribution resembles the distribution in Fig. 9 for silica [52], whereas at higher pressure (for a density of 2.35 g cm⁻³) and temperatures up to 4200 K, the O-O distribution is peaked between 6 and 7, which is close to that of the glass structure [53]. The shift of the O-O distribution with pressure is accompanied by a decreasing number of edge-sharing tetrahedra.
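The edge-sharing tetrahedra invoked here are simply pairs of network formers that share two oxygen neighbours, so counting them is a small graph problem once the bond cutoffs are fixed. The Python sketch below shows one possible implementation under the same assumed cutoffs and data layout as the earlier sketches; the precise ring-counting convention used in the paper may differ.

```python
import numpy as np
from itertools import combinations

CUTOFF = {"Si": 2.38, "Al": 2.56}   # assumed X-O bond cutoffs (Å)

def two_membered_rings(pos, species, L):
    """Pairs of network formers (Si/Al) sharing two or more oxygen neighbours,
    i.e. edge-sharing tetrahedra (2-membered rings)."""
    # adjacency: network former index -> set of bonded oxygen indices
    o_sets = {}
    for i, s in enumerate(species):
        if s not in CUTOFF:
            continue
        nb = set()
        for j, t in enumerate(species):
            if t != "O":
                continue
            d = pos[j] - pos[i]
            d -= L * np.round(d / L)          # minimum-image convention
            if np.linalg.norm(d) < CUTOFF[s]:
                nb.add(j)
        o_sets[i] = nb
    return [(a, b) for a, b in combinations(o_sets, 2)
            if len(o_sets[a] & o_sets[b]) >= 2]
```

Dividing the number of such pairs (or of the network formers involved) by the total number of network formers and averaging over frames gives fractions comparable to the ∼12.4% quoted above for silica at 3500 K.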
This "non intuitive" behavior has already been seen in previous Monte Carlo simulations of liquid silica under pressure, in which it has been shown that the proportion of small rings decreases with increasing pressure [54]. Given the similarities between our result and that of Ref. [52], and given the small thermal expansion of silica (0.54 10 −6 K −1 at 1000 • C [55]), it is clear that the high O-O correlation is a consequence of the high temperature of the system. The distribution of O-O coordination in CAS at 3000 K is shifted towards higher numbers compared to that of SiO 2 (Fig. 9). In the former, the disruption of the network is not only due to the NBOs but also to the presence of highly coordinated Al atoms which weaken the structure [18]. The CAS melt density is only slightly larger than that given by the extrapolation to 3000 K of experimental data [14], therefore the high correlation of the oxygen atoms is clearly due only to the high temperature. The relative shifts of the O-O distributions between the two different systems can be related to the disruption of the three dimensional network. In the two melts, thermal effects create defects such as 2-membered rings and SiO 5 units in comparable proportions. In CAS, however, the disruption of the network is also driven by the presence of non-bridging oxygens, which break some of the Si-O bonds, as well as by the presence of the aluminum atoms which possess high coordination numbers. Indeed, the presence of aluminum atoms has already been proven to be responsible for the increased fragility of the silicate network [18], which can be traced to the fact that the Al have broader coordination distributions, leading, therefore, to an increased flexibility of the network structure than in SiO 2 . The use of the Nosé-Hoover chain thermostatting method [37] in the present simulations allows the heat capacity at constant volume, C v to be computed efficiently using the relation: where ∆E = E 2 − E 2 denotes the fluctuations of the potential energy plus the kinetic energy of the ions over the course of the simulation. In silica at 3500 K, we find that the heat capacity at constant volume is equal to 91.5 J mol −1 K −1 . As a first approximation, we can roughly estimate the difference between C v and C p , the heat capacity at constant pressure, which is the experimentally measured quantity: where α is the thermal expansion coefficient and β the isothermal compressibility. Using the experimental values α=0.54 10 −6 K −1 , valid at 1000 • C, and β = 2.74 10 −2 GPa −1 at 300 K [55], we find a difference of C p -C v = 1.0 10 −3 J mol −1 K −1 , so C p ≈ 91.5 J mol −1 K −1 . This value does not disagree with an extrapolation of the experimental values of C p from Ref. [56] and is close to the values obtained by Scheidler et al. [57] from molecular dynamics simulation of the SiO 2 melt in the same temperature range, using the BKS potential [35]. Scheidler et al. computed the frequency dependent specific heat from T =2750 K up to T =6100 K and found a reasonable extrapolation of the experimental C p (T ) above T g . For the CAS melt at 3000 K, we find a value of C v = 99.28 J mol −1 K −1 and an estimate of the difference, C p -C v = 0.189 J mol −1 K −1 , which gives C p ≈ 99.47 J mol −1 K −1 . 
The C_p − C_V difference has been estimated using experimental values for calcium aluminosilicate systems of close compositions: α = 54.5 × 10⁻⁷ K⁻¹ at 815 °C (25 mol% CaO, 15 mol% Al2O3, 60 mol% SiO2) [58] and β = 1.28 × 10⁻² GPa⁻¹ at 300 K (26.7 mol% CaO, 13.3 mol% Al2O3, 60 mol% SiO2) [55]. Although this estimate of C_p is relatively crude, it can be used to give an order of magnitude for C_p at high temperature. To our knowledge, no experimental values of C_p have been reported for a CAS system of close composition at high temperature.

CONCLUSION

The structural properties of two different molten silicates have been analyzed using first-principles molecular dynamics. It has been observed that, even in the calcium aluminosilicate melt, the basic structural unit of the system, i.e., the SiO4 tetrahedron, is not destroyed at high temperature and that the aluminum atoms can substitute for the silicon atoms in the center of the tetrahedra. Analysis of the structure of the two melts shows that the temperature effects induce the creation of 2-membered rings, 3-fold coordinated oxygen atoms and 5-fold coordinated silicon and aluminum atoms. In CAS, in particular, a relatively large proportion of Al atoms are found to be 5-fold coordinated. In the CAS system, a larger number of non-bridging oxygens than would be predicted from stoichiometry was found. This excess of NBOs is in agreement with recent experimental results on glasses of composition similar to the present one [12] and is likely due to the presence of AlO5 units as well as oxygen tri-clusters (mainly 2 Si - 1 Al). Also in the CAS system, it is found that the aluminum avoidance principle is violated and that most of the non-bridging oxygens are located on silicon tetrahedra, both results being in agreement with recent experimental data [9,45,46]. Finally, evidence of the creation and annihilation of non-bridging oxygens has been obtained. These processes likely play a key role in the flow mechanism in this system. A detailed understanding of the diffusion process, although extremely interesting, would require considerable computer time and would represent a methodological challenge for the ab initio approach. While we are currently developing new techniques to treat this problem [51], the study of the dynamical properties of these systems remains, for the present, beyond the scope of this paper. In addition, since it is not possible to determine which of the dominant structural motifs in CAS are a direct consequence of thermal fluctuations, we intend, as future work, to perform a quench of the CAS system to room temperature in order to study its structural properties in the glassy state.
2019-04-05T03:32:09.025Z
2001-09-14T00:00:00.000
{ "year": 2001, "sha1": "34ba84843a3d320569a0b1bdcb2c11d22d561296", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0109267", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c4ad8309f00e9ed2ed97d89d936037a13b8c6148", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
54841221
pes2o/s2orc
v3-fos-license
Marketing challenges experienced by small-to-medium enterprises over formal clothing industries in Harare, Zimbabwe

Abstract

Small-to-medium enterprises (SMEs) are faced with a myriad of competitive business organisations which are broadly categorised as formal industries. These organisations exhibit varying marketing strategies to remain viable, survive and to be a going concern. This study investigated marketing strategies employed by clothing entrepreneurs in the SMEs over big formal clothing companies in Harare, Zimbabwe. The study sought to understand why customers prefer to buy from small clothing entrepreneurs over big clothing entrepreneurs. Non-probability sampling and purposive sampling techniques were adopted, and a total of 75 entrepreneurs constituted the research participants. A post-positivism research philosophy was adopted, and a combination of questionnaires and open-ended interviews were the research instruments of choice. The study found that small clothing manufacturers in Harare are increasingly becoming competitive in their marketing strategies as compared to rigid large formal clothing companies. It also found that large companies need to implement creative and decisive marketing strategies and catch up with the prevailing winning marketing strategic practices suitable for the Zimbabwean economic environment. The study found that legal and illegal imported clothes (mainly from Mozambique) are threatening the viability and sustainability of Harare clothing manufacturers. It is therefore recommended that the Zimbabwean government needs to capacitate the small-scale clothing manufacturers, curb illegal smuggling of clothes from Mozambique and fight anti-competitive practices of large clothing manufacturers. Small-scale clothing manufacturers are contributing to employment creation in Harare and Zimbabwe at large.

ABOUT THE AUTHOR

Lucia Sithole is a lecturer at Chinhoyi University of Technology in the School of Art and Design. She is a PhD student and holds a Master of Science Degree in Family and Consumer Sciences and a Bachelor of Education Degree in Home Economics. She is an experienced researcher with five publications to her credit.
Members of the group are also seasoned lecturers and have done research in clothing fashion design. The informal sector requires government assistance to curb the importation of clothes, which threatens the marketing of informal-sector clothes. The importation of clothes is also threatening the livelihoods and employment of workers in the informal clothing industries. Foreign investors in the clothing industry are discouraged from coming to Zimbabwe due to the influx of cheap imported clothes. There is a lack of political will to address the threat to the clothing industries.

PUBLIC INTEREST STATEMENT

This study explores challenges faced by small-to-medium enterprises as they compete with formal industries to market their products. In order to remain viable, these organisations must employ a variety of marketing strategies. This study used questionnaires and open-ended interviews. Findings of the study are that small clothing manufacturers in Harare are increasingly becoming competitive in their marketing strategies as compared to rigid large formal clothing companies. It was also found that large companies need to implement creative and decisive marketing strategies and catch up with the prevailing winning marketing strategic practices suitable for the Zimbabwean economic environment. Another important finding was that legal and illegal imported clothes are threatening the viability and sustainability of Harare clothing manufacturers. The findings of this study should assist the informal-sector traders to improve their marketing strategies with a view to accessing the marketing opportunities enjoyed by the formal industries.

Introduction

Harare has been characterised by monopolistic big clothing companies at the expense of small emerging clothing manufacturers. Notwithstanding, large clothing manufacturers in Harare have served the city for a considerable time and they have created and sustained their marketing efforts through quality products, account schemes, extensive promotions and discounts. Due to their market dominance and viable operations, large corporations would then contract small clothing companies to sell the stocks of large corporations at a premium, all to the economic benefit of large clothing companies. However, the realisation of the importance of small entrepreneurs, coupled with the convenience of their geographical location, has propelled them to compete with large clothing manufacturers. The city of Harare has witnessed a rise of small clothing companies engaged in manufacturing clothes ranging from corporate clothing, school uniforms and work suits to designer and customised outfits. In countries like America and Canada, small clothing companies have long risen to the occasion and have a variety of fashion designs to their names. In America, small business organisations in the clothing industry conduct a variety of workshops to enhance the viability of their operations. British small clothing businesses are known for providing specialised customer services, custom-made suits and an opportunity to try clothing before purchase, a rare find in large clothing companies.
In Africa, and with particular reference to South Africa, small clothing companies conduct massive road shows and fashion weeks and even give sample outfits for free as part of their marketing efforts. In Zimbabwe, small clothing manufacturing companies adopted the use of posters and fliers as their marketing tools. However, as the economic situation in Zimbabwe deteriorated, there was a shift, with an influx of second-hand clothing from Mozambique; the eastern border city of Mutare, the gateway to and from Mozambique, became the major supplier of second-hand clothes. Combined with their existing services, SME clothing companies in Harare are supplying second-hand clothes, offsetting the market position of large clothing companies. It is on the basis of this background that this study investigated the marketing challenges experienced by small clothing companies and how marketing strategies are applied to compete with large clothing manufacturing companies in Harare.

Problem statement

It is a business fact that sustainable businesses are the ones that employ decisive marketing strategies. The world is increasingly adopting the consumerism concept, which encourages the protection or promotion of the interests of consumers. Therefore, the success of business organisations is linked to how customer tastes and experiences are met and fulfilled. Modern-day customers prefer to buy outfits that match their needs rather than satisfy the needs of the clothing manufacturers. According to Easey (1995), in the world of fashion, outfits and clothing, the burden of success lies with designers to come up with products that not only meet but exceed customer expectations. Be that as it may, large clothing companies have experienced and highly paid clothing designers, yet Harare is faced with poorly performing large clothing companies. The study sought to answer the following questions:
• Are large clothing companies failing to attract customers due to marketing strategies being employed by small-to-medium enterprises (SMEs)?
• What marketing challenges are being faced by clothing companies?
• Are small clothing entrepreneurs producing products according to customer preferences which big clothing companies are failing to do?

Research objectives

The study sought to:
• Identify marketing strategies used by clothing manufacturing companies in Harare, Zimbabwe.
• Establish marketing challenges faced by clothing manufacturing companies in Harare, Zimbabwe.
• Identify the determining factors of product price range.
• Identify where customers are getting knowledge of clothing products on the market.
• Establish the impact of marketing strategies used by clothing manufacturing companies.

Literature review

Several studies on the clothing and manufacturing industries have been carried out by researchers both in Zimbabwe and in other countries internationally. For instance, Brooks (2015) carried out a study on the hidden world of fast fashion and second-hand clothes. Tarisayi (2014) explored the effects of the Ebola scare on second-hand clothing traders in Zimbabwe. Norris (2015) studied the limits of ethicality in international markets, with special emphasis on imported second-hand clothing in India. According to Drucker (2008), marketing is a very significant aspect in the success of any business, especially the SMEs in the clothing industries and the big companies as well.
In other words, without marketing, people might not know the products on the market particularly if they are from small-scale clothing businesses. Marketing is a management process responsible for identifying, anticipating and satisfying consumer's requirements profitably (Moghaddan & Foroughi, 2012). One of the most important aims of companies is to enhance market share to achieve greater scale in their operations and improve profitability (Kotler & Armstrong, 2010;Smith, 1994). Marketing is the process by which firms create value for customers and build strong customer relationship in order to capture value from customers (Kotler & Armstrong, 2010). It is through this strong entrepreneur customer relationship that SMEs clothing entrepreneurs in Harare are becoming more popular and have more customers than big companies. Most of their garments are sewn to customer specification and satisfaction to build this strong relationship unlike clothing retailers who have ready-made items. When they succeed in this, they attract new customers by promising superior value and by keeping and increasing current customers by ensuring their satisfaction (Kotler & Armstrong, 2010). Generic marketing strategies A business strategy is a long-term plan of action designed to achieve a particular goal or set of goals or objectives and it is designed to strengthen the performance of the business entity and it states how business should be conducted to achieve the desired goals (Mozer, 2013;Tribou, 2012). There are many alternative strategies that are available to an organisation and the organisation's choice of a strategy is a reflection of the strategic intent of the organisation and the various internal processes and practices (Lawson, 2015;Tribou, 2012). Martins (2017) cites that business leaders are faced with a range of potential business strategies to pursue and to attain competitive advantage and distinctive capabilities such as cost-leadership strategy, focus and differentiation. Tribou (2012) argued that, cost-leadership is a concept developed by Michael Porter, utilised in business strategy. It describes a way to establish the competitive advantage and it means the lowest cost of operation in the industry. The cost-leadership is often driven by company efficiency, size, scale, scope and cumulative experience (learning curve) (Martins, 2017, p. 43;Lawson, 2015). A costleadership strategy aims to exploit scale of production, well-defined scope and other economies (e.g. a good purchasing approach), producing highly standardised products, using high technology (Keller & Sood, 2013). Mozer (2013) cites that the sources of the cost-advantage are: economies of scale, the learning curve, low-cost access to factors of production, technological advantages and policy choice. This strategy works well when there are price wars, products are standard and when buyers are large and have significant bargaining power (Lawson, 2015;Monaghan, 2016). The cost-leadership strategy is always a best strategy to implement at the organisation in the sense that, it provides the organisation with economies of scale, access to capital due to low internal costs and it allows the maximisation of profits as compared to competing organisations (Tribou, 2012). However, it is not always a best strategy to implement in the sense that, economies of scale will eventually lead to diseconomies of scale resulting in the loss of profits (Tribou, 2012). 
Mozer (2013) argued that, if cost-leadership strategies can be implemented by numerous firms in an industry, or if no firms face a cost disadvantage in imitating a cost-leadership strategy, then being a cost leader does not generate a sustained competitive advantage for a firm. The ability of a valuable cost-leadership competitive strategy to generate a sustained competitive advantage depends on that strategy being rare and costly to imitate. Tribou (2012) added that, costleadership can also lead to reduction in organisational performance which in turn leads to low sales and low profits. Another potential strategy that can be pursued by an organisation is the focus strategy. The focus strategy refers to a market focus in terms of segmentation, meaning to say that it is the provision of a product/service directed at a narrow market segment niche (Lawson, 2015;Mohamed, 2014). For example, the organisation will need to identify the various niche markets it can effectively serve and focus its products to the market niche and this can be done using the customer preferences. The greatest advantage of this strategy is that it enables the organisation to acquire mastery in the niche market (Mozer, 2013). However, the organisation can easily lose the niche market to competitors. The third alternative strategy is the differentiation strategy which refers to a market position where customers are willing to pay a price premium for products or services offering added value from design, quality and service (Keller & Sood, 2013). This means that an organisation can offer superior quality products and services to its clients, so as to enable it to charge higher prices for quality and superior experiences (Mozer, 2013). However, this strategy will not effectively work for an organisation if it aims to increase numbers of clients rather than mere financial gains (Mohamed, 2014). Importance of marketing strategies Marketing is a very critical component for any business to survive, given the harsh economic Zimbabwe climate (Kotler & Armstrong, 2010). The twofold important goals of marketing are mainly to attract new customers and to keep current customers. They continue to say that marketing strategies draw customers and when the entrepreneurs take care of their customers, market share and profits will follow. Drucker (2008) cites that marketing has to answer to two important questions namely: • How your enterprise will address the competitive marketplace and • How you will implement and support day to day operations. With marketing, selling becomes easy given that the entrepreneurs have a full understanding of customer needs, customer value and prices promote products effectively due to marketing strategies used (Drucker, 2008). Nowadays, small-scale business reaps the rewards of creating superior value and good marketing strategies (Business Resource Software, 1994). In today's very competitive market place, a strategy that ensures a consistent approach to offering your product or service in a way that will outsell the competition is very critical (Easey, 1995, p. 93). Marketing strategies also help to research on certain objectives related to the clothing industry like how to beat competition (Drucker, 2008;Easey, 1995). According to Easey (1995), marketing helps to provide this additional knowledge and skills needed to ensure that the creative component is used to the best advantage, allowing business to succeed and grow. 
Easey (1995) continues to say that marketing itself is important because it helps reduce some uncertainty in the fashion industry and cut down the number of business failures. Having good marketing strategies makes it easy to attract and keep customers (Flick, 2005). It appears that small clothing entrepreneurs have better strategies to attract customers. Flick (2005) says, customers and entrepreneurs strike some understanding and share information on what can be done and what they need. The advantage is that with small-scale clothing businesses, they do not always have ready-made clothes unlike big clothing manufacturers. They allow new designs to be brought in by customers. This is achieved through marketing that establishes a strong relationship between the small clothing entrepreneurs and the customers (Flick, 2005; Kotler & Armstrong, 2010). The marketing mix The marketing mix refers to the seven marketing variables that need to be considered to make sure that the business operates to the maximum. The elements of the marketing mix are: place, price, promotion, physical evidence, people, product and process (Flick, 2005;Kotler & Armstrong, 2010). Flick (2005) cites that for products like clothing, marketers consider the four "Ps" which are: price, place, promotion and product. Price According to Kotler and Armstrong (2010), the price is the amount of money charged for a product or service. Easey (1995) says price is the amount of money that is exchanged for a product or service offered. It can also be the value that is placed on an item, what the item is worth given all the cost inputs. In most instances, the two are used synonymously and interchangeably. There is need for small-scale clothing entrepreneurs to seriously consider the issue of costing and pricing, so that they do not lose customers unnecessarily but at the same time remaining profitable. Jewell (1990) cites that the determinants of price are: • New products-consumers are often initially prepared to pay higher prices. • Qualities of products-products are sold at a price reflecting their quality. • Insurance premium-price is calculated from the risk involved. • Competitors-prices charged by competitors also influence price of the product. • Market target-the kind of group that is on target and also the culture of the group. • Time of selling-whether it is the season for the product. These may not be all the factors that influence prices as in Harare clothing manufacturers both SMEs and large companies. The notable ones are the prices charged by competitors and time of selling. Wolonski and Coates (2009) postulate that there are pricing strategies that can be implemented and these are: The importance of price Prices are very important to customers as they either attract or push away potential customers. Kotler and Armstrong (2010) cite that the value of price to customers is determined by: • Customer price sensitivity, • The level of competitive activity, • Availability of competitor products. Product Good marketing means products that fit the market. The products on the market need to be designed correctly and then developed to keep pace with the market changes (Jewell, 1990). Market research is essential, thus it helps to understand the customer and the products (Strokes & Wendy, 2008). Market research tells the entrepreneurs who the customer is, how the customer makes the purchasing decision, what the customer wants from the product, if there are gaps in the market and what competitors are doing (Gwin, 2009). 
Product research concentrates on the product in order to modify existing products and to produce new products. The role of new product New products are important to small clothing entrepreneurs as they give a new look in the market and the profits may be maximised when new products are introduced (Gwin, 2009). They also give competitive advantage enabling small entrepreneurs to produce articles of better quality. Gwin (2009) postulates that with new products on the market, there is need to understand four important considerations: • Product messaging, • Pricing strategies, • Channels of communication and • Promotion plans, Generally all products go through a product life cycle which has stages such as: launching stage, growth stage, maturity and decline. Advertising media benefits and critics Advertising is a controlled and paid non personal marketing communication (Easey, 1995). The aforementioned argued that the objective of advertising is to sell. According to Drucker (2008), advertising is meant to achieve the following: retain loyal customers, retrieve lost customers, recruit new customers and reassure old and new customers that they have made the right decision. Small entrepreneurs are coming up with more effective advertising media. Easey (1995) cites that the choice of advertising media can be influenced by six key factors namely: the type of product, type of message, budget constraints, frequency, cost effectiveness and coverage. Wolonski and Coates (2009) cites that advertising enables: • Consumers to receive information on new products. • An increase in sales and makes mass production possible which may lead to lower prices, promoting competition and hence lower prices for better quality products. • To keep down prices of newspapers, televisions and radio licences. • Reduction of sales fluctuation. • Consumers to make a more informed choice. Notwithstanding, Swinker and Hines (2007) cites that there are several setbacks that are associated with advertising. The aforementioned argued that advertising leads to: higher prices, realisation of monopoly power, high opportunity costs, which can be misleading and costly. SMEs in Harare might not use newspapers since they are expensive even if they have high coverage and everyone can understand the language in the newspapers locally produced. They can use outdoor media like posters, and also has high coverage, so that many customers will be informed. It also has high readership because everyone wants to read public posters. The success of a business depends on how entrepreneurs market, price/cost and advertise their products and various authorities have said these are important. Start-up steps to consider Many individuals who launch clothing lines do so because they are artistic, yet these individuals generally are not entrepreneurs who understand how business operates. Elu, Dradley, & Moser (2003) cites that before launching an apparel line, individuals should consider the following seven start-up steps: • Understand the commitment-The potential business owner must understand the time and money commitment necessary to make the clothing line succeed. It is wise to double their estimates about the time and capital required to start a business (Elu et al., 2003). • Plan the business venture-Any experienced entrepreneur knows that identifying key elements of the clothing line's business strategy is crucial (Li, Gouhui, & Eppler, 2008). 
The business plan must include: a general company description, products overview, operations overview and marketing strategies.
• Organise the business - The most overlooked aspect of creating a business is the fact that the company is an entity all of its own. Formally organising it establishes the business's tax structure and legal name.
• Prepare for Manufacturing - Knowing where to produce the clothing line is an extremely important decision. A small clothing business may choose to manufacture its products, but outsourcing should be considered.
• Establish a Pricing Model - Making a profit off the clothing line is necessary to the business's success. Profit comes from making more revenue than fixed and variable costs combined. Fixed costs are expenses that have already been invested and cannot change (equipment purchases or buying a facility for the business). Variable costs are expenses that can vary from one period to another, for example price differences between manufacturers or the cost of producing different apparel items. To ensure profit, the entrepreneur must establish wholesale and retail rates higher than the expenses.
• Market the Clothing Line - After the business essentials have been developed through the previous steps, entrepreneurs should start to consider the following items in their marketing strategies: selecting a company logo, identifying the target audience and creating an online presence.
• Analysing and Adjusting - This step involves the entrepreneur identifying loopholes in the system and rectifying them accordingly.

Social media marketing strategies

Business owners need to reach out to a bigger audience online. Establishing a presence on the internet, even if there is a physical store, is critical. Being active on social media sites will not only increase brand awareness but will also boost the company's rank on search engines and prove that the business is in sync with the times. The generic strategies that can be employed by the organisation are: getting out of the store, sharing expertise freely, growing a network, customer relationship management and employing traditional marketing strategies (Mulu-Mukutu, Namusonge, & Odhumo, 2004; Ngoze, 2006).

Methodology

According to Burns and Grove (1997), the research design of a study is the end result of a series of decisions made by the researcher concerning how the study will be conducted. It is the blueprint for conducting the study that maximises control over factors that could interfere with the validity of the findings (Flick, 2005). A post-positivism research strategy was adopted, in which questionnaires and open-ended interviews were used to collect primary data. In this study, participants from clothing companies formed the research sample.

Population and sampling

The targeted population for this study was made up of SMEs and big clothing companies in Harare. The accessible population comprises all the cases that conform to the designated criteria and are accessible to the researcher as a pool of participants for a study (Burns & Grove, 2003). According to Drucker (2008), a sample is a subgroup of the target population that the researcher plans to study for the purpose of making generalisations about the target population. A total of 75 participants who were purposively sampled from clothing industries responded to the questionnaires and interviews. The sample comprised only the small clothing entrepreneurs and big clothing manufacturers in urban Harare. This sample was chosen because of its convenient geographical location.
Harare has about 75% of the clothing manufacturing companies in Zimbabwe. Fashion trends are set in Harare, and other clothing manufacturing companies tend to copy from those in Harare.

Research instruments

In this study, questionnaires and interviews were used to collect data. The researcher assumed that two instruments would enable the collection of reliable and useful data while at the same time allowing the triangulation of findings. Open-ended questionnaires were the major data collection instruments used in the study. Chilisa and Preece (2005) indicate that a questionnaire involves the gathering of data from a sampled population using set questions. The respondents managed to freely give their opinions and facts about their clothing industries in Harare. However, the questionnaires presented some problems, especially with some participants who seemed not to understand English. It was, therefore, necessary to translate into the vernacular language during the interviews. Interviews are face-to-face interactions where data is collected verbally, and the researcher and the interviewee communicate in a free environment (Chilisa & Preece, 2005). The researcher prepared an interview guide, which contained the questions to be asked. The interview technique produced qualitative data from the interview guide. Interviews managed to produce quality findings which contributed immensely to the study.

Findings and discussions

The following themes emerged from the collected data: marketing and its implications, marketing problems, product pricing, advertising media, preferred apparel, and customer choice, influences and marketing strategy. The demographic characteristics that were of interest in this study were: gender, age, level of education, experience and the place where clothing designing skills were obtained. This helped to know the type of participants in the study and their knowledge of the area under study. From the interviews and questionnaires distributed, the study found that the participants were evenly distributed across all the gender categories. The clothing industry has to respond to age-group demands and, from the questionnaire, all age groups were represented except for the 51-and-above range. From the questionnaire, it seems fashion cuts across all age groups. However, there were more respondents in the 41-50 years range. It appears they either had the resources to start off their own clothing businesses or were employed before the current harsh economic situation prevailing in Zimbabwe, where a lot of youths are not employed. All the participants who took part in the study had some form of education. Only eight respondents had 'O' level, one (40) had 'A' level and one (25) a university degree. This means that the clothing literacy level among the Harare participants was very high, and that they could understand fashion trends, marketing strategies and their importance. From the questionnaires, most of the Harare respondents had five or more years of experience in the clothing industry. It was most likely that their experience with Harare customers, and with different clothing materials and pricing, had kept them afloat in the clothing business. High schools and higher and tertiary institutions have combined to play an important role in developing entrepreneurial skills that have found their way into big and SME clothing companies in Harare. Technical colleges (30) seem to have made the biggest contribution in terms of producing graduates with clothing fashion design skills.
Not to be underestimated are schools (13), private colleges (15) and universities (17) who have graduates who make an impact in the clothing industry. Marketing and its implications In the clothing sector in the City of Harare, tailors and designers had different views on what marketing is. From questionnaires, they understood marketing from different viewpoints. Small clothing entrepreneurs could not define it in a more comprehensive way in English. However, when interviewed, some of them clearly explained it in (Shona) vernacular language. For the big companies, they had all-encompassing definitions that showed that they understood the concepts very well. An SME clothing entrepreneur participant defined marketing as "Opportunity to show people what you have in different ways" while participant from a large corporation defined the same by saying, "It is an interesting and innovative way of advertising what you sew to potential customers". Challenges in marketing clothing products An appreciation of the problems faced by both big and SME clothing entrepreneurs revealed that big clothing companies have more problems. One challenge faced by big Harare clothing companies was that of changing the way they have been doing business to suit the changing times. From the questionnaires and interviews, the clients currently prefer garments that are sewn in their own specifications and individual tastes. Yet with big clothing companies, they target a big market and use a standard measure to produce some of the clothes they sell. Given this situation, big companies lose a lot of business to SME entrepreneurs who are innovative, flexible and accommodate new designs brought in by customers. They can sew just one item for an individual customer resulting in a unique design. However, one big challenge with SME clothing entrepreneurs is that of resources. Expenses such as rentals which are usually too high, labour costs and cost of materials always hinder the growth of these small clothing entrepreneurs. The costs are prohibitive. At times, they do not have the mechanism to enforce payments after they have sold an item on credit to a customer. Some of the sold items would be paid for after a long period of time, yet they would have incurred some expenses. The other problem identified by both SMEs and big clothing manufacturers was the stiff competition from cheap smuggled second-hand clothing from Mozambique and other neighbouring countries. Zimbabwe boarders Mozambique and a lot of second-hand clothes find their way illegally to the market. Given the cash crunch and the harsh economic situation, customers opt for affordability rather than quality or any brand name. According to previous studies on the consumption of second-hand undergarments in Zimbabwe, Chipambwa, Sithole, and Chisosa (2016) indicated that Zimbabwe, like any other African nation, is facing challenges ranging from obsolete equipment resulting in high operating costs thereby affecting price of the clothing products. Power shortages and high labour costs were also cited. Given that background, there is need to protect the clothing industry from unregulated imports, so that these companies become viable and return to the production levels of early 1980. The SME clothing industries in Harare have become highly competitive and without good marketing strategies, most clothing companies were finding it difficult to survive. 
It becomes important for clothing companies in Harare, both big and SMEs, to be innovative and creative in marketing their clothes.

Pricing of clothing

From the questionnaires and interviews, both SMEs and big clothing manufacturers have similar considerations when pricing their clothing products. In both cases, they were affected by the same micro- and macro-economic environment. As they competed for market share, prices became important. To attract customers through pricing, big companies have sale periods when some clothing items are sold at reduced prices. According to Chipambwa, Sithole and Chisosa (2016), many clothing retail giants in Zimbabwe such as Edgars, Truworths and Topics are facing stiff competition from the sale of second-hand clothing. A study on clothing fashion design and the shrinking customer base in Zimbabwe by Sithole, Mutungwe, Chirimuta and Muzenda (2016) revealed that in Zimbabwe, the period from 2000 to 2008 was the most difficult for the textile and clothing sectors. This period saw a number of clothing companies closing down and most people employed by this sector losing their jobs. The study further reported that imported second-hand clothing flooded the market at a time when wages were falling, forcing consumers to buy the cheapest clothing items available. As for SME clothing entrepreneurs, they had negotiable payment terms and a negotiable pricing regime. The price of an item was basically a negotiated affair. The idea was to keep their customers while, at the same time, the price helped them recoup the cost of material with a small profit margin. In both instances, they considered the production costs and the profit that they made in the process. The prices charged for a clothing item should cover the cost of the fabric, electricity, labour, statutory payments like import tax, and a profit margin that kept them going, especially for the big companies. It emerged that big registered companies had many payment obligations. SME clothing entrepreneurs did not have a list of statutory payments, and prices were mainly determined by the cost of material, rentals and some profit margin.

Marketing media

Both SMEs and big clothing manufacturers use media to advertise clothing, but they use different strategies. Big companies could afford to use both the electronic and print media through advertisements on television and in magazines and newspapers. SME entrepreneurs could not afford this expense. Big clothing companies engage advertising agents, which costs a lot of money. They appealed to a certain class of people in society, especially professionals and the affluent.

Apparel entrepreneurs' merchandise

According to information from interviews and questionnaires, most big companies in Harare had a wide range of apparel for all age groups and for most purposes, like casual wear, uniforms and formal clothing. On the other hand, SME entrepreneurs did not specialise in particular apparel. Their range was motivated by market needs, that is, what people want. Thus, rather than specialising, they had apparel ranging from uniforms, which they concentrated on when demand was high, to African traditional attire for men and women, children's wear, suits and costumes.

Customer choice, influences and marketing

What has enabled SME clothing entrepreneurs to continue to survive is that they construct their clothes according to customer choice. While they use some designs to display their fashion, they are largely influenced by what customers desire and need.
They could modify designs to suit customer specifications. In that way, they marketed themselves mostly to low-income earners who could not afford to go to big designer shops. On the other hand, big clothing companies offered very little customer choice because their focus is on mass production. Customers who entered these big clothing shops looked at what was available, whether or not it suited their choices. They did not adjust to individual needs and choices.

Conclusions

Based on the findings, the study concluded that both big and SME clothing manufacturers in Harare need to be more creative and aggressive in the way they market their merchandise. The observations were that there was cut-throat competition from cheap and illegally imported second-hand clothes from Mozambique and neighbouring countries. These imports have eroded their business competitiveness. Further, the production cost and other service costs were too high, thus making it very difficult for these manufacturers to have a big profit margin from their business. The study also noted that the space to do clothing business in Harare was limited and crowded. This did not help especially the small clothing entrepreneurs to market their business, as they were located in small, dirty places. Most of the ideal spaces were occupied by big companies who can afford high rentals, labour costs, electricity bills and other statutory national payments like taxes, value added tax (VAT) and national social security authority (NSSA) contributions for workers. The market relationship between small and big clothing manufacturing companies has been characterised by competition for customers who, in turn, have very little disposable income. As they market their merchandise, SME entrepreneurs and big clothing manufacturing companies in Harare use different platforms motivated by affordability and accessibility to the customers. In terms of clothing design skills, most of the interviewees and those who responded to questionnaires had minimal qualifications. Most of the respondents had obtained the skills from technical colleges and a few had advanced to university level. Skills upgrading and updating are important to compete favourably in this harsh Zimbabwean economic environment.

Recommendations

From the findings, the study makes the following recommendations:
• Clothing manufacturers need to be creative, innovative and sensitive to customer needs and demands if they are to continue and survive in this line of business. This should lead to increased customer satisfaction and increased business for the clothing manufacturers.
• Government needs to put in strict measures to stop imports of second-hand clothing from neighbouring countries for the clothing industry in Zimbabwe to survive. Police and the army are encouraged to constantly patrol the borders with these neighbouring countries. Those caught breaking the law should be given harsh punishment, so that they do not repeat the same offence. The benefits of banning imports of second-hand clothes are that clothing manufacturers will enjoy an increase in sales, thus creating opportunities for employment creation. However, Chipambwa, Sithole and Chisosa (2016) highlighted the negative effects of banning imports of second-hand clothes as the loss of employment opportunities, an increase in criminal activities and rising clothing prices, which disadvantage the consumer.
• Harare City Council should avail more space for people to operate from.
It should be understood that in the Zimbabwean environment, where the unemployment rate is high, clothing manufacturing and selling helps in employment creation.
• Government should look at the service costs of doing business with a view to lowering them, to ensure the affordability and viability of the business. This should benefit the majority of consumers in the low-income bracket.
• SMEs and big clothing manufacturing enterprises should be motivated to help their employees upgrade and update their skills for competitiveness. An increase in employee skills should also help employees who wish to start their own business enterprises.
• Both SMEs and big clothing entrepreneurs need to intensify their marketing strategies to increase sales. This is supported by Kotler and Armstrong (2010), who argue that marketing is a very critical component, particularly given the harsh economic environment in Zimbabwe.
Impact of early versus delayed umbilical cord clamping on term neonates' haemoglobin levels: a randomized controlled trial

Objective: To compare the effects of early and delayed cord clamping on the haemoglobin levels of neonates delivered at term.
Methods: This randomized controlled trial enrolled pregnant women during the second stage of labour. They were randomized into either the early cord clamping (ECC) group or the delayed cord clamping (DCC) group in the ratio of 1:1. Following delivery of the baby, the umbilical cords of participants in the ECC group were clamped within 30 s of delivery of the neonate, while those of participants in the DCC group were clamped 2 min after the delivery of the neonate. The primary outcome measure was the effect of ECC and DCC on the haemoglobin levels of neonates delivered at term.
Results: A total of 270 pregnant women were enrolled in the study. Their baseline sociodemographic and clinical characteristics were similar in both groups. There was no significant difference in the mean haemoglobin level between the ECC and DCC groups at birth. The mean haemoglobin level of the neonates at 48 h postpartum was significantly higher in the DCC group than in the ECC group.
Conclusion: DCC at birth was associated with a significant increase in neonatal haemoglobin levels at 48 h postpartum when compared with ECC.
Trial Registration: The trial was registered at the Pan African Clinical Trial Registry with approval number PACTR202206735622089.

Introduction
There is a wide variation in the definitions of early and delayed cord clamping in terms of the timing of the clamping.2-6 Although there have been some randomized controlled trials comparing the benefits of delayed versus early cord clamping, an ideal time has not been mapped out.7 Previously, early cord clamping was incorporated as an important component of the active management of the third stage of labour, when it was thought to prevent primary postpartum haemorrhage.1 However, recent studies have revealed that it is of no significant benefit in the prevention of postpartum haemorrhage.6,8,9-12 Nevertheless, others are of the opinion that clamping of the umbilical cord should be delayed for more than 1 minute after delivery. The timing of umbilical cord clamping determines the amount of blood transfused into the neonatal cardiovascular system from the placenta through the umbilical cord.13-15 Reduced haemoglobin concentrations associated with iron deficiency anaemia are common problems encountered by children in developing countries because nutritional deficiencies, hookworm infestations, malaria infections and repeat pregnancies remain common in these regions.16 Neonates also run the risk of anaemia at birth since some of the mothers have anaemia during pregnancy.17 Anaemia in infancy is a public health issue in developing countries and can lead to poor neurological development.3,17,18 Worldwide, one-fourth of all preschool children are estimated to be affected by reduced haemoglobin concentration and iron deficiency anaemia, with the attendant complications of altered affective response, impaired motor development, and cognitive and behavioural deficits.10,19,20 Delayed cord clamping allows time for the transfusion of foetal blood from the placenta to the neonate at the time of birth.18 The newborn receives up to a 30% increase in blood volume and approximately a 60% increase in red blood cells through placental transfusion at birth.12
This placental transfusion is a physiological process and accounts for between 19 and 40 ml/kg of birth weight on average, equivalent to as much as 2% of the newborn's final birth weight.12,21 It also protects very low-birthweight infants from intraventricular haemorrhage, the late onset of sepsis and motor disability.12,18,21 On the other hand, delayed cord clamping could lead to neonatal polycythaemia, an increased rate of hyperbilirubinaemia and the need for phototherapy.4 There appears not to be any definite conclusion on whether to practice early or delayed cord clamping in both developed and developing countries. In Europe and other developed countries, there is also a great variation in what is practised.12,14,15 Nevertheless, early cord clamping is apparently the dominant practice in many nations around the world. Despite the established benefits of delayed cord clamping in improving iron status and preventing anaemia in neonates and infants, there is still a palpable disconnect between this knowledge and the practice thereof, even in developing countries where iron-deficiency anaemia is very prevalent. In our facility at Enugu State University of Science and Technology (ESUT) Teaching Hospital, Parklane, in southeast Nigeria, we tend to use early cord clamping, just like many other hospitals in our environment. However, there is a great need for us to educate our clients on the need for them to be involved in the decision-making concerning the timing of the cord clamping to be adopted for them in the third stage of labour. This formed the basis for this current study, which in addition sought to contribute to establishing an evidence-based recommendation for policy change on the appropriate timing of cord clamping, especially in developing countries like ours. This study aimed to determine the effect of early versus delayed cord clamping on haemoglobin levels of neonates delivered in this environment and to verify the benefits and risks associated with the two cord clamping methods.

Study design, setting and population
This hospital-based, randomized controlled trial was conducted at the labour ward/delivery unit of ESUT Teaching Hospital, Parklane, Enugu, Nigeria between 22 June 2022 and 22 December 2022. The study population comprised all the term (37-42 weeks) newborn babies delivered in the hospital within the study period. The inclusion criteria were as follows: (i) healthy consenting pregnant women with normal singleton term pregnancies; (ii) healthy neonates without any complications delivered via spontaneous vaginal delivery. The exclusion criteria were as follows: (i) women with multiple gestations; (ii) preterm pregnancies; (iii) post-term pregnancies; (iv) pregnancies that were complicated by medical conditions such as diabetes mellitus, hypertension, cardiac diseases, sickle cell disease, chorioamnionitis, antepartum and postpartum haemorrhage, maternal haemoglobin level <10.0 g/dl at 36 weeks gestational age for booked mothers and unbooked pregnant women with a haemoglobin level <10.0 g/dl on admission to the labour ward; (v) neonates with congenital diseases; (vi) non-vigorous neonates requiring any form of resuscitation; (vii) neonates who became sick within the first 48 h. This study was approved by the ethical committee of ESUT Teaching Hospital, Parklane, Enugu, Nigeria on 10 June 2022 (reference no. ESUTHP/C-MAC/RA/034/141). Written informed consent was obtained from all participants before recruitment into the study. The study adhered to the CONSORT guidelines.22
Sample size estimation
The minimum sample size N for each arm of this study was determined using a standard formula for comparing two means, where α = probability of making a type I error; β = probability of making a type II error; Zα = level of significance of the type I error probability, determined from a statistical table based on the value of the level of significance α, which for this study was set at 0.05, giving a value of 1.96 for a two-tailed test (standard normal deviate) at the 95% confidence interval (CI); Zβ = the standard normal deviate corresponding to the stated power of the study to detect a significant difference, which for a power of 90% gives Zβ = 1.28; σ = 7, the standard deviation of the haemoglobin level after cord clamping from previous studies;12 u1 − u2 = the difference between the two groups that the study hopes to detect (it was anticipated that early cord clamping would reduce the haemoglobin level by 0.88 g/dl); and f = attrition rate, which was set at 10%. Therefore, the minimum sample size for the study was 133 participants per arm.

Randomization technique and group allocation sequence
Randomization and allocation concealment were applied for the study using a computer-based random sequence generator (http://www.randomization.com) created by a statistician (who was not part of the study team) in a 1:1 ratio using randomization blocks of 4 (a minimal illustration of such a permuted-block sequence is sketched after the pilot study description below). Sealed, non-transparent brown envelopes were marked serially from 1 to 270; each numbered envelope contained a white piece of paper labelled either ECC for the intervention group that received early cord clamping or DCC for the control group that received delayed cord clamping. The envelopes were handed over to a trained nurse midwife who was completely unaware of the group the participant belonged to. The trained midwife gave the envelopes to the women to pick when they were admitted to the labour ward. The researcher, the midwives and the participants in labour were not blinded. The women were counselled on the procedure during the antenatal visits and again when they entered the first stage of labour, with informed written consent obtained from them. The paediatrician in the labour ward who examined the baby and excluded anomalies, the researcher who took the blood sample to the laboratory, the laboratory scientist who analysed the blood sample for haemoglobin level, and the trained staff for the second-day follow-up test were all always available and were blinded.

Pilot study
A pilot study was carried out on 10% of the total population of the study, which was a small group of babies who were not part of the main study. This was done to ascertain the feasibility of this research. The feasibility study was done in a very big mission hospital (Annunciation Hospital, Emene, Enugu, Nigeria) and the result showed that DCC was more beneficial to the neonates 48 h after delivery.
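The following is a minimal, hypothetical sketch of such a permuted-block allocation scheme; it is not the actual generator used by the authors, who relied on randomization.com. Note that, since 270 is not a multiple of 4, the truncated final block can leave the two arms differing by up to two participants.

```python
# Illustrative sketch of 1:1 block randomization (blocks of 4) for 270 envelopes.
# This is only a demonstration of the principle, not the study's allocation list.
import random

def block_randomization(n_participants, seed=0):
    rng = random.Random(seed)
    sequence = []
    block = ["ECC", "ECC", "DCC", "DCC"]          # 1:1 ratio within each block of 4
    while len(sequence) < n_participants:
        rng.shuffle(block)                        # permute the block
        sequence.extend(block)
    return sequence[:n_participants]              # truncate the final block if needed

allocation = block_randomization(270)
print(allocation[:8], "...")
print("ECC:", allocation.count("ECC"), "DCC:", allocation.count("DCC"))
```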
Obstetric and blood sampling procedures
Having counselled and obtained informed consent from the participants, during the second stage of labour the midwife unfolded the paper and revealed whether the newborn would undergo early cord clamping (double clamping of the umbilical cord within 30 s of delivery of the neonate) or delayed cord clamping (double clamping of the umbilical cord 2 min after the delivery of the neonate). The researcher took the delivery and kept the neonate at 15 cm below the maternal vulva for 30 s (early) or 2 min (delayed), according to the intervention in the envelope picked by the pregnant woman. The obstetric care giver clamped the cord and collected a cord blood sample. The umbilical cord was cut in between the two clamps. The neonate was then placed on the maternal abdomen, which was covered with cloth, which provided warmth. The trained midwife did the timing with a stopwatch, which was set according to the intervention picked by the mother, either 30 s or 2 min. The cord was double clamped and 1.5 ml of blood was collected from the point between the clamps into an ethylenediaminetetraacetic acid (EDTA; 1.8 mg/ml) tube (sample A) (BD Vacutainer®; BD, Plymouth, UK). The blood sample for haemoglobin was stored at 4 °C and analysed within 24 h. A single dose of 10 IU of oxytocin (Syntocinon®; Novartis, Basel, Switzerland) was administered intravenously immediately after cord clamping. Then, after blood sample collection, the cord was cut with cord scissors. Controlled cord traction was undertaken and, after the delivery of the neonate, routine care was provided to the mother and the neonate. The neonate was handed over to the neonatologist and finally to the mother. Then, 48 h after the delivery of the neonate, an intravenous blood sample (1.5 ml, sample B) was collected from the baby's vein into an EDTA tube (1.8 mg/ml; BD Vacutainer®) after gentle application of a tourniquet and topical anaesthetic cream on the puncture site. Samples were also collected into EDTA tubes (1.8 mg/ml; BD Vacutainer®) from the mothers in the active phase of labour from their cubital fossa (sample C), and another sample was collected 48 h after delivery. The blood samples were stored at 4 °C and analysed within 24 h. The blood samples were sent to the haematology laboratory for analysis using a Sysmex XE-2100™ automated haematology system (Sysmex, Kobe, Japan).

Study definitions
The following definitions were used throughout the study: (i) the normal haemoglobin level in neonates was defined as 14-24 g/dl; (ii) anaemia in a neonate at 48 h was defined as Hb < 14 g/dl; (iii) ECC was the clamping of the umbilical cord within 30 s of the delivery of the neonate; (iv) DCC was the clamping of the umbilical cord 2 min after the delivery of the newborn baby.

Primary outcome measure
The primary outcome measure was the effect of ECC and DCC on the haemoglobin levels of neonates delivered at term.

Statistical analyses
All statistical analyses were performed using IBM SPSS Statistics for Windows, Version 21.0 (IBM Corp., Armonk, NY, USA). The sociodemographic variables were used to categorize the data. Data were subjected to comparative statistical evaluation, which yielded frequencies (%) and mean ± SD. Comparisons between groups used the χ²-test for categorical variables and Student's t-test for continuous variables. A P-value <0.05 at one degree of freedom (df = 1) was considered statistically significant.
Results
This randomized controlled trial included 270 neonates and 270 mothers; 135 had ECC and 135 had DCC. Figure 1 shows the flow diagram of the study participants. There were no significant differences in the sociodemographic characteristics and clinical profile of the mothers between the ECC and DCC groups (Table 1). The gestational age at presentation ranged from 37 to 40 weeks. There were no significant differences in the neonatal characteristics of the newborns between the ECC and DCC groups (Table 2). Table 3 shows the mean difference in haemoglobin levels of the neonates at birth between the ECC and DCC groups. There was no significant difference in the mean haemoglobin at birth between the two groups. However, there were significant differences in the mean haemoglobin at 48 h postpartum between the two groups (P < 0.01) and in the change in haemoglobin between birth and 48 h postpartum within each group (P < 0.01 for both comparisons). There was no association between the maternal sociodemographic and clinical profile and the neonatal haemoglobin level at 48 h in the ECC and DCC groups (data not shown). A total of 28 of 135 (20.7%) neonates in the DCC group had low neonatal Hb (<14 g/dl) compared with 50 of 135 (37.0%) neonates in the ECC group at 48 h postpartum (P < 0.01).
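As a quick, hedged sanity check (ours, not part of the original SPSS analysis), the reported 2×2 comparison of low neonatal Hb at 48 h can be reproduced with a χ²-test on the published counts; the resulting P-value is indeed below 0.01.

```python
# Re-check of the reported 2x2 comparison: 28/135 neonates with Hb < 14 g/dl
# in the DCC group versus 50/135 in the ECC group at 48 h postpartum.
from scipy.stats import chi2_contingency

table = [[28, 135 - 28],   # DCC: low Hb, normal Hb
         [50, 135 - 50]]   # ECC: low Hb, normal Hb

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")  # p < 0.01, consistent with the paper
```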
Discussion
The principal findings from this current study showed that there was no significant difference in the mean haemoglobin level between the ECC group and the DCC group at birth. However, the mean haemoglobin level of the neonates at 48 h postpartum was significantly higher in the DCC group than in the ECC group. In addition, the change in haemoglobin level from birth to 48 h postpartum in each group was significant. There was no significant association between maternal and neonatal sociodemographic and clinical characteristics and neonatal haemoglobin level at 48 h postpartum in either group. This current study demonstrated that the mean haemoglobin level of the neonates at 48 h postpartum was significantly higher in the DCC group than in the ECC group, which suggests that DCC significantly improved the haemoglobin levels of neonates at 48 h postpartum.25,26 However, there was no significant difference in the mean haemoglobin level between the ECC group and the DCC group at birth. This was consistent with a previous similar randomized controlled trial.27 The reason for this could be that the effect of the extra volume of whole blood transfused from the placenta to the neonate might not yet have been reflected in the DCC group at birth, when the haemoglobin levels were measured. Usually, after receiving red blood cells, haematocrit equilibration takes place gradually, leading to a stable packed cell volume.28 Furthermore, the mean haemoglobin level at birth was higher than that at 48 h postpartum for the ECC group, while the mean haemoglobin level at birth was lower than that at 48 h in the DCC group. These findings suggest that those who underwent ECC had a relative drop in their haemoglobin level over the first 48 h postpartum. This drop could be explained by the fact that the neonates who underwent ECC did not benefit from placental transfusion following the initiation of respiration at birth, unlike those who underwent DCC. Hence, it was noted at 48 h postpartum that the majority of the babies who had haemoglobin concentrations <14 g/dl belonged to the ECC group (50 of 78 neonates; 64.1%), while the majority of those who had haemoglobin concentrations >14 g/dl belonged to the DCC group (107 of 192 neonates; 55.7%). In contrast to the findings of this current study, a study undertaken in Pakistan observed that there was a significant difference in haemoglobin level even at birth between neonates who had ECC and those who had DCC.25 It is not clear why there is a difference between the current study and the previous study, but it might be due to a difference in the sample sizes of the two studies; there were 135 in each group in the current study compared with 100 in each group in the other study.25 Furthermore, the duration of ECC in the other study was not defined (not specific) and their DCC was after the cessation of cord pulsation,25 which might have taken a longer duration, allowing a greater volume of blood to be transfused through the fetoplacental compartment to the neonates. In this current study, ECC was defined as 30 s and DCC as 2 min. Variations in the study locations and populations could also have contributed to the difference in the outcomes of the two studies. In the current study, there were no significant associations between maternal sociodemographic/clinical profile and neonatal haemoglobin at 48 h postpartum in either the ECC or DCC group. This finding is consistent with a previous study, which also revealed that there was no significant difference in maternal or neonatal outcomes or characteristics between those randomized to ECC or DCC.6 This is an interesting finding in the sense that it demonstrated that the use of either ECC or DCC was not significantly associated with maternal or neonatal characteristics, including the use of uterotonics and the prevention or treatment of primary postpartum haemorrhage.6,24 In view of all these findings, it suffices to say that the practice of DCC is safe enough for both the neonates and their mothers.6,24 This current study corroborated many other studies, from both developing and developed countries, in elucidating the fact that DCC should be the preferred cord clamping method since it is more beneficial to the neonates and, at the same time, harmless to the mothers. DCC has saved neonates and infants from anaemia and its consequences, thereby conserving the limited resources of families in developing countries, with which other necessary needs of the children can be catered for. Yet, it is not routinely practiced in the environment where it is most needed. This matter should be viewed with the utmost seriousness. Hence, the practice of DCC should be encouraged and adopted in our centre (ESUT Teaching Hospital) and other hospitals in developing countries.
This current study had several limitations. First, the follow-up could not be extended to 3- or 6-monthly measurements of the haemoglobin levels of the infants. Secondly, it was also not possible to determine their neurological and motor development postpartum. Thirdly, it was not possible to assess the haematocrit and ferritin levels, the latter of which could have given insight into the iron stores of the neonates. These could be explored in further robust studies, possibly in a multicentre trial in our environment. It should be noted that the absence of differences in the neonatal and maternal characteristics of the participants between the two groups, especially in terms of mean gestational age and mean maternal age, suggests that the participants were well selected and hence randomized to either the ECC or DCC group under the same conditions. These conditions might have eliminated bias that could have adversely affected the results of the study. This is desirable and should be regarded as one of the strengths of the study. Furthermore, the medical laboratory scientist who analysed the samples was blinded to the treatment groups.
In conclusion, DCC following delivery is associated with a higher neonatal haemoglobin level at 48 h postpartum than ECC. The practice of DCC is beneficial in preventing anaemia among neonates and infants; hence, it should be encouraged, especially in resource-poor settings like ours where maternal and neonatal anaemia are very common. In view of the benefits and safety of DCC, skilled birth attendants may adopt and consolidate this practice all over the world, especially in developing countries. Health talks to pregnant women should include the need for mothers to embrace this idea owing to its advantages in reducing neonatal anaemia and its attendant morbidity and mortality, as well as reducing the need for blood transfusion and conserving resources. Counselling on DCC practice may be fitted into the programme of education and the birth planning process for pregnant women. Advocacy for DCC should be raised and implemented by policymakers at various levels of governance.

Figure 1. Consort flow diagram showing the enrolment, randomization, allocation and analysis of newborns and their mothers in this hospital-based, randomized controlled trial that aimed to determine the effect of early versus delayed cord clamping on haemoglobin levels of neonates.
Table 1. Sociodemographic and clinical characteristics of the mothers (n = 270) enrolled in a hospital-based, randomized controlled trial that aimed to determine the effect of early cord clamping (ECC) versus delayed cord clamping (DCC) on haemoglobin levels of neonates.
Table 2. Neonatal characteristics of the neonates (n = 270) enrolled in a hospital-based, randomized controlled trial that aimed to determine the effect of early cord clamping (ECC) versus delayed cord clamping (DCC) on haemoglobin levels of neonates.
Table 3. Mean difference of the haemoglobin (Hb) level of the neonates (n = 270) enrolled in a hospital-based, randomized controlled trial that aimed to determine the effect of early cord clamping (ECC) versus delayed cord clamping (DCC) on haemoglobin levels of neonates. a Student's t-test was used to compare continuous variables. SD, standard deviation; MD, mean difference; CI, confidence interval; SE, standard error.
Molecular Identification of Candida dubliniensis Isolated from Oral Lesions of HIV-Positive and HIV-Negative Patients in São Paulo, Brazil

SUMMARY
Candida dubliniensis is a new, recently described species of yeast. This emerging oral pathogen shares many phenotypic and biochemical characteristics with C. albicans, making it hard to differentiate between them, although they are genotypically distinct. In this study, PCR (Polymerase Chain Reaction) was used to investigate the presence of C. dubliniensis in samples in a culture collection, which had been isolated from HIV-positive and HIV-negative patients with oral erythematous candidiasis. From a total of 37 samples previously identified as C. albicans by the classical method, two samples of C. dubliniensis (5.4%) were found through the use of PCR. This study underscores the presence of C. dubliniensis, whose geographical and epidemiological distribution should be more fully investigated.

INTRODUCTION
An increased incidence of fungal infections has been well documented throughout the last decade. The most important factor contributing to this phenomenon has been the increased number of immunocompromised individuals. As a result, many species previously unassociated with human diseases have become important pathogens, some examples being Penicillium marneffei, Emmonsia pasteuriana and Candida dubliniensis 6,30 . Candida dubliniensis was first identified as a new species in 1995 in Dublin, Ireland 39 . Since then, infections by this yeast have been widely reported in a large number of HIV-positive and AIDS patients 23 , being isolated mainly from the oral cavity 9 . Moreover, C. dubliniensis has been implicated as a causative agent of oral candidoses in HIV-negative individuals, both in healthy individuals and in diabetics 42,43 . This species shares many phenotypic and biochemical characteristics with C. albicans 39 , making it difficult to differentiate between the two species, since C. dubliniensis expresses serotype A of C. albicans and is able to form germ tubes and abundant numbers of chlamydoconidia 3,4,39,41 . Moreover, C. dubliniensis is characterized by a high resistance to fluconazole, and susceptible isolates are able to develop resistance to this drug in vitro 26,40 . This high degree of similarity between the two species has contributed to the identification of some isolates of C. dubliniensis as C. albicans 31 . This species has most likely been present in the community for a long time, although identified as C. albicans 36 . Therefore, various phenotypic methods for the identification of C. dubliniensis and its differentiation from C. albicans have been reported. These tests include: the formation of chlamydospores 39 ; the pattern of carbohydrate assimilation 33 ; β-D-glucosidase activity 35 ; the color of colonies after seeding on different media such as CHROMagar Candida, Staib agar 37 , Niger agar 17 , Tobacco agar 13 and others; as well as growth on Sabouraud agar at temperatures between 42 and 45 °C 31 . However, individual variations among the strains have been reported for these phenotypic characteristics 3,39 , raising the need to study its genotypic characteristics. Analyses of the DNA of different samples of C. dubliniensis have demonstrated that this species presents conserved sequences of DNA elements, these being important in the identification of isolates for a differential diagnosis of candidiasis between C. dubliniensis and C. albicans 11 .
Currently, there exists a wide variety of molecular techniques able to identify C. dubliniensis, which include: DNA tests using analyses with restriction endonucleases, methods based on pulsed-field electrophoresis, DNA tests using probes, as well as PCR-based methods 16,19,28,42 . The definitive identification of C. dubliniensis is still a problem in routine laboratories; it is therefore necessary to know the phenotypic and genotypic characteristics of the isolates to obtain a final characterization. Studies on the incidence of this yeast, carried out by reference laboratories, are necessary for a better understanding of the epidemiology of this new species, especially in South America, where its frequency is not well known 5 . In this study, PCR (polymerase chain reaction) was used to identify the presence of C. dubliniensis in samples isolated from HIV-positive and HIV-negative patients with oral erythematous candidiasis, in the city of São Paulo, Brazil.

MATERIAL AND METHODS
Yeast isolates: This study involved 39 isolates of yeasts from HIV-positive and HIV-negative patients with erythematous oral candidiasis, which had originally been identified by the classical method 15 . All the patients had been tested for HIV by the ELISA method. These samples had been kept, for nine months, in the culture collection of the Laboratory of Pathogenic Yeasts, Department of Microbiology, Biomedical Science Institute, University of São Paulo (ICB II/USP). In parallel, analyses were made of standard samples of C. albicans (LSHT 330) and C. dubliniensis (ATCC 777).
DNA analysis: The DNA analyses were carried out on the isolated samples using PCR, according to MAGEE et al. 18 , PFALLER et al. 29 and SMITH et al. 38 .
DNA extraction: Each sample was inoculated into 5 mL of YPD medium and incubated for 18 hours at 37 °C. After incubation, 1.5 mL of the culture was transferred to an Eppendorf tube and centrifuged at 10,000 rpm for five minutes, under refrigeration. The supernatant was discarded, the pellet was resuspended by vortexing after addition of 1 mL sorbitol (Merck), and the resulting suspension was centrifuged at 10,000 rpm for two minutes. The pellet was resuspended in 1 mL of lyticase buffer (50,000 U - Sigma) with 350 µg/mL of that enzyme. The Eppendorf tube was then incubated at 30 °C in a humidified incubator for 30 minutes and subsequently centrifuged at 10,000 rpm for one minute. The supernatant was discarded and 0.5 mL of the digestion solution was added to the tube containing the pellet. The tube was then incubated at 70 °C for 30 minutes in a humidified incubator. The Eppendorf tube was then left at room temperature for 10 minutes, followed by the addition of 50 µL of 5 M potassium acetate solution. After 60 minutes at 0 °C, the Eppendorf tube was centrifuged at 10,000 rpm for 15 minutes. The supernatant was transferred to another Eppendorf tube containing 1 mL of 95% ethanol (Merck) for the precipitation of DNA. Then the tube was centrifuged at 10,000 rpm for five minutes and the sediment was washed twice with 0.5 mL of 70% ethanol (Merck). The precipitated and washed DNA was centrifuged at 10,000 rpm for five minutes. Then, the sediment was resuspended in 100 µL of TE (Tris + EDTA).
Estimation of the quantity of DNA: The agarose gel (Sigma) was prepared at 1% in TBE buffer and placed on an acrylic plate with an 8-toothed comb, and covered by TBE running buffer on a horizontal electrophoresis plate. Each well was loaded with 10 µL of a mixture of the extracted DNA with bromophenol blue stain (V/V). The run was carried out at 90 V for 30 minutes, until the material reached the opposite end of the gel. The gel was stained with ethidium bromide at a concentration of 1 µL of a 20 mg/mL stock solution in 100 µL of distilled water, for 15 minutes. The total-DNA band was observed in a UV transilluminator.
PCR reaction: According to MANNARELLI & KURTZMAN, 1998 19 . The primers were obtained from Life Technologies do Brasil. Two pairs of primers were used: one for C. dubliniensis (sense: CDU2 - 5'AGT TAC TCT TTC GGG GGT GGC CT 3'; anti-sense: NL4CAL - 5' AAG ATC ATT ATG CCA ACA TCC TAG GTA AA 3') and another for C. albicans (sense: CAL5 - 5' TGT TGC TCT CTC GGG GGC GGC CG 3'; anti-sense: NL4CAL - 5' AAG ATC ATT ATG CCA ACA TCC TAG GTA AA 3'). The mix was prepared in an Eppendorf tube with 10x MgCl2 (2 mM), 0.2 mM of dNTP, 0.4 µM of each primer, 1 U of Taq and 2 µL of the sample, resulting in a final volume of 50.0 µL. Amplification was carried out in a PTC-200 thermal cycler (Peltier Thermal Cycler, MJ Research) as follows: 98 °C for three minutes, 95 °C for one minute, 52 °C for 1.5 minutes, 72 °C for 10 minutes, for a total of 35 cycles. After the DNA of the sample was amplified, it was submitted to electrophoresis on a horizontal plate (Horizon 58 - Life Technologies) in 1% agarose gel in TBE buffer at 100 V for 35 minutes. The gel was then stained with ethidium bromide (Sigma) and the DNA bands were observed in a UV transilluminator fitted with a video camera linked to a computer (Multiimage Light Cabinet by Alpha Innotech Corporation) and photographed.

RESULTS
The results obtained in the amplification of the fragments using the primers CAL5 and NL4CAL (C. albicans), and CDU2 and NL4CAL (C. dubliniensis) are shown in Table 1.

DISCUSSION
C. dubliniensis is a yeast species recently described as an opportunistic pathogen associated with oral candidiasis, particularly in HIV-positive individuals and AIDS patients 41 . This species is phenotypically similar to C. albicans, which has resulted in problems in the identification of clinical samples 8 , as well as in the reidentification of isolates kept in culture collections and initially identified as C. albicans. In a retrospective study carried out on a collection of yeast, COLEMAN et al. 3 demonstrated that 2% of the isolates originally identified as C. albicans were actually C. dubliniensis. ODDS et al. 27 reidentified 2589 cultures in a culture collection initially identified as C. albicans, finding that 2.1% were actually C. dubliniensis. JABRA-RIZK et al. 10 found that 1.2% of 1251 isolates originally identified as C. albicans were actually C. dubliniensis. COLOMBO et al. 5 investigated the presence of C. dubliniensis among 548 isolates kept in a collection and previously identified as C. albicans, finding that 11 of the isolates were actually C. dubliniensis. In the present study, it was found that two of 37 samples previously identified as C. albicans were actually C. dubliniensis, corresponding to 5.4%. In Brazil, C. dubliniensis was isolated for the first time in two AIDS patients in the state of São Paulo. One patient was a 3-year-old child with oropharyngeal candidiasis 34 and the other was an adult 24 .
ALVES et al. 1 reported the first three cases of C. dubliniensis isolation from AIDS patients in the state of Rio Grande do Sul. According to MARIANO et al. 20 , in South America the prevalence of C. dubliniensis isolates appears to be lower than that encountered in North America. The incidence of C. dubliniensis in HIV-positive and AIDS patients observed in Brazil is lower than that encountered in Europe and the United States. MILAN et al. 25 carried out the first multicenter prospective study of the oral incidence of C. dubliniensis in Brazilian HIV-positive and AIDS patients. Their study was conducted over a period of two years, at six medical centers around Brazil that provided treatment for HIV-positive patients. Of a total of 155 samples isolated, 2.8% were identified as C. dubliniensis. In a study done in Ireland, it was found that the incidence of C. dubliniensis ranged from 18 to 32% in HIV-infected individuals 3,42 , while studies conducted in the United States have reported rates ranging from 11.1 to 17.5% 14,21 . The isolation of C. dubliniensis in HIV-negative patients has also been reported 2,7,12,22,32,43 . Recent studies have shown that this species is more prevalent in HIV-positive individuals, and it is encountered as a commensal organism that can cause various forms of candidiasis 12 . We point out that, in the present study, of the two samples identified as C. dubliniensis, one was isolated from an HIV-negative patient who only suffered from erythematous candidiasis. In this study, the genotypic differentiation between C. albicans and C. dubliniensis was carried out by means of PCR, which proved to be a useful and practical method yielding an accurate identification, thereby showing that PCR can be an effective tool for elucidating the epidemiology of C. dubliniensis and for establishing its clinical significance.

Fig. 1 - Electrophoretic analysis of the products obtained through the amplification of the genomic DNA of isolate 15 H and of the standards for Candida albicans (1) and Candida dubliniensis (2) using the primers CAL5 and NL4CAL, and CDU2 and NL4CAL.
Table 1 - Results for the samples of yeast used in the PCR reaction with the primers CAL5 and NL4CAL (C. albicans), and CDU2 and NL4CAL (C. dubliniensis). *Samples 1 and 2 = standards for C. albicans and C. dubliniensis; ** Kurtzman & Fell 15 .
Extraordinary magnetometry - a review on extraordinary magnetoresistance

Extraordinary magnetoresistance (EMR) is a geometric magnetoresistance effect occurring in hybrid devices consisting of a high-mobility material joined by a metal. The change in resistance can exceed 10^7 % at room temperature when a magnetic field of 5 T is applied. Magnetic field sensors based on EMR hold the potential for measuring weak magnetic fields with an unprecedented sensitivity, yet to date this potential is largely unmet. In this work, we provide an extensive review of the current state-of-the-art in EMR sensors with a focus on the hybrid device geometries, the constituent material properties and applications of EMR. We present a direct comparison of the best devices in the literature across magnetoresistance, sensitivity and noise equivalent field for different materials and geometric designs. The compilation of studies collected in this review illustrates the extremely rich possibilities for tuning the magnetoresistive behavior by varying the device geometry and material properties. In addition, we aim to improve the understanding of the EMR effect and its interplay with geometry and material properties. Finally, we discuss recent trends in the field and future perspectives for EMR.

Introduction
Magnetic fields are a fundamental part of nature and play a role in numerous physical processes and key technologies, ranging from the simple compass to complex magnetic resonance imaging systems or tokamak fusion chambers. It is of essential scientific interest to detect and measure magnetic fields with as high an accuracy as possible, both to better understand nature and to optimize technologies. Magnetoresistive sensors have been used to detect magnetic fields for decades through magnetic field-induced changes to the sensor resistance. The development of magnetic hard drives in particular has been a key area of application for this technology, as the detection of small magnetic fields using the read head is crucial. 29,30 Numerous other areas ranging from the detection of biological brain activity to space applications have also been explored. 31 A highlight for magnetoresistive sensors was the awarding of the 2007 Nobel prize in physics to Albert Fert and Peter Grünberg for the discovery of giant magnetoresistance (GMR), but several other classes of magnetoresistance are actively researched, including colossal magnetoresistance, tunnel magnetoresistance, and extreme magnetoresistive materials. Magnetoresistance can have both an intrinsic and a geometric contribution. The intrinsic contribution stems from changes to material properties such as the magnetization, electronic band structure, and mobility, which are induced by a magnetic field. In contrast, the geometric contribution depends on the design of the device, including the placement of contacts and the geometries of the constituent materials. By using geometric magnetoresistance and combining several materials into a hybrid device, it is possible to geometrically enhance the magnetoresistance by orders of magnitude; such devices are known as extraordinary magnetoresistance (EMR) sensors. EMR devices hold the potential for turning the large magnetoresistance into sensitive magnetometers that can measure weak magnetic fields at room temperature using simple measurements of the electrical device resistance.
EMR sensors further combine sensing capabilities in both two- and four-terminal measurement modes at room temperature with a device design that does not contain any magnetic elements, combining advantages of conventional magnetoresistive and Hall sensors. Despite some progress towards weak-field magnetometry using EMR sensors, the potential is to date largely unmet. Besides the decline in commercial prospects as read heads for ultrahigh-density magnetic storage became obsolete with the emergence of solid-state drives, a few key challenges in realizing EMR sensors also exist, including complex device fabrication and the need for performing optimization within an overwhelmingly large parameter space. In this work, we review and discuss this field with a focus on elucidating the impact that the hybrid device geometry and constituent materials have on determining the magnetoresistive performance, as well as highlighting the key findings for use in the development of the next generation of EMR sensors. The review serves as a more comprehensive and up-to-date account of the field compared to the previous review on EMR published in 20XX (ref). The main comparison of the experimentally realized EMR devices presented in the literature is given in Table 1, with Figure XX presenting the state-of-the-art devices with respect to magnetoresistance, sensitivity, and noise equivalent field. Each of these is discussed substantially during the course of this review. The review is structured as follows: First, the physics of the EMR effect is discussed, a historical perspective on the discovery of the EMR effect is provided and key metrics for describing EMR are given. Following this, EMR devices in the literature are reviewed in three major sections with respect to their geometry, material parameters, and application in magnetometry, respectively. In the section on geometry, we describe both the main device geometries used in EMR and aspects of 3-dimensional inclusions and contact placement. With regard to material parameters, besides the carrier density and mobility of the constituent materials, effects such as contact resistance and material-specific phenomena are discussed. Finally, with regard to magnetometers, fabrication techniques as well as the concept of noise equivalent fields are discussed and reviewed. Ending the review, we conclude on the present state-of-the-art, trends in the field, and directions of future research.

Fundamentals of Extraordinary Magnetoresistance
Magnetoresistance is a property of some material systems where the electrical resistance changes in response to a magnetic field. The types of phenomena which produce a magnetoresistive effect can be separated into two general categories: intrinsic magnetoresistance, which originates from the properties of a material, and geometric magnetoresistance, which results from the interaction between charge carriers and the device geometry. The extraordinary magnetoresistance effect is a special case of geometric magnetoresistance which can be observed in certain composite materials where the matrix material has a high carrier mobility and the second phase is much more electrically conductive than the matrix. EMR was first described by Solin et al. in 2000, where devices were presented that showed very large changes in the measured resistance induced by a magnetic field; up to a 1,500,000% increase in the resistance when the magnetic flux density was increased from 0 to 5 T.1
The effect can be understood by considering a high mobility semiconductor with an embedded metal disk, as shown in Figure 1.1. 32 An unbound charged particle moving in an electromagnetic field experiences a force contribution parallel to the electric field (E) and a Lorentz force perpendicular to its velocity (v) and the magnetic field (B) according to F = q(E + v × B), where q is the charge of the particle. (Figure 1.1, right, shows a schematic of a typical EMR device in the vdP geometry. 32) In the case of a semiconductor/metal hybrid system where the metal is much more conductive than the semiconductor, the metal can be considered an equipotential volume. In this case, the local electric field around the semiconductor/metal boundary is oriented perpendicular to the interface at all points (see Figure 1.1a). This leads the current into the metal in the absence of a magnetic field and produces a low zero-field resistance (Figure 1.1b). If a magnetic field is applied, however, charge carriers approaching the boundary will experience a perpendicular deflection due to the Lorentz force, resulting in a force component tangential to the semiconductor/metal interface. As the magnetic field strength or the charge velocity is increased, the deflection becomes stronger and more of the current is forced around the conductive material, instead travelling through the more resistive semiconductor (Figure 1.1c). The magnitude of the deflection is given by the Hall angle, which approaches 90° in the high field limit and corresponds to a total expulsion of the current from the conductive material. Thus, the current deflection increases the resistance across the device as a function of the applied field, leading to a positive magnetoresistance. The constitutive relation between the current density J and the electric field E in the presence of a magnetic field can be expressed as J = σ(B)·E, where σ(B) is the magnetoconductivity tensor. Its elements depend on the unitless magnetic field β = μB and on the intrinsic conductivity of the material in the absence of a magnetic field, σ0 = neμ = ne²τ/m*, where n is the density of charge carriers, τ is the momentum relaxation time between scattering events, m* is the effective mass of the charge carriers, and μ is the carrier mobility. Here we consider a material with only one conduction band and an isotropic mobility. For a thin film structure with a magnetic field perpendicular to the plane of the device (B = B ẑ), the system can be reduced to two dimensions with a simpler magnetoconductivity tensor,
σ(B) = σ0/(1 + β²) [ 1  −β ; β  1 ].
The off-diagonal elements of the tensor describe the deflection of the current and depend on the unitless magnetic field β = μB. A strong deflection can therefore be achieved either by increasing the carrier mobility or the magnetic flux density. High mobility materials are preferred for EMR applications for this reason, as they require lower magnetic fields to achieve the deflection necessary to force the current around conducting inhomogeneities. 32-34
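To make the role of the unitless field β = μB concrete, the short sketch below (ours, not taken from any of the cited works) evaluates the 2D magnetoconductivity tensor and the Hall angle arctan(μB) for a few field values. The material numbers are merely illustrative, loosely based on the InSb film parameters quoted later in this section (μ = 45,000 cm² V⁻¹ s⁻¹, n = 2.6×10¹⁶ cm⁻³); the Hall angle reaches 45° at β = 1, i.e. at B = 1/μ, which is the characteristic field scale separating the weak- and strong-deflection regimes.

```python
# Illustrative sketch: 2D Drude magnetoconductivity tensor and Hall angle.
# Parameter values are assumptions for illustration, not measured device data.
import numpy as np

def magnetoconductivity_2d(n, mu, B, q=1.602e-19):
    """Return the 2D magnetoconductivity tensor (S/m) for B perpendicular to the film."""
    sigma0 = n * q * mu                      # zero-field conductivity
    beta = mu * B                            # unitless magnetic field
    return sigma0 / (1.0 + beta**2) * np.array([[1.0, -beta],
                                                [beta, 1.0]])

n = 2.6e22      # carrier density in m^-3 (2.6e16 cm^-3)
mu = 4.5        # mobility in m^2 V^-1 s^-1 (45,000 cm^2 V^-1 s^-1)

for B in [0.0, 0.05, 1.0 / mu, 1.0, 5.0]:
    sigma = magnetoconductivity_2d(n, mu, B)
    hall_angle = np.degrees(np.arctan(mu * B))
    print(f"B = {B:5.2f} T, beta = {mu*B:5.2f}, Hall angle = {hall_angle:5.1f} deg, "
          f"sigma_xx = {sigma[0, 0]:.3e} S/m")
```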
History of Extraordinary Magnetoresistance
Extraordinary magnetoresistance can be better understood if contextualized within the greater historical framework out of which it arose. The origins of EMR began in an unlikely place: the research laboratories of the infamous chemical and biotechnological company Monsanto. In the late 1960s Monsanto was also a producer of raw semiconductor materials, including GaAs and GaAsP, and it later established its own high-volume production lines of optical electronic devices and became the first company to mass produce light emitting diodes. In addition to developing GaAs materials for diodes, the company also explored their use for lasing applications. F. V. Williams, one of Monsanto's researchers in GaAs lasers, [35][36][37] noticed that in some GaAs samples grown under metal-rich conditions the measured electron mobility was anomalously high. Unable to explain this phenomenon, Williams presented this problem in a private correspondence to C. M. Wolfe at MIT. 38 Wolfe set about trying to understand the observed behavior and posited that it could be explained by the presence of inhomogeneities, 39 particularly if the inhomogeneous region was significantly more conductive than the surrounding bulk. In 1971, Wolfe modeled a van der Pauw disc with two concentric sections, where the outer section represented the bulk semiconductor and the inner section comprised an inhomogeneity with higher conductivity (see inset of Figure 1.1). 40 A set of analytical expressions was derived for the apparent resistivity and apparent Hall coefficient of a van der Pauw disc with an out-of-plane magnetic field in the limit where the conductivity of the inhomogeneity, σ0, is much higher than that of the bulk, σ (σ0/σ → ∞). Here, the Hall coefficient was found by sourcing current between diagonally arranged leads while deducing the transverse voltage drop, whereas the resistivity was found by sourcing current between adjacent leads and measuring the voltage drop at the opposing leads. The apparent mobility of the device is then given by the ratio of the apparent Hall coefficient to the apparent resistivity. Plotting the ratio between the apparent mobility and the bulk mobility as a function of the ratio of the inner and outer radii, α, at different magnetic field strengths yields the set of curves shown in Figure 1.2. Wolfe observed that in the low field limit, β << 1, as the size of the inhomogeneity increases, the resistivity and Hall constant both decrease. However, the resistivity decreases faster than the Hall constant and therefore results in a higher apparent mobility. In the high field limit, β >> 1, the Hall constant approaches that of a homogeneous sample, as there is no current flow through the inhomogeneity. The observed behavior is thus a geometric effect, produced by a current redistribution around the conductive inhomogeneity in the presence of a magnetic field. To demonstrate this effect, inhomogeneities were intentionally created in thin epitaxial layers of GaAs grown on semi-insulating substrates in 19XX (ref). Conducting inclusions were simulated by alloying Ga into parts of the exposed surface of the epitaxial layers. The optical-phonon-limited carrier mobility of pure GaAs is calculated to be around 8,000 cm² V⁻¹ s⁻¹ at 300 K, while the lattice-limited mobility at 77 K is approximately 240,000 cm² V⁻¹ s⁻¹ (ref). Samples with conducting inhomogeneities were measured to have apparent carrier mobilities that exceeded the theoretical maximum lattice-limited mobility. The apparent mobility of the samples ranged from 7,400 to 24,000 cm² V⁻¹ s⁻¹ at 300 K and between 150,000 and 740,000 cm² V⁻¹ s⁻¹ at 77 K, demonstrating that it is possible for this mechanism to produce the observed anomalous behavior. 40 Wolfe then further elaborated his theory in a subsequent paper published the following year. 38 The effect of material parameters on the behavior of the system was detailed by plotting the dependence of the derived expressions on various independent variables.
By mapping these relationships, several features became apparent: 1) relatively small differences in the conductivity of the medium can significantly affect the mobility measurement; 2) even small values of α and σ0/σ have an appreciable effect on the Hall coefficient; and 3) the apparent mobility can be larger than the real mobility by up to several orders of magnitude, depending on the various parameters. Additional experimental results were obtained to further buttress the theory (ref), and as a result of these investigations Wolfe recommended against using the mobility of a semiconductor as a figure of merit unless the homogeneity of the sample can be determined first. Inhomogeneities can appear in semiconductor materials through various mechanisms: for example, due to precipitates or metallic inclusions formed in samples grown under metal-rich conditions, variations in the doping level of doped materials, or the uptake of impurities at different rates through the different crystallographic faces of polycrystalline materials. It was also suggested that the level of homogeneity could be estimated by measuring the Hall coefficient as a function of an applied magnetic field (ref). Wolfe's work was found particularly relevant 26 years later, in 1998, by T. Thio and S. A. Solin, who at the time were working as researchers for the computer manufacturer NEC. In one of their research efforts, Thio and Solin explored the use of Hg1−xCdxTe as a magnetic sensor for hard disc read heads. To assess the performance of the material for magnetic sensing, heavily doped, lightly doped, and undoped samples of Hg1−xCdxTe were manufactured into both Hall bar and Corbino disc geometries (ref). The data produced by the Hall bar samples suggested that the doped samples perform worse for applications requiring magnetoresistive behavior. However, for the Corbino samples it was found that for the doped samples the electron mobility was roughly 400% higher than expected. Thio and Solin theorized that the observed behavior may be due to the presence of microscopic inhomogeneities in the doped samples, possibly due to separation into dopant-rich and dopant-poor phases. 41 In a second paper published that same year, they argued that the effect could be explained by the Wolfe model if it was augmented to include physical magnetoresistance. 42 Hall bar devices were fabricated by growing films of Hg1−xCdxTe using molecular beam epitaxy with x ≈ 0.10 and compositional fluctuations of around Δx ≈ ±1.5%. The size of the inhomogeneities was estimated to be between 30 and 220 nm in diameter. An interesting behavior could be observed in the magnetoresistance, ΔR/R0 = (R(B) − R0)/R0, and Hall coefficient data, as shown in Figure 1.3. At both low and high fields the MR followed a quadratic dependence, but the curvature was 30 times higher at low fields than at high fields, with a cross-over point around 0.4 T. For intrinsic semiconductors (n = p), the MR is expected to increase quadratically as a function of field with a curvature given by μeμh, yet here the low-field curvature was much higher than this value. Another anomaly was observed in the Hall coefficient, which at zero field was 30% lower than at high fields, a similar result to what Wolfe observed in his experiments. Thio and Solin explained these behaviors by referring to the Wolfe model. They posited that in the absence of a magnetic field the current flows preferentially through the low-resistance inhomogeneity, reducing the apparent resistivity of the sample.
At high fields, the Hall angle approaches 90°, forcing the current around the inhomogeneity and reducing the conductance as the cross-sectional area that the current can flow through is decreased. The cross-over field occurs at B = 1/μ, where μ is the mobility of the bulk semiconductor. They point out that Wolfe's model explicitly ignores the case where semiconductors possess intrinsic physical magnetoresistance, but then derive another set of equations which can capture this effect. The amended expression for the apparent resistivity fit the data well, lending validity to their approach. Thio and Solin concluded the paper by suggesting that the geometric enhancement of magnetoresistance at low fields could have important applications in the development of magnetic field sensors.

The Discovery of Extraordinary Magnetoresistance
Two years later, in 2000, Solin and Thio published results which became the foundational text for the field of extraordinary magnetoresistance. In it, the authors described the first sensor which made use of the aforementioned effect to measure magnetic fields. The sensor utilized the geometry which was proposed in the original Wolfe paper 40 : a vdP disc with four equally spaced contacts and a circular metal inclusion (see inset in Figure 1.4). Metal organic vapor phase epitaxy was used to grow the semiconductor material for the devices. First, a 200 nm buffer layer of InSb was deposited on a GaAs substrate. A 1.3 µm thick active layer of Te-doped InSb was then grown with a mobility and carrier density of 45,000 cm² V⁻¹ s⁻¹ and 2.6×10¹⁶ cm⁻³, respectively. This was followed by a 50 nm InSb contacting layer, and the structure was terminated with a 200 nm thick passivating layer of Si3N4. The lattice mismatch between the GaAs substrate and InSb creates a high degree of disorder in InSb thin films, which drastically reduces the mobility in the buffer layer. Band bending at the InSb/Si3N4 interface also reduces the mobility and depletes the number of carriers in the contacting layer. As such, neither layer represents a parallel conduction channel and current only runs through the active layer. Reactive ion etching was used to define the shape of the device and Ti/Pt/Au layers were successively deposited to form the metal inclusion and metal contacts of the device. Devices were produced with varying metal filling factors, defined as the ratio of the inner to outer diameter (α = ri/ro). The magnetoresistance measured in a four-terminal mode transitioned from being very weak in the absence of a gold inclusion (α = 0) to very strong as the filling factor increased (see Figure 1.4). For the InSb used in this experiment, the optimal value of the filling factor for high-field magnetoresistance was found to be 13/16, but shifted to 12/16 in the low-field regime. For α = 12/16 the resistance of the device increased by 83% at 0.05 T and 400% at 0.1 T. In the high-field regime, the device with α = 13/16 showed incredibly large magnetoresistances of 8,100% at 0.25 T, 42,000% at 1 T, and 1,500,000% at 5 T.

Analytical and Numerical Modeling
The physics of the EMR devices described above is theoretically well established and well suited for both analytical and numerical analysis. For concentric circular EMR devices with contacts placed symmetrically, as in the case of Figure 1.4, the equations reduce to a solvable Laplace equation on a circular geometry.
For such a device, the electrical potential between the two voltage contacts on the periphery can be found as a function of the magnetic flux density by writing the solution as an infinite series. 43 The series is expressed in terms of the parameters β = μB, β₀ = μ₀B, s = σ/(1 + β²) and s₀ = σ₀/(1 + β₀²), where Δ is the contact width (equal for all contacts), θ is the angular placement of the contact, I is the current, t is the thickness of the semiconductor, μ and σ are the mobility and the electrical conductivity of the semiconductor, and μ₀ and σ₀ are the mobility and the electrical conductivity of the metal. The analytical model shows good agreement with EMR device experiments for symmetrical devices, as displayed in Figure XX, in particular since no free fitting parameters are used to calculate the magnetoresistance. 2,32,44 An analytical solution also exists for an asymmetric circular device where the inner metal inclusion is displaced to the side 2 as well as for the bar-shaped device discussed in the next chapter (ref). Numerically, the EMR governing equations can be solved in steady state using both finite difference and finite element methods. 32,43 Finite element solutions are particularly well suited due to their flexibility in meshing geometrical features. The finite element model used to describe the concentric circular EMR device matches the analytical expression very well and also shows good agreement with experimental data in the range from -1 to 1 T, as observed in Figure XX. 32,43 Due to this agreement, and because the EMR equations are relatively easy to implement in standard finite element software, numerical studies of the EMR effect have been performed in numerous articles, as will be detailed throughout this manuscript.

Figures of Merit

Studies on extraordinary magnetoresistance are not condensed into a single universal figure of merit; instead, a variety of metrics are typically used. The four most commonly used metrics are:

1) The field-dependent four-terminal resistance, R(B) = ΔV(B)/I, where the current I between two contacts is typically fixed and the resulting field-dependent electrostatic potential drop ΔV is measured between two voltage contacts.

2) The magnetoresistance (MR), generally defined as MR(B) = [R(B) − R(0)]/R(0), often quoted in percent. More conservative measures of the magnetoresistance combining both two- and four-terminal resistances have also been used. 23

3) The sensitivity, defined as the change of resistance around a bias magnetic field, dR/dB evaluated at B = B_bias. Sensitivity is an important metric that relates directly to the voltage signal, V_signal = I·(dR/dB)·ΔB, generated in EMR magnetometers as they are subjected to a magnetic field signal ΔB = B − B_bias. In some cases, the sensitivity is scaled with a factor of 1/R or 1/√R to account for effects such as the influence that a change in the EMR device resistance has on the noise.

4) The two key metrics often used when EMR is applied to magnetometry are the signal-to-noise ratio and the noise-equivalent field. These are described in detail in Section 4, where the application of EMR for magnetometry is discussed.

These metrics are used to describe the performance of EMR devices throughout this review.

Geometry

The EMR effect depends on physically deflecting charge carriers and thereby changing the route the current must take as it travels from source to drain. Therefore, along with the properties of the constituent materials, the device geometry plays a major role in the performance of EMR devices, for instance going from a negligible MR for α = 0 to MR(1 T) = 42,000% for α = 13/16 in Figure 1.4.
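To make the use of these figures of merit concrete, the short Python sketch below evaluates MR(B) and the sensitivity dR/dB from a tabulated R(B) curve; the R(B) values, bias current and bias field here are made-up placeholders chosen for illustration and do not correspond to any device from the cited literature.

import numpy as np

# Made-up example: R(B) for a hypothetical EMR device, in ohms,
# measured at a fixed bias current I (four-terminal configuration).
I_bias = 1e-3                                   # bias current (A), placeholder
B = np.linspace(-1.0, 1.0, 201)                 # magnetic flux density (T)
R = 0.5 + 40.0 * B**2 / (1.0 + 4.0 * B**2)      # placeholder R(B) curve (ohm)

# Magnetoresistance relative to the zero-field resistance, in percent.
R0 = R[np.argmin(np.abs(B))]
MR = 100.0 * (R - R0) / R0

# Sensitivity dR/dB (ohm/T), evaluated numerically. At a bias field B_bias
# it sets the voltage signal V = I * (dR/dB) * dB for a small field change dB.
dRdB = np.gradient(R, B)
B_bias = 0.2
S = np.interp(B_bias, B, dRdB)

print(f"MR(1 T) = {np.interp(1.0, B, MR):.0f} %")
print(f"dR/dB at {B_bias} T = {S:.2f} ohm/T")
print(f"Signal for a 1 mT change at {B_bias} T: {I_bias * S * 1e-3 * 1e6:.3f} uV")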
In this chapter we will review the various designs which have been studied and how various geometric factors affect their performance. There are almost unlimited degrees of freedom for designing these devices, as one can manipulate virtually any geometric part of the structure. However, the geometric variations can essentially be generalized to one of three types: 1) changes to the overall shape of the device, e.g. a circular vs. bar-type outer device boundary as displayed in Figure 2.1a and b, respectively; 2) changes to the internal shape of the boundary between the constituent materials, e.g. as in the transition from the concentric circular device to the asymmetric circular device in Figure 2.1a and c, respectively; or 3) changes to the locations and number of the electrical contacts, such as going from the IVVI configuration displayed in Figure 2.1b to an IVIV configuration. The most common EMR geometries are the concentric circular device and the bar-type device. Optimizations have led to more exotic structures such as a multi-branched device which resembles a Hall bar or a so-called "fish-bone" structure (refs) and asymmetric EMR devices (ref). Three approaches have been used to probe the effect of varying the geometry of the device. In the first approach, the geometry effect has been investigated by experimentally varying the device geometry. Here, the considered devices are based on concentric circular geometries 1,6,7,9,24,27,43,45 or bar-shaped geometries. 15,20,46-49 In the second approach, geometries are described by parameters, which are varied theoretically in numerical 3,5,43,45,50-62 or analytical models 32,43 in order to evaluate the effect on magnetoresistive performance. Here, concentric circular devices 3,5,43,58,59 and bar-shaped devices 50-55,60-62 are also widely addressed. In the third approach, the shape of the shunt is changed into more complex multi-branched geometries in order to enhance the magnetoresistance response by several orders of magnitude. 63,64 The circular, bar, asymmetric and multi-branched device geometries are illustrated in Figure 2.1. In the following subsections we will introduce the various geometries which have been reported in the literature and how geometric factors affect their performance.

Corbino Geometry

While the Corbino disc is generally not considered to belong to the family of extraordinary magnetoresistive devices, it does exhibit a geometric magnetoresistance and merits some brief discussion as it was one of the first examples of geometric magnetoresistance. O. M. Corbino first proposed the Corbino disc geometry in 1911 in a series of papers which explored the nature of charge conduction in metals (refs). The choice of a disc was not novel, as precedence for this design existed in studies by Maggi, Hall, and Boltzmann regarding conduction in metal discs (refs). However, Corbino claimed to have discovered previously unreported magnetoconductance phenomena through the use of this particular geometry (ref). In fact, Corbino's results were later found to be not an entirely new effect, but rather a clear and simple demonstration of the Hall effect. 65 Similar to the EMR devices described by Solin 1, the Corbino disc is a round heterostructure with a metallic center embedded into another material.
Unlike the classic EMR device, where the contacts are located in discrete areas on the outer boundary of the device, in the Corbino disc the entire perimeter forms one metallic contact and the inclusion at the center forms the other. As such, it can only be operated in the two-terminal configuration. Current is injected through the inner contact and in the absence of a magnetic field it flows radially towards the perimeter (see Figure 2.2a). When a perpendicular magnetic field is applied, the electrons experience a deflection due to the Lorentz force and the current path spirals, with the degree of spiraling determined by the strength of the field and the carrier mobility of the non-metallic material (Figure 2.2b-c). The spiral motion occurs because the geometry of the device creates a condition in which the Lorentz force is not counterbalanced by the Hall field formed by charge build-up at the device boundary, and thus the current is allowed to deflect freely. An increase in the resistance of the device can be measured as a direct result of the lengthening of the current path. Due to its simple geometry, the only geometric parameters that the performance of a Corbino disc depends on are the thickness of the active material and the ratio of the outer and inner radii. Kleinman derived analytical expressions for the voltage drop and sensitivity of a Corbino disc and showed that the radii ratio is the dominant geometric factor, with larger values of the ratio yielding greater magnetoresistances (ref). The thickness of the active layer contributes to the value of the zero-field resistance. 67 The effectiveness of the Corbino disc as a magnetoresistive device can be clearly seen in the work published by Branford et al. 4 To compare the effect of various geometric designs, devices of various shapes were made from InSb. A 1 µm thick film of InSb was grown on a semi-insulating GaAs substrate and then etched to form devices with van der Pauw and Corbino disc geometries. A high intrinsic magnetoresistance in the InSb was observed in the data from the vdP samples, which showed an increase in resistance of 885% at 8 T (see Figure 2.3). While relatively large, this value is much smaller than that of the Corbino disc, where the resistance increased by 4,685%, demonstrating the strength of the geometric effect.

Figure 2.3: MR(B) for the cloverleaf geometry (solid circles), a 1x1 array in vdP geometry (open squares), and Corbino disc geometry (solid squares). The solid line shows the predicted MR value using measured material parameters from the as-grown samples. Inset: Low-field MR shown for the Corbino disc (squares) and a concentric circular EMR device with a filling factor of 12/16 (triangles). Figure from Branford. 4

A sensitivity analysis of the Corbino samples revealed that there are two regimes: at low fields the sensitivity, dR/dB, increases as a function of field until it reaches a maximum value, and at high fields the sensitivity is constant. 4 Despite its ability to demonstrate geometric magnetoresistance, the Corbino disc is not often used in the EMR literature due to its relatively low magnetoresistance compared to other designs.

Shunted van der Pauw Disc

The discovery of EMR in 2000 1 was based on the concentric circular geometry, with magnetoresistances of more than 10⁶% at magnetic fields of 5 T. The design consists of a semiconductor disc which is shunted by a highly conductive material (see Figure 1.1).
The perimeter of the device features four equally spaced contacts, which can be operated in a four-terminal configuration in which the current is sourced through adjacent contacts and the voltage difference is measured at the opposite pair, or in a two-terminal measurement in which only one pair of adjacent contacts is used for both the current and voltage leads. Devices made from shunted vdP discs are heavily affected by geometric variations in the shape and size of the conductive shunt. The effect of varying the filling factor, defined as the ratio of the radii of the shunt and the outer boundary, α = rᵢ/rₒ, has been demonstrated in both experiments 1,6,7,9,24,27,43,45 and simulations. 5,32,58,59,63,64 The experimental results and simulations have both shown that increasing α decreases the overall resistance but improves the magnetoresistance, as the R(0) term decreases faster than R(B). This improvement continues up to a threshold value of α, after which the magnetoresistance at a fixed magnetic field drops as the current can no longer be effectively deflected around the shunt (see Figure 2.4). The optimum value of α depends on the strength of the applied magnetic field, with larger fields yielding higher values of α. Here, the lack of effective current deflection for large values of α is compensated by a large magnetic field. Sun et al. examined the effect of the filling factor on both magnetoresistance and sensitivity 6 and found that the sensitivity decreases as the size of the shunt is increased. The magnetoresistance followed the opposite trend, with high values of α generally producing a dramatically higher magnetoresistance. Devices which feature a large shunt have a low zero-field resistance, driving up the magnetoresistance as the term in the denominator becomes small. As such, small changes in the device resistance induced by the field can produce large magnetoresistances, but since the magnitude of the change is small the resulting sensitivity is poor. In contrast, if one optimizes the device for high sensitivity, the ideal shunt diameter is much smaller, but produces a larger zero-field resistance and hence a reduced magnetoresistance. The circular geometry has not received as much interest as some other designs due to difficulties in fabrication. Misalignment of the lithography masks can result in deviations in the concentricity of the two materials which can affect performance. 13 Nonetheless, the geometry shows great flexibility and can yield good magnetoresistive results. In 2013, Sun et al. published an experimental study comparing the circular, bar and Corbino disc geometries in n-doped InAs epilayers. 13 The authors reported that the four-terminal configurations of the shunted vdP geometry reach higher magnetoresistances; however, the two-terminal bar-shaped device provided a magnetic field resolution of ~12 /√Hz, which is 15 and 75 times better than the Corbino disc and the four-terminal bar-shaped counterpart, respectively. Branford also compared these values to the results obtained by Solin et al. 1 on an InSb EMR device with a 12/16 filling factor. It can be seen in Figure 2.3 that in the low-field regime the vdP geometry produces a much stronger response than that of the Corbino disc, with the former yielding a 100% increase in resistance compared to a meager 4.5% for the latter under a field of 0.05 T. 4
Overall, it can be said that for applications requiring high magnetoresistances, large shunt diameters and a four-terminal configuration are preferable, whereas if high sensitivity at finite magnetic fields is sought, smaller shunt diameters and a two-terminal setup should generally be employed.

Off-centered Shunted vdP Disc

Variations of the concentric circular vdP disc were considered in a couple of studies (ref). The most comprehensive study was performed using finite elements by Erlandsen et al., who shifted the inner metal disc systematically either vertically or horizontally, as displayed in Figure 2.5a and d. A horizontal shift was found to dramatically alter the magnetic field dependence of the resistance, which transitioned from a symmetric trace with R(B) = R(-B) when the metal disc was centered in the device to a highly asymmetric curve with R(B) ≠ R(-B) upon displacement (Figure 2.5b). In contrast, when the metal inclusion was shifted vertically, the symmetry of R(B) was kept intact and only small variations were found (Figure 2.5e). The authors deduced that breaking the mirror symmetry along the vertical axis was essential for breaking the symmetry of the field-dependent resistance. This symmetry breaking was declared a prerequisite for producing EMR sensors with magnetic field sensitivity at very weak magnetic fields, as it entails that dR/dB ≠ 0 at zero field. Furthermore, it enabled the detection of both the direction (positive or negative perpendicular fields) and the magnitude of the magnetic field. The sensitivity was found to increase as the displacement and the filling factor were enlarged simultaneously, with the largest value of approximately 70 Ω/T found when the metal disc is as big as the semiconductor disc but displaced such that it only overlaps with the right-hand part of the semiconductor.

Figure 2.5: Metal disc shifted horizontally (a) and vertically (c) in an InSb/gold EMR device, and the effect on the magnetic field-dependent resistance shown in (b) and (d), respectively. (ref)

Erlandsen et al. further showed that the symmetry could also be broken by dividing the semiconductor into two regions with different electron mobilities. By examining the current distribution in both the symmetric and asymmetric devices, the authors were able to visualize the change in the current distributions when exposed to a magnetic field (Figure 2.5). As expected, a large part of the current is predicted to flow through the metal shunt in the absence of a magnetic field. In the presence of a perpendicular magnetic field, the current is deflected around the metal inclusion. In the symmetric device, the current is deflected in a symmetric manner when comparing positive and negative magnetic fields. However, for both asymmetric devices, the current flow was found to differ significantly. In particular, when applying B = -1 T to the device with two semiconductor regions (lower row in Figure 2.5) the current is deflected around the gold inclusion in the semiconductor region with high mobility, but is readily directed into the gold as the current flows into the semiconductor region with low mobility. Solin et al. also produced a device based on the vdP disc but with an off-center shunt and an asymmetric lead configuration. 2 The reported magnetoresistance of this design was more than an order of magnitude greater than that of the concentric device with the same sized inhomogeneity.
The locations of the voltage and current contacts can greatly affect the performance of the device, as can be seen in Figure 2.15, where a relatively small shift in the location of the current leads results in a 5-fold increase in the magnetoresistance at 0.1 T. The position of the shunt and the leads thus provides yet more degrees of freedom for the design of EMR devices.

Bar Geometry

The most common EMR device shape is the bar-shaped geometry, due to its simple design, simpler fabrication and compatibility with nanoscale sensor fabrication. 8 Solin et al. 44,68 showed how the bar-type geometry can be derived from a circular geometry by using conformal mapping. 69 In the case of a bar-shaped device with symmetric leads, an equivalent filling factor can be calculated in terms of the horizontal distance from the axis of symmetry to the furthest current lead, the width of the semiconductor, and the vertical distance from the leads to the top of the shunt. 70 The factors that affect the performance of bar-shaped devices have been investigated in various experimental 15,20,29,47,49 and numerical studies. 12,54,55,60,64,71 Sun et al. 71 showed that the performance of the bar device is highly susceptible to changes in geometry, e.g. the location of the contacts, the width of the semiconductor, and the widths of both the semiconductor and the metal. In particular, the effect of the length-to-width ratio of the semiconducting region in the bar-shaped device has been widely explored in the literature with experiments 15,20,29,47,49 and simulations. 12,54,55,60,71 This parameter is the geometric equivalent of the shunt diameter in the shunted vdP geometry. For instance, Möller et al. demonstrated how the magnetoresistance in bar-shaped devices increases exponentially with decreasing semiconductor width. 15 MR values of 93%, 1,900%, and 115,000% were measured at 1 T for devices with length-to-width ratios of 2.8, 10, and 28, respectively. The increase in magnetoresistance is due to the decrease in the zero-field resistance that occurs as the width is reduced. 46 A maximum sensitivity of XX at B = Y T was observed for the device with the thinnest semiconductor. Sun et al. further elaborated on the effect of the length-to-width ratio and suggested that the optimal ratio depends on the strength of the field one is trying to measure. They reported that for high magnetic fields, length-to-width ratios of 10 to 20 gave the best results, while a ratio of 5 was optimal for low magnetic fields. 71 Similar results have been reported in the other cited papers. Huang et al. relaxed the assumption that the cross-section of the semiconductor should be uniform and presented a chevron-shaped design in which the width of the semiconductor decreases as it approaches the axis of symmetry of the device (see Figure 2.7). This creates a constriction in the center which increases the resistance and tripled the MR at 3 T. 64 Sun and Kosel 12 investigated bar-shaped devices with metal shunts of various shapes with the use of finite element simulations. Instead of treating the device as if it were strictly 2D, they explored electrodes that were thinner or thicker than the semiconductor (see Figure 2.9) or were lying partly on top of the semiconductor (see Figure 2.10), as is often the case in experimental devices.
In the case of devices without an overlap, the magnetoresistance and sensitivity were both found to be constant as long as the metal thickness was larger than that of the semiconductor (see Figure 2.9), and decreased only slightly when the thickness of the metal was one tenth that of the semiconductor. In the case where a significant overlap existed between the metal and semiconductor, the output sensitivity was slightly decreased; however, this comes with a simplified fabrication process. Thus, the small decrease in performance could likely be offset by the potentially improved electrical contact and easier fabrication. In addition, these results suggest that bar-type devices are not particularly sensitive to the types of physical deviations from idealized geometries that arise during the fabrication of experimental devices. Oszwaldawski et al. 8 also published an experimental procedure to produce bar-shaped devices with a simplified fabrication process, in this case using metal contacts and a shunt deposited on top of the semiconductor (see Figure 2.11). This configuration was reported to yield an even better magnetoresistance compared with its conventional counterpart due to the very low resistance at zero magnetic field, which is mostly attributed to the low metal-semiconductor contact resistance arising from the large contact area. They claim that the new configuration can be utilized to upscale the production of EMR sensors, which have historically been difficult to fabricate. The new configuration also provides an accessible platform for investigating EMR in promising thin film and 2D materials such as graphene. In order to estimate how the devices would perform at higher fields, the devices were simulated using FEA software. 26 Experimental results matched the output from the simulations for the range of magnetic fields tested, allowing the authors to simulate the performance at higher fields. The simulations predicted a magnetoresistance of 55,000% at 9 T, similar to what was reported by Lu et al. with a shunted vdP geometry 24, which lends credence to the proposed planar geometry.

Multi-branched Structures

As Huang showed in the case of the chevron-shaped shunt in the bar-shaped device (see Figure 2.7), there are no requirements on the uniformity of the shunt geometry. Several groups have tried to optimize EMR devices by changing the shape of the metallic shunt (refs). One common type of alternative shunt geometry is what we refer to as multi-branched structures. The multi-branched structure is based on the shunted vdP disc, but the shape of the shunt resembles a Hall bar with multiple arms. Branched structures were originally proposed by Hewett et al., who showed using FEM simulations that a multi-branched structure similar to the device shown in black in Figure 2.12 performed significantly better than the conventional shunted vdP disc. 63 This study was later expanded by Huang et al., 64 who numerically showed that by combining the branched structure with the outer parts of an ellipse, an incredible enhancement of the MR by several orders of magnitude could be achieved (see Figure 2.12). The rounded outer parts from the elliptic geometry are crucial to this structure, since they keep the resistance at zero field extremely low given that almost the entire current pathway at zero field is covered in metal.
At the same time, the voltage drop at finite magnetic fields is large since the current deflects away from the metal and into tight constrictions between the voltage probes in the semiconductor, leading to even larger resistance values compared to the other multi-branched structure. Hewett et al. 63 additionally demonstrated that structures with smaller filling factors experience the largest gains when converted into multi-branched geometries. Moktadir and Mizuta 59 proposed a multi-branched device with additional arms which they called a fish-bone structure. The performance of the device, composed of a graphene/metal hybrid, was studied numerically and yielded large magnetoresistances exceeding 10⁸% (see Figure 2.13). This device geometry was further used to study the effect of an inhomogeneous graphene conductor composed of p-type and n-type puddles with equal mobility and charge carrier density but opposite sign of the charge. By varying the area fraction of n-type puddles (fₙ), the authors concluded that the largest magnetoresistance was obtained when the system approached a homogeneous conductor composed solely of either electrons or holes (see Figure 2.13). While experimental results for multi-branched devices have not been reported in the literature, their incredible performance in numerical studies suggests that the geometries which have been explored to date are far from ideal and that massive improvements may be achieved through geometric optimization.

Other Geometries

A variation of the bar-shaped device geometry was investigated by Pugsley et al., 72 who considered a square version of the device, i.e. a square metal inclusion inside a square semiconductor, modeled with finite element simulations. The authors concluded that at 1 T the optimal ratio between the side lengths of the inclusion and the semiconductor is 8/10, a value close to the optimal filling factor of 13/16 for the shunted vdP geometry. The authors also showed that by separating the square inclusion into two rectangular regions, the magnetoresistance at low fields (0.04 T) can be increased by two orders of magnitude without affecting the magnetoresistance at high fields (see Figure 2.14). 72 The 3D version of the system, in which a cubic metal inclusion is embedded inside a cubic semiconductor, was also calculated, with the results showing magnetoresistances of the same order of magnitude as those of the corresponding 2D system. Hong et al. took a different approach to generating an EMR response than what is typically done with the metal/semiconductor hybrid devices discussed previously. 14 Rather than creating a device with inhomogeneous material properties in order to control the current flow in a magnetic field, an inhomogeneous magnetic field was used to control the current path in a homogeneous material. The device was produced by creating a two-dimensional electron gas in an InAs quantum well with a GaSb cap layer. The deposited layers were then etched to form a vdP square with contacts on each of the four corners. Ferromagnetic gates were then made by depositing either Fe or Co layers in a strip over the center of the devices (see Figure 2.16a). The operating principle of the sensor is that the ferromagnetic gate is magnetized in the presence of an external field and, in doing so, generates fringe fields at its edges. The fringe fields at opposite ends of the gate are oriented in opposite directions and act as magnetic barriers (see Figure 2.16b and c).
This change in the local magnetic landscape causes the electrons to move preferentially along the channels formed by the peaks in the local field, since the Hall angle in the region near the edges is close to 90°, thereby lengthening the conduction path in the presence of an external magnetic field and increasing the measured resistance. Two different configurations were used during the testing of the device. In the diagonal configuration, the A and C terminals functioned as the current source and drain and the potential was measured at the B and D terminals. In the side configuration, A and D were used as the source and drain and the potential was measured at B and C. The electron mobility and carrier density were determined to be 193,800 cm²V⁻¹s⁻¹ and 9.46×10¹¹ cm⁻² at 4.2 K, and 31,000 cm²V⁻¹s⁻¹ and 2.28×10¹² cm⁻² at room temperature. Interestingly, it was observed that the device was capable of producing a positive magnetoresistance when operated in the diagonal configuration but a negative magnetoresistance when the side configuration was used (see Figure 2.17). When operated in the diagonal configuration, the magnetoresistance at 1 T was 800% at room temperature and 12,000% at 4.2 K. Maximum sensitivities of 78 Ω/T and 681 Ω/T at 0.3 T were recorded for the room and low temperature measurements, respectively. When tested in the side configuration, magnetoresistances of -27% and -1450% at room and low temperatures were observed. Both the diagonal and side configurations show a saturation in the magnetoresistance at an external field strength somewhere between 0.5 and 1 T. This results from the magnetization of Fe typically saturating around 0.6 T, which represents a ceiling to the strength of the magnetic barrier that the gate can generate. This means that there is a limit to the maximum field strength that can be measured with this device. However, there are some unique advantages to this design, namely that it is easy to fabricate and that its performance does not depend on contact resistance, as there are no internal interfaces. Similar strategies for controlling the electron trajectory using inhomogeneous magnetic fields have also been addressed in several other studies, as reviewed by Nogaret (ref).

Contact Positions: Similar to the geometry of the metal inclusion, the position, number and size of the contacts are of great importance for the response of EMR devices. Most studies regarding the order or location of the contacts focus on the bar geometry. Conformal mapping of a vdP disc where the current probes are adjacent to one another into a bar-shaped device results in a VIIV order of the voltage (V) and current (I) probes (ref). Sun et al. reported that changing the contact configuration between the two symmetric cases of IVVI and VIIV yielded almost identical device performance. 55 Huang et al. 64 presented simulations which predicted that the bar-shaped device could be optimized by varying the distance between the voltage probes. The spacing between the innermost leads in the IVVI configuration was changed from 18% of the total device length to 13%, which led to a doubling of the MR at 3 T (see Figure 2.7), demonstrating that the location of the probes is an important geometric parameter. The effect of contact spacing was also examined by El-Ahmar et al. using a ten-terminal device (see Figure 2.18) in the VIIV configuration. 24
It was determined that a terminal arrangement of I3,tV1,4 produced an EMR response almost 3 times higher than I4,tV3,5 regardless of the value of t, signifying that the location of the second current terminal does not have a large effect, whereas the location of the first terminal is a significant factor. In addition to changing the probe spacing, the probe order can also be varied. Several works have explored the effect of staggering the probes both experimentally 8,21,26 and with the use of simulations. 56,57 Troup et al. showed a nearly four-fold increase in the magnetoresistance and a three-fold increase in the sensitivity at XX T when switching from an IVVI to an IVIV configuration. 21 The magnitude of the improvement in the performance of the sensor was similar to what was observed by El-Ahmar et al. (see Figure 2.18). 8,26 It should be noted that when bar-shaped devices are operated in the IVIV configuration, the EMR signal is not symmetric around 0 T. The authors claim that due to the asymmetry of the device, the effectiveness of the metal shunt differs depending on the direction of the applied field. Ultimately, El-Ahmar et al. were able to verify results from previously established literature and recommend using an asymmetric VIVI terminal configuration, placing the first current terminal at a distance of 25-35% of the length of the device from one edge and the second terminal at a distance of 10-20% from the opposite edge. 26 The effect of asymmetric contact placement was also studied by Holz et al. for the case of a bar-shaped device in the IVVI configuration. 51 When the voltage contacts were placed in a mirror-symmetric manner (V2-V4 in Figure 2.19a), a symmetric resistance with R(B) = R(-B) was obtained. If the mirror symmetry was broken (V2-V3), slight asymmetries were observed. In both cases, the resistance appeared to approach the value obtained without a metal shunt for large magnetic fields, signifying a complete expulsion of the current from the metal. If the voltage contacts were positioned in a VIVI configuration (V1-V2 and V1-V4 in Figure 2.19b), the overall measured resistance approached the linear Hall resistance at large magnetic fields. For small values of µB, nonlinearities were observed that resembled the EMR effect and enhanced the sensitivity, dR/dB, above that of the Hall effect, while simultaneously benefiting from a zero-field sensitivity greater than zero, which is characteristic of Hall sensors. Thus, the interplay between the Hall effect and conventional EMR may be combined to yield superior device performance. This hypothesis was also reinforced by Sun et al., 11 who studied a 3-terminal bar-type device which exhibited enhanced low-field sensitivity while retaining the strong EMR effect at high fields. They found that using three terminals produces a larger sensitivity, as the Hall effect also contributes to the total output sensitivity. Holz 29 and Sun 71 suggested that for weaker magnetic fields around 50 mT, an asymmetric voltage probe placement boosts the magnetoresistance and sensitivity by an order of magnitude compared to symmetric voltage probes. However, Solin 45 showed that asymmetric contact configurations in a bar-type device can also lead to significant enhancements at higher magnetic fields (see Figure 2.20). For the concentric circular device, the influence of the contact positions was numerically investigated by Huang et al. 73
Using filling factors ranging from 11/16 to 13/16, the magnetoresistance in a field of 0.1 T was simulated for various symmetric voltage probe configurations. It was found that the magnetoresistance could be increased by about a factor of two by narrowing the angular span between the two voltage contacts. An asymmetric voltage contact placement was also briefly investigated, with only a single simulation indicating that the magnetoresistance might be increased slightly by an asymmetric voltage contact placement in the case of the shunted vdP geometry.

Top vs. Side Contacts: Electrical contacts to the active materials can be made by depositing metal either on the top surface or along the side wall of the structure. Top contacts are easier to produce than side contacts and require less precise alignment of the etch masks, but in some cases can result in lower contact quality (ref). Sun et al. 74 investigated the difference between top- and side-contacted EMR devices using two bar device geometries (see Figure 2.22) to determine if the manufacturing process could be simplified by using top contacts. The first device was a conventional bar-shaped device in which Si-doped InAs was patterned into a strip, followed by an aligned metal deposition to define the metal shunt and electrodes. The metal contacts the semiconducting bar from the side with a slight overlap at the top. The second device is composed of an unpatterned semiconductor with metal deposited on top. To avoid current leakage between the two electrodes, an insulating layer of silicon nitride was added below the electrodes. Both devices showed an EMR effect when exposed to an out-of-plane magnetic field, with the top-contacted device yielding a lower resistance at zero field. Both devices showed a similar magnetoresistance, but the sensitivity (dR/dB) was approximately a factor of 2 lower for the top-contacted device. The authors further investigated the magnetic field sensor properties and found that the top-contacted device resulted in a thermal voltage noise of 1.7 nV/√Hz, which was slightly reduced compared to the side-contacted device (2.3 nV/√Hz) due to its lower resistance. The larger sensitivity of the side-contacted device resulted in an approximately 30% better magnetic field detection limit, yielding a value of 19 nT/√Hz between 0.4 and 1 T and 4.3 µT/√Hz at 0 T.

Size of Contacts: Poplavskyy examined the effect of contact size on the magnetoresistance of the concentric circular geometry by using the analytical expression. 43 In this study, the contact size was found not to be a critical parameter, as the magnetoresistances for point contacts, 8°-wide contacts and 16°-wide contacts were very similar, and only a small reduction was found when considering contact widths up to 32° (see Figure 2.23). The case of point contacts further led to a significant simplification of the analytical expression.

Material Parameters

In this section we review how the choice of materials and material properties affect the performance of EMR sensors. Firstly, we examine how individual material properties affect device performance, after which experimental procedures and results are detailed for various material systems.

Electronic Transport Parameters

The electronic properties of the constituent materials in EMR devices have a strong impact on EMR device performance.
The electrical conductivity, σ, is given by σ = enμ, where e is the elementary charge, μ is the charge carrier mobility, and n is the density of charge carriers, which can be positive or negative depending on the charge state of the carrier species. Both the mobility and carrier density of the constituent materials, as well as the contact resistance to the metal shunt, can effectively turn the magnetoresistance in EMR devices from a high value to being nonexistent, as outlined in the following sections. The materials that have been used for making EMR sensors are primarily III/V semiconductors and graphene, as outlined in Table 1 in the introduction. Another vital parameter which determines the performance of EMR sensors is the interfacial contact resistance between the shunt and the semiconductor, and whether the interface forms an ohmic contact or a Schottky barrier. If the energy gap between the Fermi level in the metal and the conduction band in the semiconductor is low, the resulting contact is ohmic. For many semiconductors, however, this energy gap can be quite sizable and the wave function of electrons in the metal decays into the semiconductor band gap, resulting in pinned electronic states within the band gap of the semiconductor which can result in a relatively high resistance. 76 This state is known as a Schottky barrier and results in a non-linear current-voltage response which is directionally dependent.

Carrier Mobility

The carrier mobility is a measure of how quickly charge carriers move through a material in response to an external electric field, E. For electrons and holes the mobility is given by μ = v_d/E = eτ/m*, where v_d is the drift velocity, τ is the relaxation time between scattering events and m* is the effective mass of the carrier. The most significant factor affecting the mobility is the scattering processes present within the system, such as scattering off impurities, phonons and crystal defects. The carrier mobility is particularly relevant for EMR devices as it both lowers the zero-field resistance and directly affects the Hall angle (θ_H = tan⁻¹(μB)) by entering the off-diagonal elements of the conductivity tensor. Finite element models were used to investigate the effect of the carrier mobility of the semiconductor in numerous studies. 29,54,60,77,78 In the low-field regime, higher mobilities translate to higher magnetoresistance values due to the increase in the Hall angle via the terms in the magnetoconductivity tensor, allowing for a more effective redistribution of the current. At high field strengths the Hall angle will always be large, and thus the improvement in magnetoresistance begins to saturate at high field and mobility values, since all of the current is forced out of the shunt. 54,60

Figure 3.1: Magnetoresistance using point contacts (PMR) for a three-terminal device as a function of mobility in the low and high field regimes. 54

The higher the mobility of the semiconductor, the faster the magnetoresistance saturates, due to the larger Hall angle at a given magnetic field. For mobilities exceeding 20,000 cm²V⁻¹s⁻¹, saturation can be observed within 5 T (Figure 3.5). 77 In the case of very large mobilities, the semiconductor conductivity can approach that of the shunt and reduce the effectiveness of the device. 54 The effect of mobility on the sensitivity of EMR devices was also considered. It can be seen in Figure 3.6 and Figure 3.7 that the device resistance increases as the carrier mobility is decreased.
However, this decrease is not linear under a finite magnetic field, and as a result both the magnetoresistance 29,60 and sensitivity 78 curves of the sensors show a clear maximum as a function of mobility. The mobility which maximizes these two figures of merit becomes lower as the magnetic field strength is increased. Holz determined that the maximum sensitivity, dR/dB, scales with µB and that under their conditions the sensitivity reaches a maximum when µB is equal to 0.8. 78

Figure 3.4: Resistance (left) and sensitivity (right) of a bar-shaped EMR sensor for 3 magnetic fields as a function of carrier mobility. 63

In a two-terminal device, no such saturation in the resistance as a function of mobility was observed (see Figure 3.8). As a result, the magnetoresistance of the devices only increased with higher mobilities. Whereas four-terminal devices have an optimum mobility depending on the field range that one would like to measure, for two-terminal devices a higher mobility yields better performance over all field ranges tested. 60

Carrier Density

The effect of varying the carrier density of the semiconductor was investigated in various papers which modeled bar-shaped devices with two 60, three 54, and four contacts. 60,78 In all three contact configurations, the magnetoresistance was found to be invariant as a function of the semiconductor carrier density provided that the resulting conductivity of the semiconductor remained below a threshold value (see Figure 3.6). Below this threshold, varying the carrier density of the semiconductor produces an inversely proportional relationship between the device resistance at a given magnetic field and the carrier density, which is in line with what is expected theoretically. 60,78 Since the low and high field resistance curves are parallel, the magnetoresistance remains unchanged. However, when the carrier density is increased past a threshold value, the resulting conductivity of the semiconductor becomes comparable to or higher than that of the metal. The metal therefore no longer acts as a shunt, which reduces the magnetoresistance of the device until it eventually vanishes. 60

Figure 3.6: Resistance (top) and magnetoresistance (bottom) of a four-terminal bar device for 3 magnetic fields as a function of carrier density. 60

A critical carrier density was also observed when the device was operated in the two-terminal configuration; however, the magnetoresistance did not vanish at high carrier densities up to 10²⁷ m⁻³. 60 Lower carrier densities also result in a higher sensitivity, with an inverse linear relationship between the two up to a threshold carrier density (see Figure 3.3). The lower carrier densities result in a higher semiconductor resistance, which increases the overall device resistance in the presence of a field. However, because the device resistance as a function of carrier density has the same slope as the current sensitivity, the voltage sensitivity, defined as (1/R)·dR/dB, shows the same behavior as the magnetoresistance and saturates below the threshold carrier density. 78

Shunt Conductivity

FEA models were used in various studies in order to examine the role of the conductivity of the shunt material. 54,60,77,78 Here, the conductivity contrast between the semiconductor and shunt regions was varied by changing the carrier density in the shunted region.
Holz 78 and Rong 54 found that the conductivity of the shunt itself does not play a decisive role in determining the magnetoresistance, but rather the ratio of the shunt and semiconductor conductivities, σ_M/σ_S. When the ratio of the shunt to semiconductor conductivity is increased above 10⁷, the resistance of the device changes little and the magnetoresistance saturates. As such, it is not critical to find extremely high-conductivity metals or superconductors in order to observe a strong effect. However, if the conductivities of the two materials are similar, the resistance of the device increases rapidly and the magnetoresistance and sensitivity of the device approach zero, as no current flows through the shunt. Both papers claim a threshold in the conductivity ratio of 10⁴, above which the device produces a significant EMR effect. 78 Hewett 77 and Nunnally 60 performed the same study by modeling concentric vdP disc and bar-shaped devices, respectively. In these papers it was reported that the magnetoresistance response began to saturate when the shunt was only approximately 100 times more conductive than the semiconductor. Considering that the original paper by Solin 1 used a device with a conductivity ratio σ_M/σ_S of 2430, these results seem to be more realistic. 60 Nunnally conducted the aforementioned simulation for a two-terminal bar device and observed similar behavior in the magnetoresistance response. However, the saturation of the magnetoresistance begins when the shunt is only on the order of 10 times more conductive. 60 Interestingly, there is a finite magnetoresistance in the device regardless of the conductivity of the shunt. The two-terminal configuration may, therefore, be more appropriate for material systems with a low conductivity contrast.

Interface Contact Resistance

The effect of a contact resistance between the materials was considered numerically in several papers for cases featuring ohmic contacts 61 and Schottky barriers. 59,64,77,78 In addition, the shunt has also been considered to pin the Fermi level of the graphene, forming a p/n junction in the graphene in the vicinity of the graphene/metal interface (ref Bowen). In the modeling of ohmic contacts, a contact resistance was applied as a constant value to the interface between the two materials. 61 For the simulations involving a Schottky barrier, a thin material region was placed between the two materials, with a thickness of either 0.3% 59 or 1% 64,77 of the semiconductor thickness. To this material region a resistivity tensor parameterized by the specific interface resistance was applied; unlike the magnetoresistivity tensor, the interface resistance is treated as independent of the magnetic field. For the case of ohmic contacts it was found that as long as the contact resistance was lower than 10⁻⁸ Ω·cm² there was no influence on the magnetoresistance, but above this value the magnetoresistance decreases in an S-shaped curve, dropping by a factor of 100 at 10⁻⁵ Ω·cm² (see Figure 3.12). It is interesting to note, however, that the sensitivity, while displaying the same behavior as a function of contact resistance, remains high until a contact resistance of 10⁻⁶ Ω·cm² before decreasing. 61 Models which considered the contact resistance with a Schottky barrier found that very low values of the contact resistance do not affect the resistance of the device, but once the contact resistance approaches 10⁻⁶ Ω·cm², the magnetoresistance depends critically on the value of the contact resistance. 77,78
At intermediate or high values of the contact resistance the current is effectively blocked from entering the metal, causing it to flow through the semiconductor, which increases the resistance of the device. It should be noted that the critical value of the contact resistance depends on the geometry of the device, specifically the width of the semiconductor layer, with the trend that the critical value of the contact resistance increases as the width of the semiconductor decreases. Additionally, the authors also observed a shift in the peak sensitivity of the device as a function of contact resistance for different widths of the semiconductor layer; i.e., the sensitivity is not highest at zero contact resistance but at some small but finite value of the contact resistance which varies with the width of the semiconductor layer. 78 These findings were consistent with previous experimental reports of EMR devices. 47,50,78 Holz et al. 78 also estimated the contact resistance for the best-case scenario of a high-mobility 2DEG with the possibility of ballistic transmission of electrons via the Sharvin resistance, R_S = (h/2e²)·π/(k_F·w), where k_F = √(2πnₛ) is the Fermi wave number of the 2DEG, nₛ is the sheet carrier density, and w is the width of the semiconductor-metal interface. The specific contact resistance is then given by the Sharvin resistance multiplied by the interface area w·t, where t is the thickness of the quantum well. For an interface width of w = 200 µm and a quantum well thickness of t = 4 nm they estimate a realistic lower bound for the contact resistance of 8.5×10⁻⁹ Ω·cm². The role of contact resistance in EMR devices was also studied experimentally by Möller et al. on InGaAs/InAs/InGaAs quantum wells. 47 Several bar-shaped devices of varying feature sizes and contact quality were produced for the experiment. Figure 3.16 shows the performance of two devices with nearly identical zero-field resistances but different feature sizes (200×20 vs. 20×1.9 µm) and contact resistivities (1.6×10⁻⁸ vs. 7×10⁻⁸ Ω·cm²). Although the second device was an order of magnitude smaller in physical size, its higher contact resistance resulted in an overall zero-field resistance similar to that of the larger device with higher-quality contacts. At 1 T, the devices showed similar magnetoresistances of 1950% and 1290%, demonstrating that at high fields the properties of the semiconductor dominate the electron flow and the measured magnetoresistance is thus mainly determined by the zero-field resistance. However, the slope of the response is significantly higher at small fields for the low contact resistance device, suggesting that in the low-field regime the interface resistance plays a crucial role in determining the effectiveness of the metal shunt. The same study also examined the interplay between the semiconductor width and the specific contact resistance in determining the zero-field resistance and sensitivity (see Figure 3.17). In these experiments all of the devices were 200 µm long but differed in terms of the semiconductor width and the quality of the ohmic contacts. For the devices with high-quality contacts, the zero-field resistance decreases sharply as the semiconductor is narrowed, since the current has to travel through a shorter length of the higher resistance material. The effect of the contact resistance can be clearly seen when comparing devices of similar widths but different contact quality. Despite the relatively small difference in size between the 7 and 9.6 µm-wide devices, the contact resistivity is 4.5 times higher in the latter.
The resulting zero-field four-terminal resistance, however, is one order of magnitude greater for the device with lower quality contacts. Another important observation is that for the devices with an intermediate contact resistivity of 7×10⁻⁸ Ω·cm², the zero-field resistance remains roughly invariant when changing the width from 10 µm to below 2 µm. This suggests that the current passes efficiently into the gold shunt for all small semiconductor widths, so that the four-terminal voltage drop in the device center does not vary significantly. The resistance is, however, increased when the contact resistance is increased to 11×10⁻⁸ Ω·cm². The authors also observed that the sensitivity is improved by decreasing both the semiconductor width and the contact resistivity. It can be seen from the results detailed above that the contact resistance is a key parameter in determining the performance of EMR devices. Solin directly addresses the influence of contact resistance in his discussion of nanoscale scanning EMR magnetometers, stating that for applications where there is a need for high spatial resolution, this requirement may dictate the minimal size of the active area and the choice of materials, thus rendering the specific contact resistance even more crucial. 5 In this case, the nanoscopic sizes may reduce the extraordinary magnetoresistance by lowering the efficiency of the metal shunt as well as increasing the noise through the increased resistance of the device. Despite the high mobilities which can be achieved with the InSb-based devices used for the nanoscale EMR sensors, Solin argued for using InAs for an improved contact resistance, since the material naturally forms ohmic contacts.

Materials

In the previous section we examined the critical material parameters which influence EMR device performance. It was found that a low carrier density, high mobility, high contrast in shunt/semiconductor conductivities, and low contact resistance are all important for producing EMR devices with a high magnetoresistance. In this section we will detail how different materials have been used in the literature to experimentally fabricate EMR devices. Table 1 in the introduction provides an overview of the active materials used in experimental EMR devices. The focus in the sections below will be on the material properties, the fabrication methods employed, and how the choice of materials contributed to the results. While we focus only on thin films, heterostructures and 2D materials, care should also be taken when selecting a substrate material to minimize undesired leakage currents through the substrate. 3,5

Thin Films

Thin films are a natural choice of material platform for EMR sensors, as typically only the perpendicular component of the magnetic field affects their performance, since electronic motion within these structures is more or less confined to a plane. In particular, thin films of the narrow-bandgap III-V semiconductors InSb 1,2,4,6-9 and InAs 9-13 have attracted the attention of researchers in the field, as they possess some of the highest known room temperature electron mobilities among bulk materials as well as low contact resistances to metal shunts. InSb thin films can be prepared by various methods. They are typically grown on semi-insulating GaAs substrates due to their low conductivity and low cost, but the large lattice mismatch (14%) between the two materials produces a defect-rich region in the InSb close to the interface.
The presence of defects causes electron scattering and significantly lowers the mobility (ref). To counteract this, epitaxial InSb films with thicknesses exceeding 1 µm are used in order to obtain high carrier mobilities. All of the experimental works cited in this review reported InSb film thicknesses between 1 and 1.4 µm. As an alternative, described in the next section, more complex thin film stacks can be used in which buffer layers mitigate the mobility degradation caused by the dislocation defects without the need for a very thick active layer. Single-crystalline films are preferred to further avoid scattering at grain boundaries. These epitaxial layers are grown via metal organic vapor phase epitaxy 1,2 or molecular beam epitaxy 4, which have been used to produce EMR devices with high semiconductor mobility values of 45,500 and 38,000 cm²V⁻¹s⁻¹, respectively. While epitaxial thin films can possess excellent quality, the necessary equipment and fabrication are, however, expensive. Cheaper, albeit polycrystalline, thin films have been grown using thermal 6 and flash 7-9 evaporation methods. Both techniques involve vaporizing pure sources of the constituent elements and allowing nucleation to occur on a target substrate. EMR devices with relatively high semiconductor mobilities of 20,000 cm²V⁻¹s⁻¹ and 12,200 cm²V⁻¹s⁻¹ can still be achieved with films prepared via flash evaporation 8 and thermal evaporation, 6 respectively. Thin film mobility is an important parameter, but the fabrication methods used to shape the films into devices can also be a key determinant of device performance. The effect of device properties on performance can be seen by comparing the results of several published experiments with similar values of α, around 0.7, at 1 T. Solin et al. fabricated a device made from an epitaxial thin film of InSb which showed a zero-field resistance of 0.08 Ω and a subsequent magnetoresistance of 25,000% at X T. 1 The conductivity of the thermally evaporated film prepared by Suh et al. was only around half that of the epitaxial film, but the zero-field resistance of 1.5 Ω was approximately 30 times higher, resulting in a magnetoresistance of only 700% at X T. 6 To explain this discrepancy we can compare these two results to those from flash evaporated films prepared by El-Ahmar et al., which were shunted with two different methods: using either edge contacts or top contacts (see Figure XX). 8 Though the InSb film used for the edge-contacted device had a conductivity that was 40% higher than the film used in the planar device, the edge-contacted device showed a zero-field resistance of 0.9 Ω and a magnetoresistance of 800% at X T, compared to values of 0.05 Ω and 22,500% at X T for the top-contacted device. This comparison directly shows the importance of contact resistance and device construction for performance. The thermally evaporated and edge-contacted devices showed similar transport parameters when compared to the top-contacted device, yet the latter vastly outperformed the former and approximated the behavior of the epitaxial device. While the metal deposition methods used by Solin et al. may have been able to produce lower contact resistances, El-Ahmar et al. were able to reproduce similar results by increasing the contact area with a top-contacted configuration. Flash evaporated films were also produced by Mansour et al., 9
9 and though no characterization data are provided, the high zero-field resistance of 74 Ω and low magnetoresistance of 50% suggest the presence of a high contact resistance. While the high mobility of InSb makes it an attractive material for EMR applications, it should be noted that a Schottky barrier may form when it is contacted with metals. As the thermal energy to overcome the barrier is reduced at low temperatures, problems may arise especially for cryogenic applications. InAs is often used for EMR devices because, despite its lower mobility, it tends to form ohmic contacts with metals. 5 Preparation of InAs thin films is quite similar to that of InSb. Thicknesses of over 1 µm are also required for accommodating the lattice mismatch with GaAs, and growth techniques such as molecular beam epitaxy [10][11][12][13] and flash evaporation 23 have been used to produce films. Contact resistances as low as 1×10 -7 Ωcm 2 have been demonstrated in MBE-grown films. As with most semiconductors, the electron mobility of InAs has a strong temperature dependence. Sun et al. studied the effect of temperature on epitaxial InAs films. 13 The mobility was found to increase from around 8,000 cm 2 V -1 s -1 at room temperature up to a maximum value of 25,000 cm 2 V -1 s -1 at 75 K, below which the mobility decreased (see Figure 3.18). Above 75 K, scattering is dominated by lattice vibrations, but below this temperature scattering with charged impurities and lattice defects becomes the most significant factor. The performance of their two-terminal bar-shaped EMR devices followed the same trend, with magnetoresistance values increasing with decreasing temperatures until around 100 K. The highest magnetoresistances were reported for the shunted vdP geometry, with values at 1 T around 9,000% at room temperature and 30,000% at 100 K. Interestingly, Sun et al. reported the observation of intrinsic magnetoresistance in samples which were shaped into unshunted vdP discs (see Figure X). 13 The authors suggest that these results can best be explained by considering a two-band conduction model, as the presence of two carrier species has been demonstrated to produce intrinsic magnetoresistance. 79 Most of the materials discussed in this review are considered to possess only a single conduction band with only one carrier species. However, when measuring the Hall coefficient for their InAs samples the authors noted that the value was found to decrease with increasing magnetic fields and eventually saturated, instead of maintaining a constant value as would be expected for a material with a single conducting band. 13 The presence of a second conduction band can result from the formation of a charge accumulation layer at the surface of thin films as a consequence of native surface defects. This surface layer acts as a parallel conduction channel, with a carrier density and mobility which are different from those of the bulk. The thickness of the conducting layer was estimated using the Debye length to be around 33 nm at room temperature. A nominal mobility and carrier density for the entirety of the thin film were measured to be 8,160 cm 2 V -1 s -1 and 5.6×10 16 cm -3 . In a two-band model, which accounts for the contributions from both channels, the averaged zero-field mobility and carrier density for the film are given by μ avg = (n s t s μ s 2 + n b t b μ b 2 )/(n s t s μ s + n b t b μ b ) and n avg = (n s t s μ s + n b t b μ b ) 2 /[(t s + t b )(n s t s μ s 2 + n b t b μ b 2 )], where μ s and μ b are the carrier mobilities, n s and n b are the carrier densities, and t s and t b are the thicknesses of the surface and bulk layers, respectively.
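As a rough numerical illustration of this two-band averaging (a minimal sketch only), the short Python snippet below combines a surface and a bulk channel into film-level values using the layer parameters deduced by Sun et al. and quoted in the next paragraph; the bulk-layer thickness is not stated explicitly and is assumed here to be about 1 µm, so the result is indicative rather than exact:

# Two-band (surface + bulk) parallel-conduction averaging for a thin film.
# Layer values follow those deduced for the InAs films discussed in the text;
# the bulk thickness t_b is an assumption (the films are roughly 1 um thick),
# so the output only approximately reproduces the reported film-level values.
mu_s, n_s, t_s = 3360.0, 3.93e18, 33e-7     # surface layer: cm^2/Vs, cm^-3, cm (33 nm)
mu_b, n_b, t_b = 16000.0, 2.0e16, 1.0e-4    # bulk layer:    cm^2/Vs, cm^-3, cm (~1 um, assumed)

# Weighted sums entering the standard multi-carrier (two-band) Hall analysis
s1 = n_s * t_s * mu_s + n_b * t_b * mu_b
s2 = n_s * t_s * mu_s**2 + n_b * t_b * mu_b**2

mu_avg = s2 / s1                            # averaged zero-field mobility
n_avg = s1**2 / ((t_s + t_b) * s2)          # averaged carrier density

print(f"averaged mobility ~ {mu_avg:,.0f} cm^2 V^-1 s^-1")   # roughly 8,700
print(f"averaged density  ~ {n_avg:.1e} cm^-3")              # a few 10^16

With these inputs the averaged mobility comes out near 8,700 cm 2 V -1 s -1 and the averaged density at a few 10 16 cm -3 , in line with the nominal film-level values quoted above.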
The mobility and carrier density were deduced to be 3,360 cm 2 V -1 s -1 and 3.93×10 18 cm -3 for the surface layer and 16,000 cm 2 V -1 s -1 and 2×10 16 cm -3 for the bulk. The authors observed that at low fields the magnetoresistance of unshunted vdP discs followed a quadratic dependence, but at higher fields the response became linear. The crossover point between the quadratic and linear regimes occurred when μB = 1. The explanation for this behavior can be seen in the analytical expression for the intrinsic magnetoresistance of a material with two bands, Δρ/ρ 0 = σ s σ b (μ s − μ b ) 2 B 2 /[(σ s + σ b ) 2 + (σ s μ b + σ b μ s ) 2 B 2 ]. In the low-field regime μ 2 B 2 is small and the field-independent term dominates the denominator, resulting in a quadratic dependence on the field due to the B 2 term appearing in the numerator. At high fields, the field-independent term in the denominator becomes negligible and the magnetoresistance as a function of field strength saturates. 13 Regardless of the choice of material, one should take into consideration the possibility of multiple conducting channels when using thin films, as they can influence the overall transport behavior within the device. Multiple conducting channels or species can produce an intrinsic magnetoresistance effect in materials that otherwise would not display this behavior. A major impediment to the widespread adoption of EMR sensors based on the material systems described in most of the literature is the lack of commercially available systems capable of mass producing them. EMR devices were produced by Troup et al. using materials and processes that are standard in the silicon-based semiconductor industry. 21 The devices were fabricated by etching a doped Si wafer into mesas with a bar geometry and then sputtering Ti onto the sample (see Figure 3.19). The shunt region was created by rapid annealing in a Ni atmosphere at high temperature, which caused any unmasked areas to form a stack where the bottom 70 nm were a metastable state of C49 TiSi 2 , the next 10 nm were Ti 5 Si 3 , and the top 20 nm were Ti/TiN. The Ti/TiN layers were selectively etched, leaving behind only the titanium silicide phases and the Si. The remaining C49 phase is a material approximately 24 times more conductive than the n-doped Si. The highest magnetoresistance for this device was only 15.3% at X T, a value much lower than what was observed in other material systems. However, it is still interesting to note that a measurable effect can still be achieved in systems with relatively low values of the mobility and of the conductivity contrast between the shunt and semiconductor. Two-Dimensional Electron Gases (2DEGs) As shown in previous sections, materials with high electron mobilities are essential for realizing devices with a strong EMR effect. The highest recorded carrier mobility values have been observed in two-dimensional electron gases (2DEGs). Due to their exceptional mobilities these material systems are natural candidates for EMR applications. In a 2DEG the motion of electrons is tightly confined to a 2D plane as the energy for motion in the third dimension is quantized. Typically, this confinement potential is formed at a surface or introduced through bandgap engineering of semiconductor heterostructures. For the latter, materials are chosen such that they possess similar lattice spacings to minimize defects but different bandgaps. The bandgap mismatch can form discontinuities in the conduction and valence bands which can confine carriers in quantum wells in the vicinity of the interface (see Figure 3.20).
80 It further enables the donors to be spatially separated from the charges in the quantum well, which further reduces the scattering and enhances the mobility. Experimental devices based on 2DEGs started being produced shortly after Solin first discovered the EMR effect. 3,5,[14][15][16][17][18][19][20] Devices fabricated from 2DEGs have been reported with material combinations including InSb embedded in InAlSb 3,5 , InAs embedded in either InGaAs 15-17 , AlSb 18,19 , or GaSb 14 , in addition to the interface between InGaAs and AlGaAs. 20 In order to ensure epitaxy and high mobility, the heterostructure layers were all grown using MBE and then turned into EMR devices using etching and metal deposition to form side contacts. The electron mobility in the InSb/InAlSb heterostructures at room temperature (XX,XXX cm 2 V -1 s -1 ) was found to be lower than in bulk films of InSb (XX,XXX cm 2 V -1 s -1 ), despite both being prepared with MBE. 3 However, for InAs-based 2DEGs, several groups report electron mobility values exceeding those of thin films, with room temperature values as high as 21,000 cm 2 V -1 s -1 in InAs/InGaAs 16 and 31,000 cm 2 V -1 s -1 in InAs/GaSb. 14 At 4.2 K, the electron mobility reached values as high as 150,000 cm 2 V -1 s -1 and 194,000 cm 2 V -1 s -1 for InAs/InGaAs and InAs/GaSb, respectively. Due to the formation of ohmic contacts, contact resistances in InAs/InGaAs devices as low as 10 -8 Ωcm 2 have been reported. 15,17 Yet despite the high mobility values and low contact resistances observed in 2DEG EMR devices, there are few notable results with regard to device performance. Solin et al. showed an appreciable magnetoresistance effect at low fields in an InSb/InAlSb device with a 35% increase in device resistance at 0.05 T. 3 Möller et al. reported magnetoresistance values as high as 10 5 % in InAs/InGaAs devices at 1 T and X K. When the devices were tested at room temperature the magnetoresistance was found to be only 1,900% at 1 T. 15 Though many of the devices possessed relatively high sensitivities between 500-1000 Ω/T, 5,14,15,18,20,81 large zero-field resistances led to low magnetoresistances. For example, the zero-field resistances at room temperature were as high as 30 Ω in the device described by Möller et al. 15 , or 430 Ω in the case of the InAs/AlSb device described by Boone et al. 18 These values are orders of magnitude greater than what was seen in the best performing thin film devices. Although producing high-performing EMR devices from 2DEG systems has proven to be quite challenging, their low dimensionality and high carrier mobility allow for quantum mechanical phenomena to be observable at cryogenic temperatures. The clearest example can be seen in the results from Kronenworth et al., who compared the behavior of a bar-shaped device and an unshunted vdP disc fabricated from InAs/InGaAs heterostructures at 4 K. 46 Above 1 T strong Shubnikov-de Haas (SdH) oscillations could be seen in both devices (see Figure 3.22). The SdH effect is characteristic of high-mobility conductors at low temperatures and occurs due to the quantization of the energy levels of electrons in the presence of a magnetic field. The resistance curves for the hybrid and pure semiconductor devices are comparable at high fields, lending credence to the position argued by Solin that at high fields the shunt is no longer effective and the behavior of the device is solely determined by the properties of the semiconductor.
The EMR effect is clearly visible in the bar-shaped device where the resistance steadily increases in the low field regime whereas the resistance of the unshunted vdP device is nearly constant in this region. The onset of quantum mechanical behavior at high fields should therefore be considered when designing devices with high electron mobility as the resistance of the device may be dominated by non-EMR phenomena. 46 Figure 3.22: Low-temperature behavior of an EMR device (left) and unshunted vdP disc (right) made from InAs quantum wells as a function of magnetic field. 46 Boone et al. calculated the mean free path of electrons in InAs/AlSb heterostructures as a function of temperature. 18 It is interesting to note that although the mean free path of electrons was calculated to range from 80 to 325 nm as the temperature was lowered from 300 to 5 K, the EMR response increased monotonically with decreasing temperature. At low temperatures, the mean free path is larger than the smallest features of the device and thus a significant portion of electronic transport may occur in the ballistic or quasi-ballistic regime. Despite this, it is promising to observe that the magnetoresistive effect continues to persist into the ballistic regime. van der Waals Materials Like 2DEGs, van der Waals materials are natural candidates for EMR sensors because of the exceptional electronic properties found in some of these materials. Among vdW materials, graphene in particular has long been seen as an attractive candidate for EMR devices due to its intrinsically high carrier mobility and its ability to reach extremely low carrier densities. The first documented use of graphene in EMR sensors was reported by Pisana et al. in 2010. 22,23 Since then several works have been published describing EMR devices made from graphene. [24][25][26][27][28] Graphene has a thickness of only one atomic layer, and its transport properties are easily influenced by local conditions as the entire material is essentially a surface. Due to the lack of electronic screening, graphene is highly susceptible to charged impurity scattering and electronic doping. Even the choice of substrate can have a large influence, as phonons and impurities in the substrate material can scatter electrons in graphene. 82 Figure 3.23: SEM micrographs of a planar bar EMR device on a graphene flake (left) 22 and of a vdP disc graphene device (right). 24 The first graphene EMR devices were made by exfoliating graphene layers from graphite and depositing them onto Si/SiO 2 substrates. [22][23][24] Graphene-on-oxide systems have a relatively low electron mobility, typically ranging from 1,000 to 10,000 cm 2 V -1 s -1 due to strong scattering from optical phonon modes in the SiO 2 . Pisana 22,23 and Lu 24 also reported mobilities of 2,500 and 5,000 cm 2 V -1 s -1 in graphene devices on Si/SiO 2 substrates respectively. The electronic properties of exfoliated graphene can be exceptional under certain conditions, but the difficulty of their acquisition and the small size of obtained flakes makes the technique impractical for mass-production. Graphene can be grown over large areas using chemical vapor deposition (CVD), producing a material which is more amenable to serial device manufacturing, but the resultant films may contain many defects and grain boundaries 83 that can negatively affect electronic transport. Friedman 25 and El-Ahmar 24 both fabricated EMR devices using CVD graphene. 
Friedman produced a graphene on Si/SiO 2 device with an electron mobility of 1350 cm 2 V -1 s -1 , similar to the 1200 cm 2 V -1 s -1 of the graphene on SiC device reported by El-Ahmar. Recently, a significant improvement in the graphene device quality was achieved by encapsulating graphene in flakes of hexagonal boron nitride (hBN). 84 hBN is an ideal substrate for graphene for several reasons: First, it is an atomically flat, wide bandgap semiconductor whose surface optical phonon modes have high energies, which prevents them from interacting with electrons in the adjacent graphene. Second, there is a very small lattice mismatch between the two materials. Third, it is inert and possesses a low density of charged impurities. The hBN protects graphene from extrinsic disorder from sources such as water, adsorbed hydrocarbons, dangling bonds and trap states, leading to exceptional transport properties in encapsulated graphene such as micron-scale ballistic transport at room temperature 82,85 and a mobility 2 or 3 orders of magnitude larger than that of graphene-on-oxide systems. In encapsulated graphene devices, edge contact is made between graphene and metal, offering a lower contact resistance than top contacts. 85 This method has been used in other magnetometry applications, for example Hall bars with magnetic field resolutions as low as 50 nT/√Hz. 86 For EMR devices, Zhou et al. produced encapsulated graphene devices with a mobility of approximately 80,000 cm 2 V -1 s -1 . 27 Encapsulation is one method to preserve the properties of graphene, but other strategies can be used to avoid impurities and prevent interactions with the substrate. Suspending graphene over the substrate is another technique which has been commonly employed to isolate graphene. Kamada et al. produced suspended graphene Corbino discs by etching away a sacrificial layer of resist and demonstrated mobility values of around 100,000 cm 2 V -1 s -1 . 28 Figure 3.24: SEM micrograph and schematic of shunted vdP device fabricated from encapsulated graphene (left). 27 False color image and schematic of a Corbino disc device made with suspended graphene (right). 28 The effect of the local environment on the mobility and subsequent magnetoresistance can be directly observed in the performance of the various devices. For example, Lu et al. reported the best graphene-on-oxide devices with a magnetoresistance on the order of 10 4 % at 8 T. 24 On the other hand, the encapsulated graphene device presented by Zhou with the same filling factor showed a magnetoresistance at the same magnetic field strength on the order of 10 7 %. 27 Designing sensors with vdW materials requires careful consideration of the device architecture and how the environment around the active graphene layer will affect electronic transport. One of the key features of the electronic behavior of graphene is that the conduction and valence bands meet at Dirac points, thus making it a semi-metal. Both holes and electrons can be the majority carriers in graphene depending on the position of the Fermi energy relative to the Dirac point, as shown in Figure 3.25. The Fermi energy can be modified through the application of a gate voltage, which thus determines the number and sign of the dominant charge carriers. At the Dirac point itself the conductivity of graphene reaches a minimum and there are equal numbers of holes and electrons (charge-neutrality). 87
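To put rough numbers on this gate-tuning picture, the sketch below (a hedged illustration only) estimates the graphene sheet carrier density and conductivity as a function of back-gate voltage for an assumed 285 nm SiO 2 back gate and compares the result with a thin gold shunt; the oxide thickness, mobility, residual puddle density and gold thickness are all assumptions rather than parameters of the cited devices:

import math

# Illustrative estimate of gate-tuned graphene sheet conductivity vs. a metal shunt.
# All parameters below are assumed for illustration and are not taken from the cited devices.
eps0, eps_ox, t_ox = 8.854e-12, 3.9, 285e-9   # SiO2 back-gate dielectric (assumed 285 nm)
e = 1.602e-19
C_ox = eps0 * eps_ox / t_ox                   # gate capacitance per unit area (F/m^2)
mu = 0.5                                      # graphene mobility, m^2/Vs (assumed ~5,000 cm^2/Vs)
n_puddle = 5e14                               # residual electron-hole puddle density, m^-2 (assumed)
sigma_metal = 4.5e7 * 50e-9                   # sheet conductance of an assumed 50 nm Au shunt (S)

for dVg in (0.0, 1.0, 5.0, 20.0):             # gate voltage measured from the charge-neutrality point
    n = math.hypot(C_ox * dVg / e, n_puddle)  # carrier density, floored by puddles at charge neutrality
    sigma_g = n * e * mu                      # graphene sheet conductance (S)
    print(f"Vg - Vcnp = {dVg:5.1f} V: n = {n:.2e} m^-2, sigma_m/sigma_g = {sigma_metal / sigma_g:,.0f}")

The contrast σ m /σ g is largest when the Fermi level sits at charge neutrality and drops quickly as the gate moves the graphene away from the Dirac point, which is the same trend invoked below to explain why these devices perform best near charge neutrality.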
Natural defects and local potential fluctuations cause short-range disorder and result in the formation of electron and hole puddles, which ensure that there is always a nonzero carrier density above 0 K. 59 Graphene can show an intrinsic magnetoresistance due to the presence of these electron-hole puddles. 88 Lu et al. demonstrated this effect by fabricating vdP discs made of pure metal or pure graphene. The metal disc showed a magnetoresistance of 5% at 9 T, which is consistent with the theoretical prediction for a pure metal. In contrast, the graphene device showed a magnetoresistance of 300-500% at 9 T, but only near the charge-neutrality point; away from it no magnetoresistance was observed. 24 The intrinsic magnetoresistance that arises in graphene should be considered when operating sensors near the charge-neutrality point. Charge-neutrality can be located by scanning the back gate voltage and identifying where the lowest conductivity occurs. If the Dirac point is found far away from a back gate voltage of 0 V it could indicate a strong presence of extrinsic dopants. This can be seen in Figure 3.26, which shows the results of the graphene-on-oxide devices reported by Pisana 14 and Friedman 25 where the minimum conductivity in shunted vdP EMR devices was found at back gate voltages in the range of 20-30 V. Close to charge neutrality, however, the response became quadratic (see Figure 3.26). The authors propose two possible mechanisms that explain this behavior: either the presence of electron and hole puddles gives rise to an intrinsic magnetoresistance in the graphene, or the effectiveness of the shunt increases due to the higher resistivity of graphene at the charge neutrality point. Both of these mechanisms should produce a quadratic response according to theory. 14 The Dirac point can also be observed in the high-field data from Friedman (see Figure 3.26 and Figure 3.30). At low magnetic fields, electronic transport occurs mostly through the gold, resulting in an overall resistance which is low and has only a weak dependence on the applied gate voltage (Figure 3.26b). At higher fields, the current is expelled from the shunt and the overall behavior of the device is dominated by the properties of the graphene. In this regime, the resistance of the device varies significantly as a function of gate voltage, with the peak in resistance indicating the location of the charge neutrality point. 25 Moktadir et al. developed simulations which included electron-hole puddles and indicated that homogeneous graphene is expected to yield a better EMR response. 59 This suggests better performance away from the charge-neutrality point. However, this conflicts with experimental results reported by Lu 24 and Zhou 27 which demonstrated that graphene-based EMR devices function best when operated near charge-neutrality. Lu et al. showed that as the back gate voltage is moved away from the charge-neutrality point, the carrier density of the graphene increases, leading to a decrease in the ratio σ m /σ g . The decreased conductivity contrast in the device results in a decrease in both the magnetoresistance and sensitivity (see Figure 3.27). 24 Contact resistance is also an important factor to consider when designing graphene devices. Since graphene is a single atomic sheet, the architecture of the device can determine the type of bonding that occurs between the metal contacts and the graphene. In the work by Pisana et al.
14 metal contacts were deposited directly onto the graphene flake in a top-contact configuration, whereas Lu et al. 24 etched the graphene into the desired shape using an oxygen plasma and deposited the metal into the corresponding areas (see Figure 3.27). Converting the dimensions of Pisana's bar device into an equivalent filling factor 70 results in an α of 0.6. Although the electron mobility in the graphene used by Lu was only twice as high, the devices fabricated with the equivalent filling factor showed an order of magnitude improvement in the magnetoresistance; around 100% at 1 T compared to the 10% reported by Pisana. Although results were not given to compare the contact resistivities between the two studies, the difference in performance between the two devices may be explained in part by the contact resistance. Side contacts to graphene have been shown to produce lower contact resistivities since they allow both the pπ and pσ orbitals in carbon to contribute to electron transmission, compared to only the pπ orbitals when forming top contacts. 85,87 Pisana et al. reported contact resistances of 3.7×10 -6 Ωcm 2 , which is above the threshold calculated by Sun et al. (see Figure 3.12). 61 On the other hand, edge-contacted graphene devices in the literature have been reported with contact resistances on the order of 10 -9 Ωcm 2 . 85 While the contact resistance for Lu's devices is not reported, a lower contact resistivity could be formed by etching into the graphene and exposing the edge, which would lead to an improvement in the performance of the device. Graphene encapsulation with hBN also forms edge contacts, which may also explain the low zero-field resistances and high magnetoresistances observed by Zhou et al. 27 Although the magnetoresistance varied significantly between the devices described by Pisana and Lu, the reported sensitivity was 1000 Ω/T in both cases. This is also in good agreement with the simulations presented by Sun et al., 61 which suggest that the sensitivity is more robust to changes in the contact resistance and can remain high even at values around 10 -6 Ωcm 2 . Resistance at the interface between graphene and metal depends not only on the type of bonding that occurs, but also on the relative Fermi energies of the two materials. Zhou et al. found that the zero-field resistance not only changes with gate voltage, but is asymmetric about 0 V (see Figure 3.29a). The authors posit that this phenomenon occurs due to Fermi-level pinning, where the large density of states in the metal pins the Fermi level in the adjacent graphene, producing a region near the interface that is n-doped. However, in the bulk of the graphene the Fermi level is mostly controlled by the back gate voltage. Thus, when the bulk of the graphene is p-doped, a p/n-junction is formed near the shunt, effectively increasing the zero-field resistance. 27 Computational modeling was used to estimate the length scale over which the Fermi level relaxes, which was determined to be on the order of 100 nm (Figure 3.29b). 27 Kamada et al. determined that the formation of a p/n-junction affected the ability to accurately measure the mobility of holes and electrons in graphene Corbino discs. Only by gating the graphene with a strong bias voltage could the effect of the p/n-junction be eliminated. 28 The high electron mobility of graphene means that one has to consider quantum mechanical phenomena, particularly at low temperatures where phonon modes are suppressed and the mean free path of electrons becomes large. Zhou et al.
observed negative resistances, which may be a signature of ballistic transport (see Figure 3.29a). 27 Friedman et al. observed oscillations in the resistance of their devices as the gate voltage was scanned at high magnetic fields and cryogenic temperatures, and posited that they are an example of quantum transport. These oscillations are clearest for the case of an unshunted vdP sample (Figure 3.30) but can also be observed to a lesser extent in shunted vdP devices (Figure 3.26b). 25 In addition to single-layer graphene, EMR devices made from bilayer graphene and Bi 2 Se 3 have also been reported. 24 Bilayer graphene on SiC was found to have a higher mobility than monolayer (5,000 vs. 3,000 cm 2 V -1 s -1 ), likely due to electronic screening of charged impurities provided by the graphene layer in contact with the substrate. The device made with bilayer graphene showed an order of magnitude increase in magnetoresistance compared to the monolayer graphene device. No explanation is provided by the authors, and while the mobility of the bilayer graphene is higher, the difference in the carrier mobility of the two devices is not sufficiently large to fully account for the improvement. It is possible that factors such as a lower contact resistance in the bilayer device played a role. Bilayer graphene has been demonstrated to produce lower contact resistances than monolayer graphene by increasing the number of edge contacts. 87 Bi 2 Se 3 is not strictly a vdW material but can be exfoliated in the same manner. The measured magnetoresistance was extremely low, likely due to its low electron mobility of 50 cm 2 V -1 s -1 . 24 Magnetometers A key application of extraordinary magnetoresistance is to form the basis of magnetometers that can detect magnetic fields using either simple 2-terminal or 4-terminal electrical measurements. The use of EMR devices as magnetometers has been investigated in several studies and will be reviewed in this section. Important benefits of EMR magnetometers are the potentially large range of conditions that they can be operated under, including wide ranges of temperatures and magnetic fields. In addition, as shown in the previous sections, EMR devices possess extremely rich possibilities for tuning their performance as magnetometers by varying the device geometry and material properties. Fabrication of Devices Bar-shaped geometries are often considered when fabricating EMR magnetometers as this geometry facilitates simple fabrication and even the realization of small sensors with nanoscale spatial resolution. 3,5,70 To further improve the spatial resolution, the voltage contacts are closely spaced and the metal shunt only contacts the high-mobility semiconductor in the region between the voltage contacts (contacts 2 and 3 in Figure 4.1). This region then becomes the active region, which determines the lateral resolution together with the distance between the sensor and the magnetic field source. The latter is improved by 1) placing the conductive layer close to the sensor surface and capping this with a nanometric insulating layer such as Si 3 N 4 to prevent shorting, and 2) decreasing the thickness of the active layer in the EMR sensor, e.g. using quantum well structures with high mobility obtained through modulation doping and reduction of dislocations using epitaxial buffer layers. 70 With this strategy, Solin et al. fabricated EMR sensors from InSb/AlInSb quantum wells with an active volume of 35x30x20 nm 3.
3,5 As shown in Figure 4.2, this nanoscopic sensor produced an asymmetric magnetoresistance of up to 150% at -1 T. EMR magnetometers based on graphene, InAs, and GaAs/AlGaAs have also been considered, 5,13,16,19,23 as well as other nanoscopic sensors. 19 Of particular concern is the contact resistance between the semiconductor or graphene and the metal shunt, which can severely lower the magnetic field sensitivity. Here either poor electrical contacts, Schottky barriers, or p/n junctions may form, 5,27 which can limit the operation of magnetometers, particularly at low temperatures. Therefore, special care should be taken in the choice of materials and during fabrication, as described by Solin et al. 5 Signal-to-Noise Ratio and Noise Equivalent Field The two key figures of merit for EMR magnetometers are the signal-to-noise ratio (SNR) and the noise equivalent field (B NEF ). The typical operation of EMR devices involves passing a constant current through the device while measuring the field-dependent voltage drop across either the current contacts in a 2-terminal configuration or separate voltage contacts in a 4-terminal configuration. The SNR is then defined as SNR = V signal /V noise . When measuring small magnetic field differences (B − B b ) around a bias magnetic field (B b ), the voltage signal can be approximated by 89 V signal ≈ I (dR/dB)(B − B b ), with the sensitivity dR/dB evaluated at the bias field. Here, the bias magnetic field may be used to tune the EMR devices into the most sensitive region, or alternatively to provide a description for the case where a background magnetic field is present in the measurements. In contrast to conventional giant magnetoresistance devices, the typical EMR devices do not contain any magnetic elements, which eliminates the magnetic noise contribution and stray fields from the magnetic component. Noise in the EMR sensors is generally characterized by only two contributions: a frequency-independent thermal noise contribution and a low-frequency 1/f noise contribution. Depending on the material platform used, other contributions may also add to the noise, including generation-recombination noise; 90 however, these contributions have generally not been considered for EMR sensors. An example of the noise sources in an InSb EMR device is shown in Figure 4.3, which features both a low noise level and a low corner frequency of 140 Hz separating the 1/f and white noise regions. 16 The corner frequency may, however, vary widely and has been reported to exceed 5 GHz in InAs EMR devices with Ta/Au as the shunt metal. The thermal noise (V th ) is well established, with its origin in the thermal agitation of the charge carriers, and increases as the temperature and resistance are increased: V th = √(4k B T R V,2 (B b )Δf), where k B is the Boltzmann constant, R V,2 (B b ) is the two-terminal resistance of the voltage probes evaluated at the bias field and Δf is the measurement bandwidth. 89 In the case of a two-terminal EMR device, R = R V,2 , with this resistance including the resistance of the leads as well as any contact resistance in the circuit. For four-terminal devices, the device resistance R may be significantly different from R V,2 , as R is strongly affected by the relative placement of the current and voltage terminals and does not include the resistance of the leads or the contact resistance between the leads and the device. The 1/f noise contribution (V 1/f ) is less rigorously understood, but is often described using an empirical Hooge-type relation involving the bias electric field E applied to drive the constant current in the EMR device, the dimensionless Hooge parameter α H , and the electron mobility μ. 89
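As a rough numerical illustration (a minimal sketch under stated assumptions, not data from any published device), the short Python snippet below evaluates the thermal-noise floor for an assumed room-temperature sensor with a 30 Ω probe resistance, a 1000 Ω/T sensitivity and a 1 mA bias current, and converts it into the field detection limit obtained by setting SNR = 1 as discussed in the following paragraphs:

import math

# Thermal (Johnson) noise floor and field detection limit for a hypothetical EMR magnetometer.
# All device parameters are illustrative placeholders, not values from a specific published sensor.
kB = 1.380649e-23        # Boltzmann constant (J/K)
T = 300.0                # operating temperature (K)
R_v2 = 30.0              # two-terminal resistance at the voltage probes (ohm), assumed
dRdB = 1000.0            # sensitivity dR/dB (ohm/T), of the order quoted for several devices
I = 1e-3                 # bias current (A), assumed
df = 1.0                 # measurement bandwidth (Hz)

v_th = math.sqrt(4 * kB * T * R_v2 * df)     # thermal noise voltage in a 1 Hz bandwidth (V)
dB = 1e-9                                    # field change to resolve: 1 nT
v_sig = I * dRdB * dB                        # corresponding signal voltage (V)

snr = v_sig / v_th
b_nef = v_th / (I * dRdB)                    # field giving SNR = 1, in T/sqrt(Hz)

print(f"thermal noise          ~ {v_th * 1e9:.2f} nV/sqrt(Hz)")
print(f"SNR for a 1 nT step    ~ {snr:.1f}")
print(f"noise-equivalent field ~ {b_nef * 1e9:.2f} nT/sqrt(Hz)")

With these assumed numbers the thermal limit lands at a fraction of a nT/√Hz, which is of the same order as the few-nT/√Hz figures quoted for the better-performing InAs devices below.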
A key difference between the two noise contributions is that the thermal noise does not depend on the magnitude of the current, whereas the 1/f voltage noise increases linearly with current (V 1/f ∝ I). 19,16 Combining the signal with these two noise contributions over the measurement bandwidth gives the SNR, with a cross-over frequency between the 1/f noise and the thermal noise that grows quadratically with the applied current. When measuring weak magnetic fields without a bias (B b = 0), the SNR can be used to extract the noise equivalent field (B NEF ) in units of T/√Hz, representing the magnetic field strength which produces a signal strength equal to the noise value (SNR = 1). For the thermal noise regime this field detection limit is given by B NEF = √(4k B T R V,2 )/(I dR/dB), whereas in the low-frequency regime the noise-equivalent field is instead set by the 1/f contribution and therefore also depends on the measurement frequency and the Hooge parameter. In both noise regimes, a low magnetic field detection limit is obtained by increasing the sensitivity (dR/dB) while lowering the 2-terminal resistance at the voltage probes. In addition, it is favorable to increase the measurement frequency in order to operate in the thermal-noise-limited regime. In this regime, lowering the temperature and increasing the current further improve the field detection limit. The optimal current applied to the EMR magnetometer depends on the device geometry, constituent materials, and the application of the sensor. Solin argued that the maximum current in InSb-based EMR devices is limited by the onset of non-linear transport caused by a drop in the carrier mobility at high electric fields. 5 The optimal current is typically a trade-off between improved signal strength when increasing the current in the thermal noise regime and either current limitations of the sensor, such as the onset of non-linear transport, or the emergence of the current-dependent 1/f noise at the measurement frequency. With one exception, 16 the optimal current and the effect of current on the noise have not been identified for EMR devices. However, for an InAs device the white noise was found to be independent of the current up to approximately 0.6 mA, beyond which a moderate increase of 23% in the white noise was observed when increasing the current to 7.5 mA. In general, the noise equivalent field is typically estimated by calculating only the thermal noise from the resistance and temperature, hence excluding the impact of the current on the noise. Detection of Homogeneous Weak and Strong Fields The noise equivalent field serves as the figure of merit for detecting magnetic fields that are homogeneous within the spatial extent of the EMR sensor. For 4-terminal devices, the expressions for the figure of merit as well as the optimization of the EMR magnetometers share several similarities with the more developed Hall magnetometers. 90,92,93 The magnetometers can be optimized in terms of the measurement frequency, operating temperature, and probing current, as well as the material parameters in the hybrid device. For the latter, the key parameters are the mobility and carrier density of the high-mobility material, 89 the contact resistance to the metal, 61 as well as the conductivity contrast between the metal and the high-mobility material. 78 In contrast to Hall bars, where the Hall signal is largely independent of the geometry, EMR sensors intrinsically have a large geometric influence, as described in Section 2. An example is shown in Figure 4.4 for a bar-shaped InAs EMR device measured in 2- and 4-terminal configurations and compared to a Corbino disc. Here, the sensitivities and resistances of the devices using a 1 mA applied current are measured as a function of an applied homogeneous magnetic field.
The 2-terminal bar-shaped EMR device is found to have the largest sensitivity, exceeding the other two geometries by two orders of magnitude. The thermal noise was calculated from the resistance values and used to extract the noise equivalent field, yielding values down to B NEF = 10 nT/√Hz for magnetic fields ranging from approximately 0.2 to 1 T. At weak magnetic fields the noise equivalent field is worsened significantly due to the reduced sensitivity in the symmetric devices. In contrast, a non-vanishing sensitivity can be obtained in asymmetric devices, where the asymmetry is typically formed by IVIV contact configurations rather than the IVVI configuration as described in Section 2. Other studies report noise equivalent fields in EMR devices of 0.01-2200 nT/√Hz where the noise is calculated in a similar manner 5,20 (see Table 1 for further details), with the lowest bound calculated using a high current of 100 mA. However, in a 20 µm x 8 µm InAs EMR device, the noise and its dependence on the current were measured, yielding a magnetic field sensitivity of around 1.3 nT/√Hz at 0.8 T. Of particular interest for magnetic field sensing are the micro- and nanoscopic EMR sensors such as the one shown in Figure 4.1, where the small sensor size yields a higher degree of field homogeneity across the sensor as well as a high spatial resolution if employed in a scanning EMR magnetometer. The noise equivalent field has been calculated for 35 x 30 x 25 nm 3 EMR sensors made from InSb/AlInSb, which gives a value of B NEF = 4.1 µT/√Hz at a field of 9 mT and a probing current of 2.2 µA. 5 Predictions for similar sensors made from InAs gave B NEF = 2.4 µT/√Hz at a field of 9 mT and a probing current of 3.8 µA. Larger microscopic sensors with dimensions of 1000 x 1000 x 25 nm 3 result in noise equivalent fields of 10-20 nT/√Hz, which was found to compare favorably to Hall sensors of identical size based on 2DEGs in GaAs/AlGaAs heterostructures. 5 The potential and challenges of using such nanoscale EMR sensors in scanning magnetometers with nanoscale resolution have been addressed by Solin. 5 Inhomogeneous Magnetic Fields Little is known about how EMR sensors react to inhomogeneous magnetic fields that vary significantly across the EMR sensor. Only a single study exists that assesses the capability of EMR devices to sense local magnetic fields, 52 i.e. fields smaller than the physical size of the sensor. The device investigated was a bar-shaped semiconductor/metal hybrid device intended for sensing extremely localized magnetic fields. Its characteristics were investigated using a numerical finite element model identical to that of Moussa et al. 32 The authors considered a localized magnetic dot with a spatial extent equal to the width of the semiconductor region in the device. The bar-shaped EMR device was long and thin, with a length-to-width ratio of the semiconducting layer of 50, as shown in Figure 4.5. Using an asymmetric voltage probe configuration within the bar-shaped device, the finite element model revealed that for the ideal position of the magnetic dot, the magnetoresistance in a ±50 mT field is 18%. While this is much lower than the 134% obtained if the magnetic field were homogeneous across the entire device, it is nevertheless considerable given that the magnetic dot covers only 1/60 of the semiconductor area. Conclusion and Perspectives EMR remains a relatively unknown species in the zoo of different magnetoresistance classes.
In contrast to the majority of other magnetoresistance phenomena, EMR finds its root in a geometrically enhanced deflection of the current upon the application of a magnetic field. The effect has been realized in hybrid devices consisting of metal inclusions inserted into a high-mobility material. To date, the majority of EMR devices are based on graphene or III-V semiconductors with non-magnetic metal inclusions. The magnetoresistance in EMR devices has been shown to exceed 10 7 % at room temperature. 1,3,27 Although the current deflection has not yet been imaged experimentally, the origin of the EMR effect is considered to be well understood, as justified by the good agreement between experimental magnetotransport data and both analytical and numerical models. Numerical modeling continues to be a valuable tool for investigating how variations in material parameters and geometry impact performance, as well as for guiding and understanding experimental progress. Yet, most numerical and experimental studies published to date aim at describing the magnetoresistance in the device, and efforts to push the magnetic field detection limit in EMR magnetometers have been scarce. Beyond magnetic field sensing, a range of other perspectives in EMR remains relatively untouched. Particularly interesting areas include the use of EMR devices for magnetic switches and magnetometers, the cross-over from the extensively studied diffusive devices to the sparsely studied ballistic devices, as well as the family of other extraordinary phenomena that have developed in the wake of the discovery of the EMR effect. These are discussed briefly in the following subsections. Magnetoresistive Switches and Magnetometers The EMR effect can be harnessed for developing new switches and magnetometers with ample degrees of freedom to tailor the performance through geometric and material optimization. For magnetic switches, the presence and absence of magnetic fields form a magnetically induced high on/off ratio in the electrical resistance of the device. This is of interest in, e.g., position sensing, where the presence of magnetic objects can be electronically detected in the EMR devices, which may reveal positional information. Another application can be to magnetically induce a redirection of current as well as to close and open connections in a circuit, which can be used for instance in solid-state switches. As observed by Solin et al. (ref), magnetic switches can offer a low on-resistance, a high on/off ratio, no moving parts, and a fast switching speed, provided that the magnetic field can be applied and removed quickly. Here, optimization of the geometry may be used to customize the performance, as exemplified by Solin et al. (ref) where the onset field for switch operation is shifted from around 0.05 to 0.4 T by increasing the radius of the inner metal disk in concentric circular EMR devices. Another example is presented by Erlandsen, where geometrically shifting the inner metal disc in a shunted vdP device produces an EMR response in which negative magnetic fields turn the device off and positive fields turn the device on (ref). EMR devices can also be used as magnetic field sensors by sensing magnetically induced changes in the resistance, similar to the extensively used Hall sensors and giant magnetoresistance sensors. This allows for a convenient all-electronic solid-state magnetometer with no moving parts that can operate in a wide temperature and magnetic field range.
In contrast to giant magnetoresistance sensors, the general EMR sensor does not contain any magnetic elements, which eliminates magnetic noise, enables use at higher magnetic fields, and allows the sensors to be made smaller without risking a spontaneous superparamagnetic spin reversal in the sensor. Despite the very immature state of EMR sensors, they already show a promising magnetic field sensitivity, generally down to a few nT/√Hz, 16 with ample room for improvement through optimization of both materials and geometry. The possibility of using EMR sensors in 2-terminal mode as well as the large design space for sensor optimization set EMR apart from Hall sensors. The geometry of Hall sensors generally has no impact on the Hall resistance, and the geometry primarily influences the noise level, where scaling down Hall sensors results in increasing noise. 90,93 Instead, optimization of Hall sensors is typically achieved either by lowering the sheet carrier density of the active area to increase the signal strength or by increasing the mobility to decrease the noise level. This combination of material properties makes graphene and semiconducting 2DEGs the most promising material platforms for Hall sensors. As described in the previous section, EMR sensors have a higher ceiling for magnetic field resolution as they can be optimized to a very large extent by using both the material and geometric parameters of the sensor. Diffusive vs. Ballistic Transport While most EMR studies have been carried out in the diffusive transport regime, Zhou et al. found that ballistic transport may contribute significantly to MR values as R 0 becomes extremely small and approaches zero. 27 This offers a new direction to study EMR in an almost entirely uncharted territory of EMR magnetometry. This experimental discovery, however, contradicts Solin's prediction that the MR value would be orders of magnitude lower in the ballistic regime (ref). The experimental transition from the diffusive transport regime to a quasi- or fully ballistic transport regime can be achieved through a variety of means that increase the mean free path of the carriers beyond the characteristic lengths of the devices, including increasing the carrier mobility, reducing device sizes to a few micrometers or less, or decreasing the temperature. Finite element simulations only apply in cases where transport occurs in the diffusive regime; to study the EMR effect in the ballistic regime, semiclassical trajectory simulations and multiscale tight-binding calculations similar to those of Calogero et al. 94 could be used. Other EXX Phenomena The geometry of metal/semiconductor hybrid devices can have a profound impact on devices beyond magnetoresistive sensors. It is possible to change the resistance of hybrid metal/semiconductor devices by subjecting them to a variety of external stimuli. These phenomena are collectively known as the EXX family of effects, of which extraordinary magnetoresistance (EMR) is just one of the known members. Other documented effects include extraordinary optoconductance (EOC), extraordinary piezoconductance (EPC), and extraordinary electroconductance (EEC). 56,95 The extraordinary magnetoresistance effect as reviewed here remains the most studied phenomenon, but progress in this field may translate to the other EXX family members as well. In extraordinary optoconductance, the semiconductor/metal hybrid device is irradiated with a focused laser, which excites carriers across the bandgap.
Electrons and holes thus form and undergo diffusion. As the electron mobility in III-V semiconductors often exceeds that of holes, the spatial distribution of photo-generated electrons is wider than the hole distribution, leading to a spatial fluctuation in the net charge and an associated electrostatic potential. The potential difference measured on two voltage contacts depends on the position of the laser beam with respect to the voltage contacts, leading to a light sensor with positional detection of the incoming light. The diffusion of electrons, and hence the measured voltage, can be modified by metallic inclusions with ohmic contacts to the semiconductor. This effectively constitutes an extraordinary light sensor as realized in the conventional bar-shaped geometry. 45 As in the case of EMR, the performance of extraordinary optoconductive sensors may be improved through appropriate geometrical optimization. In extraordinary piezoconductance, 45 the interface between the semiconductor and the metal shunt forms a Schottky barrier that electrons can tunnel through. As the device is subjected to strain, the interatomic distances vary, which changes the height of the Schottky barrier. Tunneling probabilities through the barrier have an exponential dependence on the barrier height, so the small variations induced by external strain result in a measurable change in resistance. The Schottky barrier between the semiconductor and metal inclusion is also at the heart of the extraordinary electroconductance effect. 45 In contrast to extraordinary piezoconductance, variations in the Schottky barrier are induced by the application of an electric field in electroconductive devices. As with EMR and EOC, the ability of both piezoconductive and electroconductive sensors to measure strain or electric fields may also be optimized through an appropriate choice of geometry. Overall, extraordinary magnetoresistance and the related phenomena constitute an exciting field where the geometric design of the sensor can be used as a versatile route for tailoring performance. In particular, this may be realized through the use of advanced numerical tools featuring inverse modelling to navigate through the countless possible geometric variations. To date, however, the landscape is to a high degree uncharted territory which holds plenty of opportunities for future development within the field.
Correlations in the cotunneling regime of a quantum dot Off-resonance conductance through weakly coupled quantum dots ("valley conductance") is governed by cotunneling processes in which a large number of dot states participate. Virtually the same states participate in the transport at consecutive valleys, which leads to significant valley-valley conductance correlations. These correlations are calculated within the constant interaction model. Comparison with experiment shows that these correlations are less robust in reality. Among the possible reasons for this is the breakdown of the constant interaction model, accompanied by "scrambling" of the dot as the particle number is varied. I. INTRODUCTION One of the interesting aspects of the physics of quantum dots is the mechanism of cotunneling [1] which governs transport through the quantum dot away from resonance ("conductance valley"). Such a mechanism, which usually gives a small (as compared with resonance values) yet significant contribution to the conductance, consists of the use of a large number of virtual dot states, which, due to high electrostatic energy, are classically forbidden. As one varies an external parameter (e.g. applied magnetic field), these are virtually the same (albeit possibly modified) states that contribute to the transmission, giving rise to significant conductance correlations [2]. In a recent work [3] we have pointed out that such correlations show up in the transmission phase as well. Here we study the conductance at different valleys (corresponding to a different number of electrons on the dot, a parameter which is controlled by an applied gate voltage) and calculate the ensuing conductance correlations. Our formalism, outlined in the present section and throughout the next, follows that of Aleiner and Glazman [2]. We evaluate the significant valley-valley conductance correlations within the constant interaction model, as a function of the strength of the interaction, the location in the valley and the temperature (Section III and Appendix A). A simplified toy model which, in our opinion, captures much of the pertinent physics, is presented and studied in Appendix B. Comparison with the experiment (Section IV) reveals that in reality these correlations are less robust. Possible reasons for this are discussed in Section V, where we stress the likelihood of the breakdown of the constant interaction model, accompanied by "scrambling" of the dot states as the particle number is varied. We consider transport through a quantum dot weakly coupled to reservoirs by high tunneling barriers [4,5]. The chemical potential µ of the quantum dot can be tuned by an applied gate voltage V g . Because of the small size of these structures, charging effects are important; the weak coupling to the external reservoirs (leads) implies that, in general, there is a definite number of electrons on the dot.
The conductance of the quantum dot as function of V g shows nearly equally spaced peaks. These resonances are due to sequential tunneling of electrons through the dot. At these values of V g the ground state energies of the system with the dot containing N and N + 1 electrons are degenerate and the occupation number can fluctuate. In the "valleys" between the resonances the conductance is strongly suppressed due to the charging energy: it costs energy to add or remove electrons to or from the dot, and the number of electrons on the dot is fixed to an integer value. There is however a residual conductance in the valley between the peaks which is caused by a different transfer process. An electron or hole tunnels virtually from one reservoir to the other through energetically forbidden states. Since the particle tunnels coherently through the whole structure this transfer mechanism is known as cotunneling [1]. In the following we will focus on this regime away from the resonances. We model the quantum dot coupled to the reservoirs by the usual Hamiltonian H L,R describe the reservoirs to the left and right of the quantum dot, H T represents the tunneling of electrons in and out of the quantum dot, and H QD describes the states of the isolated quantum dot including the electron-electron interaction. We use the simplifying "constant interaction" model which asserts that the total energy due to the electron-electron interaction solely depends on the total number of electrons on the dot (N is the number operator). The charging energy is given by U = e 2 /C invoking the electrostatic energy of a classical capacitance C. The coupling strength of level j of the quantum dot to the leads is characterized by the tunneling rates Γ j = 2π/h k |V j,k | 2 δ(E−ǫ k ). For the calculation of transport properties we need the retarded Green function of the quantum dot coupled to the leads, In the regime of weak coupling Γ ≪ kT, ∆ (temperature T , mean level spacing ∆) we approximate G ret by with ... N = tr N exp −βH QD .../tr N exp −βH QD the thermal average with N electrons. Here ∆ is the single particle level spacing. The probability to find N electrons on the dot is given by Away from the resonances eqn. (6) describes the elastic cotunnling, i.e. the virtual tunneling via a single state j of the quantum dot. This is the dominant process in the regime kT < √ U ∆ [1] which we are focussing at. In addition, (6) also describes the dynamics in the vicinity of the resonances on the same level as an master equation approach of ref. [6]. For the Green function G ret we note that in the cotunneling regime away from the resonances there is an integer numberN of electrons on the dot, and P N = δ N,N in (6). Hereafter we attach an index N to a valley, corresponding to the number of electrons on the dot over that range of V g . The resonance separating the valleys N − 1 and N will be denoted by (N − 1, N ). It is convenient to shift the reference point of the chemical potential µ to the resonance Eqn. (6) contains canonical occupation numbers. In the following we will use the grand-canonical occupation numbers, i.e. the Fermi functions, instead. We account for the difference by using an effective inverse temperature β GC ≈ β(1 + β∆/4) [7] which is justified for β∆ ≪ 1. We then obtain with ω = ǫ j Since we are in the cotunneling regime (large denominators), we neglect the tunneling rates iΓ in the denominators in eqn. (6). 
The first (second) term describes the occupied (empty) states with energies smaller (greater) than ǫ N . The minimum energy necessary for hole (particle) transfer is We should keep in mind that eqn. (7) is valid for x sufficiently away from the resonances at x = 0, 1. Since the number of electrons N starts to fluctuate on an energy scale kT around the resonances the range of validity of eqn. (7) is II. FORMALISM We will briefly review the formalism of Aleiner and Glazman [2] for the calculation of transport properties of a disordered quantum dot in the cotunneling regime. Hereafter "disorder" should be understood as caused either by impurity scatterers or by shape irregularities in the confining potential. In both cases the statistical properties of the single-particle levels are described by random matrix theory (RMT) (for a review see [8]): over energy intervals smaller than the Thouless energy E T h , the tunneling matrix elements and the level spacing are strongly fluctuating. We start with the expression for the transmission amplitude t(E) containing the retarded Green function of the dot G ret and the tunneling matrix elements 2πρ L (E) with ρ L the density of states in the left reservoir. Since in the elastic cotunneling regime the electrons or holes are transferred through one single dot state, only the diagonal elements in (5) have to be accounted for. The tunneling matrix elements V j,k(E) in the Hamiltonian (4) contain the overlap of the lead wave function with the dot wave function at the barrier [9]. We assume that the tunneling matrix elements factorize into a part describing the tunnel barrier with its penetration factor, and a dot part. The dot part involves the wave function of state j at the barrier, ψ j (R). Thus, we have for the left barrier V L j,k(E) = V L ψ j (R L ). The separation is justified when the properties of the barrier only weakly change on the typical energy scale, U , of the system. Eqn. (8) then becomes with ρ L,R the density of states in the leads. For the last step in (9) it is essential that G ret j only depends on j via ǫ j , cf. (7). Aleiner and Glazman identified the sum over j as the local density of states of single-particle levels and expressed it in terms of Green functions of the non-interacting dot Impurities on the quantum dot only affect the single-particle states. With (10) the transmission amplitude is expressed as a convolution of two types of Green functions, One term in the integral is the cotunneling Green function G ret of a quantum dot with interaction (but no explicit dependence on disorder dependent quantities); the other term includes the advanced and retarded Green functions G A and G R of the system which account for disorder, but do not include interaction. We would like to stress that this separation into interaction and disorder dependent terms respectively is only possible due to the constant interaction model. The transmission amplitude t(E) is linked with the linear conductance G through the quantum dot by the Landauer formula [10,11]. Off the resonances G = e 2 /h|t(E F )| 2 even for finite temperature. Averaging over disorder only affects the single-particle Green functions G R,A defined for non-interacting electrons. For weak disorder, k F l ≫ 1, a diagrammatic expansion can be established [12]. The averages G R G R and G A G A reduce to the product of the average of the single particle Green functions; as long as the dot's size L = |R L − R R | is larger than the mean free path l such terms may be neglected. 
The remaining average ⟨G^R G^A⟩ can be expressed in terms of Diffusons and Cooperons, with ρ_D the density of states on the dot. Diffusons (Cooperons) depend only on the difference (sum) of the magnetic fields B_1, B_2, and are the solutions of diffusion equations. In momentum space (R_L ↔ k, R_R ↔ k̄) these solutions read as in eqns. (14) and (15), where q = k − k̄ and Q = k + k̄. The area of the quantum dot is denoted by S, and Ω_+ = B²E_Th S²/Φ_0², with Φ_0 = hc/e the flux quantum. The Cooperon contribution is maximal for Q = 0, i.e. for the coherent backscattering process k = −k̄. In contrast to the Diffuson, the Cooperon is suppressed for a magnetic field greater than a characteristic field B_c ∼ Φ_0/S. The lowest non-vanishing eigenvalues of the diffusion operators are of the order of the Thouless energy E_Th. For energies E < E_Th only the zero-frequency mode must be retained, which corresponds to spatially homogeneous solutions in (14) and (15) (q, Q → 0). We focus on the regime ∆ ≪ E_h, E_p < E_Th. The first inequality assures that we are allowed to use the diffusion approximation. Since the relevant energy scale (the charging energy) is smaller than the Thouless energy E_Th, which is the case for a dot with not too strong disorder or boundary deformations, we only have to consider the spatially homogeneous zero mode q → 0. The sum of Diffusons in momentum space then becomes eqn. (16). Aleiner and Glazman have obtained the full distribution function of G [2] and have found an expression for its variance.

III. COTUNNELING CONDUCTANCE CORRELATIONS

In [2] the conductance-conductance correlation for different magnetic fields but the same chemical potential was studied. Here, we will calculate ⟨G(N_1, x_1) G(N_2, x_2)⟩ between different values of the chemical potential (N_1, x_1) and (N_2, x_2) but fixed magnetic field, for the two cases B = 0 and B ≫ B_c. We are especially interested in the correlation function for N_2 = N_1 + ∆N and x_1 = x_2, i.e. for different valleys but at the same position within the valleys. In order to make contact with the experiment [13] we consider the normalized autocorrelation function C of eqn. (18), for U < E_Th. We expect to find strong correlations, since from one valley to the next only one level changes its character from occupied to empty (or vice versa) [3]. For instance, the first empty state in the N-th valley is the last occupied state in the (N + 1)-st valley. We start, as shown in the previous section, by separating interaction from disorder, with K = (e²/h)|V_L|²|V_R|²ρ_L ρ_R. Since the energy scale (U) involved in this problem is much larger than the mean level spacing (∆), the average of the product of four Green functions can approximately be decoupled into a product of pairwise averaged Green functions [2]. In a diagrammatic expansion this approximation corresponds to keeping a certain class of diagrams, which also turn out to be relevant for the universal conductance fluctuations [14,15]. As in the previous section we will neglect terms like ⟨G^R G^R⟩ and ⟨G^A G^A⟩. All possible pairings of the remaining averages in terms of Diffusons and Cooperons are given by eqn. (20). Due to the Cooperons, the conductance-conductance correlation function (19) depends on the magnetic field. Using (20), eqn. (19) can be rewritten accordingly. For a strong magnetic field the last term, involving the Cooperons, vanishes. For U < E_Th we only have to consider the spatially constant zero mode of the diffusion operator, and the sum of the two Cooperons also tends to δ(ω_1 − ω_2) for B = 0, as in (16).
Thus we obtain the conductance correlation function; the parameter α depends on whether time reversal symmetry is broken or not: α = 2 at B = 0, while α = 1 for B ≫ B_c. The conductance correlation function (18) then follows, and we note that C is independent of α, i.e. it possesses a universal form for both conserved and broken time reversal symmetry. The integral for A at finite temperature can be evaluated, and the result is given in Appendix A. At low temperature, ∆β ≫ 1, the integration yields the result (24), with d = ǫ_{N_2} − ǫ_{N_1} positive. The autocorrelation function (18) in this limit finally becomes eqn. (25). The energy difference d between the single-particle levels N_2 and N_1 can be expressed in terms of the mean level spacing ∆: d/U = (N_2 − N_1)∆/U = (N_2 − N_1)δ, where δ = ∆/U. As a special case (and for the sake of simplifying the algebraic expressions), we set x = x_1 = x_2 and obtain eqn. (26) for the conductance autocorrelation function C for different valleys. Fig. 1 (left) shows C as a function of N_2 − N_1 for U/∆ = 40 and different values of x. As expected, the correlations decay slowly with valley number. Away from x = 1/2, C falls off more rapidly because the nearest occupied/empty level contributes more strongly to the conductance. Fluctuations in this level from one valley to the next reduce the correlations. Fig. 1 (center) demonstrates that the scale for the decay is set by U/∆. The slope of C at N_2 − N_1 = 0 at x = 0.5 is given by −4∆/U. Next we turn to the correlations between x_1 and x_2 in the same valley, N_1 = N_2. The autocorrelation function (25) then simplifies, and we note that C is independent of the ratio U/∆. Fig. 1 (right) shows C vs. x_1 with x_2 fixed. At x_1 = x_2 the autocorrelation function becomes maximal, and it falls off rapidly for x_1 → 0, 1. We should bear in mind, however, that (7) breaks down near the resonances, since it neglects fluctuations in the particle number of the dot.

IV. COMPARISON WITH AN EXPERIMENT

Measurements of the inter-valley conductance of semiconductor quantum dots were first reported in [13]. The experimental set-up was similar to other experiments reported in refs. [16,17]. The dots were all in the ballistic regime, and an irregular shape of the confining potential renders the motion of the electrons chaotic, so that RMT describes the statistics of the mesoscopic fluctuations in the sample. Additional gate electrodes could distort the shape of the dot, and allow an ensemble average to be obtained from a single sample. The experiment focused on the conductance-conductance correlations in the elastic cotunneling regime between different magnetic fields but at the same chemical potential, as calculated in [2]. Since kT < √(U∆), the observed cotunneling was indeed elastic. In ref. [13] the dot was more strongly coupled to the leads (barrier conductance G_{L,R} ∼ e²/h) than in [16,17], since otherwise the cotunneling current would be very small and difficult to measure. Thus the tunneling rate Γ ∼ 0.7∆, and the averaged conductance at the peaks is eight times larger than the cotunneling conductance in the valleys (⟨G⟩ ∼ 0.05 e²/h). Fig. 2 shows a comparison of C vs. N_2 − N_1 between theory and experiment (ref. [13]). The quantum dot on the sample under study had the parameters kT ∼ 9 µeV, ∆ = 15.8 µeV and U = 410 µeV. Experimental data for the autocorrelation function C (18) in the middle of the valleys, x_1 = x_2 = 1/2, are only available for three neighboring valleys and for U/∆ = 25.9. The lines are our results for three different temperatures.
We observe that in the experiment the correlations are more strongly suppressed than predicted in theory. V. DISCUSSION There are two main messages arising from the present analysis. On the theoretical side, our analysis underlines the significant correlations of the transmission (i.e. conductance) through a quantum dot as an external parameter is varied. In our case the quantum dot is weakly coupled to the leads (external reservoirs) and the external parameter which is varied is the gate voltage, affecting the number of electrons which reside on the dot. All this results in valley-valley correlations of the conductance in the cotunneling (far-from-resonance) regime. Previously, correlations have been found in the original analysis of Aleiner and Glazman [2] (intra-valley, as function of the magnetic field), in ref. [18] (valley-valley) considering the differential capacitance of the dot instead of the cotunneling conductance, and in a recent analysis on transmission phases [3]. The mere existence of correlations as found in the present study is, therefore, not totally surprising, although the details (dependence on the location within the valley, on the interaction strength-charging energy, temperature etc.) are certainly different. The second message contains, in our opinion, a much more intriguing element. The correlations found in the experiment seem to be less robust than our theoretical expressions suggest. There might be several possible reasons for this. Some are obvious while others are more subtle and may give rise to some intriguing physics. (i) The precise value of the electron-gas temperature in the experiment. In [13] the temperature was estimated to be kT ∼ 9µeV which corresponds to β∆ = 1.4. Modest deviations from this value are not going to produce agreement with theory. (ii) Approximating the canonical distribution function by a Fermi function with an effective shifted temperature (cf. the comment that precedes eqn. (7), see also ref. [7]) is asymptotically justified for kT ≫ ∆, which, for β∆ = 1.4 is not quite the case. However, given the other curves in Fig. 2, a more accurate treatment of this point is not going to cure the problem. (iii) In our derivation of the correlation function C we have considered the case U < E T h . From the technical point of view, this allowed us to retain the "zero-mode" of the Diffuson propagator only. In the experiment, the Thouless energy was estimated to be E T h = 180µeV , which is smaller than the charging energy of U = 410µeV . Qualitatively one can expect that the inclusion of non-zero modes would imply deviation from the constant interaction model. In that case the addition (removal) of an electron is likely to change the effective (interaction induced) potential landscape felt by the other electrons, facilitating more efficient "scrambling" of the dot (see below), and suppressing the correlations found here. However a quantitative statement in this regard calls for a detailed analysis, not included in the present work. (iv) The Hamiltonian of an interacting electron system (such as a quantum dot) includes terms other than the constant interaction U . Varying the gate voltage is then bound to change the nature of the many-body wavefunction of the electron gas in the dot. For example, within the approximate Hartree-Fock picture, the effective single particle wave functions will be modified as electrons are kept added to the dot, resulting in the breakdown of the Koopmans picture [19]. 
It was shown that the Koopmans picture, which asserts that the effective single-particle Hartree-Fock states remain unchanged as electrons are added to or removed from the system, breaks down for finite (and disordered) systems with sufficiently strong electron-electron interaction. For the systems studied, that breakdown happened at r_s ≤ 1.5, where r_s = [3/(4πn_0 a_0³)]^{1/3}, with n_0 the electron density and a_0 the Bohr radius. Remarkable experimental evidence has been provided by the experiment of ref. [20], where the magnetofingerprints of various excited states of a dot at different particle numbers have been compared. It turned out that by adding electrons to the dot of concern (the total number of electrons was 200) the magnetofingerprints were significantly modified, indicating scrambling of the electronic states. A more systematic study of this scrambling has been reported in ref. [17]. We speculate that the suppression of the conductance correlations in the present context may be another tool to evaluate the scrambling. One systematic measurement which is called for is to repeat the experiment for different values of r_s. Evidently, to facilitate a more detailed comparison with our calculated expression one would need data concerning various values of U/∆, varying temperature and a larger number of valleys.

VI. ACKNOWLEDGMENT

APPENDIX B

With this model we calculate the correlation function of transmission amplitudes in the cotunneling regime, C_t(N_1, x_1, N_2, x_2) = ⟨t(N_1, x_1) t(N_2, x_2)*⟩ (B1), and show that we obtain the same results as with the formalism of section II. For one disorder configuration (a sequence of the α_j), the transmission amplitude in the cotunneling regime (with N electrons) at low temperature kT ≪ ∆ can be written down explicitly. The disorder average is now equivalent to an average over the α_i. We find ⟨α_i⟩ = 0 and ⟨α_i α_j⟩ = δ_{i,j}. We set E = 0 and obtain an expression in which Ū = U/∆. The sums can be expressed in terms of the digamma function ψ, and we arrive at a closed expression. When we are sufficiently far away from the resonances, x_{1,2}Ū ≫ 1, we can use the asymptotic expansion of the digamma function, ψ(z) ∼ log z for z ≫ 1. In the limit N → ∞ we obtain eqn. (B6). With the formalism of section II the correlation function C_t is given by C_t(N_1, x_1, N_2, x_2) = ∆ (hG_L/e²)(hG_R/e²) A(N_1, x_1, N_2, x_2) (B7), with A defined in (22) and calculated in (24). Comparison with (B6) shows that V² = ∆²(hG_L/e²)(hG_R/e²).
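As a small numerical aside (not taken from the paper), the asymptotic expansion ψ(z) ∼ log z used above can be checked directly. The next term in the expansion is −1/(2z), which sets the size of the error of the leading approximation at moderate z.

```python
import numpy as np
from scipy.special import digamma

# Quick check of the asymptotic expansion psi(z) ~ log(z) used in the appendix
# for x_{1,2}*Ubar >> 1. The next-order term is -1/(2z), so the error of the
# leading approximation should decay roughly like 1/(2z).
for z in [2.0, 10.0, 50.0, 250.0]:
    err = digamma(z) - np.log(z)
    print(f"z = {z:6.1f}   psi(z) - log(z) = {err: .5f}   -1/(2z) = {-1/(2*z): .5f}")
```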
Clinical Interventions in Aging Dovepress Diagnostic and Therapeutic Path of Breast Cancer: Effectiveness, Appropriateness, and Costs – Results from the Docma Study Objective An increase in breast cancer incidence has been documented in Italy and in other countries, and some women decide by themselves to undergo diagnostic examinations outside the official screening campaigns. The aim of this paper was to analyze – in terms of effectiveness, appropriate access, and related costs – the path spontaneously followed by a sample of Italian women for the early diagnosis of breast cancer. Subjects and methods A total of 143 women who consecutively referred themselves to the breast cancer outpatient facilities at the Sant’Andrea University Hospital in Rome from May to June 2007 were enrolled in the study, gave their consent, and were screened according to their individual risk factors for breast cancer. The entire diagnostic and therapeutic path followed in the previous 2 years by each of them, either at Sant’Andrea or in other medical facilities, was reviewed and evaluated in terms of its operative efficiency and fair economic value. Results The subjects’ mean age was 47.5 years (standard deviation 13.6 years); 55% of the women were <50 years old (28% <40 years), and were thus not included in the official screening campaigns; 97 women (70%) were requesting a routine control; and 49% of them had already undergone four to seven examinations before the enrollment, although no major risk factor was present in 73.5%. After enrollment in the study, nine of the patients had surgical interventions performed on them at Sant’Andrea’s, identifying five invasive carcinomas and two ductal in situ carcinomas and two benign lesions. Operative efficiency and fair economic value were found to be optimal only in diagnostic/therapeutic paths followed at Sant’Andrea. Conclusion The diagnostic path at Sant’Andrea’s specialized center for breast cancer diagnosis and therapy is characterized by higher operative efficiency and more sustainable costs than at general hospitals, outpatient facilities run by local health authorities, or private medical centers. This result seems to confirm the present tendency to refer high-risk patients for breast cancer directly to breast units like the one at Sant’Andrea. Introduction After cardiovascular diseases, cancer is the first cause of death among women in Italy, with breast cancer being the main tumor. [1][2][3] Official data from the Italian Ministry of Health have estimated the total breast cancer incidence at about 40,000 new cases per year, with an overall prevalence of 416,000 cases (women living with cancer). 1,2 The incidence per age-group was estimated to exceed 100 new cases every 100,000 women 40 years of age, rising up to 200 new cases and over 300 cases in women aged 50 and 60 years, respectively. 1,4 The number of deaths due to breast cancer in the Italian female population is about 18% of the overall mortality due to cancer. 1,5,6 In 2009, a total of 12,195 1 At the same time, recent studies by some of the present authors have documented that the number of surgical interventions due to breast cancer in Italy has progressively increased between 2001 and 2009 (15.8% over the 9-year period). 7,8 In particular, both mastectomies and quadrantectomies have markedly increased in the age interval not covered by official screening campaigns: 40.4% in women aged 40-44 years and 19% in those aged 25-39 years. 
Similar increasing trends were also observed in older age-groups: 13.6% between 45 and 64 years old, 16.2% between 65 and 74 years old, and 27.4% in women aged 75 years. 8 As in other countries, the Italian health care authorities have introduced mammographic screening campaigns for the early diagnosis and treatment of breast cancer cases. In Italy, these official screening campaigns are run at a local level by the regions' health care departments, and are limited to women aged 50-69 years; only recently have they been extended in some of the regions to the age group 45-49 years. 8,9 According to the latest available data for the years 2007-2008, about 70% of Italian women belonging to these age-groups (coverage rate) were invited to undergo free X-ray mammography (MRx) tests. Only 60% of the invited women actually turned up for the appointment (adherence to the screening). 9 Significant differences currently exist in Italy between northern regions (screening-coverage rate of 82%, with adherence rate of 68%), central regions (screening-coverage rate of 58%, with adherence rate of 60%), and southern regions (screening-coverage rate of 46%, with adherence rate of 36%). 9 The detection of a malignant breast lesion every 1,000 women undergoing MRx varies between two and four cases (in southern and northern regions, respectively). 9 A recent official national report has documented higher survival rates in the groups included in mammographic screening campaigns versus unscreened women belonging to the same age-group. 2 However, despite a 5-year survival rate of 85% after a breast cancer diagnosis (90% in northern Italy versus 81% in southern regions; average European 5-year survival rate 80%), no improvement in survival has been observed in younger women under 40 years of age, as well as in those 70 years old. 2 Recent medical literature aimed at evaluating the outcomes of screening campaigns has pointed out the problem of overdiagnosis of breast cancer and associated implications (eg, overtreatment and distress). [10][11][12] According to Bleyer and Welch, about 1.3 million US women who were diagnosed with breast cancer after mammographic screening during the past 30 years would never have suffered from clinical symptoms. The same authors pointed out that in 2008 alone breast cancer was overdiagnosed in more than 70,000 women (31% of all diagnosed breast cancers). 10 However, according to other studies, the balance of the benefits of population-based mammography screening seems to overcome the harm of overdiagnosis, with overdiagnosis possibly having limited effect when assessing women aged 40-49 years. 13,14 At a time when a significant increase in breast cancer incidence has been documented in younger age-groups (45 years), 7,8 a large number of women feel compelled to undergo diagnostic examinations (ie, MRx or breast ultrasound [US]) outside the screening campaigns run by the local health authorities, which in any case do not cover the entire target population aged 50-69 years. The present DOCMa study (Study on the Optimal Diagnostic Path for Mammary Cancer) has been carried out in order to analyze -in terms of effectiveness, appropriate access, and true costs -the diagnostic and therapeutic path spontaneously followed by women outside official campaigns for early diagnosis of breast cancer. 
The hypothesis of the investigators was that possible significant differences exist between highly specialized centers (the socalled breast units, ie, highly specialized centers for breast pathology, as defined in the European Society of Breast Cancer Specialists guidelines), 15 such as the one at Sant'Andrea University Hospital (Rome), and general hospitals, or outpatient clinics, and private medical services in Italy. Subjects and methods We included in the study 143 women who consecutively accessed the Breast Unit at Sant'Andrea University Hospital between May 14 and June 16,2007. All patients who gave their consent were meticulously interviewed by a medical doctor, and their answers were recorded on a specific form. The questionnaire had been developed at Sant'Andrea University Hospital in order to acquire general information about the patient (date of birth, level of education, family history of breast cancer), to investigate her reason for undergoing diagnostic examinations, and to record details of all the medical and/or instrumental examinations undergone in the previous 2 years (the screening interval recommended by the international guidelines). 16 743 Diagnostic and therapeutic path of breast cancer Sant'Andrea University Hospital after enrollment in the study was also reported in the data sheet. Possible breast surgery undergone before 2005 was also recorded. At the time of enrollment, each woman was classified as high or low risk, according to the presence/absence of major anamnestic and clinical risk factors for breast cancer (those identified in the National Institute for Health and Care Excellence [NICE] guidelines on familiar breast cancer). 16 The whole diagnostic path followed by each patient was reviewed: medicals, MRx, US, biopsy, fine-needle aspiration cytology (FNAC), and when performed, surgery and the ensuing histological exam. Final diagnostic conclusions, resulting from the diagnostic path followed at the Sant'Andrea Breast Unit, were compared with those of the previous examinations. Descriptive statistical analyses were performed. We carried out descriptive cost analyses, based on the official regional health care system's charge list for each diagnostic exam and surgical intervention performed in a public hospital or outpatient clinic, namely the reimbursement system from the region to the hospitals. 17 The fees of private centers (where some of the enrolled women had medical visits and examinations) were also taken into account. We computed the operative efficiency of diagnostic and therapeutic paths followed by the patients before and after enrollment in the study. [18][19][20] To achieve this goal, we focused on patients undergoing surgery after the diagnostic examinations, as these cases were likely to indicate possible malignant lesions. Suspect cancer cases undergoing breast surgery were divided into four groups according to 1) patient age (older or younger than 50 years) and 2) presence of malignant or benign lesion (final histology results). The overall diagnostic path followed by the patients (number and type of examinations undergone) and the context in which examinations and surgery had been performed (at the Sant'Andrea Breast Unit or at other public or private medical services) were also taken into account in order to assess the operative efficiency and to estimate the average costs (fair economic value) for each group. 
[18][19][20] Patients whose lesions after surgery were discovered to be benign were subdivided into four groups: 1) Sant'Andrea Breast Unit patients aged <50 years, 2) Sant'Andrea Breast Unit patients aged ≥50 years, 3) non-Sant'Andrea patients aged <50 years, and 4) non-Sant'Andrea patients aged ≥50 years. The formulae applied to estimate operative efficiency and fair economic value for each of these latter groups were based on the following quantities: Examinations_standard = the maximum number of medical examinations sustained at Sant'Andrea; Examinations_path = the average number of medical examinations sustained in a diagnostic path; V_average = the average economic value needed to plan the surgical intervention; and V_standard = the average economic value at Sant'Andrea. Operative efficiency is a specific indicator used to assess the capability of a diagnostic path to identify the pathology. European guidelines 21 suggest triple assessment in patients with screen-detected mammographic abnormalities, but in our study, to be more conservative, we have assumed four as the benchmark, thus assuming that four assessments (medical test, MRx, US, and either FNAC or core biopsy [CB]) are needed to classify a suspect breast lesion. Four examinations correspond to efficiency = 0, while a diagnostic path with more than four assessments has an operative efficiency characterized by a negative value (no efficiency); operative efficiency will tend toward a value of 1 (maximum efficiency) if the number of assessments performed to confirm the surgical indication lies between one and four. Fair economic value indicates the economic value of medical and other examinations needed before surgery in relation to the benchmark (four examinations). This indicator ranges between 0 and ∞: values closer to ∞ indicate low efficiency due to diagnostic inappropriateness; results close to 0 indicate good efficiency, namely lower costs sustained in order to arrive at the surgery decision. Table 1 summarizes the main characteristics of the 143 enrolled patients. Subjects' mean age was 47.5±13.6 years (range 16-86 years). Seventy-eight women (55% of patients) were <50 years old, with 39 of them (28%) being <40 years of age. The majority of women belonged to the lower-risk group (n=103, 73.5% of total patients). Seventy-one held a high school diploma (59.1% of total patients), and 28 a university degree (19.6%).

Results

All the patients who were referred to a surgical intervention at Sant'Andrea belonged to the high-risk group. Similarly, all the 25 patients who had undergone breast surgery in the 2 years preceding the enrollment (ten benign lesions and 15 malignancies confirmed by histology) were also in the high-risk group. There were also 24 patients who had undergone breast surgery before 2005. As shown in Table 2, 113 patients (79%) had already had two to five diagnostic examinations performed before enrollment in the study, with 70 women (about 50% of the sample) having undergone a minimum of four examinations (Table 2).
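The sketch below encodes one plausible reading of the two indicators defined above, consistent with the verbal definition (a benchmark of four assessments, efficiency 0 at exactly four examinations, negative values beyond four) and with the figures reported later (e.g., an operative efficiency of 0.25 for an average of three examinations). The benchmark cost used for the fair economic value is not stated explicitly in the text and is treated here as an assumption.

```python
# Hedged reconstruction of the two indicators described above; the sketch only
# encodes the verbal definition: a benchmark of four assessments (medical exam,
# MRx, US, FNAC or CB), efficiency 0 at exactly four exams, negative beyond
# four, approaching 1 as fewer exams suffice.

EXAMS_STANDARD = 4  # benchmark number of assessments, as stated in the text

def operative_efficiency(exams_path: float) -> float:
    """(Examinations_standard - Examinations_path) / Examinations_standard."""
    return (EXAMS_STANDARD - exams_path) / EXAMS_STANDARD

def fair_economic_value(v_average: float, v_standard: float) -> float:
    """Average cost of the followed path relative to the benchmark cost.

    v_standard (the cost of the four-exam benchmark path) is not given
    explicitly in the paper and is an assumption of this sketch.
    """
    return v_average / v_standard

# Reproducing the direction of the reported figures:
print(operative_efficiency(3.0))          # 0.25
print(operative_efficiency(6.0))          # -0.5  (no efficiency)
print(fair_economic_value(21.17, 135.0))  # ~0.16 if the benchmark cost were ~135 EUR
```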
Table 3 shows the outcome of all examinations performed in the 2 years (from June 2005 to June 2007) preceding the enrollment of the 143 subjects: 1) results of the first tests were available for 59 patients, showing seven undetermined nodules, 14 suspect nodules, and 38 benign lesions; 2) second examinations were carried out in 12 patients, revealing three undetermined nodules, one suspect nodule, and eight negative outcomes; 3) a total of 114 first MRxs were performed (with results being documented by clinical records for 77 patients), and revealed 13 undetermined nodules, four nodules suspect for malignancy, two malignant lesions (in the same woman) and 95 negative outcomes; 4) a second mammographic exam was performed in 65 women (results documented by clinical records for 55 patients), showing 12 undetermined nodules, three suspect nodules, and 50 negative outcomes; 5) the 139 first breast USs resulted in 118 negative outcomes, two malignant lesions (in the same woman), 15 undetermined nodules, and four nodules suspect for malignancy, with results being documented by clinical records for 88 patients; 6) 72 second US examinations were performed (results documented by clinical records for 67 patients), and revealed 54 negative outcomes, 14 undetermined nodules, three lesions suspect for malignancy, and one malignant lesion (diagnostic findings of the second medical examination [MRx and US] were significantly consistent both for MRx and US with those of the first ones [P0.001]); and 7) 45 FNACs were also performed (with six patients being submitted to two cytological examinations and one patient presenting three different nodules that were all analyzed), revealing seven malignant lesions (4.8% of patients), five nodules suspect for malignancy, three undetermined nodules, and 30 negative outcomes. Moreover, six vacuum biopsies (VBs) with a Mammotome, six CBs, and six magnetic resonance imaging exams were performed, resulting in the detection of two undetermined nodules, one suspect nodule, six malignant lesions (where two patients presented two simultaneous malignancies already detected by FNAC), and nine negative outcomes. The 25 surgical interventions carried out in the 2 years before enrollment had revealed 14 malignant lesions and eleven benign lesions (six in subjects 50 years old and five in women 50 years). Among these patients, one woman aged 42 years was operated on twice over the 2-year period (a benign lesion in 2005 and a malignancy in 2006). 745 Diagnostic and therapeutic path of breast cancer In ten patients enrolled in the study from May to June 2007 (7% of the total sample), surgical intervention was indicated, but one woman aged 86 years refused surgery. None of the women undergoing surgery at Sant'Andrea after the enrollment had been previously operated upon for breast cancer. Final postsurgery histology identified five invasive carcinomas, two ductal carcinomas in situ (DCIS), and two benign lesions. Pearson's χ 2 test showed that final histology results were independent of the outcome of mammographic and US examinations performed in the 2 years preceding the study, thus indicating that the malignant lesions were either not present in the previous 2 years or had not been identified. Table 4 summarizes the final diagnostic results for all patients enrolled in the study from May to June 2007. 
Table 5 reports the types of examinations undergone by women enrolled in the study (including those in the 2 years preceding enrollment) and the relative costs, either sustained by the patient in the private sector or evaluated on the basis of the regional charge list, 17 overall and per age-group (50 and 50 years old), distinguishing between private and public medical facilities (Sant'Andrea is among the public hospitals). It should be pointed out that at the time of this study, public health care-system patients would pay a fee (the so-called ticket) set by the regional health care authorities, corresponding to about a quarter or a fifth of the true overall cost of the exam defined in the official charge list. 17 As reported in Table 5, the total cost of the 859 performed examinations recorded was computed to be between €104,608 and €156,806. Costs generated by examining women aged 50 years old (an age-group excluded from the screening program) were assessed to be between €47,279 and €71,017, while those generated by subjects 50 years old can be estimated to be between €57,329 and €84,960. The latter costs were generated by women 50 years old not participating in the screening program, so in theory they should be compared with costs arising from the official screening campaigns. The fees for medical examinations in the private sector could be three times higher than the highest fee ("ticket") paid by patients in a public hospital; while -concerning instrumental tests -the prices paid by the patients are about equivalent, only FNAC might have a higher price in the private versus the public sector. Table 6 summarizes the operative efficiency indicators and fair economic values of the diagnostic paths followed by patients aged 50 or 50 years (the latter is an age-group usually excluded by official screening campaigns) presenting malignant or benign breast lesions at postsurgery histology. As shown in the table, patients undergoing surgical intervention at the Sant'Andrea Breast Unit needed on average 1.0-4.0 diagnostic exams to confirm surgical indication, while subjects examined in local health care authority outpatient clinics or general hospitals needed an average of 4.8-6.0 exams before the operation. Average costs sustained by the patients to perform diagnostic exams varied between €21.17 for women with benign lesions aged 50 years (operative efficiency 0.75, fair economic value 0.16) and €89.78 for those 50 years (operative efficiency 0.25, fair economic value 0.66). Average costs sustained by patients aged 50 years affected by malignant lesions were computed to be €83.56 (operative efficiency 0.29, fair economic value 0.61). It should be noted that younger patients generated higher costs and less efficient processes, despite being affected by benign lesions. As reported in Table 6, diagnostic paths outside Sant'Andrea Hospital always showed a negative operative efficiency (lower than 0) and poor fair economic values (close to 1 or higher than 1), thus indicating that too many exams were performed with less acceptable costs per patient (from €119.15 to €243.34). Discussion The DOCMa study was aimed at determining the appropriateness, effectiveness, and costs of patients' spontaneous access to Breast Unit facilities at Sant'Andrea University Hospital in Rome, a second-level highly specialized center to which only women with already identified suspect breast lesions or with controversial diagnoses should be referred. 
This study retrospectively analyzed the individual records of the 143 patients enrolled in the study concerning examinations performed in the previous 2 years. More than 50% of the patients were under 50 years old (with 28% of them being 40 years), and thus belonged to an age-group excluded from the official screening programs organized at a local level by the regional health care authorities. A total of 113 patients (79% of our sample) had already had two to five diagnostic examinations carried out before enrollment in the study, with 70 women (about 50% of the sample) having had four to seven examinations performed in the previous 2 years. This is particularly surprising, considering the relatively young mean age of our study sample (47 years on average). The majority of the patients in the sample were selfreferred for a routine exam. It may be argued that a fairly high education level could play a role in women's health awareness, since the majority in our sample held a high school diploma or a university degree (n=99, 69.2% of the total). The study group included 49 patients (35% of our sample) who had previously undergone breast surgery (24 of them before 2005, and 25 between 2006 and the starting point of the study, ie, May 2007). None of these patients underwent a second surgical intervention during the study period. Among the 25 surgical procedures in the 2 years preceding the enrollment, histology had confirmed 14 malignant lesions (9.8% of total sample), with three malignancies having been detected in women aged 50 years (2.5% of subjects belonging to this age-group), and eleven benign lesions. The majority of our study population presented no major risk factors for breast cancer (lower-risk group, n=103, 73.5% of total patients). On the other hand, only patients classified as high risk at the time of enrollment underwent surgical interventions at Sant'Andrea, none of whom had previously been operated on for breast cancer. This highlights that the classification algorithm for the definition of risk levels seems to be efficient, and can be used for patient stratification. [22][23][24][25] When the patients entered the study at Sant'Andrea, they were subjected to breast US. Then, FNAC procedures were performed on 35 of the nodules, 20 of which were also examined by CB. Ten patients (7% of the sample) presented an indication for surgical intervention, but one refused to be operated on. Histology performed on the surgical samples identified seven patients with cancer (five with invasive cancer, and two with DCIS), and two patients with benign lesions. All the five patients affected by invasive carcinomas already had a suspicious lesion previous to entering the study, while the remaining patients (two DCIS and two benign lesions) were negative at the time of previous diagnostic examinations. Pearson's χ 2 test showed that final histology results were independent from the outcome of MRx and breast US performed in the 2 years preceding the study. This point is of particular interest, as the majority of the enrolled patients had been subjected to at least two clinical or instrumental examinations before entering the study. Patients' selection for surgery was shown to be very cost-effective at the Sant'Andrea Breast Unit, both in terms of early diagnosis, as two DCIS of seven suspicious carcinomas (29%) were identified, and in terms of malignant versus benign lesion ratio: this ratio was 3.5 (this value should always be 1 according to the guidelines). 
21 According to the national statistics in Italy, screening campaigns in the general population recruited only on the basis of age-group criteria (women aged 50-69 years) result in the detection of two to four malignant lesions every 1,000 women undergoing MRx. 9 In our study, we detected either a carcinoma (n=5) or a DCIS (n=2) in 17.5% of the patients, all of whom were at high risk for breast cancer according to the classification of the international guidelines. 16 These data seem to suggest the need for extending screening campaigns for the early detection of breast cancer on the basis of individual risk, thus including younger women aged ≥45 years and possibly even 30-35 years old, as suggested by recently published studies, 10,11 in the same spirit as the novel US National Cancer Institute initiative for risk-based and preference-based approaches at a population level. 22 According to our operative efficiency analysis, patients undergoing surgical intervention at the Sant'Andrea Breast Unit needed an average of 2-3 diagnostic examinations to confirm the surgical indication, whereas patients followed at other clinics or hospitals underwent 4.8-6.0 examinations before surgery. This may be explained by the comprehensive diagnostic and therapeutic path designed at Sant'Andrea, where all the facilities needed for the diagnosis can be found in just one place. Top operative efficiency was observed in women aged ≥50 years with benign lesions who underwent all the diagnostic examinations at Sant'Andrea's before surgical intervention. The benchmark for this cost analysis was set to be our university hospital, where a dedicated breast unit is active and the most advanced technologies for the early detection and treatment of breast cancer are available. As shown in Table 5, the medical fees paid by the patients in the private sector may be significantly higher than the maximum costs in a public setting. The Sant'Andrea Breast Unit diagnostic path is accomplished in agreement with international guidelines on the triple approach, which is the gold standard in breast cancer diagnosis according to the European guidelines. 21 This means that patients were first examined by an expert clinician, before performing US and FNAC (when appropriate) to confirm (breast imaging-reporting and data system [BIRADS] 3-5) or exclude (BIRADS 1-2) a breast cancer diagnosis. CB or VB was reserved for patients with a discordant triple assessment, an inconclusive FNAC result, a suspicious area, discrete lumps, or microcalcifications without lumps at VB. In addition, when preoperative prognostic parameters were requested, a tissue sample was obtained. Our data seem to confirm that breast units are more efficient, not only in terms of patient survival (as already shown by Peltoniemi et al) 23 but also in terms of a prompt diagnosis, especially when a triage system is implemented, and with regard to costs. 24,25 This results in higher efficiency of the diagnostic and therapeutic path. Outside the breast units (ie, general hospitals, outpatient clinics run by local health authorities, and private centers), the path followed by the patients would be different, despite adherence to international standards. As already highlighted by Hung et al, 24 as part of the progressive rise of people's expectations of medical care, the demand for specialist care has been increasing over the years. There is an increase in referrals to specialist clinics, leading to long waiting lists before specialist consultation. A diagnosis of malignancy constitutes the outcome of only approximately 5%-10% of referrals to specialists.
26 On the other hand, there is a clear need for prompt diagnoses in patients at high risk for cancer. It has been shown that patients with breast cancer who have a 3-month delay in diagnosis show a 12%-lower 5-year survival rate than those with a shorter delay. 27 With limited resources, a way to minimize the delay is to reduce the number of inappropriate referrals to highly specialized centers (breast units), which should be reserved for high-risk patients. As a result of the DOCMa study, patient self-referral to the Sant'Andrea Breast Unit has been stopped and a triage system based on a medical evaluation of individual risk and evaluation of the reasons for contacting the hospital has been introduced to plan patients' access to medical and instrumental examinations. Since then, although the number of breast cancer diagnoses has risen each year, the waiting time from the patients' referral to their appointment at the Breast Unit has diminished (data not yet published). Conclusion Our study suggests that breast units should be reserved for high-risk patients with suspicious lesions or controversial diagnoses. Within these settings, patients can follow a personalized, qualified, and efficient diagnostic path. Conversely, our data suggest that repeated imaging examinations performed on women upon their spontaneous requests in a private or public outpatient clinic are very often both inconclusive and low on cost-efficiency. Our study suggests that screening campaigns should take into account not only the age of the patient but also individual risk factors for breast cancer (NICE guidelines), 16 with specific risk assessment performed in an outpatient clinic, and should also be offered to younger women who are currently excluded from the official screening campaigns. Only selected cases -consisting of women who are at higher risk for breast cancer according to international criteria 21 -should be referred to breast units. Finally, our data also seem to indicate that the public health care sector might be more efficient and less expensive than the private one.
Contextual determinants influencing the implementation of fall prevention in the community: a scoping review

Background: Successful implementation of multifactorial fall prevention interventions (FPIs) is essential to reduce increasing fall rates in community-dwelling older adults. However, implementation often fails due to the complex context of the community, involving multiple stakeholders within and across settings, sectors, and organizations. As there is a need for a better understanding of the occurring context-related challenges, the current scoping review aims to identify what contextual determinants (i.e., barriers and facilitators) influence the implementation of FPIs in the community.

Methods: A scoping review was performed using the Arksey and O'Malley framework. First, electronic databases (Pubmed, CINAHL, SPORTDiscus, PsycINFO) were searched. Studies that identified contextual determinants that influence the implementation of FPIs in the community were included. Second, to both validate the findings from the literature and identify complementary determinants, health and social care professionals were consulted during consensus meetings (CMs) in four districts in the region of Utrecht, the Netherlands. Data were analyzed following a directed qualitative content analysis approach, according to the 39 constructs of the Consolidated Framework for Implementation Research.

Results: Fourteen relevant studies were included and 35 health and social care professionals (such as general practitioners, practice nurses, and physical therapists) were consulted during four CMs. Directed qualitative content analysis of the included studies yielded determinants within 35 unique constructs operating as barriers and/or facilitators. The majority of the constructs (n = 21) were identified in both the studies and CMs, such as "networks and communications", "formally appointed internal implementation leaders", "available resources" and "patient needs and resources". The other constructs (n = 14) were identified in only one of the two sources.

Discussion: Findings in this review show that a wide array of contextual determinants are essential in achieving successful implementation of FPIs in the community. However, some determinants are considered important to address regardless of the context where the implementation occurs, such as accounting for time constraints and financial limitations, and considering the needs of older adults. Also, broad cross-sector collaboration and coordination are required in multifactorial FPIs. Additional context analysis is always an essential part of implementation efforts, as contexts may differ greatly, requiring a locally tailored approach.

KEYWORDS: fall prevention, implementation, contextual determinants, community-dwelling older adults, scoping review

Introduction

Fall rates are expected to increase in the coming decades, due to the rapidly aging population and the rising prevalence of multimorbidity, polypharmacy, and frailty among older adults (1). Currently, more than one-third of community-dwelling people over the age of 65 years fall each year (2). Fall-related injuries may have significant personal consequences, such as short- and long-term functional impairment, reduction in quality of life, and loss of independence, and they may cause fractures, serious soft tissue injuries, and even death (3). In addition, falls in this population are the leading cause of emergency department visits and hospitalizations, which result in a high health care demand and, therefore, high fall-related health care costs (4,5). As a result, reducing falls in community-dwelling older adults has become an international health priority (2,4,6). In order to reduce fall rates, the use of multifactorial fall prevention interventions (FPIs) is recommended (7). Multifactorial FPIs are primarily designed to address known modifiable risk factors for falling, which have been identified through individual fall risk assessments (7,8). These multifactorial FPIs consist, therefore, of different types and combinations of interventions, such as exercise therapy, medication review, and occupational therapy (3). This requires a multidisciplinary approach across individuals, providers, and organizations within the context where the FPIs occur (3). However, the potential of effective FPIs is often constrained due to a lack of successful implementation (6,9).
Failing to appropriately implement research findings into clinical practice severely limits the potential for patients, and communities as a whole, to benefit from advances of proven effective interventions. To achieve successful implementation of a proven effective intervention into practice, implementation strategies must be applied (10,11). Implementation strategies are methods or techniques used to improve adoption, implementation, and sustainability of a clinical practice or program (12). However, an implementation strategy may be effective in one setting and result in failure in another, since every organization, community, or provider experiences different barriers or facilitators during implementation depending on their context (13). Therefore, implementation strategies must be tailored to the unique, dynamic, local context where the implementation of the intervention occurs (10, 14). Tailoring strategies to specific contexts requires several steps, of which examining and understanding the local barriers and facilitators (i.e., contextual determinants) is the first one (15,16). Within this step, the use of theoretical frameworks is highly recommended to better understand and explain which determinants account for the success or failure of a specific implementation strategy (17). Tools exist to help implementers to assess potential determinants in a specific context, such as the widely used Consolidated Framework for Implementation Research (CFIR) (17,18). Recently, McConville and Hooven (2020) (19) performed an integrative review to identify determinants that influence the implementation of fall prevention management in the primary care setting. Five themes were identified that described barriers to implementation: provider beliefs and practice, lack of provider knowledge, time constraints, patient engagement, and financial issues. However, this research mainly focused on barriers, whereas insight into facilitators is equally important for context analyses and future steps in the implementation process (10). Furthermore, they primarily concentrated on the perceptions of health care professionals, while responsibility for effective fall prevention management lies not with providers in health care, but also in social care sectors (3,20). Nevertheless, many studies on the implementation of FPIs are still concentrating on single care settings or provider groups (21,22). Focusing on a single setting, organization or provider type has been long debated by Ganz et al. (2008) (23), where it was emphasized that "it takes a village" of stakeholders across settings, sectors, and organizations to prevent falls and reduce fall risk among older people. Concentrating on the implementation of FPIs in the community setting is therefore essential (3,24). "Community setting" can be defined as the geographical area where community-based health and or social care services are delivered (integrally) to residents in primary or community care (25). Surprisingly, the role of communities as a context for FPIs has been mostly unrecognized. Understanding and accounting for what happens in the context of the community where the intervention is performed, is of major importance to better address implementation challenges (13). To date, little is known about the best ways to implement FPIs in the broader context of the community. 
The first step to address this gap of knowledge is to gain insight in contextual determinants that influence the implementation process in the local context where the intervention is performed (16). Additionally, active involvement of relevant stakeholders is essential to add relevance and impact to findings derived from the literature. Therefore, a scoping review incorporating consultation with stakeholders was conducted, aiming to identify what contextual determinants influence the implementation of FPIs in the community. (27). There are two key stages to this methodology: (1) a comprehensive review of the literature; and (2) consulting stakeholders in the field during consensus meetings to inform or validate the findings from the literature. In the area of fall prevention in the community, health and social care professionals are key stakeholders. Also, it is suggested that researchers share preliminary findings as a foundation to inform the consultation and to enable stakeholders to build on the existing evidence (27). Design Reporting was performed according to the Preferred Reporting Items for Systematic reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) checklist (Appendix 1) (28). This scoping review is part of a Dutch implementation research project: Fall pRevention ImplEmentatioN stuDy (FRIEND), which has received ethical clearance from the Ethical Committee Research Healthcare Domain of the HU University of Applied Sciences, Utrecht, the Netherlands (113-000-2020). Identifying relevant studies Studies in this review focus on contextual determinants influencing the implementation of FPIs from the perspective of health and social care professionals in the community. "Context" in this review is broadly defined as everything outside the evidence-based intervention and includes all forces (or "determinants") working for or against implementation (18,29). "FPI" is defined as a multifactorial evidence-based intervention in health and social care addressing modifiable fall risk factors and therefore aiming at fall prevention (such as exercise, medication review, occupational therapy, and nutrition therapy) (30). Studies were eligible for inclusion in this review if they: (1) described barriers and/or facilitators regarding implementation of FPIs for community-dwelling older people; (2) were performed in a community setting; (3) had a (partly) qualitative study design; (4) were written in English or Dutch; and (5) were published since 2010. Only articles from approximately the last decade were included since this best reflects the current health care landscape. Furthermore, quantitative studies were not included since qualitative methods are suited best to discerning barriers and facilitators to the uptake of an intervention (31). Studies were excluded if they: (1) examined eHealthinterventions; (2) investigated the implementation of fall risk screening or assessment only; (3) focused solely on participants with mental health and/or neurological conditions (such as dementia or Parkinson's disease); (4) focused on perceptions of older people regarding FPIs only; (5) were intervention studies assessing the effectiveness of fall prevention interventions, such as Randomized Controlled Trials; or (6) were non-Western studies. The reason for excluding the latter was that the health systems of the Netherlands and other Western countries are more similar, in comparison with those of non-Western countries. 
To identify potentially relevant studies, the following electronic databases were searched: PubMed, CINAHL, PsycINFO, and SPORTDiscus. The search was supplemented by scanning the reference lists of included studies. The key search strategy consisted of the terms "fall prevention intervention", "barriers and/or facilitators", "community", and "implementation", combined through Boolean operators. The search strategy was drafted by one researcher (MS) and further refined through discussion with another researcher (JB). The final search strategy was run in March 2022 (Appendix 2). An update of the initial search was carried out in October 2022 to find the most recently published articles. Initially, we explicitly excluded "RCTs" from the search strategy to narrow the results and exclude effectiveness studies. However, to ensure that we did not miss eligible studies documented as hybrid implementation trials involving randomization, the search strategy was rerun in March 2023 after deleting the search terms related to study design.

Study selection
Prior to study selection, agreement on the selection criteria was reached to increase consistency among researchers. The studies retrieved by the search strategy were then exported to Rayyan, a web app for reviews (32). Study selection comprised two stages. First, all titles and abstracts were screened independently by two reviewers (MS and JB). Second, if studies seemed eligible, the full text was reviewed independently (MS and JB). If disagreement on study selection arose, the researchers (MS and JB) discussed until they reached consensus. When conflicts remained unresolved, a third researcher (SV) was to be approached; however, this proved unnecessary, since the two reviewers (MS and JB) reached consensus on eligibility after both stages. Finally, the reviewers generated a definitive list of studies eligible for inclusion.

Charting the data
A data-charting form was co-created by two researchers (MS and JB). Descriptive data of the included studies were extracted by one researcher (MS) into the data-charting form: authors, year of publication, country, study design, data collection, data analysis, type of FPI, setting, and study population. The findings were discussed with and confirmed by all members of the research team (MS, SV, CV, ME, JB).

Collating, summarizing and reporting results
The data in the included studies were analyzed using directed qualitative content analysis (Figure 1) (33). Figure 1 shows the process of the data analysis, as well as examples of contextual determinants derived from the literature and the consensus meetings (CMs). Analysis was performed in ATLAS.ti, version 22. Within this structured type of qualitative analysis, the first step is to identify key concepts or variables to create an initial coding scheme with predetermined codes (33). In this review, the constructs of the CFIR were used as predetermined codes. The CFIR is among the best-operationalized and most widely used determinant frameworks for performing research within local settings (18).
The original CFIR consists of 39 implementation constructs, categorized into five domains that influence implementation: Intervention characteristics (e.g., features and quality of the intervention), Outer setting (e.g., the economic, political, and social context), Inner setting (e.g., the structural, political, and cultural context where the implementation takes place, such as an organization), Characteristics of individuals (e.g., attitudes, values, and beliefs of the individuals involved), and Process (e.g., components that impact the implementation process) (18). Second, relevant determinants in the text of the included studies were highlighted. Then, a differentiation was made between a determinant being a barrier (−), a facilitator (+), or having no specific direction (+/−). Determinants were considered barriers if they hindered or impeded implementation; determinants were considered facilitators if their presence promoted implementation. Only when a determinant was explicitly described as a barrier or facilitator was it coded as such. In all other cases (e.g., a determinant that was "important to consider"), it was coded as +/−. A determinant might have been coded multiple times in the same study and with different allocations, e.g., both as a barrier (−) and as having no specific direction (+/−). This occurred when a determinant was specifically mentioned as being a barrier (−) and, later on, was described without a specific direction (+/−). Third, the determinants were assigned to the CFIR constructs in the coding scheme and then categorized into the CFIR domains. A quarter of all studies that resulted from the search strategy was independently coded by two researchers (MS and JB). After a consensus meeting, where differences were discussed until consensus was reached, one researcher (MS) coded the rest of the studies. Overall, the selected determinants in the text and the assigned CFIR constructs were very similar between both researchers. Finally, findings were presented in a table according to the five domains of the CFIR and discussed with the research team, considering the meaning and overall implications of the results.

Consultation
To validate and complement the preliminary findings from the included studies in the context of Dutch communities, and to offer an additional source of information, meaning, and perspective, stakeholders were approached to participate in consensus meetings (26, 27). A broad selection of health and social care professionals (HSCPs) working with fall prevention in four districts in the region of Utrecht, the Netherlands, was involved in the FRIEND project, including general practitioners, physical therapists, dieticians, community nurses, and community sports coaches. In each district, a consensus meeting (CM) was held with the local HSCPs. All participants gave informed consent. The aim of the CMs was to identify barriers and facilitators of the implementation of FPIs in the community from the perspective of the HSCPs. During the CMs, the Practical, Robust, Implementation and Sustainability Model (PRISM) framework was used (34, 35). This framework consists of four domains: Intervention, Recipients, External Environment, and Implementation and Sustainability Infrastructure. The PRISM framework was used because it is a comprehensive framework, allowing us to systematically identify important multilevel contextual factors (35).
Also, PRISM was developed as a practical, actionable model that both practitioners and researchers can use; it was therefore suitable for the CMs in the current study (35). At the start of each session, post-its were handed out to the HSCPs, who were asked to write down the barriers and facilitators that, from their perspective, influenced the implementation of fall prevention. They then placed the post-its in the most suitable PRISM domain on a working sheet. The CMs were conducted in separate meeting areas to ensure privacy. The sessions were facilitated by two researchers who acted as moderators. The CMs were not recorded, due to the pragmatic challenges that arise with recording focus group discussions (e.g., speaker identification). Moreover, recording the CMs did not fit the purpose of this study, which was to collect barriers and facilitators to implementation rather than to gain a deeper understanding of these determinants. One of the researchers took meeting notes during the sessions. Data from the working sheets and meeting notes were also analyzed following a directed qualitative content analysis approach, according to the constructs of the CFIR (18). We chose to continue with the CFIR framework at this stage, since the PRISM framework lacks clear definitions, guidance, and measures to assist in understanding contextual determinants, whereas the CFIR provides a taxonomy, codebook, and definitions of constructs that facilitate its applicability and usefulness (18, 36). Moreover, the CFIR is based on, among other things, elements of the PRISM framework, with both drawing on theories of behavior change and improvement science (18, 37), resulting in similar context dimensions across the two frameworks (13) and allowing data to be transferred from PRISM to CFIR. During the last step of the analysis, comparisons between the literature and the CMs were made, and results were combined per CFIR construct and domain.

Results
The initial search strategy in the electronic databases resulted in 308 studies; one additional study was added after screening the reference lists of the included studies. The updated search strategies in October 2022 and March 2023 yielded 34 and 376 additional studies, respectively. Duplicates were removed (n = 302). A total of 392 studies were excluded after screening titles and abstracts, mainly because the implementation of fall prevention interventions was not discussed or the setting did not fit; examples include studies on fracture prevention, the implementation of person-environment approaches to prevent falls, the use of FPIs in hospitalized patients, and integrated care for older adults in non-Western countries. The remaining 25 articles were assessed in full for eligibility, and 15 studies were finally selected for inclusion in this review (Figure 2) (38-52).

Consensus meetings
In total, four CMs (a, b, c, and d) were held in four districts in the region of Utrecht, the Netherlands, with 35 HSCPs. All CMs lasted 120 min and involved on average 9 HSCPs (range 7-13). Table 2 shows the descriptive data of the CM participants.

Analysis of the literature and consensus meetings
Directed qualitative content analysis of the included studies and the CMs yielded determinants operating as barriers and/or facilitators within 35 unique CFIR constructs; data from the CMs resulted in 21 unique constructs.
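As a concrete illustration of the coding-and-tallying step that produced these counts, the following is a minimal Python sketch of how coded determinants could be aggregated per CFIR construct and direction. The records, source labels, and construct assignments shown are hypothetical stand-ins for illustration, not data from this review.

```python
from collections import Counter

# Hypothetical coded determinants: (source, CFIR domain, CFIR construct, direction)
# direction: "-" = barrier, "+" = facilitator, "+/-" = no specific direction
coded = [
    ("study_44", "Intervention characteristics", "complexity", "-"),
    ("CM_a",     "Intervention characteristics", "complexity", "-"),
    ("study_42", "Intervention characteristics", "relative advantage", "+"),
    ("CM_c",     "Outer setting", "cosmopolitanism", "+/-"),
    ("study_45", "Inner setting", "readiness for implementation-leadership engagement", "+"),
]

# Tally how often each construct was coded, split by direction.
tally = Counter((construct, direction) for _, _, construct, direction in coded)

# Count unique sources (studies or CMs) per construct, as reported in the Results.
sources = {}
for source, _, construct, _ in coded:
    sources.setdefault(construct, set()).add(source)

for (construct, direction), n in sorted(tally.items()):
    print(f"{construct:55s} {direction:3s} n={n}")
for construct, srcs in sources.items():
    print(f"{construct}: identified in {len(srcs)} unique source(s)")
```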
All 21 constructs identified in the CM data were also found in the included studies, whereas the remaining 14 constructs were identified only in the included studies and not in the CMs. For most determinants, a facilitator (e.g., having enough time) became a barrier when it was lacking (e.g., lack of time). Consequently, most identified determinants can act as both barrier and facilitator. It should also be noted that, in some cases, the absence of a determinant was a facilitator (e.g., no complex intervention), whereas the absence of another was a barrier (e.g., the intervention is not compatible). The analysis of the data from the literature and the CMs is categorized and discussed per CFIR domain and construct (Table 3).

Characteristics of the intervention
According to the results of seven studies (38, 42-44, 48, 49, 52) and two CMs a,d, the degree of "complexity" of the intervention influences implementation. In the study by Worum et al. (2019) (44), participants highlighted that information on the intervention program is often perceived as complex, with terminological challenges and differently defined guidelines. This eventually leads to poorer use, and therefore unsuccessful implementation, of an FPI. In both CMs a,d, the user-friendliness of guidelines was identified as important for successful implementation. Furthermore, determinants were identified within the construct "relative advantage". This construct is defined as the stakeholders' perception of the advantage of implementing the intervention vs. an alternative solution (18). It was identified in three studies (42, 43, 45) and two CMs a,c. In the study by Gemmeke et al. (42), it was recognized that health care professionals were motivated to implement the intervention since they were well aware that it contributed to decreased fall risk in older adults, which improved health outcomes and lowered health care costs. Determinants about the "cost" of the intervention derived from both the literature (38, 42, 48) and three CMs a,c,d. This refers to the sometimes significant financial contribution required for participation in FPIs, which can be a major barrier for some older adults. "Evidence strength & quality" of the intervention was identified more often in the literature than in the CMs (three studies (44, 45, 51) and one CM d, respectively). The construct "adaptability" was identified in as many CMs as studies (three studies (44, 45, 52) and three CMs a,b,d). It appears to be important that interventions are tailored to the context where the implementation takes place (44).

Outer setting
"Patient needs and resources" was identified in ten studies (38-40, 42-45, 48, 49, 52) and mentioned in all CMs a-d. General practitioners (49) expressed that persuading older clients who did not acknowledge that they had a fall risk, that hazards needed to be addressed, or that FPIs would be beneficial was the most difficult part of their work regarding fall prevention. Several participants in the CMs experienced this as well: there was denial and a lot of resistance from clients regarding fall prevention. In a total of twelve studies (38-42, 44-50) and all CMs, it was mentioned that networking well with external organizations is required to successfully implement FPIs in the community. This is summarized by the construct "cosmopolitanism".
Connecting with organizations and community groups that are trying to achieve similar goals was deemed essential; strong connections are likely to enhance capacity through increased referrals (40, 48). Moreover, in the study by Dykeman et al. (2018) (47), participants stated that fall prevention requires a community-wide approach, in which crossing organizational boundaries and inter-agency relationships were deemed necessary for optimal teamwork and successful fall prevention activities. Participants in the CMs mentioned that working together with many stakeholders in the community is challenging, since they often do not know what services other HSCPs deliver as part of FPIs, or how to reach and connect with one another on a regular basis. Furthermore, the absence of adequate funding and policies could compromise the quality of care ("External policy & incentives"), since funding is often inadequate to meet the demands of HSCPs, making the implementation of FPIs much more complex and less attractive. This issue was highlighted in eleven studies (38, 39, 41-45, 47-49, 52) and all CMs. In the study by Liddle et al. (2018) (49), health care professionals discussed that funding systems were often perceived as barriers, since they are complicated to understand and constantly changing. In the study by Dykeman et al. (2018) (47), it was stated that legislation determines what kind of services can be provided for the client, which was often restrictive. Also, there is a need for clear guidelines for fall prevention, with which professionals must be familiar (38, 44, 47). Participants in the CMs mentioned that health insurance companies and municipalities should be clearer about how HSCPs and seniors can be reimbursed for implementing and attending FPIs, respectively. The use of friendly competition, i.e., "peer pressure", was identified as a facilitator in the study by Johnston et al. (2022) (42).

Inner setting
In a total of twelve studies (38-43, 45-49, 51) and all CMs, it was mentioned that having well-established, well-functioning networks, with effective communication within an organization, is of utmost importance for the successful implementation of FPIs ("Networks & communications"). For example, chaotic communication and not being open to others' perspectives were perceived as barriers (45, 47). The importance of networks and communication was underscored by the requirement of a multiprofessional and multidisciplinary approach to fall prevention (38, 39, 43, 46, 49). According to the CMs, however, there is often a significant lack of collaboration, as every professional works in isolation from the others. In addition, a problem that arose from the CMs in this context was that there is usually no clarity about colleagues' roles and responsibilities, while there is often an overlap in skills and experience. This specific challenge emerged from the literature as well (47, 49). Determinants within the construct "implementation climate-compatibility" were identified in five studies (40, 42-45) and three CMs a-c. In the study by Gemmeke et al. (2022) (43), it was highlighted that, to facilitate further implementation, integration of FPIs into regular interventions was preferred.
In the CMs, it was also discussed that it would be beneficial to integrate fall prevention within existing workflows for other chronic diseases, such as diabetes. However, this is currently often not the case. In addition, combining workflows between different organizations can be challenging. Furthermore, limited time, staff capacity, and financial resources, unavailable venues to provide the intervention, high staff turnover, lack of support, and inconsistency in staff education may hinder the concrete use of FPIs. This is summarized by the construct "readiness for implementation-available resources" and was identified in twelve studies (38-43, 45-49, 51) and all CMs a-d. Determinants within the construct "readiness for implementation-leadership engagement" were identified more often in the CMs than in the included studies (three CMs a,c,d and one study (45), respectively). In the CMs, participants mentioned that they often felt a lack of support from key organizational leaders, such as the management team, which ultimately did not allow them enough time to implement FPIs. In the study by Worum et al. (2020) (45), this was highlighted as an important facilitator: commitment enabled a clearer direction of the process and of how to proceed. "Readiness for implementation-access to knowledge & information" was identified in five studies (38, 42, 44, 47, 51); this construct was not mentioned in the CMs. In the study by Amacher et al. (2016) (38), general practitioners stated that they needed adequate information and helpful documents to be able to participate well.

Characteristics of the individuals
Determinants within the construct "knowledge & beliefs about the intervention" emerged in eleven studies (38-40, 43-50) and two CMs a,c. Negative beliefs of HSCPs, e.g., related to the nature of falls and effective measures, were at times a barrier to implementing FPIs (47). Also, in some cases, professionals were not aware of how the intervention should be performed, e.g., trying to recruit older adults for FPIs according to incorrect selection criteria (38). On the other hand, professionals indicated that, as they executed the FPIs, they learned the benefits for both clients and themselves, which reinforced the importance of delivering FPIs in everyday practice (49, 50). In ten studies (38, 39, 42-47, 49, 52) and three CMs a,c,d, it was identified that working with enthusiastic HSCPs, who are motivated, dedicated, optimistic, and passionate about prevention, facilitates the implementation of FPIs. All these personal features are summarized by the construct "other personal attributes". HSCPs should have the capability, competencies, skills, and experience to implement FPIs successfully, since they play a crucial role in preventing falls (39, 44, 46, 47). In the CMs, participants considered the level of enthusiasm and dedication to be of utmost importance. Furthermore, individuals' belief in their own capability to execute FPIs well, summarized as "self-efficacy", was identified in only one study (43).

Process
Evidence indicated that having a well-planned strategy, with clear directions for all involved stakeholders, is important for successful implementation. Determinants within this construct, "planning", were identified in six studies (41-43, 45, 48, 51) and three CMs a,c,d. Also, developing a scheme or set of tasks in advance of the implementation endeavor might be helpful for successful implementation. This was highlighted in the study by Baumann et al.
(2022) (41), where registration forms were developed to facilitate communication, which the participants in the study considered useful. Furthermore, identifying and engaging the right stakeholders to establish partnerships helps the implementation process succeed. Determinants within the construct "engaging" were identified in nine studies (39, 40, 42-47, 51). According to one of these studies (46), a variety of stakeholders must be involved: clinicians, (public) health professionals, nongovernmental organizations, and older people. In the CMs, it became clear that it can be difficult to keep key stakeholders actively involved, hindering the accurate use of FPIs. Also, in some cases, the group of involved stakeholders is incomplete, missing an HSCP with a crucial role in the implementation process (e.g., a general practitioner). Furthermore, a leader or coordinator of the implementation process is another important factor for the successful implementation of FPIs. Determinants within the construct "engaging-formally appointed internal implementation leaders" emerged in eight studies (39, 40, 42, 43, 45-47, 51) and all CMs a-d. In the study by Worum et al. (2020) (45), it was emphasized that implementation success could not be achieved without an active leader. This leader should provide supportive and persevering leadership, and it is their task to engage the entire organization and ensure that everyone is involved in and informed about the implementation process (45, 47). In the CMs, participants indicated that an active leader is necessary to keep an overview of other projects in the community and to keep the implementation process moving forward.

Discussion
The aim of this scoping review was to identify which contextual determinants influence the implementation of FPIs in the community. Although fall prevention requires a community-wide approach, in which various stakeholders and organizations must cross boundaries, an overview of the barriers and facilitators that influence implementation in this particular setting was still lacking. Directed qualitative content analysis of the literature and the four CMs identified determinants within all CFIR domains and in almost all (35 of the 39) CFIR constructs, suggesting that a broad array of barriers and facilitators influences the implementation of FPIs. Also, all included studies and CMs reported multiple contextual determinants to implementation, emphasizing that successful implementation of FPIs in the community is challenging, since no single factor can be identified as the key barrier or facilitator. This has been recognized in previous research as well (6, 53). However, the findings of this review indicate that a few important determinants do need to be considered when implementing an FPI in the community setting, since a relatively large overlap was shown between the determinants identified in the included studies and those identified in the CMs. One of these essential determinants concerns working collaboratively with the right stakeholders, within and outside an organization. This collaboration theme was categorized under CFIR constructs such as "networks & communications" and "cosmopolitanism", was described in almost all included studies, and was mentioned in all CMs. In the CMs, it was remarked that the lack of clarity about roles and responsibilities among involved stakeholders was often a challenge.
These findings are in line with prior research, in which strong cross-disciplinary and cross-organizational partnerships were identified as being of utmost importance; given the multifactorial nature of fall prevention, multiple stakeholders must be involved (6, 19, 23, 54). The recently published World Guidelines for Falls Prevention and Management for Older Adults also highlight that, for successful implementation, regular interaction and engagement with key stakeholders is required (3). Furthermore, appropriate leadership is important; strong project management and clear communication between leaders and implementers are needed to achieve successful implementation, and such leaders should be engaged in implementation activities to be successful (18). This has been found in previous research as well, both within the scope of fall prevention and in the broader view of evidence-based practice across health and social care settings (15, 55-57). Also, "available resources", such as time, financing, and staff, were identified frequently in the included studies and the CMs. This determinant is categorized under the CFIR construct "readiness for implementation", suggesting that when these aspects are taken care of, the readiness of an organization to implement a given intervention will increase (18). Other research has likewise highlighted the importance of handling "available resources" during the implementation of evidence-based interventions, within the scope of fall prevention as well as in other contexts (19, 58-60). Finally, taking into account the wishes and needs of patients appears to be of significant importance, including practical issues (costs, transportation, location) and the use of fall prevention-related language when reaching and interacting with older adults. The latter in particular has been shown to be an essential aspect to consider, since older adults often do not recognize that they have a fall risk that needs to be addressed, leading to reluctance to adhere to FPIs (61). Overall, it is possible that the abovementioned determinants act as core components that are less dependent on specific contexts and should therefore always be taken into account when implementing FPIs in a community setting.

In general, it should be highlighted that context matters in implementation practice, and this is emphasized by the results of this study. We found both a differentiation in the direction of identified determinants (i.e., barrier, facilitator, or no specific direction) and a variety of identified CFIR constructs within and across the included studies and involved communities. In detail, during the coding process in this study, a distinction was made between determinants that were explicitly mentioned as being a barrier or facilitator and determinants without a specific direction. This resulted in a detailed overview, showing that the majority of the identified determinants can act as both barriers and facilitators: a factor was a facilitator if it was present, and its absence was considered a barrier. This has been acknowledged in other studies as well (58, 62) and could be due to the varying contexts where the implementation took place. Furthermore, we noticed that some constructs were identified only in the included studies and not in the CMs, while other constructs were identified more often in the CMs than in the included studies.
This could also be due to the different contexts in which the implementation occurred within the included studies and the involved communities. Nilsen et al. (2019) (13) stated that the specific context in which the implementation of an evidence-based intervention is performed is considered responsible for study-to-study variations in outcomes. Hence, the results of this study can be used as an indication of which determinants might be important to consider while implementing FPIs, but the variation in identified determinants and constructs also underlines the importance of always taking local contexts into account (13, 63). The next step in the implementation process is to design tailored implementation strategies that specifically address previously identified determinants in their local context (10, 64).

There are several strengths to this review. First, gathering complementary data from stakeholders "in the field" has yielded data representing determinants from a real-life setting. This allows us, later on, to select and design implementation strategies that fit the local context, leading to the most effective results (13, 64). Moreover, consulting stakeholders in addition to the literature review has resulted in rich data; the perspectives of both health and social care professionals are represented in this review. This is in line with recommendations from the current World Guidelines for Falls Prevention and Management for Older Adults, which state that optimal implementation requires actions in both health care and social care sectors (3). Second, the comprehensive and widely used CFIR was applied, ensuring a systematic and clear approach to data analysis.

There are also a few limitations to this scoping review. The search strategy yielded studies that did not include older adults' perspectives. However, the description of the construct "patient needs and resources" partially covers this issue. Also, in a substudy of the FRIEND project, the views of older adults on fall prevention are being studied more extensively; the results will be published in the near future, and we therefore chose not to include this topic in the current review. Furthermore, different frameworks were used during the CMs and the subsequent analysis. However, this may not have led to different results, since the raw data (i.e., the barriers and facilitators on the post-its) were used for further analysis. The reason for choosing the CFIR framework over PRISM to analyze the data was that the CFIR provides a well-defined taxonomy that facilitates its usefulness as an explanatory framework for identifying and understanding the success or failure of implementation activities (18), whereas PRISM lacks clear definitions and guidance to assist in planning, understanding, and improving results (37). Also, using the CFIR throughout the entire review allowed not only for comparisons between findings in the literature and the CMs, but also for comparisons and knowledge building on what influences implementation across studies and contexts over time (63). Unfortunately, an updated version of the CFIR was published after the analysis of the current review was completed; therefore, the older version (2009) was used. The updated CFIR expanded the number of determinants, and other constructs were renamed, separated into multiple constructs, or relocated to different domains (65). Despite the many updates, the new constructs can still be mapped back to the original CFIR to ensure consistency over time (65).
Moreover, the constructs of the 2009 CFIR framework have been linked to a collection of implementation strategies developed by Powell et al. (2015) (66), helping to guide decisions about which strategies match locally identified barriers (10). Therefore, the implementation strategies selected for the communities involved in the FRIEND project will fit the local context and, consequently, lead to better implementation outcomes. The constructs of the updated CFIR have not yet been related to implementation strategies; this remains an area for future research. Also, when tailored implementation strategies are applied, it is important to understand why a strategy did or did not achieve the intended outcomes. Insight into the working mechanisms of implementation strategies may help to inform determinant-strategy matching and eventually create a more rational compilation of strategies that target local determinants and therefore fit contextual challenges. Research on mechanisms has started recently (67), but precise guidance and knowledge on this matter are still lacking, and future implementation research on this topic should be performed (68).

In conclusion, to successfully move evidence into action, the first step is to understand the local context and the interplay between contextual determinants. The findings of the current review show that multiple determinants play a role in achieving successful implementation of FPIs in the community. However, establishing collaborative relationships, accounting for time, financing, and staff, and appointing strong leaders appear to be of utmost importance, regardless of the context where the implementation occurs. Taking into account the wishes and needs of older adults while providing FPIs also appears to be essential to successful implementation. Looking ahead, our task is now to select and design implementation strategies that fit the local context within the communities involved in this review, and to provide insight into the application and effectiveness of these strategies. This will eventually support a more widely and structurally applied implementation of FPIs, which ultimately reduces falls among our growing aging population.

Author contributions
MS and JB contributed to the conception and design of the study and conceptualized the review approach. MS and JB performed the data collection and data analysis. MS wrote the first draft of the manuscript and led the manuscript writing. SV, ME, CV and JB provided detailed comments on all drafts and critically revised the manuscript. All authors contributed to the article and approved the submitted version.

Funding
This research was co-funded by Regieorgaan SIA, part of the Netherlands Organization for Scientific Research (NWO). The funder had no role in the conception and design of this study, data collection, data analysis, interpretation, or the writing of this manuscript (grant number: RAAK.PRO03.099).
Automated mound detection using lidar and object-based image analysis in Beaufort County, South Carolina

The study of precontact anthropogenic mounded features (earthen mounds, shell heaps, and shell rings) in the American Southeast is stymied by the spotty distribution of systematic surveys across the region. Many extant, yet unidentified, archaeological mound features continue to evade detection due to the heavily forested canopies that occupy large areas of the region, making pedestrian surveys difficult and preventing aerial observation. Object-based image analysis (OBIA) is a tool for analyzing light detection and ranging (lidar) data and offers an inexpensive opportunity to address this challenge. Using publicly available lidar data from Beaufort County, South Carolina, and an OBIA approach that incorporates morphometric classification and statistical template matching, we systematically identify over 160 previously undetected mound features. This result improves our overall knowledge of settlement patterns by providing systematic knowledge about past landscapes.

The study of topographically distinct anthropogenic features, including shell rings, middens, shell heaps, and earthen mounds, has been a primary focus of Southeastern archaeology since its inception (e.g., Anderson 2004; Claflin 1931; Crusoe and DePratter 1976; Marquardt 2010; Moore 1894a, 1894b; Putnam 1875; Squier and Davis 1848; Swallow 1858; Trinkley 1985). The shape, configuration, and distribution of these distinctive cultural features are routinely used as the basis for studies of demographic change, environmental alteration, social organization, and site formation in the Americas (Brennan 1977; Carr and Sears 1985; Claassen 1986; Crusoe and DePratter 1976; Lightfoot and Cerrato 1989; Peacock et al. 2005; Reitz 1988; Russo 2004; Trinkley 1985). Yet, while mounds are key components of our understanding of the archaeological past, the lack of systematic survey of large areas hinders our knowledge of their numbers and spatial patterns. In particular, wherever vegetation is dense, as is common across much of the American Southeast, we have an inconsistent and partial knowledge of these archaeological features. Today, substantial effort is placed on archaeological investigations that utilize noninvasive and nondestructive remote sensing techniques. These approaches offer opportunities to expand our ability to incorporate systematic methods into archaeological surveys across large areas (e.g., Custer et al. 1986; De Laet et al. 2007; Doneus et al. 2014; Eskew 2008; Freeland et al. 2016; Kirk et al. 2016; Krasinski et al. 2016; Kvamme 2013; Lasaponara et al. 2014; Riley 2009; Schneider et al. 2015; Thompson et al. 2011; Traviglia and Torsello 2017; Trier et al. 2015; Van Ess et al. 2006). These approaches utilize sensors, such as cameras on aerial platforms, to acquire landscape-level information about the archaeological record. These sensors measure visible light or other ranges of the electromagnetic spectrum. While the use of aerial sensors in archaeology is certainly not new (e.g., Capper 1907; Engelbach 1929; Lindbergh 1929a, 1929b), the use of photos has largely been rooted in manual analysis, in which the analyst must visually seek out features of interest. This approach, while productive, limits the ability of remote sensing to be useful over large areas.
Additionally, it leads to the inconsistent evaluation of materials, makes repeated evaluation costly, and restricts the approach to imagery that can be evaluated intuitively (e.g., visible light photography). One promising alternative to the manual evaluation of remote sensing data for detecting features of interest is the use of object-based image analysis (OBIA) (Blaschke 2010; Freeland et al. 2016). While growing in popularity across the natural sciences (e.g., Freeland et al. 2016; Magnini et al. 2017; Riley 2009; Schneider et al. 2015; Trier et al. 2015), such applications have remained largely unexplored in Southeastern archaeology. The potential for using OBIA to explore remote sensing data is particularly great for the identification and location of two forms of topographic anomalies with precontact origins: earthen constructions (i.e., mounds) and shell constructions (i.e., mounds and rings). The archaeological record of the American Southeast once had thousands of these mound features, and their study provides much of the basis of our knowledge about the prehistory of the region. Unfortunately, urban development over the last 50 years has led to the widespread destruction of many shell rings and earthen mounds (Stalter et al. 1999:864). In Beaufort County, South Carolina, for example, less than 5% of the surface has been well surveyed according to the state archaeological site files. Much of the area is under dense vegetation, making systematic surface survey difficult, if not impossible. At the same time, the rate of land development for golf courses and residential complexes has increased substantially over the last 30 years, along with a doubling of Beaufort's population (US Census 2010). Sea level projections estimate that up to 30,000 acres of dry land in the Beaufort County area will be submerged by 2040, including the nearly total inundation of many coastal islands (National Oceanic and Atmospheric Administration [NOAA] 2015). As the loss of the archaeological record continues, it is urgent that we implement efforts to systematically investigate the remaining landscape, and to do so with the greatest detail and coverage and the lowest cost possible. Consequently, remote sensing innovations have tremendous potential to address this challenge. In this paper, we explore an approach that implements a systematic remote sensing method to identify artificial mounded features, using Beaufort County, South Carolina, as a case study (Figure 1).

Mounds and their challenges
Identifying topographic features such as precontact anthropogenic mounds and rings is complicated by their morphological diversity in terms of outline, profile, and size (see Russo 2006; also see Riley 2009); these can be circular, oval, rectangular, or have irregular and effigy outlines. Even within a single class of features, such as rings, there can be variation. Within South Carolina and Georgia, for example, features identified as "shell rings" have circular or "C-shaped" outlines, whereas in Florida, shell rings are often "U-shaped" and far more amorphous (Russo 2006:24). Mounded features have a variety of two-dimensional elevation profiles, ranging from rectangular to triangular to trapezoidal or, in the case of rings, bimodal. Size also varies: shell rings in South Carolina and Georgia are substantially smaller than those in Florida.
If one is searching for rings in Florida, any automatic detection algorithm must account for objects that can occupy spaces of 250 m² or greater, whereas in South Carolina, these features are unlikely to exceed 150 m² (see Russo 2006). Fortunately, much of the variability in mound morphology is stylistic (sensu Dunnell 1978) and thus regionally specific. Therefore, algorithms designed to detect these features can be trained using regionally specific information that narrows the potential dimensions of variability. Using regional samples to train algorithms provides a statistical basis for defining parameters. It is important to note, however, that parameters are contingent on sampling; to be useful in new areas, algorithms must be trained to set appropriate parameters for those study regions. One additional challenge for identifying mounds and shell rings comes from the fact that anthropogenic topographic features resembling earthworks may in fact be relatively recent phenomena. Among such examples are remnants of levee constructions, modern construction projects, and golf courses. Golf courses, in particular, often have shape and topographic characteristics that closely resemble mounds. Minimizing false positives caused by modern land disturbance requires additional information to be utilized, such as surrounding land use or ecological context. Ultimately, isolating features that are recent in origin often requires subsurface sampling or comparisons with historic data.

Lidar
One excellent source of topographic information on the scale of landscapes comes from light detection and ranging (lidar) instruments. Lidar data are produced using an active remote sensing system that emits electromagnetic energy in the form of light and records the return times of these pulses to calculate distance. By measuring the time-of-flight of many different light pulses simultaneously, lidar is unique in its ability to record ground surfaces even in densely vegetated areas (Jensen 2007). The recent set of lidar-based studies exploring the archaeological record, including the recent discovery of several hundred new Mayan archaeological sites in Guatemala, provides excellent examples of how lidar can reveal previously hidden landscapes (Clynes 2018; Inomata et al. 2018; see also Chase et al. 2014; Evans et al. 2013; Johnson and Ouimet 2018; Weishampel et al. 2011; Witharana et al. 2018). Within the Southeast, lidar has primarily been used to produce high-resolution maps of known mound sites (e.g., Thompson et al. 2016; Wood and Pluckhahn 2017) but has not been utilized for the prospection of new archaeological deposits. In the context of the mounds and rings of the Southeast, lidar is particularly significant, as these features are often covered by dense forests, making their detection difficult with traditional pedestrian surveys (e.g., Nance 1983; Schiffer et al. 1978) or impossible with aerial photography. Lidar data are represented in three dimensions as "point clouds" in which each measurement has a spatial coordinate in three-dimensional space. The most common files for storing lidar points use the LAS format (or LAZ for compressed versions). LAS files store collections of lidar points in a binary format that provides efficient storage of large amounts of lidar data (Samberg 2007). LAS files include data on the geographic position of each mapped point plus information about collection methods, minimum and maximum values, and classification values.
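As an illustration of how such files can be handled programmatically, the following is a minimal Python sketch using the open-source laspy library; the file name is hypothetical, and it assumes the common ASPRS convention in which classification code 2 marks ground returns.

```python
import laspy
import numpy as np

# Read one lidar tile (file name is hypothetical); laspy can also read
# compressed .laz files when a laz backend (e.g., lazrs) is installed.
las = laspy.read("beaufort_tile.las")

# Keep only returns classified as bare ground (ASPRS class 2).
ground = np.asarray(las.classification) == 2
x = np.asarray(las.x)[ground]
y = np.asarray(las.y)[ground]
z = np.asarray(las.z)[ground]

print(f"{ground.sum()} of {len(ground)} returns are classified as ground")
print(f"ground elevation range: {z.min():.2f} to {z.max():.2f} m")
```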
Often the raw LAS files are converted into Digital Elevation Models (DEMs), in which the raw data are processed and interpolated into a regular grid of elevation points. The use of DEMs for analysis typically reduces the detail available in the raw lidar data but also ensures regular topographic coverage for regions of interest. For our analyses, we used DEM files generated by NOAA (2013) from the raw LAS files. The DEMs are interpolations of "bare ground" lidar returns from the original data, using nearest neighbor and inverse distance weighting (IDW) algorithms to create elevation values every 1.2 m. We conducted all subsequent analyses using our algorithm for mound detection on this raster dataset. For archaeological purposes, lidar data are usually filtered to limit analysis to those points representing bare ground elevations. In areas with obscured topographies, such as forests or vegetated landscapes, however, not all light pulses will penetrate to the ground surface. Isolating the bare ground data requires selecting those points that come from the pulses of light that are the last to return to the sensor, as opposed to earlier returned pulses that are reflected off intervening vegetation. The penetration of lidar signals through vegetation can vary and is dependent upon the power of the lidar transmitter, the wavelength of light used in the pulse, the scanning angle of the sensor, the density of the vegetation, and the type of vegetative cover present in an area (Clark et al. 2004; Crow et al. 2007). In areas for which lidar data are missing due to heavy vegetation, interpolation algorithms must be used to estimate the values of locations that lack data (Li and Heap 2014). Given any particular lidar dataset, one must devise algorithms that can isolate spatial patterns of the topography of interest. These algorithms search through the data and identify matches of points that meet criteria on overall shape, size, local relief, and degree of symmetry. The challenge to the archaeologist is to find the most effective set of criteria that can best identify features of interest with the fewest false positives and false negatives. Often it is useful to include data from other sources, including vegetation, distance to other features, land-use classification, and so on. Lidar data have a number of limitations for the detection of cultural features. The spacing of ground sampling generated through the process of lidar scanning is a major factor in the quality of the data for use in identifying features. If lidar points are too sparse, with spacings that are too far apart, the dataset will have a low spatial resolution and thus may not be adequate for recognizing distinct topographic features, especially those that are small (Johnson and Ouimet 2014). The coverage of lidar over surfaces is also impacted by the degree to which the emitted light was able to penetrate vegetation. In densely forested areas, for example, the intensity of survey must be sufficient to ensure adequate returns from beneath the canopy (Bater and Coops 2009). The utility of lidar data given a particular degree of coverage also depends on the complexity of the terrain: the more complexity one wishes to explore, the more coverage is required. In the same way, features that have low topographic profiles will require a greater degree of coverage and increased precision in the spatial positions.
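To make the gridding step described above concrete, here is a minimal Python sketch of IDW interpolation of scattered ground returns onto a regular grid. The 1.2 m cell size mirrors the NOAA product, but the neighbor count and power parameter are illustrative choices, not the values NOAA used, and the input points are synthetic stand-ins for real lidar returns.

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_grid(x, y, z, cell=1.2, k=8, power=2.0):
    """Interpolate scattered ground returns onto a regular grid with IDW."""
    xi = np.arange(x.min(), x.max(), cell)
    yi = np.arange(y.min(), y.max(), cell)
    gx, gy = np.meshgrid(xi, yi)

    # For each grid cell, find the k nearest ground returns.
    tree = cKDTree(np.column_stack([x, y]))
    dist, idx = tree.query(np.column_stack([gx.ravel(), gy.ravel()]), k=k)

    dist = np.maximum(dist, 1e-12)        # avoid division by zero
    w = 1.0 / dist**power                 # closer points weigh more
    dem = (w * z[idx]).sum(axis=1) / w.sum(axis=1)
    return dem.reshape(gx.shape)

# Example with synthetic points (stand-ins for real lidar returns):
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 5000), rng.uniform(0, 100, 5000)
z = 0.01 * x + rng.normal(0, 0.05, 5000)  # gently sloping surface
dem = idw_grid(x, y, z)
print(dem.shape, dem.mean())
```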
Finally, the utility of the data will also be impacted by how the raw data are resampled to produce DEMs (Bater and Coops 2009).

Analytic approaches to remote sensing data
There are two primary means of analyzing remote sensing data: pixel-based and object-based methods (Sevara et al. 2016). Pixel-based approaches rely on spectral values encoded in raster data. Using a library of known values associated with targets of interest, it is possible to divide raster images into a series of classes that represent those targets. In contrast, OBIA methods identify features using a number of morphological characteristics, including the spectral difference within image objects, object shape, and neighborhood analysis (Blaschke 2010:3). By incorporating multiple morphological parameters, OBIA is well suited for identifying spatially discrete features that are small, spectrally diverse, and/or structurally similar. In the case of mound detection, where features vary primarily in terms of topographic structure (e.g., shape, circularity, and elevation profiles; Larsen et al. 2017), lidar data analyzed through OBIA show great promise. More specifically, mound detection algorithms can take advantage of multiresolution segmentation and template matching (Cerrillo-Cuenca 2017; Magnini et al. 2017; Schneider et al. 2015; Trier et al. 2015). Segmentation involves the splitting of an image into individual components based on brightness thresholds, elevation profiles, shape, and texture (Haralick et al. 1973; Mao and Jain 1992). The process isolates individual pixels and then systematically expands sets of pixels into larger units. At each step, the algorithm segments the units based on differences in texture, color, and shape, which results in the division of an image into representations of surface features. For example, mounded features display sudden changes in topography, which are delineated in the segmentation process. Circular features on the ground are represented by circular image objects of the same shape and size as the mound on the ground (e.g., Freeland et al. 2016; Jahjah et al. 2007; Magnini et al. 2017; Sevara et al. 2016; Van Ess et al. 2006). Multiresolution segmentation is a method that uses iteration to combine attributes of texture, shape, compactness, and color in the segmentation procedure, thereby increasing its accuracy compared to other image segmentation methods (Burt et al. 1981; Mao and Jain 1992; Silberberg et al. 1980). Template matching is an additional OBIA approach for isolating features of interest. Template matching involves iteratively searching images using a constructed framework and evaluating statistical similarity (Trier et al. 2008, 2015; Trier and Pilø 2012; Trier and Zortea 2012). However, the approach can produce significant numbers of false positives, as the templates simply flag the portions of the image that are most similar to the specified morphology of the template. For mounds, false positives often occur as recent cultural features caused by construction along roadsides, stream banks, dams, golf courses, residual buildings, and farming (Riley 2009:82). To minimize false positives, one can use land-use maps and roadway shapefiles to filter out features that are best explained as the result of recent activity unrelated to prehistory.

Materials and methods
Beaufort County, South Carolina, contains a large number of recognized archaeological sites, a significant number of which are earthen or shell mounds (Frierson 2002; Stephenson 1971).
Shell rings are the earliest unambiguous evidence of sedentary or near-sedentary occupations of the coastal portions of the county (Russo 2006; Trinkley 1980). These deposits offer information about the subsistence and settlement patterns of Archaic period hunter-gatherer groups living along the coast. Later Woodland period deposits include earthen mounds (Trinkley 1989), a form that becomes increasingly common over time, particularly during the later Mississippian period (Anderson 1989). Much of Beaufort County consists of forests and heavily vegetated marshland. These conditions make traditional pedestrian surveys (e.g., Michie 1980) difficult. Thus, much of our knowledge about the archaeological record is limited to portions where land has been cleared for development or for which pedestrian access is relatively easy. Consequently, our knowledge of settlement patterns is limited to more inland regions. Fortunately, in 2008 and 2009, the National Oceanic and Atmospheric Administration generated lidar data for many of the counties along the coast of South Carolina. While generated to provide information about coastal flooding, these datasets offer archaeologists a means of studying landscapes that are otherwise hidden by dense vegetation. Currently available NOAA (2017) data are DEMs with a spatial resolution of 1.2 m. These DEMs are derived from the original raw lidar data using nearest neighbor and IDW interpolation algorithms. These data offer topographic elevation values for every 1.2 m, a resolution that is generally sufficient for identifying mound-scale archaeological features on the order of tens of meters (see Beck et al. 2007). As such, a majority of known mounded features (including shell rings) in South Carolina are large enough to be identified at this resolution. The data, however, are unlikely to be able to detect small mounds that are just a couple of meters in diameter; thus, the smallest mounds will be systematically missing from the results of this study. In addition to the horizontal spatial resolution, the vertical precision of the lidar data is 15 cm, meaning that for a mound to be detectable it must rise at least 15 cm from the ground surface. One issue facing the analyst is excluding cultural features with discrete topographic expressions that are not part of the precontact archaeological record. Contemporary features related to recent development (e.g., golf courses, housing developments, roadways, construction piles) often have shapes that are similar to prehistoric mounds. In order to minimize false positives from these features, we used United States Geological Survey (USGS) land-use maps and road maps from the South Carolina Department of Transportation. These maps provided us with examples of features (n = 393) that we used to create negative templates for topographically distinct nonmound features (e.g., roadways, waterbodies, linear features, and building imprints).

Pre-processing steps
Using DEM data downloaded from NOAA, we created four different rasters that highlight topography in different ways: slope, maximum focal statistics, red-relief image map (RRIM), and range focal statistics. Each of these rasters became the source information for our feature-extraction techniques.

Maximum focal statistics
Maximum focal statistic rasters are calculated by evaluating each data point and conducting a nearest neighbor analysis of elevation values in which the maximum elevation is identified over a moving window (Podobnikar 2012).
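A minimal Python sketch of such a moving-window (focal) calculation is given below; the window size is an illustrative choice rather than the value used in our workflow, and the same pattern also yields the range statistic (maximum minus minimum) described later.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

# Synthetic DEM stand-in: a flat plane with a low mound in the center.
yy, xx = np.mgrid[0:200, 0:200]
dem = np.exp(-(((xx - 100) ** 2 + (yy - 100) ** 2) / (2 * 15.0**2)))  # ~1 m dome

win = 9  # 9 x 9 cell moving window (illustrative; ~10.8 m at 1.2 m cells)
focal_max = maximum_filter(dem, size=win)                 # maximum focal statistics
focal_range = focal_max - minimum_filter(dem, size=win)   # range focal statistics

print(focal_range.max())  # local relief peaks along the mound's flanks
```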
The produced raster exaggerates topographic features in the landscape and allows for smaller objects to be seen more easily (Figure 2). Hillshade Hillshade rasters are a type of shaded-relief map that highlights elevation changes in a landscape (Figure 2). One of the drawbacks to this raster type is that the source of the light in the model causes distortion that can obscure certain landscape features (Devereux et al. 2008). For this reason, we also use a shade-free relief map known as a RRIM. RRIM Red-relief image mapping produces rasters that are based on the concept of topographic openness (Chiba et al. 2008;Yokoyama et al. 2002). Using System for Automated Geoscientific Analyses or SAGA (Conrad et al. 2015), an open-source GIS platform, we calculated topographic openness. For our DEM data, we calculated an "openness parameter" (I) for each point following Equation (1) (Chiba et al. 2008(Chiba et al. :1073. In Equation (1), O p is an assessment of positive openness-which calculates topographic concavityand O n is an assessment of negative openness-which calculates topographic convexity. RRIM is created by overlapping I with a slope gradient in ArcGIS. We then used the RRIM values to create a colored map that shows slope in a red gradient and I as a whiteto-black gradient. RRIM conversions of raw data highlight relatively slight landscape features in lidar data regardless of viewing angle (Ichita et al. 2016;Inomata et al. 2017). Range focal statistics Rasters constructed from range focal statistics show overall elevation changes in a DEM within a specific neighborhood. Because mounds are characterized by sudden changes in elevation relative to the local topography, range focal statistics can indicate locations with steep changes that might have mound features. An algorithm for topographic feature identification Our algorithm for identifying mounds follows the process illustrated in Figure 3. We conducted our template-matching procedure using eCognition (Trimble 2016). To account for morphological variability in mound shape, we created 15 templates using a series of 29 mound features (Figure 4), of which 6 are known archaeological mounds and the remainder were manually identified in lidar data (Lipo et al. 2018). Our use of 15 templates to identify mounded features enables us to assess the degree to which our template choice influences the features that are identified. The product of the template-matching consists of two correlationcoefficient maps that represent the fit to our positive and negative templates. These provide statistical probabilities on a scale from −1 to 1 (where −1 is extremely unlikely and 1 is a definitive match) of mound locations for the study area. We also conducted multiresolution segmentations of each raster image using area and circularity as classificatory parameters. In this process, we assess each of the areas matching the templates by their sizes and shapes (Freeland et al. 2016) as well as asymmetry and compactness. Asymmetry characterization helps to eliminate natural phenomena while compactness characterization tends to be associated with artificial rather than natural features (Kvamme 2013:55). We minimized false positives from our list of identified features through a number of steps in ArcGIS (ESRI 2017; Table 1). First, we used the elevation range raster to identify only those features that exhibit a total positive elevation difference of greater than or equal to 0.5 m but less than 5.0 m. 
This range is consistent with most mounds and rings known in the Beaufort County area (see Russo 2006), but it excludes all features that are topographically less than 0.5 m in relative elevation. Thus, mounds that have been flattened, eroded, bulldozed, or which have low relief will be systematically missing from our results. While the vertical resolution of the raw lidar data (15 cm) suggests that lower profile features can potentially be identified, the inclusion of small elevation differences results in an excessive number of false positives. Small differences due to natural processes, such as levee banks, tree fall, and animal burrows, would potentially appear as features with less than 0.5 m of elevation difference. As such, we decided to limit our search to those features that we could more confidently identify as prehistoric mounds in this area. Second, we used a land-use map to flag and exclude mound features that were located on or within 5 m of land classified as developed or disturbed by the USGS. These locations are often associated with false positive results due to their association with recent activity (Riley 2009). For the same reason, we excluded all results that fell within 10 m of roadways and 20 m of major highways. Third, we removed all features with topographic profiles that have slopes less than five degrees or greater than 50 degrees. We chose this range based on our ground surveys of 22 mound features in the study region (also see Wood and Johnson 1978). Fourth, we used the template-matching process to reduce our results to the most statistically viable: we excluded those results with a 75% likelihood of identification for negative templates. Fifth, we created a new raster by subtracting the negative correlation coefficient from the positive correlation coefficient. We rejected results that fell in areas of this raster with negative values. Sixth, we used the RRIM raster to visually inspect the remaining objects and remove those that could be identified as historic or recent. (Table 1 notes: Parameters are based on the ranges known from mounds previously identified in the Beaufort County study area. Circularity is measured on a scale from 0 to 1, with 1 being a perfect circle. Asymmetry and compactness are also unitless ratios, with higher values representing greater levels of asymmetry or compactness.) Results Our initial identification process produced 7,115 potential features. After inspecting these results using the steps above, we obtained 186 results that are likely to represent cultural mound features, of which 15 appear the most promising (Table 2). We chose the features we deemed to be the "most likely" to be archaeological mounds based on the visual inspection of each result using a combination of the DEM, slope raster, and RRIM, as these three datasets provide the necessary elevation information and visualization capabilities to view identified features. Paying particular attention to the features' immediate surroundings (i.e., are they located in a highly developed or undeveloped area?), elevation profile, size, and shape, the 15 high-likelihood features exhibited elevation profiles and morphological properties consistent with known mounds and were located in entirely undeveloped regions. The medium- and low-likelihood features exhibited morphological characteristics that were noticeably different from known mounds and/or had relative locations that contained greater levels of modern development and disturbance.
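A minimal sketch of the six false-positive filtering steps described above, assuming candidate features have already been exported to a table with per-feature attributes (elevation range, slope, template correlations, distances to developed land and roads). The column names are hypothetical placeholders; the thresholds are the ones stated in the text, and the final visual inspection step remains manual.

```python
import pandas as pd

def filter_candidates(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the stated thresholds to a table of candidate mound features."""
    keep = (
        (df["elev_range_m"] >= 0.5) & (df["elev_range_m"] < 5.0)       # step 1
        & (df["dist_developed_m"] > 5.0)                                # step 2 (land use)
        & (df["dist_road_m"] > 10.0) & (df["dist_highway_m"] > 20.0)    # step 2 (roads)
        & (df["slope_deg"] >= 5.0) & (df["slope_deg"] <= 50.0)          # step 3
        & (df["neg_template_corr"] < 0.75)                              # step 4
        & ((df["pos_template_corr"] - df["neg_template_corr"]) > 0.0)   # step 5
    )
    return df[keep]  # step 6 (visual inspection of the RRIM raster) stays manual
```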
In October 2017, we conducted a ground survey to assess a sample of five identified features (representing a 33% sample of high-likelihood objects; Figure 5). This number represents features that were accessible on public land and without the use of a boat. Three of the five features that we evaluated in our ground evaluation are mounds that were previously identified (see Supplemental Table 1 for all site numbers associated with identified features). Two of the features, however, are new discoveries (not yet recorded as archaeological sites). These two new mound features have evaded detection despite decades of traditional survey (e.g., Michie 1980;Russo 2006;Russo and Heide 2001;Stalter et al. 1999). The first of these is a precontact shell ring with a ∼15m-wide plaza and a ∼1.5-m-high arc ( Figure 5). The second newly identified site is a precontact mound that rises approximately 2 m from the surrounding area and shows evidence of previous looting ( Figure 5). While the ultimate determination of our method's efficacy requires a larger sample size and further ground survey, our results are promising. Many of the 186 identified sites in Beaufort County are previously unrecorded mounds, and our future work will focus on documenting each of these identified locations ( Figure 6). Only 20 of the features identified here appear in the South Carolina Archaeological Site Files, indicating that the majority of topographic anomalies identified here are not recorded as archaeological sites (see Supplemental Table 1). As such, our method has the potential to unveil over 160 new archaeological sites in the Beaufort County area. Conclusions Overall, our study demonstrates how semi-automatic OBIA using lidar data can provide a significant source of information about precontact landscapes in heavily vegetated areas. As Nance (1983) points out, the use of traditional pedestrian survey approaches in heavily vegetated areas is problematic, and in many instances, the results of such surveys are inadequate (also see Schiffer et al. 1978). Lidar offers a cost-effective means to economically identify features in areas that would otherwise be expensive to study. While successful, our method excludes potential mounds that have been plowed or otherwise disturbed by modern or natural phenomena. As such, mounds and rings that are potentially in the most need of active protection-namely, those that are being eroded, leveled by development, or otherwise reduced in size-are less likely to be identified using the specific parameters implemented here. New datasets with increased vertical and horizontal spatial resolution plus the inclusion of additional parameter potentially offer a means of identifying these smaller-and often overlooked-archaeological deposits. Our algorithm also relies partially on a comprehensive template that contains records of various known mound morphologies. As such, if a mound feature varies too far from the mounds in our template they are also unlikely to be identified. This shortcoming is resolvable by ground testing our results and continuing to apply our algorithm to other locations. As we confirm more features, they can be added to our template, making it more robust and more accurate. Despite these limitations, our algorithm enabled us to identify topographic features for an entire county of 2,481 km 2 in the span of one week. This same venture in terms of traditional pedestrian survey would be measured in years (see Nance 1983). Importantly, this method is adaptable to other locations. 
As we gathered results from our pedestrian evaluations, we were able to update our templates to include newly discovered features and to add false positives to our negative templates. Our preliminary examination of data from Charleston County, South Carolina, produced about 1,000 potential features. The use of OBIA offers promise in identifying previously undocumented archaeological features. This knowledge will help more fully document Native American settlement patterns and land use prior to European contact. With the discovery of a new potential shell ring, the roughly 50 currently known shell rings in the American Southeast (Russo 2006:40) are likely to represent only a sample of extant features. Through the use of remote sensing data and OBIA approaches we can be more confident about our knowledge of these important classes of mound features and can better contribute to the protection of these important deposits. Notes 1. Contrary to common thought, lidar is not an acronym for "light detection and ranging" but is merely a blend of the terms "light" and "radar" (see Goyer and Watson 1963). 2. Beck and colleagues (2007) conducted tests on satellite imagery data to assess the visualization capabilities of different spatial resolutions. Although different from lidar-derived DEM datasets, the implications of spatial resolution on the suitability of remote sensing data for archaeological prospection remain the same: better spatial resolution allows for the detection of smaller objects and greater numbers of archaeological deposits. Notes on contributors Dylan S. Davis is a graduate student at Binghamton University and focuses his research on human-environmental interactions and remote sensing applications for archaeological analysis. His geographic focus lies in coastal and island regions, including the American Southeast. Matthew C. Sanger is the Director of the Public Archaeology program and Assistant Professor of Anthropology at Binghamton University. His research includes Native American occupation of the southeast coastline, including South Carolina, Georgia, and Florida. Carl P. Lipo is a Professor of Anthropology at Binghamton University. His research focus centers on the use of remote sensing approaches as a means for studying the archaeological record. His work includes studies in Eastern North America and Oceania.
2018-12-01T16:40:33.159Z
2018-06-08T00:00:00.000
{ "year": 2018, "sha1": "6bf6c4f783b9dcd7cdb8ee1f778a6a01d1658052", "oa_license": "CCBYNC", "oa_url": "https://scholarsphere.psu.edu/resources/1deeafb0-9334-4e04-98a8-1ab86cd8d96c/downloads/14427", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "f0ea98f68a6833f6a70e578765d022926860289c", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Geology" ] }
259743153
pes2o/s2orc
v3-fos-license
Relationship between kidney ultrasound data and blood creatinine and urea levels in cats with autosomal dominant polycystic kidney disease Relationship between kidney ultrasound data and blood creatinine and Introduction Polycystic kidney disease (autosomal dominant polycystic kidney disease (ADPKD)) is a genetic and incurable disease characterised by abnormal formation of fluidfilled cysts in one or both kidneys, extracellular matrix remodeling, inflammation and fibrosis in the affected kidney. Autosomal dominant polycystic kidney disease is the most common inherited genetic disease of cats, predominantly affecting Persians and Persian cats (Scottish Fold, Exotic Shorthair and British Shorthair). The prevalence of polycystic disease in Persian cats reaches 17.5 %, and in Persian-related cat breeds -3.9 % (Noori et al., 2019;Bilgen et al., 2020). Among other breeds, the prevalence of the disease ranges from 6 to 13.8 % (Alzahrani et al., 2022). There are two main forms of this disease: autosomal dominant polycystic kidney disease, which is a common form of polycystic kidney disease and another form which is an autosomal recessive polycystic kidney disease that is characterised by a slower rate of progression (Noori et al., 2019). Polycystic kidney disease is characterised by unilateral or bilateral formation of cysts in the kidneys and is a systemic, progressive hereditary disease with clinical signs that can develop at any age; cysts can also form in other organs such as the liver and pancreas (Kravhenko, 2009;Bilgen et al., 2020). The formation and growth of cysts is slow, causing a decrease in renal parenchyma and a gradual decline in renal function, leading to the development of irreversible renal failure (Schirrer et al., 2021). As for the pathogenesis of the disease, many causes are still under investigation and its mechanisms are not well understood. The process of cyst formation is likely due to a combination of increased cell proliferation, fluid secretion, and extracellular matrix changes, so the loss of cell polarisation can alter the water reabsorption function, causing cysts to form in the parenchyma (Schirrer et al., 2021). Ultrasonography and genetic testing are the two main methods for screening and/or detecting polycystic diseases in humans and cats (Chapman, 2007). Both methods are highly informative, but ultrasound, as a non-invasive and simple method, is the imaging modality most commonly used for screening and diagnosis of polycystic disease in cats (Guerra et al., 2019). Ultrasound Doppler complements B-mode ultrasound and allows to assess the initial perfusion based on the calculation of hemodynamic parameters, which are increased in chronic kidney disease. Thus, ultrasound examinations are not only useful in diagnosis, but also play an important role in determining the prognosis of animals with chronic kidney disease (Bragato et al., 2017;Stock et al., 2018). Cats are classified as positive if at least one anechogenic cavity in one of the kidneys is detected (Barthez et al., 2003). Ultrasound allows a fairly accurate measurement of renal volume (Reichle et al., 2002). In their studies, the authors did not find statistically significant differences in kidney volumes based on computed tomography and volumetric measurements based on ultrasound, which makes it possible to rely on the results of the latter in assessing structural changes in the kidneys. Functional kidney disorders are determined by creatinine and urea levels. 
In clinical practice it is very difficult to accurately determine the prognosis of the disease based only on clinical trial data and blood test results, so various indicators are being studied to predict the course of polycystic kidney disease. For example, it has been established that anemia diagnosed in 6 cats with polycystic kidney disease is an indicator of the degree of renal failure and a prognostic factor. Depending on the degree of parenchymal replacement by cysts and its compression (index of cystic lesions), a positive correlation between the erythropoietin level in animals was established (Roșca et al., 2022). The determination of erythropoietin is not always available to veterinary clinics and ultrasound scanning is the first method of assessing the condition of the kidney, so in clinical practice it is necessary to establish the degree of functional impairment of the kidneys in polycystic kidney disease based on the results of ultrasound examinations. The aim of the research The aim of the study is to investigate the effect of structural changes in the kidneys on their filtration function in cats with polycystic kidney disease. Materials and methods The study was carried out on 10 domestic cats with polycystic kidney disease (ADPKD) on the basis of veterinary medicine clinics Vetline, Dog + Cat and Snow Leopard in Kharkiv during 2018-2021. Laboratory tests of blood serum were carried out in the accredited and certified laboratory ALVIS-CLASS in Kharkiv. Blood samples were taken from cats on an empty stomach from the saphenous vein of the forearm and the serum urea and creatinine content was determined by conventional methods (Vlizlo, 2012). Animals were divided into two experimental groups according to the IRIS classification (International Renal Interest Society, 2023), the first experimental group consisted of five animals with moderate renal azotemia (serum creatinine level up to 450 μmol/l) and the second experimental group consisted of five animals with severe renal azotemia (creatinine level above 450 μmol/l). Ultrasound examination of kidneys in the experimental groups was performed using a Mindray device with a microconvex transducer with a frequency of 7.5 -10 MHz in B-mode. Both kidneys were scanned in each animal. The animal was placed on its left or right side, depending on which kidney was examined. The ultrasound scan of the kidneys was performed from the lateral surface of the abdominal wall, placing the transducer directly above the kidney. The hair in this area was preshaved and ultrasound gel was applied, and the kidney was scanned in the sagittal plane so that the kidney gate was visible. The dimensions of the kidney were determined -the length, width and thickness of the cortical layer. The number of cysts and their diameter were also counted, which was determined in several scans of the kidney during the rotation of the transducer. The area of the kidneys and the area of the cysts were determined by the formula for the area of an ellipse, which is equal to the product of the length of the major and minor axes of the ellipse and the number pi (S = a × b × π, where a is the length of the major axis of the ellipse, b is the length of the minor axis of the ellipse, and π is 3.14159). The total area of cysts in the kidney was determined as the sum of the areas of all cysts in the kidney, and the ratio of the area of cysts (Sс) to the area of the kidneys (Sk) in the animal (Sc/Sk) was also determined. Information on compliance with bioethical standards. 
The studies were conducted in accordance with the requirements of the General Ethical Principles for Animal Experiments (Kyiv, 2001). Results and discussion According to the results of the study of serum creatinine and urea content, the animals were divided into two groups. The first experimental group consisted of five animals with moderate renal azotemia, whose serum creatinine averaged 326.40 ± 23.59 μmol/l, and the second experimental group included five animals with severe azotemia, whose serum creatinine averaged 887.00 ± 61.81 μmol/l, which is 2.7 times higher (P ≤ 0.001) than the creatinine level in cats of the first group. The creatinine level was significantly higher than normal in animals of both groups. The urea content in the blood serum of animals of the first and second groups was significantly increased compared to the norm and amounted to 22.82 ± 2.09 mmol/l and 42.45 ± 1.05 mmol/l, respectively. At the same time, this indicator was significantly higher in cats of the second group than in cats of the first group, by approximately 1.9 times (P ≤ 0.01). Based on the results of determining the content of creatinine and urea in the blood serum of animals, it can be concluded that azotemia was present in both groups. According to the IRIS classification, the level of azotemia in the animals of the first experimental group corresponds to the third stage of chronic renal failure, and that in the animals of the second experimental group to the fourth stage of chronic renal failure. According to the results of ultrasound examination of the kidneys, in animals of the second experimental group an increase in kidney length by 6.5 mm on average was found compared to animals of the first experimental group (P ≤ 0.001) (Fig. 1). The width of the kidneys did not differ between the two groups. In addition, in animals of the first and second experimental groups, an increase in the thickness of the renal cortical layer was found compared to the norm (up to 5.0 mm) (Stock et al., 2018), and its thickening in animals of the second experimental group was 0.8 mm greater (P ≤ 0.01) than in animals of the first experimental group (Fig. 1). In animals of the first group, ultrasound examination revealed 7 to 10 cysts in both kidneys, and in animals of the second group, 9 to 11; that is, the number of cysts did not differ significantly (Fig. 1). The area of the kidney in the animals of the first experimental group was 1777.87 ± 54.93 mm², and in the animals of the second experimental group 2038.41 ± 69.71 mm². At the same time, the area of cysts in the kidney of animals of the first and second experimental groups was 264.39 ± 14.17 mm² and 284.09 ± 21.45 mm², respectively. The ratio of cyst area to kidney area was 0.15 ± 0.01 in animals of the first experimental group and 0.14 ± 0.01 in animals of the second experimental group. That is, an increase in the area of the kidney was found in animals of the second experimental group compared to the first (Fig. 2), but no significant difference in the area of cysts between the studied animals was found. According to Reichle J. K. et al. (2002), who compared the volume of the kidneys in cats with polycystic kidney disease (n = 5; mean age 59 ± 10 months) and normal cats (n = 5; mean age 66 ± 10 months) using two imaging methods (ultrasound and CT), no statistically significant differences were found between the volume measurements of ultrasound and CT, which allows the use of ultrasound to determine the volume of renal cysts in sick cats.
In the present study, in a group of clinically healthy middle-aged cats with polycystic kidney disease (ADPKD), renal function was within normal limits and did not differ significantly from the norm. Our studies were conducted on animals with the third and fourth stages of chronic renal failure according to the IRIS classification, which had clinical signs of chronic renal failure (depression, lack of appetite, periodic vomiting, decreased skin elasticity, uremic odour in some animals). In animals with polycystic kidney disease with the fourth stage compared to animals with the third stage of chronic renal failure, an increase in the size of the kidneys in length was found, which led to an increase in the area of the kidneys under ultrasound examination. A significant thickening of the cortical layer of the kidneys was also found. There was no significant difference between the number and area of cysts in the kidneys of the studied animals, indicating that there is no correlation between the level of creatinine in the blood and the area of cysts in the kidneys of cats with the third and fourth stages of chronic renal failure. The results of ultrasound examination of structural changes in the kidneys and the number of cysts or their area cannot be used to assess the degree of functional renal failure. In animals with polycystic kidney disease with the fourth stage of chronic renal failure, the results of ultrasound examination revealed an increase in kidney length by 6.5 mm (P ≤ 0.001) and cortical thickening by 0.8 mm (P ≤ 0.01) compared with animals with polycystic kidney disease with the third stage of chronic renal failure. There was no correlation between the area of cysts in the kidneys and the level of creatinine in the blood serum of animals. Conclusion Ultrasound signs of chronic renal failure in cats with the third stage of CKD were an increase in the length of the kidney from 51.0 to 58.0 mm and the thickness of the cortical layer from 5.5 to 5.8 mm. Ultrasound signs of chronic renal failure in cats with the fourth stage of CKD were an increase in the length of the kidney from 58.0 to 64.0 mm and the thickness of the cortical layer from 5.9 to 7.2 mm. Area of the kidney I-group Area of the kidney II-group Area of cysts in the kidney I-group Area of cysts in the kidney II-group Prospects for further research. Study of the effect of structural changes in the kidneys on their filtration function in domestic cats with polycystic kidney disease with mild or moderate azotemia (stage I or II chronic renal failure).
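As a worked sketch of the grouping and area calculations described in the Materials and Methods, the helper below stages a cat by the serum creatinine cut-off used here (up to 450 μmol/l versus above) and computes kidney area, total cyst area, and the Sc/Sk ratio. Note that the standard ellipse-area formula uses the semi-axes, i.e. S = π · (length/2) · (width/2); the function names and example values are illustrative only and are not measurements from this study.

```python
import math

def study_group(creatinine_umol_l: float) -> str:
    """Grouping used in this study: moderate azotemia up to 450 umol/l, severe above."""
    return "group I (moderate azotemia)" if creatinine_umol_l <= 450 else "group II (severe azotemia)"

def ellipse_area(length_mm: float, width_mm: float) -> float:
    """Standard ellipse area from full axis lengths: pi * (L/2) * (W/2)."""
    return math.pi * (length_mm / 2.0) * (width_mm / 2.0)

def cyst_burden(kidney_length_mm, kidney_width_mm, cyst_diameters_mm):
    """Total cyst area and the Sc/Sk ratio for one kidney (cysts treated as circles)."""
    sk = ellipse_area(kidney_length_mm, kidney_width_mm)
    sc = sum(math.pi * (d / 2.0) ** 2 for d in cyst_diameters_mm)
    return sc, sc / sk

# Illustrative values only:
print(study_group(326.4))                       # group I (moderate azotemia)
print(cyst_burden(55.0, 32.0, [8, 6, 5, 4]))    # (total cyst area in mm^2, Sc/Sk)
```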
2023-07-12T06:40:20.508Z
2023-06-18T00:00:00.000
{ "year": 2023, "sha1": "c95d84c46a020b2b7776deaa7d92ba54fc3ca9a4", "oa_license": "CCBY", "oa_url": "https://nvlvet.com.ua/index.php/journal/article/download/4862/4975", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7f9ab9a990804b18dafb82df112326646bfb9826", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
234057813
pes2o/s2orc
v3-fos-license
Modeling of Merging Decision during Execution Period Based on Random Forest This study aims to investigate the key feature variables and build an accurate decision model for merging behavior during the execution period by using a data-driven method called random forest (RF). To comprehensively explore the feature variables during the merging execution period, nineteen candidate variables including speeds, relative speeds, gaps, time-to-collisions (TTCs), and locations are extracted from a dataset including 375 noise-filtered vehicle trajectories. After the variable selection process, an RF model with 9 key feature variables is finally built. Results show that the gap between the merging vehicle and its putative following vehicle and the ratio of this gap to the total accepted gap are the two most important feature variables. This is because merging vehicle drivers can easily observe the putative leading vehicles and control the relative speeds and positions to the putative leading vehicles, and they therefore tend to leave more space for their putative following vehicles. Relative speed between the merging vehicle and its following vehicle in the auxiliary lane is the only variable related to the vehicles in the auxiliary lane, which means merging vehicles mainly focus on the traffic condition in the adjacent main lane. Evaluation of the performance in comparison with the state-of-the-art method reveals that the proposed method can obtain much more accurate results in both training and testing datasets, which means RF is practical for predicting the merging decision behavior during the execution period and has better transferability. Introduction As a basic driving task, lane changing has drawn great attention recently. Lane changing behavior is considered to be an important cause of traffic oscillations and accidents [1][2][3][4]. It is estimated that lane change crashes account for 4 to 10% of all crashes in the US [5]. Lane-changing behavior is complicated and risky because it is influenced by vehicles in both the current lane and the target lane. Several factors such as velocities and gaps should be taken into account during the lane changing process. Fortunately, with the rapid development of communication technology, driving assistance systems have been developed to help drivers make safer decisions [6,7]. Lane-changing decision assistance is one of the key functions of driving assistance systems. It can help drivers make safer decisions to start a lane change. Through the Vehicular Ad-hoc Network (VANET), vehicles can communicate with the surrounding vehicles and roadside units [8][9][10]. The lane-changing decision assistance systems can deal well with discretionary lane changing by using the data from surrounding vehicles and roadside units. However, for merging areas on freeways, the same judgment rules might not be applicable [11]. In merging areas, drivers need to change to the adjacent main lane within a limited distance, which may result in traffic congestion and even breakdowns [12][13][14][15][16][17]. As a sequential decision process, the whole merging process can be simplified as a sequential two-step model (gap searching and merging execution) or a three-step model (gap searching, merging position searching, and merging execution) [18][19][20][21]. However, most previous studies focused on the gap searching process but neglected the merging execution period.
Several seconds are needed to execute the merging behavior and the traffic condition may change dynamically during the whole merging execution period. e ignorance of the merging execution process would lead to reduction of accuracy of traffic simulation and autonomous driving. us, there is a critical need to model the merging decision behavior during the execution period. During the merging execution period, the merging vehicles have interactions with putative leading (PL) and putative following (PF) vehicles in the adjacent main lane and the leading (L) and following (F) vehicles in the auxiliary lane. Various influencing factors might be considered for merging decision and should be analyzed in depth. However, previous studies [17] showed that there is multicollinearity between the variables. It was pointed by Balal et al. [22] that most of the lane changing related variables are highly correlated, implying that only a few representative or key variables might be sufficient to describe the interactions of vehicles. However, the selection of key variables is not an easy work. erefore, the variable selection process should be conducted before building parametric models such as logit model. Improper selection of the key variables might make the performance of the model deteriorate too seriously to be applied to merging assistance systems. Recently, data mining techniques have received a lot of attention in transportation fields due to their ability to deal with the large-scale data. Some of them can naturally overcome the multicollinearity problem and make full use of the training data. us, this study tried use a famous machine learning technique, random forest (RF), to model the merging decision behavior during execution period. It can not only produce more accurate prediction results but also excavate the hidden information among the data. More importantly, RF can effectively select the key variables. e main contribution can be summarized as follows: first, this study gives a comprehensive analysis of the influencing variables of merging decision. Second, the proposed RF method can accurately predict the merging decision during execution period, which can improve the safety and comfort level of driving assistance system if it could be incorporated into lane changing assistance system. ird, a key feature selection process is conducted to investigate the influencing factors. ese contributions can not only help understand the diverse influences of different variables on the merging decision but also shed new insights for driver assistance systems and autonomous driving. e remainder of the paper is organized as follows. Section 2 will provide a state-of-the-art review on the existing studies followed by section 3, which gives the methodology to build a RF model. Section 4 describes the NGSIM data used in this paper and comprehensively analyzes the influencing variables. Results and discussions are presented in section 5. Finally, the concluding remarks are presented in section 6. Literature Review Predicting merging decision has always been one of the focuses of transportation researches. A great number of models have been developed based on different theories. e first comprehensive lane changing framework was developed by Gipps [23] based on gap acceptance theory. en, similar frameworks were adopted in other studies [24][25][26][27]. However, the gap acceptance theory has been criticized that it cannot reflect the real behavior of drivers. 
To overcome the deficiency, logistic and logit models were introduced by some researchers [15,28,29]. To account for the heterogeneity among drivers, mixed models were proposed by Weng et al. [30] and Li [31]. Game theory models were also developed to model the merging behavior [32,33]. However, the prediction accuracy of the parametric models is barely satisfactory and the collinearity of influencing variables makes it difficult for researchers to choose appropriate variables to build accurate models [22]. Recently, data-driven methods, such as classification and regression tree (CART), Bayesian network, and fuzzy logic models, were used in building merging models or lane changing models and achieved promising results [16,[34][35][36][37][38]. CART was applied by Weng et al. [11] to model the merging decision in work zone area during execution period, in which time-to-collision (TTC) was considered as a risky factor. Considering the difference between cars and heavy vehicles, Moridpour et al. [39] presented the lane changing model based on fuzzy logic for heavy vehicles. A cooperative merging strategy was developed by Xu et al. [40] for vehicles with V2V and V2I networks, which is applicable to cooperative merging operations under saturated traffic conditions. However, the majority of previous studies separately considered speeds, relative speeds, and gaps as the influencing variables and ignored the interaction of variables. In addition, considering the complexity of merging behavior, a comprehensive analysis of all possible influencing factors should be conducted to better understand the merging decision during execution period. Previous studies showed that the variables of lane changing behaviour were highly correlated with each other [17,22,31]. us, selecting some representative or key variables might better describe the interactions of vehicles. However, feature selection has never been an easy work. Feature selection methods can be classified into statistics based methods [41], information theory [42], manifold [43], and rough set [44]. Besides, data-driven methods are also widely used for feature selection [34,45,46]. In this study, a popular data-driven method called random forest was applied in this paper to model the merging decision during the execution period. Compared with other models in the literature, the RF has several unique features and advantages. First, it is able to handle multisource heterogeneous data without long-time data processing. Second, as an ensemble machine learning technique based on CART, RF inherits the advantage of CART that can automatically accommodate missing data of independent variables. ird, RF overcomes the deficiency of CART and can automatically resist outliers and is not easy to be affected by small perturbations in the training data. Finally, RF can select the key variables from high dimension data by the importance of all independent variables [45,47]. RF has been successfully used in traffic prediction and produced promising results [48][49][50][51]. Methodology Predicting merging decision can be simplified as a classification problem. Some classical machine learning techniques, such as CART, are very suitable for modeling merging decision. ough CART is efficient and easy-touse, it is also easy to be affected by small perturbations in the training data [52]. 
To improve the robustness and generalization capacity of CART, an ensemble learning technique called random forest, which combines the bagging technique, CART, and random subspace method, was proposed by Breiman [45]. RF is an ensemble classifier composed of a group of decision tree classifiers and gets the prediction result by a simple majority vote. e RF model can improve the prediction accuracy of merging decision as well as help connected and autonomous vehicles (CAVs) make safer decisions during merging process. A brief description of random forest is given in this section and detailed fundamentals of mathematics can be referred to Breiman [45]. In RF, bootstrap aggregating (bagging) is the most basic theory. Suppose we have a training dataset (X, . , x K i and y i represent the feature vector and the response variable of the sample i, respectively. rough bagging, RF generates B new training sets (X b , Y b ) by sampling from (X, Y) uniformly and with replacement for N times. By sampling with replacement, some observations may be repeated in each data set (X b , Y b ) and some may not appear. e probability that each sample in en, we can get Equation (1) indicates that about 36.8% of the samples are not used in the training process, which is called OOB (Out of Bag) data. ese data can be used for validation. us, cross-validation or separate test data are not necessary like other machine learning methods. In RF, the OOB error has been proved to be an unbiased estimation of generalization error. e random subspace method is also used in RF. It can also be called attribute bagging or feature bagging, which means each tree is constructed based on a random subset of the feature variables. is method is designed to reduce the correlation between the trees and improve the generalization accuracy because the RF uses a simple majority vote of all the trees. Combining the above two methods and CART, the basic steps of RF can be shown in Figure 1 and summarized as follows: (I) Initiate the algorithm, set b � 1. (II) Use the bootstrap sampling method to obtain a new data set (X b , Y b ) by random sampling with replacement for N times, and the data that are not sampled will form a set called OOB set. (III) Randomly select m feature variables (m < J) and use the selected variables for splitting to train a decision tree T b based on the new sample set (X b , Y b ). e decision tree will grow the deepest and is not pruned. (IV) For b � 2, . . . , B, repeats steps II-III. e importance of the variables can be sorted by OOB data. RF can screen out important variables in the complex feature variable space, which is conducive to deepen the understanding of the research object. Assuming that the sample subset obtained by bootstrap method is b � 1, 2, . . . , B, the process of using RF to calculate the importance of variable x j is as follows: Journal of Advanced Transportation (2) Previous studies have shown that the merging decision could be influenced by a number of highly correlated variables [22,35]. us, the feature selection process must be conducted before building parametric merging decision models. By bagging and random space method, RF can naturally overcome the collinearity of influencing variables. Furthermore, the importance values can be utilized to rank the influencing variables and select the key feature variables through a forward stepwise or backward stepwise elimination process, which will be described in section 5.3. Data Description and Processing. 
In this section, vehicle trajectory data collected by the Federal Highway Administration (FHWA) in the NGSIM project are adopted to verify the proposed RF model. As an open-source dataset, the NGSIM dataset can provide rich and accurate vehicle trajectory data collected on both freeway and urban road [14]. It has been widely used in traffic studies such as traffic flow analysis and driving behavior modeling [18,37,53,54]. Previous studies have shown that the US-101 dataset had the best accuracy and consistency [18,55]. us, this dataset is chosen in this study. Figure 2 shows schematic diagram of data collecting site. One can find that the chosen 640 meters long segment is located between an on-ramp and an offramp with five main lanes and one auxiliary lane. Videos were captured from 7:50 a.m. to 8:35 a.m. on June 15, 2005, which was a sunny day. e dataset is updated at a resolution of 10 fps (frames per second) and contains three subsets containing 15 minutes trajectory data [56]. Table 1 shows the aggregate statics of speed and volume for every subset. e coordinates, speed, and acceleration of every vehicle at any instant can be easily obtained from the NGSIM dataset. Previous studies have shown that some random noises existed in the NGSIM data [55,57]. Filtering and smoothing techniques should be adopted before using. In this study, a data smoothing technique called symmetric exponential moving average filter (sEMA) proposed by iemann et al. [57] is applied before further data analysis. In addition, the local coordinates of three subsets are unified to filter the inconsistency of the local coordinates. Detailed steps of data processing can be referred to Li and Sun [17], Li [31], and Li and Cheng [15]. After processing, trajectories of 375 merging vehicle trajectories are extracted from the dataset. All of the vehicles are passenger cars with lengths from 2.5 m to 7.8 m. Data Extraction. After selecting the accepted gap, one merging vehicle needs several seconds to find the right time to merge into the adjacent lane and the driver may keep on adjusting the speed and relative position through acceleration deceleration during the execution period. At any time, a merging driver can either choose to continue merging or complete merging as shown in Figure 3. Let y t n define the n th merging vehicle's decision at time t. Obviously, y t n is a binary variable, shown in the following equation: Previous studies showed one second is suitable for a driver to make decisions [11,28,34,37]. us, we also choose one second in this study. en, T n represents the total time to complete merging for vehicle n. Obviously, a merging vehicle can have several observations of y t n � 0, but only have one observation of y t n � 1. By extracting the trajectory data of 375 merging vehicles, 1583 observations are obtained in this paper, that is, 375 observations are selecting to merge (y t n � 1) and 1208 observations are not (y t n � 0). It means that it takes 3.23 seconds on average for a vehicle to complete merging after making the decision of gap selection. During the process of merging execution, it has some certain influence on the additional lane and the main lane. At the same time, the merging behavior is also affected by the traffic flow state of the two lanes and the surrounding vehicles. erefore, the main factors that affect the decisionmaking of merging vehicles are the speeds, relative speeds, and gaps in the adjacent main lane and the auxiliary lane. 
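As a small sketch of the data-extraction scheme described above, the function below turns one NGSIM-style merging trajectory (10 fps) into 1-second decision observations, labeled 0 ("continue merging") for every step except the final one, which is labeled 1 ("complete merging"). Column names are hypothetical placeholders; this is not the authors' extraction script.

```python
import pandas as pd

def decision_observations(traj: pd.DataFrame, merge_frame: int, fps: int = 10) -> pd.DataFrame:
    """One row per second for a single merging vehicle; y = 1 only at completion.

    traj is assumed to hold per-frame rows with a 'frame' column plus whatever
    feature columns (speeds, gaps, TTCs, ...) have already been attached.
    """
    start = int(traj["frame"].min())
    seconds = range(start, merge_frame + 1, fps)       # sample once per second
    rows = traj[traj["frame"].isin(seconds)].copy()
    rows["y"] = 0
    rows.loc[rows.index[-1], "y"] = 1                  # decision to complete merging
    return rows
```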
However, previous models considered the above variables separately and ignored the interaction between variables. Some studies showed that the gaps between the merging vehicle and PF vehicle in adjacent main line were linearly related to the total gap during the merging process [20]. Figure 4 shows the scatter plots of the PF gaps and the accepted gaps according to the dataset used in this study. A strong linear relationship can be found in Figure 4. One can also find that the range of the ratio of the PF gap to the accepted gap for y t n � 1 is rather smaller than that for y t n � 0, indicating that this ratio might be an important factor for merging decision. erefore, the ratio of the PF gap to the accepted gap is also considered as the influence variable in this paper. In addition, a surrogate safety measure combining vehicle speeds, space gap, and time-to-collision (TTC) was also considered, because merging driver needs to control vehicle to avoid rear end accidents with the surrounding vehicles. TTC is defined as 4 Journal of Advanced Transportation where x L and x F are the longitudinal position coordinates of the front bumper of the leading and following vehicle, respectively; V L and V F are the speeds of leading and following vehicle, respectively; and L is the length of leading vehicle. Figure 5 shows the interactions between a merging vehicle and its surrounding vehicles. Table 2 shows the candidate variables and their explanations. It should be pointed out that TTC is negative when the following vehicle moves slower than the leading vehicle, which means that the collision would never occur. In addition, when the speed of the following vehicle is equal to or slightly larger than the Journal of Advanced Transportation 5 leading vehicle, TTC will be infinite or too large. In order to restrict these situations, we will set the TTC range to (0, 100 s], that is, when TTC is negative or greater than 100 s, it is set to 100 s. Table 3 shows the main statistical characteristics of the candidate variables for merging behavior. One can find that the merging vehicles move faster than both PF and PL vehicles and the PF vehicles have the lowest average speed. Both the leading and following vehicles in the auxiliary lane move faster than the merging vehicles. Additionally, the average speed of merging vehicles reduces from 12.477 m/s to 12.086 m/s during the merging process to accommodate for the mainline traffic speed, which can also be reflected by changes of average ΔV PL and ΔV PF . It is interesting to find that Gap PF increases from 9.616 m to 16.081 m while Gap PL does not change much. It means Gap PF plays an important role and the PF vehicles tend to yield to the merging vehicles during the merging execution period. One can also find that the TTC PL has the lowest average value during the merging process, indicating that the traffic conflicts between the merging vehicles and PL vehicles might be the most serious. A Pearson's correlation analysis is conducted to correlation coefficients between dependent variable and independent variables, as shown in Table 4. Bold values are the insignificant correlation coefficients at 0.95 confidence level. One can find that the dependent variable y t n has significant correlations with several independent variables, such as V PL and Gap PF . It is interesting to find that there is no significant correlation between Gap PL and y t n . (Gap PF /Gap) has the strongest correlation with y t n . 
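The TTC expression above lost its equation body during extraction (a journal running header was also folded into it); from the variable definitions given, the usual form is TTC = (x_L − x_F − L) / (V_F − V_L), which is negative when the follower is slower than the leader, exactly as the text notes. A small sketch with the study's clipping rule, restricting TTC to (0, 100 s], follows; the function name is an assumption.

```python
def ttc(x_lead, x_follow, v_lead, v_follow, lead_length, cap=100.0):
    """Time-to-collision clipped to (0, 100 s], as in the study.

    x_* are front-bumper longitudinal positions (m), v_* speeds (m/s),
    lead_length is the leading vehicle's length (m).
    """
    closing_speed = v_follow - v_lead
    if closing_speed <= 0:                 # follower not closing in: no collision
        return cap
    value = (x_lead - x_follow - lead_length) / closing_speed
    return min(value, cap) if value > 0 else cap

print(ttc(x_lead=50.0, x_follow=20.0, v_lead=10.0, v_follow=14.0, lead_length=4.5))
# (50 - 20 - 4.5) / 4 = 6.375 s
```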
Modelling Results After extracting enough data, the RF model is trained and tested in this section to verify the effectiveness. A data mining software called Salford Predictive Modeler is used in this study [16]. e data is randomly divided into two parts: 80% of the lane change cases are randomly selected as the training data, and the remaining 20% is used as the test data for validation. ough RF can use the OOB data for validation, we still do this for comparison with the state-of-the-art methods. Parameter Determination. e number of decision trees B is an important parameter of RF. When building decision trees, RF does not prune it. us, the modeling Figure 5: Schematic diagram of candidate variables. accuracy of RF will increase rapidly with the increase of the number of decision trees at first. However, after reaching a certain number, generating more trees would not improve the model accuracy but increase the computational burden. Previous studies showed that the total number of trees should be set at 200-500 [45,50]. To ensure the reliability of the modeling results, this paper sets the number of trees at 500. In RF, a randomly selected subset of features is used to build each single tree. Reducing the number of sampled features m would bring down the correlation among decision tree, leading to less generalization error. However, a too small m would also make the single tree suffer from large prediction error. Different m has been used in different studies [49,58]; thus, the number of sampled features m should be selected carefully. To select the best m, RF models are trained with an increasing number of m from 1 to 10. Table 5 shows the OOB errors with a different number of m. One can find that the OOB error has the lowest value when m is 3. us, the number of randomly sampled features m is set at 3 in this study. Variable Importance. e variable importance can be easily obtained by RF according to equation (2). e rank and importance values of independent variables are shown in Table 6. According to Table 6, it can be seen that Gap PF and (Gap PF /Gap) are the most two important variables, whose importance values are much greater than other variables. e reason is probably that merging vehicle drivers can easily observe the PL vehicles and control the relative speeds and positions with them. us, they tend to leave more space for their PF vehicles. is finding is consistent with that of the previous studies [20]. Table 6, one can find that the relative importance values of several variables are rather low, such as TTC L (0.18%), indicating that there are some redundant or irrelevant variables in the RF model. erefore, a feature variable selection process introduced by Genuer et al. [59] is applied in this study. e basic steps are shown as follows: Feature Variable Selection. 
From (1) Build a RF model with all candidate variables and rank the variables with the relative importance values in descending order (2) Delete the variable with the lowest relative importance value and create a new variable set (3) Build a new RF model with the new variable set and rank the variables with the relative importance values in descending order (4) Repeat steps (2) and (3) until only one variable remains (5) Rank all the RF models established in steps (1) to (4) according to the OOB error, and select the model and feature variable set with the lowest error After feature variable selection, nine feature variables are remained and the OOB error is reduced from 9.1% to 8.9%, indicating that reducing the number of feature variables will not reduce the prediction performance. e values of variable importance in the model are shown in Table 7. It is easy to know from Table 7 that Gap PF and (Gap PL /Gap) are still the two most important factors. ΔV F is the only variable related to the vehicles in the auxiliary lane, which means merging vehicle drivers mainly focus on the traffic condition in the main lane. Table 8 shows the prediction accuracy for training data and testing data. For comparison, a binary logit model and a CART model are also built based on the same dataset. e results show that the prediction accuracy of the RF model is much better than the binary logit model for both training data and test data. One can also find that CART has the highest prediction accuracy in training data. However, the performance of CART in testing data is much poorer than RF, indicating that RF has better ability to deal with problem of overfitting than CART. In addition, due to the influence of collinearity of variables, only six variables are included in the binary logit model. Some variables that may affect the merging decision behavior in a certain range are ignored by the binary logit model, such as TTC PL and ΔV F . It is clear that RF can overcome the collinearity problem and deeply explore the complicated nonlinear relationships between merging decision and influencing variables. One can also find that the reduction of the accuracy in training and testing dataset is also much smaller than the logit model and CART model, showing that RF is practical for predicting the merging decision during execution period and has better transferability. Conclusions is study conducts a comprehensive analysis of the influencing variables of merging decision and employs the random forest (RF) to model the merging decision behavior during the execution period. e proposed RF method can accurately predict the merging decision during the execution period and investigate important influencing factors. e US-101 vehicle trajectory data are used to train and validate the RF model. To comprehensively explore the influencing factors during merging execution, 19 candidate variables are extracted including speeds, relative speeds, gaps, time-to-collisions (TTCs), and locations. e modeling results show that Gap PF and (Gap PF /Gap) are the most two important variables, whose importance values are much greater than other variables. It is probably because that the merging vehicle drivers can easily observe the PL vehicles and control the relative speeds and positions with them and thus, they tend to leave more space for their PF vehicles. To select the effective variables, a feature variable selection process is adopted and 9 variables are selected in the RF model finally. 
Gap PF and (Gap PF /Gap) are still the two most important feature variables. ΔV F is the only variable related to the vehicles in the auxiliary lane, which means merging vehicles mainly focus on the traffic condition in the adjacent main lane. Evaluation of the performance in comparison with the state-of-the-art method reveals that the proposed method can obtain much more accurate results in both training and testing datasets. The reduction of the accuracy between the training and testing datasets is also much smaller than that of the logit model, showing that RF is practical for predicting the merging decision behavior during the execution period and has better transferability. Furthermore, it is obvious that merging drivers face more challenges and may make improper decisions under congested traffic conditions, which might cause long delays. In the future, if vehicles can receive real-time information about the traffic environment via VANETs, the proposed RF models can help merging vehicles make safer decisions. Thus, the results of this study can also improve the safety and comfort of driving assistance systems and autonomous driving systems. Data Availability The NGSIM data used to support the findings of this study have been deposited at https://catalog.data.gov/dataset/next-generation-simulation-ngsim-vehicle-trajectories. Conflicts of Interest The authors declare that they have no conflicts of interest.
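As a compact end-to-end illustration of the pipeline described in this paper (500 trees, m = 3 randomly sampled features per split, OOB error for validation, and importance ranking), a scikit-learn sketch follows. It is not the Salford Predictive Modeler workflow the authors used, the column names are placeholders, and the permutation importance here only approximates the OOB-permutation scheme referenced as equation (2). The OOB set exists because each bootstrap sample omits roughly (1 − 1/N)^N ≈ e⁻¹ ≈ 36.8% of the observations, which is the figure cited in the methodology.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# X: candidate feature table (speeds, relative speeds, gaps, TTCs, locations);
# y: merging decision at each 1-second step (1 = complete merging, 0 = continue).
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(1583, 19)), columns=[f"x{i}" for i in range(19)])
y = rng.integers(0, 2, size=1583)

rf = RandomForestClassifier(
    n_estimators=500,      # B = 500 trees, as in the study
    max_features=3,        # m = 3 features sampled per split
    oob_score=True,        # OOB data double as a built-in validation set
    random_state=0,
).fit(X, y)
print("OOB error:", 1.0 - rf.oob_score_)

# Rank variables; the study's backward elimination would repeat this while
# dropping the least important variable until the OOB error is minimised.
imp = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
print(sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1])[:5])
```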
2021-05-10T00:03:12.436Z
2021-02-03T00:00:00.000
{ "year": 2021, "sha1": "32cccf20f5b729615bd1242062642740b7d9d11c", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jat/2021/6654096.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2930b22e07c9eb33aacc427555740331aa31edb7", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
13802997
pes2o/s2orc
v3-fos-license
Rosuvastatin Does Not Affect Fasting Glucose, Insulin Resistance, or Adiponectin in Patients with Mild to Moderate Hypertension The effects of statins on insulin resistance and new-onset diabetes are unclear. The purpose of this study was to evaluate the effects of rosuvastatin on insulin resistance and adiponectin in patients with mild to moderate hypertension. In a randomized, prospective, single-blind study, 53 hypertensive patients were randomly assigned to the control group (n=26) or the rosuvastatin (20 mg once daily) group (n=27) during an 8-week treatment period. Both groups showed significant improvements in systolic blood pressure and flow-mediated dilation (FMD) after 8 weeks of treatment. Rosuvastatin treatment improved total cholesterol, low-density lipoprotein (LDL)-cholesterol, and triglyceride levels. The control and rosuvastatin treatment groups did not differ significantly in the change in HbA1c (3.0±10.1% vs. -1.3±12.7%; p=0.33), fasting glucose (-1.3±18.0% vs. 2.5±24.1%; p=0.69), or fasting insulin levels (5.2±70.5% vs. 22.6±133.2%; p=0.27) from baseline. Furthermore, the control and rosuvastatin treatment groups did not differ significantly in the change in the QUICKI insulin sensitivity index (mean change, 2.2±11.6% vs. 3.6±11.9%; p=0.64) or the HOMA index (11.6±94.9% vs. 32.4±176.7%; p=0.44). The plasma adiponectin level increased significantly in the rosuvastatin treatment group (p=0.046), but did not differ significantly from that in the control group (mean change, 23.2±28.4% vs. 23.1±27.6%; p=0.36). Eight weeks of rosuvastatin (20 mg) therapy resulted in no significant improvement or deterioration in fasting glucose levels, insulin resistance, or adiponectin levels in patients with mild to moderate hypertension. INTRODUCTION Statins (3-hydroxy-3-methylglutaryl coenzyme A reductase inhibitors) are prescribed worldwide in patients with or at risk for cardiovascular disease (CVD). Reduction of low-density lipoprotein (LDL) cholesterol is one of the primary mechanisms of CVD prevention. Beyond the lipid-lowering effect of statins alone, there is abundant evidence showing that statins provide immediate benefits, the so-called pleiotropic effects of statins. These pleiotropic effects are thought to include improved endothelial function, enhanced stabilization of atheromatous plaque, decreased oxidative stress, decreased vascular inflammation, and a decrease in the probability of developing atherosclerotic events in metabolic syndrome, type 2 diabetes, and hypertension. [1][2][3][4][5][6] These effects of statins may consequently prevent plaque rupture and subsequent myocardial infarction in the proinflammatory and prothrombotic environment. 7,8 Recently, randomized controlled clinical trials have raised the concern that lipophilic statins might have unfavorable metabolic effects, such as reducing insulin secretion and exacerbating insulin resistance and the development of new-onset diabetes. 3,9,10 Another study also showed that atorvastatin treatment resulted in significant increases in fasting insulin and glycated hemoglobin (HbA1C) levels consistent with insulin resistance in hyper- cholesterolemic patients. 11 These concerns are very important because insulin resistance increases the risk of CVD. Although some studies have been published on the adverse effects of statins, their effects on insulin resistance and new-onset diabetes are not obvious. 
3,6,11,12 The purpose of this study was to evaluate the effects of rosuvastatin on insulin resistance and adiponectin in patients with newly diagnosed mild to moderate hypertension. Patients and methods This was a randomized, prospective, single-blind study of patients with mild to moderate hypertension [systolic blood pressure (BP) <170 mmHg or diastolic BP <105 mmHg] conducted from September 2009 to April 2010. The study was carried out at Gwangju Veterans Hospital and was approved by the hospital's institutional review board. Every patient was given full information about the study objectives and methods and signed a written informed consent form. No patient had taken any lipid-lowering agent, hormone therapy, or vitamin supplements during the 8 weeks before randomization. During the 8-week pre-randomization period and the study period, to allow a fair comparison of insulin sensitivity between the two groups, all patients took an angiotensin II receptor blocker (ARB), telmisartan 80 mg, followed by a calcium channel blocker for the treatment of hypertension. Patients with newly diagnosed mild to moderate hypertension were included. We excluded patients with renal disease, hepatic disease, any thyroid disease, uncontrolled diabetes (HbA1c >8%), uncontrolled severe hypertension, stroke, acute coronary syndrome, and unstable angina. At screening, 57 patients were enrolled in the study. After a 1-week screening period, patients were randomly assigned to either placebo (Group I: mean age, 61.5±6.9 years; n=26 in the final analysis) or rosuvastatin 20 mg once daily (Group II: mean age, 60.4±7.2 years; n=27 in the final analysis) for a 2-month treatment period; allocation was performed using envelopes. One patient was diagnosed with hepatocellular carcinoma and three patients withdrew their informed consent, so the final analysis was performed on 53 patients (Fig. 1). The patients were examined at baseline and at the 8-week follow-up visit to assess changes in fasting glucose, insulin, and HbA1c levels, QUICKI (quantitative insulin-sensitivity check index), HOMA (homeostasis model assessment), adiponectin, and flow-mediated vasodilation (FMD). Measurement of blood pressure Patients rested for more than 10 minutes before BP measurement. BP was measured on the right upper arm with the patient in a sitting position. The measurement was performed at least twice, at a minimum interval of 10 minutes, and the measurements were averaged. Systolic BP of more than 140 mmHg or diastolic BP of more than 90 mmHg was defined as hypertension. Evaluation of vascular endothelial function Vascular endothelial function was evaluated noninvasively by FMD. The most accessible segment of the brachial artery, 2 to 5 cm inferior to the antecubital fossa, was imaged with a high-resolution ultrasonography unit (Sequoia 512; Acuson, Mountain View, CA, USA) fitted with a 10-MHz linear array transducer. Ultrasonography was performed according to previously reported methods. 13,14 Insulin resistance and adiponectin measurement Blood samples were obtained in the morning, after more than 8 hours of fasting, before treatment and after 8 weeks of drug administration. Plasma insulin was measured with a radioimmunoassay (Biosource Inc., Nivelles, Belgium), as was adiponectin (LINCO Research Inc., St. Louis, MO, USA).
Indices for insulin sensitivity (QUICKI and HOMA) were calculated with the following formulas: QUICKI = 1/[log(fasting insulin) + log(fasting glucose)] and HOMA = (fasting insulin × fasting glucose)/22.5, with insulin expressed in μU/ml and glucose in mg/dl. Statistical analysis All data are expressed as the mean±SD. We used Student's paired t test or the Wilcoxon signed rank test to compare values between baseline and 2 months of treatment. Measurements were compared between the two groups by repeated-measures ANOVA. The mean delta change (%) was calculated as (baseline value − follow-up value)/baseline value × 100. All statistical procedures were performed with the Statistical Package for the Social Sciences (SPSS), version 13.0 (SPSS Inc., Chicago, IL, USA). A p value <0.05 was considered statistically significant. RESULTS The baseline characteristics of the subjects are shown in Table 1, and the clinical and laboratory measurements before and after treatment are summarized in Table 2. Neither group showed a significant change in the high-sensitivity C-reactive protein level from baseline to 8 weeks. There were no significant differences in fasting glucose, fasting insulin, QUICKI, HOMA, or adiponectin levels between the two groups before or after randomization (Table 2, Fig. 3). The plasma adiponectin level increased significantly in both groups compared with baseline. However, there was no significant difference in the mean delta change between the control and rosuvastatin groups (23.2±28.4% vs. 23.1±27.6%; p=0.36; Fig. 4). DISCUSSION The current study showed that 8 weeks of rosuvastatin (20 mg daily) therapy resulted in no significant improvement or deterioration in fasting glucose levels, adiponectin levels, or insulin resistance. As expected, all components of the lipid profile improved more from baseline following rosuvastatin treatment than control treatment. Our results suggest that rosuvastatin did not cause glucose intolerance or insulin resistance. Insulin resistance is associated with an increased risk for CVD. 15,16 The association between insulin resistance and hypertension is controversial: whereas some studies have reported that insulin resistance is strongly related to hypertension, others have shown only a weak or even no association. [17][18][19] In clinical practice, risk factors for CVD tend to cluster within individuals, and hypertensive patients are at increased risk for metabolic syndrome and adverse changes in insulin resistance and the lipid profile. For risk modification, statins are prescribed in patients with multiple risk factors for CVD. Recent clinical studies have suggested that lipophilic statins, such as atorvastatin and simvastatin, as well as the hydrophilic statin rosuvastatin, might increase the onset of new diabetes. 3,9,10 However, these studies were not designed to evaluate the onset of new diabetes or insulin resistance, so their results are not conclusive and have not led to recommendations for the general population. Other researchers have previously reported that simvastatin reduces adiponectin levels and insulin sensitivity. 20 Koh et al. 11 reported that atorvastatin treatment at relatively high doses aggravates insulin resistance in healthy hyperlipidemic patients by increasing fasting glucose, insulin, and HbA1c levels. The characteristics of the patients in that study and in ours were similar.
The baseline characteristics, such as lipid levels, the proportion of diabetic patients, and laboratory findings of baseline insulin resistance, were similar, even though the patient group in that study was composed of healthy volunteers whereas ours consisted of newly diagnosed hypertensive, dyslipidemic patients. 11 Indeed, whether statins, especially atorvastatin, have a decisive effect on insulin resistance is unclear. Recently, Koh et al. 21 reported that, compared with pravastatin, rosuvastatin therapy significantly increased fasting insulin and HbA1c levels while decreasing plasma adiponectin levels and the QUICKI index relative to baseline. Several factors may explain this discordance. First, our patients simultaneously took telmisartan 80 mg, which has a PPAR-γ effect that improves insulin resistance and may therefore have masked any adverse metabolic effect of rosuvastatin; this is a limitation of our study protocol. Second, our study groups consisted of hypertensive, dyslipidemic patients and included some patients with diabetes. Because our patients already had metabolic disease, any unwanted metabolic effect of rosuvastatin may have been relatively weaker than in the patients in Koh et al.'s study. Huptas et al. 6 showed that 6 weeks of atorvastatin treatment results in significant improvement in insulin sensitivity in patients with metabolic syndrome; these conflicting results remain unexplained. Furthermore, it is unknown whether different statins have different metabolic effects on the basis of their lipophilic properties. Similar findings were shown for pravastatin, which is nonlipophilic. 22,23 Another study compared the effects of atorvastatin (10 mg) and rosuvastatin (10 mg) on changes in glucose and insulin levels and the HOMA insulin resistance index and found no significant differences between the two groups. 24 A meta-analysis of randomized controlled trials also suggests that potential differences exist between statins. 25 It is not clear why various statins have beneficial metabolic actions in some studies but not in others. Thus, further head-to-head comparative studies are needed to elucidate the effects of statins on glucose metabolism. Our results showed that lipid levels improved, adiponectin levels increased, and the percentage changes in fasting glucose and insulin levels and in the QUICKI and HOMA indexes were not significantly different between the rosuvastatin and control treatment groups. To examine the trends in each group according to treatment, we assessed both the mean value of each parameter and the mean delta change. The values shown in Table 2 and the mean change percentages (Figs. 2-4) for each parameter may appear to give different results, but this could be because the two summaries are calculated differently. Studies in an animal model of insulin resistance suggested that rosuvastatin treatment increases whole-body and peripheral-tissue insulin sensitivity via improved cellular insulin signal transduction. 26 A 20 mg dose of rosuvastatin, which is a relatively high dose, was used in our study. Rosuvastatin 20 mg has lipid-lowering potency equivalent to that of atorvastatin 40 mg. Therefore, we speculate that each statin may have differential effects on insulin sensitivity and the rate of new-onset diabetes according to dosage. The rosuvastatin (20 mg) group tended to show improved vascular endothelial function as assessed by FMD, but the difference was not significant at the end of the study.
Our study and another study showed that treatment with a statin improved FMD in patients with a decreased baseline FMD. 27 In that study, discontinuation of statin treatment reversed the improved FMD to baseline. 27 These results indicate that statins clearly affect vascular endothelial function, but only in patients with an increased burden of cardiovascular disease risk factors. In the current study, most patients had few cardiovascular disease risk factors, and the antihypertensive ARB therapy could already have produced a maximal improvement in vascular endothelial function. Under such conditions, statins would not have an additional effect on vascular endothelial function owing to a ceiling effect. Had the current study enrolled more patients with diabetes, metabolic syndrome, or other cardiovascular disease, the results might have been more informative. In our data, adiponectin levels increased in both groups but did not differ significantly between the two groups. Some diabetic patients were included in this study, because many hypertensive patients already show metabolic disease in the real world; as a consequence, the interpretation of our data is partly confounded. Furthermore, telmisartan 80 mg, which has a PPAR-γ effect that improves insulin resistance, was taken by all patients for adequate BP control. As a result, the ARB may have provided good BP control but also masked effects on adiponectin, inflammatory markers, and insulin resistance. In conclusion, our study showed that 8 weeks of rosuvastatin (20 mg daily) therapy resulted in no significant improvement or deterioration in fasting glucose levels, insulin resistance, or adiponectin levels in newly diagnosed hypertensive patients treated with the ARB telmisartan.
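The insulin-sensitivity indices and the percentage-change measure defined in the Methods are simple arithmetic. The following Python sketch reproduces them with hypothetical values (the numbers and variable names are illustrative, not data from this study); note that the HOMA expression follows the units stated in the text, whereas the 22.5 denominator is conventionally paired with glucose in mmol/l.

```python
import math

def quicki(fasting_insulin_uU_ml, fasting_glucose_mg_dl):
    # QUICKI = 1 / (log(insulin) + log(glucose)); log is taken as base 10 here,
    # the usual convention for QUICKI.
    return 1.0 / (math.log10(fasting_insulin_uU_ml) + math.log10(fasting_glucose_mg_dl))

def homa(fasting_insulin_uU_ml, fasting_glucose_mg_dl):
    # HOMA = insulin * glucose / 22.5, as stated in the Methods.
    # Caveat: HOMA-IR is usually written with glucose in mmol/l when the 22.5
    # constant is used; here we simply follow the units given in the text (mg/dl).
    return fasting_insulin_uU_ml * fasting_glucose_mg_dl / 22.5

def mean_delta_change(baseline, follow_up):
    # Delta change (%) = (baseline - follow-up) / baseline * 100,
    # reproducing the sign convention of the Statistical analysis section.
    return (baseline - follow_up) / baseline * 100.0

# Hypothetical example values (not taken from the study data):
insulin_0, glucose_0 = 8.0, 95.0    # baseline
insulin_8, glucose_8 = 9.5, 98.0    # after 8 weeks

print(f"QUICKI at baseline: {quicki(insulin_0, glucose_0):.3f}")
print(f"HOMA at baseline:   {homa(insulin_0, glucose_0):.1f}")
print(f"Glucose delta change: {mean_delta_change(glucose_0, glucose_8):.1f}%")
```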
2018-04-03T01:39:28.255Z
2013-04-01T00:00:00.000
{ "year": 2013, "sha1": "8e2b43b6bc4e1fe1404b9f6f3b339ad239677ede", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc3651984?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "8e2b43b6bc4e1fe1404b9f6f3b339ad239677ede", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
5620866
pes2o/s2orc
v3-fos-license
Detecting abnormality in heart dynamics from multifractal analysis of ECG signals The characterization of heart dynamics with a view to distinguishing abnormal from normal behavior is an interesting topic in the clinical sciences. Here we present an analysis of electrocardiogram (ECG) signals from several healthy and unhealthy subjects using the framework of a dynamical-systems approach to multifractal analysis. Our analysis differs from conventional nonlinear analysis in that the information contained in the amplitude variations of the signal is extracted and quantified. The results reveal that the attractor underlying the dynamics of the heart has a multifractal structure and that the variations in the resulting multifractal spectra can clearly separate healthy subjects from unhealthy ones. We use a supervised machine learning approach to build a model that predicts the group label of a new subject with very high accuracy on the basis of the multifractal parameters. By comparing the computed indices of the multifractal spectra with those of beat-replicated data from the same ECG, we show how each ECG can be checked for variations within itself. The increased variability observed in these measures for the unhealthy cases can be a clinically meaningful index for detecting abnormal dynamics of the heart.

Detrending of the signals The ECG signals often contain global trends, as shown for a typical dataset in the top panel of Fig. S1. These trends are usually the result of body movement by the subject while the ECG is being recorded. As part of the preprocessing, we first remove these trends as described below. The detrended data obtained after removing the global trends are shown in the bottom panel of Fig. S1. To remove the undesirable trends, we fit a polynomial of a certain degree to the signal, which is then subtracted from the actual signal to obtain the detrended signal. To choose the appropriate degree n of the fitting polynomial, we define a deviation δ(n) of the original signal from the detrended signal.

Figure S1: Time series of a randomly chosen subject before detrending (top panel) and after detrending (bottom panel). As explained in the text, the global trend is removed by fitting a polynomial to the original data.

It is important to choose the right degree for the fitted polynomial: if the degree is too small, the fit will not remove the higher-order trends. Thus, we try polynomials of different orders to see how they affect the resulting time series. Figure S2 shows the variation of the deviation δ as a function of n, the degree of the polynomial, for a few randomly selected ECG time series. As is clear from the plot, the values of the deviation saturate for sufficiently high n. Based on this, we use n = 20 to detrend all the datasets.

Figure S2: Deviations as a function of the degree of the detrending polynomial for a few randomly selected ECG waveforms. It can be seen that for high values of the degree, the deviations saturate.

Detrended waveforms from a healthy and an unhealthy subject are shown in Figure S3. It is almost impossible to quantify the difference between the variations in these waveforms visually. To resolve this problem, we turn to a dynamical-systems framework and use these waveforms to reconstruct the underlying dynamical attractors. By comparing the properties of these reconstructed attractors, we can quantify the differences between the original waveforms. The details of the embedding technique are given below.
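Before turning to the embedding, the detrending step itself is easy to reproduce. The sketch below is a minimal Python illustration on a synthetic signal, not the authors' code; the deviation is taken here as the root-mean-square difference between the original and detrended series, which is one plausible reading of δ(n), since its defining equation is not reproduced above.

```python
import numpy as np
from numpy.polynomial import Polynomial

def detrend(signal, degree):
    # Fit a polynomial of the given degree to the signal and subtract it.
    t = np.arange(len(signal))
    trend = Polynomial.fit(t, signal, degree)(t)
    return signal - trend

def deviation(signal, degree):
    # Root-mean-square difference between original and detrended signal;
    # one plausible choice for delta(n) (the paper's exact definition may differ).
    return np.sqrt(np.mean((signal - detrend(signal, degree)) ** 2))

# Hypothetical ECG-like series: an oscillation superposed on a slow drift.
t = np.linspace(0.0, 10.0, 5000)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * t ** 2 + 0.3 * t

# The deviation saturates once the degree is large enough to capture the
# global trend; the paper settles on n = 20 for its datasets.
for n in (1, 2, 5, 10, 20):
    print(n, round(float(deviation(ecg, n)), 4))
```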
For uniformity, all the values in the time series c(t_k) are first scaled between 0 and 1 using a "compression" transformation, s(t_k) = [c(t_k) − c_min]/(c_max − c_min), where c_min and c_max are the minimum and maximum values in the time series c(t_k), respectively. Each time series s(t_k) is then embedded into an M-dimensional space by constructing delay vectors of the form [s(t_i), s(t_i + τ), ..., s(t_i + (M − 1)τ)]. Here the time delay τ is the time, measured in units of the sampling interval t_{i+1} − t_i, at which the autocorrelation of the signal falls to 1/e of its original value [2]. It is easy to see that there are in total N − (M − 1)τ embedded vectors. As mentioned in the main text, the value of M is chosen from the saturation values of the correlation dimension. Takens' embedding theorem dictates that the phase-space trajectories, or attractor, obtained from these vectors have the same topological properties as those of the original system [3]. The distribution of the points on the attractor thus reconstructed is usually non-uniform and determines the multifractal properties of the attractor. Figure S4 shows the multifractal spectra for the time series given in Figure S3. Our method uses four parameters, α_1, α_2, γ_1 and γ_2, to characterize a given multifractal spectrum uniquely. As explained in the main text, the difference α_2 − α_1, or the width of the multifractal spectrum, is a measure of the complexity of the attractor, since it represents the range of scales required to characterize the attractor fully. The other two indices, γ_1 and γ_2, represent the functional form (Eq. (3) in the main text) of the multifractal curves and are required for the complete characterization. It can thus be seen from Figure S4 that the waveform of the healthy heart is more complex than that of the unhealthy heart. To see this in detail for many datasets together, one can visualize the various parameter planes to check whether the healthy and unhealthy groups cluster in different regions of these planes. As an example, we show the results for the α_1-α_2 planes in Figure S5; almost all healthy cases are seen to be more complex than the unhealthy cases.

Figure S4: Multifractal spectra f(α) for the two time series shown in Figure S3. The spectrum for the healthy case can be seen to be broader than that for the unhealthy case.

Figure S5: α_1-α_2 planes for different channels. The red squares represent the patients and the green circles represent the healthy subjects.

As explained in the main text, the α_2 values have larger numerical errors for the datasets used in the study. Hence, for further characterization, we rely on the α_1-γ_1 and α_1-α_0 planes and show how they can be effectively used to group the datasets as healthy and unhealthy.

Extracting a single beat from an ECG time series Identifying a single beat in an ECG signal is tricky, since a beat cannot be defined as a pattern that repeats with exact periodicity in the ECG signal. However, ECG signals do have an approximate periodicity because of the presence of beats. For the data used here, the individual beats are seen to repeat with a period T ∈ (600, 1500), in units of milliseconds. To find the exact value of the period, we calculate the autocorrelation of the time series as a function of the lag τ; the resulting plot for a typical ECG time series is shown in Figure S6. The τ value corresponding to the highest peak in this range is then taken to be the period of the signal, and the same is used to extract a single beat from the time series.
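A minimal Python sketch of this period-detection and beat-extraction step is given below; the sampling rate, the synthetic signal, and the function names are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def autocorrelation(x):
    # Normalized autocorrelation of a 1-D signal for all non-negative lags.
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]

def beat_period(signal, fs_hz=1000, t_min_ms=600, t_max_ms=1500):
    # Lag (in samples) of the highest autocorrelation peak within the
    # physiologically expected beat-period window of 600-1500 ms.
    acf = autocorrelation(signal)
    lo = int(t_min_ms * fs_hz / 1000)
    hi = int(t_max_ms * fs_hz / 1000)
    return lo + int(np.argmax(acf[lo:hi]))

def extract_beat(signal, period, start=0):
    # Cut out one beat of length `period` samples starting at `start`.
    return signal[start:start + period]

# Hypothetical example: a noisy signal repeating every ~900 ms, sampled at 1 kHz.
fs = 1000
t = np.arange(10 * fs) / fs
sig = np.sin(2 * np.pi * t / 0.9) ** 3 + 0.05 * np.random.randn(len(t))

p = beat_period(sig, fs)
one_beat = extract_beat(sig, p)
print("estimated period (ms):", p * 1000 / fs, "| beat length (samples):", len(one_beat))
```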
The beats thus extracted are used to generate the beat-replicated data employed in the analysis.
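For completeness, the scaling and delay-embedding steps described above can also be sketched in Python. The synthetic signal, the helper names, and the choice M = 4 are illustrative assumptions; in the paper, M is chosen from the saturation of the correlation dimension and τ from the 1/e crossing of the autocorrelation.

```python
import numpy as np

def compress(c):
    # Scale the series to [0, 1]: s = (c - c_min) / (c_max - c_min).
    c = np.asarray(c, dtype=float)
    return (c - c.min()) / (c.max() - c.min())

def delay_from_autocorrelation(s):
    # Smallest lag (in samples) at which the autocorrelation falls below 1/e.
    x = s - s.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]
    below = np.where(acf < 1.0 / np.e)[0]
    return int(below[0]) if below.size else 1

def embed(s, M, tau):
    # Build the N - (M - 1) * tau delay vectors used to reconstruct the attractor.
    n_vectors = len(s) - (M - 1) * tau
    return np.array([s[i : i + M * tau : tau] for i in range(n_vectors)])

# Hypothetical example with a synthetic oscillatory series.
t = np.arange(0, 60, 0.01)
raw = np.sin(t) + 0.4 * np.sin(2.7 * t)

s = compress(raw)
tau = delay_from_autocorrelation(s)
vectors = embed(s, M=4, tau=tau)   # M = 4 is purely illustrative
print("tau (samples):", tau, "| embedded vectors:", vectors.shape)
```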
2017-04-29T03:15:13.000Z
2017-04-29T00:00:00.000
{ "year": 2017, "sha1": "23175d6dc91e81045073fc014b542c12292a6995", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-017-15498-z.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "306f644c33a26c02828ab706222358fb6968a2b6", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Biology", "Physics", "Computer Science", "Medicine" ] }