Adaptation of the Four Forms of Employee Silence Scale in a Polish sample

Background

Silence is understood as a decision not to speak up in situations of observed irregularities, both in productivity and in ethics. The study examined the validity of the Four Forms of Employee Silence Scale (FFESS) in the Polish population. The scale is a four-factor measure designed to capture differently motivated tendencies to be silent in organizations. The scale distinguishes acquiescent, quiescent, prosocial and opportunistic silence. Employee silence has been linked to many important individual outcomes: failure to react to ethical transgressions, stress and depression, and lower creativity and productivity. The motivation for being silent is embedded in the social relations in an organization. Employee silence is a decision about how to behave in a particular social context; it is "the withholding of any form of genuine expression about the individual's behavioral, cognitive, and/or affective evaluations of his or her organizational circumstances to persons who are perceived to be capable of effecting change" (Pinder & Harlos, 2001, p. 334). Silence is not conceived of only as an upward communication-related process. Withholding ideas and information can also be part of the broader organizational context. Sharing knowledge with colleagues or teammates as part of creativity and innovation endeavors is limited when employees experience generalized negative affective states, which is the case under high job demands such as workload or ambiguity (Madrid, Patterson, & Leiva, 2015). The importance of affective factors was also underlined by Perlow and Williams (2003), who showed that silence generates feelings of humiliation and anger, which limit creativity and undermine productivity.

Though there are researchers who study silence as though it were a unitary construct, not all agree with this claim (Milliken et al., 2003; Pinder & Harlos, 2001; Van Dyne et al., 2003). The arguments for conceptualizing silence as a multidimensional construct, rather than the simple opposite of voice, are theoretical and also based on studies. There are small correlations between silence and voice (Madrid et al., 2015). Both are acknowledged as kinds of proactive behavior (Parker & Collins, 2010). There are different kinds of voice (Maynes & Podsakoff, 2014) and different motives for silence (Brinsfield, 2013). In the opinion of Knoll and van Dick, studying silence as a unitary concept "is an impediment of the progress in understanding why and when employees withhold their opinion, their knowledge and especially their concerns" (Knoll & van Dick, 2013, p. 350). There are at least four reasons for which employees restrain themselves from speaking up: 1) the negative view of the possibilities of change (acquiescent silence), 2) fear of negative consequences of speaking up for oneself (quiescent silence), 3) the need to maintain harmonious relations with others (prosocial silence), 4) the need to protect and enhance one's own interests (opportunistic silence).
Four types of silence

Acquiescent silence is passive and generalized (Van Dyne et al., 2003). It lies at the core of the concept of organizational membership. The belief that personal efforts to improve the situation, both in productivity and in morality, are futile because there is nobody to hear the constructive voice involuntarily leads to maintaining the system (Pinder & Harlos, 2001). The passive acceptance of the status quo means that alternatives are not recognized and there is no will to change the situation. The belief that speaking up is pointless comes from previous efforts to raise more substantive issues that fell upon deaf ears. This form of silence demands broad assistance to break it.

Quiescent silence is based on the fear that the consequences of speaking up will be personally costly (Morrison & Milliken, 2000; Pinder & Harlos, 2001). This kind of silence is more proactive than acquiescent silence and more consciously driven by considerations of negative outcomes. The idea of fear-based silence is elaborated in the study of psychological safety, which is a critical precondition for speaking up in the work context (Edmondson, 1999). The MUM effect is stressed in describing the core of quiescent silence (Van Dyne et al., 2003). This effect emerges when people refrain from delivering bad news to avoid such negative consequences as personal discomfort, which could be evoked by negative responses of the recipients (Lee, 1993). Quiescent silence does not have to be consciously driven. This is the case when the intensity of fear is high and an automatic response is activated in situations such as challenging the authorities (Kish-Gephart, Detert, Treviño, & Edmondson, 2009).

Prosocial silence is based on altruism and cooperative motives. The beneficiaries are other people or the whole organization. In its essence, prosocial silence can be related to the organizational citizenship behavior phenomenon (Smith, Organ, & Near, 1983; Bateman & Organ, 1983), which is discretionary behavior that is not mandated by an organization. Another employee's prosociality may be assessed with such statements as: "This employee withholds confidential information, based on cooperation", "This employee protects proprietary information in order to benefit the organization", "This employee withstands pressure from others to tell the organization's secrets" (Van Dyne et al., 2003). Prosocial silence is intentional and proactive. Prosocial silence can be related to prosocial rule breaking, that is, conscious actions undertaken to effectively complete a task or to maintain positive relations with clients when the action does not comply with organizational rules (Morrison, 2006; Dahling, Chau, Mayer, & Gregory, 2012).

The fourth type of silence, opportunistic silence, was proposed by Knoll and van Dick (2013), who define it as deliberately withholding information to gain egoistic profit. The authors consider different manifestations of opportunism, including delivering incomplete or distorted information. The aim of this practice is to preserve power and status when they are threatened by prospective changes. Opportunistic silence can also be a tactic to avoid additional workload by misleading or confusing others. That is why the authors call it deviant silence, assuming that an employee is aware of the harm done to others.
Measurement of organizational silence

Silence has been assessed mainly through qualitative research and case studies (Milliken et al., 2003; Morrison & Milliken, 2000; Perlow & Repenning, 2009). To measure quiescent and acquiescent silence, Parker, Bindl, Van Dyne, and Wong (2009) proposed a 10-item scale. In this tool, the acquiescent silence subscale includes such items as "My view would make no difference" or "No one would take much notice of my concerns". The quiescent silence subscale contains items such as "I would not want to hurt my career" or "I would not want to hurt my position in the team". Acquiescent silence, defensive silence, and prosocial silence can also be measured by a 15-item scale developed by Van Dyne et al. (2003). Example items include: "I am unwilling to speak up with suggestions for change because I am disengaged" and "I do not speak up and suggest ideas for change, based on fear." Brinsfield (2013) developed a 31-item scale that consists of six dimensions of motives for silence (ineffectual, relational, defensive, diffident, disengaged, and deviant). The climate of silence was assessed by Vakola and Bouradas (2005), the group voice climate by Morrison et al. (2011), and employees' beliefs about when and why speaking up at work is risky or inappropriate by Detert and Edmondson (2011). Morrison and Milliken (2000) proposed that the climate of silence is a collectively shared belief that speaking up on critical issues is not only futile but can also be dangerous. The climate of silence is formed through organizational structures and policies and management's implicit beliefs.

Knoll and van Dick (2013) developed a measure for the distinct assessment of the four forms of organizational silence (acquiescent, quiescent, prosocial and opportunistic silence) and demonstrated its usefulness. The authors showed that there is a positive correlation between an organizational climate of silence and three forms of employee silence, the exception being prosocial silence. They delivered evidence that correlations with the climate of silence were stronger for acquiescent silence than for quiescent and opportunistic silence. The authors verified the hypothesis of a correlation between organizational identification and acquiescent silence, concluding that silence does not have to be associated with negative attitudes toward the organization. They also found negative correlations of all forms of silence with job satisfaction and well-being and positive correlations with strain. The results for employees who engaged in opportunistic silence were a surprise for Knoll and van Dick because, contrary to expectations, these employees reported the lowest scores in well-being and experienced strain. This convinced the authors of the scale of the necessity to redefine the meaning of opportunistic silence. Finally, they established that turnover is positively related to all forms of silence.
The present study

The aim of the study is to examine the validity of the Four Forms of Employee Silence Scale (Knoll & van Dick, 2013) in a Polish sample 1 and to establish the criterion-related validity of the scale by correlating the four forms of silence with measures of emotional attitude toward the organization, procedural justice, relational contract and turnover intention. Emotional attitude toward work in an organization, as an important element of employees' attitudes, is strongly connected with satisfaction, engagement, burnout, workaholism and organizational commitment (Barbier, Peters, & Hansez, 2009). The emotional aspect of the employee's attitude is not only the result of the employee's judgment of employee-employer relations but is also a part of experience that actively participates in behavior regulation (Yu, 2009). The relational psychological contract is a broad, long-term, socio-emotional set of mutual obligations between employer and employee (Rousseau, 1995). It stresses such values as commitment and loyalty and is consistent with collective interest. When organizations show care and support for employees by providing favorable contracts and working conditions, an employee perceives relations in the organization through the lens of the relational contract, and develops high trust in the organization and a sense of belonging (Behery, Paton, & Hussain, 2012). Studies have shown that a climate of fairness plays an important role in enhancing motivation to speak up at work (Pinder & Harlos, 2001), and, specifically, that employees are less silent when they perceive a high level of procedural justice (Tangirala & Ramanujam, 2008).

Scale translation

The scale was translated into Polish by following the recommendations of the Guidelines for Translating and Adapting Tests (International Test Commission, 2005). The emphasis was put on an equivalent linguistic transfer. There were two translation teams, including a total of two psychologists, three professional translators from English into Polish and two English native speakers who knew the Polish language (back translation). One member of the research team, who is qualified in psychology, supervised the whole process. The aim of the team was to uphold the colloquial character of the items in the new version. The two independent teams of translators discussed differences between the back-translations and the original English items, and any necessary corrections were carried out.

Participants and procedure

We collected data from eight samples including a total of 1044 employees of various organizations who had worked for at least six months in a given position. The survey was anonymous and voluntary. Employees were asked to complete questionnaires in paper or electronic form (access to the study was restricted by password).
Sample 1 (n = 204) included 81 men and 123 women; 170 employees held non-managerial positions and 34 managerial; the mean age was 31.8 (SD = 10.8) and average seniority was 10.6 (SD = 10.1). Sample 2 (n = 176) consisted of employees of small and medium-sized companies, including 67 men and 119 women; 143 employees held non-managerial positions and 33 managerial; the mean age was 30.2 (SD = 7.6) and average seniority was 7.7 (SD = 7.2). Sample 3 (n = 161) consisted of managers of different companies, participants in MBA studies, including 89 men and 72 women; 37 employees held non-managerial positions (candidates for managers) and 123 managerial; the mean age was 39.8 (SD = 7.4) and average seniority was 16.6 (SD = 7.5). Sample 4 (n = 100) included 30 men and 70 women; 82 employees held non-managerial positions and 18 managerial; the mean age was 34.7 (SD = 9.3) and average seniority was 12.8 (SD = 9.7). Sample 5 (n = 184) included 57 men and 127 women; 122 employees held non-managerial positions and 62 managerial; the mean age was 39.1 (SD = 10.5) and average seniority was 16.5 (SD = 11.1). Sample 6 (n = 78) consisted of employees of several branches of an international company, including 38 men and 40 women; 69 employees held non-managerial positions and 9 managerial; the mean age was 30.5 (SD = 6.3) and average seniority was 7.5 (SD = 6.9). Sample 7 (n = 72) consisted of employees starting their careers, including 8 men and 64 women; the mean age was 26.8 (SD = 2.7). Sample 8 (n = 69) included employees of a Polish organization (in order to ensure anonymity, the subjects did not enter demographic data).

Measures

Four Forms of Employee Silence. We used the 12-item measure from the final version of the scale created by Knoll and van Dick (2013). The scale consists of statements completing the item root "Sometimes I remain silent at work...", rated on a seven-point scale from 1 (never) to 7 (very often). The results of the confirmatory factor analysis of the original and Polish versions of the scale are reported in Table 1. The tool was used in all eight samples.

Positive and negative emotional attitude toward the organization. We used the 14-item Emotional Attitude towards the Organization Scale (Jurek & Adamska, in press). The original questionnaire was developed and validated in Poland. The tool consists of seven items referring to negative emotions associated with the workplace (e.g. "What happens in my workplace exhausts me mentally", "I feel bad in my workplace") and seven items referring to positive emotions (e.g. "I owe a lot to my organization", "I feel proud that I work for my organization"). Items are answered on a five-point Likert scale from 1 (completely disagree) to 5 (completely agree). The tool was used in samples 1, 2 and 8.
Procedural justice. This is a subscale of the Organizational Justice Scale of Colquitt (2001). It consists of seven items preceded by the remark that "the following items refer to the procedures used in your organization" and asking to what extent: Have you been able to express your views and feelings during those procedures? Have you had an influence over the (outcome) arrived at by those procedures? Have you been able to appeal the (outcome) arrived at by those procedures? Have those procedures upheld ethical and moral standards? A seven-point Likert scale from 1 (completely disagree) to 7 (completely agree) was used for answering the questions. The scale was adapted to the Polish sample and proved to be reliable (Retowski & Adamska, 2015). The tool was used in samples 3 and 8.

Turnover intention. The subjects were asked what the chances were that they would react to an unpleasant incident in their workplace in the following ways: "considering the possibility of changing jobs" and "looking for job advertisements that would fit you". These two items come from the Polish version of the questionnaire that measures employees' reactions to difficult situations (EVLN model; Hagedoorn, Van Yperen, Van de Vliert, & Buunk, 1999) in the adaptation by Retowski and Chwiałkowska-Sinica (2004). Items are answered on a five-point Likert scale from 1 (completely disagree) to 5 (completely agree). The tool was used in sample 2.

Relational psychological contract was measured by a subscale of the Swiss Psychological Contract Measure (Raeder, Wittekind, Inauen, & Grote, 2009). The subscale contains 13 items, answered on a scale ranging from 1 (not at all) to 5 (very much), related to different aspects of the relational psychological contract in organizations (e.g. loyalty, decision-making, career development, safety, working atmosphere). Subjects evaluated how much their employer offers a working environment in which these opportunities are realized. This is in line with the concept of the psychological contract, which is implicit, rarely discussed and mainly accessible during the process of change but not when a routine reaction is needed (Schalk & Roe, 2007). The original version of the scale, in addition to the employer's offer subscale, contains three other subscales: employee's expectations, employee's contribution and employer's expectations. The tool was used in samples 6 and 8.

The reliability coefficients for the measurement of all variables are reported in Table 3.
Results

Confirmatory factor analysis

We conducted a series of confirmatory factor analyses (CFAs) to test whether the four forms of employee silence were empirically distinct concepts in the Polish sample. In the data analysis we used the R system for statistical computing (R Development Core Team, 2012) and the R package lavaan (Rosseel, 2012). Multiple model fit indices were reported, including the chi-square statistic (χ2), the comparative fit index (CFI) and the root mean square error of approximation (RMSEA). To assess the fit of the model to the data, we used the criteria recommended by Hu and Bentler (1999) and Brown (2015): we accepted CFI values greater than .95 and RMSEA values lower than .08. CFA results confirmed the superiority of the four-factor model (see Table 1). This model provides a good fit [χ2 (n = 1044) = 143.67; df = 48; CFI = .99; RMSEA = .044] and a significantly better fit than the unidimensional model [χ2 (n = 1044) = 504.29; df = 54; CFI = .96; RMSEA = .089]; Δχ2 = 360.62, ΔCFI = .03. The obtained results support the conclusions presented by Knoll and van Dick (2013).

Finally, Cronbach's α coefficients were used to assess the internal consistency of the four subscales of the Four Forms of Employee Silence Scale in each sample separately (see Table 2 for details). Cronbach's α values between .75 and .85 (for the total sample) indicate a good level of reliability.

Validity

Criterion-related validity was established by correlating the four forms of employee silence with constructs theoretically linked to this phenomenon: emotional attitude toward an organization, procedural justice, relational contract and turnover intention. Table 3 presents an overview of the descriptive statistics and bivariate correlations between the four forms of employee silence and the variables included in the study.

A consistent pattern of positive relationships between the four forms of silence and both negative emotional attitude toward an organization and turnover intention emerged. Individuals with high scores on negative emotional attitude and turnover intention also reported higher levels of silence, especially acquiescent and quiescent silence. There were also significant negative correlations between employee silence and positive emotional attitude toward an organization, procedural justice and perceived relational contract. Again, the strongest correlations were found for acquiescent and quiescent silence.
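To make the model comparison above concrete, the following minimal lavaan sketch reproduces the type of analysis reported: a four-correlated-factor model against a single-factor model, compared with a chi-square difference test. The item names (as1-op3) and the data frame ffess_data are hypothetical placeholders, since the text does not list the Polish item labels; with 12 items and three per subscale, the degrees of freedom match those reported (48 vs. 54, so Δdf = 6).

library(lavaan)

# Four-factor model: three FFESS items per form of silence (placeholder item names)
four_factor <- '
  acquiescent   =~ as1 + as2 + as3
  quiescent     =~ qs1 + qs2 + qs3
  prosocial     =~ ps1 + ps2 + ps3
  opportunistic =~ op1 + op2 + op3
'

# Competing unidimensional model: all 12 items load on one silence factor
one_factor <- '
  silence =~ as1 + as2 + as3 + qs1 + qs2 + qs3 +
             ps1 + ps2 + ps3 + op1 + op2 + op3
'

fit4 <- cfa(four_factor, data = ffess_data)   # ffess_data: one row per respondent, 12 item scores
fit1 <- cfa(one_factor,  data = ffess_data)

fitMeasures(fit4, c("chisq", "df", "cfi", "rmsea"))   # acceptance criteria used: CFI > .95, RMSEA < .08
anova(fit1, fit4)   # chi-square difference test between the nested models (Δdf = 6)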
Conclusion and discussion

The aim of this paper was to examine the reliability and validity of the Polish version of the Four Forms of Employee Silence Scale, which measures four different motives for keeping silent. CFA results confirmed the superiority of the four-factor model, which showed a significantly better fit than the unidimensional model or alternative models with two or three factors. The analyses provide evidence for a good level of internal consistency of the scale. In a sample of 1044 employees the value of Cronbach's α ranges between .75 and .85. Criterion-related validity was established by demonstrating a positive correlation of silence with negative emotional attitude toward an organization and turnover intention. It was also shown that silence drops with a more positive emotional attitude toward an organization and higher evaluations of procedural justice and relational contract.

The rationale for adapting the scale that measures four forms of employee silence to the Polish context is that it gives more possibilities to test hypotheses about links between individuals' behaviors, leaders' behaviors and organizational climate. Such an instrument would enable us to understand what is hidden in social relations, offering an insight into communication processes. A reliable scale would help to verify the contradictory claims related to the voice-silence phenomenon. The answer to the question whether silence and voice form a unidimensional construct or rather two distinct constructs could be useful in research on change, its limits and management tactics to conduct change. The validation of the scale in the Polish sample encourages quantitative studies of silence in the domain of its antecedents and consequences. It may help to develop the theory of silence in an organization by answering scientific questions about the role of management style, the personality of the employee and supervisor, the role of attitudes toward an organization and other individual factors influencing the intensity of silence.
The theoretical bases of the silence phenomenon indicate that silence, as an individual decision to withdraw from voicing ideas, suggestions, criticisms and opinions, may be an effect of a belief shared with other employees that voicing is futile or even dangerous (Detert & Edmondson, 2011; Fivush, 2010; Pinder & Harlos, 2001). Sharing reality with others drives social actions. The tendency to protect common understanding by alignment with similar others and to contrast beliefs and behaviors with socially distant others is strong enough to sacrifice objectively verifiable knowledge (Hardin & Higgins, 1996). The present study shows that two forms of silence, acquiescent and quiescent silence, are related to positive and negative emotional attitudes toward the organization, procedural justice, turnover intention and relational contract more strongly than prosocial and opportunistic silence are. This is partly in line with Knoll and van Dick's (2013) evidence that correlations with the climate of silence were strongest for acquiescent silence and their observation that organizational identification correlates only with acquiescent silence. These results and theoretical considerations could give rise to studies of different forms of silence climate. Sharing beliefs in a particular organizational setting about the futility or danger of speaking up at work would form a different climate of silence (with different consequences) than when silence is motivated prosocially or opportunistically.

The limitation of the study can be seen in the lack of predictors of silence in an organization at the institutional level. Using the scale for inter-organizational comparisons would require the inclusion of such information about the organization as its size, type of ownership, and systems and practices of human resources management. These characteristics of an organization are crucial for differentiating organizational behaviors and, in consequence, for verifying the hypothesis that organizational climate can be discerned along different combinations of silences. It may be expected that if organizational practices are embedded in diversity and individualistic values then prosocial and opportunistic silence prevail. In contrast, acquiescent and quiescent silence may be a product of more bureaucratic types of organizations based on such values as stability and predictability. So the next step should include recognizing not only individual predictors but also organizational ones.

Endnote

1 Adaptation was done with the permission of the authors.

Table 1. Properties of the 12 items of the Four Forms of Employee Silence Scale and its CFA factor loadings.
Table 2. Cronbach's α for subscales of the Four Forms of Employee Silence Scale. Note. a For details see Participants and procedure.
Table 3. Reliabilities, descriptive statistics, and intercorrelations among the study variables.
Huang-Lian Jie-Du decoction: a review on phytochemical, pharmacological and pharmacokinetic investigations

Huang-Lian Jie-Du decoction (HLJDD), a famous traditional Chinese prescription composed of Rhizoma Coptidis, Radix Scutellariae, Cortex Phellodendri and Fructus Gardeniae, has notable characteristics of dissipating heat and detoxification, and acts against tumors, hepatic diseases, metabolic disorders, inflammatory or allergic processes, cerebral diseases and microbial infections. Based on its wide clinical applications, accumulating investigations of HLJDD have focused on several aspects: (1) chemical analysis to explore the underlying substrates responsible for the therapeutic effects; (2) further determination of the pharmacological actions and possible mechanisms of the whole prescription and of its representative ingredients, to provide scientific evidence for traditional clinical applications and to demonstrate the intriguing molecular targets for specific pathological processes; (3) pharmacokinetic studies of single or all components of HLJDD to reveal the chemical basis and synergistic actions contributing to the pharmacological and clinical therapeutic effects. In this review, we summarize the main achievements of the phytochemical, pharmacological and pharmacokinetic profiles of HLJDD and its herbal or pharmacologically active chemicals, as well as our understanding of them, which further reveals the clinical significance of HLJDD.

Background

Herbal formulas, the most popular therapeutic approach of traditional Chinese medicine (TCM), were recorded in ancient medical literature with fixed herbal components, definite curative effects, and acceptable adverse effects [1]. Huang-Lian Jie-Du decoction (HLJDD) (Orengedoku-to in Japanese and Hwangryun-Hae-Dok-Tang in Korean), a well-known classic TCM formula, was first described in Wang Tao's treatise "Wai Tai Mi Yao" in the Tang dynasty (752 A.D.). It has been a representative prescription for heat-clearing and detoxicating. Heat-clearing is to ameliorate the interior pattern or syndromes of exuberant heat, which is transformed from the process of external pathogens entering the internal organs. The heat takes the form of an elevation in body temperature above normal or a subjective feeling of feverishness. Detoxicating indicates the measure to reduce the virulence and neutralize the toxicity of pathogens. Here, heat and poison are forms of pathogens in Chinese medicine. HLJDD shows the ability to dispel the heat and poison and relieve the associated syndromes. This ability is achieved by four common crude herbs, Rhizoma Coptidis (RC) (Coptis chinensis Franch., Huang Lian), Radix Scutellariae (RS) (Scutellaria baicalensis Georgi, Huang Qin), Cortex Phellodendri (CP) (Phellodendron amurense Rupr., Huang Bo), and Fructus Gardeniae (FG) (Gardenia jasminoides Ellis, Zhi Zi), in a ratio of 3:2:2:3.
According to the strict principle of "sovereign, minister, assistant and courier" [2], which was developed from "Huangdi's Internal Classic" to enhance the effectiveness of Chinese medicinal herbs and to reduce toxicity or side effects by combining various kinds of herbs, RC is the sovereign medicine with the action of purging the fire from the heart and middle energizer. RS acts as the ministerial medicine, removing the heat from the lungs and eliminating the fire from the upper energizer. CP purges the fire from the lower energizer as the assistant medicine. FG purges the triple energizers and delivers the heat back to its origin as the courier medicine [3]. The whole formula is carefully designed and precise in formation. Xu et al. manufactured four HLJDD variants by leaving one herb out each time and found that, compared with the four variants, the integral formula exhibited the strongest therapeutic effects in cecal ligation and puncture rats [4]. The precise and rigorous herbal combination is believed to be advantageous over a single reagent, since various components can hit multiple targets simultaneously and exert synergistic therapeutic actions [5]. Moreover, due to the lack of TCM theories such as the theoretical mechanisms of diseases, research on decomposed recipes of Chinese herbal compounds finds it difficult to reveal the complex interactions between couplet medicines. Based on clinical practice and an inheritance of nearly 1000 years, as well as the integration of Chinese and Western medicine, the clinical application of HLJDD has gradually expanded from the diseases and symptoms of TCM to the diseases of Western medicine, and its use has also expanded to other countries besides China.

With its remarkable therapeutic effects on removing excess heat and fire toxins, HLJDD plays an important role in the resolution of delirium, internal heat-related mania, insomnia, irritability, dry mouth and throat, heat-induced blood vomiting, skin spots, and sore furuncle, according to Medical Secrets of an Official. This formula is also used to treat heat-pathogen-induced pyrostagnant rhinorrhagia, carbuncle, and jaundice, as summarized in Prescriptions for Emergent Reference [6]. At present, HLJDD is widely used in clinical practice to treat inflammation, hypertension, gastrointestinal disorders, and liver and cerebrovascular diseases [7]. In a clinical study, the addition of HLJDD to yokukan-san (a Japanese traditional herbal prescription) exhibited the same efficacy as aripiprazole (an antipsychotic) in controlling the aggressiveness of Alzheimer's-type dementia without any significant adverse reactions [8]. Another clinical study indicated that HLJDD was a possible treatment for fever of unknown origin [9]. In China, thin-layer chromatography and microscopy have been employed to establish the quality standard of Huang-Lian Jie-Du pills for decades. The contents of berberine hydrochloride and baicalin have been determined [10]. Additionally, an improved formula of HLJDD in pill form has acquired permission from the Chinese State Food and Drug Administration for marketing (drug approval number Z20025356) [11].
The appearance and processing technology of the Huang-Lian Jie-Du concentrated pill are shown in Fig. 1. In other Asian countries, HLJDD has been approved for palliative care and atopic dermatitis treatment by the Ministry of Health, Labour and Welfare of Japan and the Korean Food and Drug Administration [12, 13]. Furthermore, HLJDD has been manufactured as a powdered, freeze-dried water extract by Tsumura Co., Ltd in Japan [9]. More and more clinical application cases have prompted people to explore the potential pharmacological effects and possible molecular mechanisms of HLJDD by modern pharmacology and molecular biotechnology. Modern pharmacological studies indicate that HLJDD exhibits therapeutic actions in various pathological conditions, such as hyperlipidemia [14], tumors [6, 15, 16], arthritis [17-19], sepsis [20-22], cardiac damage [23], liver injury [24, 25], kidney disease [26], cerebral ischemia [27-29], type 2 diabetes mellitus (T2DM) [30, 31], Alzheimer's disease (AD) [32-34], fungal infection [35] and inflammation [36]. In the meantime, with the deepening of research and the continuous development of technology, more and more chemical constituents of HLJDD have been discovered. The effects of drugs are based on their chemical composition. This mainstream view holds that the different pharmacological effects and clinical applications of drugs depend on the tissue distribution and concentration of their active ingredients. Therefore, pharmacokinetics (PK) should be adopted to interpret the active substance basis of HLJDD. PK is holistic, comprehensive and dynamic, which is similar to the holistic concept and dialectical treatment of TCM. Although there are numerous studies with positive results on HLJDD, most of them were performed with only a fraction of the total compounds. Hence, it is necessary to sum up these past studies, which is significant for guiding further research on HLJDD. In this review, we summarize the phytochemical, pharmacological and pharmacokinetic investigations that have been conducted in recent years.

Phytochemical investigation of HLJDD

The components of TCM formulas are complex, but not all of them have pharmacological activities. Therefore, it is of great significance to separate and identify such pharmacodynamic components. Many studies have shown that alkaloids from RC and CP, flavonoids from RS and terpenes from FG are the three major classes of active components in HLJDD and are therefore regarded as markers for quality control of HLJDD [35, 37-42]. In recent years, with the progress of modern detection technology, the majority of researchers have actively explored the chemical components in HLJDD and established qualitative and quantitative detection methods for some of its active components. By HPLC-UV/MS, 11 major peaks in the chromatogram of HLJDD extracted with water were identified as geniposide, jatrorrhizine, palmatine, berberine, baicalin, wogonoside, baicalein, wogonin, coptisine, oroxin A, and obaculactone. Among them, coptisine and obaculactone were two characteristic peaks that could distinguish RC from CP. The following quantitative analysis showed that baicalin was the most abundant, followed by geniposide, then berberine and wogonoside, respectively [43]. However, the contents of berberine, baicalin, geniposide, and baicalein in HLJDD prepared by decocting twice under reflux with 70% ethanol (1:10 and then 1:5, w/v) were 5.12%, 4.17%, 1.65%, and 0.96%, respectively [44].
The reason for this difference may be related to different extraction methods and HPLC conditions. An effective quantitative method based on multiple-wavelength HPLC-DAD was developed for simultaneous determination of fourteen major ingredients (seven alkaloids, four flavonoids, three terpenes) in HLJDD. The total contents of these fourteen analytes reached 70% [45]. With HPLC-UV analysis, the chemical profile of HLJDD samples was generated. HLJDD comprises four distinct constituents, including berberine, palmatine, baicalin and geniposide in an approximate ratio of 3:1:1:3 [6]. Moreover, Q-Exactive mass spectrometry was employed for the comprehensive chemical identification of HLJDD: 69 compounds, including alkaloids, flavonoids, iridoids, triterpenoids, monoterpenes and phenolic acids, were identified, and 17 major characteristic constituents were selected as quality control markers of HLJDD [46]. Currently, the analysis of active ingredients in HLJDD focuses either on the prescription or on its extract, while analyses in biological samples have seldom been reported. A rapid and sensitive UHPLC-MS/MS method was developed to determine seven main active constituents (berberine, palmatine, jatrorrhizine, baicalin, baicalein, wogonoside, and wogonin) simultaneously in atherosclerosis rat plasma after administration of HLJDD at doses of 1.5, 3, and 6 g/kg. Baicalin, baicalein, wogonoside, and wogonin were detected at high levels in a dose-dependent manner, while the other three components were found at quite low levels and in a dose-independent mode [47]. In this review, the chemical components of the four herbs of HLJDD were summarized and classified, which will provide a reference for the separation and analysis of the chemical compositions of HLJDD.

The structural characteristics of the alkaloids determine their low absorption. Berberine, for example, is a quaternary ammonium alkaloid with conjugated double bonds and therefore has strong rigidity and poor solubility. Besides, berberine is a substrate of P-gp, which is an efflux transporter [47]. In addition, most berberine is excluded by the gastrointestinal tract after intragastric administration and is metabolized through a variety of pathways [66]. Hence, even with long-term administration, alkaloids do not accumulate easily in the body because of their poor absorption through the intestinal wall. The absorption of flavonoids is relatively better than that of the alkaloids. Flavonoids readily conjugate with glucuronic acid or sulfuric acid in phase II metabolism, and thereby their plasma concentration-time curves show obvious bimodal phenomena [72].

Iridoids and iridoid glycosides

The major effective constituents of FG are iridoids and iridoid glycosides, such as genipin [73], geniposide [74-76], gardenoside [74, 76], shanzhiside [74, 76], and geniposidic acid [77] (Fig. 4). Among these components, geniposide and gardenoside in particular have very similar chemical compositions, with a difference of only one oxygen atom [77]. These compounds are responsible for the biological activities of FG, and their accurate and effective purification is of great significance for the quality control of this drug and its formulations. The content of iridoid glycosides may vary with different processing methods, at about 2.65-7.23% [78]. A study quantified the contents of geniposide, gardenoside, and geniposidic acid from different origins in China as 60.88 ± 11.47 mg/g, 56.33 ± 17.55 mg/g, and 2.61 ± 0.91 mg/g, respectively.
Meanwhile, their average contents were 52.80 ± 12.93 mg/g, 42.50 ± 13.21 mg/g, and 2.88 ± 2.19 mg/g, respectively, when measured from different regions in Korea [77].

Pharmacological effects

With the rapid development of modern pharmacology and biological technologies, increasing evidence has demonstrated the pleiotropic therapeutic functions of HLJDD on tumors, hepatic diseases, inflammation, allergies, blood lipid and glucose disorders, central nervous system diseases, bacterial infections, and intestinal flora disturbances (Table 1).

Anti-tumor

The ancient Chinese medical monograph "Zhong-Zang-Jing" recorded some descriptions of cancer-like symptoms such as "Yong, Yang, Chuang and Zhong", which are caused by retention of various pathogens including heat and dampness. Tumor growth involves induction of cell-cycle progression, avoidance of apoptosis, and activation of the cell survival pathway [94]. Modern studies indicate that HLJDD can disrupt these processes, suppressing tumor growth in vivo and inhibiting the proliferation of cancer cells in vitro. In a hepatocellular carcinoma xenograft murine model, HLJDD was shown to suppress xenograft growth in a dose-dependent manner. The inhibitory effect of HLJDD may be due to the activation of eukaryotic elongation factor-2 kinase (eEF2K) and inactivation of eEF2. The activation of AMP-activated protein kinase (AMPK) signaling may be responsible for the eEF2K induction [6]. eEF2 is an essential protein for the elongation of nascent peptides [95]. The inactivation of eEF2 suppresses the synthesis of nascent proteins, which supports the proliferation of cancer cells [96]. AMPK activation was reported to inhibit mammalian target of rapamycin (mTOR) activity, followed by blockade of mTOR-mediated eEF2K phosphorylation [97]. Geniposide, baicalin, berberine and palmatine could induce phosphorylated eEF2 expression in Hep G2 and MHCC97L cells, which suggests that these four compounds target eEF2 [6]. However, their inhibitory effects on eEF2 activity have not been reported. Berberine and baicalin may be the two main components targeting AMPK in HLJDD, since they have been reported as AMPK activators [98, 99]. It would be quite interesting to investigate the precise mechanism of the combined effect of the active compounds in HLJDD. Another study on HLJDD in the treatment of hepatocellular carcinoma revealed multiple underlying mechanisms, including induction of apoptosis, blockade of cell cycle progression by regulating cell-cycle-related factors, modulation of the B-cell CLL/lymphoma 2 family proteins to favor programmed cell death, triggering of the mitochondrial pathway through membrane depolarization and caspase-9 activation, and inhibition of the nuclear factor-kappa B (NF-κB) survival signaling pathway [100]. RS was responsible for the suppressive effect of HLJDD on myeloma cell proliferation, since RS alone exhibited stronger growth inhibition (IC50 30 ng/mL) than HLJDD (IC50 70 ng/mL) on U266 cells. In addition, baicalein showed the strongest growth inhibition, with an IC50 of 28 μM, while the IC50 values of baicalin and wogonin, two other major flavonoids of RS, were greater than 200 μM. Baicalein inhibited the survival of MPC-1− immature myeloma cells in vitro, and induced apoptosis in myeloma cell lines by inhibiting the activity of NF-κB and thereby blocking the degradation of inhibitor-kappa B-alpha (IκB-α).
Further, the induction of apoptosis by HLJDD, RS or baicalein may be considered to involve the mitochondria-mediated pathway, because a rapid loss of mitochondrial membrane potential was confirmed, followed by enhanced release of cytochrome c and subsequent activation of caspase-9 and caspase-3 [101]. In the mitochondrial pathway, the activity of NF-κB, which modulates the expression and function of B-cell CLL/lymphoma 2 family proteins in the mitochondria, is considered to be pivotal [102, 103]. These findings, consistent with previous studies, suggest that HLJDD and its active components exert therapeutic effects on different tumors through almost the same pathway. The molecular mechanisms of HLJDD against tumors are shown in Fig. 5.

Table 1. Pharmacological actions of HLJDD, listing for each action the experimental model, the proposed mechanisms and the references; the rows cover anti-tumor effects, hepatoprotection, anti-inflammatory and anti-allergy effects, modulation of blood lipid and glucose, central nervous system diseases, anti-infection and modulation of microbiota.

The role of TCM in the treatment of tumors is often auxiliary. Combined with chemical drugs, it can increase the efficacy on the one hand and reduce the side effects on the other, but research in these areas needs further study.

Hepatoprotection

The liver is vital for bile formation, amino acid utilization and ammonia detoxification, and is also the organ where glycolysis, gluconeogenesis, and the synthesis of certain plasma proteins take place [104]. In the liver, toxic chemicals are commonly metabolized by cytochrome P450, the so-called first-pass effect. Hence, its detoxification ability is attenuated by pathological damage. In TCM, the liver is depicted as an organ susceptible to heat and toxins, which cause its dysfunction. HLJDD, which is rich in bioactive alkaloids, flavonoids, iridoid glycosides, and polyphenols, could restore the balance of the disturbed metabolic status in two cholestatic injury models: the redox system, gut flora and urea cycle in the thioacetamide model, and the redox system, gut flora, Krebs cycle and oxidation of branched-chain amino acids in the bile duct ligation model, respectively [24]. These findings are consistent with a previous study that also used bile duct ligation to induce cholestatic liver injury [25]. The protective effects of berberine and HLJDD, used singly or in combination, on acute liver injury induced by cecal ligation and puncture were studied to explore their herb-drug interactions in a holistic way. Livers from the sham-operated group and from the groups treated with berberine, HLJDD and their co-administration displayed no obvious histopathological changes. Both histamine and trimethylamine N-oxide were exclusively decreased by treatment with HLJDD with or without berberine. Glutathione and carnosine were significantly increased after HLJDD and the combination treatment. Metabolomics analysis revealed that HLJDD had better anti-inflammatory, anti-bacterial, and anti-oxidative effects than berberine alone. Berberine alone was inferior to HLJDD in restoring the whole disturbed metabolism of the model rats [105].

Anti-inflammatory and anti-allergy

It is believed in TCM that endogenous and exogenous heat and toxins are pathogenic mechanisms of inflammation.
To some extent, inflammatory and allergic mediators, as well as inflammatory factors generated by inflammation and allergies, are recognized as toxins leading to the heat syndromes appearing in the context of inflammatory and allergic responses.

Oral administration of HLJDD at doses of 150 mg/kg and 300 mg/kg significantly inhibited the inflammatory responses in carrageenan-injected rat air pouches, with inhibition ratios for exudate volume of 22.1% and 25.7%, and for leucocyte influx of 26.4% and 36.2%, respectively. It also greatly reduced the production of nitric oxide (NO) and leukotriene B4 in vivo without any influence on the biosynthesis of cyclooxygenase-derived eicosanoids. However, eicosanoids derived from different lipoxygenases (LOs) were markedly inhibited by HLJDD in calcium ionophore A23187-stimulated peritoneal macrophages [106]. Further experiments on cell-free purified enzymes showed that RC and RS were responsible for the suppressive effect of HLJDD on eicosanoid generation. Baicalein and baicalin derived from RS showed significant inhibition of 5-LO and 15-LO, and coptisine derived from RC showed medium inhibition of leukotriene A4 hydrolase. Moreover, six pure components, including baicalein, baicalin, wogonoside, wogonin, coptisine, and magnoflorine, could inhibit eicosanoid generation in rat peritoneal macrophages via the LO pathway [11]. In lipopolysaccharide (LPS)-treated RAW 264.7 macrophages, NO production [44, 106] and the mRNA expression of inducible nitric oxide synthase and several chemotactic factors (CCL3, CCL4, CCL5 and CXCL2) were suppressed by HLJDD [106]. Moreover, HLJDD also decreased the levels of malondialdehyde, prostaglandin E2, interleukin-6 (IL-6), IL-10, and tumor necrosis factor-alpha (TNF-α), and increased the activity of superoxide dismutase in this model [44]. Exploration of the material basis for the anti-inflammatory activity of HLJDD showed that its two fractions had different effects on these parameters. On the one hand, HLJDD-1 (iridoids and flavonoid glycosides) showed higher antioxidant activity than HLJDD-2 (alkaloids and flavonoid aglycones), as supported by its decreasing the level of malondialdehyde and enhancing the activity of superoxide dismutase. On the other hand, HLJDD-2 had a more obvious inhibitory effect on NO and IL-6 than HLJDD-1. Moreover, most of the four typical compounds of HLJDD (geniposide, baicalin, berberine and baicalein) showed weaker effects on these parameters than HLJDD and the two fractions, suggesting that these compounds may have synergistic anti-inflammatory interactions [44]. In collagen-induced arthritis rats, the combination of 13 components of HLJDD (geniposide, coptisine, phellodendrine, jatrorrhizine, magnoflorine, palmatine, berberine, baicalin, chlorogenic acid, crocin, wogonoside, baicalein, and wogonin) exhibited pharmacological activities similar to HLJDD aqueous extracts in ameliorating the symptoms of arthritis, preventing joint damage, and reducing the serum levels of TNF-α, interferon-gamma and IL-17 [107]. HLJDD and its constituent combination have been shown to regulate fatty acid oxidation and arachidonic acid metabolism in collagen-induced arthritis rats [19]. In addition, the disturbed urinary levels of succinic acid, citric acid, creatine, uridine, pantothenic acid, carnitine, phenylacetylglycine and allantoin and plasma levels of phenylpyruvic acid in model rats were demonstrated to be restored by HLJDD.
Meanwhile, the constituent combination of HLJDD was able to recover the disordered urinary levels of citric acid, creatine, pantothenic acid, carnitine and phenylacetylglycine and plasma levels of uric acid, l-histidine, and l-phenylalanine in model rats [17]. Taken together, the 13 constituents' combination may represent the effective composite of HLJDD. More importantly, HLJDD is beneficial in suppressing inflammatory processes through various components acting synergistically on multiple targets. Hence, further research elucidating the mode of action of these ingredients would give an insight into the use of HLJDD for its anti-inflammatory activity.

The results of in vitro experiments indicated that the ethanolic extract of HLJDD exerted significant anti-inflammatory and anti-allergic effects by suppressing the production of inflammatory mediators (NO, IL-1β, IL-4, monocyte chemoattractant protein-1 and granulocyte-macrophage colony-stimulating factor) via inactivation of NF-κB and mitogen-activated protein kinases (MAPKs) and degradation of IκB-α in LPS-stimulated RAW 264.7 cells, and of allergic mediators (IL-4, TNF-α, and monocyte chemoattractant protein-1) by inactivating the MAPKs and Lyn pathway in antigen-induced RBL-2H3 cells [108]. Based on its powerful anti-inflammatory ability, a large number of studies have shown that HLJDD is an effective prescription for treating various inflammatory diseases, such as inflammatory bowel disease [109], gastritis [110, 111], and sepsis [4, 22, 112-114]. Sepsis is a clinical syndrome characterized by systemic inflammation. In experimental septic model rats induced by cecal ligation and puncture, HLJDD treatment suppressed the production of proinflammatory cytokines (TNF-α, IL-1, IL-6, and IL-17A), reversed the shift from T-helper (Th) 1 to Th2 response, promoted the Th1/Th2 balance toward Th1 predominance, and inhibited Th17 activation [112]. In addition, the disturbed levels of l-proline, l-valine, oleic acid, carnitine, palmitoylcarnitine, arachidonic acid, and arachidic acid were reversed by HLJDD, while docosahexaenoic acid, eicosapentaenoic acid, and prostaglandin E3 were further elevated by HLJDD in the septic condition [22]. The strong therapeutic effects of HLJDD in septic models may be ascribed to its significant enhancement of the cholinergic anti-inflammatory pathway and inhibition of the high mobility group protein B1/Toll-like receptor 4/NF-κB signaling pathway [4]. Sepsis often results in end-organ dysfunction, such as acute kidney injury. HLJDD and its component herbs could effectively inhibit LPS-induced acute kidney injury in mice by inhibiting NF-κB and MAPK activation and activating the Akt/HO-1 pathway, and by significantly ameliorating disturbances in oxidative stress and energy metabolism induced by LPS [26]. In vivo and in vitro studies have also indicated that HLJDD is effective in treating atopic dermatitis. In 2,4-dinitrochlorobenzene-induced atopic dermatitis mice, HLJDD down-regulated the serum expression levels of IL-1α, IL-1β, IL-2, IL-4, IL-5, IL-6, interferon-gamma and TNF-α, normalised the splenic CD4+/CD8+ T-lymphocyte ratio, and inactivated MAPKs (including p38, extracellular regulated protein kinases (ERK), and c-Jun N-terminal kinase (JNK)), IκB-α, and NF-κB (p65).
Moreover, HLJDD inhibited LPS-induced differentiation of RAW264.7 cells, reduced LPS binding to the RAW264.7 cell membrane, decreased ERK, p38, JNK, IκB-α, and p65 phosphorylation levels in the MAPKs/NF-κB pathway, and inhibited p65 nuclear translocation [115]. Further, a study revealed that HLJDD had a positive effect in rat gingivitis induced by LPS; HLJDD boosted anti-oxidative and anti-inflammatory capacity by inhibiting the AMPK and ERK pathways [116]. The molecular mechanisms by which HLJDD regulates inflammation-related pathways are shown in Fig. 6.

According to the "Four-nature Theory", all Chinese herbs fit into four categories: "cold", "hot", "warm" and "cool" herbs. Based on this theory, the four herbs in HLJDD are all recognized as "heat-clearing" herbs, which means that they all have therapeutic powers of removing the "body fire". In classic Chinese philosophy, "fire", one of the five "basic elements" (wood, fire, earth, metal and water), is an element with dual, seemingly paradoxical roles as both beneficial and deleterious [117]. Excess "body fire" exerts deleterious impacts and forms the basis of many diseases. In fact, the essence of "body fire" is a gradual process including oxidative/nitrosative stress, inflammation and infection. Oxidative stress can induce inflammation and many other diseases by disrupting normal cellular mechanisms. Infection is the invasion and multiplication of various infectious agents in the body, which will also cause inflammation. Therefore, inflammation involves a highly complex web of intercellular cytokine signals [118] and is related to the pathogenesis of most diseases, such as the cancers and CNS diseases mentioned in this review.

Blood lipid and glucose modulation

The symptom-complex of wasting-thirst in TCM mainly corresponds to syndrome X. According to classical TCM theory, the pathogenesis of the metabolic syndrome is induced by excessive "heat" dissipating the body fluids. Moreover, recent research on TCM theory has pointed out that internal heat is the primary pathogenic factor for the development of T2DM. In addition, excessive lipid may lead to the accumulation of "heat", which eventually transforms into toxin, a more serious cause. The regulation of lipids can be divided into several parts: reducing lipid synthesis; increasing lipid degradation; and combating the damage caused by high levels of lipids, such as the induction of inflammatory responses. In an apolipoprotein E knockout mouse model, HLJDD was found to markedly decrease the ratio of the inflammatory subset of monocytes. In addition, the results of in vitro experiments indicated that HLJDD-containing serum significantly facilitated the differentiation of M2 macrophages and foam cells. Thus, HLJDD might attenuate the development of atherosclerosis, probably by regulating the functional differentiation of monocytes, macrophages, and foam cells [119]. It was reported that HLJDD could increase the activity of lipoprotein lipase and hepatic lipase, and enhance the expression of low-density lipoprotein receptor and peroxisome proliferator-activated receptor gamma mRNAs, to modulate lipid metabolism in rats fed a high-fat diet [14]. However, HLJDD contains various chemical components and might possess multiple mechanisms to modulate lipid metabolism. Therefore, HLJDD may exert its hypolipidemic effect through other mechanisms. For example, using the olive oil loading test, Zhang et al.
reported that HLJDD extract lowered total cholesterol, triglyceride, and low-density lipoprotein cholesterol levels in T2DM rats by inhibiting intestinal pancreatic lipase activity [30]. It can be speculated that HLJDD exerts its lipid-modulating effect through multiple targets, multiple pathways and multiple effects. Insulin secretion and insulin action are essential for blood glucose homeostasis, and defects in either process cause metabolic diseases, such as T2DM [120,121]. Furthermore, HLJDD could decrease blood glucose concentration and ameliorate the diabetic syndrome partly through its interaction with the intestinal tract [120]. Glucagon-like peptide 1 (GLP-1), an important incretin secreted by the gastrointestinal L-cells, enhances insulin secretion, improves β cell proliferation and neogenesis, and reduces glucagon release from the pancreatic islet cells [122,123]. In the last decade, a novel group of glucose-lowering agents has been developed based on the gut hormone GLP-1 [124]. It was reported that 5-week HLJDD (4 g/kg/day) treatment of diabetic rats enhanced GLP-1 secretion in the gut, and the released GLP-1 subsequently promoted insulin secretion and improved pancreatic β cell function [120]. In an in vitro study, the water extracts of RS and HLJDD increased insulin secretion in Min6 cells and GLP-1 secretion in NCI-H716 cells by elevating intracellular cyclic adenosine monophosphate levels. RS and HLJDD also increased β cell mass through hyperplasia and hypertrophy. The rise in hyperplasia was associated with elevated insulin receptor substrate 2 and pancreatic and duodenal homeobox 1 expression in the islets [121]. Geniposide, an active ingredient of HLJDD, has been reported as an agonist of the GLP-1 receptor [125]. However, whether it can promote GLP-1 secretion is still unclear. Moreover, whether other compounds included in HLJDD contribute to the promotion of GLP-1 remains to be further investigated.
Central nervous system diseases
Diseases of the central nervous system (CNS) are also believed to have close associations with heat and toxins in TCM theory. The pathogenic factors, namely toxins, lead to nervous system injury, in function and/or in organic architecture. Currently, considerable studies have been conducted to understand the pharmacological mechanisms of HLJDD in ischemia-induced brain damage. Preconditioning with HLJDD protected neurons against oxygen and glucose deprivation, significantly reduced the cerebral infarction volume and cerebral water content, and improved the neurological deficit score of model rats subjected to middle cerebral artery occlusion (MCAO). Activation of the phosphatidylinositol 3-kinase/protein kinase B (Akt) signaling pathway and hypoxia-inducible factor-1 alpha was shown to be responsible for the resistance conferred by HLJDD against ischemia-reperfusion or hypoxia injury, contributing to the inhibition of neuronal apoptosis and the enhancement of neuronal proliferation [28]. Furthermore, it has been reported that HLJDD exerted neuroprotective effects on ischemic stroke partly through Akt-independent protective autophagy via the regulation of MAPK signals, which can avoid the unfavorable side-effects associated with the inactivation of Akt [126].
Pattern analysis of the 1 H NMR data disclosed that HLJDD could relieve MCAO rats by ameliorating the disordered metabolisms in energy, membrane and mitochondrial, amino acid and neurotransmitter, alleviating the inflammatory damage and the oxidative stress from reactive oxygen species, and recovering the destructed osmoregulation [127]. Total alkaloids, iridoids and flavonoids from HLJDD have potential as a treatment for ischemic brain injury. Firstly, alkaloids treatment was found to enhance neurogenesis by increasing the expression of vascular endothelial growth factor, angiopoietin-1 (Ang-1), and Ang-2 protein, and its neuroproliferative effect was partially correlated with enhanced phosphorylation of Akt, and glycogen synthase kinase-3 beta. Secondly, flavonoids could promote differentiation of cortical precursor cells into neuronal, which may be attributable to the regulation of Akt, glycogen synthase kinase-3 beta mRNA and Ang-1 protein levels. Finally, alkaloids and iridoids increased number of BrdU-positive cells and enhanced neuronal differentiation in the cortex [29]. Berberine, baicalin and gardenoside are the representative components of alkaloids, flavonoids and iridoids respectively, all of which can improve functional outcome after brain ischemia. Berberine exerted potent neuroprotective effects in ischemic environment [128]. Baicalin could also protect neuronal cells against various neurotoxic stimuli and ischemia-reperfusion injury [40]. Gardenoside was shown to enhance neurons viability, prompt neurite growth, and attenuate neuronal death against ischemic damage [129]. A study showed that the combination of these three ingredients treatment increased the levels of cellular antioxidants that scavenged reactive oxygen species during ischemia-reperfusion via the nuclear erythroid 2-related factor 2 signaling cascade, and exhibited stronger effects than the individual herbs alone [130]. Berberine and baicalin were the molecular basis for ameliorating the neurological function in ischemia-reperfusion, possibly due to their induction of increased expression of NF-κB, inducible nitric oxide synthase and cyclooxygenase 2 protein. In addition, the combination of berberine and gardenoside possessed neuroprotective effects, which may be related to their regulation of oxidative stress and autophagy [131]. These results indicated that the synergistic effects of different components of HLJDD are responsible for the powerful effectiveness of HLJDD. Besides, HLJDD was proved to ameliorate neurodegenerative diseases, such as AD. Clinical signs of AD are characterized by the neuron loss and cognitive impairment. Modern pharmacological studies have showed that HLJDD could significantly modulate effects on age-related changes of the gene expressions in the hippocampus and cerebral cortex in SAMP8 model, which include genes that involved in different biological function and process: signal transduction (Dusp12, Rps6ka1, Rab26, Penk1, Nope, Leng8, Syde1, Phb, Def8, Ihpk1, Tac2, Pik3c2a), protein metabolism (Ttc3, Amfr, Prr6, Ube2d2), cell growth and development (Ngrn, Anln, Dip3b, Acrbp), nucleic acid metabolism (Fhit, Itm2c, Cstf2t, Ddx3x, Ercc5, Pcgfr6), energy metabolism (Stub1, Uqcr, Nsf ), immune response (C1qb), regulation of transcription (D1ertd161e, Gcn5l2, Ssu72), transporter (Slc17a7, mt-Co1), nervous system development (Trim3), and neurogila cell differentiation (Tspan2) [132]. 
In APPswe/PS1dE9 mice, another classic animal model of AD, HLJDD had positive effects on AD by ameliorating neuroinflammation and sphingolipid metabolic disorder [34]. In addition, HLJDD may inhibit the activity of indoleamine 2,3-dioxygenase, one of the potential participants involved in the pathogenesis of AD [133]. The effects of HLJDD on CNS diseases are exerted mainly through anti-inflammatory and antioxidant actions and the regulation of energy metabolism. At the same time, HLJDD also has distinct effects on central nervous functions and neurotransmitter levels. In a metabolomics study, HLJDD decreased the levels of glutamine and γ-aminobutyric acid in the plasma of MCAO rats, which might be responsible for neuroprotection via reduced glutamate excitotoxicity. HLJDD also elevated the acetylcholine level and maintained cholinergic neuron function [27]. The molecular mechanisms of HLJDD in the treatment of CNS diseases are shown in Fig. 7. However, the role of HLJDD in the CNS should not be considered simply in terms of traditional pharmacological effects. New perspectives, such as regulation of the liver and gut bacteria, should be given more attention. The former can regulate the CNS through the liver-brain axis, while the latter can further intervene in the CNS by activating the brain-gut axis, especially in the case of mental system diseases. The inflammatory response of the CNS is an important link and target for the intervention of HLJDD in the CNS. Compounds that directly enter the brain tissue, as well as the liver-brain axis and the brain-gut axis, are the pathways for the effects of HLJDD. The regulation of energy metabolism, on one hand, has a direct antagonistic effect on the occurrence and development of cerebrovascular diseases. On the other hand, the adjustment of energy can also adjust the function and state of microglia to intervene in inflammation.
Anti-infection and microbiota-modulating
Bacterial or viral infection, or an imbalance of bacteria in the body, commonly stimulates inflammatory responses or immune activation, resulting directly in redness, swelling, heat and pain, or participates in the pathological development of various systems such as the gastrointestinal tract, the endocrine system and the CNS. Such diseases are covered, at least partly, by the theory of heat and toxins of TCM. Actions on bacteria or viruses are the focus of heat-dissipation and detoxification treatments. Candida albicans (C. albicans) is the most prevalent opportunistic fungal pathogen and can cause surface and even systemic infections in immunocompromised patients [134,135]. Gene expression analysis of C. albicans treated with HLJDD showed that the ATP-binding cassette transporter and major facilitator superfamily transporter genes, which encode multidrug transporters, were remarkably upregulated, which might provide insight into the inhibitory mechanism of HLJDD against C. albicans [35]. The ethyl acetate extract of HLJDD at concentrations of 312 mg/L and 1250 mg/L could inhibit the formation of hyphae and the colony morphologies of C. albicans by downregulating the expression of hyphae-specific genes such as HWP1, ALS3, UME6 and CSH1 [136]. Pseudomonas aeruginosa, an opportunistic Gram-negative pathogen, is characterized by quorum sensing modulation. HLJDD showed the lowest minimum inhibitory concentration (MIC) of 100 mg/mL against Pseudomonas aeruginosa, while the MICs were 200 mg/mL for RC and RS, 400 mg/mL for CP, and more than 400 mg/mL for FG.
Moreover, at the sub-MIC, HLJDD significantly reduced pyocyanin pigment, elastolytic activity, proteolytic activity, biofilm formation, and bacterial motility [137]. In Mugil cephalus, feeding with 1% modified HLJDD for 28 days may prevent Lactococcus garvieae infection and could be used in the aquaculture industry [138]. Moreover, the water extracts of HLJDD and its four herbs exerted a potent therapeutic effect on H1N1 infection through the inhibition of neuraminidase (NA) activity [139], which is one of the biomarkers for subtype classification of influenza A virus. The IC 50 values of HLJDD, RC, RS, CP, FG, and peramivir (positive control) on NA activity were 112.6 ± 6.7 μg/mL, 96.1 ± 7.6 μg/mL, 303.5 ± 21.9 μg/mL, 108.6 ± 8.6 μg/mL, 285.0 ± 16.6 μg/mL, and 478.8 ± 15.6 μg/mL, respectively. Accordingly, it is valuable to use HLJDD as a complementary medicine for H1N1 infection in clinical practice. In addition, given that its active ingredients, such as berberine [140], coptisine [141], and baicalein [142], are effective inhibitors of various NA subtypes, further study of the anti-viral effect of HLJDD is warranted. In high-fat diet and streptozotocin-induced T2DM rats, HLJDD treatment ameliorated hyperglycemia and restored the disturbed gut microbiota structure and function to a nearly normal condition, mainly by increasing short-chain fatty acid-producing bacteria while reducing conditionally pathogenic bacteria [143]. Various chemical components in HLJDD have anti-infection effects, especially the alkaloids, of which berberine has been used as a commercial drug for the treatment of bacterial diarrhea. For bacteria of different species, there are commonalities and differences between different components. At present, research on bacteria should not be limited to bacteriostatic or bactericidal effects. In view of the low bioavailability of the chemical components, the effects of intestinal flora on the metabolism of compounds in HLJDD, as well as the evaluation of the levels and activities of metabolites, need to be further studied. In a metabolomics study, six major compounds in HLJDD [46], including four alkaloids (berberine, palmatine, coptisine and jatrorrhizine), one flavonoid (baicalin) and one iridoid (geniposide), were selected to clarify the metabolic pathways of HLJDD in rat urine and feces by LC-IT-MS combined with LC-FT-ICR-MS. In general, phase I (hydroxylation and demethylation) and phase II (sulfate conjugation and glucuronide conjugation) reactions of flavonoids and iridoids, as well as phase I and II (hydroxylation, demethylation and glucuronidation) reactions of alkaloids, were observed as the major metabolic fate of HLJDD in vivo. Notably, abundant benzylisoquinoline alkaloids were detected in feces due to their poor absorption in the gastrointestinal tract. All the glucuronidated flavonoid glycosides were present as prototypes as well as metabolites [144]. It was reported that hydrolysis by enterobacteria followed by glucuronidation of flavonoids occurs in vivo [145]. In addition, studies on the effects of the chemical components in HLJDD on the species abundance and metabolic activities of intestinal bacteria and on the levels of metabolites, such as neurotransmitters and short-chain fatty acids, may be new research ideas and directions to reveal the potential mechanisms and pathways of HLJDD.
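The neuraminidase-inhibition results above are reported as IC 50 values for HLJDD, its component herbs, and peramivir. As a purely illustrative aside, the sketch below shows one common way such IC 50 values can be estimated from raw dose-response data by fitting a four-parameter logistic curve; the concentrations and residual-activity percentages used here are hypothetical placeholders, not data from the cited study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Percent residual neuraminidase activity as a function of inhibitor concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical dose-response data (ug/mL vs. % residual NA activity);
# a real assay would include replicate wells and a positive control such as peramivir.
conc = np.array([3.9, 15.6, 62.5, 250.0, 1000.0])
activity = np.array([95.0, 82.0, 55.0, 28.0, 9.0])

# Fit the curve; p0 gives rough starting values (bottom, top, IC50, Hill slope).
params, _ = curve_fit(four_param_logistic, conc, activity,
                      p0=[0.0, 100.0, 100.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 = {ic50:.1f} ug/mL (Hill slope = {hill:.2f})")
```

In practice the fitted IC 50 would be averaged over independent experiments and reported with its standard deviation, as in the values cited above.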
Other pharmacological effects Early studies showed that HLJDD could protect ethanol-and aspirin-induced gastric mucosal barrier injury [146], and gastric hemorrhagic lesions [147]. These gastric protection effects of HLJDD may be ascribed to the reinforcement of mucosal barrier resistance through endogenous sulfhydryl compounds and diethyldithiocarbamate-sensitive compounds [148,149]. In addition, HLJDD could inhibit drug-stimulated gastric acid secretion [150] via dopamine receptors and alpha-2 adrenoceptors [151]. In clinical, modified HLJDD combined electroacupuncture could promote the recovery of gastrointestinal function in critically ill patients after abdominal surgery via improving intestinal barrier function [152]. In addition, the administration of HLJDD in combination with chlorpromazine would alleviated the side-effects caused by less dose [153], while the mechanism remains to be unknown. Pharmacokinetic investigation PK is a discipline which studies quantitatively the law of absorption, distribution, metabolism and excretion of drugs in vivo and expounds the law of blood drug concentration with time by applying mathematical principles and methods. PK investigation is of great significance in the new drug development, the studies of drug-induced toxicity, and drug interaction [154,155]. Reasonably, it is the pivotal approach to reveal the obscure pharmacodynamic properties and toxicity of herbals or formulas in TCM [156]. Commonly, LC-MS/MS [157], HPLC-MS/ MS [158], UPLC-MS/MS [159], and GC-MS [160] are the main techniques employed in PK investigation. HLJDD is a traditional Chinese prescription with different types of PK interactions among its multi-components. In recent years, studies of the PK profiles and absorption of alkaloids, flavonoids and iridoid glycosides both in pure components and in HLJDD have been well conducted [38][39][40][41][161][162][163][164][165][166], especially of berberine, baicalin and geniposide. Berberine had better absorption within HLJDD than that of solo compound in an intestinal perfusion model of rat [167]. Similar phenomena were observed in the study of investigating the differences of absorption of geniposide after oral administration of geniposide alone and HLJDD by PK studies in vivo, intestinal perfusion model, and Caco-2 model. In addition, geniposide had better absorption in the duodenum and jejunum through passive diffusion [168]. These results indicated that the intestinal absorption of berberine and geniposide were affected by compatibility of other compounds of HLJDD. Baicalin showed bimodal phenomenon in the plasma following oral administrations of pure baicalin and HLJDD in rats, and other components in HLJDD had PK interaction with baicalin [40]. There were few studies on the PK investigation of the whole HLJDD extracts. Ren et al. obtained systematic PK data concerning the activity of HLJDD under inflammatory conditions by LC-QqQ-MS using a dynamic multiple reaction monitoring method. In normal group, the C max of geniposide, magnolflorine, baicalin, berberine, oroxylin A-7-O-glucuronide, wogonoside, wogonin and oroxylin A were 0.7 ± 0.3, 0.6 ± 0.2, 0.09 ± 0.03, 0.6 ± 0.4, 0.09 ± 0.03, 0.11 ± 0.04, 0.09 ± 0.03, and 0.08 ± 0.0 ng/mL, respectively. And the mean residence time were 0.9 ± 0.1, 1.8 ± 0.1, 4.3 ± 0.3, 5.7 ± 3.5, 4.4 ± 0.5, 4.7 ± 0.5, 4.3 ± 0.8, 3.0 ± 0.6 h, respectively. 
Compared with the normal control group, the PK behaviors of alkaloids, flavonoids, and iridoids in the inflammatory model exhibited a trend of continuous changes, including higher bioavailability, slower elimination, delays in reaching the C max and longer retention [169]. In addition, there have been a handful of PK investigations of couplet medicines from HLJDD. Pan et al. explored the differences in PK and antioxidant effect of the RC-FG couplet medicine and HLJDD in MCAO rats, which had been scarcely reported [170]. In the MCAO group, the C max values of RC-FG and HLJDD were 1.188 ± 0.162 mg/L and 1.44 ± 50.295 mg/L, respectively. The T max values were 0.625 ± 0.137 h and 0.458 ± 0.188 h, and the mean residence times were 97.042 ± 34.642 h and 101.306 ± 81.211 h, respectively. The results illustrated that HLJDD, compared with the RC-FG couplet medicine, had better absorption, a higher peak concentration, a shorter time to peak, a slower elimination rate, and a longer mean residence time in the context of cerebral ischemia. In addition, the extremely low concentrations of gardenia acid and geniposide could not prevent the superoxide dismutase from returning to normal values. This phenomenon may be due to other ingredients such as flavonoids and alkaloids, which played a role similar to that of the iridoids. This demonstrates that HLJDD exhibits the ability to treat cerebral ischemia through the synergistic action of its three major classes of constituents. In a rat liver microsome incubation system, total flavonoid and alkaloid extracts exhibited strong inhibition of rat cytochrome P450 isoenzyme activities, while the HLJDD aqueous extract and the total iridoid extract had moderate inhibitory ability. Total flavonoids and alkaloids also exhibited a significant inhibitory effect on P-glycoprotein activity, as evidenced by the efflux of Rhodamine-123, with IC 50 values of 104.6 and 82.6 μg/mL. However, the HLJDD aqueous extract and the total iridoid extract showed weak and negligible inhibitory effects on P-glycoprotein activity, respectively [171]. For further study of herb-herb interactions and the human situation in vivo, PK studies involving human intestinal and liver microsome preparations should also be conducted. Common analytical methods employed in PK studies usually need relatively large amounts of sample [172]. An indirect competitive enzyme-linked immunosorbent assay based on monoclonal antibodies against geniposide was developed and successfully applied to study the PK of geniposide in HLJDD in mice [173]. Therefore, a technology with higher detection sensitivity would be of considerable help in PK studies, especially in small animals. Compared with the abundant data from pharmacological and chemical composition studies, current PK studies cannot fully support and interpret the pharmacological actions of HLJDD. On one hand, it is difficult to confirm the active components. Although some effective components such as berberine are known, their pharmacological effects cannot represent the whole TCM formula. On the other hand, due to the limitations of analytical methods, simultaneous detection and analysis of all chemical components cannot be carried out. Therefore, it is necessary to combine PK analysis with other methods. For example, guided by the pharmacodynamic data, the main parameters can be calculated to indicate changes in an active ingredient, a group of components, synergies between components, or interactions between metabolites.
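The PK sections above repeatedly summarize exposure in terms of C max, T max, AUC and mean residence time. As a hedged illustration of how such non-compartmental parameters are typically derived from a single concentration-time profile, the short sketch below uses hypothetical plasma concentrations of one constituent; it is not a reconstruction of the cited analyses, and the truncated MRT shown here omits extrapolation to infinity.

```python
import numpy as np

def trapezoid_auc(x, y):
    """Linear trapezoidal area under y(x)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))

def nca_parameters(times_h, conc):
    """Basic non-compartmental PK metrics from a concentration-time profile."""
    times_h, conc = np.asarray(times_h, float), np.asarray(conc, float)
    cmax = conc.max()
    tmax = times_h[conc.argmax()]
    auc = trapezoid_auc(times_h, conc)             # AUC from time zero to the last sample
    aumc = trapezoid_auc(times_h, times_h * conc)  # area under the first-moment curve
    mrt = aumc / auc                               # mean residence time (truncated estimate)
    return cmax, tmax, auc, mrt

# Hypothetical plasma concentrations (ng/mL) after oral dosing; sampling times in hours.
t = [0.25, 0.5, 1, 2, 4, 8, 12, 24]
c = [0.10, 0.45, 0.70, 0.55, 0.30, 0.15, 0.08, 0.02]

cmax, tmax, auc, mrt = nca_parameters(t, c)
print(f"Cmax={cmax:.2f} ng/mL, Tmax={tmax:.2f} h, AUC(0-24h)={auc:.2f} ng*h/mL, MRT={mrt:.2f} h")
```

The same calculation applied to profiles of a pure compound and of the compound within HLJDD is what underlies the comparisons of C max, T max and residence time discussed above.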
Conclusion
It is well known that a TCM formula is a complex system and that combinations can make prescriptions more suitable for clinical application through herb-herb synergistic interactions that improve pharmacological activities. Different from the methodology and philosophy of western medicine, TCM focuses on the overall functional state of the patients and the adjustment of their balance, which has aroused ever-increasing interest worldwide, especially for the treatment of complex diseases [174]. HLJDD, a classic TCM formula to clear "heat" and "toxins", is an aqueous extract of four herbal materials, RC, RS, CP, and FG, in a ratio of 3:2:2:3. Although each of the four herbs shows unique activities of varying strength, synergistic functions are exhibited when they are combined in an appropriate proportion. In this review, we summarize the phytochemical, pharmacological and PK investigations of HLJDD. The potential bioactive constituents of this formula can be classified as alkaloids, flavonoids, and iridoids. Among them, berberine, baicalin, and geniposide are the representative ingredients. Containing numerous compounds, HLJDD exhibits pharmacological activities in various respects, including anti-tumor, hepatoprotective, anti-inflammatory, anti-allergy, lipid-modulating, anti-bacterial and gut microbiota-modulating effects, as well as activity in CNS diseases. The main differences between the PK profiles of the primary ingredients within HLJDD and the pure compounds are reflected in some important PK parameters. HLJDD tends to present a higher C max, a shorter T max and better pharmacological effects than the single drugs or couplet medicines. These results demonstrate that the co-occurring components in HLJDD might interact with each other. To further shed light on the compositional principles and action characteristics of HLJDD, several obstacles that represent common problems of TCM need to be overcome. Firstly, accurately annotating and understanding the classical literature on the utilization of Chinese medicine formulas, combined with randomized, blank-controlled, double-blind clinical trials, would help to confirm the therapeutic effects and reveal the adverse reactions. Secondly, personalized medicine is a specific signature of TCM, according to which one formula might be adopted to treat different diseases with similar syndromes, while one kind of disease might be treated with different formulas due to variations in syndromes. Then, with modern biological technologies and pharmacological approaches, investigations of clinical syndromes and the subsequent development of preclinical research systems in cells or animals with consistent pathological features or biomarkers are expected to help interpret the rationality and the rules of compatibility of monarchs, ministers, assistants and ambassadors in Chinese medicine prescriptions, and to reveal the concepts of TCM theories such as heat-clearing and detoxifying. Moreover, unlike Western medicine, the therapeutic system of TCM is established directly on clinical practice. However, the complex constituents of herbal materials make it hard to identify the exact active components and the affected nodes of the pathophysiological process. The abundant ingredients of herbs commonly share the burden of therapeutic efficacy by activating or inhibiting different targets.
High-throughput screening of the targets associated with the representative signaling pathways and further pharmacological assays of the synergistic action of those chemicals are required to explore the interaction network between the multiple components and the multiple targets. A novel form of TCM formula, appearing as a combination of several chemical preparations, is believed to be able to substitute for the primary formula and could be endowed with well-defined chemical, pharmacological and pharmacokinetic features. Furthermore, based on these digital database resources, the interactions between the chemicals and targets and the relationships between the targets need to be analyzed via systems pharmacology, which favors the prediction of the potential active components and the underlying targets or signaling pathways of TCM formulas. The follow-up work performed to validate these literature-mining results includes transcriptomics, proteomics, metabolomics, and rigorous biochemical and pharmacological studies. Finally, the exploration of TCM formulas is ongoing, which promotes the determination of pivotal components and uncovers interesting pathological mechanisms in the context of positive clinical therapeutic effects. Studies of TCM formulas are based on thousands of years of clinical medication experience, which provides a guarantee for the direction of basic research. Basic research can simplify the formula and enhance the targeting and specificity in the treatment of certain diseases. On the other hand, combined with PK analysis, basic research can identify meaningful monomer compounds. In addition, on the basis of pharmacological effect evaluation and molecular mechanism analysis, basic research can develop new therapeutic compounds when combined with chemical synthesis technology. Finally, it is also worth noting that many disease markers, discovered because of their exact clinical value, could play a role in TCM formulas where the active ingredients are not well defined and the treatment mechanisms are not clear. Therefore, the development of new compounds targeting these markers will provide effective research ideas and reliability assurance for the development of new drugs.
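The paragraph above calls for systems-pharmacology analysis of the interactions between HLJDD's chemical components and their targets. The minimal sketch below illustrates one way such a compound-target network could be assembled and queried for targets on which several constituents converge; the edge list is hypothetical and merely stands in for curated database or experimental data.

```python
import networkx as nx

# Hypothetical compound-target pairs; a real analysis would draw these from
# curated databases or binding experiments rather than hard-coding them.
edges = [
    ("berberine", "NF-kB p65"), ("berberine", "iNOS"),
    ("baicalin", "NF-kB p65"), ("baicalin", "COX-2"),
    ("geniposide", "GLP-1 receptor"), ("geniposide", "Nrf2"),
]

G = nx.Graph()
compounds = {c for c, _ in edges}
targets = {t for _, t in edges}
G.add_nodes_from(compounds, kind="compound")
G.add_nodes_from(targets, kind="target")
G.add_edges_from(edges)

# Rank targets by how many constituents converge on them: shared targets are
# candidate points of synergy between the multiple components.
for target in sorted(targets, key=G.degree, reverse=True):
    print(target, "is hit by", G.degree(target), "constituent(s)")
```

On such a network, follow-up pharmacological assays would then test whether constituents converging on the same target act additively or synergistically.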
2019-12-18T16:11:10.796Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "d33df840a3cbbaa43b50ff77482f9f22c5e144a2", "oa_license": "CCBY", "oa_url": "https://cmjournal.biomedcentral.com/track/pdf/10.1186/s13020-019-0277-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d33df840a3cbbaa43b50ff77482f9f22c5e144a2", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
7554582
pes2o/s2orc
v3-fos-license
Diazoxide preconditioning antagonizes cytotoxicity induced by epileptic seizures Diazoxide, an activator of mitochondrial ATP-sensitive potassium channels, can protect neurons and astrocytes against oxidative stress and apoptosis. In this study, we established a cellular model of epilepsy by culturing hippocampal neurons in magnesium-free medium, and used this to investigate effects of diazoxide preconditioning on the expression of inwardly rectifying potassium channel (Kir) subunits of the ATP-sensitive potassium. We found that neuronal viability was significantly reduced in the epileptic cells, whereas it was enhanced by diazoxide preconditioning. Double immunofluorescence and western blot showed a significant increase in the expression of Kir6.1 and Kir6.2 in epileptic cells, especially at 72 hours after seizures. Diazoxide pretreatment completely reversed this effect at 24 hours after seizures. In addition, Kir6.1 expression was significantly upregulated compared with Kir6.2 in hippocampal neurons after seizures. These findings indicate that diazoxide pretreatment may counteract epileptiform discharge-induced cytotoxicity by suppressing the expression of Kir subunits. INTRODUCTION Epilepsy is a common neurological disorder. Continued epileptic discharges could cause many changes at the cellular level including oxidative stress, cytokine activation, activation of glutamate receptors, and activation of subsequent cell death pathways [1] . Sustained epileptic seizures cause a decline in ATP content and change the redox potential, which may lead to mitochondrial dysfunction and energy failure [1][2][3][4] . The hippocampus is especially vulnerable, and tends to suffer selective neuronal loss in the CA1 and CA3 regions [4] . The ATP-sensitive potassium channel can adjust membrane potential-dependent functions according to cellular energetic demands [5] . ATP-sensitive potassium channels are widely represented in metabolically active tissues throughout the body, including the brain. Activation of ATP-sensitive potassium channels hyperpolarizes brain cells, reducing activity and energy consumption, and thereby linking the metabolic state to excitability [6][7] . With functions ranging from glucose regulation to neuroprotection, ATP-sensitive potassium channels play an important role in the adaptive response to pathophysiological stress [5] . ATP-sensitive potassium channels are composed of pore-forming inwardly rectifying potassium channel (Kir) subunits, Kir6.2 or Kir6.1, and modulatory sulfonylurea receptor subunits, sulfonylurea receptor 1 or sulfonylurea receptor 2 [5] . Different combinations of ATP-sensitive potassium channel subunits can form functional ATP-sensitive potassium channels with different susceptibility to hypoxia, oxidative stress, toxicity or changes in blood glucose [7] . It was reported that 60 minutes of myocardial ischemia followed by 24-72 hours of reperfusion specifically upregulated Kir6.1 mRNA [8] . In another study, Kir6.1 mRNA was increased in the rat spinal cord at 4 and 24 hours after acute spinal cord injury [9] . However, the effect of epilepsy on Kir subunit expression in cultured cells remains unclear. It was reported that diazoxide can induce mild oxidative stress and preconditioning-like neuroprotection [10] . Diazoxide has been reported to provide protective effects for neurons and astrocytes against necrosis and apoptosis in animal models of stroke and Parkinson's disease, as well as in cultured cells [10][11][12] . 
However, the effect of diazoxide preconditioning on Kir subunit expression in cultured cells is also unclear. In this study, we used double immunofluorescence and immunoblotting to investigate the effects of epilepsy and diazoxide preconditioning on the expression of Kir subunits in cultured rat hippocampal neurons. To simulate epileptic conditions in vitro, cultured hippocampal neurons were exposed to magnesium-free medium for 3 hours, which induces a lasting change in the physiology of the neuronal culture, resulting in a permanent "epileptiform" phenotype [13][14].
Influence of diazoxide preconditioning on viability of hippocampal neurons
The cells were treated with magnesium-free medium for 3 hours to induce epilepsy, and were then returned to normal culture medium for 24 hours (Ep24 group) or 72 hours (Ep72 group). The diazoxide + Ep24 group and diazoxide + Ep72 group were pretreated with diazoxide (1 mM) for 1 hour before the 3-hour incubation in magnesium-free medium, and then returned to normal medium for 24 hours (diazoxide + Ep24 group) or 72 hours (diazoxide + Ep72 group). The control group was treated with an equal volume of PBS. An MTT reduction assay showed that the epileptiform activity induced cell damage and significantly reduced cell viability (P < 0.05), by 32.2% in the Ep24 group and by 59.7% (P < 0.01) in the Ep72 group, compared with the control group. Pretreatment of cells with diazoxide reduced seizure-induced cytotoxicity and significantly increased cell viability (Figure 1). These results demonstrate that diazoxide can protect against epilepsy-induced cell loss.
Influence of diazoxide preconditioning on Kir subunit expression
Kir subunit expression in each group was observed by double immunofluorescence: Kir6.1 (rhodamine-labeled, red) and Kir6.2 (fluorescein isothiocyanate-labeled, green) were upregulated after epileptiform discharges, and this was prevented by diazoxide pretreatment (Figure 2). The overlay of Kir6.1 and Kir6.2 expression indicated that the expression of the two subunits was not parallel: Kir6.1 showed a greater increase than Kir6.2 (Figure 2). These results suggest that expression of Kir subunits may be regulated by diazoxide. Western blot analysis showed that the expression of Kir6.1 and Kir6.2 was significantly increased in the Ep24 group compared with the control group. This increase was completely prevented by pretreatment with diazoxide (diazoxide + Ep24 group; Figure 3). In the Ep72 group, the up-regulation of Kir6.1 and Kir6.2 was partially reversed by pretreatment with diazoxide (diazoxide + Ep72 group). The increase of Kir6.1 expression in the Ep72 group was particularly large, with a 2.47-fold increase compared with the control group. Kir6.2 expression in the Ep72 group was upregulated to 1.88 times the control level (Figure 3).
DISCUSSION
Epilepsy is the second most common neurodegenerative disease after stroke [15]. Brain injury resulting from seizures is a dynamic process that comprises multiple factors contributing to neuronal cell death. These may involve oxidative stress, altered cytokine levels, genetic factors, excitotoxicity-induced mitochondrial dysfunction, and energy failure [16][17][18][19]. Many reports have shown that ATP-sensitive potassium channels can recognize changes in the cellular metabolic state and translate this information into changes in membrane excitability [5].
(Figure 1: cell viability measured by a quantitative colorimetric MTT assay and expressed as a percentage of the control absorbance at 570 nm; mean ± SEM of six wells per group from three repeated experiments; analysis of variance followed by Student's t-test.)
The mitochondrial ATP-sensitive potassium channels may have a functionally important role in neurons because Kir subunits are more concentrated in neurons than in whole brain tissue [23][24][25]. Diazoxide is the most commonly used mitochondrial ATP-sensitive potassium channel opener. Many reports support the finding that diazoxide preconditioning exhibits potent neuroprotective effects against ischemic neuronal injury, oxidative stress and epilepsy [26][27][28][29]. Recently, diazoxide was shown to protect against status epilepticus-induced neuronal damage during diabetic hyperglycemia [30]. Flagg et al [31] reported that PI3K/Akt signaling may be involved in diazoxide preconditioning that protects against hippocampal neuronal death after pilocarpine-induced seizures in rats. Consistent with previous results, our study in hippocampal cell culture demonstrated massive neuronal loss at 24 and 72 hours after seizures, and a significant attenuation of cell death by diazoxide. ATP-sensitive potassium channels in different brain regions show different subunit compositions, which determine their varying susceptibility to hypoxia, oxidative stress, toxicity and changes in blood glucose [32]. A recent study [33] showed that both Kir6.1 and Kir6.2 proteins are present on synaptic membranes of terminals and spines as well as in vesicular structures within the synaptic cytoplasm. Kir6.1 subunits were located predominantly in the pre-synaptic membrane, whereas Kir6.2 subunits were most likely to be located in the perisynaptic area of terminals. Melamed-Frank et al [33] reported that hypoxia upregulates the expression of Kir6.1 mRNA in vivo and in vitro, which in turn can change the composition of ATP-sensitive potassium channels. We previously found a significant increase in Kir6.1 expression in cultured neurons exposed to amyloid beta (1-42) for 24 hours, whereas Kir6.2 showed no significant change.
After treatment with amyloid beta (1-42) for 72 hours, the expression of both Kir6.1 and Kir6.2 was significantly increased compared with the control group [34][35]. In this study, we found that the expression of Kir6.1 and Kir6.2 was significantly increased at 24 and 72 hours after seizures. The effect on Kir6.1 expression was especially significant. Diazoxide completely prevented the changes seen at 24 hours, and partly attenuated the effects at 72 hours.
(Figure 3: Kir6.1 (A) and Kir6.2 (B) expression assessed by western blot analysis and expressed as the absorbance ratio of Kir protein to β-actin; mean ± SEM of three dishes per group from three repeated experiments; analysis of variance followed by Student's t-test.)
In our study, Kir6.1 increased more significantly than Kir6.2, indicating that the composition of ATP-sensitive potassium channels changed after epileptiform activity. The increased ratio of Kir6.1/Kir6.2 may make the channel more sensitive to the metabolic state of neurons and may be helpful in coordinating electrophysiological function with oxidative stress and the inflammatory reaction. However, it might also contribute to the disturbance of membrane excitability that results in sustained epileptic seizures and neuronal loss. Diazoxide pretreatment may counteract seizure-induced cytotoxicity and maintain mitochondrial and cellular function. The differential regulation of Kir subunits may alter the composition of ATP-sensitive potassium channels, causing changes in channel properties. In summary, epileptiform activity in hippocampal cells increased the expression of Kir6.1 and Kir6.2, the former in particular. These changes were attenuated by pretreatment with diazoxide, indicating a possible neuroprotective action of this drug.
MATERIALS AND METHODS
Design
A parallel controlled in vitro study.
Time and setting
The experiments were performed at the Laboratory of Linyi People's Hospital, Shandong Province, China, from July 2011 to April 2012.
Materials
Twenty healthy pregnant Wistar rats at 17 or 19 days of gestation, weighing 270 ± 20 g, of clean grade, were provided by the Laboratory Animal Center of Shandong University (License No. SCXK (Lu) 2007-0004). Animals were housed with a 12-hour light/dark cycle and allowed free access to standard diet and water. Animal protocols were performed in accordance with the Guidance Suggestions for the Care and Use of Laboratory Animals, formulated by the Ministry of Science and Technology of China [38].
Isolation and culture of hippocampal neurons from fetal rats and preparation of neuronal epilepsy model
Embryos were removed from pregnant Wistar rats at 17 or 19 days of gestation. The hippocampus was dissected and digested with 0.125% trypsin at 37°C for 20 minutes.
Digestion was ended with proliferation growth medium composed of a 1:9 mixture of fetal bovine serum (Gibco-BRL, Carlsbad, CA, USA) and Dulbecco's modified Eagle's medium (Gibco-BRL), and the cells were mechanically dissociated with a fire-polished pipette. The density of the cells was 6.0 × 10 5 cells/mL on 12-well plates for double immunofluorescence analysis, or 2.0 × 10 6 cells/mL on culture dishes for western blot analysis. During the 3-5 days of culture, cells were treated with cytarabine (5 μM) to inhibit the proliferation of gliocytes. To induce epileptiform activity, cells were placed in magnesium-free medium (145 mM NaCl, 2.5 mM KCl, 10 mM HEPES, 2 mM CaCl 2 , 10 mM glucose, and 0.002 mM glycine, pH 7.3, adjusted to 325 mOsm with sucrose) for 3 hours [39] . Treatments Diazoxide (Sigma, St. Louis, MO, USA) was dissolved in dimethyl sulfoxide (Sigma) to a concentration of 0.08%, and then diluted in serum-free medium prior to experiments. MTT reduction assay for cell viability Cellular viability was measured in a 96-well plate with a quantitative MTT colorimetric assay [40] . The culture medium was changed, and MTT (final concentration 0.5 mg/mL) was added to the cells. Western blot analysis of Kir subunit expression Cells were collected, washed in ice-cold PBS, and lysed in 250 μL of lysis buffer per dish. After incubation for 20 minutes on ice, cell lysates were centrifuged at 10 000 × g for 10 minutes at 4°C, and the protein concentration in the extracts was determined using a BCA Protein Assay Kit (Shenneng Bocai Company, Shanghai, China). Twenty microliters of solubilized total cell lysate (50 μg protein) was loaded per lane for sodium dodecyl sulfate polyacrylamide gel electrophoresis on a 10% (w/v) polyacrylamide gel. The proteins were transferred onto a polyvinylidene fluoride membrane by a Mini Trans-Blot Cell apparatus (Bio-Rad Laboratories, Shanghai, China) at 100 V for 120 minutes at 4°C. Membranes were blocked at room temperature (25°C) for 60 minutes with 5% (w/v) dried milk in Tris-buffered saline, and then incubated with polyclonal goat anti-Kir6.1 or polyclonal rabbit anti-Kir6.2 (1:400; Santa Cruz Biotechnology) overnight at 4°C. The blots were incubated with horseradish peroxidase-conjugated rabbit anti-goat IgG or horseradish peroxidase-conjugated goat anti-rabbit IgG (Santa Cruz Biotechnology) diluted in Tris-buffered saline and Tween 20 (1:3 000) at room temperature for 1 hour. Signal detection was performed with an enhanced chemiluminescence kit (Beijing Zhongshan Company). Immunoreactive bands were quantified using AlphaImager 2200 (Alpha Innotech, Santa Clara, CA, USA). Values were normalized to the absorbance of β-actin. The β-actin was detected with mouse anti-rat monoclonal antibody (Santa Cruz Biotechnology). Statistical analysis Data were expressed as mean ± SEM. The statistical significance of the difference between control and samples treated for different times was determined by analysis of variance followed by Student's t-test using SPSS 13.0 software (SPSS, Chicago, IL, USA). P < 0.05 was considered statistically significant.
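To make the quantification steps described above concrete, the following sketch computes cell viability as a percentage of the control absorbance at 570 nm and applies a one-way analysis of variance followed by a t-test, mirroring the stated statistical approach. The absorbance values are invented for illustration only; the group names follow the study's labels.

```python
import numpy as np
from scipy import stats

# Hypothetical absorbance readings at 570 nm (six wells per group), mimicking the
# plate layout described above; real values would come from the plate reader.
od = {
    "control":          [0.82, 0.79, 0.85, 0.81, 0.80, 0.83],
    "Ep24":             [0.55, 0.58, 0.53, 0.57, 0.54, 0.56],
    "Ep72":             [0.33, 0.31, 0.35, 0.30, 0.34, 0.32],
    "diazoxide + Ep24": [0.72, 0.70, 0.74, 0.71, 0.73, 0.69],
    "diazoxide + Ep72": [0.48, 0.46, 0.50, 0.47, 0.49, 0.45],
}

# Viability expressed as a percentage of the mean control absorbance.
control_mean = np.mean(od["control"])
for group, values in od.items():
    print(f"{group}: {100.0 * np.mean(values) / control_mean:.1f}% of control")

# One-way ANOVA across all groups, then a pairwise t-test as a follow-up,
# roughly matching the "analysis of variance followed by Student's t-test" approach.
f_stat, p_anova = stats.f_oneway(*od.values())
t_stat, p_t = stats.ttest_ind(od["Ep24"], od["diazoxide + Ep24"])
print(f"ANOVA p = {p_anova:.4g}; Ep24 vs diazoxide + Ep24 t-test p = {p_t:.4g}")
```

In a full analysis, the pairwise comparisons would be repeated for each pre-specified contrast and interpreted against the P < 0.05 threshold stated above.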
2018-04-03T02:06:05.921Z
2013-04-15T00:00:00.000
{ "year": 2013, "sha1": "21d7b6c4c0c4af9f8194f08a3906167cd75c1139", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "Adhoc", "pdf_hash": "9b074ddada895f5b8c8e37809ae3462994287427", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
266744802
pes2o/s2orc
v3-fos-license
The role of serum thymidine kinase 1 activity in neoadjuvant-treated HER2-positive breast cancer: biomarker analysis from the Swedish phase II randomized PREDIX HER2 trial Background Thymidine kinase 1 (TK1) plays a pivotal role in DNA synthesis and cellular proliferation. TK1 has been studied as a prognostic marker and as an early indicator of treatment response in human epidermal growth factor 2 (HER2)-negative early and metastatic breast cancer (BC). However, the prognostic and predictive value of serial TK1 activity in HER2-positive BC remains unknown. Methods In the PREDIX HER2 trial, 197 HER2-positive BC patients were randomized to neoadjuvant trastuzumab, pertuzumab, and docetaxel (DPH) or trastuzumab emtansine (T-DM1), followed by surgery and adjuvant epirubicin and cyclophosphamide. Serum samples were prospectively collected from all participants at multiple timepoints: at baseline, after cycle 1, 2, 4, and 6, at end of adjuvant therapy, annually for a total period of 5 years and/or at the time of recurrence. The associations of sTK1 activity with baseline characteristics, pathologic complete response (pCR), event-free survival (EFS), and disease-free survival (DFS) were evaluated. Results No association was detected between baseline sTK1 levels and all the baseline clinicopathologic characteristics. An increase of TK1 activity from baseline to cycle 2 was seen in all cases. sTK1 level at baseline, after 2 and 4 cycles was not associated with pCR status. After a median follow-up of 58 months, 23 patients had EFS events. There was no significant effect between baseline or cycle 2 sTK1 activity and time to event. A non-significant trend was noted among patents with residual disease (non-pCR) and high sTK1 activity at the end of treatment visit, indicating a potentially worse long-term prognosis. Conclusion sTK1 activity increased following neoadjuvant therapy for HER2-positive BC but was not associated with patient outcomes or treatment benefit. However, the post-surgery prognostic value in patients that have not attained pCR warrants further investigation. Trial registration ClinicalTrials.gov, NCT02568839. Registered on 6 October 2015. Supplementary Information The online version contains supplementary material available at 10.1007/s10549-023-07200-x. Introduction Uncontrolled cell proliferation is a key hallmark of cancer [1].Thymidine Kinase 1(TK1), a strictly cell cycle-regulated enzyme and well-characterized proliferation marker, is essential during DNA precursor synthesis.TK1 levels and activity are low or undetectable in resting cells but increase significantly from late G1 to late S-phase in proliferating cells [2].The role of cellular TK1 as a potential biomarker has been previously evaluated in breast and other cancer types, mostly associated with worse prognosis [3,4].The advantage of minimally invasive measurement of TK1 activity in serum samples enables its serial evaluation during different disease phases-as compared to tissue-based markers such as mitotic count and Ki-67, while its reliable and reproducible quantification has been validated in several large prospective cohorts [5][6][7]. 
Extended author information available on the last page of the article The USA Food and Drug Administration approved the use of sTK1 activity in 2022 as a biomarker for monitoring disease progression in previously diagnosed hormone receptor-positive (HR +), HER2-negative metastatic postmenopausal breast cancer(mBC) patients based on the results of the SWOG S0226 trial [7] and subsequently validated in a recent prospective trial of 287 mBC patients receiving first-line CDK4/6 inhibitor(CDK4/6i) in combination with endocrine therapy (ET) [8].Its impact on physician's decision making on HR + mBC is currently under evaluation in another prospective trial [9].In early BC patients, effective neoadjuvant treatment is becoming the standard of care, but the identification and validation of potential biomarkers of early response remains an unmet need.A recent study demonstrated the utility of serum TK1 activity for monitoring responses to neoadjuvant CDK4/6i in early HR + BC patients [10].In addition, we have previously shown that serial measurement of serum TK1 activity during neoadjuvant chemotherapy (NACT) might provide long-term prognostic information [11].However, there is currently limited data regarding the utility of TK1 as a predictive or prognostic marker in HER2-positive BC. In the phase II randomized PREDIX HER2 trial, we previously reported that the efficacy of standard neoadjuvant combination of trastuzumab, pertuzumab, and docetaxel(DPH) treatment was not superior to trastuzumab emtansine(T-DM1) in terms of pathologic complete response (pCR) and long-term survival, while patients treated with T-DM1 had a markedly lower frequency of adverse effects and significantly better quality of life during the neoadjuvant period [12,13].In this study, we evaluated the potential predictive and prognostic value of baseline and serial levels of serum TK1 in patients with HER2 + early BC enrolled in the PREDIX HER2 trial. Clinical trial, endpoints, and sample collection PREDIX HER2 is a phase II, randomized, multicenter, academic clinical trial, conducted between December 2014 and October 2018 in nine centers across Sweden (ClinicalTrials.gov identifier NCT02568839).The study enrolled male or female patients aged 18 years or older with ERBB2-positive tumors larger than 20 mm and/or verified lymph node metastases.Patients were randomized in a 1:1 ratio to receive either six courses of docetaxel (first dose, 75 mg/m 2 , then 100 mg/m 2 ), subcutaneous trastuzumab (600 mg), and pertuzumab (loading dose, 840 mg, then 420 mg), or six courses of T-DM1 (3.6 mg/kg). The primary endpoint of the study was objective pathologic response, with pathologic complete response (pCR) defined as the absence of invasive tumor in the breast and lymph nodes (ypT0/Tis, ypN0).Event-free survival (EFS) was defined as the time from the date of randomization to the occurrence of the first event, including progression during treatment, locoregional or distant recurrence, contralateral breast cancer, other malignancy, or death from any cause.Disease-free survival (DFS) was defined as the time from the date of surgery to the first appearance of locoregional or distant recurrence, contralateral breast cancer, any cancer from other primary sites, or death from any cause. As shown in Fig. 
1, blood samples were collected from all patients at baseline (visit 0), 16 ± 2 days after 2 cycles of treatment (visit 2), 16 ± 2 days after cycle 4 (visit 3), 16 ± 2 days after cycle 6 (visit 4), at the time of adjuvant treatment ends (visit EoT) and, where applicable, at the time of recurrence (visit R).Due to a protocol amendment, blood at visit 1(8 ± 2 days after cycle 1) was only collected from some of the patients treated at Karolinska University Hospital.This correlative analysis is reported in accordance with the REMARK criteria (REporting recommendations for tumor MARKer, supplementary Table 1). Measurement of TK1 activity The study employed the ELISA-based DiviTum® TKa assay (Biovica, Sweden) to determine the enzymatic activity of sTK1 in serum samples.The assay was performed on two aliquots of approximately 1 mL serum for each timepoint, following the manufacturer's instructions and as previously described [11].Briefly, the serum samples were mixed with a reaction buffer and incubated with a 96-well microtiter plate.The TK reaction phosphorylated bromodeoxyuridine (BrdU), a thymidine analogue, to BrdU-monophosphate, which was further phosphorylated into BrdU triphosphate.An anti-BrdU monoclonal antibody conjugated to enzyme alkaline phosphatase and a chromogenic substrate were used to detect BrdU triphosphate, resulting in the production of a yellow reaction product.The enzymatic activity of TK1 was expressed as DiviTum® unit of Activity (DuA), which were calculated using known TK activity values from reference sample of recombinant TK within a measuring range of 45 to 3081 DuA.All samples were analyzed at Biovica laboratories in Uppsala, Sweden, blinded to patient and tumor characteristics. Statistical analysis Violin plots were generated to show serum TK1 activity by time point in all patients.Undetectable TK1 activity of < 45 DuA at baseline and extreme high TK1 activity of > 3081 DuA were regarded as 45 and 3081 in the description of TK1 levels over time (Fig. 2A and B).Line plots displayed the levels of serum TK1 activity by time point in all patients and by treatment groups.For categorical variables, the distribution of TK1 levels in standard clinicopathological subgroups was compared using Chi-square test or the exact Fisher test.For continuous variables, difference in mean or median between groups was assessed using t-student test or ANOVA-test (parametric) or Mann Whitney test or Kruskal Wallis (non-parametric) as appropriate. The association of TK1 activity with pCR, EFS, and DFS was tested with univariate logistic regression and Cox regression.Multivariate analyses, including factors that were statistically significant in the univariate analyses and/or were clinically relevant, were applied to assess the adjusted odds ratios and hazard ratios.All p values are two-sided.All statistical analyses, descriptive and inferential, were performed with R version 4. Patient characteristics and outcomes A total of 202 patients were initially enrolled in the PREDIX HER2 trial, five patients were excluded from further analysis (three patients withdrew consent and two patients received a diagnosis of disseminated disease before treatment initiation).The intention-to-treat population consisted of 197 patients (99 patients in the standard group and 98 patients in the investigational group), all were evaluable for the current analysis.Patient characteristics and treatment details have been previously described [12] and are shown in brief in Supplementary Fig. 1. 
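The statistical-analysis description above notes that sTK1 readings below 45 DuA or above 3081 DuA were set to the assay limits, and that patients were then grouped around the median. The sketch below illustrates that handling on hypothetical readings; how the median is computed (here, over detectable values only) is an assumption made for illustration rather than a statement of the trial's exact rule.

```python
import numpy as np
import pandas as pd

LOWER, UPPER = 45.0, 3081.0  # DiviTum TKa measuring range (DuA)

# Hypothetical baseline sTK1 readings; "<45" and ">3081" mimic values reported
# outside the assay's measuring range.
raw = ["<45", 120, 310, 95, ">3081", 780, "<45", 1500]

def to_dua(value):
    """Map out-of-range readings to the range limits, as done for the kinetics plots."""
    if value == "<45":
        return LOWER
    if value == ">3081":
        return UPPER
    return float(value)

dua = pd.Series([to_dua(v) for v in raw], name="sTK1_DuA")

# Baseline grouping: undetectable (<45 DuA), low (45 DuA to median), high (>median).
detectable = dua[dua > LOWER]
median = detectable.median()
group = np.where(dua <= LOWER, "undetectable",
                 np.where(dua <= median, "low", "high"))
print(pd.Series(group).value_counts())
```

At post-baseline timepoints the same clipping applies, but the categories become low, high, and out of range, again split at the timepoint-specific median as described above.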
Table 1 presents the distribution of patient characteristics according to baseline TK1 activity.No association was detected with respect to age, tumor grade, hormone receptor status, Ki-67 status, or TILs percentage.The median follow-up for patients with available baseline TK1 data was 58 (range, 17-88) months. sTK1 activity kinetics during treatment In sequential samples, the levels of sTK1 activity in all patients and by treatment arms are summarized in Supplementary Table 2.The sTK1 activity kinetics are illustrated in Fig. 2A.Generally, TK1 activity was low at baseline for most patients.It significantly increased (p < 0.001) after one cycle of treatment, remained relatively stable during the neoadjuvant phase, and decreased at the end of adjuvant treatment.The median level of sTK1 activity subsequently decreased to approximately the same value as at baseline at the 1-year follow-up.The fluctuation of median sTK1 activity from baseline to visit 2 was higher in DPH arm than in T-DM1 arm, the level remained at high level of above 1000 DuA during the DPH treatment and decreased at visit EoT, while sTK1 remained at the intermediate high level of around 1000 DuA during neoadjuvant phases of T-DM1 treatment and at the high level at visit EoT (Fig. 2B). sTK1 activity level was explored as a categorical variable by dividing patients into three groups based on median value at baseline: undetectable (< 45 DuA), low (45 DuA ≤ value ≤ median), high (> median).At subsequent timepoints, patients were categorized into three groups based on the median value of sTK1 at each timepoint: low (< median), high (median ≤ value ≤ 3081 DuA), and out of range (> 3081 DuA).A significantly higher proportion of patients had out of range sTK1 activity at visit 2 and 4 in the standard than in the experimental arm, while conversely, higher proportion of patients had low sTK1 activity in the standard arm at visit EoT, and there is no difference in visit FU1(Supplementary Table 3).The dynamic group change from baseline to visit 2 and visit 4 is shown in Fig. 2C and D shows the flow of group change from visit EoT to visit FU1. Association between sTK1 activity levels and pCR To evaluate sTK1 level as an early marker of therapy response, we assessed the association of sTK1 levels at baseline, visit 2 and visit 4 and their kinetics with pCR.The median (IQR) sTK1 levels over time for pCR and non-pCR cases are illustrated in line chart (Supplementary Fig. 2A).Number of patients distributed in three groups divided by median value in treatment arms are shown in Supplementary Fig. 2B.sTK1 level at baseline, visit 2 and visit 4 did not have a significant effect on pCR status in adjusted logistic regression model (Table 2). Clinical outcomes according to sTK1 level at baseline and follow-up timepoints during therapy Finally, we investigated the association between sTK1 levels and long-term prognosis.sTK1 levels at baseline and visit 2 were not associated with EFS (Fig. 3, Table 3). A predefined cutoff of < 250 DuA is associated with a lower likelihood of disease progression in HR + HER2-Mbc [14] and was explored in the current study.Patients with low sTK1 at the end of adjuvant treatment (visit EoT) by both median and the predefined cutoff (250 DuA) had numerically better DFS (Supplementary Fig. 
3); however, the difference was not statistically significant in models adjusted for pCR, Ki67, treatment arm, tumor grade, tumor size, ER status, and node status (Supplementary Table 4). Subsequently, we assessed the prognostic value of sTK1 at visit EoT in the subset of non-pCR patients. A non-significant trend was observed between higher sTK1 activity at the end-of-treatment visit and worse survival outcomes (Fig. 4, Table 4).

Discussion
One of the fundamental characteristics of cancer is its ability to proliferate, making cell proliferation a critical hallmark of the disease [1]. Changes in proliferation rates can serve as a significant indicator of long-term tumor prognosis and of the response to early treatment. Liquid-based proliferation biomarkers have emerged as a promising non-invasive method for assessing these factors. The DiviTum® TKa assay platform has been validated for its reproducibility and has been found to perform favorably when compared with other assays [15]. In this prospective randomized trial, we evaluated longitudinal serum TK1 activity and investigated its potential value in early HER2+ disease. Although clinical utility for this setting could not be demonstrated in our study, our major findings add to the current evidence on sTK1 dynamics in breast cancer and provide interesting insights into how sTK1 could be further investigated for various clinical uses.

Firstly, our study demonstrated dynamic sTK1 activity during different phases of HER2+ disease. We observed that the median sTK1 level of patients with detectable sTK1 at diagnosis was comparably higher than in a study of patients with clinical stage II HER2-negative breast cancer using the same assay [16], but lower than in patients with advanced breast cancer [17], suggesting that sTK1 possibly reflects tumor burden, notwithstanding the difficulties of comparing different studies. Furthermore, we observed an increase in sTK1 even after short exposure to neoadjuvant treatment, similar to our previous findings in HER2-negative breast cancer treated with chemotherapy [11] and in contrast with available data on neoadjuvant endocrine-treated disease [10]. This could be due to the following explanations: (1) effective targeted treatment induces cancer cell death, and cytosolic TK1 is then released into the bloodstream; (2) effective chemotherapy inhibits the de novo dTMP synthesis pathway and effectively activates the salvage pathway, leading to more TK1 uptake and thus more exocytosis/exosomal TK1 detected in the blood [18]. Therefore, sTK1 is more likely to be a metabolic marker, as also indicated in previous clinical and preclinical studies [19, 20]. Interestingly, more patients had high sTK1 in the DPH arm than in the T-DM1 arm during neoadjuvant treatment, probably indicating a larger metabolic change for patients receiving regimens containing traditional chemotherapeutics. In both treatment groups, sTK1 increased significantly from baseline to visit 2 in both pCR and non-pCR cases; we also observed a marginally significant change in sTK1 from visit 2 to visit 3 in non-pCR cases but not in pCR cases. However, neither baseline sTK1 nor sTK1 at cycle 2 or cycle 4, by cutoffs at the respective timepoints, was associated with pCR. Previous findings had linked lower sTK1 with a greater likelihood of response to chemotherapy in lung cancer patients [18], which we did not observe, with the important caveat that the limited detection range of the assay introduces informative missingness into the analyses. Similarly, an association between sTK1 levels at any timepoint during neoadjuvant therapy, or sTK1 kinetics, and long-term survival was not observed. We have previously demonstrated a greater early sTK1 increase during neoadjuvant therapy for HER2-negative breast cancer, mostly in highly proliferative tumors, and that the extent of this early increase was associated with improved survival outcome. The reasons behind the lack of prognostic value for HER2-positive breast cancer in this study are unclear; whether it is due to the small sample size with few events, the detection range of the assay, or the biology of the disease, further investigation is warranted.

An intriguing finding of our study is the plausible prognostic value of sTK1 for patients with residual invasive HER2-positive breast cancer. Trastuzumab emtansine is the recommended post-neoadjuvant salvage therapy for such patients, even though three out of four patients treated with trastuzumab in the KATHERINE trial were disease-free at 3 years [21]; such patients are currently overtreated with trastuzumab emtansine, with higher toxicity and increased cost as a result. Attempts to refine the post-neoadjuvant strategy by using the grade of histopathologic remission [22] or bespoke circulating tumor DNA panels [23] have shown clear clinical validity but have hitherto lacked clinical utility. Here, we show that sTK1 levels following surgery identify distinct prognostic groups within the population of patients with residual disease. Conceivably, by combining the well-validated Residual Cancer Burden index with sTK1, an assay of low complexity and cost, prognostication could be refined and patients with an excellent prognosis could be spared unnecessary salvage treatment. Although our findings should be considered hypothesis-generating because of their exploratory nature and the few post-surgery relapses, the unmet clinical need to better stratify patients with residual invasive disease underscores the need for further validation of our observations in a larger cohort.

This is, to the best of our knowledge, the first study to longitudinally assess serum TK1 levels in HER2+ breast cancer patients in a prospective, randomized clinical trial with long-term follow-up of more than five years. Additionally, serum samples were collected at baseline and subsequent timepoints from most trial participants, ensuring adequate representation and minimizing informative missingness. On the other hand, our study has some limitations that need to be considered. Firstly, it is an exploratory biomarker study that relies on retrospective analysis of prospectively collected data, and the findings lack validation. Secondly, the relatively small number of patients and survival events may have concealed associations with outcomes. Additionally, the TK1 assay itself has technical obstacles to overcome, such as a detection range adopted for HER2-negative disease and, currently, no standard cutoffs for early-stage disease.

In conclusion, our study is, to the best of our knowledge, the first to longitudinally assess sTK1 as a putative long-term prognosticator in HER2+ breast cancer, both at baseline and following short-term exposure to neoadjuvant HER2-targeted therapy. While sTK1 levels and kinetics during treatment were generally not prognostic for short- or long-term outcomes, the post-surgery prognostic value in patients who have not attained pCR warrants further investigation.
Fig. 2 A Median sTK1 change over time for all patients and B by treatment arm; C sTK1 activity shifts from baseline to visit 2 and visit 4 and D from visit EoT to visit FU1
Fig. 3 Baseline and visit 2 sTK1 and their correlations with EFS
Fig. 4 DFS probability according to sTK1 activity at visit EoT with A the median value (422 DuA) as cutoff and B 250 DuA as cutoff, in the subset of non-pCR patients
Table 2 sTK1 level at baseline, visit 2 and visit 4 and its associations with pCR. a Adjusted for Ki67, treatment arm, tumor grade, tumor size, ER status and node status
Table 4 Association of sTK1 level at visit EoT with disease-free survival (DFS) in non-pCR patients. a Adjusted for Ki67, treatment arm, tumor grade, tumor size, ER status and node status
2024-01-04T14:11:39.390Z
2024-01-04T00:00:00.000
{ "year": 2024, "sha1": "a1213b8dd896acfe2411d9f05b44f14d2ab0e79f", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10549-023-07200-x.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "8d40f54bb94c63d27bd250bd581136db9044100a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14273625
pes2o/s2orc
v3-fos-license
Improving Efl Writing through Study of Semantic Concepts in Formulaic Language Within Asian EFL contexts such as South Korea, large class sizes, poor sources of input and an overreliance on the Grammar-Translation Method may negatively impact semantic and pragmatic development of writing content. Since formulaic language is imbued with syntactic, semantic and pragmatic linguistic features, it represents an ideal means to evaluate the influence of Asian EFL contexts on writing. Thus, formulaic language within academic texts from Korean university students was compared to that found in essays written by American university students. Results revealed that Korean EFL learners overused transitions to define the organization of academic texts at the expense of developing content. Moreover, they used repetition, general lists, and all-purpose formulaic language to " pad " content, neglecting to consider semantic or pragmatic purposes of the text. In contrast to their Korean EFL counterparts, American university students used formulaic language for a variety of pragmatic purposes such as involving the reader, putting examples into a larger perspective, adding connotation, and addressing the perspective of the reader. It appears that EFL contexts such as South Korea require pedagogical and curricular reforms which foster the development of writing composition for semantic and pragmatic purposes. Introduction Cultivating good academic writing skills has historically been a challenge for English learners in South Korea (Oak & Martin, 2003).Large class sizes have severely limited the degree to which students can interact with instructors and receive feedback.While the number of students in each classroom was reduced a decade ago, sizeable classes of 35-40 still continued to predominate in most Korean middle and high schools (Cho, 2004).Some classes remain large today, particularly in school districts with limited funding or facilities to accommodate young learners. In addition to class size, extensive instruction via the Grammar-Translation Approach has presented challenges.While educators in Asian countries such as South Korea and China are now aware of innovative pedagogical practices to improve student writing, they tend to rely on conventional methods of instruction and feedback that focus on language form (Lee, 2014).The continued emphasis on grammar and translation has not only influenced learners' written communication, but their conception of effective writing principles.Concerning these principles, Tyson (2003) wrote that, "when I ask my [Korean EFL] students what they hope or expect to learn in my composition classes at the beginning of a new semester, nearly all of them mention grammar" (p.115).In actuality, this statement is a reflection of pedagogical overemphasis on language form, which has left learners with an impression that grammar is the primary means to produce quality writing. Although emphasis on grammar in the writing process is needed, it represents merely one facet of a much more complex process.Overemphasis of syntax, as well as extensive rote memorization of vocabulary, limits understanding of top-down processes related to discourse and pragmatics.Essentially, the excessive literal interpretation of individual linguistic forms prevents comprehension of language as a whole.This issue is illustrated through a recent conversation between an American and Korean EFL student: American: I'm so happy that you made these cookies! 
In this example, the Korean speaker shows a clear understanding of the grammar and surface meaning.The speaker, however, is unable to grasp the top-down, illocutionary purpose of the American's comment, which was made to express thanks.Failures such as these may be expected in countries that primarily emphasize grammar at the expense of semantic concepts.This explains research citing extensive pragmatic failures among Asian learners (Zheng & Huang, 2010).Because semantic and pragmatic failures may form a significant barrier to effective written communication, more innovative and effective pedagogical techniques are needed in EFL contexts.Before these techniques may be designed, however, the influence of various contextual factors on writing must be further examined. Research Problem In addition to bottom-up learning concepts such as grammar, it is essential that writers gain top-down knowledge of L2 discourse, pragmatics, and culture (Celce-Murcia & Yoo, 2014).Because formulaic language is a reflection of all these concepts, it can be further studied to expand our understanding of writing development, influences of the writing process, and appropriate pedagogical strategies to enhance writing ability.The following questions have been posed to study writing in an Asian EFL context: 1) How does formulaic language in EFL academic writing differ from that found in native English contexts? 2) Why does formulaic language use differ?What do disparities in formulaic language use reveal about the writer's understanding of semantic concepts and discourse? 3) How can an understanding of formulaic language be used to improve pedagogy and evaluation in Asian EFL contexts? Because EFL environments differ from those found in an ESL context, study within these contexts may reveal how differing input and pedagogical practices influence the writing process.This understanding may subsequently be used to develop more effective forms of writing instruction. Literature Review Initially, methods of language instruction focused upon rote memorization, grammar drills, and translation of L2 texts into the learner's mother tongue (Celce-Murcia, 1991).In the 1960's, however, it became apparent that such instruction was not effective as a means to improve speech or writing.Regardless of the instructional techniques applied, learners appeared to acquire linguistic features in a set order, which was hypothesized to be the manifestation of an innate Language Acquisition Device (LAD) (Chomsky, 1975(Chomsky, , 1981(Chomsky, , 1986;;Cook, 1993;Dulay & Burt, 1974;Dulay, Burt, & Krashen, 1982).As support for the presence of LAD grew, so did theories advocating the modification of learner input and tasks according to a learner's developmental level (Dulay & Burt, 1973;Krashen & Terrell, 1983;Pienemann, 1999Pienemann, , 2005)). 
Despite the valuable insight gleaned from research of language acquisition, the most effective means to modify instructional input and tasks to improve writing still remains unclear.This uncertainty is exemplified by the perpetuation of an intense debate over error feedback (Van Beuningen, DeJong, Kuiken, 2012;Bitchener, Young, & Cameron, 2005;Ferris, 2004;Truscott, 1996Truscott, , 1999)).Researchers such as Truscott (1996Truscott ( , 1999) ) have concluded that explicit grammar correction is superfluous in the writing process, while others have vehemently refuted this claim (Bitchener, Young, & Cameron, 2005;Ferris, 2004).In reality, the debate appears to be fueled by a limited focus merely on syntax, which neglects consideration of other factors that influence grammar development.Such a limited view makes interpretation of inconsistencies with pedagogical treatments challenging.To address this issue, recent research has taken a more holistic approach to the study of grammar.Recent studies, for example, have begun to simultaneously evaluate the importance of both semantic and syntactic concepts in the process of language production (Cuza, Guijarro-Fuentes, Pires, & Rothman, 2013;Gil & Marsden, 2010;Han & Liu, 2013;Ko, Perovic, Ionin, & Wexler, 2008). While aspects of semantics and syntax are synergistically responsible for grammar development, they are also just as important for the development of lexical elements within writing.This is exemplified by a study revealing that language learners utilize semantic understanding from their L1 to interpret relationships between both syntactic and lexical elements of their L2 (Jiang, 2004).Since syntax, semantics, and lexical units are all vital elements of language development, researchers such as Kecskes (2007) have sought to integrate and understand these factors through the development of a formulaic continuum (Table 1).As the continuum moves from left to right, the degree of semantic complexity increases.On the left side of the continuum, grammatical units such as "have to" appear to have a direct relationship between their syntax and meaning.On the right side of the continuum, however, Situation-Bound Utterances (SBUs) (lexical units used in precise pragmatic situations) and idioms reveal little connection between their individual constituents and meaning.The more highly complex formulaic sequences at the right of the continuum include not only less salient semantic characteristics, but cultural and pragmatic meaning.Consider an idiom such as, "Rome wasn't built in a day."It conveys a sense that the creation of great things takes time.The understanding that Rome is a "great thing" or a tremendous feat of engineering, however, is deeply rooted in a conception based upon Western civilization.Because such idiomatic expressions are imbued with cultural and pragmatic information, they may be very difficult for Asian EFL learners to acquire. 
Collectively, all of the grammatical and semantic units within the continuum in Table 1 are needed to effectively communicate.Not only is formulaic language used to convey semantic meaning, it is used to organize discourse (e.g., fixed semantic units or transitions), culturally connect to others (e.g., idioms), and serve pragmatic functions (e.g., SBUs).Research also confirms that such language is a ubiquitous part of writing, comprising more than 50% of most written discourse (Biber, 2009;Durrant, & Mathews-Aydınlı, 2011;Schmitt, 2004).Since formulaic language is so prevalent within writing, and includes a variety of semantic and syntactic elements, it represents an ideal tool for evaluation.Thus, study of formulaic language may help reveal the effectiveness or ineffectiveness of input and pedagogical techniques used to cultivate English writing skills. Formulaic Language in an EFL Context Recent studies clearly reveal the effectiveness of explicit pedagogical techniques on the development of formulaic language and fluency (Li & Schmitt, 2009;Wood, 2007Wood, , 2008Wood, , 2009;;Wray, 2000).The sole use of participants within ESL contexts, however, has provided only a limited perspective.Although ESL contexts contain a rich environment for the acquisition of formulaic language, EFL contexts often contain input that is scant or highly different from that found in native English contexts.Furthermore, the commercial book market produces resources of low quality that do not properly prepare EFL learners to write effectively (Chen, 2007;Cho & Shin, 2014). In addition to issues with input, pedagogical strategies within EFL contexts may inhibit development of formulaic language, along with its associated semantic, pragmatic, and cultural content.In Asian countries such as South Korea, Japan, and China, college entrance exams often drive the use of the Grammar-Translation Approach (Watanabe, 1996).Via this method, English sentences are "dissected" into their constituent parts and processed individually, rather than collectively to produce communicative discourse.Since this type of instruction emphasizes a bottom-up approach, it may severely limit understanding of top-down processes used to produce larger lexical expressions and written compositions. Because issues within Asian EFL contexts today make learning formulaic language and associated top-down semantic concepts problematic, it is essential that additional research be conducted.This research can ascertain influences of variable input and pedagogical practices on the development of formulaic language and semantic concepts in writing.Results of such study, in turn, may lead to the development of new pedagogical approaches for composition which accommodate the unique needs of Asian EFL learners in countries such as South Korea, China, and Japan. 
Materials In order to analyze texts written by both native English and Korean EFL university learners, the Louvain Corpus of Native English Essays (LOCNESS) and the Gachon Learner Corpus (GLC) were used respectively (Carlstrom, 2013;Centre for English Corpus Linguistics, 2014).LOCNESS includes essays from a variety of genres written by British and American university students.The learners selected from this corpus were all native speakers of English who studied at Presbyterian College in South Carolina.Like LOCNESS, the GLC includes academic texts from a variety of genres.These texts, however, are generally one paragraph in length and were written by EFL learners.All of the learners included in this corpus studied at Gachon University, which is located near Seoul, South Korea. Operational Definition of Variables Formulaic language is defined as, "a segment of language made up of several morphemes or words which are learned together and used as if they were a single item" (Richards & Schmidt, 2013, p. 503).In accordance with this definition, groups of morphemes or groups of words that formed single semantic units were systematically studied within each text of the selected corpora.To enhance understanding of language use within writing, the following types of formulaic expressions were selected from Table 1 and operationally defined as follows: 1) Fixed Semantic Units-Multiword units that cannot be changed (e.g., "As a matter of fact,") 2) Phrasal Verbs-Verbs with a particle (not verbs with a preposition) (e.g., "turn the light off") 3) Speech Formulas-Collocations which include "slots" or segments that may be altered (e.g, "… stand in the way of (your dreams)") 4) Idioms-A group of words with a meaning which is not discernible from constituent parts (e.g., "Paint a colorful picture of (life in the 18th century)") Since singular grammatical features (e.g., "have to") do not clearly reveal top-down semantic concepts in academic writing, and SBUs (e.g., "Help yourself") are often closely associated with situational contexts in oral communication, these features of the formulaic continuum were not selected for study. Procedure Since corpora that could be both quantitatively and qualitatively studied were needed, subsections of each corpus were selected.From the LOCNESS corpus, a subcorpus obtained from Presbyterian College in South Carolina was chosen.This corpus includes essays from 8 different learners which were collected in 1995.The first 16 essays comprised a corpus of 1,776 words.While the corpus was large enough to provide rich quantitative data, it was also small enough so that each text could be qualitatively analyzed in detail.The essays were of mixed themes, but related mostly to different forms of literature. 
To select a subsection from the GLC, the most advanced Korean learners who had not studied abroad were chosen.The most advanced learners were selected with the assumption that their writings would include the greatest amount of semantic complexity, while learners who had not studied abroad were selected to minimize the chance that input from native English contexts could influence the use of formulaic language.Determinations of proficiency level were made by selecting students with the highest TOEIC scores.These scores ranged from 800-925 and included 9 different students.Seventy-eight texts from these students, in the form of academic paragraphs of mixed genres, created a corpus of 1,754 words.The similarity in size of the GLC corpus selection with that of LOCNESS (1,776 words) made direct comparison of frequency values possible.The smaller sample size of both corpora also allowed for comprehensive qualitative analysis of the texts. After the corpora were selected for study, formulaic language was systematically located and analyzed.To provide the most comprehensive view of formulaic language use, a search for semantic sequences was conducted in two steps.In the first step, a simple concordance program was used to look at individual words and word frequencies.Formulaic patterns were analyzed qualitatively through referencing the words in context.In the second step, each text was carefully examined so that larger semantic units of the formulaic continuum (fixed semantic units, phrasal verbs, speech formulas, and idioms) could be counted.After the numbers of each type of unit were tallied, quantitative patterns were qualitatively examined. Following quantitative and qualitative analysis of formulaic language use, results from each corpus were contrasted to examine how and why formulaic language differed between the two groups of writers (research questions one and two).The results were then organized and summarized.Issues concerning the use of formulaic language among Korean EFL learners were outlined, along with pedagogical interventions needed to assist these and other learners from similar contexts (research question three). 
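A minimal sketch of the two-step procedure described above (word-frequency counts with a simple concordancer, followed by manual tallying of the larger formulaic units) could look like the following in Python. This is an illustration under stated assumptions only: the file name "locness_presbyterian.txt" and the keyword "own" are hypothetical examples, and this is not the concordance program actually used in the study.

from collections import Counter
import re

def word_frequencies(text):
    # Step 1: tokenize and count individual word frequencies
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)

def concordance(text, keyword, window=5):
    # Show each occurrence of `keyword` with a few words of context on each side,
    # so that formulaic patterns can be examined qualitatively in context
    tokens = re.findall(r"[a-z']+", text.lower())
    lines = []
    for i, token in enumerate(tokens):
        if token == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left} [{keyword}] {right}")
    return lines

# Example usage on one corpus file (hypothetical file name)
with open("locness_presbyterian.txt", encoding="utf-8") as handle:
    essay_text = handle.read()
print(word_frequencies(essay_text).most_common(20))
print("\n".join(concordance(essay_text, "own")))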
Results and Discussion Analysis of word frequencies and patterns revealed several key differences between Korean EFL learners and their native English counterparts.One major difference was the use pronouns and determiners (Figure 1).Not only did American learners use a larger variety of pronouns and determiners such as we, us, our, he, she, his, her, them and their, they used this type of grammatical feature more often.Native English learners, unlike their Korean EFL peers, utilized these grammatical features for pragmatic purposes.Pronouns and determiners were used to define differing social groups, ideas, or perspectives.This may best be illustrated by using the following examples obtained through concordance analysis of LOCNESS: 1) ...time in which to accept his own misconceptions and to… 2) ...one can relate these to one's own experiences and identities… 3) ...thereby broadening one's own knowledge about oneself… 4) ...gift to the people of their own background, as well as and… 5) ...not only to their own race but just as… 6) …especially of those in our own "backyards".People like… 7) …on which we can reexamine our own feelings towards these… The possessive determiner is used in expressions such as "our own backyards" and "our own feelings" to establish a social connection and sense of commonality with the reader.For expressions such as "his own misconceptions," "their own race," and "their own background," the possessive determiner is used to describe social circles, groups, or perspectives that differ from the reader.Finally, expressions which use the determiner one's, as in "one's own knowledge" and "one's own experiences," establish a socially "neutral" opinion.While native English writers were able to utilize pronouns and determiners to explain social relationships from the reader's perspective, Korean learners tended to write from their own singular point of view.This is exemplified by their overuse of the pronouns I and me.The subject pronoun I was used eight times more often in the EFL compositions (386 to 45 times respectively); the object pronoun me was used 20 times more often (40 to 2 times respectively).Although Korean EFL learners did use other pronouns such as they to describe people within their academic texts, these grammatical features appeared to serve little pragmatic purpose.Instead, they were simply used to express general concepts such as "other people." In addition to explaining groups and individuals from the reader's perspective, native English writers used a variety of words to either support an opinion or add negative/positive connotations to information given.In one text, for example, a learner stated that ethnic American literature cannot "merely" be categorized as "rebellious responses to oppression."In this example, the native English writer used the word merely to relegate the following phrase to an inferior position.As in the case of the word merely, many other words such as conflict, conform, war-torn, torment, confront, cope, dehumanizing, denounce, embrace, repercussions, shun, sacred, subservience, thought-provoking, truly, and whole-heartedly were used to intensify importance or imbue connotation to the concepts covered.Use of these words does not only indicate increased proficiency, it suggests an extra pragmatic ability to add degrees of significance to their perspectives. 
Unlike their native English peers, Korean EFL university students did not often describe different groups, social circles, or perspectives in detail.Furthermore, they appeared to have difficulty adding positive and negative connotations to concepts explained within their texts. Figure 2. Words used more frequently by Korean EFL writers More extensive use of the words in Figure 2 suggests that Korean EFL learners had difficulty adding specificity to texts.Words such as they, people, it, some, someone, and something were used as "all-purpose" words to describe main concepts.Likewise, words such as etc., important, good, and bad were used to describe the state of general situations or examples without adding more detailed information or connotation.The preference for general concepts over specific, detailed examples when supporting an argument is further exemplified by the following academic paragraph taken from the GLC: The best way for someone to improve one's appearance is neatness.For example, there is a man.The man is goodlooking and good at speaking.But, if he is not wash hair and clothes, many people don't like him.... So improving one's appearance is very important and that best way is neatness.The other way is go to clinic center.Although the money is needed, but result is very effective ... so if someone want to improve one's appearance, effective way is go to clinic center.He or she is gaven massage and nail care and pedicure, and haircut, etc. going salon is effective way too [sic]. All-purpose lists, repetition of similar concepts, and expressions such as etc. in this paragraph allow the writer to produce vague general ideas to support their argument, rather than more detailed explanations of specific concepts which may involve the reader.Moreover, the relatively neutral terms such as one's appearance, the man, and someone do not specifically explain relationships according to the reader's perspective.As a result of this writing style, there appears to be a general disconnect between the reader and the content presented in the text. Overall, analysis of individual words suggests that native English writers possess pragmatic skills that Korean EFL learners lack.First, American students can utilize words and other expressions to describe groups and situations from a variety of perspectives, including that of the reader.Second, American learners have a larger repertoire of words that can add differing connotations of various intensity levels.Such pragmatic writing skills appear to help the reader build a schema from which they can become involved with, and make judgments about, the text. 
Larger Semantic Expressions Like analysis of individual words, studies of semantic expressions within the formulaic continuum (fixed semantic units, phrasal verbs, speech formulas, and idioms) revealed several key differences (Table 2).From looking at Table 2, it is obvious that Korean EFL learners more extensively used fixed semantic units than their native English peers.These units were comprised primarily of transitions such as "First of all" or "However."Instead of explaining situations in detail or emphasizing content, Korean EFL learners appeared to concentrate on using fixed semantic units to "prove" academic writing proficiency.Refer to the following example from the GLC: First of all, I think their personality affect color that they like or dislike.For example, if some people are introverts, they like gloomy color like black and grey.In contrast, if some people are extroverts, they like bright color like yellow and white etc.Second, people like color that go well with them.For instance, I really like purple and red because they are good color on me and I don't like brown and pink because they aren't the ones that look good on me.As a result, I think there are many factors that affect like color and dislike color [sic]. The learner appears to use transitions to demonstrate that the writing has the correct academic organization, rather than elaborating on concepts in a sequence which reveals a cohesive structure.Very little attention is given to the expansion of content (e.g., etc. is used to avoid further exploration of color preferences).As a result, texts from the EFL learners look more like a skeletal representation of academic writing, rather than a substantial exploration of content or ideas. Native English writers relied less on the use of transitions to reveal structure, opting to explore content in a sequence that revealed coherence.The emphasis of content over the use of fixed semantic units explains the larger use of both phrasal verbs and idiomatic expressions by native English writers (Table 2).One writer, for example, used expressions such as "break away from social injustice" and "faced with an examination of his own role" to elaborate upon the struggles faced by minorities.The cultural and pragmatic associations of these expressions with the concept "challenge" helped to reinforce coherence without the need to utilize extensive transitions.Native English writers also utilized culturally-loaded idiomatic expressions to connect with the reader and cultivate interest in content.In the phrase, "Our country has been coined a melting pot," coined is used as a means to introduce additional cultural information that is assumed to be shared by the reader and author.While such techniques were generally not present within Korean EFL texts, one author did state that, "In Korea, there is a saying, 'Fat people are unscratched lottery ticket.'Actually there are many peoples [sic] who improve one's figure by dieting."Although this learner does use a cultural saying, he fails to elaborate upon precisely what it means or how it relates to the text, and simply assumes that it will be understood by the reader. 
In addition to detailed expansion of content and the utilization of semantically more complex expressions, native English writers utilize formulaic language for a larger variety of pragmatic purposes (Table A1).While both native English writers and Korean EFL learners used formulaic language for presenting sequences, comparing, contrasting, providing examples, emphasizing importance, expressing an opinion, and showing cause and effect, native English writers used a variety of additional formulaic units for pragmatic purposes.American students, for example, exemplified processes through expressions such as, "in this way" or "in this manner."These learners also presented alternate perspectives from the views of other individuals, social groups, or the majority.In addition to elaboration of different societal perspectives, native English writers tried to involve and interest the reader by revealing fresh viewpoints through expressions such as, "open our eyes to," "under the surface," and "one wonders how."Finally, native English writers relied on formulaic expressions such as "as a whole" or "draws together" to incorporate themes and provide a more general perspective of discreet elements within the text.These expressions were preferred to maintain coherence over the mechanical, sequential transitions favored by Korean EFL learners. In closing, comparison of native English and Korean EFL writers' use of formulaic language reveals several noteworthy differences that may be utilized to improve instruction and evaluation.On the whole, Korean EFL learners' use of formulaic language reveals academic writing that is only mechanically proficient.Formulaic transitions are used to explicitly denote sequences and maintain unity of content.The underlying simplicity of this content suggests that these learners do not have a good understanding of how to develop discourse or pragmatic concepts within English writing.They may also lack knowledge of the formulaic expressions needed to address such issues with content.Because traditions of writing in Asian countries like Korea, China and Japan value artistic expression through spiral logic and circumlocution (Bennett, 2007), students may be unaware of exactly how to elaborate on content in ways that maintain unity.Students may also have been hindered from using concepts related to pragmatics and discourse due to a historical overemphasis of grammatical forms. Differences in discourse and cultural writing styles, along with an overemphasis of grammar at the expense of semantic and pragmatic concepts, appear to have negatively impacted EFL learners within South Korea.These issues explain the dearth of idioms and less semantically transparent expressions within compositions.They also explain the general lack of formulaic language for the presentation of different perspectives or elaboration of ideas.Due to clear subject matter deficiencies within Korean EFL writing, it is essential that curricula be reformed to promote pragmatic and semantic development of content. Conclusion Analysis of writing suggests that Korean EFL learners use formulaic language to delineate basic structure of text and provide simplistic examples.These learners, however, appear to lack the means to strengthen academic writing through elaboration of content.Results suggest that Korean EFL learners who have been educated through the Grammar-Translation Approach have difficulty integrating cultural and pragmatic concepts within writing. 
Unlike their EFL counterparts, native English university learners have the ability to interest and involve the reader by presenting idiomatic expressions that exemplify a point, presenting an alternate perspective from the view of various social groups or individuals, expressing a degree of certainty, describing a process, revealing new information to the reader, involving the reader, and putting examples into a larger perspective. Because these skills can produce superior content, it is essential that they be cultivated in EFL learners as they progress to more advanced levels.

While a basic understanding of structure is necessary, it may be rendered useless if a writer is not able to achieve an intended pragmatic purpose. As the importance of English as a lingua franca grows, so does the need to solve such practical, real-world problems. Since EFL learners may not have encountered the pragmatic and cultural concepts needed to write effectively, and may have learned other, circular means of discourse in their L1, educators need to utilize writing curricula that move beyond a simplistic focus on grammar. The sample syllabus in Table 3 shows how curricula can be realistically reformed in EFL contexts to enhance pragmatic and cultural competence. Idioms may be used to establish a common cultural background, interest the reader, or introduce the purpose of a text. Formulaic expressions may then be used to further develop content according to the idiom and the text's target objective. Developing content in this way will help EFL students accomplish various pragmatic tasks. It will also help these learners understand the purpose of their writing and the importance of considering content from the perspective of the reader.

While the information presented within this paper is useful as a means to reform curricula in Asian EFL contexts such as South Korea, more study is needed. The paper provides only a limited understanding of the linguistic features which can improve the quality of English compositions. Additional corpus studies must be conducted to cultivate a more holistic perspective of the formulaic, pragmatic, and cultural characteristics of written discourse. Using such a perspective, writing curricula and pedagogy can be significantly enhanced. Because formulaic language encodes grammatical, semantic, cultural, and pragmatic information, it also represents an ideal means to improve writing evaluation. In the future, formulaic language may be used to transform education in Asian EFL contexts such as South Korea, which have thus far relied heavily upon methods that emphasize grammar.

Appendix A
Figure 1. Words used more frequently by native English writers
Table 2. Different types of formulaic expressions found in text
Table 3. Sample syllabus for advanced EFL writing
Table A1. Formulaic expressions and their function
2016-01-11T18:29:14.669Z
2014-12-17T00:00:00.000
{ "year": 2014, "sha1": "74322806cb12a4b49841a00b76955fffd6623630", "oa_license": "CCBY", "oa_url": "https://www.ccsenet.org/journal/index.php/elt/article/download/43437/23664", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "74322806cb12a4b49841a00b76955fffd6623630", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [ "Psychology" ] }
218772528
pes2o/s2orc
v3-fos-license
Diagnosing hereditary cancer predisposition in men with prostate cancer Purpose We describe the pathogenic variant spectrum and identify predictors of positive results among men referred for clinical genetic testing for prostate cancer. Methods One thousand eight hundred twelve men with prostate cancer underwent clinical multigene panel testing between April 2012 and September 2017. Stepwise logistic regression determined the most reliable predictors of positive results among clinical variables reported on test requisition forms. Results A yield of 9.4–12.1% was observed among men with no prior genetic testing. In this group, the positive rate of BRCA1 and BRCA2 was 4.6%; the positive rate for the mismatch repair genes was 2.8%. Increasing Gleason score (odds ratio [OR] 1.19; 95% confidence interval [CI] 0.97–1.45); personal history of breast or pancreatic cancer (OR 3.62; 95% CI 1.37–9.46); family history of breast, ovarian, or pancreatic cancer (OR 2.32 95% CI 1.48–3.65); and family history of Lynch syndrome–associated cancers (OR 1.97; 95% CI 1.23–3.15) were predictors of positive results. Conclusion These results support multigene panel testing as the primary genetic testing approach for hereditary prostate cancer and are supportive of recommendations for consideration of germline testing in men with prostate cancer. Expanding the criteria for genetic testing should be considered as many pathogenic variants are actionable for treatment of advanced prostate cancer. INTRODUCTION Germline pathogenic variants (PVs) in cancer predisposition genes are reported in 7.3% to 11.8% of aggressive prostate cancer (PC) cases, including genes associated with homologous repair deficiency (HRD) (e.g., BRCA1, BRCA2, ATM, BRIP1, CHEK2, NBN, BARD1, RAD51C, MRE11A, and PALB2), and mismatch repair (MMR) deficiency (e.g., MLH1, MSH2, MSH6, and PMS2). 1,2 Most of these genes have clear management guidelines for early cancer detection and risk reduction, which may benefit the patient and family members. Their relationship to PC screening and management has garnered recent interest. 3 Previous reports suggest that PVs in BRCA1/2 confer increased risk for PC associated with poor survival and younger age at diagnosis; HOXB13 PVs also are associated with a young age at diagnosis. 4,5 Men at increased risk for aggressive or earlier-onset disease may choose more aggressive screening or earlier intervention. 6 Men with HRD or MMR-deficient metastatic prostate tumors may also benefit from targeted therapeutics, such as pembrolizumab, platinum therapies, or PARP inhibitors. [7][8][9] Despite improved understanding of the prevalence of PVs among men with PC, it remains unclear which men will most benefit from genetic testing. Historically, the Hopkins criteria, i.e., ≥3 affected first-degree relatives, affected relatives in three successive generations, or ≥2 relatives affected at 55 years or younger, provided a working definition of hereditary PC (HPC), but there is little evidence that HPC is associated with DNA repair gene variants. Recent data suggest that aggressive disease or family history of other cancers may be a better predictor for germline PVs in men with PC. 1,2 In 2017, expert consensus guidelines recommended genetic testing for men with HPC; men with ≥2 close unilinear relatives with a cancer associated with hereditary breast and ovarian cancer (HBOC) or Lynch syndrome (LS), men with metastatic castrationresistant PC, and men with somatic (PVs) identified via tumor testing. 
10 However, these testing guidelines are based on limited evidence. Thus, there is an urgent need to determine a more robust way of identifying men with PVs so they may benefit from early screening/intervention and therapeutic options. This study evaluates predictors of germline PV status in a large cohort of men with a personal history of PC who underwent clinical genetic testing to help inform clinical testing guidelines. Study population Study participants included men with PC who underwent hereditary cancer multigene panel testing (MGPT) between April 2012 and September 2017 at a clinical diagnostic laboratory (Ambry Genetics) (n = 1878). Men with a BRCA1/ 2 PV reported in their family prior to testing were excluded (n = 66), leaving a total of 1812 individuals in the analyzed cohort. Patients who had prior genetic screening or testing (N = 150), including BRCA1/2 or LS germline testing, immunohistochemical screening for MMR deficiency in tumors, and other somatic testing, were analyzed separately to minimize bias in PV detection rates; 1662 men had no prior testing. Demographic and clinical data including age at testing, ethnicity, age of diagnosis, Gleason score, metastatic status, and personal and family history of cancer were collected through retrospective review of test requisition forms and other clinical documentation (e.g., pedigrees and consult notes) provided to the laboratory. Laboratory methods Depending on the type of clinical tests ordered, men underwent analysis of up to 67 cancer susceptibility genes. The frequencies of each gene tested are described in Table S1. Sanger or next-generation sequencing analysis was performed for all coding domains and well into the flanking 5' and 3' ends of all introns and untranslated regions, along with gross deletion/duplication analysis of covered exons and untranslated regions. Exceptions included GREM1, EPCAM, and MITF, for which analysis was limited to alterations known to be associated with disease (Table S1). Statistical analysis Descriptive statistics for the PC cohort are summarized as median (interquartile range [IQR]) for continuous and percentages for categorical characteristics. Logistic regression estimated odds ratios (OR) (95% confidence interval [CI]) for univariate associations between PV carrier status and personal and family history characteristics, by gene and for the combined set of PC predisposition genes (ATM, BRCA1, BRCA2, CHEK2, EPCAM, HOXB13, MLH1, MSH2, MSH6, NBN, PALB2, PMS2, RAD51D, and TP53). Stepwise logistic regression determined the most informative set of predictors from the following: continuous age at PC diagnosis; Ashkenazi Jewish ethnicity; personal history of non-PC; personal history of breast or pancreatic cancer; family history (presence of first-or second-degree relative) of cancer; >1 first or second-degree relative with PC; >1 first-or second-degree relative with breast, ovarian, or pancreatic cancer; >1 first-or second-degree relative with LS-related cancer (colorectal, endometrial, gastric, ovarian, pancreatic, small bowel, urothelial, kidney, or bile duct cancer); and Gleason score. The stepwise selection procedure aimed to minimize Akaike's information criterion (AIC), allowing a maximum of 1000 backward and forward steps. All analyses were conducted in R V3.3.3. Demographics The study cohort was primarily Caucasian (70%; Table 1). The median age at testing was 66 (IQR 59, 73) years and the median age of PC diagnosis was 60 (IQR 54, 66) years. 
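The stepwise selection described in the statistical analysis section above was run in R; purely as an illustration, a forward-only variant that minimizes AIC for a logistic model of carrier status might be sketched in Python as follows. The data frame and column names (cohort_df, pv_positive, gleason, and so on) are hypothetical placeholders, and the real procedure also allowed backward steps.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_aic(df, outcome, features):
    # AIC of a logistic model with the given features (intercept-only if none)
    if features:
        X = sm.add_constant(df[features])
    else:
        X = pd.DataFrame({"const": np.ones(len(df))}, index=df.index)
    return sm.Logit(df[outcome], X).fit(disp=0).aic

def forward_stepwise_aic(df, outcome, candidates):
    # Greedily add the candidate predictor that lowers AIC the most; stop when none does
    selected = []
    best_aic = fit_aic(df, outcome, selected)
    improved = True
    while improved:
        improved = False
        for var in [c for c in candidates if c not in selected]:
            aic = fit_aic(df, outcome, selected + [var])
            if aic < best_aic:
                best_aic, best_var, improved = aic, var, True
        if improved:
            selected.append(best_var)
    return selected, best_aic

# Hypothetical usage with predictors similar in spirit to those listed above
# predictors = ["gleason", "dx_age", "aj_ancestry", "fh_hboc_cancers", "fh_lynch_cancers"]
# chosen, aic = forward_stepwise_aic(cohort_df, "pv_positive", predictors)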
Forty-two percent had a personal history of other cancers, including colorectal (11.1%), breast (5.8%), and pancreatic (6.1%). Most individuals (92.4%) had a family history of cancer in at least one close (first-, second-, or third-degree) relative, with 52.0% having a family history of breast cancer and 50.6% of PC. Thirty-six percent of men had more than one relative with breast, ovarian, or pancreatic cancer and 31.6% had more than one

Germline genetic test results
Among men with no prior genetic testing, the yield of a 14-gene hereditary prostate cancer panel (ProstateNext) was 9.4%, with 26/277 testing positive for a PV. The yield of all other MGPTs combined was 12.1%, with 168/1385 testing positive (Table S2). Among candidate prostate cancer genes, PV frequencies were highest for BRCA2 (3.8%), ATM (2.7%), and CHEK2 (2.5%). PVs were also seen in MSH2, HOXB13, BRCA1, MSH6, PMS2, PALB2, TP53, NBN, MLH1, and EPCAM. No PVs were detected in RAD51D (Fig. 1, Table S3). Among all men with no prior genetic testing, the pooled frequency of PVs in therapeutically actionable genes (BRCA1/2 and MMR genes) was 7.4%. Nine men who underwent MGPT (0.6%) had PVs in genes not currently associated with PC; all had clinical history consistent with the alteration identified (Table S4). Fourteen men in the entire cohort (0.7%) were found to have more than one PV (Table S5). Among men with prior genetic testing, results for 34/40 were consistent with previous findings, such as confirmation of somatic PVs in the germline or PVs that were concordant with tumor immunohistochemistry (IHC) results; 6/40 were found to have results discordant with tumor testing or additional PVs identified on expanded testing that were not detected with initial limited testing (Figure S1).

Univariate analysis
The type of panel (ProstateNext vs. all other panels) used for testing was not associated with positive or negative results (p = 0.10). There was no significant difference in median (IQR) age at PC diagnosis for men testing positive versus negative (59 [11] vs. 60 [12] years; p = 0.32).

Multivariate analysis
To assess predictors of positive results, a subset of the cohort was analyzed, comprising men with an available Gleason score who did not have prior genetic testing and whose testing included all 14 of these genes (n = 524; Table 3). The multivariable adjusted OR for Gleason score represents the comparison per 1-unit increase in Gleason score (i.e., men with a Gleason score of 7 were 19% more likely to test positive than men with a Gleason score of 6). Similarly, men with a personal history of breast or pancreatic cancer were more than three times as likely to test positive as men without these cancers; men with more than one family member with breast, ovarian, or pancreatic cancer were 2.3 times as likely to test positive compared with men without this family history; and men with more than one family member with an LS-related cancer were nearly twice as likely to test positive compared with men who did not have this family history. Ninety-five percent of men with PVs reported at least one of these informative predictors of a positive result.

The variables presented here are adjusted for all other variables in the table.
b Note: "panel tested", "meet HBOC criteria", "meet Lynch criteria", "meet Hopkins FPC criteria", and "metastatic" are not included in any models; all other univariate predictors in Table 2 are included as potential predictors in the selection procedure: Ashkenazi ethnicity (yes/no); age at prostate cancer (PC) diagnosis; personal history of other cancer; personal history of breast or pancreatic cancer; family history of cancer; family history of cancer in a first-degree relative (FDR) only; family history of cancer in an FDR/second-degree relative (SDR); family history of PC; family history of PC in an FDR only; >1 relative with breast, ovarian, or pancreatic cancer; >1 relative with a Lynch-related cancer (includes colorectal, endometrial, gastric, ovarian, pancreatic, small bowel, urothelial, kidney, or bile duct cancer); Gleason score; Gleason score (level 1); Gleason score (level 2); Gleason score (level 3).

DISCUSSION
The findings from this clinical laboratory genetic testing cohort demonstrate a 9.4-12.1% yield of pathogenic variants among men with prostate cancer and no prior genetic testing. monoallelic MUTYH PVs were included, and the authors included the CFTR gene in their data set, which has a variant that is present in 1/24 European Caucasians [15]. When restricting to genes relevant to PC screening and advanced disease (e.g., HOXB13 and the HRD and MMR genes), findings from this study are similar to those of Nicolosi et al. Among PC patients referred for genetic testing, BRCA2, ATM, CHEK2, and HOXB13 are the most commonly mutated genes. Here, 26/1501 men (1.7%) with no prior genetic testing whose testing included the LS genes (MLH1, MSH2, MSH6, PMS2, or EPCAM) were found to have a PV in one of these MMR genes, and 66/1662 men (4.0%) with no prior testing were found to have BRCA1 or BRCA2 PVs. Men with advanced PC who are found to have PVs in the HRD or MMR genes may benefit from targeted therapeutic agents, such as pembrolizumab, platinum therapies, or PARP inhibitors [7-9]. Positive men with PC may benefit from identification of increased risk for additional cancers, identification of risks to family members, and understanding of the cause of their cancer diagnosis.

Stepwise regression analysis was used to identify factors predictive of a PV among PC patients. Factors associated with an increased likelihood of a positive result include increasing Gleason score; personal history of breast or pancreatic cancer; family history of breast, ovarian, or pancreatic cancer; and family history of LS-associated cancers. These findings are consistent with previous studies and current genetic testing recommendations [1, 10]. Recent expert consensus guidelines reflect that germline testing should be considered in men with aggressive disease, somatic PVs, or a family history suggestive of HBOC or LS [10]. Although a recent publication found no correlation between Gleason score or family history of HBOC- or LS-related cancers and positive result status, our present methodology, employing multivariate analysis of only those individuals tested for all 14 genes included on a prostate gene-specific panel, eliminates confounders that were not previously accounted for [15]. Recently, Nicolosi et al. proposed universal testing among all men with PC, citing an inability to find reliable criteria to predict which men will most benefit from genetic testing [15]. Although the approach of universal testing proposed by Nicolosi et al.
is appealing with respect to increased identification of at-risk patients and simplicity in application, additional studies are needed to assess the diagnostic yield and clinical utility of testing men with clinically localized low-risk disease and no other personal/family history suggestive of inherited cancer predisposition relative to targeted testing strategies. Despite the potential therapeutic benefit of identifying a PV, we found that on average, there was a six-year delay between time of PC diagnosis and genetic testing. Arguably, targeted therapies for HRD and MMR-deficient tumors have been recently developed and may not have been available for most men at the time of their diagnosis. Going forward, the timing of genetic testing may become more integrated with treatment planning for PC. While the present study focuses on men with PC, ideally genetic testing will identify at-risk men before they are diagnosed so that genetic information can be used for surveillance and clinical decision making. Further analysis of unaffected men is warranted. There are several limitations that deserve mention, including the fact that this cohort represents men clinically selected for genetic testing; 40.0% of this cohort had multiple primary cancers, indicating high threshold for genetic testing. Further, a clinical laboratory cohort is likely to give an overestimation burden of positive results; therefore, additional studies are needed to determine whether these predictors remain informative in an unselected group of men with PC. Men were tested with one of several multigene panel tests, which could potentially influence the PV frequencies we report, as shown in Table 4. However, our primary analyses limited to individuals tested for the same set of 14 PC susceptibility genes yielded no significant difference in the rate of PVs detected by panel type. This study was also limited by the data provided on the test requisition forms at the time of testing. While results from a recent study of predominately women with a history of breast cancer demonstrate that clinical history reported on test requisition forms is of comparable quality to clinic notes for most probands and their close relatives, it is unknown if the requisition data are similarly valid for prostate cancer and Gleason score reporting. 16 Additional analysis of the association of metastatic status with positive genetic result status may provide further insight, as metastatic status was available for only 13.1% of the cohort, and we did not find a significant association with positive result status, which differs from previous reports. 1,2 While the present findings regarding Gleason score were consistent with previous reports, Gleason scores were available for only 51.1% of the cohort. The present results support multigene panel testing as the primary genetic testing approach for hereditary PC and confirm Gleason score, personal history of breast or pancreatic cancer, and family history of cancers related to HBOC and LS as informative predictors for positive genetic test results. These results provide generalizability of previous findings and support current recommendations for consideration of germline testing in men with PC. ETHICAL APPROVAL All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. 
A waiver was provided for this research project by the Western Institutional Review Board.
2020-05-22T14:58:16.995Z
2020-05-22T00:00:00.000
{ "year": 2020, "sha1": "fa87ce63f4674208ff224c71bda14f3c4bffd7d5", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41436-020-0830-5.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "700561e492ea7ffde1dbc0c4f801e11be56577b2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
238252000
pes2o/s2orc
v3-fos-license
Viable medical waste chain network design by considering risk and robustness Medical waste management (MWM) is an important and necessary problem in the COVID-19 situation for treatment staff. When the number of infectious patients grows, the amount of medical waste increases day by day. We present a medical waste chain network design (MWCND) that contains health centers (HC), waste segregation (WS), waste purchase contractors (WPC), and landfills. We propose to locate WS facilities to reduce waste, recover it, and send the recovered fraction to the WPC. Recovering medical waste such as metal and plastic can help the environment and return materials to the production cycle. Therefore, we propose a novel viable MWCND formulated as a novel two-stage robust stochastic program that considers resiliency (flexibility and network complexity) and sustainability (energy and environment) requirements. We account for risk through conditional value at risk (CVaR) and improve robustness and agility with respect to demand fluctuation and network structure. We implement the model in GAMS and solve it with the CPLEX solver. The results show that increasing the conservative coefficient, the CVaR confidence level, and the waste recovery coefficient increases the cost function and the population risk. Moreover, increasing the demand and the scale of the problem also increases the cost function. Introduction Medical waste management (MWM) is a critical problem in the COVID-19 situation. In the COVID-19 condition, the number of infectious patients grows and the amount of medical waste increases. As a result, we must pay more attention to medical waste and improve waste disposal. This issue poses a serious threat to the workers who carry out waste disposal. Medical waste includes infectious waste, hazardous waste, radioactive waste, and general waste (municipal solid waste). The WHO classifies medical waste into sharps, infectious, pathological, radioactive, pharmaceutical, and other waste (including toilet waste produced at hospitals). About 85% of medical waste is general waste and 15% is infectious, hazardous, or radioactive waste (Tsai 2021). The importance of medical waste has therefore led many researchers to contribute to this subject and to present mathematical approaches and decision support systems. Some researchers consider a location-routing problem for medical waste management (Suksee and Sindhuchao 2021). Others investigate reverse logistics with mathematical models (Sepúlveda et al. 2017; Suksee and Sindhuchao 2021). Also, some scientists analyze MWM systems with multi-criteria decision approaches (Aung et al. 2019; Narayanamoorthy et al. 2020). The objective of these tools is to improve waste management performance and decrease risks for workers, as illustrated in Figure 1. One of the new discussions in the present age is the viability of network design in post-pandemic adaptation. The viability of networks, as proposed by Ivanov and Dolgui (2020), integrates agility, resilience, and sustainability in the network. Therefore, a systematic mathematical model is needed for setting up a viable medical waste chain network design (VMWCND), because improving the performance of urban waste management is necessary and helps to prevent the COVID-19 outbreak. Eventually, we should design a new mathematical model that considers agility, resilience, sustainability, risks, and robustness to cope with environmental requirements and disruption.
Eventually, the innovations and main objectives of this research are as follows: & First time designing a viable medical waste chain network design (VMWCND) & Considering robustness and risk in the VMWCND The paper is organized as follows. In the "Survey on recent MWCND" section, we survey related work in the scope of MWCND. In the "Problem description" section, the VMWCND and the risk-averse VMWCND are stated. In the "Results and discussion" section, the results of the research and the sensitivity analysis are presented. In the "Managerial insights and practical implications" section, the managerial insights and practical implications are discussed. In the "Conclusions and outlook" section, the conclusion is summarized. Survey on recent MWCND The amount of waste has increased because of the COVID-19 situation. Therefore, researchers have worked to manage and improve waste handling and to decrease losses from medical centers. We survey recent investigations on MWCND as follows. Mantzaras and Voudrias (2017) considered an optimization model for medical waste in Greece. They tried to minimize total cost, including location and transfer costs between locations. A genetic algorithm (GA) is applied to solve the model. Budak and Ustundag (2017) designed a reverse logistics network for multi-period, multi-type waste products. The model's objective was to minimize total cost, and its decisions included location, flow, and inventory. The case was in Turkey. They found that as waste amounts increase, the number of facilities and the strategies change. Wang et al. (2019) designed a two-stage reverse logistics network for urban healthcare waste with multiple objectives and multiple periods. In stage 1, they predicted the amount of medical waste, and in the second stage, they minimized total cost and environmental impact. Kargar et al. (2020a) presented a reverse supply chain for medical waste. They used mixed-integer programming (MIP) to model the problem. The objectives, which are minimized, include total costs, technology selection, and the total medical waste stored. A robust possibilistic programming (RPP) approach is applied to cope with uncertainty, and a fuzzy goal programming (FGP) method is embedded to solve the objectives. The real case study is in Babol, Iran. In other work, Kargar et al. (2020b) studied a reverse logistics network design for MWM in the COVID-19 situation. They minimized the total costs and the transportation and treatment risks of medical waste, and maximized the amount of uncollected waste. They employed the revised multi-choice goal programming (RMGP) method. Homayouni and Pishvaee (2020) surveyed a hazardous hospital waste collection and disposal network design problem with a bi-objective robust optimization (RO) model. The objectives include total costs and total operational and transportation risk. An augmented ε-constraint (AUGEPS) method is embedded to solve the problem. The real case study is in Tehran, Iran. Yu et al. (2020b) considered a reverse logistics network design for MWM in epidemic outbreaks in Wuhan (China). The objectives included the risk at health centers, the risk related to the transportation of medical waste, and the total cost. They solved the model with a fuzzy programming (FP) approach for the multiple objectives. They determine temporary transit centers and temporary treatment centers in their model. In addition, Yu et al. (2020a) studied a stochastic network design problem for hazardous WM. They minimized the cost and transportation cost of hazardous waste and the population exposure risk.
They applied stochastic programming with sample average approximation (SAA) for scenario reduction and solved the model by goal programming (GP). Saeidi-Mobarakeh et al. (2020a) presented bi-level programming (BP) for a hazardous WM problem. They used an environmental approach at the upper level and routing and cost at the lower level, and solved the resulting mixed-integer nonlinear programming (MINLP) model with a GA. In addition, Saeidi-Mobarakeh et al. (2020b) developed a robust bi-level optimization model for hazardous WCND. They suggested a robust optimization approach to cope with the uncertainty. The decisions of the model include location, capacity determination, and routing. Eventually, a commercial solver is utilized to solve the model. A further study surveyed a sustainable fuzzy multi-trip location-routing problem for MWM during the COVID-19 outbreak. It embedded a fuzzy chance-constrained programming (FCCP) technique to tackle the uncertainty and implemented a weighted GP (WGP) method to analyze and solve the problem. A case study in Sari, Iran shows the performance of the proposed model. Tirkolaee and Aydın (2021) suggested a sustainable MWM model for collection and transportation during pandemics. They minimized the total cost and the total risk exposure imposed by the collection. Eventually, a commercial solver is utilized to solve the model with meta-goal programming (MGP) for the multiple objectives. Shadkam (2021) designed a reverse logistics network for COVID-19 and vaccine waste management, utilized the cuckoo optimization algorithm (COA), and tried to minimize total cost. Nikzamir et al. (2021) suggested a location-routing network design for MWM that tried to minimize the total cost and the risks of population contact with infectious waste. They offered a mixed-integer linear programming (MILP) model and solved it with a hybrid meta-heuristic algorithm based on the imperialist competitive algorithm (ICA) and GA. Li et al. (2021) surveyed a vehicle routing problem (VRP) for MWM by considering transportation risk. They suggested a MILP model for the time-window VRP and developed a particle swarm optimization (PSO) algorithm to solve large-scale problems. The classification of the literature is addressed in Table 1. It can be seen that researchers have not surveyed the VMWCND problem. This study investigates the VMWCND problem and uses a mathematical model to locate the best sites in the MWCND. The main innovations of this research are as follows: & Designing a VMWCND for the first time & Considering agility, resilience, sustainability, robustness, and risk-aversion in MWCND Problem description In this research, we try to design a VMWCND. The previous section shows a lack of research on resilient, sustainable, and agile MWCND. In the present study, we have health centers (HC), waste segregation (WS), waste purchase contractors (WPC), and landfills, through which the wastes move in this network. Eventually, we present the VMWCND through a resilience strategy (flexible and scenario-based capacity and node complexity), sustainability constraints (energy and environmental pollution), and agility (flow balance and demand satisfaction). We need to locate WS facilities to improve and recover waste while considering sustainability and environmental requirements in this situation (Fig. 2). Assumptions: & All wastes of the HCs should be transferred (agility). & All forward MWCND constraints, including the flow and capacity constraints, are active.
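As a complement to the formulation stated below, a minimal illustrative sketch of the HC-to-WS-to-WPC/landfill flow-and-location structure is given here in Python with the PuLP library. All sets, costs, capacities, and the recovery ratio are hypothetical placeholders, and the sketch is deterministic and single-period, whereas the actual model of this paper is a multi-period, scenario-based two-stage robust stochastic program with CVaR.

```python
# A minimal deterministic sketch of the HC -> WS -> {WPC, landfill} structure
# described above, using PuLP (pip install pulp). All numbers are hypothetical.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

HC = ["h1", "h2"]                      # health centers generating waste
WS = ["s1", "s2"]                      # candidate waste-segregation sites
demand = {"h1": 40, "h2": 60}          # waste generated per HC (tonnes)
cap = {"s1": 80, "s2": 80}             # WS capacity if opened
fix = {"s1": 500, "s2": 450}           # fixed cost of opening a WS
tc = {("h1", "s1"): 2, ("h1", "s2"): 3,
      ("h2", "s1"): 4, ("h2", "s2"): 2}  # transport cost HC -> WS
recovery = 0.3                          # fraction of segregated waste sent to the WPC

m = LpProblem("mini_MWCND", LpMinimize)
x = {j: LpVariable(f"open_{j}", cat=LpBinary) for j in WS}
w = {(i, j): LpVariable(f"w_{i}_{j}", lowBound=0) for i in HC for j in WS}
to_wpc = {j: LpVariable(f"wpc_{j}", lowBound=0) for j in WS}
to_lf = {j: LpVariable(f"lf_{j}", lowBound=0) for j in WS}

# Objective: fixed opening costs + transport costs + downstream handling,
# with landfilling assumed costlier than sending recovered waste to the WPC.
m += (lpSum(fix[j] * x[j] for j in WS)
      + lpSum(tc[i, j] * w[i, j] for i in HC for j in WS)
      + lpSum(1.0 * to_wpc[j] + 5.0 * to_lf[j] for j in WS))

for i in HC:                                   # all HC waste must be collected
    m += lpSum(w[i, j] for j in WS) == demand[i]
for j in WS:
    inflow = lpSum(w[i, j] for i in HC)
    m += inflow <= cap[j] * x[j]               # capacity only if the WS is opened
    m += to_wpc[j] == recovery * inflow        # recovered fraction to the contractor
    m += to_lf[j] == (1 - recovery) * inflow   # remainder to the landfill

m.solve()
print("cost:", value(m.objective), {j: int(value(x[j])) for j in WS})
```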
Very small positive number; α: the confidence level for conditional value at risk; π: waste recovery coefficient; TT: threshold of node complexity for resiliency; φ: the ratio of HC to WS. VMWCND mathematical model subject to: Agility constraints (flow constraints): Resiliency constraints (flexible and scenario-based capacity and node complexity): $\sum_k wjk_{jkts} + \sum_c wjc_{jcts} \le \rho_j\, Cap_{jts}\, x_j, \;\; \forall j, t, s$ (9) Sustainability constraints (allowed emission and energy consumption): $\sum_i \sum_j Emij_{ijts}\, wij_{ijts} + \sum_j \sum_k Emjk_{jkts}\, wjk_{jkts} + \sum_j \sum_c Emjc_{jcts}\, wjc_{jcts} \le EMSC_{ts}, \;\; \forall t, s$ (12) $\sum_i \sum_j Enij_{ijts}\, wij_{ijts} + \sum_j \sum_k Enjk_{jkts}\, wjk_{jkts} + \sum_j \sum_c Enjc_{jcts}\, wjc_{jcts} \le ENSC_{ts}, \;\; \forall t, s$ (13) Decision variables: $x_j \in \{0, 1\}, \;\; \forall j$ (16) $wij_{ijts}, wjk_{jkts}, wjc_{jcts} \ge 0, \;\; \forall i, j, c, k, t, s$ (17) Objective (1) minimizes the weighted combination of the expected value, the minimax (worst case), and the conditional value at risk of the cost function over all scenarios. This form of the cost function is proposed for robustness and risk-aversion against disruption under the worst condition. Constraint (2) includes the fixed and variable costs. Constraint (3) shows the fixed costs, which include the fixed cost of activating a WS for all periods. Constraint (4) indicates the variable costs of HC, WS, WPC, and landfill. Constraint (5) shows the waste transshipment from HC to WS. Constraints (6)-(7) are the flow constraints of the forward VMWCND. Constraint (8) determines the ratio of waste that goes to the landfill. Constraint (9) is the flexible capacity constraint for WS: the inflow must be less than the capacity of the WS system. Constraint (10) is a resilience constraint: the number of WS must be greater than a coefficient of the number of HC. Constraint (11) is a resilience constraint on node complexity in WS: the sum of the input and output of every WS must be less than the threshold. Constraint (12) guarantees that the network's total environmental emissions are less than the allowed emission. Constraint (13) guarantees that the network's total energy consumption is less than the allowed energy consumption. Constraint (14) captures the risks related to the transportation of medical waste, and Constraint (15) shows the summation of the risks of medical waste transport in contact with the population. Constraint (16) defines the facility location variables for WS as binary, and Constraint (17) defines the flow variables between facilities as nonnegative. Linearization of max, sign, and CVaR (preliminary) The objective function (1) is nonlinear, which makes the model a mixed-integer nonlinear program (MINLP). We transform it into a mixed-integer program (MIP) by standard mathematical methods to improve the solution time and solve it smoothly (Gondal and Sahir 2013; Sherali and Adams 2013). Linearizing the max and sign functions: Linearizing CVaR: We use conditional value at risk (CVaR), which is a coherent risk measure. Rockafellar and Uryasev designed the CVaR criterion as a novel embedded risk measure (Soleimani and Govindan 2014). CVaR (also known as the expected shortfall) is a measure for assessing risk and is embedded in portfolio optimization for better risk management (Goli et al. 2019; Kara et al. 2019). This measure is the average of the losses that lie beyond the VaR point at confidence level α. CVaR has higher consistency, coherence, and conservatism than other risk-related criteria.
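For reference, a standard scenario-based linear reformulation of CVaR in the Rockafellar-Uryasev style is sketched below. The symbols are generic placeholders (η for the VaR level, z_s for the scenario shortfall, p_s for the scenario probability, C_s for the scenario cost) and may differ from the notation of the full formulation in this paper.

```latex
% Scenario-based CVaR (generic notation) ...
\[
\mathrm{CVaR}_{\alpha}(C) \;=\; \min_{\eta}\;\Bigl\{\, \eta \;+\; \frac{1}{1-\alpha}\sum_{s} p_s \,\bigl[\,C_s-\eta\,\bigr]_{+} \Bigr\}
\]
% ... and its linearization: auxiliary variables z_s >= 0 remove the [.]_+ operator.
\[
\min\; \eta + \frac{1}{1-\alpha}\sum_{s} p_s\, z_s
\qquad \text{s.t.}\qquad z_s \ \ge\ C_s-\eta,\quad z_s \ \ge\ 0 \quad \forall s .
\]
```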
Free variables $= 6 + 2|s|$ (36) We suggested scenario reduction and new algorithms to remove constraints and binary variables. This can help solve the model in minimum time. Results and discussion We surveyed hospitals in Tehran, Iran, and estimated the parameters from MWCND data provided by managers of health centers. The performance of the mathematical model is presented. The number of indices is defined in Table 2 and the values of the parameters are given in Table 3. The scenarios (optimistic, pessimistic, and most likely) have the same probability of occurrence. We used a computer with the following configuration: CPU 3.2 GHz, processor Core i3-3210, 6.00 GB RAM, 64-bit operating system. Finally, we solve the mathematical models with the GAMS CPLEX solver. We show the potential locations for assigning HC, WS, WPC, and landfill in Tehran, Iran (cf. Figure 3). Solving the model suggests which WS to activate and determines the locations and flows of the VMWCND components (Table 4). The objective function value is 1,520,407 in Table 2 and the final location-allocation is drawn in Figure 4. Finally, we calculate the population risk (left-hand side of Constraint (15)), which is 54,026.33 persons. Eventually, we compare the VMWCND with and without risk and the worst case in Table 5. We can see that by embedding risk and the worst case, the cost function is almost 1.65% greater than without them. Variation on the conservative coefficient The conservative coefficient (λ) expresses how conservative the decision-maker is. We vary it between 0 and 1 to change the conservatism of the decision-maker. If the conservative coefficient increases to 1, the cost function grows, as shown in Table 6, Figure 5, and Figure 6. If the conservative coefficient increases by 50%, the cost function will increase by 1.65%, but the solution time and population risk do not change significantly. Variation on confidence level of CVaR The confidence level of CVaR (α) expresses how risk-averse the decision-maker is. If the confidence level grows, the cost function increases (cf. Table 7 and Figure 7). Increasing the confidence level by 2% increases the cost function by 0.03%. Variation on waste recovery coefficient The waste recovery coefficient (π) is the ratio of waste that goes to landfills. If the waste recovery coefficient grows, the cost function and population risk decrease (cf. Figure 8, Figure 9, and Table 8). As the waste recovery coefficient increases, transportation to the WPC increases and then the cost function increases, but this helps the system to use and recover waste. Variation on demand We test the effects of changing demand. By increasing the demand, the cost function increases too (cf. Table 9). As can be seen, when the demand increases by 40%, the cost function grows by 12%, and when the demand decreases by 50%, it drops by 16% (cf. Figure 10 and Figure 11). Variation on scale of the main model Several large-scale problems are defined in Table 10. When the scale of the problems is increased, the solution time and the cost function increase, as shown in Figure 12 and Figure 13. As can be seen, the proposed model is NP-hard and its behavior is exponential at large scale. Therefore, we need to solve the model with heuristic, meta-heuristic, and new exact solution methods in minimum time at large scale. Managerial insights and practical implications We surveyed the viable medical waste chain network design (VMWCND).
We try to pay more attention to five concepts in medical waste network design. We design a VMWCND that considers agility, resilience, sustainability, risks, and robustness to cope with disruption and government requirements. As managers of the VMWCND, we should move toward applying these novel concepts to decrease cost and population risk, and to increase facility resiliency, robustness, risk-aversion, and agility of the MWCND. In this research, we have health centers (HC), waste segregation (WS), waste purchase contractors (WPC), and landfills. We propose to locate WS facilities to decrease waste, recover it, and send it to the WPC. Recovering medical waste such as metal and plastic can help the environment and return materials to the production cycle. In the COVID-19 situation and because of economic problems, we should use every means to utilize waste and move toward a circular economy and sustainable development. This issue is compatible with the sustainable development goals (SDG12: ensure sustainable consumption and production patterns) and the circular economy pillars. The main beneficiaries of the proposed work are the people and service providers of the medical waste chain. Conclusions and outlook Medical waste management (MWM) is an important and necessary problem in the COVID-19 situation for treatment staff. The number of infectious patients grows and the amount of medical waste increases day by day. We should think about this issue and find a solution for it. We suggest recovering medical waste through waste segregation. Therefore, we proposed a novel viable medical waste chain network design (VMWCND) that considers resiliency (flexibility and network complexity) and sustainability (energy and environment) requirements. Finally, we try to decrease risks and increase robustness and agility with respect to demand fluctuation and the network. We utilize a novel two-stage robust stochastic program and solve it with the GAMS CPLEX solver. The results are as follows: 1. If the conservative coefficient increases to 1, the cost function grows, as shown in Table 6, Figures 5 and 6. 2. If the conservative coefficient increases by 50%, the cost function will increase by 1.65%, but the solution time and population risk do not change significantly. 3. If the confidence level of CVaR grows, the cost function increases (cf. Figure 7 and Table 7). Increasing the confidence level by 2% increases the cost function by 0.03%. 4. If the waste recovery coefficient grows, the cost function and population risk decrease (cf. Figures 8 and 9, and Table 8). As the waste recovery coefficient increases, transportation to the WPC increases and then the cost function increases, but this helps the system to use and recover waste. 5. When demand increases by 40%, the cost function grows by 12%, and when demand decreases by 50%, it drops by 16% (cf. Figures 10 and 11; Figure 11 shows the effect of demand variation on population risk). 6. When the scale of the problems is increased, the cost function and solution time grow, as shown in Figures 12 and 13. As can be seen, the proposed model is NP-hard and behaves exponentially at large scale. Therefore, we need to solve the model with heuristic, meta-heuristic, and new exact solution methods in minimum time at large scale.
Finally, solving the main model at large scale is the main limitation of this research. We propose to apply exact algorithms like Benders decomposition, branch and price, branch and cut, and column generation, as well as heuristic and meta-heuristic algorithms, to solve the models in minimum time (Fakhrzad and Lotfi 2018; Lotfi et al. 2017; Maadanpour Safari et al. 2021). We can add other resilience and sustainability tools to the model, such as backup facilities and redundancy, to further increase its resiliency and sustainability. Also, we suggest applying a multi-objective formulation for environmental, energy, and occupational objectives (Das et al. 2021; Ghosh et al. 2021; Mondal and Roy 2021; Pourghader Chobar et al. 2021). Furthermore, we suggest adding coherent risk criteria like the entropic value at risk (EVaR) (Ahmadi-Javid 2012) for considering risks. Researchers also intend to investigate uncertainty methods like robust convex optimization (Lotfi et al. 2021a). Using new and novel uncertainty methods like data-driven robust optimization and fuzzy programming (Midya et al. 2021) has been advantageous for conservative decision-makers in the recent decade. Eventually, we suggest equipping the VMWCND with novel technologies like blockchain and neural learning (Khalilpourazari et al. 2020) for the viability of the MWCND.
2021-09-27T18:46:37.469Z
2021-08-17T00:00:00.000
{ "year": 2021, "sha1": "a4569c30492f2bb8b0af98cb079691d23cdaf6ad", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s11356-021-16727-9.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "8f08924d1f7ca6053aaa42b5f1d0932d2c1cef33", "s2fieldsofstudy": [ "Environmental Science", "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
119321279
pes2o/s2orc
v3-fos-license
Invariants for trivalent tangles and handlebody-tangles An enhanced trivalent tangle is a trivalent tangle with some of its edges labeled. We use enhanced trivalent tangles and classical knot theory to provide a recipe for constructing invariants for trivalent tangles, and in particular, for knotted trivalent graphs. Our method also yields invariants of, what we refer to as, enhanced handlebody-tangles and enhanced handlebody-links. Introduction A trivalent graph is a finite graph whose vertices have valency three, and a uni-trivalent graph is a finite graph whose vertices have valency three or one. In this paper, trivalent graphs and uni-trivalent graphs are not oriented, and may contain circle components. A knotted trivalent graph is a trivalent graph embedded in three-dimensional space, and a trivalent tangle is a uni-trivalent graph embedded in R 2 × [0, 1] such that all of its univalent vertices belong to R 2 × {0} and R 2 × {1}. We call a univalent vertex an endpoint of the trivalent tangle. We note that a trivalent tangle with no endpoints is a knotted trivalent graph. Therefore, any statement that holds for trivalent tangles holds automatically for knotted trivalent graphs. Two knotted trivalent graphs are called equivalent (or ambient isotopic) if there is an isotopy of R 3 taking one onto the other. Moreover, two trivalent tangles are called equivalent if one can be transformed into the other by an isotopy of R 2 × [0, 1] fixed on the boundary. It is well-known that two knotted trivalent graphs (or trivalent tangles) are equivalent if and only if their diagrams are related by a finite sequence of the moves R1 -R5 depicted in Figure 1 (see [6] for more details). Two trivalent tangles with the same endpoints are called neighborhood equivalent if there is an isotopy of R 2 × [0, 1] (which fixes its boundary) taking a regular neighborhood of one onto a regular neighborhood of the other. An IH-move is a local change of a trivalent tangle, as shown in Figure 2. Two trivalent tangles with the same endpoints are called IH-equivalent if they are related by a finite sequence of IH-moves and isotopies of R 2 × [0, 1] fixed on the boundary. Ishii [2] showed that two trivalent tangles with the same endpoints (and respectively, two knotted trivalent graphs) are neighborhood equivalent if and only if they are IH-equivalent. Figure 2. IH-move. It follows that two trivalent tangles (respectively, knotted trivalent graphs) are neighborhood equivalent if and only if their diagrams are related by a finite sequence of the moves R1 -R5 together with IH-moves (here regarded as moves in the plane). A handlebody-tangle is a disjoint union of handlebodies embedded in the three-ball R 2 × [0, 1] such that the intersection of the handlebodies with R 2 × {0} and R 2 × {1} consists of disks, called end disks of the handlebody-tangle. A handlebody-tangle with no end disks is called a handlebody-link. Two handlebody-tangles are called equivalent if there exists an orientation-preserving homeomorphism of R 2 × [0, 1] into itself taking one onto the other, and which is the identity map on the boundary. Any handlebody-tangle is a regular neighborhood of some trivalent tangle, and therefore, there is a one-to-one correspondence between the set of handlebody-tangles and the set of the neighborhood equivalence classes of trivalent tangles.
When a handlebody-tangle H is a regular neighborhood of some trivalent tangle T such that each end disk of H contains exactly one endpoint of T , we say that T is a spine of H (or that H is represented by T ). Figure 3 shows a handlebody-tangle and two spines that represent it. The above statements imply that two trivalent tangles with the same endpoints represent equivalent handlebody-tangles if and only if their diagrams are related by a finite sequence of the moves R1 -R5 together with IH-moves. For more details we refer the reader to [2] (see also [3]). Therefore, we can study handlebody-tangles through diagrams of trivalent tangles. To obtain an invariant for a handlebody-tangle H represented by a trivalent tangle T , it suffices to construct an invariant of the IH-equivalence class of T ; that is, one needs to associate to a diagram of T some quantity which is invariant under the moves R1 -R5, as well as under the IH-move. Similarly, one can study handlebody-links through diagrams of knotted trivalent graphs up to the moves R1-R5 and IH-move. In this paper, we introduce the notions of enhanced trivalent tangles and enhanced handlebody-tangles. An enhanced trivalent tangle is a trivalent tangle with an edge set on which IH-moves can be applied. An enhanced handlebody-tangle is a handlebody-tangle represented by an enhanced trivalent tangle. We use enhanced trivalent tangles and combinatorial knot theory to provide a general recipe for constructing invariants for trivalent tangles (and, in particular, for knotted trivalent graphs). We also construct numerical invariants for trivalent tangles; these invariants depend on the definition of the Kauffman bracket of classical knots and links. The recipe provided in this paper also yields invariants of enhanced handlebody-tangles. The paper is organized as follows: In Section 2.1 we introduce the notions of enhanced trivalent tangles and that of IH-equivalence classes of enhanced trivalent tangles. Then we explain that there is a one-to-one correspondence between the set of IH-equivalence classes of enhanced trivalent tangles and the set of ambient isotopy classes of 4-valent tangles (see Lemma 1). We use this statement to provide a recipe for constructing invariants of the IH-equivalence class of an enhanced trivalent tangle (G, ρ) with diagram D ρ via a collection C(D ρ ) of knot theoretic tangle diagrams associated with D ρ ; here D ρ is the 4-valent tangle diagram obtained from D ρ by contracting its thick edges (see Proposition 2). In Section 2.2 we show how one can use 3-move invariants of knot theoretic tangles to finally arrive at invariants of IH-equivalence classes of enhanced trivalent tangles, and of enhanced handlebody-tangles. Therefore, it remains to find 3-move invariants for classical tangles. Given an (m, n)-tangle T with diagram D, in Section 3 we use skein modules and basic linear algebra concepts to define a polynomial P (D) ∈ Z[q, q −1 ] in terms of the skein class ⟨D⟩ of D (see Definition 3). It turns out that P (D) is equal to the unnormalized Kauffman bracket of the knot or link obtained by taking the plat closure of the tangle T ⊗ T̄, where T̄ is the mirror image of T . In Theorem 7 we prove that, for each k ∈ {1, 5, 7, 11, 13, 17, 19, 23}, the complex number P(D)|_{q=e^{kπi/12}} is a 3-move invariant for the (m, n)-tangle T .
In this paper, handlebody-tangles have an even number of end disks, and trivalent tangles have an even number of endpoints (univalent vertices). Equivalently, a trivalent tangle contains an even number of trivalent vertices. Recall that any knotted trivalent graph contains an even number of trivalent vertices. Let G be a trivalent tangle. We call an edge of G joining two trivalent vertices an internal edge, and an edge incident to an endpoint of G an external edge. Let ρ be a map from the set of edges of G to the set {1, 2} such that ρ(e 1 ) + ρ(e 2 ) + ρ(e 3 ) = 4 for edges e 1 , e 2 , e 3 incident to a trivalent vertex, under the restriction that external edges and cycles or loops may be assigned only the value 1. Denote by R(G) the set of all such maps, and call the pair (G, ρ), for some ρ ∈ R(G), an enhanced trivalent tangle associated to G. We represent an edge e for which ρ(e) = 2 by a 'thick' edge in a diagram of (G, ρ). We note that for an enhanced trivalent tangle, there is at most one thick edge joining a pair of adjacent trivalent vertices, and that no external edge is a thick edge. An enhanced knotted trivalent graph is an enhanced trivalent tangle with no endpoints. We say that two enhanced trivalent tangles (G 1 , ρ 1 ) and (G 2 , ρ 2 ) with the same endpoints are equivalent (or ambient isotopic) if there exists an orientation-preserving homeomorphism f of R 2 × [0, 1] onto itself, fixed on the boundary, taking G 1 onto G 2 such that f (E ρ 1 ) = E ρ 2 , where E ρ 1 and E ρ 2 are the sets of thick edges in (G 1 , ρ 1 ) and (G 2 , ρ 2 ), respectively. An IH-move on enhanced trivalent tangles is an IH-move which replaces thick edges with thick edges. We say that two enhanced trivalent tangles with the same endpoints are IH-equivalent if they are related by a finite sequence of IH-moves on thick edges and isotopies of R 2 × [0, 1] fixed on the boundary. These definitions extend to enhanced knotted trivalent graphs. An enhanced handlebody-tangle (respectively, enhanced handlebody-link) is the IH-equivalence class of an enhanced trivalent tangle (respectively, enhanced knotted trivalent graph). It follows that in order to construct an invariant for an enhanced handlebody-tangle H represented by an enhanced trivalent tangle (G, ρ), it suffices to construct an invariant of the IH-equivalence class of (G, ρ). For each enhanced trivalent tangle (G, ρ), there exists an associated 4-valent tangle G ρ obtained by contracting each thick edge e in (G, ρ). A 4-valent tangle is a uni-four-valent graph embedded in B 3 , whose intersection with ∂B 3 consists of its univalent vertices (or endpoints). A contraction move is a local change as depicted in Figure 4, where the replacement is applied in a disk embedded in the interior of B 3 . Figure 4. Contraction move. Recall that two knotted 4-valent graphs (or two 4-valent tangles with the same endpoints) are ambient isotopic if and only if their diagrams are related by a finite sequence of the Reidemeister moves R1 -R3 and the moves N4 -N5 given in Figure 5 below (see [6]). Let D ρ be a diagram of an enhanced trivalent tangle (G, ρ), and denote by D ρ a diagram of the associated 4-valent tangle G ρ . If two diagrams both represent an enhanced trivalent tangle (G, ρ), then the 4-valent diagrams obtained from them by applying the contraction move given in Figure 4, that is, by contracting their thick edges, both represent the 4-valent tangle G ρ . Therefore, we can study an enhanced trivalent tangle (G, ρ) through diagrams D ρ of 4-valent tangles.
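To make the set R(G) concrete, the following short Python sketch enumerates the admissible labelings ρ by brute force on a hypothetical toy example: the 'H'-shaped trivalent tangle underlying the IH-move, with two trivalent vertices joined by one internal edge and four external edges. The graph encoding and all names are illustrative assumptions, not taken from the paper.

```python
from itertools import product

# Toy "H" tangle: trivalent vertices u, v joined by internal edge m;
# external edges a, b, c, d run to the endpoints p1..p4.
trivalent = ["u", "v"]
endpoints = {"p1", "p2", "p3", "p4"}
edges = {"a": ("p1", "u"), "b": ("p2", "u"), "m": ("u", "v"),
         "c": ("v", "p3"), "d": ("v", "p4")}

def incident(vertex):
    return [e for e, ends in edges.items() if vertex in ends]

def is_external(e):
    return any(x in endpoints for x in edges[e])

admissible = []
for values in product([1, 2], repeat=len(edges)):
    rho = dict(zip(edges, values))
    # external edges may only carry the value 1 (the loop/cycle restriction
    # is vacuous here, since this toy graph has no loops or circle components)
    ok = all(rho[e] == 1 for e in edges if is_external(e))
    # rho(e1) + rho(e2) + rho(e3) = 4 at every trivalent vertex
    ok = ok and all(sum(rho[e] for e in incident(v)) == 4 for v in trivalent)
    if ok:
        admissible.append(rho)

print(admissible)  # exactly one labeling: the internal edge m is the thick edge
```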
A few words are needed here, as a thick edge in the diagram D ρ might cross under or over (at least) an edge, making the contraction move for diagrams of trivalent graphs/tangles somewhat ambiguous. Below we exemplify the case of a thick edge crossing over a 'thin' edge, where we see that the two 4-valent diagrams on the right are the same, up to the move N4. −→ or The case in which a thick edge crosses under and/or over a few edges, some of which may be thick edges, are also unambiguous up to the move N4. Since we will be working with 4-valent tangle diagrams up to the moves N4 -N5 and Reidemeister moves R1 -R3, we can assume that a trivalent tangle diagram D ρ does not contain crossings involving thick edges (except for self intersection of thick edges). The following statement follows from the above discussion. Lemma 1. There is a one-to-one correspondence between the sets of ambient isotopy classes, as well as IH-equivalence classes, of enhanced trivalent tangles (or enhanced knotted trivalent graphs) and that of 4-valent tangles (or knotted 4-valent graphs). The next step is to create a collection C(D ρ ) of knot theoretic tangle diagrams obtained via the local replacements depicted in Figure 6. ←→ Two collections of tangles S 1 and S 2 are called 3-equivalent if every member of S 1 is 3-equivalent to some member of S 2 . The following proposition is essentially Theorem 3.3 from [8], thus we only sketch its proof. Proposition 1. Let D ρ and D ρ be two diagrams of a 4-valent tangle G ρ with n 4-valent vertices. Then there exists a permutation σ of the set {1, . . . , 4 n } such that the tangle (D ρ , f i ) is 3-equivalent to the tangle (D ρ , f σ(i) ) for each 1 ≤ i ≤ 4 n . In particular, the 3-equivalence class of the collection C(D ρ ) is an ambient isotopy invariant of G ρ . Proof. Without loss of generality, assume that D ρ is obtained from D ρ by applying exactly one of the moves R1, R2, R3, N4 and N5. Case I (moves R1 -R3). It is obvious that the Reidemeister moves R1, R2 or R3 do not affect the local replacements at a 4-valent vertex. Case II (move N4). Suppose that D ρ and D ρ are diagrams that are identical except in a small neighborhood where they differ by a move of type N4. Below we illustrate the effect of this move on local replacements at the involved vertex. We see that the two collections above are ambient isotopic, and therefore, are 3-equivalent. Case III (move N5). Suppose that D ρ and D ρ are diagrams that are identical except in a small neighborhood where they differ by a move of type N5, as shown below: The local replacements at the vertex involved in the move N5 are as follows: It is clear that, in this case, the two collections C(D ρ ) and C(D ρ ) of tangle diagrams are not ambient isotopic. However, the diagrams and differ by a +3-move. Similarly, the diagrams and are related by a 3-move. Then it is easy to see that the collections C(D ρ ) and C(D ρ ) are 3-equivalent. Finally, there should be no difficulty to construct the permutation σ on the set {1, . . . , 4 n } in the statement of the proposition. Proposition 2. Let D ρ be a diagram of an enhanced trivalent tangle (G, ρ), and let D ρ be the 4-valent tangle diagram obtained from D ρ by contracting its thick edges. Then the 3-equivalence class of the collection C(D ρ ) of ordinary tangle diagrams is an ambient isotopy invariant of (G, ρ), as well as an invariant of the IH-equivalence class of (G, ρ). Proof. The statement follows from Lemma 1 and Proposition 1. Invariants for enhanced trivalent tangles. 
An invariant I for classical tangles is called a 3-move invariant if I(T ) = I(T ′) for any two 3-equivalent tangles T and T ′. By Proposition 2, we have that if I is a 3-move invariant for tangles, then it can be extended to an ambient isotopy invariant I(G, ρ) of an enhanced trivalent tangle (G, ρ), with associated 4-valent tangle G ρ , by summing the values of I over all states (D ρ , f i ) of the 4-valent diagram D ρ of G ρ . Moreover, by our construction, I(G, ρ) is invariant under the IH-move on enhanced trivalent tangles as well, and thus it yields an invariant of the IH-equivalence class of the enhanced trivalent tangle (G, ρ), and equivalently, an invariant of the enhanced handlebody-tangle with spine (G, ρ). Furthermore, summing I(G, ρ) over all enhanced trivalent tangles (G, ρ) associated to a given trivalent tangle G yields an invariant of G. If the tangle has no univalent vertices, the method described here provides a recipe for constructing invariants for knotted trivalent graphs. We remark that such techniques have been used before to obtain invariants of knotted graphs. For example, the idea of using collections of tangles to obtain invariants of knotted graphs was first introduced (to the best of our knowledge) by Kauffman in [6]. Moreover, 3-move invariants of knots and links were used by Lee and Seo in [8] to construct numerical invariants for knotted 4-valent graphs. Our invariant described in Section 3.2 is closely related to that constructed in [8]. Some invariants for classical tangles Our goal now is to construct 3-move invariants for classical tangles and consequently arrive at invariants for enhanced trivalent tangles and handlebody-tangles, as explained in Section 2. 3.1. The skein module E m,n . Let m, n be non-negative integers such that m + n is even, and let q be an indeterminate. An (m, n)-tangle T is an embedding in R 2 × [0, 1] of (m + n)/2 arcs and a finite number of circles, with the property that the endpoints of the arcs are distinct fixed points, m of them in R 2 × {0} and n of them in R 2 × {1}. The skein (m, n)-module is the free Z[q, q −1 ]-module E m,n generated by equivalence classes of (m, n)-tangle diagrams modulo the ideal generated by the usual Kauffman bracket skein relations. Each (m, n)-tangle diagram D represents an element of E m,n , denoted by ⟨D⟩, and called the skein class of D. There is a basis for E m,n represented by flat (m, n)-tangle diagrams (crossingless matchings of the m + n endpoints), and the coefficients of the skein class ⟨D⟩ with respect to this basis are Laurent polynomials in q. If m = n = 0, that is, the tangle represented by D is a link L, then ⟨D⟩ is the Kauffman bracket [5] of L, up to a factor of δ = −q^2 − q^{−2}. Let B m,n = {e 1 , e 2 , . . . , e p } be the basis of E m,n consisting of all flat (m, n)-tangle diagrams. We note that $|B_{m,n}| = \binom{2k}{k}/(k+1)$ is the k-th Catalan number, where k = (m + n)/2. For each (m, n)-tangle diagram D, denote the coordinate vector of D relative to B m,n by v(D) = [x 1 x 2 . . . x p ], where we write ⟨D⟩ = x 1 e 1 + x 2 e 2 + · · · + x p e p in E m,n . We remark that v(D) is a regular isotopy invariant of the tangle T represented by D. If D̄ is the mirror image of D (obtained from D by replacing each over-crossing by an under-crossing and vice versa), then v(D̄) = [x̄ 1 x̄ 2 . . . x̄ p ], where x̄ i := x i | q↔q −1 . Specifically, x̄ i is obtained from x i by interchanging q and q −1 .
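For the reader's convenience, the standard Kauffman bracket skein relations, which are consistent with the loop value δ = −q^2 − q^{−2} quoted above and are presumably the intended generators of the ideal (possibly up to the common substitution q ↔ A or a sign convention), can be written as follows.

```latex
% Generators of the ideal (hedged reconstruction of the standard relations):
% L_x denotes a diagram with a distinguished crossing, and L_0, L_oo the same
% diagram with that crossing replaced by its two planar smoothings.
\[
  L_{\times} \;-\; q\,L_{0} \;-\; q^{-1}\,L_{\infty},
  \qquad
  D \sqcup \bigcirc \;-\; \bigl(-q^{2}-q^{-2}\bigr)\,D ,
\]
% so that in the quotient module one has
\[
  \langle L_{\times} \rangle \;=\; q\,\langle L_{0}\rangle \;+\; q^{-1}\,\langle L_{\infty}\rangle,
  \qquad
  \langle D \sqcup \bigcirc \rangle \;=\; \bigl(-q^{2}-q^{-2}\bigr)\,\langle D\rangle .
\]
```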
3.2. 3-move invariants for classical tangles. Given two (m, n)-tangles T 1 and T 2 , denote by T 1 ⊗ T 2 the (2m, 2n)-tangle obtained by placing T 2 to the right of T 1 without any intersection or linking. That is, T 1 ⊗ T 2 is the tensor product of the morphisms represented by these tangles in the category of tangles. Denote by cl(T 1 ⊗ T 2 ) the plat closure of T 1 ⊗ T 2 , which is a link or a knot obtained by joining with simple arcs adjacent upper endpoints and, respectively, adjacent lower endpoints of T 1 ⊗ T 2 . Figure 7 displays the plat closure of the tensor product of a (3, 3)-tangle with its mirror image. We are interested in 3-move invariants for tangles, and thus, in particular, we are interested in the behavior of the skein class ⟨ · ⟩ and of the pairing [ · , · ] under the 3-moves for tangles. We remark that in the case of links, the numerical invariant P (D) k of Theorem 7 is the invariant appearing in Theorem 4.4 of [8]. Conclusions and final comments. Using the method described in Section 2 with the generic I(T ) (see Section 2.2) replaced by the 3-move tangle invariant P (D) k obtained in Section 3.2, we arrive, for each k ∈ {1, 5, 7, 11, 13, 17, 19, 23}, at a numerical invariant of the IH-equivalence classes of enhanced trivalent tangles, and moreover, at an invariant of trivalent tangles. For each k as above, this yields a numerical invariant of enhanced handlebody-tangles. Our results hold for knotted trivalent graphs and enhanced handlebody-links, as well. We remark that the numerical invariants obtained here are not independent: for all k ∈ {1, 5, 7, 11, 13, 17, 19, 23}, q = e^{kπi/12} are primitive 24-th roots of unity, and the corresponding invariants can be obtained from one another by Galois group actions. (The author would like to thank the referee for pointing this out.) The 3-move invariants for classical tangles constructed here were defined using elementary concepts from linear algebra. As noted in Section 3.2, these 3-move invariants for tangles are equivalent to certain evaluations of the Kauffman bracket polynomial of a link obtained in a specific way from the original tangle. The reader may want to compare this with Przytycki's [9] analysis of how the 3-moves influence the Jones polynomial [4]. In [9] it was also observed that tricoloring and F (1, −1) are 3-move invariants of links, where F is the Kauffman two-variable polynomial [7]. We remark that Montesinos and Nakanishi conjectured that every link can be reduced to a trivial link by a sequence of 3-moves. Dabkowski and Przytycki [1] found obstructions to this conjecture: they showed that the Borromean rings are not 3-equivalent to a trivial link. They also found a braid on three strands with 20 crossings whose closure cannot be reduced by 3-moves to a diagram of a trivial link.
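As a quick numerical illustration of the root-of-unity remark above, the short Python snippet below (illustrative only) checks that q = e^{kπi/12} is a primitive 24-th root of unity for each listed k and evaluates the loop value δ = −q^2 − q^{−2} at those points.

```python
import cmath

ks = [1, 5, 7, 11, 13, 17, 19, 23]
for k in ks:
    q = cmath.exp(1j * k * cmath.pi / 12)              # q = e^{k pi i / 12}
    # the order of q is the smallest m > 0 with q^m = 1; primitivity means order 24
    order = min(m for m in range(1, 25) if abs(q**m - 1) < 1e-9)
    delta = -q**2 - q**(-2)                             # loop value of the bracket
    print(f"k={k}: order={order}, delta={delta.real:.4f}{delta.imag:+.4f}i")
```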
2018-12-17T22:15:53.000Z
2018-06-17T00:00:00.000
{ "year": 2018, "sha1": "d7c5d28da8001b1db3e7c0abf929fa7bd80164c0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "737065974757ca3b6e25bef48d9bfc923d867910", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
117159533
pes2o/s2orc
v3-fos-license
A survey of icephobic coatings and their potential use in a hybrid coating/active ice protection system for aerospace applications Icephobic coatings for aircraft and other surfaces subjected to ice accretion have generated great interest in the past two decades, due to the advancement of nanomaterials, coating fabrication methods, biomimetics, and a more in-depth understanding of ice nucleation and ice adhesion. Icephobic coatings have demonstrated the ability to repel water droplets, delay ice nucleation and significantly reduce ice adhesion. Despite these ongoing research activities and promising results, the findings reported hereafter suggest that coatings alone cannot be used for aircraft anti-icing and de-icing operations; rather, they should be considered as a complementary option to either thermal or mechanical ice protection methods, for reducing power consumption and the ecological footprint of these active systems and for expediting ground de-icing operations. This paper will first review the state-of-the-art of icephobic coatings for various applications, including their performance and existing deficiencies. The second part of this paper focuses on aerospace anti-icing and de-icing requirements and the need for hybrid systems to provide a complete ice protection solution. Lastly, several urgent issues facing further development in the field are discussed. Introduction Ice accretion on aircraft has an adverse impact on both safety and performance [1]. As such, there have been great efforts devoted to the development of strategies for de icing and anti icing. "De icing" refers to the removal of ice from aircraft surfaces and its methods include heating, vibration (contact or non contact), mechanical means (e.g., inflated boots on aircraft leading edges) and sprayed icing fluids [2] to remove any ice accretion, while "anti icing" is a preventive measure that delays or reduces ice accretion on surfaces so that the subsequent de icing process is not needed or less time/energy will be needed during de icing. Anti icing can be achieved by frequent spraying of anti freezing fluids or by the application of permanent coatings (hydro phobic or icephobic), designed to prevent water droplets from adhering to the surface before freezing, to delay the freezing event, and/or to reduce ice adhesion to the surface [3]. The use of de icing fluids for de icing and anti icing purposes has a severe environmental impact [4,5], while a permanent, long lasting coating can lessen such consequence. The development of icephobic surfaces can be dated back to the late 1950s [6,7]. However, due to the complexity of icing conditions and ice interaction with surfaces, there has not been a proven, commercially viable (low cost, easy application) and durable (repeated icing/de icing cycles, surface abrasion and mechanical loading) icephobic coating for aerospace applications thus far. Some promising coatings have shown to be able to reduce ice adhesion up to several orders of magnitude with respect to reference metal surfaces such as aluminium, titanium or steel, while others could delay ice crystal nucleation from supercooled water droplets or humid air for up to several hours. The purpose of this paper is to first briefly provide an overview of the aircraft ice accretion process and then summarize the latest icephobic coatings and their performance. The second part of the paper focuses on the requirements for effective aircraft anti icing and de icing strategies. 
Based on the state of the art technologies, future development in combining active systems and engineered coating materials into a hybrid system are proposed. Ice formation and accretion on aircraft Ice accretion on aircraft during flight can either be caused by im pinging supercooled water droplets, freezing rain, or snow particulates accumulating on the surface. The most common of these are super cooled water droplets which typically have a mean effective droplet diameter (MVD) less than 50 microns. The droplets impact the surface and can freeze on contact near the stagnation point or can roll back along the wing and freeze. Water can also exist in supercooled large droplets (SLDs) (typically larger than 50 μm) that freeze upon contact and the release of their latent heat melts them back into the liquid phase where they refreeze further back on the aircraft [8,9]. In addition to droplet size, atmospheric conditions, airfoil geometry, aircraft velo city and angle of attack all together contribute to the formation and coverage of ice [10]. Ice accretion commonly occurs on the upper and lower wing surfaces, fuselage, propellers, engine nacelles, radomes and sensor ports as shown in Fig. 1. In addition to the general performance reduction, the adverse effects of icing on aircraft include wing stall, icing contaminated tail stall (ICTS), icing contaminated roll upset, en gine and air intake icing, carburetor icing, propeller icing, static and dynamic port blockage, probe icing and windshield icing. There are three types of ice that can form in flight; rime, glaze, and mixed ice. Rime ice consists of mostly frozen supercooled water dro plets and forms at low temperatures in stratiform clouds while glaze ice (partially frozen supercooled water droplets) forms just below the freezing temperature of water in mostly cumulus type clouds as out lined in Fig. 2. Mixed ice forms in the middle of the freezing range which usually is between 0 and −20°C (−40°C in extreme conditions) [11]. Glaze ice can be difficult to remove as the supercooled water droplets exist mostly in the liquid state and are thus mobile once they contact the surface. The water droplets are able to coalesce, forming a sheet with a continuous bond to the surface. Since rime ice is generated from smaller droplets, they are unable to coalesce and as a result, freeze in place. It was qualitatively determined that hydrophobic coatings can be more effective in glaze ice conditions than icephobic coatings under rime conditions [12]. The concentration and distribution of the super cooled water droplets and ice crystals inside the cloud vary with temperature and altitude as shown in Fig. 3. As the temperature de creases, so does the saturated vapor density (the amount of water va pour the air can hold) while the probability of the droplets freezing increases, leading to a reduced icing probability. The ice accumulation rate is also related to the amount of supercooled water inside the cloud, where the highest amount of ice accretion would occur at temperatures just below the freezing point. Severe icing can also occur below the cloud level in freezing rain conditions. Hoar frost, another form of ice, forms on the aircraft on the ground or in flight when descending from below freezing condition to an alti tude with warm and moist air. Frost accumulation on the surface can impact aircraft performance and visibility. It is more prone to form in areas with more surface asperities. i.e., rough surfaces. 
Snow, on the other hand, is a mixture of ice and water and its accumulation com monly occurs on the ground. Frost, ice, and snow can also accumulate on aircraft surfaces while on the ground. If not removed, this accretion can reduce the lift by up to 30% and increase drag by 40% [13]. All five types of ice/snow accu mulation on aircraft are classified as "dangerous" to air traffic [1]. De icing and anti icing methods Various ice protection systems (IPS) can be employed to protect aircraft surfaces, engine inlets, sensors and windshields from ice accu mulation in flight and on the ground. A summary of both existing and potential methods and their applications are provided in Table 1 and Table 2. FAA certificated vs. non hazard For flight certification, the FAA calls for a 45 min hold pattern during flight in continuous, maximum icing conditions found in stratus clouds without the use of an ice protection system [14]. Unprotected surfaces to be tested include: landing gear, antennas, fuselage nose cones or radomes, fuel tank vents, fuel tip tanks, and the leading edge of control surfaces. A non hazard de icing or anti icing system should only be considered as a way for immediate escape from icing conditions. The non hazard de icing systems installed are not to be tested for perfor mance in icing conditions, rather they must show that in dry air, the installation of various systems does not adversely affect performance, stability, and other flight characteristics. Due to this loose certification standard, these systems are considered non essential to the aircraft. For many of the coating systems discussed throughout this review, if they are to be used as FAA certified passive anti icing and de icing measures, rigorous tests must be conducted under the conditions stipulated in FAA certification once other standard tests have been carried out per cus tomer specifications. Hydrophobic/superhydrophobic vs. icephobic Icephobicity and hydrophobicity have been considered closely related and many icephobic coatings were in fact derived from hydro phobic or superhydrophobic coatings and surface processing methods. Although a standard definition of icephobicity has not yet been agreed upon, an extensive body of work exists in the field of hydrophobic and superhydrophobic coatings. A brief overview of these surfaces is pro vided here. A more detailed, all encompassing review of this subject can be found in Ref. [15]. A surface is classified as hydrophobic when the water contact angle (CA) on this surface is > 90°; while a superhydrophobic surface has a water contact angle > 150°and a contact angle hysteresis (CAH) < 10° [16]. CAH is defined as the difference between the ad vancing and the receding CA of a water droplet that expands or shrinks on surface. Alternatively, the roll off angle (RoA) can be determined as the lowest angle a surface needs to be inclined before a water droplet rolls or slides off it. Superhydrophobic surfaces should also exhibit RoA < 10° [17]. These definitions are illustrated in Fig. 4. Superhydrophobic coatings, most of which mimic the surface mi crostructure of water repellent plants or leaves, have been viewed historically as potential icephobic coatings in that they offer the ad vantage of reducing ice adhesion or accretion from supercooled water sprayed or poured onto the surface [18 20] (see Fig. 5). 
This was based on the reasoning that water and ice have a similar surface tension/ surface energy and that a surface that repels water should have the ability to prevent ice accretion as well. Although superhydrophobic coatings have been investigated as part of an aircraft ice protection system [21], there are no studies conducted at aircraft cruise velocities (> 75 m/s) as noted by Yeong et al. [22] who recently performed tests in an icing wind tunnel at speed 50 and 70 m/s. Most of these studies use droplet impact velocities below 10 m/ s [23,24] or freezing conditions not in line with typical aircraft icing conditions [25,26]. In early work conducted on bare aircraft substrates, Scavuzzo and Chu demonstrated that the ice adhesion strength in creases with impact velocity [27]. In a similar study conducted on su perhydrophobic coatings, it was shown that the effectiveness of the coating decreased by increasing either the MVD or LWC [28]. Larger droplets have more momentum and thus are able to penetrate deeper into the surface asperities, increasing the effective contact area and the ice adhesion strength. Increases in the LWC promote frost formation which increases ice adhesion. Yeong et al. [22] obtained results at re latively high speed by generating rime and glaze ice on samples at 50 and 70 m/s with 20 micron MVD droplets. They showed that increasing droplet impact speeds tend to decrease the effectiveness of super hydrophobic coatings. They also found that the 20 micron MVD dro plets penetrate into the surface asperities and that the maximum Table 1 In-flight de-icing and anti-icing methods on aircraft (summarized from Refs. [13,14,117,130,131] roughness height to prevent this was approximately 10 nm. All these results appear to be significant obstacles for the field of super hydrophobic coating development. In addition, the water repellent ability of micro textured super hydrophobic surfaces may be compromised, or even reversed (to hy drophilic) under frost forming conditions. In fact, frost can also form on micro textured superhydrophobic surfaces, which are water repellent only to liquid water drops and not to condensed water microdroplets. Once a frost layer is formed on the surface, the surface turns into a hydrophilic surface. Frost formation commonly occurs on the surface of many outdoor structures such as wind turbine blades, power lines, communication towers and aircraft on the ground. During flight, ice accretion is usually due to supercooled water droplets impinging on aircraft surfaces. The icephobicity of a surface not only depends on intrinsic surface properties, but also on the ice formation conditions. In a reported study of a surface with an array of hydrophobic silicon posts (Fig. 6), which was demonstrated to be a superhydrophobic surface for sessile water drops, the cooled surface frosted more readily under high humidity [18]. As a result, superhydrophobicity was lost, (the surface turned hydrophilic) due to the frost layer, and the increased effective surface area led to a higher ice adhesion strength when compared with smooth surfaces. In addition to experimental studies, theoretical force balance analysis also revealed that superhydrophobic surfaces were not ne cessarily icephobic [29]. As such, under different icing conditions, su perhydrophobicity does not always directly translate into icephobicity [30,31], particularly if the surface structure is not specifically tailored to prevent frost formation. 
On the other hand, ice adhesion on non-micro-textured surfaces decreases as hydrophobicity increases [32,33].

General coating classification and requirements

Icephobic coatings can be classified based on their chemical compositions, surface topology and application methods. Polymer or composite coatings in the form of "paints", which can be applied employing a standard spray, dip, brush, or electrostatic deposition process, are beneficial for large structures such as aircraft and wind turbine blades. In contrast, coatings applied by either physical or chemical vapour deposition (PVD/CVD) or ion milling are rather applicable to smaller and higher-cost devices, as these processes are carried out in an enclosed chamber and the equipment and process costs are higher. Coatings fabricated using these processes also have limited thickness, which makes them less resistant to extreme weather conditions (freezing rain, dust particles, or ice crystals). Currently, the most promising icephobic coatings are based on two coating design principles: surface micro- and nano-texturing followed by a chemical functionalization (similar to superhydrophobic coatings based on the lotus leaf design), and infusing low-surface-energy polymeric matrices with functional lubricants (similar to slippery surfaces based on the Nepenthes pitcher plant design) [34].

Table 2. Anti-icing and de-icing methods used on different parts of aircraft [13]. Reproduced from Ref. [127] with permission from Springer Nature.

For modelling ice adhesion to a smooth substrate, one can consider that the bonding strength at the atomic/molecular level depends upon three basic forces: electrostatic forces, covalent/chemical bonds, and van der Waals interactions [3]. The dielectric constant of a material affects the electrostatic attractive force; accordingly, ice adhesion has been found to decrease as the dielectric constant of the surface material decreases [35]. Teflon-based materials have an inherently low dielectric constant (≈2) and are commonly used for hydrophobic and icephobic coating formulations. In fact, an RF-sputtered Teflon coating showed an almost negligible ice adhesion value when tested using a centrifugal ice adhesion apparatus [36]. This shows that a low electrostatic force plays a significant role, since van der Waals forces decay much more rapidly with distance than electrostatic forces. On the other hand, water (or ice), being a polar molecule with an exposed hydrogen atom, forms a strong bond with a substrate due to hydrogen bonding. The key to coating designs for reduced ice adhesion is to select materials with a low bonding strength to H2O. The effects of hydrogen bonds on the interface bonding have been studied by many researchers [32,37,38]. Using a range of mixtures of a hydrophobic, non-hydrogen-bonding self-assembled monolayer (SAM) of 1-dodecanethiol and a hydrophilic SAM of 11-hydroxylundecane-1-thiol, experimentally determined ice adhesion values showed that hydrogen bonding was the greatest contributing factor to ice adhesion [38]. On a microscopic level, the ice adhesion strength to a substrate is also affected by surface roughness, i.e., mechanical interlocking between the ice and substrate. The greater the surface roughness, the larger the mechanical adhesion between the ice and substrate.
In summary, icephobic coatings can be designed by means of matrix compositions and topologies to achieve one or several of the following functions; these can also be used to define and compare icephobicity.
• Detachment or removal of water droplets from a coated surface (droplets roll off the surface before freezing).
• Prevention or delay of ice formation via decreased heat transfer between impinging supercooled droplets and the substrate, so that ice crystallization is delayed.
• Reduction of the ice adhesion strength to the surface below 100 kPa, so that minimum energy/force is needed for de-icing (for a passive system, it was suggested that a coating requires an ice adhesion strength below 20 kPa [39]).
In terms of ice adhesion strength, there exists a large variation in the published work for an individual substrate. This is in part due to the wide spectra of icing test conditions, ice thickness, the use of different adhesion test methods (lap shear, centrifuge, 0° cone test, bend test, knife edge test, impact [15,37]) and experimental variables. A review of the various testing methods and the measurements resulting from each test can be found in Ref. [40]. For example, the reported ice adhesion to an uncoated aluminium (Al) substrate ranges from 110 [29] to 1360 kPa [41]. As such, in many instances ice adhesion will be described qualitatively by the adhesion reduction factor (ARF) or comparatively throughout this communication. The ARF is a comparison between the adhesion strength of the icephobic coating and a reference surface (typically aluminium). A high ARF (> 10) is characteristic of an icephobic coating. Throughout the following sections on coatings, the durability in the form of resistance to mechanical and chemical attack will be commented on whenever such information is available.
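To make the ARF definition concrete, a minimal sketch is given below. It is not taken from the cited work; it simply computes the ARF from two measured shear adhesion strengths and applies the ARF > 10 screening criterion mentioned above, with hypothetical numerical values.

```python
# Illustrative sketch (not from the source): computing the adhesion reduction
# factor (ARF) from measured shear adhesion strengths, in kPa.

def adhesion_reduction_factor(reference_kpa, coating_kpa):
    """ARF = adhesion on the reference surface / adhesion on the coated surface."""
    return reference_kpa / coating_kpa

# Hypothetical values: bare aluminium reference at 400 kPa, candidate coating at 30 kPa.
arf = adhesion_reduction_factor(400.0, 30.0)
print(f"ARF = {arf:.1f} -> {'icephobic' if arf > 10 else 'not icephobic by the ARF > 10 criterion'}")
```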
Icephobic coatings and properties

There are several categories of icephobic coatings (illustrated in Fig. 7) that will be outlined throughout this section. Ice adhesion strengths are reported; however, they may not be directly comparable due to the different physical test setups and selected parameters.

Polymer coatings based on fluoropolymers

Polytetrafluoroethylene (PTFE, commercially known as Teflon) bulk material, PTFE films, and fluorinated silicone rubber/polyurethane coatings were investigated to understand the effects of chemical composition and surface roughness on ice adhesion [42]. In one study, PTFE coatings were applied to aluminium substrates by a spray-and-sintering method (the sintering temperature of 350-370°C is high for a fully heat-treated, age-hardened aluminium alloy). The fluorinated coatings were applied by spraying and curing. The highest water contact angle, 152.8°, was recorded on one of the sintered PTFE surfaces. The sub-micron grain structures had gaps smaller than the mean water droplet size […]. In contrast, the smooth bulk PTFE has the lowest ice bonding strength (60 kPa). In another study, a polyfluorinated polyether (PFPE) dip coating was able to reduce the ice adhesion of a bare aluminium surface by a factor of 20, while Teflon provided a reduction of seven times [25]. A polyvinylidene fluoride (PVDF) coating was applied onto wind turbine blades made of glass fibre composite material to enhance icephobicity. To create the roughened structure shown in Fig. 9, NH4HCO3 was added to the PVDF solution [43]. The pore sizes ranged from 1 to 5 μm, and a water contact angle of 156° and a water sliding angle of 2° were reported. Although an ice adhesion test was not conducted, an ice accretion test carried out at −10°C with a water sprinkler (1 mm dia. water droplet size) spraying supercooled water onto horizontally placed samples showed that there was negligible ice accretion within 50 min on the coated sample, while the bare sample collected about 40 g of ice. PVDF can also be combined with nano-particles (epoxy-siloxane-modified SiO2 [44], fumed silica [45], or graphene [46]) to create a porous structure and potentially increase the durability of polymer coatings. In a separate study, a plasma-enhanced chemical vapour deposition (PECVD) process was used to deposit a Si-doped film (200 nm) followed by the application of a fluorinated carbon coating (10 nm) on smooth and roughened aluminium (Al 2024) surfaces. Ice adhesion and water contact angle measurements showed that the coating could reduce the ice adhesion strength on smooth and roughened Al 2024 by a factor of two, and the water contact angle increased by nearly four times on the rough surface [47]. A coating made of a fluorinated acrylate (polydimethylsiloxane-b-poly…) showed decreased ice adhesion (300 kPa) and an extended ice crystallization delay (about 3 min at −15°C) [48,49]. The nano-scaled roughness of the surface can be seen in Fig. 10. Amphiphilic crosslinked hyperbranched fluoropolymers could lower the freezing point of water [50]. However, water molecules must be bonded to the surface in order for the coating to be effective, as molecular contact is required. Fluoropolymer coatings impregnated with oxide and metal particles have drawn interest in the research community due to the ability to modify surface topology. Oxide and metal particles, such as ZrO2, Ag, and CeO2, were mixed with Zonyl 8740 (a perfluoroalkyl methacrylic copolymer) [31,51] to create the various coatings. The resulting surface morphology of the different coatings is shown in Fig. 11. Ice adhesion tests were carried out by spraying supercooled water droplets (−10°C, 2.5 g/m³ LWC, 10 m/s wind speed and ∼80 μm water droplet size) to generate glaze ice. When tested using a centrifuge ice adhesion testing apparatus, the results showed that ice adhesion was reduced by up to 5.7 times on a nano-Ag-modified surface [51], compared to the uncoated surface. However, as later found by the same research group, the coatings deteriorated quickly after several icing and de-icing cycles, as exemplified by the increased bonding strength to ice (Fig. 12). This was attributed to surface roughness changes after cycling. The results from several of these studies further stress that a coating's durability is vital, particularly under repeated icing and de-icing conditions. The examples provided here suggest that this class of materials can provide both superhydrophobic and icephobic capabilities. However, the durability of these coatings requires further improvement.

Polysiloxane-based viscoelastic coatings

This class of polymer coatings is based on viscoelastic, low-Tg silicones. Silicones are polymers made of repeating siloxane units along with functional constituents such as methyl, phenyl or trifluoropropyl. The low surface energy of the functional groups bonded to the siloxane chain, in combination with the low elastic modulus, enables them to be icephobic [39].
From the perspective of reducing the adhesion of water to the solid surface, many researchers propose that a high contact angle in combination with a low sliding angle offers a reduction in water/ice adhesion to the surface [33,52]. As the bonding strength/energy between hydrogen atoms and fluorine atoms is three times greater than that between hydrogen atoms and dimethylsiloxane (or hydrocarbons) [33], many of the water/ice-repellent coatings have been developed to make use of viscoelastic dimethylsiloxane polymeric materials. Indeed, viscoelastic coatings based on polydimethylsiloxane exhibited a great ice adhesion reduction, nearly 100 times lower than that on a bare aluminium substrate [25]. The reduced ice adhesion was attributed to both the low surface energy and the superior elasticity (perhaps encouraging interfacial sliding). This research also found that several existing so-called "icephobic" wind turbine coatings had ice adhesion equivalent to that on bare aluminium surfaces. A plasma spray process was used to generate coatings from a liquid hexamethyldisiloxane feedstock (HMDSO, 98% purity, Aldrich) [53]. When applied onto an anodized aluminium surface, the coating could achieve an ARF of approximately four (from 400 to 100 kPa). There are, though, several disadvantages associated with these types of viscoelastic elastomer coatings: their bonding strength to non-silica/glass substrates is poor, needing a primer as an interface [54], and their environmental resistance to dust, sand, and ice particles is inferior to other coatings. Additives can be incorporated to render them more wear- and erosion-resistant, while at the same time imparting surface roughness changes and superhydrophobicity. A coating manufactured using a mixture of tetraethyl orthosilicate and n-octyltriethoxysilane with the addition of silica nanoparticles (functionalized with octyltriethoxysilane) yielded a contact angle of 153° [55]. Polydimethylsiloxane (PDMS) coatings with the incorporation of nano-silica were developed to reduce ice accumulation on power line insulators [56]. Coatings were deposited using a sprayed gel process, with the resulting coating morphology shown in Fig. 13. PDMS coatings with nano-silica particles of less than 100 nm in diameter were observed to exhibit a superior ability to shed water droplets, instead of allowing droplets to freeze upon impacting the surface [57], due to a high water contact angle of nearly 161° along with a low water roll-off angle. The rate of ice accumulation was also reduced as a result. Under a supercooled water droplet spray condition, test samples held at −5°C had negligible ice deposition for up to 0.5 h. The improved icephobic properties were attributed to both hydrophobicity and reduced ice adhesion to the coating. Similarly, other researchers also investigated the hydrophobic properties of PDMS with silica and PDMS with silica and metal oxides (Al2O3, Cr2O3, etc.) [58]. The addition of metal oxide was reported to impart a catalytic function to the reaction between the SiO2 particles and the polymer. Other researchers developed transparent superhydrophobic coatings combining ZnO and SiO2 with a methylphenyl silicone binder [59]. An "erosion and icephobic fluorosilicone coating" was marketed by AMES Shield [60]. Presumably, the erosion resistance is provided by the incorporation of particulates.
Lastly, when using a composite coating structure with nano-particles, it must be realized that the particle sizes for superhydrophobic and icephobic coatings are on different dimensional scales [57]. The selection of additives must be tailored to the intended applications. The long-term stability of siloxane-based icephobic coatings has not been well established. The plasma-sprayed hexamethyldisiloxane reported in Ref. [53] showed surface degradation after 15 cycles of icing/de-icing. Aluminium samples coated with perfluorooctyltriethoxysilane also experienced degradation in terms of increased ice adhesion and reduced water repellency [31]. The same study also found that "wet" samples, exposed to a condensation condition prior to an icing test, produced ice adhesion strength values as large as three times those of dry samples. Similarly, ice adhesion to coatings made of perfluorodecyltriethoxysilane (FAS-17) or stearic acid (SA) increased about four times after 20 cycles of icing and de-icing [24]. Hydrolysis was considered a probable reason when the coating was in contact with water or ice for long periods of time. In another study, although limited hydrolysis of the siloxane bonds was observed, the contact angle and roll-off angle were not significantly influenced after 100 cycles during a 250 h test [61].

Metallic coatings

Erosion from rain, sand, and dust particles is a critical issue for aerospace surfaces, thus coatings should have sufficient erosion resistance to survive in the operating environment. Metallic coatings have become of interest, with some possessing icephobicity [62,63]. Titanium-based coatings are of particular interest, with work done on titanium nitride (TiN), titanium aluminium nitride (TiAlN), and commercially pure titanium. In a study conducted by Palacios et al., TiAlN increased the erosion resistance of the leading edge erosion shield of a helicopter by two orders of magnitude [63]. The TiN samples also improved the erosion resistance; however, they increased the ice adhesion strength when a performance metric that normalized the adhesion strength as a function of the roughness was introduced. The TiAlN samples had a lower ice adhesion strength than the titanium substrate when the roughness was below 0.6 μm [64]. Although the roughness of the surface increases with usage, polishing the surface prior to initial commissioning will minimize the initial roughness and maximize the overall performance. A study by Jung et al. tested a variety of icephobic, hydrophobic, and hydrophilic coatings and found that the hydrophilic diamond coatings yielded the greatest freezing delay time [65]. Although these surfaces are hydrophilic, their very low surface roughness (1.4-6 nm) is near the critical ice nucleation radius, thus significantly delaying the freezing time. The critical ice nucleation radius is the critical size that an ice crystal must reach for freezing to occur. This low surface roughness ensures that ice cannot form within the asperities, leading to a low ice adhesion strength.

Surface texturing and topology modifications

One study found that the ice adhesion to known, flat surfaces may have reached a physical limit [6]. This was observed from a large test matrix of samples, including 21 different materials with different water contact angles. As shown in Fig. 14, the ice adhesion strength decreases as the receding contact angle increases.
However, as no known material has a receding angle greater than 120° [6], it was suggested that any further reduction of ice adhesion beyond 150 kPa must be achieved by surface topology changes or polymer molecular engineering [39]. Based on observations of reduced ice adhesion with increased water contact angles, roughened surfaces that allow the entrapment of air within their asperities have been proposed as a means to reduce ice adhesion. However, these structures, which are difficult and costly to manufacture, are prone to damage during cyclic icing and de-icing processes as the nanofeatures may break off [24,66]. Furthermore, when the atmospheric humidity level is high, the nature of the roughened/textured surface can switch from a Cassie-Baxter state, with trapped air below the water droplet, to a Wenzel state [19,67] of low water contact angles (< 90°). The following sections summarize several of these textured coatings/surfaces with enhanced icephobicity, along with their application methods.

[…]tetrahydrodecyl trichlorosilane) [68]. Examples of these nano-surface structures are shown in Fig. 15. The resulting contact angle changes from the untreated states were significant, as shown in Fig. 16. Post-anodizing processing with an FOTS (fluorooctyltrichlorosilane) hexane treatment can render the anodized surface superhydrophobic, as the reported contact angles were greater than 150° [69]. However, when utilizing surface texture for anti-icing purposes, it must be employed judiciously, as a groove with a characteristic width ranging from 0.1 nm to 2 nm may promote ice crystal nucleation [70].

Laser texturing. A laser has the potential to micromachine any surface. With the widespread use of lasers in processing and potentially in additive manufacturing, the use of laser profiling to create icephobic surfaces is appealing from the perspective of manufacturability and durability. One study showed that laser texturing had the ability to impart icephobicity to both metal substrates and diamond-like carbon (DLC) coatings [71]. The surface morphology of the DLC coating is shown in Fig. 17. Laser texturing was also used to create a hydrophobic titanium (Ti) surface. Here a pulsed ultrafast laser micro-texturing process was employed, as shown in Fig. 18(a), and the result was a Ti surface with pillars several microns in height (Fig. 18(b)) [72]. After laser processing, a thin fluoropolymer coating was applied to achieve a contact angle of 165° and a sliding contact angle of < 7°. Linear abrasive wear test results indicated that the laser-processed surface can maintain a contact angle > 150° after three abrasion cycles using a 350 g mass (108 kPa applied pressure). Some superficial wear occurred due to the fracture of the upper 10% of the pillars, again illustrating the importance of suitable mechanical durability for harsh application conditions.

3.3.4.3. Two-tiered surface structuring. A polymer, whether it is flat or contoured, can be implanted with particles to create bimodal surfaces containing micro- and nano-structure. Illustrated here are two such structures: one engineered with two sizes of silica particles (10 nm and 50 nm) and the other with lithography followed by spraying 10 nm silica particles dispersed in epoxy, as shown in Fig. 19(a) and (b), respectively. Epoxy is used in many aircraft composite systems; the attachment of surface features into the epoxy matrix has the potential to be part of the composite manufacturing process.
Micro- and nano-sized silica in an epoxy matrix provided hydrophobicity, while at the same time improving wear resistance [73]. The resulting hierarchical surface structures (Fig. 19) with bimodal features render the surface capable of de-icing, self-cleaning and anti-fouling. However, it is not clear if this coating structure can sustain anti-icing characteristics in a high-humidity environment, as the increased surface area, once covered by frost, may result in increased ice adhesion. In addition, the resistance to abrasive wear is not known. In another study, a micro-scaled surface was created first with a wet etching process to generate cones about 60 μm apart; the surface was then etched with deep reactive ion etching (DRIE) to grow "grass" on the entire surface, as shown in Fig. 20. Finally, the profiled surface was coated with perfluorooctyltrichlorosilane (in hexane solution). The resulting surface was tested under 65% relative humidity (at an ambient temperature of 22°C) with samples cooled to a temperature of −10°C using a cooling stage [74]. Based on a detailed in situ frost formation observation, it was concluded that the engineered structure was able to retard the frost formation process through a higher energy barrier for droplet coalescence and nucleation. A similar structure containing micrometre-scale pillars (fabricated with photolithography and cryogenic ICP etching) with nano-scaled surface roughness (PECVD SiO2 followed by ICP etching) and a final layer of perfluorodecyltrichlorosilane (FETS) [75] has shown that the freezing delay of a sessile supercooled water droplet at −21°C is up to 25 h. The authors attributed this long nucleation delay (the longest reported in the literature) to the presence of an interfacial quasi-liquid layer. Despite the exceptional anti-icing and hydrophobic properties of these highly engineered surfaces, one must be aware of the effect of surface roughness under various icing conditions. Certain roughened surfaces can accelerate the heterogeneous nucleation of ice, while others may increase the ice adhesion strength. A review written by Schutzius et al. revealed that for a multi-tier surface structure, each roughness scale must address a specific target; the micro-scale provides a low adhesion strength while the nano-scale texture resists droplet impingement and promotes rebound [76]. Additionally, the processes used to create these surfaces, such as photolithography, CVD, PVD, etc., are costly and not suitable for mass production or for large structures.

3.3.4.4. Textured and coated stainless steels. Stainless steels are used to house many aircraft instruments and gauges; anti-icing ability is also beneficial for many applications. Here a simple chemical etching (50% FeCl3 solution) followed by the deposition of a layer of nano-silica dispersed in methoxy silane has rendered a stainless steel icephobic, based on qualitative outdoor snow and freezing rain tests [61]. In particular, the treated surface was able to sustain a water contact angle of 155° after 100 icing/de-icing cycles and a cavitation erosion simulation test.

Slippery liquid-infused porous surfaces (SLIPS) [34,77,78]. To overcome the deficiency of hydrophobic surfaces in a moisture-saturated environment (due to frost formation), a new composite surface design was created to minimize frost formation on the surface. In this design, shown in Fig. 21, a nano-porous polymer structure was first fabricated using electrodeposition; this was then followed by the infiltration of a low-freezing-point fluorinated liquid.
The liquid is retained on the surface by the nano-structure, giving the surface a "slippery" nature [34]. In fact, a pitcher plant has a similar slippery, liquid-filled porous structure. Both rely on the simple fact that the smoothest surface is a liquid. This engineered composite structure has a combination of low contact angle hysteresis (for water droplets to roll off the surface) and a low ice adhesion of 16 kPa [41]. Despite the superior icephobic properties, lubricant depletion would occur and compromise the performance.

A lubricant-infused, electrosprayed silicone rubber anti-icing coating. In this study, a heptadecafluorodecyltrimethoxysilane-fluorinated coating was fabricated to exhibit a hierarchically porous structure [79]. This structure was designed with the objective of improving upon the existing SLIPS [34] so that ice nucleation can be delayed for a longer duration and the period prior to lubricant depletion extended. The porous structure was infiltrated with a perfluoropolyether lubricant. The results showed that the ice adhesion strength can be reduced to 60 kPa (vs 1400 kPa for the uncoated substrate). However, this value increased to 600 kPa after 20 cycles of frosting and defrosting (the frosting process was carried out at −14°C in an environment with 80-90% humidity; after the surface was covered with frost, the temperature was raised to room temperature and then the cycle repeated). The degradation was due to the loss of lubricant with each cycle. Based on a similar principle, others showed that a silicone-oil-infused polydimethylsiloxane coating achieved a low ice adhesion of 50 kPa (3% of that of bare aluminium) [80]. Similarly, in another study of an oil-infused porous PDMS coating, the ice adhesion strength was reduced to 38 kPa, about 50% of the ice adhesion strength of a smooth PDMS surface and ∼30% of that of a micro-featured PDMS surface, shown in Fig. 22 [81].

Fig. 22. Slippery liquid-infused porous surfaces (SLIPS) with an optimal combination of high water repellency and icephobicity. Reproduced from Ref. [81] with permission from the American Chemical Society.

Antifreeze-releasing coatings. A bioinspired coating was reported in Ref. [82] in which a porous superhydrophobic layer with wicking channels was embedded. These channels were then infiltrated with an antifreeze agent. Tests in frosting, simulated freezing fog, and freezing rain showed that the onset of either frost, rime or glaze ice formation was delayed at least 10 times longer than that of other coating systems, including the lubricant-filled surfaces. Depending upon how easily the antifreeze agent can be replenished during service, it has a potential aircraft application, since antifreeze agent is regularly applied to aircraft in icing conditions before take-off. The latest commercially available room temperature vulcanizing (RTV) coating R-1009 (NuSil Sol-Gel Vulcanized Silicone Coating) has seen an ice adhesion five times lower than the previously marketed anti-icing coating R-2180 [83], also developed by the company [84]. The comparison of NuSil R-2180 with other commercial coatings is illustrated in Fig. 23. Although not publicly disclosed, it was speculated that this series of vulcanized coatings contains a slow-releasing freezing-point depressant [6]. The authors of this paper are currently investigating the use of silicone R-1009 in conjunction with piezoelectric actuators in a hybrid coating/ultrasonic de-icing system. Among all types of anti-icing coatings described in this section, the lowest ice adhesion was reported for the SLIPS category of coatings [39]; at 16 kPa, the ice adhesion on these surfaces is nearly two orders of magnitude lower than that on uncoated aluminium surfaces.
In terms of the durability of these coatings, as the liquid (lubricant, oil or anti-freezing agent) is held in place by a weak capillary force, its depletion or dilution, particularly during repeated icing/de-icing, is likely to occur, thus rendering the coatings non-functional if an active recharging system is not put in place [15].

Icephobic polymer coatings designed based on cross-link density and interfacial lubricant

The newest and perhaps most advanced icephobic coating series were designed to enable polymer chain mobility within an elastomer matrix, hence creating a slip boundary condition between the ice and the coating surface [39]. As the shear stress required to cause slip at the interface is governed by τ = Gfa/(kT) (where G is the physical stiffness, or shear modulus under isotropic conditions, f is the force needed to detach a single polymer chain of length a, k is Boltzmann's constant, and T is the temperature) and by the polymer cross-link density ρCL, the authors proposed two methods to reduce the adhesion of ice on a polymer surface: (1) using a polymer with low cross-link density and (2) adding a miscible lubricant to enable interfacial slippage. Using the first method, a low-cross-link-density PDMS was able to reach a low shear strength of 33 kPa, without the addition of lubricants or the presence of texture/roughness. With the addition of interfacial lubricants (such as silicone, Krytox or oil) into the polymer structure chemically (vs. physical infiltration in SLIPS), the adhesion strength was further reduced to 6 kPa. Other polymer systems (polyurethane (PU), fluorinated polyurethane (FPU), and PFPE, shown in Fig. 24) also exhibited similar improvements with cross-link density reduction, although the addition of an interfacial lubricant had a greater impact on the ice adhesion reduction. Furthermore, when these engineered icephobic coatings were subjected to repeated icing/de-icing cycles, wear, and outdoor weathering, they consistently demonstrated superior durability to commercially available (NuSil, NeverWet) and SLIPS coatings.
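As an illustration of the slip relation quoted above, the sketch below shows how the slip shear stress scales with the shear modulus G when f, a and T are held fixed. It is a back-of-the-envelope aid only; the modulus values are arbitrary assumptions and are not data from Ref. [39].

```python
# Illustrative sketch (not from the source): relative effect of lowering the
# shear modulus G on the slip shear stress tau = G*f*a/(k*T), holding the
# single-chain detachment force f, chain length a and temperature T fixed.
# Values are arbitrary; only the ratio is meaningful.

def relative_slip_stress(G, G_reference):
    """tau scales linearly with G when f, a and T are unchanged."""
    return G / G_reference

G_ref = 1.0e6                      # hypothetical shear modulus of a highly cross-linked elastomer, Pa
for G in (1.0e6, 2.5e5, 5.0e4):    # progressively lower cross-link density
    print(f"G = {G:9.1e} Pa -> tau/tau_ref = {relative_slip_stress(G, G_ref):.2f}")
```

The linear dependence on G is why lowering the cross-link density (which lowers G) is the first of the two routes to reduced ice adhesion named above.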
Carbon nanotube and graphene containing coatings

In terms of the durability and electrical properties required for potential aircraft applications, carbon nanotubes (CNTs) and graphene may provide the needed physical/mechanical properties, and offer a practical way to modify the surface topology and impart hydrophobicity and perhaps icephobicity. Although not tested for anti-icing, a CNT forest structure was fabricated with vertically aligned nanotubes within a PTFE matrix [85]. A micrometre-scaled water droplet was completely suspended on top of this surface, as shown in Fig. 25. In another study, a composite epoxy resin, impregnated with CNTs, was sprayed onto a substrate and superhydrophobicity was reported [86]. The use of nanotubes in a coating provides an opportunity to incorporate heating into the surface for de-icing and anti-icing purposes. In fact, resistance heating was enabled in a film of graphene nanoribbons (with a large aspect ratio to form an electrical pathway) within an epoxy matrix [87]. Unlike the coatings discussed in the preceding sections, which were intended to provide reduced ice adhesion and delayed freezing of supercooled water droplets, one coating developed for protecting aircraft radomes was based on an active mechanism in which current is passed through the layer to generate heat for de-icing [88]. In this research, a graphene nanoribbon (GNR) coating (100 nm), which is transparent to radio frequencies, was applied to the substrate using an airbrush at 220°C. A de-icing test was carried out at −20°C and successful ice removal was reported. In another report, a Carbo e-Therm coating was applied to curved surfaces and could be electrically heated for use in the non-hazardous low-voltage range (e.g. 12/24 V). It contains carbon nanotubes and graphite to render it electrically conductive [89]. By combining icephobic coatings with conductive additives, the result is a coating with both passive and active anti-icing and de-icing capabilities. Lastly, the approach used in SLIPS can be incorporated into these functional coatings; an example is a spray-coated perfluorododecylated graphene nanoribbon coating with the addition of a lubricating slippery surface [90].

Other coating types

A negatively charged surface has been reported to have the function of reducing the freezing temperature; in particular, a textured hydrophobic stainless steel with anionic polyelectrolyte brushes was found to reduce the freezing temperature by at least 7°C compared with that measured on the untreated surface [91]. Coating materials with polarity changes (generated, for example, by an externally applied electric field during the coating process) also have the effect of reducing the freezing temperature of water on the surface by restricting heterogeneous ice nucleation [67]. Ultimately, if a coating can be developed to delay the freezing of supercooled water droplets to below −50°C, icing may no longer present an issue during high-altitude flight. Icephobicity can also be combined with other functions such as aircraft drag reduction. From the study of many living species, it has been realized that many surfaces have the natural ability to repel water (lotus leaf, pitcher plant, cicada, etc.) and also possess superior aerodynamic performance (butterfly wings and shark skin, as shown in Fig. 26) [92,93]. In fact, shark skin topology can help reduce drag by up to 8% and fuel consumption by 1.5%, not to mention that it possesses the surface topology needed for potential icephobic properties. Also inspired by nature, another development in this area is the creation of biological antifreeze proteins. This is a new and different area that may see future development of synthetic macromolecules for preventing ice crystals from growing [94]. A detailed review can be found in the quoted reference.

Harmonization of tests for assessing the durability of functional icephobic coatings

The benefits expected from the use of icephobic coatings are to limit ice accretion on an aircraft surface during flight in icing conditions or to facilitate ice shedding on rotating components or components exposed to aerodynamic shear forces. When combined with an active IPS (heating elements, mechanical actuators, surface acoustic wave actuators, piezoceramic actuators, etc.), the technology should reduce the energy consumption of the overall IPS.
To be applied onto aircraft, icephobic coatings must meet several major requirements, including but not limited to: erosion resistance (rain, sand), chemical exposure tolerances, resistance to UV exposure and thermal shocks, remaining operative in representative icing flight conditions (e.g., FAR 25 Appendices C, O, or P), compliance with the latest REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) regulations [95], and compatibility with all existing aircraft (engine/airframe/nacelle) surface finish requirements.

Contact angle, surface roughness, elasticity, coating thickness and FT-IR analysis

To verify the properties of an icephobic coating, a series of tests must be performed to determine the wettability via water contact angle measurements, the surface roughness, the elasticity (when applicable, e.g. via nano-indentation measurements), the thickness of the coating (e.g., via eddy current measurements), and a surface spectroscopic analysis (e.g., FT-IR or surface Raman) for controlling the surface chemistry and cleanliness of the samples.

Ice adhesion strength, cross-cut & REACH compliance

The ice adhesion strength must be determined using one of several methods currently used in several labs (e.g., tensile mode (Mode I), shear mode (Mode II), rotating arm, or bending test) in well-defined, simulated atmospheric icing conditions in icing wind tunnel tests (IWTT). Since not all conditions can be tested, a design of experiments (DoE) approach can be employed for selecting a reduced number of icing parameters that are most representative of atmospheric icing, e.g. including a rime, a glaze, and two mixed icing conditions as outlined in Table 3 [96,97]. Cross-cut adhesion tests [98] of the coatings to the substrates can be performed according to the ISO 2409 procedure only on polymer coatings. The test is not applicable to icephobic functionalized metals or ceramic materials, which are either used bare or carry a coating (such as PFPE or perfluorinated silanes or siloxanes) that is only a monomolecular-thick polymer film; as such, its adhesion to the substrate is not detectable by the cross-cut test. REACH compliance must be assessed for all icephobic coatings according to the regulation.

Rain erosion testing

Rain erosion resistance must be tested according to well-defined and accepted standards for aerospace applications, such as the P-JET or the whirling arm tests, which run according to the standard DEF STAN 00-35 [99]. A set of representative testing conditions is listed in Table 4. The working principle of the P-JET, developed at Airbus, is that a jet of water accelerated by a pump to a set velocity is chopped into short segments by a disc with two openings rotating at a set speed. The front heads of the water segments acquire a hemispherical shape due to surface tension and aerodynamic drag. The segments then impinge on the surface of the test coupon, which can be tilted at a desired angle of incidence. The number of impact events at the same location can be varied depending on the need and the type of coating: for our purposes, the number of impacts was varied between 20 and 3000. The testing schematic is shown in Fig. 27, while the results of droplet impact on several surfaces are shown in Fig. 28.

Sand erosion testing

Sand erosion resistance must be tested according to well-defined and accepted standards for aerospace applications, such as ASTM standard G76-04 [100], which is used for the Plint TE-68 Gas Jet Erosion Rig.
A set of representative parameters for this test is listed in Table 5. The working principle is that a defined mass of sand particles is suspended in the flow of a carrier gas and accelerated towards a nozzle that directs the mixed stream of gas and particles towards the sample surface, as outlined in Fig. 29. After having eroded the sample with a defined mass of sand (erodent), the weight loss of the sample is determined (eroded mass), and the test continues. The test is stopped when the maximum mass of erodent has been applied or when the coating is fully eroded and the primer or the substrate becomes exposed. During the sand erosion test, different surfaces will exhibit different behaviours. For example, elastomers undergo slow deformation while the sand particles accumulate and will then suddenly fail, rendering the coating useless. On the other hand, polymers erode linearly with increasing erodent mass, while functionalized metallic surfaces typically show no visible damage until the metal itself is eroded.

Functional performance of icephobic coatings

After having discussed how to assess the basic properties of icephobic coatings, we want to introduce an example of how to assess their functional performance. Functional performance testing investigates how the icephobic properties of the coatings deteriorate during (simulated) operation. This analysis goes one step beyond that of the previous section, which investigated the durability of the coatings themselves; here it is intended to assess the durability of the icephobic property.

Preliminary considerations

The first consideration to be made is that there is, so far, no established standard known to us for testing the functional durability of icephobic coatings, meaning the durability of the coating itself and the durability of its functionality in relevant (simulated) environmental conditions. One thus needs to define a new set of tests and measurables that allow for a meaningful and reliable assessment. The following solutions could offer viable alternatives:
• Solution 1: Expose all samples to sequential degradation tests (erosion, UV, thermal, fluids, etc.) and after each step determine the ice adhesion strength in an IWTT.
• Solution 2: Simulate accelerated degradation (erosion, UV, thermal, fluids, etc.) over the whole sample area and perform wettability tests only (contact angles of water drops) on the degraded coatings, taking the degradation of surface wettability as a strong indicator for the adhesion strength to ice.
Solution 1 is very expensive and time-consuming, while Solution 2 is less costly and time-consuming but might provide only an incomplete set of results. Therefore, there is a strong need to define a more rapid, but complete, screening standard for functional tests in the future.

Simulation of mechanical degradation

For the mechanical degradation simulations, a sandblasting test can be used to simulate erosion and a measurement protocol can be developed; a possible shape of such a protocol is sketched below. The discrete time steps of sandblasting by which the thickness of the coating is gradually reduced would need to be standardized. These time steps depend on the specific material of the coating and must be found empirically. After each time step, the CA and the RoA of water drops on the surface must be determined.
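The sketch below is one possible shape of such a measurement protocol. It is purely illustrative and not a standard (none exists, as noted next); the thresholds reuse the CA > 150° and RoA < 10° criteria from earlier in this review, and the measurement series is invented.

```python
# Purely illustrative sketch (not a standard): tracking wettability degradation
# over discrete sandblasting steps. Measurement values below are made up.

SUPERHYDROPHOBIC_CA = 150.0   # deg, criterion used earlier in this review
MAX_ROLL_OFF = 10.0           # deg

def assess_step(step, contact_angle, roll_off):
    """Record one sandblasting step and report whether superhydrophobicity survives."""
    ok = contact_angle > SUPERHYDROPHOBIC_CA and roll_off < MAX_ROLL_OFF
    status = "superhydrophobic retained" if ok else "superhydrophobicity lost"
    print(f"step {step:2d}: CA = {contact_angle:5.1f} deg, RoA = {roll_off:4.1f} deg -> {status}")
    return ok

# Hypothetical measurement series after successive sandblasting exposures.
measurements = [(1, 158.0, 4.0), (2, 154.0, 7.5), (3, 147.0, 14.0)]
for step, ca, roa in measurements:
    if not assess_step(step, ca, roa):
        break   # stop the protocol once the functional property is gone
```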
We must point out here that, since there are no existing ISO guidelines or standards to follow for such a characterization, the measurement protocol must be set up and the sandblasting parameters chosen according to a best practice that must be developed during the course of the testing.

Simulation of physical-chemical degradation

For simulating chemical degradation and stability, one must perform QUV tests and immersion in at least two reference fluids, e.g. Skydrol […].

Concluding remarks on testing of icephobic coatings

As a general conclusion to this section, it must be stated that novel functional coatings, to which icephobic coatings belong, will be used on future laminar aircraft designs for increasing performance, decreasing fuel consumption, or reducing maintenance. To assess their performance and durability, new test methods must be developed in a common effort among all interested academic, industrial, and regulatory partners. The first outcome of such a joint effort will be harmonized testing guidelines, while the final goal must be to define new industrial standards.

Hybrid icephobic coating/active anti-icing and de-icing strategies

Despite the ongoing research efforts on designing and manufacturing icephobic coatings, coatings alone may not be sufficient for aircraft de-icing and anti-icing needs. In early work carried out by Anderson [101], it was concluded that ice accumulation in an IWTT or in flight conditions is largely dependent upon the external environment, not the surface itself. The work went on to state that as soon as a thin layer of ice was formed, the coating would no longer be functional. There is currently no universal coating solution [102] to resist ice formation under a wide variety of icing conditions and formation modes, including the fully wetted state under the conditions of high-speed water droplet impingement (with a higher Weber number, We = ρV²R/γ) and condensation from moist environments [6]. Furthermore, many polymer-based coatings have shown substantial deterioration after repeated icing/de-icing cycles: hydrolysis of fluorooxysilane-based coatings (one of the most researched coating bases) contributes to coating degradation; mechanical stresses during icing/de-icing cycles compromise the surface asperities [103], and hence the beneficial effect of roughness on wettability; and eventual depletion of lubricants renders the SLIPS system non-functional. One of the coating classes detailed in Section 3.3.6 shows great promise as a potential anti-icing solution, as it does not rely on surface micro- and nano-roughness, nor does it employ an infiltrated lubricant. However, due to the use of elastomer(s) as its matrix, the erosion resistance against sand and ice pellet impact may be poor. In fact, in our preliminary investigation of a commercially available silicone-based icephobic coating, it was found that the erosion rate (weight loss) is nearly two orders of magnitude greater than that of PTFE when tested under impingement angles of 30° and 45°. Additionally, silica particles (erodent) were observed to have embedded into the coating surface during the erosion test. In addition to coating durability under harsh conditions, material and process costs and process repeatability prevent some of the coatings and surface modification methods from reaching a commercial maturity stage.
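For orientation, the Weber number quoted above can be evaluated for representative droplet impacts. The sketch below uses rough property values for water and a hypothetical 20 μm MVD (10 μm radius) droplet, and is only meant to show how strongly We grows with impact speed.

```python
# Illustrative sketch (not from the source): Weber number We = rho * V^2 * R / gamma
# for an impinging supercooled droplet. Property values are rough assumptions.

def weber_number(density, velocity, radius, surface_tension):
    """We = rho * V^2 * R / gamma (dimensionless)."""
    return density * velocity**2 * radius / surface_tension

rho = 1000.0      # kg/m^3, water
gamma = 0.075     # N/m, approximate surface tension of water near 0 degC
radius = 10e-6    # m, a 20 micron MVD droplet
for v in (10.0, 75.0):        # low-speed lab test vs. a cruise-like impact speed
    print(f"V = {v:5.1f} m/s -> We = {weber_number(rho, v, radius, gamma):8.1f}")
```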
Further to these challenges, the fact remains that as soon as an initial layer of ice or frost forms on the surface, the icephobic property (e.g., impeding ice nucleation/crystallization or rolling off water droplets) will subside and subsequent ice accretion will not be affected by the coating. As such, other means to remove the accreted ice will be needed, even though ice adhesion to the coating may be minimal. To this end, coatings have been found to reduce the power consumption for thermal de-icing by 80% while at the same time decreasing runback ice [104]. Icing wind tunnel tests of ice adhesion also showed that the best strategy among the various methods examined was the combination of electro-thermal heating and an icephobic coating [105]. Another strategy is to integrate coatings with electro-mechanical de-icing systems [106]. Each of the following sections briefly reviews existing studies on hybrid de-icing systems combining icephobic coatings with thermal or electro-mechanical active systems and then gives recommendations to obtain an efficient combination of coatings with either of the three active ice protection systems.

The first investigation of an electro-thermal ice protection system goes back to the mid-1930s [107]. The idea is to integrate electrical heating elements into or onto the surface to be protected. These heating elements provide the energy to operate either in anti-icing or in de-icing mode. Early examples include two designs that were applied to propeller blade protection. The first one consisted of internal wires moulded into a neoprene shoe. The second design used an outer layer of conducting material and an inner insulating layer. Current was supplied to the outer conducting layer via two copper leads. Both designs provided an acceptable ice protection method. However, these concepts had a major drawback: the electro-thermal system required a heavy electrical generator [108-110]. The arrival of turbojet engines led to some further development of electro-thermal technology. Due to the close spacing and motion between rotor and stator, mechanical abrasion would limit ice formation in the initial compressor stages. Icing of this component was therefore deemed secondary. However, the inlet guide vanes became more critical in terms of ice protection. Icing of inlet guide vanes would seriously affect engine performance. Hence, an ice prevention method using electrical heaters was investigated. The heating element consisted of nichrome wires encased in glass cloth and assumed a hairpin shape [111]. It was shown experimentally that power requirements could be significantly reduced, while maintaining ice protection, by operating the heaters in a cyclical activation mode. With the turbojet engine also came high-altitude and high-speed flight. Studies showed that the heat required for continuous anti-icing of large critical surfaces could become very large, and even prohibitive [112]. In order to reduce the energy penalty required by thermal systems, investigation began on periodic de-icing. In this context, electro-thermal architectures were also investigated [113]. The heating elements consisted of nichrome strips and were placed in the spanwise direction with very little spacing. The strips were integrated into a stack of glass cloth and neoprene. The use of a parting strip was found necessary for quick and complete ice removal. High local power densities and short cycles were also found to yield the best results.
However, attaining the melting temperature at the surface was insufficient to ensure ice removal. Peak temperatures of 10-35°C were found necessary for complete ice removal. Today, in the context of more-electric aircraft and the need to reduce fuel consumption, aircraft manufacturers are showing a growing interest in electro-thermal ice protection systems (ETIPS). The fact that Boeing has equipped its 787 Dreamliner with an ETIPS demonstrates the degree of maturity this technology has achieved. However, many questions remain to be answered: how does the ice detach in de-icing mode? Is there an optimal layout for the heaters? Is it possible to combine an ETIPS with a surface coating to reduce its energy consumption? The architecture of an ETIPS is usually based on a multi-layered stack of materials. Each stack may differ in material properties and thickness depending on the design and application. The operation of a modern electro-thermal ice protection system in de-icing mode is illustrated in Fig. 30.

Fig. 30. Illustration of an ETIPS operating in de-icing mode.

A parting strip (here heater C) is held active during the whole cycle. The remaining heaters are activated according to a defined cycle. This acts to create a liquid water film at the interface between the ice and the protected surface, hence reducing the ice's ability to remain attached to the surface. Once a critical amount of water film is formed, the ice block is shed under the effect of aerodynamic forces [113].

Combination of coatings with electro-thermal systems in anti-icing and de-icing modes

As a continuation of their earlier experimental studies in 2002 [114], researchers at the Anti-icing Materials International Laboratory (AMIL) explored the use of thermoelectric anti-icing systems with hydrophobic coatings [115]. Three different coatings were tested (two hydrophobic and one superhydrophobic). Icing conditions were created using AMIL's icing wind tunnel. The superhydrophobic coating reduced the required power (relative to the non-coated surface) by 13% for rime ice and 33% for glaze ice, while the hydrophobic coatings decreased the power by 8% and 13% for rime and glaze ice, respectively. However, the hydrophobic coatings did not prevent runback water from freezing on the unprotected areas. On the other hand, the superhydrophobic coating prevented the runback water from freezing, leaving the surface mostly free of ice. This suggests that a superhydrophobic coating could significantly reduce the power requirement of anti-icing systems, although the question of durability remains to be investigated. In addition, a study by Antonini et al. investigated the effect of superhydrophobic coatings on energy reduction in anti-icing systems [104]. To do so, the authors used a NACA0021 aluminium airfoil with an exchangeable insert section. Three different inserts were considered: untreated aluminium, aluminium coated with PMMA, and etched aluminium coated with Teflon. The leading edge area was heated with an electrical resistor placed on its inner surface. Moreover, in order to quantify the coating performance, a no-ice area was defined on the insert. With this configuration, the heating power needed to keep the no-ice area free of ice was measured for the different inserts. The performance of a given coating was assessed by measuring the heating power and the amount of runback ice. Tests were performed with LWCs of 1.5 g/m³ and 12.3 g/m³. The airflow velocity and static temperature were 28 m/s and −17°C, respectively.
In the first case, it was found that the coated surfaces led to a significant reduction of the heating power (up to 80%). A reduction of runback ice was also noted. Moreover, it was observed that for the Teflon coating, the airfoil remained almost completely free of runback ice. For the second LWC case, the reduction of the heating power was much smaller (10%). However, this value of LWC is not typical of aircraft icing conditions. Mangini et al. evaluated the effect of hydrophilic and hydrophobic surfaces on runback ice formation [116]. The authors used the same airfoil setup as in the previous study [104]. Two inserts were considered: untreated bare aluminium (measured to be hydrophilic) and an etched aluminium superhydrophobic coating. As in the previous study, an electrical heater was placed at the leading edge. Different nozzles were used to generate a dispersed spray (MVD 50 μm, LWC 2.5 g/m³) and a dense spray (MVD 125 μm, LWC 6.5 g/m³). The airflow velocity and static temperature were 14.4 m/s and −17°C, respectively. The study showed two different types of ice build-up depending on the coating. In the case of the hydrophilic surface, the droplets impinging at the heated leading edge created a liquid film. The film was observed to separate into ligaments when flowing downstream. Once beyond the heated area, the water froze, leading to ice build-up on a large part of the surface. On the other hand, on the superhydrophobic surface, ice only built up as a few isolated islands. Moreover, some of the islands were shed by the aerodynamic forces. The authors hence conclude that superhydrophobic coatings could provide a significant enhancement to thermal ice protection systems. The previously described studies show that there is a need to further evaluate the performance of an ETIPS combined with surface coatings. Indeed, a judiciously chosen coating could significantly reduce the energy required to protect a surface from icing. However, the physics of ice formation on surface coatings is complex. No clear standards on their use and effects are available. In fact, studies are usually conducted at low airflow velocities (with respect to large airliners). Information is also lacking on the effect of velocity on the observed physics of ice formation on coated surfaces. Finally, no study has yet attempted to investigate the combination of an ETIPS operating in de-icing mode with a coated surface. Therefore, although coatings offer a very promising direction of research for the improvement of ETIPS technology, further work is required in order to fully understand their physics and use them in an optimal way.

Principles of electro-mechanical de-icing systems and combination of icephobic coatings with electro-mechanical de-icing systems

In the context of setting up new programmes for more-electric aircraft and for reducing fuel consumption and emissions, aircraft manufacturers must develop alternative solutions to the traditional thermal and pneumatic ice protection systems. In addition to electro-thermal ice protection systems, studies are carried out to develop electro-mechanical de-icing systems. This technology is at a low maturity stage of research for de-icing purposes; however, it deserves further attention.

Principles of electro-mechanical de-icing systems

The most frequently studied electro-mechanical de-icing systems are electro-impulsive, electro-mechanical expulsive, and piezoelectric systems.
5.2.1.1. Electro-impulsive de-icing systems. Electro-impulsive de-icing systems (EIDI) induce de-icing by transferring a mechanical impulse to the surface on which the ice has formed. The system operates using high-voltage capacitors which are rapidly discharged through electromagnetic coils located under the surface. After the discharge, strong and rapid repulsive magnetic forces are induced by a high-current electric pulse through the coil. This results in the rapid acceleration and flexure of the iced surface, causing detachment and shedding of the ice [117,118]. Fig. 31 shows a schematic diagram of an EIDI system.

Fig. 31. Schematic of an EIDI system. Reproduced from Ref. [119].

The drawbacks of this technology are the possible induction of structural fatigue, the generation of electromagnetic interference, the non-negligible weight of the de-icing system, and the disturbing (acoustic) noise generated during de-icing.

5.2.1.2. Electro-mechanical expulsive de-icing systems. In electro-mechanical expulsive de-icing systems (EMED), a short-lived electrical pulse delivered to the coils causes them to extend in a few ms [119], and their deformation is transferred to the leading edge. The rapid change of shape of the leading edge results in vibrations at frequencies in the range of a few kHz and detachment of the accreted ice. Fig. 32 shows a schematic of an EMED system.

Fig. 32. Schematic of an EMEDS system. Reproduced from Ref. [119].

This system was developed by COX Inc. [120], and these systems have drawbacks similar to those of the electro-impulsive de-icing systems.

5.2.1.3. Electro-mechanical piezoelectric de-icing systems. Electro-mechanical piezoelectric de-icing systems cause ice delamination by vibrating the surface on which ice has formed [121,122]. Piezoelectric actuators are bonded to the interior of the surface on which ice accretes and can generate vibrations when they are controlled with alternating voltages. A schematic is shown in Fig. 33. The vibrations are of very small amplitude compared to the previous technologies and induce less structural fatigue. There has been extensive work on this technology, with the use of frequencies ranging from hundreds of Hz [117] to tens of kHz [123], to MHz [124].

Combination of icephobic coatings with electro-mechanical de-icing systems

There are very few studies involving hybrid de-icing or anti-icing systems that combine icephobic coatings and electro-mechanical de-icing systems. In the work by Strobl, a hybrid system using a coating, heating elements and piezoelectric actuators was proposed and tested in research carried out at Airbus and the Technical University of Munich [125]. A NACA 0012 airfoil was coated and equipped with a thermal system along the stagnation line (in a running-wet, anti-icing mode) and piezoelectric actuators (cyclically driven for ice shedding) in the unheated aft region, as shown in Fig. 34. The surface was prepared by polishing, anodizing and coating with an Episurf solution (Surfactis Technologies, France). A power density of 2.74 kW/m² was needed to operate the hybrid ice protection system, compared to the 16.4-62 kW/m² typically required for electro-thermal ice protection systems. This clearly showed that by combining various features into an ice protection system, the power consumption or the ecological footprint of the active system(s) can be significantly reduced.
Regardless of the electro-mechanical de-icing technology employed, the ice shedding mechanism is based on vibrations (induced by a shock or by harmonic solicitations) which generate shear stresses greater than the adhesive strength at the interface between ice and substrate, or tensile stresses greater than the ice tensile strength. Tensile stress results in cracks forming within the ice, while shear stress tends to produce delamination. Both phenomena can be used for ice shedding. A decrease in the ice adhesion strength would be beneficial to reduce the electrical power consumption necessary for delamination, provided that the coating does not affect the stress generation. The stress generation is linked to the coupling between the source of vibrations and the ice/substrate interface or the surface layer of the ice. If the coupling is weak, the vibrations are not well transmitted from the source to the substrate and the ice. The coupling and stress generation at the ice/substrate interface or at the surface layer of the ice are strongly dependent on the Young's modulus of the materials, in particular the modulus of the coating. To highlight the effect of the coating, Fig. 35 and Fig. 36 show the change in tensile stress per micron of deformation at the surface layer of the ice and the shear stress per micron of deformation at the ice/substrate interface with respect to the Young's modulus of the coating, respectively. These figures were plotted from simulated results obtained for a 1.5 mm thick aluminium substrate covered with a 100 μm coating layer and a 2 mm thick ice layer (modeled as a homogeneous material with a Young's modulus of 9.33 GPa and a Poisson's ratio of 0.33). Fig. 35 shows that the tensile stress generated per micron of displacement decreases as the Young's modulus of the coating decreases. This result is valid for all vibrational frequencies. For shear stress, the same conclusions can be drawn; however, the effects of Young's modulus are more significant. For a coating with a low Young's modulus, the coupling is extremely weak and the shear stress (per micron of displacement) generated at the ice/coating interface is very low. These low stresses imply that a high power would be required to generate a shear stress in excess of the ice adhesion strength in order to remove the ice by delamination, even if the ice adhesion strength has decreased due to the coating. In order to fully benefit from the icephobicity of a coating combined with an electro-mechanical de-icing system, we thus recommend using a coating with a Young's modulus that exceeds 1 GPa. If the coating has a lower Young's modulus, the gain obtained from the decrease in ice adhesion strength must be compared to the loss of stress generation to be able to conclude whether there is a real benefit of using the coating. To finalize this section on the combination of icephobic coatings with electro-mechanical de-icing systems, Table 6 provides the Young's modulus range of the main families of coatings and an assessment of their use for a hybrid icephobic and electro-mechanical de-icing system.
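To give an intuitive feel for why a compliant coating weakens the stress transfer, the snippet below evaluates a deliberately simplified, shear-lag-style estimate: the interface shear stress produced per micron of relative displacement across the coating is approximated as G/t_c, with G = E/(2(1+ν)). This is not the vibration simulation behind Fig. 35 and Fig. 36; the 100 μm thickness matches the case above, but the Poisson's ratio of 0.4 and the model itself are assumptions, used only to illustrate the trend that stress transfer collapses as the coating modulus drops.

```python
# Toy estimate of the shear stress transmitted to the ice/coating interface per
# micron of imposed displacement, tau/delta ~ G / t_c. Illustration only; the
# figures in the paper come from a full vibration simulation, not this formula.

def shear_stress_per_micron(E_coating_pa, thickness_m=100e-6, poisson=0.4):
    G = E_coating_pa / (2.0 * (1.0 + poisson))   # coating shear modulus (Pa)
    return G / thickness_m * 1e-6                # Pa per micron of displacement

for E in (2e6, 0.5e9, 1e9, 70e9):  # PDMS-like, fluoropolymer, 1 GPa guideline, metal-like
    tau = shear_stress_per_micron(E)
    print(f"E = {E/1e9:7.3f} GPa -> ~{tau/1e3:10.1f} kPa per micron")
```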
Concluding remarks

A summary of the material and surface properties required for coatings to be effective alone or as part of a hybrid de-icing system, operating in anti-icing or de-icing mode, is presented in Fig. 37. To achieve the functions of anti-icing and de-icing, coatings must have the ability to repel water droplets, delay ice nucleation from both the vapor and liquid states, and finally, once ice has formed on the surface, reduce ice adhesion. All three characteristics can be realized by (1) coating material selection (fluoro- or silicone-based), (2) changes to the material's molecular structure (degree of cross-linking and/or addition of an interfacial lubricant), (3) surface morphology/topology changes (creating a texture on the material itself or utilizing micro- and nanoparticles) together with the addition of an infiltrated lubricant/anti-freezing agent, and (4) changing surface physical properties such as the dielectric constant or polarity. Of the coatings surveyed, the coating series designed by controlling the degree of cross-linking and adding an integral interfacial lubricant demonstrated the best performance in terms of anti-icing ability, durability and low process cost; however, its effectiveness against all types of icing conditions, its erosion resistance to sand and ice pellets, and its adhesion strength to aluminium and composite surfaces have not yet been tested. In general, a composite structure with micro- and/or nano-scaled hard particles within a low-surface-energy polymer matrix may be the solution to look for. Particles can provide the needed mechanical and physical properties, and a venue to impart surface texture. Despite the effectiveness of current and future coatings in delaying ice formation and reducing ice adhesion, they will have to work in concert with active systems to deliver fail-safe anti-icing and de-icing solutions for future aircraft. In this regard, it is foreseeable that heating, vibration or microwave systems may be combined with icephobic coatings to provide the de-icing operation, should extended icing conditions result in ice accumulation on critical aircraft components. The energy needed to operate the active system(s) will be greatly reduced due to the presence of the coating. The active system may also assist the coating in preventing ice formation in the first place and in avoiding the "ballistic" effect of shedding large pieces of ice during de-icing operations, i.e., by shedding smaller ice fragments at a higher frequency, since smaller pieces represent a lower risk for engines or aircraft control elements. Moving forward, it is the authors' opinion that future research should focus on, but is not limited to, the following areas:
(1) Hybrid coatings and low-energy (or self-powered) active systems. These active systems (heating, vibration, or microwave) are to provide the needed de-icing capability once ice has accreted. The integration of two or more systems into the aircraft structure will be crucial, as the coating and active system must not adversely impact the structural integrity and aerodynamic performance.
(2) Coating adhesion to various aerospace substrates (aluminium, carbon fibre reinforced polymer (CFRP), glass fibre reinforced polymer (GFRP) and metal composite laminate) should be considered as part of the coating development process, as adhesion to the surface is as important as the icephobic properties.
(3) Before undertaking extensive icephobic coating development and qualification tests, there is an urgent need to establish international standards (ASTM, ASME, AIAA, SAE or OEMs) to standardize the sample dimensions, coating thickness, coating and ice adhesion tests (test type, strain rate, and environment), and icing test conditions.
More complicated than common mechanical tests, icing tests involve a host of variables that must be specified, such as sample/coating temperature and inclination, supercooled water droplet size, velocity and temperature, wind speed (turbulent or laminar), and wind tunnel/chamber humidity and temperature. These standards will allow a relative ranking of all coatings and, more importantly, align coating development for aerospace applications.
(4) Lastly, there is a need for standardized environmental testing procedures to assess a coating's functional resistance to UV (A and B), de-icing chemicals, water/ice, and mechanical abrasion from sand, ice pellets and water droplets, in addition to cyclic stresses from aircraft operation, icing/de-icing, and vibration.

Acknowledgement

Dr. Huang would like to thank the Institute of Clement Ader and INSA at University of Toulouse for giving her the opportunities to work with its researchers and exchange valuable ideas.

Table 6. Young's modulus range of the main families of coatings and potential benefit for hybrid coating/electro-mechanical systems.
Coating type | Young's modulus range | Potential benefit for hybrid coating/electro-mechanical system
Polymer coatings based on fluoropolymers | 0.5 GPa [133] | Potential use if the damping is not significant; erosion may be problematic
Polydimethylsiloxane-based viscoelastic elastomer coatings | 1-3 MPa [133,134] | Young's modulus too low to be used in a hybrid system
Surface texturing and topology modifications (Al, Ti) | 70-120 GPa [135] | Surface has the same chemical composition as the substrate; wear shown not to be an issue
CVD/PVD diamond film, TiN | 100-1200 GPa [136-138] | Thin, wear-resistant coatings can be used; coating application process is expensive

Fig. 37. Summary of the material and surface properties required from coatings to achieve anti-icing and de-icing.
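Table 6 can be combined with the 1 GPa modulus guideline from the preceding subsection into a trivial screening step. The sketch below (Python) only transcribes the tabulated modulus ranges and flags which families clear the recommended threshold; it deliberately ignores the other criteria that Table 6 also weighs, such as erosion resistance and application cost.

```python
# Screen the coating families of Table 6 against the >1 GPa Young's modulus
# guideline recommended above for hybrid coating/electro-mechanical systems.
# Modulus ranges (Pa) are transcribed from Table 6; nothing else is modelled.

COATINGS = {
    "Fluoropolymer-based polymer coatings":              (0.5e9, 0.5e9),
    "PDMS-based viscoelastic elastomer coatings":        (1e6, 3e6),
    "Surface texturing/topology modification (Al, Ti)":  (70e9, 120e9),
    "CVD/PVD diamond film, TiN":                         (100e9, 1200e9),
}

THRESHOLD = 1e9  # Pa, recommended minimum modulus for good stress coupling

for name, (e_min, e_max) in COATINGS.items():
    if e_min >= THRESHOLD:
        verdict = "meets"
    elif e_max >= THRESHOLD:
        verdict = "marginal for"
    else:
        verdict = "falls below"
    print(f"{name:50s} {verdict} the 1 GPa guideline")
```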
Stochastic Jumping Robots for Large-Scale Environmental Sensing Single-use jumping robots that are mass-producible and biodegradable could be quickly released for environmental sensing applications. Such robots would be pre-loaded to perform a set number of jumps, in random directions and with random distances, removing the need for onboard energy and computation. Stochastic jumpers build on embodied randomness and large-scale deployments to perform useful work. This paper introduces simulation results showing how to construct a large group of stochastic jumpers to perform environmental sensing, and the fi rst demonstration of robot prototypes that can perform a set number of sequential jumps, have full-body sensing, and are well suited to be made biodegradable. An interactive preprint version of the article can be found at: Introduction Robots operating in large numbers have been proposed as a solution to perform large-scale environmental monitoring. [1] Such robots can be scaled in number to fit area coverage needs, and redundancy in these systems favors robust deployment. [2] However, very few large multirobot systems have been used in reality outside laboratories due to the challenges navigating uneven terrain or producing sufficient robot numbers for meaningful area coverage in a cost-effective way. We propose to overcome both barriers by making single-use stochastic jumpers that are easy to mass produce, cheap, and effective. Our jumpers operate using embodied randomness, encoding stochastic jumping behavior in the design of the body of the jumper (see Figure 1). Jumps are initiated upon releasing preloaded elastic energy using mechanical components (latches) that are activated by an environmental stimulus. These latches control the sequence and timings of the jumps. Sensing capability is directly painted on the robot, and the lack of electronics makes it possible that in the future the robot can be made fully biodegradable. Large numbers of jumpers could provide in situ sensory information of an area for common tasks within agriculture or environmental remediation industries. As a first step toward real-world applications, this work focuses on a deployment and sensing scenario over a target area. Robots operating in large numbers are often individually simpler than those used in systems consisting of a few robots or a single robot. The simplicity of the robots in these systems is often compensated by their numbers and the design of strategies governing their deployment. Algorithms using artificial forces, [3] minimal or noisy sensors, [4][5][6] and random walks [7][8][9] have all been proposed as methods of dispersing robots over an area. Large-scale indoor deployments up to 3000 ft 2 have been reported in the study by McLurkin et al. [10] using the iSwarm system. Outdoor robot deployments have been demonstrated at a large scale using drones [11] or surface water vehicles. [12] However, large outdoor land-based robotic deployments have not yet been realized. Jumping robots have been explored in the past as a way to navigate challenging outdoor terrain, especially for small robots. [13] Examples include miniature robots weighing under 10 g that exploit flea-inspired elastic release mechanisms driven by shape memory alloys [14] and DC motors, [15] although many of these platforms have not been designed for use in large numbers. This changes design priorities toward low individual robot cost, simplicity, and potential for mass fabrication. 
Previous jumping robots for environmental monitoring [16][17][18] all use electrical power and control components in their designs limiting their potential biodegradability. [19,20] The dynamic simulations shown in the study by Dubowsky et al. [17] demonstrated how jump height and robot size had a strong influence on their robot's ability to traverse an obstructed tunnel without becoming entrapped. Meanwhile, Mintchev et al. [18] performed physical trials of their robot, demonstrating the robot's ability to overcome obstacles 7 cm high and rapidly explore a flat 10 m by 10 m area by exploiting dynamic instabilities in its locomotion mechanism to perform a random walk. While Mintchev [18] does explore the total area covered by the system based on their robot's trajectories, neither work examines the coverage capabilities of a large number of jumping robots operating simultaneously. The principles of morphological computation represent an emerging view of intelligence in robotics, where mechanically preprogrammed control schemes and responses to environmental stimuli can be encoded in the robot's body. [21,22] These principles are extended in this work to embody randomness within a robot's structure, so that the control of locomotion is encoded without predefined or deterministic path planning. Embodied intelligence can also include sensing modalities, such as using observations of body dynamics to sense environmental characteristics [23] and reactive pigmentation for thermally [24] or chemically [25] responsive robots. Overall, the work presented here provides the first steps toward mass production and deployment of large numbers of stochastic robots, with embodied randomness, for outdoor applications. The potential for jumping robots to perform area coverage in simulation is demonstrated, and these simulations are used to inform the design of proof-of-concept single-use jumpers. These prototypes can be stored in a compact way, assembled quickly with minimal manipulation, and are then capable of sensing their local environment via direct contact. The prototype designs also operate at a low price point (%US $1.39 bulk cost of materials per robot). While the current design is not biodegradable, the limited number of materials used in its construction alongside the lack of toxic electronic elements makes the design well suited to be made fully biodegradable in the future. Simulation-Based Design Simulations, programmed in Python, were carried out to evaluate the performance of the system in covering a 10 m by 10 m area of interest after being released at the center. In the future, we imagine that a separate system might be able to produce and release the stochastic jumpers directly into the environment. Alternatively, the stochastic jumpers could be released at ground level or from the air by a human or robotic carrier. Stochastic Robot The ability of the system to cover the area of interest for environmental sensing is encoded in the design of the robot's body. Control of each robot is therefore determined not by a programmed microcontroller, as would typically be the case, but by mechanically programming the robots to execute a specific number of jumps. By changing the body of the robot, these jumps could be triggered after a certain time has elapsed or by environmental factors. These jumps have a noisy distribution of jump distances and directions due to the robot's interaction with the environment and open-loop operation. 
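Because the "program" of each jumper is fixed entirely by its body, it can be summarised by a handful of build-time parameters. The sketch below is a hypothetical way of writing that configuration down in Python; the field names and the trigger options are illustrative rather than taken from the authors' code, and the example values (7 jumps, 4 J) correspond to one of the configurations studied in the simulations that follow. The frozen dataclass makes explicit that nothing is decided at run time.

```python
# Illustrative description of a mechanically encoded jumper: everything that
# governs its behaviour is fixed when the body is built, not computed on board.
from dataclasses import dataclass
from enum import Enum, auto

class TriggerType(Enum):
    ELAPSED_TIME = auto()    # latch releases after a set delay
    ENVIRONMENTAL = auto()   # latch releases on a stimulus such as moisture

@dataclass(frozen=True)      # frozen: the "program" cannot change after assembly
class JumperConfig:
    n_jumps: int             # number of preloaded jumping mechanisms
    total_energy_j: float    # total stored elastic energy E_tot (J)
    trigger: TriggerType     # what releases each latch

example = JumperConfig(n_jumps=7, total_energy_j=4.0, trigger=TriggerType.ENVIRONMENTAL)
```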
In the simulation, the robots have n_j preloaded elastic jumping mechanisms which in total store a strain energy of E_tot joules. Each jumping mechanism is assumed identical, as previous simulations showed no difference in the system's performance when different energy release strategies were used in the presence of noise. Furthermore, having identical jumping mechanisms lends itself to mass production. The robots are modeled as spheres (r = 5 cm) in continuous space. Their velocity is dictated by the finite-state machine shown in Figure 2. Robots are introduced into the world after the previous robot has left the starting area, which is located at the origin. Robots start with a random z rotation (θ) and in the Not Deployed state. This can be imagined as a person or a robotic system manually placing the robots into the environment.

Simulation Environment
The robots wait one second in the Waiting state, to model the delay caused by the release latch reacting to the environment. After this, robots jump and transition into the Airborne state. When the robots jump, ballistic physics with air resistance neglected is used to determine their jumping velocity (Figure 3). This velocity is calculated from the jumping energy and take-off angle after noise has been applied. As each mechanism is identical, the ideal jumping energy used in a single jump, E, is an even fraction of E_tot. The noisy jumping energy E_n is calculated by multiplying E by a number sampled from the Gaussian distribution N(1.0, 1/9). Meanwhile, the noisy take-off angle α_n is obtained by sampling the distribution N(π/4, π²/144). The robot's jumping velocity u during the airborne state can then be calculated using the following equation,
u = sqrt(2 E_n / m) (cos α_n cos θ, cos α_n sin θ, sin α_n) − (0, 0, g t_a)   (1)
where t_a is the time the robot has been in the air. In these simulations, m = 50 g and g = 9.81 m s⁻². When the robot lands, its velocity is set to zero and it enters the Landed state. When landing, the robot's orientation is unpredictable, so in this work the robot's θ after landing is chosen at random between −π and π. If the number of jumps the robot has done is less than n_j, then the robot will return to the Waiting state. During these movements, robots can collide if the distance between them is less than the sum of their radii and they are either both in the air or both on the ground. If collisions are being considered in the particular simulation, then both robots involved in the collision are moved into the Immobilized state. In this state, the robot will not move any further, but is considered to be lying on the ground, where it can still perform sensing. This can be considered the worst result of a collision. In reality, it is likely that one or both of the robots involved in the collision would continue moving, if they had jumps left to perform.
Figure 3. A robot jumping from the origin. Jumping is modeled using projectile motion. The direction of the jumping velocity u is determined by a noisy take-off angle α_n and the robot's z rotation θ, which can be considered to be the orientation of the next jumping mechanism. The z component of u is a function of jumping time t_a, which is zero at take-off.

Performance Evaluation
To calculate the area covered by the robots, their positions are recorded over time during the simulation. A 10 by 10 grid of squares is then used to divide the area of interest into coverable sections. If any of the robots' centers lie within a grid square, then it is classified as being covered for sensing purposes; otherwise, the square is classified as uncovered. Coverage is then given as the percentage of all grid squares that are covered (see Figure 4).
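The paper's simulations were written in Python; the authors' code is not reproduced here, but the sketch below is a minimal re-implementation of the two ingredients just described, the noisy ballistic jump and the 1 m grid-square coverage metric, using the stated parameters (m = 50 g, take-off angle noise N(π/4, π²/144), energy noise N(1.0, 1/9)). Deployment timing and collisions are ignored, and only landing positions are scored, so the numbers it produces are only indicative. With the noise switched off, the jump formula reduces to the ideal jump length 2E/(mg), i.e. 2.33 m for E_tot = 4 J and n_j = 7, as quoted in the results below.

```python
# Minimal sketch of one stochastic jumper and of the grid-coverage metric.
# Re-implementation for illustration; parameter values follow the text above.
import math, random

M, G = 0.05, 9.81  # robot mass (kg) and gravitational acceleration (m/s^2)

def jump_displacement(E_ideal, theta):
    """Planar displacement (dx, dy) of one noisy ballistic jump."""
    E_n = E_ideal * random.gauss(1.0, 1.0 / 3.0)     # variance 1/9 -> std 1/3
    alpha = random.gauss(math.pi / 4, math.pi / 12)  # variance pi^2/144 -> std pi/12
    v = math.sqrt(2.0 * max(E_n, 0.0) / M)           # take-off speed from E_n
    rng = v * v * math.sin(2.0 * alpha) / G          # ballistic range on flat ground
    return rng * math.cos(theta), rng * math.sin(theta)

def simulate_robot(n_jumps, E_tot):
    """Landing positions of a single robot released at the origin."""
    x = y = 0.0
    landings = []
    for _ in range(n_jumps):
        theta = random.uniform(-math.pi, math.pi)    # random reorientation each jump
        dx, dy = jump_displacement(E_tot / n_jumps, theta)
        x, y = x + dx, y + dy
        landings.append((x, y))
    return landings

def coverage(positions, half_width=5.0, cell=1.0):
    """Fraction of 1 m grid squares of the 10 m x 10 m area containing a robot center."""
    covered = {(int((x + half_width) // cell), int((y + half_width) // cell))
               for x, y in positions
               if abs(x) < half_width and abs(y) < half_width}
    return len(covered) / (2 * half_width / cell) ** 2

all_positions = [p for _ in range(500) for p in simulate_robot(n_jumps=7, E_tot=4.0)]
print(f"covered: {coverage(all_positions):.0%} of the 1 m grid squares")
```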
Simulation Results
Here we present the insight from the simulations for the design of area coverage strategies using stochastic jumpers. Figure 5 shows how releasing up to N robots with E_tot = 4.0 J and n_j = 7 covers the 10 m by 10 m area of interest. As N is increased, the system's total coverage also increases in a nonlinear fashion. To design the correct robot for the area of interest, Figure 6 shows how varying the values of n_j and E_tot affects the system's final coverage performance for N = 500 robots. From Equation (1), it can be seen that straight lines that pass through the origin represent different ideal jump lengths. The large red area demonstrates the large design space that exists in choosing n_j and E_tot to achieve a high performance in both total coverage and coverage time. For example, a deployment of 500 robots which jump seven times (n_j = 7) and have an ideal jump length of 2.33 m (E_tot = 4 J) covers 91% of the area of interest in 540 s (shown bottom left in Figure 6). Meanwhile, a deployment of robots which jump 23 times (n_j = 23) but with a smaller ideal jump length of 1.24 m (E_tot = 7 J) is also able to cover over 90% of the area in a similar time of 574 s (shown top right in Figure 6). Across all of these simulations, the average deployment time was 556 s. The size of the robot will scale with the number of jumping mechanisms n_j and the size of these mechanisms. As larger mechanisms are able to store more strain energy, the size of the robot is also proportional to E_tot n_j. It is therefore noteworthy that low values (n_j < 5 and E_tot < 4.0 J) are able to cover the area well, leading to the possibility of using very small robots for the area coverage task. Introducing collisions between robots, as shown in Figure 7, demonstrates that inter-robot interference, while damaging at high values of E_tot and n_j, still leaves a large design space (shown in red) where the robots are able to cover more than 75% of the area. The main cause of this deterioration is robots landing on top of each other, causing clusters of immobilized robots to form. If these clusters also occur in close proximity to the deployment zone, robots are prevented from reaching the outer regions of the area, lowering total coverage. For short jump lengths, these clusters are more likely to form, as robots are less able to jump over one another. Robots with lower values of n_j perform better when considering collisions. We hypothesize that this is due to the lower number of jumps leading to fewer situations (mainly landings) in which the robot can enter into a collision. A mild improvement to the system's coverage performance was found by waiting 60 s between each robot being deployed. However, this led to a longer average deployment time of 8.49 h. Insight from these simulations shows that there exists a large number of combinations of jump numbers and jump lengths that allow 500 robots to cover more than 80% of the 10 m by 10 m area, with a resolution of 1 m. The amount of time taken varies between 556 s and 8.49 h depending on the time between robot releases.
(Figure caption: In these images, the area of interest is outlined in blue, and the yellow squares show the covered sections of the grid where at least one robot lies. In these simulations, collisions between robots have no effect.)
This time would also depend on the period between robot jumps, which were fixed at 1 s in these simulations. Stochastic Robot Design As a first step toward making large deployments of stochastic jumpers a reality, we present a series of prototypes capable of between 2 and 5 jumps. These prototypes fulfill the design requirements of area coverage, mass production, sensing, and potential for biodegradability. Each prototype design consists of cantilever beams arranged around a central circular area (Figure 8). The number of cantilever beams determines the number of jumps the robot will perform. When these beams are bent, they are capable of storing the required energy for a jump. This removes the need to use separate spring components, simplifying robot assembly ( Figure 9). Before being placed in the environment, the robots are preloaded with strain energy. This is achieved by inserting the tips of the beams into slots inside the central area (see Figures 9 and 10), which are then secured in place using 3D-printed water-soluble latches (polyvinyl alcohol, PVA). The water-soluble latches facilitate sequential jumps to be triggered by moisture in the environment (e.g., rain). The simplicity of the design opens up the possibility that it could be rapidly assembled by a robotic or human production line. Manual assembly of the current design for example takes less than a minute (as shown in Figure 10). The use of laser-cut scaffolds for the robot makes it easy to store the material, allowing for the production of large numbers of robots in a compact form. Currently the beams are constructed from acetal copolymer, which is not a biodegradable plastic. However, this sheet material could be replaced with a different compostable polymer [26] and enable the robot to be fully degradable. CAD designs can be found online. [27] Each prototype a-d) is capable of a different number of jumps (n j ) depending on how many cantilever beams are featured in the design. Each prototype was laser cut from a sheet of acetal copolymer. Jumping Mechanism The cantilever beams allow the robot to jump by releasing their stored strain energy and colliding with the ground. This converts some of the stored energy into kinetic energy of the robot body (see Figure 11). As shown by the earlier simulations, designing the robot with a certain ideal jump length is important to ensure good system performance. The jumping energy in the robot prototypes is controlled through the dimensions of the beam. The strain energy (W ) in an axially loaded beam can be approximated using Equation (3), derived from work in ref. [28] W where E ac is the Young's modulus of the beam material, d is the tip displacement, l is the beam length, b is the beam width, and t is the beam thickness. The beam material was chosen to be acetal copolymer (E ac ¼ 2800 MPa) due its low density and high yield strength. In the prototype, the distance between the beam ends when they are primed for jumping is essentially zero, making the displacement in the direction of loading equal to the beam length (d ¼ l), leading to Equation (3) becoming: The thickness t and length l of the beam were chosen based on the available material sizes, the dimensions of the laser cutting bed, and to minimize the stress in the material to avoid plastic deformation. The final values used were t ¼ 1.5 mm and l ¼ 165 mm. This leaves the beam width b as a free parameter which determines the energy stored in the beam. 
This was chosen to be b ¼ 22 mm, resulting in an energy per jump of 1.38 J according to Equation (4). This single mechanism (shown Figure 9. The robot is primed for jumping by bending the beams and securing them with PVA latches. Figure 10. a-d) Stills from the video of the robot being assembled in under a minute. First the beam is bent (a), then inserted in the body (b), and secured in place with a latch (c). This is repeated for each jumping mechanism on the robot (d). Full video available at https://youtu.be/2RLQSvjq33M. Figure 11. The jumping mechanism propels the robot into the air through the latch dissolving when in contact with water, releasing the compressed beam. As the beam unfurls, it collides with the ground, propelling the robot into the air. www.advancedsciencenews.com www.advintellsyst.com in Figure 12) is repeated around a central circular area to give the desired total number of jumps. Figure 8 shows the resulting designs for total jump numbers of 2À5 jumps. Environmentally Triggered Latches When the robot comes into contact with water (e.g., from rain), the latches dissolve (see Figure 13), eventually releasing the loaded beam and triggering the jump. The different thicknesses of latches cause the beams to release sequentially, allowing for consecutive jumps. The latches were 3D printed using a WANHAO i3 Mini and PVA filament. To characterize the time it would take for each latch to yield under load, a series of latch specimens of varying thickness were put in an experimental rig, as shown in Figure 14. The rig mimicked the loading conditions on the latch when loaded by the bent beam using a replica of the beam end and slot on the robot's body. The applied load (6.38 N) was chosen based on measurements made using a digital force meter (Fk-50 Sauter) and a beam from one of the robot prototypes. The latches were then submerged underwater and the time taken until the latch failed was measured. The results of these experiments (shown in Figure 15) demonstrate that varying the thickness of the latches can be used to precisely control their yield time, hence ensuring that jumps are released sequentially. In reality, the latches experience additional loading forces beyond just those from the bent beam including forces involved in robot assembly and during landing. Hence, to ensure sequential release, the thicknesses of the latches in the prototypes were increased in 1 mm increments with the thinnest being 1 mm thick. Sensory Coating The sensory coating of the robot allows it to communicate the presence of stimuli in the environment through the use of color change. To demonstrate this concept, the prototype robot was coated in thermochromic paint. Figure 16 demonstrates how the robot changes color in the presence of heat; this color change approximately happens at 31 C. These readings could then be recorded with an aerial photograph. Figure 17 demonstrates how an overhead image of deployed jumpers over an area can be used to locate a heat source by observing the robots' colors. In the future, larger areas could be imaged by combining many photos together that have been captured using a drone. [29] The sensory coating could also offer sensory information to other agents on the ground. In addition, various stimuli could be detected using colorimetric [30] or paper-based sensors. [31] These could be laminated on top of the sheet material used to construct the robot. 
Jumping Performance
The jumping performance of four different prototypes, each capable of a different number of jumps, was evaluated by carrying out a series of jumping trials within a flat experimental arena. Two cameras were used to track the robot's movement and also to measure jumping characteristics such as jump height and distance (see Figure 18). The side camera (FLIR Blackfly S BFS-U3-16S2M) had a framerate of 200 frames per second to capture the robot's motion during a jump. Meanwhile, a separate top-down camera (Mermaid MM-USB8MP02G-MFV) was used to accurately measure the robot's position before and after jumps. Image capture and processing was done using Python with the Spinview SDK [32] and the OpenCV library [33]. At the start of each experiment, the robot prototype under test was assembled and placed in the center of the arena. To avoid the effect of any material fatigue, freshly manufactured robots were used during each trial, and three trials were performed for each of the designs shown in Figure 8. Once the robot was in place, the recording software was activated and 50 mL of water was then added to the robot. Water was added to the robot periodically throughout the experiment to mimic how water would reach the robot outdoors (e.g., rain). During the experiment, the latches within the robot would yield once exposed to the water, causing the robot to perform a sequence of jumps. Top-down images were captured every minute to track the robot's movement. Meanwhile, side images were continually captured into a circular buffer that had the capacity to store 2.5 s of footage. When motion was detected in the side-view image, an additional 2 s of images were captured and then the entire buffer would be written to disk. This method was used due to the long timescale of the experiments and the limited speed at which images could be written to a hard drive. Typical footage of the robot jumping is shown in Figure 19.
Figure 13. The working principle behind the PVA latches. a) A 500 g load is placed on a 1 mm-thick PVA part. The applied load represents the applied force from the bent beam. In b-d) 50 mL of water was applied to the part every 20 min, leading to the PVA dissolving and the part failing after 85 min.

Experiment Calibration and Measurement
Once the robot had performed all its jumps, the images from both cameras were processed and labeled to obtain the measurements of interest. Processing the images from both cameras consisted of discarding irrelevant images that did not show robot motion and then removing distortion from the remaining images using each camera's distortion coefficients. These were established by capturing a series of images of a chessboard before the experiment. The top-down images underwent an additional processing step where they were reprojected onto the coordinate space shown in Figure 20 to align the grid axes and image axes. Pixel positions in the resulting images (x_t, y_t) were then converted to real-world positions (x_a, y_a) using the following equations, which are derived from the known size of the processed image and the grid on the arena floor:
x_a = (x_t − 500 px)/(2000 px) × 820 mm   (5)
y_a = (1 − (y_t − 500 px)/(2000 px)) × 820 mm   (6)
The labeling process involved selecting the pixel position of the center of the robot within top-down or side images. These positions were then used to calculate jump height and distance measurements.
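A direct transcription of Equations (5) and (6) into Python is given below. The 500 px offset, 2000 px span and 820 mm grid size are the values stated above; the example point at the end is made up purely for illustration.

```python
# Convert a pixel position (x_t, y_t) in the reprojected top-down image into
# arena coordinates (x_a, y_a) in millimetres, following Equations (5) and (6).

OFFSET_PX = 500.0   # offset of the grid origin in the processed image
SPAN_PX   = 2000.0  # pixel span corresponding to the full grid
GRID_MM   = 820.0   # physical size of the arena grid

def pixel_to_arena(x_t: float, y_t: float) -> tuple[float, float]:
    x_a = (x_t - OFFSET_PX) / SPAN_PX * GRID_MM          # Eq. (5)
    y_a = (1.0 - (y_t - OFFSET_PX) / SPAN_PX) * GRID_MM  # Eq. (6), y axis flipped
    return x_a, y_a

# Hypothetical labeled robot centre, for illustration only:
print(pixel_to_arena(1500.0, 1500.0))  # -> (410.0, 410.0), the middle of the grid
```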
For each jump, the robot's central position in the top-down image was manually labeled both before and after the jump (P1 and P2 in Figure 21). These pixel positions were then converted into the arena coordinate system to find the robot's planar trajectory and jump length. Measuring jump height required finding the distance between the position of the robot at the peak of its jump and the ground underneath the robot at this time. These positions were obtained using both the top-down and side-view images, as shown in Figure 21. The peak position of the robot was found in the side image and labeled manually in the frame where the robot was at the peak of its jump. The ground position in the side view was difficult to determine accurately by eye. Hence, the ground position was calculated by assuming the robot followed a ballistic trajectory and would be at its peak height when it had travelled halfway from its starting position to its ending position, which had previously been labeled in the top-down images. In reality, the robot often bounced a small distance away from its initial landing spot. However, this distance was found to be negligible compared with the distance of the jump. To convert the midpoint position in the top-down image into a ground position in the side image, a homography between the two images was used. It was assumed that the arena grid was planar. Hence, the homography H between the top-down and side images could be found by selecting corresponding points in the arena grid within both images. Once the ground and peak positions had been found, the pixel distance along the y axis (Δy_s) then needed to be converted to a height (z_a) using a conversion factor (m):
z_a = m Δy_s
As the robot changes its distance from the camera during the experiment, the conversion factor m depends on the position of the robot in the arena. The side camera was carefully aligned using a spirit level so that the image plane was parallel to the x_a z_a plane. This allowed the conversion from pixels to millimeters to be represented as a linear function of the robot's y position in the top image (y_t). To find the two calibration constants c_1 and c_2, a vertical jig marked with two targets was moved around the grid while images from both cameras were captured. The targets on the jig were manually labeled and the physical distance between them was known. This gave m for various positions in the grid. Then, c_1 and c_2 could be found by fitting a linear regression model to the data with high accuracy (R² = 0.99), as shown in Figure 22. The error that this calibration produces against the known height of the top target (416.5 mm) and bottom target (116.5 mm) as the tool was moved around the grid is shown in Figure 23. The mean of the error for both targets is close to zero (μ = 0.610 mm). Meanwhile, the measurements made by the system can be said to be within ±4 mm based on three standard deviations (3 × 1.3 ≈ 4 mm).
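The homography step described above maps the labeled mid-point of the jump from the top-down view into the side view, after which the height follows from z_a = m Δy_s. The OpenCV calls below (the library the authors cite) sketch that pipeline; the grid correspondences, the linear form m = c_1 y_t + c_2 and all numeric values are placeholders, since the actual calibration data are not given in the text.

```python
# Sketch of the height-measurement step: map the jump mid-point from the
# top-down image into the side image with a planar homography, then convert
# the pixel distance between ground and peak into millimetres with the
# position-dependent scale factor m. Placeholder values, not calibration data.
import numpy as np
import cv2

# Corresponding arena-grid points picked in both views (placeholders).
pts_top  = np.array([[500, 500], [2500, 500], [2500, 2500], [500, 2500]], dtype=np.float32)
pts_side = np.array([[100, 900], [1500, 880], [1480, 1020], [120, 1040]], dtype=np.float32)
H, _ = cv2.findHomography(pts_top, pts_side)

def ground_point_in_side(midpoint_top):
    """Project the top-view mid-point of the jump onto the side view."""
    p = np.array([[midpoint_top]], dtype=np.float32)   # shape (1, 1, 2)
    return cv2.perspectiveTransform(p, H)[0, 0]

def jump_height_mm(peak_y_side, ground_y_side, y_t, c1=0.0, c2=0.5):
    """z_a = m * dy_s, with m modelled as a linear function of y_t (placeholder c1, c2)."""
    m = c1 * y_t + c2                                   # mm per pixel at this position
    return m * abs(ground_y_side - peak_y_side)
```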
Figure 18. The experimental setup used to measure the prototype's performance inside an arena. The setup consists of two cameras that are used to track the robots' movement and a computer used to store the captured images. A grid taped to the arena floor is used to calibrate the system so that measurements can be made by converting between pixels and millimeters. The coordinate system (x_a, y_a, z_a) used for measurements is also shown.
Figure 19. Stills from footage of the prototypes performing their first jump. Full video available at https://youtu.be/FTCM2WkV7Â4.

Experimental Results
The results from the jumping trials are detailed below. First, Figure 24 shows the trajectories of the robots throughout the arena across all trials. Each robot executed its jumping sequence successfully and was able to move away from the starting area. All the robots were able to jump regardless of the orientation they landed in. This included the n_j = 2 design, which was prone to falling on its side. In one particular trial, this design landed on its side after its first jump and then was able to move a further 122 mm during its second jump. This design also left the arena in two trials and had to be placed back into the center of the arena so its second jump could be measured. The first jump of these trials could not be measured accurately; however, they do show that this design is capable of jumping distances larger than half the size of the arena (>560 mm). The jump heights and jump distances achieved by the prototypes are shown in Figures 25 and 26, respectively, with the greatest height (567 mm) and distance (>560 mm) achieved by the n_j = 2 design. The largest measured distance (475 mm) was achieved by the n_j = 3 design. Both jumping distance and height decreased as the number of jumps the design could perform increased. This can be explained by the fact that the energy per jumping mechanism is constant, while the mass of the robot increases by around 8.5 g with each jumping mechanism added. It also appears that, when comparing earlier jumps to later jumps, the earlier jumps achieve greater heights (see Figure 27) and travel further on average (see Figure 28). This could be due to a number of factors. First, the shape of the robot's body changes as beams are unfurled, altering its mass distribution. In addition, when observing the footage of the later jumps, there is a noticeable increase in oscillations in the robot body. This could be due to the fact that the stiffness of the robot structure is lower during these later jumps, so energy from the jump goes into deforming the robot body and not into the jumping motion. Furthermore, during later jumps, the robot has a larger surface area in contact with the ground, leading to increased adhesive forces between the robot body and water on the arena floor. The time taken for each latch to release the jump is shown in Figure 29. The first latch (with a thickness of 1 mm) released the beam at around 0.938 h on average, while the thickest latch (5 mm) had an average release time of 5.87 h.

Outdoor Demonstration
To demonstrate the potential of the system to operate outdoors, we performed a jumping trial of one of the prototypes in rugged terrain, as shown in Figure 30. The prototype was able to execute its jump sequence successfully and traverse various obstacles. This demonstration is a first step toward deploying stochastic jumpers outdoors.

Discussion
We have taken the concept of stochastic jumpers from simulation through to a first prototype design. The prototype achieves many features of the simulated jumpers, including jumping motion, the sequential release of a finite number of jumps, and random reorientation. However, the prototypes do differ from the simulated robots in a number of ways.
First, the simulated robot's jumping distance was independent of the number of jumps and the jump index. However, in the prototype designs, this is not the case. Furthermore, the jumping distances achieved by the prototypes were smaller than the simulated robots that were able to cover a 10 m by 10 m area. The length of time it takes the robot prototype to complete each jump is also longer than in the simulation. However, this could be acceptable in scenarios where the speed of the deployment is not important. Deployment times could also be reduced by releasing the robots in parallel. Releasing robots from multiple points in parallel would also allow the system to cover larger areas, as shown in Figure 31. This preliminary work shows how over 80% of a 100 m by 100 m area can be covered using as few as 25 deployment points. Future work will examine how best to choose these deployment points. To improve on the limitations of the current prototypes, there are a number of avenues for future work. The jumping performances could be improved by investigating various beam cross sections to improve the efficiency of strain energy stored per gram. A hammer-like element attached to the end of the beam may also help in energy transfer by ensuring beam contact with the ground. The latch structure and material could also be investigated further. Including keystone elements into the latch structure could decrease the period between jumps dramatically. These latches would remain strong while the keystone element is in place, but their strength would rapidly decrease once the www.advancedsciencenews.com www.advintellsyst.com keystone element dissolved. Alternatively, the material of the latches could be triggered by the presence or absence of a compound of interest in the environment. Responsive hydrogel [34,35] could be used to make latches respond to various stimuli such as pH and temperature. This could allow the robots to sense their environment and physically move based on whether a triggering material is present. In addition, latches are not limited to controlling jumping but could also release a payload when triggered. Combining these two behaviors could create a system of robots which would accumulate in certain areas and then selectively release fertilizers or a remedial agent into its vicinity over a long timescale. Robots could also release a payload that interacts with other robots for the purposes of communication. This could lead to swarm-like behaviors and improve the system's performance. For example, a payload released by one robot could cause nearby robots to jump. This could move robots away from each other, distributing robots more evenly over the area. Another possible application of stochastic robots could involve the system spreading out over an area to act as localization beacons for more sophisticated robots in noisy hazardous environments. Conclusion This work presents the first steps toward using large numbers of randomly jumping robots to cover an area of interest. Simulations demonstrate the flexible design space which would allow many robot configurations, with different numbers of jumps and total stored elastic energy, to achieve good coverage. For example, 500 robots can achieve over 90% area coverage with the robot design biased toward either reduced number of jumps (7 total jumps) or reduced jump length (1.24 m). 
The demonstrated robot prototypes contain all the core functionalities of the simulated system, including preloaded sequential jumps, environmental triggering and sensing, ease of production, low cost, and potential for biodegradability. Future work will focus on scaling up the system toward outdoor demonstrations. Figure 31. Snapshots of preliminary simulations that use multiple release points to increase the area covered by the system. Jumpers that are able to cover the 10 m by 10 m area (n j ¼ 7 and E tot ¼ 4) are introduced in equal numbers from multiple deployment points (shown by blue crosses). The 100 m by 100 m area is then covered by 50 000 robots using 25 deployment points (left) and 100 deployment points (right). As with previous simulations, the area is divided up into 1 m by 1 m squares, with covered squares shown in yellow alongside the percentage of total squares covered. These simulations do not consider collisions.
Development of a Polyethylene Glycol/Polymethyl Methacrylate-Based Binder System for a Borosilicate Glass Filler Suitable for Injection Molding Powder injection molding is an established, cost effective and often near-net-shape mass production process for metal or ceramic parts with complex geometries. This paper deals with the extension of the powder injection molding process chain towards the usage of a commercially available borosilicate glass and the realization of glass compounds with huge densities. The whole process chain consists of the individual steps of compounding, molding, debinding, and sintering. The first part, namely, the search for a suitable feedstock composition with a very high solid load and reliable molding properties, is mandatory for the successful manufacture of a dense glass part. The most prominent feature is the binder composition and the related comprehensive rheological characterization. In this work, a binder system consisting of polyethylene glycol and polymethylmethacrylate with stearic acid as a surfactant was selected and its suitability for glass injection molding was evaluated. The influence of all feedstock components on processing and of the process steps on the final sintered part was investigated for sintered glass parts with densities around 99% of the theoretical value. Introduction Inorganic glass has evolved over time from a building and packaging substance to an increasingly important high-tech material.This includes, for example, use of fiber optics in information technology.In addition, glass is playing an increasingly important role in the areas of health and energy production.Glasses can be adapted to almost any potential application due to the almost infinite possibilities of glass composition [1,2].A significant disadvantage of the current glass processing methods is that all shaping processes happen in the molten state and require an enormous amount of energy [3,4].One promising method to reduce the required energy is via the use of powder technology replication methods like the established injection molding, which was originally invented for the shaping of thermoplastics but whose use through the years has extended to polymer matrix composites.Injection molding allows the production of plastic components with high dimensional accuracy and complex geometry [5,6].Over the course of time, this process has been further extended to include the material range of ceramics and metals after thermal post-processing by powder injection molding (PIM) [7][8][9][10][11][12].The abbreviation PIM represents a process chain, where the metal or ceramic filler is embedded in a thermoplastic matrix, called a binder, and molded.After removal of the binder (debinding), the "shaped" powder is sintered to obtain the final metallic or ceramic component.Nowadays, the PIM process has a particular significance as a manufacturing technology for large quantities of metal and ceramic components with high geometric accuracy [13].In contrast to "classic" liquid glass processing, the sintering process is carried out at a temperature of only 60-70% of the absolute melting temperature [8].Quite surprisingly, glass injection molding (GIM) is still in its infancy.There are currently only a few publications on glass injection molding [14][15][16][17].Mader et al. 
used a pure, fused silica glass with a binder system consisting of the partially water-soluble polyethylene glycol (PEG) and polyvinyl butyral (PVB) for injection molding [14].They achieved excellent part properties, but the used initial nanosized silica filler and the applied feedstock preparation are not suitable for mass fabrication due to the intermediate wet processing, causing elevated costs [14].Hidalgo et al. used recycled glass from food packaging [15].For this purpose, a binder system of low-density polyethylene (LDPE), paraffin wax (PW), and stearic acid (SA), which is commonly applied in ceramic injection molding (CIM), was used [15].They investigated feedstocks with a glass load between 55 and 70%, but phase separation occurred at loadings > 65%.After processing, they achieved density values around 90% of the theoretical density, which is quite low compared to ceramics like alumina.Sample transparency could not be achieved [15].Giassi et al. investigated the feedstock formation and injection molding of a glass-ceramic filler system.The binder consisted of polypropylene, wax, and SA as a surfactant.The solid load varied between 50 and 60 Vol%.Finally, they achieved density values of 97% of the theoretical density [16].Enriquez and coworkers also researched the powder injection molding of glass ceramics in a high-density polyethylene (HDPE)/wax/SA binder [17].The applied solid content was 45-70 Vol%.After processing, they obtained a density around 97% of the theoretical value [17].In [15][16][17], wax was used as the binder component, which must be removed after shaping, with hexane as the solvent in a liquid pre-debinding step.However, the use of hexane should be avoided due to a pronounced lack of sustainability and serious health issues. For ceramics and metals, several binder systems are described in the literature, e.g., the mixture of polyethylene and wax in combination with SA as a surfactant has been widely used in PIM [18][19][20].In addition, various binders based on partially water-soluble polymers such as PEG have been applied for environmental reasons, avoiding the abovementioned problematic hexane removal issue.A wide variety of materials have already been successfully molded with binder systems consisting of PEG/PVB or PEG and polymethyl methacrylate (PMMA) [21][22][23][24][25].A quite recent overview of powder-binder systems used in injection molding can be found in [26].In general, and as stated above, wax as the binder component should be substituted, e.g., by water-soluble components like PEG, to enable a more environmentally friendly liquid debinding using water. 
With respect to the increasing relevance of sustainable material selection and processing, this paper describes a glass feedstock consisting of a commercial borosilicate glass with a partially water-soluble binder containing PEG as the low molecular mass polymer, PMMA as the large molecular mass polymer, and SA as the surfactant.This development, among others, is targeted to achieve higher sinter densities than the 97% described in the literature.This binder selection follows the previous work in PIM and ceramic or metal part realization by powder-based material extrusion (MEX) additive manufacturing [27,28].The PMMA serves as the backbone polymer, which gives the component the necessary mechanical stability after molding (also denoted as green body stability).The water-soluble PEG reduces melt viscosity (plasticizer), enabling PIM of the glass filler-containing feedstock.The SA serves as the dispersant and ensures good wetting of the glass filler by the organic binder polymers.SA also acts as an agent promoting release from the metal mold inserts during demolding.The borosilicate glass powder applied here has already been used in PIM [29] and in additive manufacturing [30] applying commercial binders.This early work will now be extended towards a systematic binder development with variable solid loads, average molar masses of the backbone polymer PMMA, as well as of the water-soluble PEG, PEG/PMMA ratios, and SA amounts.In all cases, the impact of this parameter variation on the rheological feedstock properties, on the molding behavior, as well as on the final sintered part properties will be discussed in detail, enabling the determination of a clear process parameter-properties relationship for a robust GIM process chain.The process chain consists of the following individual steps: (a) feedstock preparation (compounding), including material selection; (b) comprehensive rheological characterization with respect to feedstock composition, shear rate, and temperature; (c) debinding, covering the steps of liquid pre-debinding and thermal debinding; (d) sintering, including characterization of the sintered part. The most important process step is compounding, because the solid load must be as high as possible to obtain a final dense part, which may result in a high feedstock viscosity.For complete mold filling, viscosity should be small, which is accompanied by a lower solid load.These contractionary requirements can only be solved by a careful binder material selection and comprehensive evaluation of the binder amount.Feedstock viscosity is mostly determined by the solid load or, in more detail, by the specific surface area (SSA) of the filler used.The SSA is the interface between the inorganic solid and the organic polymer matrix [31].In the case of micro-sized ceramic fillers, such as Al 2 O 3 or ZrO 2 with typical SSA values around 6-12 m 2 /g, a solid loading of around 45-55 Vol% ensures a good molding behavior [18,27,31].Use of micro-sized metal fillers with SSA values significantly below 1 m 2 /g enables higher solid loads between 60 and 65 Vol% [28].The final part quality is limited by the necessity of finding a suitable feedstock meeting the above-mentioned requirements.More details on the selected materials and the process parameters will be given in the following sections. 
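The filler-dependent loading windows quoted above (roughly 45-55 vol% for micro-sized ceramic fillers with SSA around 6-12 m²/g, and 60-65 vol% for metal fillers with SSA well below 1 m²/g) can be written down as a crude rule of thumb. The helper below only encodes those cited ranges and is not a predictive model; notably, the borosilicate glass used here (SSA 1.6 m²/g) falls between the two cited windows, which is exactly why its feasible solid load had to be evaluated experimentally.

```python
# Rough rule of thumb for a feasible solid loading window as a function of the
# filler's specific surface area (SSA), encoding only the ranges cited above.
# Not a predictive model: real feedstocks require rheological verification.

def solid_load_window(ssa_m2_per_g: float):
    """Approximate feasible solid loading range in vol%, from the cited literature values."""
    if ssa_m2_per_g < 1.0:             # micro-sized metal fillers
        return (60.0, 65.0)
    if 6.0 <= ssa_m2_per_g <= 12.0:    # micro-sized ceramic fillers such as Al2O3 or ZrO2
        return (45.0, 55.0)
    return None                        # in between (e.g. this glass, 1.6 m2/g): no cited window

print(solid_load_window(0.5), solid_load_window(8.0), solid_load_window(1.6))
# (60.0, 65.0) (45.0, 55.0) None
```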
Material Selection
In the continuation of research work reported in the literature [29,30], a commercial, irregularly shaped glass powder (Schott 8250) with a density of 2.3 g/cm³, an average particle size (d50) of 10.2 µm, and an SSA of 1.6 m²/g was used. The irregular morphology is obvious from Figure 1.
A series of different new glass feedstocks were developed, containing mixtures of the water-soluble PEG, PMMA, and SA. The amount of SA was calculated in relation to the glass filler-specific surface area (mg/m²). Increasing SA amounts were compensated by an accordingly reduced PMMA fraction. In addition, different PMMA and PEG types with different molecular masses were investigated (Table 1). To evaluate the impact of the feedstock composition on processing conditions as well as on the final glass part properties, the solid powder load, the average molar mass M_W of PEG and PMMA, the ratio of PEG and PMMA in the binder, and the SA concentration were varied systematically. Previous work that dealt with the development of feedstock containing Ti6Al4V powder as a filler focused on PEG with different M_w, G7E PMMA, different PEG/PMMA ratios, and SA as surfactant [28]. The results obtained were considered when investigating the feedstock composition; details can be found in the subsections of Section 3.

Compounding and Rheological Characterization
Prior to any shaping or replication, a set of basic processing and feedstock characterization steps is required. As a standard method, compounding was performed in a torque-recording compounder (W50 EHT; Brabender, Duisburg, Germany). It allows for in-line torque recording during mixing to visualize the compounding progress at a given temperature over time. For the feedstocks based on PMMA G7E, G77, and Sigma 120k, a mixing temperature of 160 °C was set. For better comparison, a temperature of only 125 °C was necessary for the Sigma 15k feedstock due to its significantly lower molecular mass and, as a consequence, lower melt viscosity. All feedstocks were mixed for 1 h with a mixing speed of 30 rpm. As in previous work, e.g., in [28], the mixing chamber volume was 45 cm³. Successful compounding as a function of the feedstock composition can be obtained directly from the torque vs.
time curve by considering the final equilibrium torque value.In this way, the limits of this technique can also be derived [31].After compounding, the feedstocks were characterized using a high-pressure capillary rheometer at 170 • C for the high-molecular-mass PMMA-based systems and at 120-140 • C for the Sigma 15kcontaining mixtures, again with the exception of feedstock 4 for better comparison.These temperatures were almost identical with those used in injection molding.The rheological characterization was performed with a high-pressure capillary viscosimeter (Rheograph 25; Göttfert GmbH, Buchen, Germany).The used capillary had a diameter of 1 mm and a length of 30 mm.The shear rate varied between 10 and 3500 s −1 .The rheological data obtained allowed conclusions to be drawn as to whether the feedstock was homogeneous and suitable for injection molding. Glass Injection Molding For tests, green bodies with a diameter of 10 mm and a thickness of 2 mm were fabricated from all feedstocks using an injection molding machine designed for small and micro-sized parts needing only small amounts of feedstock (Microsystem 50; Battenfeld, Kottingbrunn, Austria).Depending on the feedstock composition, different molding parameters were chosen.The necessary dimensional stability was guaranteed by a holding pressure during cooling prior to demolding. Debinding The green bodies were debinded in two different ways.First, the binder components were removed using a thermal treatment at elevated temperatures.Second, liquid predebinding in de-ionized water was combined with a subsequent thermal treatment.Liquid pre-debinding allows for PEG recycling and further usage.For complete PEG removal, the necessary liquid debinding time and temperature were varied.In the case of thermal debinding, the focus was placed on the generation of defect-free test structures, which requires small heating rates.The experiments were performed using a Carbolite HT/28 (Carbolite, Neuhausen, Germany) chamber oven.The green bodies were placed onto alumina sintering plates.The selected temperatures, heating rates, and dwell times for PEG, PMMA, and SA were taken from [32].After thermal debinding, the test structures are called brown bodies by convention. Sintering and Further Densification by Hot Isostatic Pressing (HIP) The debinded brown bodies were sintered under two different conditions.First, standard atmospheric conditions were chosen when applying a Carbolite HTF17/5 (Carbolite, Neuhausen, Germany) chamber oven.Second, sintering was carried out in a vacuum oven MUT ISO 350/300-2400 W (MUT-Jena, Jena, Germany).In all cases, the test structures were placed onto alumina sintering plates.Some sintered glass samples were additionally treated by hot isostatic pressing (HIP 3000; Dieffenbacher, Eppingen, Germany) for further densification by void removal in the case of closed porosity.In any case, a temperature of 550 • C, a pressure of 100 MPa, and a dwell time of 60 min were selected. 
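The recipes in this work specify the solid load in Vol% and the SA dosing in mg per m² of filler surface; translating such a recipe into the masses charged into the 45 cm³ mixing chamber is a short calculation. The sketch below is illustrative only: it assumes a completely filled chamber, a PEG/PMMA ratio taken by mass, and nominal densities for PEG, PMMA, and SA, none of which are stated in the text.

```python
# Illustrative batch calculation for one compounding run (assumptions noted above).

RHO_GLASS = 2.3      # g/cm^3, Schott 8250 (from the text)
SSA_GLASS = 1.6      # m^2/g (from the text)
RHO_PEG, RHO_PMMA, RHO_SA = 1.2, 1.18, 0.94   # g/cm^3, nominal values (assumption)

def batch_masses(solid_load_vol, sa_mg_per_m2, peg_mass_ratio, chamber_cm3=45.0):
    """Return masses (g) of glass, SA, PEG, and PMMA for one batch."""
    v_glass = solid_load_vol * chamber_cm3            # cm^3 of glass powder
    m_glass = v_glass * RHO_GLASS                     # g of glass powder
    m_sa = sa_mg_per_m2 * SSA_GLASS * m_glass / 1000  # mg/m^2 * m^2 -> g of SA
    v_rest = (chamber_cm3 - v_glass) - m_sa / RHO_SA  # volume left for PEG + PMMA
    r = peg_mass_ratio                                # e.g. 0.5 for a 50:50 mass ratio
    # r = m_peg / (m_peg + m_pmma); solve for the two masses from the remaining volume
    m_pmma = v_rest / (r / ((1.0 - r) * RHO_PEG) + 1.0 / RHO_PMMA)
    m_peg = m_pmma * r / (1.0 - r)
    return m_glass, m_sa, m_peg, m_pmma

if __name__ == "__main__":
    g, sa, peg, pmma = batch_masses(0.60, 25.0, 0.50)   # 60 Vol%, 25 mg/m^2 SA
    print(f"glass {g:.1f} g, SA {sa:.2f} g, PEG {peg:.1f} g, PMMA {pmma:.1f} g")
```

Such a helper mainly makes explicit how strongly the SA mass scales with the filler surface area, which is why the dosing is quoted per m² rather than per gram of binder.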
Sintered Glass Part Characterization
The sintered glass parts were characterized by means of different methods. The final density after sintering or HIP was measured according to Archimedes' principle by applying a Sartorius YDK01 balance (Sartorius, Göttingen, Germany). Between 2 and 12 samples were considered. The surface appearance and the inner structures of the glass parts were evaluated by SEM. For microscopy, the samples were embedded and then ground with a Saphir 550 (QATM, Mammelzen, Germany). Grinding was carried out in four steps with water: first, the samples were ground flat with 46 µm paper and then with 30, 16, and 10 µm paper for 30 s each. Polishing was carried out for 30 min using 6 µm and 3 µm silk cloths with a diamond paste. The SEM measurements were then obtained with a Supra 55 FE-SEM (Zeiss, Oberkochen, Germany) at an accelerating voltage of 10 kV. Additional CT scans of selected samples (Phoenix v tome xs; General Electric, Frankfurt, Germany) provided information on the presence of inner defects like voids or cracks. The accessible spatial resolution was 15 µm (measuring time: 100 ms; voltage: 10 kV; current: 120 µA). Optical transmission measurements were carried out using a UV/Vis spectrometer (SPECORD S 600; Analytikjena, Jena, Germany).

Feedstock Compounding and Melt Flow Behavior
In the following sections, the influence of the binder composition on the compounding process as well as on the rheological behavior will be discussed comprehensively. Extensive feedstock development took place parallel to the injection molding trials in iteration loops to adjust the replication-relevant feedstock properties, such as viscosity for complete mold filling and green body stability for successful demolding. For a better overview, feedstocks with common features will be discussed in the subsections covering a systematic variation in individual binder components.

Influence of the PMMA's Average Molecular Mass
To cover a wide range of different average molecular masses, feedstock systems 1-4 were prepared with a constant solid load (50 Vol% borosilicate glass), constant PEG type (PEG 8000), constant PEG/PMMA ratio (50:50), and constant SA amount (4.4 mg/m²) (see Table 2). They will be discussed below. Thanks to the use of an in-line torque-recording mixer-kneader, feedstock homogeneity during compounding could be validated [31] based on the shape of the curve and the absence of any signal scattering at the end of the stationary state. From previous experience, a final torque of less than 20 Nm ensures good injection moldability [31]. Figure 2a shows the compounding behavior and Figure 2b shows the change in the melt viscosity versus the shear rate for the four feedstocks listed in Table 2. Feedstock 4 shows the three typical states of a compounding curve as described in the literature (Figure 2a) [31]:
• Filling state: pronounced torque increase caused by filling all materials (PEG, PMMA, SA, and glass powder) into the kneader and pronounced friction between the solid glass particles prior to wetting.
• Mixing state: drop in the torque curve due to the disruption of agglomerates and particle wetting by the different binder components, especially the surfactant.
• Equilibrium (stationary) state with a stable final torque value reflecting a homogeneous feedstock and good moldability.

The other three feedstocks 1-3 exhibited a more complex behavior. After filling, the torque decayed, followed by another torque increase to a stable final value that was higher than that of feedstock 4 containing the PMMA with the very low MW. The second torque rise observed may be explained by the morphology and the MW values of the added PMMAs. While PMMA 120k consists of small plates, G77 and G7E are standard pellets with a typical diameter of ~2 mm and a length around 3-4 mm. PMMA 15k is a fine powder that can be fused easily at the elevated compounding temperature. In the case of the other PMMAs, the plates and pellets must be liquefied prior to particle wetting, which explains the delayed equilibrium state. The molecular mass of the PMMAs used has a pronounced impact on the compounding process [32]. The sequence of the final torque values correlates directly with the order of the MW values of the PMMAs used. An increasing MW causes a higher equilibrium torque due to polymer chain entanglement. This higher torque is equivalent to the enhanced inner friction that must be overcome by the mixer-kneader equipment during compounding. Figure 2b shows the related melt viscosity measurement. In all cases, a pronounced pseudoplastic flow can be detected. In accordance with the observation made for compounding, the melt viscosity of feedstock 4 is lower by almost one order of
magnitude than the melt viscosities of mixtures 1-3, which can also be attributed to the low MW. In agreement with the compounding results, feedstock 2 reaches the highest melt viscosity in the whole shear rate range investigated.

Influence of the PEG's Average Molecular Mass and Stearic Acid Amount
In the previous subsection, it was shown that the combination of G7E with PEG 8000 and a solid load of 50 Vol% was difficult to compound. G7E is a widely used commercial PMMA type (old tradename Degalan G7E). To reduce material costs, it is recommended to use such a standard common polymer. For successful process chain development, the glass filler load was raised up to 60 Vol%, which is helpful in sintering. This solid load increase generally results in a melt viscosity increase [28]. For this reason, the PEG/PMMA ratio was shifted to higher PEG amounts, which causes a viscosity drop and facilitates compounding. To investigate the influence of the PEG's average molecular mass MW and the amount of stearic acid on compounding as well as on the melt flow behavior, feedstocks 5-10 were used (Table 3). The low-cost G7E was kept at a constant solid load of 60 Vol%. The variation in the PEG's MW was investigated to ensure a melt viscosity suitable for powder injection molding, needed for complete mold filling at moderate temperatures, together with sufficient mechanical stability during demolding. Another possibility of viscosity adjustment and improved feedstock homogeneity was to choose an appropriate surfactant (SA) concentration. Initially, a value of 4.4 mg/m² was selected, which had been found to be well suited for ceramics [18]. It was increased up to 20 mg/m², which corresponds to approximately 3 wt.% of the binder. This high value has been proven to be useful for metals in the literature, especially if the particles are larger and possess a smaller specific surface area [29] compared to the glass filler used here. The significantly higher PEG content in the feedstock suppressed the previously observed (Figure 2a) retarded PMMA melting, and compounding curves with the three typical stages explained earlier were obtained (Figure 3a). As regards binders with identical SA amounts but different PEG types, such as systems 5 and 8, it was found that the binder with the PEG of lower molecular weight produced lower final torque values. When the PEG type and amount are the same, an increase in SA also leads to a torque drop. This is also obvious from the melt rheology measurements. The decrease in viscosity as a result of an increasing SA content is not exclusively due to the better wettability of the glass particles with the binder itself, but also to the fact that the feedstock then contains less PMMA. In addition, the torque decreases when using a PEG of lower molecular mass. This can be attributed to the shorter, less entangled molecular chains of PEG 8000 compared to PEG 20,000, enabling better polymer chain sliding at elevated temperatures, which leads to a lower viscosity at a given temperature (Figure 3b). The influence of the PEG's average molecular mass on compounding and melt rheology was also observed in feedstocks containing Ti alloy [28]. The impact of increasing SA amounts on the melt viscosity is more pronounced than that of the variation in the PEG's MW. The PEG type used as well as the SA content are therefore powerful parameters for the optimization of the injection molding process. As in the previously investigated feedstocks 1-4, a pronounced pseudoplastic melt flow can be observed, which supports injection molding.

Influence of the PEG/PMMA Ratio
As described in the previous subsection, the variation in the PEG/PMMA ratio allows for an adjustment of the melt viscosity [28]. Increasing PEG amounts reduce the mechanical stability of the green body. It is therefore recommended to use a PEG with a higher MW. Based on the binder combination of PEG 20,000/PMMA G7E with an SA content of 10 mg/m² and a solid load of 60 Vol%, the influence of the PEG/PMMA ratio on compounding and melt rheology was studied (Table 4). Figure 4 represents the influence of the PEG/PMMA ratio on torque (a) and melt viscosity (b). The torque curve at the PEG/PMMA ratio of 65:35 shows a typical behavior, while the feedstock with the higher PMMA amount (PEG/PMMA ratio of 50:50) exhibits a non-ideal behavior prior to reaching the equilibrium stage. The PMMA needs longer to liquefy, as it is added in the form of large pellets. After that, the torque decreases due to agglomerate wetting, followed by glass agglomerate disruption causing a torque increase. The higher amount of PMMA results in a pronounced equilibrium torque. The same tendency can be found in the melt rheology. The feedstock with the lower PMMA amount (PEG/PMMA ratio of 65:35) ensures a significant viscosity drop over the whole shear rate range. Again, a clear pseudoplastic flow can be seen.
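The repeatedly observed pseudoplastic (shear-thinning) behavior can be made quantitative by fitting the measured viscosity curves with a power-law (Ostwald-de Waele) model, in which the viscosity scales with the shear rate to the power (n-1) and a flow index n < 1 indicates shear thinning. The text does not state that such a fit was performed; the sketch below, with invented data points inside the reported 10-3500 s⁻¹ window, only illustrates how a flow index would be extracted from Rheograph 25 data.

```python
import numpy as np

# Illustrative only: fit eta = K * gamma_dot**(n - 1) to capillary-rheometer data.
# The data points below are placeholders, not measured values from the paper.
gamma_dot = np.array([10, 30, 100, 300, 1000, 3500], dtype=float)   # shear rate, 1/s
eta = np.array([900, 520, 300, 170, 95, 55], dtype=float)           # viscosity, Pa*s

# ln(eta) = ln(K) + (n - 1) * ln(gamma_dot): linear least squares in log-log space
slope, intercept = np.polyfit(np.log(gamma_dot), np.log(eta), 1)
n = slope + 1.0          # flow index; n < 1 means pseudoplastic (shear-thinning) flow
K = np.exp(intercept)    # consistency index in Pa*s^n

print(f"flow index n = {n:.2f}, consistency K = {K:.0f} Pa*s^n")
```

A fit of this kind condenses each viscosity curve in Figures 2b-5b to two numbers, which makes the comparison of feedstocks with different binder compositions more compact than reading off single viscosity values.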
Influence of the Solid Load
As a result of the positive outcome of the injection molding trials (see Section 3.2), feedstock 11 was slightly modified by a pronounced SA increase up to 25 mg/m² to guarantee simple compounding as well as good and reliable mold filling. This feedstock was denoted feedstock 12 (Table 5) and used as the starting point for a further increase in the solid load up to 70 Vol% (feedstocks 13-16). Figure 5 shows the influence of the powder load on the torque and melt viscosity of feedstocks 12-16. As expected, the equilibrium torque increases non-linearly with the increasing powder load. Between 60 and 65 Vol% solid load, the torque gain is small. When exceeding 68 Vol%, however, a pronounced rise can be measured. The torque of feedstock 16 (70 Vol% load) shows no equilibrium phase after one hour. It is assumed that this feedstock is not yet completely homogeneous and the particles are not completely wetted (Figure 5a). The torque-time curve clearly shows the limitation of the maximum processable solid load in this binder system. Irrespective of the solid load, all feedstocks show a pronounced pseudoplastic flow (Figure 5b). The influence of the filler content becomes particularly evident at lower shear rates. With an increasing shear rate, the different amounts of glass filler cause no major viscosity differences. In general, the feedstocks with solid loads of 68 and 70 Vol% reach the highest viscosities over the entire shear rate range.

Influence of the PMMA's Average Molecular Mass-Revisited
Following the results of feedstocks 12-16 with the high SA content and the promising compounding and melt viscosity properties, the impact of the average molecular mass on the feedstock properties was investigated again using the two PMMAs from Sigma (Table 1) at a constant solid load of 60 Vol%. In addition, the influence of the PEG type was studied (Table 6). Figure 6a shows the compounding curves. Due to the high MW of the PMMA Sigma 120k binder, the mixing temperature was set to 160 °C (feedstock 17). For the Sigma 15k-based feedstocks (feedstocks 18-20), it was possible to lower the mixing temperature to 125 °C. All Sigma 15k-based feedstocks reached the equilibrium state quite quickly. Substitution of PEG 20,000 by PEGs with lower MW values further accelerated the process of reaching the equilibrium state. The same general trend is obvious from the melt viscosity experiment shown in Figure 6b. The reduction in the PEG's MW in the Sigma 15k-based mixtures from 20,000 down to 8000 caused a pronounced viscosity drop. All feedstocks exhibited a pseudoplastic flow behavior.

Injection Molding
In general, all prepared feedstocks were suitable for injection molding. During part production by GIM, feedstocks 5-10 exhibited almost the same behavior and showed good molding properties. The main differences occurred in the cooling time and during demolding. For the combination of PEG 8000 and PMMA G7E, the cooling time for demolding of a warpage-free part was between 2 and 3 min, which is very long for such a small part. When the mold is opened prematurely, the mechanical stability of the component is not guaranteed and warpage occurs. To reduce the cooling time, we decided to use PEG 20,000, and the PEG to PMMA ratio was set to 50:50 to enhance green body stability. This binder composition, however, led to a viscosity increase. Hence, we looked for a compromise between feedstocks 9 and 11, which combines the necessary low viscosity with an enhanced mechanical stability during demolding after a more acceptable cooling time of 30 s. This new feedstock 12 consisted of PEG 20,000 and PMMA G7E at a ratio of 50:50 with an increased SA content of 25 mg/m² and was chosen for further investigation. Due to the composition changes made to achieve good mold filling and a high strength of the green body during demolding, a stable and robust injection molding process was possible. Even the feedstock with the elevated solid load of 65 Vol% (feedstock 14) could be processed without any mold filling or demolding difficulties. When exceeding a glass content of 65 Vol%, mold filling started to be problematic. The high SA content of 25 mg/m² in feedstocks 17-20 also ensured reliable injection molding. A comprehensive overview of the injection molding trials producing the best part results in terms of complete mold filling and easy demolding is given in Table 7.
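The green discs measure 10 mm by 2 mm, the preferred solid load is around 60 Vol%, and the later sections report sintered densities near 99% of theory; from these numbers a rough, isotropic sinter shrinkage can be estimated by mass conservation of the glass phase. This is our own illustrative estimate, not a value given in the text, and it ignores binder residues and the warpage discussed later.

```python
# Rough estimate (not taken from the paper): if a green part with glass volume
# fraction phi densifies to a relative density rho_rel, conservation of the glass
# volume gives V_sintered / V_green = phi / rho_rel, i.e. an isotropic linear
# scale factor of (phi / rho_rel) ** (1/3).

def linear_shrinkage(phi, rho_rel):
    scale = (phi / rho_rel) ** (1.0 / 3.0)
    return 1.0 - scale          # fractional linear shrinkage

phi, rho_rel = 0.60, 0.99       # 60 Vol% solid load, ~99 % of theoretical density
s = linear_shrinkage(phi, rho_rel)
d_green, t_green = 10.0, 2.0    # green disc dimensions in mm (from the text)
print(f"expected linear shrinkage ~{100*s:.0f} %  ->  "
      f"sintered disc ~{d_green*(1-s):.1f} mm x {t_green*(1-s):.2f} mm")
```

With these inputs the estimate comes out to roughly 15% linear shrinkage, which also makes clear why the solid load is pushed as high as the binder system allows.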
Debinding
After replication and prior to densification by sintering, all organic binder moieties must be removed by dissolution, temperature-based decomposition, or a combination of both processes, which was adjusted to the binder components with low- and high-MW polymers. The combined process is common in PIM due to pore formation during solvent pre-debinding, which allows for the diffusion of degraded polymer fragments out of the bulk material without damaging the shape of the samples [33,34].

Liquid Pre-Debinding
The PEG/PMMA binder allows for the eco-friendly use of water as a solvent for the liquid pre-debinding step [28]. During liquid debinding, time and temperature are the two key parameters relevant to PEG dissolution. Using feedstock 12 as an example, Figure 7a shows the PEG removal as a function of time and temperature. The values obtained for feedstocks with higher solid loads at a fixed debinding time are also indicated. The degree of debinding increased strongly in the beginning due to the high concentration gradient of PEG between the green body and the water. Then, it leveled off. Being a diffusion process, debinding was accelerated with increasing temperature, which agrees with previous investigations [32,35]. A theoretical debinding degree of 100% at 50 °C was already reached after approx. 7 h, whereas only approx. 80% of the PEG had disappeared after 24 h at 23 °C (room temperature). At a water temperature of 50 °C, not only the water-soluble PEG but also fractions of the partially soluble SA were removed within 24 h. Figure 7b shows that the PEG dissolved along the surface. In the further course of this work, the components were subjected to liquid pre-debinding for 16 h at 40 °C. A sufficient degree of debinding of more than 90% was achieved at a moderate processing time. The debinding degrees of feedstocks 12-16 after 18 h at 40 °C are also shown in Figure 7a. With increasing powder load, the debinding degree increased as well.

The standard liquid pre-debinding program covering a period of 16 h at 40 °C did not work for feedstock 18 because of crack formation (Figure 8a). This was attributed to the fact that the molecular mass of the PEG was higher than that of the PMMA. Normally, this should be the other way around. The short PMMA chains cannot act as a backbone polymer to ensure a certain mechanical stability in the case of solvent-induced polymer swelling, for instance. For this reason, the PEG 20,000 used in feedstock 18 was replaced by PEG 8000 in feedstock 19 and by PEG 4000 in feedstock 20. After liquid pre-debinding, the parts made of feedstock 19 (Figure 8b) showed no visible cracks, but there were small bubbles on the surface. Further reduction in the PEG's MW (feedstock 20) led to a defect-free component after liquid pre-debinding (Figure 8c). The PEG swells slightly when dissolved in water. This spatial expansion cannot be sufficiently cushioned by the low-molecular-mass PMMA. As the molecular weight of the PEG decreases, the spatial expansion decreases and cracking is prevented.

Thermal Debinding
Thermal debinding must be performed as slowly as possible, because the thermal decomposition of the organic feedstock components is accompanied by a pronounced volume expansion due to the generation of gaseous products, which can cause cracks or total disintegration of the part. Figure 9 shows the thermal debinding program with very slow heating rates, especially in the temperature range from 120 °C to 330 °C, according to the thermal behavior of the major organic components PEG and PMMA. The debinding program was adapted from [32]. Major decomposition started around 220 °C, which led to very small heating rates and the long dwell time at 330 °C. This thermal debinding program was used for all samples, irrespective of whether or not there was a solvent-assisted pre-debinding step.
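The degree of debinding quoted above for the water step is a gravimetric quantity. One plausible bookkeeping, assumed here because the text does not spell out the exact definition, relates the mass loss of the dried part to the PEG initially present according to the feedstock recipe.

```python
# Assumed definition: fraction of the PEG removed during liquid pre-debinding,
# from weighing the green body before the water bath and after drying.

def debinding_degree(m_green, m_dried, peg_mass_fraction):
    """Percent of the PEG removed.

    m_green            green-body mass before water debinding (g)
    m_dried            mass after water debinding and drying (g)
    peg_mass_fraction  PEG share of the total feedstock mass (0..1), from the recipe
    """
    m_peg_initial = m_green * peg_mass_fraction
    return 100.0 * (m_green - m_dried) / m_peg_initial

# Placeholder weighings for a small disc; the numbers are invented, not measured data.
print(f"{debinding_degree(0.380, 0.352, 0.08):.0f} % of the PEG removed")
```

Tracking this number per feedstock is what allows the comparison of debinding degrees across solid loads shown in Figure 7a.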
Sintering Process Development
Sintering was performed in two different ways, either in air or under vacuum. The different temperature programs are also presented in Figure 9. The reason for the different temperature programs is the HTF vacuum oven that was used, which could not handle small heating rates. Hence, the smallest possible rates were used. To find out whether liquid pre-debinding was necessary or not, sintered samples with and without pre-debinding were non-destructively characterized by CT scans (Figure 10). Figure 10a shows the sample after thermal debinding and subsequent sintering in air.
Figure 10b presents the sample after pre-debinding in water, thermal debinding, and sintering in air. As can be seen, the only thermally debinded and sintered component has dark spots inside. In contrast to this, the component subjected to a preceding aqueous pre-debinding step does not exhibit dark spots. The dark areas in the CT scan are areas where the density is lower than in the light areas. Since no binder is present after sintering, this must be air. Consequently, the dark areas are equivalent to voids. This is also confirmed by the sinter densities of the components measured using Archimedes' principle. The only thermally debinded component has a sinter density of 96.9%, while the two-step debinded sample reaches 98.7% of the theoretical value. Consequently, it is strongly recommended to apply the two-step debinding procedure.

The influence of the two different sintering conditions (Figure 9), i.e., air or vacuum, can be seen in Figure 11a,b, both of which were taken after applying the same temperature program. All parts were originally made of feedstock 12. Two main differences can be seen: first, vacuum sintering improved the quality of the outer contour and reduced open porosity; second, no voids can be detected in the bulk part. The higher heating rates did not adversely affect the part quality. As a result, processing costs were reduced. In addition, the influence of hot isostatic pressing (HIP) on the part quality after vacuum sintering was investigated (Figure 11c,d). Figure 11b-d show several anomalies. For all three components sintered under vacuum, open porosity is obvious at the edges. The porosity decreases with increasing sintering time and subsequent HIP. Figure 11b shows the part sintered at 680 °C in vacuum after 2 h. Figure 11c represents the part sintered for 2 h at 680 °C in vacuum with subsequent HIP at 550 °C and 100 MPa for one hour. Figure 11d shows the microstructure after vacuum sintering with a dwell time of 8 h at 680 °C and subsequent HIP. With increasing sintering time and additional HIP, sintering warpage increases. In general, vacuum sintering without HIP is sufficient for a pronounced void reduction. As regards further part quality improvement, the impact of the solid load in the feedstock on processing and on the sintered part can be seen in Figure 12. Again, more voids are found in the air-sintered sample (Figure 12a) compared to the sample sintered under vacuum (Figure 12b). Due to the small effect of HIP on the sample quality described above, this additional process step was omitted.

The influence of the PMMA's MW in combination with high SA amounts on processing was investigated in feedstock 17.
Figure 13 shows micrographs of four parts fabricated from this PMMA Sigma 120k-based feedstock. Figure 13a shows the microstructure after 2 h of vacuum sintering and Figure 13b after 8 h of vacuum sintering. Figure 13c presents the sample after 2 h of vacuum sintering and additional HIP. Figure 13d shows the sample after 8 h of vacuum sintering. No defects can be seen in any of these four components. Their edges and shapes, however, are different. Open porosity decreases with increasing sintering time. In addition, HIP treatment was found to lead to a slight reduction in open porosity. The distortion at the corners of the component increased significantly with increasing sintering time. The HIP treatment hardly had any influence on this. In conclusion, it can be stated that the extension of the sintering time has a greater influence on open porosity reduction than the additional HIP process. However, higher warpage occurs, as is obvious from Figure 13b,d. To finalize the investigations of the impact of the feedstock composition on processing and on the appearance of the resulting part, feedstock 20 containing the low-molecular-mass PMMA 15k was studied using the same process parameters (Figure 14a-d). As in the previous examples (feedstock 17, Figure 13), an increasing sintering time reduced open porosity, but caused a pronounced rounding off due to surface minimization, which is the driving force of sintering (Figure 14b,d). Again, additional HIP was found to result in a minor improvement only.

Influence of Feedstock Composition, Debinding, and Sintering on Part Density
An important criterion in the critical validation of the GIM process chain is the final sinter density. Table 8 shows the resulting sinter densities achieved for all feedstocks that were completely processed. The influence of the debinding process is presented as well. The data were measured before the samples were ground for SEM. Table 8 clearly shows that the combination of liquid pre-debinding and thermal debinding always leads to higher density values. Contrary to expectations, the increase in the solid load in feedstocks 13-16 compared to feedstock 12 did not cause any remarkable density increase. The density remained almost constant at a high level. Hence, the higher effort associated with the compounding and injection molding of feedstock loadings beyond 60 Vol% is not paid off by a higher sinter density. A filler load of 60 Vol% is sufficient to achieve the best sinter results. More details on the sinter parameters and the resulting density outcomes are obvious from Table 9, which summarizes the impact of the vacuum sinter time and additional HIP treatment. For all three investigated feedstocks (12, 17, 20), very good sinter densities of better than 99% of the theoretical density can be achieved when the samples are sintered under vacuum for 2 h at the maximum temperature. These values are in the same range as those described in [30]. Neither a further increase in sinter time nor the use of HIP improves the sinter values in a relevant way. On the contrary, a longer sinter time leads to sample deformation.

Optical Properties
It is evident from the previous sections that, even under optimized sinter conditions, the surface layers possess a certain open porosity and defects, which reduce optical transparency and allow for a certain translucency only when illuminated from the back. Figure 15a shows a sintered micro tensile specimen designed for mechanical characterization. A pronounced surface reflection due to light scattering can be seen. Figure 15b represents a larger sintered plate (thickness 3.7 mm) and Figure 15c a 1.8 mm thick sintered round test sample as used in the previous micrographs. Both are illuminated from the back by white LED light and show a certain translucency. The samples in Figure 15a,c were originally made of feedstock 12; the sample shown in Figure 15b was based on feedstock 18. Optical transmission spectra were recorded for samples with different process histories, especially sinter parameters (Figure 16). In the visible range (380-780 nm) of the investigated wavelengths, all samples possess a small optical transmittance between 0.5 and 1.5%, with a maximum of up to 3% around 600 nm. A clear correlation between sample history and transmittance cannot be detected. The best values were obtained for feedstock 12 with a vacuum sintering time of 8 h, with and without HIP.
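The density values above come from Archimedes (buoyancy) weighing referenced to the theoretical glass density of 2.3 g/cm³; a minimal sketch of this bookkeeping, with invented sample masses and an assumed bath temperature, is given below.

```python
# Sketch of the density evaluation: Archimedes density from the two weighings of a
# YDK01-type kit, then relative density and residual porosity against the
# theoretical glass density.  The sample masses are placeholders, not measured data.

RHO_WATER = 0.9982       # g/cm^3 at ~20 degC (assumed bath temperature)
RHO_THEORETICAL = 2.3    # g/cm^3, Schott 8250 (from the text)

def archimedes_density(m_air, m_immersed, rho_fluid=RHO_WATER):
    """Bulk density from the mass in air and the apparent mass when immersed."""
    return m_air / (m_air - m_immersed) * rho_fluid

m_air, m_immersed = 0.3605, 0.2032   # g, placeholder weighings
rho = archimedes_density(m_air, m_immersed)
rel = rho / RHO_THEORETICAL
print(f"bulk density {rho:.3f} g/cm^3 = {100*rel:.1f} % of theory, "
      f"residual porosity {100*(1-rel):.1f} %")
```

With the placeholder masses this evaluates to about 99.5% of the theoretical density, i.e. the same order as the best values reported in Tables 8 and 9.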
Conclusions
The most important results of the tests reported are listed below:
1. The glass injection molding process chain was evaluated systematically, from feedstock development to molding, to debinding, to sintering.
2. In the first process step, feedstock development using the given binder components PEG, PMMA, and SA, the average molecular masses of PEG and PMMA, their ratios, and the SA content were varied to enable simple, fast, and reliable compounding as well as good molding. During replication, good mold filling as well as defect-free, stable demolding were ensured by selecting suitable feedstock compositions. As regards the debinding procedure, a combination of liquid pre-debinding and thermal treatment is recommended. This could be verified after the final sinter process. In this step, vacuum sintering is also favorable to achieve the highest sinter densities. Sinter densities of around 99-100% of the theoretical density could be achieved.
3. An increase in the initial feedstock's solid load does not result in any improvement in the final sinter densities and part appearance. This also holds when an additional thermal post-treatment by HIP takes place.
4. Suitable feedstock systems with 60 Vol% glass filler, 25 mg/m² SA, and PEG as well as PMMA having different average molecular weights, in combination with two-step debinding and vacuum sintering, can be recommended for further investigations.
5. A certain translucency was measured, with optical transmission values reaching up to 3% in the visible range.
6. The comprehensive investigations allow for a clear correlation between the feedstock composition and the influence of each individual binder component on compounding, molding, debinding, and sintering.
Figure 1. SEM of the applied borosilicate glass (Schott, Mainz, Germany) with an irregular morphology.
Figure 2. (a) Compounding of feedstocks 1-4 containing different PMMA types (compounding temperature: 160 °C); (b) change in the melt viscosity as a function of the shear rate (measuring temperature: 170 °C).
Figure 5. Investigation of feedstocks 12-16 with increasing solid loads: (a) compounding at 160 °C; (b) change in the melt viscosity with shear rate (measuring temperature: 170 °C).
Figure 6. Investigation of feedstocks 17-20 containing different PMMAs at an SA content of 25 mg/m²: (a) compounding; (b) change in melt viscosity with shear rate.
Figure 9. Temperature programs used for thermal debinding and sintering.
Figure 10. CT scans of sintered glass samples when applying feedstock 5 for replication: (a) only thermal debinding; (b) combination of liquid pre-debinding and thermal debinding.
Figure 11. SEMs of sintered samples based on feedstock 12: (a) 2 h sintering in air; (b) 2 h under vacuum; (c) dwell time of 2 h under vacuum plus HIP at 550 °C; (d) dwell time of 8 h under vacuum plus HIP at 550 °C.
Figure 16. Optical transmittance spectra of samples sintered under different processing conditions.
Table 1. Used binder components, their functions, and the corresponding suppliers.
Table 2. Overview of investigated feedstock systems with different PMMA types. 1 Values taken from vendors' data sheets.
Table 3. Feedstocks with G7E as PMMA, different PEG types, and variable SA contents at a solid load of 60 Vol% glass filler.
Table 4. Overview of the used feedstocks with PEG 20,000 and PMMA G7E, constant SA contents and solid loads (60 Vol%), but different PEG/PMMA ratios.
Table 5. Overview of the used feedstocks with increasing solid loads.
Table 6. Overview of the feedstock systems investigated with two different PMMA types and high SA concentration.
Table 7. Injection molding parameters resulting in best molding qualities.
Table 8. Measured sinter densities of the feedstocks after debinding and 2 h sintering at maximum temperature in air. n.a. stands for not available.
Table 9. Measured sinter densities of feedstocks 12, 17, and 20 as a function of the sinter conditions (all cases: liquid pre-debinding, thermal debinding, and vacuum sintering).
2024-03-22T15:40:52.641Z
2024-03-01T00:00:00.000
{ "year": 2024, "sha1": "4364060e221cf3ff6dca087d7c1b077ba73f5fcc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/17/6/1396/pdf?version=1710839182", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "52cc2f592127a0b08b9b2cd8d32cb96729804042", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
119252687
pes2o/s2orc
v3-fos-license
Quantification of Nonclassicality To quantify single mode nonclassicality, we start from an operational approach. A positive semi-definite observable is introduced to describe a measurement setup. The quantification is based on the negativity of the normally ordered version of this observable. Perfect operational quantumness corresponds to the quantum-noise-free measurement of the chosen observable. Surprisingly, even moderately squeezed states may exhibit perfect quantumness for a properly designed measurement. The quantification is also considered from an axiomatic viewpoint, based on the algebraic structure of the quantum states and the quantum superposition principle. Basic conclusions from both approaches are consistent with this fundamental principle of the quantum world. PACS numbers: 03.65.Ta, 42.50.Dv, 37. 10.Vz The experimental demonstration of fundamental nonclassical effects led to various applications of nonclassical light. As a consequence, the realization of nonclassical states has attracted substantial interest during the last decades. In this context, the quantitative characterization of nonclassical effects is an important problem. Here we seek for the connection of a quantitative characterization of quantum states with potential applications for the suppression of quantum-noise in measurements. The concept of the distance between two quantum states was introduced by Hillery [1]. He defined the nonclassical distance as a quantitative measure of nonclassicality. Although it is an intuitive approach, in many cases the nonclassical distance is hard to calculate. Another proposal for a measure of nonclassicality was introduced by Lee [2], the so-called nonclassical depth of a quantum state. It is defined by the minimum number of thermal photons admixed to a quantum state, which is needed to destroy its nonclassical effects. From our viewpoint this should be considered as a measure of the robustness or fragility of a nonclassical state, rather than of its quantumness. More recently, Asbóth et al. proposed to use the amount of entanglement, which can be potentially generated by a nonclassical state, as a measure of nonclassicality [3]. Despite interesting relations between nonclassicality and entanglement [4,5], a genuine measure of nonclassicality or quantumness should not be defined through a special class of quantum effects. Moreover, an entanglement potential suffers from the difficulty to define a general entanglement measure, cf. e.g. [6,7] . Qualitatively nonclassicality of a quantum state of a harmonic oscillator is characterized in Quantum Optics by the existence of negativities in the Glauber-Sudarshan P function [8]. This is equivalent to the existence of negative expectation values, whose classical counterparts are positive semidefinite. Thus a quantum state is nonclassical, if there exists an observablef †f , withf ≡f (â,â † ) being an operator function of the annihilation (creation) operatorâ (â † ), so that [9] :f †f : where the ": . . . :" symbol denotes normal ordering. In this contribution we introduce operational measures for nonclassicality or quantumness, by starting from the established notion of nonclassicality for quantum states of harmonic oscillators. These measures are directly based on observable mean values as they are obtained by a chosen experimental setup. This is just what an experimenter needs for a certain experiment. Our approach leads to remarkable perspectives for quantum-noise-free (QNF) measurements based on a manifold of quantum states. 
As an example we demonstrate that moderately squeezed states can be used to implement QNF measurements. To introduce an operational measure of nonclassicality, we consider a given experimental setup which is characterized by an arbitrary but fixed operatorf . The resulting quantitiesf †f and :f †f : are Hermitian operators and hence observables of the chosen setup. Whereas the first observable is positive semidefinite, the second one may attain negative expectation values. We suppose that for a chosen operatorf the nonclassicality condition (1) is fulfilled. For example, let us consider a homodyne detection setup, measuring the phase-sensitive quadraturê x ϕ of a given radiation mode, cf. e.g. [10]. By choosinĝ the condition (1) represents quadrature squeezing, More generally, we can choosef ≡ 1 + e i(kxϕ+φ) for homodyne detection, with an arbitrary but fixed phase φ. Now the quantities f †f and :f †f : yield the full information on the characteristic functions of the Wigner and the Glauber-Sudarshan P functions, respectively. Hence we have access to the full information on the quantum state and we may characterize its nonclassicality completely, for details see [11]. Thus, by fixing an observable we do not necessarily restrict the generality. To quantify the nonclassicality of a given state in a particular experiment, we attempt to properly quantify the negativity that can be attained by the left-hand side (lhs) of the condition (1). Here and in the following we will assume that the lhs is negative, otherwise there is no need for a nonclassicality measure. Let us consider the difference ∆ between the normally ordered and the ordinary expectation values of the chosen observablef †f , For a given operatorf it is straightforward to derive an explicit expression of the quantity ∆ by methods of operator ordering, but this is not needed for the following considerations. By using the fact that f †f ≥ 0, it is clear that the relation holds true. Now we may define the operational relative nonclassicality R of a given quantum state for a chosen measurement scheme as which quantifies the negativity of the lhs of the condition (1). Based on this definition, a quantum state exhibits perfect nonclassicality, that is R = 1, if the (negative) value of :f †f : approaches the lower bound according to the relation (5). Hence, the so-defined perfect nonclassicality is attained for In addition, we define that R ≡ 0 for :f †f : ≥ 0. Due to the equivalence in Eq. (7), for the operationally defined perfect nonclassicality we no longer need the normal ordering often used in Quantum Optics. By the condition f †f = 0 we may define perfect quantumness for an arbitrary quantum system. For a general mixed quantum state, described by the density operator ̺ = ψ p ψ |ψ ψ|, with p ψ > 0 and ψ p ψ = 1, perfect quantumness requires that This condition is fulfilled if and only if for all states |ψ contained in̺. Thus perfect quantumness is attained for any quantum state composed of eigenstates of the operatorf whose eigenvalues are zero. In such cases the observablef †f is totally free of quantum noise. Note that the eigenvalue of zero is not a serious restriction, which becomes clear from the example in Eq. (2). In general one can substitutef → ∆f =f − f , together with ∆f |ψ = 0 replacing Eq. (9). Let us consider some physical consequences of perfect quantumness. We already renounced our starting assumption to define nonclassicality for a harmonic oscillator by applying normal ordering. 
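To make the definitions of ∆ and R above concrete, here is a small, self-contained numerical sketch (not from the original paper). It assumes the convention x̂_φ = (â e^{−iφ} + â† e^{iφ})/√2, so the vacuum quadrature variance is 1/2, and it evaluates R for a squeezed vacuum with real squeezing parameter r, both for the quadrature observable f̂ = x̂_φ − ⟨x̂_φ⟩ and for the squeeze-adapted operator f̂ = μâ + νâ† used later in the text; the closed-form expressions in the code follow from these assumptions.

```python
import numpy as np

def R_quadrature(r):
    """Relative nonclassicality R for f = x_phi - <x_phi> on a squeezed vacuum.
    With vacuum quadrature variance 1/2: <f†f> = Var(x_phi) = exp(-2r)/2,
    <:f†f:> = Var(x_phi) - 1/2, and Delta = <f†f> - <:f†f:> = 1/2."""
    normally_ordered = np.exp(-2.0 * r) / 2.0 - 0.5   # negative whenever r > 0
    delta = 0.5
    return -normally_ordered / delta if normally_ordered < 0 else 0.0

def R_adapted(r):
    """R for the squeeze-adapted observable f = mu*a + nu*a† with mu = cosh r, nu = sinh r.
    The squeezed vacuum is annihilated by f, so <f†f> = 0, <:f†f:> = -sinh(r)**2 and
    Delta = sinh(r)**2; hence R = 1 for any nonzero squeezing."""
    return 1.0 if r > 0 else 0.0

for r in (0.1, 0.5, 1.0, 2.0):
    print(f"r = {r:3.1f}:  R(quadrature) = {R_quadrature(r):.3f}   R(adapted) = {R_adapted(r):.1f}")
# R(quadrature) = 1 - exp(-2r): perfect nonclassicality is reached only in the limit of
# infinite squeezing, whereas the adapted observable gives R = 1 already for moderate squeezing.
```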
Now we may define the appearance of perfect quantumness in a given experimental situation through Eq. (9), and hence by the fact that the accessible observablef †f becomes a QNF variable. For this purpose it is sufficient to prepare the system under study in a pure quantum state that fulfills the condition (9). This idea applies to general, i.e. other than harmonic, quantum systems. It opens possibilities to perform high-precision measurements at the ultimate limit of vanishing quantum noise. Given an experimental setup and hence the related operatorf , one may solve Eq. (9) to derive the optimized quantum state for performing QNF measurements. Now we turn to the prominent example of quadrature squeezing of the harmonic oscillator. Combining Eqs. (2) and (10), the sought perfect quantum state is given bŷ which defines the quadrature eigenstate, |ψ ≡ |x ϕ , with the eigenvalue being x ϕ = x ϕ . This reproduces the well-known fact that the quadrature eigenstates are suited for QNF quadrature measurements. The severe difficulty in realizing this situation consists in the fact that these eigenstates represent the limit of infinitely strong squeezing, which would require an infinite amount of energy. Consequently, the perfectly squeezed states are unphysical ones. Nevertheless, experimenters try to generate strongly squeezed states in order to approach this ideal situation. For example, recently a 10 dB reduction of the noise power of radiation has been achieved [12], and even stronger squeezing was realized in the quantized motion of a trapped ion [13]. In this way one can suppress the noise effects in measurements significantly, but one cannot reach the QNF limit. The question appears whether there is an alternative possibility of using squeezed states for QNF measurements. As we will demonstrate below, the answer to this question is yes! It can be realized by a proper choice of the observable to be measured. Even if the quadrature measurement -in view of the reduction of the quadrature variance of a squeezed state -seems to be the natural choice, we may achieve a better performance and eventually reach the QNF limit as follows. For simplicity, we will deal with a squeezed vacuum state |0; ν , which obeys the eigenvalue equation Here, ν (µ) is a complex (real) parameter which controls the amount of noise reduction of the squeezed vacuum state with respect to the quadraturex ϕ for properly fixed phase ϕ. Note that a total suppression of the quadrature noise appears for |ν| → ∞. However, as discussed above, perfect squeezing is not a realistic route towards the realization of perfect quantumness and of QNF measurements. Instead, we may choose the operatorf characterizing our measurement device simply aŝ By comparing Eq. (12) with (9), it is obvious that the squeezed vacuum indeed obeys the condition of perfect quantumness for the resulting observablef †f , thus it opens the possibility to implement QNF measurements. What we still require is an apparatus measuring the observablef †f . It may appear to be counterintuitive that the squeezed vacuum state |0; ν remains perfectly nonclassical even for moderate squeezing, that is for finite |ν|-values. To make use of this property, however, one only needs to properly adjust the measurement device to the available squeezed vacuum state. In the remainder of this contribution we will consider a practical implementation of a QNF measurement by using a squeezed vacuum state with moderate squeezing. 
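A quick numerical cross-check of the eigenvalue property above can be done with QuTiP (illustrative only; it assumes QuTiP's squeeze-operator convention Ŝ(z) = exp[(z*â² − zâ†²)/2], for which the squeezed vacuum Ŝ(r)|0⟩ with real r is annihilated by μâ + νâ† with μ = cosh r and ν = sinh r):

```python
import numpy as np
from qutip import destroy, basis, squeeze, expect

N = 60                      # Fock-space truncation (illustrative)
r = 0.5                     # moderate squeezing parameter
a = destroy(N)

psi = squeeze(N, r) * basis(N, 0)          # squeezed vacuum |0; nu>
mu, nu = np.cosh(r), np.sinh(r)            # mu**2 = 1 + |nu|**2

f = mu * a + nu * a.dag()                  # squeeze-adapted measurement operator
print("||f|psi>|| =", (f * psi).norm())           # ~0 up to truncation error
print("<f†f>      =", expect(f.dag() * f, psi))   # ~0: quantum-noise-free observable

# For comparison, the squeezed quadrature itself still carries finite noise:
x = (a + a.dag()) / np.sqrt(2)
print("Var(x)     =", expect(x * x, psi) - expect(x, psi) ** 2)   # ~exp(-2r)/2, not zero
```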
For this purpose we will consider the situation for a trapped and laser-driven ion. In this case the vibrational center-of-mass motion of the ion in the trap potential plays the role of the quantized harmonic oscillator. As noted above, the preparation of a motional squeezed state has been realized more than a decade ago [13]. Let us now introduce the required measurement scheme for the observablef †f , with the operatorf given by Eq. (13). For this purpose a trapped ion is driven, in the resolved-sideband and the Lamb-Dicke regimes, simultaneously on the first red and blue sidebands, cf. Fig. 1. As indicated in the figure, the couplings on the red and the blue first sidebands are given by the Raby frequencies Ω r and Ω b , respectively. We assume that the condition |Ω r | > |Ω b | is fulfilled. Note that the unlike driving of the two sidebands is the only needed modification of a known measurement scheme for the determination of the motional quantum state [14], which has already been realized [15]. After a chosen interaction time of the two lasers with the ion, the electronic-state occupation can be tested with a very high quantum efficiency. For this purpose one usually drives a dipole transition between the electronic ground state |1 and an auxiliary state. The occurrence and the non-occurrence of resonance fluorescence on this transition efficiently detects the system in the state |1 and |2 , respectively. The interaction Hamiltonian in the interaction picture is of the form where ij = |i j| (i, j = 1, 2) is the electronic flip operator, ϕ r and ∆ϕ are the phase of the red-detuned laser and the phase difference of both lasers, respectively. This Hamiltonian can be rewritten aŝ with Ω = e iϕr |Ω r | 2 − |Ω b | 2 . The operatorf is given by Eq. (13), where together with µ 2 = 1 + |ν| 2 according to Eq. (12). Thus the resulting dynamics is indeed sensitive to the operator f we are interested in. Now it is straightforward to calculate the time evolution of a trapped ion initially (at time t = 0) prepared, for example, in the state̺(0) = |2 2| ⊗ρ(0), where̺ andρ denote the vibronic and the vibrational quantum state, respectively. That is, the ion is initially in the upper electronic state and the center-of-mass motion is in an arbitrary mixed quantum state. Let us consider the probability p 2 (t) = Tr[ 2|̺(t)|2 ], that the ion is in the electronic state |2 at time t. Note that the trace only refers to the motional degrees of freedom. We need the electronic diagonal elementÛ 22 of the time evolution operator,Û where we have used the propertyff † =f †f + 1 of the operatorf given in Eq. (13). Note thatÛ 22 is still an operator in the motional Hilbert space. The time evolution of the occupation probability of the upper electronic state can be easily calculated as p 2 (t) = 1 2 1 + Tr ρ(0) cos |Ω|t f †f + 1 . From this result it is obvious that the evolution of the electronic-state occupation sensitively depends on the statistics of the observablef †f we are interested in. Let us now consider an ion initially prepared in a motional squeezed vacuum state as given by Eq. (12), ρ (0) = |0; ν 0; ν|, together with the choice off according to Eq. (13). In this case we easily arrive at This represents a completely coherent oscillation, which reflects the QNF property of the observablef †f , which is accessible by our detection scheme. This coherent electronic dynamics clearly displays the striking property of the moderately squeezed states. 
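The electronic-state occupation derived above, p₂(t) = ½{1 + Tr[ρ̂(0) cos(|Ω|t √(f̂†f̂ + 1))]} (with the square root written explicitly, as in the standard Jaynes-Cummings form), can be evaluated numerically. The sketch below uses illustrative parameters rather than the paper's, and compares a motional squeezed vacuum, which yields the perfectly coherent oscillation ½[1 + cos(|Ω|t)], with a thermal motional state, whose signal dephases:

```python
import numpy as np
from qutip import destroy, basis, squeeze, thermal_dm, qeye, expect, ket2dm

N = 60
r = 0.5
a = destroy(N)
mu, nu = np.cosh(r), np.sinh(r)
f = mu * a + nu * a.dag()

# Operator sqrt(f†f + 1); p2(t) = 0.5 * (1 + < cos(|Omega| t sqrt(f†f + 1)) >)
M = (f.dag() * f + qeye(N)).sqrtm()

rho_squeezed = ket2dm(squeeze(N, r) * basis(N, 0))   # motional squeezed vacuum
rho_thermal = thermal_dm(N, 2.0)                      # thermal state with <n> = 2 (illustrative)

Omega = 1.0                                           # effective Rabi frequency (arbitrary units)
for t in np.linspace(0.0, 30.0, 7):
    cos_op = (Omega * t * M).cosm()
    p2_sq = 0.5 * (1.0 + expect(cos_op, rho_squeezed))
    p2_th = 0.5 * (1.0 + expect(cos_op, rho_thermal))
    print(f"t = {t:5.1f}:  p2(squeezed) = {p2_sq:.3f}   p2(thermal) = {p2_th:.3f}")
# The squeezed-vacuum curve equals 0.5*(1 + cos(Omega*t)): a perfectly coherent oscillation,
# reflecting the quantum-noise-free character of f†f for this state.
```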
For any amount of squeezing one may adjust the observable f̂†f̂ such that the squeezed state exhibits perfect quantumness. Consequently the implemented detection scheme represents a perfect QNF measurement. By using a trapped ion, one can realize QNF measurements for a variety of observables. They can be detected by generalizations of the scheme given in Fig. 1. For example, one may drive motional sidebands of higher orders. Under conditions far from the Lamb-Dicke regime, the nonlinear Jaynes-Cummings Hamiltonian applies to each driven sideband [16]. Moreover, the accessible operators can be further generalized by simultaneously driving more than two vibronic transitions and by engineering the vibronic interaction [17]. We also note that the coherent oscillation obtained for a squeezed state is closely related to the coherent dynamics in the standard Jaynes-Cummings interaction [18]. The corresponding interaction Hamiltonian is obtained by setting ν = 0 in Eqs. (15) and (13). In this case, a coherent oscillation occurs for the initial preparation of a motional Fock state. This behavior for the Fock state also displays its perfect quantumness, even if for f̂ = â the requirement given by the condition (1) cannot be fulfilled. However, by choosing ∆f̂ = â†â − ⟨â†â⟩, for f̂ → ∆f̂ the condition (1) is fulfilled for all Fock states |n⟩ with n ≥ 1. For these states the condition (10) for perfect quantumness is clearly fulfilled. Hence, the perfectly coherent Jaynes-Cummings dynamics occurring for all Fock states with n ≥ 1 represents another special realization of a QNF measurement of the type under study. The generalization of the method is straightforward. Given any measurement device and the related (positive semidefinite) observable f̂†f̂, the optimal quantum state can be calculated as the solution of Eq. (9). Then a possibility of the preparation of such a state must be developed. When this problem can be solved, the QNF measurement can be implemented. Also the extension to the detection of arbitrary Hermitian operators Â is easy; the latter are related to the positive semidefinite operators via Â = f̂†f̂ − κ1̂, κ ∈ ℝ. Such methods may be useful for a manifold of quantum systems, depending on the possibilities to prepare the desired quantum states. In conclusion we have introduced an operational measure for the nonclassicality or quantumness of a quantum state of the harmonic oscillator. It is based on the negativity of an observable whose classical counterpart is positive semidefinite. The resulting perfect quantumness is related to the feasibility of performing totally quantum-noise-free measurements. As an example, we have demonstrated that a moderately squeezed state of the quantized center-of-mass motion of a trapped ion can display perfect quantumness. An implementation of the corresponding noise-free quantum measurement has been given. We have outlined that the introduced notion of perfect quantumness also applies to other than harmonic quantum systems, and the general strategy of implementing quantum-noise-free measurements has been considered for the case of arbitrary systems.
2012-11-28T12:30:31.000Z
2009-04-22T00:00:00.000
{ "year": 2009, "sha1": "18e5fe1a219e60762affef448e679b25d0bcb51f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0904.3390", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "18e5fe1a219e60762affef448e679b25d0bcb51f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
46894230
pes2o/s2orc
v3-fos-license
Size effects on Rhodium nanoparticles related to hydrogen-storage capability
a Synchrotron X-ray Station at SPring-8, Research Network and Facility Services Division, National Institute for Materials Science, 1-1-1 Kouto, Sayo, Hyogo 679-5148, Japan
b Synchrotron X-ray Group, Research Center for Advanced Measurement and Characterization, National Institute for Materials Science, 1-1-1 Kouto, Sayo, Hyogo 679-5148, Japan
c Department of Innovative and Engineered Materials, Tokyo Institute of Technology, 4259-J3-16, Nagatsuta, Midori, Yokohama 226-8502, Japan
d Synchrotron Radiation Laboratory, The Institute for Solid State Physics, The University of Tokyo, 1-490-2 Kouto, Shingu-cho Tatsuno, Hyogo 679-5165, Japan
e Division of Chemistry, Graduate School of Science, Kyoto University, Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto 606-8502, Japan
f INAMORI Frontier Research Center, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan
SUPPORTING NOTE 1. TEM images 1 and particle size distributions of Rh NPs
Supplementary Figure 1 shows the TEM images for Rh NPs. TEM images were recorded on a JEM-200CX or Hitachi HT7700, operated at 200 or 100 kV accelerating voltage, respectively. The particle size distribution of the Rh NPs is shown in Supplementary Figure 2. The mean diameters of the nanoparticles were determined from the TEM images to be (a) 2.4 ± 0.5, (b) 4.0 ± 0.7, (c) 7.1 ± 1.2, and (d) 10.5 ± 0.8 nm, respectively. Numbers that follow the ± sign represent estimated standard deviations.
SUPPORTING NOTE 2. Hydrogen pressure-composition (PC) isotherms for Rh NPs
Hydrogen pressure-composition (PC) isotherms for the Rh NPs were measured with a volumetric technique using a pressure composition temperature (PCT) apparatus (Suzuki Shokan Co., Ltd., Japan). The pressure sensor was an INFICON SKY Model CR090 and its range was from 1.33 to 133000 Pa. The purity of the hydrogen was 99.999%, oxygen < 1 ppm. The weights of the measured samples were more than 100 mg as the amount of the metal. As a pre-treatment before the absorption process, a volume measurement of the NPs was performed with helium. Because the amount of hydrogen absorption tends to be overestimated from the 1st PC isotherm owing to reduction of the particle surface, we measured each PC isotherm more than three times. After confirming that the 2nd and 3rd measurements exhibited reproducibility, we used the 2nd PC isotherm dataset. For absorption measurements, the pressure during the introduction of the hydrogen was raised in 23 steps (50, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1500, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000, 15000, 20000 Pa). The next step was not initiated until the differential pressure had settled within 15 Pa after 5 min of hydrogen introduction and the PC isotherm data had been collected. Above 2 × 10⁻² MPa, the pressure was automatically set and the measurements were performed using the same conditions. During desorption measurements, the next step was initiated after the differential pressure had settled to the same conditions as the corresponding absorption measurements by automatic control. The PC isotherms of Rh NPs are described in Supplementary Ref. 1.
SUPPORTING NOTE 3. Rietveld analysis for fcc Rh NPs
Supplementary Figure 3 shows the results of the Rietveld refinement of the XRD patterns of the fcc Rh NPs at room temperature. The experimental high-energy XRD patterns of the fcc Rh NPs exhibited well-defined Bragg peaks that could be indexed to a cubic unit cell.
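The volumetric procedure in Supporting Note 2 determines the hydrogen uptake at each dosing step from the pressure change in calibrated volumes. The sketch below illustrates that bookkeeping for a single absorption step; all numerical values (volumes, pressures, sample mass) are placeholders rather than parameters of the apparatus described above, and ideal-gas behavior is assumed, which is reasonable at these low pressures.

```python
R = 8.314  # J/(mol*K), gas constant

def moles_absorbed(p_dose, p_eq, v_manifold, v_sample, T):
    """Hydrogen absorbed in one dosing step of a volumetric (Sieverts-type) measurement.
    p_dose: manifold pressure before opening the sample valve (Pa)
    p_eq:   equilibrium pressure after the step (Pa)
    v_manifold, v_sample: calibrated volumes (m^3); v_sample from the helium measurement
    T: temperature (K). Ideal-gas behavior is assumed."""
    n_before = p_dose * v_manifold / (R * T)
    n_after = p_eq * (v_manifold + v_sample) / (R * T)
    return n_before - n_after   # mol H2 taken up by the sample in this step

# Illustrative numbers only: without any uptake the pressure would settle near
# p_dose * v_manifold / (v_manifold + v_sample) ~ 833 Pa, so 700 Pa implies absorption.
n_H2 = moles_absorbed(p_dose=1000.0, p_eq=700.0, v_manifold=50e-6, v_sample=10e-6, T=298.0)
m_Rh = 100e-3 / 102.9            # mol Rh for a 100 mg sample (molar mass ~102.9 g/mol)
print(f"H/Rh for this step = {2 * n_H2 / m_Rh:.4f}")   # two H atoms per H2 molecule
```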
SUPPORTING NOTE 4. Crystalline domain size
Supplementary Figure 4 shows the relationship between the average crystalline domain size and particle size. The domain size was calculated from the Scherrer equation, D = Kλ/(β cos θ), where D is the average crystalline domain size, K (= 0.9 if nanoparticles are assumed to be spherical) is the shape factor, λ (= 0.202 Å) is the X-ray wavelength, β is the line-broadening of the observed peak, expressed as the FWHM in radians, and θ is the Bragg angle. The average crystalline domain sizes of the Rh NPs were obtained using fifteen Bragg peaks (from 111 to 600). The accuracy of the Scherrer equation is limited by the uncertainty in β. The error in β, determined from the results of the Rietveld analysis, was approximately 10%. For Rh NPs, the average domain size decreased linearly with decreasing particle size. This result means that the particle size of Rh NPs can be determined from the magnitude of the Rh domain size, not the number of small Rh domains (similar to the case of fcc Ru NPs 2), and is consistent with the expectation from Fig. 2(a). (eq. S1)
The number of unit cells per domain can be calculated by the following equation:
Number of unit cells per domain = Vdomain/Vunitcell (eq. S2)
where Vdomain is the volume of the crystalline domain and D is the average crystalline domain size.
Considering that four Rh atoms are included in the unit cell of the fcc Rh NPs, the number of Rh atoms per domain can be simply calculated by the following equation:
Number of Rh atoms per domain (for fcc structure) = Number of unit cells per domain × 4 (eq. S3)
Finally, the number of domains (N) per 1.0 mg of Rh NPs can be calculated from eq. S1 and S3.
Figure S4. Average crystalline domain size as a function of particle size.
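As a self-contained illustration of the arithmetic in this note (not the authors' script), the sketch below evaluates the Scherrer equation for an example peak and then counts unit cells and Rh atoms per domain. The example FWHM and Bragg angle are invented, the fcc Rh lattice parameter (about 3.80 Å) is an assumed literature value, and the domain is treated as a cube of edge D for simplicity, since the exact volume convention of eq. S2 is not recoverable from the extracted text.

```python
import numpy as np

# Scherrer equation: D = K * lam / (beta * cos(theta))
K = 0.9                    # shape factor for roughly spherical particles
lam = 0.0202               # X-ray wavelength in nm (0.202 A)
beta = np.radians(0.35)    # example FWHM of a Bragg peak in radians (illustrative)
theta = np.radians(5.0)    # example Bragg angle (illustrative)

D = K * lam / (beta * np.cos(theta))        # average crystalline domain size, nm
print(f"domain size D = {D:.2f} nm")

# Unit cells and atoms per domain (domain approximated as a cube of edge D)
a_Rh = 0.380                                 # fcc Rh lattice parameter in nm (approximate)
cells_per_domain = (D / a_Rh) ** 3           # eq. S2-style count
atoms_per_domain = 4 * cells_per_domain      # 4 Rh atoms per fcc unit cell (eq. S3)
print(f"unit cells per domain = {cells_per_domain:.2e}")
print(f"Rh atoms per domain   = {atoms_per_domain:.2e}")
```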
2018-06-12T19:24:51.188Z
2018-06-06T00:00:00.000
{ "year": 2018, "sha1": "c2ead5df5afdee3c5227268e15d5cd51648cc412", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2018/cp/c8cp01678j", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "c2ead5df5afdee3c5227268e15d5cd51648cc412", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
231927717
pes2o/s2orc
v3-fos-license
Biomechanics of juvenile tyrannosaurid mandibles and their implications for bite force: Evolutionary biology
Abstract
The tyrannosaurids are among the most well-studied dinosaurs described by science, and analysis of their feeding biomechanics allows for comparison between established tyrannosaurid genera and across ontogeny. 3D finite element analysis (FEA) was used to model and quantify the mechanical properties of the mandibles (lower jaws) of three tyrannosaurine tyrannosaurids of different sizes. To increase evolutionary scope and context for 3D tyrannosaurine results, a broader sample of validated 2D mandible FEA enabled comparisons between ontogenetic stages of Tyrannosaurus rex and other large theropods. It was found that mandibles of small juvenile and large subadult tyrannosaurs experienced lower stress overall because muscle forces were relatively lower, but experienced greater simulated stresses at decreasing sizes when specimen muscle force is normalized. The strain on post-dentary ligaments decreases stress and strain in the posterior region of the dentary and where teeth impacted food. Tension from the lateral insertion of the looping m. ventral pterygoid muscle increases compressive stress on the angular but may decrease anterior bending stress on the mandible. Low mid-mandible bending stresses are congruent with ultra-robust teeth and high anterior bite force in adult T. rex. Mandible strength increases with size through ontogeny in T. rex and phylogenetically among other tyrannosaurids, in addition to that tyrannosaurid mandibles exceed the mandible strength of other theropods at equivalent ramus length.
These results may indicate separate predatory strategies used by juvenile and mature tyrannosaurids; juvenile tyrannosaurids lacked the bone-crunching bite of adult specimens and hunted smaller prey, while adult tyrannosaurids fed on larger prey. The primary purpose of this study was to compare the mandibular biomechanical capabilities of several tyrannosaurid specimens and infer how bite capabilities changed with ontogenetic and phyletic scaling. Biomechanics is the study of the structure and function of the mechanical aspects of biological systems, including locomotion and feeding. In extinct animals, active phenomena such as bite force or body movements can be inferred, starting with fossilized bones. Finite element analysis (FEA) is a nondestructive modeling technique that allows for calculations of stresses exerted and experienced by living and extinct animals. FEA is commonly used in engineering to test for weaknesses in the design of bridges, buildings, and machines, as it can analyze any solid structures. In recent decades, FEA has been used in zoology and paleontology mostly to study the crania (upper part of the skull of both living and extinct animals). Rayfield (2005a), Mazzetta, Cisilino, and Blanco (2005), Lautenschlager et al. (2016a), and Cost et al. (2019) used FEA to study sutures and bite performance in theropod dinosaurs. FEA has concentrated on mandibles (lower jaws) of other animals, such as in analyses of mammal chewing biomechanics (Zhou, Winkler, Fortuny, Kaiser, & Marcé-Nogué, 2019) and crocodilian strain response to biting and twisting loads (Porro et al., 2011;Walmsley et al., 2013). Mandibles of horned and duckbilled dinosaurs analyzed in FEA revealed their chewing mechanics (Bell, Snively, & Shychoski, 2009), and Lautenschlager et al. (2016b) examined diversity of mandible stresses in therizinosaurs. Besides these examples, there have not been many 3D analyses for dinosaur mandibles, especially in blade-toothed theropods. This study is among the first to analyze 3D carnivorous dinosaur mandibles with FEA and to test for changes in biting capability as dinosaurs mature. Discoveries of several nearly complete tyrannosaurid skeletons in recent decades make the group of macropredators ideal candidates for 3D computerized reconstruction and finite element analysis, as tyrannosaur ontogeny is relatively wellunderstood and skull material is readily available for analysis. Three tyrannosaur specimens across different ontogenetic stages were tested: a small juvenile tyrannosaurine originally named Raptorex kriegsteini, a larger subadult T. rex, and a large adult T. rex (Figure 1; Carr, 2020 for ontogenetic assessment of Tyrannosaurus specimens). Raptorex served as a proxy for a small juvenile T. rex given its young age (based on limb histology : Fowler, Woodward, Freedman, Larson, & Horner, 2011), juvenile character states (Carr, 2020), and completeness. Differences in age allow us to better understand changes in stresses the animals experienced as they matured and how they were able to cope with higher bite forces at maturity. Stress is a physical quantity that expresses the internal forces that neighboring particles of a material exert on each other, while a strain is the measure of the deformation of the material. 
The scope of this investigation of mandible stress was increased by producing 2D, sagittal view profile models, similar to those of Rayfield (2005b) for theropods, Fletcher, Janis, and Rayfield (2010) for ungulates, Snively, Anderson, and Ryan (2010) for arthrodires, and Morales-García, Burgess, Hill, Gill, and Rayfield (2019) for early mammals (the latter two explicitly testing extruded 2D sections). 3D FEA results are emphasized because they are more realistic. However, 2D models increase the size of the comparative sample, and 2D and 3D models for two taxa were compared to test how closely 2D results approximate 3D stresses. | Primary hypotheses and rationale Two main hypotheses were tested with results from simulated stress and strain in the 3D mandibles. Hypothesis (1). Larger tyrannosaurid mandibles experienced absolutely lower peak stress, because they became more robust (deeper and wider relative to length) as the animals grew (Currie, 2003). This hypothesis would be falsified (greater peak stresses in the adult) if muscle and food reaction forces are sufficiently great in the adult. Hypothesis (2). At equalized mandible lengths, younger tyrannosaurids experienced greater stress and strain relative to the adults, suggesting relatively lower bite forces consistent with proportionally slender jaws. Setting mandible lengths to be equal in length enables comparisons of adaptations for bite performance: at the same body size, a deeper mandible with lower stress would indicate the ability to deliver a more forceful bite. | Criteria for testing hypotheses and interpreting relative stresses These hypotheses relate to the stress and strain of skeletal structures. Comparing the mandibles requires a biologically appropriate criterion to assess strength (how close structures are to becoming damaged) during experienced stresses (Gilbert, Snively, & Cotton, 2016). Mandible strengths were primarily compared with von Mises stress (von Mises, 1913), a value which accurately predicts how close ductile (slightly deformable/non-brittle) materials like a bone (Keyak & Rossi, 2000;Lotz, Chea, & Hayes, 1991) are to breaking (absolute strength) or permanent deformation (yield strength) when the larger of compressive (in the case of bone) or tensile strengths is less than 1.5 times that of the other (Kayak et al. 2000). Mandibles with lower von Mises stress were judged to be stronger under the imposed bite simulations, as lower stresses indicate less susceptibility to breakage or deformation under the imposed load. A question rarely addressed in such comparisons is whether there is the informative adaptive significance if two structures experience stress far below failure levels (T. Greiner, personal communication 2019). For example, neither 40 MPa (about 40% of shear failure stress) in a slender mandible nor 5 MPa in a robust mandible will break either structure, although the slender mandible is closer to the threshold. Comparing these stress results is still informative. Excessively repeated moderate stress F I G U R E 1 Skeletons of four tyrannosaurid specimens tested. Clockwise from above left: adult Tyrannosaurus rex (FMNH PR 2081) (Field Museum of Natural History, Chicago, IL; photo by the Field Museum), juvenile Tyrannosaurus rex (BMRP April 1, 2002) (Burpee Museum of Natural History; photo by A. Rowe), adult Tarbosaurus bataar (Dinosaurium exhibition, Prague, Czech Republic; photo by R. 
Holiš) and Raptorex kriegsteini skeletal reconstruction (LH PV18) (Long Hao Institute of Geology and Paleontology, Hohhot, Inner Mongolia, China; photo by P. Sereno) can lead to fatigue, weakening a structure over time. Yet bone remodels even under normal loading regimes, and relative experienced stresses, therefore, are reflecting active bone adaptation to loading from actual behavior. Bone responds directly to muscular and reaction loading throughout ontogeny, influencing morphology, functional capability, and adaptive scope. Furthermore, lower relative stress can reflect momentarily excessive construction (Gans, 1979), the overhead capability for rare life-ordeath situations where loading is exceptionally great. As with most such biomechanical studies in vertebrate paleontology, the small sample sizes of rare fossil specimens limit statistical testing, and unknowns about fossil preservation constrain confidence in specific point stresses and especially strain values, because tissue stiffnesses (stress/strain) are not precisely known (Rayfield, 2007). Fortunately, overall stress distribution and magnitude depend on mandible shape, size, and force loading, that are well-constrained in this study's specimens on their jaw hinges and teeth that impacted food. Therefore, visual, color-indexed comparison of stress magnitudes (Rayfield, 2007) and strain magnitudes were primarily used, which revealed overall magnitudes and distribution of stress and strain on specific points in the mandibles tested. Evaluation of these hypotheses is predictive for future statistical comparisons including phylogenetic and developmental influences, possible with sample sizes of at least 10 per compared group (Snively et al., 2019). In addition to testing these hypotheses, further implications for feeding at different growth stages in tyrannosaurs and the evolution of their jaw muscles were considered. Specific muscles may have been particularly important for facilitating unique feeding strategies used by tyrannosaurs, by applying apposite forces and distributing stress favorably for safety factors of bones, teeth, ligaments, and tendons (Cost et al., 2019). Adult tyrannosaurids are noted for their "puncture-pull" biting technique in which they splintered bone (Erickson et al. 1996;Carr & Williamson, 2004); muscles that impart high bite force without over-stressing bones and ligaments of the mandible would enable such a feeding strategy. | Specimens for 3D analyses and tests for secondary hypotheses Three specimens of tyrannosaurine tyrannosaurid were selected to represent different ontogenetic stages: a small juvenile, a large juvenile or subadult, and a large adult. Their ages and ontogenetic status were based on limb bone histology (Erickson et al., 2004;Fowler et al., 2011;Sereno et al., 2009;Woodward et al., 2020) and other ontogenetically consistent postcranial features (Carr, 2020). Criteria for assessing the ontogenetic stage were therefore independent of mandible morphology, which might invite circularity in this study's list of specimens (Table 1). A large juvenile T. rex (BMRP 2002.4.1) represents a later ontogenetic stage (Carr, 2020) near the inflection of the taxon's logistic growth curve (Erickson et al., 2004;Woodward et al., 2020), at its period of the fastest absolute growth. This specimen lacks a preserved angular bone, which was reconstructed by BMRP through comparison with other tyrannosaurid specimens. 
A cast of the reconstructed mandible was scanned and digitized courtesy Heather Rockhold (O'Bleness Memorial Hospital, Athens, OH), Lawrence Witmer, and Ryan Ridgeley (Ohio University). In addition to examining mandible stress distribution at this growth stage, the function of post-dentary ligaments was tested, behind the mandible's tooth-bearing bone. Comparative analyses with and without these ligaments can reveal their effect on stress and strain the animals experienced when biting. The final specimen (FMNH PR 2081; "Sue") is a large senescent adult of T. rex (Carr, 2020), at least 28 years old at mortality (Erickson et al., 2004). With this specimen the effects of two phenomena were examined. First, peak tooth stresses with an absolute fixed constraint (common in engineering) were compared with biting down on an object with food material properties, which is in turn immobilized (Moazen et al. 2008). Second, the effects of two inferred insertions of the ventral pterygoid muscle were assessed, which loops around the mandible in reptiles. This insertion can be reconstructed as restricted to the posterolateral surface of the angular (Cost et al., 2019;Gignac & Erickson, 2017). However, all adult T. rex specimens possess a lateral fossa impressed upon the surangular, angular, and posterior end of the dentary (Carr, 2020), which suggests a larger insertion for this muscle. Different extents of this attachment potentially have substantial consequences for mandible stress in life. | 3D finite element modeling To study the mechanical behavior of tyrannosaurid lower jaws, finite element analysis (FEA) was applied, which makes use of a virtual model of a structure that is divided into many smaller shapes called elements. FEA is a standard mathematical tool in biomechanics to quantify the strain and stress state within a solid, given its material properties, and under the actions of appropriate loads and constraints that represent a particular functional or behavioral scenario (Maiorino, Farke, Kotsakis, Teresi, & Piras, 2015). FEA produces a reconstruction of stress and strain within the skeleton and allows assessment of how the skeleton functioned and how evolution shaped it in a particular manner (Rayfield, 2007). Its noninvasiveness and applicability to extinct taxa have resulted in a surge of FEA applications to research on extinct animals (Rayfield, 2007). | Construction of 3D finite element models CT data were imported into Avizo and smoothed reduce pixilation for a more realistic model. Smoothing often facilitates the construction of a more biologically realistic, less blocky, or jagged final surface from lower-resolution original scans, and fewer stress and strain artifacts that occur at unrealistically sharp boundaries. To capture the 3D shape from 2D orthogonal slices, a range of densities were entered in the CT scan to capture the bones and teeth of the object and make a surface from the selected data. The resulting surface may have computationally prohibitive resolution, so the triangle count was reduced and the model was remeshed to make surface triangles as equilateral as possible for realistic results in Strand7: 15 times taller than wide was adequate. The surface editor tested the model surface for intersecting triangles, closeness, and aspect ratio of triangles. Another surface editor function ("prepare generate tetra grid") was used to fix remaining high aspect ratio surface triangles and generate a "solid" tetrahedral volume mesh from the surface for FEA. 
Surface and volume meshes of multiple sizes were made-larger models are more accurate, but take longer to solve. The largest mesh was kept for convergence analyses of the smaller meshes; the most accurate results for the minimal solving time were found by comparing results from smaller meshes with those from the largest mesh. Reconstructions of dinosaur jaw muscles were derived from Holliday (2009) and Gignac and Erickson (2017); Figure 2). Muscle force is proportional to cross-sectional area multiplied by a force/area that muscles produce (31.5 N/cm 2 : Gignac & Erickson, 2017) and is scalable linearly to cross-sectional area of muscles between tyrannosaur species. A muscle's physiological cross-sectional area (PCSA) incorporates its volume, pennation angle, and fiber length. If all fibers in the muscle were arranged in parallel, the PCSA would equal their anatomical cross-sectional area (Zatsiorsky & Prilutsky, 2012). To estimate a maximum force that would stringently test each mandible's F I G U R E 2 Muscle insertions where nodes were mapped for Raptorex kriegsteini model in Strand7 based on Holliday, 2009 andErickson, 2017. Nodes were mapped in identical areas for all tyrannosaurid models tested while accounting for mandible size and shape variation. Full muscle names in Table 2 structural capability, fiber lengths of 0.35 times that of each muscle's length were incorporated based on data from Falkingham (2012, 2018). Estimated forces from Gignac and Erickson (2017) were multiplied by 1/0.35 to obtain forces corrected for this adjustment to PSCA. Gignac and Erickson (2017) scaled muscle forces of adult T. rex specimen FMNH PR 2081, used in the present study, from forces estimated for a reconstruction of another adult, BHI 3033. Muscle forces of the large juvenile T. rex were scaled relative to the force values from FMNH PR 2081, and muscle cross-sectional areas and forces of LH PV18 (Raptorex) were scaled via the subtemporal fenestra method (Sakamoto, 2006); the subtemporal fenestra is where the jaw muscles extend in a typical reptile mandible. Because the method uses an area, forces were scaled from the large juvenile T. rex to Raptorex (Raptorex length/juvenile T. rex length) 2 . Muscle force components were trigonometrically calculated (Table 2-5) by measuring distances and angles from origin to insertion in these models and after Gignac and Erickson (2017). To transfer these components into the coordinate system of Strand7 and with the jaws slightly open, coordinate frame rotation methods from Gilbert et al. (2016) were used. Frame rotation reoriented each muscle vector from the specimen's original coordinate system into the Strand7 coordinate system. Muscle lines of pull and force components were determined from musculoskeletal reconstructions and the vectors were rotated reorienting the model. Rotations are often necessary because models vary from the original CT scanned orientation which may be lying on their side, upside down, slightly tilted up, or sideways. For most analyses, properties of Alligator mandible bone (Porro et al., 2011) were applied to all the specimens, to ensure comparable results. In one model and set of analyses for the large juvenile Tyrannosaurus BMRP 2002.4.1, separate properties for bone, dentine (which makes up most of the teeth of reptiles), and ligaments (Porro et al., 2011) were applied. This analysis assessed the effects of ligaments, which accommodate more strain than bone, on bone stresses and tooth reaction forces. 
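The force-estimation and scaling steps described above lend themselves to a short script. The sketch below is illustrative only (all numbers and coordinates are placeholders, not measurements from these specimens): it converts a muscle cross-sectional area to force using the 31.5 N/cm² specific tension, applies the 1/0.35 fiber-length correction, scales the result between specimens by the squared ratio of mandible lengths (the area-based logic of the subtemporal fenestra method), and resolves the force into x, y, z components along the origin-to-insertion line, as was done trigonometrically for the models here.

```python
import numpy as np

SPECIFIC_TENSION = 31.5      # N/cm^2, muscle isometric specific tension
FIBER_LENGTH_FACTOR = 0.35   # assumed fiber length as a fraction of muscle length

def muscle_force(csa_cm2):
    """Force from cross-sectional area, with the 1/0.35 fiber-length correction."""
    return csa_cm2 * SPECIFIC_TENSION / FIBER_LENGTH_FACTOR

def scale_force(force_ref, length_target, length_ref):
    """Scale a reference force to another specimen by (length ratio)^2,
    mirroring the area-based subtemporal-fenestra scaling."""
    return force_ref * (length_target / length_ref) ** 2

def force_components(magnitude, origin, insertion):
    """Resolve a muscle force into components along the origin-to-insertion line."""
    origin = np.asarray(origin, float)
    insertion = np.asarray(insertion, float)
    direction = (origin - insertion) / np.linalg.norm(origin - insertion)  # pull toward origin
    return magnitude * direction

# Placeholder example: a 40 cm^2 adductor scaled from an adult to a juvenile mandible
F_adult = muscle_force(40.0)
F_juvenile = scale_force(F_adult, length_target=0.55, length_ref=1.15)   # mandible lengths in m
print(f"adult force = {F_adult:.0f} N, scaled juvenile force = {F_juvenile:.0f} N")
print("components (N):", force_components(F_juvenile, origin=[0.6, 0.1, 0.25], insertion=[0.5, 0.0, 0.1]))
```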
Mesh models were imported into Strand7 and appropriate material properties were assigned to the tetrahedral model in brick or solid object properties: elastic modulus and Poisson's ratio. An elastic modulus is a quantity that measures an object's resistance to being deformed elastically when a stress is applied to it. The elastic modulus of an object is defined as the slope of its stress-strain curve (stress/strain) in the elastic deformation region (Askeland & Phulé, 2006), when it can spring back to its original shape. Poisson's ratio describes the resistance of a material to distort under mechanical load rather than to alter in volume (Greaves, Greer, Lakes, & Rouxel, 2011) and is the quotient of transverse strain (bulging under compression or thinning under tension) and longitudinal strain from applied loads. Bone, tooth, and ligament material properties of alligators were applied as a proxy for those of the tyrannosaurids, given the close evolutionary relationship between dinosaurs and crocodilians (Porro et al., 2011;Table 5). To apply forces, nodes were selected in the areas where forces were being tested and divided by the total respective estimated muscle force by the total number of nodes selected. Constraints were assigned to restrict free body motion, at the hinges of the jaw and two mesial teeth for both constraints to obtain bite reaction forces. Models with just one ramus (FMNH PR 2081 andBMRP 2002.4.1) were constrained against mediolateral translation, which has the effect of simulating the structural response of one side of a perfectly symmetrical structure. For several analyses of FMNH PR 2081, a food object was simulated by assigning the properties of bone in contact with the teeth. This arrangement was more realistic for stress at points of constraint, which is a necessary artifice of FEA. Beam elements were extruded from nodes of contact (the originally constrained nodes) and the beams were constrained at their other ends. The beams were assigned stiffness of compact bone (Table 5). Once satisfied with constraint placement based on tooth positions, beam placement, and node placement based on Holliday (2009) linear static assumptions, with unchanging loads and material properties. Stresses were compared with von Mises stress, which is a good predictor of failure under ductile fracture, or fracture characterized initially by plastic deformation, commonly occurring in the bone. It is a function of principal stresses in σ1, σ2, and σ3, that measures how stress distorts a material. Failure of ductile material is estimated when von Mises stress equals the yield strength of the material in uniaxial tension (Rayfield, 2007). These were drawn as contours with a user-specified range of colors on the model to indicate where stresses experienced are least and most significant (Figures 3 and 4). | 2D finite element comparisons of theropod mandibles 2D finite element models of theropods were constructed as listed in Table 6. Most of the 2D mandible models were based on complete specimens. Three Tyrannosaurus mandibles were incomplete and were partially reconstructed: Carr and Williamson (2004), and the teeth of LACM 23845 as reconstructed by Bruñén (2019). For photographed specimens, perspective and lens distortion were corrected for in Adobe Photoshop ® (Filter>Distort>Lens correction), using orthogonal straight lines in the photographs as a guide. 
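Because the comparisons above and below rest on von Mises stress, a one-function sketch of how it is computed from the three principal stresses may be helpful; this is the standard formula rather than code from the Strand7 or Multiphysics analyses, and the input values are illustrative.

```python
import math

def von_mises(s1, s2, s3):
    """von Mises (equivalent) stress from principal stresses s1, s2, s3 (same units, e.g. MPa)."""
    return math.sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2.0)

# Illustrative values only: a mostly compressive state with some bending-induced tension
print(von_mises(15.0, -5.0, -40.0))   # ~48 MPa equivalent stress
```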
Perimeters of mandibles, teeth, and mandibular fenestrae, scaled to their actual dimensions, were traced using the pen tool in Adobe Illustrator ® and exported as dxf files into COMSOL Multiphysics ® . Fenestrae were subtracted from the mandible perimeter using the "Difference" tool in Multiphysics ® . The profile models were meshed to between 80,000 and 90,000 triangular elements each. Meshes were extruded a thickness to 5% of mandible length and given the same material properties as the 3D models with appropriate axes transposed. | Validation of profile models Although semi-2D models simplify model creation and accelerate comparative analyses (Fletcher et al., 2010;Morales-García et al., 2019;Rayfield, 2005bRayfield, , 2011Shychoski, 2006;Snively et al., 2010), they do not capture anatomical details such as the mandibular fossa, curvature in coronal planes (in dorsal or ventral views) (Therrien et al., 2005) and variations in mandible thickness. This necessitated the validation of 2D models through comparisons with 3D FEA results. The validity of profile models was tested by comparing stress distributions for two, 2D representations with respective three-dimensional counterparts, matched for profile dimensions, material properties, constraints, and forces. For these validation analyses, all models were constrained at the jaw joint and a force applied to the mesialmost tooth. A profile model of T. rex FMNH PR 2081 was compared with a 3D representation after methods of Wroe et al. (2008) and Moreno et al. (2008). Results for a 2D mandible model of Carnosaurus sastrei (MACN CH 894) were compared with those for the 3D model by Mazzetta et al. (2005), using their data for material properties and forces of an anterior bite. | Forces and constraints for 2D models Two methods were used to estimate adductor muscle forces for the expanded sample of 2D mandible models. The mandible profile is meshed, and assigned material properties, loadcases, and constraints. (c). Stress distribution is visualized in Multiphysics; von Mises stress indicated proximity to yield or breaking stress of the structure. Institutional abbreviation: ZPAL, Institute of Paleobiology, Polish Academy of Sciences F I G U R E 4 Completed FEA on adult Tyrannosaurus rex "Sue" (FMNH PR 2081) mandible demonstrating a range of stresses, with blue and green denoting the lowest amount of stresses experienced and red and white displaying the highest. Constraints are placed at anterior teeth and hinges of the jaw. Linear static analyses were performed on all tyrannosaurid specimens in Strand7 As a baseline of relative adductor force, estimates from the subtemporal fenestra method (Shychoski, 2006) were used for T. rex (AMNH 5027) and Gorgosaurus libratus (TMP 91.36.500). By this procedure, the area of the subtemporal fenestra serves as a proxy for anatomical crosssectional area, which is then multiplied by muscle's isometric specific tension (31.5 N/cm 2 ) to obtain a force for the temporal adductors (Shychoski, 2006). When subtemporal estimates were unavailable, section moduli of dentaries (Therrien et al., 2005) were calculated as an index of comparative bite force and related these estimates to subtemporal estimates of closely related taxa. Therrien et al. (2005) calculated section moduli for resistance to vertical bending (Zx) for several T. rex dentaries used in the present study. Zx estimates were averaged at two tooth positions for T. 
rex specimens AMNH 5027 and FMNH PR 2081, respectively, and used these as an index of bite force relative to Therrien et al.'s (2005) mean estimate for adult T. rex (in units of "times stronger than Alligator"). Ratios of Zx for these specimens were remarkably close to the squared ratios of their minimum dentary depths. For other specimens, ratios were squared between their depths and that of AMNH 5027. These quotients were multiplied by subtemporal estimates for AMNH 5027 to obtain muscular forces. Similarly to 3D analyses, forces for T. rex were scaled to obtain forces for other tyrannosaurids. The crania of T. rex are relatively broad, and the subtemporal fenestrae elongate. Force estimates were therefore modified from narrower-skulled tyrannosauroids (Shychoski, 2006) by multiplying them by the ratio of areas measured in ImageJ (0.57) between Daspletosaurus torosus (CMN 8506) and T. rex (AMNH 5027), normalized for skull length. Equal percentages of total force were applied for muscles influencing specific regions of the mandible. Forces of the pseudotemporalis muscle, adductor mandibulae externus profundus muscle, and pterygoideus dorsalis muscle were consolidated into an insertion within the adductor fossa, and those for the adductor mandibulae externus superficialis muscle and adductor mandibulae posterior muscle along the posterodorsal surface of the T A B L E 6 Specimens for 2D FEA, for including mandibular ramus lengths, and for the theropods section modulus Z x relative to the Alligator specimen in Therrien, Henderson, and Ruff (2005), and calculated muscle force surangular. M. pterygoideus ventralis loops around the mandible, and its force was applied as compression on the posteroventral surface. Modeling this muscle's pull on the lateral surface of the mandible was simulated in a 3D analysis. Constraints were similar to those of the 3D analyses. Each model was constrained to zero displacement at the jaw joint (along the dorsal surface of the articular), and at two anterior teeth. To compare the effects of mandible morphology on stress magnitude and distribution, each model was scaled to the length and forces applied to the mandibular ramus of adult T. rex specimen AMNH FARB 5027. Scaling to mandible length and force of one specimen enables comparisons of stress if the theropods were at hypothetical, ecologically equivalent sizes, rather than estimating absolute stresses. Lower stress will suggest a mandible's ability to accommodate greater bite force, and vice versa. | 3D stress distributions and magnitudes Small juvenile and larger subadult tyrannosaurs experienced greater mandible stresses when mandible lengths are equalized by adjusting their sizes in Avizo and Strand7. At equal mandible lengths, the larger juvenile tyrannosaur BMRP 2002.4.1 experienced 3.29 times the compressive and tensile bending stresses relative to the adult tyrannosaur FMNH PR2081 ( Figure 5). The small juvenile Raptorex experienced 6.05 times the bending stresses relative to FMNH PR2081, and 1.84 times the stresses of BMRP 2002.4.1. However, at the actual size, the juveniles experienced lower absolute stresses when compared to the adult, contradicting Hypothesis 1. With calculated muscle forces and simulation at actual size, Raptorex experienced the overall lowest peak von Mises stresses of all the tyrannosaurs (Figures 5, 6). When postdentary ligaments are incorporated into the juvenile T. 
rex mandible, the strain on post-dentary ligaments decreases stress and strain in the posterior portion of the dentary, and where teeth impacted their food (Figure 7). Incorporating ligaments reduce overall stresses at the teeth and on the mandible by roughly 10 MPa, creating a more realistic scenario when the animal was biting down (Figure 8). Despite greater stress than in juveniles, the adult mandible maintains safety factors, or expressions of how much stronger the system is than it needs to be for an intended load, of 3-4 relative to ultimate stress in the bone. The adult T. rex with the simulated food object experienced tooth tip stresses of 300 MPa, and with greater values of over 1,000 MPa when the contact points served as constraints (Figure 9). These values are consistent with high pressures calculated by Gignac and Erickson (2017), with the point constraint similar to initial contact with the very tip of the tooth. Tension from the lateral insertion of the looping ventral pterygoid muscle is linked to increasing compressive F I G U R E 5 Lateral mandible views (from top to bottom) of Raptorex kriegsteini (LH PV18), juvenile T. rex (BMRP 2002.4.1), and adult T. rex (FMNH PR 2081), illustrating differences in jaw morphology and von Mises stresses during ontogeny. Units are in mega pascals (MPa). Note the jaw depth transition from the juvenile tyrannosaurs relative to the angular, deeply-set mandible of the adult tyrannosaur F I G U R E 6 von Mises stress, the measurement of how close a structure is to breaking, for Raptorex kriegsteini (LH PV18) in lateral view (top) and dorsal view (bottom). Units are in mega pascals (MPa). Constraints are placed at the fifth and sixth tooth and the hinges of the jaw stresses on the angular, a large bone near the posterior end of the mandible, but it serves to decrease anterior bending stresses on the mandible (Figure 10). Lowered mid-mandible bending stresses would be advantageous with the highly robust and conical teeth on the anterior end of the tyrannosaur jaw, where, usually, they may have applied their highest impact bite forces. Crocodilians experience the reverse situation: they possess robust teeth near the posterior end of their mandible where they apply their highest bite forces . Stresses are also notable on the pseudotemporalis profundus muscle (mamep) located near the posterior teeth, achieving between 25 and 30 MPa in adult T. rex (Figures 5, 9 and 10). Constraints are placed at the anterior teeth and hinges of the jaw. Units are in mega pascals (MPa). Note the increased stress on the angular but decreased stress throughout the mandible in the model simulating the looping medial pteygoideus ventralis insertion | 3D comparisons validate 2D stress results For both the T. rex and Carnotaurus comparisons, the pattern of stress distribution in the profile models was nearly identical between the 3D models in lateral view (Figure 11). Stress magnitudes were similar in most regions of the respective models but were lower in the 2D models at the dorsal dentary-surangular articulation in Carnotaurus, and lateral to the position of the mandibular fossa in T rex. The highest stresses on the T. rex dentary are farther forward in the 2D model (Figure 11b) than in the 3D simulation (Figure 11a). | 2D stress distributions and magnitudes Distributions of mandible stress are similar between juvenile tyrannosaurids and adults of other species (Figure 12). 
| 2D stress distributions and magnitudes Distributions of mandible stress are similar between juvenile tyrannosaurids and adults of other species (Figure 12). Stress patterns in juvenile tyrannosaurids most closely resemble those of the allosauroid Sinraptor dongi. Magnitudes of von Mises stress are lower in adult tyrannosaurids than in juveniles, whose magnitudes are closer to those of adults of other taxa (Figure 12). Scaled mandibular rami of Ceratosaurus nasicornis and Suchomimus tenerensis experience notably high magnitudes of von Mises stress, consistent with their shallow post-dentary regions. | DISCUSSION The results show how changes in ontogeny result in different mandible stresses and strains, indicating that the predatory lifestyles of juvenile and adult tyrannosaurids may have differed significantly. Using the Raptorex mandible as a proxy for that of a small juvenile Tyrannosaurus was not ideal, but it enabled us to include the best-preserved example of a young juvenile tyrannosaurid specimen. Similar results would be predicted for a juvenile T. rex the size of the Raptorex specimen, although its skull would be posteriorly broader (Carr, 1999, 2020), and its bite force perhaps concomitantly greater (Gignac & Erickson, 2017), than in a juvenile Tarbosaurus skull of the same length. The stress results may correlate with proposed ontogenetic growth series that have been constructed (Carr, 1999, 2020; Carr & Williamson, 2004; Currie, 2003): elongate and gracile skulls of juveniles become deeper, more robust, and more heavily ornamented. These adult features are integral to the novel tyrannosaurid "bone-crunching" puncture-pull feeding, suggesting it was only used by larger, older individuals; this hypothesis may be further supported by the adults experiencing lower stresses than juvenile forms at equalized mandible lengths. While the adult tyrannosaurs experienced higher absolute stresses near the mid-mandible relative to the smaller tyrannosaurs, safety factors were sufficient and they were able to accommodate such high forces because the mandible was so much larger (this is also seen in giant pliosaurs: McHenry, 2009; Foffa et al., 2014).
FIGURE 11 3D FEA results largely validate von Mises stress distribution in 2D models. A and B. Respective 3D (modified from Mazzetta et al. (2005), with Photoshop® Cartoonize function) and 2D results for Carnotaurus sastrei. C and D. Respective 3D and 2D results for Tyrannosaurus rex. Distributions of von Mises stress are similar in 3D and 2D simulations for each specimen, evident despite different color scales (insets) for the 3D results.
Stresses on the mid-mandible are lessened, but increased near the mandible hinge, by the ventral pterygoid muscle insertion in the late-stage juvenile tyrannosaur BMRP 2002.4.1. This presents a trade-off for the animal. While damage to the jaw hinge would be costly and teeth can be replaced, teeth are still immediately necessary for prey capture, feeding, and combat. Tyrannosaur remains with broken or fractured jaw hinges are rare, suggesting that this stress distribution along the mandible was more beneficial to the animal in the long term. However, BMRP 2002.4.1 displays a distinct lesion on the right quadrate condyle and a mottled surface of the ipsilateral surangular (T.D. Carr, personal communication 2021), consistent with high forces and stresses at and near the joint. In addition, analysis with simulated bone biting reduces artificially high stress at the originally constrained nodes.
However, the resulting stress remains high (300 MPa). This high stress is consistent with enormous tooth-tip pressures calculated for adult T. rex, with splintering of prey bone under catastrophic failure (seen in fossils and simulated experimentally: Erickson et al., 1996; Gignac & Erickson, 2017), and even with occasional breakage seen in tyrannosaurid teeth still within the jaw. For 2D stresses relative to mandible length and at equalized forces, T. rex shows an ontogenetic gradient of diminishing stress consistent with hypothesized shifts in diet. These results have additional implications for theropod diets. T. rex mandible stresses in response to feeding loads recapitulate those of adults of other tyrannosaurids, in that as T. rex grew, its mandible robustness and relative stress came to resemble those of adults of similar body size. The results for the adult Asian Tarbosaurus are similar to those of a younger adult T. rex. This similarity gives confidence in using the Asian Raptorex as representative of overall small juvenile morphology for tyrannosaurines, including T. rex. 2D stress results for T. rex itself are also congruent with the dramatic shift in skull robustness that Carr (1999, 2020) identified between juvenile and subadult ontogenetic stages. In addition to their increasing robustness, tyrannosaurid mandibles vary ontogenetically in several discrete traits (Carr, 2020).
FIGURE 12 von Mises stress (megapascals) in planar models of theropod mandibles, scaled to the length and muscle force of adult Tyrannosaurus rex AMNH FARB 5027. Stress is predictably greater (hotter colors) at smaller sizes ontogenetically in T. rex, and with ontogeny and phylogeny in other tyrannosauroids. Mandible stress is greater in the other theropods than in tyrannosaurs of similar size, especially in the spinosaur Suchomimus tenerensis and surprisingly in Ceratosaurus nasicornis.
These features include a diagnostic posterior surangular fenestra near the jaw joint (Figures 3 and 12; notably large in Daspletosaurus), the laterally projecting surangular shelf above this opening, and the external mandibular fenestra at the junction of the dentary, angular, and surangular. The surangular fenestra of the large juvenile BMRP 2002.4.1 was surrounded by the lowest bone stresses and was largest in this specimen, suggesting that fenestra size varies inversely with experienced relative loading (less bone is "needed" for resisting lower local forces). Both the small juvenile and the large adult have smaller surangular fenestrae and greater surrounding stresses (Figure 5). Proportionally greater loadings in the adult Tyrannosaurus are further consistent with discrete mandibular traits. The ventrolateral slope of the surangular shelf (Carr, 2020) would resist bending moments imposed by inserting muscles and may reflect a ventral pull by the posterior pterygoid if this muscle inserted ventrally onto the shelf. The adult's external mandibular fenestra has an anterior, dorsal notch (Carr, 2020), and a more posterior, ventral intrusion by the surangular. The notch would distribute stresses around the opening (Farlow et al., 1991) better than the flat dorsal margin of the fenestra in other theropods. The projection from the surangular may reflect loading on attaching ligamentous tissue that traversed the fenestra, resisting relatively high loads (although the projection is less evident in other adult specimens: Carr (2020)). The ability of the tyrannosaurid mandible to resist high forces (especially in adults) is consistent with cranial adaptations.
Tyrannosaurs are notable for their possession of fused nasals (Snively et al., 2006) and substantial ossification of the secondary palate that most other large-bodied theropods lacked (Holtz Jr., 1998; Holtz Jr., 2000; Holtz Jr., 2003; Holtz Jr., 2004). ("Secondary palate" is used here in its embryological sense, as a posterior palatal structure formed by outgrowths of the maxillary processes (Abramyan & Richman, 2015), which is patent in most reptiles but fused and largely ossified in crocodylians and mammals.) A typical large theropod skull would be relatively strong under vertical compressive loads but would lack solid support to resist torsional or twisting loads (Snively et al., 2006). An extensive bony portion of the secondary palate of tyrannosaurs, formed by medial extensions of the maxillae and reinforced by the diamond-shaped anterior end of the vomer, would allow for greater resistance to torsional loads (Gignac & Erickson, 2017; Holtz Jr., 2008). These FEA findings for the mandible further validate the animal's powerful bite and demonstrate the potential to inflict tissue damage that few other large theropods could achieve (Cost et al., 2019). Spinosaurs, such as Suchomimus tenerensis (Figure 13), also possessed a bony secondary palate (Sereno et al., 1998; Taquet & Russell, 1998), which likely compensated for torsional weakness in a hydrodynamically efficient low, narrow cranium, as in some crocodylians and pliosaurs (Rayfield, Milner, Xuan, & Young, 2007; McHenry, 2009; Cuff & Rayfield, 2013; Foffa et al., 2014). The secondary palate of tyrannosaurids was thus an enhancement of already high structural strength, rather than compensation for a gracile or dorsoventrally compressed rostrum as seen in spinosaurs and some crocodylians. | Paleoecological implications Subadult tyrannosaurid mandibles experienced relatively low von Mises stresses in contrast to those of mature individuals; this suggests that subadults or smaller tyrannosaurid genera fed on smaller, potentially more agile prey, while the bone-crunching bite used by mature individuals was reserved for large, less mobile prey, such as hadrosaurids. Hadrosaurids were large, herbivorous duck-billed dinosaurs, such as Edmontosaurus (Figure 14), for which there is ample evidence of active predation by tyrannosaurs (Carpenter, 1997; Hone & Watabe, 2010; Rothschild & DePalma, 2013). FEA data indicate that younger individuals were active predators, somewhat akin to dromaeosaurs such as Deinonychus (Ostrom, 1970). Dromaeosaurs were a group of theropod dinosaurs distinguished from tyrannosaurs by a well-developed slashing talon on the second pedal digit, a stiffened tail that possibly functioned as a dynamic stabilizer, and large grasping hands. Tyrannosaurids are noted for their surprising agility, including the Raptorex and juvenile T. rex specimens (Snively et al., 2019). This agility may have been ideal for outmaneuvering prey after fast and energetically efficient tracking (Dececchi, Mloszewska, Holtz Jr., Habib, & Larsson, 2020) of presumably slower animals (Currie, 1983; Persons & Currie, 2014). Late-stage juvenile tyrannosaurids at a near-identical age to the larger juvenile T. rex in this study were likely feeding on large prey despite lacking the bone-crunching adult bite (Peterson & Daus, 2019). While these juveniles were biomechanically capable of puncturing bone during feeding, and did so without the large, blunt dental crowns of the adult, their feeding traces on adult prey animals are likely postmortem.
Ventral bite traces on the hadrosaurid vertebrae suggest that the tyrannosaur was feeding after the haemal complexes and their associated blood vessels, most of the superficial hypaxial muscles ("core" muscles of the tail), and the m. caudofemoralis longus (a major tail muscle) had been removed. Hadrosaurs grew exceptionally fast, perhaps minimizing predation pressure from younger tyrannosaurs (Cooper, Lee, Taper, & Horner, 2008). The crushing bite of adult and perhaps late-stage juvenile tyrannosaurids was likely useful for dispatching adult hadrosaurs and other large herbivores (Figures 13 and 14).
FIGURE 14 Skeletons of five herbivorous dinosaur genera that coexisted with Tarbosaurus bataar in Maastrichtian Mongolia, based on fossil material recovered. Clockwise from above left: the horned ceratopsian Protoceratops (Field Museum, Chicago, IL; photo by E. Snively), the hadrosaurid Saurolophus (Gifu, Japan; photo by Y. Tamai), the long-clawed theropod therizinosaur Nothronychus (Utah Museum of Natural History, Salt Lake City, UT; photo by E. Snively), the long-necked sauropod Opisthocoelicaudia (Museum of Evolution of the Polish Academy of Sciences; photo by A. Grycuk), and a small, agile oviraptorid (Field Museum, Chicago, IL; photo by E. Snively). Note that Nothronychus did not originate in Mongolia, but therizinosaur material has been recovered there.
Prey of adult tyrannosaurids likely included the large, herbivorous ceratopsians. The ceratopsians were a group of dinosaurs renowned for their imposing facial horns and large frills. Tyrannosaurus and Triceratops coexisted in Maastrichtian North America and are popularly depicted engaged in combat. Various frill pathologies and cranial lesions have been attributed to predation by T. rex (Happ, 2008); these pathologies may be as extreme as horns being bitten off (Happ, 2008; Hone & Rauhut, 2010). The potential diet of large Asian tyrannosaurids such as Tarbosaurus and Zhuchengtyrannus, represented here by the small juvenile Raptorex specimen, consisted of large sauropods and hadrosaurids (Owocki, Kremer, Cotte, & Bocherens, 2019; Hone et al., 2011; Figure 14). The Nemegt Basin of southern Mongolia offers a wealth of fossil material for the isotopic study of paleoenvironments. Stable isotopes of oxygen and carbon are notable in paleontological work because they can aid diet reconstructions and paleoecology. Tooth enamel carbonate sampled along the growth axes of five Tarbosaurus bataar teeth has aided in the identification of seasonal climatic variations and implies the presence of a woodland ecosystem dominated by large herbivores. Tooth drag and puncture marks attributed to Tarbosaurus have been reported from bones of the hadrosaurine Saurolophus (Hone & Watabe, 2010; Figure 14) and the sauropod Opisthocoelicaudia (Borsuk-Białynicka, 1977; Figure 14). These data, combined with carbon isotope signatures and FEA data, imply that Tarbosaurus was an apex predator in the Late Cretaceous of the Gobi region. In Maastrichtian (72.1-66 million years old) North America, T. rex was considered the apex tyrannosaurid; skin pathologies (Rothschild & DePalma, 2013), healed prey vertebrae containing teeth (De Palma, Burnham, Martin, Rothschild, & Larson, 2013), and coprolite (fossil feces) analyses (Chin, Tokaryk, Erickson, & Calk, 1998) indicate an active predatory lifestyle. The extent of active predation in Tyrannosaurus has remained a contentious topic for decades (Holtz Jr., 2008; Horner, 1994, 1997).
Bone healing seen in herbivorous dinosaurs (Carpenter, 1997) seems to verify the active-predation hypothesis: these animals survived an attack, escaped the predator, and died later elsewhere. These FEA data, along with the deep-set jaws of adult T. rex, indicate a powerful, locking or catastrophically damaging bite (Cost et al., 2019; Erickson et al., 1996; Gignac & Erickson, 2017; Happ, 2008); however, there remains the possibility of potential prey escaping with non-fatal wounds to the tail or other appendages. | Conclusions and predictions In conclusion, tyrannosaurids possessed a rigid, bone-crunching bite that other predatory dinosaurs of the Mesozoic era lacked, despite similarities in the secondary palate of some. Their wide-set jaws and stress-resistant biting allowed them to subdue prey easily and swallow chunks of flesh and bone. Their biting capabilities, massive size, and surprising agility likely made them the apex predators in their ecosystems. This study centered on the mandibles of tyrannosaurids at varying ontogenetic stages. It is predicted that 3D comparison of a broader phylogenetic sample will elucidate divergent feeding styles within ecosystems. FEA comparisons will be informative about skull function in adult tyrannosaurids with differing morphology. Slender mandibles of adult low-snouted tyrannosaurids such as Alioramus altai (Brusatte, Carr, Erickson, Bever, & Norell, 2009) are predicted to constrain bite force compared with that of subadults of contemporaneous large tyrannosaurines. Further analyses of Morrison Formation theropods, including Ceratosaurus and Allosaurus (Figure 12), will inform hypotheses of niche fulfillment within their own diverse predatory guild (Van Valkenburgh & Molnar, 2002). Given that absolute values in FEA studies are not often examined, future comparisons can sample homologous points on the dinosaur mandibles (Gilbert et al., 2016) and compare stresses and strains numerically (Snively et al., 2010). Within the jaw of a given tyrannosaur, stress and strain can also be investigated for bites at different points along the tooth row. Constraints simulate the contact points for teeth as they bit down on prey and met resistance from flesh and bone. The FE analyses on the tyrannosaurid mandibles presented here are focused on the anterior teeth, which likely first made contact with flesh. Assigning these constraints only to the posterior teeth could yield entirely different stress and strain results. Given the significance of the anterior teeth in tyrannosaur feeding, this would test the hypothesis that the smaller, less robust posterior teeth would break and deform under loads comparable to those placed on the anterior teeth, corroborating the need for large, robust anterior teeth for efficient feeding in the clade. ACKNOWLEDGMENTS Gail Gillis provided ingenious insight and application for the finite element analyses. We thank Lawrence Witmer (Ohio University), Heather Rockford (O'Bleness Memorial Hospital, Athens, OH), and Ryan Ridgely (Ohio University) for scans and surface models of the juvenile T. rex "Jane" (BMRP 2002.4.1). We thank Jeff Parker (Western Paleontological Laboratories) for the Raptorex kriegsteini (LH PV18) scan, Brian Cooley for the sculpture of the adult T. rex "Sue" (FMNH PR 2081), and Prathiba Bali (University of Calgary) for the CT scanning of the sculpture. Stephen Rowe (University of New England) made the initial FE model of FMNH PR 2081.
Additionally, thanks to Kaitlyn Nichols for muscle reconstructions, and Anita Davelos and Thomas Greiner for their insight during data collection.
2021-02-16T06:16:20.150Z
2021-02-15T00:00:00.000
{ "year": 2021, "sha1": "8efedab435b1e4672fbeef89408842ad30e44059", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ar.24602", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8b5f91da2bae9567a30c205373f34990a9211ba0", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
267217755
pes2o/s2orc
v3-fos-license
STATE AND PROSPECTS OF THE DEVELOPMENT OF ECONOMIC RELATIONS BETWEEN UKRAINE AND GREAT BRITAIN IN THE AGRICULTURAL SECTOR This article considers the prospects of economic relations between Great Britain and Ukraine in the agricultural area. Both countries have significant potential in this field, which creates unique opportunities for cooperation and development. Ukraine, as one of the leading producers of grain, oil and other agricultural products, is one of the guarantors of food security in the world. Great Britain, in turn, has a high demand for quality organic products, which Ukraine can use to expand the export range of its agricultural production. Cooperation between the two countries may involve the exchange of technologies, the expansion of sales markets and joint projects in the cultivation of organic products. However, in order to achieve success, it is important to develop infrastructure, bring production in line with international quality and safety standards, intensify business contacts and facilitate the exchange of information between agricultural enterprises of both countries. Taking into account this potential and these advantages, agricultural cooperation between Great Britain and Ukraine can contribute to sustainable economic development and food security. The problem statement of the article includes the following aspects. Today, Ukraine finds itself in a difficult position because of the armed hostilities and is forced to function while accepting challenges and overcoming obstacles. Great Britain has become one of Ukraine's reliable partners: it was among the first to offer help and, for the second consecutive year, continues to build mutual cooperation in various areas, including the military, political and economic spheres. In a rapidly changing and increasingly globalized world, economic relations are becoming a key factor in the development of countries. In the future, cooperation between Great Britain and Ukraine in the agricultural sector may be of crucial importance, as the development of the partnership will ensure a sustainable supply of high-quality and ecologically clean agricultural products for consumers, ensure food security and contribute to the sustainable economic growth of both countries. For this purpose, Ukraine should outline the most appropriate areas of cooperation and the problematic issues that need to be resolved. Analysis of the latest research. Prospects of bilateral economic relations between Ukraine and Great Britain are highlighted in the works of many domestic and foreign scientists, including M. Bilousov, O. Honcharov, A. Hrubinko, P. Ihnatev, V. Krushinskyi, V. Maiko, O. Sahaidak, P. Sardachuk and N. Yakovenko. Although the contribution of these scientists to the study of the economic relations of these countries in the agricultural sector is quite significant, the active pace of economic development requires constant changes and improvements, which determines the relevance of the study.
The purpose of the article is to identify potential opportunities for cooperation between Great Britain and Ukraine in the agricultural sector, including the exchange of technologies, scientific research in the area of organic production, fertilizer production, expansion of sales markets and implementation of joint projects. It also aims to identify problems that may arise in economic relations between the countries in the agricultural industry, including infrastructural and logistical issues, provision of equipment and innovative technologies, production and application of modern fertilizers, compliance with quality standards, storage of raw materials and products, and other factors. Main material statement. In today's world, where economic relations between countries are a key factor in ensuring sustainable development, it is important to monitor and explore new opportunities for cooperation between countries. One of the promising areas of cooperation between Great Britain and Ukraine is the agricultural sector. Both countries have significant potential in this field, and the development of economic relations in the area of agricultural production can result in mutually beneficial outcomes for both parties. Ukraine has significant agricultural potential based on rich natural resources, fertile soils and a favourable climate. The country is one of the leading producers of grain crops, corn, sunflower, oil, oilseed meal, soybeans, rapeseed and other agricultural products. Additionally, Ukraine can become a leading producer of organic products, which are becoming more and more popular on the world market. The armed hostilities in Ukraine have created a threat to global food security, as the actual sown area has decreased by about a quarter: the zone of hostilities covers almost 20 % of the country's territory; a third of Ukraine is mined (more than 5 million hectares of agricultural land), which will take decades to clear; and land damaged by trenches, shells and the movement of military equipment needs reclamation. As a result of the hostilities, 53 % of agricultural machinery, 23 % of manufactured products, 15 % of granaries and 6 % of perennial crops were destroyed; the total losses, preliminarily estimated as of July 19, 2023, amounted to 8.7 billion US dollars and continue to grow [1]. Furthermore, the explosion of the Kakhovskaia HPP led to the loss of 92 % of irrigation systems in the Kherson region and about 70 % of the systems in the Zaporizhzhia region, affected hundreds of farms, and damaged part of the systems in other regions bordering the Kherson region. This will have consequences not only for farmers, but also for the country's overall export potential [3]. Today, despite the armed hostilities and limited export opportunities, Ukraine remains a reliable partner and supplier of agricultural products to the world market (Fig. 1).
As can be seen from the data in Fig. 1, the volume of exports in 2022 decreased by almost 13 % compared to the previous year, as the armed hostilities caused heavy losses to the crop husbandry sector (14.3 billion US dollars). Thus, the crop losses amounted to 39 % of the wheat crop, 17 % of sunflower, 12 % of corn, 8 % of barley, 3 % of berries and fruits and 21 % of other plants. Livestock losses ($1.7 billion) included dairy (48 %), egg production (20 %), poultry (17 %), swine (9 %), cattle (4 %) and all other (2 %). The decrease in exports is directly related to the armed hostilities, taking into account the gradually increasing trends observed in recent years and the preliminary forecasts for 2022. However, even under the conditions prevailing in Ukraine today, the state is trying to develop a plan, and ways to implement it, to restore the export potential of the agricultural sector and maintain its status as a supplier of food to world markets. According to the results of the 2022-2023 season, corn and wheat continued to be the main export items, sunflower oil ranked second, and oil meal and related products ranked third. As shown in Figure 2, the leading sector of the economy in Great Britain is the service sector (74.5 %), followed by the industrial sector (18.6 %), other sectors (5.9 %) and the agricultural sector (1 %). Although the agricultural sector occupies the smallest share of Great Britain's GDP, this does not prevent it from satisfying about two thirds of the country's internal food needs while occupying 71 % of its total area. Great Britain actively produces a wide range of agricultural products, including cereals (mainly wheat and barley), oil crops (rapeseed), industrial crops (sugar beet), vegetables, fruits, etc. The turnover of crop production in 2021 amounted to 11 billion pounds. In turn, livestock turnover in 2021 was 16.3 billion pounds, including beef, pork, lamb, poultry, dairy products and eggs. Separately, promising subsectors are also developing in Great Britain: food processing (healthy food, staple foods, snacks), dried and processed fruits (cranberries, dried cherries, prunes, raisins, wild berries), nuts (almonds, peanuts, pecans, pistachios, walnuts), fish and seafood (cod, pollock, salmon, other fish products), fresh fruit and vegetables (apples, grapefruits, sweet potatoes, table grapes), meat (hormone-free beef and pork products), food ingredients (any product used for further processing), and pellets and other waste/residues (for renewable fuels) [4]. In 2021, Great Britain imported $78.2 billion worth of agricultural and related goods and exported $31.9 billion, less than half the value of imports. The largest share of exports and imports of consumer-oriented goods (Fig. 3) includes fresh fruit, products of animal origin, dairy products, distilled alcohol, wine, bakery products, fresh and processed vegetables, etc. [5] Agricultural products include forest products, seafood and other related products. Semi-finished products include soy products and oil meal, other vegetable and essential oils, fodder, etc.
Powder foods include corn, rice and other coarse grains. This state of trade shows that historically the European Union has been Great Britain's largest trading partner, but the official exit of the United Kingdom from the single European market, known as «Brexit», has affected the dynamics of trade, and the country therefore seeks to diversify its trading partners. To do so, it plans to attract new trade partners not only through its diverse production and product quality, but mainly through the active implementation of modern technologies and innovations, such as automated control systems, data analysis and unmanned aerial vehicles, to increase the efficiency and sustainability of production. Links between Great Britain and Ukraine in agriculture have begun to develop over the last decade. Ukraine has become an important supplier of grain and oil, and trade volumes in this area continue to grow. On October 8, 2020, the Agreement on Political Cooperation, Free Trade and Strategic Partnership between Ukraine and Great Britain was signed, which entered into force on January 1, 2021. It is also worth noting the important role of bilateral Agreement No. 1, signed on May 4, 2022, which amends the aforementioned Agreement in the form of an exchange of letters between the United Kingdom of Great Britain and Northern Ireland and Ukraine on the cancellation of import duties and tariff quotas for Ukrainian goods in order to support the economy of Ukraine. This additional Agreement made it possible for Great Britain to remain Ukraine's 11th-largest trading partner among European countries, with a share of 3.5 % of total trade with European countries [5]. Also, thanks to this decision, Ukraine will be able to further increase exports of goods traditionally imported to the British market, including flour, grain, dairy products, oil, honey, corn, wheat, etc. [6]. Consolidated efforts and cooperation in the agricultural sector will significantly affect the food security not only of Great Britain and Ukraine but of the whole world, and will contribute to sustainable economic growth and the development of innovations in agriculture. Let us consider the most advantageous ways of cooperation between Great Britain and Ukraine in the agricultural sector. One currently powerful avenue is cooperation in the exchange of innovative technologies, equipment samples and know-how. Specifically, Great Britain is distinguished by the development of its agricultural sciences and innovative approaches to production, while Ukraine, in turn, can share its experience of growing large volumes of grain and other products. Taking into account the expediency of reorienting part of Ukraine's raw-material exports toward the export of finished products, the exchange of knowledge and technologies can contribute to increasing the productivity and quality of agricultural production in both countries, expanding sales markets, increasing export potential and creating new jobs. Optimal use of the agricultural potential of both countries, joint initiatives in the field of research and innovation, and the development of trade relations can ensure a sustainable and mutually beneficial partnership.
So, for example, thanks to Great Britain, Ukraine could introduce an innovative technology: unmanned aerial vehicle (UAV) operation. Unmanned agricultural machinery (drones) has been actively used in Great Britain since 2017 at all stages of crop cultivation: planting, applying fertilizers and harvesting. Drones are equipped with cameras, lasers and GPS systems, which allows the equipment to navigate in space and carry out work without human intervention. The state of the field is also monitored using drones. UAVs are used for soil sampling and crop condition analysis; special sensors are used for monitoring, and automatic systems are installed for spraying plants. The introduction of such technology in Ukraine is quite feasible, since companies specializing in UAVs for military purposes will be able to adapt their production to agricultural needs in the future. In addition, such an innovation will be extremely useful in the post-war period for determining the level of soil contamination and ensuring safe sowing. Another avenue for cooperation is the development of joint projects in organic production. Given the growing demand for organic products, Ukraine has the potential to become an important supplier of organic agricultural products. This is confirmed by the statistics on organic exports for 2022, when Ukraine, even during the armed hostilities, was able to sell organic products to European countries in a total volume of 245 thousand tons for $219 million, which in turn demonstrates a high level of European demand (Fig. 4). As for cooperation between Ukraine and Great Britain specifically in the organic sector, certain steps are being taken that will contribute to the establishment of trade, namely the certification of organic products in accordance with the organic certification standards of the EU and Great Britain. This, in turn, will help ensure the trust of buyers and compliance with the requirements of the importing country. Furthermore, in the near future a commission will begin operating in Ukraine that will specialize in the study of market trends and consumer preferences in Great Britain, which will make it possible to adapt the production of organic products and the marketing strategy to the conditions of the local market.
Fig. 1 - Export volume of agricultural products of Ukraine for the period 2017-2022, billion US dollars.
Fig. 2 - Sectors of the Great Britain economy in relation to the share of GDP as of 2021.
Fig. 4 - Export of organic products from Ukraine to Europe, 2022.
2024-01-26T16:31:54.293Z
2024-01-23T00:00:00.000
{ "year": 2024, "sha1": "d3c454197e23bc3d103684babcb36b3c5f32afba", "oa_license": "CCBY", "oa_url": "http://journalsofznu.zp.ua/index.php/economics/article/download/3987/3806", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a5a8add0f2f109481ed91e9131bda8492a5db05c", "s2fieldsofstudy": [ "Economics", "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
199064451
pes2o/s2orc
v3-fos-license
Unification for the Darkly Charged Dark Matter We provide a simple UV theory for a Dirac dark matter with a massless Abelian gauge boson. We introduce a single fermion transforming as the 16 representation in the SO(10)′ gauge group, which is assumed to be spontaneously broken to SU(5)′×U(1)′. The SU(5)′ gauge interaction becomes strong at an intermediate scale and then we obtain a light composite Dirac fermion with U(1)′ gauge interaction at the low-energy scale. Its thermal relic can explain the observed amount of dark matter consistently with other cosmological and astrophysical constraints. We discuss that a nonzero kinetic mixing between the U(1)′ gauge boson and the Hypercharge gauge boson is allowed and that the temperature of the visible sector and the dark matter sector can be equal to each other. Introduction.-Constructing a grand unified theory (GUT) of the Standard Model (SM) is an outstanding challenge in particle physics. The similarity of the SM gauge coupling constants and the beautiful unification of fermions in the SU(5) multiplets may support the existence of a unified theory at a very high energy scale. However, the running of the gauge coupling constants and the quark/lepton mass relation deviate from the simplest SU(5) GUT prediction [1][2][3][4][5], which may imply that the GUT breaking in the visible sector is much more complicated than we expect. In the context of cosmology, there exists dark matter, which may be a fundamental particle that barely interacts with the SM particles. Since the dark matter (DM) must be stable and neutral under the electromagnetic interaction, we consider it to be charged under a hidden U(1)′ gauge symmetry. Then one may hope that the dark sector is also unified into a GUT theory, as in the SM sector. In this letter, we propose a chiral SO(10)×SO(10)′ GUT as a unified model of the SM and DM sectors. The first SO(10) gauge theory is a standard SO(10) GUT model, which we do not specify as it has been extensively discussed in the literature [6][7][8][9][10][11]. We focus on the second SO(10)′ gauge theory, which gives the dark sector. The fermionic matter content in SO(10)′ is a single field in the 16 representation. The SO(10)′ is assumed to be spontaneously broken to SU(5)′×U(1)′ at a very high energy scale, and the SU(5)′ gauge interaction becomes strong at an energy scale of order 10^13 GeV. Below the confinement scale, we have a light composite Dirac fermion charged under the remaining U(1)′. Therefore the DM sector results in a Dirac DM with a massless U(1)′ gauge boson, which has been discussed in Refs. [12,13].
A similar idea of the strong SU(5) gauge theory was used in the literature in different contexts [14][15][16], where they did or did not introduce the U(1) gauge symmetry. As discussed in Ref. [13], a DM with a massless hidden photon is still allowed by any astrophysical observations and DM constraints even if it is the dominant component of DM. The thermal relic abundance of the Dirac fermion can explain the observed amount of DM. We find that the temperatures of SM and DM sectors can be the same with each other at a high temperature. This allows us to consider a nonzero kinetic mixing between the U(1) and U(1) Y gauge bosons, which presents an interesting possibility for the DM search in this model. The relic of the massless U(1) gauge boson affects the expansion rate of the Universe as dark radiation, which can be checked by the detailed measurements of the CMB anisotropies in the future. Dark matter in the low-energy sector.-We first explain a low energy phenomenology in the dark sector. Let us introduce a U(1) gauge symmetry and a Dirac fermion η of weak-scale mass m η with charge q. We consider the case where the U(1) gauge symmetry is not spontaneously broken and the gauge boson γ is massless until present. We denote the temperature of dark sector as T and that of visible sector as T . We define ξ(T ) = T /T , which depends on the temperature. We will see that there is a viable parameter region even if ξ = 1 at a high temperature. The DM can annihilate into the dark photon and hence its thermal relic density is determined by the freeze-out process. The thermally-averaged annihilation cross section is given by where v Mol is Moller velocity andS ann is the thermallyaveraged Sommerfeld enhancement factor [17,18]. In the regime where the gauge interaction is relatively large, a bound-state formation is efficient and is relevant to determine the thermal relic abundance. Hence we have to solve the coupled Boltzmann equations for the unbound arXiv:1908.00207v1 [hep-ph] 1 Aug 2019 and bound DM particles as done in Ref. [18]. In Fig. 1, we quote their result to plot a contour on which we can explain the observed amount of DM for the case of ξ(T ) = 1 at the time of DM freeze-out. The DM has a self-interaction mediated by the dark photon. Its cross section is given by where log Λ (≈ 40 -70) comes from an infrared cutoff for the scattering process. The velocity of DM v depends on the scale we are interested in: v ∼ 30 km/s, 300 km/s, and 1000 km/s for dwarf galaxies, galaxies, and galactic clusters, respectively. The observed triaxial structure of a galaxy NGC720 puts a stringent upper bound on the self-interaction cross section since the DM velocity distribution is randomized and is more isotropic by the self-interaction [12,13,19]. This can be rewritten as a constraint on the gauge coupling constant and is shown as the orange shaded region in Fig. 1. The DM with mass of order 0.1 -10 TeV is allowed even if ξ = 1 at the time of freeze-out, depending on q 2 α ( 10 −2 ). We expect that a larger number of statistical samples of galactic structures will make the analysis more robust. Since the self-interacting cross section is proportional to v −4 , the cross section for the cluster scales is much smaller than the observational constraints [20]. On the other hand, the self-interaction is quite large in the smaller scales, like dwarf galaxies. 
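The relic-abundance contour quoted above from Ref. [18] includes Sommerfeld enhancement and bound-state formation. As a much cruder point of reference, the standard analytic freeze-out estimate below (a sketch, not the authors' calculation) assumes only a leading-order s-wave cross section sigma*v ≈ π α_eff²/m², with α_eff = q²α′ being the combination on which the constraints above depend.

```python
import numpy as np

M_PL = 1.22e19   # Planck mass in GeV
G_STAR = 106.75  # assumed relativistic degrees of freedom at freeze-out

def sigma_v(m_gev, alpha_eff):
    """Leading-order s-wave annihilation cross section (GeV^-2); alpha_eff = q^2 * alpha'."""
    return np.pi * alpha_eff**2 / m_gev**2

def x_freezeout(m_gev, sv, g_dof=4.0):
    """Iterate x_f = ln[0.038 g M_Pl m <sigma v> / sqrt(g_* x_f)] (standard Kolb-Turner form)."""
    x = 20.0
    for _ in range(50):
        x = np.log(0.038 * g_dof * M_PL * m_gev * sv / np.sqrt(G_STAR * x))
    return x

def omega_h2(m_gev, alpha_eff):
    sv = sigma_v(m_gev, alpha_eff)
    xf = x_freezeout(m_gev, sv)
    # Factor of 2 counts particles plus antiparticles (a common Dirac-DM convention).
    return 2.0 * 1.07e9 * xf / (np.sqrt(G_STAR) * M_PL * sv)

# For alpha_eff ~ 0.03 and m ~ 1 TeV this gives roughly 0.15, within a factor of
# ~2 of the observed 0.12; the Sommerfeld and bound-state effects included in the
# full treatment shift the preferred coupling further.
print(omega_h2(1000.0, 0.03))
```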
It has been discussed that a too large scattering cross section leads to a very short mean-free path, which suppresses heat conduction, and hence both core formation and core collapse are inhibited [21,22]. Therefore, the constraint on dwarf galactic scales may not apply to this kind of model, and the massless mediator is still allowed for the self-interacting DM model. The massless dark photon remains in the thermal plasma in the dark sector and contributes to the energy density of the Universe as dark radiation. Its abundance is conveniently described by the deviation ΔN_eff of the effective neutrino number from the SM prediction, where g*′ is the effective number of degrees of freedom in the dark sector and T_d is the decoupling temperature of the dark sector from the SM sector. In the case where the dark sector is completely decoupled from the SM sector before the DM becomes non-relativistic and before the electroweak phase transition, we should take g*′(T_d) = 2 + 4(7/8) = 11/2 and g*(T_d) = 106.75 and obtain ΔN_eff = 0.21 ξ^4(T_d). Even if we set ξ(T_d) = 1, the prediction is consistent with the constraint reported by the Planck data combined with the BAO observation [23]. We can check the deviation from the SM prediction with a large significance in the near future by, e.g., the CMB-S4 experiment [24,25]. It is also possible that the DM sector is in thermal equilibrium with the SM sector at a high temperature and is then decoupled after the DM becomes nonrelativistic. This is the case when the U(1)′ gauge boson has a nonzero kinetic mixing with the U(1)_Y gauge boson, as we will discuss later. Then we should take ξ(T_d) = 1 and g*′(T_d) = 2. As we will discuss shortly, the decoupling temperature is just below the DM mass, which is of order or larger than the electroweak scale. Thus we expect g*(T_d) ≳ 100, which results in ΔN_eff ≲ 0.07. This scenario is also consistent with the Planck data and would be checked by the CMB-S4 experiment in the future. Dark matter from hidden SO(10)′.-Now we shall provide a UV theory of the DM sector, which is similar to the SM GUT. We introduce an SO(10)′ gauge group and a chiral fermion transforming as the 16 representation, assuming that the gauge group is spontaneously broken to SU(5)′×U(1)′ at an energy scale much above 10^13 GeV and below the Planck scale. After the SSB, the fermion is decomposed into ψ, χ, and N, which transform as the 5̄, 10, and 1 representations of the SU(5)′ gauge group, respectively. If we denote the U(1)′ charge of N as q (= √10/4), those of ψ and χ are −3q/5 and q/5, respectively [26]. If one starts from a generic SU(5)′×U(1)′ gauge theory instead of the SO(10)′ gauge theory, the U(1)′ charge q may be different from √10/4. Since the SU(5)′ gauge interaction is asymptotically free, it becomes strong and confines at a dynamical scale Λ_5. Below the confinement scale, there is a massless baryonic state composed of three fermions, η = ψψχ, as the 't Hooft anomaly matching condition is satisfied [27,28] (see Refs. [14][15][16] for other applications of this model). This can be combined with N to form a Dirac fermion. In fact, we can write down a dimension-6 operator, schematically (c/M²)(ψψχ)N, where c is an O(1) constant and M is the high scale suppressing the operator. This results in a Dirac mass term below the dynamical scale, with mass roughly m_η ∼ c Λ_5³/M²; this is of order 100 GeV − 10 TeV when the dynamical scale Λ_5 is of order 10^13-14 GeV. As a result, the low-energy sector is nothing but the DM model discussed in the previous section.
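Two of the numbers quoted above can be reproduced with short back-of-the-envelope calculations. The entropy bookkeeping below is a standard estimate whose prefactors are assumptions of this sketch rather than the paper's own equation, and the suppression scale M in the mass estimate is a free parameter chosen here near the reduced Planck mass.

```python
GS_SM_TODAY = 3.909  # SM entropy degrees of freedom today (photons + neutrinos)

def delta_neff(g_dark_dec, g_sm_dec, xi_dec=1.0):
    """Dark-radiation contribution of a massless dark photon.

    g_dark_dec: dark-sector entropy dof at decoupling (11/2 if the Dirac DM is
    still relativistic, 2 if only the dark photon remains); g_sm_dec: SM entropy
    dof at decoupling; xi_dec: T'/T at decoupling.
    """
    # Dark-photon temperature today relative to the photon temperature,
    # from separate entropy conservation in each sector.
    t_ratio = ((g_dark_dec / 2.0) * (GS_SM_TODAY / g_sm_dec)) ** (1.0 / 3.0) * xi_dec
    # Delta N_eff = (8/7) (11/4)^(4/3) (T'_gamma / T_gamma)^4
    return (8.0 / 7.0) * (11.0 / 4.0) ** (4.0 / 3.0) * t_ratio**4

def composite_mass_gev(lambda5_gev, m_suppression_gev, c=1.0):
    """m_eta ~ c * Lambda_5^3 / M^2 from the dimension-6 operator."""
    return c * lambda5_gev**3 / m_suppression_gev**2

print(delta_neff(11.0 / 2.0, 106.75))        # ~0.21, matching the value quoted above
print(delta_neff(2.0, 106.75))               # ~0.05; a smaller g*(T_d) pushes this toward ~0.07
print(composite_mass_gev(1e13, 2.4e18))      # ~1.7e2 GeV
print(composite_mass_gev(10**13.5, 2.4e18))  # ~5.5e3 GeV
```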
As for the SM sector, we consider also an SO(10) GUT, motivated by the thermal leptogenesis [29] (see, e.g., Refs. [30][31][32][33] for recent reviews) and seesaw mechanism [34][35][36][37]. Here, we introduce a right-handed neutrino with mass of order or larger than 10 9 GeV in the SM sector. Then, we expect an SO(10)×SO(10) gauge theory to be a unified model of the SM and DM sectors. The similarity of the SM and DM sectors may be because a fermion in the 16 representation is the minimal particle content for the anomaly-free chiral SO(10) gauge theory. An example of renormalization group running of gauge coupling constants is shown in Fig. 2, where we note that there are three flavors for quarks and leptons while there is only one "flavor" in the dark sector. Although an explicit construction of the GUT model in the SM sector is beyond the scope of this paper, we present a gauge coupling unification in a simple GUT model proposed in [38]. They introduced adjoint fermions for SU(3) c and SU(2) L at an intermediate scale and at the TeV scale, respectively. Although the SU(2) L adjoint fermion is stable, we assume that it is a subdominant component of DM or there is another field that makes it unstable. Noting that this is just one example of GUT in the Standard Model sector, we plot the gauge coupling unification in the simplest case in the figure. We do not introduce such adjoint fermions in the dark sector or we assume that they are heavier than the dynamical scale if present. We are interested in the case where q = √ 10/4 and the SU(5) gauge coupling α 5 becomes strong at Λ 5 ∼ 10 13 GeV. Starting from α 4.2 × 10 −2 and 2.5 × 10 −2 at the electroweak scale, we find that the SU respectively, to explain the observed amount of DM if ξ(T d ) = 1. We note that the gauge coupling constants in the dark sector does not need to be unified at the same scale as the GUT scale in the SM but can be unified at the energy scale between the dynamical scale Λ 5 (∼ 10 13 GeV) and the Planck scale. Thus the U(1) gauge coupling constant can be as large as q 2 α ∼ 0.2 at the electroweak scale. However, we expect that the gauge coupling constant at the unification scale is of the same order with that of the SM gauge coupling constants and hence M GUT = O(10 16 -18 ) GeV. In this case, α must be within the region between the dashed lines in Fig. 1, namely, α = (2.5 -4.2) × 10 −2 , m η = 0.6 -1.1 TeV. (7) This is the prediction of the chiral SO(10) gauge theory in the DM sector. Kinetic mixing.-Finally, we comment on the kinetic mixing between the U(1) Y and U(1) gauge bosons. For this purpose, we need to specify how to break the gauge groups at the GUT scale. We first note that a scalar field transforming as the 45 representation in SO(10) is decomposed into scalar fields in the 1+10+10+24 representations under an SU(5) (⊂ SO(10)) gauge group. The singlet 1 can be used to break SO(10) to SU(5)×U(1). We assume that SO(10) and SO(10) are spontaneously broken to SU(5)× U(1) (B−L) and SU(5) ×U(1) by nonzero VEVs of 45 H and 45 H , respectively. The remaining SU (5) in the visible sector is also assumed to be spontaneously broken to the Standard Model gauge group G SM by the field in the 24 representation that is contained in 45 H . On the other hand, we assume that 24 in 45 H has a vanishing VEV. We finally obtain G SM ×U(1) (B−L) ×SU(5) ×U(1) below these energy scales. The U(1) (B−L) is assumed to be spontaneously broken at an intermediate scale to give a nonzero mass to the right-handed neutrinos. 
Then even if we start from the SO(10)×SO(10) gauge theory, the kinetic mixing between U(1) Y and U(1) is induced from the following dimension 6 operator: The dark photon γ can be in thermal equilibrium with the SM sector by the annihilation and inverseannihilation processes of DM into the SM particles ff ↔ ηη, the Compton scattering process ηγ ↔ ηγ(γ ), and the Coulomb scattering process f η ↔ f η via the kinetic mixing, where f represents generic SM particles with nonzero U(1) Y charges. Comparing the energy transfer rate Γ with the Hubble expansion rate H, we find that the these processes are most important at the temperature around the DM mass. The ratio at T ∼ m η is roughly given by where n f is the number density of the SM particles with nonzero U(1) Y charges. The ratio is larger than of order unity when 10 −6 for m η = 1 TeV. This process freezes out soon after the DM becomes nonrelativistic, that is, around the temperature of order O(0.1)m η . Therefore, if the kinetic mixing is not strongly suppressed, the temperature of the DM sector is the same as the SM sector around the time of DM freeze-out and we should take ξ(T d ) = 1. The nonzero kinetic mixing between the U(1) Y (or U(1) EM ) and U(1) gauge bosons leads to a rich phenomenology for the DM detection experiments. It is convenient to diagonalize the gauge bosons in the basis that the SM particles are charged only under U(1) EM and the DM is charged under both U(1) EM and U(1) . The effective electromagnetic charge of DM is given by q eff = − qe cosθ W /e EM , where e EM is the gauge coupling of U(1) EM and θ W is the Weinberg angle. The direct detection experiments for DM put a stringent constraint on such a millicharged DM [39,40]. However, the constraint is not applicable to the DM with a relatively large charge because the DM loses its kinetic energy in the atmosphere [41]. The measurement of CMB temperature anisotropies also constrain the millicharged DM for a larger charge region [42,43]. In combination, there is an allowed range such as 1 10 −6 m η 10 3 GeV 3 × 10 −5 m η 10 3 GeV 1/2 Finally, we comment on the case in which the kinetic mixing is as small as 10 − (10 -11) . Such a small kinetic mixing can be realized if there is Pati-Salam symmetry for the SM sector at an intermediate scale and the VEV of 24 (⊂ 45 H ) is much smaller than the GUT scale, or c 10 −6 . In this case, the DM sector is completely decoupled from the SM sector even in the early Universe and the ratio of the temperatures in these sectors is determined solely by the branching ratio of the inflaton decay into these sectors. We note that the gauge-coupling-mass relation of DM, which is shown as the blue curve in Fig. 1, changes only of order ξ(T d ) unless the Sommerfeld enhancement effect is strongly efficient. The constraint by the direct detection experiment of DM for such a very small kinetic mixing is given by 10 −10 (m η /1 TeV) 1/2 for m η 100 GeV [40,47]. This constraint will be improved by LZ experiment for 1000 days by a factor of about 10 [48]. The DM has a self-interaction mediated by the gauge boson. The cross section is velocity dependent, which is supported by the observations of DM halos in galaxy and galaxy cluster scales. As the DM couples to the SM sector only via the small kinetic mixing, the gravitational search is one of the important DM searches in our model (see, e.g., Ref. [69]). 
It would be interesting to collect a larger number of samples in different length scales so that we can determine the velocity dependence on the selfinteraction cross section [20,70]. This may allow us to distinguish our model from the self-interacting DM model with a velocity-independent cross section, like the ones studied in Refs. [71][72][73][74][75][76]. It is also worth to investigate if the self-interacting DM with a massless vector mediator solves the small-scale issues for the cosmological structure formation [22,[77][78][79].
2019-08-01T04:54:50.000Z
2019-08-01T00:00:00.000
{ "year": 2019, "sha1": "37e55faff5e322cd6272e0170ffadc853fac4f7d", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.102.015012", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "37e55faff5e322cd6272e0170ffadc853fac4f7d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
238862705
pes2o/s2orc
v3-fos-license
Comparison of resting-state EEG between adults with Down syndrome and typically developing controls Background Down syndrome (DS) is the most common genetic cause of intellectual disability (ID) worldwide. Understanding electrophysiological characteristics associated with DS provides potential mechanistic insights into ID, helping inform biomarkers and targets for intervention. Currently, electrophysiological characteristics associated with DS remain unclear due to methodological differences between studies and inadequate controls for cognitive decline as a potential cofounder. Methods Eyes-closed resting-state EEG measures (specifically delta, theta, alpha, and beta absolute and relative powers, and alpha peak amplitude, frequency and frequency variance) in occipital and frontal regions were compared between adults with DS (with no diagnosis of dementia or evidence of cognitive decline) and typically developing (TD) matched controls (n = 25 per group). Results We report an overall ‘slower’ EEG spectrum, characterised by higher delta and theta power, and lower alpha and beta power, for both regions in people with DS. Alpha activity in particular showed strong group differences, including lower power, lower peak amplitude and greater peak frequency variance in people with DS. Conclusions Such EEG ‘slowing’ has previously been associated with cognitive decline in both DS and TD populations. These findings indicate the potential existence of a universal EEG signature of cognitive impairment, regardless of origin (neurodevelopmental or neurodegenerative), warranting further exploration. Supplementary Information The online version contains supplementary material available at 10.1186/s11689-021-09392-z. Introduction Down syndrome (DS) is caused by an extra copy of chromosome 21 and is the most common genetic cause of intellectual disability (ID) worldwide, affecting 1 in 800 births [1]. Due to a 'triple dose' of genes on this chromosome, almost all individuals with DS have an ID (clinically defined as an IQ less than 70 and impairments in everyday adaptive abilities), in addition to an ultrahigh risk of developing Alzheimer's disease (AD) with a lifetime prevalence of 90% [2]. Understanding brain function in people with DS is important for elucidating these characteristics, and electrophysiological measures in particular allow us to examine potential mechanisms related to function. Understanding electrophysiological characteristics associated with DS may therefore provide mechanistic insights into both cognitive ability and decline. Resting-state electroencephalography (EEG) paradigms provide a general measure of brain activity (i.e. activity that is not associated with any particular sensory modality). As resting-state paradigms are passive, they are inherently free from the need for participants to understand and retain task instructions (e.g. pressing a button in response to a target), thus reducing any confounding influences of individual differences in ID level and motor skills [3]. Such paradigms are therefore suitable for use with the majority of individuals with DS. Using EEG to identify individual and group differences in oscillatory brain activity has been of interest to researchers since EEG was invented in the 1920s. Power in both slow (delta and theta) and fast (beta and gamma) EEG bands has generally been reported as greater in participants with DS compared to typically developing (TD) controls [6][7][8][9][10][11][12][13][14]. 
This pattern is well established for slower frequencies, though inconsistencies in the literature exist for faster frequencies, with Babiloni et al. [12,13] reporting less power in beta and gamma bands in individuals with DS compared to TD controls. As alpha waves can be visually identified in EEG recordings, they have been of particular interest to DS researchers from the earliest EEG studies [15]. Alpha activity is commonly associated with both IQ and memory performance in the TD population [16][17][18], in addition to between people with DS [19]. Differences in alpha activity between people with DS and TD controls are commonly reported, though specific findings are inconsistent. Whilst the majority of studies have reported less alpha power in DS [8,11,12], others have found significantly more alpha power in DS [9] or no difference between DS and TD controls [10]. For alpha peak frequency (the frequency at which peak amplitude occurs within this band), many studies have found people with DS have a significantly slower peak frequency [6,7,11,14,20], though others reported no significant difference [10,13]. These inconsistencies in findings, for alpha power in particular, are in part likely due to methodological differences between EEG studies. Differences in study characteristics are common, including frequency band classification, power measure (i.e. relative or absolute), scalp regions examined, participant age and whether any participants show signs of cognitive decline. For example, Politoff et al. [10] reported no significant differences between people with DS and TD controls for relative power (where power is calculated relative to the total EEG power for each participant) but found differences for absolute power. There is therefore value in examining both type of power measure. Previous research also shows topographical differences are important to consider-differences in alpha activity between people with DS and TD controls appear most apparent in posterior regions [8,11] and may differ between occipital and parietal electrode derivations [6], whilst delta differences may be most apparent in frontal and centroanterior regions [8,11,12], theta differences in centroposterior regions [8,11], and beta differences in parietotemporal regions [8,11]. Given the previous discrepancies in the literature regarding differences in EEG measures between people with DS and TD controls, and the potential contribution of methodological variations to these discrepancies, it is important to conduct research accounting for these variations to understand differences in EEG activity between people with DS and TD controls. We therefore compared EEG activity between adults with DS and TD controls using commonly used frequency band classifications, and both absolute and relative power, in addition to including two scalp regions (occipital and frontal). To reduce a potential confounding effect of cognitive decline, we used a sample of adults with DS with no noticeable cognitive decline. Understanding these differences in activity between adults with DS and TD controls will not only provide mechanistic insights into cognitive ability but also help elucidate the significance of studies examining individual differences between people with DS. In turn, this may help inform biomarker and drug target research. 
Based on previous findings, it was hypothesised that individuals with DS would have less alpha power (8-13 Hz) but more power in delta (0.5-4 Hz), theta (4-8 Hz) and beta (13-30 Hz) bands compared to TD controls. Results were not expected to significantly differ between absolute and relative power measures, or between occipital and frontal electrode montages. Gamma activity was not investigated as it shares a similar frequency to muscle artefacts, which are common in electrophysiological recordings in people with DS due to lower compliance with the instruction to remain still. Participants Participants with DS were recruited from an existing pool of UK adults with DS who had participated in an initial cognitive assessment [3]. All participants had genetically confirmed trisomy 21 and were aged 16 and over. Participants with an acute physical or mental health condition were excluded, as were participants with a clinical diagnosis of dementia or the presence of cognitive decline associated with dementia. The presence of cognitive decline was determined by the Cambridge Examination of Mental Disorders of Older People with Down Syndrome and Others with Intellectual Disabilities (CAMDEX-DS [21]), which is considered a valid and reliable tool for assessing cognitive decline in adults with DS [21]. It is an informant-based questionnaire which enquires about decline (with respect to an individuals' best level of functioning) within the following domains: everyday skills, memory, orientation, general mental functioning, language, perception, praxis, executive functions, personality, behaviour and self-care. Any change in any one of these domains was scored as presence of decline. All participants were required to show no decline on this questionnaire to be included in the study. The resting-state EEG recordings from all participants with DS selected for this study has also previously been used in a separate investigation into differences between individuals with DS [19]. TD control group participants were selected from the Multimodal Resource for Studying Information Processing in the Developing Brain (MIPDB) [22]. The MIPDB is a large open source dataset provided by the Child Mind Institute. This dataset aims to advance the study of clinical cognitive neuroscience, and contains highdensity task-based and task-free raw EEG data collected from TD individuals aged 6-44 years. All participants were required to have sufficient EEG data (at least 24 s of artefact-free data) and for no measured EEG variables to fall > 3 SD from the group mean (indicative of outlier activity). All participants meeting inclusion criteria were considered for matching. In total, 25 individuals from each pool were chronologically agematched to within 1 year, and sex-matched at a subgroup level split by age (16-25 years, 26-35 years, 36 years and over). EEG acquisition and pre-processing procedure Data from both groups was acquired using 128-channel EEG Geodesic Hydrocel nets (Electrical Geodesics, Inc., Eugene, OR, USA) with an appropriate size selected by measuring head circumference. In both datasets, electrode impedances were maintained below 50 kΩ during recording; the EEG signal was referenced to the vertex, and was recorded with a bandpass filter of 0.1 to 100 Hz. An amplifier gain of 10,000 was used for both datasets. Data from adults with DS was sampled at a rate of 250 Hz, whilst TD control data was sampled at a rate of 500 Hz. 
During the resting-state task, participants of both groups repeated multiple eyes closed (EC) recording blocks. However, the first 11 participants with DS had one continuous 5.5 min EC block, which was then changed to multiple shorter blocks due to poor compliance and drowsiness (see Hamburg et al. [19] for details). All preprocessing was performed using EEGLAB [23] for MATLAB (MathWorks, Natick, MA) and was identical for both groups (see Hamburg et al. [19]). Briefly, the continuous EEG signal was digitally filtered using a lowpass filter of 30 Hz. All data from six channels situated around the ears were removed due to poor fit (as a result of morphological differences in people with DS). Movement and blink artefacts were removed manually based on visual inspection. Bad channels were also identified based on visual inspection and were replaced using spherical spline interpolation. Remaining channels were re-referenced to the average electrode excluding VEOG and HEOG channels. EEG analysis Analysis was carried out using MATLAB (MathWorks, Natick, MA). For each individual, absolute and relative power measures for each frequency band of interest were obtained (delta 0.5-4 Hz; theta 4-8 Hz; alpha 8-13 Hz; beta 13-30 Hz) for each region (frontal and occipital; see Fig. 1 for electrode montages). Additionally, alpha peak features were calculated; this was defined as the frequency (Hz) of the peak amplitude within the 8-13 Hz range. Specifically, absolute power measures were obtained by convolving the raw signal from artefact free, nonoverlapping 2 s epochs for each channel with a five cycle Morlet wavelet. Power spectra were then averaged across all 2 s epochs, yielding a single average power spectrum for every electrode for each individual. Relative power measures were obtained for every electrode for each individual by dividing absolute power values by the total absolute power across the 0.1-30 Hz frequency range. Some participants with DS did not have a measurable alpha peak. Standard methods would assign peak frequency to the lower boundary (i.e. 8 Hz) for these individuals, as brain signals show a decrease in power with increasing frequency. Alpha peak features were therefore obtained for all individuals by removing the linear trend from individual power spectra to achieve 'spectral normalisation' [24]. This method allowed an accurate representation of these values to be obtained for all individuals, including those whose peak characteristics were initially lost within the natural EEG background. Statistics and visualisation Customised MATLAB (MathWorks, Natick, MA) scripts were used to produce power-frequency spectrum plots. All statistical analyses were performed with SPSS. In order to determine whether there was any effect of EC paradigm on EEG variables within individuals with DS (i.e. one segment of 5.5 minutes compared to 11 segments of 30 seconds), independent sample t tests were used to compare absolute and relative power values between DS participants completing a full-block (n = 11) and those completing a split-block (n = 14) paradigm. As one t test was significant (higher relative frontal theta power was found for the fullblock paradigm (M = 0.26 (0.02 SD)) compared to the splitblock paradigm (M = 0.24 (0.01 SD)), (t (14.96) = 2.35, p = .033 (95% CI < 0.01, 0.02)), EC paradigm was added as a covariate for all comparisons when activity was compared between groups. All TD control participants were assigned to the splitblock protocol. 
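Before turning to the group statistics, the spectral measures described above can be made concrete with a short sketch. This is a minimal illustration rather than the authors' pipeline: it substitutes a Welch periodogram for the five-cycle Morlet-wavelet convolution used in the paper, and the sampling rate, detrending range and synthetic channel data are placeholder assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # Hz; sampling rate of the DS recordings (placeholder; TD data would be resampled)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs=FS):
    """Absolute and relative power per band from one channel of eyes-closed EEG."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)            # 2 s segments
    broad = (freqs >= 0.1) & (freqs <= 30)
    total = np.trapz(psd[broad], freqs[broad])                   # total 0.1-30 Hz power
    absolute, relative = {}, {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        absolute[name] = np.trapz(psd[mask], freqs[mask])
        relative[name] = absolute[name] / total
    return freqs, psd, absolute, relative

def alpha_peak(freqs, psd, lo=8.0, hi=13.0):
    """Alpha peak after removing a linear trend from the spectrum ('spectral
    normalisation'), so small peaks are not swamped by the 1/f-like background.
    The fitting range and scale are illustrative choices, not the published ones."""
    broad = (freqs >= 0.1) & (freqs <= 30)
    slope, intercept = np.polyfit(freqs[broad], psd[broad], 1)
    detrended = psd - (slope * freqs + intercept)
    mask = (freqs >= lo) & (freqs <= hi)
    i = np.argmax(detrended[mask])
    return freqs[mask][i], detrended[mask][i]   # peak frequency (Hz) and height

# Synthetic stand-in for 24 s of artefact-free occipital signal
rng = np.random.default_rng(0)
t = np.arange(0, 24, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)      # 10 Hz rhythm plus noise
freqs, psd, abs_p, rel_p = band_powers(eeg)
print(rel_p["alpha"], alpha_peak(freqs, psd))
```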
ANCOVAs were used to statistically compare differences between groups. This was performed for each EEG variable at each region (occipital and frontal), using both absolute and relative power values and alpha peak amplitude, and alpha peak frequency values. Where the covariate (EC paradigm) was significant, this was left in the model, and where this was not significant, this was removed from the model. Partial eta squared values for each variable were used to provide an indication of effect size. Preliminary analysis Final analyses were carried out on 25 individuals from each group. Table 1 shows the demographics of all participants included in the final analysis. According to carer report of participants with DS, level of ID was mild (n = 13), moderate (n = 10), and severe (n = 2). Table 2 shows absolute and relative values for each EEG measure by region within each group, in addition to statistical analysis of EEG variables by region. Standard deviations in the DS group appeared to be higher than those in the control group, particularly for peak frequency, indicative of more variability. Figure 2 (DS group) and Fig. 3 (TD control group) further illustrate increased variability within the DS group, apparent from individual power spectra. As a consequence, further analysis was undertaken to compare alpha peak frequency variance between groups, with highly significant between group effects found for both occipital (F (24, 24) = 59.98, p < .001) and frontal (F (24, 24) = 29.15, p < .001) regions. EEG measures The overall group differences in the power-frequency spectra between DS and TD control participants are illustrated by Fig. 4. Statistical analysis of EEG variables from the occipital region revealed significantly higher absolute and relative delta power, and relative theta power, and significantly lower absolute and relative power in alpha and beta bands, for those with DS compared to TD controls (see Fig. 4 and Table 2). Those with DS also showed a significantly lower alpha peak amplitude. The effect sizes were greatest for relative alpha power, with group accounting for 56.5% of variance. Results for the frontal region followed the same pattern (see Tables 2 and 3). All group differences (including both absolute and relative values) were statistically significant, apart from absolute alpha and beta power and alpha peak frequency. Overall, in this region, absolute and relative delta and theta power values were significantly higher in individuals with DS, whereas relative alpha and beta power values, and both absolute and relative alpha peak amplitude, were all significantly lower. The effect sizes were greatest for absolute alpha peak amplitude and relative alpha power, with group accounting for 28.8% and 20.9% of variance in these variables respectively. It is worth noting that paradigm had a significant effect on occipital theta power, with both absolute and relative theta values higher for the full-block compared to the split-block paradigm for participants with DS (absolute values 5.57 log μV 2 (0.63 SD) full-block and 5.08 log μV 2 (0.71 SD) split-block; relative values 0.26 log μV 2 (0.02 SD) full-block and 0.25 log μV 2 (0.01 SD) split-block). Paradigm also had a significant effect on relative theta power in the frontal region, with higher values for the full-block compared to the split-block paradigm (0.26 log μV (0.02 SD) full-block; 0.24 log μV 2 (0.01 SD) split-block). 
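For readers who want to reproduce the style of analysis, the sketch below shows one way the group comparison with EC paradigm as a covariate, together with a partial eta squared effect size, could be coded. The data-frame columns are hypothetical placeholders; this is not the authors' SPSS procedure.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def compare_groups(df: pd.DataFrame, dv: str = "rel_alpha_occ"):
    """ANCOVA-style comparison of one EEG variable between DS and TD groups,
    with EC paradigm (full-block vs split-block) entered as a covariate.
    Expected columns: dv (numeric), 'group' ('DS'/'TD'), 'paradigm' ('full'/'split')."""
    model = smf.ols(f"{dv} ~ C(group) + C(paradigm)", data=df).fit()
    table = sm.stats.anova_lm(model, typ=2)
    # Partial eta squared for the group effect: SS_group / (SS_group + SS_residual)
    ss_group = table.loc["C(group)", "sum_sq"]
    ss_resid = table.loc["Residual", "sum_sq"]
    return table, ss_group / (ss_group + ss_resid)
```

Dropping the covariate when it is not significant, as done in the paper, simply amounts to refitting the model with the paradigm term removed.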
Additionally, there was a significant relationship between alpha peak frequency and paradigm in the occipital region-those with DS completing the full-block paradigm had a faster peak (10.76 Hz (1.27 SD)) compared with those completing the splitblock paradigm (9.98 Hz (0.76 SD)). It is noteworthy that participants with DS completing the full-block also had higher standard deviation of occipital peak frequency (1.27 vs. 0.76), indicating more variability in this particular measure. Discussion This study aimed to characterise EEG differences between adults with genetically confirmed trisomy 21 with no evidence of dementia, and matched TD controls. We show an overall 'slower' EEG spectrum in both occipital and frontal regions (higher 0.5-8 Hz power and lower 8-30 Hz power) in people with DS. Alpha band activity in particular shows strong group differences, as shown by the greatest effect sizes for group. We illustrate the value of using high-density EEG recordings to examine topographical differences and of utilising relative power measures in this population. Interestingly, studies have linked 'slower' EEG spectra with cognitive impairment within the TD populationwith increased delta and decreased alpha associated with poor memory performance [25], and increased delta and theta, and decreased alpha, associated with mild cognitive impairment (MCI) [26,27]. Such differences may also become more pronounced with progression from MCI to AD in the TD population [28,29]. It is therefore possible that cognitive impairment has similar EEG signatures whether due to ID or neurodegenerative disease, characterised by a 'slower' spectra with more activity at lower frequencies. Furthermore, additional 'slowing' of the EEG spectra has also been linked to dementia in people with DS [30]. According to effect sizes, EEG characteristics most strongly associated with group were those related to alpha activity (occipital and frontal relative alpha power, and frontal peak amplitude). As discussed previously, differences in alpha band activity between individuals with DS and TD controls are commonly reported. Alpha peak frequency was not significantly associated with group in either region, which is in line with two previous studies [10,13], though other studies have reported a slower peak frequency in individuals with DS [6,7,11,14,20]. The difference in peak frequency variability (as measured using the SD) in individuals with DS compared to TD controls is large (1.07 Hz SD in DS; 0.14 Hz SD in TD controls), which may have impacted statistical power. Additionally, the highly significant group difference in alpha peak frequency variance suggests that peak frequency may be unstable in people with DS. This may be of particular importance as alpha peak frequency has been posited to act as an anchor around which the EEG spectrum is organised [17], and in turn the organisation of EEG activity represents the means by which neuronal networks dynamically communicate and interact [4]. As all individuals with DS are expected to have AD neuropathology (amyloid plaques and tau tangles) by age 35 [31], the extent to which the group differences reported here are related to ID associated with the presence of an extra chromosome 21 and/or subclinical AD-neuropathology, remains unclear. 
Potential ID-related mechanisms include delayed and reduced brain maturation (with maturation in TD children associated with age-related reductions in delta and theta power, and age-related increases in alpha and beta power [32]) and over-inhibition [33][34][35]. Studies involving alternative populations of individuals with ID (e.g. fragile X) would help elucidate whether these findings are unique to individuals with DS or are associated with ID in general. As the mean age of participants in this study is 27 years, which is prior to when significant amyloid-burden is expected in adults with DS [31], neuropathological mechanisms are unlikely. However, amyloid deposition can occur from childhood in DS [36] and therefore results may be confounded by this. Studies combining EEG with amyloid imaging (e.g. PET) could help explore this further. Future studies would also benefit from following individuals longitudinally, or examining different age groups cross-sectionally (including childhood and old age), to fully elucidate maturational and ageing influences. For regional differences, stronger effect sizes were found for higher delta activity in adults with DS in frontal compared to occipital regions, which is in keeping with previous literature [8,11,12]. Frontally, differences in absolute values of alpha and beta power (although lower in DS) were not significant, yet reached significance occipitally, with greater effect sizes in occipital regions. Previous studies have also reported that differences in alpha and beta between people with DS and TD controls may be most apparent in posterior regions [8,11]. It is of note that participants with DS had larger SD values in frontal regions compared to occipital regions, which may have impacted statistical power here. Effect sizes were generally larger for relative power values (possibly due to normalisation of relative values reducing variability and consequently increasing statistical power). Due to the high degree of variability in EEG measures of participants with DS, utilising relative values may be particularly beneficial in this population when comparing to TD subjects. Relative power helps account for differences in broadband power across participants, therefore, helping to control for inter-individual variability in brain anatomy, which is particularly apparent in individuals with DS [37]. This anatomical variability may contribute to the higher EEG measure SD values found in this group. It is worth noting that absolute power values are likely to be of value when investigating individual differences between people with DS, as this variability is of interest. The use of open source neuroimaging datasets is increasingly common and allows small exploratory studies of clinical population access to a large control group to obtain closely matched control subjects. Such datasets offer numerous benefits to researchers including increased efficiency, transparency and reproducibility [38]. Although variation in recording paradigm within the group with DS is a potential study limitation, effects were controlled for through the inclusion of this as a covariate. Furthermore, it appears likely that splitting the recording reduced participant drowsiness as intended (research suggests theta power is increased with light drowsiness [39] and theta power was higher for the fullblock paradigm). Serendipitously, this provides useful information pertaining to the most appropriate design for resting-state studies in people with DS. 
This study benefitted from only including individuals with genetically confirmed trisomy 21 and the exclusion of individuals with cognitive decline (as assessed using the CAMDEX-DS) or a diagnosis of dementia. This is important as cognitive decline in DS has been associated with changes in EEG activity [30]. This ensured results were not influenced by any individuals with a rarer form of DS (for example mosaicism), and results are valid for individuals with DS prior to dementia onset. These variables are not commonly controlled for within DS studies, despite them substantially impacting the validity of findings. An additional strength of this study is that peak frequency measures were obtained by removing the individual linear trend from the EEG spectrum to achieve 'spectral normalisation'. This method has not been utilised in DS studies previously but is particularly useful in this population due to many individuals having a small peak that is not measurable beyond the natural background EEG noise. A key limitation of this study is that there was no correction for multiple comparisons, due to the exploratory nature of this investigation; future studies should therefore prioritise replication of these findings. Future studies may also benefit from investigating differences in eyes-open EEG, and the examination of gamma activity in this population where possible. There is also an indication that the parietal region may be an area of particularly strong group differences (see Supplementary information), which future research may benefit from examining. Differences in inter-regional phase coupling between DS and TD groups also remain an important avenue for future investigation; however, there is a risk that spurious increases in phase coupling may arise due to significant differences in oscillatory power between groups [40]. Such studies will therefore need to control for this. Interestingly, we report here significantly lower alpha peak amplitude in individuals with DS. In line with this, previous research indicates that within individuals with DS, higher peak amplitude is associated with greater cognitive ability [19]. However, we also find here that theta power is significantly higher in individuals with DS compared to TD controls, despite previous research suggesting greater theta power may be associated with greater cognitive ability in individuals with DS [19]. It may therefore be the case that some EEG measures associated with higher cognitive ability in people with DS are closer to those of the TD population, whilst others shift in the opposite direction (potentially suggestive of compensatory mechanisms). Importantly, therefore, higher ability individuals with DS may not necessarily have EEG activity closer to TD EEG spectra. The implication of this is that interventions aiming to enhance cognitive ability in this population by seeking to 'normalise' EEG spectra could in fact negatively impact cognition. Instead, targeting EEG measures associated with individual differences in cognitive ability, rather than measures that differ between individuals with DS and TD controls, may be of benefit. Conclusions We report an overall 'slower' EEG spectrum, characterised by higher delta and theta power, and lower alpha and beta power, for frontal and occipital regions in people with DS.
Alpha activity in particular shows strong group differences, including lower power, lower peak amplitude and greater peak frequency variance in people with DS. Such 'slowing' of the EEG spectrum has previously been associated with cognitive decline in both DS and TD populations. These findings indicate the potential existence of a universal EEG signature of cognitive impairment, regardless of origin (neurodevelopmental or neurodegenerative), warranting further exploration.
Probing Majorana zero modes by measuring transport through an interacting magnetic impurity Motivated by recent experiments we consider transport across an interacting magnetic impurity coupled to the Majorana zero mode (MZM) observed at the boundary of a topological superconductor (SC). In the presence of a finite tunneling amplitude we observe hybridization of the MZM with the quantum dot, which is manifested by a half-integer zero-bias conductance $G_0=e^2/2h$ measured on the metallic contacts. The low-energy feature in the conductance drops abruptly upon crossing the transition line from the topological to the non-topological superconducting regime. Differently from the in-gap Yu-Shiba-Rusinov-like bound states, which are strongly affected by the on-site impurity Coulomb repulsion, we show that the MZM signature in the conductance is robust and persists even at large values of the interaction. Interestingly, the topological regime is characterized by a vanishing Fano factor, $F=0$, induced by the MZM. Combined measurements of the conductance and the shot noise in the experimental set-up presented in the manuscript allow one to detect the topological properties of the superconducting wire and to distinguish the low-energy contribution of a MZM from other possible sources of zero-bias anomaly. Despite being interacting, the model is exactly solvable, which allows an exact characterization of the charge transport properties of the junction. In this letter we fully characterize the electronic transport through a novel class of experimentally realizable systems [6,8] which have recently attracted great interest for their ease of realization and control. The MZM, emerging at the endpoint of a one-dimensional semi-infinite wire with strong spin-orbit interaction (e.g. an InAs wire) deposited on top of an s-wave SC and exposed to an external magnetic field, is coupled to an interacting magnetic impurity that can be used as a spectrometer. By coupling the dot to two metallic fully-polarized contacts we can probe the properties of the MZM through measurement of the current and the shot noise across the junction. Model Hamiltonian. To model the junction displayed in Fig. 1 we consider the Hamiltonian of Eq. (1).
Figure 1. Sketch of a quantum dot coupled to two fully-polarized metallic leads and a semi-infinite topological p-wave SC hosting a MZM at its edge.
The dot Hamiltonian, Eq. (2), is written in terms of $n_{d\sigma} = d^\dagger_\sigma d_\sigma$, the number operator on the impurity site, and $\Omega_d = (2n_{d\uparrow} - 1)(2n_{d\downarrow} - 1)$. In (2) U denotes the on-site interaction, µ the gate potential and h the Zeeman field applied on the dot level. The Hamiltonian of the semi-infinite Kitaev chain, Eq. (3), is parametrized by t, the hopping amplitude between nearest-neighbor sites, ∆, the p-wave superconducting pairing, and µ, the chemical potential of the wire. We notice that the left (L) and right (R) metallic contacts are described by Hamiltonian (3) with ∆ = 0 and different electrochemical potentials µ_L = −µ_R = φ/2. In our model both the Kitaev and the metallic chains are described by spinless particles. This is a natural assumption if one considers that topological SCs are realized in one-dimensional p-wave SCs characterized by strong spin-orbit coupling and large magnetic fields, and if we assume fully-polarized ferromagnetic contacts. In this regime the magnetic exchange between the impurity spin and the leads is suppressed and the low-energy physics is dominated by the coupling with the MZM [41][42][43][44][45].
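As a concrete illustration of the topological wire entering the model, the sketch below diagonalises the Bogoliubov-de Gennes form of a finite open Kitaev chain, written in the standard textbook convention (which need not coincide sign-for-sign with Eq. (3)), and checks that a near-zero-energy Majorana mode is present only for |µ| < 2t, the phase boundary quoted later in the text.

```python
import numpy as np

def kitaev_bdg(n, t, delta, mu):
    """BdG matrix of an open Kitaev chain,
    H = sum_j [ -mu c^dag_j c_j - t (c^dag_j c_{j+1} + h.c.) + delta (c_j c_{j+1} + h.c.) ]."""
    h = -mu * np.eye(n)
    d = np.zeros((n, n))
    for j in range(n - 1):
        h[j, j + 1] = h[j + 1, j] = -t
        d[j + 1, j], d[j, j + 1] = delta, -delta   # antisymmetric pairing block
    return np.block([[h, d], [-d, -h]])

def lowest_excitation(mu, n=80, t=1.0, delta=0.5):
    return np.min(np.abs(np.linalg.eigvalsh(kitaev_bdg(n, t, delta, mu))))

for mu in (1.0, 3.0):   # |mu| < 2t is topological, |mu| > 2t is trivial
    print(f"mu/t = {mu:.1f}  ->  lowest |E|/t = {lowest_excitation(mu):.2e}")
# The first case returns an exponentially small energy (the Majorana mode of a finite
# chain); the second is gapped, with no in-gap state at the chain boundary.
```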
The tunneling between the dot and the metallic contacts is given in Eq. (4), where $V_c$ is the tunneling amplitude and α = L, R. Finally, we consider the hybridization with the boundary site of the semi-infinite Kitaev chain, Eq. (5), where the sum extends over the semi-infinite Kitaev chain and we have introduced the Majorana operators γ = c + c† and ξ = −i(c − c†). The simple model in Eq. (5) allows us to study exactly the effect of correlations on the non-local Majorana edge state tunnel-coupled to an interacting quantum dot. The interacting model is exactly solvable because the d↓ electrons are localized and $n_{d\downarrow}$ can be treated as a $Z_2$ number (= 0, 1). This property makes the Hamiltonian (1) an effective quadratic model, where, similarly to the Falicov-Kimball model (FKM) [46], the ↓ configuration is obtained by minimizing the ground-state energy of the ↑ degrees of freedom. In the absence of metallic contacts, $V_c = 0$, the equilibrium properties of the model in Eq. (1) have already been studied in Ref. [47]. It is convenient to perform the gauge transformation of Eq. (6): in terms of the $\gamma_{\eta\uparrow}$ and $\xi_{\eta\uparrow}$ fermions the Hamiltonian (1) takes a quadratic form [48]. To avoid irrelevant complications we consider the case µ = h = 0. Introducing the Dirac (complex) fermion $\eta_\uparrow = \gamma_{\eta\uparrow} + i\xi_{\eta\uparrow}$, the model Hamiltonian can be written in the Nambu representation $\psi = (\psi, \psi^\dagger)^T$, where $\hat V_j$ is the hybridization matrix between the dot and the j-th site of the Kitaev chain and $\hat V_c$ couples the metallic contacts to the dot.
Figure 2. Scattering matrix coefficients for the system in Fig. 1 in the trivial regime (left panel) and topological regime (right panel) at finite interaction U/t = 1.6. Blue: normal reflection; red: normal transmission; orange: Andreev reflection and crossed Andreev reflection. Vertical black lines show the "bulk" superconducting gap ∆_gap.
To characterize the transport properties of the junction we compute the charge current, defined through the commutator of the lead particle number with H, which takes a simple form in the new representation (6) with sign(L) = +1 and sign(R) = −1, together with the zero-frequency limit of the $J^Q$ fluctuations, where $\delta J^Q = J^Q - \langle J^Q\rangle$. In the following we study transport through the junction by performing calculations with the Keldysh Green's function technique [49,50], which we compare with the scattering matrix approach [51][52][53]. In this letter we present a detailed characterization of the low-energy signatures observed in the charge conductance and shot noise measurements, which allows us to classify different regions of the Kitaev chain phase diagram. Despite being done on a toy model, the analysis may give physical insight into ongoing experiments where the effect of the local on-site interaction cannot be neglected. We start from the expression for the current flowing through the metallic contacts, in which Tr is the trace in the 2 × 2 Nambu space, $\hat T^{R/A}(\epsilon)$ is the impurity transfer matrix and $\hat\rho(\epsilon)$ is the boundary density of states of the semi-infinite normal contacts (we refer to the supplemental material for more details [59]). The resulting value of the current is obtained by averaging over the spin-↓ configurations, where in the absence of any gate potential or Zeeman field on the quantum dot p(0) = p(1) = 1/2. In the topological regime, the low-energy physics is governed by the in-gap states that emerge from the hybridization between the real and imaginary parts of the spin-up dot fermion and the MZM of the Kitaev chain.
The coupling between $\gamma_{d\uparrow}$ and $\gamma_1$ induces an energy splitting ∼ V, while the quantum dot interaction generates an energy splitting ∼ U between $\gamma_{d\uparrow}$ and $\xi_{d\uparrow}$. The combined effect of the dot-Kitaev chain coupling and the interaction, on an odd number of MZMs, is to split two of them by a term ∼ f(U, V) that eventually, for U strong enough, washes them out of the superconducting gap. The third one, by contrast, is a topologically protected zero-energy mode, robust against the interaction. In the trivial regime we have an even number of MZMs, so no zero-energy mode is preserved, as any finite interaction induces a hybridization ∼ U between them. These features can be easily detected by resorting to the scattering matrix approach of Refs. [52,53], which allows us to interpret the transport properties of the system in terms of the scattering processes across the junction (a detailed description is given in the supplementary material [59]). In the trivial regime (left panel of Fig. 2), the presence of massive in-gap modes suppresses low-energy scattering processes, so that the L and R contacts are disconnected in the large $U/V_c$ limit. On the contrary, in the topological regime (right panel of Fig. 2), the presence of the MZM keeps all the scattering processes alive at low energy. The normal transmission (T), the Andreev reflection (A) and the crossed Andreev reflection (C) are equal to one fourth at any value of U and V. As a consequence, the charge current, $J^Q$, which measures the charge imbalance between the left and right leads, is ∝ A + T ∼ 1/2 and the zero-bias conductance is reduced from e²/h to e²/2h, as already observed in previous studies [43][60][61][62][63]. Interestingly, the on-site local repulsion does not modify the result e²/2h, while it affects the curvature of the low-bias conductance by renormalizing the MZM, as expressed by Eq. (16), where $\Gamma_c = 2\pi\rho(0)V_c^2$ is the hybridization with the metallic contacts, ρ(ω) the boundary metallic density of states and Z the quasiparticle renormalization factor. The latter quantity is shown in the color map of Fig. 3, where we analyze the evolution of Z in different regions of the phase diagram of the Kitaev chain. We stress that Eq. (16) is valid in the topological regime |µ| < 2t, where the SC possesses a non-trivial topology and a MZM appears at the edge of the semi-infinite Kitaev chain. By contrast, in the region |µ| > 2t, the MZM disappears and we enter the Coulomb blockade regime, where the zero-bias conductance is suppressed. The topological transition is associated with a drastic variation of the conductance G(φ). Indeed, as shown in Fig. 4, by crossing the critical line, µ = 2t, we observe a jump from $G_0 = e^2/2h$ in the topological region to $G_0 \simeq 0$ in the trivial one, which allows one to distinguish the two different phases. Moreover, we notice that in the non-topological region, for µ/t ≈ 2.5, the conductance presents coherent in-gap peaks attributable to Andreev bound states induced by the impurity, reminiscent of Yu-Shiba-Rusinov states [64][65][66]. The effect of the interaction on G(φ) is shown in Fig. 5, where we report the evolution of the low-energy MZM and of the Yu-Shiba-Rusinov-like bound states as a function of U/t. Being non-topological, the latter features are strongly affected by the interaction, and indeed, as shown in Fig. 5, above a certain value of U/t they enter the continuum of Cooper-pair excitations of the SC. On the other hand, the contribution of the MZM to the zero-bias conductance $G_0$ is robust and persists for any value of U/t.
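The parity argument used above, that an odd number of tunnel-coupled Majorana operators always leaves one exact zero mode while an even number is generically fully gapped, can be checked directly, since any quadratic Majorana Hamiltonian is encoded in a real antisymmetric coupling matrix. The numerical values of V and U below are arbitrary illustrative choices, not parameters from the paper.

```python
import numpy as np

def majorana_levels(a):
    """Level spectrum (up to an overall prefactor) of H ∝ i γ^T A γ
    for a real antisymmetric coupling matrix A."""
    assert np.allclose(a, -a.T)
    return np.sort(np.abs(np.linalg.eigvalsh(1j * a)))

V, U = 0.3, 1.6   # dot-chain coupling and interaction-induced splitting (illustrative)

# Topological regime: three Majoranas gamma_1 (chain edge), gamma_d and xi_d (dot),
# coupled as gamma_1 -- gamma_d (strength V) and gamma_d -- xi_d (strength U).
a_odd = np.array([[0.0,   V, 0.0],
                  [ -V, 0.0,   U],
                  [0.0,  -U, 0.0]])

# Trivial regime: only the two dot Majoranas remain, hybridised by the interaction U.
a_even = np.array([[0.0,  U],
                   [ -U, 0.0]])

print(majorana_levels(a_odd))    # one exact zero plus a pair split by sqrt(U**2 + V**2)
print(majorana_levels(a_even))   # no zero level: both modes are pushed to |U|
```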
The interaction renormalizes the coupling (5), V → V√Z, between the dot and the MZM by the quasiparticle renormalization factor Z, displayed in Fig. 3, and enhances the curvature of the conductance close to the zero-bias anomaly (16). In order to have a complete characterization of the junction we compute the shot noise $S^Q$ at zero temperature; for more details we refer the interested reader to the supplementary material [59]. Analogously to the previous case, we perform the average over the ↓ configurations. A complete characterization of the low-energy transport properties is given in Fig. 6, where we plot the current $J^Q$, the corresponding charge conductance G(φ) and its fluctuations $S^Q(\phi)$ as a function of the applied bias. We notice that, depending on the region of the Kitaev phase diagram of Fig. 3, we predict a different low-energy response. Regions (II) and (III) present additional in-gap bound states, distinguished by sharp peaks in G(φ) away from the zero-bias anomaly. We observe that an additional signature of the MZM is given by the low-bias behavior of the shot noise $S^Q(\phi)$, which is shown in the bottom panel of Fig. 6. Indeed, in the topological regime, for small bias, the shot noise follows the form of Eq. (19), while it becomes linear in the non-topological region, $S^Q \propto \phi$ for |µ| > 2t. The evaluation of the shot noise allows us to compute the Fano factor of Eq. (20), which determines the charge of the elementary carriers [67]. In Eq. (20) we have introduced the backscattering current, defined as the deviation from unitary transmission through the junction [68]. As a consequence of the small-bias behavior of Eqs. (16) and (19), the topological regime |µ| < 2t is characterized by a vanishing Fano factor F = 0, independently of the value of the interaction U/t, as shown in the bottom panel of Fig. 7. On the other hand, in the non-topological region F is a function of U/t which becomes equal to 1 in the non-interacting limit U/t → 0; in particular, for |µ| > 2t, F takes an explicit U/t-dependent form. Therefore, experimental measurements of the shot noise give additional information complementary to that attainable by studying the characteristic zero-bias conductance e²/2h. Combined measurements of the conductance and the shot noise in the experimental set-up presented in Fig. 1 allow one to detect the topological properties of the superconducting wire and to distinguish the low-energy contribution of a MZM from other possible sources of zero-bias anomaly. We argue that the predicted behavior of the conductance and the shot noise persists even for a more realistic model Hamiltonian that features a non-vanishing tunnel-coupling with the spin-↓ fermionic operator in the quantum dot [43][44][45][69]. However, a detailed analysis of this problem is left to future investigations. Conclusions. The present results show that transport measurements give a detailed characterization of the topological phase diagram of real materials and reveal MZMs in nano-wires. The presence of a MZM is signalled by a fractional zero-bias conductance e²/2h that, as we have shown, is robust against the dot interaction. Additionally, for small values of the on-site repulsion, we find in-gap bound states that represent the only low-energy feature in the topologically trivial region of the phase diagram in Fig. 3. Furthermore, we find that the topological regime is characterized by a vanishing Fano factor induced by the tunnel-coupling with the MZM at the edge of the superconducting wire.
Our analysis gives a complete characterization of charge transport measurements that can experimentally detect the presence of MZM on the edge of real materials and, indirectly, allows to reconstruct their topological phase diagram. Acknowledgements. This work has been supported by the European Union under H2020 Framework Programs, ERC Advanced Grant No. 692670 "FIRSTORM". We are grateful to Michele Fabrizio, Domenico Giuliano and Roberto Raimondi for discussions and comments on the manuscript. We thank Shankar Ganesh and Joseph Maciejko for useful correspondence at the early stage of this work. In this supplemental material we derive the main equations and results presented in the manuscript. In the first part of the appendix I we present the evaluation of the transfer matrixT η ↑ , which allows to compute the charge current and the shot noise. In the second part II of the supplemental material, instead, we compute the wave functions of a magnetic impurity coupled to two Kitaev chains, that host a zero energy Majorana. The latter quantity allow to characterize the different scattering processes occurring at the junction and contributing to the charge current between the metallic contacts. I. CHARGE TRANSPORT WITHIN NAMBU-KELDYSH FORMALISM The model Hamiltonian we use to describe a magnetic impurity coupled to two metallic contacts and a p-wave superconductor reads: where in the Nambu representation ψ = (ψ, ψ † ) T ,V is the hybridization matrix between the dot and the 1st site of the Kitaev chain:V = iV 1 1 1 1 andV c couples the metallic contacts to the dot . In Hamiltonian (1) H c and H K are semi-infinite 1−D chains describing the metallic contacts and the Kitaev chain. We remind that the number of ↓ dot fermion is conserved, [d † ↓ d ↓ , H] = 0 and q d ↓ = (1−2d † ↓ d ↓ ) is an effective Z 2 variable q d ↓ = (−1, 1). As already observed in the manuscript this property makes the model exactly solvable, similarly to the Falicov-Kimball model (FKM) [1] the ↓ configuration is obtained by minimizing the ground-state energy of the ↑ degrees of freedom. Within the Nambu formalism we define the Keldysh Green's functions of a fermionic operator Green's functions of the junction x, x infinite chain lead αĜ xx αα x, x semi-infinite chain lead αĜ xx αα x, x semi-infinite chain hybridized with the impurity site leads α, βĜ xx αβ bare η ↑Ĝη ↑ dressed η ↑Ĝη ↑ mixing between the impurity and the leads η ↑ − 1α and 1α − η ↑Ĝ1αη ↑ ,Ĝη ↑ 1α arXiv:1907.06444v2 [cond-mat.mes-hall] 10 Mar 2022 where T C is the contour ordering and s, s = ±, where − and + are the forward and backward branches of the Keldysh contour, respectively. Notice well, in the following we refer to the lesser Green's function as G < (t, t ) = G(t − , t + ) and the greater one as G > (t, t ) = G(t + , t − ). The hybridization between the dot and the leads introduces several boundary Green's functions that are summarized in the table I. We remind that Green's functions are 4 × 4 matrices in the Nambu-Keldysh space. By performing perturbation theory in the tunnel-coupling between the leads and the impurity we obtain the following Dyson's equationĜ where • is the convolution withĜ 11αα boundary Green's function of the metallic lead α andĜ 11 boundary Green's function of the superconductive chain, we refer to section I C for more details. After straightforward calculations we find that: where a refers to one of the three leads connected to the impurity. 
Finally, the hybridization with the dot induces a direct coupling between different leads:Ĝ where the indices a, b refer to the metallic leads as well as the Kitaev chain and we have introduced the transfer matrix: In particular transport across the metallic contacts involvesT αβ η ↑ with α, β = L, R. From now on we consider symmetric metallic leads, i.e.V L c =V R c =V c , such thatT αβ η ↑ =T η ↑ does not depend on α and β. A. Charge current The current operator for the metallic lead α = L, R is: After straightforward calculations: The charge current, J Q = (J L − J R )/2, across the junction reads where sign(L) = +1 and sign(R) = −1. We notice that L and R leads are characterized by the same hybridization matrixV c as well as the same spectral properties. Therefore, the first contribution to the current in Eq. (13) vanishes and: whereĜ and ρ(ω) is the boundary spectral function of the metallic leads (49). By rescaling for −2πe 2 /h we obtain Moreover, we notice thatT and 54) and (46). The value of the current is obtained by averaging Eq. (16) over the spin ↓ configurations: where in the absence of any gate potential or Zeeman field on the quantum dot p(0) = p(1) = 1/2. B. Shot noise The correlation function between currents J α and J β reads: where T C is the time-ordering operator on the Keldysh contour, δJ α = J α − J α . In the following we evaluate the T C ordered S αβ , where usual perturbation theory can be applied, and then we take the lesser and greater components to compute S Q (t, t ). After straightforward calculations we arrive at the following expression: We are interested in average values of the form where: and In the steady-state regime we define the zero frequency limit of the current-current response spectrum as: where and We notice that and By using Eqs. (5), (8) and (9) we obtain: (30) By using Eqs. (47) and (48) we obtain Finally, the expression of the white-noise component of J Q fluctuations reads: Since we are interested in quantum-fluctuations we take the zero-temperature limit: where 2πe 3 /h is the rescaling factor, ρ(ω) is the boundary DOS of the metallic chain (49) andT is given in Eqs. (17) and (18). The shot noise is obtained by averaging over the spin ↓ where p(0) = p(1) = 1/2 at half-filling. C. Boundary Green's functions of the leads In this Appendix, we provide a derivation of the retarded/advanced and lesser/greater Green's function used in the manuscript, which describes quasiparticle excitations at the boundary of the semi-infinite leads [2][3][4]. a. Metallic leads. In the Nambu formalism the α = L, R k-space Hamiltonian is: where ξ α k = k − µ α , and k = −2t cos k. In the following we will firstly compute the retarded (R) and advanced (A) boundary Green's function and then the lesser (<) and greater (>) ones. To this aim we remind that the Green's function of a metallic chain with periodic boundary conditions reads: where 1 is the identity and σ z the third Pauli matrix, as usual the effect of the electrochemical potential µ α enters in statistical averages and does not influence the spectral properties of the metallic contacts. TheĜ xx αα Green's function is: where we have performed the change of variable z = − cos k. 
In order to compute the boundary Green's function of the semi-infinite metallic chain we need the local Green's functionĜ xxαα : and the nearest-neighbor onesĜ Moreover, we have: where ρ(ω) is the 1-D density of states ρ(ω) = 1 The boundary Green's function of a semi-infinite metallic chain located at x > 0 can be obtained from the "bulk" Green's function for the translationally invariant model (36) by adding a local impurity of strength λ at site x = 0, which results in the perturbation: By performing perturbation theory in (43) we obtain the following Dyson's equations for the boundary Green's functions:Ĝ where for simplicity we drop the index α. In the limit λ → ∞, i.e., when one effectively cuts the wire into two semi-infinite pieces, Eq. (44) yields for the boundary Green's functions. Thus, by taking the lesser component of Eq. (44) and then the λ → ∞ limitĜ < xx reads: For what concern the R/A components of the Dyson's equation we have: Finally, by using Eqs. (39)(40) and (41) we obtain: where R and A are obtained by z → ω ± i0 + , and with b. Kitaev chain. Analogously to the previous case the starting point is a Kitaev chain with PBC, that in the k-space is described by the Hamiltonian: and ξ k = −2t cos k − µ, ∆ k = 2∆ sin k. As a function of the complex variable z the Green's function reads: where where, in comparison with Hamiltonian H * in Eq.(1), we introduce an additional Majorana mode ξ coupled to the impurity with hybridizationṼ . The Hamiltonian can be exactly solved looking for the solutions of the secular equation: Through the ansatz the Bogoliubov-de Gennes (BdG) equations take the form within the bulk of the metallic leads, that is for j > 1. At the boundary BdG equations are for the endpoints of the leads, for the dot, and for the two Majorana fermions. The solutions of the BdG equation inside the bulk take the form that inserted into Eq.(60) gives the secular equation and the dispersion relation The latter equation admits four kind of waves (incoming particle in-p, outgoing particle out-p, incoming hole in-h, outgoing hole out-h), such that the most general eigenfunction with energy E is given by with cos k p;α = − E + µ α 2t The actual energy eigenstates are determined imposing the boundary BdG equation. Scattering through the quantum dot junction is fully encoded in the single-particle scattering matrix, S, that relates the outgoing waves, with Inserting Eq.(69) into Eq. (61), we finally arrive to where we have definedM We have thenĀ The scattering matrix is an unitary matrix that encodes all the possible single-particle processes at the junction with E the energy of the incoming particle/hole from the leads. Consistently with the notation above, we havê where r µ,µ α,α (E) denotes the reflection amplitude of a particle or of an hole, t µ,µ α,ᾱ (E) is the trasmission amplitude between the leads, a µ,μ α,α (E) corresponds to the Andreev reflection, that is the conversion of a particle (hole) into an hole (particle) within the same lead, finally c µ,μ α,ᾱ (E) is the crossed Andreev reflection amplitude, that is the conversion of a particle (hole) in one lead to an hole (particle) in the other lead. The scattering matrix allows us to introduce the four kind of eigenstates that define the scattering states basis. We have: with A δ appropriate normalization constants. 
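As an independent cross-check of the boundary (surface) Green's function of the semi-infinite metallic lead derived above, the sketch below iterates the standard self-consistency g(z) = [z − t² g(z)]⁻¹ for a nearest-neighbour chain with zero on-site energy and compares it with the familiar closed form. The conventions (units of t, size of the broadening) are textbook choices rather than those of the equations in this appendix.

```python
import numpy as np

def surface_gf_iterative(omega, t=1.0, eta=1e-2, tol=1e-10, max_iter=50_000):
    """Retarded boundary Green's function of a semi-infinite nearest-neighbour chain,
    from the self-consistency g = 1 / (z - t**2 * g); eta is a small positive
    broadening, needed for the plain iteration to converge inside the band."""
    z = omega + 1j * eta
    g = 1.0 / z
    for _ in range(max_iter):
        g_new = 1.0 / (z - t**2 * g)
        if abs(g_new - g) < tol:
            break
        g = g_new
    return g_new

def surface_gf_closed(omega, t=1.0, eta=1e-2):
    z = omega + 1j * eta
    return (z - np.sqrt(z**2 - 4 * t**2)) / (2 * t**2)   # branch with Im g <= 0

w = 0.7
print(surface_gf_iterative(w), surface_gf_closed(w))
# Boundary density of states rho(w) = -Im g / pi: a semicircle of half-width 2t,
# i.e. the rho(omega) entering the current and noise formulas of this appendix.
print(-surface_gf_iterative(w).imag / np.pi)
```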
In the following, to simplify the notation, we will assume particlehole symmetry, S µ,λ α,β (E) = S λ,µ α,β (−E) * , and assume the junction to be symmetric respect the lead exchange, S µ,λ α,β (E) = S µ,λ β,α (E). Because of these symmetries, we have only four relevant scattering coefficients, |S i,j | 2 , that fully describe the physics at the junction. We refer to them as R (E), normal reflection, T (E), normal transmission, A (E), Andreev reflection and C (E), crossed Andreev reflection. It is important to highlight that the normal transmission and the Andreev reflection are the only processes that creates an imbalance in the relative number of particles within the two metallic contacts. Whereas Andreev reflection and crossed Andreev reflection do not preserve the total number of particle in the metallic lead subsystem, as shown in Eq.(81) In Fig.(2) we report the scattering coefficients in presence of zero, one and two Majorana fermions. In the absence of Majorana fermions and for U = 0, the junction is trasparent, in the zero energy limit, due the resonance with the zero energy quantum dot states. However, for any finite U , the resonance is suppressed by the interaction that removes low energy states in the quantum dot. On the other hand, in the presence of a MZM we observe a zero bias trasmission and Andreev peaks that persist even for large values of U . Finally, in the presence of two Majorana, no zero energy state survives due to the hybridization between the Kitaev chains and the quantum dot. The robust topological peak in the scattering matrix coefficients is then expected to be an interesting signature of the presence of a MZM. In the following we will relate these features to physically measurable quantities like the current and the shot noise. To spell out the relation between the scattering matrix amplitudes and the current, it us useful to express the fermionic creation and annihilation operators in real space as a function of the system eigenvectors with δ running over the four scattering states. The eigenvectors satisfy the fermionic algebra all other anticommutator vanish. The Landauer-Buttiker approach, that consist in shooting particles and holes agains the junction from thermal reservoirs at fixed temperature and voltage biased chemical potentials, allows us to express the transport properties of the systems in terms of the voltage bias into the leads and the scattering matrix amplitudes. The starting point is the current operator in lead α, defined as
Single-photon sources based on single molecules in solids Single molecules in suitable host crystals have been demonstrated to be useful single-photon emitters both at liquid-helium temperatures and at room temperature. The low-temperature source achieved controllable emission of single photons from a single terrylene molecule in p-terphenyl by an adiabatic rapid passage technique. In contrast with almost all other single-molecule systems, terrylene single molecules show extremely high photostability under continuous, high-intensity irradiation. A room-temperature source utilizing this material has been demonstrated, in which fast pumping into vibrational sidebands of the electronically excited state achieved efficient inversion of the emissive level. This source yielded a single-photon emission probability p(1) of 0.86 at a detected count rate near 300 000 photons s−1, with very small probability of emission of more than one photon. Thus, single molecules in solids can be considered as contenders for applications of single-photon sources such as quantum key distribution. Introduction Optical detection and spectroscopy of single molecules in condensed phases such as crystals, polymers and biomolecular systems is now a well-established field [1,2]. On the one hand, singlemolecule spectroscopy (SMS) has become a powerful technique for exploring the individual nanoscale behaviour of molecules in complex local environments. On the other hand, single molecules that have been observed are truly nanoscopic emitters, only ∼1 nm in size, and the light emission from a single molecule is characteristic of that from a single quantum system. To probe a single molecule, a light beam (typically a laser) is used to pump a strongly allowed optical transition of the one molecule resonant with the optical wavelength in the sample volume probed, and the resulting optical absorption is sensed most commonly by recording the emission of fluorescent photons. Detection of the single molecule of interest must be done in the presence of billions to trillions of solvent or host molecules and in the presence of noise from the measurement itself; but in spite of these challenges, a large array of new systems are currently under study [3]. Although the early years of this research concentrated on zero-phonon optical transitions of rigid impurity molecules in solids at liquid-helium temperatures [4]- [9], much of the recent focus has centred on room-temperature investigations with an aim to explore both materials science as well as biology [10]- [18]. Optical spectroscopy in condensed phases at the single-molecule limit is generating much interest for a variety of reasons. Clearly, the detection of one molecule among billions or trillions of host molecules in the same volume achieves an ultimate level of sensitivity, i.e. detection of ∼1.66 × 10 −24 mol of material or 1.66Ymol. From a more fundamental point of view, detailed information may be obtained about the basic optical properties of the molecule itself or its interactions with the surrounding host matrix. Single-molecule measurements completely remove the normal ensemble averaging that occurs when a large number of molecules are probed simultaneously, allowing construction of a frequency histogram of the actual distribution of values (i.e. the probability distribution function) for an experimental parameter. 
Such details of the underlying distribution become crucially important when the system under study is heterogeneous, either by differences in immediate local environment (the 'nanoenvironment') or by differences in the time-dependent state from one molecule to another. Thus, the usual assumption that all individuals contributing to the ensemble average are identical can now be directly examined on a molecule-by-molecule basis. In this paper, a single molecule will be utilized as a single quantum object, naturally capable of emitting non-classical radiation, specifically, one and only one photon at a controllable time. It is worth noting that SMS studies are related to (indeed were inspired by), but distinct from, the well-established field of spectroscopy of single electrons or ions confined in electromagnetic traps [19]- [21] and subsequent successes in temporarily trapping single neutral atoms for quantum optical experiments [22,23]. The use of a single neutral atom as a light source has very recently achieved a major milestone with the realization of a single atom laser in the strong cavity coupling limit [24]. The vacuum environment and confining fields of an electromagnetic trap are quite different from the environments experienced by single molecules in solids and liquids. The trap experiments must deal with micromotion in the confining trap potential and/or trapping time limitations. In SMS, however, the interactions with the lattice act to constrain the molecule, hindering or preventing molecular rotation, and the surrounding solid traps the single molecule such that extended measurements on the same single molecule are achieved. This has allowed a variety of quantum optical measurements to be performed on a single molecule [25,26]. At the same time, the single molecule is continuously bathed in the phonon vibrations of the solid available at a given temperature, and can interact with the electric, magnetic and strain fields of the nanoenvironment. Despite their apparent complexity in terms of molecular vibrations, rotations and interactions with the host material, single-molecule systems have exhibited extremely simple 'two-' and 'three-level' behaviour under optical irradiation. Moreover, for a few carefully selected combinations of molecule and molecular crystal host, the single-molecule emission has been found to be extremely stable [27,28], in contrast with the photobleaching that commonly occurs in aqueous biological environments. As has been described elsewhere in this special issue, the generation of non-classical states of light [29,30] is an important scientific challenge with several potential applications. A particularly novel non-classical source of light is a deterministic (or triggered) single-photon source: a source that has the property to emit with a high degree of certainty one (and only one) photon at a user-specified time. By contrast, with an attenuated pulsed laser source, the probability of having 0, 1, 2 or more photons present at a time is controlled by the Poisson statistics. A deterministic source of single photons can be important for several quantum information processing applications [31], from quantum cryptography [32] to linear quantum computation [33], although the latter application places the most stringent requirements on the indistinguishability of the emitted photons. 
Over the past decade, various schemes have been proposed to create a single-photon source, for example, involving single atoms in cavities [34]-[36], highly non-linear cavities [37] or excitonic emission in semiconductors. The last approach has yielded several key demonstrations of single-photon emission at cryogenic temperatures, e.g. using a 'turnstile' effect [38] or a quantum dot in a post microcavity [39,40], reviewed elsewhere in this special issue. In this paper, experiments producing a source of single photons on demand using optical pumping of a single molecule in a solid are reviewed. In the low-temperature regime, single photons may be generated by controlled excitation of single molecules in a solid [41]. At room temperature, a single molecule also provides a useful single-photon source, with a very high rate of emission and near-zero probability of emission of two photons simultaneously [42], a useful property for secure quantum cryptography. As a result, certain single-molecule systems can now be considered [43] as candidates for single-photon emitters in practical applications.
Figure 1. (a) Schematic of the selection of a single molecule in the probed volume; (b) energy levels of the molecule. Typical low-temperature studies use wavelength λ_LT to pump the (0-0) transition, whereas at room temperature shorter wavelengths λ_RT that excite vibrational sidebands of the electronic excited state are more common. The intersystem crossing or intermediate production rate is k_isc, and the triplet decay rate is k_T. Fluorescence emission, shown as dotted lines, originates from S_1 and terminates on various vibrationally excited levels of S_0 or S_0 itself with rate k_21.
Basic concepts of single-molecule detection Since a single molecule is governed by quantum mechanics, if one highly fluorescent molecule can be optically selected and controllably forced to emit, a single-photon source will result. Thus, one key requirement is to be able to select the single molecule and detect its emission; the other is to achieve single-photon emission on demand. Strategies to satisfy the latter requirement will be described in subsequent sections. To achieve single-molecule detection at any temperature, one must (1) guarantee that only one molecule is in resonance in the volume probed by the laser, and (2) provide a signal-to-noise ratio (SNR) for the single-molecule signal that is >1 for a reasonable averaging time. A comprehensive review of the experimental methods of single-molecule detection and spectroscopy in condensed phases has recently appeared [44]. Guaranteeing only one molecule in resonance (illustrated schematically in figure 1(a)) is generally achieved by dilution and tightly focused excitation. For example, at room temperature one need only work with roughly 10⁻¹⁰ mol l⁻¹ concentration of the molecule in a transparent host combined with focusing of a laser beam to a diffraction-limited probed volume of the order of 10 µm³. At liquid-helium temperatures, the well-known phenomenon of inhomogeneous broadening [45]-[47] can be used to achieve additional dilution factors from ∼10⁴ to 10⁵ simply by tuning the laser frequency to a spectral region where only one molecule is in resonance. The power of spectral selection at low temperatures cannot be underestimated, as the width of a fluorescence excitation profile of a single molecule can approach lifetime-limited values [48,49] of the order of 10 MHz in this regime.
Achieving the required SNR can be accomplished by careful selection of the emitting molecule and the transparent host, as well as by utilization of detection methods at the state-of-the-art of laser spectroscopy. Considerations that are critical for the selection of the host include high optical quality to minimize Rayleigh scattering, and minimization of the volume probed to avoid Raman scattering. To obtain as large a signal as possible from the molecule, one needs a combination of large absorption cross-section, small focal volume, high photostability, weak bottlenecks into dark states such as triplet states, operation below saturation of the molecular absorption and high fluorescence quantum yield. Figure 1(b) shows the essential energy levels of the molecule-generally a strong electric-dipole-allowed singlet-singlet transition is pumped by the laser, and fluorescence emission should be the strongest de-excitation pathway from the lowest electronically excited state. For single-photon emitters, often the probability of intersystem crossing is so low that the triplet states can be neglected. In most experiments, a long-pass filter is used to block the pumping laser light and Rayleigh scattering, and the fluorescence shifted to long wavelengths is detected with a photon-counting system, either a photomultiplier and discriminator or a single-photon-counting avalanche photodiode. The detected photons generally cover a broad range of wavelengths, because the emission from the ground vibrational level of the electronically excited state terminates on various vibrationally excited (even) levels of the electronic ground state as shown in figure 1(b). In fluorescence excitation, the detection is usually background-limited and the shot noise of the probing laser is only important for the signal-to-noise of the spectral feature, not the signal-to-background. For this reason, it is critical to efficiently collect photons (as with a paraboloid or other high numerical aperture collection system). To illustrate, suppose a single molecule of pentacene in the host matrix p-terphenyl is probed with 1 mW cm^−2, near the onset of saturation of the absorption due to triplet level population. The resulting incident photon flux of 3 × 10^15 photons s^−1 cm^−2 will produce about 3 × 10^4 excitations s^−1. With a fluorescence quantum yield of 0.8 for pentacene, about 2.4 × 10^4 emitted photons can be expected. At the same time, 3 × 10^8 photons s^−1 illuminate a focal spot 3 µm in diameter. Considering that the resonant 0-0 fluorescence from the molecule is typically thrown away along with the pumping light, rejection of the pumping radiation by a factor >10^5-10^6 is generally required, with minimal attenuation of the fluorescence. This is often accomplished by low-fluorescence long-pass glass filters or by holographic notch attenuation filters. The attainable SNR for single-molecule detection in a solid using fluorescence excitation can be approximated by the following expression [50]:

SNR = S / [S + C_b P_o τ + N_d τ]^(1/2), with S = D φ_F (σ_p P_o / A hν) τ, (1)

where the numerator S is the peak-detected fluorescence counts from one molecule in an integration time τ, φ_F is the fluorescence quantum yield, σ_p the peak absorption cross-section on resonance, P_o the laser power, A the focal spot area, hν the photon energy, N_d the dark count rate and C_b the background count rate per W of excitation power.
The factor D = η_Q F_p F_f F_l describes the overall efficiency for the detection of emitted photons, where η_Q is the photomultiplier quantum efficiency, F_p the fraction of the total emission solid angle collected by the collection optics, F_f the fraction of emitted fluorescence which passes through the long-pass filter and F_l the total transmission of the windows and additional optics along the way to the photomultiplier. The three noise terms in the denominator of equation (1) represent shot-noise contributions from the emitted fluorescence, background and dark signals, respectively. See [51] for a detailed discussion on the collection efficiency for a single molecule taking into account the dipole radiation pattern, total internal reflection and the molecular orientation. Assuming the collection efficiency D is maximized, equation (1) shows that there are several physical parameters that must be chosen carefully to maximize the SNR. First, as stated above, the values of φ_F and σ_p should be as large as possible, and the laser spot should be as small as possible. The power P_o cannot be increased arbitrarily because saturation causes the peak absorption cross-section to drop from its low-power value σ_o according to

σ_p(I) = σ_o / (1 + I/I_S), (2)

where I is the laser intensity and I_S the characteristic saturation intensity [52]. The effect of saturation, in general, can be seen in both the peak on-resonance emission rate from the molecule R(I) and in the single-molecule linewidth Δν(I) according to [49]

R(I) = R_∞ (I/I_S) / (1 + I/I_S), (3)

Δν(I) = Δν(0) (1 + I/I_S)^(1/2), (4)

where for the three-level system in figure 1, the maximum emission rate R_∞ and the saturation intensity I_S are given by expressions involving the rates k_21, k_isc and k_T, with the additional symbols defined in the caption to figure 1. Equations (2) and (4) show that the integrated area under the single-molecule peak falls in the strong saturation regime. In particular, a strong triplet bottleneck produces a smaller saturation intensity. However, at higher and higher laser power, the scattering signal increases linearly in proportion to the laser power, so the difficulty of detecting a single molecule increases. The dependencies of the maximum emission rate and linewidth on laser intensity in equations (3) and (4) have been verified experimentally for individual single molecules [49]. The implications of these expressions in the presence of photobleaching have been described [44]. Microscopic and spectroscopic techniques in which each single molecule is observed as long as possible before photobleaching will be most useful for the purposes of this paper. At low temperatures, the well-known fluorescence excitation method in small focal volumes has been the most useful [8], and this method may be combined with microscopy by sample or beam scanning [49,60]. At room temperature, successful microscopic techniques for SMS include scanning methods such as near-field scanning optical microscopy (NSOM) [10] and confocal microscopy [61], as well as the wide-field methods of epifluorescence [62] and total internal reflection microscopy [63]. The essential components of each of these techniques are sketched in figure 3. Cases (a) and (b) show wide-field methods which observe several single molecules in parallel at different transverse spatial positions. Total internal reflection microscopy (TIR, case (a)) operates by illuminating a thin slice (∼125 nm thick) of the sample with the evanescent light field produced by total internal reflection of the pumping laser beam produced by passage through a prism P.
The evanescent field can also be produced by illumination through the objective [64]- [66]. The TIR technique has the advantage of pumping only a thin pancake-shaped volume of the (aqueous) sample at the upper cover slip to reduce background signals. Emission from single molecules located in a region typically several microns on a side is collected by a microscope objective of large numerical aperture, carefully filtered (F) to remove scattered pump radiation, and imaged onto an intensified CCD camera or other fast two-dimensional detector. By following the emission from isolated spots in the images as a function of time, the behaviour of several single molecules can be recorded simultaneously, at the maximum frame rate of the camera. Case (b) shows the standard wide-field epifluorescence technique, here illustrated with an inverted microscope configuration. The pumping radiation is reflected off a dichroic beamsplitter (D) and focused through the objective O to illuminate a region of the sample some microns on a side. The emission from this region is collected by the objective, filtered to remove pumping radiation and other backgrounds, and imaged onto the two-dimensional detector. It is worth noting that in both the TIR and epi methods, the transverse size of a spot from a single molecule is limited by diffraction effects to be of the order of 250 nm in diameter or so (depending upon wavelength), and this multiplied by the extent of the beam in the axial (z) direction represents the effective volume producing background signals. When higher time resolution or higher spatial resolution is required, the ability to view several molecules in parallel is sacrificed, as in the scanning methods of cases (c) and (d). Case (c) illustrates the well-known confocal imaging method, where the pumping laser beam is focused to the smallest spot possible by the objective lens. The emission from the diffractionlimited volume is collected and filtered, and focused through an aperture A before detection with a silicon avalanche photodiode (SPAD) capable of detecting photons. Either the sample or the illumination/collection region is scanned to build up an image. As usual, the use of the confocal aperture has the effect of restricting the axial region of the sample that can produce scattering backgrounds. While this property is quite helpful in imaging thick samples, for most singlemolecule studies, the axial extent of the sample is typically already fairly small (a few microns). Figure 4 shows a confocal fluorescence image of single terrylene molecules in a p-terphenyl crystal at room temperature (10 µm × 10 µm spatial range, 100 × 100 points, 10 ms point −1 ). The detected photon signal is encoded in the height of the image in the z-direction and in the colour grey scale. With confocal microscopy, the scanner can be parked at the location of the molecule, and the time dependence of the emission can be recorded on a temporal scale determined by the counting interval, the laser intensity and the brightness of the emitter. It is this configuration that is most useful for utilizing a single molecule as a single photon source. Finally, case (d) in figure 3 shows the general scheme of near-field scanning optical microscopy (NSOM). In this technique, a tapered and aluminium-coated optical fibre tip with a 50-100 nm hole in the end (OF) is most often used as the illumination source. 
The emission from the sample is collected in a fashion similar to that for confocal microscopy; with thick samples a confocal aperture can also be used. Because the volume illuminated is very small, this method can yield reduced backgrounds. However, the sample under study must be very flat so that the tip can be maintained within some tens of nm from the surface of the sample.

Photon antibunching under cw excitation

As with atoms and excitons, the stream of photons emitted by a single molecule contains information about the system encoded in the arrival times of the individual photons. Figure 5(a) illustrates this behaviour on long time scales: whenever the molecule crosses into the dark triplet state, the emission is interrupted by a dark period, which has an average length equal to the triplet lifetime, τ_T. The corresponding decay in the autocorrelation function of the emitted photons for pentacene in p-terphenyl is easily observed [8], and this phenomenon has been used to measure the changes in the triplet yield and triplet lifetime from molecule to molecule which occur as a result of distortions of the molecule by the local nanoenvironments [67]. Such correlation measurements can also extract information on wide time scales about the spectral shifting behaviour that occurs in amorphous systems [68]. Although this method gives access to many decades in time, the dynamical process must be stationary, i.e. the dynamics must not change during the relatively long time (seconds) needed to record enough photon arrivals to generate a valid autocorrelation. By contrast, in the nanosecond time regime within a single bunch (figure 5(b)), the emitted photons from a single quantum system are expected to show antibunching [69], which means that the photons 'space themselves out in time', i.e. the probability for two photons to arrive at the detector at the same time is zero. This is a uniquely quantum-mechanical effect [70], which was first observed for single Na atoms in a low-density atomic beam [71] and subsequently in trapped ions as well [72]. For a single molecule behaving essentially as a two-level system (i.e. ignoring the triplet state), antibunching is easy to understand as follows. Immediately after photon emission, the molecule is definitely in the ground state and cannot emit a second photon immediately. The probability of second photon emission is given by the quantum-mechanical expression for the probability of occupation of the excited state, which grows with a rate 1/T_1 at low power, with T_1 being the excited state lifetime. This growth rate eventually increases linearly with power when the Rabi frequency becomes comparable with or larger than 1/T_1 (see [73] for details). Typical Rabi frequencies (proportional to the product of the optical electric field amplitude and the transition dipole moment) are of the order of 10^8 s^−1 for an electric-dipole-allowed optical transition at visible wavelengths pumped near saturation. To observe antibunching correlations, the second-order correlation function g^(2)(τ) is generally measured by determining the distribution of time delays N(τ) between the arrival of successive photons in a dual-beam detector (the two quantities are equal when the total count rate is not too high [71,74]). Photon antibunching in single-molecule emission was first observed in the author's laboratory at IBM for the pentacene in p-terphenyl model system [75], demonstrating that quantum-optical experiments can be performed in solids and on molecules for the first time (figure 5(c)). The high-contrast dip at τ = 0 is strong proof that the spectral feature in resonance is indeed that of a single molecule.
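A minimal sketch of the antibunching dip just described, assuming the simple rate-equation picture of a two-level emitter with excited-state lifetime T_1 pumped at a rate k_p (both values below are illustrative, not measured parameters): after an emission the excited-state population, and hence the probability of a second emission, recovers from zero, so g^(2)(τ) rises from zero at τ = 0.

```python
import math

T1 = 4e-9          # excited-state lifetime (s); illustrative, of order a few ns
k_p = 5e7          # pump rate (1/s); illustrative, below saturation

gamma = 1.0 / T1   # spontaneous decay rate

def g2(tau: float) -> float:
    """Second-order correlation for a two-level emitter in the rate-equation limit.

    After an emission the molecule is in the ground state; the excited-state
    population recovers as 1 - exp[-(k_p + 1/T1) * tau], so g2(0) = 0 and
    g2 -> 1 at long delays.
    """
    return 1.0 - math.exp(-(k_p + gamma) * tau)

for tau_ns in (0.0, 0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"tau = {tau_ns:4.1f} ns   g2 = {g2(tau_ns * 1e-9):.3f}")
```

At low pump power the recovery rate is dominated by 1/T_1, as stated in the text; increasing the power shortens the recovery and, in the coherent regime, Rabi oscillations appear on top of this rise.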
This observation has opened the door to a variety of other quantum-optical experiments with single molecules [74,76], such as measurements of the AC Stark shift [77] and others [26]. The convenient 'trap' that the solid forms for a single molecule is critically important for these studies that observe the same single molecule for an extended period of time. It is important to realize that the photon emission times under excitation by a cw laser are not deterministic, i.e. the molecule does not act as a controllable single-photon source where photons are emitted upon demand. Nevertheless, the observation of photon antibunching is an important necessary condition for single-photon emission. Other investigators have reported photon antibunching under cw excitation for several molecular systems. For example, in 1997, Ambrose et al [78] studied 20 single molecules on a surface and added their signals to observe photon antibunching at room temperature [78]. Individual molecular signals were obtained until the molecule bleached, then a new molecule was selected and averaged with the others. Although, in essence, a bulk measurement similar to fluorescence correlation spectroscopy in solution, a clear antibunching signature was observed. At room temperature, figure 6 shows the antibunching correlations measured in the author's laboratory as a prelude to the use of the terrylene in p-terphenyl system for triggered single-photon emission [42] (also reported by Fleury et al [79]). In further studies, Treussart et al [80] have described photon antibunching correlations for single terrylene molecules in a poly(methyl methacrylate) thin film. Another nanoscale emitter of current interest is the semiconductor quantum dot, and high-contrast antibunching was reported for a single CdSe/ZnS quantum dot 1.8 nm in radius at room temperature by Lounis et al [73] in 2000. In a different solid-state system composed of single NV centres in a diamond nanocrystal, antibunching correlations have also been observed [81], described elsewhere in this special issue. Low-temperature source In 1999, Brunel et al [41] utilized the narrowness of the optical absorption of single molecules in solids at low temperatures to produce a triggered source of single photons based on single copies of DBATT (see figure 2 for the structure) in a hexadecane host. As mentioned above, at low temperatures (1.7 K) the fluorescence excitation profile of the (0-0) lowest electronic transition of a suitably chosen single molecule is a narrow resonance roughly tens of MHz in width. The optical configuration is sketched in figure 7(a), where a paraboloid was used to collect the emitted photons. A single-molecule emitter was optically selected with a single-frequency cw dye laser near 589 nm, and a sinusoidal applied electric field was used to adiabatically sweep the molecular absorption into and out of resonance with the pumping laser. For the passage to be adiabatic, the passage time must be longer than the Rabi period, but shorter than the fluorescence lifetime. Critical to this idea is the requirement for a linear Stark shift for the single molecule; since DBATT is centrosymmetric, the experiment relied upon distortion of the molecular structure to enable linear Stark shifts for single molecules in the wings of the inhomogeneously broadened line. Figure 7(b) illustrates the average time structure of the emitted photons. The inset shows that photons are emitted periodically at a frequency equal to twice the frequency of the applied rf field, i.e. 
at the zero crossings of the applied field. The time-averaged emission decays with a time constant of ∼8 ns, which corresponds to the fluorescence lifetime of this system. Additional measurements of g^(2) using a standard Hanbury Brown-Twiss correlator confirmed the quantum mechanical nature of the emission, and the authors estimated that the probability of emission of a single photon per adiabatic rapid passage event was p(1) ∼ 0.68-0.74. Although the emission was highly non-classical, the probability of emission of two photons p(2) per passage was in the range of ∼10%. The overall detection efficiency was limited to ∼3 × 10^−3, mainly because filtering out the pumping laser excitation also eliminates part of the single-molecule fluorescence near the laser frequency [41]. These results illustrate that this system provides a reasonable source of single photons with high values of p(1) and emission rates near 6 MHz. The statistics of the source may also be quantified by means of the Mandel parameter Q_S = (σ^2 − n_av)/n_av, where σ^2 is the variance of the distribution and n_av is the average number of photons [82,83]. A value of Q_S = 0 is characteristic of a Poisson distribution, with values >0 and <0 signifying super-Poissonian and sub-Poissonian behaviour, respectively. The Mandel parameter at the source for the low-temperature single-molecule emitter was Q_S ∼ −0.6.

Room-temperature source based on a stable single-molecule system

Recently, a high-performance room-temperature source of single photons was demonstrated by Lounis et al [42] in the author's laboratory using the molecule terrylene in a p-terphenyl host crystal. This system has the advantage of fluorescence quantum yields near unity and a very weak triplet bottleneck, as well as extremely high photostability. Fluorescence microscopy at ambient conditions of single emissive dye molecules [2,18] embedded in polymers, droplets and gels or dispersed on surfaces has revealed various previously unknown effects that are normally obscured by averaging in conventional ensemble measurements. Digital photobleaching is an example; however, this imposes an upper limit on the number of photons detectable from a molecule. The photobleaching quantum efficiency of most single molecules has been reported to range from ∼10^−4 to ∼10^−7 in various host matrices, which is a severe limitation for quantum optical experiments [84]-[86]. In contrast, highly stable single terrylene molecules have been reported in p-terphenyl molecular crystals at room temperature by several investigators [27,28]. In this organic crystal, the fluorescent molecules are protected by the host from exposure to diffusing quenchers (such as oxygen), and profit from the ability to emit host phonons to prevent thermally induced damage. We find that for thick crystals (∼10 µm), this system indeed has extremely high photostability. Many single molecules tolerated hours of continuous illumination without photobleaching, i.e. the molecule regularly provides far more than 10^9 photons before irreversible termination of emission. To underscore the degree of stability, an epifluorescence microscope video of the emission from a sample has been provided in the online animation. Figure 4 shows a scanning confocal microscope image of the fluorescence from single terrylene molecules embedded in a thin, sublimed platelet of crystalline p-terphenyl. The isolated peaks (∼400 nm full-width at half-maximum) represent the emission of single molecules.
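The saturation relations introduced earlier (equations (2)-(4)) are used repeatedly in what follows. A short numerical sketch is given below; the parameter values are illustrative stand-ins, loosely of the order reported for terrylene in p-terphenyl in this paper, not measured results.

```python
def emission_rate(I, R_inf, I_s):
    """Peak on-resonance emission rate R(I) = R_inf * (I/I_s) / (1 + I/I_s)."""
    s = I / I_s
    return R_inf * s / (1.0 + s)

def linewidth(I, dnu0, I_s):
    """Power-broadened linewidth dnu(I) = dnu(0) * sqrt(1 + I/I_s)."""
    return dnu0 * (1.0 + I / I_s) ** 0.5

def cross_section(I, sigma0, I_s):
    """Saturated peak absorption cross-section sigma_p(I) = sigma_0 / (1 + I/I_s)."""
    return sigma0 / (1.0 + I / I_s)

# Illustrative (assumed) parameters:
R_inf = 2.5e6        # maximum emission rate, photons/s
dnu0 = 40e6          # low-power linewidth, Hz
sigma0 = 1.0         # low-power cross-section, arbitrary units
I_s = 1.0            # saturation intensity, arbitrary units

for I in (0.1, 0.5, 1.0, 2.0, 10.0):
    print(f"I/I_s = {I:5.1f}  R = {emission_rate(I, R_inf, I_s):.2e} s^-1  "
          f"dnu = {linewidth(I, dnu0, I_s)/1e6:6.1f} MHz  "
          f"sigma_p = {cross_section(I, sigma0, I_s):.2f}")
```

The trends are the ones used in the discussion: the emission rate levels off towards R_∞, the line broadens, and the peak cross-section drops as the intensity is raised.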
Saturation studies performed on this system show that maximum emission rates as high as 2.5 MHz are achievable, with typical saturation intensities close to 1 MW cm −2 . The high saturation intensities are the result of the orientation of the molecular transition dipole moment, which is nearly perpendicular to the plane of the sample and hence to the laser polarization. The method of forcing single-photon emission on demand is based on a simple concept, the pulsed optical excitation into a vibrational sideband of the excited state as illustrated in figure 8(a). A short pulse of green laser light pumps the four-level scheme of the molecule from the ground singlet state to a vibronically excited level of the first electronic excited singlet state. After fast (ps) intramolecular vibrational relaxation (IVR) to the lowest electronic excited state, the molecule subsequently emits a single photon on the time scale of the fluorescence lifetime (ns). Key to this idea is the selection of time scales in which the time for IVR is much shorter than the pulse width, which is much shorter than the fluorescence lifetime. Since the molecule can be pumped at high energy, the probability of preparing the emitting state can approach unity, without the complexity and difficulty of achieving perfect inversion by a resonant pump pulse with area π. This scheme has the further advantage of spectrally separating the laser excitation and the fluorescence emission. The light emitted by a single molecule excited by a cw laser consists of single photons separated by random time intervals which depend on the excited state lifetime and the pumping rate. Since a single molecule can never emit two photons at once, the distribution of photon pairs separated by a time τ, rises from zero at τ = 0. Such photon antibunching behaviour for the terrylene/p-terphenyl system is presented in figure 5, which is an unequivocal signature of the single-molecule nature of the peaks shown in the confocal fluorescence images. To directly use the non-classical fluorescence properties of the single molecule, triggered single-photon emission was produced by pumping with a periodic, mode-locked laser source (frequency-doubled Nd:YAG laser, pulse width τ p = 35 ps, repetition rate ν = 6.25 MHz). Single photons are then generated at predetermined times, within the accuracy of the emission lifetime of a few ns. The standard time-correlated single-photon counting technique can then be used to measure the average time structure of the source (see figure 8(b)). The inset shows that the photons are emitted at times separated by the laser repetition time (160 ns). As expected, the main figure shows that the excited-state population rises in a very short time, demonstrating the rapid pumping of the molecule to its emitting state. The distribution then decays exponentially with a time constant of τ f = 3.8 ns, consistent with the lifetime of terrylene in p-terphenyl as deduced from the homogeneous width of ∼40 MHz at low temperature [54]. Critical to the performance of the single-photon emitter are the probabilities p(m) to have m photons emitted by the molecule after each pulse. Since the pump pulse width is very short compared with the fluorescence lifetime, the probabilities to emit two or more photons per pulse, p(n), n 2, are very small (see below). This means that the single-molecule emitter is quite resistant to the photon-number splitting attack [87]. 
Briefly, in the number-splitting attack, if more than one photon is used to send each bit, then an eavesdropper (Eve) can split off a photon here and there, obtaining information in a fashion that cannot easily be detected by the sender and receiver (Alice and Bob). In addition, the detected count rate from the molecule, S, is directly proportional to the probability p(1) that a single photon is emitted after each laser pulse: S = ηνp(1), where η is the detection efficiency. By considering the losses in filters and optics as well as the collection efficiency for a molecular dipole oriented nearly perpendicular to the plane of the microscope stage [51], we find η ∼ 6% for our confocal detection system. It is then possible to estimate the maximum value of p(1) by performing a power saturation study of the detected count rate. Figure 9(a) shows the measured S as a function of the average excitation power of the pulsed laser. The signal shows power saturation behaviour that is well fit by the saturation law of a two-level system from rate equations (i.e. the limit of pulse width longer than the coherence time of the excited level), S = S_∞ (I/I_S)(1 + I/I_S)^−1 [1 − exp{−(1 + I/I_S)τ_p/τ_f}] = S_∞ p(1), where I and I_S are respectively the excitation and the saturation intensities. From the fit, a saturation count rate of S_∞ = 343 ± 6 kHz can be extracted with I_S = 1.2 ± 0.1 MW cm^−2. The maximum measured count rate in the experiment reached 310 kHz and was limited only by the available laser power. The maximum achieved p(1) can be determined in two ways: using our maximum pumping intensity and I_S determined above, p(1) = 88%; using the maximum observed S and η, p(1) = 83%, in good agreement.

(Figure 9 caption excerpt: the ratio of the area of the central peak to the lateral peaks is in good quantitative agreement with the ratio expected from the measured average count rate for the signal and the background (S/B ∼ 6). Lower trace: when the laser spot is positioned away from any molecule, all peaks have the same area, as expected for a Poissonian source. The data were accumulated for 120 s (upper) and 300 s (lower). Reprinted by permission from Lounis and Moerner [42].)

Crucial to the usefulness of a single-photon source is a small value of double-photon emission, p(2). For example, quantum key distribution using polarization states of photons requires immunity from a beam-splitter attack [88], where a large value of p(2) allows an intruder to detect some of the extra photons and obtain information without being detected. Two-photon emission requires that the molecule must first be excited, then a photon must be emitted, and then the molecule must be excited a second time, all within 35 ps. For the terrylene in p-terphenyl system at room temperature, the value of p(2) was estimated to be <8 × 10^−4 at the maximum intensity. To directly demonstrate the non-classical sub-Poissonian statistics of the single-molecule emitter, figure 9(b) shows the intensity correlation function of the emitted light measured with a standard Hanbury Brown-Twiss coincidence set-up (emission split with a beam splitter and detected with two photon-counting APDs). For Poissonian light, such as that from an attenuated pulsed laser or the fluorescence from background excited by the laser pulses, the central peak is identical in intensity and shape to the lateral ones (figure 9(b)).
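Before turning to the correlation histogram itself, a quick numerical sketch of the count-rate relation just described can be helpful. Only quantities quoted in the text are used; the sketch estimates p(1) from S = ηνp(1), derives the Mandel parameter defined earlier for the resulting photon-number distribution, and compares it with a Poissonian source of the same mean.

```python
import math

# Values quoted in the text for the terrylene/p-terphenyl source:
S_max = 310e3        # maximum detected count rate, counts/s
eta = 0.06           # overall detection efficiency (~6%)
nu = 6.25e6          # laser repetition rate, pulses/s

# p(1) from the detected rate, using S = eta * nu * p(1):
p1 = S_max / (eta * nu)
print(f"p(1) estimated from the detected count rate: {p1:.2f}")   # ~0.83

# Mandel parameter Q_S = (variance - mean)/mean for a source that emits
# one photon with probability p(1) and none otherwise (p(m >= 2) negligible):
mean = p1
variance = p1 - p1 ** 2          # variance of a 0/1-valued photon number
Q_S = (variance - mean) / mean   # equals -p(1) for such a distribution
print(f"Mandel parameter at the source: Q_S = {Q_S:.2f}")

# For comparison, a Poissonian (coherent) source with the same mean photon number:
for m in range(3):
    p_coh = mean ** m * math.exp(-mean) / math.factorial(m)
    print(f"  Poissonian p({m}) = {p_coh:.2f}")
```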
In the case of a perfect single-molecule emitter, the central peak should vanish altogether since no more than one single photon can be emitted by the molecule per pump pulse. The ratio of the central peak's area to the area of the lateral peaks is the signature of the sub-Poissonian statistics of the light emitted by the source. The residual peak at zero delay in figure 9(b) arises from coincidence events involving background photons excited during each laser pulse. In our experiment, the background signal shows a lifetime of ∼4 ns, which indicates that it arises from weak fluorescence from out-of-focus terrylene molecules, not from Raman scattering. With an improved sample preparation (specifically lower concentration of terrylene), the contrast ratio should easily increase further. Figure 10 compares the probability distribution p(m) for the terrylene/p-terphenyl source to that expected from a Poisson distribution. At the highest pumping power, the probabilities of the single-molecule source are p(0) = 0.14, p(1) = 0.86 and p(m > 1) ∼ 0. This distribution is radically different from that for a pulsed coherent source with the same n av = 0.86: p coh (0) = 0.42, p coh (1) = 0.36, p coh (2) = 0.16, . . . . The Mandel parameter of the single-molecule source is Q S = −0.86, not far from −1, the value expected for a perfect single-photon emitter, and far from 0, the value for a Poissonian source. The Mandel parameter Q d of the detected photon counts is naturally affected by the light detection efficiency [82]. Using Q d = Q S (η/2), one may estimate Q d ∼ −3%. This represents relatively high performance for a single-photon emitter. In recent work, another room temperature, single-molecule single-photon source has been reported by Treussart et al [89]. The authors used a sample composed of the cyanine dye DiIC 18 (3) in a thin layer of poly(methylmethacrylate) pumped by a mode-locked laser at 532 nm and obtained values of p(1) near 5%. On the time scale of a few pulsed excitations, sub-Poissonian statistics were clearly observed, and the probability of two-photon events was 10 times smaller than for a comparable Poissonian distribution. However, on longer times, the blinking of the fluorescence due to passage into the dark triplet state produced excess noise. Ten thousand detection events were typically recorded before photobleaching, far fewer than for the terrylene/p-terphenyl system described above. The utility of such a source must balance the ease of sample preparation when molecules are dispersed in a polymer with the appearance of blinking effects and photobleaching. Concluding remarks This paper has reviewed progress in single-photon emitters based on single molecules in solids. An early demonstration utilized the narrow lines available for zero-phonon optical transitions in solids at low temperatures and an adiabatic rapid passage technique to generate controllable emission from a single molecule. More recent experiments have demonstrated a room-temperature source for single photons on demand, based on a highly stable single terrylene molecule in a p-terphenyl crystal. The parameters of the latter source (repetition rate and singlephoton generation probability) are limited only by the laser system used; nevertheless, the current performance already surpasses that of previous work. 
This high performance combined with the simplicity of the source suggests that it may be considered for a variety of quantum optical experiments and for other applications where triggered single photons are needed. The fact that photons are emitted into a range of solid angles can limit the detection efficiency; however, optical solutions to this problem can be envisioned, i.e. one can imagine that the single molecule could be coupled to a single cavity mode to reduce losses, change the emission pattern or modify the emission lifetime and thus increase the emission rate. To reduce the background (from out-of-focus molecules or Raman scattering), reduced terrylene doping, pumping with z-axis polarized light or use of a crystalline system with a more favourable orientation of the single absorber can be utilized. With further development, the prospects are high that single molecules in solids will provide compact and reliable sources of single photons for quantum key distribution applications. However, the emitted photons are not identical, in that the emission terminates on various excited vibrational levels of the ground state. Extension of this work in other directions may be envisioned based on recent work. The recent exploration of polymer hosts for dye molecules leads one to hope that for optimized molecule/polymer combinations, higher photostability will be realized. In terms of the coherence between emitted photons, in a low-temperature experiment on single nitrogen-vacancy centres in diamond, g^(1) correlations have been observed for single emitters when only the (0-0) emission is detected [90]. Even though this is not the Hong-Ou-Mandel intensity interference [91] ultimately required for linear quantum computation, it is a step in the right direction and it should also be possible with molecular emitters, perhaps with modification of the spontaneous emission by cavity resonances. In a different low-temperature experiment [92], two single terrylene molecules were identified to be within ∼10 nm of each other by applying a Stark-shifting inhomogeneous electric field [93]. At high excitation powers, a line appeared between the two fluorescence excitation profiles showing optical dipole coupling between the two emitters. This system has promise as an emitter of pairs of entangled photons. Overall, the utility of a host crystal as a trap for single-molecule emitters is much better than might be expected. In spite of the presence of host phonons, the short relaxation time of phonon-assisted transitions renders the multilevel nature of the system relatively unimportant. This, coupled with the extreme stability of molecular emitters like terrylene in a suitable molecular crystal host, suggests that further improvements of single-molecule, single-photon emitters will be realized.
Decision-Making Method on the Degree of Lecturers' Compliance to Job Titles

A university lecturer conducting educational and scientific activities in the technical and IT spheres should have a high level of personal and professional competencies, including social and psychological, pedagogical, methodological, and scientific competencies. The article proposes an approach to the lecturer selection process, consisting of a hierarchical model of lecturer competencies, criteria for assessing private competencies, assessment and selection methods, selection algorithms, and software and hardware assessment and selection tools, which allows assessing various professional and personal competencies of lecturers and making decisions about the degree of their compliance.

I. INTRODUCTION

An analysis of literary sources has shown that researchers are currently studying various aspects of the formation and assessment of professional qualities and competencies of lecturers of educational institutions. Examples of such aspects are [1,3,6]:
• psychological foundations for modeling the professional competence of a lecturer,
• criteria for the effectiveness of lecturers and the formation of pedagogical competence of lecturers of educational institutions,
• professional competence of lecturers,
• methodologies for assessing the activities of lecturers of higher educational institutions,
• requirements for lecturer qualifications and assessment of the quality of teaching staff,
• organizational and pedagogical systems for assessing the pedagogical activity of university lecturers, etc.
Nevertheless, modern approaches to assessing the competencies of lecturers have a number of drawbacks, including significant ones, such as the subjectivity of the final decision, the inability to apply the methodology under changed conditions, the focus on narrow, particular problems, and the refusal to use methods of modern decision theory [1,2,3]. In this regard, decisions made in universities often turn out to be insufficiently substantiated and may cause distrust among interested parties.

II. PROBLEM STATEMENT

A university lecturer conducting educational and scientific activities in the technical and IT fields should have a high level of personal and professional competencies. To assess the quality of lecturers, it is not enough to evaluate only their personal competencies. Professional competencies also play an important role, since this area is actively developing. For the university to train competitive specialists in this subject area, who are able to adapt quickly to changing conditions and to the emergence of new technologies, software and hardware, the level of professional competencies of the teaching staff (TS) should be quite high. Considering that the activity of a higher-school lecturer is multifunctional and includes not only pedagogical but also research, methodological, technical-didactic and other types of activity, it is advisable to separate professional and personal competencies. In this case, pedagogical competence, including subject-matter competence, is considered an integral part of the professional one. In addition, since training and education require direct communication with students, the application of methods of psychological influence on them, methods of self-regulation of mental states, etc., it is advisable to use a broader concept - the "socio-psychological, pedagogical competence" of a higher-school lecturer.
III. METHODOLOGY

As part of the study, a lecturer competency model will be developed as a set of particular characteristics, and methods for their assessment will be determined. These particular characteristics are of varying degrees of importance for the professional activities of the lecturer, including those related to the specialization of the lecturer. Within the framework of the study, baseline data will be collected to evaluate each lecturer in an educational institution, and private competencies and characteristics will be selected to evaluate lecturer competencies within the framework of the developed model. The evaluation criteria for each qualification characteristic will also be formalized, methods for assessing the importance of the criteria will be selected, gradations for each selected private characteristic of the lecturer's competencies will be developed to support appropriate managerial decisions, and a decision-making method will be developed for determining the degree to which the lecturer corresponds to the job title.

IV. LITERATURE REVIEW

In practice, different methods are used to evaluate the professional activities of lecturers (state certification, portfolio assessment, competitive selection, etc.) [4]. Dissertation research is also known that addresses the quantitative assessment of the work of lecturers, their activities, professional qualities, professionalism, etc. An analysis of the sources shows that these approaches do not allow the models proposed in them to be adapted to changed application conditions, attach great importance to subjective expert assessments while taking little account of more formalized methods, and use estimation algorithms that are poorly suited to competence assessment [1]. Issues of multicriteria assessment are the subject of research by a number of foreign and Russian researchers [2,5-9]. In order to characterize the lecturer as a qualified specialist, it is necessary to indicate specific properties whose presence in the object allows one to make an appropriate judgment. There are a number of approaches to assessing lecturers. For example, the Assessment Center involves a comprehensive assessment of lecturers by competency, taking into account the personal and professional qualities of specific lecturers [10]. Biysk Technological Institute uses its own methodology, within the framework of which a system of indicators of the lecturer's work is distinguished [11]: educational, educational-methodological, research, and organizational-methodological work, as well as advanced training and social educational activities. To assess the activities of the lecturer, a point system is used with weighting coefficients, the values of which are set for each section and for each integral and private indicator of the sections, with the involvement of experts. The number of points scored by the lecturer is calculated by summing up the points scored for all private indicators, which, using the weight coefficients, are folded into integral indicators. Scores for activities (sections) are formed from the weighted sums of the integral indicators. The Moscow Automobile and Road Institute uses a different approach [12]. The rating of each lecturer consists of two parts: rating "P", which characterizes the accumulated qualification potential, and rating "A", reflecting the lecturer's current activity in the main areas of activity.
An absolute personal rating is calculated as the weighted sum of the ratings "A" and "P". In addition to the fact that a number of higher education institutions use different approaches to determining the rating of lecturers, many researchers interpret the content of the concepts used for assessing professional and personal competencies in different ways [3,4]. Some researchers limit themselves to considering only the personal competencies of a higher education lecturer, while others suggest combining them with professional ones [1,13]. The considered approaches are not entirely suitable for assessing all the components of the competencies of a university lecturer who, in addition to teaching, is engaged in scientific and methodological work, and the authors practically do not consider these aspects of the lecturer's activity. Based on the foregoing, we can conclude that at present there is no methodology that fully takes into account the multidimensionality of the lecturer's activities and their unequal importance depending on the lecturer's job title.

V. DISCUSSION

Based on a review of the literary sources and publications of researchers, an analysis was made of existing approaches, and groups of competencies were distinguished so that, on the one hand, they fully cover all aspects of the activity and, on the other hand, there is no mutual intersection between them. All types of competencies were divided into 2 groups - personal (PC) and professional (PrC). Professional competencies (PrC) are a combination of three components: sets of relevant socio-psychological and pedagogical (RSPC), methodological (MC) and scientific (SC) competencies. Each of the sets of competencies is proposed to be evaluated using tests and surveys, as well as various documentary sources. Each of the sets of competencies can be represented by a set of private competencies together with a vector representing the list of psychodiagnostic and/or praximetric methods that are used to evaluate the corresponding private competencies (Figure 2). The integrated assessment of the formation of competencies, CF, is calculated as a weighted sum of the private competencies, taking into account the appropriate methods used in the assessment procedure. The disadvantage of this model is that it does not take into account the levels of formation of each group of competencies or the positions of the assessed lecturers. To assess the achievements, the minimum values for each level are used.

VI. RESULTS

As part of the work, an experiment was conducted to identify the lecturer's compliance with the position based on the proposed methodology. The primary results obtained during the assessment procedure for the two selected lecturers (applying for re-election as associate professors) were expressed as levels of the corresponding competencies. The obtained values of private competencies for the candidates were compared with the previously proposed cobweb model (Figure 5). Based on the calculations, we can conclude that each of the lecturers meets the requirements of Belorussian State University upon re-election to a vacant post, and that the values of his competence indicators, computed with the developed methodology, are within the limits sufficient for his reapproval in the corresponding position.
In the event that these are two teachers applying for the same position, preference should be given to Lecturer 2, since the values of his personal and methodological competencies are at a higher level. The proposed model for assessing the quality of a university lecturer is workable and claims to be more reasonable than the currently used documentation model, but requires verification on a larger sample of teachers applying for vacant positions.
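A minimal sketch of how such a compliance decision could be implemented, assuming hypothetical competency names, weights and position thresholds (none of these values come from the article): private competency scores are folded into group scores by a weighted sum, and a candidate is judged compliant with a job title when every group score meets the minimum level set for that position; compliant candidates can then be compared group by group, as in the preference for Lecturer 2 above.

```python
from typing import Dict

# Hypothetical weights for private competencies within each competency group.
GROUP_WEIGHTS: Dict[str, Dict[str, float]] = {
    "personal":       {"communication": 0.4, "self_regulation": 0.6},
    "methodological": {"course_design": 0.5, "assessment_methods": 0.5},
    "scientific":     {"publications": 0.7, "supervision": 0.3},
}

# Hypothetical minimum group scores (0-1 scale) required for a job title.
POSITION_THRESHOLDS = {"associate_professor": {"personal": 0.6,
                                               "methodological": 0.6,
                                               "scientific": 0.5}}

def group_scores(private_scores: Dict[str, Dict[str, float]]) -> Dict[str, float]:
    """Weighted sum of private competency scores within each group."""
    return {
        group: sum(weights[name] * private_scores[group][name] for name in weights)
        for group, weights in GROUP_WEIGHTS.items()
    }

def complies(private_scores, position: str) -> bool:
    """True if every group score meets the minimum level for the position."""
    scores = group_scores(private_scores)
    thresholds = POSITION_THRESHOLDS[position]
    return all(scores[g] >= thresholds[g] for g in thresholds)

lecturer_1 = {"personal": {"communication": 0.7, "self_regulation": 0.6},
              "methodological": {"course_design": 0.6, "assessment_methods": 0.7},
              "scientific": {"publications": 0.6, "supervision": 0.5}}
lecturer_2 = {"personal": {"communication": 0.8, "self_regulation": 0.8},
              "methodological": {"course_design": 0.8, "assessment_methods": 0.7},
              "scientific": {"publications": 0.6, "supervision": 0.5}}

for name, scores in (("Lecturer 1", lecturer_1), ("Lecturer 2", lecturer_2)):
    print(name, group_scores(scores), "compliant:",
          complies(scores, "associate_professor"))
```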
Microbiological Response to Periodontal Therapy: A Retrospective Study

Background: Periodontitis is a multifactorial infection caused by a complex of pathogenic bacterial species that induce the destruction of periodontal structures. Objective: The aim of this study is to evaluate the presence and bacterial load of six periodontal pathogenic bacteria, measured at the initial visit and after osseous surgery in patients affected by chronic periodontitis and treated between 2005 and 2007. Methods: This cohort study was carried out on a sample of 38 consecutive patients affected by severe chronic periodontitis, diagnosed at baseline on the basis of probing depths equal to 6.68 ± 1.47 mm. On each subject, a microbiological test was performed before periodontal initial therapy and after osseous surgery (one year later). Five compromised teeth were chosen for each patient (the same teeth, before and after surgery), for a total of 190 teeth. Real-time PCR based analysis computed the total bacterial load of the samples and quantified six periodontal pathogens: Actinobacillus actinomycetemcomitans, Porphyromonas gingivalis, Tannerella forsythia, Treponema denticola, Fusobacterium nucleatum and Prevotella intermedia. Data collection was made consulting medical charts. Results: Pocket probing depth reduction after surgery was 4.50 ± 1.54 mm (p=0.0001). The mean number of sites with bleeding at baseline was 2.08 ± 1.17 and 0.58 ± 1.00 after surgery (p=0.001). The mean number of sites with suppuration at baseline was 0.26 ± 0.86 and 0 after surgery (p=0.02). The cell count of each pathogen and the total cell count were significantly higher at baseline than after surgery. Almost all bacteria presented a mean percentage reduction equal to that of the total count, except for Aa and Pi, which seemed to show a greater resistance. The difference of bacterial load, both before and after surgery, between smokers and non-smokers was not statistically significant (p>0.05). A statistically significant correlation was detected between pocket probing depth variation and bleeding on probing variation before and after the surgery, controlling for age (r=0.6, p=0.001). No significant correlations were observed between pocket probing depth and bacterial loads, except for Pg (r=0.5, p=0.001), Tf (r=0.6, p=0.001) and Td (r=0.4, p=0.02). Conclusions: Reduction of the presence and bacterial load of the examined periodontal pathogenic bacteria after osseous surgery, along with periodontal pocket reduction, appeared to be essential to achieve and maintain periodontal stability over years.

INTRODUCTION

Periodontitis is a multifactorial infection caused by a complex of pathogenic bacterial species that induce the destruction of periodontal structures, including tooth-supporting tissues, alveolar bone and periodontal ligament [1]. Some co-factors such as smoking, genetic susceptibility and host response can increase and exacerbate microbial actions [2,3]. Periodontal diseases may arise as gingivitis or periodontitis: the first can heal with the restitutio ad integrum of all tooth-supporting tissues, whereas the second always develops an irreversible lesion [4]. Periodontitis is site-specific: each tooth and its surfaces may be involved with different severity. Consequently, an accurate diagnosis is needed for each case and each tooth. The current definitions of periodontitis were introduced at the 1999 World Workshop for the Classification of Periodontal Diseases and Conditions [5].
Chronic periodontitis is a progressive disease that alternates silent and acute phases; aggressive periodontitis instead, is a highly destructive form of periodontitis. Both can be localized or generalized (respectively ≤ 30% of sites involved or > 30% of sites involved). Severity can be classified on the basis of the amount of Clinical Attachment Loss (CAL loss) in slight (1 or 2 mm CAL loss), moderate (3 or 4 mm CAL loss) or severe (≥ 5mm CAL loss) [6]. In the oral cavity we can find almost 700 bacterial species: microorganisms show a structural organization in the biofilm where they compete, coexist and⁄or synergize, leading to this chronic disease [7 -9]. This well-structured microbiological biofilm is able to colonize the sulcular regions between the tooth surface and the gingival margin [10]: the first species to reach these areas are gram-positive cocci and rods, followed by gram-negative cocci and rods, then fusobacteria, filaments, and finally spirillae and spirochetes [11]. The use of checkerboard DNA-DNA hybridization method made it possible to identify new bacterial species [12] that Socransky et al., clustered in complexes characterized by different virulence factors: Actinomyces species, Purple complex, Yellow complex, Green complex, Orange complex and Red complex [13]. Since bacterial microflora was found to differ between active and inactive sites, microbiological monitoring plays an important role not only for diagnosis but also for the choice of periodontal therapy [14,15]. During initial periodontal therapy, patients are usually treated with Scaling and Root Planing (SRP) to remove supra-and sub-gingival plaque and calculus, etiological agents of periodontal disease [16], and to obtain an increase of clinical attachment level. In case of persisting deep (> 4 mm) pockets after initial therapy, the sites need to be treated with resective osseous surgery [17]. The aim of this study is to evaluate the presence and bacterial load of six periodontal pathogenic bacteria detected at initial visit and after osseous surgery, in patients previously treated for chronic periodontitis. The null hypothesis was that no difference is observed between baseline and first visit in the presence and bacterial load of six pathogenic bacteria. MATERIALS AND METHODS This cohort study was carried out on a sample of 38 consecutive patients affected by chronic periodontitis, treated from 2005 to 2007 and attending a periodontal private practice (V.C.) in Bologna, Italy. Data collection was made in November 2013, consulting medical charts. Subjects were enrolled according to the following inclusion criteria: clinical diagnosis of chronic periodontal disease [18], presence at least of 12 teeth (excluded third molars) no periodontal treatment in the previous six months, no systemic diseases (as diabetes, arthritis, ulcerative colitis, Crohn disease, HIV, cancer, cardiac pathologies), no antibiotic therapy in the previous 4 weeks, no ongoing pregnancy, presence, in the medical records, of microbiological tests performed before and after osseous surgery. All selected subjects signed an informed consent form. This study was approved by Bologna-Imola Ethical Board Committee (Prot. N. 956/CE -02/11/2016). At the initial visit, each patient had been clinically and radiographically examined and Probing Pocket Depth (PPD), Bleeding On Probing (BOP) and suppuration data were recorded. 
Each clinical parameter was evaluated with a periodontal standard calibrate probe (Hu Friedy PCP11) that measured the depth from the gingival margin to the deepest point of the pocket. Moreover, patient's age, gender and smoking habits were recorded. Microbiological test was performed during the first appointment, before oral hygiene instructions and supra-gingival gross scaling. SRP was carried out in 4 successive sessions, once a week, one quadrant per week. Finally, after periodontal re-evaluation, patient underwent osseous surgery in the case of the presence of pocket sites with PPD > 4 mm and BOP +. One year after surgical treatment, a second microbiological test was performed. On each subject participating in the study, a microbiological test had been performed before periodontal initial therapy and after osseous surgery (about one year from baseline). For the microbiological test, five compromised teeth had been chosen for each patient (the same teeth, before and after surgery), at least one tooth for each quadrant, for a total count of 190 teeth. Supra-gingival plaque and/or calculus were removed with Gracey's curette. After that, the site was isolated from saliva by some cotton rolls and then delicately dried. For each site, a sterile paper cone was inserted in the sulcus at pocket depth and removed after 10 seconds. All 5 paper cones were collected, then placed in a unique sterile tube and sent to the laboratory (GmbH International GABA, Munster, Germany) for further microbiological evaluations. Samples were reduced with Wilkins-Chalgren-suspension and vortexed for 30 seconds to obtain a homogeneous suspension: 0.5 ml of the suspension was used for real-time PCR analysis. The cells were then centrifuged (15.000 turns at 41°C) for ten minutes and later subjected to automatic process analysis (Meridol Analysis ® Perio Diagnostics GABA International, Switzerland). Real-time PCR based analysis [19] was developed and validated (Carpegen GmbH). This method computes total bacterial load of the sample and quantifies six periodontal pathogens with a sensibility of 100 cells per type of pathogens: Actinobacillus actinomycetemcomitans (Aa), Porphyromonas gingivalis (Pg), Tannerella forsythia (Tf), Treponema denticola (Td), Fusobacterium nucleatum (Fn) and Prevotella intermedia (Pi). Statistical Analysis The sample size was determined by hypothesizing a mean difference between baseline and 1-year measurements of 0.5 (Cohen's medium effect size) [20] with a standard deviation of 1. A sample size of 38 subjects was obtained with a power of 85% at an α-level of 0.05. In each subject, 5 sites were examined: median value of probing depth, number of sites with bleeding on probing and number of sites with suppuration were computed for each patient and used for statistical evaluation. Mean ± standard deviation was used to describe the data. The comparison between bacterial load at baseline and after one year was performed by using Wilcoxon test for paired data, after having ascertained the nonnormality of the distributions of the bacterial loads by means of Shapiro-Wilks test (p=0.0001). Following the Socransky principles, the studied organisms were grouped into red complex (Pg, Td, Tf) and orange complex (Pi, Fn). Furthermore, the presence of Actinomyces (Aa) [13] was also determined. The percentage decrement of the bacterial load was computed with the formula: [(bacterial load after 1 yearbacterial load at baseline)*100/ bacterial load at baseline]. 
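A sketch of the core statistical steps described in this section, using made-up bacterial loads for a handful of patients rather than the study data: paired baseline and one-year loads are compared with the Wilcoxon signed-rank test after a Shapiro-Wilk check, and the percentage decrement is computed with the formula given above.

```python
import numpy as np
from scipy import stats

# Hypothetical bacterial loads (cells) for one pathogen in 8 patients,
# at baseline and one year after surgery; not the study data.
baseline = np.array([2.1e6, 8.4e5, 3.9e6, 1.2e6, 5.6e5, 2.8e6, 9.1e5, 1.7e6])
one_year = np.array([4.0e5, 2.1e5, 9.5e5, 3.3e5, 1.8e5, 6.2e5, 2.5e5, 4.4e5])

# Check (non-)normality as in the text, then use the paired Wilcoxon test.
print("Shapiro-Wilk p (baseline):", stats.shapiro(baseline).pvalue)
w_stat, p_value = stats.wilcoxon(baseline, one_year)
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {p_value:.4f}")

# Percentage decrement of the bacterial load, per patient:
# (load after 1 year - load at baseline) * 100 / load at baseline
decrement = (one_year - baseline) * 100.0 / baseline
print("Mean percentage change:", round(decrement.mean(), 1), "%")   # negative = decrease
```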
Kruskal-Wallis non-parametric analysis of variance was used for the comparisons of the percentage mean decrements of the bacterial loads among the six periodontal pathogens. The McNemar chi-square test was carried out to compare the frequency of detection of the six periodontal pathogens before and after surgery. Exact binomial 95% confidence intervals of the proportions were also computed. The partial correlation coefficient was used to measure the association between the variation of probing pocket depth and the number of sites with bleeding or suppuration, controlling for the age of the patients. The alpha level was a priori set at 0.05.

RESULTS

Thirty-eight patients (15 males and 23 females) were examined, 16% were smokers and 84% non-smokers, and the mean age was 47 ± 13 years. Patients were diagnosed with severe chronic periodontitis at baseline on the basis of probing depth equal to 6.68 ± 1.47 mm. The reduction of pocket probing depth after surgery was 4.50 ± 1.54 mm and it was statistically significant (p=0.0001). Thirty-eight percent of patients presented a number of sites with BOP greater than or equal to 3; the mean number of sites with bleeding at baseline was 2.08 ± 1.17 and 0.58 ± 1.00 after surgery, with a statistically significant decrease (p=0.001). Sixteen percent of patients presented a number of sites with suppuration greater than or equal to 1; the mean number of sites with suppuration at baseline was 0.26 ± 0.86 and 0 after surgery, with a statistically significant decrease (p=0.02). Bacterial loads at baseline and after 12 months are compared in Table 1. The cell count of each pathogen and the total cell count were significantly higher at baseline than after surgery; the highest load was presented by Pg and the lowest by Aa. Grouping the pathogens, it emerged that both the red and orange complex bacterial loads significantly decreased from baseline to the first control. The highest mean percentage of decrease from baseline to the one-year control was presented by Fn, whereas the lowest was presented by Aa; statistically significant differences of the percentage mean decrements were observed among the six periodontal pathogens (p=0.004), as shown in Table 2. Almost all bacteria presented a mean percentage reduction equal to that of the total count, except for Aa and Pi, which seemed to show a greater resistance. (Table 2 lists, for each microorganism, the mean percentage decrement and its standard deviation, together with the Kruskal-Wallis p-value; the smallest mean decrement, 23%, was observed for Actinobacillus actinomycetemcomitans (Aa).) The frequency of detection of the six bacteria is presented in Table 3. The amplitude of the confidence intervals suggests that these estimates are not precise, even if they show, at baseline, the highest prevalence of the red complex and the highest resistance of Fn. However, the frequencies of detection significantly decreased from baseline to one year after surgery, except for Aa and Fn. The difference of bacterial load, both before and after surgery, between smokers and non-smokers was not statistically significant (p>0.05). Furthermore, a statistically significant correlation was detected between the pocket probing depth variation and the bleeding on probing variation before and after the surgery, controlling for age (r=0.6, p=0.001). No significant correlations were observed between pocket probing depth and bacterial loads, except for Pg (r=0.5, p=0.001), Tf (r=0.6, p=0.001) and Td (r=0.4, p=0.02).
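The group comparisons reported above can be sketched in the same spirit (hypothetical data again, not the study measurements): the Kruskal-Wallis test compares percentage decrements across pathogens, and McNemar's test compares detection frequencies before and after surgery within the same patients.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)

# Hypothetical percentage decrements for three pathogens (one value per patient).
decrements = {
    "Fn": rng.normal(79, 10, 20),
    "Pg": rng.normal(60, 12, 20),
    "Aa": rng.normal(23, 15, 20),
}
h_stat, p_kw = stats.kruskal(*decrements.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")

# Hypothetical detection of one pathogen before/after surgery in the same patients:
# 2x2 table of (detected at baseline, detected after surgery) counts.
#               after: yes   after: no
table = np.array([[10,         18],    # baseline: yes
                  [1,           9]])   # baseline: no
result = mcnemar(table, exact=True)
print(f"McNemar: statistic = {result.statistic}, p = {result.pvalue:.4f}")
```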
DISCUSSION
The purpose of this retrospective cohort study was to compare data on the frequency of detection and the bacterial cell count of six important periodontal pathogens at the initial visit and after osseous surgery, in patients affected by chronic periodontitis. Our results showed that the cell count of each pathogen and the total cell count were significantly higher at baseline than at the first control, after 1 year. The highest loads were presented by Pg, and the lowest by Aa. By grouping the pathogens, it emerged that both red and orange complex bacterial loads significantly decreased from baseline to the first control. The red complex, composed of Pg, Td and Tf, and the orange complex, composed of Pi and Fn, have been demonstrated to be strongly associated with periodontal disease [13]. Levy et al. showed that 12 months after surgery, not only was the mean total DNA probe count significantly reduced (p<0.01), but each individual bacterial species also diminished significantly: Pg (p<0.05), Td (p<0.01), Fn (p<0.001) and P. nigrescens (p<0.01) [21]. These findings also confirm the data from Mombelli et al., who found reductions in levels, proportions and prevalence of gram-negative species of the red and orange complexes after surgery. In that study, after 12 months, Pg was reduced from 40% to 4% and Fusobacterium sp. was reduced from 80% to 42%. Considering the mean values, the authors reported that Pi, Fusobacterium sp. and Campylobacter rectus had a significant decrease, respectively from 1.8 × 10^6 to 1.9 × 10^5, from 1.1 × 10^6 to 5.5 × 10^5 and from 8.9 × 10^5 to 1.0 × 10^5 [22]. Other authors reported that after periodontal osseous surgery, Aa and Pg were not detected in any patient 6 months after the surgery, going respectively from 1% to 0% and from 5% to 0%. In addition, Pi and Fusobacterium sp. were recovered in a low proportion of patients after surgery [23]. Kyriazis et al. found a consistent decrease in red complex periodontal pathogens after periodontal osseous surgery: Pg was reduced from 3.1 × 10^5 to 1.7 × 10^5 and Tf from 6.7 × 10^5 to 4.2 × 10^5, while Td remained almost stable (3.43 × 10^5 to 3.95 × 10^5) [24]. To explain the significant reduction of the red and orange complexes, Horibe suggested that tissue management during surgical procedures may lead to an altered host immunologic response to pathogenic species, which may later produce beneficial clinical effects even in sites that did not receive periodontal surgery [25]. In our study, the highest mean decrease from baseline to after surgery was presented by Fn, with a percentage of 79%, whereas the lowest was presented by Aa, with a percentage of 23%. These data are in disagreement with the results by Levy et al., where the proportional reductions began to reverse from 9 to 12 months for Fn (increase of 31%) and P. nigrescens (increase of 59%) [20]. However, using the checkerboard DNA-DNA hybridization technique, the authors confirmed that surgery led to a further reduction of periodontal pathogens compared to SRP alone. Various studies investigated the beneficial effect on the composition of the subgingival microflora derived from the apically positioned flap associated with osseous surgery [21,22]. Moreover, Danser et al. reported that modified Widman flap surgery leads to a reduction of the prevalence of periodontal pathogens [26].
The authors examined the prevalence of Aa, Pg and Pi 3 months after periodontal surgery: the prevalence of Aa in the sub-gingival plaque was significantly reduced with a mean percentage of 1.2% (p = 0.02). A further significant reduction, with a mean percentage of 0.2% (p=0.003), was seen in the prevalence of Pg after periodontal surgery. Additionally, the prevalence of Pi was reduced with a mean percentage of 0.2% (p < 0.01) [25]. It has also been suggested that environmental changes, consequences of periodontal surgery, may lead to a shift in the sub-gingival microflora and that the final bacterial composition is more compatible with an oral health status [27]. Concerning clinical parameters and in agreement with other data [20], in our study there was a significant decrease in mean probing depth in surgically treated sites: from 6.68 ± 1.47 mm at baseline to 4.5 ± 1.54 mm after 1 year. In a similar study, Tuan et al showed a mean pocket depth of 5.5 mm at baseline, of 1.9 mm after 1 month, of 2.0 mm after 3 months and of 2.1 mm after 6 months, in sites treated with osseous surgery [22]. In the study of Danser et al., the probing depth decreased from 7.0 mm at baseline to 3.9 mm after surgery. Precisely, the mean reduction of pocket probing depth after SRP was 1.4 mm, and after periodontal surgery, probing depth decreased for an additional 1.3 mm [25]. Another research group reported that from baseline to post initial therapy, there had been a rapid pocket depth decrease (from 3.2 mm to 2.4 mm); on the contrary, from post initial therapy to the post-surgical examination, there had been no statistically significant decrease (from 2.4 mm to 2.2 mm) [28]. The authors obtained a greater mean reduction after SRP because tissue inflammation before initial preparation was greater than that observed before surgical therapy. In our study we found a statistically significant reduction (p=0.001) of sites with BOP: precisely, the mean decrease after surgery was 0.58 ± 1.00. These results are in agreement with others where the mean BOP was 1.8 at baseline and 1.1 after surgery (p<0,0001) [20,25]. Also Tuan et al., reported that BOP decreased from 30.4 ± 12.8 to 6.3 ± 9.4 (p=0,05) after 6 months from surgery [22]. In few studies, Tanner et al., showed that in bleeding sites, there was massive presence of gram-negative bacteria including F. nucleatum species, T. forsythia, Pi, P. nigrescens and Pg [29,30]. In addition, other authors reported that three bacteria of red complex species (T. forsythia, Pg, Td) and different bacteria of the orange complex were interconnected to the presence of bleeding on probing [11,31]. Even though the presence of BOP is not a reliable predictor or indicator for the loss of additional periodontal attachment [32], it has been verified that the presence of some specific periodontal pathogens (e.g. T. forsythia) may be connected with some sites that tend to convert to periodontitis [33]. Clinical studies have shown that there is a relationship between advanced age and increased pocket depth, and between advanced age and increased number of sites with BOP [34,35]. In our study, we found that a reduction of pocket depth between the initial visit and after surgery corresponded to a reduction in the number of sites with BOP (r = 0.5). This correlation may indeed be influenced by the relationship that exists between the two clinical parameters that we analyzed and the age of the patients treated. 
Therefore, we used the partial correlation coefficient to hold the age variable statistically constant, nullifying its influence on the correlation. After this procedure, the correlation between pocket depth and BOP was found to be stronger (r=0.6), with the shared variance increasing from 25% to 36%. One of the greatest risk factors for the onset and progression of periodontitis is cigarette smoking. Tobacco smoking alters the environment of the oral cavity, influencing the development of sub-gingival microbial pathogens (Td and Pg), which have been found to be significantly higher in smokers than in non-smokers [36,37]. Renvert et al. showed that current cigarette smokers respond less well than non-smokers to periodontal therapy such as SRP and osseous periodontal surgery [38]. The absence of a statistically significant difference in the cell counts between smokers and non-smokers in our research may be due to the low number of smokers (16%) present.
CONCLUSION
The evaluation of the presence and bacterial load of all the examined periodontal pathogens before periodontal treatment and after osseous surgery showed a significant decrease after surgical therapy. These results, along with periodontal pocket reduction, seem to be essential to achieve and maintain periodontal stability over the years. Further analyses on larger samples are needed to confirm the findings of this retrospective study.
ETHICS APPROVAL AND CONSENT TO PARTICIPATE
This study was approved by the Bologna-Imola Ethical Board Committee (Prot. N. 956/CE - 02/11/2016).
HUMAN AND ANIMAL RIGHTS
No animals were used in this research. All research procedures followed were in accordance with the ethical standards of the committee responsible for human experimentation (institutional and national), and with the Helsinki Declaration of 1975, as revised in 2008 (http://www.wma.net/en/20activities/10ethics/10helsinki/).
Parallel Computation of functions of matrices and their action on vectors We present a novel class of methods to compute functions of matrices or their action on vectors that are suitable for parallel programming. Solving appropriate simple linear systems of equations in parallel (or computing the inverse of several matrices) and with a proper linear combination of the results, allows us to obtain new high order approximations to the desired functions of matrices. An error analysis to obtain forward and backward error bounds is presented. The coefficients of each method, which depends on the number of processors, can be adjusted to improve the accuracy, the stability or to reduce round off errors of the methods. We illustrate this procedure by explicitly constructing some methods which are then tested on several numerical examples. Introduction We present novel algorithms to compute functions of matrices and their action on vectors which can be computed in parallel. We believe that new algorithms that are designed to be computed in parallel will become more frequent and useful in the near future. For instance, about 25 five years ago, in one of the most relevant books in numerical methods [33] it is written: "In recent years we Numerical Recipes authors have increasingly become convinced that a certain revolution, cryptically denoted by the words "parallel programming," is about to burst forth from its gestation and adolescence in the community of supercomputer users, and become the mainstream methodology for all computing... Scientists and engineers have the advantage that techniques for parallel computation in their disciplines have already been developed. With multiprocessor workstations right around the corner, we think that now is the right time for scientists and engineers who use computers to start thinking parallel." See also e.g. [6,11,15,16,17,19]. The performance of a parallel algorithm will depend on the particular code in which it is written (Fortran, C++, Matlab, Python, Julia, etc.), the compiler used, the number of processors available, how efficiently they communicate, etc., and this will change in the future. For this reason, we will mainly focus on the structure of the algorithm rather than in its particular implementation for solving a given problem. Then, we will analyse the accuracy and stability of the methods with an error bound analysis and, in some cases, we will show the performance of the new algorithms in the two extreme cases, i.e. in the worst scenario when it is evaluated as a sequential method versus the ideal one in which the cost of the whole algorithm can be taken as the cost for the computations in one single processor and neglecting the cost in the communication between processors. These results will be compared with the results obtained with sequential algorithms from the literature to solving the same problem. These results will illustrate the benefits one can achieve when the parallel algorithms are used under different conditions. The efficient computation of an important number of functions of matrices of moderate size is of great interest in many different fields [1,3,4,5,7,8,9,20,22,23,24,25,30,31,34,35,36,37,38]. Frequently, it suffices to compute their action on a vector [2,17,25,26,27,32] allowing to solve problems of large dimensions, or using an appropriate filtering technique the previous methods can also be used to compute functions of large sparse matrices [40]. 
For example, exponential integrators have been shown to be highly efficient numerical schemes to solve linear systems of differential equations [6,12,13,17,18,27,28], but their performance depends on the existence of algorithms to compute accurately and cheaply the exponential and related functions of a matrix, or their action on vectors. For example, the solution of the linear equation $x'(t) = Ax + b$, $x(0) = x_0$, is $x(t) = e^{tA}x_0 + t\,\varphi_1(tA)\,b$, where $\varphi_1(x) = (e^x - 1)/x$, which requires computing either the matrix functions $e^{tA}$ and $\varphi_1(tA)$ or their action on vectors, the best choice depending on the particular problem. The solution can also be written in different forms involving only the exponential or only the $\varphi_1$ function; the first form requires the computation of the inverse of the matrix $A$, while the second requires an accurate evaluation of $\varphi_1(tA)$ or its action on a vector. Even in this last case, if the scaling and squaring technique is applied and one considers that $2\varphi_1(2tA) = (e^{tA} + I)\varphi_1(tA)$, then both functions $e^{tA}$ and $\varphi_1(tA)$ have to be evaluated simultaneously. In discretised hyperbolic equations one frequently has to solve similar equations, involving e.g. trigonometric functions of matrices. In some other cases, given the transition matrix describing the flow of a differential equation, one may be interested in obtaining the generator matrix of the problem, which can be computed through the logarithm of the transition matrix. Obviously, the particular method to be used will depend on the size and the structure of the matrices whose functions (or whose action on vectors) are to be computed. Then, given a matrix $A \in \mathbb{C}^{d\times d}$, the goal is to compute $f(A)$, where $f(x)$, with $x \in \mathbb{C}$, is an analytic function near the origin, e.g. $e^x$, $\cos(x)$, $\sin(x)$, $\log(1+x)$ or the $\varphi$-functions, i.e. $\varphi_k(x) = (e^x - p_k(x))/x^k$, where $p_k$ is the truncated Taylor polynomial of the exponential (so that, in particular, $\varphi_1(x) = (e^x - 1)/x$). To compute $f(A)$, in this work we consider the case in which $(I + \alpha A)^{-1}$, or its action on a vector, $(I + \alpha A)^{-1}v$, with $\alpha$ a sufficiently small scalar and $v$ a vector, can be efficiently computed. Notice that:
• If $A$ is a dense matrix: it is well known that $(I + \alpha A)^{-1}$ can be computed at 4/3 times the cost of the product of two dense matrices. The inverse of a dense matrix can be computed, for example, using an LU decomposition and solving $d$ upper and $d$ lower triangular systems, with a total of $\tfrac{8}{3}d^3$ flops or, equivalently, a total cost similar to 4/3 matrix-matrix products.
• If $A$ is a large and sparse matrix: the solution of $(I + \alpha A)^{-1}v$ can be efficiently carried out in many cases using, for example, incomplete LU or Cholesky factorizations, the conjugate (or bi-conjugate) gradient method with preconditioners, etc. [20,39]. For example, if $A$ is tridiagonal (or pentadiagonal) then $(I + \alpha A)$ is also tridiagonal (or pentadiagonal) and, as we will see in more detail, the system $(I + \alpha A)x = v$ can be solved with only $8d$ flops ($15d$ flops for pentadiagonal matrices), which can be considered very cheap since the product $Av$ already needs $5d$ flops ($9d$ flops for pentadiagonal matrices).
• Quantum computation emerged about two decades ago and has recently become a field of enormous research interest. It is claimed that high performance can be achieved for solving linear algebra problems of large dimension [10,21]. For instance, in [10] it is mentioned: The key ingredient behind these methods is that the quantum state of $n$ quantum bits or qubits is a vector in a $2^n$-dimensional complex vector space; performing quantum logic operations or a measurement on qubits multiplies the corresponding state vector by $2^n \times 2^n$ matrices.
By building up such matrix transformations, quantum computers have been shown to perform common linear algebraic operations such as Fourier transforms, finding eigenvectors and eigenvalues, and solving linear sets of equations over $2^n$-dimensional vector spaces in time that is polynomial in $n$, exponentially faster than their best known classical counterparts.
Two steps are frequently considered when computing most functions, $f(A)$, or their action on vectors:
• If the norm of the matrix $A$ is not sufficiently small, a scaling is usually applied that depends on the function to be computed. For example, to compute $e^A$ one can consider $e^{A/N}$ with $N$ such that the norm of $A/N$ is smaller than a given value, and then $e^{A/N}$ is accurately approximated. If $N = 2^s$, then $s$ squarings are finally applied. For trigonometric functions, alternative recurrences like the double angle formula can be applied, etc.
• One then has to compute the scaled function, say $f(B)$ with $B$ depending on $A$, e.g. $B = A/N$, which can be written as a power series expansion $f(B) = \sum_{k\geq 0} a_k B^k$ (1). High order rational Chebyshev or Padé approximants, or polynomial approximations, are frequently considered to approximate this formal solution, following some tricks that allow their computation to be carried out with a reduced number of operations [1,8,18,35]. For example, a Taylor polynomial of degree 18 can be computed with only 5 matrix-matrix products [8], or a diagonal Padé approximation that approximates $e^x$ up to order $x^{26}$ can be computed with only 6 matrix-matrix products and one inverse [1]. On the other hand, the computation of $f(B)v$ is frequently carried out using Taylor or Krylov methods [2,18,26,27] because the scaling-squaring technique cannot be used in this case.
In some cases the numerical methods to solve these problems have some stages which can be computed in parallel, and this is considered as an extra bonus of the method. However, we are interested in numerical schemes that are built from the very beginning to be used in parallel. The goal of this work is to present a procedure that allows any such function to be approximated as a linear combination of simple functions that can be evaluated independently, so they can be computed in parallel. In addition, they can be used to approximate several functions of matrices simultaneously. We will also show how similar schemes can be used to compute the action of these functions on vectors.
Fractional decomposition
A technique that has already been used in the literature is the approximation of functions by rational approximations [14,15,16,17,18,19,38], to which we can apply a fractional decomposition. Given a function $f(x)$, it can be approximated by a rational function, $r_{n,m}(x)$, such that $r_{n,m}(x) \simeq f(x)$ for a range of values of $x$, where $r_{n,m}(x) = p_n(x)/q_m(x)$, and $p_n(x)$, $q_m(x)$ are polynomials of degree $n$ and $m$, respectively. One can then consider the fractional decomposition (for simple roots $c_i$ of $q_m$)
$r_{n,m}(x) = s_{n-m}(x) + \sum_{i=1}^{m} \frac{b_i}{x - c_i},$
where $s_{n-m}(x)$ is a polynomial of degree $n - m$ if $n \geq m$, or 0 otherwise, and the right hand side can be computed in parallel. If $n - m > 2$ the cost can be dominated by the cost to evaluate the polynomial $s_{n-m}(x)$. The choice of the polynomials $p_n(x)$, $q_m(x)$ depends on the particular method used, i.e. rational Padé or Chebyshev approximations, and the main trouble is that for most functions of practical interest the roots of $q_m(x)$, i.e. the coefficients $c_i$, are complex, making the computational cost about four times more expensive. This can be partially solved if one considers an incomplete fractional decomposition [14].
Since the complex roots of $q_m(x)$ occur in conjugate pairs, one can instead decompose $r_{n,m}(x)$ using real linear fractions for the $m_1$ real roots $c_1, \dots, c_{m_1}$ and real quadratic fractions for the $m_2$ conjugate pairs, where $m = m_1 + 2m_2$. Then, some processors have to compute one product, $x^2$, and one inverse, which altogether is nearly twice the cost of one inverse. Notice that for each function one has to find the polynomials $p_n(x)$, $q_m(x)$ for different values of $n$ and $m$, and then to evaluate the fractional decomposition, making this procedure less attractive. We simplify this procedure by using only real coefficients and by making the search for the coefficients of the fractional decomposition trivial for most functions, and we also show how to adapt the procedure when the matrix $A$ has different properties. The paper is organised as follows: Section 2 presents the main idea to build the methods. An error analysis is presented in Section 3. Section 4 illustrates how to build some particular methods, which are numerically tested in Section 5. Finally, Section 6 collects the conclusions as well as future work.
Approximating functions by simple fractions
The main idea of this work is quite simple: notice that, for sufficiently small values of $\|c_i B\|$,
$F_i(B) \equiv (I - c_i B)^{-1} = \sum_{k \geq 0} c_i^k B^k,$
whose computational cost, as previously mentioned, can be considered, for general dense matrices, as 4/3 matrix-matrix products. Notice that each function $F_i$, for different values of $c_i$, can be computed in parallel and then, if $P$ processors are available, we can compute
$r_s^{(P)}(B) = \sum_{i=1}^{P} b_i F_i(B) = \sum_{i=1}^{P} b_i (I - c_i B)^{-1}, \qquad (2)$
which agrees with $f(B)$ up to terms of order $s$, with $s = P - 1$, if the coefficients are chosen such that $c_i \neq c_j$ for $i \neq j$ and the coefficients $b_i$ solve the following simple linear system of equations
$\sum_{i=1}^{P} b_i c_i^k = a_k, \qquad k = 0, 1, \dots, s, \qquad (4)$
where the coefficients $a_k$ are known from (1). Obviously, once the functions $F_i$ are computed, different functions $f(B)$ (with different values for the coefficients $a_k$) can be simultaneously approximated just by looking for a new set of coefficients $b_i$ in (2), say $\tilde b_i$, such that the corresponding equations (4) with the new coefficients $a_k$ are satisfied. It remains the problem of how to choose the best set of coefficients, $c_i$, for each function, and this will depend on the number of processors available, $P$, the accuracy or stability desired, etc. If we choose $s + 1 < P$ then $P - s - 1$ coefficients $b_i$ can be taken as free parameters for optimization purposes. For example, the 4th-order diagonal Padé approximation to the exponential is given by
$r_{2,2}(x) = \frac{1 + x/2 + x^2/12}{1 - x/2 + x^2/12}, \qquad (5)$
with $r_{2,2}(x) = e^x + \mathcal{O}(x^5)$, but complex arithmetic is involved in its fractional decomposition, and $r_{2,2}(x)$ is only valid to approximate the exponential function. Obviously, $r_{2,2}(x)$ can also be computed at the cost of one product, $x^2$, plus one inverse. On the other hand, a 4th-order Padé approximation to the function $\varphi_1(x)$ is given by an analogous rational function $\tilde r_{2,2}(x)$. However, if five processors are available we can take, for example,
$r_4^{(5)}(x) = \sum_{i=1}^{5} \frac{b_i}{1 - c_i x},$
where the coefficients $c_i$ can be chosen to optimise the performance of the method. One set of coefficients $c_i$ can be optimal to get accurate results, while another set of coefficients can be more appropriate when, for example, the matrix $B$ is positive or negative definite (coefficients $c_i$ of appropriate size and sign can be chosen to optimize stability). If we take $c_1 = 0$ then only four processors are required. Once the values for $c_i$ are fixed, the coefficients $b_i$ are trivially obtained. For example, if we choose $c_i = 1/(i + 1)$, $i = 1, \dots, 5$, then the system (4) with $a_k = 1/k!$, $k = 0, 1, \dots, 4$, has a unique solution for the $b_i$, which corresponds to an approximation to the exponential that is already more accurate than the previous 4th-order Padé approximation (5); a short numerical sketch of this construction is given below.
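The construction in (2) and (4) is easy to prototype. The following is a minimal, hedged sketch in Python (NumPy/SciPy): the coefficients $b_i$ are obtained by solving the Vandermonde-type system (4) numerically for the sample choice $c_i = 1/(i+1)$, rather than taken from any optimised set, and the test matrix is simply a random dense matrix of small norm.

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

P = 5                                                     # number of simple fractions / processors
c = np.array([1.0 / (i + 1) for i in range(1, P + 1)])    # sample choice c_i = 1/(i+1)

# System (4): sum_i b_i c_i^k = a_k for k = 0..P-1 (a Vandermonde-type system)
V = np.vander(c, N=P, increasing=True).T                  # V[k, i] = c_i^k
b_exp = np.linalg.solve(V, np.array([1.0 / factorial(k) for k in range(P)]))      # a_k = 1/k!      -> e^x
b_phi = np.linalg.solve(V, np.array([1.0 / factorial(k + 1) for k in range(P)]))  # a_k = 1/(k+1)!  -> phi_1

rng = np.random.default_rng(0)
B = 0.05 * rng.standard_normal((50, 50))                  # a dense matrix with small norm
I = np.eye(50)

# Each resolvent below would be assigned to a different processor in a parallel run.
F = [np.linalg.inv(I - ci * B) for ci in c]
r_exp = sum(bi * Fi for bi, Fi in zip(b_exp, F))          # approximation to expm(B)

err = np.linalg.norm(r_exp - expm(B)) / np.linalg.norm(expm(B))
print(f"relative error of the 4th-order fractional approximation: {err:.2e}")
```

Reusing the same resolvents with the second coefficient vector (`b_phi` above) gives the corresponding approximation to $\varphi_1(B)$, which is exactly the reuse of the functions $F_i$ discussed next.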
In addition, if one takes $a_k = 1/(k + 1)!$, then solving (4) again yields a new set of coefficients $\tilde b_i$, which corresponds to a 4th-order approximation to the function $\varphi_1(x)$; similarly, a 4th-order approximation to the function $\log(1 - x)$ for sufficiently small $x$ can be obtained if we take $a_0 = 0$ and $a_k = -1/k$ for $k \geq 1$, although this set of coefficients $c_i$ is not necessarily the optimal one for these functions. If one is interested in computing the action of the function on a vector, one has to consider
$f(B)v \approx r_s^{(P)}(B)\,v = \sum_{i=1}^{P} b_i (I - c_i B)^{-1} v,$
which, as previously, can be computed by solving the linear systems of equations $(I - c_i B)v_i = v$ in each processor in parallel.
Functions of tridiagonal matrices acting on vectors
Just as an illustration of functions of sparse matrices acting on vectors, let us consider the computation of $f(B)v$ where $B$ is a tridiagonal matrix. The product $Bv$ is done with $5d$ flops, so a Krylov or Taylor polynomial of degree $K$ requires $5d \times K$ flops, in addition to the evaluation of the function $f$ for a matrix of dimension $K \times K$ in the Krylov methods. However, since $(I - c_i B)$ is also tridiagonal, the system $(I - c_i B)v_i = v$ can be solved with only $8d$ flops (using, for example, the Thomas algorithm; see the short banded-solver sketch below). Then, for problems of relatively large size this is a significant saving in the computational cost (if the cost to communicate between processors can be neglected or is not dominant). This problem will be considered later in more detail. Linear systems for tridiagonal matrices can also be solved in parallel (see, e.g. [33]), but this would require a second level of parallelism which is not considered in this work.
Error analysis
Forward and backward error analyses can easily be carried out for the new classes of approximations. For example, if one is interested in forward error bounds, we have that
$\big\| f(B) - r_s^{(P)}(B) \big\| \leq \sum_{k > s} \Big| a_k - \sum_{i=1}^{P} b_i c_i^k \Big|\, \|B\|^k.$
Since, in general, the coefficients $a_k$, $b_i$, $c_i$ are known with any desired accuracy, one can take a sufficiently large number of terms in the summation to compute the error bounds with several significant digits. For different values of the tolerance, $\epsilon$, we can find values, say $\theta_s$, such that if $\|B\| < \theta_s$ the error is below $\epsilon$. Obviously, sharper error bounds can be obtained if additional information on the matrix is known, e.g. when the bounds are written in terms of $\|B^k\|^{1/k}$ for some values of $k$ (see, e.g. [1]). Alternatively, error bounds from the backward error analysis can be obtained if one considers that $r_s^{(P)}(B)$ can formally be written as the exact function evaluated at a slightly perturbed matrix. Here we have assumed that the Taylor expansion of $r_s^{(P)}(x)$ coincides with the Taylor expansion of $f(x)$ up to order $s$, but this is not necessarily the case in schemes like Chebyshev approximations, where $s = 0$. These error analyses can easily be carried out for different functions $f(B)$ and for different choices of the coefficients $c_i$ and their associated values of the coefficients $b_i$. However, one has to take into account that the coefficients $b_i$ will strongly depend on the choice of the coefficients $c_i$. On the one hand, taking very small values for the coefficients $c_i$ usually allows the methods to be applied to matrices with large norms, but then the solution of the linear system of equations (4) will, in general, have large values for the coefficients $b_i$, and this can lead to badly conditioned methods when high accuracy (say, near round-off error) is desired. This can be partially solved by taking additional processors or by slightly modifying the methods, as we will see.
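The following is a minimal, hedged sketch of the tridiagonal case described in the previous subsection: the action $f(B)v$ (here $f = \exp$) is assembled from independent tridiagonal solves $(I - c_i B)v_i = v$. SciPy's banded solver is used as a stand-in for the Thomas algorithm, the matrix entries and coefficients are illustrative only, and the reference value comes from scipy.sparse.linalg.expm_multiply.

```python
import numpy as np
from scipy.linalg import solve_banded
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply
from math import factorial

d = 2000
lower = 0.2 * np.ones(d - 1)                  # sub-diagonal of B
main  = -0.4 * np.ones(d)                     # main diagonal of B
upper = 0.2 * np.ones(d - 1)                  # super-diagonal of B
B = diags([lower, main, upper], [-1, 0, 1], format="csc")   # a small-norm tridiagonal matrix

P = 5
c = np.array([1.0 / (i + 1) for i in range(1, P + 1)])
V = np.vander(c, N=P, increasing=True).T
b = np.linalg.solve(V, np.array([1.0 / factorial(k) for k in range(P)]))   # a_k = 1/k! (exponential)

v = np.random.default_rng(1).standard_normal(d)

approx = np.zeros(d)
for bi, ci in zip(b, c):
    # Banded storage of (I - c_i B): row 0 = super-diagonal, row 1 = main diagonal, row 2 = sub-diagonal
    ab = np.zeros((3, d))
    ab[0, 1:]  = -ci * upper
    ab[1, :]   = 1.0 - ci * main
    ab[2, :-1] = -ci * lower
    approx += bi * solve_banded((1, 1), ab, v)   # each O(d) solve could run on its own processor

exact = expm_multiply(B, v)
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```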
Illustrative examples: Exponential and phi-functions
We illustrate how to build and optimise some approximations of the matrix exponential and the $\varphi_1$-function, taking into account the error analysis, when only a small number of processors is available and relatively low order methods are considered. For simplicity in the presentation we will only consider forward error bounds.
4th-order approximations
One of the best 4th-order methods to approximate the matrix exponential is the $r_{2,2}(x)$ Padé approximation, which in a serial algorithm needs one product and one inverse, and similarly for $\tilde r_{2,2}(x)$ to approximate the $\varphi_1$-function. A forward error bound is given by
$\epsilon_{4,pad}(x) = \sum_{k\geq 5} \Big| \tfrac{1}{k!} - d_k \Big| x^k, \qquad \tilde\epsilon_{4,pad}(x) = \sum_{k\geq 5} \Big| \tfrac{1}{(k+1)!} - \tilde d_k \Big| x^k.$
Here, $d_k$ and $\tilde d_k$ are the coefficients of the Taylor expansions of $r_{2,2}(x)$ and $\tilde r_{2,2}(x)$, respectively. Let us now consider, for simplicity in the search of an optimized parallel method, a choice of the coefficients $c_i$ for a scheme using four processors: we take $c_1 = 0$ and let the remaining coefficients $c_2, \dots, c_5$ depend on a single free parameter, $\alpha$; the coefficients $b_i$ are then trivially obtained from (4). The proposed method to be used with four processors is thus written in terms of one free parameter, $\alpha$. The forward error bound is given by
$\epsilon_4(x) = \sum_{k\geq 5} \Big| \tfrac{1}{k!} - d_k(\alpha) \Big| x^k,$
and analogously $\tilde\epsilon_4(x)$ for the $\varphi_1$-function. Here, $d_k(\alpha) = \sum_{i=2}^{5} b_i c_i^k$ and $\tilde d_k(\alpha) = \sum_{i=2}^{5} \tilde b_i c_i^k$ are the coefficients of the Taylor expansions of $r_4^{(4)}(x)$ and $\tilde r_4^{(4)}(x)$, respectively, which depend on $\alpha$. A simple search shows that the optimal solution which minimises $\epsilon_4(x)$ for most values of $x$ about the origin occurs (approximately) for $\alpha = 5$, which fixes the corresponding coefficients. For this choice of $\alpha$ we can still get an approximation for the $\varphi_1$ function that is nearly as accurate as the previous Padé approximation. The optimal choice for this function is, however, (approximately) $\alpha = 6$, which gives more accurate results than the Padé scheme. In Fig. 1, left panel, we show the values of the different error bounds, $\epsilon_{4,pad}(x)$, $\tilde\epsilon_{4,pad}(x)$, $\epsilon_4(x)$ and $\tilde\epsilon_4(x)$, versus $x$. The thick lines correspond to the 4th-order approximations to the exponential function and the thin lines correspond to the $\varphi_1$-function, when using Padé approximations (dashed lines) and the fractional approximations (solid lines). Then, given a tolerance, tol, one can easily find the value $\theta$ such that $\epsilon(x) < tol$ for $x < \theta$, i.e. $\epsilon(\|B\|) < tol$ for $\|B\| < \theta$, where $\epsilon$ denotes the desired error bound. It is clear that the new methods are more accurate for all values of $\|B\|$ and are also faster to compute when done in parallel, and we remark that the new methods are not fully optimized.
8th-order approximations
In a serial computer, the 8th-order Taylor approximation $T_8$ can be computed with only 3 matrix-matrix products following [8], which is the most efficient scheme for this order of accuracy. Although the coefficients $x_i$, $y_i$ appearing in that scheme are not rational numbers, one can check that $T_8$ is exactly the Taylor expansion to order 8, and a similar method can be obtained for the $\varphi_1$-function, which we do not consider for simplicity. The forward error bound, $\epsilon_{T_8}(x)$, is obtained in the same way as above. Let us take, for example, an 8th-order fractional approximation $r_8^{(8)}(x)$ to be used with eight processors and written in terms of one free parameter, $\alpha$. The optimal value for this choice of coefficients $c_i$, which minimizes the forward error bound $\epsilon_8(x)$, corresponds (approximately) to $\alpha = 5$. In the right panel of Fig. 1 we show the values of $\epsilon_{T_8}(x)$ (dashed line) and $\epsilon_8(x)$ for $\alpha = 5$ (solid line) for different values of $x$. We observe that this method, with such a simple optimization search, already provides more accurate results than the Taylor approximation while being up to slightly more than twice as fast. A small sketch of the kind of parameter scan used here is given below.
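The one-parameter searches described above are straightforward to reproduce in outline. The sketch below is hedged: the actual parametrisation of the coefficients $c_i(\alpha)$ is not reproduced in the text, so a hypothetical family $c_1 = 0$, $c_i = 1/(\alpha + i)$ is used purely to illustrate the procedure of solving (4) and scanning $\alpha$ against a truncated forward error bound; the optimal $\alpha$ it returns need not match the values quoted above.

```python
import numpy as np
from math import factorial

def error_bound(alpha, s=4, x=1.0, kmax=30):
    # Hypothetical parametrisation of the coefficients: c_1 = 0, c_i = 1/(alpha + i)
    c = np.array([0.0] + [1.0 / (alpha + i) for i in range(2, s + 2)])
    V = np.vander(c, N=s + 1, increasing=True).T            # V[k, i] = c_i^k
    a = np.array([1.0 / factorial(k) for k in range(s + 1)])
    b = np.linalg.solve(V, a)                                # system (4) for the exponential
    # Truncated forward bound: sum_{k>s} |a_k - sum_i b_i c_i^k| x^k
    return sum(abs(1.0 / factorial(k) - np.dot(b, c**k)) * x**k for k in range(s + 1, kmax))

alphas = np.linspace(1.0, 10.0, 91)
bounds = [error_bound(al) for al in alphas]
best = int(np.argmin(bounds))
print(f"best alpha ~ {alphas[best]:.1f}, bound = {bounds[best]:.2e}")
```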
High order approximations From the error bounds analysis we can deduce that when high accuracy is desired, it is usually more efficient to consider higher order approximations rather than taking a larger value of N in the approximation B = A/N and next to consider the recurrence like the squaring. This usually occurs up to a relatively high order. However, in the construction of different methods we have noticed that in the optimization process for the fractional approximation, better error bounds are obtained when small values of the coefficients c i are taken and this usually leads to large values for the coefficients b i whose absolute values typically grow with the order of accuracy (the coefficients b i of our 8th-order approximation are considerably larger in absolute value than the corresponding coefficients for the 4th-order method). Then, if nearly round off accuracy is desired, e.g. double precision, the new schemes could not be well conditioned and can suffer from large round off errors. This problem can be partially reduced in different ways. If the computation of (I − c i B) −1 I has similar cost as (I − c i B) −1 B we can take into account that where we have considered that P i=1 b i = a 0 . This decomposition usually has smaller round-off errors for relatively small values of |c i x| which is usually the case when close to round off errors are desired. Alternatively, if there is no restriction on the number of processors, one can consider in (3) P > s + 1 allowing for P − s − 1 free coefficients b i that can be chosen jointly with appropriate values for the c i 's to reduce round off errors. We can end up with an optimisation problem with many free parameters (the coefficients c i in addition to P − s − 1 coefficients b i ) and we leave this as an interesting open problem to be analysed for different functions of matrices, number of processors used and choices of the order of accuracy. In [29] the authors show how to optimise the search of the coefficients in some approximations of functions of matrices that hopefully could be used for this problem too. There are also hybrid methods that could be used to reduce round off errors. For example, we can consider which can be considered as a generalisation of the fractional decomposition of r n+2,n rational Padé approximations. Here, d 0 + d 1 x + d 2 x 2 can be computed in one processor (at the cost of one matrix-matrix product, that is slightly cheaper than the inverse of a matrix) and the partial fractions are computed in the remaining P − 1 processors. The coefficients b i must satisfy (for P ≥ 2) which allows to get methods up to order s = P + 1 with P processors and, hopefully, smaller coefficients b i in absolute value. It is then expected that a more careful search will lead to new schemes with considerably reduced round off errors. We illustrate this procedure in the search of some 10th-order methods for the matrix exponential. 10th-order approximations to the exponential The 10th-order diagonal Padé approximation which is given by [22] and is used as one of the methods implemented in the function expm of Matlab is: ). It requires 3 products and one inverse (approximately three times more expensive than the inverse to be evaluated by each processor on a parallel method). Note that the Padé approximation for the ϕ 1 -function has not the same symmetry for the numerator and denominator and has to be computed with four products and one inverse, i.e. four times more expensive that one inverse. 
An approximation by using the proposed fractional methods to order ten requires 11 processors (10 processors if one takes, e.g. c 1 = 0 or 9 processors if one considers (13)). Suppose we have two extra free coefficients b i in the composition (13) (11 processors in total) to have some freedom to improve the accuracy of the method (this is just a very simple illustrative example that has not been exhaustively optimised) r (11) where, given some values for the coefficients c i we take as free parameters b 9 , b 10 and solve the following linear systems of equations for the remaining coefficients: We have taken being the remaining coefficients This method has an error bound smaller than the error bound provided by the Padé approximation by more than one order of magnitude while being up to three times cheaper in a parallel computer. However, note that max{|b i |, |d i |} ≃ 4.9 · 10 6 so, one can expect large cancellations leading to large round off errors. This problem can be partially solved by choosing different values for the coefficients c i . Note that we have much freedom in this choice. For example, if we take the following values: Numerical examples Given a matrix, A, to compute a function, f (A), most algorithms first evaluate a bound to a norm, say A or A k 1/k for some values of k and then, according to its value and a tabulated set of values obtained from the error bound analysis, it is chosen the method to be used that gives a result with (hopefully) an error below the desired tolerance. To test the algorithms on particular problems allows us to check if the error bounds are sufficiently sharp as well as to observe how big the undesirable round off errors are. In the numerical experiments we only compute approximations to e hA or its action on vectors for different values of h using the following methods • r 2,2 : the 4th-order Padé approximation (5). Cost: one product and one inverse. • T 5 , T 10 : the 5th-and 10th-order Taylor approximations used for the action of the exponential on a vector. Cost: five and ten vector-matrix products, respectively. • R 5 : the 5th-order rational approximation, r 5 (x), with coefficients given in (22), designed to be used with five processors. • R 8 : the 8th-order rational approximation with coefficients given in (11) for α = 5, designed to be used with eight processors. As a first numerical test we take A = randn (100) i.e. a 100×100 matrix whose elements are normally distributed random numbers and we compute e hA for different values of h. We have done this numerical experiment repeatedly many times with very similar results. Figure 2 shows in the left panel the two-norm error (the exact solution is computed numerically with sufficiently high accuracy) versus h A for these methods. Dashed lines correspond to the results obtained with the diagonal Padé and Taylor methods and solid lines are obtained with the new rational methods. We observe that the new methods have in all cases similar or higher accuracy than the methods of the same order and built to be computed in a sequential algorithm. There is only a minor drawback in the 8th-order rational method when high accuracy is desired due to round off errors. Note that the computational cost is not taken into account in the figures since this depends on how efficiently is implemented the rational method in a parallel computer, where the new methods can be up to several times faster to compute than the methods designed for serial algorithms. 
We have repeated the numerical experiments with another 100 × 100 symmetric dense matrix with elements and the results are shown in the right panel in Figure 2. We observe that for this problem round off errors are similar for both 8th-order methods. We have repeated these numerical experiments for different dimensions of the matrices and other matrices with different structures and the results are, in general, qualitatively similar. Notice the results are in agreement with the error bounds shown in Figure 1. We have repeated the numerical experiments using the 10th-order methods. Figure 3 shows the results obtained in a double logarithmic scale: r 5,5 (dashed lines), R * 10 (dotted lines) and R 10 (solid lines). We observe the high accuracy of the method R * 10 , but with large round off errors. The scheme R 10 shows slightly less accurate results but the round off errors are reduced about three orders of magnitude. Notice that the methods R * 10 and R 10 can be up to three times faster than r 5,5 and this is not shown in the figure. As an illustrative example of the action of a function of a large and sparse matrix on a vector we consider the action of the exponential on a vector for the the tridiagonal matrix, We measured the two-norm error of the action of this exponential on a vector, e hA v, for different values of h and where v is a unitary vector with random components, i.e. w = randn(d, 1), v = w/ w for the 10th-order method, R 10 and we compare with the Taylor expansion truncated to order 10. As previously mentioned, each vector-matrix product involves 5d flops while each system requires only 8d flops, so the Taylor method requires 50d flops per step. To reduce round off errors (by nearly two orders of magnitude) one can use the trick given in (12) which requires one extra product, i.e. 13d flops per step and processor (approximately 4 times faster). The scheme reads r (11) where we have considered that d 0 + b 1 + b 2 + . . . + b 10 = a 0 = 1 so each processor that evaluates each fraction has to compute which requires one product, Av, and solving a tridiagonal system (13d flops). This method corresponds to R 10 but we denote it as R ′ 10 when the method is computed following this sequence. Figure 4 shows the results obtained in a double logarithmic scale: dashed line correspond to the 10th-order Taylor method, T 10 , dotted line to the method R 10 and the solid line R ′ 10 . Notice that R 10 and R ′ 10 are the same method but computed differently which affects both to the round off error as well as to the computational cost. We also remark that R 10 can be up to more than five times faster than T 10 and R ′ 10 up to about four times faster. Exponential of tridiagonal (and banded) matrices acting on vectors Let us now consider the particular case (but of great practical interest) of computing the exponential of a tridiagonal matrix acting on a vector, e A v, with single precision accuracy and where we assume that, say A ≤ 1.5, as it is usually the case when, for example, exponential integrators are used to solve differential equations. The same algorithms with minor changes in the computational cost of the methods apply to the computation of the exponential of pentadiagonal or banded matrices acting on vectors. One of the most used schemes to compute the action of the matrix exponential is proposed in [2]. The algorithm works as follows: To compute e A v, a set of Taylor polynomials of degree m = 5k, k = 1, 2, . . . , 11 are considered. 
Given a tolerance (single or double precision), an estimation to the norm A Table 1: Values of θ m obtained from the forward error analysis for the Taylor approximation t m of order m for single precision. The values for the backward error analysis used in the algorithm proposed in [2] are quite close but slightly shifted, but this also occurs for the rational methods. (frequently A 1 is used), and from the error analysis, it is chosen the lowest degree Taylor polynomial among the list such that the desired accuracy is guaranteed. In [2], the choice of the method is done from the results obtained by the backward error analysis. Very similar results are obtained with the forward error analysis, which is simpler and for the convenience of the reader we will consider it. In Table 1 We build a similar scheme in order to analyse the interest of the new algorithms versus the State of the Art algorithm. We then need a 5th-order approximation that, without an exhaustive analysis, we take as the following one: If we take c 1 = 0, c i = 1/(i + 1), i = 2, . . . , 6 then the system (4) has the solution The forward error analysis tells us that the method provides an error below the tolerance,u ≤ 2 −24 , for A ≤ θ 5 = 0.298, as indicated in Table 2, being significantly greater than with the Taylor method. The computational cost to evaluate r (5) 5 (A)v corresponds to solve five tridiagonal systems. If we consider that each system is solved with 8d flows, this will correspond to the cost of 8/5 times the cost of the products Av. If the method is computed sequentially, we observe that the cost would be equivalent to 8 matrix-vector products, and these numbers correspond to the interval ( 8 5 , 8) shown in Table 2 as a measure of the cost in comparison with the Taylor method. For the 10th-order method we take the scheme (15)-(17) (the scheme with coefficients given in (18)- (19) have smaller round off errors but on the other hand it is slightly less accurate and and it has a smaller value of θ 10 ). The cost and value of θ 10 for this method are collected in Table 2. The results from Tables 1 and 2 are illustrated in Figure 5 where we plot the computational cost of each algorithm, measured in terms of the cost of a matrix-vector product, for different values of A in the interval of interest. Table 2: Values of θ m obtained from the forward error analysis for the fractional approximations r (5) 5 and r (11) 10 of order 5 and 10, respectively, for single precision. The cost m * is measured as an interval in terms of the cost of the product Av to makes easier the comparison with the Taylor methods. The lower limit of the interval corresponds to the ideal case where all calculations are carried in parallel and there is no cost to communicate between processors, and the upper limit corresponds to the same scheme fully computed as a serial method. For the fractional methods (solid lines) the cost is measured both considering they are computed as a sequential scheme as well as if they were computed in an ideal parallel computer (the cost for one single processor). The Taylor methods correspond to the dashed line. We observe that the new methods are competitive even in the worst scenario and much better once some cost is saved due to the parallel programing. The excellent results obtained motivated us to make deeper analysis for this problem in order to find an optimised set of methods for different orders of accuracy similarly to the actual sequential scheme and to test them in multiprocessors workstations. 
Thsi work will be carried out in the future. We conclude this section with some remarks: • If one is interested to compute the exponential e A acting on several vectors (or the same exponential is applied on several steps for a given integrator) then the LU factorization requires to be done only once making the new algorithms cheaper and competitive even when used as sequential algorithms. • If the matrix A is pentadiagonal and we take into account that the vectormatrix product, Av, can be done with 9d flops and that the pentadiagonal system (I − c i A)x = v can be solved with only 15d flops using an LU factorization then, the same conclusions remain approximately valid for this problem because the relative cost is very similar to the case of tridiagonal matrices. • If the matrix A is banded and the linear system (I − c i A)x = v can be solved accurately with few iterations using, for example, a conjugate gradient method with an incomplete LU factorization as a preconditioner, then the previous results provide a good picture of the performance of the new methods for banded matrices. • The computational cost from the linear combination of vectors should also to be considered in the simple case of the exponential of tridiagonal matrices, making all methods slightly more costly. We have not included this extra cost to provide some results which can also be applied to pentadiagonal or banded matrices where this extra cost is marginal. Figure 5: Computational cost to evaluate e A v with A a tridiagonal matrix (measured as times the cost of one matrix-vector product) versus A for the algorithm using Taylor approximations of degree 5, 10 and 15 as given in [2] (dashed line) and the new rational approximations R 5 and R 10 of order 5 and 10, respectively. The upper solid line corresponds to the worst case in which the cost is measured as if the methods are computed sequentially while the lower one corresponds to the ideal case in which they are computed in parallel with no cost in the communication between processors. Conclusions In this work we have presented a new procedure to compute functions of matrices as well as their action on vectors designed for parallel programming that can be significantly more efficient than existing sequential methods. Given a dense matrix, A, the computation of (I − c i A) −1 , for sufficiently small constant c i , can be evaluated at the cost of 4/3 times the cost of a matrix-matrix product and can formally be written as a series expansion that contains all powers of A. Then, a proper linear combination of this matrix evaluated in parallel for different values of c i can allow to approximate any function inside the radius of convergence. If the computation can be carried in parallel with a reduced cost in the communication between processors then the new methods can be up to several times faster than conventional serial algorithms. For large dimensional problems in which one is only interested in the matrix function acting on a vector, the performance of the new methods depend on the existence of a fast algorithm to solve the linear system of equations, (I − c i A)v i = v. We have illustrated with some examples that to construct new methods as well as to carry an error analysis is quite simple. The preliminary results are very much promising and it deserves to be further investigated to get optimal methods for different classes of problems and number of processors available in order to get accurate and stable solutions with small round off errors.
Deleterious alleles in the context of domestication, inbreeding, and selection Abstract Each individual has a certain number of harmful mutations in its genome. These mutations can lower the fitness of the individual carrying them, dependent on their dominance and selection coefficient. Effective population size, selection, and admixture are known to affect the occurrence of such mutations in a population. The relative roles of demography and selection are a key in understanding the process of adaptation. These are factors that are potentially influenced and confounded in domestic animals. Here, we hypothesize that the series of events of bottlenecks, introgression, and strong artificial selection associated with domestication increased mutational load in domestic species. Yet, mutational load is hard to quantify, so there are very few studies available revealing the relevance of evolutionary processes. The precise role of artificial selection, bottlenecks, and introgression in further increasing the load of deleterious variants in animals in breeding and conservation programmes remains unclear. In this paper, we review the effects of domestication and selection on mutational load in domestic species. Moreover, we test some hypotheses on higher mutational load due to domestication and selective sweeps using sequence data from commercial pig and chicken lines. Overall, we argue that domestication by itself is not a prerequisite for genetic erosion, indicating that fitness potential does not need to decline. Rather, mutational load in domestic species can be influenced by many factors, but consistent or strong trends are not yet clear. However, methods emerging from molecular genetics allow discrimination of hypotheses about the determinants of mutational load, such as effective population size, inbreeding, and selection, in domestic systems. These findings make us rethink the effect of our current breeding schemes on fitness of populations. | G ENE TI C LOAD AND INB REED ING Each genome carries deleterious mutations that can potentially affect fitness and health. According to population genetics theory, this mutational load depends on multiple factors such as mutation rate, demographic history, and selection. Most deleterious mutations are (at least partly) recessive, implying that their harmful nature will only be exposed in homozygous state. Since strongly detrimental variants are at low frequency (Mukai, Chigusa, Crow, & Mettler, 1972), homozygosity of these variants is most likely to occur due to inbreeding. Inbreeding is the inheritance of identical copies of genetic material from related parents and causes long homozygous regions in the genome of the offspring (ROH: Runs Of Homozygosity, see Figure 1; Curik, Ferencakovic, & Solkner, 2014). The potential negative impact that inbreeding will have on health and reproduction compared to an outbred population is referred to as "genetic load" (Crow, 1970a,b), which is mainly caused by the expression of recessive homozygous harmful mutations (Garcia-Dorado, 2003;Lynch, Conery, & Burger, 1995). | Inbreeding depression Small populations are more likely to suffer from inbreeding, which can negatively impact health and reproduction (Lynch et al., 1995). The decline in fitness observed in inbred progeny, relative to outbred progeny, is known as "inbreeding depression" (Keller & Waller, 2002). 
Inbreeding depression has largely been attributed to the accumulation of recessive harmful mutations in the genome: inbreeding increases the probability that these mutations become homozygous (i.e., Agrawal & Whitlock, 2012; Charlesworth & Willis, 2009; Ohta, 1973). Apart from increased homozygosity of recessives, the overdominance hypothesis assumes a heterozygote advantage, and therefore the overall loss of diversity due to inbreeding reduces the advantage of the heterozygotes (Crow, 1970a,b). In this paper, we focus on the detrimental effect of mostly recessive deleterious alleles, and on their combined effect, referred to as the mutational load. With advancing sequencing technologies, the mutational load in the genome of an individual can be estimated from sequence data with increasing accuracy. We should bear in mind that the number of deleterious mutations does not necessarily need to differ between an inbred and an outbred individual for differences in fitness to exist; the key concept here is that while harmful mutations generally have a small fitness effect in heterozygous state, in homozygous state they are expressed, causing, for instance, heritable diseases (Figure 1). Since ROH are formed because both haplotypes are identical by descent (IBD), the probability that a recessive deleterious allele with a low frequency p becomes homozygous is higher in such regions than outside IBD regions, where the expected frequency is p² (Szpiech et al., 2013).
FIGURE 1 Genomic consequences of inbreeding. When parents are related, two identical haplotypes can be passed on to their offspring that are identical by descent. Therefore, no genetic variation exists between the two inherited copies in the inbred offspring. Such homozygous stretches (ROH: Runs Of Homozygosity) can be seen as long homozygous regions without polymorphisms in individual genomes. In ROH, harmful mutations have a higher chance of becoming homozygous and being expressed because the haplotypes carrying the harmful mutation stem from a common ancestor. [Figure panels: Mother, Father, Inbred offspring; tracks show heterozygosity and homozygous (ROH) segments along the chromosome.]
| Identification of deleterious alleles
Alleles with a putative effect on the phenotype, both beneficial and deleterious, are thought to be younger than variants with no effect at the same allele frequency (Maruyama, 1974). Understanding the factors that cause harmful mutations to increase in frequency in a genome will facilitate prediction of genetic load in current populations, but will also help to avoid a high genetic load in the future (Garcia-Dorado, 2012). A popular approach to identify harmful mutations that are (almost) lethal is to screen populations for the absence of specific variants in homozygous state. If populations contain heterozygous carriers of such variants, homozygotes would be expected based on allele frequencies and carrier-carrier matings (Derks et al., 2017; Pausch et al., 2015; VanRaden, Olson, Null, & Hutchison, 2011). This method relies on the biological implications of the lethality of a variant, resulting in an absence of the allele in homozygous state. Depending on the frequency of the lethal allele in the population, a large sample size is often required to identify such alleles (Derks et al., 2017). Therefore, most of the studies implementing this depletion of homozygotes approach use large panels of genotyped animals. An alternative approach is to predict deleteriousness from the functionality of a mutation.
Next-generation sequencing has opened up exciting possibilities to actually pinpoint potentially harmful mutations in individual genomes (Henn, Botigue, Bustamante, Clark, & Gravel, 2015;Li et al., 2010). The deleteriousness of a variant can be predicted based on its effect on gene functioning (such as protein changing, stop-gain, stop-lost), for example by assessing the degree of conservation of an amino acid residue across species Kumar, Henikoff, & Ng, 2009;Wang, Li, & Hakonarson, 2010). The implementation of multiple genome annotations, such as specific gene function or regulatory elements, has proven successful for in silico predictions of the effect of disease causing mutations in humans (Kircher, Witten, Jain, B. J. O'Roak, & Shendure, 2014). These techniques have recently been applied to estimate genetic load in human populations and domesticated species (Charlier et al., 2016). However, we need to remain cautious with such assessments of deleteriousness, since they rely on predictions that mostly have not been validated with experiments. High-impact variation may very well not be lethal in a domestic setting and could even be perceived beneficial if they serve a particular breeding goal. Nevertheless, such advances in predicting the effect of variants have high potential for bridging the gap between sequence information and fitness effects. | Application to domestic species A combination of both approaches to detect deleterious variants has proven successful in cattle (Pausch et al., 2015). Recent findings based on predicted deleterious variants from sequence data corroborate the classical theory by Mukai et al. (1972) that deleterious alleles are generally at low frequency (e.g., Mezmouk & Ross-Ibarra, 2014). Multiple factors contribute to inbreeding depression, and epistatic effects across loci should be considered as well. Nevertheless, quantifying harmful mutations in single genomes is an important first step toward the "genomic characterization of genetic load." An increase in studies assess mutational load in sequence data from domestic species. Some general trends about the effects of domestication on the burden of harmful mutations are now emerging (Makino et al., 2018;Zhou, Massonnet, Sanjak, Cantu, & Gaut, 2017). Here, we review recent studies on mutational load in domestic species and use re-sequence data from pig and chicken to demonstrate how specific hypotheses about the burden of deleterious variants can be tested. Specifically, we will discuss two major drivers of mutational load in genomes: bottlenecks and selection. | THE DOME S TI C ATI ON BOT TLENECK Domestication of plants and animals has a major impact on the domesticated species in terms of effective population size and selection pressure. This in turn could negatively affect the mutational load and cause genetic erosion (diminishing gene pool). In the context of inbreeding, lethal variants will quickly be purged from small populations (Charlier et al., 2016), but the frequency of slightly deleterious mutations is expected to rise as natural selection is less effective (Kimura, 1963). Past population bottlenecks have been proposed to drive mutational load in human populations Lynch et al., 2016), but the nature and strength of the impact is still debated (Henn et al., 2015;Lohmueller, 2014;Simons & Sella, 2016;Simons, Turchin, Pritchard, & Sella, 2014). | Domestication increases load Several studies report the effect of domestication on mutational load. 
In line with predictions, the bottlenecks associated with domestication have not only reduced the genetic diversity of domestic species, but also increased their mutational load. Domestication bottlenecks have indeed been suggested to have substantially increased the mutational load in dogs, referred to as the "legacy of domestication" (Cruz, Vila, & Webster, 2008). In dog genomes, the ratio of amino acid-changing heterozygosity to silent heterozygosity (variants presumed to have no effect) was higher than in their wild ancestors, gray wolves (Marsden et al., 2016). These results indicate that the ability of purifying selection to remove weakly deleterious variants is lowered by the bottlenecks. A similar phenomenon has been reported for domestic horse genomes, where an excess of deleterious mutations is thought to be caused by domestication and inbreeding (Schubert et al., 2014). Also in crops, deleterious mutations seem to have accumulated in domestic lineages (Kono et al., 2016; Lu et al., 2006). | Type of bottleneck is important The time frame in which the bottleneck occurs has a strong effect on the potential of natural selection to eliminate harmful mutations. If the bottleneck is severe and sudden, (local) genomic recombination is scarce and drift can exert a maximal effect. If, however, the same population declines over a long period of time, purging might enable deleterious variants to be removed from the population, resulting in a lower load (Charlier et al., 2016; Hedrick & Garcia-Dorado, 2016). This begs the question of whether it was domestication itself that increased mutational load, or rather one of the processes that co-occurred with it. The strong reduction in effective population size, going from wild to domesticated, could have driven the deleterious alleles to higher frequency (Liu, Zhou, Morrell, & Gaut, 2017). Artificial selection could have reduced the effective population size even further. Most domestic animal species are thought to have experienced a relatively strong population bottleneck, although the picture can become complex for species that went through multiple domestications in different regions. In livestock, domestication is no longer seen as a single, discrete event. Rather, substantial and continuous gene flow from wild populations has occurred during the process of domestication (Frantz et al., 2015; Scheu et al., 2015). In rice, Liu and colleagues suggest that it was not domestication itself, but the shift in mating system from outcrossing to predominantly selfing that had a substantial influence on mutational load (Liu et al., 2017). This suggests that the process of domestication as well as the management regime under which current lines have been formed could have influenced the occurrence of deleterious mutations in domestics. The intensity of the domestication bottleneck is thought to have influenced the difference in mutational load between annual and perennial crops. A recent analysis on multiple domesticated species concludes that in domestic plant and animal genomes, an elevated proportion of deleterious genetic variation is present, with European pigs as an exception (Makino et al., 2018). This creates an opportunity to investigate further how domestication has elevated mutational load. | Case study in pig and chicken: application to sequence data We used genotype and re-sequence data from pigs and chicken to investigate the mutational load in wild and domestic populations, as described in supplementary materials.
These species represent highly different domestication histories and selection regimes. We compared the ratio of predicted deleterious heterozygosity with silent heterozygosity within individual genomes to estimate mutational load in domestic and (semi) wild pigs and chickens, using the method described by Renaut and Rieseberg (2015), originally applied to sunflowers. Pigs were domesticated at least twice, independently, giving rise to the current Asian and European-based domestic clades (Kijas & Andersson, 2001; Larson et al., 2005). European and Asian wild boar form an excellent model for their wild ancestors, enabling direct comparisons between the wild and domestic form. The use of pigs from two different geographic regions (Asia and Europe) and subjected to different domestication events (wild and domestic, including local and commercial populations) enables the study of the impact of demography and domestication on the distribution of deleterious mutations. In chicken, however, the wide variety of domestic breeds is thought to stem from red jungle fowl (Fumihito et al., 1996) and involves a more complex demographic history, including multiple regional centers of domestication across Asia (Miao et al., 2013). | Case study: increased load is context-specific The estimated mutational load in commercial chicken lines is higher than in African village chicken, both from estimates based on heterozygous variants and from estimates based on homozygous variants (Figure 2b, Supporting Information Figure S1). An elevated mutational load in commercial chicken is corroborated by the analysis of pooled chicken data in Makino et al. (2018). The estimated mutational load was generally higher in European pig genomes than in Asian pigs (Figure 2a). So far, these analyses support the general view that domestication coincides with a population bottleneck that underlies the increase in load in the domestic form, which was also found by Makino et al. (2018).
FIGURE 2 Mutational load in Sus scrofa and Gallus gallus. Mutational load in individual genomes, calculated as the ratio of predicted deleterious heterozygous sites over synonymous heterozygous sites. (a) Pigs. ED = European domestic; EW = European wild; AD = Asian domestic; AWN = North Asian wild; AWS = South Asian wild. (b) Chicken. AF = African village chicken, putatively evolutionarily closer to red jungle fowl; WL = commercial white layer lines.
Studies in human populations also indicate that individuals from populations that have experienced bottlenecks tend to carry more deleterious alleles (Fu, Gittelman, Bamshad, & Akey, 2014; Lohmueller et al., 2008). This is in agreement with the pattern of deleterious mutation observed in pigs, where European populations had a higher proportion of deleterious variants. Rather than being an effect of domestication, this seems to be related to the demography of the ancestral, wild boar population. In European wild boar, population bottlenecks during the Last Glacial Maximum were more severe than in Asia, which largely explains the lower genetic diversity of European wild boar. Thus, the higher number of deleterious variants observed in European pig populations, especially in wild boar, reflects the typically negative correlation between genetic diversity and the incidence of deleterious mutations.
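As a rough illustration of the load statistic used in Figure 2, the sketch below computes the per-individual ratio of predicted deleterious heterozygous sites over synonymous heterozygous sites; the counts and sample labels are illustrative assumptions, not the actual data.

```python
# Minimal sketch: per-genome mutational load as the ratio of predicted
# deleterious heterozygous sites over synonymous heterozygous sites.
# Counts below are invented for illustration only.
individuals = {
    "ED_01": {"deleterious_het": 412, "synonymous_het": 15230},
    "EW_01": {"deleterious_het": 455, "synonymous_het": 14100},
    "AD_01": {"deleterious_het": 380, "synonymous_het": 16900},
}

def load_ratio(counts):
    """Ratio of deleterious to synonymous heterozygous sites in one genome."""
    return counts["deleterious_het"] / counts["synonymous_het"]

for name, counts in sorted(individuals.items()):
    print(f"{name}: load = {load_ratio(counts):.4f}")
```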
By contrast, the European commercial pig genomes contained a lower ratio of deleterious over silent heterozygosity than European wild boar. The absolute number of homozygous deleterious variants inferred from the genomes of the different pig groups did not differ much, but the ratio of homozygous deleterious over homozygous neutral variants was higher in European pigs and wild boar compared to Asian pigs (Supporting Information Figure S1). Variants with a strong effect are thought to be younger than variants with no effect at the same allele frequency. Therefore, the majority of deleterious variants are thought to be derived alleles, as was also found by Makino et al. (2018). When comparing breeds or populations with a relatively deep phylogenetic split, polarizing alleles using an outgroup is recommended, since the distance to the reference genome could introduce a bias (Lohmueller, 2014; Makino et al., 2018; Simons & Sella, 2016). | Consequences of increased load Whether the high estimated mutational load in domestic genomes will lead to lowered viability is questionable. Many commercial lines are bred especially for reproduction-related traits or through the production of hybrids, which will dilute the effect of mutational load. Ongoing work has shown that life history traits such as lifespan or propagule size are key to understanding levels of genetic diversity and mutational load (Romiguier et al., 2014). So far, results on mutational load estimated from the ratio of synonymous to nonsynonymous mutations fairly consistently indicate that purifying selection is stronger in the wild (Chen, Glemin, & Lascoux, 2017). The sources of genetic variation that create the genetic diversity on which selection can act are the same for both domestic and wild animals. However, in most circumstances, domestic animals have been so strongly selected for the sake of production that their genetic diversity is extremely low. Moreover, artificial selection for specific traits sometimes results in so much inbreeding that it threatens the survival of the breed. Clear examples can be seen in dogs such as the Norwegian Lundehund suffering from intestinal problems, but cancer, eye, and heart diseases are also common (Kettunen, Daverdin, Helfjord, & Berg, 2017; Schoenebeck & Ostrander, 2014). Combined, these effects of inbreeding in domestics have led to animals and plants that may have lost their ability to face environmental challenges, whether these come from climate change or new diseases. Finally, the circumstances in which domestic animals are kept do not reflect the (harsh) environment they face in the wild. The loss of adaptive potential and disease resilience and the reduction in fitness are of major concern for monocultures such as bananas and for salmon (Araki, Berejikian, Ford, & Blouin, 2008; Garcia de Leaniz et al., 2007). Together with the fact that the traits selected in domestic animals are usually deleterious in the wild, this makes the detrimental effects of the potentially harmful mutations hard to observe (Hedrick & Garcia-Dorado, 2016). | ARTIFICIAL SELECTION AND MUTATIONAL LOAD The strong artificial selection that is associated with domestic populations can increase inbreeding in commercial lines, as was already mentioned by Lush (1946). Therefore, not only the reduced effective population size but also the selection for favoured gene variants may have increased homozygosity within domestic animals.
In addition to the increase in homozygosity due to drift effects, selection constraints on detrimental variants may be lifted if they are in LD with a favoured allele that is strongly selected for (Maynard Smith & Haigh, 1974). If the combined selection coefficient against those mutations is lower than the selection coefficient of the preferred allele that lies on the same haplotype, the allele frequency of the deleterious variant(s) is expected to rise due to genetic hitchhiking. Therefore, mildly harmful mutations are thought to be over-represented in regions under selection. | Increased load due to selection In humans, deleterious alleles are thought to have increased in frequency due to linkage to sites that have been under positive selection (Chun & Fay, 2011). Indeed, in domestic species, we find similar evidence that selected regions are over-represented with predicted deleterious alleles. In dogs, an increased load was found within regions under selection inferred from the genome (Marsden et al., 2016). A different approach is to assess the mutational load in genes known to affect phenotypic traits of interest. In plants, such genes were shown to contain proportionally more deleterious variants (Kono et al., 2016; Lu et al., 2006; Mezmouk & Ross-Ibarra, 2014). The linkage of deleterious variants to genes under balancing selection could also lead to a local overrepresentation of deleterious variants. A recent study in cattle (Kadri et al., 2014) showed that a long-known antagonism between fertility and milk production is actually due to such a phenomenon. While a major QTL for fertility with effects on milk production is under balancing selection in nature, the shift to directional selection for milk production has led to strongly reduced fertility in cows. Balancing selection in livestock probably plays an underestimated role that remains to be explored. | Case study in pig and chicken: load in runs of homozygosity Together, inbreeding and strong directional selection can increase the proportion of homozygous segments in individual genomes. Following the rationale of Szpiech et al. (2013) that ROH are enriched for deleterious variants (in homozygous state), we tested this hypothesis in pig and chicken lines. In three white layer lines, the number of predicted deleterious homozygous variants was higher in ROH regions compared to the rest of the genome. As a result, ROH contain proportionally more homozygous deleterious alleles than the rest of the genome (Figure 4), in line with findings in humans (Szpiech et al., 2013). Also, in commercial European pigs, ROH regions contained proportionally more deleterious homozygous variants. Interestingly, however, European wild boar did not display this pattern (Figure 5c). | Disentangling selection and drift A higher proportion of deleterious alleles in ROH was also observed in cattle (Zhang, Guldbrandtsen, Bosse, Lund, & Sahana, 2015). In humans, the longest class of ROH was most enriched for deleterious variants, whereas in cattle, shorter ROH contained proportionally more deleterious alleles (Zhang et al., 2015). A possible explanation for these different patterns is that in humans, inbreeding is mostly responsible for the formation of long ROH and hard selective sweeps are rare (Hernandez et al., 2011). By contrast, in cattle, artificial selection intensity is high and sweeps are abundant (Kim et al., 2013). This indicates that the different mechanisms that result in ROH, namely selection and drift, result in different patterns of detrimental alleles.
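The hitch-hiking condition described above can be stated compactly. The notation below (s_b for the selective advantage of the favoured allele and s_i for the selection coefficients against the linked deleterious variants) is our own shorthand rather than notation taken from the cited sources.

```latex
% Hedged shorthand for the hitch-hiking argument: a haplotype carrying a
% favoured allele with advantage s_b and k linked (slightly) deleterious
% variants with selection coefficients s_1,...,s_k rises in frequency
% whenever the net selection on the haplotype is positive.
\[
  s_{\mathrm{net}} \approx s_b - \sum_{i=1}^{k} s_i , \qquad
  s_b > \sum_{i=1}^{k} s_i \;\Longrightarrow\;
  \text{the linked deleterious variants hitch-hike upward in frequency.}
\]
```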
In dog genomes, when regions under selection were excluded, an increase in deleterious alleles compared to wolves could still be observed, suggesting that drift rather than the hitch-hiking effect associated with selection is causing the increased load in dogs (Marsden et al., 2016). Also, the role of recombination in generating genomic diversity patterns and ROH should not be neglected; recombination rate locally influences effective population size in a genome, which in turn affects the efficiency of natural selection (Begun & Aquadro, 1992; Bosse et al., 2012). Finally, the number of generations since the last common ancestor and the speed of inbreeding influence the length of ROH stemming from inbreeding, and the associated mutational load (Hedrick & Garcia-Dorado, 2016). Therefore, minimizing inbreeding in modern breeding practices by specifically avoiding long IBD segments in optimal contribution schemes may avoid an increase in mutational load (de Cara, Villanueva, Toro, & Fernandez, 2013a,b). Distinguishing regions under selection from IBD segments due to inbreeding may shed new light on the assumed detrimental effects of domestication. Specifically targeting genes known to be associated with favorable traits, or regions of reduced heterozygosity on a population scale, can aid in distinguishing ROHs stemming from inbreeding from ROHs stemming from selection.
FIGURE 3 Genetic hitch-hiking of deleterious alleles in selected regions. Under neutrality, deleterious alleles are present in a population at low frequency (indicated in red). If a deleterious allele lies on the same haplotype as a selected variant (indicated in blue) and linkage is strong, and if the selective advantage s of the advantageous allele is stronger than the summed selection coefficient ∑s against all (slightly) deleterious alleles, the harmful mutations will rise in frequency despite their deleterious nature, as demonstrated in the allele frequency spectrum in the population before and after selection.
FIGURE 4 Deleterious alleles within and outside ROH in chicken. Proportion of deleterious homozygous variants within and outside ROH regions in three different white layer lines. Paired t test: Line 1 p = 6.65e-05; Line 2 p = 1.11e-07; Line 3 p = 5.684e-07.
FIGURE 5 Distribution of deleterious alleles within the genomes of commercial pigs and wild boars. (a) The x-axis represents the length of the chromosome, and numbers on the y-axis indicate chromosomes. Blue horizontal bars represent ROH within the genome of an individual, red crosses pinpoint the location of predicted deleterious variants within ROH, and black crosses mark the position of homozygous deleterious alleles outside ROH. (b) Violin plots summarizing the frequency of homozygous deleterious alleles within and outside ROH regions. (c) The proportion of homozygous deleterious alleles compared to neutral homozygous alleles within and outside ROH regions.
| Case study: deleterious allele frequency and ROH As can be seen in Figure 5, predicted deleterious variants in ROH often occur in genomic regions where ROH are shared by multiple individuals. These predicted harmful mutations seem to be at higher frequency than expected if they were homozygous primarily due to inbreeding, suggesting a strong role of hitch-hiking through artificial selection.
Such co-occurrence of ROH in individuals is an indication of selection for a favorable haplotype in a population, suggesting that most deleterious alleles are maintained due to hitch-hiking along with a selected variant at that locus. Indeed, the allele frequency of homozygous deleterious alleles in the population was higher in ROH regions for commercial pigs, whereas in European wild boar, the allele frequency of deleterious homozygous variants was higher outside ROH regions (Figure 5b). A likely explanation for this distinct pattern is that past bottlenecks in European wild boar have resulted in the fixation or elevation of many homozygous deleterious variants, whereas the increase in deleterious homozygous variants in commercial pigs is driven by more recent processes such as artificial selection. | CONCLUSIONS Based on previous work as well as on our own analyses, we conclude that despite the strong artificial selection on commercial breeds, mutational load can be high. The probability that a deleterious allele rises in frequency in a population depends not only on population size (drift effects), but also on its selection coefficient (Whitlock, 2000). An important factor to keep in mind is that if the population is not at mutation-drift equilibrium, such as during rapid population growth, the distribution of deleterious variants is affected and load can increase rapidly (Casals et al., 2013; Gazave, Chang, Clark, & Keinan, 2013). The population bottleneck associated with domestication is thought to have generally been long-term, with a slow increase in inbreeding, whereas breed formation coincides with a quick increase in inbreeding; both are relevant for domesticated breeds (Oldenbroek, 2017). A further issue is that selection acts differently in domestic animals, where the individuals that will leave offspring are chosen according to some trait that is being optimized. By artificially selecting these individuals, natural selection cannot act on the population as it does in the wild, and there is very little room for it to act (de Cara et al., 2013a,b). Moreover, "predicted to be deleterious" can actually signify "beneficial" in an artificial selection context. The prediction of deleteriousness relies on variants that are thought to have a high impact on the phenotype. Therefore, "deleterious" could mean "not generally tolerated in the wild," but may be perfectly viable (or even highly viable and selected for) in a domesticated setting. Estimates of genetic load inferred from genomes are insensitive to the specific environmental circumstances of individuals carrying deleterious alleles, which may influence the impact of the predicted deleterious alleles on the fitness of an individual. We suggest some caution when inferring these general patterns of mutational load, since deleteriousness can be context-dependent (Hedrick & Garcia-Dorado, 2016). | Final remarks Overall, domestication seems to have elevated load in most domestic species. However, domestication does not lead to genomes with high load per se, indicating that fitness does not need to decline. | Sampling Re-sequence data from a total of 76 pigs were retrieved from the European Nucleotide Archive (ENA) accession code ERP001813 and included 43 domestic pigs and 33 wild boars from Asia and Europe.
Pigs were classified into five groups in accordance with their geographic origin and domestication status: Asian domestic pigs (AD; N = 16), North Asian wild boars (AWN; N = 5), South Asian wild boars (AWS; N = 5), European commercial pigs (ED; N = 27), and European wild boars (EW; N = 23). For chicken, re-sequence data from eight African village chicken from different ecotypes in Kenya were obtained from Ngeno (2015). | Deleterious variants and ROH Runs of homozygosity (ROH) were estimated from genotype data from the same individuals, obtained from Yang et al. (2017) and Derks et al. (2017) for pigs and from Derks et al. (2018) for chicken. SNP genotyping was performed on the species-specific Illumina 60K iSelect Beadchip for both species (Ramos et al., 2009). A ROH was defined as a genomic region of at least 1 Mb with at least 20 SNPs supporting the homozygous state, using the --homozyg option in PLINK v1.9. This length reflects roughly 1 cM and consanguinity 50 generations ago (Howrigan, Simonson, & Keller, 2011), which captures sufficiently recent inbreeding for the purpose of this study, yet is feasible with the density of our SNP chips. The numbers of deleterious and tolerated nonreference homozygotes overlapping and not overlapping ROH (Del-ROH and Del-non-ROH, respectively) were counted separately for each individual. Differences between the fractions of deleterious and tolerated variants in ROH per individual were tested for significance using the paired t test as implemented in R. ACKNOWLEDGEMENTS This work is part of the STW-B4F partnership program project number 14283: From sequence to phenotype: detecting deleterious variation by prediction of functionality. This study was financially supported by NWO-TTW and the Breed4Food Partners Cobb Europe, CRV, Hendrix Genetics and Topigs-Norsvin. We thank Hendrix Genetics and Topigs-Norsvin for collaboration and early data availability. We also thank Jack Windig from Wageningen. DATA ARCHIVING STATEMENT Re-sequence data from a total of 76 pigs were retrieved from the European Nucleotide Archive (ENA) accession code ERP001813. Since Ngeno (2015) is a PhD thesis, the relevant vcf files of the village chicken are deposited at https://www.animalgenome.org/repository/pub/WUR2018.0809/. In addition, re-sequence data from one commercial pig line (N = 9) and three chicken lines (N = 51, 66, and 43) were obtained from Derks, Megens et al. (under review) and Derks et al. (2018), respectively. Since Derks, Megens et al. (under review) is not yet accessible, the relevant vcf files are deposited at https://www.animalgenome.org/repository/pub/WUR2018.0809/. Genotype data from the same individuals were obtained from Yang et al. (2017) and Derks et al. (2017) for pigs and from Derks et al. (2018) for chicken. The data obtained from Derks et al. (2018) were restricted but will be made available upon specific request to ensure that repeatability is possible.
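The per-individual comparison of deleterious homozygotes inside and outside ROH was done with a paired t test in R, as stated above. Purely as an illustration of the same computation, a sketch in Python with invented per-individual counts could look like this:

```python
# Illustrative only: the study used PLINK-defined ROH and the paired t test in R.
# Here, per-individual proportions of deleterious homozygotes inside vs. outside
# ROH are compared with a paired t test; all counts are made up for the example.
from scipy.stats import ttest_rel

# (deleterious_in_ROH, total_in_ROH, deleterious_outside_ROH, total_outside_ROH)
counts = [
    (35, 900, 110, 4100),
    (28, 750, 95, 3900),
    (41, 1020, 120, 4300),
    (30, 810, 100, 4000),
]

prop_in = [d / t for d, t, _, _ in counts]
prop_out = [d / t for _, _, d, t in counts]

stat, p_value = ttest_rel(prop_in, prop_out)
print(f"mean proportion in ROH:      {sum(prop_in) / len(prop_in):.4f}")
print(f"mean proportion outside ROH: {sum(prop_out) / len(prop_out):.4f}")
print(f"paired t test: t = {stat:.2f}, p = {p_value:.3g}")
```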
2019-01-22T22:29:33.611Z
2018-09-08T00:00:00.000
{ "year": 2018, "sha1": "9127cfdd06718f423e2d2afce6bd6b89c86168f5", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/eva.12691", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a24edf33f6012feafbceabc3413cfd9d9b188071", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
234012644
pes2o/s2orc
v3-fos-license
Design and Implementation of Electronic Enterprise University Human Resource Management System An Electronic Human Resource Management System (EHRMS) is a paperless system that plays a vital role in facilitating organizational processes, overcoming the obstacles of a paper-based system, reducing cost, time, and effort, enhancing the quality of services (QoS), and providing more accurate data. In addition, it contributes to competitive advantage and eases the decision-making tasks of HR managers. In this paper, an efficient EHRMS is proposed, designed, and implemented. The system is called the Enterprise Electronic University Human Resource Management System (EEUHRMS). The proposed system consists of fourteen modules that provide four groups of services. The first group is related to applicant services: Online Recruitment. The second group is related to staff services: Registration, Acknowledgements/Punishments, Annual Premium, Leaves, Leave Deduction, Archive, Dispatch, Extra Fees, Salary, and Service Summary. The third group is related to institution and presidency services: Post and Statistics. The fourth group is related to university services: Authentication and Statistics. The proposed system is evaluated using the System Usability Scale (SUS), with questionnaires completed by university staff. The evaluation score of the questionnaire was about 85, which is considered a good result. The proposed system is developed using the Laravel framework. Introduction The rapidly changing environment increases the amount of data in large-sized enterprises. It becomes difficult to deal with such a quantity of data related to all staff, and there is an increasing need to store, process, and retrieve these data easily and quickly at any time, saving cost and time with minimum error. For this reason, it is necessary to implement an electronic system capable of solving slow-processing problems, such as Human Resource Management (HRM), to meet the challenges of modern companies [1], [2]. To manage human resources efficiently, it is worthwhile to utilize an electronic system in enterprise processes to meet the requirements of the enterprise [3], [4]. Therefore, companies tend to employ Information Technology (IT) to manage HR rules and actions, that is, a computer-aided HRM that benefits from the strengths of technology [5]. Automation of the HR department takes the form of a Human Resource Information System (HRIS) [6]. The objective of an HRIS is to optimize HRM in firms [7], [8]. HR practices have shifted from HRIS to Electronic Human Resource Management (EHRM); in other words, organizations rely on automated services for employees and managers [6]. For an Enterprise System (ES), HR functions have changed with the advancement of IT, which improves decision making, administrative efficiency, and information sharing [9]. The term EHRM emerged and was first used in the 1990s; it refers to HR transactions over the Internet and other technologies [1]. EHRM is defined and named in various ways, for instance 'electronic HRM', 'online HRM', 'EHRM', 'web-based HRM', 'virtual HRM', 'computer-based HRM', 'digital HRM', 'HRIS', and 'HRIT' [6].
EHRM is described as 'the application of computers and telecommunication devices to collect, store, retrieve, and disseminate [HR] data for business purposes' [10]. The objective of EHRM is to make knowledge available to managers and employees anytime and anywhere [1]. Hence, EHRM is used in different areas. For example, it has a potential role in health care and is frequently utilized in the form of an electronic hospital (e-hospital) [11], [12]. It is also utilized in education, aiming to provide information and services to learners and instructors for learning/teaching purposes (E-Learning) [13]. EHRM systems make HR more strategic, flexible, and cost-effective, enhancing decision-making, reducing the efforts of administrators, speeding up response time, enhancing user services, and increasing productivity [14]. In the informatization era, the construction of an EHRMS has become an inevitable choice for enterprises in order to improve their core competitiveness [15]. An electronic system facilitates the workflow of an enterprise by saving cost and time compared with performing the same processes manually, which leads to a computerized and economical enterprise environment [16]. Nowadays, some countries tend to apply electronic systems in many fields, such as industry, management, education, and the health care sector, aiming to transform to E-Government [11]. The main aim of this paper is to implement an Electronic Enterprise University HRM system (EEUHRMS), which encompasses Employee Recruitment and Staff services (Registration, Dispatch, Extra Fees, Authentication, Salary, Acknowledgements/Punishments, Annual Premium, Post, Leaves, Archive, and Summary Service). Moreover, the proposed EEUHRMS focuses on managing the mechanisms of human resources and operates electronically through the presidency directorates and the related units that belong to the institution. The handling of financial and organizational activities for any private or public university can thus be expedited, and the manual approaches used by staff can be transformed into an automated style. The proposed system is more economical and time-saving and requires minimum effort. Consequently, it may be adopted repeatedly for other Iraqi ESs, and it can be reorganized with respect to the requirements of the selected ES. Finally, the scope of this paper is limited to presenting full analytical considerations related to an electronic Human Resource Management System. Related Works Due to the role and advantages of HRM, numerous studies have recently focused on HR management. Efficient HRM contributes to increasing the productivity of the organization and improving the capacity of staff to respond to the rapid changes of the organization. Hence, the present literature review presents some of the existing electronic human resource management systems. Selvi et al. [17] implemented an HRMS for a research and development centre in the iron and steel sector, intended to forecast managerial requests, support periodic monitoring, and regulate personnel. The aims were to cope with a shrinking workforce, automate the business functions of the personnel and organization section, and provide faster staff services with easy access to staff data. The implemented system comprises five modules: Staff Personal Profile, Confirmation, Leave Management, Leave Encashment, and Tour Management.
Different modules were made available to users depending on their assigned roles and capabilities. The system was deployed on the Apache Tomcat server. Ying et al. [18] designed and applied a new high-performance, flexible, and portable EHRMS framework based on J2EE platform technology. The proposed structure includes seven management modules (Personnel, Organization, Post, Salary, Training, Performance Assessment, and System Task management). The system addresses many practical difficulties that ESs face in improving HRM effectiveness. It is characterized by simplicity and easy implementation, as well as robust characteristics: easy maintenance, easy extension, flexibility, and security. Ouyang and Lu [19] adopted Computer Supported Cooperative Work (CSCW) in the human resources management information system of an institution of higher education to increase the efficiency of information use and enhance cooperation. The proposed system solves existing shortcomings of traditional HRMS, such as the lack of data sharing and exchange and poor use of data. CSCW technology provides a high degree of data sharing among human resources management information systems and other information systems on different operating platforms. It can provide support for collaborative activities in HRM in a computerized environment. Thus, it raises the HRM of higher education to a new standard. Abdullah et al. [20] presented a cloud-based HRM system for Small and Medium Enterprises (SMEs) to solve the problem of enterprises spread over separate locations. The authors focused on the advantages and disadvantages of adopting a cloud-based HRMS within SMEs. The study found that adopting this emerging technology improves HRM, increases HRM flexibility because the data are stored centrally, allows enterprises to expand, supports easy decision-making, and thus enhances the effectiveness of individuals and organizations. The study found that the most important advantage of adopting a cloud-based HRMS is reduced cost, whereas the greatest barrier is security. Methodology and Modules of the Proposed EEUHRMS The proposed EEUHRMS has been designed to provide significant services; its structure and workflow are as follows. New staff of the university are registered in the proposed system, and the Enterprise Admin assigns a role to each newly registered user according to his or her authority. After that, the user can log in to the system. The user is directed to a portal according to the user role; for instance, the staff role is directed to the staff portal, the manager role (Unit Admin) is directed to the management portal, the financial Admin/employee role is directed to the financial portal, the management Admin/employee role is directed to the management portal, and the Enterprise Admin role is directed to the Enterprise Admin portal. Architecture of the proposed EEUHRMS The architecture of EEUHRMS is a DOWN-UP architecture. Figure 2 demonstrates the overall architecture of the system. It consists of three main layers: 1. Presentation Layer: This layer is a front-end layer where the user accesses the web application on the client side using web browsers (Firefox, Internet Explorer, Google Chrome, Opera, etc.). It enables the user to communicate and interact with the logic layer to request the human resource components. Bootstrap is a front-end web development framework that combines CSS and HTML functionality with additional visual effects.
This layer involves the following tools: HTML, CSS, JavaScript (including Ajax and jQuery), and Bootstrap. 2. Logic Layer: The logic layer is a server-side layer programmed in the PHP language. It represents an intermediate layer between the presentation and data layers. It communicates with the upper layer (Presentation Layer) via HTML forms and JavaScript queries, and it also retrieves data from the data layer. 3. Data Layer: This is the lowest layer of the layered architecture. In this layer, all incoming data are saved in a MySQL database. Mechanism of the proposed EEUHRMS The mechanism of using the supporting tools to design EEUHRMS consists of four steps:
- Webpage Design: designing the structure of a web page (i.e. tables, forms, input fields, text areas, etc.) using HTML.
- Webpage Effects: webpage effects represent the style of HTML elements (i.e. colours, hovers, font size, positioning, animation, etc.) using CSS, jQuery, and Bootstrap classes.
- Processing: in this step, all entered data are validated and the necessary calculations are performed to produce the wanted results using PHP, jQuery, and JavaScript.
- Saving and Retrieving: saving means storing entered data in the database, while retrieving is the opposite of saving, where data are restored from the database using PHP and SQL.
System Requirements of the proposed EEUHRMS In order to design and implement EEUHRMS, several kinds of requirements must be considered, namely functional and non-functional requirements and hardware and software requirements. Functional Requirements A functional requirement deals with what the system is supposed to perform and the services that are requested. It also specifies who is authorized to insert data into the database system, prepares the outputs such as reports, and identifies the classifications of the data that are inserted into the system. The functional requirements for our proposed system concern the Enterprise Administrator, Manager (Unit Admin), Staff, and Applicants.
- Enterprise Administrator: the person who has the authority to manage the entire system and has access to all modules. He is capable of inserting and viewing details, editing and deleting data, and granting authority to other newly registered users.
- Manager (Unit Admin): each unit or directorate usually has more than one employee. Each unit has one person who acts as the admin of the module used by his unit of the institution he is working at. These units belong either to the presidency or to the colleges. Furthermore, the admin has the authorization to insert, view details of, edit, and delete data related to his unit.
- Staff: each registered staff member has the authority to access and view his personal information, salary, leaves, posts, summary service, acknowledgments/punishments, and archives. However, he has no authority to edit or delete the data.
- Applicants: persons who want to apply for jobs. They can only insert data (i.e. personal information and formal documents) and do not have the authority to change them after submission.
Non-functional Requirements Non-functional requirements refer to the following features related to the proposed system:
- Security: protecting the information system against illegal access or alteration of data is the most significant attribute of any electronic system. Security is important on both the client and server sides.
- Usability: the system must provide a friendly user interface, and it should be easy to learn and use the components of the system by avoiding complex design.
- Reliability: the ability of the system to perform its required duty during a specific period without breakdown.
- Accessibility: registered users have the right to access the system inside the enterprise within the limits of their authentication.
- Extensibility: the system should be able to accommodate the necessary enlargement of its capabilities without modifying the basic design.
- Availability: the ability of the system to fulfil its assigned role whenever demanded.
Software requirements In order to design a powerful electronic system with diverse attributes, it is crucial to use the necessary software tools, such as the Laravel framework. Hardware Requirements Hardware and software requirements complement each other. Therefore, hardware requirements are needed to build the system on both the server side and the client side:
- Server-side: the computer that contains the web application as well as the database, which is considered a two-tier architecture (2TA).
- Client-side: the client side consists of two types: Internal Clients and External Clients.
- Internal Clients: clients (within the university) are connected to the server through an internal network via a wireless router.
- External Clients: there will be M hosts, considered as applicants who want to apply for jobs at the university.
- Internal Network: the network device used to connect the server with the clients' devices is a wireless router. The server is connected directly to the router, and the clients are connected to the router wirelessly.
In addition to the above hardware requirements, the proposed system requires other peripheral devices for each unit, such as input devices. Evaluation of the Proposed EEUHRMS The evaluation process is a significant issue in assessing the proposed system. The practical implementation of EEUHRMS was carried out at Al-Kitab University. The System Usability Scale (SUS) is a simple and reliable tool used to measure the usability of the system. SUS is a questionnaire that consists of ten questions about the system, where the even-numbered questions are negatively worded and the odd-numbered questions are positively worded. These ten questions illustrate the level of satisfaction or dissatisfaction of the users. The EHRMA, UA, and staff used the proposed EEUHRMS. The testing period was 36 days, from 1/8/2019 to 5/9/2019. After testing EEUHRMS, the system was evaluated. The effectiveness, efficiency, and satisfaction from the system users' perspective were evaluated to check the usability of the system. After finalizing the SUS questionnaire, the acquired SUS scores of the proposed system's user sample were processed as follows: for odd-numbered items, one is subtracted from the user response; for even-numbered items, the user response is subtracted from 5. The resulting scores are listed in Table 1. The proposed EEUHRMS was tested and evaluated by 27 users, and the result of the participants' survey is shown in Figure 9. It is obvious from the chart that positive questions attained high scores, while the negative ones produced low values. The results provide a better perception of the system usability test and evaluation, and they also represent the users' beliefs about usability issues of the system. According to the survey results of the proposed EEUHRMS, high satisfaction is observed across the questionnaire respondents (i.e. 27 users).
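The SUS scoring procedure described above can be implemented in a few lines. The sketch below follows the standard SUS convention, in which the adjusted item contributions are summed and multiplied by 2.5 to give a 0-100 score; that final scaling step is not spelled out in the text but is implied by the reported 0-100 scores, and the example responses are invented.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded): contribution = response - 1.
    Even-numbered items (negatively worded): contribution = 5 - response.
    The summed contributions are scaled by 2.5 to the 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS expects exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Invented example: one participant's answers to the ten SUS items.
print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))  # -> 87.5
```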
It is clear from the results in Table 1 that the minimum SUS score is 50 and the maximum score is 92.5. According to the mean of the SUS scores, it can be concluded that the provided system is generally perceived to be acceptable, with a total SUS score of 85. Figure 9. Mean Survey Results. Conclusions Regarding the implementation of the planned EEUHRMS, it can be concluded that a well-organized EEUHRMS has been adopted, designed, and applied. The planned system is able to simplify the handling of financial and managerial activities for any private or public university and to transform the manual mechanisms used by university staff into an automated style. Accordingly, we can say that the proposed system is a foundation for connecting university institutions (including one presidency and nine colleges) into a single automated organization. EEUHRMS improves the links between the management and financial branches/units, as well as the links between the presidency and the colleges. In addition, it improves the links between staff and branches/units with full flexibility. EEUHRMS saves money, wasted time, and effort. Hence, EEUHRMS may be adopted repeatedly for other ESs in Iraq and reorganized according to the requirements of these new ESs.
2021-05-10T00:03:46.034Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "50d66ea5d1f23232ab3892314b0bea273ca903f7", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1804/1/012058", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "9b0cc959d4bbef9b3bd8299675c5e6e93533c41c", "s2fieldsofstudy": [ "Computer Science", "Business" ], "extfieldsofstudy": [ "Business", "Physics" ] }
246453108
pes2o/s2orc
v3-fos-license
A Model for Energy Consumption of Main Cutting Force of High Energy Efficiency Milling Cutter under Vibration Understanding the influence of the main cutting force energy consumption of the milling cutter is the basis for prediction and control of energy and machining efficiency. The existing models of cutting force energy consumption lack variables related to milling vibration and cutter tooth errors. Based on the instantaneous bias of the main profile of the milling cutter under vibration, the instantaneous cutting boundary of the cutter teeth was investigated. The energy consumption distribution of the instantaneous main cutting force of the cutter tooth was studied. Models for the energy consumption of the instantaneous main cutting force of the cutter tooth and of the milling cutter were both developed. The formation of the energy consumption of the dynamic main cutting force of a high energy efficiency milling cutter was researched. A method for identifying the time–frequency characteristics of the energy consumption of the main cutting force under vibration was proposed and verified by experiments. Introduction High energy efficiency milling cutters are widely used in the manufacturing of heavy machine tools and aircraft components. The energy consumption of the main cutting force of the cutter is an important indicator for revealing the cutting process of the milling cutter and evaluating its cutting energy efficiency [1]. During the intermittent cutting process, the main cutting force of the milling cutter changes randomly under the effects of the vibration and impact between cutter and workpiece. The energy consumption of the main cutting force of the milling cutter thus changes dynamically, which leads to difficulty in precisely predicting and controlling energy consumption in the cutting process [2]. The instantaneous multi-tooth cutting mode of the high energy efficiency milling cutter means that the main cutting force energy consumption of the milling cutter is composed of the instantaneous main cutting force energy consumption of each tooth participating in the cutting. The instantaneous energy consumption distribution of the main cutting force on the cutting edge of the cutter tooth is the key to revealing the dynamic characteristics of the energy consumption of the main cutting force of the high energy efficiency milling cutter [3]. Non-straight tooth structures, such as the helical teeth commonly used in high-efficiency milling cutters, vary the instantaneous cutting speed vector direction of each point on the cutting edge [4]. At the same time, affected by milling vibration and tooth error, the main section of the milling cutter and the instantaneous cutting behavior of the cutter teeth are in an unstable state in the workpiece coordinate system space [5]. As a result, the instantaneous cutting contact relationship between the cutter tooth and the workpiece is constantly changing, which causes the magnitude and direction of the instantaneous main cutting force at each point on the cutting edge of the cutter tooth to change continuously, rendering the instantaneous main cutting force relationship between the cutter teeth uncertain [6]. In previous work, the energy consumption of the main cutting force of the milling cutter was calculated by multiplying the resultant cutting force of the cutter teeth, or the main cutting force of the milling cutter obtained through experiments, by the cutting speed of the milling cutter [7]; a minimal sketch of this conventional calculation is given below.
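The conventional calculation just described treats the main cutting force energy consumption (cutting power) as the product of the measured main cutting force and the cutting speed. The following is a minimal sketch of that calculation only; the force value, cutter diameter, and spindle speed are illustrative assumptions, not values from the experiments reported later.

```python
import math

def cutting_power(main_cutting_force_n, cutter_diameter_m, spindle_speed_rpm):
    """Conventional estimate: power = main cutting force x cutting speed.

    The cutting speed is taken at the cutter periphery, v = pi * D * n / 60 (m/s).
    """
    v = math.pi * cutter_diameter_m * spindle_speed_rpm / 60.0
    return main_cutting_force_n * v  # Watts

# Illustrative values only: 300 N main cutting force, 20 mm cutter, 1,909 rpm.
print(f"{cutting_power(300.0, 0.020, 1909):.1f} W")
```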
Based on the response of the maximum or average value of the energy consumption of the main cutting force to the cutting parameters, the factors affecting the energy consumption of the main cutting force of the milling cutter were identified [8]. The above method assumed that the instantaneous cutting behavior of each tooth of the milling cutter had the same change characteristics. It was impossible to reveal the instantaneous bias of the main section of the milling cutter and the changes of the instantaneous cutting speed vector and the instantaneous main cutting force vector at each point on the cutting edge, as well as its influence on the energy consumption of the cutter tooth and the instantaneous main cutting force of the milling cutter. Thus, it is necessary to conduct in-depth research. Establishing the correct cutter-workpiece engagement and cutting force model is the prerequisite for revealing the dynamic characteristics of the main cutting force energy consumption [9]. In recent years, Utsumi [10] used the contact behavior of the cutter and a workpiece to predict the simulation model of the milling force, which was consistent with the experimental results of several combinations of the predicted cutting forces and the cutter postures and feed rates; Jun [11,12] studied the changing law of cutting forces and established an empirical model of cutting forces on cutting parameters; Cai [13] proposed a new cutting force prediction model based on non-uniform rational basis splines and finite element methods, and established a single-insert cutting force model using NURBS interpolation. Wan [14,15] established a material separation model by combining plastic formation theory with slip line field theory, and based on the developed separation model, established a cutting force model that can consider shear and plowing effects separately; Zhang [16][17][18] developed a new universal instantaneous force model and analyzed and established the average uncut chip thickness, actual cutting depth, center position, and geometric relationship. Based on the above, He [19] established a model of the three goals energy consumption, cutting force, and processing time and processing parameters, and then obtained the Pareto optimal; Wang [20] established energy consumption models for faces, steps, grooves, and flutes on the basis of the study of energy consumption based on plastic deformation. The above methods had guiding significance for the establishment of the solution model of the energy consumption dynamic characteristics of the milling cutter's main cutting force. However, because it ignored the influence of the non-straight tooth structure of the milling cutter, milling vibration, and tooth error-as well as the difference in the instantaneous cutting behavior of each tooth-there was a principal error in the calculation of cutting force energy consumption. This cannot correctly reflect the dynamic characteristics of the main cutting force energy consumption of the milling cutter. In this paper, the model of the instantaneous cutting behavior in terms of contact angle and tool deviation under vibration were studied. The instantaneous bias of the main section of the milling cutter was quantitatively described. The instantaneous cutting boundary and cutting layer parameters of the milling micro-element were investigated based on the instantaneous cutting behavior model. 
The main cutting force energy consumption and its distribution on the rear face of the milling cutter were both investigated based on the calculation of the cutting speed and the main cutting force vector. The dynamic evolution of the energy consumption of the main cutting force of the milling cutter was researched and also validated by experiments. Instantaneous Bias of the Main Section of the Milling Cutter under Vibration In order to reveal the dynamic characteristics of the instantaneous main cutting force energy consumption of the milling cutter under milling vibration, the bias of the main section of the milling cutter caused by milling vibration was analyzed, as shown in Figure 1. The variables in Figure 1 are explained in Table 1, as follows. o-xyz: the workpiece coordinate system; y L is the distance between the side elevation G L of the workpiece to be processed and the xoz plane along the y axis. D: the diameter of the milling cutter. β: the helix angle of the milling cutter. L c: the axial length of the cutting edge of the milling cutter. l: the overhang of the milling cutter; the total length of the milling cutter is also given. The milling cutter structure coordinate system: o d is the center of rotation of the lowest cutter tip in the axial direction; the x d axis is parallel to the direction of the cutting speed at the cutter tip of maximum radius; the y d axis is parallel to the radial direction of the cutter tip with the maximum radius; the z d axis is the rotation axis of the milling cutter and points toward the cutter shank. The cutter tooth coordinate system: o i is the rotation center of the cutter tip, the y i axis is the direction through the origin o i pointing to the cutter tip, and the z i axis is parallel to z d. The milling cutter cutting coordinate system without vibration is also defined, together with the vibration displacements of the milling cutter in the three directions x, y, and z, respectively. In order to obtain the instantaneous main cutting force and main motion speed of the cutter tooth, a transformation matrix between the cutter tooth coordinate system and the workpiece coordinate system was proposed, as shown in Equations (1)-(3). The transformation matrix was solved by using the relationships between the reference coordinate systems of dynamic cutting of the high energy efficiency milling cutter under vibration in Figure 1, where Q 1 , Q 2 , Q 3 , and Q 4 are rotation matrices and M 1 , M 2 , and M 3 are translation matrices. ϕ d (t) is the instantaneous angle between the y d axis and the y s axis in the x s o s y s plane, where ϕ d (0) is its value at the initial cutting time of the milling cutter, that is, the angle between the y d axis and the y s axis in the x s o s y s plane when t is 0. x oo (t), y oo (t), and z oo (t) are the instantaneous position coordinates of the coordinate origin o o of the milling cutter cutting coordinate system without vibration in the workpiece coordinate system o-xyz. The instantaneous bias state of the main section of the milling cutter was characterized by the bias angle of the cutting coordinate system caused by milling vibration. From the milling cutter trajectory o s (x, y, z) in the cutting coordinate system under vibration, the speed v(t) of the milling cutter in the direction of the cutting vector along this trajectory and the instantaneous angle θ s (t) between the x s axis and the x o axis are obtained, where v x (t), v y (t), and v z (t) are the components of v(t) along the x axis, y axis, and z axis, respectively; the bias angle θ(t) of the main section of the milling cutter then follows. In order to investigate the effect of milling vibration on the cutting posture of the milling cutter, the bias angle and the trajectory of the milling cutter with and without vibration were compared by using Equations (1)-(10), as shown in Figure 2. As shown in Figure 2, the trajectory and bias angle of the milling cutter remained constant without milling vibration, whereas milling vibration caused displacement increments of the milling cutter to different degrees, which directly changed the trajectory and bias angle of the milling cutter, resulting in continuous changes in the instantaneous cutting boundary of the cutter tooth, which in turn directly affected the formation process of the main cutting force energy consumption of the milling cutter.
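As a rough illustration of how coordinate transformations such as those in Equations (1)-(3) can be composed numerically, the sketch below chains elementary rotation and translation matrices in homogeneous coordinates; the specific rotation axis, angle, and offsets are placeholders, not the actual matrices Q 1-Q 4 and M 1-M 3 of the paper.

```python
import numpy as np

def rot_z(angle_rad):
    """Homogeneous rotation about the z axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans(dx, dy, dz):
    """Homogeneous translation."""
    m = np.eye(4)
    m[:3, 3] = [dx, dy, dz]
    return m

# Placeholder chain: tooth frame -> cutter frame -> workpiece frame.
# Angle and offsets are illustrative, not taken from the paper.
phi_d = np.deg2rad(30.0)          # instantaneous rotation angle of the cutter
offset = trans(12.5, 40.0, 0.0)   # cutter origin expressed in the workpiece frame
T = offset @ rot_z(phi_d)

point_in_tooth_frame = np.array([0.0, 10.0, 2.0, 1.0])  # homogeneous coordinates
print(T @ point_in_tooth_frame)   # the same point expressed in the workpiece frame
```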
The bias angle of main section of the milling cutter θ(t) could be written as In order to investigate the effect of milling vibration on cutting posture of the milling cutter, the bias angle and the trajectory of milling cutter with/without vibration were compared by using Equations (1)-(10), as shown in Figure 2. As shown in Figure 2, the trajectory and bias angle of milling cutter remained constant without milling vibration, the milling vibration could cause displacement increment of the milling cutter in different degrees, which directly changed the trajectory and bias angle of the milling cutter, resulting in the continuous changes in instantaneous cutting boundary of the cutter tooth, which directly affected the formation process of the main cutting force energy consumption of the milling cutter. Solution Method of the Cutter Tooth Instantaneous Cutting Boundary under Vibration In order to solve the interval of the distribution function of the energy consumption of the cutter tooth's main cutting force under the action of vibration, the instantaneous cutting boundary of the cutter tooth should be investigated. By using Equations (1)-(10), in the workpiece coordinate system, the cutting edge E i (x, y, z) of ith cutter tooth could be written as The transformation matrix Ω i between the cutter tooth coordinate system and the workpiece coordinate system could be expressed as when ζ i = 0 • in Equation (11), the trajectory s i (x, y, z) of the cutter tip in the workpiece coordinate system could be expressed as During the milling process, the instantaneous cutting edge and the machining transition surface of the workpiece are shown in Figure 3. According to Figure 3, when the ith cutter tooth cut into the side elevation G L of the workpiece to be machined, the curve equation of the upper boundary characteristic point of the cutting edge could be expressed as where t i is the characteristic moment when the cutting edge of the ith cutter tooth cut the workpiece, and t i 1 is the cutter tip of the ith cutter tooth, that is, the characteristic moment when the point at which the cutting edge lag angle ζ i is equal to 0 • cuts into the side elevation G L of the workpiece to be processed, t i 2 is the characteristic moment when the cutting edge of the ith cutter tooth cuts away from the side elevation G L and cuts into the upper surface G H of the workpiece. Then the characteristic moment t i 1 could be expressed as The characteristic moment t i 2 could be expressed as when the ith cutter tooth cut into the upper surface G H of the workpiece, the curve equation of the characteristic point on the upper boundary of the cutting edge could be expressed as where t i 4 is the characteristic moment at which the cutting edge of the ith cutter tooth on the upper surface G H of the workpiece cuts out of the machining transition surface Σ i−1 formed by the previous (i−1)th cutter tooth. 
Using the method for constructing the tooth cutting edge equation shown in Equation (11), the equation of machining transition surface Σ i−1 formed by the (i−1)th cutter tooth could be expressed as where E i−1 (x, y, z) is the cutting edge equation in the workpiece coordinate system from the period of t Thus, the characteristic moment could be expressed as According to Equation (13), it could be obtained that during the process of the cutter tip of the ith cutter tooth cuts into the side elevation G L of the workpiece to be machined until it cut out of the transition surface Σ i−1 , the curve equation of the characteristic point of the lower boundary of the cutting edge could be expressed as where t i 3 is the characteristic moment when the cutter tip cuts out of the transition surface Σ i−1 , which could be expressed as After the cutter tip separate from the transition surface Σ i−1 , the lower boundary of the cutting edge could be expressed as According to Equations (11) and (22), it could be acquired that, the upper instantaneous cutting boundary m ik (t i ) of the cutting edge could be given as The lower instantaneous cutting boundary of the cutting edge could be given as The cutting boundaries were not only related to the instantaneous cutting pose of the current cutter teeth, but also closely related to the machining transition surface formed by the previous cutter teeth. The results showed that the instantaneous cutting boundaries were in an unstable state, which was affected by the vibration, the cutter tooth error, edge shape, and instantaneous cutting pose of the two adjacent cutter teeth. This not only directly affects the instantaneous cutting layer parameters, but also changes the distribution of instantaneous main cutting force energy consumption on the cutter teeth. Instantaneous Bias of the Main Section of the Milling Cutter under Vibration According to Figure 1 and Equations (6)- (11), the cutting edge of cutter tooth participating in cutting was affected by milling vibration, cutter tooth error, blade shape, and instantaneous cutting pose. An instantaneous cutting speed and main cutting force of cutting edge feature points of cutter teeth participating in cutting are shown in Figure 4. The position coordinates x i mi , y i mi , and z i mi of the characteristic point m i (t i ) on the cutting edge of the ith cutter tooth that participate in cutting instantaneously in the cutter tooth coordinate system o i -x i y i z i could be expressed as As shown in Figure 4, the instantaneous pose of the z i axis in the workpiece coordinate system was obtained by Equation (11), which was in the instantaneous cutting main section Gom of the point m i (t i ), use o im (t i ) which is the intersection of z i axix and main section Gom and points m i (t i ) construct linear equation h m (t i ). 
The instantaneous position coordinates x q m, y q m and z q m of the intersection q m (t i ) could be expressed as The instantaneous position coordinates of q m (t i ) in the cutter tooth coordinate system o i -x i y i z i could be expressed as Then, in the cutter tooth coordinate system o i -x i y i z i , the instantaneous cutting layer thickness h Dj (x i , y i , z i ) of the point m i (t i ) could be expressed as In the cutter tooth coordinate system o i -x i y i z i , the instantaneous principal motion speed of the point m i (t i ) could be calculated as According to the Figure 4, Equations (28) and (29), the energy consumption distribution function of the instantaneous main cutting force of the cutter tooth was given by where p is the unit cutting force and k t is the main cutting force correction coefficient. According to the Equations (23), (24), and (30), the instantaneous main cutting force energy consumption of the cutter tooth could be derived as In order to verify and further investigate the dynamic characteristics of energy consumption of milling cutter main cutting force, high speed milling experiment were carried out. The workpiece material was titanium alloy TC4. The milling cutter was an integral cemented carbide end milling cutter with diameter of 20 mm. The cutter tooth 1 is the longest cutter tooth with the bottom edge, and the other cutter teeth are sorted according to the follow-up cutting sequence, as shown in Figure 5. In order to eliminate the influence of cutting fluid on the accuracy of cutting force measurement, dry cutting was used in the experiment. The milling parameters and cutter teeth errors are shown in Table 2. ∆z i d is the axial error of the ith cutter tooth. ∆r i is the radial error of the ith cutter tooth. n is the rotational speed of the milling cutter. f z is the feed rate per tooth. a p is the cutting depth of the milling cutter. a e is the cutting width of the milling cutter. During experiments, the milling vibration acceleration signals were acquired, as shown in Figure 6. Where a x (t), a y (t), and a z (t) are milling vibration acceleration signals of the milling cutter along the workpiece coordinate system x, y, and z, respectively. According to the different variation shown by the time domain characteristic of milling vibration in Figure 6 and its corresponding time, the cutting process was divided into multiple cutting stages. Where t 0 is the starting time of each cutting stage, t' is the middle time of an each cutting stage, ∆t is the time interval of the cutting stage. According to Table 2 and Figure 6, the instantaneous main cutting force energy consumption of cutter teeth were calculated by using Equation (30). The time-domain characteristic curve of the main cutting force energy consumption of five cutter teeth of the milling cutter were obtained, as shown in Figure 7. Where P i 0 is main cutting force energy consumption of the ith cutter tooth without milling vibration. P i is main cutting force energy consumption of the ith cutter tooth with milling vibration. According to Figure 7, the results of the energy consumption of the main cutting force of the cutter teeth showed that the energy consumption of the main cutting force of each cutter tooth was periodic. The waveforms of energy consumption distribution of main cutting force of each cutter tooth of milling cutter was different. It was mainly reflected in the different values and periodic of the main cutting force energy consumption of each cutter tooth. 
This was because the tooth errors and the milling vibration of each tooth were different. As a result, the distribution of main cutting force energy consumption of the milling cutter composed of main cutting force energy consumption of the cutter teeth with different waveforms had dynamic characteristics. Identification Method for the Dynamic Characteristics of Energy Consumption of the Milling Cutter Main Cutting Forces In order to unveil the dynamic characteristics of the energy consumption of the main cutting force of the milling cutter, the instantaneous energy consumption of the main cutting force of the milling cutter was obtained by using Equations (30) and (31), as shown in Equation (33). where N is the amount of milling cutter teeth. The energy consumption of the main cutting force of the milling cutter and the cutter teeth with time was solved by using Equation (33), as shown in Figure 8. P 0 is main cutting force energy consumption of the milling cutter without milling vibration. P is main cutting force energy consumption of the milling cutter with milling vibration. According to Figure 8, the variety of energy consumption of instantaneous main cutting force of the milling cutter without cutter tooth error and milling vibration had obvious periodicity, and the energy consumption value remained unchanged. Affected by cutter tooth error and milling vibration, the waveform of instantaneous main cutting force energy consumption of the milling cutter changed continuously with cutting time. In order to investigate the influences of cutter tooth errors and milling vibration on the main cutting force energy consumption, the time-frequency parameters (root mean square, kurtosis, and main frequency of the instantaneous main cutting force energy consumption) with/without milling vibration and cutter tooth errors are shown in Figure 9. In Figure 9, C is the time-frequency parameter of energy consumption of milling cutter's main cutting force, C 1~C5 are time-frequency parameters of the main cutting force energy consumption of cutter tooth 1~5, respectively. g 1 and g i 1 are root mean square values of the main cutting force energy consumption of milling cutter and cutter teeth, respectively. g 2 and g i 2 were kurtosis of the main cutting force energy consumption of milling cutter and cutter teeth, respectively. g 3 and g i 3 were dominant frequencies of the main cutting force energy consumption of milling cutter and cutter teeth, respectively. According to Figure 9, the root mean square, kurtosis, and dominant frequency of the energy consumption without milling vibration and cutter tooth errors would not change over time and remain stable. Besides, the time-frequency parameters of instantaneous main cutting force energy consumption of cutter tooth and milling cutter showed different dynamic characteristics with milling vibration and cutter tooth errors. Among them, the root mean square of instantaneous main cutting force energy consumption of each cutter tooth changes in different degrees, and the root mean square value of instantaneous main cutting force energy consumption of milling cutter changes significantly with time. It is also found that the kurtosis of instantaneous main cutting force energy consumption of each cutter tooth changes in varying degrees, but the impact component of instantaneous main cutting force energy consumption of the milling cutter did not increase significantly. 
The dominant frequency of the instantaneous main cutting force energy consumption of each cutter tooth was close to the dominant frequency of milling cutter speed. The above analysis results showed that, under the effects of milling vibration and cutter tooth errors, the time-frequency parameters of instantaneous main cutting force energy consumption of the cutter teeth and the milling cutter showed different variations. This means the dynamic variation of the energy consumption of instantaneous main cutting force of the cutter tooth and the milling cutter was not a stable process. The instantaneous cutting boundary and parameters of instantaneous cutting layer changed constantly. This led to the variability of the energy consumption distribution of instantaneous main cutting force of the cutter tooth and the milling cutter. Based on the above analysis results, the identification method of dynamic characteristics of main cutting force energy consumption was proposed, as shown in Figure 10. In Figure 10, ∆g u is the difference between the uth time frequency characteristic parameter of milling ith cutter tooth and the target characteristic parameter [∆g u ] was the maximum allowable deviation of the uth time frequency characteristic parameter of milling ith cutter tooth. Where ∆g u is the difference between the uth time-frequency characteristic parameter of the milling cutter and the target characteristic parameter [∆g u ] is the maximum allowable deviation of the uth time-frequency characteristic parameter of the milling cutter. In this method, the influence of the milling vibration and cutter tooth error on the instantaneous bias of the main profile of the milling cutter and the instantaneous cutting behavior of cutter tooth were obtained firstly. The instantaneous main cutting force energy consumption of the cutter tooth and the milling cutter were consequently obtained by solving the instantaneous cutting boundary of the cutter tooth and the instantaneous main cutting force energy consumption distribution function. Using this method, the relationship between the instantaneous main cutting force energy consumption of the milling cutter and cutter tooth could be unveiled, and the influences of the process variables on the dynamic distribution of the main cutting force energy consumption of the milling cutter could also be identified. Responses of Energy Consumption of Milling Cutter Main Cutting Force In order to verify the identification method for the dynamics of the main cutting force energy consumption of the milling cutter, two sets of high-speed milling experiments were carried out, the experimental setup were the same as that in Section 4, and the process parameters in Table 2 was taken as experiment scheme 1, the process parameters in Table 3 was taken as experiment scheme 2. The milling vibration acceleration signal was obtained in experimental scheme 2, as shown in Figure 11. According to Table 2, Figures 10 and 11, the energy consumption variation of instantaneous main cutting force of the milling cutter and cutter teeth in scheme 2 were obtained, as shown in Figure 12. As shown in Figures 7, 8 and 12, the instantaneous main cutting force energy consumption showed significantly different variations by comparing with schemes 1 and 2. It can be seen that the energy consumption of instantaneous main cutting force of the milling cutter and the cutter tooth were sensitive to the change of the cutting conditions. 
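The time-frequency descriptors used throughout this identification method (root mean square, kurtosis, and dominant frequency of the energy consumption signal) can be computed with standard signal-processing tools. The sketch below is a minimal illustration assuming a uniformly sampled energy consumption series; the sampling rate, the synthetic signal, and the function name are placeholders rather than the experimental data reported in the paper.

import numpy as np
from scipy.stats import kurtosis

def time_frequency_parameters(p, fs):
    """Root mean square, kurtosis, and dominant frequency of a sampled
    energy consumption signal p (W) acquired at sampling rate fs (Hz)."""
    p = np.asarray(p, dtype=float)
    g1 = np.sqrt(np.mean(p ** 2))            # root mean square
    g2 = kurtosis(p, fisher=False)           # kurtosis (Pearson definition)
    spectrum = np.abs(np.fft.rfft(p - p.mean()))
    freqs = np.fft.rfftfreq(p.size, d=1.0 / fs)
    g3 = freqs[np.argmax(spectrum[1:]) + 1]  # dominant frequency, ignoring DC
    return g1, g2, g3

# Example with a synthetic tooth-passing signal (placeholder values only)
fs = 10_000.0                                # sampling rate, Hz
t = np.arange(0, 0.1, 1.0 / fs)
p = 500 + 80 * np.sin(2 * np.pi * 1500 * t) + 5 * np.random.randn(t.size)
print(time_frequency_parameters(p, fs))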
In order to verify the results of the main cutting force energy consumption of the milling cutter, it is necessary to acquire the main cutting force energy consumption in the experiment, the cutting force along the direction of feed speed, cutting width, and cutting depth in the milling experiments were acquired by using the Kistler rotary triaxial dynamometer, as shown in Figure 13. According to Figure 13, the measured main cutting force energy consumption was calculated based on the main cutting force and main motion speed measured in experiment, the instantaneous energy consumption of the milling cutter main cutting force of scheme 1 and scheme 2 were obtained, as shown in Figure 14. According to Figures 7, 8, 12 and 14, the time-frequency parameters of instantaneous main cutting force energy consumption in two sets of experiments were compared, as shown in Figure 15. In Figure 15, C t is the time-frequency parameter of energy consumption of milling cutter's main cutting force measured in experiments. According to Figure 15, the changes in milling cutter speed, feed per tooth, and cutter tooth error, and the milling vibration caused obvious changes in time-frequency parameters of energy consumption of main cutting force. The results showed that the energy consumption distribution function of main cutting force of cutter teeth and milling cutter was sensitive to the changes in cutting parameters. In order to reveal the influence of process parameters such as milling cutter speed, feed per tooth, cutting width, and cutting depth on the energy consumption of the milling cutter main cutting force, the influence characteristics of the above parameters on milling cutter main cutting force were studied by single factor analysis method, as shown in Figure 16. According to Figure 16, with the increases of each process parameter, the main cutting force energy consumption increases, the reason is that, the increase of the spindle rotational speed lead to the changes in main movement speed, the increase in feed per tooth and cutting width cause the increase of instantaneous cutting layer thickness, the increase in cutting depth and cutting width would affect the length of the cutting edge that the milling cutter instantaneously participates in cutting, the main cutting force energy consumption of milling cutter thus increases. Verification of Energy Consumption of Milling Cutter Main Cutting Force The relative errors between the calculated and the experimental results of timefrequency parameters of energy consumption of milling cutter main cutting force were shown in Figure 17. According to Figure 17, the relative errors of root mean square value, kurtosis, and dominant frequency of the experimental and calculation result were all less than 20%, which indicated that the calculation results of milling cutter main cutting force energy consumption was in good agreement with the experimental results. In summary, the model and methods constructed in this research could reveal the formation mechanism of the milling vibration and the cutter tooth error on the main cutting force energy consumption of the milling cutter and its cutter teeth, and they could achieve the correct calculation of the main cutting force energy consumption of the milling cutter. 1. The reason the milling cutter posture changes from time to time is milling vibration affects the milling cutter trajectory and bias angle. 
A model for solving the instantaneous cutting boundary of the cutter tooth edge under vibration was proposed. The results showed that the milling cutter is displaced under the influence of milling vibration; in addition, the instantaneous cutting boundary is affected not only by the milling vibration during the cutting stage of the current tooth, but also by the instantaneous pose of the adjacent preceding cutter tooth.
2. The distribution function of the instantaneous main cutting force energy consumption of the cutter teeth was constructed. The analysis of the energy consumption distribution showed that the variations of the instantaneous cutting speed vector, the instantaneous main cutting force vector, and the instantaneous cutting layer parameters are affected by cutter tooth error and milling vibration. The instantaneous main cutting force energy consumption of each cutter tooth showed obvious differences in peak value and period.
3. A method for identifying the dynamics of the main cutting force energy consumption of the milling cutter was proposed. The identification results showed that the instantaneous main cutting force energy consumption exhibits dynamic changes mainly in its root mean square, while the kurtosis and dominant frequency do not change appreciably.
4. The analysis of the main cutting force energy consumption responses showed that the instantaneous main cutting force energy consumption of the milling cutter is sensitive to process design variables such as milling cutter speed, feed per tooth, and cutter tooth error. The validation results showed that the relative errors between the calculated and experimental results were less than 20%, which supports the accuracy of the proposed model.
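As a footnote to the validation summarized in point 4, the relative-error criterion between calculated and measured time-frequency parameters amounts to a one-line check. The numbers below are placeholders, not measured values; 20% is the acceptance bound quoted above.

def relative_error(calculated, measured):
    """Relative error of a calculated time-frequency parameter
    against its experimentally measured counterpart."""
    return abs(calculated - measured) / abs(measured)

# Placeholder example: RMS of the main cutting force energy consumption
g1_calc, g1_meas = 512.0, 545.0   # W, illustrative values only
err = relative_error(g1_calc, g1_meas)
assert err < 0.20, "model prediction outside the 20% acceptance bound"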
2022-02-02T16:12:52.550Z
2022-01-31T00:00:00.000
{ "year": 2022, "sha1": "c10d1354ef85f5623e5ad6e1f298db84609ba577", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/12/3/1531/pdf?version=1644385904", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "f1007f4d15b62b17c2911cfd84059d2c8bd40433", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
261833165
pes2o/s2orc
v3-fos-license
Evolution-informed therapy for kidney disease A recent editorial highlighted the challenges of bridging the great divides between evolutionists and clinicians [1]. Global prevalence of chronic kidney disease is rapidly increasing and affects African Americans at 4-fold the rate for European Americans [2,3]. Social inequalities contribute to many health disparities affecting African Americans, and the discovery of G1 and G2 APOL1 gene variants prevalent in 13% of this population contributes to the genetic component of the excess risk for nondiabetic kidney failure [4]. Focal segmental glomerulosclerosis (FSGS), the most common primary glomerular disorder causing kidney failure in the USA, is also more common in persons of African than European origin [5]. Importantly, FSGS is associated with the APOL1 gene variants common in African chromosomes but absent in European chromosomes [6]. With the exception of SGLT2 inhibitors [7], effective therapies to slow or prevent progression of FSGS are not currently available. EVOLUTIONARY PERSPECTIVES APOL1-MEDIATED KIDNEY DISEASE A recent editorial highlighted the challenges of bridging the great divides between evolutionists and clinicians [1].Global prevalence of chronic kidney disease is rapidly increasing and affects African Americans at 4-fold the rate for European Americans [2,3].Social inequalities contribute to many health disparities affecting African Americans, and the discovery of G1 and G2 APOL1 gene variants prevalent in 13% of this population contributes to the genetic component of the excess risk for nondiabetic kidney failure [4].Focal segmental glomerulosclerosis (FSGS), the most common primary glomerular disorder causing kidney failure in the USA, is also more common in persons of African than European origin [5].Importantly, FSGS is associated with the APOL1 gene variants common in African chromosomes but absent in European chromosomes [6].With the exception of SGLT2 inhibitors [7], effective therapies to slow or prevent progression of FSGS are not currently available. EVOLUTIONARY PERSPECTIVES APOL1 is a serum factor that lyses trypanosomes (parasites responsible for sleeping sickness) that evolved as a host defence mechanism through natural selection in human ancestors who produced a variant of APOL1 that killed a trypanosome subspecies endemic in west Africa at the time [8]. Parasite subspecies subsequently evolved resistance proteins.This was followed by evolution of new G1 and G2 variants of human APOL1 that kill extant trypanosomes in heterozygote hosts but also increase susceptibility to podocyte injury and FSGS in hosts with two variant alleles [8].After secretion into the circulation, APOL1 forms a Evolution-informed therapy for kidney disease Chevalier | 317 complex with a host protein that is acquired by the trypanosome by endocytosis.Once incorporated in the parasite the complex is catalysed to an ion channel that promotes lysis of the trypanosome [8]. FUTURE IMPLICATIONS A pharmaceutical company developed inaxaplin (Fig. 1), a small molecule that selectively inhibits APOL1 channel function in human embryonic kidney cells expressing the G1 and G2 variants [9,10].Moreover, treatment with inaxaplin of an APOL1 G2-homologous transgenic mouse resulted in reduced proteinuria. 
In phase 2a clinical study of patients with proteinuric FSGS, treatment with inaxaplin resulted in a 48% reduction in proteinuria [9].If confirmed in larger clinical trials (now underway), this application of molecular technology to address a major health disparity may contribute to greater awareness of the value of an evolutionary perspective in confronting public health challenges [1,11]. Robert L. Chevalier (Conceptualization [ideas; formulation or evolution of overarching research goals and aims], Investigation [conducting a research and investigation process, specifically performing the experiments, or data/evidence collection], Project Administration [management and coordination responsibility for the research activity planning and execution], Resources [provision of study materials, reagents, materials, patients, laboratory samples, animals, instrumentation, computing resources, or other analysis tools], Supervision [oversight and leadership responsibility for the research activity planning and execution, including mentorship external to the core team], Validation [verification, whether as a part of the activity or separate, of the overall replication/reproducibility of results/experiments and other research outputs], Visualization [preparation, creation and/or presentation of the published work, specifically visualization/data presentation], Writing -Original Draft Preparation [creation and/or presentation of the published work, specifically writing the initial draft (including substantive translation)], Writing -Review & Editing).
2023-09-16T06:57:33.689Z
2023-08-28T00:00:00.000
{ "year": 2023, "sha1": "8fbf9935f73b8e5298ff6abef02d2812e9c5fa0c", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/emph/advance-article-pdf/doi/10.1093/emph/eoad027/51277788/eoad027.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8fbf9935f73b8e5298ff6abef02d2812e9c5fa0c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
51801285
pes2o/s2orc
v3-fos-license
N-Doped TiO 2 / CdS Nanocomposite Films : SILAR Synthesis , Characterization and Application in Quantum Dots Solar Cell The N-doped TiO2/CdS nanocomposite films have been prepared through a successive ionic layer adsorption and reaction (SILAR) method on the N-doped TiO2 thin films with cadmium nitrate as Cd source and sodium sulphide as S precursor. The SILAR cycle was varied to study the CdS layer formation and its influence to the properties of resulted nanocomposite, i.e. 1, 5, 10, 25, and 50 cycles, respectively. The resulting materials were characterized using X-ray Diffraction (XRD), UV-Vis Spectroscopy, and Scanning Electron Microscopy (SEM). The result showed that the higher SILAR cycle resulted in a smaller CdS crystallite size and a higher band gap energy. The higher SILAR cycle was also provided the more intense response in visible light area. The prepared N-doped TiO2/CdS nanocomposite films were then applied in the quantum dots-sensitized solar cells (QDSSC) system. The solar cells performa test showed that there is an optimum cycle resulting in a highest power conversion. The quantum dot solar cells based on N-doped TiO2/CdS nanocomposite prepared with 25 cycles provided the highest performa with overall efficiency of 8.3%. Thus, by varying the cycle number in the SILAR synthesis process, it is easy for tuning the nanocomposite properties that fulfill the requirements as sensitized-semiconductor material in the solar cell system. INTRODUCTION The solar light-to-electricity conversion by photovoltaic technology offers an ideal problem solving for currently energy crisis because of the highly dependent on fossil fuel energy.Beside its unrenewable source, the use of fossil energy results in many serious environmental problems worldwide 1 . 
The dye-sensitized solar cells (DSSC) based on nanocrystalline TiO 2 has established an alternative concept to conventional solar cells owing to its low-cost production and high conversion efficiency 2 .DSSC system has successfully obtained an overall power conversion efficiency of 12%, but it is still having problems for commercial application due to the dye sensitizer and liquid electrolyte degradations causing the lower of solar cell efficiency and stability.The use of tandem semiconductors in the solar cells system became one of promising solutions to prevent the problems appeared from the dye degradation 3 .Inorganic quantum dots (QDs) semiconductors can serve as sensitizer instead of dye followed by some advantages: high light absorption in the visible region, great stability, an effective band gap that can be controlled by the QDs size, the possibility for multiple exciton generation, and the utilize of hot electron whose having higher energy than TiO 2 conduction band 4 .The small band gap semiconductors such as CdS, CdSe or CdTe have been used to sensitize TiO 2 in the QD-sensitized solar cells (QDSSC) and has successfully increased the visible light absorption of TiO 2 .Among those QDs, CdS has been receiving much attention because it has a high established relationship between the optical absorption and the size of the particle 5 .The QDSSC conversion efficiency has reached around 8% at present 6 , being lower than the DSSC efficiency but it is highly potential to increase the efficiency of the QDSSC system by tuning several properties of quantum dots.In order to be applied in the solar cell system, it is desirable for preparing semiconductor nanoparticles which have appropriately functionalized with different size and optical properties. The QDSSC based on wide band gap semiconductors, such as TiO 2 , has demonstrated a promising commercial technology in solar cell application.However, TiO 2 is not ideal semiconductor for solar cell photoanode due to its nature intrinsic band gap (around 3-3.4 eV) which means TiO 2 has a weak absorption of visible light 7 .Doping TiO 2 is one of the most promising approaches to increase its visible light response.Asahi et al., 8 firstly reported that N-doped TiO 2 films prepared by sputtering method showed significance visible light absorption at wavelengths less than 500 nm related to the band gap narrowing by mixing of N 2p states with O 2p states.The application of N-doped TiO 2 in the DSSC has significantly improved the efficiency and stability of DSSC 9,10 .So far, both doping of TiO 2 and quantum dot sensitization have been explored separately for solar cells applications, while combining the two approaches has not been reported yet.In this work, we develop CdS quantum dot sensitization with nitrogen-doping of TiO 2 and its application in QDSSC system.In N-doped TiO 2 /CdS nanocomposite, CdS acts as a visible sensitizer and N-doped TiO 2 being a wide band gap semiconductor that is responsible for the charge separation process.Therefore, the prepared N-doped TiO 2 /CdS nanocomposites thin films can effectively capture the visible light and transfer the photogenerated electrons into the N-doped TiO 2 conduction band, and started the electrical flows.In this work, the N-doped TiO 2 /CdS nanocomposite films have been fabricated by the successive ionic layer adsorption and reaction (SILAR) method as a relatively simple technique for large scale uniform coating to produce clean, dense and strong adhesion to substrate thin films. 
MATERIALS AND METHODS
Materials. Cd(NO3)2·10H2O, (NH4)2S, absolute ethanol, and acetyl acetic acid from Merck & Co., titanium tetraisopropoxide from Aldrich, dodecylamine from Fluka, and ITO glass and redox electrolyte from Dyesol were used as received without further purification.
Methods. The N-TiO2 was synthesized following the method reported previously 11. The prepared N-TiO2 nanocrystalline powder was then applied on an ITO glass substrate by the doctor blading method and calcined at 450 °C to obtain the N-TiO2 thin film. The CdS quantum dots were synthesized directly on the N-TiO2 thin film surface by the SILAR method as follows: the N-TiO2 thin film was immersed in 0.2 M Cd(NO3)2 solution for 1 min, washed with distilled water, further immersed in 0.2 M (NH4)2S solution for 1 min, and again washed with distilled water to remove impurities. This process constitutes one cycle, and the N-TiO2 was sensitized by CdS quantum dots with the number of cycles varied. The prepared CdS-sensitized N-TiO2 was then applied in the QDSSC system, and the performance of the QDSSC was measured as incident photon-to-current efficiency (IPCE) and light-to-electricity conversion efficiency. To fabricate the QDSSCs, the N-doped TiO2/CdS thin film was assembled with the Pt counter electrode (CE) with the active areas facing each other. The electrolyte solution was introduced through a drilled hole on the CE by capillary action, and the hole was then sealed. The current-voltage (I-V) measurements were performed using a Keithley-2000 instrument under 1000 W/m2 illumination; thus, during the measurement the solar cell was irradiated with a power density of 1000 W/m2, which is equivalent to Air Mass 1 (AM1). Here, V pp is the maximum voltage (V) and I pp is the maximum current density (mA/cm2) 12,13.
To study the structure of the resulting materials, the corresponding diffractograms were recorded using a Rigaku Miniflex 600 benchtop diffractometer (40 kV, 15 mA, CuKα, λ = 1.5406 Å) at a scan rate of 2°/minute. To study the electronic structure of the CdS/N-TiO2 nanocomposites, the light absorbances were recorded by diffuse reflectance UV-Vis spectroscopy (Shimadzu UV-2550).
RESULTS AND DISCUSSION
The N-doped TiO2/CdS nanocomposite films were successfully synthesized on the N-TiO2 thin film surface through the SILAR method with 1, 5, 10, 25, and 50 cycles, resulting in CdS/N-TiO2 nanocomposites. Fig. 1 shows selected powder XRD patterns of the CdS/N-TiO2 nanocomposites for 0, 5, and 50 cycles. The patterns indicated the existence of CdS quantum dot material on the N-TiO2 semiconductor. The powder XRD pattern of CdS for the higher cycle numbers showed the typical (111), (220), and (311) peaks of the cubic zinc blende structure, which match the data of JCPDS-10-0454. The detailed CdS peaks appeared at 2θ of 26.8°, 44.12°, 52.14°, and 72°, corresponding to the (111), (220), (311), and (400) planes, respectively, indicating a cubic structure of CdS similar to that reported by Dhage et al. 14 and to that of ZnS reported by Soltani et al. 15. The patterns also showed the peaks of the N-TiO2 nanocrystalline phase, which appeared at around 2θ of 25.2°, 38.5°, 47.8°, 54.1°, 55.4°, 63°, 69-71°, and 75°, corresponding to the anatase crystal planes of (101), (
112), ( 200), ( 105), ( 211), ( 204), ( 116), (220), and (215), respectively 16 .The diffractograms showed that there is not much peak change on the 5 cycles comparing to the pure N-TiO 2 (0 cycle), while on 50 cycles, the higher peak intensity might be associated with the higher amount of CdS on N-TiO 2 surface.The crystal size of both CdS and N-TiO 2 as estimated from Scherrer equation, are listed in Table 1.The result showed that all CdS synthesized on N-TiO 2 are quantum dots size.It was also found that the cycles applied in the synthesis process highly affect the crystal structure of CdS/N-TiO 2 nanocomposites.The more cycles lead to the lower CdS crystal size, while crystallite size of N-TiO 2 was not significantly changed.It might be because the more cycles formed higher CdS quantum dots on the N-TiO 2 surface, so it leads to the rearrangement of CdS quantum dots providing the lower particle size.In contrast to the smaller particle size with the more cycle synthesis process applied, it was also found that the more synthesis cycles resulted in smaller lattice parameter of the cubic crystal structure as listed in Table 1.The XRD patterns confirmed that both CdS and N-TiO 2 retained their identity as shown in both crystal structures and peaks appeared in all XRD patterns except on pure N-TiO 2 .It means the resulted N-doped TiO 2 /CdS system is a nanocomposite material 17 . The absorption spectra of cycle's variation on CdS/N-TiO 2 synthesis are shown in Fig. 2. The analysis of the typical bands together with colors may be seen in Table 2.It was clearly shown that the CdS addition on the N-TiO 2 resulted in a visible active response, which the higher cycles on CdS synthesis provided higher increase on absorption and wider red-shift absorption bands (smaller band gap energy/ Eg).It happened because the higher cycles resulted in the more CdS amount leading to the higher visible light response.The higher CdS amount could be also identified phisically from the deeper dark yellow intense color from the higher cycles applied (Table 2).The formation of CdS on N-TiO 2 surface greatly enhances the visible light response, so the resulted CdS/N-TiO 2 materials are highly potent to use as photoanode in solar cells system.The surface morphology of CdS/N-TiO 2 nanocomposites were evaluated using Scanning Electron Microscopy.The micrographs of N-TiO 2 and CdS/N-TiO 2 with 50 cycles are shown in Fig. 3.The N-TiO 2 micrographs (black region) seemed to show the porous nature of the N-TiO 2 as a requirement for the electrode better adhesion in solar cell system, which was in fact to be true in preparing the electrode.The cross sectional micrographs indicated that the application of 50 cycles SILAR synthesis on N-doped TiO 2 thin film led to the adsorption of CdS on the N-doped TiO 2 as rough surface changed to smoother packed surface and the width of the film changed from around 55 µm to around 66 µm.The existence of cubic CdS can also be identified on the N-TiO 2 surface. To study the effect of SILAR cycles on the photoelectrochemical properties of the resulted materials, we applied N-TiO 2 /CdS nano composites in the solar cells system.The solar cells performa was analyzed as I-V measurements (Fig. 
4) and the overall efficiency (Table 3). From the I-V characteristics, it can be seen that these nanocomposites exhibit much higher efficiency than N-TiO2 alone, being 4.5-8.3% with fill factors of 0.34-0.54. Among the cycle numbers, 25 SILAR cycles provided the best efficiency, while 1 and 50 cycles gave efficiencies even lower than 5 cycles. This might be because a higher CdS amount results in a lower band gap energy, which makes CdS more likely to act as a recombination center and thereby decreases the overall solar cell performance. In general, these nanocomposites also exhibit much higher efficiency than the fluorine-doped tin oxide (FTO)-based FTO/TiO2/CdS bilayer systems, which reached only 0.78% with a fill factor of 0.40 18, and than the CdS/TiO2-nanorod and CdS/TiO2-nanorod/g-C3N4 systems, which reached 1.67-2.31% with fill factors of 0.43-0.51 19. However, this is slightly less than that recently reported by Wang et al. 20 for a different material, namely 9.02% for Zn-Cu-In-Se QDSCs.
The overall photo-conversion efficiency η was calculated from the short-circuit photocurrent density (I sc), the open-circuit photovoltage (V oc), the fill factor of the cell (FF), and the intensity of incident light using the formula η = (V oc × I sc × FF)/P inc, where V oc = open-circuit voltage (V), I sc = short-circuit current density (mA/cm2), and P inc = light intensity (W/cm2). The fill factor (FF) is given by FF = (V pp × I pp)/(V oc × I sc).
Table 2: The Color and Assignment of UV-Vis spectral bands of N-TiO2/CdS. *Eg was calculated following the Kubelka-Munk formula.
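For completeness, the two quantitative relations the paper relies on, the photovoltaic efficiency and fill-factor definitions quoted above and the Scherrer estimate of crystallite size mentioned in the XRD discussion, can be evaluated in a few lines of code. The sketch below is illustrative only; the numerical inputs are placeholders rather than values from Tables 1-3, and the Scherrer constant K = 0.9 is the common default assumption.

import numpy as np

def fill_factor(v_pp, i_pp, v_oc, i_sc):
    """FF = (Vpp * Ipp) / (Voc * Isc)."""
    return (v_pp * i_pp) / (v_oc * i_sc)

def efficiency(v_oc, i_sc, ff, p_inc):
    """eta = Voc * Isc * FF / Pinc, returned in percent.
    Units: V, mA/cm2, dimensionless, mW/cm2 (100 mW/cm2 corresponds to 1000 W/m2, AM1)."""
    return 100.0 * (v_oc * i_sc * ff) / p_inc

def scherrer_size(beta_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), with beta the
    peak FWHM in degrees (converted to radians) and theta half of 2-theta."""
    beta = np.radians(beta_deg)
    theta = np.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * np.cos(theta))

# Placeholder example values (not measured data)
ff = fill_factor(v_pp=0.45, i_pp=14.0, v_oc=0.62, i_sc=18.0)
eta = efficiency(v_oc=0.62, i_sc=18.0, ff=ff, p_inc=100.0)
d_nm = scherrer_size(beta_deg=1.2, two_theta_deg=26.8)
print(f"FF = {ff:.2f}, efficiency = {eta:.1f}%, CdS crystallite ~ {d_nm:.1f} nm")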
2018-07-22T11:23:58.108Z
2018-06-21T00:00:00.000
{ "year": 2018, "sha1": "cb4daa61acb2277f4848e510b049ce621f77b7fc", "oa_license": "CCBY", "oa_url": "http://www.orientjchem.org/pdf/vol34no3/OJC_Vol34_No3_p_1297-1302.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cb4daa61acb2277f4848e510b049ce621f77b7fc", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Chemistry" ] }
256461026
pes2o/s2orc
v3-fos-license
DocInfer: Document-level Natural Language Inference using Optimal Evidence Selection We present DocInfer - a novel, end-to-end Document-level Natural Language Inference model that builds a hierarchical document graph enriched through inter-sentence relations (topical, entity-based, concept-based), performs paragraph pruning using the novel SubGraph Pooling layer, followed by optimal evidence selection based on REINFORCE algorithm to identify the most important context sentences for a given hypothesis. Our evidence selection mechanism allows it to transcend the input length limitation of modern BERT-like Transformer models while presenting the entire evidence together for inferential reasoning. We show this is an important property needed to reason on large documents where the evidence may be fragmented and located arbitrarily far from each other. Extensive experiments on popular corpora - DocNLI, ContractNLI, and ConTRoL datasets, and our new proposed dataset called CaseHoldNLI on the task of legal judicial reasoning, demonstrate significant performance gains of 8-12% over SOTA methods. Our ablation studies validate the impact of our model. Performance improvement of 3-6% on annotation-scarce downstream tasks of fact verification, multiple-choice QA, and contract clause retrieval demonstrates the usefulness of DocInfer beyond primary NLI tasks. Introduction Natural Language Inference (NLI) is a fundamental textual reasoning task seeking to classify a presented hypothesis as entailed by, contradictory to or neutral to a premise (Dagan et al., 2010). Prior NLI datasets and studies have focused on sentence-level inference where both the premises and hypotheses are single sentences (SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018), QNLI and WNLI (Wang et al., 2018)) Documentlevel NLI extends the reasoning of NLI beyond * *Corresponding Author:puneetm@umd.edu sentence granularity where the premises are in the document granularity, whereas the hypotheses can vary in length from single sentences to passages with hundreds of words (Yin et al., 2021). Document level NLI is an important problem for many tasks including verification of factual correctness of document summaries, fact-checking assertions against articles, QA on long texts, legal compliance of contracts, etc. Even so, it challenges modern approaches due to the limited input bottleneck of modern Transformer models. Consider that the universally used BERT model (Devlin et al., 2018) can only encode 512 input sub-tokens due to its quadratic self-attention complexity. Consequently, evidence in the document premise relevant to the hypothesis can potentially be distributed in several textual spans located arbitrarily far away from each other in long documents, and may not be simultaneously available to draw inference. Recent approaches, notably SpanNLI (Koreeda and Manning, 2021), HESM (Hanselowski et al., 2018)) and others, have shown that chunking the premise into multiple document spans, scoring them, and aggregating the scores helps mitigate the limited input length problem. Such approaches do not allow the inference module to reason over the complete evidence. In contrast to encoding the document as a set of sentences fed into a transformer for inferential reasoning, a recent line of work, e.g. EvidenceNet (Chen et al., 2022), GEAR (Zhou et al., 2019) and HGRGA (Lin and Fu, 2022)), encodes documents as graphs and uses graph reasoning to perform textual inference. 
Graphs allow encoding of various morphological and semantic relationships at various granularities. However, these approaches use graph-based processing subsequent to evidence selection. We address the above challenge with a reasonable assumption that the portion of the premise (the ground truth evidence) necessary and sufficient for inference can fit entirely into the length limit of language model for effective representation learning. Our proposed system achieves this by selecting sentences in the document that are contextually relevant for a given hypothesis through pruning irrelevant paragraphs and reinforce learning based optimal sentence selection. Our main contributions: • DocInfer -a novel DocNLI model that simultaneously performs successive optimal evidence selection and textual inference on large documents. It utilizes a novel graph representation of the document encoding structural, topical, concept and entity-based relationships. It performs subgraph pooling and asynchronous graph updates to provide a pruned, hypothesis-relevant and richer sub-document graph representation and uses a reinforcementlearning based subset selection module to provide the contextually-relevant evidences for inference. Experimental results show that DocInfer outperforms the current SOTA on DocNLI, ContractNLI and ConTRoL datasets with a significant improvement of 8-12%. • We propose CaseHoldNLI -a new document-level NLI dataset in the domain of legal judicial reasoning with over 270K document-hypotheses pair with maximum premise length of 3300 words. We observe similar performance gains on this dataset. • Application on downstream tasks: We demonstrate the usefulness of the DocInfer evidence selection module on downstream tasks of fact verification, multiple choice QA and few shot clause retrieval from legal texts using no or small amounts of data for supervised fine-tuning. Results on FEVER-binary, MCTest and Contract Discovery dataset show significant improvement of ∼ 3-6% F1. Related Work Document-level NLI Datasets: Yin et al. (2021) introduced Doc-level NLI on news and Wikipedia articles. Liu et al. (2021) proposed the multiparagraph ConTRoL dataset focused on complex contextual reasoning (logical, coreferential, temporal, and analytical reasoning). Several datasets comprising legal documents like case laws, statutes, and contracts have been proposed. COLIE-2020 (Rabelo et al., 2020) and Holzenberger et al. (2020) support identification of relevant paragraphs from cases that entail the decision of a new case. How-ever, the combined input length of their premisehypothesis pairs remains within 512 tokens with the premise lengths at paragraph-level, reasonably suited for input to BERT-like models. Koreeda and Manning (2021) (Koreeda and Manning, 2021), HESM (Hanselowski et al., 2018)) chunk the premise into multiple document spans for reasoning. A similar approach was followed by legal language models such as Legal-BERT (Chalkidis et al., 2020) and Custom Legal-BERT (Zheng et al., 2021) for legal reasoning tasks. More recently, language models (e.g., Longformer (Beltagy et al., 2020) with 4096 token input) have been proposed to overcome the limited input field bottleneck. Fact Extraction and Verification (FEVER) (Thorne et al., 2018) tasks require extracting evidence and claim entailment given an input claim and the Wikipedia corpus. 
Prior works in this domain address the length limitation for claim verification by identifying relevant evidence and chunking it into pieces that are individually scored and probabilistically aggregated (Subramanian and Lee, 2020). Hierarchical graph modeling may be used to handle the large scale of the premise (Zhou et al., 2019; Zhong et al., 2020; Chen et al., 2022; Lin and Fu, 2022; Si et al., 2021).
Context Selection for Document-level NLP: Recent works have investigated selection of relevant context for document-level NLP tasks such as Neural Machine Translation (Kang et al., 2020), Event Detection (Ngo et al., 2020; Veyseh et al., 2021), and Relation Extraction (Trong et al., 2022). Recently, some of the work on document-level NLP has looked at temporal relation extraction (Mathur et al., 2021), temporal dependency parsing (Mathur et al., 2022b), and speech synthesis (Mathur et al., 2022a) using graphs and sequence learning. However, none of them has considered an end-to-end trainable approach that couples graph learning with relevant evidence extraction.
Figure 1: Overview of DocInfer, illustrated with an example contract premise (paragraphs P1-P4), a hypothesis labeled Contradiction/Not Relevant, and the BERT encoder and dense output layers.
DocInfer
Given a textual hypothesis H, the task of document-level NLI is to classify whether the hypothesis is entailed by, contradictory to, or not mentioned by (neutral to) the document D. We present DocInfer, a neural architecture (Figure 1) that can select a set of evidence sentences E from document D to form a shortened document D e which is then used for NLI prediction. Here, for the document-level NLI task, we need to constrain D e to fall within the length limit of a BERT-like context encoder to enable it to consume the evidence entirely for improved representation learning for NLI. Our model can be seen as a sequence of four phases: (a) representation of document D in the presence of the hypothesis H to form a hierarchical document graph with sentences and paragraphs as nodes and Structural, Topical, Entity-centric, and Concept-similarity relations as edges; (b) paragraph node pruning using the novel Subgraph Pooling layer to select highly relevant paragraphs; (c) asynchronous graph update for improved node representations; and finally (d) optimal evidence selection using REINFORCE from the graph for the task of document-level NLI.
Document Representation: Let premise document D be defined as a sequence of n sentences s 1 , s 2 , · · · , s n such that D = [s 1 , s 2 , · · · , s n ]. These sentences are naturally grouped into m consecutive paragraphs P = [p 1 , p 2 , · · · , p m ] such that each sentence s i belongs to only one paragraph p j . We leverage a pre-trained BERT language model to obtain the embedding of every sentence and paragraph node. The final representation for each sentence s i and paragraph p j is taken from the encoder output, where [CLS] and [SEP] are symbols that indicate the beginning and ending of a text input, respectively.
Document Graph Construction: The document is then modeled as a hierarchical graph D G = (V, E) to capture the premise document structure. Here, V = {V p , V s , V h }, where V p , V s , V h are nodes corresponding to all the paragraphs, all the sentences, and the hypothesis, respectively. The set of edges (E) of the Document Graph encodes four types of relations between the nodes mentioned below: (1) Structural Relations (R str ): Hypothesis-Paragraph edges and Paragraph-Sentence Affiliation edges model the hierarchical structure of the document through a directed edge from the hypothesis node to each paragraph node and from a paragraph node to each constituent sentence, respectively. Further, Paragraph-Paragraph Adjacency and Sentence-Sentence Adjacency links preserve the sequential ordering of consecutive paragraph and sentence nodes through directed edges. (2) Topical Relations (R top ): Sentence-Sentence Topical Consistency connections model the topical consistency between a pair of sentences by constructing sentence-level topical representations via latent Dirichlet allocation (Blei et al., 2003). Given a pair of sentences s i and s j , we extract a latent topic distribution lda i , lda j ∈ R l for each sentence, and the two sentences are joined if the Hellinger distance H(lda i , lda j ) between them is greater than 0.5. (3) Entity-centric Relations (R ent ): Sentence-Sentence Entity Overlap connections explicitly model the sentence-level interactions between entity spans by adding an undirected edge between two sentence nodes if they share one or more named entities.
Further, Sentence-Sentence Entity Coreference connections join two sentences by an undirected edge if the sentences share mentions referring to the same real world entity. (4) Concept-Similarity Relations (R sim ): Sentences conceptually similar to other sentences and the hypothesis are connected to each other to account for presence of related events and topics in two sentences. We propose Sentence-Sentence ConceptNet Similarity using ConceptNet Numberbatch (CN). Let A cn i = [a 1 , a 2 , · · · , a l ] be the Con-ceptNet Numberbatch embeddings for the words in sentence s i = [w 1 , w 2 , · · · , w M ] respectively. Here, if a word w q does not have its corresponding embedding in CN, we simply set its vector a q to zero. Further, we introduce Hypothesis-Sentence Knowledge Similarity (using KnowBert embedding) connections that add weighted undirected edges between sentence-sentence and hypothesissentence node pairs, respectively. KnowBert representations are obtained by encoding text using the pre-trained KnowBert language model as A kbrt . The edge weights ε ( i, j) between the input vector pairs (a i , a j ) is cosine similarity between the knowledge-based semantic embeddings of the input texts. Paragraph Pruning using Subgraph Pooling: Long documents are structured as a sequence of paragraphs such that each paragraph may be topically coherent to itself and neighboring paragraphs. As such, paragraphs unrelated to a given hypothesis may be ignored to reduce distractor cues. Graph pooling (Grattarola et al., 2021) is a popular method for graph coarsening. Unlike previous methods such as gPool (Gao and Ji, 2019) and SAGPool (Lee et al., 2019) that pool entire graph, we propose attention-based Subgraph Pooling layer which can select top rank nodes from a predefined subset of nodes in the graph. Subgraph Pooling layer can selectively drop irrelevant paragraph nodes while retaining the remaining paragraph nodes, their corresponding sentence nodes and the hypothesis node in the graph. Suppose there are N nodes in document graph D G with node embedding of size C with adjacency matrix A ∈ ℜ N xN and feature matrix X ∈ ℜ N xC .We apply GAT (Veličković et al., 2017) over D G to obtain self-attention scores Z for all nodes. The pooling ratio η is a hyperparameter that determines the number of paragraph nodes to keep based on the value of Z. We want to select the top-rank nodes only from the set of paragraph nodes. Hence, we use a hard mask µ = {1|x i ∈ P ∀X; 0} that is 1 for all paragraph nodes P , otherwise zero. We perform an element-wise multiplication (⊙) between attention scores and mask values to get a soft mask Z P = Z ⊙ µ. Top-rank operation ranks returns the indices of top η paragraphs based on Z P . Node indices corresponding to the set of selected topη paragraphs added to the set of sentence nodes minus those belonging to the pruned paragraphs (idx S−S P −Pη ) and hypothesis (idx H ) are selected as follows: idx = top-rank(Z P , η)+idx S−S P −Pη + idx H . The combined index tensor (idx) contains the indices of all the nodes selected in the final graph D ′ G . X ′ (idx, :) and A = A(idx, idx) perform the row and/or column extraction to form the adjacency matrix and the feature matrix of D ′ G . 
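A minimal sketch of the masked top-rank selection just described is given below, assuming plain NumPy arrays for the attention scores Z, adjacency matrix A, and feature matrix X. The function name, the toy node indices, and the gating of kept node features by their attention scores (described next in the text) are illustrative assumptions, not the authors' implementation.

import numpy as np

def subgraph_pool(Z, A, X, paragraph_idx, sentence_idx, sent2par, hypothesis_idx, eta):
    """Keep the top-eta fraction of paragraph nodes by attention score, plus the
    sentences of surviving paragraphs and the hypothesis node; return pruned X, A."""
    Z = np.asarray(Z, dtype=float)
    mask = np.zeros_like(Z)
    mask[paragraph_idx] = 1.0                          # hard mask over paragraph nodes
    Z_p = Z * mask                                     # soft mask
    k = max(1, int(round(eta * len(paragraph_idx))))   # number of paragraphs to keep
    order = np.argsort(Z_p[paragraph_idx])[::-1][:k]   # top-rank paragraph positions
    kept_paragraphs = set(np.asarray(paragraph_idx)[order].tolist())
    kept_sentences = [s for s in sentence_idx if sent2par[s] in kept_paragraphs]
    idx = np.array(sorted(kept_paragraphs) + kept_sentences + [hypothesis_idx])
    X_prime = X[idx, :] * Z[idx, None]                 # gate kept features by attention
    A_prime = A[np.ix_(idx, idx)]                      # row/column extraction
    return X_prime, A_prime, idx

# Toy example: 2 paragraphs (nodes 0-1), 4 sentences (2-5), hypothesis (6), eta = 0.5
Z = np.array([0.9, 0.2, 0.8, 0.7, 0.3, 0.1, 1.0])
A = np.eye(7); X = np.random.randn(7, 4)
Xp, Ap, idx = subgraph_pool(Z, A, X, paragraph_idx=[0, 1], sentence_idx=[2, 3, 4, 5],
                            sent2par={2: 0, 3: 0, 4: 1, 5: 1}, hypothesis_idx=6, eta=0.5)
print(idx)   # paragraph 0 survives, so sentences 2 and 3 and the hypothesis are kept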
The attention scores for the selected nodes Z idx act as gating weights for the node features after filtering, which controls the information flow and makes the whole procedure trainable by back-propagation.
Optimal Evidence Selection: We design a sequential process for sentence selection such that at step k + 1 (k ≥ 0), a sentence s k+1 i is chosen which has not been selected previously in the evidence set E k = {s 1 * , · · · , s k * } at step k. We employ a Long Short-Term Memory network (LSTM) over the previously selected k sentences to select a relevant sentence at time step k + 1. At step 0, the initial hidden state h 0 of the LSTM is set to zero. At step k + 1, we use the hidden state h k of the LSTM from the prior step to assign a score sc k+1 i to each sentence node s i ∈ S − E k . The sentence with the highest selection score is considered for selection at this step, where the score is produced by FFN, a two-layer feed-forward network. In particular, if selecting s k+1 * causes the number of words in the selected sentences so far to exceed the context encoder length limit (e.g., 512 tokens for BERT), the selection process stops and s k+1 * is not included in the evidence set E (i.e., E = {s 1 * , · · · , s k * } in this case). Otherwise, the selection process continues to the next step and s k+1 * is chosen and included in E (i.e., E = {s 1 * , · · · , s k+1 * }). The hidden state of the LSTM is also updated for the current step, i.e., h k+1 = LSTM(h k , x k+1 * ), to prepare for the continuation of sentence selection.
Evidence Selection Reward Function: In order to train the evidence selection module, we employ the REINFORCE algorithm (Williams, 1992) and incorporate the following information signals in the reward function of REINFORCE to better supervise the training process: (1) Task Reward ϕ perf : We compute this reward based on the NLI task prediction performance. In order to measure the impact of the selected context, we use a T5 model (Raffel et al., 2019a) pretrained on the MNLI corpus (Williams et al., 2017) to predict the NLI label for the given hypothesis + context pair. ϕ perf (E) is set to 1 if the final prediction is correct, and 0 otherwise. (2) Semantic Reward ϕ sem : We propose that the evidence sentences should be semantically similar to the hypothesis. Our motivation is that similar context sentences (e.g., discussing the same events or entities) provide more relevant information for the NLI prediction. We include the semantic similarity between the selected evidence sentences in E and the hypothesis as measured by the cosine similarity between their sentence embeddings computed using SimCSE (Gao et al., 2021). (3) Evidence Reward ϕ bleu : We seek to promote evidence sentences having a high overlap with the target ground-truth evidence. In many cases, the target evidence length may be much shorter than the 512-token limit. Hence, our motivation is to reward the lexical overlap while penalizing verbosity arising at the evidence selection stage. We calculate the BLEU score between the selected evidence E and the ground-truth evidence E gt : ϕ bleu = BLEU(E, E gt ). This reward can only be applied in cases where ground-truth evidence annotation is present.
(4) Multihop Reward ϕ mhop : The motivation for this reward is that a sentence should be preferred to be included in E by the selection process if there are common entities mentions with the hypothesis. Moreover, connected sentences by the virtue of common entity mentions are more likely to refer to the same events. Hence, we leverage the subgraph similarity of the learned node embeddings of the selected evidence and their first degree node connections through entity-centric relations with the hypothesis node in G ′′ D . We perform maxpooling operation over the concatenated node embeddings of the corresponding evidence sentences and their first degree node connections joined by R ent :Ê = maxpool(v 1 v 2 , · · · , v k |s i ∈ E, i ∈ {1, · · · , k}), where means embedding concatenation. Finally, we compute the dot-product betweenÊ and node embedding of the hypothesis node h as ϕ mhop =Ê.h. NLI Prediction Loss: We combine the final representations corresponding to the learnt graph structure (g out ) and selected evidence text (t out ). We aggregate the embeddings corresponding to the selected sentence nodes in D ′′ G and the hypothesis node using a summation-based graph-level readout function (Xu et al., 2018) as The words in the evidence sentences are joined in order of their appearance in document D and input to the context encoder t out = Encoder([CLS]s 1 ; s 2 , · · · , s k ). g out and t out are concatenated and passed through two dense fully-connected layers: z = ReLU (Dense(t out g out ). This is followed by a Softmax layer to predict entailment/contradiction/neutral by utilizing the negative log-likelihood loss: Lpred = −P (y|z). Evidence Selection Loss: The overall reward function to train our evidence selection module is ϕ(E) = ϕ perf + ϕ sem + ϕ bleu + ϕ mhop . Using REINFORCE, we seek to minimize the negative expected reward ϕ(E) over the possible choices of E as Finally, the probability of the selected sequence E is computed via P (E|H, D) = k=0,··· ,K−1 P (s k+1 * |H, D, s i≤k * ), which is obtained via softmax over selection scores for sentences in S at selection step k + 1. Joint NLI Prediction and Evidence Selection: During training, the NLI prediction model M N LI and the evidence selection module E N LI are trained alternatively. At each update step, E N LI first selects optimal evidence sentences E that form a shortened document D e . M N LI uses E to predict the NLI label. The parameters of M N LI are updated using the gradient of NLI prediction loss Lpred , keeping parameters of evidence extraction module constant. Next, the parameters of the evidence selection module are updated using the gradient of Lsent , keeping parameters of M N LI constant. This process repeats until convergence. At test time, evidence sentences are first selected and then consumed by the prediction model to perform NLI prediction. Datasets for Document-level NLI We use the following three datasets to benchmark document-level NLI approaches. (1) DocNLI (Yin et al., 2021): A large-scale document-level NLI dataset obtained by reformatting mainstream NLP tasks such as question answering and document summarization. (3) ConTRoL (Liu et al., 2021): A passage-level NLI dataset of exam questions that requires logical, analytical, temporal, coreferential reasoning, and information integration over multiple premise sentences. 
Datasets for Document-level NLI: We use the following datasets to benchmark document-level NLI approaches; the first three are existing benchmarks and the fourth is introduced in this paper. (1) DocNLI (Yin et al., 2021): a large-scale document-level NLI dataset obtained by reformatting mainstream NLP tasks such as question answering and document summarization. (2) ContractNLI (Koreeda and Manning, 2021): a document-level NLI dataset in which a fixed set of hypotheses about non-disclosure agreements is verified against the full contract, with evidence-span annotations. (3) ConTRoL (Liu et al., 2021): a passage-level NLI dataset of exam questions that requires logical, analytical, temporal, and coreferential reasoning, and information integration over multiple premise sentences. (4) CaseHoldNLI: the fourth, novel NLI dataset introduced in this paper, in the legal judicial reasoning domain, for identifying the governing legal rule (the "holding") applied to a particular set of facts. It is sourced from the CaseHOLD dataset (Zheng et al., 2021), which comprises over 53,000 multiple-choice questions; each question consists of a snippet from a judicial decision along with 5 semantically similar candidate holdings, of which only one is correct. We obtain the NLI version by combining the question with the positive (negative) answer candidate to form a positive (negative) hypothesis (a minimal sketch of this conversion is given at the end of this section). To evaluate dataset quality, we asked an expert to predict the NLI label using only the hypothesis for 10% of the test data, sampled at random; the poor performance of this human baseline (∼0.24 F1) indicates that the dataset does not suffer from hypothesis bias. CaseHoldNLI is comparable to other challenging document-level NLI datasets, with document-scale average premise lengths that exceed the maximum input length of BERT models. We report train/dev/test splits for each dataset. Experiments on Downstream Tasks: (1) Fact Verification: the NLI version of FEVER (Thorne et al., 2018) considers each claim as a hypothesis, with premises consisting of the ground-truth textual evidence and other randomly sampled related text. (2) Multi-choice Question Answering: the NLI version of MCTest (Richardson et al., 2013) combines the question with the positive (negative) answer candidate to form a positive (negative) hypothesis. The limited labeled data makes both tasks good benchmarks for investigating the performance of document-level NLI models on annotation-scarce tasks. We evaluate DocInfer trained on the DocNLI dataset and report F1 scores for both tasks, following the "FEVER-binary" and "MCTest-NLI" settings proposed in Yin et al. (2021). (3) Contract Clause Retrieval (Borchmann et al., 2020): the task of identifying spans in a target document that represent clauses analogous (i.e., semantically and functionally equivalent) to seed clauses provided from source documents. We reformulate this as an NLI task in which the seed clauses are concatenated to form the hypothesis and the target document is the premise, and we test the evidence selection capabilities of DocInfer trained on ContractNLI for identifying relevant sentence-level spans in the premise. The dataset has 1,300 examples each for validation and test, used to tune and test the paragraph selection hyperparameter η. We follow the evaluation framework of Borchmann et al. (2020) in the few-shot (1-5) setting and report the Soft F1 score. Results: Tables 1-4 compare the performance of DocInfer against other baselines on the DocNLI, ContractNLI, ConTRoL, and CaseHoldNLI datasets. The per-model scores recovered from the table that was flattened into the text here (two scores per model) are: BERT base (Devlin et al., 2018): 63.1 / 60.1; BERT large (Devlin et al., 2018): 63.5 / 61.1; RoBERTa base: 61.0 / 59.5; RoBERTa large: 63.1 / 61.3; T5 (Raffel et al., 2019b): 62.9 / 61.1; Longformer (Beltagy et al., 2020): 46.1 / 44.4; GEAR (Zhou et al., 2019): 67.8 / 63.3; KGAT: 68.5 / 64.8; HESM (Subramanian and Lee, 2020): 68.9 / 65.0; DREAM (Zhong et al., 2020): 69.7 / 65.9; TARSA (Si et al., 2021): 70.4 / 66.4; EvidenceNet (Chen et al., 2022): 72.6 / 68.5; DocInfer, ours (w/ RoBERTa): 75.5 / 72.3. Similar to Yin et al. (2021), we truncate the hypothesis-premise sequence to the appropriate maximum input length for the Transformer models. Among the plain Transformer language models, BERT (Devlin et al., 2018), RoBERTa, DeBERTa (He et al., 2020), and BART (Lewis et al., 2020) show the strongest performance on the DocNLI, ContractNLI, and ConTRoL datasets.
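Below is a minimal sketch of the multiple-choice-to-NLI conversion used for CaseHoldNLI (and, analogously, MCTest) as described above; the field names are illustrative and do not reflect the datasets' actual schemas.

```python
def mcq_to_nli(example):
    """Convert one multiple-choice item into NLI (premise, hypothesis, label) pairs:
    the passage or judicial-decision snippet is the premise, and each answer
    candidate appended to the question forms a hypothesis, labelled as entailed
    only for the correct candidate."""
    premise = example["context"]
    pairs = []
    for i, candidate in enumerate(example["candidates"]):
        hypothesis = f'{example["question"]} {candidate}'.strip()
        label = "entailment" if i == example["answer_idx"] else "not_entailment"
        pairs.append({"premise": premise, "hypothesis": hypothesis, "label": label})
    return pairs
```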
Legal-BERT (Chalkidis et al., 2020) outperforms other Transformer language models on Case-HoldNLI dataset due to its high domain-specificity of legal language. However, they are challenged by their input length restriction of 512 tokens for contextually reasoning over long premise lengths. Consistent with observations of (Yin et al., 2021), large input Transformer models such as Longformer (Beltagy et al., 2020) and BigBird (Zaheer et al., 2020) that can handle up to 4096 tokens underperform traditional BERT-like models on all four datasets. We attribute this to the presence of distractors in long documents and the inability of these models to reason in a multihop fashion. BART-NLI which is pretrained on sentence-level NLI (Liu et al., 2021) improves over naive Transformers but still struggles due to limited captured context. We also re-purpose several strong baseline methods from the Fact Extraction and Verification (FEVER 1.0) task. by reformulating the document retrieval and claim verification steps to paragraph retrieval and textual entailment, respectively. GEAR, KGAT, and HGRGA model the document as a dense fully-connected graph, leading to distractor interactions confounding the reasoning process. They are also devoid of linguistic information about entities, topics or commonsense knowledge. HESM uses document chunking which hinders contextual reasoning for far-away chunks. DREAM and TARSA use semantic role labeling and topic modeling, respectively, to identify phrase interaction but lack entity-level information required to resolve coreferences across document. EvidenceNet and SpanNLI emerge as strong baseline models for our work. DocInfer outperforms SpanNLI and EvidenceNet due to its ability to iteratively select important evidence sentences in the premise and simultaneously utilize multihop interactions between related evidences. Impact of Input Length: DocInfer achieves SOTA performance on all four datasets and maintains steady improvements over corresponding baseline models with increasing in input lengths. Choice of context encoder in NLI prediction: One of the merits of the our approach is that it is extensible and can utilize any domainspecific transformer language models for context encoding to further augment performance. We evaluate the choice of context encoder for different datasets. DocInfer gives SOTA performance using RoBERTa for DocNLI, BERT for ContractNLI, BART for ConTRol, and Legal-BERT for Case-HoldNLI, in the prediction model. Table 5 shows ablations for the document graph relations, module components and reward functions. We observe that concept relation is critical in all data settings due to the need for external knowledge-based semantic representation for connecting related concepts across sentences. Removing any of the relations does not degrade the performance below Evi-denceNet (Chen et al., 2022) or SpanNLI baselines. This is important for adapting our method to new domains where existing linguistic parsers maybe noisy or non-existent. Cells in Table 5 highlighted in red shows the ablation of individual components such that removing paragraph pruning mechanism severely deteriorates model performance as the model has to evaluate an exponentially larger number of candidate evidences during evidence selection stage. In absence of optimal evidence selection, we treat evidence extraction as a binary classification task over each sentence node along with NLI label given by the "readout" function similar to KGAT . 
The severe performance drop of the DocInfer model in the absence of the evidence selection component highlights its importance for the document-level NLI task. The asynchronous graph update adds incremental value to DocInfer owing to its relation-specific message passing. The evidence selection and paragraph pruning components are the most critical for DocInfer's SOTA performance, and greedy selection instead of REINFORCE significantly decreases performance. Concept relations are the most beneficial for DocInfer, followed by topical and entity relations. Impact of reward function: The evidence, semantic, multihop, and task rewards help most on ContractNLI, ConTRoL, DocNLI, and CaseHoldNLI, respectively. Table 5 shows that removing any reward component (i.e., task, semantic, evidence, multihop) significantly hurts the overall performance, clearly demonstrating their individual importance. To assess the necessity of multi-step selection with REINFORCE, we eliminate the multi-step selection strategy and perform one-shot sentence selection, in which the top k sentences with the highest selection scores from the first step are selected. We call this setting greedy evidence selection and show that eliminating multi-step selection drops performance, suggesting that selecting sentences incrementally, conditioning on the previously selected sentences, is advantageous (a minimal sketch of this ablation is given below). Performance of DocInfer on downstream tasks: Table 6 shows the evaluation of DocInfer alongside the RoBERTa-large and EvidenceNet (Chen et al., 2022) baselines and the RoBERTa model from Yin et al. (2021) on the FEVER-binary and MCTest tasks. We train all models on the DocNLI dataset to benefit from cross-task transfer and to minimize domain shift, and then run inference in two settings: (i) without task-specific fine-tuning, and (ii) with fine-tuning on the end task. DocInfer consistently outperforms the baselines across both tasks, both without fine-tuning (FEVER-binary: +0.8 F1, MCTest v160: +1 F1, MCTest v500: +0.6 F1) and with fine-tuning (FEVER-binary: +0.9 F1, MCTest v160: +0.5 F1, MCTest v500: +0.2 F1). We observe that both tasks require the models to capture topic coherence, knowledge-based semantics, and entity interactions, as removing the graph relations severely degrades performance. Evidence selection for clause retrieval focuses on selecting evidence spans in the target document (premise) given the entailment relation with the seed clauses (hypothesis). The task is unsupervised in nature (it has no training set). We test the evidence selection module (E_NLI) of the DocInfer model and its ablated variants (without paragraph pruning and without the reward functions), all pre-trained on the ContractNLI dataset. Table 7 shows that the DocInfer model with BERT as the context encoder outperforms strong baselines by approximately 5%. Removing paragraph pruning significantly degrades performance, highlighting the need to prune distractor paragraphs when retrieving relevant information, and each reward function is needed to maintain DocInfer's performance, indicating the linguistic importance of each reward. Formulating the task as NLI helps contextualize the seed clauses with the premise, as opposed to the earlier techniques of isolated vectorization and naive aggregation of Borchmann et al. (2020). Qualitative Analysis: Figure 2 shows a qualitative analysis across different reasoning types on the test set of the ConTRoL dataset. The results provide evidence that the multihop and semantic similarity rewards are important for coreference reasoning (CR), which requires reasoning over multiple mentions and noun phrases.
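For reference, a minimal sketch of the greedy one-shot ablation contrasted above (top-k sentences by first-step score, with no recurrence over previously selected sentences and no REINFORCE sampling); `score_fn` is a placeholder for the first-step scorer, not the authors' API.

```python
import torch

def greedy_select(sentence_reprs, k, score_fn):
    """One-shot 'greedy evidence selection': score every sentence once and keep the top-k."""
    scores = score_fn(sentence_reprs)          # (num_sentences,) selection scores from step 1
    k = min(k, scores.numel())
    return torch.topk(scores, k=k).indices.tolist()
```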
Multihop reward also helps improve information integration (II), which requires combining information from multiple paragraphs. Task reward benefits logical reasoning, as it focuses on logical inference over human language. DocInfer is unable to handle temporal and analytical reasoning cases. We further analyze the evidence extraction behaviour in the appendix. A Limitations: Through careful analysis of error cases, we found two main types of prediction errors from the proposed model. First, the model is unable to reason over temporal and causal aspects. For example, consider the hypothesis "Repayment terms will be finalized before disbursement but prior to loan approval" while the evidence states "Repayment terms are subject to loan approval and monthly disbursement of interest amount": DocInfer does not recognize that there is a temporal order between the events "repayment", "approval", and "disbursement". Tackling this type of error requires temporal relation prediction between different events. The second type of error is mainly due to contradictory or missing information in the retrieved evidence required for analytical inference. For example, the model predicts that the hypothesis "Insurance prices are all time high" contradicts the evidence sentences "Insurance prices increase with increase in pollution" and "Protest marches for restoring pollution control board were censored". The model prunes a relevant piece of evidence, "Pollution control board tabled policy for curbing air contamination in residential areas", because it lies in an otherwise irrelevant paragraph, which causes a loss of logical flow. Potential Risks: Our models are exploratory and academic in nature and should not be used for real-world legal/contractual/healthcare purposes without extensive investigation of their shortcomings, randomness, and biases. Unhandled Cases: The current work is limited to the English language and would need suitable tools in other languages to process semantic similarity, concept knowledge, and topic models. Moreover, our method has been tested on the limited domains of Wikipedia text, narrative stories, exam-style questions, case law, and contracts; applying it to life-critical scenarios such as healthcare or public safety will need further investigation. B Ethics Statement: We utilize three publicly available datasets, DocNLI, ContractNLI, and ConTRoL, for evaluating document-level NLI. We also curated a dataset for document-level NLI on legal judicial case documents, sourcing the documents from a publicly available resource, the CaseHOLD dataset (Zheng et al., 2021); we repurpose the dataset for our task and provide new annotations. The CaseHoldNLI dataset does not violate any privacy, as these documents are already in the public domain as part of the Harvard Case Law Corpus. There is no human bias involved in such documents, as they were already expertly annotated and released openly after anonymizing any identifiable information. These documents do not restrict reuse for academic purposes, and any personal information was already redacted before their original release. All documents and experiments are restricted to the English language. The FEVER-binary, MCTest, and Contract Discovery datasets are also publicly available for research purposes. No sensitive data were involved in the studies.
C Impact of Input Length: DocInfer achieves SOTA performance on all four datasets and maintains steady improvements over the corresponding baseline models as input length increases; see the performance-versus-premise-length comparison for the ConTRoL, DocNLI, ContractNLI, and CaseHoldNLI datasets. D.1 Fact Verification: The NLI version of the FEVER (Thorne et al., 2018) task, released by Nie et al. (2019), considers each claim as a hypothesis, while the premises consist of ground-truth textual evidence and some other randomly sampled related text. Yin et al. (2021) combined the "refute" and "not-enough-info" labels into a single "not entail" class, organized the dataset into a train/dev/test split of 203,152/8,209/10,000, and renamed it "FEVER-binary". E Data Statistics: We present the dataset statistics in Table 8. A1: Limitations: Through careful analysis of error cases, we found two main types of prediction errors from the proposed model: (1) inability to reason over temporal and causal relations; (2) contradictory or missing information in the retrieved evidence required for analytical inference. A2: Potential Risks: Our models are exploratory and academic in nature and should not be used for real-world legal/contractual/healthcare purposes without extensive investigation of their shortcomings, randomness, and biases. B3: Intended use of data artifacts: The intended use of NLI datasets is to improve NLP reasoning and the semantic understanding of text and languages. Use cases in the legal and contract domains can increase accessibility for non-experts and contribute to AI for social good. B4: Steps taken to protect/anonymize names, identities of individual people, or offensive content: We do not use any identifiable user data in any experiment. All persons mentioned in the datasets are anonymous or have their information publicly available. B5: Coverage of domains, languages, linguistic phenomena, and demographic groups represented in the data: Our work uses NLI datasets drawn from Wikipedia, story narrations, exam questions, contracts, and case law, in the English language. Adaptation to other languages may need appropriate processing. B6: Data statistics (train/test/dev splits): The data statistics are given in Table 8. E.1 Training Setup. Hyperparameters: We tune the hyperparameters of the proposed model using a grid search; all hyperparameters are selected based on F1 scores on the development set to find the best configuration for each dataset. In our model, we use BERT-base to encode document embeddings, an LSTM, and two-layer feed-forward networks. The trade-off parameters α, β, and γ are set to 0.5, 0.1, and 0.05, respectively. We use the Adam optimizer with a batch size of 16 during training, and we vary the paragraph selection parameter η between 1 and 10. The hyperparameter ranges are: size of the hidden layers in the LSTM {100, 200, 300}, size of the hidden layers in the FFN {100, 200, 300, 400}, the BERT embedding size, dropout δ ∈ {0.2, 0.3, 0.4, 0.5, 0.6}, learning rate λ ∈ {1e−5, 2e−5, 3e−5, 4e−5, 5e−5}, weight decay ω ∈ {1e−6, 1e−5, 1e−4, 1e−3}, batch size b ∈ {16, 32, 64}, and number of epochs (≤ 100). Contextual Encoder: We used BERT-base-uncased to generate token embeddings of size 1 × 768. As the BERT-base Transformer provided a stronger baseline than RoBERTa, we used the BERT Transformer as the contextual encoder in the DocInfer architecture.
We use the default dropout rate (0.1) on BERT's self-attention layers but do not use additional dropout on the top linear layer. The output of the contextual encoder is a 1-D vector of size 768. Loss Function and Inference: DocInfer is trained end to end using a cross-entropy loss for the context encoder and a REINFORCE loss for evidence selection, with the Adam optimizer. Across all four datasets, we found that the best results correspond to Adam with the default values β1 = 0.9, β2 = 0.999, ε = 1e−8, a weight decay of 5e−4, and an initial learning rate of 0.001. We evaluate the performance of NLI prediction with the following metrics for each dataset: • DocNLI: dev F1 and test F1. • ContractNLI: NLI label using accuracy (%), F1 (entails), and F1 (contradicts); evidence extraction using mAP and PR@80 precision/recall scores. C4: Implementation Software and Packages: We implemented our solution in Python 3.6 using the PyTorch framework. We used the following libraries and modules: • Huggingface's implementations of the BERT/RoBERTa/BART/Legal-BERT/T5/Longformer/BigBird transformers.
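A minimal sketch of the alternating update scheme and the optimizer settings reported above (Adam with β1 = 0.9, β2 = 0.999, ε = 1e−8, weight decay 5e−4, initial learning rate 1e−3; cross-entropy for the predictor, REINFORCE for the selector). The `nli_model` and `selector` objects and their `loss`/`reward` methods are placeholders, not the authors' API.

```python
import torch

def make_optimizers(nli_model, selector):
    # Adam settings reported above: betas (0.9, 0.999), eps 1e-8, weight decay 5e-4, lr 1e-3.
    kwargs = dict(lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=5e-4)
    return (torch.optim.Adam(nli_model.parameters(), **kwargs),
            torch.optim.Adam(selector.parameters(), **kwargs))

def alternating_step(batch, nli_model, selector, opt_nli, opt_sel):
    # (1) Select evidence, then update the NLI predictor with the selector frozen.
    evidence, log_probs = selector(batch)              # placeholder module call
    loss_pred = nli_model.loss(batch, evidence)        # cross-entropy / NLL (placeholder)
    opt_nli.zero_grad(); loss_pred.backward(); opt_nli.step()

    # (2) Update the selector with REINFORCE, keeping the predictor frozen.
    with torch.no_grad():
        reward = nli_model.reward(batch, evidence)     # phi(E), as in the reward section (placeholder)
    loss_sel = -reward * torch.stack(log_probs).sum()  # REINFORCE surrogate loss
    opt_sel.zero_grad(); loss_sel.backward(); opt_sel.step()
```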
2023-02-02T14:05:07.134Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "4430cb7ddb3c4a9860ddabf4f92568a8a03c2b18", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "4430cb7ddb3c4a9860ddabf4f92568a8a03c2b18", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
119140228
pes2o/s2orc
v3-fos-license
Concordances from differences of torus knots to $L$-space knots It is known that connected sums of positive torus knots are not concordant to $L$-space knots. Here we consider differences of torus knots. The main result states that the subgroup of the concordance group generated by two positive torus knots contains no nontrivial $L$-space knots other than the torus knots themselves. Generalizations to subgroups generated by more than two torus knots are also considered. Introduction In [4], Krcatovich showed that all L-space knots are prime. Thus no nontrivial connected sum of knots is an L-space knot. We consider instead concordances from knots to L-space knots. In [12], Zemke uses involutive knot Floer homology to obstruct a knot from being concordant to an L-space knot. He gives a few examples of this obstruction for sums and differences of torus knots. Expanding on this, Livingston [6], showed that a connected sum of positive torus knots is not concordant to an L-space knot using the Ozsváth-Szabó tau invariant, the Alexander polynomial, and the knot signature. In his paper, Livingston discusses a few examples in which his strategy does or does not apply to showing that differences of torus knots are not concordant to L-space knots. We show the following: Theorem 1.1. If the connected sum of distinct positive torus knots mT (p, q) # nT (r, s) is concordant to an L-space knot, then either m = 0 and n = 1 or m = 1 and n = 0. The proof uses an approach similar to that of Livingston, with two additions: • properties of the Ozsváth-Stipsicz-Szabó Upsilon invariant, • and the result of Hedden and Watson [3] that the leading terms of the Alexander polynomial of an L-space knot of genus g must be t 2g − t 2g−1 . More generally, we make the following conjecture: Conjecture 1.2. If a connected sum of torus knots is concordant to an L-space knot, then it is concordant to a positive torus knot. Note that Litherland [5] proved that torus knots are linearly independent in the concordance group. So a connected sum of torus knots is concordant to a positive torus knot T (p, q) only if it is of the form mT (p, q) # (m − 1)T (p, q) where m ≥ 1. As progress towards this conjecture, we give conditions under which a connected sum of torus knots is not concordant to an L-space knot. These conditions involve all of the aforementioned invariants, as well as the relations among Upsilon functions for torus knots discovered by Feller and Krcatovich [2]. Notation. Throughout this paper, all torus knots T (a, b) considered are such that 1 < a < b and gcd(a, b) = 1. Preliminaries In this section, we gather some useful facts and references concerning L-space knots, torus knots, the Ozsváth-Szabó tau invariant, the Alexander polynomial, the Ozsváth-Stipsicz-Szabó Upsilon invariant, and the Levine-Tristram knot signature. In [9], Ozsváth and Szabó introduced the Heegaard Floer invariant HF (Y ) which associates a graded abelian group to a closed 3-manifold Y . A rational homology 3-sphere Y is called an L-space if rank HF (Y ) = |H 1 (Y ; Z)| (see [10]). A knot K is called an L-space knot if it admits a positive L-space surgery. Since lens spaces are L-spaces, positive torus knots are L-space knots. The Heegaard-Floer knot complex CFK ∞ (K) was introduced in [8]. 
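For concreteness, the classical closed form for the Alexander polynomial of a positive torus knot is recorded below as background (it is standard and not a result of this paper); its breadth (p − 1)(q − 1) is twice the Seifert genus, and the trefoil T(2,3) illustrates the alternating-sign coefficient pattern of L-space knots discussed next.

```latex
% Standard closed form, with the trefoil as a worked example (background only).
\[
  \Delta_{T(p,q)}(t) \;=\; \frac{(t^{pq}-1)(t-1)}{(t^{p}-1)(t^{q}-1)},
  \qquad\text{e.g.}\qquad
  \Delta_{T(2,3)}(t) \;=\; \frac{(t^{6}-1)(t-1)}{(t^{2}-1)(t^{3}-1)} \;=\; t^{2}-t+1 .
\]
```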
For L-space knots, the complex CFK ∞ (K) is always a staircase complex where the height and width of each step is determined by the gaps in the exponents of the Alexander polynomial of K; the Alexander polynomial can be written as where the indices alternate between horizontal and vertical steps. For more details, see [10] and [1]. In [11], Ozsváth, Stipsicz, and Szabó defined the U psilon invariant, Υ K (t), a piecewiselinear function with domain [0, 2]. See [11] for explicit computations for the family of knots T (p, p + 1) and for formulas for computing Υ for L-space knots from CFK ∞ . The derivative Υ ′ K (t) is piecewise-constant with singularities at t-values where the slope changes in Υ K (t). See Livingston [7] for results on computing Υ ′ K (t) from CFK ∞ (K). Finally, consider the Levine-Tristram signature function σ K (t). The signature function is piecewise-constant and integer-valued with possible jumps occurring at zeroes of the Alexander polynomial of K. Livingston in [6] showed that if the cyclotomic polynomial (φ c (t)) k divides ∆ T (p,q) (t), then σ T (p,q) (t) jumps by −2k at t = 1/c. We use this fact along with the following to prove Theorem 1.1. Proof of the main theorem We break the proof into several smaller propositions. Note that if J is concordant to K, then τ (K) = τ (J), σ K (t) = σ J (t), and Υ K (t) = Υ J (t), as these are all concordance invariants. Livingston's result in [6] includes the cases where both m and n are nonnegative, so we need only show the result for at most one of m, n positive. The first case is easy; we use the Ozsváth-Szabó tau invariant to rule out the case where m, n ≤ 0. Proof. If m = n = 0, then K is the unknot. So assume that m, n ≤ 0 with at most one of m, n equal to 0. Suppose that J is a nontrivial L-space knot concordant to K and consider the Ozsváth-Szabó tau invariant, τ (K). Recall that 0 < p < q and 0 < r < s. Note that since the knots J, T (p, q), and T (r, s) are nontrivial L-space knots, τ (J), τ (T (p, q)), and τ (T (r, s)) are positive by Theorem 2.1 (a). However, since J is concordant to K, by the additivity of τ under forming connected sums, we have that The remaining cases are those where K = mT (p, q) # nT (r, s) and m · n < 0. Without loss of generality, we will assume that m > 0 and n < 0. For ease of notation, we write this as K = mT (p, q) # − nT (r, s) with m, n > 0. Next, we use the Ozsváth-Stipsicz-Szabó Upsilon invariant to rule out the case where r > p. Proof. Suppose that J is a L-space knot concordant to K and consider the Ozsváth-Stipsicz-Szabó Upsilon invariant, Υ K (t). Recall that 0 < p < q and 0 < r < s. Since the knots J, T (p, q), and T (r, s) are all nontrivial L-space knots, Υ ′ J (t), Υ ′ T (p,q) (t), and Υ ′ T (r,s) (t) must be increasing. Analyzing CFK ∞ (T (p, q)) and CFK ∞ (T (r, s)), we see that Υ ′ T (p,q) (t) has its first jump at t = 2/p and Υ ′ T (r,s) (t) has its first jump at t = 2/r. Since Υ is additive under connected sum, Υ ′ K (t) has its first jump at t = min{2/p, For the remaining cases, we rely heavily on the work of Livingston in [6] for analyzing the relationship between the Alexander polynomial and the Levine-Tristram signature function of connected sums of torus knots. Proposition 3.3. If the knot K = mT (p, q) # − nT (r, s), where m, n ≥ 1 and p ≤ r, is concordant to an L-space knot J, then rs divides pq and Proof of Proposition 3.3. Suppose that J is an L-space knot concordant to K. Recall that 0 < p < q and 0 < r < s. Consider the Alexander polynomial of K. 
It is a product of cyclotomic polynomials φ c (t): Suppose there exists c such that φ c (t) | ∆ T (r,s) (t) and φ c (t) | ∆ T (p,q) (t) . By Livingston [6], this implies that the Levine-Tristram signature function of K, σ K (t), jumps by −2(m−n) at t = 1/c. On the one hand, we know that for L-space knots the degree of the Alexander polynomial is equal to twice the τ invariant of the knot. So, since J is concordant to K, we have that On the other hand, we have which is a contradiction. Thus it must be that ∆ T (r,s) (t) | ∆ T (p,q) (t). Note that this implies that rs divides pq. Also, ∆ T (r,s) (t) | ∆ T (p,q) (t) implies that σ K (t) jumps by −2m at 1/c for each φ c (t) dividing ∆ T (p,q) (t) and not dividing ∆ T (r,s) (t) and jumps by −2(m − n) at 1/c for each φ c (t) dividing both. So it must be that By the argument above involving the degrees of the polynomials, we find that Proof of Corollary 3.4. Suppose that J is an L-space knot concordant to K. Then by Proposition 3.3, we know that In [3], Hedden and Watson show that the Alexander polynomial of an L-space knot of genus g must have highest degree terms t 2g − t 2g−1 . The symmetry of the Alexander polynomial then implies that the Alexander polynomial of an L-space knot must have lowest degree terms 1 − t. Since T (p, q), T (r, s), and J are all L-space knots, Equation 3.1 implies that Rearranging and expanding, we see that So it must be that m = n + 1. Proof of Theorem 1.1. By Propositions 3.1, 3.2, and 3.3 and Corollary 3.4, we can consider only the case of K = mT (p, q) # − (m − 1)T (r, s), where m ≥ 2, p ≤ r, and rs divides pq. Suppose that J is an L-space knot concordant to K. We apply Proposition 3.3 to have that where the last equality is due to the fact that rs|pq. Rearranging, expanding, and focusing on lower degree terms, we get (t pq − t p − t q + 1) m−1 ∆ J (t) = (t pq−rs + t pq−2rs + · · · + t rs + 1) m−1 (t rs − t r − t s + 1) m−1 ∆ T (p,q) (t), Notice that on the left-hand side, the coefficients of t 2 , t 3 , ..., t p−1 are the same as those in ∆ J (t). The coefficient of t p in ∆ J (t) must be either 0 or 1 since J is an L-space knot. So on the lefthand side, the coefficient of t p is either −(m − 1) or −(m − 1) + 1. On the right-hand side, the coefficients of t 2 , t 3 , ..., t r−1 are all 0 and the coefficient of t r is −(m − 1). Therefore, equating coefficients, we see that if r < p, Since L-space knots have Alexander polynomials with coefficients of alternating sign and −(m − 1) < 0, we have reached a contradiction except when r = p and ∆ J (t) = 1 − t + (−(m − 1) + 1)t r + · · · . So we need only consider the case where K = 2T (p, q) # − T (p, s). By Proposition 3.3, we know that rs|pq and it follows in this case that s|q. So s = pk + i for some 0 < i < p and k > 0 and q = sa = (pk + i)a for some a > 1. Applying Proposition 3.3 once more, we have that Rearranging, we see and equating coefficients, the Alexander polynomial of J is determined to be Recalling that s = kp + i, if i = 1 then the coefficient of t s is −2, and if 1 < i < p then the coefficients of t kp+1 and t s are both −1. In the former case, we have reached a contradiction since L-space knots have Alexander polynomials with coefficients 1, 0, or −1. In the latter case, there are no terms of degree t kp+2 , ..., t s−1 , thus ∆ J (t) has two consecutive terms with coefficient −1, contradicting the fact that L-space knots have Alexander polynomials with coefficients with alternating sign. Therefore K is not concordant to an L-space knot. 
Towards the more general case In this section, we state and prove some results which restrict concordances from connected sums of torus knots to L-space knots. First, we give generalizations of Proposition 3.3 and Corollary 3.4. Corollary 4.2. If the knot , where m, n ≥ 1, is concordant to a nontrivial L-space knot, then m = n + 1. Proof of Proposition 4.1. Suppose that J is an L-space knot concordant to K. Consider the Alexander polynomial of K. It is a product of cyclotomic polynomials φ c (t): Suppose there exists c such that (c, k) ∈ C + and (c, l) ∈ C − . By Livingston [6], this implies that the Levine-Tristram signature function of K, σ K (t), jumps by −2(k − l) at t = 1/c. Therefore, since J is concordant to K, we would have that (φ c (t)) k−l divides ∆ J (t). Similarly, if c is such that (c, k) ∈ C − but for all l, (c, l) / ∈ C + , then (φ c (t)) k divides ∆ J (t), or if c is such that (c, k) ∈ C + but for all l, (c, l) / ∈ C − , then (φ c (t)) k divides ∆ J (t). Thus On the one hand, we know that for L-space knots the degree of the Alexander polynomial is equal to twice the τ invariant of the knot. So, since J is concordant to K, we have that On the other hand, we have Therefore, If there exists c such that (c, k) ∈ C − but for all l, (c, l) / ∈ C + , then which is a contradiction. Thus it must be that ∆ K − (t) | ∆ K + (t). Finally, from the jumps of σ K (t) = σ J (t), we can conclude that By the argument above involving the degrees of the polynomials, we find that as asserted. Proof of Corollary 4.2. Suppose that J is an L-space knot concordant to K. Then by Proposition 4.1 we know that In [3], Hedden and Watson show that the Alexander polynomial of an L-space knot must have leading terms 1 − t. Since T (p i , q i ), T (p ′ i , q ′ i ), and J are all L-space knots, Equation 4.1 implies that Rearranging and expanding, we see that 1 − mt + · · · = 1 − (n + 1)t + · · · . So it must be that m = n + 1. Lastly, we use the following result of Feller and Krcatovich to give a condition on the Upsilon function of i > 0 for all i, a 1 > a 2 > · · · > a r , and a ′ 1 > a ′ 2 > · · · > a ′ s . If there exists a ′ i such that a ′ i does not divide a j for any j, then K is not concordant to an L-space knot. Before we prove the theorem, we offer an example. For an L-space knot J, Υ ′ J (t) is an increasing, piecewise-constant function. The Upsilon function T (p, p + 1) then has increasing, piecewise-constant derivative. From CFK ∞ (T (p, p + 1)), we also know that Υ ′ T (p,p+1) (t) has jumps in [0, 1] only at t = 2i/p for i such that 0 < 2i/p ≤ 1. Let K + = 3T (5, 6) and K − = T (2, 5) # T (3, 5). Since Upsilon is additive under forming connected sums, we have that Υ ′ K + (t) is increasing with jumps at 2/5 and 4/5. The function Υ ′ −K − (t) is decreasing with jumps at 2/3 and 1. Thus Υ ′ K (t) has a negative jump at t = 2/3 since Υ ′ K + (t) is constant there. So K is not an L-space knot. Proof of Theorem 4.4. Again, for an L-space knot J, Υ ′ J (t) is an increasing, piecewise-constant function. The Upsilon function T (p, p + 1) then has increasing, piecewise-constant derivative. By analyzing CFK ∞ (T (p, p + 1)), we also know that Υ ′ T (p,p+1) (t) has jumps in [0, 1] only at t = 2i/p for i such that 0 < 2i/p ≤ 1. Let K + = T (p 1 , q 1 ) # T (p 2 , q 2 ) # · · · # T (p m , q m ) and K − = T (p ′ 1 , q ′ 1 ) # T (p ′ 2 , q ′ 2 ) # · · · # T (p ′ n , q ′ n ). 
If Υ K (t) is as stated in the theorem, with a ′ i such that a ′ i does not divide a j for any j, we have that Υ ′ K − (t) has a negative jump at 2/a ′ i . Since no a j is divisible by a ′ i , we have that 2k/a j = 2/a ′ i for any j. Thus Υ ′ K + (t) is constant at 2/a ′ i . This implies that Υ ′ K (t) = Υ ′ K + # K − (t) has a negative jump at 2/a ′ i and so Υ K (t) is not the Upsilon function of an L-space knot.
2017-10-29T18:45:56.000Z
2017-10-29T00:00:00.000
{ "year": 2017, "sha1": "3793233ca468f3e90b8c51ae23c1b322b2f41cb5", "oa_license": null, "oa_url": "https://www.ams.org/proc/2020-148-04/S0002-9939-2019-14833-4/S0002-9939-2019-14833-4.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "3793233ca468f3e90b8c51ae23c1b322b2f41cb5", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
56373159
pes2o/s2orc
v3-fos-license
Research of damping influence for the transmission of energy in multi-masses systems The aim of the research is to determine the impact of damping on the manner in which vibration energy is transferred from the vehicle's road wheel to the truck body. The work presents the results of simulation tests and of tests on a real object. The proposed method is an effective tool for the analysis of energy transfer in mechanical systems containing flexible elements (elastic and damping). Introduction A moving vehicle is subjected to constant vibration excitation. Because the negative impact of vibration energy on the structural elements of the vehicle, on the people in the vehicle, and on the load being carried must be limited, solutions are used to mitigate this phenomenon. In automotive vehicles there are always two subsystems responsible for minimizing the impact of vibrations [1]. The first is the pneumatic tyre, which acts as the first mechanical filter reducing the excitation coming from road unevenness. The second is the suspension system, which allows the wheel to move relative to the protected body. In other words, flexible elements that perform a vibration-damping function are commonly used in automotive vehicles. Such a solution, however, always leads to the formation of a multi-mass system, which is characterized by specific natural vibration frequencies. In commercial vehicle suspensions, pneumatic springs are commonly used to transfer the static and variable loads resulting from operating conditions [2, 3]. The role of the damping elements is played by hydraulic shock absorbers, whose basic function is to reduce the velocity of the vibrating motion. An exemplary suspension system, which is the subject of the research, is shown in Figure 1. Description of the research problem Generally, in mechanical systems requiring shock-absorbing cushions there is no need to use vibration dampers: if the system works in the sub- or super-resonance range of excitation, the vibration-damping function is fully realized by the elastic suspension alone. In automotive vehicles, vibration-damping elements are necessary because these systems operate over a wide range of excitation frequencies. On the one hand, this is due to external excitations, for example road unevenness; on the other hand, the vibrations of the vehicle are also influenced by internal excitations, which may include, for example, the unbalance of the road wheels. In both cases the frequency of the vibration excitation depends on the current vehicle speed. The range of excitation frequencies is wide and very often overlaps with the natural frequencies of the system. For this reason, elements responsible for damping vibrations are used to limit the vibration amplitudes in automotive vehicles. The purpose of the research is to determine the effect of the type of damping on the manner in which vibration energy is transferred from the vehicle's road wheel to the vehicle body [1, 4]. Studies of the impact of damping on the transmission of vibration energy are carried out on a simulation model. In the narrowest scope, the vehicle can be simplified to a two-mass dynamic model (2DOF), and this type of model is used in the research. All parameters of the simulation model are selected based on tests of the real object, which is a Volvo tractor unit. For the adopted range of excitation, it is assumed that the damping characteristics used in the model are linear, but different for the compression and rebound (stretching) movements. The tests used different damping characteristics (the change of damping defined as a percentage of the nominal characteristic) and various types of kinematic excitation. The vibration energy used for the analysis of the studied phenomenon is represented by the RMS values of selected physical quantities describing the vibrations of the object. To assess the transmission of vibration energy to the body, the relationship between two quantities is used: the vibration acceleration of the vehicle body and the relative displacement of the wheel with respect to the body. The simulation research is carried out using deterministic and random excitations corresponding to real power spectral density functions of road unevenness. Figures 2 and 3 present the relation between body accelerations and suspension deflections during sinusoidal excitation with a linearly increasing frequency. Figure 2 shows the results for the model with different damping characteristics, and Figure 3 presents the results obtained for various amplitudes of the vibration excitation. The graphs clearly show two areas related to the resonance frequencies of the body and of the suspension. An increase in the amplitude of the excitation influences the form of the relationship between the selected vibration parameters proportionally. An increase in damping (Figure 2) increases the vibration energy transmitted to the body mass of the vehicle while reducing the displacement of the wheel relative to the body. The presented dependence does not have the form of a function (the same effective value of body vibration acceleration may occur for various effective values of the relative displacement of the sprung and unsprung masses), but it describes the energy relations between the analysed vibration signals. Figure 4 presents the results of simulation tests obtained for various damping characteristics in the suspension under stochastic excitation consistent with the statistical description of road unevenness. In the case of the analysed random excitation, depending on the damping characteristic, there is also a clear differentiation of the energy relationships between the selected vibration signals. The proposed method thus allows, on the one hand, the analysis of vibration energy transmission occurring in a multi-mass system; on the other hand, it is also an efficient tool for analysing damping changes in the system. Empirical verification of the developed method is carried out on the real object: tests are performed on a Volvo tractor unit, the mounting locations of the measuring sensors are shown in Figure 5, and the vibration signals are recorded at a sampling frequency of 25 kHz. Figure 6 presents the test results obtained for two shock absorbers. The results for the worn shock absorber (lack of damping) are marked in black, and the results obtained for the shock absorber in nominal technical condition are marked in blue. The results were obtained while driving at a constant speed of 70 km/h on a good road surface. Conclusions The proposed method allows the evaluation of how vibration energy is transferred in the suspension system of a commercial vehicle depending on the damping characteristics. A change in damping affects the position of the point described by the coordinates formed by the effective (RMS) values of the body vibration accelerations and the relative displacements of the wheel and body. Using the proposed method, the technical condition of a commercial vehicle suspension can be assessed in terms of vibration damping for any type of excitation. Fig. 1. The flexible elements used in the test utility vehicle. Fig. 4. Results obtained for random excitation; black: 10% of the nominal damping characteristic, red: 40%, blue: 80%. Fig. 6. Test results on the real object; black: worn shock absorber, blue: efficient shock absorber.
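A minimal sketch of the kind of two-mass (quarter-car, 2DOF) simulation described in the paper is given below, assuming a bilinear damping characteristic (different coefficients in compression and rebound) and a sinusoidal sweep as the kinematic excitation; all numerical parameter values are illustrative assumptions, not those of the Volvo test vehicle, and the printed RMS quantities correspond to the acceleration-versus-deflection relation discussed above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only (not the test vehicle's values).
m_s, m_u = 4500.0, 500.0            # sprung (body) / unsprung (wheel) mass [kg]
k_s, k_t = 3.0e5, 1.8e6             # suspension / tyre stiffness [N/m]
c_comp, c_reb = 1.0e4, 2.5e4        # damping in compression / rebound [N*s/m]

def road(t):
    """Kinematic excitation: sinusoid with slowly increasing frequency."""
    return 0.02 * np.sin(2.0 * np.pi * (0.5 + 0.2 * t) * t)

def rhs(t, y):
    z_s, v_s, z_u, v_u = y                        # body and wheel displacement / velocity
    v_rel = v_s - v_u
    c = c_comp if v_rel < 0 else c_reb            # asymmetric (bilinear) damping characteristic
    f_susp = k_s * (z_s - z_u) + c * v_rel        # suspension force
    f_tyre = k_t * (z_u - road(t))                # tyre force
    return [v_s, -f_susp / m_s, v_u, (f_susp - f_tyre) / m_u]

sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
a_body = np.gradient(sol.y[1], sol.t)             # body acceleration
deflection = sol.y[0] - sol.y[2]                  # wheel-to-body relative displacement
rms = lambda x: float(np.sqrt(np.mean(x ** 2)))
print(f"RMS body acceleration: {rms(a_body):.3f} m/s^2, "
      f"RMS suspension deflection: {1000 * rms(deflection):.1f} mm")
```

Re-running the script with the damping coefficients scaled to, e.g., 10%, 40%, and 80% of a nominal value should illustrate the qualitative trend reported above: higher damping raises the RMS body acceleration while reducing the RMS suspension deflection.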
2018-12-18T02:58:56.954Z
2018-09-24T00:00:00.000
{ "year": 2018, "sha1": "f7bafaa1386604ac6eca0b700cd2470c9af7c78d", "oa_license": "CCBY", "oa_url": "https://www.jvejournals.com/article/20228/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f7bafaa1386604ac6eca0b700cd2470c9af7c78d", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
248345864
pes2o/s2orc
v3-fos-license
The Geriatric Emergency Care Applied Research (GEAR) network approach: a protocol to advance stakeholder consensus and research priorities in geriatrics and dementia care in the emergency department Introduction Increasingly, older adults are turning to emergency departments (EDs) to address healthcare needs. To achieve these research demands, infrastructure is needed to both generate evidence of intervention impact and advance the development of implementation science, pragmatic trials evaluation and dissemination of findings from studies addressing the emergency care needs of older adults. The Geriatric Emergency Care Applied Research Network (https://gearnetwork.org) has been created in response to these scientific needs—to build a transdisciplinary infrastructure to support the research that will optimise emergency care for older adults and persons living with dementia. Methods and analysis In this paper, we describe our approach to developing the GEAR Network infrastructure, the scoping reviews to identify research and clinical gaps and its use of consensus-driven research priorities with a transdisciplinary taskforce of stakeholders that includes patients and care partners. We describe how priority topic areas are ascertained, the process of conducting scoping reviews with integrated academic librarians performing standardised searches and providing quality control on reviews, input and support from the taskforce and conducting a large-scale consensus workshop to prioritise future research topics. The GEAR Network approach provides a framework and systematic approach to develop a research agenda and support research in geriatric emergency care. Ethics and dissemination This is a systematic review of previously conducted research; accordingly, it does not constitute human subjects research needing ethics review. These reviews will be prepared as manuscripts and submitted for publication to peer-reviewed journals, and the results will be presented at conferences. Open Science Framework registered DOI: 10.17605/OSF.IO/6QRYX, 10.17605/OSF.IO/AKVZ8, 10.17605/OSF.IO/EPVR5, 10.17605/OSF.IO/VXPRS. INTRODUCTION Increasingly, older adults are turning to emergency departments (EDs) to address healthcare needs. 1 2 Older adults (aged 65 years and older) in the USA visit the ED at a rate of 51.1 per 100 persons per year. 3 Recommendations to transform EDs to better care for older adults have included redesigning services and processes. [4][5][6] Geriatric emergency care and geriatric EDs (GEDs) have emerged over the past decade as innovative solutions to better provide emergency care for older adults. 4 6-8 However, many of the processes, protocols and care models targeting older patients with emergency care remain untested in the unique ED setting. Consequently, the impact of geriatric emergency care for older adults is unknown. 9 10 Furthermore, novel interventions and best practices tailored to the ED setting need to be developed for both older adults and persons living with dementia (PLWD). Strengths and limitations of this study ► The inclusion of transdisciplinary stakeholder participants as part of the scoping review and consensus process to identify research gaps and priorities. ► Cross-coordination with medical librarians of scoping review searches. ► Creation of a Health Equity Advisory Board to ensure meaningful inclusion of diverse populations in studies focused on the emergency care of persons living with dementia. 
► A well-defined search strategy created by a team of academic research librarians to search a broad group of databases. ► Small body of published literature in topic areas. Open access To achieve these research demands, infrastructure is needed for GEDs to both generate evidence of intervention impact and advance development of implementation science, pragmatic trials evaluation and dissemination of findings from these studies. 11 The Geriatric Emergency Care Applied Research (GEAR) Network was created in response to these scientific needs-to build a transdisciplinary infrastructure to support the research that will optimise emergency care for older adults and PLWD. 12 The GEAR Network (https:// gearnetwork.org) is supported by the National Institute on Aging (NIA) and partner organisations, The Gary and Mary West Health Institute and The John A. Hartford Foundation (jointly on The Geriatric Emergency Department Collaborative grant (award number N/A) with two phased awards: GEAR (R33 AG058926 add dates) and GEAR 2.0-Advancing Dementia Care (GEAR 2.0 ADC) (R61 AG069822 September 2020-June 2022)). In the first phase of both awards, key stakeholders from emergency medicine, geriatrics, nursing, psychiatry, pharmacy, social work, individuals representing healthcare systems, clinicians, researchers, medical specialty organisations, advocacy organisations, caregivers, older adults and PLWD to identify consensus-driven research priorities that will improve the care of older adults (GEAR). GEAR 2.0 ADC added PLWD and care partners to the team. The second phase consists of pilot grant funding to support investigators that advance research priorities identified by stakeholder consensus. The original GEAR project (hereafter referred to simply as GEAR) is dedicated to improving ED care of the older adult and focused on the priority topics of: care transitions, cognitive impairment-delirium, medication safety, elder abuse and falls. Four of the five GEAR research priorities have already been published using this approach. [13][14][15][16] GEAR 2.0 ADC is focused on optimising emergency care for PLWD and their care partners in the priority areas of: ED practices, ED care transitions, detection and communication and shared decision-making. In this paper, we describe the phase I methods used by GEAR 2.0 ADC to identify consensus-driven research priorities, which were based on methods used for GEAR. We describe how we identified the priority topic areas, conducted scoping reviews in each topic area while integrating input from a transdisciplinary stakeholder taskforce, integrated academic librarians in the review process to perform standardised searches and provide quality control and conducted a large-scale consensus conference to prioritise future research. The GEAR Network approach may be valuable for other specialties, disciplines and organisations attempting to identify research and practice gaps, generate evidence, build collaborations, and target high-yield research questions to optimise the care of older adults. METHODS/DESIGN GEAR 2.0 ADC design and structure Like GEAR, GEAR 2.0 ADC is a phased programme that provides infrastructure to support the mission of increasing transdisciplinary research to improve emergency care for PLWD and their care partners. 
The organisational structure of GEAR 2.0 ADC (figure 1) consists of committees that guide operations, a taskforce of stakeholder members that join workgroups and participate in the consensus conference during the first phase (2 years) and Cores that support training and expert consultation for pilot studies that will be conducted during the second phase (3 years). GEAR 2.0 ADC is from 1 June 2020 to 31 May 2025. The executive committee GEAR 2.0 ADC is operationally coordinated by the executive committee that oversees and guides the programme and activities in both phases. The executive committee is led by geriatric emergency medicine investigators who also lead one of the four priority topic workgroups. Each of these leads were selected based on geriatric emergency medicine expertise and the concurrent engagement of local Alzheimer's Disease Research Center faculty at their sites. These investigators supervise the GEAR 2.0 ADC efforts and meet virtually on a biweekly basis. The oversight committee The oversight committee consists of content experts in geriatrics, emergency medicine, and Alzheimer's disease and related disorders (ADRD) that provides high-level guidance to the executive committee during quarterly meetings. Representatives from the NIA also participate in these meetings to hear updates and progress of GEAR 2.0 ADC activities. The oversight committee provides interdisciplinary guidance on the project direction, content and research approaches and future directions to address cross-disciplinary gaps highlighted by the American Geriatrics Society conference series. 17 Health Equity Advisory Board To address the need for greater equity in emergency care research in geriatrics and dementia care both with regard to PLWD, care partners and researchers, a Health Equity Advisory Board (HEAB) was created. The HEAB provides guidance and feedback on GEAR 2.0 ADC activities, to ensure meaningful inclusion of diverse populations based on race, gender, ethnic/religious affiliation, sex identification, along with the impact of social determinants of health in studies focused on the emergency care of PLWD. HEAB members include PLWD, their caregiver and care partners, advocates and stakeholders all from under-represented populations or groups. Current board members include individuals that are African American, Hispanic, Asian and lesbian. The HEAB will follow the NIA Health Disparities Research Framework 18 approach and will work with partner organisations like the Imbedded Pragmatic Alzheimer's disease and AD-Related Dementias Open access Clinical Trials Collaboratory, an organisation that is developing strategies to address diversity and inclusion in studies focused on PLWD. 19 This includes addressing the four key levels of analyses related to the NIA health disparities priorities of environmental, sociocultural, behavioural and biological disparities in health for older minority populations. We will incorporate the lifecourse perspective, which is a 'multidisciplinary approach to understanding the mental, physical and social health of individuals, which incorporates both life span and life stage concepts that determine health trajectory and influence population-level health disparities'. 18 Project team staff GEAR 2.0 ADC activities are supported by smaller project teams where each of the executive committee leads are located. 
Local project team members include a research coordinator and academic medical school librarian to facilitate GEAR 2.0 ADC activities, the bulk of which includes conducting the scoping reviews. Additional activities of the research coordinators include coordinating communication with all members, and organising meetings (including presentations, recordings, minute preparation). Patient and public involvement Throughout the methods, the involvement, inclusion and representation of patients, and public partners are described. The GEAR 2.0 ADC taskforce and workgroups are transdisciplinary groups of stakeholders committed to improve the emergency care of PLWD. Members were identified to participate based on content expertise, their positions in partner organisations and referrals from other invited members. The executive committee invited participants to ensure diversity of background and expertise while ensuring a manageable group size. They include emergency physicians, geriatricians, neurologists, psychiatrists, neuropsychologists, nurses, social workers, pharmacists, physical therapists, patient advocates and most importantly PLWD and their care partners. GEAR 2.0 ADC taskforce and workgroups The GEAR 2.0 ADC taskforce is a transdisciplinary group of stakeholders committed to improve the emergency care of PLWD. Members were identified to participate based on content expertise, their positions in partner organisations and referrals from other invited members. The executive committee invited participants to ensure diversity of background and expertise while ensuring a manageable group size. This included 47 individuals who identified themselves as emergency physicians, geriatricians, neurologists, psychiatrists, neuropsychologists, nurses, social workers, pharmacists, physical therapists, patient advocates and most importantly PLWD and their care partners (figure 2). Open access Taskforce members participated on one or more workgroups that represented research and clinical practice priorities in four topics (see below Priority domain determination section for how these topics were chosen): 1. Optimal ED care practices for PLWD and their caregivers (ED practices). Approach GEAR 2.0 ADC operational overview During the first phase, GEAR 2.0 ADC identified and prioritised research by completing scoping reviews in each of the priority topics and then held a 2-day consensus conference of key stakeholders who discussed and voted on research priorities to optimise emergency care for PLWD. The GEAR Network consensus conference approach is modelled after the Cornell Institute for Translational Research on Aging (CITRA) process for developing stakeholder-based translational research agendas in ageing. 20 Unlike CITRA, the GEAR Network approach has more extensive preparatory work prior to the consensus conference that includes completion of scoping reviews in preselected priority areas prior to the consensus conference. 
Completion of the scoping review required: (1) proposing initial research priorities in each of the domains; (2) using a Population, Intervention, Comparison, Outcome (PICO) framework for the research questions to conduct structured literature searches with academic librarians to identify publications related to the domains (round 1 priority research questions); (3) summarising the most recent scientific reviews of ED-based trials, observational and/or retrospective studies (if any) that address the priority area; (4) extracting major conclusions from relevant literature identified or other systematic reviews related to the PICO question. The results of the scoping reviews were then used as the basis for discussion and considerations of research priorities at the consensus conference. During the second phase, GEAR 2.0 ADC will fund pilot studies that encourage transdisciplinary collaboration to address the research priorities ranked by the stakeholders from the first phase. Priority domain determination GEAR 2.0 ADC taskforce members ranked priority topics in December 2019 during the grant proposal preparation process. The executive committee proposed the multiple priority topics which the taskforce ranked. These were then emailed as a survey to taskforce members to rank the importance of each topic and the top ones were selected to be the focus of GEAR 2.0 ADC activities. Based on past experience in GEAR, the decision was made to limit efforts to four workgroups based on capacity and workload. Workgroup preconference activities Each workgroup was led by an executive committee member lead and supported by the project team staff. At the study kickoff meeting, taskforce members were invited to participate in any of the four workgroups representing research and practice priority domains. Taskforce members joined workgroups based on their interests and expertise, noting their preferences through an online survey. Although most requests were honoured, some respondents were assigned to non-primary choices to ensure diversity of background and maintain workable group sizes of 12-14 participants. While participants were encouraged to only engage with one group, a number engaged in multiple groups. Each workgroup's leader developed a charter document that consisted of a description of the workgroup's topic, goals, meeting dates, membership list as well as expectations of both group leadership and participants. All workgroups met monthly for 1 hour, while work continued asynchronously through emails moderated by the group leadership. Files were accessible through cloud-based file sharing tools and servers to provide a single source of information for all members. These workgroup meetings served to review the progress of the project, to discuss and reflect on project findings and to frame project directions. Workgroups particularly had extensive discussions to develop key questions and identify research gaps using the PICO approach. 21 Phase I: scoping review process In preparation for the GEAR 2.0 ADC consensus conference, scoping reviews were conducted in the four domains. We followed the Preferred Reporting Items for Open access Systematic Reviews and Meta-Analyses-scoping reviewscoping review checklist process to explore both the breadth of literature in this area and identify the knowledge and practice gaps. 
Scoping reviews are preferred for this type of work as they incorporate a wider range of literature than systematic reviews and can provide more synthesised ideas for future systematic reviews.22 23

Development of PICO research questions
Each workgroup brainstormed potential PICO questions within its domain. The workgroups iteratively refined and reviewed the questions and then submitted them to the executive committee for review. Each workgroup generated approximately 20 questions. The executive committee, through joint discussion among the workgroup leads, ensured that questions were distinct. The full taskforce ranked the questions for each workgroup via an online survey (Qualtrics). A respondent weighting system was used to identify the top research questions, with workgroup members' rankings weighted double those of other taskforce members (an illustrative sketch of such a weighting scheme appears at the end of this section). The top two questions were then formatted using the PICO approach21 (tables 1-4).

Medical librarian collaboration
Medical librarians from each workgroup lead's institution worked together to develop a standardised core search strategy for the workgroups, as well as topic-specific modifications for the scoping reviews. Prior studies have demonstrated that this collaboration style creates higher quality search strategies and minimises review bias.24-26 To confirm that the search strategies developed would capture the articles sought, exemplar articles were identified and the searches were reviewed to ensure inclusion of these articles. The only exclusion filter applied to the search was to limit the focus to an adult patient population; no other publication type, language or date filters were applied. The librarians worked together to identify relevant bibliographic databases to maximise capture of relevant articles while limiting duplication. Databases searched included MEDLINE (Ovid), Embase, Cochrane Central Register of Controlled Trials, CINAHL, PsycINFO, PubMed Central, Web of Science and ProQuest Theses and Dissertations. For a list of databases used by the workgroups, see table 5. Each site librarian conducted the literature search, identified duplicate articles and uploaded the results to Covidence, a systematic review software (Veritas Health Innovation, Melbourne, Australia; available at www.covidence.org). Search strategies covered the period from the earliest year each database began indexing until March 2021 and focused on emergency care; the scoping reviews for each group are registered on the Open Science Framework.27-30 The workgroup lead and a trained research associate from each workgroup independently screened the titles and abstracts of all articles uploaded into Covidence for relevance. Each workgroup created unique inclusion and exclusion criteria based on workgroup consensus. Future publications will present the findings of the workgroups. The two reviewers adjudicated any disagreements; if they could not agree, a third-party reviewer made the final decision. The full text of articles identified as potentially relevant was then reviewed in the same manner, and data were abstracted from the articles deemed relevant. To ensure consistency in the conduct of the scoping reviews, workgroup leads and project team members discussed progress at biweekly meetings and communicated frequently through email correspondence.
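The respondent-weighted ranking described above can be illustrated with a short sketch. The exact aggregation rule used by GEAR 2.0 ADC is not specified here, so the Borda-style scoring, the function name and the example ballots below are assumptions for illustration only.

```python
from collections import defaultdict

def weighted_borda(rankings, workgroup_members, top_n=2):
    """Aggregate ranked ballots; workgroup members' ballots count double.

    rankings: dict mapping respondent -> list of question IDs, best first.
    workgroup_members: set of respondent IDs belonging to the workgroup.
    """
    scores = defaultdict(float)
    for respondent, ballot in rankings.items():
        weight = 2.0 if respondent in workgroup_members else 1.0
        n = len(ballot)
        for position, question in enumerate(ballot):
            scores[question] += weight * (n - position)   # Borda points
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical ballots: three respondents rank four candidate questions
ballots = {
    "wg_member_1": ["Q3", "Q1", "Q2", "Q4"],
    "wg_member_2": ["Q3", "Q2", "Q1", "Q4"],
    "taskforce_1": ["Q1", "Q4", "Q2", "Q3"],
}
print(weighted_borda(ballots, workgroup_members={"wg_member_1", "wg_member_2"}))
# ['Q3', 'Q1']
```

Doubling the weight of workgroup members' ballots keeps topical expertise influential while still letting the full taskforce shape the final ranking.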
Phase I: GEAR 2.0 ADC consensus conference
The culmination of the scoping review process was the presentation of the synthesised results from each domain at a 2-day consensus conference of the full GEAR 2.0 ADC taskforce in September 2021. At the conference, taskforce members were mixed and distributed across smaller groups to discuss the findings of the scoping reviews. The goal of these small groups was to provide stakeholder insight and recommendations on the current knowledge base and to provide suggestions for future research and pilot grants. After the small group discussions, there was an opportunity for a shared debrief of these breakout sessions. Each workgroup then incorporated the feedback and themes heard from the small group discussions to prepare five research priorities, based on the scoping review results and transdisciplinary stakeholder recommendations. The full taskforce then ranked these research priorities using an online survey (polleverywhere.com). Taskforce members unable to attend the conference were asked to vote asynchronously, achieving 100% participation by all taskforce and HEAB members. The results of each scoping review, their search methodology, data from included manuscripts and the ranked research priorities will be published separately. Copies of the GEAR and GEAR 2.0 ADC consensus conference summaries are available on the GEAR website: https://gearnetwork.org/manuscripts-publications/

Phase II: GEAR 2.0 ADC pilot funding
During the second phase, pilot funding opportunities will be made available to investigators. Proposals for pilot studies must address the research priorities recommended by the GEAR 2.0 ADC taskforce and HEAB members at the GEAR 2.0 ADC consensus conference. During this phase, the GEAR 2.0 ADC Cores will become active and support early research addressing the research gaps and priorities recommended by the GEAR 2.0 ADC taskforce. In addition to pilot funding, the Research Core, Data and Informatics Core and Dissemination and Implementation Core will provide guidance to pilot awardees as they conduct their studies, including training sessions to enhance and increase transdisciplinary collaboration within and across the GEAR 2.0 ADC Network. These will be held as virtual training webinars, conferences and office hours, and bimonthly research progress meetings where awardees will have the opportunity to share their study progress with each other. GEAR 2.0 ADC pilot funding opportunities can be found on the GEAR website: https://gearnetwork.org/grants-and-funding-opportunities/

DISCUSSION
In this paper, we present a framework establishing an infrastructure to advance geriatric emergency medicine research. The value of this framework, and more importantly the representation of key stakeholders, is unique and critical to optimally guide future research addressing practice gaps that matter to all those engaged in all facets of emergency care for PLWD and their care partners.
It differs from previous agenda-setting processes directed at geriatric emergency care31-34 in the following ways: (1) the inclusion of stakeholder participation as part of the scoping review and consensus process to identify research gaps and priorities; (2) cross-coordination with medical librarians of scoping review searches; (3) creation of a HEAB to ensure meaningful inclusion of diverse populations in studies focused on the emergency care of PLWD; (4) provision of pilot funding to initiate research in the recommended consensus research priorities. A significant strength of the GEAR Network approach is the inclusion of patients, individuals who use the healthcare system and care partners as part of the process. It is a priority of the GEAR Network to include their experiences and perspectives and to learn what matters to them about the emergency care they receive. Furthermore, the GEAR Network strives to share with these stakeholders the reasons why health and medical care occurs the way it does, to enable them to engage meaningfully and to integrate their critical feedback and recommendations on the topics throughout the entire GEAR Network approach. For GEAR 2.0 ADC, this has even greater relevance given the challenges faced by PLWD, all of whom have cognitive impairment of varying degrees of severity. While the PLWD who participate in GEAR 2.0 ADC are in the early stages of dementia and remain high functioning, they, along with care partners and many other stakeholders who are neither researchers nor clinicians, are not as familiar with taskforce or agenda-setting research processes. Preparatory background steps by the GEAR 2.0 ADC project team with these non-research and non-clinical stakeholders are necessary to support their full engagement, following empowering partnership principles and accounting for impairments in cognitive function that may impact tasks and activities. For example, the survey ranking the many potential questions initially proposed by workgroups required significant mental focus to complete for individuals of all levels of cognitive function. This was even more challenging for some PLWD members who found the survey format difficult to comprehend fully. To incorporate their input, once the top four choices were identified, their thoughts on each were discussed with them separately. Concurrently, other PLWD members did not express any difficulty with the survey. It is important that researchers consider the potential limitations of PLWD in research engagement and find ways to enable their full participation. Another innovative feature of the scoping review process in GEAR 2.0 ADC was the collaboration of research librarians from four different institution sites and their inclusion early in PICO question development. Each workgroup's assigned librarian participated in meetings when PICO question development was occurring.
This provided unique insight into, and understanding of, each group's thought process and allowed the librarian to craft an appropriate search strategy. It was decided that the four librarians would develop a standardised search for the elements consistent between the groups and then tailor the remaining elements for their specific groups. By cooperating on core search development, the librarians were quickly able to develop a highly effective search strategy, minimising bias.26 The standardisation of the common elements helped ensure consistency in the articles identified across groups.25 As part of its mission, GEAR 2.0 ADC has also prioritised addressing equity through diversity and inclusion in its research agenda. The concern is multifactorial, as it includes the diversity and composition of the workgroups, the defining of the questions and the implementation of the future pilot grants to be offered by GEAR 2.0 ADC. Despite continuous efforts to increase the diversity of the taskforce, and while equally split in member gender, the workgroups and PLWD representatives are overwhelmingly Caucasian. This is a challenge for many organisations attempting to increase diversity in representation and health equity in research, especially for PLWD. Within the workgroups, diversity, equity and inclusion were discussed in terms of the patients seen in the ED. The discussions included race, gender, ethnic/religious affiliation and sex identification, along with the impact of social determinants of health. Identifying additional workgroup members whose participation would broaden the groups' diversity would have taken more time than the groups had, so the decision was made to create a HEAB of members from under-represented and disenfranchised groups to review and provide input on the output of the workgroups and GEAR 2.0 ADC processes. The GEAR 2.0 ADC principal investigators, along with the workgroup leads, have developed a framework for the board that includes quarterly meetings, previewing consensus conference materials so that feedback can be incorporated before the conference, sharing materials, and involving the HEAB when selecting GEAR 2.0 ADC pilot studies to fund. Finally, perhaps the most significant and unique feature of the GEAR Network research infrastructure is its provision of pilot funding for the research priorities generated by its consensus stakeholder process. Support is directed to building preliminary research and evidence in clinical and research gaps identified by the scoping review processes and voted on by transdisciplinary members of the field and by patients and their care partners. This novel approach targets funding for stated and ranked priorities by 'putting money where our mouth is'. It is hoped that the funding from these pilot studies will foster interest and research in needed areas of geriatric-related and dementia-related emergency care, increase and diversify the pool and foci of researchers and generate preliminary evidence and data for larger scale study proposals that are critically needed to advance the science of geriatric emergency care. In summary, the GEAR Network approach provides a framework and systematic approach to review the literature for research and practice gaps. Furthermore, the GEAR Network approach gives insight into how to engage key stakeholders from all facets of caring for older adults and PLWD to define and state what research priorities matter. This approach may be used by other disciplines.

Contributors
All authors read and approved the final manuscript.
CC conceived the approach, provided methodological guidance, oversaw the implementation and operations of the approach and provided review and edits in the writing. SD organised the implementation and operations of the approach and was a major contributor in writing. JD organised the implementation and operations of the research approach and was a major contributor in writing the manuscript. AG organised the implementation and operations of the research approach and was a major contributor in writing the manuscript. LH organised the implementation and operations of the research approach and was a major contributor in writing the manuscript. UH secured funding, conceived the approach, organised the infrastructure and partnerships, organised the implementation and operations of the approach, was a major contributor in writing and overseeing the manuscript. JL organised the implementation and operations of the research approach and was a major contributor in writing the manuscript. AN organised the implementation and operations of the research approach and was a major contributor in writing the manuscript. MS secured funding, conceived the approach, organised the infrastructure and partnerships, organised the implementation and operations of the approach, was a major contributor in writing and overseeing the manuscript. ZT organised the implementation and operations of the research approach and was a major contributor in writing the manuscript.
Broadband magnetic resonance spectroscopy in MnSc₂S₄

Recent neutron scattering experiments suggested that frustrated magnetic interactions give rise to antiferromagnetic spiral and fractional skyrmion lattice phases in MnSc₂S₄. Here, to trace the signatures of these modulated phases, we studied the spin excitations of MnSc₂S₄ by THz spectroscopy at 300 mK and in magnetic fields up to 12 T, and by broadband microwave spectroscopy up to 50 GHz at various temperatures. We found a single magnetic resonance with frequency linearly increasing in field. The small deviation of the Mn²⁺ ion g-factor from 2, g = 1.96, and the absence of other resonances imply very weak anisotropies and a negligible contribution of higher harmonics to the spiral state. The significant difference between the dc magnetic susceptibility and the lowest-frequency ac susceptibility in our experiment implies the existence of mode(s) outside of the measured frequency windows. The combination of THz and microwave experiments suggests a spin gap opening below the ordering temperature between 50 GHz and 100 GHz.

Modulated spin structures can give rise to additional modes, as in the case of the skyrmion lattice, where a breathing, a clockwise and a counterclockwise rotational mode were predicted and observed 34-36. Very recent analytical calculations and numerical simulations show that an antiferromagnetic skyrmion lattice stabilized in a synthetic antiferromagnet also possesses a phason mode and a series of optical magnons 37. The aim of this study is to investigate the magnetic-field dependence of the spin excitations in MnSc₂S₄ by THz spectroscopy up to 17 T.
We carried out the experiments in the paramagnetic phase at 2.5 K, as well as in the ordered state at 300 mK, where the zero-field ground state is the helical spiral state and where, in the 4.5-7 T field range, the triple-q state is expected to emerge 22. We performed additional GHz experiments at 300 mK and from 2 K up to 20 K to address lower frequency and field ranges.

Results
THz absorption. We measured the light absorption of a MnSc₂S₄ mosaic in the far-infrared range, between 100 GHz and 3 THz at 2.5 K, and between 100 GHz and 2.1 THz at 300 mK. Figure 1 shows the field dependence of the differential absorption spectra at 2.5 K. We resolved a single paramagnetic resonance, which shifts linearly with field. The wavy baseline in the vicinity of the peak is caused by the magnetic-field-induced change of the interference pattern arising from multiple reflections in the nearly plane-parallel sample. A linear fit to the magnetic field dependence of the resonance frequency yields a slope of 27 ± 0.6 GHz/T and a zero-field offset of 20.4 ± 7.2 GHz. The slope corresponds to a g-factor of 1.93 ± 0.05. The fact that we did not detect any deviation from the linear field dependence of the resonance, apart from a small offset, suggests that the anisotropy of the spin Hamiltonian parameters is small. The small anisotropy is also consistent with the small deviation of the g-factor from the free-electron value, both being caused by the spin-orbit coupling. The resonance line appears only when the direction of the alternating magnetic field is perpendicular to the external magnetic field, B_ω ⊥ B_0, and is absent when B_ω ∥ B_0, which is consistent with simple paramagnetic behaviour.

Figure 1. Magnetic field and polarization dependent THz absorption spectra measured in Voigt configuration in MnSc₂S₄ at T = 2.5 K. The absorption differences are shown with respect to the zero-field spectrum in magnetic fields up to 17 T. Bold spectra were measured at odd field values. Undulation of the spectra in the vicinity of the resonance is due to multiple reflections within the plane-parallel sample, distorted by the change of the sample optical constants near the spin resonance mode.

Figure 2 shows the field dependence of the absorption spectra relative to zero field at 300 mK. The spectra follow a magnetic-field dependence similar to the ones measured at 2.5 K. The linear fit to the resonance peak positions gives a slope of 27.5 ± 0.4 GHz/T and a zero-field offset of 9.5 ± 1.5 GHz. The calculated slope corresponds to a g-factor of 1.96 ± 0.02, which is the same as that deduced in the paramagnetic phase within the error of the measurement. The finite frequency intercept is somewhat smaller than for the paramagnetic resonance at 2.5 K. Neither in the ordered nor in the paramagnetic phase could we resolve a clear deviation from the linear field dependence. If there is any anisotropy-induced gap, it is below 100 GHz, the low-frequency cut-off of our experiment. The spin resonance detected at 300 mK is not sensitive to the magnetic phase transitions that are suggested to occur at 4.5 T and 7.5 T according to Ref. 23. Moreover, in our frequency window we did not detect any other resonances that might arise due to the emergence of a modulated spin structure, such as a spin spiral or magnetic skyrmion lattice. The absence of any signature of the modulated phase might correspond to the weak spin-orbit interaction and the related weak magnetic anisotropy of Mn²⁺.
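The conversion from the fitted slope to the g-factors quoted above follows from the resonance condition hf = gμ_B B for a linear branch. The short sketch below is an illustration of that arithmetic, not code from the original analysis; it reproduces the quoted values.

```python
# Convert the slope of a linear resonance branch, f = (g * mu_B / h) * B, to a g-factor.
h = 6.62607015e-34        # Planck constant (J s)
mu_B = 9.2740100783e-24   # Bohr magneton (J/T)

def g_factor(slope_ghz_per_tesla):
    return h * slope_ghz_per_tesla * 1e9 / mu_B

print(round(g_factor(27.0), 2))   # 1.93  (paramagnetic phase, 2.5 K)
print(round(g_factor(27.5), 2))   # 1.96  (ordered phase, 300 mK)
```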
Without a sizable magnetic anisotropy, the spin spiral is harmonic and the modes folded to the reduced Brillouin zone remain silent. The oscillation of the plane of the harmonic spiral may induce a modulation of the electric polarization via the inverse Dzyaloshinskii-Moriya coupling 26,27; however, this mechanism is active only for spin cycloids, i.e., it cannot generate infrared-active modes in the helical state of MnSc₂S₄. These are the most likely reasons for not observing additional spin resonances in the covered spectral range.

From the absorption spectrum, the ω → 0 magnetic susceptibility χ can be obtained using the Kramers-Kronig relations, assuming that χ is small and the dielectric function ε is constant in the THz range 38:

χ(ω → 0) = (2c / π√ε) ∫₀^∞ α(ω)/ω² dω,   (1)

where α is the absorption coefficient, ω is the angular frequency, and c is the speed of light in vacuum. We fitted the experimental α(B) − α(0 T) spectrum at B = 12 T with a single resonance; see the top panel of Fig. 2 for illustration. The magnetic susceptibility is described by a Lorentzian oscillator:

χ(ω) = S ω₀² / (ω₀² − ω² − iγω),   (2)

where ω₀ is the resonance frequency, γ is the damping parameter and S is the oscillator strength. To take into account multiple reflections within the sample, we modeled it as a Fabry-Perot etalon with an infinite number of reflections. By assuming that the resonance is absent in zero field, our model provided an estimate for the THz dielectric constant: ε = 12.1. The evaluation of the integral in Eq. (1) with the Lorentzian model gave χ(ω → 0) = 2.5×10⁻³ for fields oscillating perpendicular to the static field (a numerical consistency check of this sum-rule analysis is sketched below, after the microwave transmission results). This transverse susceptibility is an order of magnitude smaller than χ₀ = 0.021, the value published for the longitudinal, static susceptibility in Ref. 21. Since the magnetization curve is nearly linear even in the magnetically ordered phases 23, and the anisotropy is weak, these transverse and longitudinal susceptibilities should be nearly equal in the static limit. The missing spectral weight, i.e., the difference between χ(ω → 0) and χ₀, must lie outside of the frequency range of our measurement system, implying the presence of further resonance(s) below 100 GHz, the low-frequency cutoff of the present study. In fact, an antiferromagnetic spiral emerging due to exchange frustration has three Goldstone modes in the absence of anisotropy: a phason mode (0), corresponding to rotations within the plane of the spiral, and two others (±1), associated with out-of-plane rotations 28. Magnetic anisotropy terms compatible with cubic symmetry may gap these modes, making them detectable with microwave spectroscopy. Finally, we mention that non-resonant relaxation modes may also explain the missing spectral weight, as inferred in the case of the frustrated magnet ZnCr₂O₄ 39.

Microwave transmission. In order to search for lower frequency excitations, we also performed broadband microwave transmission measurements in the paramagnetic phase as well as in the magnetically ordered modulated phases. In the paramagnetic phase, T > T_N, our measurements covered the 10 MHz to 20 GHz and 0-1 T frequency-magnetic field range, as shown in Fig. 3. We observed a single resonance from the sample that shifted linearly with the field. The g-factor of this line, without any zero-field offset, is 2.07, 2.08 and 2.1 at T = 20 K, T = 10 K and T = 6 K, respectively, for fields applied along the [111] direction. The linewidth became broader as the temperature approached the Néel temperature, and the resonance was not visible at 2 K.
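As a consistency check of the Lorentzian sum-rule analysis of the THz spectra described above, the sketch below evaluates the Kramers-Kronig integral numerically for a model Lorentzian line and recovers the static susceptibility. The parameter values are illustrative only, not the fitted values from the experiment, and the script works directly with χ''(ω) rather than the measured absorption coefficient.

```python
import numpy as np

# Model Lorentzian susceptibility, cf. Eq. (2): chi(w) = S*w0^2 / (w0^2 - w^2 - 1j*gamma*w)
# Illustrative parameters only (not fitted values from the paper).
S = 2.5e-3                      # oscillator strength, ~chi(0)
w0 = 2 * np.pi * 340e9          # resonance angular frequency (rad/s), roughly the 12 T line
gamma = 2 * np.pi * 20e9        # assumed damping (rad/s)

w = 2 * np.pi * np.linspace(1e9, 5e12, 500_000)           # angular frequency grid
chi_im = S * w0**2 * gamma * w / ((w0**2 - w**2)**2 + (gamma * w)**2)

# Kramers-Kronig sum rule: chi(omega -> 0) = (2/pi) * integral of chi''(w)/w dw,
# equivalent to Eq. (1) once chi'' is expressed through the absorption coefficient.
dw = w[1] - w[0]
chi_static = (2 / np.pi) * np.sum(chi_im / w) * dw
print(f"chi(0) ~ {chi_static:.2e}")   # ~2.5e-3, i.e. the sum rule recovers S
```

For a Lorentzian line the sum rule returns the oscillator strength S exactly, which is why the fitted S can be compared directly with the static susceptibility χ₀ obtained from dc magnetometry.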
Although we extended our experiments up to 50 GHz and 8 T in the magnetically ordered phase, we did not detect any resonance from the sample in the 300-600 mK temperature range. The combination of the microwave and THz results suggests that the spin resonance of MnSc₂S₄ occurs in the 50-100 GHz frequency window in zero field, i.e. a spin gap opens below T_N.

Conclusions
Earlier elastic and inelastic neutron scattering studies combined with Monte Carlo simulations found multiple phases in MnSc₂S₄, including a multi-q state such as an antiferromagnetic skyrmion phase. Motivated by these findings, we studied the magnetic field dependence of the spin excitations in MnSc₂S₄ by THz and microwave spectroscopy in the paramagnetic phase as well as in the ordered state. Although the material has a rich phase diagram with multiple modulated magnetic phases, we only observed a single resonance, whose frequency does not exhibit anomalies at the critical fields separating these phases. This resonance has a g-factor close to 2 and shows no deviation from the linear field dependence in the studied frequency range, which indicates a small anisotropy. Other collective modes of the modulated states were not detected, likely due to their negligible magnetic dipole activity, a consequence of the weak magnetic anisotropy. The analysis of the intensity suggests that further spin excitation(s) should be present outside of the measured frequency-magnetic field windows.

Methods
Single crystals with a typical size of ∼1 mm³ were grown by the chemical transport technique, as described in Ref. 22. Several co-oriented crystals facing the [111] direction were glued together to obtain a mosaic with ∼2 mm diameter and 0.65 mm thickness. Sub-Kelvin temperatures were reached in a modified Oxford TLE200 wet dilution refrigerator at the National Institute of Chemical Physics and Biophysics (KBFI), Tallinn. The propagation vector of the incident unpolarized light was parallel to the external magnetic field, the so-called Faraday configuration. Measurements at 2.5 K were also performed at KBFI, on the TeslaFIR cryostat setup. These measurements were performed with polarized light in Voigt configuration, i.e. the propagation vector of the exciting polarized light was perpendicular to the external magnetic field. In both cases, the spectra were measured with an SPS200 far-infrared Martin-Puplett interferometer and a 300 mK silicon bolometer. The field-induced change in the absorption coefficient α was calculated as

α(B) − α(0 T) = −(1/d) ln[ I(B) / I(0 T) ],

where I(B) is the transmitted light intensity at a specific magnetic field B, and d is the sample thickness. Low-frequency broadband measurements were performed in the microwave laboratory at Universität Stuttgart using metallic coplanar waveguides (CPWs) 40. Measurements at and above 2 K were done with a 20 GHz vector network analyzer (VNA) in a magnet cryostat with a variable temperature insert. The frequency-magnetic field maps were recorded as frequency sweeps in constant magnetic field. Measurements in the sub-Kelvin temperature range were performed with a 50 GHz VNA, in a wet dilution refrigerator, on a single crystal with typical dimensions of 2.46 mm × 2.11 mm.

Data availability
The datasets analysed during the current study are available from the corresponding author on reasonable request.
Proteasomal Degradation of TRIM5α during Retrovirus Restriction

The host protein TRIM5α inhibits retroviral infection at an early post-penetration stage by targeting the incoming viral capsid. While the detailed mechanism of restriction remains unclear, recent studies have implicated the activity of cellular proteasomes in the restriction of retroviral reverse transcription imposed by TRIM5α. Here, we show that TRIM5α is rapidly degraded upon encounter of a restriction-susceptible retroviral core. Inoculation of TRIM5α-expressing human 293T cells with a saturating level of HIV-1 particles resulted in accelerated degradation of the HIV-1-restrictive rhesus macaque TRIM5α protein but not the nonrestrictive human TRIM5α protein. Exposure of cells to HIV-1 also destabilized the owl monkey restriction factor TRIMCyp; this was prevented by addition of the inhibitor cyclosporin A and was not observed with an HIV-1 virus containing a mutation in the capsid protein that relieves restriction by TRIMCyp. Likewise, human TRIM5α was rapidly degraded upon encounter of the restriction-sensitive N-tropic murine leukemia virus (N-MLV) but not the unrestricted B-MLV. Pretreatment of cells with proteasome inhibitors prevented the HIV-1-induced loss of both rhesus macaque TRIM5α and TRIMCyp proteins. We also detected degradation of endogenous TRIM5α in rhesus macaque cells following HIV-1 infection. We conclude that engagement of a restriction-sensitive retrovirus core results in TRIM5α degradation by a proteasome-dependent mechanism.

Introduction
Retroviruses exhibit a restricted host range due to the requirement for specific interactions between viral and host proteins to complete the viral life cycle. Also limiting retroviral tropism are several recently identified intracellular antiviral factors ([1-5]; reviewed in [6-10]). The prototypical restriction activity, Fv1, was first detected in the 1970s as differential susceptibility of inbred mouse strains to the Friend leukemia virus [11-13]. Fv1 blocks infection of murine leukemia viruses (MLV) at a stage following fusion but prior to integration [14,15]. The block to infection can be overcome at high multiplicities of infection (m.o.i.) or by pretreatment of target cells with non-infectious virus-like particles (VLPs) [11,16]. Susceptibility to Fv1 restriction is determined by the sequence of the viral capsid protein (CA) [17-19]. The gene encoding Fv1 was identified in 1996 by positional cloning [1]; yet the molecular mechanism by which Fv1 inhibits MLV infection remains poorly defined.

Shortly after the identification of TRIM5α, a second HIV-1 restriction factor was identified in owl monkeys [4,5]. This protein, TRIMCyp, is the apparent result of a LINE1-mediated retrotransposition event in which the cyclophilin A (CypA) mRNA was inserted into the TRIM5 locus, resulting in a functional fusion protein [4]. TRIMCyp potently inhibits HIV-1 infection by interacting with an exposed loop on the surface of CA via the CypA domain. The discovery of TRIMCyp provided a simple explanation for the ability of cyclosporin A (CsA), which inhibits CypA binding to CA, to render owl monkey cells permissive to HIV-1 infection [38]. Mutations in the CypA binding loop that result in a failure to bind CypA also result in a loss of restriction by TRIMCyp [4,5]. More recently, novel TRIM5-CypA proteins have also been identified in other primate species [39-42].
TRIM5α and TRIMCyp are members of the tripartite motif family of proteins, which encode RING, B-Box and coiled-coil (RBCC) domains [43]. TRIM5α is the longest of the three isoforms (α, γ and δ) generated from the TRIM5 locus by alternative splicing of the primary transcript. While all three TRIM5 isoforms contain identical RBCC domains, the α transcript also encodes the B30.2/SPRY domain required for recognition of the incoming viral capsid and for restriction specificity [29,30,33,34,36,44-46]. The coiled-coil domain promotes the multimerization of TRIM5α molecules that is required for efficient restriction [44,47,48]. While the precise function of the B-Box domain is unclear, deletion of this region results in total loss of restriction potential, thus indicating its importance [44,49]. The RING domain of TRIM5α is also required for full restriction activity, as mutants that lack this domain or in which proper folding is impaired are severely impaired for restriction and have altered cellular localization [3,44,49]. Substitution of RING domains from other human TRIM proteins results in changes in both the timing of restriction (i.e., pre- vs. post-reverse transcription) and the intracellular localization of the restriction factor [37,50-52]. RING domains are commonly associated with ubiquitin ligase (E3) activity, facilitating specific transfer of ubiquitin from various ubiquitin-conjugating (E2) proteins to substrates (reviewed in [53,54]). Polyubiquitylation of proteins commonly targets them for intracellular degradation by proteasomes. TRIM5α can be ubiquitylated in cells [55], but a role for this modification in TRIM5α stability or restriction has not been established. The δ isoform of TRIM5, which encodes a RING domain identical to that of TRIM5α, exhibits E3 activity in vitro, and mutation of the RING domain abolishes this activity [56].

The presence of a RING domain on TRIM5α suggested that the restriction factor might function by transferring ubiquitin to a core-associated viral protein, thus targeting it for proteasomal degradation. However, such a modification has not been detected, and the magnitude of restriction imposed by TRIM5α was not altered in cells in which the ubiquitination pathway was disrupted [57]. Nonetheless, recent studies have shown that proteasome inhibitors relieve the TRIM5α-dependent inhibition of reverse transcription, yet a block to HIV-1 nuclear entry remains [58,59]. Based on these findings implicating the proteasome in TRIM5α-dependent retroviral restriction, we hypothesized that restriction by TRIM5α leads to proteasomal degradation of a TRIM5α-viral protein complex. Here we show that inoculation of TRIM5α-expressing cells with a restricted retrovirus results in accelerated degradation of TRIM5α itself. Destabilization of TRIM5α was tightly correlated with the ability of the restriction factor to block infection by the incoming virus. Proteasome inhibitors prevented HIV-1-induced degradation of TRIM5α_rh when added to cells prior to virus inoculation. These data suggest a functional link between proteasomal degradation of TRIM5α and the ability of TRIM5α to restrict an incoming retrovirus.

Exposure of Cells to HIV-1 Destabilizes TRIM5α
We hypothesized that TRIM5α itself might be degraded as a consequence of the post-entry restriction process. To test this, TRIM5α_rh-expressing 293T cells were cultured in the presence of cycloheximide to arrest protein synthesis and then challenged with VSV-G-pseudotyped HIV-1 particles.
At various times post-infection, cells were harvested for analysis of TRIM5α levels by quantitative immunoblotting. In control cells not exposed to virus, the TRIM5α level declined at a slow rate, eventually leveling off at 55% of the original level after 4 hours (Figure 1A). By contrast, inoculation with HIV-1 induced a more rapid decrease in the TRIM5α level, resulting in 85% loss after 4 hours. Analysis of data from 4 experiments indicated that the decay of TRIM5α was significantly faster in the HIV-1-inoculated cultures relative to the control (Figure 1B). The stability of TRIM5α in our cells differs in terms of time course from previously published reports using HeLa cells [55]. In additional studies we observed a similar destabilizing effect of HIV-1 exposure on TRIM5α_rh in HeLa cells (data not shown).

Exposure of target cells to saturating levels of virus or VLPs can overcome restriction by TRIM5α. To determine whether the decay of TRIM5α_rh was related to saturation of restriction, we inoculated TRIM5α_rh-expressing cells with various doses of a GFP-encoding virus in the presence of cycloheximide for a fixed period of time and harvested the cells to quantify TRIM5α levels. To probe the relationship between saturation of restriction and TRIM5α degradation, a portion of the harvested cells was replated and cultured for 48 hours, and the extent of infection was determined by flow cytometric analysis of GFP expression. The results showed that the ability to detect degradation of TRIM5α_rh was strongly dependent on the dose of virus used (Figure 1C). Furthermore, the TRIM5α level following inoculation was inversely related to the overall extent of infection (Figure 1D). These results indicate that HIV-1-induced degradation of TRIM5α is correlated with saturation of restriction, likely reflecting a requirement to engage most of the restriction factor in order to detect the loss of the protein.

Human TRIM5α Stability is Not Affected by HIV-1
Human TRIM5α does not efficiently restrict HIV-1 infection. To further probe the link between restriction and TRIM5α destabilization, we analyzed the stability of the human TRIM5α protein following challenge of cells with HIV-1. As shown in Figure 1, HIV-1 challenge of TRIM5α_rh-expressing 293T cells resulted in a more rapid loss of the protein vs. mock-infected cells (Figure 2A and B). TRIM5α_hu was intrinsically less stable than TRIM5α_rh, as indicated by its more rapid decay in the mock-infected cultures (Figure 2B and C). However, inoculation with HIV-1 did not result in further destabilization of TRIM5α_hu, indicating that the HIV-1-induced degradation of TRIM5α_rh is not a nonspecific cellular response to the viral challenge. These results suggest that the loss of TRIM5α_rh depends on its ability to recognize the HIV-1 core.

Author Summary
Recent studies have identified several cellular proteins that restrict infection by a variety of retroviruses. One of these restriction factors, TRIM5α, is partially responsible for the differences in susceptibility of monkeys and humans to SIV and HIV-1, respectively. TRIM5α inhibits retrovirus infection soon after penetration into the target cell by associating with the viral protein CA, which forms the polymeric capsid shell of the viral core. Although the detailed mechanism of restriction is unknown, TRIM5α is postulated to alter the stability of the viral core, resulting in a failure to complete reverse transcription.
The activity of cellular proteasomes, which are responsible for intracellular protein degradation, has also been implicated in TRIM5α-dependent attenuation of retroviral reverse transcription. In this study, we show that cellular TRIM5α is rapidly degraded in cells exposed to a restriction-sensitive retrovirus but not in cells infected with an unrestricted virus. Virus-induced degradation of TRIM5α was dependent on cellular proteasome activity, as inhibition with drugs blocking proteasome function also inhibited degradation of TRIM5α. These results provide additional support for a role of proteasomal degradation in TRIM5α-dependent retrovirus restriction and suggest a novel mechanism by which binding of TRIM5α to the viral capsid prevents infection.

Exposure to Restriction-Sensitive HIV-1 Destabilizes TRIMCyp
The owl monkey restriction factor TRIMCyp restricts HIV-1 by binding to an exposed loop on the surface of CA. Restriction can be prevented by addition of CsA or by amino acid substitutions in CA that reduce CypA binding. We therefore asked whether TRIMCyp would also be destabilized following encounter of HIV-1. 293T cells expressing TRIMCyp were treated with cycloheximide and then challenged with VSV-G-pseudotyped HIV-1 particles. As a control, parallel cultures were inoculated in the presence of a CsA concentration known to abolish TRIMCyp restriction of HIV-1. In the control mock-inoculated cells, TRIMCyp was stable during the six-hour time course (Figure 3A). Challenge with HIV-1 resulted in accelerated loss of TRIMCyp. In the cultures containing CsA, the HIV-1-induced loss of TRIMCyp was markedly reduced (Figure 3B). Next we asked whether the HIV-1-induced degradation of TRIMCyp is correlated with the specificity of restriction. HIV-1 containing the G89V mutation in the CypA binding loop of CA is incapable of binding CypA and is also not restricted by TRIMCyp. However, this viral mutant is susceptible to TRIM5α_rh restriction. Parallel cultures of 293T cells expressing either TRIMCyp or TRIM5α_rh were treated with cycloheximide and then challenged with equivalent quantities of VSV-G-pseudotyped HIV-GFP particles or the G89V CA mutant virus. As seen in Figure 3C, exposure to wild-type HIV-1 induced accelerated loss of both TRIMCyp and TRIM5α_rh. By contrast, exposure to the G89V mutant particles resulted in loss of TRIM5α_rh but not TRIMCyp. These results indicate that exposure of cells to HIV-1 results in destabilization of TRIMCyp by a mechanism requiring recognition of the incoming HIV-1 core by the restriction factor.

Human TRIM5α is Destabilized Upon Encounter of N-tropic MLV
TRIM5α_hu cannot restrict HIV-1 or B-tropic MLV but potently restricts N-MLV. To further test the link between TRIM5α destabilization and retrovirus restriction, we challenged 293T cells stably expressing TRIM5α_hu with N- and B-tropic MLV viruses and measured TRIM5α levels following infection. The GFP-transducing N- and B-tropic MLV stocks were first titrated on nonrestrictive CrFK cells (Figure S2, detailed in Text S1) and then normalized to ensure equivalent dosing. Mock-treated cells lost TRIM5α_hu at a slow rate (t1/2 ≈ 2.5 h; Figure 4A). Challenge with B-MLV did not significantly affect the rate of TRIM5α_hu decay (Figure 4A). By contrast, cells challenged with an equivalent quantity of N-MLV showed accelerated loss of TRIM5α_hu (t1/2 ≈ 1 h) (Figure 4A and 4B). The relative band intensities of the TRIM5α levels for this experiment were calculated and are represented in the graph in Figure 4B; an illustrative half-life calculation of this kind is sketched below.
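Half-lives like those quoted above can be estimated from quantified immunoblot band intensities by fitting a first-order decay. The sketch below is illustrative only: the function name and the intensity values are hypothetical, not the published data, and it simply assumes single-exponential decay.

```python
import numpy as np

def half_life(times_h, band_intensity):
    """Estimate a protein half-life (hours) from immunoblot band intensities,
    assuming first-order decay: I(t) = I0 * exp(-k * t)."""
    slope, _ = np.polyfit(times_h, np.log(band_intensity), 1)   # slope of ln(I) vs t is -k
    return np.log(2) / -slope

# Hypothetical normalised intensities at 0-4 h post-inoculation (not the published data)
t = [0, 1, 2, 3, 4]
mock  = [1.00, 0.76, 0.57, 0.44, 0.33]   # slow turnover, roughly t1/2 ~ 2.5 h
n_mlv = [1.00, 0.50, 0.25, 0.13, 0.06]   # accelerated loss, roughly t1/2 ~ 1 h
print(round(half_life(t, mock), 1), round(half_life(t, n_mlv), 1))   # 2.5 1.0
```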
These results, together with the TRIM5a and TRIMCyp data, establish a strong correlation between virus-induced TRIM5a destabilization and the specificity of restriction. Virus-induced TRIM5a Destabilization is Correlated with Lentiviral Restriction in Old and New World Monkeys TRIM5a proteins from different primates differ in their ability to restrict specific lentiviruses. For example, tamarin monkey TRIM5a (TRIM5a tam ) restricts SIV mac but not HIV-1, while spider monkey TRIM5a (TRIM5a sp ) restricts both viruses. To further test the correlation between virus-induced loss of TRIM5a and antiviral specificity, we stably expressed the TRIM5a tam and TRIM5a sp proteins in 293T cells and challenged them with equivalent titers of VSV-pseudotyped HIV-1 and SIV mac239 GFP reporter viruses (as determined by titration on permissive CrFK cells). The cell lines were found to restrict the respective viruses by at least ten-fold (data not shown). Immunoblot analysis of post-nuclear lysates revealed that TRIM5a rh was specifically destabilized when challenged with HIV-1 but not upon SIV mac challenge ( Figure 5A). By contrast, the SIV-restrictive TRIM5a tam was destabilized only in response to SIV mac challenge ( Figure 5A). TRIM5a sp , which restricts both viruses, was degraded in response to challenge with either virus ( Figure 5A and B). These results further strengthen the correlation between the specificity of retrovirus restriction and virus-induced destabilization of TRIM5a. HIV-1-Induced Destabilization of TRIM5a Requires Proteasome Activity A major mechanism for cellular protein degradation is via the 26S proteasome. Previous studies have shown that the turnover of TRIM5a is dependent on cellular proteasome activity. Furthermore, inhibition of proteasome activity overcomes the early block to reverse transcription imposed by TRIM5a. We asked whether HIV-1-induced destabilization of TRIM5a rh is dependent on proteasome activity. As previously reported [55], treatment of cells with the proteasome inhibitor MG132 resulted in an accumulation of TRIM5a protein (Figure 1, 0 H.p.i.). MG132 also prevented the HIV-1-induced destabilization of TRIM5a rh ( Figure 6A and B). Additional studies revealed that epoxomicin, a more specific proteasome inhibitor, also blocked the HIV-1-induced degradation of TRIM5a rh (data not shown). By contrast, infection by HIV-1 in the presence of the S-cathepsin inhibitor E64 did not prevent HIV-1induced TRIM5a rh degradation (data not shown), suggesting that endosomal proteases are not responsible for TRIM5a rh destabilization. We conclude that the virus-induced degradation of TRIM5a is dependent on cellular proteasome activity. To determine whether HIV-1-induced destabilization of TRIMCyp depends on proteasome activity, we challenged TRIMCyp-expressing 293T cells with either restricted HIV-GFP or unrestricted HIV.G89V-GFP in the presence or absence of MG132. As shown in Figure 6C, MG132 prevented the HIV-1induced loss of TRIMCyp. Infection with the unrestricted G89V virus did not alter TRIMCyp stability, while addition of MG132 stabilized the restriction factor. HIV-1-Induced Destabilization of Endogenous TRIM5a in Primate Cells All of the previous experiments studying TRIM5a stability were conducted in transduced 293T cell lines in which TRIM5a was detected by virtue of a hemagglutinin epitope tag. 
In this setting, it was necessary to treat the cells with cycloheximide to detect virus-induced degradation of the restriction factor, potentially leading to artifacts due to general inhibition of protein synthesis. Virus titration experiments demonstrated markedly greater restriction in the transduced cells vs. the rhesus macaque FRhK-4 cell line, indicating that the 293T cells overexpress TRIM5α_rh (our unpublished observations). Furthermore, while cycloheximide treatment had only a minor effect on restriction in FRhK-4 cells, the drug markedly reduced restriction in 293T cells (Figure S4). To probe the physiological relevance of our observations made in 293T cells, we sought a means of detecting endogenous TRIM5α protein in rhesus macaque cells. Using a monoclonal antibody against native TRIM5α for immunoblotting, we detected a band that was consistent in molecular weight with TRIM5α_rh and was absent in cells lacking TRIM5α_rh (data not shown). To confirm that the band is TRIM5α, we transfected FRhK-4 cells with either a TRIM5α_rh-specific siRNA duplex or a non-targeting control siRNA duplex and quantified the intensity of this band by immunoblotting. As shown in Figure 7A and B, transfection with the TRIM5α_rh-specific siRNA resulted in a 72% decrease in intensity of the relevant band vs. FRhK-4 cells treated with the non-targeting control. Cells treated with the TRIM5α_rh-specific siRNA also exhibited a tenfold increase in permissiveness to infection with HIV-1 (data not shown). HIV-1 infection of FRhK-4 cells was not altered by treatment with the non-targeting siRNA control. As expected, treatment with either siRNA duplex did not affect permissiveness to SIV infection (data not shown). These results indicated that the monoclonal antibody is capable of detecting endogenous TRIM5α_rh in FRhK-4 cells. They further demonstrated that the transduced 293T cells express a 3.3-fold higher level of TRIM5α than FRhK-4 cells (Figure 7B). We next sought to determine whether endogenous TRIM5α_rh was destabilized by HIV-1 in rhesus macaque cells. FRhK-4 cultures were inoculated with HIV-1 in the presence or absence of cycloheximide, and the stability of TRIM5α_rh in response to infection was analyzed by immunoblotting. Initial experiments showed no effect of cycloheximide treatment on TRIM5α_rh levels in HIV-1-exposed cells (data not shown); therefore the drug was omitted in all subsequent experiments. We observed that TRIM5α_rh levels were stable in FRhK-4 cells over the 4-hour period (Figure 7C and D). Infection with HIV-1 resulted in accelerated decay of endogenous TRIM5α_rh in rhesus macaque cells without any requirement for inhibition of protein synthesis. We next sought to determine whether the loss of TRIM5α_rh was specifically due to restriction or was a non-specific effect of viral infection. In the absence of cycloheximide, we infected FRhK-4 cells with equivalent titers of HIV-1 or SIVmac239 GFP reporter viruses. As seen in Figure 8A and B, infection with HIV-1 resulted in a potent loss of TRIM5α_rh, while infection with SIV resulted in only a slight loss of TRIM5α_rh as compared to the control cells. We conclude that infection by HIV-1 results in a rapid loss of TRIM5α_rh in target cells and that this loss is directly related to the ability of TRIM5α_rh to restrict infection by the incoming virus.
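Band-intensity comparisons like the 72% knockdown and the 3.3-fold expression difference quoted above are typically derived from densitometry normalised to a loading control. The sketch below is a generic illustration of that arithmetic; the numbers and the loading-control choice are hypothetical, not values from the study.

```python
def normalized_intensity(target_band, loading_control_band):
    """Normalise a target band to its lane's loading control (e.g. actin)."""
    return target_band / loading_control_band

# Hypothetical densitometry values (arbitrary units)
control_lane = normalized_intensity(12000, 10000)    # non-targeting siRNA
knockdown_lane = normalized_intensity(3400, 10200)   # TRIM5alpha-specific siRNA

percent_decrease = 100 * (1 - knockdown_lane / control_lane)
print(f"knockdown: {percent_decrease:.0f}%")          # ~72%

# Fold-difference between two cell lines, each normalised within its own lane
fold = normalized_intensity(33000, 10000) / normalized_intensity(10000, 10000)
print(f"relative expression: {fold:.1f}-fold")        # ~3.3-fold
```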
HIV-1-Induced Destabilization of Endogenous TRIM5a Requires Active Proteasomes We sought to determine if inhibition of proteasome function would restore TRIM5a rh stability in rhesus macaque cells. FRhK-4 cells were exposed to HIV-1 in the presence or absence of MG132 for a period of four hours, and the levels of TRIM5a rh were measured by immunoblotting. As can be seen in Figure 8C and D, MG132 stabilized TRIM5a rh in HIV-1-exposed cells. Flow cytometry analysis of GFP signal in a small subset of the infected cells showed no difference in infection levels resulting from inhibition of proteasome function, which is consistent with previously published results. These results indicate that HIV-1induced destabilization of TRIM5a rh in rhesus macaque cells requires proteasome activity. They further suggest that the results we observed with TRIM5a-transduced 293T cells are unlikely to be an artifact of cycloheximide treatment. Discussion While it is well established that TRIM5a limits the host range of many retroviruses, the precise mechanism of restriction remains undefined. TRIM5a can specifically associate with assemblies of HIV-1 CA-NC protein in vitro, and genetic evidence indicates that TRIM5a and TRIMCyp require an intact or semiintact viral capsid for binding [60,61]. However, the detailed molecular consequences of the binding interaction to the viral core remain poorly defined. Two lines of evidence have implicated the ubiquitin-proteasome system in restriction. First, the d isoform of TRIM5, which has a RING domain identical to that of TRIM5a, exhibits E3 activity in vitro [56]. Deletion or mutation of the RING domain in TRIM5a results in significant loss of restriction efficacy [44,49]. TRIM5a is ubiquitinated in cells, although a role of this modification in retrovirus restriction has not been established [55]. Second, inhibition of proteasome activity alters the stage at which TRIM5a-mediated restriction occurs [58,59]. The latter observation led us to hypothesize that the proteasome may participate in restriction by degrading a complex of TRIM5a with one or more incoming viral proteins. To test this, we asked whether exposure of cells to HIV-1 alters the stability of TRIM5a rh . We observed that inoculation with HIV-1 results in an accelerated turnover of the restriction factor. Similar effects were observed in both 293T and HeLa cells (data not shown), suggesting that TRIM5a destabilization is not specific to a unique cell type. HIV-1 challenge resulted in destabilization of TRIM5a rh but not TRIM5a hu . Likewise, TRIM5a hu was destabilized by inoculation of cells with restriction-sensitive N-MLV particles but not by unrestricted B-MLV. Similar results were seen in cells expressing the HIV-1specific restriction factor TRIMCyp. Treatment of target cells with CsA, which blocks TRIMCyp restriction of HIV-1, or infection with virus containing mutations that prevent CypA binding [4,5,38], did not affect TRIMCyp stability. Specific loss of TRIM5a from cells expressing different primate alleles of the protein also correlated very well with the ability of those alleles to restrict HIV or SIV. The HIV-1-induced destabilization of TRIM5a rh and TRIMCyp was prevented by inhibition of cellular proteasome activity. Destabilization of TRIM5a rh by HIV-1 was also observed in a primate derived cell line without the need of cycloheximide to inhibit protein synthesis. This destabilization was specific for the restricted HIV-1 and was not observed in cells infected with an unrestricted virus. 
Inhibition of proteasome function restored TRIM5a rh stability in response to infection by HIV-1 in the rhesus macaque cells. We conclude that TRIM5related restriction factors are targeted for degradation by a proteasome-dependent mechanism following encounter of a restriction-sensitive retroviral core. TRIM5a forms heterogenous structures in cells referred to as cytoplasmic bodies (CBs). While the role of CBs in restriction is unclear, TRIM5a protein in these structures rapidly exchanges with soluble TRIM5a, indicating that the protein is highly dynamic within cells [62]. We observed that most of the cellular TRIM5a can be degraded in response to exposure to a restriction-sensitive retrovirus, which implies that a majority of cellular TRIM5a molecules can engage incoming viral cores. If the CB-associated TRIM5a is inaccessible to incoming virus, our observation that a restricted virus can induce degradation of the majority of the TRIM5a molecules suggests that this protein rapidly redistributes to a compartment accessible to incoming virus. TRIM5a and TRIMCyp are subject to proteasome-dependent turnover under steady-state conditions, yet its rapid turnover is not a prerequisite for restriction activity [55,63]. Accordingly, proteasome inhibitors do not overcome restriction ( [57]; Figure S5). Nonetheless, the effect of virus exposure on TRIM5a stability had heretofore not been reported. While alterations of specific individual portions of TRIM5a may alter its intrinsic stability, our results indicate that TRIM5a encounter with a restricted core results in degradation of the restriction factor by a proteasomedependent mechanism. Retrovirus uncoating is a poorly characterized process, but can be defined as the disassembly of the viral capsid following penetration of the viral core into the target cell cytoplasm. Studies of HIV-1 CA mutants indicate that the stability of the viral capsid is properly balanced for productive uncoating in target cells: mutants with unstable capsids are impaired for viral DNA synthesis, suggesting that premature uncoating is detrimental to reverse transcription [64]. Thus a plausible mechanism for restriction is that binding of TRIM5a to the viral capsid inhibits infection directly by physically triggering premature uncoating in target cells [65,66]. In this model, TRIM5a, perhaps with one or more cofactors, promotes the physical decapsidation of the virus core independently of proteolysis. Consistent with this view are studies demonstrating that TRIM5a restriction is associated with decreased recovery of sedimentable CA protein in lysates of acutely-infected cells [65,66]. However, these studies fell short of demonstrating that the sedimentable CA protein was associated with intact viral cores. Furthermore, a recent study reported that treatment of cells with proteasome inhibitors prevented TRIM5a-dependent loss of particulate CA protein [67], indicating the potential involvement of proteasome activity in TRIM5ainduced virus uncoating. Other studies further implicate the activity of the proteasome in TRIM5a-dependent restriction. Inhibition of proteasome activity rescues HIV-1 reverse transcription in TRIM5a-expressing cells, revealing a downstream block to nuclear import mediated by the restriction factor [58,59]. Engagement of the viral capsid by TRIM5a may lead to proteasomal degradation of a TRIM5a-CA complex, resulting in functional decapsidation of the viral core and a premature uncoating phenotype. 
Consistent with this model, TRIM5α restriction has been associated with decreased intracellular accumulation of HIV-1 CA [68]. In addition, a recent study of MLV particle-mediated RNA cellular transfer reported reduced accumulation of viral CA protein in cells in a manner that was correlated with restriction by TRIM5α, and this effect was reversed by proteasome inhibition [69]. Unfortunately, our own efforts to detect an effect of TRIM5α on the stability of the incoming HIV-1 CA have thus far yielded negative results; thus we are reluctant to conclude at this stage that a specific component of the viral core is degraded as a complex with TRIM5α. Another potential mechanism is that proteasomal engagement of TRIM5α bound to the virus core results in physical dissociation of CA from the core followed by its release from TRIM5α, thus leading to destruction of the restriction factor and decapsidation of the core but not necessarily degradation of CA [70]. Genetic evidence from abrogation-of-restriction studies indicates that TRIM5α binding requires an intact or semi-intact viral capsid [60], suggesting that TRIM5α binding to CA is highly dependent on avidity resulting from multivalent interactions with the polymeric viral capsid. It is thus plausible that CA is released from TRIM5α following forced uncoating. This model is attractive in its ability to reconcile most, if not all, of the reported data regarding the mechanism of restriction by TRIM5α. HIV-1 infection in many primate cell lines exhibits biphasic titration curves, and restriction can be abrogated in trans by high concentrations of VLPs, indicating that virus restriction is saturable. While it is generally assumed that the saturation occurs via sequestration of the restriction factor by the incoming virus, our results reveal another potential mechanism. Degradation of TRIM5α(rh) by HIV-1 was tightly correlated with cellular susceptibility to infection by incoming virus, suggesting that loss of restriction at high virus input may occur via degradation of the restriction factor itself. Consistent with this view, treatment with MG132 resulted in a three-fold decrease in HIV-1 infection of FRhK-4 as well as OMK cells, while infection by unrestricted SIV was inhibited only marginally (Figure S5). This result, coupled with our observations of proteasome-dependent degradation of TRIM5α proteins in restrictive cells, suggests that depletion of TRIM5α via the proteasome contributes to the saturability of restriction. The potential involvement of ubiquitylation in virus-induced degradation of TRIM5α warrants further study. The autoubiquitylation of TRIM5δ observed in vitro suggests that TRIM5α may be ubiquitylated in trans upon polymerization of the restriction factor on a retroviral capsid. However, we have been unable to detect accumulation of cellular ubiquitylated TRIM5α species following HIV-1 inoculation either in the presence or absence of proteasome inhibitors (our unpublished observations). While many cellular proteins are regulated by ubiquitin-dependent proteolysis, ubiquitin-independent proteasomal degradation is also well documented (reviewed in [71]). Most E3 ligases are not degraded following ubiquitylation of a substrate, yet notable exceptions exist. The E3 enzyme Mdm2 is degraded following its ubiquitylation of its target, p53 [72], and the stability of several E3 ligases is related to their ubiquitylation status resulting from autoubiquitylation [73-75].
It will therefore be of interest to determine whether HIV-1-induced degradation of TRIM5α is dependent on host cell ubiquitylation and the TRIM5α RING domain. The early post-entry stage of infection remains a fundamentally obscure part of the retrovirus life cycle. Our results provide novel evidence for a role for proteasome activity in TRIM5α restriction. Further mechanistic studies of TRIM5α may reveal novel approaches to antiviral therapy and fundamental insights into the molecular details of HIV-1 uncoating.
Chemicals
MG132 and cycloheximide were purchased from Sigma-Aldrich and used at final concentrations of 25 μM and 50 μM, respectively. Cyclosporin A was purchased from CalBiochem and used at 2.5 μM final concentration. Epoxomicin was purchased from Boston Biochem and used at 10 μM. The cathepsin inhibitor E64 was purchased from Sigma-Aldrich and was used at 40 μM.
Cells and Viruses
FRhK-4 cells were purchased from the American Type Culture Collection. Cells were cultured in Dulbecco's modified Eagle's medium containing 10% fetal bovine serum and 1% penicillin/streptomycin. VSV-G-pseudotyped HIV-1 NL4.3, HIV-GFP, and SIV-GFP viruses were produced by calcium phosphate transfection of 293T cells with proviral plasmid DNA (23 μg) and pHCMV-G (7 μg). N- and B-tropic MLV virus stocks were prepared by cotransfection of 23 μg pCIG-N or pCIG-B plasmids with pHCMV-G (7 μg) onto the cell line 293TeGFP. This cell line is a clone generated from 293T cells previously transduced with the retroviral vector pBABE-eGFP and isolated by limiting dilution and selected for high levels of GFP expression. Transfected cells were washed after 24 hours and replenished with fresh media. Supernatants were harvested 48-72 hours after transfection, clarified by passing through 0.45 μm filters, and stored in aliquots at −80 °C. Retrovirus stocks for transduction of TRIM5α alleles were harvested from 293T cells transfected with the plasmids pCL-ampho (10 μg), the appropriate TRIM5α vector (15 μg), and pHCMV-G (5 μg). Viruses were collected 48 hours after transfection and used to transduce 293T cells. All 293T cell lines expressing TRIM5α proteins were polyclonal cell populations obtained by selection of transduced cells with puromycin. TRIMCyp-expressing cells were obtained by isolation of a single cell clone via limiting dilution. HIV-1 was strongly restricted in these cells, and restriction was prevented by the addition of 5 μg/ml cyclosporin A (CsA).
Infection Protocol
Cells were seeded in 6-well plates at a density of 1 to 1.25 × 10^6 cells/well and incubated overnight. Prior to infection, cultures were treated for 1 hour in 50 μM cycloheximide to block protein synthesis. In experiments involving proteasome inhibitors, cells were incubated with both cycloheximide and the appropriate inhibitor for 1 hour prior to infection. Viral stocks containing cycloheximide, polybrene (5 μg/mL), CsA (2.5 μM), and proteasome inhibitors were prewarmed to 37 °C prior to addition to cells. After culturing for 1 hr, media from zero hour timepoints was removed and 1 ml of PBS was added. Cells were then detached from the plate by flushing, pelleted, washed in PBS, repelleted, and the pellets frozen at −80 °C. Cells that were challenged with virus had media removed and replaced with viral stock and were returned to 37 °C. Individual cultures were harvested hourly using the same procedure as previously described for the zero hour timepoints. All cell pellets were frozen at −80 °C prior to analysis.
For experiments utilizing FRhK-4 cells, the cells were seeded in 6-well plates at a density of 3 × 10^5 cells/well and incubated overnight. Prewarmed viral stocks containing polybrene (5 μg/mL) were added the following day, with a well harvested at the time of viral addition serving as the zero hour timepoint. Cells were incubated with the viral stock for the indicated time period, then trypsinized, placed in fresh D10 media at a 1:1 volume, pelleted, washed in 1 mL complete D10 media to inactivate trypsin, repelleted, washed 2 times in 1 mL PBS, then frozen at −80 °C. In experiments with FRhK-4 cells involving MG132, the cells were incubated with inhibitor for one hour prior to viral addition, with the zero hour timepoint being an uninfected well harvested after 1 hour pretreatment.
siRNA Knockdown of TRIM5α(rh)
293T and FRhK-4 cells were seeded at a density of 2 × 10^5 cells per well in 6-well plates and incubated overnight. 24 hours later, TRIM5α(rh)-specific siRNA [3], or a non-targeting control siRNA (Dharmacon), were diluted to a concentration of 3 μM in 1× siRNA buffer and then transfected into cells using Dharmafect 1 transfection reagent and OptiMEM I (Gibco) according to the manufacturer's protocol (Dharmacon). Cells were then incubated overnight and retransfected with siRNAs again the following day utilizing the identical protocol. 48 hours after the first siRNA transfection, the cells were removed from the 6-well plates and plated onto a 10 cm dish in complete D10 media at a ratio of one well per 10 cm dish and incubated for either 24 or 48 hours. 24 hours later, one 10 cm dish of either TRIM5α(rh)-specific siRNA-treated cells or non-targeting control-treated cells was trypsinized and replated in 24-well plates at a density of 2 × 10^5 cells/well, then incubated overnight. The following day, the remaining two 10 cm dishes of siRNA-treated cells were trypsinized, diluted 1:1 in D10 media, pelleted, washed once in D10 media to inactivate trypsin, repelleted, washed twice in 1 mL PBS per wash, repelleted, then frozen at −80 °C. Cells that had been seeded the prior day in the 24-well plates were then infected with dilutions of HIV- and SIV-GFP, incubated for 48 hours, then analyzed for GFP expression by flow cytometry.
Protein Analyses
Cell pellets were thawed and lysed in a solution containing 100 mM Tris-HCl (pH 8.0), 100 mM NaCl, and 0.5% NP-40. Nuclei were pelleted via centrifugation at 16,000 × g for 10 minutes and post-nuclear supernatants were removed. Protein levels were quantified via BCA assay (Pierce). Samples, normalized for total protein, were denatured in SDS and subjected to electrophoresis on 4-20% acrylamide gradient gels (BioRad). Proteins were transferred to nitrocellulose and probed with HA epitope tag-specific rat monoclonal antibody (3F10, Roche) and Alexa Fluor 680-conjugated goat anti-rat IgG (Molecular Probes). Cells expressing TRIMCyp were probed with the myc epitope-specific mouse monoclonal antibody (9E10, Invitrogen) and Alexa Fluor 680-conjugated goat anti-mouse IgG (Molecular Probes). Proteins extracted from FRhK-4 cells were probed with the TRIM5α-specific mouse polyclonal antibody (IMG-5354, Imgenex) and Alexa Fluor 680-conjugated goat anti-mouse IgG (Molecular Probes). All immunoblots were probed with β-actin-specific rabbit monoclonal antibody (A2228, Sigma) and IRDye800-conjugated goat anti-rabbit IgG (Rockland). Dilutions of antibodies were 1:1000 and 1:5000 for primary and secondary, respectively, with the exception of IMG-5354, which was used at a dilution of 1:2000.
Bands were detected by scanning blots with the LI-COR Odyssey Imaging System using both 700 and 800 channels, and integrated intensities were determined using the LI-COR Odyssey band quantitation software with the median top-bottom background subtraction method. The TRIM5α band intensities were then normalized to the signals from the corresponding β-actin bands. All signals were then expressed as a percentage of the initial TRIM5α/actin band intensity ratio.
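The quantitation step above is simple arithmetic: each TRIM5α signal is divided by its β-actin signal, and the resulting ratios are rescaled so that the zero-hour value equals 100%. A minimal Python sketch of that calculation follows; the intensity values and the function name are invented for illustration and are not taken from the study's data.

# Minimal sketch of the band-quantitation arithmetic described above.
# Inputs are illustrative integrated intensities; names and numbers are hypothetical.
def normalized_percentages(trim5_intensities, actin_intensities):
    """Normalize TRIM5alpha band intensities to beta-actin and express each
    timepoint as a percentage of the initial (time-zero) ratio."""
    ratios = [t / a for t, a in zip(trim5_intensities, actin_intensities)]
    baseline = ratios[0]
    return [100.0 * r / baseline for r in ratios]

# Example: hourly timepoints after virus addition (arbitrary units).
trim5 = [12000, 9500, 6400, 4100]
actin = [30000, 29500, 30500, 29800]
print(normalized_percentages(trim5, actin))
# -> [100.0, 80.5, 52.5, 34.4] (percent of the initial TRIM5alpha/actin ratio)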
2014-10-01T00:00:00.000Z
2008-05-01T00:00:00.000
{ "year": 2008, "sha1": "507b52a59c8f59ecf3c9b9050234ee61d0d37a66", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1000074&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "507b52a59c8f59ecf3c9b9050234ee61d0d37a66", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
29677079
pes2o/s2orc
v3-fos-license
Vitamin A and the eye : an old tale for modern times Clinical presentations associated with vitamin A deficiency persist in poor regions globally with the same clinical features as those described centuries ago. However, new forms of vitamin A deficiency affecting the eyes, which have become widespread, as a result of modern societal habits are of increasing concern. Ophthalmic conditions related to vitamin A deficiency require the combined attention of ophthalmologists, pediatricians, internists, dermatologists, and nutritionists due to their potential severity and the diversity of causes. As the eyes and their adnexa are particularly sensitive to vitamin A deficiency and excess, ocular disturbances are often early indicators of vitamin A imbalance. The present review describes the clinical manifestations of hypovitaminosis A with an emphasis on so-called modern dietary disorders and multidisciplinary treatment approaches. The present review also discusses the relationship between retinoic acid therapy and dry eye disease. The present review aims to inform health professionals of the modern presentations, causes, associated systemic diseases, and risk factors of hypovitaminosis A. The utility of retinoic acid application for the treatment of skin diseases and dry eye is also discussed (4) .Herein, we present the clinical presentation of hypovitaminosis A and discuss strategies for the investigation and treatment of the causes and consequences of hypovitaminosis A and side effects of the use of retinoic acid (a form of vitamin A) in dermatological and oncological therapies. Interestingly, one of the most complete and objective descriptions of the clinical manifestations of hypovitaminosis A was published decades before the specific underlying cause was known by the Brazilian ophthalmologist, Manoel da Gama Lobo, in 1865 (10) .Dr. Gama Lobo reported four cases of children, all descendants of slaves, with ocular disease who subsequently developed lung and digestive disorders before ultimately dying.In this report, the disease was termed Ophthalmia Braziliana, and the clinical progression was comprehensively detailed.Food deprivation was identified and credited to the practice of extensive monoculture in the farms of Southeast Brazil, in that century dedicated to the production of coffee and sugar. Dr. Gama Lobo attributed the signs and symptoms observed in his patients to the poor diet of slaves and their descendants, a problem that he never saw in his homeland to north of the country where agriculture production was dedicated to local consumption and therefore more variable and abundant.At the end of his report, Dr. Gama Lobo called the attention of legislators to the need for laws aimed at preventing the sequence of problems he outlined.His paper was published in Portuguese and in German but is relatively unknown to the majority of the medical community, although it is now freely available online (11,12) . Recent epidemiologic data from Brazil in a study population of 3,499 children aged between 6 and 59 months and 5,698 women aged between 15 and 49 years revealed that hypovitaminosis A is present in all five regions of Brazil with a prevalence of 17.4% and 12.3% among children and women, respectively (13) .The highest prevalence was found to be in urban areas and the northeastern and southeastern regions of the country. 
CLASSIC DISEASE
The typical medical scenarios leading to hypovitaminosis A are low food intake, intestinal parasitosis, malabsorption syndromes, and diets containing low amounts of vitamin A (Figure 2). Hypovitaminosis A is classically caused by food deprivation. It is present in rural areas and the peripheries of large cities in South Asia, Africa, and Latin America, and in the poor communities of large cities of developed countries (14-17). The most vulnerable individuals are children and pregnant women. The prevalence of hypovitaminosis A can reach 50% in children under 6 years of age in certain areas (18). Laboratory confirmation of the diagnosis of hypovitaminosis A is defined as a serum retinol level <0.3 mg/l or 0.7 µM (19).
In addition to ocular problems, hypovitaminosis A also predisposes individuals to retarded growth, infertility, congenital malformations, infections, and early mortality (18,20). The issue of vitamin A deficiency in these populations, distributed in more than 45 countries, has been the target of international preventive programs of vitamin A supplementation and periodic evaluation (16,18,19).
Individuals suffering from food deprivation and malabsorption are often infected with intestinal parasites, such as Ascaris lumbricoides, hookworms (Ancylostomidae), and Giardia lamblia, which may aggravate the inflammatory background and the signs and symptoms of hypovitaminosis A (21-24).
Other well-known causes of vitamin A deficiency can be grouped into conditions associated with malabsorption syndrome. The treatments of several diseases that cause digestive disturbances and/or impair the absorption of lipids and vitamin A have improved in recent decades, increasing life expectancy and improving the clinical control of hypovitaminosis A, allowing the majority of patients to lead a normal life. However, the majority of these patients will develop xerophthalmia (the specific term for hypovitaminosis A-related dry eye), which may progress to more severe ocular damage and other clinical manifestations of vitamin A depletion (25-27).
Acquired diseases associated with malabsorption syndrome known to cause hypovitaminosis A include chronic pancreatitis caused by chronic alcoholism, liver and pancreas autoimmunity, Crohn's disease, and ulcerative colitis, among other diseases affecting the digestive system (28). Congenital diseases associated with malabsorption syndrome and hypovitaminosis A include cystic fibrosis and short bowel syndrome, among other genetic diseases that may impair intestinal vitamin A absorption in individuals with normal or high oral intake of retinoids and carotenoids (2,29,30).
The fourth group of conditions that classically cause hypovitaminosis A comprises those that may initially lead to malabsorption syndrome but later progress to impaired hepatic storage of vitamin A. Biliary cirrhosis, chronic hepatitis, and chronic cirrhosis caused by toxic agents, viruses, and other causes may lead to hypovitaminosis A and should be screened for and treated by parenteral vitamin A supplementation according to body mass index and level of vitamin A deficiency (31).
MODERN DISEASES ASSOCIATED WITH HYPOVITAMINOSIS A
In recent decades, the conditions known to induce hypovitaminosis A have been classified into four groups. Despite their varying prevalence, such conditions should be carefully considered by ophthalmologists during routine clinical practice.
Modern causes of hypovitaminosis A that may also lead to xerophthalmia and other eye diseases and cause blindness are shown in Figure 2 and Table 2. They comprise voluntary ingestion of low vitamin A diets or restrictive diets (e.g., vegetarian or cafeteria diets), psychiatric eating disorders (e.g., anorexia and bulimia), bariatric surgeries mimicking malabsorption syndrome, and chronic diseases that affect organs involved in vitamin A digestion or clearance (e.g., Sjögren's syndrome and kidney failure).
Restrictive diets resulting from dietary behaviors may lead to a status of hypovitaminosis A and the consequences mentioned above. Diets adopted in conjunction with drugs to reduce appetite, diets with monotonous ingredients, and diets with limited sources of animal ingredients containing retinol and beta-carotene (meat and dairy products such as milk, eggs, and their derivatives) are typically followed in the belief that they will offer better control or prevention of certain diseases or improve general health (32-35). Exclusively vegetarian diets particularly put children and pregnant women at increased risk of hypovitaminosis A, as the conversion of beta-carotenes present in vegetables to retinol is limited during digestion and the availability of vitamin A for absorption and hepatic storage is <20% of dietary vitamin A content (1). The so-called cafeteria diet or competitive food, based on soft drinks and industrialized food, is predominantly composed of carbohydrates and lipids of vegetable origin and provides insufficient amounts of dietary vitamin A. Accordingly, such diets could be considered causes of hypovitaminosis A and associated ocular problems in patients with excessive habits related to these diets (36).
The second group of causes of hypovitaminosis A includes the psychiatric eating disorders anorexia and bulimia nervosa, recognized as major, growing health problems with severe clinical complications and high mortality. Both can cause hypovitaminosis A due to chronic dietary disturbances. The complexity of such conditions must be recognized in the context of early signs of xerophthalmia and should be managed in parallel with psychiatric specialists (37,38).
Bariatric techniques for the treatment of obesity include jejunoileal bypass and stomach reduction to induce weight loss by malabsorptive and restrictive mechanisms (39-41). Patients require vitamin supplementation following these procedures; however, a recent study in Brazil demonstrated that even before bariatric surgery a relevant number of patients already have hypovitaminosis A, and that this prevalence increases 30 and 180 days after the procedure (42). In patients who do not comply with supplementation for a period of weeks or months, ophthalmologists may observe the initial manifestations of hypovitaminosis A. Special attention should be paid to patients undergoing oculoplastic or refractive surgeries, as their nutritional deficiency may be subclinical and cause disturbances in ocular surface homeostasis and wound healing, leading to poor outcomes and serious ocular complications (40). Patients with the above-mentioned conditions may share a number of characteristics, including individual concern and anxiety regarding body image, health, and satisfaction with food consumption.
The fourth class of modern causes of hypovitaminosis A that may contribute to or worsen ocular surface diseases comprises the chronic diseases leading to chronic impairment of the organs involved in digestion and clearance of vitamin A metabolites (Figure 1). Although the majority of these diseases are not new, improvements in therapeutic approach have allowed affected patients to lead longer and more active lives. Similarly, vitamin A deficiency may be neglected in patients receiving frequent healthcare.
Within this group, the diseases causing severe dry mouth, such as head and neck radiotherapy and Sjögren's syndrome, may limit deglutition and digestion and impose dietary restrictions that may lead to hypovitaminosis A (43,44). Therefore, dietary habits and vitamin A levels should be evaluated in patients presenting with the diseases described above and ocular surface complications. Although patients commonly present with dry eye disease associated with these conditions, the clinical picture may be aggravated by hypovitaminosis A.
Renal failure and hemodialysis are associated with dry eye disease and ocular surface changes in diabetic and nondiabetic patients (45,46). There is currently controversy regarding lower vitamin A levels in such patients, as renal failure reduces the reliability of traditional methods of measuring vitamin A levels. However, lower blood vitamin A levels have been shown to be associated with higher morbidity and mortality in these patient populations (47,48). Recently, a case of night blindness and compatible retinal changes, corrected with retinol palmitate treatment, was described in a hemodialysis patient with apparently normal levels of serum retinol (49).
SIDE EFFECTS OF VITAMIN A MEDICAL USE
The utility of vitamin A topical eye drop administration in treating dry eye has been comprehensively investigated (50,51). Vitamin A topical eye drops may also have utility in the treatment of skin diseases and specific types of cancer, including ocular surface neoplasia (52,53). However, excessive vitamin A intake is known to induce gastric and neural side effects such as abdominal and head pain, nausea, and irritability (54,55). These symptoms may be aggravated by chronic use of vitamin A eye drops and lead to the development of blurred vision and pseudotumor cerebri (56-58). A clinical history of dry skin and mucosa, nausea, and retinoic acid intake in meals or pharmaceutical formulations should inform suspicion of acute and chronic side effects or consequences of excessive vitamin A dosing.
Recently, two publications reviewed the mechanisms underlying the induction of meibomian gland dysfunction and dry eye symptoms by systemic retinoic acid therapy for acne. The authors discussed the effects of systemic and topical skin or ocular application of different forms and doses of vitamin A formulations. Moreover, persistent meibomian gland dysfunction after discontinuation of systemic retinoic acid was reported (4,52).
CAse RepORTs Case report 1: A 2-year-old boy presented with a history of conse cutive episodes of hordeola affecting the upper and lower lids of both eyes over the preceding 12 months.The patient had a history of photophobia and crying without tears.Previous ocular treatment included lubricants and antiallergic eye drops.The patient was an only child with no other personal or family antecedents.His dietary habits were based on soft drinks and junk food between meals with deficient intake of meat, milk derivatives, vegetables, and fruits.Swollen lids and hordeola affecting both eyes were observed on examination.He was able to fix and follow light projection with both eyes but was unable to perform visual acuity testing.Slit lamp examination demonstrated mild punctate keratitis and an epithelial defect in the right cornea.The rest of the ocular examination was normal.His body weight matched the 50th percentile for age and sex (12.7 kg); however, his height was in the tenth percentile (84 cm).Laboratory testing was requested and identified hypochromic and microcytic anemia with low blood levels of iron and retinol (32.7 μg/dl and 0.20 mg/l, where the normal levels for children are 50-150 μg/dl and 0.30-0.80mg/l, respectively). Clinical findings and laboratory testing indicated the chronic presence of hordeola, syndrome sicca, growth retardation, and anemia were all consequences of a diet deficient in essential elements such as vitamin A and iron (Fe).The diet was reoriented, and the child was maintained under close observation by his pediatrician until clinical signs improved fully. Case report 2: A 71-year-old woman presented with decreased vision and pain in the left eye (OS) for 20 days and a diagnosis of corneal ulcer.She was receiving antibiotic and corticosteroids eye drops at the time of presentation.She had previously undergone cataract surgery in both eyes 2 months prior to this presentation.Her medical history was noncontributive except for inappetence and weight loss of approximately 10 kg over the preceding year.Her visual acuity was 0.5 in her right eye (OD) and counting fingers at 1 m OS.Biomicroscopic examination revealed conjunctiva hyperemia and a 1.5 mm by 2.5 mm corneal ulcer without secretion or infiltration.A diagnosis of microbial keratitis was made, and eye drops were changed accordingly.During follow-up, she developed a corneal ulcer OD and the ulcer in the OS worsened.Severe corneal punctate fluorescein staining and conjunctival Rose Bengal staining were observed in both eyes.The Schirmer test without anesthesia was zero in both eyes.Her salivary flow was 0.06 ml/min (normal values >0.1 ml/min; Figure 3).Laboratory tests were positive for SSa and SSb (anti-Ro and anti-La antibodies, respectively), and blood levels of vitamin A were 0.2 mg/l.A minor salivary gland biopsy demonstrated leukocyte infiltration with focal organization, ductal dilation, and extensive fibrosis replacing acinar structures.The focus score was graded 4.During evaluations, the patient developed corneal melting OD and underwent penetrant keratoplasty.The present findings indicated a diagnosis of Sjögren's syndrome aggravated by hypovitaminosis A. 
After a period of corticosteroids and vitamin A therapy, her general and ocular symptoms improved.Her case illustrates a delicate combination of causes of sicca syndrome (Sjögren's syndrome and hypovitaminosis A) leading to a severe presentation.The extensive fibrosis of salivary gland structures, almost completely replaced by fibrosis, may be a consequence of concurrent disease and ageing (Figure 3 D). Case report 3: A 22-year-old woman presented with ocular pain, lid edema, and thick tearing for 5 months not improved by lubricants, cyclosporine eye drops, or bandage contact lenses.She reported a habit of mucous fishing.Her previous medical history included myopia, allergy, and acne vulgaris.She had been prescribed a 6-month course of oral isotretinoin 6 years previously without side effects and again 6 months prior to the current complaint.Examination revealed skin scarring, meibomian gland dysfunction, and punctate and filamentary keratitis that was worse OD (Figure 4).The tear film breakup time was 3 s in both eyes and the tarsal conjunctiva presented papillary reaction.The Schirmer test without anesthesia was zero in OD and 2 mm in OS, and her salivary flow rate was 0.033 ml/min.Laboratory testing was negative for hormonal abnormalities, and cystic fibrosis and her vitamin A blood levels were 0.4 mg/l.Tests for autoimmune diseases were negative for SSa and SSb, rheumatoid factor, and antinuclear antibody.Her condition was attributed to a side effect of isotretinoin treatment that had persisted after an 18-month interruption of oral isotretinoin intake.Her case corroborates previous reports of vitamin A-induced dry eye and represents a severe form of this condition that persisted after discontinuation of the causative medication. INvesTIgATION Hypovitaminosis A should be suspected in all cases of night blindness, ocular surface foreign body sensation, and photophobia without other evident causes.Crying without tearing is another relevant symptom of hypovitaminosis A. Recurrent hordeolum, meibomian gland dysfunction identified by gland dropout or inflammation with thickened lipid secretion, corneal epithelial defect, conjunctiva metaplasia (where Bitot's spot is an advanced form and a hallmark), and diffuse punctate keratitis also represent signs suspicious for hypovitaminosis A. In all patients suspected to have hypovitaminosis A, a dietary intake and nutritional habits enquiry must be conducted, with previously validated evaluation models available.In children, investigations of height and weight gain during the management period may also have utility. The utility of blood vitamin A levels measurements is broadly accepted, and a classification system established by the World Health Organization has defined low vitamin A levels as serum retinol concentrations <0.3 mg/l or 0.7 µM.There have been concerns regarding the reliability of blood concentration measurements as the liver is able to sustain normal levels even in extremely vitamin A-deficient states (19,59,60) . Other blood tests including complete blood count, protein, albumin, micronutrients, electrolyte concentrations, and stool fat microscopy have all demonstrated utility in assessing vitamin A deficiency severity.In addition, liver function tests, serology for hepatitis, and sweat sodium chloride test values >60 mM may aid in distinguishing between liver diseases and cystic fibrosis, respectively. 
Ocular surface assessments may be performed with vital staining and tear secretion measurements (fluorescein dye and Schirmer's test). Corneal and conjunctival impression cytology allows documentation of ocular surface epithelial metaplasia, square and speculate cell morphology, reduced nuclear size, and the absence or paucity of goblet cells on microscopy. Ocular surface assessments have demonstrated utility as simple and minimally invasive methods of recording and monitoring hypovitaminosis A in early xerophthalmia (61).
CONCLUSION
The major aim of treatment is to restore vitamin A levels in cases of hypovitaminosis and to reduce exposure in conditions associated with side effects of oral or topical skin vitamin A use. Details regarding dosage and administration routes are outside the scope of the present review, as they are dependent on the underlying cause, patient characteristics, and severity of individual cases. Healthcare professionals attending poor populations and patients with chronic malabsorption syndrome, hepatic, and other related diseases should be familiar with the classic causes of hypovitaminosis A. The modern causes of hypovitaminosis A do not have the same magnitude in terms of prevalence but should be considered by ophthalmologists in daily clinical practice. Hypovitaminosis A can cause blindness and corneal opacity, but it is also an important cause of morbidity and mortality. Increased suspicion of hypovitaminosis A due to ocular surface symptoms and signs should prompt investigation of nutritional and digestive problems, followed by interdisciplinary management allowing early diagnosis and treatment of the causes and effects of the majority of diseases related to hypovitaminosis A.
Figure 1. Metabolic steps underlying vitamin A deficiency from the dietary level to target cells.
Figure 2. Classic and modern causes of hypovitaminosis A.
Figure 3. A 71-year-old woman with bilateral corneal ulcers, weight loss, and features of autoimmune disease affecting her hands. (A) Slit lamp examination demonstrating a corneal ulcer OD. (B) OD corneal melting. (C) Body aspect of weight loss. (D) Histology of a minor salivary gland with leukocyte focal infiltration, ductal dilation, and extensive fibrosis replacing acinar structures (200×). Her condition was attributed to a combination of dryness caused by Sjögren's syndrome and hypovitaminosis A.
Figure 4. A 22-year-old woman with skin scarring secondary to acne vulgaris (A). Her meibomian glands were found to be dysfunctional (B), and her cornea showed punctate and filamentary keratitis (C). Her condition was attributed to systemic and topical retinoic acid skin treatment.
Table 1. Vitamin A nomenclature (Name | Group | Characteristics)
Retinoids | Vitamin A and natural or synthetic derivatives | Similar chemical polyenes and polar end groups
Carotenes | α-Carotene, β-carotene, γ-carotene, and the xanthophyll β-cryptoxanthin | β-ionone rings
Vitamin A | Group of lipophilic nutritional compounds | Essential and broad effects on chordate animal bodies
Provitamin A | Carotenes and retinyl esters | Dietary and pharmaceutical sources of vitamin A
Retinoic acid | Metabolite of vitamin A | Transcription factor binding to cell nuclear receptors
Retinal | Form of vitamin A | Essential for vision function
Retinol | Form of vitamin A | Growth and development functions
Tretinoin | All-trans retinoic acid | Pharmaceutical formulas
1 IU of vitamin A = 0.3 μg retinol = 0.34 μg retinyl acetate = 0.6 μg β-carotene.

Table 2. Major causes of hypovitaminosis A and diagnosis guidelines (Major causes of deficiency of vitamin A | Description)
Malabsorption syndrome | Reduction in uptake and mucosal transport of digested nutrients to the blood stream. Diagnosis: diarrhea, steatorrhea, weight loss, anemia, hyperkeratosis, and acrodermatitis. Blood examination to check pancreas and liver function. Stool analysis (fat, parasites).
Bariatric surgery | Surgery to treat obesity and associated diseases is divided into restrictive, disabsorptive, and mixed techniques and often mimics malabsorption syndrome. Diagnosis: surgical history, use of vitamin supplements, bowel habits. Food intake history. Physical signs. Blood levels of vitamin A. Stool analysis (fat).
Primary deficiency | Low dietary intake of vitamin A. Food sources: beef liver, apricot, spinach, cabbage, milk, carrot, and butter. Diagnosis: food intake history, liver function, and vitamin A serum levels.
Restrictive and monotonous diets | Restricted intake of sources of vitamin A and consumption of the same group of food for many months. Eating disorders: psychiatric, cafeteria diet, and vegetarian. Diagnosis: food intake history. Physical signs. Blood vitamin A levels.
Diagnosis: blood levels of pancreas enzymes and vitamin A. Stool analysis (fat).
Cystic fibrosis | Inherited disease affecting chloride channels leading to exocrine gland dysfunction. Malabsorption mechanisms and signs may be present. Diagnosis: low weight gain in infancy, progressive malnutrition, chronic cough with hypersecretion, chronic sinusitis, biliary cirrhosis, diabetes, respiratory infections and infertility. Sodium and chloride levels in sweat.
Salivary and deglutition diseases | Swallowing problems due to xerostomia, tooth problems, and/or muscular deglutition dysfunction. Example: Sjögren's syndrome. Diagnosis: oral and dental examination and salivary flow rate.
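The footnote to Table 1 gives fixed international-unit equivalences, which translate directly into a small conversion helper. The sketch below simply encodes those equivalences; it is illustrative only and ignores the more nuanced retinol-activity-equivalent rules used in modern dietary guidelines.

# Small helper based on the IU equivalences quoted in Table 1
# (1 IU vitamin A = 0.3 ug retinol = 0.34 ug retinyl acetate = 0.6 ug beta-carotene).
UG_PER_IU = {"retinol": 0.3, "retinyl_acetate": 0.34, "beta_carotene": 0.6}

def iu_to_micrograms(iu, compound="retinol"):
    """Convert an IU amount of vitamin A to micrograms of the given compound."""
    return iu * UG_PER_IU[compound]

print(iu_to_micrograms(5000))                   # 1500.0 ug retinol
print(iu_to_micrograms(5000, "beta_carotene"))  # 3000.0 ug beta-carotene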
2017-10-21T06:43:49.661Z
2016-02-01T00:00:00.000
{ "year": 2016, "sha1": "c479b7af35ab068f2849fe26709337b38640525e", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/abo/a/xyyymLGBnPKNFhmMpQm7Dkc/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c479b7af35ab068f2849fe26709337b38640525e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
30833587
pes2o/s2orc
v3-fos-license
Self-medication with antibiotics by the community of Abu Dhabi Emirate , United Arab Emirates Background: Self-medication with antibiotics may increase the risk of inappropriate use and the selection of resistant bacteria. The objective of the study was to estimate the prevalence of self-medication with antibiotics in Abu Dhabi. Methodology: A validated, self-administered questionnaire was used to collect data. Data were analysed using descriptive statistics, and the chi-square test when applicable. One thousand subjects were invited to participate in the study. Results: Eight hundred sixty questionnaires were completed, with a respondent rate of 86%, consisting of 66% males and 34% females. Among the 860 participants, 485 (56%) reported the use of antibiotics within the last year. Amoxicillin was the antibiotic most commonly used (46.3%). The survey showed a significant association between antibiotics used and age group (p < 0.001). Of the participants surveyed, 393 (46%) stated that they intentionally use antibiotics as self-medication without a medical consultation, a behavior that is significantly affected by educational levels (p 0.001). Two hundred forty-five (28%) participants stored antibiotics at home. These antibiotics were mostly acquired from community pharmacies without prescriptions (p 0.001). Conclusions: The results of this study confirm that antibiotic self-medication is a relatively frequent problem in Abu Dhabi. Interventions are required in order to reduce the frequency of antibiotic misuse. Introduction Efforts to promote the rational use of drugs have mainly been targeted at the formal health care services.These efforts started in the 1970s, when the World Health Organization (WHO) introduced the concept of essential drugs.The principle of this concept is that a limited number of drugs would lead to a better supply of these drugs, better prescribing, and lower costs for health care.Despite the introduction of the essential drug list in over 100 countries, drug consumption still increased worldwide [1].It can be argued that antimicrobials have done more to improve public health in the last 50 years than any other measure, but conversely it is estimated that the volume of the antibiotic market worldwide is between 1 and 2 x 10 8 kg of products [2]. It is well documented that the indiscriminate use of antibiotics has led to hospital, waterborne and food-borne infections by antibiotic-resistant bacteria, enteropathy (irritable bowel syndrome, antibioticassociated diarrhoea etc.), drug hypersensitivity, biosphere alteration, human and animal growth promotion, and destruction of fragile interspecific competition in microbial ecosystems [3].The consequences are severe; infections caused by resistant microbes fail to respond to treatment, resulting in prolonged illness and a greater risk of death.Treatment failures also lead to longer periods of infectivity, which increase the numbers of infected people moving in the community and thus expose the general population to the risk of contracting a resistant strain of bacteria [4]. 
Self-medication can be defined as the use of drugs to treat self-diagnosed disorders or symptoms, or the intermittent or continued use of a prescribed drug for chronic or recurrent disease or symptoms [5,6].Drug regulations that affect the availability of antibiotics are implemented differently in different countries and can play an important role in misconceptions about the use of antibiotics [7].In addition, regulations (and their enforcement) also vary for the dispensation of prescription antibiotics.For example, common self-medication with antibiotics in Spain may be a consequence of poor enforcement and control over the laws and regulations influencing prescription, which has a knock-on effect upon community pharmacy services [8].Another survey showed significant differences in public attitudes, beliefs and levels of knowledge concerning antibiotic use, self-medication and antibiotic resistance in Europe.Overall, only half of the respondents were aware of antibiotic resistance and this awareness was the lowest in countries with a higher prevalence of resistance [7]. The United Arab Emirates (UAE) is a federation of seven gulf emirates with an estimated population of 4.1 million people of different ethnic groups (80% are expatriates) [9].The gross national product per capita is $32,000 [10].Standards of health care in the UAE are generally high, reflecting the high level of public spending over the decades since the oil boom.Better health provision has been reflected in rapidly improving figures for key indicators, such as life expectancy and infant mortality rates, which are now at western levels [9].According to the national antibiotics policy and guide to antimicrobial therapy [11], antimicrobials should only be sold or supplied by prescription from an authorized medical practitioner or dentist.In this policy, for the purposes of rational use, antimicrobials are classified into three groups according to the level of prescription: Group A: For common use, all practitioners may prescribe them (safe, effective and relatively cheap) Group B: Restricted use; for prescription by specialists only (expensive, toxic and new agents) Group C: For use in primary health care (similar to group A), with some omissions [11]. Our literature review revealed that there are no published studies that address self-medication with antibiotics by the community in the United Arab Emirates; hence our study is a first for this region. Methods A descriptive, cross-sectional study was conducted during the 16 th Abu Dhabi International Book Fair in April 2006.The aim of the study was to estimate the prevalence of self-medication with antibiotics in Abu Dhabi.Data were collected through a structured, validated, self-administered questionnaire. 
Selection of the participants was based on systematic random sampling; every 35th visitor was chosen and verbal consent was obtained after a briefing on the objectives of the study. The questionnaire collected demographic data such as age, gender, and level of education of the participants. Respondents were asked whether they had used antibiotics during the past year and were shown a portrait containing labels from the available types of antibiotics in health premises to help them remember which these were (Figure 1). Participants who confirmed antibiotic usage were asked why and how they obtained the antibiotics, whether they were storing any antibiotics at home, and if they intended to use them personally or for their children without a doctor's prescription. The collected data were pooled and analysed using SPSS version 11.0. Descriptive statistics were used, and the chi-square test was used when applicable.
Results
Out of 1,000 invited visitors, 860 agreed to participate in the study, a respondent rate of 86%. Males represented 66% of the participants, while females represented 34%. Participant age and educational levels are presented in Table 1. Among the 860 participants, 485 (56%) reported antibiotic use during the last year (68.5% male, 31.5% female). Antibiotic use was significantly affected by age (p < 0.001) and educational level (p = 0.023) but not by gender (p = 0.045).
The frequencies of antibiotics used are presented in Table 2. Amoxicillin was the antibiotic most commonly used by the participants and their children (46.3% and 70%, respectively), followed by amoxicillin-clavulanate (23.9%, 10.8%), while ciprofloxacin and norfloxacin were used only by adult participants.
Among the 485 participants who reported antibiotic use, 270 (56%) obtained their antibiotics with a prescription either from a physician or a dentist, while the remainder (215, 44%) acquired their antibiotics without prescription as self-medication (see Table 3). Statistical analysis showed that the method of obtaining antibiotics is significantly affected by the age of the participants (p = 0.014). Among the 360 parents who confirmed antibiotic use for their children, we found that 236 (66%) received antibiotics through a prescription while 124 (34%) did not (Table 3). The most common reasons for which antibiotics were used are displayed in Table 4. Of all participants, 393 (46%) stated that they intentionally use antibiotics as self-medication without a medical consultation, which was significantly affected by the educational level of the participant (p < 0.001: higher level of education was associated with an increased use of self-medication).
Two hundred forty-five (28%) of all participants declared that they were keeping antibiotics at home, mostly acquired from the community pharmacies without a prescription. A significant association between the behavior of keeping antibiotics at home and age (p = 0.002) was found, with males also significantly more likely to exhibit this behavior (151 males, p < 0.001).
Discussion
Despite the UAE's antimicrobial policy that restricts the dispensation of antibiotics without prescription, our study indicates the wide availability of these agents over the counter and reveals the high prevalence of self-medication with antibiotics.
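The associations reported above (antibiotic use versus age group, educational level, and gender) are the kind of result produced by a chi-square test of independence on a contingency table of counts. The following sketch shows that pattern in Python with SciPy; the counts are invented for demonstration and are not the study's data.

# Illustrative only: a chi-square test of independence of the kind described above
# (antibiotic use vs. educational level). The counts below are hypothetical.
from scipy.stats import chi2_contingency

# Rows: educational level (primary, secondary, university)
# Columns: used antibiotics in the last year (yes, no)
observed = [
    [40, 60],
    [180, 140],
    [265, 175],
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")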
Antibiotics were used by 56.3% of the study population.This rate is fairly high compared with results conducted in the Czech Republic (31.1%),Jordan (23.0%), and Lithuania (39.9%) [12,13,14].In Lithuania, women tended to use more antibiotics than men, while in our study the antibiotic usage was not associated with gender, but was significantly affected by age and educational level.Because we do not have sufficient information about the spread of infectious disease in UAE, we cannot determine whether the geriatric patients were more frequently sick or if they resorted to self-medication, so it would be advisable to monitor aspects such as resistance and sensitivity. The high prevalence of self-medication that was found within the adult participants (44%) and their children (34%) could be explained by a number of factors, including the nature of the UAE community, which comprises different nationalities.The majority are from India, the Philippines, Pakistan and different Arab countries, where the prevalence of selfmedication is also high: India 18%, Sudan 48% and Jordan 40% [15,5,13].This observation suggests that traditional, social and cultural factors influence selfmedication with antibiotics, despite these being prescription-only medicines in the UAE.Another contributing factor is the ease with which antibiotics can be acquired from the community pharmacies, which in turn is related to a lack of high disciplinary regulations.The antimicrobial policies in the UAE comprise no distinct clauses or articles that stipulate any sort of punitive procedures or punishments against violating pharmacists, who dispense antibiotics without medical prescriptions issued by licensed physicians. In concordance with reported results from studies in Sudan, Jordan and Greece [5,13,16], amoxicillin was the most commonly chosen antibiotic for selfmedication.Other antibiotics that were used included amoxicillin-clavulanate, macrolides, quinolones, and tetracycline. As stated by the participants of our study, influenza was the major reason for treatment with prescribed or self-medicated antibiotics.This finding, though consistent with results of other studies [13,14,17], also indicates the belief of the community that antibiotics can treat and eradicate any infections irrespective of their origin.It also revealed that the participants were unaware of the dangers and consequences of inappropriate use of antibiotics. Despite its highly harming photosensitivity, our results revealed the inappropriate and irrational use of tetracycline in treating gastrointestinal problems.This is a practice which is in discordance with published guidelines, which state the drug is to be used in combination with other medicines to manage food-borne illnesses caused by various bacteria (e.g.Brucella abortus, Vibrio cholera and Vibrio vulnificus) [18] or following initial H. pylori treatment failure (salvage therapy ) [19;20]. 
Intended self-medication and storage of antibiotics at home are both considered to be predictors of actual self-medication, as reported by Grigoryan and colleagues [21]. With regard to these predictors, mainly the storage of antibiotics at home, our results are comparable with those reported from Malta and the Czech Republic (28% UAE, 35% Malta, and 7.5% Czech Republic). We found that the main source of these antibiotics was local community pharmacies. Our study also revealed that bringing medicines from abroad is another common source, especially for those that are expensive (100% of home-stored azithromycin in our study was brought in from outside the UAE). Participants also indicated that another reason for both storing antibiotics at home and bringing them from outside the UAE was the person's eligibility for medical insurance benefits.
One limitation of our study is that the participants were book fair visitors, who are generally intellectual and educated. Therefore, the sample might not be representative of all society classes. Since the participants were self-reporting via the questionnaire, we cannot be certain that we received all the relevant information related to their complaints and medicines (in terms of receiving or buying). Such bias may impact upon our results, but is difficult to avoid in questionnaire-based studies. While this study was the only assessment of self-medication in the UAE, future studies would ideally follow participants over time to gain a deeper insight into self-medicating behaviors.
Conclusions
The results of this study confirm that antibiotic self-medication is a relatively frequent problem in the UAE and interventions at different levels are required in order to reduce the frequency of antibiotic misuse. Quick et al. [22] classified these interventions into four sections: managerial, regulatory, educational, and financial. For the UAE, managerial interventions could include updating the Antibiotics Policy and Guide to Antimicrobial Therapy (2nd Edition, 1998), establishing a National Antibiotic Therapeutic Advisory Committee, and establishing a set of National Standard Treatment Guidelines. Regulatory strategies should concentrate on limiting the import of drugs to the market [1]. The educational interventions for both prescribers (e.g., flow charts, newsletters, bulletins) and patients/consumers (e.g., educational campaigns on antibiotics, their uses and limitations) are very important and should be considered a priority. With regard to the financial interventions, the National Mandatory Health Insurance scheme should play an important role in diminishing the problem of self-medication in the UAE.
Table 1. Distribution of demographic characteristics within the study population. *100% of respondents: a) = 860; b) = 485.
Table 2. Frequencies of used antibiotics.
Table 3. Source of obtaining antibiotics.
Table 4. Frequencies of common reasons for which antibiotics were used.
Table 5. Storing antibiotics at home.
2017-06-04T17:20:13.852Z
2009-08-30T00:00:00.000
{ "year": 2009, "sha1": "33965982ad7681976e9b055deb570f2219ccbbec", "oa_license": "CCBY", "oa_url": "https://jidc.org/index.php/journal/article/download/19762966/265", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a8d7fc2b839e039ca4f9c7fa12ce8b2f09335745", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4417339
pes2o/s2orc
v3-fos-license
Circulating Cell-Free DNA and Circulating Tumor Cells as Prognostic and Predictive Biomarkers in Advanced Non-Small Cell Lung Cancer Patients Treated with First-Line Chemotherapy Cell-free DNA (cfDNA) and circulating tumor cells (CTCs) are promising prognostic and predictive biomarkers in non-small cell lung cancer (NSCLC). In this study, we examined the prognostic role of cfDNA and CTCs, in separate and joint analyses, in NSCLC patients receiving first line chemotherapy. Seventy-three patients with advanced NSCLC were enrolled in this study. CfDNA and CTC were analyzed at baseline and after two cycles of chemotherapy. Plasma cfDNA quantification was performed by quantitative PCR (qPCR) whereas CTCs were isolated by the ScreenCell Cyto (ScreenCell, Paris, France) device and enumerated according to malignant features. Patients with baseline cfDNA higher than the median value (96.3 hTERT copy number) had a significantly worse overall survival (OS) and double the risk of death (hazard ratio (HR): 2.14; 95% confidence limits (CL) = 1.24–3.68; p-value = 0.006). Conversely, an inverse relationship between CTC median baseline number (6 CTC/3 mL of blood) and OS was observed. In addition, we found that in patients reporting stable disease (SD), the baseline cfDNA and CTCs were able to discriminate patients at high risk of poor survival. cfDNA demonstrated a more reliable biomarker than CTCs in the overall population. In the subgroup of SD patients, both biomarkers identified patients at high risk of poor prognosis who might deserve additional/alternative therapeutic interventions. Introduction Non-small cell lung cancer (NSCLC) accounts for about 75-80% of all lung cancers [1,2] and despite improved diagnostic techniques, the great majority of NSCLC patients (70%) present advanced stage tumors at diagnosis and there is a 5-year survival rate of less than 5% [3,4]. The mainstay of care for patients affected by advanced NSCLC in absence of actionable driver mutations is first-line platinum-based chemotherapy; however, the prognosis of treated patients remains dismal, with a median survival of about one year [5,6]. Presently, no specific biomarker that helps clinicians determine prognosis and monitor patients' response during treatment with chemotherapy has been identified in NSCLC, but much interest has been focused on biomarkers identified by liquid biopsy. Liquid biopsy is a non-invasive blood test that detects tumor-derived nucleic acids (cell-free tumor DNA (cfDNA) and microRNAs) as well as circulating tumor cells (CTCs) shed by the tumor into the bloodstream [7,8]. Specifically, cfDNA is released into the circulation from primary or metastatic cancers through cell death mechanisms including apoptosis, necrosis, phagocytosis and lysis of tumor cells, thus representing an indicator of cellular turnover [9]. Conversely, CTCs are spread from the tumor into the peripheral blood, playing an important role in the development of metastasis [10]. Since blood samples can be obtained at different times during treatment, the clinical response as well as the emergence of drug resistance can be monitored in real time. Recently, the Food and Drug Administration (FDA) approved the CellSearch system (Janssen Diagnostics, Raritan, NJ, USA) for CTC detection and enumeration of the major malignancies such as breast, colon and prostate cancer [11][12][13]. 
This methodology involves antibodies directed against specific epithelial tumor markers; however, filtration-by-size systems might be more suitable to detect CTCs irrespective of cell-surface markers [14-16]. The potential value of liquid biopsy in lung cancer has recently been underlined, as tissue accessibility is often challenging. To date, a number of studies have highlighted the relevance of cfDNA and CTCs as NSCLC biomarkers [17,18]; however, to the best of our knowledge, their simultaneous assessment has not yet been reported. In this study, we sought to evaluate the role of cfDNA quantification and CTC enumeration, separately or jointly, in predicting response to treatment and survival in a cohort of advanced NSCLC patients receiving first-line platinum-based chemotherapy.
Study Population
Seventy-three patients affected by advanced stage IIIB-IV NSCLC who were candidates for palliative first-line platinum-based chemotherapy were considered eligible and enrolled in the study. The characteristics of the patients considered in the analyses are summarized in Table 1. The median age at first cycle (start of chemotherapy) was 67 years, the majority of patients were males (68.5%), the frequency of adenocarcinoma histology was 80.8%, and an Eastern Cooperative Oncology Group Performance Status (ECOG PS) of 1-2 accounted for 71.2% of cases. In addition, 92% of patients were current/former smokers. Among the enrolled patients, none harbored sensitizing mutations of epidermal growth factor receptor (EGFR); more specifically, three patients had mutations in exon 20 (resistant to EGFR tyrosine kinase inhibitors) and were therefore considered candidates for first-line chemotherapy [19]. In addition, no patients harbored anaplastic lymphoma kinase (ALK) rearrangements. Notably, all the patients but one had stage IV (metastatic) disease; the single patient with stage IIIB disease was included in the study due to the extension of the disease (T4N2), which excluded loco-regional treatments or combined modalities (such as chemo-radiation), leaving platinum-based chemotherapy as the only available therapeutic option. The patients affected by squamous cell carcinoma received a median of four cycles of chemotherapy with gemcitabine and a platinum derivative (eight patients received cisplatin and four received carboplatin), while the patients affected by adenocarcinoma received a median of four cycles of chemotherapy with pemetrexed and a platinum derivative (23 patients received cisplatin and 37 received carboplatin; one patient who was initially a candidate for cisplatin was switched to carboplatin after the first cycle due to a creatinine increase). Among patients treated with pemetrexed, 32 received at least one cycle of maintenance (range: 1-22; median: four cycles). Imaging with computed tomography (CT scan) as well as blood withdrawals for liquid biopsy (cfDNA and CTCs) were performed at baseline (before chemotherapy) and after two and four cycles of chemotherapy. For one patient, the evaluation of cfDNA at baseline was not feasible due to plasma hemolysis. Blood samples were no longer collected in patients with unacceptable toxicity and deterioration of clinical conditions.
For the above-mentioned reasons, the first assessment of cfDNA and CTCs after two cycles of chemotherapy (first evaluation) was feasible in 47 patients (two plasma samples were hemolyzed and not suitable for analysis) and 49 patients, respectively, whereas after four cycles of chemotherapy (second evaluation) only 21 (five plasma samples were not suitable) and 26 patients were evaluable for cfDNA and CTC analyses, respectively. Due to the small number of patients completing four cycles of chemotherapy, only the cfDNA and CTC values at the first evaluation by Response Evaluation Criteria in Solid Tumors (RECIST) were taken into account for the analyses. The median follow-up time for progression-free survival (PFS) of the 70 evaluable patients (not evaluable in three) was 4.7 months (range: 0.9-26.1), while the median follow-up time for overall survival (OS) of the whole population of 73 patients was 8.0 months (range: 1.0-49.9). The best overall response (BOR) according to RECIST criteria was considered for comparative analyses with cfDNA and CTCs, with 16 patients (21.9%) showing partial response (PR), 39 patients (53.4%) stable disease (SD), 16 patients (21.9%) progressive disease (PD), and two not evaluable (2.8%).

Role of Clinical Parameters and Circulating Biomarkers in Prognosis

The prognostic role of clinical parameters (age, gender, histology, and ECOG PS) and circulating biomarkers (cfDNA and CTCs) in advanced NSCLC patients treated with first-line chemotherapy was evaluated by univariate Kaplan-Meier survival analyses. The median baseline plasma cfDNA value of 96.3 human Telomerase Reverse Transcriptase (hTERT) copy number (range: 16.7-1968.2) and the median baseline number of 6 CTCs/3 mL of blood (range: 0-50) were used as cut-off values to categorize patients into prognostic subgroups with potentially different outcomes (Figure 1). A minimal inverse correlation between cfDNA and CTCs at baseline was identified (Pearson correlation after log-transformation = −0.21, p-value = 0.08). Conversely, no significant associations were found between the cfDNA and CTC values and clinical features known to be linked to poor prognosis, such as histology (adenocarcinoma vs. squamous cell carcinoma: cfDNA p-value = 0.706; CTCs p-value = 0.905) and metastatic status (thoracic district vs. extra-pulmonary: cfDNA p-value = 0.991; CTCs p-value = 0.406; evidence vs. no evidence of brain metastases: CTCs p-value = 0.697). The only significant correlation was observed between the baseline cfDNA values and brain tumor metastases (p-value = 0.041).
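The dichotomization at the median value and the Kaplan-Meier comparison used above (and detailed later in the Statistical Methods) can be illustrated with the Python lifelines package. The data frame below is a hypothetical placeholder rather than study data, and the column names are our own assumptions.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical data: one row per patient with OS time (months), death indicator,
# and baseline cfDNA (hTERT copy number). Replace with the real study data.
df = pd.DataFrame({
    "os_months": [8.0, 12.5, 3.2, 20.1, 6.4, 15.0, 9.9, 4.5],
    "death":     [1,   1,    1,   0,    1,   0,    1,   1],
    "cfdna":     [40.2, 88.0, 310.5, 55.1, 140.3, 96.3, 210.0, 70.7],
})

# Dichotomize at the median, as done in the study (96.3 hTERT copies for cfDNA).
high = df["cfdna"] > df["cfdna"].median()

km_low, km_high = KaplanMeierFitter(), KaplanMeierFitter()
km_low.fit(df.loc[~high, "os_months"], df.loc[~high, "death"], label="cfDNA <= median")
km_high.fit(df.loc[high, "os_months"], df.loc[high, "death"], label="cfDNA > median")

# Log-rank test for the difference in survival between the two subgroups.
result = logrank_test(df.loc[~high, "os_months"], df.loc[high, "os_months"],
                      event_observed_A=df.loc[~high, "death"],
                      event_observed_B=df.loc[high, "death"])
print(km_low.median_survival_time_, km_high.median_survival_time_, result.p_value)
```

The same pattern applies to the CTC count, using the median of 6 CTCs/3 mL as the cut-off.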
As shown in Table 2, age at first cycle of chemotherapy was the only significant indicator of prognosis among the clinical factors, with a survival probability at 18 months of 32.0% for patients younger than 67 compared with 14.0% for those older than 67 (p-value = 0.050). Regarding circulating biomarkers, patients with a baseline cfDNA hTERT copy number ≤ 96.3 had a survival probability at 18 months of 38.0% vs. 9.0% for patients with higher cfDNA values (p-value = 0.019). No significant difference between patients with CTCs ≤ 6 and those with CTCs > 6 was observed, although patients with higher CTCs had a slightly longer median follow-up than patients with CTCs below the median value (10.3 vs. 7.2 months, respectively) and a survival probability at 18 months of 29.0%, compared with 18.0% for patients with lower CTCs (p-value = 0.402), showing an inverse relationship with respect to cfDNA. In addition, no remarkable difference in PFS was found for either cfDNA or CTCs (data not shown).

Subsequently, the relationship between circulating biomarkers and prognosis, in terms of PFS and OS, was evaluated by the Cox multiple regression model (Table 3). In this context, cfDNA and CTCs were first fitted to survival data separately (i.e., the two biomarkers were fitted to survival data through two distinct equations) and then jointly (i.e., both biomarkers were included in the same regression equation) in order to assess the degree of reciprocal influence. Separate Cox regression analysis confirmed the prognostic role of cfDNA (Figure 2). Indeed, the risk of death in patients with cfDNA > 96.3 was significantly higher than in patients with cfDNA ≤ 96.3 (hazard ratio (HR): 2.14; 95% confidence limits (CL) = 1.24-3.68; p-value = 0.006) (Figure 2A), and a similar result was obtained when relapse was used as an endpoint (HR: 1.70; 95% CL = 1.02-2.83; p-value = 0.040) (Figure 2B).
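The separate and joint Cox modeling strategy described above can be sketched with the Python lifelines package as follows. The data frame, covariates, and column names are hypothetical assumptions used only to show the structure of the two analyses (one model per biomarker vs. one model containing both), not a reproduction of the study's SAS/SPSS workflow.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical patient-level data: OS in months, death indicator, and the two
# biomarkers dichotomized at their medians (1 = above median, 0 = at/below).
df = pd.DataFrame({
    "os_months":  [8.0, 12.5, 3.2, 20.1, 6.4, 15.0, 9.9, 4.5, 18.3, 7.7],
    "death":      [1,   1,    1,   0,    1,   0,    1,   1,   0,    1],
    "cfdna_high": [0,   0,    1,   0,    1,   0,    1,   1,   0,    1],
    "ctc_high":   [1,   1,    0,   1,    0,   1,    0,   0,   1,    0],
})

# Separate models: each biomarker fitted to survival data on its own.
cph_cfdna = CoxPHFitter().fit(df[["os_months", "death", "cfdna_high"]],
                              duration_col="os_months", event_col="death")
cph_ctc = CoxPHFitter().fit(df[["os_months", "death", "ctc_high"]],
                            duration_col="os_months", event_col="death")

# Joint model: both biomarkers included in the same regression equation.
cph_joint = CoxPHFitter().fit(df, duration_col="os_months", event_col="death")

# Hazard ratios (exp(coef)) for each fit.
print(cph_cfdna.hazard_ratios_)
print(cph_ctc.hazard_ratios_)
print(cph_joint.hazard_ratios_)
```

Comparing the hazard ratios from the separate and joint fits is what allows the degree of reciprocal influence between the two biomarkers to be assessed.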
Conversely, no significant association with OS or PFS was found for CTC enumeration, although a worse cumulative death rate was observed in patients with CTCs ≤ 6 (Figure 2C,D). Results obtained through the joint regression analysis showed a moderate reciprocal influence of the two biomarkers in predicting PFS and OS, providing equivalent death and relapse rates (higher for patients with cfDNA > 96.3 and lower for patients with CTCs ≤ 6) and homogeneous statistical results (Table 3). Moreover, these findings suggest a substantially independent prognostic role of cfDNA and CTCs in these NSCLC patients treated with chemotherapy. Finally, the Cox regression was applied to the 39 evaluable patients experiencing SD at BOR. In this subgroup, substantial changes were observed when separate and joint biomarker models were fitted to OS data (Table 4). In the separate analysis, cfDNA confirmed its prognostic effect (HR: 2.32; 95% CL = 1.01-2.53; p-value = 0.047). Similarly, although with borderline significance, a higher CTC number (>6) was able to identify patients with a better OS (HR: 0.47; 95% CL = 0.22-1.03; p-value = 0.058). These findings suggest that, in the subset of SD patients, higher cfDNA content and a lower CTC number may differentiate subjects at a higher risk of early death. In the joint modeling, both biomarkers lost their significant prognostic impact, but a higher risk of poor prognosis in patients with cfDNA above vs. below the median value (HR: 1.87 vs. HR: 1.00, respectively) and CTCs below vs. above the median value (HR: 1.00 vs. HR: 0.59, respectively) was retained (Table 4).

Circulating Biomarkers and Treatment

The variations in cfDNA plasma content and CTC enumeration during chemotherapy were analyzed in relation to the patients' BOR estimated by RECIST. There were 47 and 49 patients with cfDNA and CTC determinations, respectively, who were evaluable for the analysis. Compared with baseline measurements, a non-significant decreasing trend in the geometric mean (GM) values of cfDNA was found after two cycles of chemotherapy, with the decrease becoming smaller as clinical conditions worsened. In fact, cfDNA GMs equal to 46.3 (95% CL = 16.6-129.3), 69.2 (95% CL = 26.7-179.5) and 82.4 (95% CL = 27.1-250.8) hTERT copy numbers were observed in patients experiencing PR, SD and PD, respectively. Similarly, lower CTC GM levels were found among patients with PR (2.78; 95% CL = 1.19-6.52) and SD (2.73; 95% CL = 1.16-6.45) when compared with patients with PD (4.08; 95% CL = 1.33-12.50). These findings suggest that, although chemotherapy reduced the overall cfDNA and CTC burdens, the higher values of circulating biomarkers in PD patients might indicate reduced responsiveness to treatment.

Discussion

Non-small cell lung cancer is still diagnosed at an advanced stage, when the OS rate is very poor; therefore, the identification of novel prognostic indicators and predictive factors remains a top priority. The application of liquid biopsy, as a non-invasive test, represents a promising tool to identify prognostic/predictive biomarkers such as cfDNA, miRNAs and CTCs [7]. Although circulating cell-free miRNAs as well as encapsulated exosomal miRNAs are emerging biomarkers [8,20], they are still far from becoming clinically useful markers [21]. Conversely, numerous efforts have been made in the isolation and quantification of cfDNA and CTCs in order to improve cancer diagnosis and prognosis, as well as to predict treatment efficacy [22].
A number of studies have been carried out to establish the most reliable cfDNA and CTC determinations in advanced NSCLC patients treated with chemotherapy, although studies evaluating their simultaneous role are presently lacking [17,18]. We evaluated the significance of these biomarkers both in separate and joint analyses in a cohort of advanced NSCLC patients receiving first line chemotherapy. Despite that the study population might be seen as heterogeneous on the basis of clinical variables such as histology, performance status, or stage (IIIB-IV), it should be considered that all patients had non-oncogene-driven advanced NSCLC, they were treatment-naïve and all were candidates for the same therapeutic approach (platinum-based chemotherapy). More specifically, only one patient had stage IIIB disease, which was too extended for combinations including chemotherapy and radiation therapy; furthermore, as reported in Table 2, gender, histology, and ECOG PS were not associated with statistically significant differences in terms of OS. Baseline cfDNA content below the median value (96.3 hTERT copy number) demonstrated a significant prognostic indicator of OS in our cohort of treated NSCLC patients in simple analysis. This effect was further confirmed by Cox multiple regression models for both OS and PFS. Our results are in line with previous studies examining the prognostic role of cfDNA in advanced NSCLC patients undergoing chemotherapy. In particular, in the meta-analyses by Ai et al. [17], six studies evaluated the impact of cfDNA concentration on the OS of advanced NSCLC patients (stage III-IV) treated with chemotherapy. All the studies [23][24][25][26][27][28], except one [29], demonstrated that higher levels of cfDNA were significantly associated with poorer OS. In addition, we found that among the fraction of patients reporting SD at BOR, the baseline cfDNA retained its prognostic role by discriminating patients at a higher risk of poor survival. It should be underlined that the exclusive assessment of treatment response by RECIST criteria often provides ambiguous clinical information in defining SD patients [30]. In order to obtain more reliable information, some studies have attempted to integrate cfDNA evaluation with RECIST but, at present, results have been inconclusive [25,29]. In agreement with Lee et al. [25], our findings suggest that cfDNA might help clinicians select patients at high risk of early relapse, likely allowing them to benefit from alternative therapeutic interventions. Concomitantly, we also investigated the prognostic role of CTC number in this cohort of patients. Unexpectedly, we observed an inverse relationship between baseline CTCs and OS. Our results contrast with the majority of reported data in NSCLC which shows that patients treated with chemotherapy had worse OS and PFS with a higher CTC number [18,[31][32][33][34][35][36]. Only one study by Juan et al. [37] reported similar findings in a group of advanced NSCLC treated with docetaxel and gemcitabine. In particular, the authors found a longer median PFS and OS in patients with an increased CTC number (CTCs ≥ 2/7.5 mL) [37]. The authors concluded that a possible explanation could be due to the particular cohort of patients enrolled (high number of patient with PS = 2) and the technique used for the CTC enumeration (CellSearch system) leading to CTC underestimation. 
To date, the most widely used systems to enumerate CTCs are based on epithelial cell adhesion molecule (EpCAM)-based immunomagnetic techniques, such as CellSearch which is also the only FDA approved methodology. However, only about 35% of metastatic patients have EpCAM-positive CTCs [38] and evidence supports the fact that CTCs are a heterogeneous population consisting of cells with different phenotypes [39,40]. In the present study, we used a physical assay that isolates non-hematologic circulating cells by size, irrespective of cell surface markers. Moreover, filtration-based devices have a higher sensitivity to enrich the CTC subpopulations [16,31] and they also allow the detection of circulating tumor microemboli, defined as clusters of CTCs ≥ 3 [31,33,41], undetectable by CellSearch. Indeed, we found higher number of median CTCs at baseline (6/3 mL of blood) compared to previous studies using EpCAM-based techniques [31][32][33][34]36], reporting median CTC number ranging from one to six in a double volume of blood (7.5 mL). On the basis of the above considerations and present findings, we can hypothesize that a higher baseline number of heterogeneous CTC populations might exhibit different responsiveness to chemotherapy and that a higher fraction of more sensitive CTCs might have been present in the blood of patients with a better outcome (CTCs > 6/3 mL). Indeed, similar to previous studies, we observed a reduction of the CTC number following chemotherapy in PR and SD, compared to PD patients. This reduction, suggestive of treatment efficacy, might indicate that patients with better survival show a prevalence of chemo-sensitive CTC subpopulation. Likewise, the cfDNA levels progressively decrease during treatment with chemotherapy in PD, SD and PR patients, although this is not statistically significant. To date, few studies evaluated cfDNA variation during chemotherapy and reported discordant results [23,25,42,43]. Our findings were similar to those reported by Kumat et al. [42] in which cfDNA levels decreased among PD, SD and PR patients and also by Gautshi et al. [43] who observed a cfDNA increase in PD patients. Hence, we can speculate that both CTCs and plasma cfDNA may play a role as predictive biomarkers of chemotherapy efficacy. This hypothesis holds particularly true in the subsets of patients reporting SD at BOR by assuming that a higher risk of early death may be expected in patients with high cfDNA values and a low CTC number. This behavior further supports the assumption that cfDNA and CTCs have an independent prognostic role which is in line with their different mechanisms of release from the tumor into circulation; cfDNA is mainly released by a passive mechanism (tumor cell death) whereas CTCs are spread through an active process (tumor aggressiveness). Patients' Enrollment Seventy-three patients with advanced NSCLC and eligible for first-line chemotherapy were enrolled into a prospective study at the Lung Cancer Unit, IRCCS AOU San Martino-IST, Genova, Italy [44]. The inclusion criteria were pathologically confirmed NSCLC stage IIIB-IV, no previous systemic treatment, and a performance status (ECOG PS) of 0-2. All the patients underwent a CT-scan of the chest and abdomen prior to any treatment, concomitantly with baseline blood drawing; patients with known or suspected brain metastases underwent a baseline brain scan as well. 
Patients were treated with first-line platinum-based chemotherapy with pemetrexed for adenocarcinoma or gemcitabine for squamous histology; if patients were deemed unfit for cisplatin (advanced age, comorbidities, or impaired renal function), carboplatin was employed in its place [45]. Patients receiving pemetrexed in combination with a platinum derivative were eligible for maintenance with single-agent pemetrexed after four cycles, provided that disease control or response was achieved and the treatment was well tolerated; during maintenance, CT scans were performed with the same timing as during combination treatment. CT scans were performed at baseline and after every two cycles of chemotherapy for response assessment, concomitantly with peripheral blood drawing for cfDNA and CTC analysis. Radiological response was assessed using RECIST v.1.1 to measure variations in tumor size. The present study was approved by the Local Ethics Committee (ID#TrPo11.003; IRCCS AOU San Martino-IST) and informed written consent was obtained from each patient.

Circulating Free DNA Isolation and Quantification

Five mL of peripheral blood were collected in an ethylenediamine tetraacetic acid (EDTA)-containing tube and plasma was isolated by two steps of centrifugation at 1600 rpm for 15 min. cfDNA was extracted from 400 µL of the resulting plasma using the QIAamp DNA Blood Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's protocol. The quantification of cfDNA was performed by a quantitative PCR (qPCR) method, using the hTERT single-copy gene assay (Thermo Fisher Scientific, Waltham, MA, USA). The qPCR reaction was carried out in a final volume of 10 µL consisting of 5 µL of TaqMan Universal Mastermix (Thermo Fisher Scientific), 1 µL of assay and 4 µL of cfDNA, run on the RealPlex2 system (Eppendorf, Hamburg, Germany). Each plate included positive and negative controls. The calibration curve was calculated based on a dilution series of a standard DNA (Promega, Madison, WI, USA): 1, 10, 100, 1000, 10,000 and 100,000 copies (3.3 pg of DNA = 1 gene copy). Each sample was run in duplicate, and the final concentration, expressed as copy number, was calculated by interpolation of the mean of the cycle threshold (CT) values with the calibration curve.

Circulating Tumor Cell Isolation

Circulating cells were isolated from the whole peripheral blood of NSCLC patients by the filtration-based device ScreenCell Cyto (ScreenCell, Paris, France) according to the manufacturer's protocol. Briefly, 3 mL of blood were mixed in an appropriate buffer to lyse erythrocytes and fix leukocytes and non-hematologic circulating cells. Circulating cells were separated through a microporous membrane filter allowing only cells larger than the pores (7.5 ± 0.36 µm) to be retained on the membrane. The filter was then released on a slide, stained with haematoxylin-eosin (H&E) and observed under a light microscope. The isolated non-hematologic circulating cells with malignant features were defined as CTCs and morphologically identified and enumerated according to the following criteria: nuclear size greater than or equal to 20 µm, high nuclear/cytoplasmic ratio (≥0.75), dense hyperchromatic nucleus, and irregular nuclear membrane, as already reported by Freidin and colleagues (Figure 3) [46].
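To make the cfDNA quantification step described above concrete, the following is a minimal Python sketch of how copy numbers can be interpolated from Ct values using a standard-curve dilution series such as the one described (1 to 100,000 copies). The example Ct values and variable names are illustrative assumptions, not data from the study.

```python
import numpy as np

# Standard curve: known copy numbers of the dilution series and their measured Ct values.
# The Ct values below are illustrative placeholders, not values from the study.
standard_copies = np.array([1, 10, 100, 1_000, 10_000, 100_000], dtype=float)
standard_ct = np.array([38.1, 34.8, 31.4, 28.0, 24.6, 21.2])

# qPCR is approximately linear in Ct vs. log10(copy number), so fit a straight line.
slope, intercept = np.polyfit(np.log10(standard_copies), standard_ct, deg=1)

def ct_to_copies(ct_values):
    """Convert replicate Ct values to a copy number via the calibration curve."""
    mean_ct = np.mean(ct_values)                 # samples were run in duplicate
    log10_copies = (mean_ct - intercept) / slope
    return 10 ** log10_copies

# Example: a plasma sample measured in duplicate.
print(round(ct_to_copies([30.9, 31.1]), 1))      # interpolated hTERT copy number
```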
Statistical Methods

Continuous variables (age at first cycle and the time interval between diagnosis and first chemotherapy administration) were described using the median and inter-quartile range, while categorical factors (gender, histotype, metastatic status, and ECOG PS) were reported in terms of absolute and relative frequencies. The prognostic roles of cfDNA and CTCs with respect to PFS and OS were explored using the Kaplan-Meier methodology, and differences in survival probabilities were statistically assessed through the log-rank test after dichotomizing cfDNA and CTCs at their median values; given the relatively small sample size, this choice guaranteed comparable subgroup sizes and similar precision of the parameter estimates. The prognostic effect of cfDNA and CTCs adjusted for potential imbalances in baseline patient characteristics (age at start of chemotherapy, gender, histology, metastatic status, and ECOG PS) was estimated through Cox regression modeling. The HR, with its corresponding 95% CL, was used as a measure of the relative risk of relapsing or dying during the follow-up period. Statistical significance of the HR was assessed using the likelihood ratio test [47]. Finally, the relationship between BOR and the cfDNA and CTC levels at the end of the second cycle was estimated using multiple linear regression after log-transformation of the biomarker measurements. Regression results were adjusted for potential imbalances in baseline individual characteristics (age at first cycle, gender, histology, metastatic status, and ECOG PS), which included log-transformed cfDNA and CTCs at first cycle. For all statistical comparisons, a p-value ≤ 0.05 was considered statistically significant.

Conclusions

Improving lung cancer prognosis remains a top priority worldwide, and efforts to identify non-invasive biomarkers that improve on current imaging tools and predict treatment efficacy need to be addressed. To the best of our knowledge, this is the first study assessing the prognostic role of cfDNA and CTCs, in separate and joint analyses, in a cohort of advanced NSCLC patients treated with first-line chemotherapy regimens. cfDNA proved to be a more reliable marker than CTCs in the overall population. Moreover, the results observed in the subgroup of SD subjects, in which both biomarkers identified patients at a higher risk of shorter survival, underline the relevance of combining imaging techniques with cfDNA and CTC estimations; this may help clinicians identify patients deserving additional/alternative therapeutic interventions.
Standardization of the procedures remains challenging because of the different methods used for cfDNA extraction and quantification and for CTC detection, although we assume that, concerning CTCs, a filtration-based technique might represent a better isolation tool for enriching subpopulations with diverse phenotypes. Large and methodologically uniform studies are required to confirm these data and to better elucidate the significance of circulating biomarkers in treated NSCLC patients.
Resveratrol Reduces Glucolipid Metabolic Dysfunction and Learning and Memory Impairment in a NAFLD Rat Model: Involvement in Regulating the Imbalance of Nesfatin-1 Abundance and Copine 6 Expression

Resveratrol (RES) is a polyphenolic compound, and our previous results have demonstrated its neuroprotective effect in a series of animal models. The aim of this study was to investigate its potential effect on a nonalcoholic fatty liver disease (NAFLD) rat model. The parameters of liver function and glucose and lipid metabolism were measured. Behavioral performance was observed via the open field test (OFT), the sucrose preference test (SPT), the elevated plus maze (EPM), the forced swimming test (FST), and the Morris water maze (MWM). The protein expression levels of Copine 6, p-catenin, catenin, p-glycogen synthase kinase-3beta (GSK3β), GSK3β, and cyclin D1 in the hippocampus and prefrontal cortex (PFC) were detected using Western blotting. The results showed that RES could reverse nesfatin-1-related impairment of liver function and glucolipid metabolism, as indicated by the decreased plasma concentrations of alanine aminotransferase (ALT), aspartate aminotransferase (AST), total bilirubin (TBIL), direct bilirubin (DBIL), indirect bilirubin (IBIL), total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), glucose, insulin, and nesfatin-1; increase the plasma level of high-density lipoprotein cholesterol (HDL-C); and reduce hepatocyte steatosis in NAFLD rats. Although there was no significant difference among groups with regard to performance in the OFT, EPM, and FST tasks, RES-treated NAFLD rats showed an increased sucrose preference index in the SPT and improved learning and memory ability in the MWM task. Furthermore, the imbalanced protein expression levels of Copine 6, p-catenin, and p-GSK3β in the hippocampus and PFC of NAFLD rats were also restored to normal by treatment with RES. These results suggested that four consecutive weeks of RES treatment not only ameliorated glucolipid metabolic impairment and liver dysfunction in the NAFLD rat model but also mitigated the attendant behavioral and cognitive impairments. In addition to the mediating role of nesfatin-1, the mechanism underlying the therapeutic effect of RES on NAFLD might be associated with its ability to regulate the imbalanced expression level of Copine 6 and the Wnt signaling pathway in the hippocampus and PFC.
INTRODUCTION

Nonalcoholic fatty liver disease (NAFLD) is a clinicopathological syndrome characterized by hepatic steatosis without significant alcohol use or other known liver disease. With an estimated prevalence of 25% in the general population (1,2), NAFLD is considered the most common cause of chronic liver disease (3,4). Moreover, the clinical burden of NAFLD is not restricted to liver-related morbidity or mortality (5), fueling concerns about the extrahepatic diseases accompanying or induced by NAFLD. NAFLD is strongly associated with obesity, type 2 diabetes mellitus (T2DM), dyslipidemia, and hypertension and is now regarded as a liver manifestation of metabolic syndrome (6), and increasing evidence has demonstrated a close association between NAFLD and neuropsychiatric diseases including depression and cognitive dysfunction (7). Consistently, in our previous study (8), rats with NAFLD induced by a high-fat diet showed not only metabolic dysfunction, including obesity, liver dysfunction, dyslipidemia, and glucose metabolic dysfunction, but also significant impairment of learning and memory. Many measures have been directed toward improving the metabolic status of the liver as well as cell stress, inflammation, and fibrosis (4). However, there are currently no approved therapies for NAFLD, and standard-of-care lifestyle advice is rarely effective (1,5). Thus, it is imperative to investigate the pathogenesis of NAFLD and explore potential therapeutic targets. Although the mechanisms underlying the pathogenesis and progression of NAFLD are still incompletely understood, insulin resistance and dyslipidemia related to metabolic syndrome are believed to be the main pathogenic triggers that precipitate the development of NAFLD. It has been reported that insulin resistance is the link between NAFLD and T2DM (6), and NAFLD has been considered the hepatic manifestation of insulin resistance (9). Moreover, the homeostasis model assessment parameter of insulin resistance is considered an independent predictive factor for advanced fibrosis in nondiabetic patients with NAFLD, and modulation of insulin resistance is a potential strategy for NAFLD treatment (1).
Increased hepatic expression of dipeptidyl peptidase 4 (DPP4) and high serum DPP4 activity have been demonstrated to be associated with NAFLD (10,11), and NAFLD has been reported to be an independent predictor of the effect of sitagliptin (STG), an oral DPP4 inhibitor, in patients with T2DM (12). Liver-specific DPP4 transgenic mice presented not only elevated systemic DPP4 activity but also a NAFLDassociated syndrome including obesity, hypercholesterolemia, hepatic steatosis, and liver damage (11). Furthermore, these dysfunctions were accompanied by increased expression of peroxisome proliferator-activated receptor gamma (PPARγ) and severe insulin resistance in the liver (11). The efficacy of STG has been demonstrated in NAFLD patients with T2DM, with significant decreases in plasma glucose and serum hemoglobin A1c (HbA1c), aspartate aminotransferase (AST), and alanine aminotransferase (ALT) levels (13). Similarly, as PPARγ ligands, thiazolidinedione antidiabetic agents have been extensively evaluated in the treatment of NAFLD (14,15). More importantly, in line with the results from animal studies showing that STG has neuroprotective activity including antinociceptive, antidepressant, and cognitive improvement (16), it has also been demonstrated in human research that in addition to the effects of glycemic control, STG therapy may result in the improvement of cognitive function in elderly diabetic patients with and without Alzheimer's disease (AD) (17). Resveratrol (trans-3,5,4 ′ -trihydroxy-trans-stilbene; RES) is a polyphenol component with diverse beneficial biological and pharmacological activity. It has been reported that RES is capable of ameliorating insulin resistance and improving insulin sensitivity (18). Focusing on its neuroprotective effect, the results of our previous studies have demonstrated that RES can bind to and interfere with the abnormal aggregation of amyloid beta (19), alleviate the impairment of learning and memory (20), and exert an antidepressant-like effect on rat models of chronic unpredictable mild stress (21), and subclinical hypothyroidism (SCH) (22). Based on these findings, it is rational to hypothesize that RES might be a good strategy for therapeutic intervention in NAFLD, targeting not only the metabolic dysfunction but also the behavioral and cognitive impairments. Thus, the main aim of the present study was to testify the effect of RES on NAFLD and explore the possible mechanism. The novel satiety factor nesfatin-1 and its precursor nucleobindin-2 (NUCB2) were first reported to regulate appetite and food intake (23). Subsequently, increasing evidence has demonstrated the role of nesfatin-1 in regulating not only glucose and energy metabolism (24) but also mood and cognitive function (25,26). Copines are a conserved cytosolic protein family characterized by two C2 domains with the ability to bind phospholipids in a calcium-dependent manner (27). Copine 6 is a member of the copine family. It has been reported that Copine 6 is expressed in the postnatal brain, with peak expression in the hippocampus, and is necessary for brain-derived neurotrophic factor (BDNF) to increase the abundance of mushroom spines on hippocampal neurons (27,28). Copine 6 has been demonstrated to link activity-triggered calcium signals to spine structural plasticity necessary for learning and memory (27) and to regulate BDNFdependent changes in dendritic spine morphology to promote synaptic plasticity (28). 
The results of our previous study also showed that a BDNF-related imbalance in the expression of Copine 6 and synaptic plasticity markers in both the hippocampus and the prefrontal cortex (PFC) was coupled with depression-like behavior and immune activation in a stressed rat model (29). The Wnt/β-catenin signaling pathway regulates many crucial pathophysiological processes, including not only hepatic homeostasis and liver function (30) but also cognition and mood regulation (31). Dysregulation of glycogen synthase kinase-3beta (GSK3β) activity has been reported in insulin resistance, T2DM, and neurodegenerative diseases (32). Consistently, our previous research has also shown that metabolic disorders and impaired learning and memory in NAFLD rats involve an imbalance of nesfatin-1 abundance and Copine 6 expression, as well as an imbalance of the Wnt/β-catenin signaling pathway (8). However, it is unknown whether these changes can be reversed by RES. To investigate the potential effect of RES on NAFLD and explore the possible mechanism, a NAFLD rat model was established using a high-fat diet in the present study, and the protein expression levels of p-GSK3β/GSK3β, p-catenin/catenin, cyclin D, and Copine 6 in the hippocampus and PFC were measured using Western blotting. Blood glucose and lipid concentrations and liver function were measured, and STG and rosiglitazone (RSG) were used as positive control agents. Behavioral performance was examined using the open field test (OFT), the sucrose preference test (SPT), the elevated plus maze (EPM) test, the forced swimming test (FST), and the Morris water maze (MWM); the plasma concentrations of nesfatin-1, leptin, and insulin were also measured. Fluoxetine (FLX) and donepezil (DNP), which are typical drugs used clinically against depression and cognitive impairment, respectively, were used as the positive control agents in the behavioral tests in the present study.

Drugs

RES was purchased from Sigma Chemical Co. Sitagliptin phosphate (Januvia) was produced by Merck Sharp Dohme Ltd. Rosiglitazone was produced by Chengdu Hengrui Pharmaceutical Co., Ltd. Fluoxetine hydrochloride (Prozac) was provided by Eli Lilly Pharmaceuticals. Donepezil hydrochloride (Haosen) was produced by Jiangsu Haosen Pharmaceutical Co., Ltd. All the drugs were dissolved in an aqueous solution of 0.5% sodium carboxymethyl cellulose to form a mixed suspension.

Animals and Groups

Male Sprague-Dawley (SD) rats aged 2 months were randomly divided into eight groups, namely, the control (CON) group, CON + RES (15 mg/kg) group, NAFLD group, NAFLD + RES (15 mg/kg) group, NAFLD + STG (10 mg/kg) group, NAFLD + RSG (5 mg/kg) group, NAFLD + FLX (2 mg/kg) group, and NAFLD + DNP (1 mg/kg) group, with five rats in the CON + RES group and eight rats in each of the other groups. The rats were housed with four to five animals per cage (43 cm long × 31 cm wide × 19 cm high) and maintained under a 12:12-h light/dark cycle (lights on at 0700 h) at an ambient temperature of 21-22 °C and 50-60% relative humidity. The diets fed to the rats were as described in our previous study (8). Briefly, the rats in the CON and CON + RES groups were administered a standard diet (3,601 kcal/kg; 10% fat, 75.9% carbohydrates, and 14.1% protein as percent of kcal; Trophic Animal Feed High-Tech Co., Ltd., China).
Rats in the other groups were given a high-fat diet (5,000 kcal/kg; 60% fat, 25.9% carbohydrate, and 14.1% protein as percent of kcal) supplied by the same company. Starting 4 weeks later, RES and the other drugs were administered intragastrically for 4 weeks. The CON and untreated NAFLD model rats received daily intragastric injections of 0.5% sodium carboxymethyl cellulose. The outline of the experimental design is shown in Figure 1. All experimental procedures in the present study were approved by the Animal Care and Use Committee of Anhui Medical University in compliance with the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals (NIH publication no. 85-23, revised 1985).

Behavioral Tests

All behavioral tests were performed according to our previous studies (8,22). They were conducted during the light phase of the light/dark cycle in a separate room similar to the housing room, and the timing of the tests was matched between groups. The animals were allowed to adapt to the testing environment for 20 min before each test, and the observers were blinded to the treatment. The performance of the rats was monitored and recorded using a digital camera interfaced with a computer running ANY-maze video imaging software (Stoelting Co., Wood Dale, USA). The behavioral tests began with the OFT, followed by the SPT, EPM, FST, and MWM in that order.

Open Field Test

The apparatus consisted of a black 100-cm × 100-cm square arena with a 30-cm-high black wall. The floor was marked with a grid dividing it into 16 equal-sized squares. During the 5-min observation period, the rats were placed in one corner of the apparatus, facing the wall. The total distance, the distance moved in the center, and the frequencies of rearing and grooming were recorded.

Sucrose Preference Test

After a 12-h period of food and water deprivation, all rats were provided free access to two bottles containing plain water or 2% sucrose solution. After 6 h, the volumes of water and sucrose solution consumed by the rats were measured. The sucrose preference index (SPI), which is the percentage of sucrose solution out of the total volume of liquid ingested, was used as a measure of anhedonia (a calculation sketch is given below, after the figure legends).

Elevated Plus Maze Test

The maze (made of Plexiglas) consisted of a plus-shaped apparatus, with two opposite closed arms (45 cm × 11 cm) enclosed by walls (22 cm in height) and two opposite open arms (45 cm × 11 cm) without walls. The apparatus also had a central arena (11 cm × 11 cm) and was elevated 80 cm above the floor. Each rat was placed in the central arena of the maze facing an open arm and allowed to explore the maze for 5 min. The distances moved in the open arms and the closed arms were analyzed.

FIGURE 2 | Effect of RES on the bodyweight, liver weight, and liver index of NAFLD rats. The data are presented as the mean ± SEM, with five rats in the CON + RES group and eight rats in each of the other groups. The bodyweight is shown in panel (A), and results of the repeated-measures ANOVA showed a significant effect of time but not treatment on the bodyweight, with a significant interaction between treatment and week. The NAFLD model rats showed greater liver weight (B) and a higher liver index (C) than the control or CON + RES rats. Treatment with RES reversed the increase in liver weight and liver index but had no significant effect on bodyweight.

FIGURE 3 | Effect of RES on the macroscopic and histological liver changes in NAFLD rats. The livers of the NAFLD rats appeared yellow or pale, greasy, and brittle. Inflammatory cells and numerous lipid droplets were detected in the livers of NAFLD rats via HE staining (A). The gross and cellular scores of the liver histological changes in the NAFLD model rats were significantly increased (B). Compared with those in the NAFLD group, the color and texture of the livers of the RES-treated rats were closer to normal, and the gross and cellular scores were significantly decreased.
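As referenced in the Sucrose Preference Test description above, the sketch below shows one way to compute the sucrose preference index from the measured intake volumes. It is a minimal illustration with made-up volumes; the column names and values are assumptions, not study data.

```python
import pandas as pd

# Hypothetical intake volumes (mL) measured over the 6-h test for a few rats.
intake = pd.DataFrame({
    "rat_id":     ["r1", "r2", "r3", "r4"],
    "sucrose_ml": [14.0, 9.5, 12.2, 6.8],
    "water_ml":   [4.0, 8.5, 5.1, 9.2],
})

# SPI = sucrose intake as a percentage of total fluid intake; lower values
# are interpreted as anhedonia-like behavior.
intake["spi_percent"] = 100 * intake["sucrose_ml"] / (intake["sucrose_ml"] + intake["water_ml"])
print(intake)
```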
Forced Swimming Test

The cylinder for this behavioral test was 60 cm tall and 25 cm in diameter and was filled with 30 cm of water maintained at 24-25 °C. The FST paradigm includes two sections: an initial 15-min pretest followed by a 5-min test administered 24 h later. The immobility time was recorded, and the rats were considered immobile when they did not make any active movements.

Morris Water Maze Test

A pool (1.8 m in diameter) was filled with opaque water and surrounded by complex maze cues. An escape platform (9 cm in diameter) was placed in the center of a designated quadrant with its top 2 cm below the water surface. In the hidden-platform acquisition phase, each rat performed four training trials per day for 4 days. In each trial in the hidden-platform test, the rat was given 60 s to find the platform and allowed to stay there for 20 s. If a rat failed to find the hidden platform within 60 s, then it was guided to the platform and allowed to remain there for 20 s. A probe test was conducted on day 5, during which the hidden platform was removed from the pool and the rat was allowed to swim for 60 s. The escape latency (latency to find the platform) in the acquisition phase and the duration spent in the target quadrant in the probe test were analyzed.

Intraperitoneal Glucose Tolerance Test

An intraperitoneal glucose tolerance test (IGTT) was conducted in rats after 12 h of fasting and water deprivation. Glucose was injected intraperitoneally at a dose of 2.0 g/kg, and whole blood was collected from the tail tip before injection and 15, 30, 60, and 120 min after injection. Blood glucose was measured using a Roche glycemic meter.

Measurement of Liver Weight, Liver Index, and Liver Histological Changes

After the blood was collected, the liver was rapidly dissected and weighed, and the liver index (liver weight/100 g bodyweight) was calculated. Four rats in each group were randomly selected, and the same part of the liver was collected, fixed with 1% neutral-buffered formalin, embedded in paraffin, sectioned at a thickness of 4 µm, and stained with hematoxylin and eosin (HE). The changes in the liver were scored according to the method used in our previous study (8). Briefly, the liver was graded according to gross appearance (0 = no changes in color or consistency; 1 = pale and yellowing; 2 = mottled, pale, and yellowing; 3 = mottled, pale, and yellowing with smooth rounded edges). The extent of fatty change in the liver was graded according to the amount and size of lipid-filled vacuoles presented throughout the stained sections (0 = no evidence of lipid vacuoles; 1 = few small lipid vacuoles present within hepatocytes; 2 = increased number and larger lipid vacuoles within hepatocytes).

Western Blot Assays

The hippocampi and the PFCs from three rats in each group were rapidly dissected, frozen in liquid nitrogen, and stored at −80 °C.
The tissues were homogenized in radioimmunoprecipitation assay (RIPA) buffer.

Statistical Analysis

All statistical analyses were performed using SPSS (Statistical Package for the Social Sciences) version 12.0.1 (SPSS Inc., Chicago, IL, USA). The data are expressed as the mean ± standard error of the mean (SEM), and P < 0.05 was considered statistically significant. The distribution of the data was determined by the Kolmogorov-Smirnov test. Between-group effects on bodyweight and escape latency in the MWM task were analyzed by repeated-measures analysis of variance (ANOVA) with group and time as the factors, followed by the least significant difference (LSD) post hoc test. Statistical analyses of other parameters were carried out using ANOVA followed by the LSD post hoc test. Correlation analysis was performed using Pearson's correlation test.

RES Administration Alleviated Hepatomegaly, Hepatocyte Steatosis, and Liver Dysfunction in NAFLD Rats

Repeated-measures ANOVA with experimental treatment as a between-subject factor and week as a within-subject factor showed a significant effect of time on bodyweight [F(8, 432) = 3,008.986, P < 0.01], with a significant interaction between treatment and week [F(56, 432) = 1.848, P < 0.01]. However, the factor of treatment did not affect bodyweight significantly [F(7, 53) = 1.207, P = 0.315]. As shown in Figures 2B,C, the liver weight and liver index were increased in the NAFLD group compared with the CON group (P < 0.05 or P < 0.01). Treatment with STG reversed the increase in liver weight (P = 0.022) but not the increase in liver index, whereas RES reversed the increase in both variables (P < 0.05 or P < 0.01). Figure 3 shows the macroscopic and microscopic histological changes in the livers of the different groups. No macroscopic change was observed in the livers of the rats in the CON group, but the livers of the NAFLD rats appeared yellow or pale, greasy, and brittle. The results of the HE staining revealed inflammatory cells and numerous lipid droplets in the livers of NAFLD rats, indicating inflammation and mild hepatocyte steatosis in the liver. However, the livers of the NAFLD + RES group showed less inflammation and hepatocyte steatosis than those of the NAFLD group. As shown in Figure 3B, the gross and cellular injuries of the livers were improved in the RES-treated group (P < 0.01). As shown in Table 1, the plasma concentrations of ALT, AST, TBIL, DBIL, and IBIL in the NAFLD group were all significantly higher than those in the CON group (P < 0.01). Treatment with RES, STG, or RSG reversed these changes (P < 0.05 or P < 0.01).

RES Administration Alleviated the Glucose and Lipid Metabolic Disorder in NAFLD Rats

Compared with those in the control and CON + RES rats, the plasma concentrations of TC, TG, and LDL-C were significantly increased in the NAFLD model group, while the plasma HDL-C level was decreased; there was no significant difference among groups with regard to the plasma concentration of very low-density lipoprotein cholesterol (VLDL-C), FFA, or amylase (Table 2). As shown in Table 3 and Figure 4, the blood glucose concentrations 0, 15, 30, and 60 min after injection of glucose were all increased in the NAFLD model group (P < 0.05 or P < 0.01), as were the plasma insulin and nesfatin-1 levels (P < 0.01), while the plasma leptin levels were decreased (P < 0.01).
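The repeated-measures analysis of bodyweight described above (treatment as a between-subject factor, week as a within-subject factor) was run in SPSS; the sketch below shows an approximately equivalent analysis in Python using a linear mixed model from statsmodels, with a random intercept per rat. The data frame, group structure, and column names are assumptions for illustration, and a mixed model is a substitute for, not a reproduction of, the SPSS repeated-measures ANOVA.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical long-format data: one row per rat per week, with group,
# week, and bodyweight. Replace with the real measurements.
rats, weeks = [f"rat{i}" for i in range(12)], list(range(9))
records = []
for i, rat in enumerate(rats):
    group = "NAFLD" if i % 2 else "CON"
    for week in weeks:
        weight = 250 + 15 * week + (10 if group == "NAFLD" else 0) * week / 4
        records.append({"rat": rat, "group": group, "week": week,
                        "weight": weight + rng.normal(0, 5)})
df = pd.DataFrame(records)

# Mixed model: fixed effects for group, week, and their interaction;
# random intercept for each rat to account for the repeated measures.
model = smf.mixedlm("weight ~ group * week", data=df, groups=df["rat"])
result = model.fit()
print(result.summary())
```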
Treatment with RES reversed the impaired glucose tolerance and increased plasma insulin and nesfatin-1 levels of NAFLD rats (P < 0.05 or P < 0.01) but had no effect on the plasma leptin level. The results of Pearson's correlation analysis showed that the plasma nesfatin-1 concentration was positively correlated with the plasma concentrations of TC (r = 0.309, P = 0.015), LDL-C (r = 0.378, P = 0.003), and insulin (r = 0.318, P = 0.013), but negatively correlated with the plasma concentration of HDL-C (r = −0.339, P = 0.008) and leptin (r = −0.531, P < 0.001). FIGURE 4 | Effect of RES on the plasma concentrations of insulin, leptin, and nesfatin-1 in NAFLD rats. The data are presented as the mean ± SEM, with five rats in the CON + RES group and eight rats in each of the other groups. Compared with the control and CON + RES rats, the NAFLD group showed increased plasma insulin (A) and nesfatin-1 (C) levels, while the plasma leptin (B) level was decreased. Treatment with RES decreased the plasma concentrations of insulin and nesfatin-1 but had no effect on the plasma leptin level. RES Administration Mitigated the Decrease in the SPI of NAFLD Rats in the SPT Without Significant Changes Among Groups in the OFT, EPM, or FST Figure 5 shows the performance of rats in the behavioral tests of the locomotor activity; anxiety, exploration, and despair behaviors; and anhedonia. In the OFT, there was no significant difference in the total distance traveled (Figure 5A), the distance traveled in the center of the field (Figure 5B), or the frequency of rearing or grooming (Figure 5C) among groups. There was no significant difference in the EPM (Figure 5D) or FST (Figure 5E) performance among the groups. Compared with those of the control and CON + RES groups, the SPI of rats in the NAFLD group was reduced (P < 0.01, Figure 5F), and treatment with RES, STG, and FLX mitigated the decrease in the SPI of the NAFLD rats (P < 0.05 or P < 0.01, Figure 5F). The results of the Pearson's correlation analysis showed that the plasma nesfatin-1 concentration was negatively correlated with the SPI (r = −0.607, P < 0.001). RES Administration Improved the Impaired Learning and Memory Ability of NAFLD Rats in the MWM Task For all the rats studied in this experiment, the escape latency declined with each day during the acquisition phase, as shown in Figure 6A. The results of the repeated-measures ANOVA revealed that both training day [F (3,159) = 148.388, P < 0.01] and experimental treatment [F (7,53) = 16.667, P < 0.01] had significant effects on the escape latency but did not have a significant interaction effect [F (21, 159) = 1.433, P = 0. 110]. Results of the LSD post hoc test showed that NAFLD rats took longer time to find the submerged platform than did the control (P < 0.01) or CON + RES (P < 0.01) rats on all 4 days, and the RES-treated rats also had a shorter escape latency than the NAFLD rats did (P < 0.01). In the probe test (Figure 6B), compared with those of the control and CON + RES groups, the duration that the NAFLD rats spent in the target quadrant was decreased (P < 0.01). However, the RES-treated rats spend more time in that quadrant than the NAFLD rats did (P < 0.01). Notably, a negative correlation was observed between the duration spent in the target quadrant and the plasma concentration of nesfatin-1 (r = −0.338, P = 0.008). Figure 7 shows the protein expression levels of Copine 6, p-catenin, catenin, cyclin D1, p-GSK3β, and GSK3β in the hypothalamus and PFC of the rats. 
RES Administration Reversed the Imbalanced Protein Expression Levels of Copine 6 and the Canonical Wnt Pathway in the Hippocampus and PFC of NAFLD Rats

The protein expression levels of Copine 6 and p-catenin/catenin were decreased (P < 0.05 or P < 0.01), whereas the protein expression level of p-GSK3β/GSK3β was increased (P < 0.05 or P < 0.01), in the hippocampus and PFC of NAFLD rats. The expression level of cyclin D1 was increased in the hippocampus but decreased in the PFC of NAFLD rats (P < 0.05 or P < 0.01). Treatment with RES reversed the imbalanced expression levels of Copine 6 and p-GSK3β/GSK3β in both the hippocampus and PFC, as well as that of p-catenin/catenin in the hippocampus (P < 0.05 or P < 0.01), without any notable effect on the expression level of cyclin D1.

FIGURE 5 | Effect of RES on the behavioral performance of NAFLD rats in the OFT, EPM, FST, and SPT. The data are presented as the mean ± SEM, with five rats in the CON + RES group and eight rats in each of the other groups. In the OFT, there was no significant difference among groups with respect to the total distance traveled (A), the distance traveled in the center (B), or the frequency of rearing or grooming (C). There was no change in performance in the EPM (D) or FST (E) among the groups. Compared with those of the control and CON + RES groups, the SPI of the rats in the NAFLD model group was reduced (F), and this reduction could be reversed by treatment with RES, STG, or FLX (F).

FIGURE 6 | Effect of RES on the behavioral performance of NAFLD rats in the MWM. The data are presented as the mean ± SEM, with five rats in the CON + RES group and eight rats in each of the other groups. In the acquisition phase, the escape latency of the NAFLD rats was longer than that of the control or CON + RES rats on all 4 days (A). In the probe trial, NAFLD rats spent less time than control rats in the target quadrant (B). These abnormalities could be reversed by treatment with RES.

DISCUSSION

In the present study, our results showed that RES (15 mg/kg) administration could exert a therapeutic effect against NAFLD in rats, covering not only metabolic and liver dysfunction but also behavioral and cognitive impairments such as anhedonia and decreased learning and memory ability in the MWM task. Moreover, the results showed that the effect of RES was partly attributable to its ability to regulate the abundance of nesfatin-1, which was significantly correlated with plasma lipid concentrations and behavioral performance. Furthermore, RES could also reverse the imbalanced protein expression levels of Copine 6 and the Wnt/β-catenin signaling pathway in the hippocampus and PFC of NAFLD rats. In Figure 8, we summarize the therapeutic effect of RES on the peripheral metabolic syndrome and the behavioral and cognitive impairments of NAFLD in a rat model. As a consequence of the pandemic spread of obesity, NAFLD is one of the leading causes of liver disease worldwide (3,33). Although weight loss based on diet and exercise is the cornerstone treatment for NAFLD (4,9) and percent weight reduction has been demonstrated to correlate significantly with improvement in liver chemistry and histological activity (34), it is difficult to achieve and maintain. In the present study, high-fat diet-induced NAFLD rats showed obesity, but the drugs did not significantly decrease the gain in bodyweight.
With the remarkable evolution in the understanding of the pathogenesis of NAFLD, new medical therapies and even modifications of currently available therapies have gradually appeared (35,36). Based on increasing experimental and clinical data, the efficacy of agents targeting insulin resistance has been demonstrated (37)(38)(39). Accordingly, STG and RSG, which are representative of DPP4 inhibitors and PPARγ agonists, were used as positive control drugs in the present study. Unsurprisingly, the results showed that treatment with STG or RSG could reverse hyperglycemia and hyperinsulinemia in NAFLD rats. In line with the effect of STG and RSG, RES could also restore the increased plasma glucose and insulin concentrations of NAFLD rats to normal, indicating its therapeutic effect on the dysfunction of glucose metabolism in NAFLD. The development of NAFLD comes from an imbalance between the influx and production of fatty acids, and dyslipidemia and insulin resistance are reciprocal causes underlying the development of NAFLD. In the vicious cycle of insulin resistance and increased lipolysis, excess lipids eventually accumulate in lipid droplets, creating a fatty and inflamed liver (35). Consistently, in this study, the NAFLD rats presented increased liver mass and index and elevated plasma concentrations of TC, TG, LDL-C, TBIL, and liver enzymes, together with inflammation and mild hepatocyte steatosis in the liver. It has been reported that STG and RSG could suppress lipid accumulation and improve insulin resistance (15,40). Consistently, in our present study, treatment with STG and RSG significantly improved the lipid accumulation in the blood and liver of NAFLD rats. Similarly, treatment with RES also reversed the observed hepatomegaly, dyslipidemia, hyperbilirubinemia, and hepatocyte steatosis of NAFLD rats. These results suggest a hepatoprotective and antihyperlipidemic effect of RES on NAFLD rats. FLX and DNP are other two positive control drugs used in this study to observe the behavioral performance of rats. However, lipid metabolism abnormalities have been reported in patients with depression treated with FLX (41) and patients with AD treated with DNP (42), and increased incidence of NAFLD has been found in male rat offspring exposed to FLX during the fetal and neonatal periods (43). In the present study, they showed no effect on the increased plasma glucose and lipid concentrations and the morphological and functional impairments in the liver of NAFLD rats. Additionally, we cannot absolutely exclude the synergistic effect of FLX or DNP and the high-fat diet on the lipid metabolism abnormalities and liver injuries of NAFLD rats. Interestingly, the plasma HDL-C concentration of NAFLD rats was increased after four consecutive weeks on a high-fat diet in our previous study (8) but decreased in the present study after eight consecutive weeks of high-fat diet feeding, which might suggest a compensatory mechanism and dynamic changes in HDL-C during the progression of NAFLD. Additionally, treatment with RES could increase the plasma HDL-C concentration in NAFLD rats. Together with the results of other studies using a hyperuricemiarelated NAFLD rat model (44), these results indicated the therapeutic effect of RES on the NAFLD state. Nesfatin-1 is an anorexic factor regulating feeding and metabolism. 
Consistent with our previous study (8), the plasma concentration of nesfatin-1 was also increased in the NAFLD rats, and significant correlations with the plasma levels of lipids, insulin, and leptin were observed. Although treatment with RES decreased the plasma concentration of nesfatin-1 in NAFLD rats, it had no significant effect on the plasma leptin level. Moreover, treatment with STG or RSG did not lower the plasma nesfatin-1 level. These results indicated an important role of nesfatin-1 in abnormal glucose and lipid metabolism in NAFLD rats and suggested a potential mechanism through which RES might exert its therapeutic effect on NAFLD, partially distinct from the mechanisms of STG and RSG. Increasing evidence has demonstrated the role of nesfatin-1 in regulating neuropsychiatrically relevant behavior, including mood and cognition (26,45), and plasma nesfatin-1 levels have been reported to be positively correlated with the severity of depression (46). In line with our previous study (8), the NAFLD rats in the present study showed a nesfatin-1-related decline in their SPI in the SPT, together with an impairment of learning and memory ability in the MWM task. However, there was no significant difference among groups with regard to the behavioral performance of rats in the OFT and TST, indicating that all the rats in this study were at the similar level of locomotion and exploration, anxiety, and despair behaviors. In the SPT, FLX-treated NAFLD rats showed an increased SPI, which was consistent with the clinical use of FLX in the treatment of depression. Similarly, DNP-treated NAFLD rats showed an improvement of the learning and memory in the MWM task. In line with findings from animal and human studies that STG produced cognitive improvement and antidepressant effects (16,17), our results showed that STG treatment could improve the learning and memory of NAFLD rats and increase their SPI, which is taken as a measure of anhedonia. Importantly, these abnormal neurobehavioral changes could also be reversed by treatment with RES. Together with the negative correlation between plasma nesfatin-1 concentration and anhedonic behavior, the results confirmed the important role of nesfatin-1 in the pathogenesis of NAFLD-induced neurobehavioral impairments in rats. However, only RES and FLX, rather than STG or RSG, decreased the plasma nesfatin-1 concentrations of NAFLD rats. Considering the different effects of the positive control agents used in the present study, these results indicated that there might be some different mechanisms underlying the therapeutic effect of RES and other agents targeting insulin resistance. The hippocampus and PFC are crucial brain areas involved in functions such as cognition and mood regulation. It has been reported that the Wnt/β-catenin signaling pathway plays important roles in the structure and function of the adult hippocampus and PFC, and impairment of Wnt/β-catenin signaling is involved in the pathogenesis of depression and AD (47,48). Abnormally active GSK3β has been demonstrated to increase the susceptibility to depression-like behavior (49), memory impairment (49,50), and impaired hippocampal neural precursor proliferation (49). In line with our previous findings (8), the present study found that the protein expression level of p-β-catenin/β-catenin was decreased, whereas the protein level of p-GSK3β/GSK3β was increased in the hippocampus and PFC of NAFLD rats. 
The expression level of cyclin D1 was increased in the hippocampus but decreased in the PFC of NAFLD rats. Treatment with RES could restore the imbalanced expression level of p-GSK3β/GSK3β to normal in both the hippocampus and the PFC and restore normal expression levels of p-β-catenin/β-catenin in the hippocampus, without any notable effects on the expression level of cyclin D1. Together with the effect of RES on the activity of the Wnt/β-catenin pathway in other studies (51, 52), our results suggest that the Wnt/β-catenin pathway underlies the neuroprotective mechanism of RES. The calcium sensor Copine 6 has also been reported to play an important role in regulating neurotransmission, synaptic plasticity, and learning and memory (27, 28, 53). It has been demonstrated that Copine 6 is recruited from the cytosol of dendrites to postsynaptic spine membranes by calcium transients, linking the calcium signals to the spine structural plasticity necessary for learning and memory (27). Moreover, a decreased protein expression level of Copine 6 has been demonstrated in the hippocampus and PFC of NAFLD (8) or stressed rats (29). In the present study, the results showed that treatment with RES increased the expression level of Copine 6 in the hippocampus and the PFC of NAFLD rats. Considering the finding that RES could induce the release of Ca2+ in a time-dependent manner (52), it seems reasonable to suppose that Ca2+ might play an important role in the mechanism of RES, which should be investigated in detail in future studies.

In conclusion, the results of the present study demonstrated that RES could exert a therapeutic effect on a NAFLD rat model, not only on the dysfunction of liver and glucolipid metabolism but also on behavioral and cognitive impairments. Furthermore, increased plasma nesfatin-1 concentrations might be a link between the dysfunction of glucolipid metabolism and the behavioral and cognitive impairments observed in NAFLD rats, relating significantly to both plasma lipid concentrations and behavioral performance. Additionally, apart from its ability to decrease nesfatin-1 abundance, the therapeutic effect of RES on NAFLD rats might also be related to its effect on the imbalanced expression levels of Copine 6 and key proteins in the Wnt/β-catenin signaling pathway in the hippocampus and PFC.

DATA AVAILABILITY

The datasets generated for this study are available on request to the corresponding author.

ETHICS STATEMENT

All experimental procedures in the present study were approved by the Animal Care and Use Committee of Anhui Medical University, in compliance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals (NIH publication No. 85-23, revised 1985).

AUTHOR CONTRIBUTIONS

J-FG designed the experiment. X-XC and Y-YX carried out the experiment and wrote the manuscript with support from J-FG. RW and ZC contributed to the animal experiment. KF and Y-XH performed the behavioral tests. YY and L-LH helped with the preparation. LP and J-FG supervised the project. All authors helped shape the analysis, research, and manuscript.
Nonlinear T-Wave Time Warping-Based Sensing Model for Non-Invasive Personalised Blood Potassium Monitoring in Hemodialysis Patients: A Pilot Study Background: End-stage renal disease patients undergoing hemodialysis (ESRD-HD) therapy are highly susceptible to malignant ventricular arrhythmias caused by undetected potassium concentration ([K+]) variations (Δ[K+]) out of normal ranges. Therefore, a reliable method for continuous, noninvasive monitoring of [K+] is crucial. The morphology of the T-wave in the electrocardiogram (ECG) reflects Δ[K+] and two time-warping-based T-wave morphological parameters, dw and its heart-rate corrected version dw,c, have been shown to reliably track Δ[K+] from the ECG. The aim of this study is to derive polynomial models relating dw and dw,c with Δ[K+], and to test their ability to reliably sense and quantify Δ[K+] values. Methods: 48-hour Holter ECGs and [K+] values from six blood samples were collected from 29 ESRD-HD patients. For every patient, dw and dw,c were computed, and linear, quadratic, and cubic fitting models were derived from them. Then, Spearman’s (ρ) and Pearson’s (r) correlation coefficients, and the estimation error (ed) between Δ[K+] and the corresponding model-estimated values (Δ^[K+]) were calculated. Results and Discussions: Nonlinear models were the most suitable for Δ[K+] estimation, rendering higher Pearson’s correlation (median 0.77 ≤r≤ 0.92) and smaller estimation error (median 0.20 ≤ed≤ 0.43) than the linear model (median 0.76 ≤r≤ 0.86 and 0.30 ≤ed≤ 0.40), even if similar Spearman’s ρ were found across models (median 0.77 ≤ρ≤ 0.83). Conclusion: Results support the use of nonlinear T-wave-based models as Δ[K+] sensors in ESRD-HD patients. Introduction Heart failure is among the most common cardiovascular complications in end-stage renal disease (ESRD) patients [1,2]. In hemodialysis (HD)-dependent ESRD (ESRD-HD) patients, the risk of cardiovascular mortality caused by electrical instability is 10-to 20-fold higher than in age-and gender-matched healthy subjects [3,4]. This remarkable association can be explained by the extreme fluctuations in blood potassium concentration ([K + ]) occurring in between HD sessions [5,6]. These changes in [K + ] are usually clinically silent and occur without warning to the patient or to the doctor in the absence of blood tests [7]. Therefore, continuous noninvasive monitoring of [K + ] variations (∆[K + ]) is of great importance [8] as it would provide risk warnings, improving an ever-growing clinical need. The electrocardiogram (ECG) reflects the electrical activity of the heart in a noninvasive and inexpensive way. Electrocardiographic consequences of ∆[K + ] are well known [9][10][11]: The earliest effects appear as narrowed and peaked T waves [12], followed by changes in the QT interval duration (or its corrected version, QTc) [13] and in repolarisation complexity [14]. Several studies in the literature have attempted to estimate [K + ] through the analysis of T-wave morphology changes, quantified by features representative of the T-wave shapes, [15][16][17]. In previous studies [18][19][20][21][22], we proposed and investigated six T-wave morphological parameters quantifying T-wave morphology changes by means of time warping analysis [23] for continuous non-invasive ∆[K + ] monitoring. 
These six T-wave morphology parameters included d_w^u, d_w, and d_w,c (unsigned, signed, and heart-rate-corrected T-wave morphology variations in time, respectively), d_a (T-wave morphology variations in amplitude), and their non-linear components (d_w^NL and d_a^NL), as described in [21,23]. In addition, we tested two lead space reduction techniques [20], principal component analysis (PCA) and periodic component analysis (πCA) [24], the latter implemented in two different versions: by exploiting the complete QRST complex periodicity, πC_B, or by restricting to the T-wave only, πC_T [20]. This work [20] showed that d_w and d_w,c had the highest correlation with Δ[K+]. It also showed that πC_T presented higher robustness against noise than PCA or πC_B, making it the most suitable lead space reduction technique for Δ[K+] tracking during the HD session, as well as in the post-therapy monitoring before the next HD session. Nevertheless, a quantitative relation between these T-wave morphological parameters derived from ECG analysis and Δ[K+] has not yet been established for clinical use, which would allow a non-invasive measurement of the Δ[K+] value. The direct assessment of a marker as a [K+] surrogate by Pearson correlation analysis implies the assumption of a linear relation between them. However, previous works have reported that the reconstruction of [K+] from the ECG significantly improves by employing a quadratic regression [16]. This result is compatible with the findings we reported in [20,21], where the same study population described in Section 2 was investigated according to the protocol in Figure 1. In Palmieri et al. [21], we observed a non-linear correspondence between Δ[K+] and the T-wave time warping biomarkers d_w and d_w,c (purple and green boxplots, respectively, in Figure 2 of the present study). Therefore, we hypothesised that using patient-specific nonlinear models based on T-wave time-warping-derived markers can provide better quantitative assessment of Δ[K+]. The aim of this study is to derive and to evaluate nonlinear polynomial sensing models to estimate Δ[K+] by using πC_T-based markers, d_w and d_w,c. As a reference, a patient-specific linear model is also estimated for each marker. This paper is organised as follows. First, we describe the study population, a database recorded during the interdialytic interval between two HD sessions. We next expand on the methodology to calculate the T-wave morphology parameters, as well as the proposed models to monitor [K+] fluctuations from the ECG. Finally, we present and discuss the results, ending with conclusions and considerations for future research.

Figure 2 (caption, continued). The notches represent the 95% confidence interval of the median, calculated as q_2 ± 1.57(q_3 − q_1)/√n, where q_2 is the median, q_1 and q_3 are the 25th and 75th percentiles, respectively, and n is the sample size. Red "+" symbols denote outliers. Data adapted from [20,21].

Materials

The ESRD-HD study population included 29 patients from the Nephrology ward at Hospital Clínico Universitario Lozano Blesa (Zaragoza, Spain). Inclusion criteria were (i) age 18 years or older, (ii) a diagnosis of ESRD, and (iii) undergoing HD at least three times per week, with venous or cannula access. The study protocol was approved by the Aragon ethics committee (CEICA, ref. PI18/003) and all patients signed informed consent. All procedures and methods were performed in accordance with the Helsinki Declaration.
Further details concerning the study protocol and clinical features of the study population can be found in [20,21]. A 48-h, standard 12-lead ECG Holter recording (H12+, Mortara Instruments, Milwaukee, WI, USA; sampling frequency of 1 kHz, amplitude resolution of 3.75 µV) was collected for each enrolled patient, with the acquisition starting 5 min before the HD onset and lasting until the next HD session, programmed 48 h later. Simultaneously, to determine [K+], a total of 6 blood samples were collected, just before starting the HD and every hour during the HD session (5 in total), with a last extraction immediately before the next HD session (Figure 1). The current number of patients included in the database is still limited, hence this work should be interpreted as an exploratory pilot study.

Methods

In this section, the different steps required for the processing of the ECG signals are described and summarised in the block diagram presented in Figure 3.

Figure 3 (block diagram): ECG pre-processing; πCA lead transformation; window selection and MWTW extraction; time-warping analysis and computation of d_w and d_w,c [21]; fitting models and Δ̂[K+] estimation.

ECG Pre-Processing

Baseline wander was removed with a 0.5-Hz cut-off high-pass filter, implemented as a forward-backward 6th-order Butterworth filter [25]. Residual noise outside the T-wave band was removed with a 40-Hz cut-off forward-backward 3rd-order low-pass Butterworth filter. QRS complexes were detected and T-waves delineated using a wavelet-based single-lead delineation method applied to each of the 12 leads [26].

Lead Transformation by Periodic Component Analysis (πCA)

Periodic component analysis is a lead space reduction technique aiming to emphasise the periodic structure of a signal [24,27]. In this work, πCA was applied with a one-beat periodicity to maximise the T-wave beat-to-beat periodic components in the transformed signal, as explained in [20]. For each ECG recording, a transformation matrix Ψ_πCA was estimated as detailed in [20] and applied to the 8 independent standard leads, obtaining a new set of 8 transformed leads, named periodic components. In this way, and by ordering the transformed leads inversely to their associated eigenvalue, the most beat-to-beat periodic components appear projected onto the first component, πC1, which was selected for subsequent analysis; T-waves were again delineated using the above-mentioned delineator [26].

Warping-Based T-Wave Morphology Markers

All T-waves from πC1 were further low-pass filtered at 20 Hz using a forward-backward 6th-order Butterworth filter to remove remaining out-of-band frequency components. T-waves in 2-min-wide windows centered around the 5th and 35th minute of each available hour were selected, and a mean warped T-wave (MWTW) was computed from all T-waves in each window [21,23]. Finally, the two T-wave morphology parameters, d_w and d_w,c, were computed by comparing each MWTW with a reference MWTW selected at the end of the HD session, resulting in markers relative to the reference point at the end of HD (h_4 in Figure 1). A detailed description of the computation of the warping markers analysed here can be found in [21,23], which describe how d_w represents a relative measure of morphological change between two T-waves. Likewise, d_w,c is obtained from the d_w marker after compensating for T-wave morphological changes not attributable to Δ[K+] but to heart rate changes occurring between the reference and analysis points [20,21].
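To illustrate the zero-phase Butterworth filtering stages described above, a minimal Python sketch is given below; the cut-off frequencies, filter orders, and 1 kHz sampling rate are taken from the text, while the scipy-based implementation and the synthetic input are illustrative assumptions rather than the authors' actual code.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # Hz, Holter sampling frequency stated above

def bandlimit_ecg(x, fs=FS):
    """Zero-phase (forward-backward) Butterworth filtering as described in the text:
    6th-order 0.5 Hz high-pass for baseline wander, 3rd-order 40 Hz low-pass for
    out-of-band noise."""
    b_hp, a_hp = butter(6, 0.5 / (fs / 2), btype="highpass")
    b_lp, a_lp = butter(3, 40.0 / (fs / 2), btype="lowpass")
    x = filtfilt(b_hp, a_hp, x)
    return filtfilt(b_lp, a_lp, x)

def smooth_twave(x, fs=FS):
    """Additional 6th-order 20 Hz low-pass applied to T-waves from the first
    periodic component (piC1)."""
    b, a = butter(6, 20.0 / (fs / 2), btype="lowpass")
    return filtfilt(b, a, x)

# Synthetic example: 10 s of a beat-like signal plus slow drift and noise
t = np.arange(0, 10, 1 / FS)
ecg_like = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 0.1 * t) + 0.05 * np.random.randn(t.size)
clean = bandlimit_ecg(ecg_like)
```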
Blood Potassium Concentration Variations Δ[K+]

The two proposed biomarkers, measured along time, have been associated with the corresponding relative variations in [K+] with respect to the [K+] at the reference point (h_4), where a blood sample was taken:

Δ[K+](h_i) = [K+]_{h_i} − [K+]_{h_4},    (1)

where [K+]_{h_i} is the concentration at the h_i-th time point (see Figure 1) and [K+]_{h_4} is the reference concentration at the end of the HD treatment. The Δ[K+] distribution across patients for each hour is presented in Figure 2.

Marker Fitting Models for Δ[K+] Estimation

For a given patient p, the relationship between the marker d ∈ {d_w, d_w,c} and Δ[K+] measured along time was modelled by means of linear (l), quadratic (q), and cubic (c) regression models for each patient to noninvasively calculate Δ[K+] values, according to the following models:

Δ̂_l[K+] = α_l d,    (2)
Δ̂_q[K+] = α_q d + β_q d²,    (3)
Δ̂_c[K+] = α_c d + β_c d² + γ_c d³,    (4)

respectively. The coefficients α_l, α_q, β_q, α_c, β_c, and γ_c were estimated for each patient p and marker d by least-squares regression between the estimated Δ̂[K+] and the measured Δ[K+] values. For each patient and marker, the parameters of the three models were estimated with two different approaches: (i) by using all the available Δ[K+] values ("m = a") and (ii) by adopting a leave-one-out cross validation ("m = o"), excluding the h_i-th Δ[K+](h_i) value from the training set and evaluating the prediction error at this h_i-th point, repeating this for all possible h_i exclusions. Finally, to avoid physiologically meaningless Δ̂_d[K+] trends, the three models in Equations (2)-(4) were computed with a constrained parameter estimation in order to guarantee a monotonically increasing relationship between Δ̂[K+] and d, as physiologically expected and corroborated by the marker trend evolution in Figure 2 in this paper and in Corsi et al. [16] in their Figures 2 and 4. That was implemented by imposing non-negativity of all model coefficients, which for positive values of the marker guarantees a non-decreasing Δ̂[K+] with d. The case with d < 0 is anecdotal, see Figure 2, and most likely due to outliers, since such values do not follow the physiological interpretation of T-wave narrowing with increased potassium.

Statistical Analysis

Spearman's rank and Pearson's correlation coefficients (ρ and r, respectively) were used for correlation analyses between Δ[K+] and Δ̂^f_{d,m}[K+]. The estimation error was computed as

e^f_{d,m}(p, h_i) = |Δ[K+](h_i) − Δ̂^f_{d,m}[K+](h_i)|,

where i ∈ {0, 1, 2, 3, 5} is the set of hours where the computation of the estimation error is meaningful. Note that h_4 is the reference point, where both Δ[K+] and Δ̂^f_{d,m}[K+] are zero by construction. All statistical analyses were performed using MATLAB version R2019a and results are given as the median and interquartile range (IQR).

Results

The median and IQR values of intra-patient Spearman's (ρ) and Pearson's (r) correlation coefficients were computed between Δ[K+] and Δ̂^f_{d,m}[K+]. Figure 4 shows the estimation error e^f_{d,m}(p, h_i) distributions, sorted by hours h_i, using the linear (Figure 4a,d), the quadratic (Figure 4b,e), and the cubic (Figure 4c,f) models. In addition, the aggregated distribution for all hours is presented with the label (ALL). The widest error distributions are obtained for hours h_0 and h_5, whose median and IQR are given in Table 1. These time points are of great interest since (i) the samples are the furthest from the reference (h_4) and (ii) when they are estimated by using the leave-one-out (m = o) approach they do not have any temporally close samples before (in the case of h_0) and/or after (in the case of h_5), as opposed to h_1, h_2, and h_3; this, together with the fact that their associated marker values are also the farthest from the rest (Figure 2).
Therefore, it seemed worthwhile to perform a detailed hour-based error analysis. An example of cubic modeling results for a given patient, with and without parameter constraints for monotonic Δ̂^c_{d_w,o}[K+] behaviour with d, is presented in Figure 5. Results are given for no restrictions on {α_c, β_c, γ_c} (Figure 5a); for imposing only α_c ≥ 0 (Figure 5b); and for the fully constrained model (α_c ≥ 0, β_c ≥ 0, γ_c ≥ 0) (Figure 5c).

Discussion

In this study, we analysed ECG signals from 29 ESRD-HD patients. We extracted T-wave morphology parameters, d_w and d_w,c, previously reported to have a strong correlation with Δ[K+] [20,21]. Then, we proposed and compared the use of linear, quadratic, and cubic regression models for Δ[K+] estimation from the d_w and d_w,c markers. The performance of each model was evaluated through Spearman's and Pearson's correlation coefficients of the estimated Δ̂[K+] with respect to actual Δ[K+] values and through hourly-based absolute estimation errors. The results on ESRD-HD patients reported here showed that non-linear regression models could be advantageously used to quantitatively estimate Δ[K+] and could, therefore, be an effective tool for remote, frequent, and noninvasive monitoring of ESRD-HD patients. According to Spearman's correlation coefficient (ρ) between measured and estimated variations in [K+], similar ρ median values were found across the three models, with 0.06 being the highest median increment when moving from a linear to a cubic model for d = d_w in m = a, thus denoting an analogous monotonic relationship between real Δ[K+] and estimated values (Δ̂^f_{d,m}[K+]). On the other hand, an improvement can be appreciated when comparing Pearson's correlation coefficient (r) across the three models, with the IQR reduced for d = d_w and m = a by 0.06 and 0.08 when comparing the quadratic and cubic models, respectively, with the linear model. Similar considerations can be made for d = d_w,c. This is an expected outcome, since the models proposed here were designed to avoid distorting the original monotonically increasing relationship between Δ[K+] and the ECG-derived markers, but only to adjust for the linear/non-linear relationship between them. However, the overall performance decreases considerably when the leave-one-out method, m = o, is used, with the median r lower and the IQR wider than in m = a. Also, for both d_w and d_w,c in m = o, a remarkable increase in the IQR can be observed when comparing linear and cubic models: from 0.47 to 0.61 for the first marker and from 0.34 to 0.45 for the second one. Overall, these findings seem to suggest that the cubic model does not provide any additional advantage over the linear or quadratic models in estimating Δ[K+] using the leave-one-out approach. Therefore, the results we observed for m = a could potentially be affected by over-fitting. Another interesting observation can be made when comparing d_w with d_w,c in terms of r: for the linear model and m = a, a small gain is obtained by heart rate correction, which is more significant for m = o. However, this improvement for the heart-rate-corrected index d_w,c vanishes in m = a when using the quadratic model or the cubic model, and gets even worse in m = o. This can also be a result of over-fitting in these estimates, since d_w,c is already subject to a heart rate correction estimation [21].
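As a minimal illustration of the patient-specific constrained polynomial fitting and leave-one-out evaluation discussed above, the sketch below enforces monotonicity by restricting the polynomial coefficients to be non-negative (a non-negative least-squares fit); this implementation choice and the made-up marker/Δ[K+] values are assumptions for illustration only, not the authors' MATLAB code.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import pearsonr, spearmanr

def fit_poly_nonneg(d, dk, degree):
    """Fit dk ~ sum_j coef_j * d**j (j = 1..degree) with coef_j >= 0,
    which yields a monotonically non-decreasing fit for d >= 0."""
    A = np.column_stack([d ** j for j in range(1, degree + 1)])
    coef, _ = nnls(A, dk)
    return coef

def predict(d, coef):
    return sum(c * d ** (j + 1) for j, c in enumerate(coef))

# Hypothetical per-patient data: marker d and measured delta[K+] at h0..h5
# (h4 is the reference point, where both are zero by construction)
d_vals  = np.array([1.8, 1.1, 0.6, 0.3, 0.0, 0.9])
dk_vals = np.array([2.1, 1.2, 0.7, 0.3, 0.0, 1.0])

degree = 2  # quadratic model, "f = q"
coef_a = fit_poly_nonneg(d_vals, dk_vals, degree)  # "m = a": fit on all samples

# "m = o": leave-one-out over the non-reference hours
loo_pred = np.full_like(dk_vals, np.nan)
for i in [0, 1, 2, 3, 5]:
    train = [k for k in range(len(d_vals)) if k != i]
    coef = fit_poly_nonneg(d_vals[train], dk_vals[train], degree)
    loo_pred[i] = predict(d_vals[i], coef)

test = [0, 1, 2, 3, 5]
err = np.abs(dk_vals[test] - loo_pred[test])        # hourly absolute estimation error
r, _   = pearsonr(dk_vals[test], loo_pred[test])     # Pearson's r
rho, _ = spearmanr(dk_vals[test], loo_pred[test])    # Spearman's rho
print(f"median error={np.median(err):.2f}, r={r:.2f}, rho={rho:.2f}")
```

With the coefficients constrained in this way, the quadratic and cubic fits cannot bend downwards for positive marker values, mirroring the fully constrained behaviour illustrated in Figure 5c.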
A reduction in the median and IQR estimation error for d = d_w in m = a is observed when hours and patients are pooled together. The IQR decreases from 0.48 for the linear model to 0.34 for both the quadratic and cubic models. The median error goes from 0.30 in the linear model to 0.22 and 0.21 in the quadratic and the cubic models, respectively. An analogous trend can be found for d = d_w,c in m = a: the IQR reduces from 0.50 in f = l to 0.36 in f = q and to 0.39 in f = c. However, for both markers, the improvements disappear when the leave-one-out method m = o is used, which would support the previously hypothesised over-fitting for m = a. These outcomes would point to the quadratic model as the most suitable model for Δ[K+] estimation in m = a, as well as in m = o, even if in this latter case the advantage is not very remarkable. Moreover, as mentioned above, there is no clear benefit in using a cubic rather than a quadratic model in either the m = a or the m = o case, probably due to the fully constrained parameter estimation rule we imposed, which, when applied to the cubic model, resulted in a very small cubic term, effectively reducing it to the quadratic model, as in Figure 5. The results observed so far may lead to the conclusion that, according to the performance metrics r or e^f_{d,o} considered, the improvement observed for the quadratic model estimation in the case of m = a vanishes, or is largely attenuated, in m = o. However, when analysing the data distributions we realise that the values of the d_w and d_w,c markers are not evenly distributed over the whole analysed range (see Figure 2). This fact can imply an over-representation of small d values in m = o modelling, penalising the estimates at h_0 and h_5, which present d values that might not be well represented in the training set. This could also mean that the leave-one-out cross-validation needs to be cautiously framed when the value of d to be estimated is far from those used in the training set range, which in our dataset usually happens at h_0 and/or at h_5, as exemplified in Figure 6. In these cases, the estimation error between the real Δ[K+] and Δ̂^f_{d,m}[K+] increases.

In the following, some limitations of our study are acknowledged. If blood samples had been collected more frequently during the early stage of the HD treatment, when [K+] and, consequently, d change more rapidly (covering a broad range of values), then the model training set in m = o could have better represented all the possible cases of d in the quadratic as well as in the cubic model, and the results could have been more conclusive regarding the non-linear modelling improvement in predicting [K+]. Had this refined learning been done, or if it is done in future studies, it will, predictably, result in less error at the extreme times h_0 and h_5 of the process, and consequently also in a notably improved performance of the quadratic model both for m = a and for m = o. Another limitation that should be taken into account when interpreting this work's results is the lack of perfect time synchronisation between the actual Δ[K+] and the evaluated d used for estimation at h_5. As previously reported in [21], 44 h is the average ECG duration in our database (not 48 h, when the last blood sample is taken), mainly due to electrode detachment or early battery exhaustion. However, in a recent study [20], we observed low marker dynamics in the late post-HD treatment.
Therefore, with some degree of confidence, we have assumed that the estimation error obtained between Δ[K+] and Δ̂^f_{d,m}[K+] at h_5 would be quite similar if the actual value (had the ECG lasted, as planned, for 48 h) had been used for modelling. Specific aspects of ESRD-HD patients' clinical status (e.g., the possibility of previous infarction not always revealed in the clinical history) could have influenced the results, generating the inter-patient variability observed here. In addition, the accuracy of the proposed models in estimating potassium variations for patients other than ESRD-HD remains to be assessed. Finally, the reduced number of patients and of available blood samples for each patient included in this study also represents a limitation in framing the conclusions of the work. Indeed, even if the proposed approach may entail a significant step towards robust and reliable Δ[K+] sensing from time-warping-based biomarkers, it needs to be validated in larger cohorts before any translation to clinical practice. However, the available data would suggest that a patient-specific quadratic model could estimate Δ[K+] time trends with better accuracy than a linear model. Also, in real practice, this method implies the collection of several blood samples, which may result in cumbersome procedures. It remains to be studied to what extent the models learned in one session can be extrapolated to sessions in later days/weeks, reducing the learning to just a single session. Future studies should be conducted in a larger population including not only ESRD-HD patients but also subjects at risk of [K+] imbalance, such as those with diabetes mellitus [28] or severe cardiovascular events like myocardial infarction [29]. In addition, the proposed estimation models should be validated in a follow-up study where the models are learned at the initial HD session and used in later HD sessions to measure Δ[K+]. In such studies, the complete learning with m = a at the initial HD session could be evaluated by its prediction value at subsequent sessions, without any over-fitting risk. In this future analysis, we expect that the m = a approach will show better performance, in terms of correlation and estimation error, than the one reported here for the m = o case, since the models' coefficients will be estimated over the six Δ[K+] values (and not just over five as in m = o), thus covering the full range of d values for each patient.

Conclusions

The present study showed the advantage of using non-linear models for estimating Δ[K+] in ESRD-HD patients based on T-wave-derived markers. These results suggest a new noninvasive strategy for ECG-based [K+] sensing, with large implications for monitoring patients with cardiovascular and renal diseases, providing a meaningful tool for personalised ambulatory cardiac risk assessment of these patients.

Data Availability Statement: The dataset is still being extended and is available upon request to the corresponding author.
Is the Long-term Disease Course of Elderly-Onset Ulcerative Colitis Different from That of Non-Elderly-Onset Ulcerative Colitis?
Inflammatory bowel disease (IBD), composed of Crohn's disease (CD) and ulcerative colitis (UC), has become a global disease, as the incidence of IBD has been increasing in newly industrialized countries such as those in Asia, where IBD was rare in the past.1 UC is more prevalent than CD, and it occurs across a wide range of ages from the 20s to the 60s. Therefore, the number of elderly patients with UC is rising accordingly. The proportion of elderly-onset UC (EOUC), defined as UC diagnosed in those aged 60 years or older, was 14.2% to 23% in Western countries.2,3 That of EOUC in East Asia is reportedly lower, ranging between 9.9% and 14.6%.4 The elderly with UC have peculiar features compared with young patients. First, frequent comorbidities may prevent the use of immunosuppressants owing to concerns about adverse events such as infection and malignancy.5 This practice pattern may negatively affect disease outcomes. Second, the relatively immunodeficient state in the elderly may attenuate the aberrant immune response, probably leading to an improved disease course. Considering that the aging population is growing, it is necessary to better understand the clinical courses of these patients. The natural course and clinical characteristics of patients with EOUC have yet to be conclusively established. A Dutch population-based study reported that EOUC patients have a higher hospitalization rate,2 while a systematic review and meta-analysis of population-based cohorts showed that EOUC patients have a similar risk of colectomy as patients with non-EOUC.6 Corticosteroid use was similar, but with lower use of immunomodulatory and anti-tumor necrosis factor agents. In Asia, a Japanese nationwide survey study reported that EOUC patients had more severe disease activity, a higher proportion of IBD-related surgery, and a higher rate of corticosteroid use.7 In contrast, a cohort study in Hong Kong reported that disease severity, corticosteroid or immunomodulator use, and colectomy rate were similar in EOUC and non-EOUC patients.8 However, these Asian studies had limitations in that they used only data from referral hospitals or excluded patients with mild disease. In the current issue, Park et al.9 reported data comparing the clinical characteristics and long-term disease course of EOUC with those of non-EOUC in a well-established population-based cohort in Korea. This cohort study included 99 patients with EOUC and 866 patients with non-EOUC between 1986 and 2015. This study showed that the cumulative risk of medication use was comparable between groups (p=0.091 for corticosteroids, p=0.794 for thiopurines, and p=0.095 for anti-tumor necrosis factor agents). Also, the cumulative risks of disease outcomes were similar between patients with EOUC and non-EOUC (11.9% vs 18.1% for hospitalization [p=0.240], and 2.3% vs 1.8% for colectomy [p=0.977]) at 10 years after diagnosis. These results suggest that the long-term disease course of patients with EOUC is similar to that of non-EOUC. The strength of this study is its well-organized population-based cohort with a long-term follow-up period (median, 104.5 months). Despite the limitation of the study (the lack of baseline data such as disease activity, laboratory results, and comorbidities), the results are of importance in understanding the natural course and clinical characteristics of Asian EOUC patients.
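The 10-year cumulative risks and p-values quoted above are the kind of outcome typically obtained from Kaplan-Meier estimates compared with a log-rank test. The editorial does not describe the statistical methods used by Park et al., so the sketch below, based on the lifelines package and fabricated follow-up data, only illustrates that general approach.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical follow-up data: time to colectomy (months) and event indicator
t_eouc  = rng.exponential(600, size=99);  e_eouc  = rng.random(99)  < 0.03
t_noneo = rng.exponential(600, size=866); e_noneo = rng.random(866) < 0.02

kmf = KaplanMeierFitter()
kmf.fit(t_eouc, event_observed=e_eouc, label="EOUC")
risk_10y_eouc = 1 - kmf.predict(120)   # cumulative risk at 10 years (120 months)

kmf.fit(t_noneo, event_observed=e_noneo, label="non-EOUC")
risk_10y_noneo = 1 - kmf.predict(120)

res = logrank_test(t_eouc, t_noneo, event_observed_A=e_eouc, event_observed_B=e_noneo)
print(f"10-y risk: EOUC {risk_10y_eouc:.1%} vs non-EOUC {risk_10y_noneo:.1%}, p={res.p_value:.3f}")
```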
Meanwhile, it is necessary to consider cancer occurrence and mortality as age-related issues in patients diagnosed with UC over the age of 60 years. In a French population-based study, there was no increased risk of developing colorectal cancer in EOUC patients. However, the risk of developing lymphoproliferative and myeloproliferative disorders was high, and this was unrelated to thiopurine exposure.10 A 50-year nationwide register-based cohort study in Sweden reported increased all-cause mortality (hazard ratio, 1.4; 95% confidence interval, 1.4 to 1.4) in EOUC patients compared to the general population.3 However, the hazard ratios for various causes of death in EOUC and non-EOUC patients were similar. Currently, data on cancer and mortality in EOUC patients in the Asian population are lacking. Research on this issue, together with optimal monitoring and management strategies for EOUC patients, is warranted in the future.

CONFLICTS OF INTEREST

No potential conflict of interest relevant to this article was reported.
Nontypeable Haemophilus influenzae released from biofilm residence by monoclonal antibody directed against a biofilm matrix component display a vulnerable phenotype

Bacterial biofilms contribute significantly to the pathogenesis, recurrence and/or chronicity of the majority of bacterial diseases due to their notable recalcitrance to clearance. Herein, we examined the kinetics of the enhanced sensitivity of nontypeable Haemophilus influenzae (NTHI) newly released (NRel) from biofilm residence by a monoclonal antibody against a bacterial DNABII protein (α-DNABII) to preferential killing by a β-lactam antibiotic. This phenotype was detected within 5 min and lasted for ~6 h. Relative expression of genes selected due to their known involvement in sensitivity to a β-lactam showed transient up-regulated expression of penicillin binding proteins by α-DNABII NTHI NRel, whereas there was limited expression of the β-lactamase precursor. Transient down-regulated expression of mediators of oxidative stress supported a similarly timed vulnerability to NADPH-oxidase-sensitive intracellular killing by activated human PMNs. Further, transient up-regulated expression of the major NTHI porin aligned well with the observed increased membrane permeability of α-DNABII NTHI NRel, a characteristic also shown by NRel of three additional pathogens. These data provide mechanistic insights into the transient, yet highly vulnerable, α-DNABII NRel phenotype. This heightened understanding supports continued validation of this novel therapeutic approach designed to leverage knowledge of the α-DNABII NRel phenotype for more effective eradication of recalcitrant biofilm-related diseases.

Whereas our previous work showed that NTHI NRel displayed preferential killing by a β-lactam antibiotic relative to a sulfonamide after 15 m or 2 h of exposure to α-DNABII [36], here we sought to more clearly define the kinetics of this outcome. As such, after incubation of a 16 h NTHI biofilm with α-DNABII for 1 m, 5 m, 15 m, 2 h, 4 h or 6 h, we assayed killing of NTHI NRel by either A/C or T/S (Fig. 1) [39]. Antibiotic concentrations used were increased over time to adjust for bacterial growth in the culture system at the time bacteria were collected (Table 1), while simultaneously maintaining killing of NTHI that grew planktonically in fluids above the biofilms at ~15-25% to allow for detection of any increased sensitivity of NRel. Numbers (CFU/mL) of NTHI that grew planktonically in medium above the biofilm, or of NTHI released from biofilm residence by α-DNABII after the indicated period of time, are listed in Supplementary Table S1. The relative ratio of recovery of α-DNABII NTHI NRel to that of those that grew planktonically in the medium above the biofilm ranged from 1.3:1 to 2.4:1, similar to what we previously reported [40].

At every time point tested, T/S-mediated killing of α-DNABII NTHI NRel never exceeded 25%, consistent with our previous report of minimal relative susceptibility to this antibiotic when tested on α-DNABII NRel recovered at 15 m or 2 h [36]. Killing of α-DNABII NTHI NRel by T/S was greatest at 5 and 15 m but declined by 2 h and remained at or below 20% at both 4 h and 6 h. Killing of α-DNABII NTHI NRel by T/S was never significantly greater than that of planktonic NTHI by T/S.
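The killing percentages above are derived from viable plate counts (CFU/mL). The exact calculation is not given in this excerpt, so the sketch below assumes the common convention of expressing killing relative to a matched antibiotic-free control, with made-up counts chosen only to mirror the reported ranges.

```python
def percent_killing(cfu_treated, cfu_untreated):
    """Percent killing relative to the antibiotic-free control (assumed convention)."""
    return 100.0 * (1.0 - cfu_treated / cfu_untreated)

# Hypothetical CFU/mL values for a single time point
controls = {"planktonic": 2.0e8, "NRel": 3.5e8}
treated = {
    ("planktonic", "A/C"): 1.6e8,  # ~20% killing, within the 15-25% window kept for planktonic NTHI
    ("NRel", "A/C"):       1.1e8,  # markedly greater killing of NRel by the beta-lactam
    ("planktonic", "T/S"): 1.7e8,
    ("NRel", "T/S"):       2.8e8,
}

for (population, antibiotic), cfu in treated.items():
    pk = percent_killing(cfu, controls[population])
    print(f"{population:10s} + {antibiotic}: {pk:5.1f}% killing")
```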
Conversely, sensitivity of α-DNABII NTHI NRel to killing by A/C was significantly greater than that of planktonic NTHI within 5 m (p = 0.003). This preferential sensitivity to the β-lactam antibiotic peaked in NRel recovered after exposure of the biofilm to α-DNABII for 2 h (p = 0.0001), a degree of significant difference that was maintained at the 4 h timepoint (p < 0.0001). By 6 h, the susceptibility of α-DNABII NTHI NRel to killing by A/C was no different from that of planktonic NTHI killed by A/C or T/S. Given that NTHI NRel recovered at the 2 h timepoint were significantly sensitive to killing by A/C, whereas those recovered at the 6 h timepoint had lost this specific characteristic, we next focused on comparative evaluation of these two NRel populations to begin to determine what might contribute to the noted enhanced A/C sensitivity.

Table 1. Concentrations of A/C or T/S used to determine the kinetic profile of antibiotic sensitivity of α-DNABII NTHI NRel when recovered at increasing time points within the assay period. Antibiotic concentrations are listed as a ratio of amoxicillin trihydrate (µg/mL) to clavulanic acid (µg/mL) (A/C), or of trimethoprim (µg/mL) to sulfamethoxazole (µg/mL) (T/S). We increased the antibiotic concentrations used after 2 h to account for bacterial growth in the culture system while also maintaining killing of NTHI that grew planktonically within the fluids that overlay a biofilm in our negative control culture systems between ~15 to 25% at any given timepoint.

2 h and 6 h α-DNABII NTHI NRel were distinct in relative transcription of a panel of genes whose products were likely involved in sensitivity to a β-lactam antibiotic.

To begin to determine the relative differences in sensitivity to killing by A/C between 2 h α-DNABII NTHI NRel ('2 h NRel') and those recovered at 6 h ('6 h NRel'), which no longer displayed this characteristic, we established a set of 15 primer pairs (Supplementary Table S2) designed to specifically characterize sensitivity to a β-lactam antibiotic via real-time quantitative reverse transcription PCR (qRT-PCR).

The first subset profiled comprised three canonical lag phase genes (fis, deaD and artM), used here as in earlier work in which we showed that NTHI NRel recovered after 15 m of biofilm exposure to α-DNABII exhibited both an abundance of ribosomal proteins characteristic of bacteria in lag phase and significantly (p ≤ 0.05) greater transcript abundance of these three genes compared to planktonic NTHI [36,41]. Thus, we now wondered whether the 2 h NRel, which showed preferential sensitivity to A/C, were likely still in lag phase but were no longer so at the 6 h time point. These three canonical lag phase genes did indeed exhibit a significant, approximately 4- to 8-fold greater abundance at 2 h compared to 6 h (Fig. 2). Thereby, these data provided further evidence that upon release from biofilm residence by α-DNABII, NTHI NRel appeared to mimic bacteria in lag phase within 15 min and up until at least 2 h after release.

We next assessed relative expression of ompP2, which encodes OMP P2, the major outer membrane porin of NTHI [42,43], as this porin is likely involved in access of A/C into the bacterial cell. Relative to 6 h NRel, 2 h NRel exhibited a significant, nearly tenfold increased expression of ompP2. Thus, this finding also provided a potential mechanism for the observed significantly greater killing of 2 h NRel by A/C, which was no longer observed at 6 h.
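Relative transcript abundances such as the roughly tenfold higher ompP2 expression in 2 h versus 6 h NRel are conventionally derived from qRT-PCR cycle-threshold (Ct) values by the 2^-ΔΔCt method. The excerpt does not state the quantification method or reference gene used, so the sketch below, with invented Ct values and an assumed housekeeping reference, is purely illustrative.

```python
def fold_change_ddct(ct_target_test, ct_ref_test, ct_target_cal, ct_ref_cal):
    """Relative expression by the 2^-ddCt method:
    dCt = Ct(target) - Ct(reference); ddCt = dCt(test) - dCt(calibrator)."""
    d_ct_test = ct_target_test - ct_ref_test
    d_ct_cal = ct_target_cal - ct_ref_cal
    return 2.0 ** (-(d_ct_test - d_ct_cal))

# Hypothetical Ct values: ompP2 in 2 h NRel (test) vs 6 h NRel (calibrator),
# normalized to an assumed housekeeping reference gene
fc = fold_change_ddct(ct_target_test=18.1, ct_ref_test=15.0,
                      ct_target_cal=21.4, ct_ref_cal=15.0)
print(f"ompP2 fold change (2 h vs 6 h NRel): {fc:.1f}")  # ~9.8-fold with these made-up values
```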
The next set of three genes profiled (ftsI, dacA, and dacB) encode bacterial penicillin binding proteins (PBPs), which are targets of β-lactam antibiotics [44-48]. We found a significant threefold increase in expression of each PBP gene in 2 h NRel compared to 6 h NRel, a finding which again suggested a potential mechanism for the observed increased sensitivity to A/C that was detectable shortly after release from biofilm residence by α-DNABII.

Next, we evaluated relative expression of three AcrAB-TolC multidrug efflux pump (MDEP) genes due to their role in removal of antibiotics from the bacterial cell [49]. We found significantly increased expression (≥ fourfold) of each of these three genes in 2 h vs. 6 h NRel, which suggested greater activity of this efflux pump at the 2 h time point. Interestingly, however, the gene that encodes the repressor of this efflux pump, acrR, was also significantly up-regulated (11-fold) in 2 h NRel. Given the paradoxical nature of these findings with regard to the significant A/C sensitivity, we considered whether collection of RNA at the exact time point at which the biological activity of relative antibiotic killing was assessed (e.g., 2 or 6 h) might have provided the wrong 'snapshot in time' to gauge the actual activity of these particular gene products. As such, we also recovered RNA after 30 m of biofilm exposure to α-DNABII (Supplemental Fig. S1) as perhaps a better surrogate for comparative gene product activity. We found significantly increased transcript abundance for acrA, acrG, and tolC at 30 m relative to 2 h (p = 0.004 for acrA and p = 0.0005 for acrG and tolC). Additionally, there was significantly greater acrR transcript abundance at 30 m compared to 2 h (p ≤ 0.0001). Unfortunately, these additional data did not offer clarity as to the role of this efflux pump in the observed enhanced sensitivity of α-DNABII NRel of this strain of NTHI to A/C, but suggested that overall the net relative biological outcome of these four gene products favored activity of the repressor, as others have similarly observed [30]. However, it is also possible that the role of AcrR is not clear in this strain. Further, it is important to note that the time sensitivity of gene expression changes, and how that might relate to relative protein expression and/or the stability or lifespan of these gene products (as well as likely others), potentially played additional roles here.

Whereas we used amoxicillin with the β-lactamase inhibitor clavulanic acid, given the observed significant sensitivity to killing by A/C in 2 h NRel, we were interested in relative expression of bla, which encodes the NTHI β-lactamase precursor. We found no significant difference between the 2 h and 6 h NRel, which suggested that this gene product likely played no role in the transient significant sensitivity to A/C observed.

Lastly, we profiled three genes involved in bacterial resistance to oxidative stress, hktE, pdgX and sodA, as, given the previously demonstrated rapid efficacy of α-DNABII when used therapeutically in three pre-clinical models of human disease without addition of antibiotics [24,26,27], we were curious about the potential vulnerability of NTHI NRel to host innate immune effectors in addition to antibiotics. Whereas there was no significant difference in relative sodA gene expression, hktE and pdgX were significantly down-regulated by ≥ fourfold in 2 h NRel compared to 6 h NRel. This outcome suggested that the α-DNABII NTHI NRel phenotype might also
include being less well equipped to mitigate oxidative stress. Therefore, we next wanted to determine the relative sensitivity of 2 h vs 6 h NRel to killing by human PMNs.

2 h α-DNABII NTHI NRel were significantly more susceptible to NADPH-oxidase-sensitive intracellular killing by human PMNs.

To assess the relative ability of activated human PMNs to kill 2 h vs 6 h NRel, we introduced use of the humanized version of our anti-tip chimer monoclonal antibody ('HuTipMab') [50], while continuing to use the murine monoclonal (α-DNABII) we had used to this point, but will now refer to it as 'MsTipMab' for clarity. This direct comparison of HuTipMab to MsTipMab allowed us to expand our evaluation as to whether humanization compromised any assessed activity of this monoclonal, including induction of the NRel phenotype. We found that whereas susceptibility of planktonic NTHI to killing by human PMNs was ~33% (Fig. 3A, open box symbols), that of 2 h NTHI NRel, whether induced by Ms- or HuTipMab, was significantly greater (65% or 62%, respectively, p ≤ 0.0001) (Fig. 3A). To next determine whether the enhanced killing of 2 h NRel by human PMNs was likely a result of their limited relative ability to mitigate oxidative stress (due to significant down-regulation of hktE and pdgX), we incorporated the intracellular NADPH-oxidase inhibitor diphenyleneiodonium chloride (DPI) [51] into our PMN killing assay. When neutrophils were incubated with DPI, 2 h NRel induced by Ms- or HuTipMab were now significantly less susceptible to killing (35% and 32%, respectively, p = 0.0006) compared to killing by untreated PMNs (Fig. 3A, closed black and grey circles). Notably, regardless of whether induced by incubation with Ms- or HuTipMab, killing of 2 h NRel by DPI-treated PMNs was now no longer statistically significantly different from that of planktonic NTHI by pre-treated PMNs (p = 0.36). DPI had no effect on killing of planktonic NTHI.

For 6 h NRel, DPI treatment had no effect on the relative ability of human PMNs to kill these bacteria once phagocytized, as there was no significant difference in the susceptibility to killing by DPI-treated PMNs compared to untreated PMNs (p = 0.3) (Fig. 3B). As observed in the assay of 2 h NRel, there was no difference in susceptibility to killing by DPI-treated PMNs of planktonic NTHI relative to 6 h NRel (p = 0.10), and killing of planktonic NTHI appeared largely unaffected by the addition of DPI, which suggested a greater influence of extracellular mechanisms of PMN-mediated killing for planktonic NTHI. Moreover, there was no difference in susceptibility to killing by DPI-treated PMNs in 2 h relative to 6 h NRel (p = 0.07).

Taken together, these data indicated that, in addition to the transient yet notably increased sensitivity to killing by A/C, α-DNABII NTHI NRel were also highly vulnerable to intracellular killing by activated human PMNs, specifically in an NADPH-oxidase-sensitive manner. This characteristic was no longer detected in 6 h NRel, which suggested yet another transient vulnerability of this phenotype.
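One way to read the DPI experiment above is that the difference between killing by untreated and DPI-treated PMNs approximates the NADPH-oxidase-dependent component of killing. This decomposition is an interpretation, not a calculation stated in the text; the short sketch below simply applies it to the percentages quoted above.

```python
# Percent killing of 2 h NRel by human PMNs, as quoted in the text
killing_untreated = {"MsTipMab": 65.0, "HuTipMab": 62.0}  # PMNs without DPI
killing_dpi       = {"MsTipMab": 35.0, "HuTipMab": 32.0}  # PMNs with NADPH-oxidase inhibited

for mab in killing_untreated:
    oxidative_component = killing_untreated[mab] - killing_dpi[mab]
    print(f"{mab}: ~{oxidative_component:.0f} percentage points of killing "
          f"attributable to NADPH-oxidase-dependent mechanisms")
# Both monoclonals give ~30 percentage points, consistent with the text's conclusion
# that 2 h NRel are preferentially vulnerable to oxidative intracellular killing.
```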
Bacterial membrane permeability was significantly greater in 2 h vs 6 h α-DNABII NTHI NRel.Given that ompP2 was significantly up-regulated in 2 h NRel compared to 6 h NRel, we wondered if this increased expression of the major NTHI outer membrane porin might correlate with greater membrane permeability.To address this, we used the nucleic acid intercalating green fluorescing dye, SYTOX Green.SYTOX Green can only access the NTHI genome if both the inner and outer membranes of this Gram-negative bacterium are permeable, thereby increased fluorescence signal was used as a surrogate of relative outer membrane permeability. In an assay of both the 2 h and 6 h NRel, the negative control (mid-log phase planktonically grown NTHI) was minimally fluorescent immediately after exposure to SYTOX Green, then maintained this low level of fluorescence throughout the 120 m assay period (Fig. 4A and B, black lines).Conversely, the positive control (mid-log phase planktonically grown NTHI treated with Triton X-100 to artificially permeabilize the bacterial membrane) also demonstrated a low level of fluorescence at time zero, but then rapidly increased to reach and maintain maximum fluorescence of ~ 0.0125 RFU/CFU within approximately 30-45 m (Fig. 4A and B, grey lines).The 2 h NRel also demonstrated a low level of fluorescence immediately upon exposure to SYTOX Green, but then steadily increased over time to reach a maximum plateau at 75 m which approached that of the positive control (Fig. 4A, green line).Throughout the assay, 2 h NRel consistently fluoresced significantly greater than planktonically grown NTHI (p ≤ 0.0001), which suggested increased outer membrane permeability in 2 h NRel. Given that 6 h NRel demonstrated limited sensitivity to either A/C or T/S (see Fig. 1), these 6 h NRel were also assayed to determine their relative outer membrane permeability (Fig. 4B).Whereas planktonic NTHI (with and without Triton X-100 treatment) behaved as described in Fig. 4A, 6 h NRel now emitted a fluorescence signal only slightly greater than mid-log phase planktonically grown NTHI throughout the 120 m assay period (p = 0.11).These data suggested that in alignment with enhanced A/C sensitivity, increased outer membrane permeability observed in 2 h NRel was also transient and had resolved by 6 h. To ensure that the observed increased outer membrane permeability was not an NRel characteristic exclusive to either NTHI as a whole, or to this particular NTHI isolate, we also assessed identically generated NRel of three additional pathogens [Escherichia coli, Pseudomonas aeruginosa, or a methicillin-resistant Staphylococcus aureus (MRSA)] as induced upon exposure of respective 16 h biofilms to α-DNABII for 2 h.Whereas each bacterium demonstrated a unique pattern, 2 h α-DNABII NRel of E. coli, P. aeruginosa and MRSA all exhibited significantly increased fluorescence signal (p = 0.007 for E. coli NRel, p = 0.005 for P. aeruginosa NRel, p = 0.02 for MRSA NRel) compared to their planktonically grown counterparts over the 120 m assay period (Fig. 4C-E).These data suggested that a transient phenotype of increased outer membrane permeability was potentially a common characteristic of bacteria released from biofilm residence by the action of a monoclonal antibody directed against the DNABII structural biofilm matrix proteins. 
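The permeability readout above is fluorescence normalized to viable counts (RFU/CFU) tracked over 120 min. The sketch below illustrates that normalization with fabricated readings; only the RFU/CFU axis and the approximate plateau of the Triton X-100 control (~0.0125 RFU/CFU) are taken from the text.

```python
import numpy as np

time_min = np.arange(0, 121, 15)  # sampling times across the 120 min assay

# Hypothetical raw SYTOX Green readings (RFU) and matched viable counts (CFU)
raw_rfu = {
    "planktonic":        np.array([50,  55,  60,   62,   65,   66,   68,   70,   72]),
    "planktonic+Triton": np.array([60, 400, 900, 1150, 1230, 1250, 1250, 1250, 1250]),
    "2 h NRel":          np.array([55, 180, 380,  600,  800,  950, 1050, 1080, 1090]),
}
cfu = {"planktonic": 1.0e5, "planktonic+Triton": 1.0e5, "2 h NRel": 1.0e5}

# Normalize to RFU per CFU, as in the figure axis described in the text
rfu_per_cfu = {name: readings / cfu[name] for name, readings in raw_rfu.items()}

for name, curve in rfu_per_cfu.items():
    print(f"{name:20s} plateau = {curve[-1]:.4f} RFU/CFU (max ~{curve.max():.4f})")
```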
Discussion

The need for more effective therapeutic approaches, or preferably prevention strategies, to combat persistent biofilm-related diseases cannot be overstated. Biofilm-related diseases exacerbate the major global public health crisis of antibiotic resistance: despite the fact that biofilm-resident bacteria are significantly recalcitrant to antibiotic treatment, individuals with these diseases are nonetheless commonly prescribed antibiotics as one of our very limited repertoire of treatment options [52-54]. Not only are bacteria resident within biofilms highly resilient to antibiotics, but the resistance of biofilms to clearance by immune effectors adds complexity by significantly challenging the host's ability to resolve biofilm-related diseases [55-57]. To address these issues, one strategy includes the use of methodologies that can release biofilm-resident bacteria from their protective fortress to facilitate their killing by the host and, if needed, by co-delivery of traditional antibiotics that may now be substantially more effective.

Towards this goal, we have focused on a key structural component of the bacterial biofilm matrix, the bacterial DNA-binding proteins of the DNABII family [19-21]. The two DNABII proteins, HU and IHF, bind to crossed strands of extracellular DNA in the biofilm matrix, thereby effectively stabilizing this structure [22]. When we incubate a biofilm formed either individually, or by two of any of six diverse respiratory tract pathogens, as well as by any of the highly antibiotic-resistant members of the ESKAPEE pathogens, with a monoclonal antibody directed against the protective DNA-binding 'tips' of a DNABII protein, the biofilm rapidly collapses [22-25,40]. This collapse releases all tested biofilm-resident pathogens into a highly vulnerable, but transient, state wherein they are now significantly more susceptible to commonly used antibiotics of multiple classes in vitro, and also to innate immune effectors in vivo [24-27,35,36,58,59]. This enhanced susceptibility is observed not only when compared to their isogenic counterparts resident in the biofilm state but, more importantly, is also greater than that of their counterparts in the planktonic state.

Specifically, in an earlier report, we showed that after incubation of a biofilm formed by a clinical isolate of NTHI with α-DNABII for either 15 min or 2 h, the resultant NRel were significantly more susceptible to killing by A/C than they were to killing by T/S [36]. Overall, that the NRel were more sensitive to killing than were their biofilm-resident counterparts was not unexpected. However, that these NRel were just as or significantly more sensitive to either antibiotic than their isogenic planktonically grown counterparts was indeed notable [36]. Here, we investigated more thoroughly the observed preferentially greater sensitivity of NRel to killing by the β-lactam antibiotic A/C than by the sulfonamide T/S. Collectively, the data presented here revealed more clearly the kinetics of the A/C-sensitive α-DNABII NRel phenotype after release from residence in a biofilm formed by NTHI strain 86-028NP by a murine monoclonal directed against a biofilm structural matrix protein of the DNABII family. We found that this significant sensitivity is detectable within 5 min and endures for approximately 6 h in vitro, after which the NRel population is no longer selectively more sensitive to A/C.
Examination of relative expression of several genes whose products are known to contribute to β-lactam sensitivity revealed multiple factors that likely contributed to the increased susceptibility of α-DNABII NTHI NRel to A/C.In earlier work, we showed that within 15 min of release from biofilm residence by anti-DNABII, NTHI exhibit increased expression of genes characteristic of bacteria in lag phase, a period during which bacteria are involved in repair of their cell envelopes and membranes, and thereby are also more membrane permeable 41,60 .Here, we confirmed that at the 2-h time point, α-DNABII NTHI NRel still appeared to mimic bacteria in lag phase as evidenced by significant up-regulation of the same three canonical lag phase genes compared to NRel recovered at the 6 h time point.That α-DNABII NTHI NRel never displayed significantly greater susceptibility to T/S was likely explained by the understanding that bacteria in lag phase are not highly active in protein synthesis (a target for the sulfonamide class of antibiotics, like T/S) [61][62][63] . In previous data from our group 36 complete proteomic analysis of 15 min α-DNABII NRel showed an increase in the peptidoglycan synthesis protein, MurB, which suggests altered cell envelope composition 36,64 .Modification in the α-DNABII NTHI NRel cell envelope was further corroborated by new results shown here of increased transcript abundance of ompP2, which encodes for the major porin of the NTHI outer membrane, thus providing a mechanism for antibiotic entry into the bacterial cell.Moreover, this increased expression of ompP2 aligned with our concomitant demonstration of a similarly timed transient increase in outer membrane permeability in newly released NTHI at the 2-h time point, as evidenced by relative fluorescence upon incubation with SYTOX Green. New data presented here also demonstrated that significant increases in relative expression of three profiled PBPs is expected to have also contributed to the heightened antibiotic sensitivity shown by α-DNABII NTHI NRel, as β-lactams target PBPs [44][45][46] .Indeed, specific to relative increased expression of the H. influenzae ftsI gene, bacteria that have evolved mutations in the transpeptidase region of this encoded protein demonstrate increased β-lactam resistance [44][45][46][47][48] .Lastly, 2-h α-DNABII NTHI NRel showed limited expression of bla, the β-lactamase precursor, which suggested yet another potential contributor to the significant sensitivity to killing by A/C, as even in the absence of clavulanic acid, NTHI NRel would have limited ability to degrade this antibiotic once it had entered the cell 65 .That NTHI newly released from their protective biofilm fortress seem to mimic bacteria in lag phase and exhibited significant up-or downregulation of numerous profiled genes may be explained by the fact that bacteria resident within a biofilm are often metabolically quiescent 8 .As such, when rapidly released from biofilm residence by the action of anti-DNABII, we posit that bacteria are released into a state wherein they are transiently ill equipped to mediate the killing functions of antibiotics and human PMNs.Whereas these defensive functions are ultimately regained, the NRel phenotype nonetheless provides a window of opportunity for more effective eradication of the formerly biofilm-resident bacteria. 
Whereas increased susceptibility to antibiotics has heretofore been reported as a characteristic of the newly released phenotype for multiple bacteria 23,30,31,[34][35][36], we were also interested in exploring whether there might be evidence of increased susceptibility to immune effectors. We were specifically interested in susceptibility to innate immune effectors because, to date and in three separate pre-clinical models of human disease, we have demonstrated that when newly released from a biofilm by the action of a DNABII-directed antibody, bacteria and any biofilm remnants are rapidly cleared by the respective host in the absence of any added antibiotic. We reported this for mucosal biofilms formed in the middle ear by NTHI in a chinchilla model of experimental otitis media, for aggregate biofilms formed in the murine lung by Pseudomonas aeruginosa, and for biofilms formed in the oral cavity by Aggregatibacter actinomycetemcomitans in a rat model of peri-implantitis [24][25][26][27]. This antibiotic-free, efficient bacterial clearance and rapid disease resolution suggested to us that innate immune effectors, particularly PMNs, were likely involved. Our interest was further piqued when we showed here that α-DNABII NTHI NRel demonstrated transient, marked down-regulation of two enzymes (a catalase and a peroxiredoxin-glutaredoxin) important to mitigation of oxidative stress [66][67][68][69]. As such, we assessed the relative killing of α-DNABII NTHI NRel by human PMNs given their role as a first line of defense against unwelcome pathogens. PMNs elicited to the site of infection exhibit antimicrobial activities in a variety of ways. Chief among their arsenal is extracellular killing via NETosis, as well as intracellular killing by reactive oxygen species after phagocytosis 70,71. We found that NTHI newly released from biofilm residence by α-DNABII were indeed transiently, yet highly, vulnerable to killing by activated human PMNs. While this killing was overall likely due to both extra- and intracellular means, there was a significant reduction in killing when PMNs were pre-treated with the specific intracellular NADPH-oxidase inhibitor, DPI 51. This finding aligned well with the similarly transient, marked reduced expression of both hktE and pdgX by the NRel population. Use of DPI also allowed us to mimic the limited functionality of PMNs recovered from individuals with chronic granulomatous disease, who also exhibit increased vulnerability to fungal and bacterial infections, including those induced by pathogens capable of forming highly recalcitrant biofilms 51,72. Finally, it is worth noting that there was no difference in heightened susceptibility to killing by PMNs whether NTHI were released from biofilm residence by the action of either the murine or humanized α-DNABII monoclonal. Thus, these data add to others which show that the humanization process did not diminish the activity of this monoclonal antibody in any way tested to date 27,35. To determine if specific NRel attributes were perhaps more broadly demonstrable in bacteria other than NTHI, we also explored the characteristic of increased outer membrane permeability for three additional human pathogens released from biofilm residence by action of the humanized monoclonal directed against a DNABII protein. This particular characteristic was indeed shared by α-DNABII NRel of two additional Gram-negative pathogens, E. coli and P.
aeruginosa, as well as by an isolate of the Gram-positive pathogen methicillin-resistant S. aureus, which suggested that transient increased outer membrane permeability might be a common characteristic of the α-DNABII NRel phenotype.A potential limitation of this study is that we cannot guarantee that α-DNABII NRel were exclusively comprised of only those bacteria newly released from biofilm residence, as they may also have included bacteria that left biofilm residence as part of natural biofilm remodeling 73 .Nonetheless, this possibility did not limit our ability to observe and describe distinct phenotypes of those bacteria newly released from biofilm residence compared to their planktonically grown counterparts. Collectively, new data provided here support continued validation of a therapeutic approach wherein we propose to deliver the humanized monoclonal antibody to an individual with a recalcitrant biofilm infection to enable the host's innate immune system to effectively eradicate those bacteria newly released from the biofilm, ideally in a controlled manner.To date, three separate pre-clinical models of disease support this antibioticfree strategy [24][25][26] .However, if needed or warranted, and given that significant antibiotic sensitivity was realized within minutes, we would propose a combinatorial approach wherein a now effective antibiotic is co-delivered to promote rapid killing of the bacteria as they are released from the pathogenic biofilm.In a highly antibiotic resistant world where bacterial biofilms are persistent and recalcitrant to both conventional antibiotic treatment and host immune clearance, this pathogen-agnostic anti-DNABII approach may provide a powerful and broadly effective novel strategy that offers promising therapeutic potential in the absence of new antibiotic development for eradication of biofilm-related diseases. Materials and methods Ethics statement.De-identified human blood donations provided by healthy adult subjects that span the demographic spectrum of central Ohio were made under the auspices of the Research Institute Blood Donor Services of Nationwide Children's Hospital after informed written consent was obtained.PMNs were isolated from these blood specimens for use in studies conducted within our laboratory in conformity with, and as approved under, Nationwide Children's Hospital Institutional Biosafety Committee (IBC) protocol #IBS-00000449.All experiments were conducted in accordance with relevant guidelines and regulations of the Nationwide Children's Hospital Research Institute Blood Donor Services and Institutional Biosafety Committee. Antibodies. A murine (Rockland Immunochemicals, Inc., Philadelphia, PA) 24 or humanized (Lake Pharma, Inc., San Carlos, CA) 35,50 monoclonal antibody of the IgG isotype against a tip-chimer peptide designed to mimic protective epitopes of the DNA-binding 'tips' of the alpha and beta subunits of a bacterial DNABII protein were prepared for us under contract to Rockland Immunochemicals, Inc. or Lake Pharma, Inc., respectively.These monoclonal antibodies are referred to as MsTipMab (murine) or HuTipMab (humanized). Collection and quantitation of α-DNABII NTHI NRel.Two and a half mL NTHI, E. coli UTI89, P. 
aeruginosa 142-1 or MRSA at 2 × 10 5 CFU/mL were seeded into separate 10 cm 2 flat tissue culture tubes (TPP, Trasadingen, Switzerland, Cat no. 91243) and allowed to establish when incubated statically at 37 °C with 5% CO 2 in a humidified atmosphere in respective medium for 16 h. After 16 h, tubes were carefully inverted in the incubator. Medium containing non-adherent bacteria was poured off. While inverted, 2.5 mL equilibrated (37 °C, 5% CO 2 ) Dulbecco's phosphate buffered saline without calcium or magnesium (DPBS) was added. The tubes were rotated 360° to gently remove additional non-adherent bacteria, and the DPBS was poured off as above. To generate α-DNABII NTHI NRel, washed biofilms were incubated at 37 °C with 5% CO 2 in a humidified atmosphere for 1 m, 5 m, 15 m, 2 h, 4 h, or 6 h with either MsTipMab or HuTipMab at a concentration of 5 µg antibody diluted in sBHI/0.8 cm 2 . Number of NTHI released from biofilm residence by α-DNABII or those growing in the fluids above the biofilm at a given timepoint is supplied in Supplementary Table S1. To yield α-DNABII E. coli UTI89, P. aeruginosa 142-1 or MRSA NRel, respective 16 h biofilms were incubated at 37 °C, 5% CO 2 in a humidified atmosphere for 2 h with HuTipMab at a concentration of 5 µg antibody diluted in sBHI/0.8 cm 2 . Killing of α-DNABII NTHI NRel by A/C or T/S. To determine the kinetics of the relative sensitivity of the α-DNABII NTHI NRel to antibiotic-mediated killing, we incubated the NRel with either amoxicillin (Sigma-Aldrich, Cat no. 31586) and clavulanate lithium (Sigma-Aldrich, Cat no. 1134426) (A/C) or with trimethoprim (Sigma-Aldrich, Cat no. T7883) and sulfamethoxazole (Sigma-Aldrich, Cat no. 723-46-6) (T/S) as previously described, with modifications 36 . As above, 16 h NTHI biofilms were established, washed, and treated in tissue culture tubes for 1 m, 5 m, 15 m, 2 h, 4 h, or 6 h to yield α-DNABII NTHI NRel of different 'ages'. α-DNABII NTHI NRel were then carefully poured into Eppendorf tubes and sonicated for 2 m in a water bath sonicator to disperse bacterial aggregates. 90 µL aliquots of the bacterial suspensions were added to a 96-well plate, followed by 10 µL of either A/C or T/S. We pre-determined the antibiotic concentration that would maintain killing of NTHI that reside in the fluids that overlay a biofilm in our culture system between ~ 15 and 25% to allow us to readily detect and quantify any enhanced killing of the α-DNABII NTHI NRel 36 . Concentrations of antibiotics used at each time point are listed in Table 1. As a negative control, 10 µL of the antibiotic diluent alone was simultaneously added to separate respective wells in the 96-well assay plate. Bacteria and antibiotics were incubated statically for 2 h at 37 °C, 5% CO 2 in a humidified atmosphere. After 2 h, the 96-well plate was sonicated for 2 m in a water bath sonicator to disrupt any bacterial aggregates. Each well was then serially diluted and spread plated on chocolate agar to determine colony forming units (CFU)/mL. Percent survival was calculated by comparing CFU/mL of the diluent alone ('no-antibiotic') with the antibiotic-treated bacteria. CFU/mL of the antibiotic wells was divided by CFU/mL of the diluent-only wells and multiplied by 100; this value was then subtracted from 100 to calculate percent killing. Experiments were performed with 2-3 technical triplicates per assay, and a minimum of three times on separate days. RNA isolation and real-time quantitative reverse transcription PCR (qRT-PCR).
RNA isolation and qRT-PCR was conducted as previously described 36 with a few modifications. Briefly, to prepare for RNA isolation, 10 cm 2 flat tissue culture tubes were seeded with 2.5 mL NTHI at 2 × 10 5 CFU/mL. After 16 h incubation at 37 °C, 5% CO 2 in a humidified atmosphere, the tubes were gently washed as described in detail above. Tissue culture tubes were treated with 5 µg MsTipMab per 0.8 cm 2 . The tubes were returned to the incubator, gently inverted so the medium covered the biofilm, and allowed to incubate at 37 °C, 5% CO 2 for 2 h or 6 h. After 2 h or 6 h, the tubes were inverted, and the α-DNABII NTHI NRel ('2 h NRel' or '6 h NRel', respectively) collected by pouring into separate Eppendorf tubes. α-DNABII NTHI NRel were centrifuged for 1 m at 16,000×g at 4 °C, the supernatant aspirated, then 1 mL TRIzol Reagent (Thermo Fisher Scientific, Cat no. 15-596-026) added to the bacterial pellets. Suspended bacterial solutions were transferred to separate Phasemaker Tubes (Thermo Fisher Scientific, Cat no. A33248), RNA collected following the manufacturer's instructions, and RNA purified using a Qiagen RNeasy kit (Qiagen, Cat no. 74106). Residual DNA was removed via treatment with DNase I (NEB, Cat no. M0303L) and SUPERase In RNase Inhibitor (Ambion, Cat no. AM2694) per manufacturer's instructions for 45 m at 37 °C. DNase I treatment was repeated. qRT-PCR was conducted with the SuperScript III Platinum SYBR Green One-Step qRT-PCR Kit (Invitrogen, Cat no. 11736059), and fold-changes in gene expression calculated via the ΔΔC t method. Primers used for qRT-PCR are listed in Supplemental Table 1. Experiments were performed a minimum of three times with 2-3 technical replicates per assay and on separate days. Bacterial killing by human neutrophils. Human neutrophils were isolated from blood via magnetic negative selection using the EasySep Human Neutrophil Isolation Kit (StemCell Technologies, Inc., Cat no. 17957). Susceptibility of NTHI to killing by human neutrophils was assessed as previously described 74 , with a few modifications. 1 × 10 6 neutrophils were seeded into 1 mL DPBS in a 24-well non-tissue culture treated plate. Neutrophils were activated by addition of 50 nM phorbol 12-myristate 13-acetate (PMA) (Sigma-Aldrich, Cat no. P8139) for 10 m at 37 °C, 5% CO 2 in a humidified atmosphere. NRel were collected from tissue culture tubes after 2 h or 6 h, as described previously. All bacterial suspensions were sonicated for 2 m in a water bath sonicator to disrupt any aggregates, then diluted such that a range of 4.0 × 10 3 -2.5 × 10 5 CFU NTHI/1 mL DPBS would be added per well in the 24-well assay plate. For experiments that utilized the intracellular NADPH-oxidase inhibitor, diphenyleneiodonium chloride (DPI) (Sigma-Aldrich, Cat no. D2926), 0.05 µM DPI was added to non-activated neutrophils, allowed to incubate for 30 s at room temperature, then 50 nM PMA added to activate neutrophils, as above 51,74 . Regardless of whether neutrophils were or were not pre-treated with DPI, planktonic or α-DNABII NTHI NRel and activated neutrophils were incubated for 30 m at 37 °C, 5% CO 2 in a humidified atmosphere. After 30 m, 100 µL 10 × TrypLE (Thermo Fisher Scientific, Cat no. A1217701) was added to each well, vigorously pipetted, diluted, and plated on chocolate agar for enumeration. Experiments were performed with 2-3 technical triplicates per assay on individual days for a minimum of six separate days. Assessment of bacterial membrane permeability in NRel populations of NTHI 86-028NP, Escherichia coli UTI89, Pseudomonas aeruginosa 142-1, or MRSA isolates. Planktonically grown and α-DNABII NTHI, E. coli UTI89, P. aeruginosa 142-1, or MRSA NRel were assessed for relative membrane permeabilities via use of the fluorescent intercalating dye SYTOX Green Nucleic Acid Stain (Invitrogen, Cat no. S7020) 75,76 . Mid-log phase planktonically grown bacteria were prepared as follows. NTHI from a chocolate agar plate, E. coli UTI89 from an LB agar plate, or P. aeruginosa 142-1 or MRSA colonies from a TSA plate were separately suspended in 1.5 mL equilibrated respective medium to OD 490 0.10, then allowed to grow statically for 3 h (for NTHI or MRSA) or 2.5 h (for E. coli UTI89 or P. aeruginosa 142-1) with a vented cap at 37 °C with 5% CO 2 . After incubation, the OD 490 of the planktonically grown culture was read. All bacterial suspensions were centrifuged at 14,000 rpm for 3 m at 4 °C. Planktonically grown bacteria were then resuspended in 900 µL DPBS. α-DNABII NTHI, E. coli UTI89, P. aeruginosa 142-1, or MRSA NRel were resuspended in 1.2 mL DPBS and centrifuged as before. This process was repeated up to two additional times for α-DNABII NRel to eliminate any background fluorescence. For 2 h α-DNABII NTHI, E.
coli UTI89, P. aeruginosa 142-1, or MRSA NRel, bacteria were resuspended in a final volume of 900 µL DPBS.6 h α-DNABII NTHI NRel were resuspended in a final volume of 500 µL DPBS.6 h NRel were resuspended in a lower volume of DPBS given that CFU/mL of collected bacteria after 6 h biofilm exposure to α-DNABII was less than at 2 h.All bacterial suspensions were gently sonicated for 2 m in a water bath sonicator to disrupt bacterial aggregates.100 µL aliquots of each bacterial suspension was added to their respective wells in a black MaxiSorp FluoroNunc 96-well plate (Thermo Fisher Scientific, Cat no.437111) along with 0.5 µM SYTOX Green or 0.5% Triton X-100 (Sigma-Aldrich, Cat no.T8787) plus 0.5 µM SYTOX Green for NTHI and MRSA and 5 µM SYTOX Green or 5 µM SYTOX Green plus 0.5% Triton X-100 (final concentrations) for E. coli UTI89 and P. aeruginosa 142-1.Use of a greater concentration of SYTOX Green with E. coli UTI89 and P. aeruginosa 142-1 was necessary as larger genome sizes (compared to MRSA or NTHI) require a greater concentration of SYTOX Green to maximize intercalation across the larger genome and retain linear range of fluorescence 77,78 .Each bacterial suspension was added to respective wells such that a range of 2-to 4 × 10 7 CFU all respective pathogens/well was achieved.Experiments were performed a minimum of three times on separate days with three technical triplicates per assay.Fluorescence was measured spectrophotometrically (at an excitation wavelength of 480 nm and an emission wavelength of 522 nm) every 15 m for 120 m at 37 °C via FLUOstar Omega microplate reader (BMG LABTECH, Ortenberg, Germany). Figure 1 . Figure 1.Kinetic profile of antibiotic sensitivity of α-DNABII NTHI NRel to A/C or T/S.Concentrations of antibiotics used were pre-determined to maintain killing of bacteria that grew planktonically in the fluids that overlay a biofilm in our culture system (e.g., 'plank' in key) to between ~ 15 and 25% to allow us to readily detect any enhanced killing of the α-DNABII NTHI NRel (e.g., 'α-DNABII NRel' in key).Within 5 m of biofilm exposure to α-DNABII, NTHI NRel demonstrated significantly greater killing by A/C than planktonic NTHI (p ≤ 0.01).By 15 m, α-DNABII NTHI NRel reached greatest level of significantly greater killing by A/C compared to that of planktonic NTHI (p ≤ 0.0001).Overall susceptibility to A/C peaked at 2 h of biofilm exposure to α-DNABII (p ≤ 0.0001) and remained significantly increased at 4 h of biofilm exposure to α-DNABII (p ≤ 0.0001).In contrast, T/S-mediated killing of α-DNABII NTHI NRel remained at ~ 25% or less throughout the assay period and was never statistically greater than killing of planktonic bacteria by T/S.Statistical significance was determined via two-way ANOVA with Šidák's multiple comparisons test.**p ≤ 0.01; ****p ≤ 0.0001.Data are represented as mean ± SEM.Data shown are representative of three separate assays, each conducted on separate days, with 2-3 technical replicates per assay. Figure 2 . Figure 2. 
2 h and 6 h α-DNABII NTHI NRel were distinct in relative transcription of a panel of genes whose products are known to be involved in sensitivity to a β-lactam antibiotic.By qRT-PCR, 2 h and 6 h α-DNABII NTHI NRel were transcriptionally distinct via analysis of 15 targeted profiled genes.11 of the 15 genes profiled were significantly (> twofold change) up-regulated and two genes were significantly down-regulated in 2 h α-DNABII NTHI NRel compared to NRel collected after 6 h.Data are represented as mean ± SEM.Analysis of each gene occurred at least three times on separate days with 2-3 technical triplicates per assay.[Note: MDEPmultidrug efflux pump]. Figure 3 . Figure 3. 2 h α-DNABII NTHI NRel were significantly susceptible to NADPH-oxidase sensitive intracellular killing by human PMNs.Here we determined the relative ability of activated human PMNs to kill isogenic planktonic, 2 h or 6 h α-DNABII NTHI NRel.(A) At the 2 h time point, ~ 33% of planktonic NTHI were killed by activated human PMNs.However, regardless of whether induced by MsTipMab or HuTipMab, NTHI NRel were significantly (p ≤ 0.0001) more susceptible to killing by human PMNs as evidenced by ~ 65% and ~ 62% killing, respectively.DPI treatment of human PMNs significantly reduced susceptibility of 2 h NTHI NRel whether induced by MsTipMab or HuTipMab, as evidenced by killing at ~ 35% or ~ 32% respectively (p ≤ 0.001).(B) By 6 h, use of DPI to inhibit intracellular NADPH-oxidase did not affect the relative ability of activated human PMNs to kill either the MsTipMab-or HuTipMab-induced NRel.There was no difference in susceptibility to killing by DPI-treated PMNs for planktonic NTHI relative to 2 h NRel or relative to 6 h NRel (p > 0.05).Red lines indicate the mean.Statistical significance comparing planktonic NTHI killed by untreated or DPI-treated PMNs was determined via unpaired two-tailed t-test or two-way ANOVA with Šidák's correction (Panels A-B).***p ≤ 0.001; ****p ≤ 0.0001.Data presented are mean percent killings from at least six separate assays, each conducted on separate days with two-three technical replicates per data point. Figure 4 . Figure 4. Bacterial membrane permeability was significantly greater in 2 h α-DNABII NRel compared to those that were planktonically grown for NTHI plus three additional human pathogens.Relative emitted fluorescence was measured by the nucleic acid intercalating dye SYTOX Green as a proxy for assessing outer membrane permeability of α-DNABII NRel.In each assay, planktonically grown bacteria with or without treatment with Triton X-100 served as the positive and negative controls, respectively.(A) 2 h α-DNABII NTHI NRel were significantly (p ≤ 0.0001) more fluorescent over the 120 m assay time period than the negative control.(B) By 6 h, the exhibited fluorescence of α-DNABII NTHI NRel resembled that of planktonically grown NTHI.The observed increased membrane permeability of the 2 h but not 6 h α-DNABII NTHI NRel supported the observed transient and significantly greater sensitivity of 2 h NRel to A/C that was no longer observed in the 6 h NRel population.(C-E) To determine if increased outer membrane permeability might be a more universal characteristic of α-DNABII NRel, we also assessed membrane permeability of identically generated 2 h α-DNABII NRel of two additional Gram-negative pathogens and one Gram-positive pathogen.α-DNABII E. coli NRel (Panel C) exhibited significantly increased (p ≤ 0.01) fluorescence compared to the negative control, as did α-DNABII P. 
aeruginosa NRel (Panel D) (p ≤ 0.01) and MRSA NRel (Panel E) (p ≤ 0.05). Data are presented as mean ± SEM. Data represent three separate assays conducted on three separate days with three technical triplicates per assay. Relative fluorescence units (RFU) were normalized to CFU of the respective bacterial population added to the assay plate. Statistical analyses of respective triplicate runs were calculated via repeated measures linear mixed model for comparison of time course data between values of NRel versus those of the negative control.
Quantification and statistical analyses. Statistical significance was determined with GraphPad Prism Version 9 by Student's unpaired two-tailed t-test for comparison of means between two groups, two-way ANOVA with Šidák's correction or mixed-effects analysis for comparison of more than two groups, or repeated measures linear mixed model for comparison of time course data. Description of the statistical analysis used can be found in the figure legends. All in vitro assays were repeated a minimum of three times on separate days. A p-value ≤ 0.05 is indicated by *, a p-value ≤ 0.01 is indicated by **, a p-value ≤ 0.001 is indicated by ***, and a p-value ≤ 0.0001 is indicated by ****.
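For readers unfamiliar with the ΔΔCt calculation named in the qRT-PCR methods above, the following is a minimal sketch of the standard Livak-style fold-change computation. The Ct values, the choice of reference gene, and the "treated vs control" labels (here standing in for 2 h vs 6 h NRel) are hypothetical placeholders and are not data from the study.

```python
# Minimal sketch of the delta-delta-Ct (Livak) fold-change calculation.
# All Ct values and the reference gene are hypothetical placeholders.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression of a target gene (treated vs control), normalized to a
    reference gene, assuming ~100% amplification efficiency (2^-ddCt)."""
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical example: a target gene in 2 h NRel vs 6 h NRel, normalized to an
# assumed reference gene measured in the same samples.
fc = fold_change(ct_target_treated=18.2, ct_ref_treated=15.0,
                 ct_target_control=21.5, ct_ref_control=15.1)
print(f"fold change: {fc:.2f}")  # values > 2 would be scored as up-regulated
```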
2023-08-12T06:17:38.287Z
2023-08-10T00:00:00.000
{ "year": 2023, "sha1": "3e07b6183156467da39689f2ad91898cb35eb146", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-023-40284-5.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "79172e6e232b2d84eacea4e4fdcf92723124fa9f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
2384256
pes2o/s2orc
v3-fos-license
Benzpyrene hydroxylase activity in isolated parenchymal and nonparenchymal cells of rat liver. Previous studies have implicated the reticuloendothelial cells of the liver in certain aspects of steroid metabolism. The similarity in the metabolism of steroids and polycyclic hydrocarbons suggested that the nonparenchymal cells possibly play a role in these areas. The present study presents evidence that at least one of the microsomal NADPH-requiring enzymes, benzpyrene hydroxylase, is present in nonparenchymal cells and, furthermore, is "inducible." In adult rats treated with 3-methylcholanthrene or beta-naphthoflavone, the nonparenchymal cells exhibited increases in benzpyrene hydroxylase activity of 17-fold and five-fold, respectively. Treatment with phenobarbital resulted in only a slight increase in enzyme activity. Enzyme activity in parenchymal cells under similar conditions was increased sixfold and fivefold by 3-methylcholanthrene and beta-naphthoflavone, respectively, but not by phenobarbital.
INTRODUCTION
Mammalian liver is a heterogeneous tissue composed of approximately 60-70% parenchymal cells and 30-40% nonparenchymal cells. Cells of the reticuloendothelial (RE) system make up the greatest portion of the nonparenchymal population. Although parenchymal cells have been presumed to be the major cell type involved in drug metabolism, evidence that the nonparenchymal cells play a role has accumulated in recent years. Berliner et al. (1) noted that the RE cells of liver and of adrenal cortex were involved in the biotransformation of steroids. Zymosan, a stimulant of the RE system, was found by Sawyer et al. (2) to enhance the clearance of injected doses of corticosterone and to increase the A-ring reduction of steroids by liver homogenates. The latter biotransformation is found principally in RE cells (3). Evidence that acetylation of sulfonamides was confined to the nonparenchymal cells and was absent from parenchymal cells (P cells) of liver was provided by Govier (4). The finding of DiCarlo et al. (5) that a relationship existed between the stimulation of phagocytic activity of the RE system and the metabolism of barbiturates further implicated the nonparenchymal cells in drug metabolism. While it is clear that hepatic parenchymal cells are a principal site of metabolism of polycyclic hydrocarbons, many tissues in the body have been shown to oxidize these compounds (6,7). Yet, the oxidation of polycyclic hydrocarbons by the nonparenchymal cells of liver has not been demonstrated. Additional information was obviously required regarding the enzymatic activity of nonparenchymal cells so that the function of these cells in liver might be more clearly understood. A large number of reports (reviewed in [7][8][9]) have shown that the drug-metabolizing enzymes are induced by (a) polycyclic hydrocarbons, e.g., 3-methylcholanthrene (3MC), and (b) barbiturates, e.g., phenobarbital. It is not clear from these studies, however, if the induction is only characteristic of the parenchymal cell population in liver. The aims of the present study were (a) to isolate parenchymal and nonparenchymal cells from the same liver preparation, (b) to establish the presence of benzpyrene (BP) hydroxylase in nonparenchymal cells, and (c) to ascertain the relative responses of the enzyme in parenchymal and nonparenchymal cells to the administration of inducers, i.e., 3MC, ß-naphthoflavone (BNF), and phenobarbital (PB).
Isolation of Cells P cells and nonparenchymal cells were prepared from 200 g male rats by a combination of the method of Berry and Friend (10) (for the P cells) and the method of Mills and Zucker-Franklin (11) (for the nonparenchymal cells) . Rats were injected intraperitoneally with 3MC or BNF in corn oil (20 mg/kg) 48 hr before sacrifice or intraperitoneally with phenobarbital (35 mg/kg, twice daily) for 3 days before sacrifice . Control rats were treated with vehicle alone . On the days of sacrifice each animal was anesthetized with ether and injected intravenously with 1000 units of heparin to facilitate perfusion . The liver was quickly excised and the portal vein was catheterized. The liver was perfused with Krebsbicarbonate solution, which was gassed with 02 : CO2 (95% :5%) at 25°C until blanched completely. The perfusion apparatus was then adjusted so that the gassed perfusate could be recirculated through the liver. A concentrated solution of hyaluronidase and collagenase was added to the perfusion medium to obtain a final concentration of 50 mge%c of each in a volume of 250 ml . The perfusion was continued for 30-40 min, after which time the liver was removed from the apparatus and diced in a dish over ice . A portion of approximately 1 g was taken for the preparation of parenchymal cells and was gently shaken in 10 ml of the hyaluronidase and collagenase solution for 60 min under 02 : CO 2 . After straining through 64-mesh nylon screen, the P cells were collected and washed with Hanks' balanced salt solution (BSS) followed by centrifugation at 30 g. The P cells were suspended in 50 mm Tris, pH 7.5, containing 3 mm MgCl2 (Tris-Mg) and homogenized in a glass-Teflon homogenizer at 4°C . Nonparenchymal cells were present in the P-cell preparation to the extent of less than 5%ßc . The remaining tissue was stirred vigorously on a magnetic stirrer in 50 ml of 0 .1% pronase in Hanks' BSS for at least 1 hr at 25'C, a procedure which caused disruption of most of the P cells (11) . The nonparenchymal cells were washed in Hanks' BSS by centrifugation at 300 g. After two to three washes in 40 ml of Hanks' BSS, the pellet tended to sediment in three layers . The top gelatinous matrix containing a few small cells could be easily removed with a Pasteur pipette. The nonparenchymal cells along with a few erythrocytes were found in the middle layer . P cells, when present, sedimented as a tan button at the bottom of the tube and could be removed with a pipette . P cells were present in the nonparenchymal population to the extent of less than 0.05%.' The cells were suspended in Tris-Mg and homogenized as described above. The nonparenchymal cells were identified by their size and by the presence of carbon particles [after injecting a rat intravenously with colloidal carbon (4 mg/100 g) 24 hr before sacrifice .] After their isolation, most of the nonparenchymal cells were found to contain phagocytized particles of carbon . A few cells did not take up carbon, perhaps due to the difference in phagocytic threshold in vivo . BP Hydroxylase Activity Enzyme activity was determined under reduced light conditions by a modification of the method of Nebert and Gelboin (12) . Triplicate 0 .5 ml portions of homogenate (0 .5-1 .0 mg protein) were placed into test tubes with 0 .1 ml NADPH (1 .0 mg) on ice. 
BP was added to all tubes (0 .1 µmole) and the tubes were 1 In later preparations, it was found that nonparenchymal cells could be obtained in even higher purity, without loss of activity, by treating the washed pellet with three to four strokes in a Dounce homogenizer in a volume of 10 ml followed by resuspending in 0 .1 % pronase . The thick gelatinous matrix could be broken up by addition of 0.1 mg of deoxyribonuclease I with shaking . Nonparenchymal cells could then be sedimented and washed in Hanks' BSS . Erythrocytes may be eliminated by suspending the pellet in cold 5 mm MgC12, sedimenting, and resuspending in buffer. CANTRELL AND BRESNICK Benzpyrene Ilydroxylase Activity in Rat Liver 317 placed in a shaking incubator for 15 min at 37'C . The reaction was terminated by placing the tubes in ice water and adding 0 .5 ml cold acetone . Acetone was added to the blanks before incubation . Hexane, 2 .0 ml, was added to each tube which was then placed on a vortex mixer for 20 sec . The tubes were centrifuged in a table-top centrifuge and 1 .9 ml of each organic phase was transferred to 2 .0 ml of 1 .0 N NaOH and vortex-mixed for 15 sec . After centrifugation, portions of the aqueous phase of each tube were transferred to cuvettes for the determination of fluorescence in an Aminco-Bowman spectrophotofluorometer with the excitation and emission wavelengths set at 396 and 522 nm, respectively . The fluorescence was compared with that of quinine sulfate (excitation and emission wavelengths, 352 nm, 452 nm) which had been previously calibrated with authentic 8-hydroxybenzo(a)pyrene in order to quantitate the products of the reaction . One unit of activity represents the fluorescence equivalent to a picomole of 8hydroxy-BP produced per minute per milligram of protein . Protein content of the homogenates was estimated by the method of Lowry et al . (13) . Histochemical Enzyme Demonstration The highly sensitive method of Wattenberg and Leong (14) was employed to examine the hydroxylase activity in individual cells of the parenchymal and nonparenchymal preparations. Air-dried smears of the cell suspensions were fixed in cold acetone and dried. BP, 0 .02 µg/ml in hexane, was added to each slide (2-3 drops per slide) in a cold room . Tris-Mg containing 0 .1 mg/ml NADPH was layered over each slide in a shallow tray, the tray was incubated for 2-5 min at 37°C, and the slides were immediately fixed in 10 1/0 neutral Formalin, rinsed in 807 ethanol, and dried . It was found that the rinse in 80% ethanol effectively removed most of the unreacted benzpyrene without affecting the amount of hydroxylated products . The slides were kept under reduced light throughout . Immediately before examination, a drop of 1 .0 N NaOH was placed on a slide and a coverslip was added. High-speed Ektachrome film recorded the green fluorescence of the cells, indicating the presence of oxidized metabolites of BP . Unincubated controls exhibited no green fluorescence . RESULTS 90% of the P cells were found to exclude trypan blue (Fig . I a) . The presence of nonparenchymal cells in this preparation was about 5 °/o as estimated from hematoxylin-and eosin-stained smears . The appearance of the P cells was similar to that of P cells obtained by others (10, 15), and their diameter was approximately 20 s . Nonparenchymal cells were obtained in high purity (Fig . I b) and were considerably smaller than the P cells ; the former had diameters of 10-15 µ . 
Those nonparenchymal cells that were prepared from rats previously injected with colloidal carbon have an appearance similar to that reported by Rous and Beard (16). While their preparation selected only cells of low phagocytic threshold, cells with even small amounts of carbon are seen in the preparation shown in Fig. 1c. The activity of BP hydroxylase in the cell preparations was compared by two techniques, histofluorometry and spectrophotofluorometry. In Fig. 2, it can be seen that both cell preparations were able to oxidize BP as denoted by the fluorescence. Activity in the P cells seemed much greater than that in the nonparenchymal cells, as suggested by the relative intensities. Cell preparations fixed in Formalin without incubation (unincubated controls) exhibited no green fluorescence (the photograph was totally black). BP hydroxylase was measured in homogenates of the P cells and nonparenchymal cells under conditions where the activity was linear with respect to time (15 min) and was proportional to protein concentration (up to 2 mg per tube). The quantitative comparison of enzyme activity in both cell types is presented in Table I. As suggested by the histofluorometric demonstration, P cells were found to possess greater activity than the nonparenchymal cells, i.e., approximately 13-fold greater. Having demonstrated the presence of BP hydroxylase in both cell populations, we considered it of interest to ascertain if both responded equally well (if at all) to the administration of "inducers", i.e., 3MC, BNF, and PB. These results are also shown in Table I. Both cell types showed greater activity when the animals were treated with 3MC or BNF before isolation of cells. Activity in P cells was increased by six-fold and four-fold after injection of 3MC and BNF to rats, respectively. Nonparenchymal cells appeared more responsive to the inducing agents, with increases of 17-fold and five-fold, respectively. Phenobarbital was not an effective inducer of the enzyme in either cell preparation. The results in Table II show that although enzyme activity in intact isolated parenchymal cells was unaffected by this treatment, the activity in fragmented cells was destroyed by the pronase treatment. Consequently, the activity of BP hydroxylase reported in Table I reflects that of intact cells and was not appreciably diminished by the pronase used during cell isolation.
FIGURE 1 Light micrographs of hepatic parenchymal and nonparenchymal cells. Wet mounts of suspensions of cells. Fig. 1a, Parenchymal cells; the arrows denote those cells which did not exclude the trypan blue. X 250. Fig. 1b, Nonparenchymal cells. X 400. Fig. 1c, Nonparenchymal cells from a rat injected with colloidal carbon. Note different amounts of carbon in the cells. X 500.
FIGURE 2 Demonstration of fluorescence of BP hydroxylase in P cells (Fig. 1a) and nonparenchymal cells (Fig. 1b). P cells and nonparenchymal cells were prepared as described in the text and were assayed for BP hydroxylase by the histofluorescence technique (12). The bright green fluorescence indicates the presence of oxidized metabolites of BP. X 400.
Received for publication 14 June 1971, and in revised form 13 September 1971.
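Referring back to the assay described under "BP Hydroxylase Activity" above, the unit of activity is defined as the fluorescence equivalent of one picomole of 8-hydroxy-BP produced per minute per milligram of protein. The sketch below makes that conversion concrete; the calibration factor, readings, and protein amount are invented for illustration and are not values from the paper.

```python
# Minimal sketch (hypothetical numbers): convert a fluorescence reading into
# BP hydroxylase specific activity, expressed as pmol 8-hydroxy-BP / min / mg protein.

# Calibration against quinine sulfate standards previously referenced to authentic
# 8-hydroxybenzo(a)pyrene (hypothetical slope):
pmol_per_fluorescence_unit = 0.85

sample_fluorescence = 42.0   # reading of the incubated sample (arbitrary units)
blank_fluorescence = 3.5     # acetone-stopped, unincubated blank
incubation_min = 15.0        # incubation time used in the assay
protein_mg = 0.8             # protein per tube (Lowry estimate)

product_pmol = (sample_fluorescence - blank_fluorescence) * pmol_per_fluorescence_unit
specific_activity = product_pmol / (incubation_min * protein_mg)  # units of activity
print(f"specific activity: {specific_activity:.1f} pmol 8-OH-BP/min/mg protein")
```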
2014-10-01T00:00:00.000Z
1972-02-01T00:00:00.000
{ "year": 1972, "sha1": "81c9a67789f17e7e1ef68ac2e12556644cac9e94", "oa_license": "CCBYNCSA", "oa_url": "https://rupress.org/jcb/article-pdf/52/2/316/1070300/316.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "81c9a67789f17e7e1ef68ac2e12556644cac9e94", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
261858703
pes2o/s2orc
v3-fos-license
Evaluating the Effect of Artificial Liver Support on Acute-on-Chronic Liver Failure Using the Quantitative Difference Algorithm: Retrospective Study Background Liver failure, including acute-on-chronic liver failure (ACLF), occurs mainly in young adults and is associated with high mortality and resource costs. The prognosis evaluation is a crucial part of the ACLF treatment process and should run through the entire diagnosis process. As a recently proposed novel algorithm, the quantitative difference (QD) algorithm holds promise for enhancing the prognosis evaluation of ACLF. Objective This study aims to examine whether the QD algorithm exhibits comparable or superior performance compared to the Model for End-Stage Liver Disease (MELD) in the context of prognosis evaluation. Methods A total of 27 patients with ACLF were categorized into 2 groups based on their treatment preferences: the conventional treatment (n=12) and the double plasma molecular absorption system (DPMAS) with conventional treatment (n=15) groups. The prognosis evaluation was performed by the MELD and QD scoring systems. Results A significant reduction was observed in alanine aminotransferase (P=.02), aspartate aminotransferase (P<.001), and conjugated bilirubin (P=.002), both in P values and QD value (Lτ>1.69). A significant decrease in hemoglobin (P=.01), red blood cell count (P=.01), and total bilirubin (P=.02) was observed in the DPMAS group, but this decrease was not observed in QD (Lτ≤1.69). Furthermore, there was a significant association between MELD and QD values (P<.001). Significant differences were observed between groups based on patients’ treatment outcomes. Additionally, the QD algorithm can also demonstrate improvements in patient fatigue. DPMAS can reduce alanine aminotransferase, aspartate aminotransferase, and unconjugated bilirubin. Conclusions As a dynamic algorithm, the QD scoring system can evaluate the therapeutic effects in patients with ACLF, similar to MELD. Nevertheless, the QD scoring system surpasses the MELD by incorporating a broader range of indicators and considering patient variability. Introduction Liver failure, including acute-on-chronic liver failure (ACLF), occurs mainly in young adults and is associated with high mortality and resource costs [1,2].Management of patients with liver failure aims to maintain or restore vital organ functions, prevent the development of multiorgan failure, and bridge them to recovery or transplantation until an appropriate donor organ becomes available.As an extracorporeal procedure, the double plasma molecular absorption system (DPMAS) combines broad-spectrum plasma adsorption with specific bilirubin adsorption, making it highly desirable to provide time for spontaneous liver regeneration or emergency liver transplantation to be undertaken.Two absorbers separated and cleaned toxic plasma during the procedure before returning it to the patients [3][4][5]. 
Meanwhile, the prognosis evaluation of liver failure should run through the entire diagnosis and treatment process, especially in the early prognosis evaluation.This involves using various methods, including the Child-Pugh classification [6], the indocyanine green excretion rate [7,8], the preoperative liver volume assessment, and the Model for End-Stage Liver Disease (MELD) [9][10][11][12][13][14].However, each method has its limitations [15][16][17][18][19].Although most of the prognostic models in hepatology, including MELD and Child-Pugh classification, were developed as static models, the full predictive potential of the dynamic trajectory of these models has received little attention so far [20].In addition, the therapeutic effects in patients with liver failure can only be evaluated according to the level of toxins, transaminase activity, and coagulation function, and the results could be influenced by many factors, including age.Therefore, it is crucial to establish a novel approach to rapidly, accurately, and objectively evaluate therapeutic efficacy of ACLF. As a recently proposed novel algorithm, the quantitative difference (QD) algorithm is based on the ratio response of the Weber law in psychology and the Weber-Fechner law in molecular biology [21][22][23].By drawing from these principles, the QD algorithm can detect the presence of differences among multiple data sets and quantify the magnitude of the disparity between 2 specific data sets.Therefore, the QD algorithm may hold immense value for medical applications, particularly in evaluating the treatment's effectiveness in patients with ACLF, given the variability of factors, such as age, gender, and liver function. In this study, the quantitative difference (QD) algorithm is introduced to evaluate and analyze the effect of DPMAS and conventional treatment in patients with ACLF.The objective is to examine whether the QD algorithm exhibits comparable or superior functionality compared to the MELD in the context of prognosis evaluation. Patients and Setting A single-center retrospective study was conducted to screen hospitalized patients in the Fifth Affiliated Hospital of Guangzhou Medical University between January 2018 and December 2020.The inclusion criteria for patients with ACLF were as follows: (1) meeting the diagnostic criteria for ACLF defined by the Asian Pacific Association for the Study of the Liver [24] and (2) aged 18-80 years.Among the 44 patients included in this study, 17 were excluded, most commonly due to contravening exclusion criteria (n=10) or lack of data (n=7).This left 27 patients with ACLF, who were categorized into the following 2 groups according to the treatment they chose to receive: (1) the DPMAS group, where patients received dialysis with DPMAS as well as conventional treatment (n=15) and (2) the conventional treatment group, where patients received conventional treatment alone (n=12).The formulation, implementation, and diagnosis of all patients were carried out under the regulations of the Fifth Affiliated Hospital of Guangzhou Medical University (Figure 1). 
The DPMAS Treatment Patients were studied during a single 2-hour DPMAS treatment. The extracorporeal blood flow and plasma separation flow were maintained at 150 mL/min and 50 mL/min, respectively. A version 5.2 extracorporeal machine equipped with a P2 plasma flux dry filter, an MG350 hemoperfusion cartridge, and a DX350 bilirubin adsorption column (all from Boxin Biotechnology Co) was used to remove toxic molecules (Figures 2 and 3). The number of treatments was variable but limited to 16. Treatment was terminated if an organ became available for transplantation, if there was a significant clinical improvement, if the patient experienced marked deterioration, if there was an important adverse event, or if the patient died.
Conventional Treatment (Both Groups) Conventional treatment was standardized for each patient with ACLF. Cerebral edema was managed with head-of-bed elevation, prevention of hepatic encephalopathy, control of hypoproteinemia, and hypothermia. Hemorrhage and disseminated intravascular coagulation were treated with coagulation factor replacement (vitamin K1, fibrinogen, or fresh frozen plasma). Patients in the conventional treatment group received intensive critical care according to the current standard best practices at each study site. All patients underwent clinical status assessments every 12 hours.
A Novel Scoring System Although often regarded as a gold standard of statistical validity, P values are considered unreliable by many scientists, as they can only indicate the presence of differences between 2 data groups but do not provide information about how big these differences are [25][26][27]. Therefore, we introduce the QD algorithm to analyze treatment efficacy in patients with ACLF. The QD algorithm is based on the ratio response of the Weber law in psychology and the Weber-Fechner law in molecular biology [21,22]. In light of the Weber law, the concept of the Weber threshold highlights a minimum value in the ratio between an objective parameter and its corresponding base value. Fechner extended the Weber law to create the Weber-Fechner law, which asserts that the relationship between objective parameters and the corresponding subjective parameters is logarithmic in nature: the change in subjective parameters corresponds to the logarithm of the ratio of objective parameters [28]. The golden section constant τ is the basic natural unit that measures the ratio response, and Liu [29] introduced the logarithm to the base of τ, denoted Lτ. The concept of the QD can be approached from the perspective of self-similarity. Self-similarity was studied in the fractal literature, where a pattern is considered self-similar if it does not vary across different spatial or temporal scales [29,30]. It was found that there are QD thresholds (α and β) at various levels, including the cellular, molecular, or central nervous system levels (thresholds 0.80 and 1.22), the organ or tissue level (thresholds 0.47 and 0.80), and the level of the body (thresholds 0.27 and 0.47). At the level of molecules, there are 3 levels of β: health level (β 1 =0.80), subhealth level (β 2 =1.22), and disease level (β 3 =1.69).
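The two numbered equations originally accompanying this passage did not survive extraction. Based on the surrounding description (a logarithmic ratio response, and a logarithm taken to the base of the golden section constant τ), they presumably had approximately the following form; this is a hedged reconstruction for orientation, not the authors' exact notation.

```latex
% Hedged reconstruction of the missing relations (not the authors' exact notation).
% Weber-Fechner ratio response: the change in the subjective quantity S follows the
% logarithm of the ratio of the objective quantity I to its base value I_0.
\Delta S = k \,\log\!\left(\frac{I}{I_{0}}\right) \tag{1}

% Golden logarithm: the quantitative difference L_tau is the logarithm, taken to the
% base of the golden section constant tau (~1.618), of the ratio of the two values
% being compared (e.g., an index before vs after treatment).
L_{\tau} = \log_{\tau}\!\left(\frac{x_{\max}}{x_{\min}}\right),
\qquad \tau = \frac{1+\sqrt{5}}{2} \approx 1.618 \tag{2}
```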
The MELD Scoring System Numerous studies have demonstrated the prognostic ability of the MELD scoring system [31].Zhou et al [32] indicated that MELD could categorize patients according to their risk scores, distinguish the outcome of patients, and forecast survival in patients with ACLF.It incorporates 3 widely available laboratory variables, including the international normalized ratio [23], serum creatinine, and serum bilirubin.The original mathematical formula for MELD is as follows: MELD = 9.57 × Log e (creatinine) + 3.78 × Log e (total bilirubin) + 11.2 × Log e (international normalized ratio) + 6.43 (3) The higher the MELD score, the higher the short-term mortality risk.In this study, we also used MELD to evaluate the therapeutic effects of 2 different kinds of treatment to verify the feasibility and accuracy of the novel statistical model. Ethical Considerations The study was reviewed and approved by the Ethics Committee of the Fifth Affiliated Hospital of Guangzhou Medical University (GYWY-L2021-31).All research data are processed anonymously. Overview In the DPMAS group, 4 patients received a short session, 1 died during the treatment, and the remaining 10 were recovered and discharged.In the conventional treatment group, 2 patients were healed and discharged, and 3 died during the treatment, leaving 7 patients who gave up attending the treatment sessions. Table 1 summarizes the two groups' ages as well as the MELD and biochemical variables before treatment.There was no significant difference in both groups before and after treatment, except for activated partial thromboplastin time (P=.02),fibrinogen (P=.046), conjugated bilirubin (P=.046), and uric acid (P=.04). Changes in Therapeutic Indicators Biochemical variables are listed in Table 2.In the DPMAS group, there was a significant reduction in alanine aminotransferase (P=.02), aspartate aminotransferase (P<.001), and conjugated bilirubin (P=.002) both in P values and QD values (Lτ>1.69).A significant decrease in hemoglobin (P=.01), red blood cell count (P=.01), and total bilirubin (P=.02) was observed in the DPMAS group, but no significant decrease was observed in QD values (Lτ≤1.69).Nevertheless, all indicator values remained unchanged, both in P and QD values (Lτ≤1.69).In other words, the P value supports the conclusions drawn by the QD algorithm, indicating that the algorithm and the thresholds we have chosen are suitable for evaluating the therapeutic efficacy of ACLF. Assessment of the Therapeutic Efficacy of Liver Failure Next, our objective is to use the QD algorithm to assess the effect of different treatments on ACLF and try to provide a novel approach to prognostic evaluation.The algorithm's steps are outlined in Figures 3 and 4. Figure 3A provides an overview of the QD scoring system, while Figure 3B, Figure 3C, Figure 4, and Figure 3D elaborate on the detailed procedures for part 1, part 2, part 3 and part 4 in the scoring system, respectively.The procedure of the QD scoring system strictly adheres to the sequence outlined in Figure 3A. Figure 3B illustrates the comprehensive procedure for comparing each parameter's X and Y values.If the data after treatment are larger than the data before treatment, the algorithm assigns an output value of -1.If the data after treatment are smaller than the data before treatment, the output value is set to 1.The output value for this step is denoted as "A." 
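Because the MELD formula quoted above is used directly for the comparison with the QD score, a small sketch of the computation may help. The input units (mg/dL for creatinine and total bilirubin) and the example values are assumptions for illustration, not patient data from this study.

```python
import math

def meld_score(creatinine_mg_dl: float, total_bilirubin_mg_dl: float, inr: float) -> float:
    """MELD as given in the formula above:
    9.57*ln(creatinine) + 3.78*ln(total bilirubin) + 11.2*ln(INR) + 6.43."""
    return (9.57 * math.log(creatinine_mg_dl)
            + 3.78 * math.log(total_bilirubin_mg_dl)
            + 11.2 * math.log(inr)
            + 6.43)

# Hypothetical example: creatinine 1.4 mg/dL, total bilirubin 18 mg/dL, INR 2.1.
print(round(meld_score(1.4, 18.0, 2.1), 1))
```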
Figure 3C provides a detailed procedure for calculating each parameter's value. The indexes mentioned above for each patient in both the DPMAS and conventional treatment groups were collected before and after the treatments. The maximum value of a specific index is divided by the minimum value of the same index before and after the therapy. Then, we calculated the golden logarithm of the value and set it as Lτ. When Lτ≤0.80, the output value is 0; when 0.80<Lτ≤1.22, the output value is 1; when 1.22<Lτ≤1.69, the output value is 2; and when Lτ>1.69, the output value is 3. The output value for this step is denoted as "B." Figure 4 elaborates on the detailed procedure for modifying the normal values. This step involves the correction of the range of normal values. Although the first 2 steps allow for assessing the direction and magnitude of changes before and after treatment, they do not consider whether these changes represent an improvement or deterioration in the patient's condition. Hence, this step is used to evaluate patient index changes. The output value for this step is denoted as "C." Figure 3D indicates the detailed procedure for modifying the 4 liver function indicators. The scoring system calculates the QD score for each indicator by multiplying the values obtained from the previous steps (A, B, and C). After obtaining the QD scores for each indicator, the scoring system proceeds with the modification of 4 liver function indicators (ie, alanine aminotransferase, aspartate aminotransferase, total bilirubin, and conjugated bilirubin). After analyzing the data, we found that the 4 indicators of healed patients had significantly decreased after treatment, as shown by both the P value and the QD algorithm. However, the patients who dropped out or died had only 2 significantly reduced markers (alanine aminotransferase and aspartate aminotransferase). First, the changes in the 4 indicators (ie, alanine aminotransferase, aspartate aminotransferase, total bilirubin, and conjugated bilirubin) needed to be examined by evaluating their respective A values. If the sum of them equaled 4, all 4 indicators decreased after the intervention. The output of this assessment is denoted as the D value. Then, we needed to examine whether the sum of the B values of the 4 indicators was ≥9, ensuring that at least 3 indicators significantly decreased after the treatment. The result of this examination is denoted as the E value. Finally, we calculated the sum of the scores for all indicators (except alanine aminotransferase, aspartate aminotransferase, total bilirubin, and conjugated bilirubin) as well as the score obtained from the modification of the 4 liver function indicators. A higher score for each patient indicates better therapeutic efficacy. According to the procedures mentioned above, we calculated the QD score for each patient. We compared the scores calculated by the 2 different scoring systems, and there was a significant association between the MELD score and the QD score (P<.001; Figure 5). Next, we compared each patient's clinical status before and after treatment and tried to find the correlation between clinical status and QD scores. We found that in patients whose fatigue had improved, QD scores were significantly higher than those of patients whose clinical status had deteriorated or remained unchanged (P=.006; Table 3). Next, we divided patients into 3 groups according to patient status to verify whether the QD scoring system could reflect postoperative patient status. We found that the QD scores of improved patients were significantly greater than those of patients who had dropped out or died (P<.001; Figure 6). The calculation table of the QD algorithm scoring system is presented in Multimedia Appendix 1.
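To make the per-indicator scoring described above easier to follow, here is a compact sketch of step A (direction), step B (magnitude via the golden logarithm and the 0.80/1.22/1.69 thresholds), and the A×B×C product. The normal-range correction C is simplified to a supplied ±1/0 value because the full decision rules of Figure 4 are not reproduced here, and all numbers are purely illustrative.

```python
import math

TAU = (1 + math.sqrt(5)) / 2  # golden section constant, ~1.618

def direction_a(before: float, after: float) -> int:
    """Step A: 1 if the index decreased after treatment, -1 if it increased."""
    return 1 if after < before else -1

def magnitude_b(before: float, after: float) -> int:
    """Step B: golden logarithm of max/min, binned by the 0.80 / 1.22 / 1.69 thresholds."""
    l_tau = math.log(max(before, after) / min(before, after), TAU)
    if l_tau <= 0.80:
        return 0
    if l_tau <= 1.22:
        return 1
    if l_tau <= 1.69:
        return 2
    return 3

def indicator_score(before: float, after: float, c: int) -> int:
    """Per-indicator QD score: product of A, B, and the normal-range correction C
    (C supplied here as +1, 0, or -1; see Figure 4 for the actual decision rules)."""
    return direction_a(before, after) * magnitude_b(before, after) * c

# Illustrative example: ALT falling from 800 to 150 U/L, with C assumed to be +1.
print(indicator_score(800, 150, c=1))  # A=1, B=3 (ratio ~5.3, L_tau ~3.5) -> score 3
```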
Principal Findings The prognosis evaluation of liver failure should run through the entire diagnosis and treatment process.However, it is difficult to objectively evaluate the therapeutic effect of ACLF because of the complex progress of liver failure and multiple impact factors.Although most of the prognostic models in hepatology were developed as static models, the full predictive potential of the dynamic trajectory of these models has received little attention so far [20].In this study, we introduced a novel model for liver failure prognosis evaluation based on the characteristics of the QD algorithm by comparing data from patients who received DPMAS or conventional treatment to evaluate the therapeutic dynamic.After calculating the QD score of each patient, a significant correlation was found between the MELD score and the QD score (P<.001), substantiating that the QD scoring system can effectively gauge the therapeutic effects in patients with ACLF, akin to the MELD scoring system.Next, we compared the clinical status of patients with their QD scores.Improvement of fatigue showed a significant correlation in our study (P=.006).The QD score of the recovery group was significantly higher than that of the patients who dropped out of therapy and the death group (P<.001), indicating that the QD scoring system can effectively reflect the patient's status after treatment. Liver failure is associated with increased metabolites and toxins, such as bilirubin, ammonia, glutamine, aromatic amino acids, and proinflammatory cytokines [33][34][35].These toxins are known to play an essential role in the pathogenesis of liver failure [36][37][38][39][40]. Studies on artificial liver have identified significant reductions in serum bilirubin, urea, and creatinine levels in patients with ACLF [39][40][41]; this improvement in survival rates is attributed to the clearance of ammonia and nitrogen-carrying molecules, such as glutamine and alanine.Total bilirubin and conjugated bilirubin are reduced, whereas no changes in unconjugated bilirubin levels are observed [42].We found significant differences in alanine aminotransferase, aspartate aminotransferase, and conjugated bilirubin in both P values and QD values in the DPMAS group.These findings of the abovementioned studies closely align with the results of our study, which confirmed that the chosen threshold in the QD algorithm was reasonable. As a fixed algorithm, the MELD scoring system was initially developed to objectively determine the priority of liver transplantation and predict short-term mortality in patients with liver disease.It was built using only subjective parameters.Later, a vast body of research demonstrated its prognostic ability, and it continues to maintain the characteristics of the MELD scoring system by using subjective parameters and short-term mortality as prognostic indicators.In this context, the QD algorithm offers a novel way to dynamically evaluate the therapeutic effects in each patient instead of using a fixed algorithm like MELD.Researchers and clinicians can input data from patients into the QD algorithm to obtain the QD score, which can be used to verify therapy efficacy and achieve the objectives of the analysis.Of note, individual variability may contribute to the high SDs observed in the QD scores. 
Limitations

Our study has limitations. The sample size was relatively small, and the follow-up period was short. It should be emphasized that trials of DPMAS are difficult to perform and control appropriately for several reasons, including a lack of well-characterized patients and heterogeneity of causes.

Conclusions

In conclusion, the QD scoring system can measure the therapeutic effects in patients with ACLF, similar to the MELD scoring system, but surpasses it by incorporating a broader range of indicators and considering patient variability. The QD algorithm can pave the way for tailoring treatment by comparing pre- and posttreatment values for the same patients, which may lead to more precise and effective interventions for patients with ACLF. Future work is needed to assess whether the proposed algorithm applies to other liver diseases, calling for a larger data set and additional samples for clinical validation.

Figure 4. The detailed procedure of Part III.

Figure 3D. The detailed procedure for modifying the 4 liver function indicators: the scoring system calculates the QD score for each indicator by multiplying the values obtained from the previous steps (A, B, and C).

Figure 5. Linear regression models. The correlations between the Model for End-Stage Liver Disease (MELD) score and the quantitative difference (QD) algorithm score are shown.

Figure 6. The quantitative difference (QD) algorithm score of patients' different statuses after therapy. Data are presented as mean (SD).

Table 1. Postoperative data of the study participants. DPMAS: double plasma molecular adsorption system; MELD: Model for End-Stage Liver Disease.

Table 2. Preoperative and postoperative data of the double plasma molecular adsorption system (DPMAS) group and conventional treatment group. QD: quantitative difference.

Table 3. Clinical status and quantitative difference (QD) algorithm score data.
2023-09-15T15:08:45.233Z
2022-12-28T00:00:00.000
{ "year": 2023, "sha1": "f6cd3475d63731d4a728ccc4ca52aa2fdd178201", "oa_license": "CCBY", "oa_url": "https://doi.org/10.2196/45395", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5a863e25ed5ceb48f8b3254934f73c7587ab5913", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
67863958
pes2o/s2orc
v3-fos-license
Peer effects on control-averse behavior The urge to rebel against external control affects social interactions in many domains of our society with potentially far-reaching consequences. Nevertheless, it has remained unclear to what degree this control-averse behavior might be influenced by the people in our surroundings, our peers. In an experimental paradigm with real restrictions of the subjects’ freedom of choice and no systematic incentives to follow the peer, we are able to demonstrate both negative and positive peer effects on control-averse behavior. First, we find that information about a peer’s strongly control-averse behavior, although irrelevant for the subjects’ outcome, increases the subjects’ individual control-averse behavior. Second, we find that information about a peer’s more generous and only weakly control-averse behavior increases subjects’ generous behavior, even though it is associated with greater costs for the subjects. Critically, each subject’s behavior determined the monetary payoff of both the subject and a third person, thereby constituting a social behavior with actual consequences. Interestingly, these peer effects are not moderated by self-assessments of the general resistance to peer influence or the general tendency to rebel against restrictions of one’s freedom of choice. Contributing new insights into a complex and highly relevant social phenomenon, our results indicate that information about a single peer’s behavior can influence individual control-averse behavior. repeated information about a peer's behavioral responses in the same situations. Each subject observed either a strongly control-averse peer or a weakly control-averse peer. The peer's behavior was modeled after participants with similar demographics who had participated in a previous study. For comparison, we added a separate experimental group without peer influence. We predicted that, in comparison to the group without peer influence, observing a strongly control-averse peer would increase the subjects' control-averse behavior, whereas observing a weakly control-averse peer would decrease the subjects' control-averse behavior. Previous work has suggested that peer effects might be moderated by a general resistance to peer influence 17 or the general motivation to rebel against control 18 . To account for these individual characteristics, we assessed them using standardized questionnaires 17,19,20 . The results provide novel insights into peer effects on social behavior and bear implications for neuroscientific investigations as well as clinical work. Results Control-averse behavior without peer influence. Data from 82 subjects were analyzed in this study (42 women, 40 men, M age = 22 ± 3 SD years). Twenty-three of those subjects were assigned to an experimental group without peer influence (group No Peer, Figs 1 and 2). To assess these subjects' levels of control-averse behavior, we implemented a Control aversion task, in which subjects made choices under two conditions ( Fig. 2A): In the Free condition, subjects could choose freely among ten allocation options, called generosity levels, ranging from selfish to more generous monetary allocations between themselves and another person. In the Controlled condition, the other person requested a minimum of generosity and thereby eliminated the three most selfish options. Importantly, one allocation choice per subject was randomly selected at the end of the experiment and paid out to the subject as well as the respective other person. 
This was done to motivate subjects to behave according to their true preferences in all trials. We first analyzed the choice behavior of the group No Peer, who completed the Control aversion task without peer influence. In line with previous work 4, subjects in the No Peer group chose, on average, lower levels in the Controlled condition (M = 6.38 ± 1.63 SD, Mdn = 5.72) than in the Free condition (M = 7.52 ± 1.93 SD, Mdn = 7.22; Wilcoxon signed rank test, one-tailed, z = −2.59, p = 0.005, effect size r = −0.38; Hodges-Lehmann median of differences = 1.07, 95% CI [0.28, 1.92]; Fig. 3). Note that the statistical test was corrected for a bottom effect, following the procedure by Falk and Kosfeld 1 (see Methods for details). Next, we compared the baseline of control-averse behavior in the group No Peer with that of the two experimental groups with peer influence, namely a group who received information about a weakly control-averse peer (group Weak CA, n = 29) and a group who received information about a strongly control-averse peer (group Strong CA, n = 30). To capture the baseline behavior, only the first two trials prior to any peer information were included in this analysis, that is, one trial in the Controlled condition and one in the Free condition. Based on previous work 1,4,21, control-averse behavior is defined as lower chosen levels when the subjects' choice is restricted (in the Controlled condition) than when the subject can decide freely (in the Free condition). Therefore, we computed the individual baseline level of control-averse behavior as the difference of a subject's chosen level in the first trial of the Free condition minus the subject's chosen level in the first trial of the Controlled condition. An ANOVA confirmed that the level of control-averse behavior at baseline did not differ significantly between groups (F(2, 79) = 0.27, p = 0.77; Fig. 4). Pairwise comparisons using Wilcoxon rank sum tests also revealed no significant differences (all p > 0.1).

Figure 2. Subjects in the group No Peer first see whether player A lets them choose freely (Free condition) or controls their choice options (Controlled condition). Then they make a choice, and the next trial begins. Subjects in the groups Strong CA and Weak CA first see whether player A lets them choose freely or controls their choice options. Then they make a choice. In one third of all trials, the next trial begins; in two thirds of all trials, the subjects then guess what the peer (player B*) has chosen in the same situation. Then they see what the peer has chosen: subjects in the group Strong CA receive information about a strongly control-averse peer, whereas subjects in the group Weak CA receive information about a weakly control-averse peer. Then the next trial begins. CA, control aversion. In every trial, a new player A either lets the subject choose freely (Free condition) or requests a minimum of level four (Controlled condition). After a delay of 2 seconds, the subject chooses a level by moving a black frame. For trials without peer information and subjects in the group No Peer, the trial ends here. (B-D), implementation of the peer influence. In two thirds of the trials, subjects in the groups with peer information then guess what a peer (player B*) chose in the same situation by moving a black frame (B). Then subjects see the peer's actual choice: subjects in the group Strong CA observe the choice by a strongly control-averse peer (C), whereas subjects in the group Weak CA observe the choice by a weakly control-averse peer (D), which is indicated by a red frame. For each group, the peer remains identical throughout the experiment. CA, control aversion; RT, reaction time.

Figure 3. Control-averse behavior in the group without peer information (group No Peer, n = 23). On average, subjects chose lower levels in the Controlled condition than in the Free condition, thereby displaying control-averse behavior. Left, boxplots of the chosen levels in the Free and the Controlled condition. The central mark of each box shows the median, the box edges show the 25th and 75th percentiles, the notches of each box depict the 95% confidence intervals of the median, and the whiskers represent the limit beyond which a data point is considered an outlier (displayed as a cross). The connected data points in the center show individual subjects' means. Right, histograms showing the distributions of subjects' means and variances of chosen levels in the Free and the Controlled condition.
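For readers who want to reproduce these two steps, the bottom-effect correction (free-choice levels below four raised to four, as specified in the Methods), the one-tailed Wilcoxon comparison of the two conditions, and the per-subject baseline measure can be sketched in Python as follows. This is an illustrative re-implementation, not the authors' code (the original analyses were run in MATLAB and R), and the data-frame column names are hypothetical.

```python
import pandas as pd
from scipy.stats import wilcoxon

def condition_comparison(df: pd.DataFrame) -> float:
    """df: one row per trial with columns 'subject', 'condition' ('free'/'controlled'), 'level'.
    Returns the one-tailed p value for lower choices under control, after the
    bottom-effect correction (free-choice levels < 4 are set to 4)."""
    d = df.copy()
    free = d["condition"] == "free"
    d.loc[free, "level"] = d.loc[free, "level"].clip(lower=4)
    means = d.groupby(["subject", "condition"])["level"].mean().unstack()
    stat, p = wilcoxon(means["controlled"], means["free"], alternative="less")
    return p

def baseline_control_aversion(df: pd.DataFrame) -> pd.Series:
    """Baseline measure per subject: level chosen in the first Free trial
    minus level chosen in the first Controlled trial (requires a 'trial' column)."""
    first = df.sort_values("trial").groupby(["subject", "condition"])["level"].first().unstack()
    return first["free"] - first["controlled"]
```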
Peer effects on control-averse behavior. To test whether the peer influence had an effect on the individual control-averse behavior, we analyzed the data across all three experimental groups, in which subjects either received information about the choices of a strongly control-averse peer (group Strong CA, n = 30), a weakly control-averse peer (group Weak CA, n = 29) or no peer information (group No Peer, n = 23). The peer information was presented after two thirds of the subjects' own choices, randomly interspersed across the task. Specifically, subjects were asked to guess what a peer had chosen in the same situation, i.e. in the Controlled condition or the Free condition, respectively (Fig. 2B). This guessing component was included to provide a justification for the presentation of peer information and to ensure that subjects paid attention to the peer's choice, which was otherwise irrelevant to the subjects' payoff. When they were done guessing, subjects were presented with what the peer had chosen (Fig. 2C,D). Subjects were told that the peer was a student of a similar age who had participated in a previous session of the experiment, and that the peer remained identical throughout the task. In reality, the presented peer choices were selected by an algorithm programmed to mimic the behavior of a real subject from a pilot study (see Methods for details). Importantly, the peer information was always presented at the end of a trial and only in two thirds of the trials, whereas the Controlled and Free conditions were implemented in half of the trials each, in random order. Therefore, the peer influence occurs over the history of trials and should not be specific to the most recently observed peer choice. We hypothesized that the type of peer information would moderate the effect of the choice restriction on the chosen level. Specifically, we hypothesized that the effect of the choice restriction on the chosen level would be greater in the group Strong CA (with information about a strongly control-averse peer) than in the other groups, and that it would be smaller in the group Weak CA (with information about a weakly control-averse peer) than in the other groups. To test these hypotheses, we set up a generalized linear mixed effects model (GLMM) with the chosen level in each trial as dependent variable. The predictors were Control ij, which indicated trials in the Controlled condition, and Strong CA i and Weak CA i, which indicated subjects in the group Strong CA and subjects in the group Weak CA, respectively, using the group No Peer (without peer information) as a reference. The model further included a random-effects intercept for each subject as well as a random-effects slope for Control ij within subjects. We find that, in line with our hypothesis, the effect of the Controlled condition on the chosen level is moderated by peer influence (Table 1). Inspection of the subjects' mean chosen levels suggests that this behavior closely resembles the observed peer choices (Fig. 5). Therefore, in both groups with peer information, subjects followed the observed peer behavior. To check the robustness of our results with regard to censoring of the data, we ran an additional Bayesian hierarchical censored linear regression that controlled for the censoring of the data at the upper and lower end of the dependent variable (Supplementary Table S2), and a Bayesian hierarchical ordinal regression that treats the values of the dependent variable as distinct, ordered categories instead of a linear scale (Supplementary Table S3). Both additional models confirm the results of the GLMM: a significant main effect of Control, a significant interaction effect of Control * Strong CA, and a significant main effect of the group Weak CA, as indicated by credible intervals not crossing zero. Also in line with the GLMM, the interaction effect of Control * Weak CA and the main effect of Strong CA remain not significant, as indicated by credible intervals crossing zero. Together, these results support our hypothesis that peer influence can modulate control-averse behavior.

Figure 4. Control-averse behavior at baseline. At baseline (i.e. in the first two trials prior to any peer information), subjects across all groups displayed a similar level of control-averse behavior as measured by the chosen level in the Free trial minus the chosen level in the Controlled trial. The central mark of each box shows the median, the box edges show the 25th and 75th percentiles, and the notches of each box depict the 95% confidence intervals of the median. The whiskers represent the limit beyond which data points are considered outliers, which are displayed as crosses. Weak CA, subject group who observed a weakly control-averse peer; No Peer, subject group who did not observe a peer; Strong CA, subject group who observed a strongly control-averse peer.

Table 1. Results of the GLMM testing the peer effects on control-averse behavior. Random-effects intercept for subjects (82 levels): estimated SD 1.81. Random-effects slope for Control ij within subjects (82 levels): estimated SD 1.86. Note. The dependent variable is the chosen level by subject i in trial j. The predictor Control ij is equal to 1 in the Controlled condition and 0 otherwise. Strong CA i is equal to 1 for subjects in the group who observed a strongly control-averse peer and 0 otherwise, and Weak CA i is equal to 1 for subjects in the group who observed a weakly control-averse peer and 0 otherwise. Subjects in the group without peer information served as reference group. The model includes a random-effects intercept for each subject and a random-effects slope for Control ij within subjects. Sample size N = 82.
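The mixed model summarized in Table 1 can be specified analogously in Python with statsmodels; the study itself used MATLAB's fitglme, so the snippet below is only an illustrative re-specification with hypothetical column names (dummy-coded group indicators with No Peer as the reference group).

```python
import pandas as pd
import statsmodels.formula.api as smf

# trials: one row per trial with columns
#   'subject', 'level' (chosen level), 'control' (1 = Controlled condition),
#   'strong_ca', 'weak_ca' (group dummies; the No Peer group is coded 0/0)
def fit_peer_effect_model(trials: pd.DataFrame):
    """Linear mixed model: chosen level ~ Control x group dummies, with a random
    intercept per subject and a random slope for Control within subjects (cf. Table 1)."""
    model = smf.mixedlm(
        "level ~ control * strong_ca + control * weak_ca",
        data=trials,
        groups=trials["subject"],
        re_formula="~control",
    )
    return model.fit(reml=True)

# result = fit_peer_effect_model(trials)
# print(result.summary())  # main effect of Control, Control x Strong CA interaction, etc.
```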
www.nature.com/scientificreports www.nature.com/scientificreports/ Effects of the most recent peer information on control-averse behavior. To test whether the individual choice behavior is influenced by the peer's most recent choice rather than the peer's control-averse behavior, we ran an additional GLMM that included only data from groups with peer information, i.e. groups Strong CA and Weak CA (n = 59). The dependent variable was the chosen level in each trial. The predictors were Control ij , which indicated trials in the Controlled condition, and PeerChoice ij-1, which represents the most recently presented chosen level by the peer for a given trial. Critically, due to the randomized trial sequences the most recent peer information could be either of the Controlled or the Free condition and did not necessarily match the condition of the current trial. This allowed us to test the effect of the peer's prosocial behavior independent of the peer's control-averse behavior. The model further included a random-effects intercept for each subject as well as random-effects slopes for Control ij and for PeerChoice ij-1 within subjects. This GLMM revealed that the most recent peer choice as well as its interaction with the Controlled condition has no significant effect on the chosen level (p > 0.1, Table 2), whereas the effect of the Controlled condition on the chosen level remains significant (p < 0.001). This suggests that the subject's behavior is not simply influenced by the peer's most recent prosocial behavior, but by the peer's control-averse behavior, i.e. the peer's behavior depending on the Controlled condition. Potential moderators of the peer effects on control-averse behavior. Next, we asked whether the effect of peer influence on control-averse behavior remains robust when we control for a moderation by subjects' general resistance to peer influence as measured independently from the task using the Resistance to Peer Influence (RPI) scale (Supplementary Information S1) 17 . The RPI scores do not differ between the three groups as assessed by an ANOVA (F(2,79) = 1.70, p = 0.190). Pairwise comparisons reveal that subjects in the group Strong CA (M = 2.91 ± 0.35 SD, Mdn = 2.90) have slightly lower RPI scores than subjects in the group Weak CA (M = 3.06 ± 0.28 SD, Mdn = 3.10; Wilcoxon rank sum test, z = −1.97, p uncorrected = 0.048, effect size r = −0.26). However, this difference is not significant after correction for multiple comparisons. To control for a potential moderation of the peer effect, we added a main effect of and interactions with RPI i to the GLMM, where RPI i is the normalized and mean-centered score of the general resistance to peer influence as measured by the RPI scale. The GLMM reveals no significant three-way interaction and therefore no moderation of the peer effects on control-averse behavior by the RPI score (p > 0.1; Supplementary Table S4), whereas the peer effect on control-averse behavior remains significant (p = 0.011). Finally, we investigated whether the effect of peer influence on control-averse behavior remains robust when we control for a moderation by the subjects' general tendency to rebel against restrictions of their freedom of choice as measured by the Hong Psychological Reactance Scale (HPRS) 19,20 . The HPRS scores do not differ between groups as assessed by an ANOVA (F(2,79) = 0.56, p = 0.573). Pairwise comparisons using Wilcoxon rank sum tests also reveal no significant differences (all p > 0.1). 
To control for a moderation of the peer effect by the HPRS score, we ran an additional GLMM. This GLMM was identical to the first GLMM, except that we added a main effect of and interactions with HPRS i , which is the normalized and mean-centered HPRS score. This GLMM reveals no significant three-way interaction and therefore no moderation of the peer effects on control-averse behavior by the HPRS score (Supplementary Table S5), whereas the peer effect on control-averse behavior remains significant (p = 0.012). When testing the individual HPRS subscales in separate GLMMs, we also find no significant moderation effect (all p > 0. 1 www.nature.com/scientificreports www.nature.com/scientificreports/ Discussion Peer influence on social behavior has rarely been studied in controlled experiments with real consequences. In particular, it has remained unknown whether peer influence affects a psychologically and clinically highly relevant phenomenon, control-averse behavior. Control-averse behavior describes the negative response to exogenous control of one's decisions and can impede important social interactions, for example between therapists and patients, or employers and employees. It is therefore important to identify factors that might amplify or attenuate control-averse behavior. Using a novel experimental paradigm with real monetary consequences, we found that control-averse behavior could indeed be influenced by peer behavior. Specifically, individuals who were informed about the choices of a strongly control-averse peer displayed increased control-averse behavior compared with individuals who remained uninformed or individuals who were informed about the choices of a weakly control-averse peer. Moreover, individuals followed the peer behavior even when this resulted in lower profits for themselves. Hence, we demonstrate peer effects on both costly and control-averse social behavior. A critical feature of our study was that subjects had no systematic incentives to follow the peer, because their own profit was independent from the peer's profit. Nonetheless, subjects adopted the peer's behavior both when the peer was more generous, thereby accepting a lower own profit, and when the peer was more control-averse, thereby selecting less generous and more selfish options. If subjects simply used the peer information to justify more selfish choices, we should see a peer effect only when the peer chose a low level, so only for the group with information about a strongly control-averse peer in the Controlled condition. However, we also find a peer effect when the peer chose a high level, i.e. in the Free condition and for subjects in the group with information about a weakly control-averse peer. Therefore, subjects followed the peer even when choices were costly. In this regard, our findings contradict the assumption of classic social preference models according to which behavior should not be influenced by peer behavior that is unrelated to the own costs and benefits, as well as a recent finding that peer influence has negative, but not positive effects on social behavior 13 . Our findings, in contrast, reveal both positive and negative effects of peer influence. They thereby extend previous findings of positive effects of peer influence 7,10 and spillover effects from group environments to private prosocial choices 22 . The effects of peer influence have often been attributed to conformity, suggesting that individuals tend to express the same behavior as a group 23 . 
Each subject in our study, however, was only confronted with a single peer's behavior instead of a groups' behavior. Likewise, given the absence of an audience and the use of an incentive scheme designed to motivate subjects to behave according to their true preferences, public or superficial conformity seem to be insufficient explanations for the peer effects we observe 23 . Subjects were also informed that the peer herself or himself had not received any information about other people's behavior. It therefore seems unlikely that subjects might have had the impression that the peer was better informed and displayed the more advantageous behavior 24 . We can further dismiss the possibility that subjects feared a bad reputation 25 , because all decisions were anonymous. Furthermore, in contrast to previous studies that have investigated the effects of peer feedback on prosocial decisions 11,16 , the peer influence in our study did not involve any type of social evaluation. An alternative explanation is that the subjects might have inferred a social norm from the peer's behavior, although it was only a single person. Previous work has suggested that a single peer's behavior may function as a reminder of a social norm or a heuristic for their own behavior 26,27 . In line with this reasoning, the generous and weakly control-averse peer might have reminded the subjects of the norm to be generous and fair, whereas the strongly control-averse peer might have reminded the subjects of the norm to punish the player A's distrustful decision to control. Other work discusses that the information about a peer's choice might be used to infer the quality of a choice option 28 . In other words, an option may become more attractive simply because other people have chosen it. Likewise, our subjects might have inferred that the levels chosen by the peer have a greater utility, on top of or despite the associated profits. Another possibility is that subjects changed their actual preferences. Such an internalization of the peer's preferences may have occurred as a by-product of learning about the peer's preferences 29 . Although subjects had no systematic monetary incentive to follow the peer, guessing the peer's choice correctly may have been rewarding in itself and thereby may have reinforced adopting the peer's behavior. In other words, simply acquiring information about the peer's preferences may have changed the subjects' own preferences-in many cases even if that preference was associated with lower payoffs for the subjects. This would be in accordance with a recent finding of peer effects on intentions to volunteer that lasted well into a private setting, during which subjects had no incentive to conform with the peer group 6 . Whereas their study showed effects of a peer group on self-reported intentions, our study shows effects of a single peer on actual (not hypothetical) social decisions. Finally, a recent study has proposed that individuals might follow a deviant peer's behavior as a form of restoring their freedom of choice 18 . The authors found that individuals with higher HPRS scores were more likely to comply with a peer's request to engage in deviant behavior, such as drinking. The authors interpret this compliance with peer influence as the subjects' way of restoring their autonomy to choose a deviant behavior. Critically, the chosen deviant behavior in that study remained hypothetical and was not actually implemented. 
In line with Leander et al., we find that subjects in the group with a strongly control-averse peer comply with the peer's less generous and somewhat deviant choice in the Controlled condition, which could be a way to demonstrate autonomy in response to the choice restriction. However, subjects in both groups with peer information also followed the more generous peer behavior in the Free condition, in which case compliance with the peer behavior cannot be interpreted as deviant behavior. Moreover, unlike Leander et al. we did not find a significant association between the HPRS score and the compliance with the peer influence. Therefore, we cannot conclude that the peer effect we observe is an attempt to pursue autonomy. The current study may also expand the scope of current neuroscientific research on peer influence, which has overwhelmingly focused on the effects of peer groups. Neuroimaging studies, for example, have found that the information about a peer group's attitudes 30 or the mere presence of a peer group 16 had an effect on behavior and correlated with activation changes in brain regions associated with social decision making, suggesting that similar www.nature.com/scientificreports www.nature.com/scientificreports/ neural networks might be involved for social behavior and sensitivity to peer influence. Here, we demonstrate the effects of a single peer, rather than a peer group, and introduce an experimental paradigm that allows researchers to measure the individual susceptibility to peer influence on control-averse behavior. Building on previous work on the neural basis of control-averse behavior 4,5 , this study prepares the ground for fruitful future investigations of the neural processes underlying the peer effects on control-averse behavior. In conclusion, our results indicate that even information about a single, anonymous peer can contribute to the complex phenomenon of control-averse behavior. In particular, information about a strongly control-averse peer can contribute to an increase of individual control-averse behavior. This is relevant for many domains of our society, in which successful social interactions rely on the interaction partner's compliance, such as therapist-patient interactions, parent-child interactions or employer-employee interactions. Furthermore, the fact that our subjects' behavior had real consequences for themselves as well as a third person speaks to the generalizability of our findings to social interactions outside of the laboratory, which has important implications for clinical work with patients. Our study suggests that to destabilize compliance, all it takes is information about one non-compliant peer. On the bright side, we also find that information about a single generous peer can lead to an increase of generous behavior. Future studies could build on these findings to develop and test applications in the field. Methods Participants. We recruited a total of 84 students from the University of Bern for participation in this study. The sample size was determined by a statistical power analysis as implemented in G*Power 31 . Based on the effect size r = 0.56 of control-averse behavior in a previous data set 4 , the power analysis suggested that to achieve this effect size at p < 0.05 with a power of approximately 0.80 in a one-tailed Wilcoxon signed rank test, a sample size of 23 participants would be required. Hence, we recruited 23 to 30 participants for each of the three experimental groups described in the next section. 
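The reported sample-size calculation was done in G*Power; as a rough plausibility check, the required n can also be approximated from the effect size r with a normal approximation, which lands in the same range (about 21 versus the 23 reported) but is not the exact G*Power procedure for the Wilcoxon signed-rank test.

```python
from math import ceil
from scipy.stats import norm

def approx_n_from_r(r: float, alpha: float = 0.05, power: float = 0.80,
                    are: float = 0.955) -> int:
    """Rough sample size for a one-tailed paired comparison with effect size r = Z / sqrt(N),
    inflated by the asymptotic relative efficiency of the Wilcoxon test relative to the t test.
    This is a back-of-the-envelope approximation, not a reproduction of G*Power."""
    z_a = norm.ppf(1 - alpha)
    z_b = norm.ppf(power)
    n = ((z_a + z_b) / r) ** 2 / are
    return ceil(n)

print(approx_n_from_r(0.56))  # about 21 under these assumptions; G*Power reported 23
```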
Students of economics, psychology and social sciences were excluded from participation to reduce the possibility of prior knowledge of the concept of control aversion or social influence theory. Further exclusion criteria were left-handedness, smoking and a reported history of psychological disorders, neurological or cardiovascular diseases, because participants also underwent magnetic resonance imaging while engaging in the task described in the next section. The imaging analyses are beyond the scope of this paper and will be reported elsewhere. One participant was excluded due to technical problems during data acquisition and another participant due to a neurological disease discovered after data acquisition. The remaining 82 participants were included in the analysis. All experimental protocols were approved by the Bern Cantonal Ethics Commission. The methods were carried out in accordance with the relevant guidelines and regulations. All participants gave informed, written consent and received a compensation of CHF 50 (≈USD 50) for participation in the study in addition to the payoff from the task described next. Control aversion task with and without peer influence. Subjects were randomly assigned to one of three groups, in which they received information about a strongly control-averse peer (group Strong CA, n = 30), information about a weakly control-averse peer (group Weak CA, n = 29) or no peer information (group No Peer, n = 23). Experimenters were blind as to whether a subject was assigned to the group Strong CA or Weak CA and the task instructions were identical in these two groups. Each group completed a different version of the Control aversion task, which will be described in detail below. In short, an individual called player A either lets the subject make a free choice or restricts the subject's choice options and thereby exerts control over the decision. The subject, labeled player B, then chooses a monetary allocation that will affect both their own and the player A's payoff. On a subset of trials, peer information is presented as the allocation chosen by an individual called player B*, a student of the same age range who had faced the same decisions as the subject. In total, subjects were presented with 36 anonymous players A's decisions from a pilot study. Subjects were informed that the players A's decisions had been prerecorded for logistic reasons, but that their choices in the task had real consequences in the sense that one trial would be randomly selected and paid out to themselves and the corresponding player A. Subjects in the groups with peer information were further told that they would see choices made by a peer, labeled player B*, a student between 18-35 years who had responded to our invitation email-just like themselves-and participated in a previous session of the experiment. In reality, the peer's choices were programmed with simple algorithms that mimicked the behavior of strongly or weakly control-averse participants from a pilot study, respectively. This was done to ensure that subjects would see one of two extremes of a peer's control-averse behavior and to ensure homogeneity of the peer influence within each group with peer information. Care was taken that the peer behavior reflected realistic choices that had actually occurred in our pilot study. 
Concretely, none of our subjects in the pilot study had made highly selfish choices when they could decide freely or had been more generous when their choice options were restricted than when they could decide freely. Neither did any subject believe that other subjects would behave this way. Therefore, to ensure the credibility of the peer's existence these peer behaviors were not included in the experiment. No subject voiced any doubts about the existence of the peer in a post experimental debriefing. Prior to performing the task, subjects read the instructions and were quizzed to ensure they had understood the task and its payoff scheme. We now describe the task and its different versions in detail. All subjects made repeated monetary allocation decisions in two conditions as follows (Fig. 2). In the Free condition, subjects had the choice between ten monetary allocations, called generosity levels one to ten (from left to right). In the Controlled condition, player A requested a minimum of level four and thereby restricted the subjects' choice to levels four to ten. The generosity levels ranged from a selfish allocation (98 points for the subject, 9 points for the player A) to a more generous and fair allocation (80 points for both the subject and the player A). With increasing generosity levels, the player A's profit increased linearly (in increments of seven or eight points) whereas the subjects' profit decreased linearly (in increments of two points). The monetary allocations were specifically designed to motivate subjects to choose www.nature.com/scientificreports www.nature.com/scientificreports/ a high level in the Free condition and to create room for the choice of a lower level in the Controlled condition, which is a prerequisite for measuring control-averse behavior 21 . To achieve this, we built on subjects' preference for equality and efficiency 32 and designed the levels such that the highest level represents both an equal allocation between player A and B as well as the largest sum of points. We visualized the levels as stacked color bars, such that each player was assigned one specific color throughout the task: For example, player A's profit was represented by orange bars, the subject's profit by blue bars and-when peer information was presented-the peer's (player B*'s) profit was represented by green bars. The colors assigned to each player were counterbalanced across subjects. The subjects' (or the peer's) and player A's points were also printed below and above the color bars, respectively. This way subjects always received the exact information about the points, but were also given an intuitive, easy-to-grasp visualization of the point allocations in each level. Subjects selected a generosity level by moving a selection frame that appeared on a random (available) level to the desired level and selecting OK via button presses. The players A's decisions were preselected such that the subjects completed the same number of trials in the Free and in the Controlled condition, i.e. 18 trials per condition. In the first two trials of the task, all subjects completed one trial in the Controlled condition and one trial in the Free condition without peer information, in random order. Based on previous work 1,4,21 , control-averse behavior is defined as lower chosen levels when player A requests a minimum level (in the Controlled condition) than when the subject can decide freely (in the Free condition). 
Therefore, the difference between the chosen level in the first trial of the Free condition minus the chosen level in the first trial of the Controlled condition serves as a baseline measure of the individual level of control-averse behavior. The remaining trials were presented in random order. For the subjects in the group without peer information (group No Peer), all 36 trials ended after the choice of a generosity level (Figs 1 and 2A). For the subjects in the groups with peer information, 12 randomly interspersed trials (33%) also ended there (trials without peer information); in the remaining 24 trials (66%), subjects were asked to guess what the peer (player B*) had chosen in the same situation (Figs 1 and 2B). Then they were presented with the peer's choice (trials with peer information, Fig. 2C,D). Note that peer information was always presented at the end of a trial and only in two thirds of the trials, whereas the Controlled and Free conditions were implemented in half of the trials each, in random order. This leads to randomized trial sequences, in which a subject might see, for example, a peer's choice in the Controlled condition, but subsequently will be asked to make a choice in the Free condition. This design feature helps to disentangle the direct effect of the most recent peer choice (e.g. a selfish choice) from the more indirect effect of the peer's control-averse behavior that a subject has observed over the history of trials (e.g. selfish choices in the Controlled condition, but generous choices in the Free condition). Importantly, subjects were instructed that player B* remained identical throughout the experiment and that we were interested in how well they could predict player B*'s choices. This feature was included to motivate subjects to pay attention to the peer's choices as well as a justification for the presentation of peer information. In the Free condition, both groups with peer information observed the choices of a peer who chose level ten with a likelihood of 80% and level nine with a likelihood of 20%. In the Controlled condition, the group Strong CA observed the choices of a peer who chose level four with a likelihood of 70% and level five with a likelihood of 30%, reflecting strongly control-averse behavior. By contrast, the group Weak CA observed the choices of a peer who chose level ten with a likelihood of 70% and level nine with a likelihood of 30%, reflecting more generous and weakly control-averse behavior. The variability in the peer's choices was implemented for verisimilitude and to keep subjects engaged in guessing the peer's choices. Critically, the peer's choices were independent of the subjects' payoffs. At the end of the task, one trial was randomly selected for payoff to the subject and the matched player A. The profits in the selected trial were converted into CHF (with 1 point = CHF 0.20 ≈ USD 0.20). Based on the task, the subjects' received a mean CHF 17.00 ± 1.10 SD, and the players A received a mean CHF 12.00 ± 4.40 SD. Analysis of peer effects on control-averse behavior. To test the effects of peer influence on control-averse behavior, we set up a GLMM using the function fitglme as implemented in the MATLAB Statistics and Machine Learning Toolbox (R2015b, MathWorks). The dependent variable of this GLMM was the chosen level by subject i in trial j. 
The predictors modeled whether the subject's choice was restricted (Controlled condition) or free (Free condition) for a given trial, and what type of peer information each subject received (strongly control-averse, weakly control-averse, or no peer information). Note that the statistical test was corrected for a bottom effect between the conditions, following the procedure by Falk and Kosfeld 1 : chosen levels in the Free condition that were smaller than level four were set to level four, which is the smallest possible level in the Controlled condition. The GLMM included a predictor Control ij that is equal to 1 for trials in the Controlled condition and 0 otherwise. The groups with peer information were included as categorical predictors Strong CA i and Weak CA i , using the group No Peer as reference. Strong CA i is equal to 1 for subjects in the group with information about a strongly control-averse peer and 0 otherwise, and Weak CA i is equal to 1 for subjects in the group with information about a weakly control-averse peer and 0 otherwise. We further included random-effects intercepts for subjects and random slopes for Control ij within subjects. Visual inspection of residual plots did not reveal any obvious deviations from homoscedasticity or normality. To check the robustness of the GLMM results with regard to censoring of the data, we implemented two additional models. First, we ran a Bayesian hierarchical censored linear regression that controls for the censoring of the data at the upper and lower end of the dependent variable. Second, we ran a Bayesian hierarchical ordinal regression that treats the dependent variable as distinct, ordered categories. Both additional regression models were estimated using Bayesian Markov-chain Monte Carlo methods with R (version 3.2.4) 33 www.nature.com/scientificreports www.nature.com/scientificreports/ Analysis of the effects of the most recent peer information on control-averse behavior. To test whether the subjects' behavior was influenced by the most recently presented peer information in addition to the Controlled condition, we ran a new GLMM using the data from the two groups with peer information, group Strong CA and group Weak CA (n = 59). The dependent variable of this GLMM was the chosen level by subject i in trial j. The predictors modeled whether the subject's choice was restricted (Controlled condition) or free (Free condition) for a given trial, and the most recently observed choice of the peer. A predictor for the type of peer information (Strong CA, Weak CA) was omitted from the model due to rank deficiency of the design matrix. More specifically, the GLMM included a predictor Control ij that is equal to 1 for trials in the Controlled condition and 0 otherwise, and a predictor PeerChoice ij-1 , which was the most recently presented level chosen by the peer (see Fig. 2C,D). Note that the first trials before any peer information were dropped. Moreover, due to the randomized trial sequences the most recent peer information could be either of the Controlled or the Free condition and therefore did not necessarily match the condition of the current trial. Furthermore, because peer information was presented in only two thirds of the trials, the most recent peer information could be more than one trial ago. For trials between peer information, missing values for the predictor PeerChoice ij-1 were filled with the last available peer information for the respective subject. 
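A minimal sketch of how this lagged predictor can be constructed with pandas is shown below; the column names are hypothetical and the original analysis was implemented in MATLAB.

```python
import pandas as pd

def add_lagged_peer_choice(trials: pd.DataFrame) -> pd.DataFrame:
    """trials: one row per trial in presentation order, with columns
    'subject', 'trial', and 'peer_choice' (the level shown at the end of the trial,
    NaN for the third of trials without peer information).
    Adds 'peer_choice_prev': the most recently shown peer choice before the current trial."""
    d = trials.sort_values(["subject", "trial"]).copy()
    # Shift within subject so the predictor only uses information seen before the choice,
    # then carry the last available value forward across trials without peer information.
    d["peer_choice_prev"] = (
        d.groupby("subject")["peer_choice"].shift(1).groupby(d["subject"]).ffill()
    )
    # Trials before any peer information (still NaN) are dropped, as in the analysis.
    return d.dropna(subset=["peer_choice_prev"])
```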
We further included random-effects intercepts for subjects and random slopes for Control ij and for PeerChoice ij-1 within subjects. Analysis of potential moderators of the peer effects. To investigate whether the peer effects on control-averse behavior might be moderated by the subjects' general resistance to peer influence, we asked subjects to fill in a German version of the Resistance to Peer Influence (RPI) scale (for the German version see Supplementary Information S1) 17 . To maintain linguistic validity the RPI scale was translated into German by the first author of the current study, then back-translated into English by a professional translator and compared with the original. In the RPI scale, subjects are asked to indicate which of ten pairs of statements best describes them as a person, for example: "Some people go along with their friends just to keep their friends happy" BUT "Other people refuse to go along with what their friends want to do, even though they know it will make their friends unhappy". Responses are coded on a 4-point Likert scale, ranging from 1 = really true and 2 = sort of true for one statement to 3 = sort of true and 4 = really true for the other statement. In our sample, the RPI scale had an excellent internal consistency (Cronbach's α= 0.96). The overall score of general resistance to peer influence ('RPI score') was computed as described in the Supplementary Information S1. On average, subjects had an RPI score of mean 2.97 ± 0.33 SD (Mdn = 2.95, range: 2.2-4). To control for the RPI score as a potential moderator of the peer effects, we ran a new GLMM, which was identical to the first GLMM, except that we added the moderator variable RPI i , which is the normalized and mean-centered score of the general resistance to peer influence as measured by the RPI scale. Specifically, we added a predictor RPI i , its interactions with Strong CA i and Weak CA i , and three-way interactions of Control ij , Strong CA i and Weak CA i , respectively, and RPI i to the GLMM. Because we did not assume that the resistance to peer influence should affect control-averse behavior in the absence of peer influence, we omitted the interaction of RPI i and Control ij from the GLMM. To account for the possibility that the peer effects on control-averse behavior might be moderated by subjects' general tendency to rebel against restrictions of their freedom of choice, we asked subjects to fill in a German version of the Hong Psychological Reactance Scale (HPRS) 19,20,36 . The HPRS consists of 14 items that describe general attitudes and habits. Subjects rated these items on a 5-point Likert scale ranging from 1 = strongly disagree to 5 = strongly agree. Based on the subjects' ratings, we computed the scores of the four subscales Emotional response toward restricted choices, Reactance to compliance, Resisting influence from others, Reactance toward advice and recommendations. All subscales of the HPRS had good to excellent internal consistencies (Cronbach's α between 0.85 and 0.99). To achieve one overall score of the HPRS ('HPRS score'), we computed the mean of the four subscale scores for each subject. On average, subjects had an HPRS score of mean 3.04 ± 0.47 SD (Mdn = 3.14, range: 1.86-4.07). 
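The internal-consistency and scoring computations mentioned here are standard; a small sketch (with hypothetical column names for the four HPRS subscales) is shown below before turning to the moderation models themselves.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a subjects x items matrix of ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def hprs_score(subscale_means: pd.DataFrame) -> pd.Series:
    """Overall HPRS score per subject as the mean of the four subscale scores,
    as described in the text (subscale column names are hypothetical)."""
    cols = ["emotional_response", "reactance_to_compliance",
            "resisting_influence", "reactance_to_advice"]
    return subscale_means[cols].mean(axis=1)
```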
To control for the HPRS score as a potential moderator of the peer effects, we ran a third GLMM, which was identical to the first GLMM, except that we added the moderator variable HPRS i , which is the normalized and mean-centered score of the general tendency to rebel against restrictions of one's freedom of choice as measured by the HPRS. Specifically, we added a predictor HPRS i , its interaction with Control ij and three-way interactions of Control ij , Strong CA i and Weak CA i , respectively, and HPRS i to the first GLMM. Because we assumed that the general tendency to rebel against restrictions of one's freedom of choice is relevant for the effect of the Controlled condition, but not for the effect of the peer influence per se, we omitted the interactions of HPRS i with Strong CA i and Weak CA i, respectively, from the GLMM. Data Availability The data that support the findings of this study are available from the corresponding author S.R. upon request.
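The moderation models for the RPI and HPRS scores described above share the same overall structure: a standardized, mean-centered questionnaire score is entered together with selected two-way terms and the three-way interactions of interest. The sketch below re-specifies the RPI version in Python; the HPRS version differs only in which two-way terms are retained, as described above. This is an illustration with hypothetical column names, not the authors' MATLAB code.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_rpi_moderation(trials: pd.DataFrame):
    """trials must contain 'level', 'control', 'strong_ca', 'weak_ca', 'subject', and 'rpi'
    (raw RPI score per subject). As in the reported model, the RPI x Control interaction is
    omitted; the three-way terms test moderation of the peer effect by the RPI score."""
    d = trials.copy()
    d["rpi_c"] = (d["rpi"] - d["rpi"].mean()) / d["rpi"].std()
    formula = (
        "level ~ control * strong_ca + control * weak_ca"
        " + rpi_c + rpi_c:strong_ca + rpi_c:weak_ca"
        " + control:strong_ca:rpi_c + control:weak_ca:rpi_c"
    )
    return smf.mixedlm(formula, data=d, groups=d["subject"], re_formula="~control").fit()
```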
2019-03-08T15:44:59.838Z
2019-02-28T00:00:00.000
{ "year": 2019, "sha1": "ab3b7d7bc8f6f85465ec397cd3939a0c6cd61998", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-39600-9.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ab3b7d7bc8f6f85465ec397cd3939a0c6cd61998", "s2fieldsofstudy": [ "Psychology", "Economics" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
221623662
pes2o/s2orc
v3-fos-license
Malic Enzyme Couples Mitochondria with Aerobic Glycolysis in Osteoblasts SUMMARY The metabolic program of osteoblasts, the chief bone-making cells, remains incompletely understood. Here in murine calvarial cells, we establish that osteoblast differentiation under aerobic conditions is coupled with a marked increase in glucose consumption and lactate production but reduced oxygen consumption. As a result, aerobic glycolysis accounts for approximately 80% of the ATP production in mature osteoblasts. In vivo tracing with 13C-labeled glucose in the mouse shows that glucose in bone is readily metabolized to lactate but not organic acids in the TCA cycle. Glucose tracing in osteoblast cultures reveals that pyruvate is carboxylated to form malate integral to the malate-aspartate shuttle. RNA sequencing (RNA-seq) identifies Me2, encoding the mitochondrial NAD-dependent isoform of malic enzyme, as being specifically upregulated during osteoblast differentiation. Knockdown of Me2 markedly reduces the glycolytic flux and impairs osteoblast proliferation and differentiation. Thus, the mitochondrial malic enzyme functionally couples the mitochondria with aerobic glycolysis in osteoblasts. INTRODUCTION Proper bone remodeling is essential for maintaining the integrity of bone, and it requires an exquisite balance between bone resorption by osteoclasts and bone formation by osteoblasts. Loss of the balance in favor of bone resorption causes osteoporosis or osteopenia that leads to millions of bone fractures annually (Cummings and Melton, 2002). The current bone anabolic agents teriparatide and abaloparatide, both activating the parathyroid hormone receptor, are effective in increasing bone mineral density and reducing fracture risks, but their use is limited by unresolved concerns about osteosarcoma risks (Cipriani et al., 2012;Ramchand and Seeman, 2018). The newly US Food and Drug Administration (FDA)approved romosozumab, an antibody against the Wnt inhibitor sclerostin, is dually functional in inducing bone formation and suppressing bone resorption but carries a warning about increased cardiovascular risks (Sølling et al., 2018). Thus, there is a continuing need for developing safe and effective alternative treatments for osteoporosis. Recent studies have increasingly linked diabetes mellitus with increased bone fractures (Schwartz, 2017;Weber et al., 2015). Although the mechanisms underlying bone comorbidity in diabetes are complex, potential disruption of energy metabolism in osteoblasts may lead to impaired osteoblast function (Lee et al., 2017;Napoli et al., 2017). Moreover, the bone anabolic function of PTH and Wnt signaling has been partially attributed to their regulation of osteoblast metabolism (Chen et al., 2019;Esen et al., 2013Esen et al., , 2015Frey et al., 2015;Karner et al., 2015). These advances have raised the potential that metabolic pathways may be targeted for developing additional bone therapies. Cellular metabolism in osteoblasts however is just beginning to be understood. Historical studies demonstrated that bone ex-plants as well as freshly isolated calvarial cells from rodents consumed glucose at a brisk rate in vitro (Borle et al., 1960;Cohn and Forscher, 1962;Peck et al., 1964). Recent work with radiolabeled glucose analogs has confirmed a significant uptake of glucose by bone in the mouse, further supporting glucose as a main energy substrate for osteoblasts (Wei et al., 2015;Zoch et al., 2016). 
On the other hand, glutamine and fatty acids were also implicated in osteoblast bioenergetics (Adamek et al., 1987;Biltz et al., 1983). However, the relative contribution of each substrate to energy production in osteoblasts has not been determined. Furthermore, the mechanism for maintaining the NAD+/NADH redox state necessary for sustaining rapid glycolysis in osteoblasts is not known. Several studies have investigated glucose metabolism in rodent calvarial cells following differentiation in vitro but have reported variable results. Whereas one study showed that glucose consumption and lactate production decreased initially but increased later during differentiation, another reported a consistent increase in glycolysis with osteoblast differentiation (Guntur et al., 2014;Komarova et al., 2000). Moreover, studies of osteoblast differentiation from human bone marrow mesenchymal stem cells have produced opposite results. There, osteoblast differentiation was associated with increased oxidative phosphorylation, but with either no change or a decrease in glycolysis (Chen et al., 2008;Shum et al., 2016). Thus, the metabolic changes during osteoblast differentiation remain to be fully established. In this study, we have refined a culture system for freshly isolated calvarial cells to undergo robust osteoblast differentiation within seven days. We show that aerobic glycolysis is a main bioenergetic mechanism throughout differentiation and especially dominates energy production in mature osteoblasts. We further demonstrate that the malate-aspartate shuttle between mitochondria and the cytosol is necessary for active glycolysis in the osteoblast. osteoblast differentiation, we repeated the procedure with ColI-GFP transgenic mice that express GFP mainly in mature osteoblasts. No GFP was detected at day 0, but an increasing number of GFP+ cells appeared after days 4 and 7 ( Figures 1G-1I). Finally, to obtain a global profile of gene expression, we performed RNA sequencing (RNA-seq) experiments with the cells of different stages. Heatmap analyses showed that all known osteoblast markers including Runx2, Sp7, Atf4, Alpl, and Bglap were induced after four days of differentiation, and further upregulated after seven days ( Figure 1J). Sost, which is believed to be expressed mainly by osteocytes, was clearly elevated at day 7, indicating that some cells might be transitioning to the more advanced stage. Thus, the current protocol supports robust osteoblast differentiation from calvarial cells within seven days. Glucose Is the Major Energy Source for Osteoblasts To explore potential changes in bioenergetics, we first measured the steady-state levels of intracellular ATP (adenosine triphosphate) at different stages of osteoblast differentiation. After being cultured for different days in control or mineralization media, the cells were dissociated and reseeded in fresh media supplemented with glucose, glutamine, and free fatty acids for ATP measurements. The day-4 or −7 differentiated cells showed lower ATP levels than the cells cultured in control media for the same length of time, likely reflecting increased ATP consumption associated with osteoblast differentiation as later analyses showed increased ATP production in the process (Figure 2A; see below). 
To determine the relative contribution of the major nutrients to bioenergetics, we used 2-DG (2-deoxyglucose), BPTES, or etomoxir to block the utilization of glucose, glutamine or fatty acids, respectively, and monitored the intracellular ATP levels for up to 2 h. The appropriate dose of each inhibitor was determined by screening a series of dilutions to identify a high concentration that did not cause >5% lethality after 2 h of treatment (see STAR Methods). Inhibition of glucose metabolism by 2-DG abruptly reduced the steady-state ATP level within 30 min in day-0, day-4, and day-7 cells by 56%, 63%, and 75%, respectively ( Figure 2B). In contrast, inhibition of glutamine or fatty acid consumption by BPTES or etomoxir, respectively, had no consistent effect on ATP levels for up to 2 h despite slight fluctuations at certain time points that might reflect compensatory reactions (Figures 2C and 2D). Consistent with the ATP measurements, when cultured in media supplemented with all three energy substrates the undifferentiated calvarial cells consumed glucose at approximately three times the rate of glutamine and 30 times that of free fatty acids ( Figures 2E-2G). Moreover, glucose consumption was significantly increased after four or seven days of differentiation over the undifferentiated cells, whereas glutamine was not changed and fatty acid consumption was only transiently increased at day 4 of differentiation ( Figures 2E-2G). A similar finding that glucose is the main energy substrate for osteoblasts has been reported previously based on a different experimental approach (Wei et al., 2015). Functionally, whereas 2-DG notably suppressed Alp expression at day 4 and abolished mineralization as stained by alizarin red at day 7, BPTES or etomoxir had no obvious effect ( Figure 2H). The concentrations of the inhibitors were chosen so that they were sufficiently high but did not cause >10% cell lethality at the end of the culture (see STAR Methods). To test further the functional relevance of increased glycolysis to osteoblast differentiation, we replaced glucose in the media with galactose, which is known to reduce the glycolytic flux as it must be metabolized via the Leloir pathway before entering the glycolytic mainstream (Bustamante and Pedersen, 1977). Consistent with the previous report, galactose was largely sufficient to support cell proliferation as it achieved the same cell number as glucose after seven days of differentiation, and only modestly reduced the number at day 4 ( Figure 2I). However, a number of osteoblast-marker genes were suppressed after four or seven days in mineralization media, indicating impairment of osteoblast differentiation (Figures 2J and 2K). Staining for Alp activity at day 4 confirmed a clear deficit in differentiation with galactose; alizarin-red staining at day 7 revealed that galactose blocked the formation of mineralized nodules, which were readily visible even without the staining in cultures with glucose ( Figure 2L). These results therefore support that osteoblast lineage cells use glucose as the major energy source, and that the glucose dependence increases with osteoblast differentiation. Aerobic Glycolysis Dominates Energy Production in Osteoblasts As osteoblasts have been indicated to convert glucose mainly to lactate under aerobic conditions, exhibiting a phenomenon known as aerobic glycolysis, we measured lactate levels in the culture medium. 
Similar to the increase in glucose consumption, the rate of lactate production per cell after four or seven days of differentiation increased to two to three times that of the undifferentiated control cells (Figure 3A). Reliance on aerobic glycolysis predicts glycolysis instead of OXPHOS (oxidative phosphorylation) as the main mechanism for energy production in osteoblasts. To test this prediction, we used oxamate, UK-5099, or oligomycin to inhibit lactate dehydrogenase, the mitochondrial pyruvate carrier, or ATP synthase, respectively. Inhibition of pyruvate-to-lactate conversion by oxamate reduced intracellular ATP levels by ~30% in the day-0 cells and ~60% in the day-7 osteoblasts (Figure 3B). In contrast, inhibition of pyruvate entry to the mitochondria by UK-5099, or inhibition of mitochondrial ATP production by oligomycin, did not have any effect on the steady-state ATP levels (Figures 3C and 3D). Measurements of the relative copy number of mt-Nd4, a representative mitochondrial gene, over the nuclear gene Hk2 showed that the abundance of mitochondria per cell stayed relatively constant during differentiation (Figure 3E). Thus, aerobic glycolysis is mainly responsible for energy production in osteoblasts. To gain further insights into the metabolic changes during osteoblast differentiation, we performed Seahorse analyses. As we detected no difference in metabolic behavior among the day-0, day-4, and day-7 control cells, we include here only the day-0 control cells for comparison with the differentiated cells. The basal oxygen consumption rate (OCR) was similar between day-0 and day-4 cells, but significantly reduced in the day-7 osteoblasts (Figures 4A and 4B). Moreover, ATP production OCR and spare capacity OCR were both decreased in the day-7 cells (Figures 4C and 4D). In contrast to OCR, the extracellular acidification rate (ECAR) was significantly higher in day-4 and day-7 than day-0 cells (Figures 4E and 4F). The increased ECAR was consistent with the greater lactate production rate associated with osteoblast differentiation as shown above. Together, the data indicate that osteoblast differentiation is associated with increased glycolysis but reduced oxidative phosphorylation. We next calculated the theoretical ATP production from either glycolysis or OXPHOS based on ECAR or OCR from the Seahorse analyses. Such calculations showed that ATP from OXPHOS decreased by ~50% in day-7 versus day-0 cells, whereas ATP from glycolysis increased by ~150% in day-4 or day-7 cells compared to day-0 cells (Figures 4G and 4H). Further breakdown of the glycolytic ATP production indicated that a predominant majority resulted from substrate-level phosphorylation in the cytoplasm, ranging from 85% in day-0 to 95% in day-4 to 98% in day-7 cells, whereas the contribution from mitochondrial oxidation of NADH transported from the cytoplasm progressively decreased with differentiation (Figures 4I and 4J). The increase in glycolytic ATP production resulted in a significant increase in total ATP output in the day-4 and day-7 cells (Figure 4K). Furthermore, the contribution of glycolysis toward total ATP production increased from 40% in day-0 cells to 60% or 80% in day-4 or day-7 cells, respectively (Figure 4L). These results therefore demonstrate that glycolysis supplies most of the energy in mature osteoblasts.
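The theoretical ATP rates above were derived from ECAR and OCR using the published method of Mookerjee et al. (2017) (see Table S1). As a simplified sketch of that accounting, with coupling coefficients that are illustrative assumptions here rather than values taken from the study:

\[ J_{ATP}^{glyc} \approx PPR_{glyc} \times (ATP/lactate), \qquad ATP/lactate = 1 \ \text{(substrate-level phosphorylation)} \]
\[ J_{ATP}^{ox} \approx 2 \times (P/O) \times OCR_{ATP}, \qquad P/O \approx 2.7 \ \text{(assumed)} \]
\[ \text{glycolytic share of ATP} = J_{ATP}^{glyc} / (J_{ATP}^{glyc} + J_{ATP}^{ox}) \]

Here PPR_glyc is the glycolytic proton (lactate) production rate obtained from ECAR after correcting for respiratory acidification, OCR_ATP is the oligomycin-sensitive portion of the OCR, and the factor 2 converts O2 consumed into O atoms. The full method additionally credits glycolysis with ATP generated in mitochondria from the oxidation of cytosol-derived NADH, the term that shrinks from roughly 15% to 2% of glycolytic ATP across differentiation in Figures 4I and 4J.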
Glucose Is Mainly Converted to Lactate and Contributes to the Malate-Aspartate Shuttle in Osteoblasts

To directly examine the metabolic fate of glucose, we performed isotope-tracing experiments with uniformly labeled D-glucose (13C6-Glc) and analyzed intracellular metabolites by mass spectrometry. The fully labeled glucose (m+6) was expected to produce fully labeled pyruvate (m+3) through glycolysis, which could then convert to lactate (m+3) in the cytosol or produce citrate (m+2) in mitochondria to fuel TCA (tricarboxylic acid) cycle metabolism (Figure 5A). After 30 min of incubation with 13C6-Glc, the amount of intracellular pyruvate (m+3) was significantly higher in day-4 than day-0 cells and was further increased in the day-7 osteoblasts (Figure 5B). Moreover, the intracellular lactate (m+3) amount increased by 150% in day-4 and day-7 over day-0 cells (Figure 5C). In contrast, the amount of citrate (m+2) was less than 10% of lactate (m+3) in day-0 cells and further decreased with differentiation (Figure 5D). Succinate (m+2) and fumarate (m+2), which are TCA metabolites downstream of citrate (m+2), were not detectable at any stage. We next calculated the relative conversion rate of key metabolites by normalizing their labeling percentage to that of the precursor (i.e., dividing the fractional enrichment of the product by that of its precursor). Such calculations showed that glucose was converted to lactate at a significantly higher rate in day-4 and day-7 than day-0 cells (Figure 5E). Interestingly, the increased rate was driven by the greater production of pyruvate from glucose, as the conversion from pyruvate to lactate stayed constant throughout differentiation (Figures 5F and 5G). Finally, the generation of citrate from pyruvate by pyruvate dehydrogenase was significantly reduced in day-7 versus day-4 cells (Figure 5H). The results therefore provide direct evidence that glycolysis is accelerated to produce lactate in osteoblasts with little contribution to the TCA cycle via pyruvate dehydrogenase. Analyses of the tracing data revealed additional fates for glucose. Malate (m+2) was barely detectable in day-0 cells and further decreased with differentiation, again confirming minimal contribution of glucose to the TCA cycle through pyruvate dehydrogenase. Interestingly, however, malate (m+3) was present at a significantly higher level than malate (m+2) in day-4 and day-7 cells (Figure 5I). Fumarate (m+3), which interconverts with malate (m+3), was also readily detected in all cells (Figure 5J). Furthermore, aspartate (m+3), which can be derived from malate (m+3), was readily detectable in day-0 cells and more than doubled in day-4 and day-7 cells (Figure 5K). These results indicate that glucose-derived pyruvate is converted to malate through carboxylation, perhaps to engage the malate-aspartate shuttle in osteoblasts (Figure 5L).

Glycolytic Genes Are Coordinately Upregulated during Osteoblast Differentiation

To gain insight into the molecular basis for metabolic reprogramming during osteoblast differentiation, we analyzed the transcriptomic changes obtained by RNA-seq. A cutoff of RPKM (reads per kilobase of transcript, per million mapped reads) >2 and fold changes >2 between any two of the three stages resulted in 1,214 genes (a minimal sketch of this filtering step follows after this paragraph). Analyses of those genes with KEGG Mapper identified metabolic pathways (ko01100) as the top category, encompassing ~10% of all changes during osteoblast differentiation. Specifically, essentially all glycolysis genes were upregulated after four days of differentiation and further enhanced at day 7 (Figure 6A).
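The filtering described above is simple enough to sketch. The following is a minimal illustration assuming per-stage RPKM values have already been tabulated; the file and column names are hypothetical placeholders, not those used in the study.

import pandas as pd

# Hypothetical input: one row per gene with RPKM columns for the three stages.
rpkm = pd.read_csv("rpkm_by_stage.csv", index_col="gene")  # columns: day0, day4, day7

stages = ["day0", "day4", "day7"]
eps = 1e-9  # guards against division by zero for unexpressed genes

# Criterion 1: RPKM > 2 in at least one stage.
expressed = (rpkm[stages] > 2).any(axis=1)

# Criterion 2: >2-fold change (up or down) between at least one pair of stages.
pairs = [("day0", "day4"), ("day0", "day7"), ("day4", "day7")]
fold_changed = pd.Series(False, index=rpkm.index)
for a, b in pairs:
    ratio = (rpkm[b] + eps) / (rpkm[a] + eps)
    fold_changed |= (ratio > 2) | (ratio < 0.5)

selected = rpkm[expressed & fold_changed]
print(f"{len(selected)} genes pass the RPKM > 2 and fold-change > 2 filter")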
In contrast, the TCA cycle genes as well as the fatty acid catabolism genes were largely suppressed at day 4 compared to day 0, although some of the genes rebounded to the day-0 level at day 7 (Figures 6B and 6C). In keeping with the relatively stable mitochondrial content of the cells, the genes regulating mitochondrial biogenesis did not exhibit consistent changes during differentiation (note that Ppargc1a was virtually undetectable at all stages and thus not included in the heatmap; Figure 6D). Unexpectedly, however, most genes encoding the subunits of the mitochondrial electron transport chain (ETC) complexes I to V were significantly upregulated at day 7 compared to day 0, likely reflecting a compensatory response to the suppressed OXPHOS in day-7 cells (Figure 6E). Finally, the genes for fatty acid synthesis or amino acid metabolism, as a whole, did not exhibit an obvious change pattern during differentiation (Figures S1A and S1B). The gene expression changes therefore support the increased glycolysis associated with osteoblast differentiation.

Me2 Is Necessary for Glycolysis in Osteoblasts

RNA-seq further revealed that Me2, encoding an NAD+-dependent mitochondrial isoform of malic enzyme, was expressed at the highest level among the three family members and was further induced with osteoblast differentiation (Figure 7A). In contrast, Me1, encoding the NADP+-dependent cytosolic malic enzyme, did not change, whereas Me3, encoding the NADP+-dependent mitochondrial form, was hardly expressed (RPKM < 0.2 in all samples; Figure 7A). The cytosolic or mitochondrial malic enzymes convert pyruvate to malate in a reversible manner. As we had detected a notable contribution of glucose to the malate-aspartate shuttle (which is known to regulate the cytosolic NAD+/NADH redox state), we tested a potential role for Me2 in modulating the glycolytic flux in the osteoblast lineage. Knockdown of Me2 with two independent shRNA constructs reduced its mRNA level by >50% at all three differentiation stages (Figure 7B). As a result, glucose consumption per cell was consistently reduced, by ~30% in day-0 cells and by ~70% in day-4 and day-7 differentiated cells (Figure 7C). Likewise, Me2 knockdown decreased lactate production per cell by 40% to 50% at all three stages (Figure 7D). During the 24-h culture for glucose and lactate measurements, we noted that the Me2-deficient cells propagated less than the control cells, indicating that Me2 knockdown likely impaired cell proliferation (Figure 7E). In addition, the knockdown also impaired osteoblast differentiation, as both constructs reduced the mRNA levels of Atf4 at all three stages, of Alpl and Col1a1 in day-0 and day-7 cells, and of Osx (official gene name Sp7) in day-7 cells (Figures 7F, 7G, and 7H). Knockdown of Me1, on the other hand, did not impair glucose consumption or lactate production, thus highlighting the specific function of Me2 in promoting glycolysis (Figure S2). Finally, inhibition of the malate-aspartate shuttle with aminooxyacetate (AOA), which targets aspartate aminotransferase, dose-dependently reduced glucose consumption in day-4 and day-7 cells (Figure 7I). The data therefore provide evidence that funneling glucose carbons to the malate-aspartate shuttle via Me2 is important for boosting glycolysis in osteoblasts.

Glucose Contributes Minimally to the TCA Cycle in Murine Cortical Bone

The data so far establish aerobic glycolysis as a prominent metabolic feature of osteoblasts in vitro.
To determine the physiological relevance of the observation, we performed glucose-tracing experiments in the mouse. Briefly, mice were injected with 13C6-Glc through the tail vein 60 min before sacrifice, and the plasma and the femoral cortical bone were then extracted for metabolites. The cortical bone was chosen because it contains mostly osteoblasts and osteocytes with few other cell types. Quantitative analyses of the plasma by mass spectrometry detected clear enrichment of the 13C-labeled citrate, malate, and succinate, as well as ~65% enrichment of glucose (m+6) and ~40% enrichment of lactate (m+3), indicating active glycolysis and TCA cycle metabolism of 13C6-Glc in the body (Figures 7J and 7K). However, in the bones of the same animals, despite ~60% enrichment of glucose (m+6) and ~20% enrichment of lactate (m+3), no 13C-labeling of citrate, succinate, fumarate, or malate was detected even though those organic acids themselves were readily detectable (Figures 7K and 7L). Thus, glucose is predominantly metabolized to lactate with little contribution to the TCA cycle in the murine cortical bone. As a whole, the data support a model wherein aerobic glycolysis is the predominant bioenergetic pathway in osteoblasts, in which Me2 plays a critical role by fueling an active malate-aspartate shuttle (Figure 7M).

DISCUSSION

We have conducted a comprehensive analysis of the metabolic profile of osteoblasts. The data indicate that glucose, rather than glutamine or fatty acids, is the principal energy source for osteoblasts in culture. Metabolic tracing with labeled glucose shows that glucose is predominantly metabolized to lactate, with little contribution to the TCA cycle, in osteoblasts in vitro and in cortical bone in vivo. These metabolic features of osteoblasts are coupled with wholesale upregulation of the glycolytic genes during osteoblast differentiation. Furthermore, genetic knockdown of the mitochondrial malic enzyme indicates that the malate-aspartate shuttle is necessary to sustain the glycolytic flux in osteoblasts. Thus, the present study sheds light on the metabolic wiring of osteoblasts. The study also provides insights into fatty acid utilization by osteoblasts. Consistent with previous reports, we have detected fatty acid uptake by calvarial cells before and after osteoblast differentiation (Frey et al., 2015). However, the uptake was approximately 30-fold less than that of glucose even with palmitate and oleic acid supplemented at physiological levels. More importantly, inhibition of mitochondrial fatty acid oxidation with etomoxir, or of mitochondrial ATP synthesis with oligomycin, had little or no effect on steady-state ATP levels. Although we cannot rule out that compensation from glycolysis might mask the mitochondrial energy contribution, the data nonetheless indicate that such a contribution is dispensable for energy homeostasis. Interestingly, fatty acid uptake peaked at day 4 of differentiation but declined by day 7 to the base level seen in the undifferentiated cells. A transient increase in fatty acid oxidation at day 4 may explain the ~10% suppression of steady-state ATP levels by oligomycin specifically observed at that stage. Thus, fatty acid oxidation may contribute to osteoblast bioenergetics in a stage-specific manner, but generally appears to play an auxiliary role. In addition, the current work clarifies the potential contribution of glutamine to energy production in osteoblasts.
Glutamine uptake was detected in calvarial cells but did not increase with osteoblast differentiation, indicating that glutamine normally may not play a major role in meeting the increased energy demand of mature osteoblasts. In keeping with this view, inhibition of glutamine catabolism with BPTES had no effect on the steady-state ATP levels. The notion is also consistent with our previous finding that inhibition of glutamine catabolism does not affect basal bone mass in the mouse even though it suppresses the excessive bone formation caused by hyperactive Wnt signaling (Karner et al., 2015). A more recent study linked increased glutamine uptake with specification of skeletal stem cells toward the osteoblast lineage but did not report a specific role in energy production (Yu et al., 2019). Collectively, the data to date indicate that glutamine has a limited role as a direct energy substrate in osteoblasts under normal conditions.

The comprehensive metabolic profiling was enabled by our optimized protocol for osteoblast differentiation. Multiple assays, including visual inspection of nodule formation, mineral staining, and RNA-seq, indicate that the calvarial cells reliably undergo robust osteoblast differentiation within seven days of culture in the current study. In contrast, previous protocols required a minimum of 14 days of induction to achieve differentiation (Guntur et al., 2014; Komarova et al., 2000). Another key difference between the studies lies in how the metabolic data were acquired and normalized. Previously, the metabolic measurements were made directly in the culture wells following differentiation and then normalized to either cell number or protein content recovered from the wells (Guntur et al., 2014; Komarova et al., 2000). However, in our experience, mineralization makes it difficult to fully recover the cells or cellular proteins from the cultures. Therefore, we have chosen to perform all metabolic studies after reseeding of the cells dissociated from the cultures, and to normalize the data to the reseeded cell number. These technical differences could account for the discrepant findings, particularly regarding oxygen consumption, which was previously shown to increase with osteoblast differentiation (Guntur et al., 2014; Komarova et al., 2000).

The reliance on glycolysis for energy production in osteoblasts is counterintuitive, as less ATP is produced from each glucose molecule through glycolysis than through mitochondrial oxidation. Glycolysis produces a net gain of 2 ATP per glucose molecule through substrate-level phosphorylation, but additional ATP may be generated through oxidative phosphorylation after the reducing equivalents of glycolysis-derived NADH are transferred to the mitochondria via the malate-aspartate shuttle. Our calculations from the Seahorse data indicate that glycolysis produces approximately 80% of the energy in mature osteoblasts. Moreover, 98% of the glycolytic ATP production in those cells results from substrate-level phosphorylation in the core glycolysis pathway (Table S1; the [PPRglyc × ATP/lactate] term in the equation for glycolytic ATP). The marked increase in glycolysis leads to an overall increase in ATP production even though OXPHOS is diminished in mature osteoblasts. These results therefore resolve the apparent energy paradox of aerobic glycolysis in osteoblasts. A main finding from the Seahorse assays is that mature osteoblasts exhibit little spare respiratory capacity in response to the mitochondrial uncoupling reagent FCCP (carbonyl cyanide-p-trifluoromethoxyphenylhydrazone).
The failure to increase oxidative phosphorylation could indicate that either the ETC capacity or the supply of reducing equivalents (NADH or FADH2) from the TCA cycle is limiting. As RNA-seq showed that virtually all genes encoding the ETC subunits were upregulated in mature osteoblasts, we consider it unlikely that the ETC capacity became limiting after differentiation. On the other hand, Pdk1, which encodes a pyruvate dehydrogenase kinase that suppresses the activity of pyruvate dehydrogenase, was expressed at a level four times higher in osteoblasts than in preosteoblasts. Moreover, intracellular pyruvate accumulated at a higher level with the progression of osteoblast differentiation. Thus, suppression of pyruvate dehydrogenase activity may restrict pyruvate from entering the TCA cycle, resulting in less NADH or FADH2 to fuel oxidative phosphorylation.

The present study provides insight into the mechanism for maintaining the highly glycolytic state in osteoblasts. The critical role of Me2 may be explained by funneling pyruvate into the malate-aspartate shuttle, which reoxidizes cytoplasmic NADH to NAD+. However, as Me2 knockdown reduced glycolysis more severely than the shuttle inhibitor AOA did, we cannot rule out that Me2 may perform additional activities to support glycolysis. We did not detect direct conversion of pyruvate to malate in bone through carbon tracing, but this could be due to insufficient enrichment of 13C-glucose in vivo. Although the glycerol phosphate shuttle is also known to regenerate NAD+ from NADH in certain cells, we consider this unlikely here, as the key enzyme Gpd1 is barely detectable in the osteoblast lineage cells (RPKM < 0.3 at all stages). It is worth noting that oxamate, though commonly used as an inhibitor of lactate dehydrogenase, also inhibits aspartate aminotransferase (Thornburg et al., 2008). Thus, the strong suppression of intracellular ATP levels by oxamate in osteoblasts may result from simultaneous inhibition of both lactate dehydrogenase and the malate-aspartate shuttle. Future studies are warranted to determine the relative contribution of each mechanism to cytoplasmic NADH reoxidation in osteoblasts. Overall, the study identifies aerobic glycolysis as the principal bioenergetic pathway in normal osteoblasts, and thus provides a foundation for future investigations into bone metabolism in pathological conditions such as diabetes. Further elucidation of the relationship between glycolysis and the malate-aspartate shuttle may uncover molecular targets for developing additional bone-enhancing therapies.

EXPERIMENTAL MODEL AND SUBJECT DETAILS

All primary calvarial cells were isolated from newborn pups of the C57BL/6J (wild type or ColI-GFP) mouse strain at 1-5 days of age; both male and female pups were used. Use of the animals was approved by the Animal Studies Committee at Washington University in St. Louis School of Medicine and the IACUC Committee at The Children's Hospital of Philadelphia.

METHOD DETAILS

Cell isolation, culture and osteoblast differentiation-Isolation of calvarial preosteoblasts was modified from a previous protocol (Jonason and O'Keefe, 2014). Briefly, calvaria were dissected free of periosteum from neonatal (P1-P5) C57BL/6J wild type or ColI-GFP (Col1a1*2.3-GFP) mice (Kalajzic et al., 2002). Each calvarium was sequentially digested with 0.6 mL of 4 mg/ml collagenase I (Sigma, C0130) dissolved in PBS for multiple rounds of 15 mins at 37°C with gentle shaking at 100 rpm.
Cells were collected from the second through fourth digestions and filtered through a 70 μm strainer before being centrifuged and seeded at 4 × 10^4 cells/cm^2 in ascorbic acid-free MEMα (Thermo, A10490) supplemented with 10% FBS (Thermo, 26140087) and Penicillin-Streptomycin (Thermo, 15140122). The cells can be passaged once in the same culture condition to increase the cell number. After reaching 100% confluency, usually after 3 days, the cells were switched to MEMα containing 10 or 4 mM β-glycerol phosphate (Sigma, G9422) and 50 μg/ml ascorbic acid (Sigma, A4544), with daily changes of media, for osteoblast differentiation. Cells after 4 or 7 days of differentiation in 10-cm culture dishes were dissociated first with 4 mg/ml collagenase I in PBS for 30 or 45 minutes, respectively, and then with 0.25% trypsin for 10-15 mins. The cells were then collected and filtered through a 70 μm strainer before being reseeded for subsequent studies. In certain differentiation assays, 100 μM 2-DG, 10 μM BPTES, 200 μM etomoxir, or 5.5 mM galactose in place of glucose was added to the mineralization medium.

High throughput RNA-sequencing-Total RNA was isolated with the QIAGEN RNeasy Kit. Library construction, high-throughput sequencing and bioinformatics were performed by the Genome Technology Access Center at Washington University School of Medicine. RNA-seq reads were aligned to the Ensembl release 76 top-level assembly with STAR version 2.0.4b. Gene counts were derived from the number of uniquely aligned unambiguous reads by Subread:featureCount version 1.4.5. Transcript counts were produced by Sailfish version 0.6.3. Sequencing performance was assessed for total number of aligned reads, total number of uniquely aligned reads, genes and transcripts detected, ribosomal fraction, known junction saturation, and read distribution over known gene models with RSeQC version 2.3. All gene-level and transcript counts were then imported into the R/Bioconductor package EdgeR, and TMM normalization size factors were calculated to adjust samples for differences in library size. Genes or transcripts not expressed in any sample were excluded from further analysis. The TMM size factors and the matrix of counts were then imported into the R/Bioconductor package Limma, and weighted likelihoods based on the observed mean-variance relationship of every gene/transcript and sample were then calculated for all samples with the voomWithQualityWeights function. Performance of the samples was assessed with a Spearman correlation matrix and multi-dimensional scaling plots. Gene/transcript performance was assessed with plots of the residual standard deviation of every gene against its average log-count with a robustly fitted trend line of the residuals. Generalized linear models were then created to test for gene/transcript level differential expression. Differentially expressed genes and transcripts were then filtered for FDR adjusted p values less than or equal to 0.05. Heatmaps were generated by Heatmapper (http://www.heatmapper.ca/).

ATP, glucose and lactate measurements-Custom media were used for all metabolic studies. A medium free of glucose, glutamine, pyruvate, phenol red, and sodium bicarbonate was based on MEMα with no nucleosides (Thermo, 10490) and custom produced (GIBCO). The medium was first reconstituted with sodium bicarbonate with the pH adjusted to 7.4. Complete MEMα (cMEMα) medium was then prepared by adding fresh ingredients to achieve 5.5 mM glucose, 2 mM glutamine, 1 mM pyruvate and 10% FBS.
For certain experiments, cMEMα medium was supplemented with 10 mM FFA : 3 mM BSA to a final concentration of 100 μM each of palmitate and oleate. For glucose, glutamine, free fatty acid and lactate measurements, cells were seeded at 1 × 10^5 cells/cm^2 in 6-well plates for 4 hours before being rinsed once with cMEMα medium containing 100 μM each of palmitate and oleate and then incubated in 2 mL of the same medium per well for 24 hours. Glucose, glutamine, free fatty acid, and lactate concentrations were measured with the Glucose (HK) Assay Kit (Sigma, GAHK20), Glutamine Colorimetric Assay Kit (BioVision, K556), Free Fatty Acid Quantification Colorimetric/Fluorometric Kit (BioVision, K612) and L-Lactate Assay Kit I (Eton Bioscience, 120001), respectively.

Seahorse assays-Cells were seeded at 4 × 10^5 cells/cm^2 into poly-D-lysine (Sigma, P6407) coated XF96 plates (Agilent) for 4 hours prior to experiments. Complete Seahorse medium was prepared from Agilent Seahorse XF Base Medium (Agilent, 102353) to contain 5.5 mM glucose, 2 mM glutamine and 1 mM pyruvate, with pH 7.4. The cells were incubated in 180 μl complete Seahorse medium at 37°C for 1 hour before measurements in the Seahorse XFe96 Analyzer. The following working concentrations of compounds were used: 2 μM oligomycin, 2 μM FCCP, 1 μM rotenone, and 1 mM antimycin A. The oxygen consumption rate (OCR) and extracellular acidification rate (ECAR) were normalized to the seeded cell number. Calculation of ATP production from either glycolysis or oxidative phosphorylation is based on a published method (Mookerjee et al., 2017). See Table S1 for equations.

Metabolic tracing of glucose-For in vitro studies, cells were seeded at 1 × 10^5 cells/cm^2 into 10-cm dishes for 4 hours before being switched to 10 mL cMEMα for 1 hour. The cells were then incubated for 30 mins with cMEMα containing 5.5 mM uniformly labeled 13C-D-glucose (13C6-Glc) (Sigma, 389374) in lieu of regular glucose. The cells were then rinsed with 10 mL ice-cold PBS and lysed with 4% perchloric acid (PCA). For in vivo glucose tracing, 13C6-Glc dissolved in water at 3.3 M concentration was injected at 80 mg/mouse, 60 minutes before euthanization, through the tail vein of eight-week-old C57BL6/J male mice. Plasma was collected immediately before sacrifice. The tibias and femurs were immediately dissected clean of connective tissue and the bone shafts were excised free of trabecular bone with a sharp razor blade. The bone shafts were then centrifuged at 11,000 g in a table-top microcentrifuge to remove the marrow content before being homogenized in 600 μl of 4% perchloric acid (PCA) in water. Metabolite measurements were performed at the Metabolomics Core of the Children's Hospital of Philadelphia as previously described (Nissim et al., 2012). Plasma samples and a neutralized perchloric acid (PCA) extract prepared from cell cultures or bone samples were used for measurement of 13C enrichment in glucose and/or TCA cycle intermediates. Measurement was performed on either an Agilent Triple Quad 6410 mass spectrometer combined with an Agilent LC 1260 Infinity, or a Hewlett-Packard 5971 Mass Selective Detector (MSD) coupled with a 5890 HP-GC, a GC-MS Agilent System (6890 GC-5973 MSD), or a Hewlett-Packard HP-5970 MSD, using electron impact ionization with an ionizing voltage of −70 eV and an electron multiplier set to 2000 V.
Isotopic enrichment in 13C aspartate isotopomers was monitored using ions at m/z 418, 419, 420, 421 and 422 for M0, M1, M2, M3 and M4 (containing 1 to 4 13C atoms above M0, the natural abundance), respectively. Isotopic enrichment in 13C lactate was monitored using ions at m/z 261, 262, 263 and 264 for M0, M1, M2 and M3 (containing 1 to 3 13C atoms above natural abundance), respectively. Isotopic enrichment in 13C malate isotopomers was monitored using ions at m/z 419, 420, 421, 422 and 423 for M0, M1, M2, M3 and M4 (containing 1 to 4 13C atoms above natural abundance), respectively. Isotopic enrichment in 13C fumarate isotopomers was monitored using ions at m/z 287, 288, 289, 290 and 291 for M0, M1, M2, M3 and M4 (containing 1 to 4 13C atoms above natural abundance), respectively, and 13C enrichment in 13C citrate isotopomers was monitored using ions at m/z 459, 460, 461, 462, 463, 464 and 465 for M0, M1, M2, M3, M4, M5 and M6 (containing 1 to 6 13C atoms above natural abundance), respectively. 13C enrichment in glucose was determined by LC-MS. Organic acid levels were determined by the isotope-dilution approach and a GC-MS system (Weinberg et al., 2000). 13C enrichment is expressed as atom percent excess (APE), which is the fraction (%) of 13C enrichment above natural abundance. The level of a 13C-labeled mass isotopomer was calculated as the product of (APE/100) times concentration and is expressed as nmoles of 13C metabolite per gram wet weight of bone, or per mg of cellular protein.

Gene knockdown with shRNA-Lentiviral-based shRNA constructs were obtained from the High-Throughput Screening Core at the University of Pennsylvania. Construct IDs are as follows: shEGFP, SHC005; shMe1-1, TRCN0000114877; shMe1-2, TRCN0000114878; shMe2-1, TRCN0000114867; shMe2-2, TRCN0000114870. Viruses expressing shRNA were produced by transfecting HEK293T cells with the pLKO.1 shRNA plasmid together with the packaging plasmids pΔ8.2 and pVSVG by using Lipofectamine 3000 (Thermo). Cells at the various differentiation stages were dissociated as described earlier and plated at 1 × 10^5 cells/cm^2 for d0 cells, and 2 × 10^5 cells/cm^2 for d4 or d7 cells, before being infected with lentiviruses at 1 transduction unit (TU)/cell for 16 hours. Infected cells were selected with 1 and then 2 μg/ml puromycin for 24 hr each and then switched to cMEMα (supplemented with ascorbate and β-glycerophosphate for d4 and d7 cells) for 24 hr before the media were harvested for glucose and lactate measurements. The cells were harvested for RT-qPCR to determine the knockdown efficiency and the expression level of various genes.

RT-qPCR analysis-Total RNA was harvested from 2 × 10^6 cells first lysed with 370 μL RLT buffer containing 1% 2-mercaptoethanol, and then extracted with the RNeasy mini kit (QIAGEN) according to the manufacturer's protocol. Complementary DNA was synthesized from 1 μg mRNA per reaction with the SuperScript IV VILO Master Mix with ezDNase Enzyme (Thermo). The relative expression level of a specific mRNA relative to the 18S ribosomal RNA was determined by qPCR with PowerUp SYBR Green Master Mix (Thermo) in the QuantStudio 3 Real-Time PCR System (Thermo). The relative changes were calculated with the ΔΔCt method and expressed as fold change (2^−ΔΔCt). PCR primer information is listed in Table S2.
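As a generic illustration of the ΔΔCt calculation just described (the Ct values, sample labels, and helper function below are hypothetical and are not data from the study):

# Relative expression by the ddCt method: target Ct is first normalized to the
# 18S reference within each sample, then to the control condition.
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    d_ct_sample = ct_target_sample - ct_ref_sample      # deltaCt of the treated/knockdown sample
    d_ct_control = ct_target_control - ct_ref_control   # deltaCt of the control sample
    dd_ct = d_ct_sample - d_ct_control                   # deltadeltaCt
    return 2 ** (-dd_ct)                                  # fold change relative to control

# Hypothetical Ct values: Me2 knockdown vs. shEGFP control, with 18S as the reference.
print(fold_change(ct_target_sample=26.3, ct_ref_sample=9.1,
                  ct_target_control=24.9, ct_ref_control=9.0))  # ~0.41, i.e., >50% knockdown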
The relative abundance of the mitochondrial gene mt-Nd4 (mitochondrially encoded NADH:ubiquinone oxidoreductase core subunit 4) relative to the nuclear gene Hk2 (hexokinase 2) was determined by qPCR with 15 ng total DNA using SsoAdvanced Universal SYBR Green Supermix (BioRad). The relative change in mitochondrial copy number was calculated with the ΔΔCt method and expressed as fold change (2^−ΔΔCt). See Table S3 for PCR primer information.

QUANTIFICATION AND STATISTICAL ANALYSIS

Statistical significance was calculated with the two-tailed Student's t test. All quantification graphs are presented as mean ± standard deviation. The number of biological replicates (N) is indicated in the figure legends. Statistical significance is defined as p < 0.05.

• Glycolysis produces 80% of the energy in osteoblasts under aerobic conditions
• Lactate is the predominant metabolic fate for glucose in bone in vivo
• Mitochondrial respiration is diminished during osteoblast maturation
• Me2 funnels glucose carbons into the malate-aspartate shuttle to sustain glycolysis
How Winery Tourism Experience Builds Brand Image and Brand Loyalty

Wine Business Journal

This research examines the role of the winery tourism experience in the formation of brand image and brand loyalty. A qualitative analysis of 2540 TripAdvisor reviews, a user-generated form of electronic word of mouth, of four wineries of the Okanagan Valley posted over six years (2014-2020) reveals not only Pine and Gilmore's (1999) four categories of consumer experiences (i.e., esthetics, education, entertainment, and escape), but also an additional factor (i.e., social interactions with employees and other visitors). The TripAdvisor reviews also show that, based on their winery tourism experiences, consumers express differentiated brand image impressions associated with wineries and brand loyalty. The contribution of this research lies in the identification of social interactions as a complementary dimension of winery tourism experiences, and in linking winery tourism experiences with brand image and brand loyalty. From a theoretical perspective, the findings encourage a greater integration of the consumer experience and the brand image and loyalty literature, as well as quantitative research examining their relation. The findings also have managerial implications for brand experience management in the wine tourism sector.

Introduction

An increasing number of wine producers offer winery tourism experiences in order to boost direct-to-consumer sales of wines on site and, perhaps more importantly, as a means of building brand image and valuable, long-term relationships with consumers (Karlsson & Karlsson, 2017). The managerial and theoretical importance of winery tourism experiences (for reviews, see Gómez et al., 2019; Santos et al., 2019) has given rise to research exploring their nature and dimensions (Massa & Bédé, 2018; Quadri-Felitti & Fiore, 2012; Thanh & Kirova, 2018). Research has examined consumer experiences and motivations with regard to wine regions as destinations (Afonso et al., 2018; Bruwer & Rueger-Muck, 2019; Byrd et al., 2016; Gu et al., 2020; Pikkemaat et al., 2009; Quadri-Felitti & Fiore, 2013; Thanh & Kirova, 2018) and winery tasting rooms (Charters et al., 2009), with most articles focusing on the sensory, hedonic, and experiential nature of winery visits associated with a specific geographic region (Bruwer & Rueger-Muck, 2019; Byrd et al., 2016). Winery experiences as perceived by specific target segments have also received some attention (Fountain, 2018; Fountain & Charters, 2009). Nonetheless, there are currently few articles (for an exception, see O'Neill et al., 2002) examining to what extent aspects of the winery tourism experience give rise to brand-related outcomes for wineries. Furthermore, there is a need for more research on consumer experiences, particularly in the winery context (Santos et al., 2019). To answer the call for more research on consumers' winery experiences (Santos et al., 2019) and to contribute to the relatively limited literature on consumers' brand image perceptions and brand loyalty as they relate to wineries (Gómez et al., 2019), this article has two objectives: First, it examines the dimensions of winery visitors' experiences at the level of the winery. Second, it explores the development of consumers' brand image associations and brand loyalty toward wineries they have visited.
To achieve these objectives, this research focuses on four wineries in the Okanagan Valley of British Columbia that are part of the region's rapidly growing wine industry, differ in size, and offer markedly different wine tourism experiences. Wineries were selected based on their geographical location in the Okanagan Valley, and constitute a convenience sample of wineries that differ in size. The selection was informed by the classification of British Columbia (B.C.) wineries (Cartier, 2012), which includes three large wineries selling about 2,790,786 cases of wine (83% of the market), 16 medium-sized wineries selling 301,216 cases (9%), and 89 small wineries selling 168,346 cases (5%). Major wineries sold on average 930,262 cases, medium-sized wineries 18,826 cases, and small wineries 1,892 cases. A classification provided by the British Columbia Ministry of Agriculture, Food and Fisheries (2004) categorized Mission Hill as a large estate winery and Quails' Gate as a medium winery, but does not identify small wineries by name. Among small Okanagan wineries, the selection therefore included an organic winery, as consumers are increasingly interested in sustainability, and a winery associated with a celebrity (i.e., author and oenophile Salman Rushdie), as its celebrity status likely attracts visitors and provides a reasonably sized sample of reviews. This research therefore included the following wineries, referred to below as MH (Mission Hill), QG (Quails' Gate), RD (Rollingdale), and H.

To capture consumer responses to the winery experience holistically (Massa & Bédé, 2018) and to preclude biases associated with survey data collection at wineries (Charters et al., 2009), this article examines TripAdvisor reviews of the four selected wineries posted between January 2014 and December 2020. TripAdvisor reviews offer valuable insights into brand experiences in the context of wine tourism (Massa & Bédé, 2018; Thanh & Kirova, 2018), while also reflecting resulting brand image impressions and manifestations of brand loyalty. This article's contribution to the literature on consumers' winery experiences is twofold: First, in addition to the winery experience factors discussed in the literature (i.e., the 4E framework: esthetics, escape, education, entertainment; Pine & Gilmore, 1999; Quadri-Felitti & Fiore, 2012), this research identifies social interactions as an important factor contributing to consumers' winery experiences, and thus extends findings on the nature of consumer experience. Second, focusing on the winery as the unit of analysis, this research demonstrates that consumers' winery experiences are associated with brand image impressions and brand loyalty. This suggests that a stronger integration of consumer experience and brand equity models could yield important insight for theory (e.g., experience dimensions as differential antecedents of brand image associations) and practice (e.g., resource allocation to elicit a desired brand image and encourage brand loyalty). This research thus has implications for future research on the creation of winery tourism experiences and for brand experience management in this context.
Conceptual Background

Experience Marketing in the Winery Tourism Context

Wine tourism encompasses a variety of activities (Charters & Ali-Knight, 2002), including tours of a single winery or a series of wineries in a region, dining at winery restaurants and cafés offering carefully curated wine and food pairings, or wine tastings. In offering winery tourism experiences, wine producers engage in experience marketing, which involves an economic offering that consists of sensorial stimuli (e.g., wine, food) and thematic content (e.g., cellar tours, tasting events) staged by the winery (Becker & Jaakkola, 2020; Bruwer & Alant, 2009; Bruwer & Rueger-Muck, 2019; Massa & Bédé, 2018; Pine & Gilmore, 1998, 1999).

Wineries' experience marketing gives rise to brand experience, because it allows consumers to interact directly (e.g., wine tasting) and indirectly (e.g., visits to wine cellars and vineyards, storytelling) with the brand (Becker & Jaakkola, 2020; Brakus et al., 2009). According to Pine and Gilmore (1998), consumer experiences are conceptualized in terms of four experience categories (i.e., the 4E framework: esthetics, education, entertainment, escape). These also apply to winery tourism (Quadri-Felitti & Fiore, 2012; Thanh & Kirova, 2018). Esthetics refers to enrichment through sensory aspects of the experience (e.g., scenic beauty, enjoyment of wine and food), escape to the immersion into a different time and place (e.g., participation in traditional grape picking), education to knowledge development (e.g., wine tastings, seminars, wine-food pairing events), and entertainment to attending performances (e.g., live music, art displays, themed events; Pine & Gilmore, 1999; Quadri-Felitti & Fiore, 2012; Thanh & Kirova, 2018). Although the 4E framework suggests that consumer experience includes active (i.e., education, escape) and passive (i.e., esthetics, entertainment) consumer participation (Pine & Gilmore, 1999; Quadri-Felitti & Fiore, 2012; Thanh & Kirova, 2018), experience marketing involves staging (Pikkemaat et al., 2009), that is, the integration of the experience category within a comprehensive and thematic design of the environment (e.g., architecture, natural setting) and the offerings delivered therein (e.g., historic tours, cultural activities, food, wine tastings) in order to create experiences that consumers find unique and memorable, want to repeat, and enthusiastically promote via word-of-mouth (Bruwer & Alant, 2009; Pine & Gilmore, 1998; Pizam, 2010; Quadri-Felitti & Fiore, 2012). Positive memories of the experience, in particular, increase future revisit intentions, satisfaction, and destination loyalty (Pizam, 2010; Quadri-Felitti & Fiore, 2013). This literature also suggests that wine tourists seek an experience that is "a complex interaction of natural setting, wine, food, cultural and historical inputs and above all the people who services them" (Charters, 2006, p. 214). Although social interactions have been recognized as playing a role in consumers' winery experiences (Charters et al., 2009; Gu et al., 2020; O'Neill et al., 2002), they have not been investigated in relation to the 4E framework categories (Pine & Gilmore, 1998). The current research thus examines whether and to what extent social interactions emerge in consumers' reflections on their winery experiences, along with the 4Es.
In addition, whereas the literature focuses on the emergence of the four experience categories in the context of visits to a wine region (Knutson et al., 2006; Quadri-Felitti & Fiore, 2012; Thanh & Kirova, 2018), or on their impact on consumer responses to a wine region (e.g., loyalty, intention to revisit; Afonso et al., 2018; Quadri-Felitti & Fiore, 2013), this research examines consumer experience of a specific winery and brand-related consumer responses (i.e., brand image, brand loyalty) toward the winery.

Brand Image

Brand image consists of the brand associations consumers hold in their minds (Keller, 1993, 2020). Favorable, unique, and strong brand associations are an important tool for brand positioning and differentiation (Keller, 1993, 2020). Unique, positive, and memorable brand experiences give rise to positive brand image associations (Andreini et al., 2018), including brand personality perceptions (Nysveen et al., 2013), which in turn encourage brand loyalty (Brakus et al., 2009; Ramaseshan & Stein, 2014). While the literature has examined the consumer experience-image link in the context of wine regions (Bruwer & Lesschaeve, 2012), the current research focuses on the winery as a brand. Because brand image associations arise from both product- and non-product-related attributes, such as experiences (Keller, 1993, 2020), the branding literature suggests that consumers' recollections of their winery experiences likely also reflect associated brand image perceptions.

Brand Loyalty

Whereas brand image associations are cognitive consumer responses to the brand (Andreini et al., 2018), brand loyalty is a relational consumer response to brand experience (O'Neill et al., 2002). Brand loyalty comprises an attitudinal and a behavioral dimension (Chaudhuri & Holbrook, 2001; Oliver, 1999). Attitudinal brand loyalty refers to repurchase intentions, greater likelihood of recommendation, positive word-of-mouth, and willingness to pay a price premium, whereas behavioral brand loyalty is captured by repeat purchases (Chaudhuri & Holbrook, 2001; Iglesias et al., 2011). The branding literature supports a positive relationship between favorable brand experience and brand loyalty (Brakus et al., 2009; Iglesias et al., 2011), mediated by consumers' affective brand attachment (Iglesias et al., 2011; Thomson et al., 2005) as well as brand personality perceptions (Brakus et al., 2009). In the context of the wine industry, a survey of Australian wine consumers furthermore confirmed a positive, indirect relation between wine consumption experience and brand loyalty, mediated by brand trust and satisfaction (Bianchi et al., 2014). In a Chilean wine industry context, the relation between wine consumption experience and brand loyalty was mediated only by brand satisfaction, but not trust (Bianchi, 2015). The literature on winery tourism also proposes a positive relation between consumer experience and destination loyalty, although findings are mixed. In the context of a South African wine farm destination, Back et al.
(2020) do not find a significant relation between visitor experience dimensions (which differed from those proposed by the 4E framework) and loyalty, with the exception of a positive impact of favorable food and beverage tastings. On the other hand, positive consumer experiences as defined by the 4E framework were associated with winery tourism destination loyalty (Quadri-Felitti & Fiore, 2013). In the context of winery tasting rooms, positive consumer experiences were associated with greater long-term, off-premise sales, which are indicative of brand loyalty (Cuellar et al., 2015). Similarly, positive emotions associated with winery tasting room experiences were positively associated with brand loyalty (Nowak et al., 2006). Although often not specifically focused on the winery experience as conceptualized in the 4E framework or on holistic winery experiences (as opposed to wine consumption or winery tourism destinations), the literature suggests that consumer experiences in winery settings culminate in brand experience, which gives rise to brand image associations and brand loyalty. This research builds on earlier work (Thanh & Kirova, 2018) to examine to what extent the 4E conceptual framework and social interactions arise in consumers' TripAdvisor reviews, and whether consumers' winery experiences give rise to brand image perceptions and brand loyalty. Figure 1 illustrates this conceptual framework.

Method

Given that wineries attract consumers with different needs and interests (Charters & Ali-Knight, 2002), and because experience is highly subjective, examining consumer experience, brand image, and brand loyalty in this context lends itself to an investigation based on qualitative data (Charters et al., 2009). In line with previous research on consumer experiences (Cassar et al., 2020; Massa & Bédé, 2018; Thanh & Kirova, 2018), this research was informed by a netnographic approach and involved the collection and coding of consumers' TripAdvisor reviews to understand how visitors experience winery visits, how they perceive the winery's image, and whether they express brand loyalty. This approach is appropriate for understanding complex phenomena and allows for the identification of emergent themes (Kozinets, 2002). Since 2000, the U.S.-based TripAdvisor, a platform for consumers' description and evaluation of their travel experiences, has featured more than 730 million reviews. It is one of the leading providers of online word-of-mouth recommendations regarding a vast range of travel destinations (https://tripadvisor.mediaroom.com/US-about-us). While the credibility of information on social media is a concern, consumers frequently rely on TripAdvisor information, and reviews thus make it possible to cautiously gauge consumer responses to experiences (Ayeh et al., 2013; Massa & Bédé, 2018; Thanh & Kirova, 2018).
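Purely as an illustration of the kind of tally that underlies the frequencies reported in the Results below (the coding itself was done manually, and the review records shown here are invented placeholders, not data from the study):

from collections import Counter

# Each manually coded review: winery, TripAdvisor rating bucket, and the themes assigned to it.
coded_reviews = [
    {"winery": "MH", "rating": "excellent/very good", "themes": ["esthetics", "social interactions"]},
    {"winery": "QG", "rating": "average", "themes": ["education"]},
    {"winery": "RD", "rating": "excellent/very good", "themes": ["social interactions"]},
]

reviews_per_winery = Counter(r["winery"] for r in coded_reviews)
theme_counts = Counter()
for review in coded_reviews:
    for theme in review["themes"]:
        theme_counts[(review["winery"], theme)] += 1

for (winery, theme), n in sorted(theme_counts.items()):
    share = 100 * n / reviews_per_winery[winery]
    print(f"{winery}: {theme} coded in {n} review(s) ({share:.0f}% of that winery's reviews)")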
The analysis included all available postings for the four selected wineries. TripAdvisor reviews of the four wineries in the Okanagan Valley span a six-year period (2014-2020) in order to allow for first-time and repeat experiences to occur. They consist of 2540 reviews overall (MH: 1163, QG: 1097, RD: 90, H: 190). We categorized each review according to the four themes identified by Quadri-Felitti & Fiore (2012; see also Pine & Gilmore, 1999; Thanh & Kirova, 2018), and identified an additional, emerging theme (i.e., social interactions with employees and other visitors). Reviews were analyzed and categorized manually. The analysis proceeded by rating category (positive, neutral, negative) such that relations between review valence, themes, and brand-related responses (e.g., brand image impressions, expressions of brand loyalty) could be examined. Reviews that pertained to several themes were categorized with all themes they related to. Reviews that were ambiguous with regard to their relation to themes were categorized as "other."

Results

Table 1 illustrates the frequency of reviews by experience category and winery, and summarizes the valence of TripAdvisor ratings (i.e., excellent/very good, average, poor/terrible). Across all wineries, esthetics and social interactions were reflected most often, followed by entertainment and education, and finally escape. For MH, esthetics (35%) was identified as the most important theme, followed by social interactions (18.3%), education (4.38%), and entertainment (4.21%). For QG, esthetics was also the most important theme (18%), followed by social interactions (17%), education (2.55%), and entertainment (1.25%). For the two smaller wineries, social interactions emerged as the most frequently expressed theme associated with the winery experience. Most reviews were positive, both at the winery level and overall (80.1% were very good/excellent, 10.47% neutral, and 9.6% negative). Although social interactions refer to the interactions between staff and visitors and amongst visitors themselves, mention of positive interactions with employees frequently emerged in conjunction with the positive experiences visitors reported. We now turn to the discussion of the winery experience categories and consumers' references to brand image and brand loyalty emerging from the TripAdvisor reviews, and provide excerpts to illustrate how consumers reflect dimensions of their winery tourism experience, brand image, and brand loyalty.

Esthetics

Esthetics captures the feelings, concepts, and judgments arising from appreciation of the arts or other objects considered moving, beautiful, or sublime, with a consumer's esthetic experience comprising both sensory and symbolic elements (Charters, 2006). Aesthetic cues, such as open vistas of rolling green hills or the sunny climate common to grape-growing regions, shape consumer experience and can lead to satisfaction and pleasurable memories:

"Walking into the property after you park, statues, … vines, and landscaping, and [the] view of the mountains and valley, take your breath away. The grand arches, outdoor dining area, bell tower… there is a natural emotional response this property, story, and experience invokes." Visited MH from Victoria, Canada.
Although many of the reviews refer to the beauty of the wineries' surroundings and scenery, the physical setting created by architecture and the servicescape (i.e., design elements such as color, layout, architectural style, or type of furnishings; Baker et al., 1992, p. 457) plays an important role in consumers' esthetic experience. This mirrors a retailing context, where environmental features such as color, lights, design, scent, and sound affect consumers' responses (Borghini et al., 2020; Joy & Sherry, 2003; Rinallo et al., 2010). For this reason, wineries, like retailers, use their setting to stage engaging, interactive, and participatory experiences for visitors, as one reviewer noted:

"I love the atmosphere here. Really cool and welcoming. The art on the labels alone is worth checking out and it gets even better when the bottles are opened. A creative space and fun experience. Well worth the visit!" Visited H from Toronto.

Other visitors talked about the beauty and aesthetics of other wineries:

"We were taken into a private and beautiful room for our tasting where we sat at a stunning table that was once a door to a Mexican jail." Visited MH from Courtenay, Canada.

"I visited Quails' Gate with some girl friends as we spent the day touring wineries. This one in particular was so stunning and picturesque, we were in heaven."

TripAdvisor reviews suggest that consumers are an active part of the winery environment and respond to environmental cues. Among the four wineries, MH has the feel of a brand museum, a space wherein "…consumers can build sensory, affective, and cognitive associations with a brand that result in memorable and rewarding experiences" (Hollenbeck et al., 2008, p. 351). MH's brand image arises from visitors' experience of its graphic design, culture, and history through tastings and tour activities. Structured winery tours provide a consistent, high quality, and informative tour experience. In addition, ambient and social factors contribute to atmospheric characteristics that provide a pleasurable experience (Baker et al., 1992), such as the one described by a visitor at MH:

"We booked an hour-long wine tour which was both informative and wonderfully presented. We watched a video explaining how Mission Hill was created, why it was created and what all the wonderfully crafted artifacts and architectural structures represented." Visited MH from Toronto.

Many of the MH reviews reflect immersion in the setting and appreciation of environmental aesthetics. An aesthetic response arising from consumers' appreciation of beauty can be a cognitive, affective, and sensory (physical) response (Charters, 2006). Esthetic experience is shaped by the environment, as well as by other sensory stimuli (e.g., wine, food) delivered within it. Wine tastings and food pairings involve the senses and amount to an aesthetic experience (Charters & Pettigrew, 2005):

"Service was fantastic. View is perfect on a nice day from the patio. Had their cheese platter and I don't think there is anything better with wine. Everything was great. It was a pleasurable experience." Visited QG from Vancouver Island.
Research that applies the concept of embodiment as a means of understanding consumer responses to an aesthetic encounter (Joy & Sherry, 2003) discusses the apprehension of experience using the body, without divorcing sensation, cognition, or emotion. The senses are engaged individually and collectively in the experiences engendered by wine tourism, and often enhanced in settings highly conducive to relaxation. Moreover, a sense of community, experienced as being part of a formal group (such as a winery tour) or in tastings shared with others, similarly primes consumers to welcome sensory experiences as they engage in the process of aesthetic appreciation. Some of the TripAdvisor reviews reveal, however, that despite a positive esthetic experience, some visitors had negative experiences due to social interactions:

"Beautiful view, beautiful landscape - horrible service! Upon arriving we were met by an abrupt, rude, and unfriendly server who ruined our experience with her attitude and unwelcoming nature (tall Blonde). Treating customers respectfully and being friendly shouldn't be a difficult concept. Our group of 11 were all in agreement that we would never return to the vineyard, nor […]"

Table 1 shows that 612 reviews (24.1%) referred to the theme of esthetics.

Escape

Escape as an experience occurs when people immerse themselves in an environment that may be distinct from everyday life (Kirillova et al., 2014, p. 283). How and why tourists perceive a destination as beautiful or unique potentially relates to the degree to which it differs from their everyday environment. Wine tourism serves as a form of rural tourism, which offers consumers escape and respite from familiar urbanized spaces (Carmichael, 2005):

"Wow, what a way to end our Canadian trip. The food was excellent, the view was superb. The staff were delightful and friendly. The setting was idyllic, with [a] panoramic view of the lake, [the] vineyard with the barren backdrop of the hills opposite… let your imagination run wild and transport yourself to Tuscany. It was a true escape from the mundane and the ordinary." Visited QG from Brisbane, Australia.

Winery visitors immersed in esthetic experience escape not only from the ordinary, but also from the familiar. Spacious vistas, peaceful vineyards, and 'old world' elements such as a bell tower summon visions of enchanting vacations. Visitors also form aesthetic evaluations based on a winery's spatial characteristics, with perceptions of large scale and escaping from crowds frequently observed in MH reviews:

"It is a large, open winery that allows for many people to walk around without bumping into one another." Visited MH.

An appreciation of their own experience can encourage consumers to provide recommendations to others and serve as informal brand ambassadors:

"We were visiting friends in Kelowna and they took us to the Mission Hill winery because they felt it was a venue that was a must see for their visitors. They were certainly correct in assuming this because the buildings and grounds are beautiful and interesting. We especially loved the sculptures that were around the grounds in various spots." Visited MH from Saskatoon, Canada.

Table 1 shows that 35 reviews (1.4%) captured the theme of escape.
Education

Many consumers who engage in wine tourism do so with the deliberate intent to develop or enhance their wine knowledge (Charters & Ali-Knight, 2000), and are eager to participate in wine tasting and tour activities: "Of all the wineries I've been to in the area so far this one is by far the most educational." Visited MH from Kelowna.

"[…] paid attention to our individual preferences and got creative with the tasting, he was incredibly friendly and passionate about wine. I learnt a lot from him. I would highly recommend this experience." Visited QG from Saskatoon.

Visitors' desire to learn is associated with a sense of satisfaction with the experience as a whole: "Thoroughly enjoyed our visit to this winery! Very knowledgeable staff who were willing to modify the tasting menu to suit our likes and interests. Ended up buying a number of wines - really like the Foch wines and iced wine too!" Visited QG from Regina, Canada.

The degree to which winery staff members are knowledgeable plays an important role in visitors' judgment of the educational aspect of their experience: "Our guide […] was fantastic and personable! She was obviously very knowledgeable about the wines and the estate and she answered all of our many questions with patience and enthusiasm. Would highly recommend this tour, especially with […] as a guide, and I am already looking into our tour for next summer!" Visited from Prince George.

The evaluation of an experience involves a comparison between visitors' expectations and the wineries' offerings. Educational experiences are less favorable when consumers feel they did not receive the attention of knowledgeable, friendly, and courteous employees. Reviews again suggest that experience providers play an important role in shaping consumers' overall experience: "We were in the Kelowna area for a couple of days; Quails' Gate and Mission Hill were the 2 wineries recommended to us by our Airbnb hosts. We absolutely loved Quail's Gate - friendly and knowledgeable staff, good wine and amazing property! They don't rush you at all, and give you your time and space to enjoy and learn about the wines and their beautiful property. Now Mission Hill on the other hand has an amazing location, but the staff is beyond rude, not friendly at all and very snobby in terms of wine knowledge! You better show up here photoshoot ready or they will give you unwelcoming looks!" Visited QG and MH from Calgary.

Table 1 illustrates that 85 reviews (3.3%) pertained to the theme of education.

Entertainment

A truly entertaining experience can take one's mind off the concerns of everyday life, offering in their place a feeling of play, fun, and well-being (Kozinets et al., 2004). Table 1 shows that the entertainment theme captured 129 of the reviews (5.1%). Entertainment often arises from stories unique to specific wineries, and a closely orchestrated experience, from winery touring to tasting, depends on delivery by attentive employees: "Thoroughly impressed by our tour and 4 course lunch. Our tour guide […] was very kind and exceptionally knowledgeable. The head sommelier […] introduced us to some delectable wines. And, Chef […]'s creations were awe inspiring! It was an experience we will remember for a long time. We highly recommend it!" Visited MH.
"I love how they have changed the event to a sign-up time to see Santa, it made the experience so quick and easy.I would love to see the option to have a digital download, as we like to use that for Christmas cards.The whole event was amazing, with the market, the mulled wine, and the Grinch!We try to make it out every year for this event".Visited QG from Toronto. The literature identified esthetics, escape, education and entertainment as themes related to winery tourism experiences (Quadri-Felitti & Fiore, 2012;Thanh & Kirova, 2018).Table 2 provides additional examples of these four factors. The TripAdvisor reviews also included many references to social interactions with employees and other visitors, and we therefore discuss these next. Social Interactions: Employees and Other Visitors TripAdvisor reviews also reflected an important role of both positive and negative social interactions with employees and other visitors.Consistent with previous research (Charters, 2009;O'Neill et al., 2002), reviews suggest that employees contribute to the creation of memorable brand experiences: "We visited Rollingdale last out of 5 wineries on this tour.And boy, they did not disappoint!Not only were all of the wines absolutely delicious, but our experience was made even more special by the entertaining and insightful tasting provided by our amazing host, […]!He really knew his stuff -and his knowledge, combined with excellent customer service, really made our last winery of the day the best!I ended up buying a few bottles of wine and will no doubt be back for more in the future.Overall, great experience -10/10!"Visited RD from Edmonton. "All the service & servers were first class.All working together to make sure everything was on point!With the wine pairing the Sommeliers description of why each wine was paired -a memorable and enjoyable day for us.My husband is not a regular wine drinker -he has already asked when we can go back!" Visited from the U.K. "Right from the parking lot, it was a terrible experience.Bossey, rude uniformed staff aggressively telling us and others not to take photos.Other staff just chatting amongst themselves, ignoring visitors.The architecture shocked us…built in the style of a neo-Nazi concentration camp.Much more appropriate for a crematorium than a winery."Visited from Calgary, Canada. "Then we got to the shop… we gave up trying to try any wines; we just couldn't get any staff member's attention to even ask if could taste some wines.The staff seemed more interested in wandering around checking the merchandise displays, than helping in the process of wine tasting and wine sales.Quite amazing, and thoroughly disappointing.We guessed that by charging $75 and upwards for a bottle of wine, means that you don't have to sell too many to make the financials work!" Visited from Winnipeg, Canada. 
At times, reviewers had an educational, esthetic, and entertainment experience focus, but it was their interactions with employees that made the difference, to the degree that visitors mentioned employees by name, an indication of a strong and enjoyable connection: "I visited the Mission Hill estate with a group of friends on a stormy day in June, and we were all greatly impressed by the personable service and illuminating conversation provided to us by the mission hill team. Our guide through the wine, […], was engaging and charismatic and provided a high level of service that felt unique and engaging despite the fact that she must be leading the same tour countless times." Visited MH from Vancouver.

"This was our 1st visit to Rollingdale Winery and I'm so glad we selected it. The staff were so friendly and knowledgeable about their wines and the history of the winery. […] was our server, he made our experience that much better. This is a must stop." Visited RD from Prince George.

A sense of connection with fellow consumers is also an important element in shaping a winery experience. Space can be at a premium in popular wineries, as visitors gather in significant numbers, queuing for tours, tastings, gift shop and wine purchases, and restaurant seating. This scenario is particularly common in larger wineries, as reflected in reviews from weary travelers, their patience worn thin by unrelenting crowds: "I was shocked to see how busy this place was, it seemed like an amusement park." Visited MH from Burnaby.

"The pinnacle of our recent getaway to the Okanagan area was this Legacy experience that we had to book a week in advance. All the other time slots for Sunday evening was sold out except for 4:30 pm." Visited MH from Ontario.

These reviews suggest that to minimize negative effects of crowding, wineries would benefit from providing meaningful social interactions that can lead to enjoyable and even extraordinary experiences. Table 1 shows that a relatively high frequency of reviews (525 or 20.7%) related to the theme of social interaction.

Overall, the TripAdvisor reviews suggest multiple factors that consumers convey as part of their wine tourism experiences. In addition to the esthetic (i.e., scenery, setting, sensory), educational, entertainment, and escape aspects of the experience, social interactions (i.e., with staff and other consumers) contributed to a great extent to the experience. The high frequency of reviews pertaining to this social category suggests that interactions with employees or winery owners, and with other visitors, are a meaningful addition to the prior, four-dimensional conceptualizations of winery experience categories (Quadri-Felitti & Fiore, 2012; Thanh & Kirova, 2018).

The following review excerpts (cf. Table 2) further illustrate these themes:
• The view and the architecture are stunning. Our experience began at the winery entrance, where we were welcomed with a glass of sparkling wine.
• Beautiful architecture and scenery.
• We had […] give us a tour around and she shared all her knowledge about the winery.
• From the custom bells in the tower with its neat story to the hill side restaurant looking over vineyards and the lake this winery is a feast for your eyes.
• Enjoyed the tasting. Beautiful property. Would highly recommend going here.
• The setting is gorgeous and the wine tastings are done on a beautiful balcony overlooking the water.
• This is a beautiful winery and the views are gorgeous!
• Went to this amazing spot for a wine tasting 3 years ago. They had incredible wines.
• The atmosphere is fun, approachable & unpretentious.• The vibe in the tasting room was super chill, great music playing, and everyone was quite casual which was nice. • From the landscape, architecture, monuments, art all contributed to that old world feeling you may get from Napa.You knew when you drove through the front gate that you were going to be driving into something special.This after all isn't just a winery. • We were travelling to Big White and decided to stop for lunch at the winery.The food is only surpassed by the views of the lake and area.Had the oysters and artichoke char with their wine pairings.To die for!A mini escape within a larger escape!and he MADE the experience.His knowledge of the wines and even his wit made us laugh.He made us feel welcome and you could tell he loved his job and talking to everyone he met. • […] made it a fun experience.He was very easy to talk to and answered our questions in a way that all of us understood.• Learned a lot while we were there and their wines are delightful.Nothing fancy just good staff and good winewhich is nice to encounter compared to some of the other wineries that were a bit snooty or just were not interested in their job -This winery was friendly and made you feel at home. • […], our server, was both knowledgeable about the wines that he was serving and his jokes were very relatable to us. • A very large wine and souvenir store, products ranging from glasses, to jams/jellies, cookbooks, art work, chocolates, cookware and wine accessories.The outdoor common area is spectacular, catching a concert there would be awesome but tickets are sold out fast (weeks/months in advance).• We got a great and very generous wine flight from a knowledgeable member of staff who was clearly really passionate about the business. • Friends from Calgary are in town, we work at St. Hubertus winery in Kelowna so we thought let's take them to The Hatch and The Hatch's new sister The Black Swift. […] at the Hatch was great and fun too.Tasted some great wines. Then over to Black Swift where […] took awesome care of us.She was so knowledgeable and so great to talk to. • Unfortunately, I don't have many positive things to say about Mission Hill... the building is grandiose and the grounds were well-maintained.We went into the tasting bar and were not greeted by the four staff present, even though there was only one other group in the building.We weren't asked once if we wanted to do a tasting.• We visited Mission Hills, we where the only ones in the tasting room, three staff completely ignored us, obviously they were not interested in letting us taste wine or sell us any of their product. • The wines we sampled were lovely, sadly the gal assigned to us for tasting told us nothing about the wines and when asked questions she did not know the answers.I might give it a try again when I return, hopefully will have a better experience • More of an industrial feel here.Very friendly and knowledgable host.Unfortunately I didn't enjoy the wines they had available. • Beautiful view, beautiful landscape -horrible service!Upon arriving we were met by an abrupt, rude, and unfriendly server who ruined our experience with her attitude and unwelcoming nature (tall Blonde).Treating customers respectfully and being friendly shouldn't be a difficult concept.Our group of 11 were all in agreement that we would never return to the vineyard, nor buy their products in the future. 
In addition to describing and evaluating their experience, the TripAdvisor reviews also indicate that consumers share brand image associations and express brand loyalty in their reviews. We now turn to the discussion of these consumer responses to winery tourism experiences.

Brand Image

In the wine tourism context, managing brand image perceptions based on consumer experience is challenging due to the active participation of consumers in the creation of the experience (Pine & Gilmore, 1998; Quadri-Felitti & Fiore, 2012). Wine tourism experiences are likely inconsistent and reflect the impact of multiple factors that are not all under the experience provider's control. Inferences arising from interactions with other brand users, in particular, may affect brand image while being difficult to anticipate or control (Bellezza & Keinan, 2014). Ultimately, what consumers take away from their winery visit is a composite impression based on the experience dimensions, as well as social interactions: "The wait staff were friendly, helpful and delightful, the wines that were paired with the dishes were perfect & the food was delicious. I thoroughly recommend for a beautiful way to pause and enjoy the beauty the Okanagan has to offer." Visited MH from Calgary.

"The restaurant server freaked out when we tried to move our table six inches to get under the cover from the rain (tables were far apart). The tasting room was also a Gong show. We lined up outside at the appointed time with four other groups. No one came outside to speak with us. The line-up for the cash register goes horizontally through the room. All he did was say the name of the wine, pour it, then served the young/cute ladies in the next group. He forgot to give us our last pick. He never gave any information about what we were drinking." Visited QG from Langley.

As reviews show, wine tourists do not enjoy wineries that seem overly commercialized, with large spaces experienced as sterile and unwelcoming, and thus not conducive to social and psychological interaction (Kozinets et al., 2004). Some dimensions of the experience may be highly positive, but others, such as negative interactions with employees, can detract from the overall impression of the winery: "What an incredible property. The views, the buildings, the gardens, everything is immaculate. The only thing that was not perfect was that some of the staff are a bit snobby and unfriendly." Visited MH from Kelowna.

Consistent with the experience branding literature (Brakus et al., 2009), the TripAdvisor reviews reflected brand associations that included, but also went beyond, brand personality traits. Consumers' descriptions and reviews of the four wineries illustrate that divergent experiences evoke differential brand image associations.

MH's image is strongly anchored in its architectural and aesthetic components, including its lush vineyards and visually imposing twelve-story bell tower, redolent of ancient European culture. Its open vistas contribute to the winery's brand as an aspirational destination: a place to attend outdoor concerts and culinary workshops, and to enjoy lavish dining experiences: "The entrance to the park is beautifully landscaped with rose beds…. It all is very well kept. Observant business people will immediately sense that Mission Hill is primarily not a winery, but a tourist attraction." Visited MH from Germany.
Based on the TripAdvisor reviews, the MH brand is associated with beauty and grandeur, but for some visitors, brand associations are marred by perceived overt commercialization; notably, several referenced MH as a tourist attraction rather than a working winery. QG similarly offers Edenic views and high-end dining experiences. As a medium-sized winery, however, it provides a less overtly commercialized experience: "The most amazing place for lunch, brunch, or dinner everything 5 star and food some of the best I have ever had. Incredible attentive staff and the winery has the most fabulous views ever, been to several Italian and French vineyards but this place exceeded them all." Visited QG from Stirling, UK.

In contrast to MH, QG offers a more intimate feel to consumers, as might be expected given its far smaller size. Its brand image is associated with beauty, friendliness, knowledge, being laid back, and comfort.

As a genuinely small winery, RD offers a unique experience. Unlike many other wineries in the area, RD's wine shop is located inside its cellar. Its building's small size embodies a homey atmosphere for visitors. With no restaurant, café, or gift shop, the winery's sole amenity for guests is a picnic area. "Among the few wineries that we visited during our stay in the Okanagan valley, it was by far my favorite. First, for their authenticity: they have not turned their vineyards into [a] mega shopping center as several of their neighboring competitors [have]. And of course, for the quality of their products: we tasted five ice wines, and they were all excellent." Visited RD from Montreal, Canada.

RD demonstrates that memorable wine tourism experiences are possible without high-end aesthetic elements. Based on reviews, RD's rustic ambience attracts many loyal consumers. The brand is associated with authenticity, organically grown grapes, and a lack of pretense.

The small size of H encourages its open embrace of eccentricity. As the winery website explains: "'the hatch' is the absolute culmination of the wants and dreams of a select, rare and bizarre cadre of people who are unconcerned about… convention… [it is] the direct confluence of all our favorite arts; the liquid arts, the visual arts, and the living arts." (https://thehatchwines.com). Reviewers concur: "Now I'm not the type to give 5 stars to anything all that easy, but this place I'd give 6 to if I could! The tasting room location is beautiful, funky and inviting. From the get-go I knew I was in for something unique and different; unlike any other winery in BC for sure. The staff there are wonderful, and friendly… And the labels! Coolest wine labels ever, apparently they're original…"

Brand associations pertaining to H are cool, unique, crazy, funky, fun, friendly, smart, humorous, and inviting. H has a unique brand positioning quite distinct from the more traditional approach of many wineries in the area.

Table 3 summarizes the brand image associations reflected in the TripAdvisor reviews. In conjunction with the reviews included here, it demonstrates that the wineries differentiate themselves from their competitors, with some of the wineries showing a much more strongly differentiated position. MH, for example, consistently evokes strong associations of sophistication (and market power), whereas H stands out in terms of its non-conformity, which is nonetheless perceived very positively.
Brand Loyalty

Attitudinal brand loyalty was also evident in the TripAdvisor reviews, and expressed in terms of intentions to repurchase or revisit, recommendations to visit, positive word-of-mouth, and positive attitude. "Dinner/Lunch was outstanding with staff that made us feel at home. We will come back." Visited MH from California.

Brand loyalty involves the desire to repeat an experience (Chaudhuri & Holbrook, 2001), and many reviewers indicated that they visited wineries or purchased wines multiple times after an initial visit: "Quails' Gate is a favorite meeting place to catch up with friends who come to holiday in the Okanagan. The staff are so friendly and attentive. We come here so often." Visited QG from Kelowna.

Despite their large size, both MH and QG foster brand loyalty. Nonetheless, consumer resistance to overt commercialism also appeared, especially regarding large wineries. Such conflicts result when companies attend to "their internal interests rather than seek to meet consumer wants and needs" (Holt, 2002, p. 70). "We were very excited to visit Mission Hills on our recent trip to the Okanagan and of course the scenery blew us away, but there's where the magic ended. Underneath all that beauty lies just a business, an institution that doesn't seem to care about your overall experience. Unlike other wineries we ad been to, we didn't get any attention." Visited MH from Vancouver.

In other reviews, visitors express a preference for smaller wineries, such as RD, and declare loyalty, experiencing the wineries as treasures to be shared with friends: "Unlike Mission Hill, a very glossy and sharpened tourist destination, resembling a monastery with its walled estate and stone structures, Rollingdale is rustic, unpretentious. As it is smaller and more Rollingdale encourages more dialogue with the staff…" Visited RD from Prince George, Canada.

Based on TripAdvisor reviews, consumers feel strong loyalty toward wineries that provide an experience beyond what they feel the market usually offers, one based not only on ambience, the physical setting, and exterior and interior design, but also on interactions with winery employees. "We received a tasting in the barrel room, because it is a small winery and since the winemaker himself was there, he allowed us to do a tasting right from the barrels!! Since it was a special occasion, he even signed one of the bottles I bought (although it was a glass pen not a sharpie, so it rubbed off immediately, but still was very cool!)" Visited RD from Calgary.

Satisfied visitors further became loyal consumers, and actively recruited members of their personal network. Interacting with the winery owner and winemaker and the physical environment led to the perception that the winery was a genuine and unique place, and not limited to commercial intent (Debenedetti et al., 2013): "We always take our guests on a winery tour when they…"

Table 4 provides examples of brand loyalty expressed in TripAdvisor reviews that rated the winery as very good or excellent. Brand loyalty manifests in terms of actual behavior (i.e., purchase of wines), behavioral intentions (i.e., will come back), likelihood to recommend (i.e., must visit, highly recommend) or positive attitude (i.e., visit was awesome). Table 5 provides additional examples of expressions of positive attitudinal responses to the winery experience.
Overall, TripAdvisor reviews of the four Okanagan Valley wineries collectively reflect four aspects of experience (i.e., esthetics, education, entertainment, escape; Quadri-Felitti & Fiore, 2012; Thanh & Kirova, 2018), albeit to different degrees, and the integration of an additional aspect (i.e., social interactions). Consumer reviews also reflect distinct brand image associations across wineries, as well as brand loyalty.

General Discussion

TripAdvisor reviews of four wineries in British Columbia's Okanagan Valley reflected five aspects of the wine tourism experience, and point toward the emergence of brand image and brand loyalty. Despite individual idiosyncrasies (one reviewer's high-end luxury experience was another's hyper-commercialized letdown akin to an amusement park), overall patterns emerged: Consistent with the literature, esthetics, escape, education, and entertainment played an important role in consumers' experience. Above all else, however, social interaction between winery employees and visitors, and among visitors themselves, influenced experience. Brand image arose from a confluence of elements, from a winery's particular ambience to the availability of personal space (i.e., effective crowd control and adequate parking), the design of wine labels and other graphic images, the quality of menu options, to the soundscape (i.e., background music). Brand image plays an important role in creating consumer loyalty, especially in situations that make brand differentiation based on tangible quality features difficult (Keller, 2020). Creative use of physical design (e.g., interior and landscape design) in a winery setting generated positive impressions that enhance visitors' experience and contributed to a differentiated brand image.

Both positive and negative reviews are influential in brand image construction. Reviewers appeared most likely to embrace an escape experience at MH, the largest of the Okanagan Valley's wineries, with a worldwide reputation not only for wines but also for food, architecture, and overall grandeur. Although MH lacks the characteristics of a homey experience (e.g., intimate settings, a casual atmosphere, unimposing design), its aesthetic appeal makes it one of the most popular wineries in the Okanagan Valley.

In addition to brand image, the TripAdvisor reviews also pointed toward brand loyalty as an outcome of consumers' winery experiences. Brand loyalty manifests in terms of increased revisit and repurchase intention, and given the positive revenue outcomes associated with brand loyalty, its formation deserves managerial attention. The reviews indicate that both esthetics (i.e., welcoming, homey atmosphere) and social interactions (i.e., friendly, warm employees) play a critical role in the development of brand loyalty. Overall, this research shows that consumers develop brand image and brand loyalty based on positive and unique winery visit experiences.

Theoretical Implications

This research extends the 4E framework of consumer experience by identifying social interactions between visitors and employees as well as other consumers as an important factor contributing to consumer experience. The integration of social interactions in frameworks capturing winery experiences has the potential to increase the predictive power in regard to winery brand-related outcomes in survey-based empirical studies involving large consumer samples. This extended framework of winery experiences can also lead to further integration of the winery experience and the services literature (e.g., Charters et al., 2009; O'Neill et al., 2002), which recognizes the importance of consumer-employee interaction (Bitner, 1992), as well as the consumer research literature on the impact of social presence of other consumers (e.g., Argo et al., 2005).
The second theoretical contribution of this research lies in initial evidence of the impact of winery experiences on consumers' winery image perceptions and subsequent attitudinal and behavioral brand loyalty (e.g., positive attitude, recommendation likelihood, revisit and repurchase intentions) toward a specific winery.This links the consumer experience literature with the brand equity literature, which posits that brand image perceptions based on product-related (i.e., wine and food) and non-product related (i.e., environmental setting, interior design of the winery, employee behavior) attributes are important components of consumers' brand equity (Keller, 1993).An integration of these two literature streams would therefore be fruitful. Managerial Implications This research indicates that several aspects of an experience influence consumers' brand image and loyalty.In order to develop positive brand image and brand loyalty, wineries benefit from attention to the delivery of esthetics, escape, education, entertainment, and social interactions. Although each of these aspects is important, resource limitations may lead wineries to focus on one or several aspects that require improvement (e.g., based on TripAdvisor reviews), lead to the most differentiated brand positioning (e.g., based on non-conformity with established schemas, as exemplified by H), or are an aspect that is most important to the winery's desired target market.For managerial practice, the findings of this study suggest that the esthetics and social interaction aspects of the winery experience are the most important in consumers' reflections on their experience.Investments in these two experience aspects are therefore most likely to affect consumer experience positively. Consumers' interests, expectations, and levels of wine expertise play a role in the co-creation of the experience (Charters & Ali-Knight, 2002), as consumers are active participants (Pine & Gilmore, 1998).The findings of this research highlight the importance of recognizing and addressing the motivations and perceptions of different market segments (Charters & Ali-Knight, 2002) and of crafting experiences that reflect the differential weight consumer segments attach to experience dimensions (e.g., education versus entertainment; Fountain, 2018).In this research of Okanagan wineries, for example, education and tastings emerged as very important experience, and could therefore be further developed and targeted toward specific market segments.The findings of this study suggests that once wineries have achieved a desirable level of esthetic and social interaction experience among their target consumers, dedication of resources to develop or enhance educational experiences can further strengthen brand image and brand loyalty. 
This research strongly suggests that winery experiences involve both positive and negative social interactions. Social interactions constitute an experience aspect that is perhaps the most difficult to control and standardize. In reference to social interactions with employees, many reviews mentioned the competence, ease, good humor, and patience with which employees imparted wine knowledge. This suggests that a focus on hiring knowledgeable staff and on employee training is an important, but perhaps frequently overlooked, aspect of creating winery experiences focused on education. Winery employees vary not only in their wine knowledge, but also in their social skills and behavior, and this influences consumers' experience. The impact of employee behavior on consumer experiences reflected in TripAdvisor reviews therefore points toward the need to encourage positive employee behavior in order to maintain or increase consumer satisfaction (Kattara et al., 2008) with the winery experience. This study suggests that negative social interactions can preclude positive experiences, even if consumers perceive other experience dimensions positively. Investment in employee training and motivation is thus extremely important to ensure that the positive experiences emanating from esthetics, entertainment, education, and escape are strengthened.

Visitors themselves also add complexity to social interactions. For example, the perceived valence of social interactions is influenced by crowding that overwhelms some visitors. The creation of positively valenced winery experiences therefore requires consideration of managing visitor numbers, as well as consumers' demands and expectations. Throughout the COVID-19 pandemic, many wineries had to limit the number of visitors due to public health regulations. A relevant question moving forward may be whether a cap on visitor numbers results in more positive consumer experiences, while not negatively impacting revenues generated by winery visits. Consumer demands and expectations regarding the winery visit could be addressed a priori by providing relevant information regarding products, services offered, and the structure of a winery visit via winery websites, direct communication to consumers signing up for a visit, or social media. For managerial practice, this study suggests that measures to reduce perceived crowding (e.g., an adequate number of parking spaces, the physical layout of the winery and servicescape, directional flow of visitors, management of expectations) could be very beneficial in enhancing consumers' winery visit experience.
This research has implications not only for the design and continuous improvement of visitor experiences, but also for engagement with social media platforms, such as TripAdvisor.Storbacka and colleagues (2012) propose that TripAdvisor's business model offers opportunities for value co-creation: First, TripAdvisor facilitates the generation of content by travellers and increases the value of this content by linking it to information and services provided by businesses or destinations (Storbacka et al., 2012), thus giving visitors the opportunity to create value for other visitors as well as the wineries that created memorable and positive experiences.Second, TripAdvisor generates revenue by offering businesses opportunities to derive market intelligence, provide consumer service, manage their online reputation and implement targeted advertising campaigns (Storbacka et al., 2012); this has implications for continuous improvement and earnings.Third, TripAdvisor provides the platform through which value is co-created in terms of the resources (e.g., word-of-mouth, information) wineries and visitors can exchange (Storbacka et al., 2012).This model of value co-creation highlights that experiences involve an exchange of contributions of both wineries and visitors, but also that social media allows co-creation before or after actual winery visits.For managerial practice, this research has two implications with regard to social media: First, consumer reviews are a valuable source in tracking consumers' winery visit experience in order to assess and continuously improve relevant aspect of the experience.Second, social media also allows wineries to further contribute to consumers' experience with the brand by appropriately responding to criticism, or by expressing appreciation for positive feedback provided by visitors.The findings regarding the importance of social interactions to consumers' overall experience suggests that positive social media interactions that extend beyond the physical visit of the winery could further contribute to a strong, positive, and unique brand image, as well as brand loyalty. Limitations and Future Research This research focused on four wineries in the Okanagan Valley of British Columbia, and while they capture a range of experiences offered in this region (e.g., traditional and organic wine production, small versus large size), the nature of winery tourism experiences likely differs from other wine regions.The esthetic value derived from the natural scenery, for example, may not emerge as strongly else-where, whereas other experience dimensions may play more of a role.Future research could therefore include wineries from several distinct regions to gauge the relative influence of the environmental setting on consumers' experience. Another limitation of this study relates to the uneven distribution of reviews and review length across the wineries examined in this research.The quantity of TripAdvisor reviews of the four wineries appears to correlate with the marketing reach of the winery.Reviewers of large wineries (particularly MH) tended to produce longer and more detailed reviews.This may reflect higher education and income levels of this winery's target markets.Future research could shed more light on consumer and winery characteristics that influence review length, as well as the subsequent impact of review length on outcomes such as perceived credibility or helpfulness of the review. 
Third, it is uncertain whether TripAdvisor reviews are representative of winery visits in general. There may be a self-selection bias toward consumers who are willing to share their experiences, either because these experiences were highly memorable in a positive or negative sense, or because these consumers are more willing to engage in electronic word-of-mouth. To address this concern, future studies could complement analyses based on reviews with surveys administered among winery visitors on site to capture evaluations of consumers who are less inclined to share their experiences on social media.

Another limitation associated with the use of TripAdvisor reviews is that, due to data confidentiality, data on age, nationality, income, or level of wine knowledge of consumers providing reviews were not available for analysis. This precluded an investigation of the link between demographic factors (e.g., age, income, education) and individual difference variables (e.g., wine knowledge, involvement, variety seeking) and visitors' responses to wine tourism experiences. An examination of these influences requires a targeted survey of winery visitors measuring consumer experiences as well as relevant consumer demographics and individual difference variables.

Despite these limitations, this research elucidates how consumers experience brands, and reveals that esthetics, escape, entertainment, and education, along with product quality and sustainability, as well as social interactions, contribute to the formation of brand image and brand loyalty. These insights open avenues for a future exploration of how these experiences differ from those the brand had hoped to provide. Given that brand image and brand loyalty are key drivers of brand performance, research on the design and choreography of immersive sensory experiences can prove invaluable.

This qualitative study can serve as a basis for a larger-scale quantitative consumer survey to test the strength of associations between the 4E's and social interactions, subsequent brand image perceptions and brand loyalty, and potential interactions between experience aspects as well as the mitigating role of negative valence on brand image and brand loyalty. Inclusion of a larger sample of wineries and additional variables to capture the influence of firm-level characteristics (e.g., winery size, longevity, presence of objective cues to quality and credibility, such as prizes) could shed more light on how winery characteristics affect consumer experience and subsequent consumer responses.
An additional avenue for future research lies in an examination of the influence of consumer attributions on consumer experience of products and services (Folkes, 1984; Folkes et al., 1987). Consumer attributions relate to perceptions of the consumers' personal control over the outcome, stability (i.e., predictability), and causes internal and external to the firm (McAuley et al., 1992). In service contexts, consumer attributions have been found to predict consumer satisfaction and subsequent repurchase intentions and loyalty (Oliver & DeSarbo, 1988; Tsiros et al., 2004; Weiner, 2000). In a winery context, external cause attributions can relate to the weather and other visitors encountered throughout an experience, whereas internal cause attributions can relate to the winery's offering, design of landscapes and space, or employee behavior. Brand image perceptions and brand loyalty are likely more strongly influenced by stable and internal cause attributions.

Figure 1. Conceptual framework linking consumer experience, brand image, and brand loyalty
Table 1. Frequency of themes and valence by winery (2014-2020). Note: Valence here represents the valence of the review as indicated on TripAdvisor.
Table 2. Examples of 4E's and negative/positive social interactions
Table 3. Brand image of Okanagan Valley wineries reflected in TripAdvisor reviews
Table 5. Examples of positive attitudinal statements in consumers' five-star ratings
2021-12-21T16:32:11.000Z
2021-12-18T00:00:00.000
{ "year": 2021, "sha1": "9c41272a40d1db8d441eb52146fd166ea4e953d2", "oa_license": "CCBY", "oa_url": "https://wbcrj.scholasticahq.com/article/30210.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "904b2095368530f1faa31b891bb0855e30e7d316", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
264677359
pes2o/s2orc
v3-fos-license
Use of dermoscopy in the diagnosis of temporal triangular alopecia*

Temporal triangular alopecia, also referred to as congenital triangular alopecia, is an uncommon dermatosis of unknown etiology. It is characterized by a non-scarring, circumscribed alopecia often located unilaterally in the frontotemporal region. It usually emerges at ages 2-9 years. Alopecia areata is the main differential diagnosis, especially in atypical cases. Dermoscopy is a noninvasive procedure that helps distinguish temporal triangular alopecia from alopecia areata. Such a procedure avoids invasive diagnostic methods as well as ineffective treatments.

INTRODUCTION

Temporal triangular alopecia (TTA), also called congenital triangular alopecia, was first described by Sabouraud in 1905 as "alopecia triangulaire congenitale de la temp".1,2,3 The term congenital triangular alopecia has become inadequate because most cases arise at ages 2-9 years and the disease may even manifest itself in adulthood.2,3,4 TTA is an uncommon dermatosis of unknown etiology. It usually emerges sporadically. Reports of familial cases suggest the presence of a para-dominant inheritance.3,5 It is also postulated that TTA may be related to mosaicism. Autosomal dominant inheritance is possibly present in cases associated with syndromes.2,5,6 TTA is clinically characterized by a rounded, oval, triangular or, more commonly, spear-shaped area of alopecia located in the frontotemporal region.2,4,5 The main differential diagnoses are alopecia areata, trichotillomania, traction alopecia and congenital aplasia cutis.4 Many cases of TTA are diagnosed and treated as alopecia areata, especially when the area of alopecia occurs outside its usual location or when it arises at a later stage.4,5 Thus, scalp dermoscopy is an indispensable tool for a correct diagnosis.2,4 Its use reduces the need for invasive diagnostic procedures and avoids unnecessary treatments.

CASE REPORT

A three-month-old girl presented with an area of alopecia in the right temporal region from birth. Physical examination revealed a well-demarcated, spear-shaped area of alopecia measuring 4.0 x 1.5 cm (Figure 1). The overlying skin was normal (absence of atrophy, desquamation or inflammation). Dermoscopy (DermLite I; 3Gen) revealed normal follicular openings with vellus hairs surrounded by terminal hairs (Figure 2). The child's parents denied the occurrence of previous traumatic events and having similar cases in the family. After clinical and dermoscopic diagnosis, the parents were reassured about the benign nature of the disease and decided, together with the physician, on expectant management of the disease.

DISCUSSION

TTA is a non-inflammatory and non-scarring form of alopecia that remains stable throughout life.2,3,4 It is described by many authors as a rare dermatosis.2 Its incidence has been estimated at 0.11% by Hernandes et al.2 However, it is believed that TTA is a common but underdiagnosed disorder, because only a few affected persons seek medical care and many patients are misdiagnosed.4,7 In most cases, TTA becomes clinically evident between ages 2 and 9 years.2,4,7 It may occur at birth or later during adulthood.8
TTA affects mainly white patients, and both sexes are equally affected.3,4 The area of alopecia is usually asymptomatic, but some patients report dysesthesia.4 It may affect other areas of the scalp, including the occipital region, and it may also be bilateral.9 Sometimes there is a small fringe with terminal hairs at the front edge of the lesion and even a tuft of hair at the center of the lesion.3,5,6 Some diseases have been associated with TTA, such as Down syndrome, iris nevus syndrome, phakomatosis pigmentovascularis, congenital heart disease, bone and tooth abnormalities, mental retardation and congenital aplasia cutis.6,10

Histopathology shows a normal number of follicles with a predominance of vellus hairs and rare terminal hairs on the superficial dermis.2,4 Inflammatory and/or scarring processes are not observed.4

Alopecia areata is the main differential diagnosis of TTA.2,4,5 Dermoscopy helps differentiate between these two diseases, avoiding the performance of biopsies to confirm the diagnosis.4,5 Dermoscopic findings include normal follicular openings with vellus hairs covering the area of alopecia and terminal hairs on the outskirts of the lesion.4 Black and/or yellow dots and 'exclamation mark' hairs, which are present in alopecia areata, are absent in this dermatosis.2,4

In a study conducted by Inui et al. in 2011, the authors stressed the importance of the diagnostic criteria of TTA and proposed the following criteria: I) triangular or spear-shaped area of alopecia involving the frontotemporal region of the scalp; II) dermoscopy reveals normal follicular openings with vellus hairs surrounded by normal terminal hair; III) dermoscopy shows absence of yellow and black spots, dystrophic hairs, and decreased follicular openings; IV) persistence of no significant hair growth after dermoscopic and clinical confirmation of the existence of vellus hairs.4

There is no effective treatment for TTA and, in most cases, there is no need for therapeutic intervention.2,7,8 Educating the parents about the benign nature of this dermatosis is essential. Hair implant and surgical excision of the lesion are the main therapeutic proposals in cases with significant aesthetic and emotional injury.7,8,9,10 Surgical exeresis is limited to cases with small areas of alopecia. Hair implant is considered by some authors to be the first-line treatment for TTA. Studies show the effectiveness and good cosmetic outcomes obtained by using this treatment modality. However, further studies with long follow-up periods need to be conducted.7,8,10 Bang et al. described the first successful case using topical minoxidil. Nevertheless, there is no scientific evidence confirming the efficacy of such treatment.7,8

Dermoscopy is a noninvasive tool that aids in the differential diagnosis of TTA.4 This method avoids invasive diagnostic procedures and ineffective treatments.2,4

FIGURE 1: Clinical examination: non-scarring, spear-shaped area of alopecia. No signs of inflammation. No changes in consistency or appearance of the overlying skin.
2017-09-07T13:29:30.445Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "c63e9409381ba2d209f3c9e7dfd4c378c5b96682", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/abd/a/r7X6SnQSFVkjqLqZwCrnKbh/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c63e9409381ba2d209f3c9e7dfd4c378c5b96682", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
210920655
pes2o/s2orc
v3-fos-license
DALC: Distributed Automatic LSTM Customization for Fine-Grained Traffic Speed Prediction Over the past decade, several approaches have been introduced for short-term traffic prediction. However, providing fine-grained traffic prediction for large-scale transportation networks where numerous detectors are geographically deployed to collect traffic data is still an open issue. To address this issue, in this paper, we formulate the problem of customizing an LSTM model for a single detector into a finite Markov decision process and then introduce an Automatic LSTM Customization (ALC) algorithm to automatically customize an LSTM model for a single detector such that the corresponding prediction accuracy can be as satisfactory as possible and the time consumption can be as low as possible. Based on the ALC algorithm, we introduce a distributed approach called Distributed Automatic LSTM Customization (DALC) to customize an LSTM model for every detector in large-scale transportation networks. Our experiment demonstrates that the DALC provides higher prediction accuracy than several approaches provided by Apache Spark MLlib. Introduction In the past decade, several approaches for short-term traffic prediction have been proposed. They can be classified into parametric approaches and nonparametric approaches. The autoregressive integrated moving average (ARIMA) model is a widely used parametric approach [1], in which the model structure is predefined. The nonparametric approaches include k-nearest neighbors method, artificial neural network, recurrent neural network (RNN), etc. As a type of RNN, long short-term memory (LSTM) [2] is superior in predicting time series problem with long temporal dependency such as traffic prediction. Prior study [3][4] [5] [22] have demonstrated that LSTM provides satisfactory prediction accuracy. However, to our knowledge, none of existing LSTM-based prediction methods is designed to provide fine-grained traffic prediction for large-scale transportation networks where numerous detectors are geographically deployed to collect traffic data. The success of LSTM depends on choosing an appropriate hyperparameter configuration, including the number of hidden layers and the number of epoch [6], since the configuration determines if LSTM can achieve satisfactory prediction accuracy or not. However, determining such a configuration is usually done manually. Each time a different configuration is used for training an LSTM model, and many times of retrainings might be required until the LSTM model provides satisfactory prediction performance. This process might be time consuming and energy-inefficient, and such a process will be even longer when providing the above-mentioned fine-grained traffic prediction for large-scale transportation networks. To address the above issues, in this paper, we formulate the problem of customizing an LSTM model with an appropriate hyperparameter configuration for a single detector into a Markov decision process and then employ Value Iteration [7] to suggest the policy that consumes the least expected training time. We then incorporate the policy into an automatic LSTM customization (ALC) algorithm and further take prediction accuracy into account to automatically customize an appropriate LSTM model for a single detector. 
More specifically, ALC will keep training the LSTM model by preferentially following the policy suggested by Value Iteration until the prediction accuracy of the LSTM model reaches a predefined threshold or until the prediction accuracy cannot be further improved by all possible choices. In order to provide fine-grained traffic speed prediction, we propose that each detector-period combination (DPC), i.e., each detector in a different time period, should have its own LSTM model. In addition, to effectively customize LSTM models for all DPCs in large-scale transportation networks, we introduce a distributed approach based on the ALC algorithm, named DALC. Note that the first letter D stands for "distributed". In the DALC, each DPC will have its own LSTM model, and the jobs for customizing LSTM models for all DPCs will be executed by a set of computation nodes in a parallel manner. To demonstrate the effectiveness of DALC, we conduct an experiment to compare DALC with several distributed machine learning approaches provided by Apache Spark MLlib [8]. The results show that DALC provides the best prediction accuracy and is able to achieve fine-grained traffic speed prediction for large-scale transportation networks in a distributed and parallel manner. The rest of the paper is organized as follows: Sections 2 and 3 describe the background of LSTM and related work, respectively. In Section 4, we introduce the details of ALC and DALC. Section 5 presents the experiment result. In Section 6, we conclude this paper and outline future work. LSTM LSTM [2] is a special type of RNNs with ability to learn long-term dependencies and model temporal sequences. The architecture of LSTM is similar to that of RNN except that the nonlinear units in the hidden layers are replaced by memory blocks. Each memory block contains one or more self-connected memory cells to store internal state. Each memory block also contains three multiplicative units (input, output and forget gates) to manage cell state and output using activation functions. These features enable LSTM to preserve information in the memory block over long time lags. In order to optimize the prediction performance of LSTM, it is essential to choose appropriate hyperparameters, including the number of hidden layers, the number of hidden units, the number of epochs (Note that an epoch is defined as a complete pass through a given dataset [6]), learning rate, activation function, etc. Determining the above hyperparameters often depends on a trial-and-error approach and lots of practices and experiences. In this paper, we focus on determining a configuration consisting of two hyperparameters, i.e., the number of hidden layers and the number of epochs, since these two hyperparameters are influential on determining both training time and prediction accuracy. Our goal is to automatically customize an LSTM model with an appropriate hyperparameter configuration for a detector such that the prediction accuracy can be as satisfactory as possible and the corresponding time consumption can be as low as possible. Related work Existing traffic prediction approaches can be classified into two categories: parametric approaches and nonparametric approaches. Parametric approaches are also called model-based methods in which the model structure has to be determined in advance based on some theoretical assumptions, and the model parameters can be derived with empirical data. 
The autoregressive integrated moving average (ARIMA) model is a widely used parametric approach [9], with which Ahmed and Cook [1] predicted short-term freeway traffic flow and Hamed et al. [10] forecasted traffic volume in urban arterial roads. Many ARIMA-based approaches were then developed to enhance prediction accuracy, including Kohonen-ARIMA [11] and seasonal ARIMA [12].

Different from parametric approaches, nonparametric approaches do not require a predefined model structure. Typical examples of nonparametric approaches include k-nearest neighbors (k-NN), artificial neural networks (ANN), RNNs, hybrid approaches, etc. In 1991, the k-NN method was used by Davis and Nihan [13] to forecast freeway traffic. After that, several variants of the k-NN method were introduced for traffic prediction. For instance, Bustillos et al. [14] proposed a travel time prediction model based on n-curve and k-NN methods. Lv et al. [9] proposed a deep learning approach with a stacked autoencoder model to learn generic traffic flow features for traffic flow prediction. The greedy layerwise unsupervised learning algorithm is applied to pre-train the deep network, and then a fine-tuning process is used to update the parameters of the model so as to improve prediction accuracy. Ma et al. [3] employed LSTM to forecast traffic speed using remote microwave sensor data. Their experiment results, compared with other recurrent neural networks (including Elman NN, Time-delayed NN, Nonlinear Autoregressive NN, support vector machine, ARIMA, and the Kalman Filter approach), show that LSTM provides superior prediction accuracy and stability.

Different from all the above work, in this paper, we focus on providing fine-grained traffic speed prediction for large-scale transportation networks in a distributed and parallel manner. Customizing an LSTM for a single detector is automatically done by the proposed ALC algorithm. In addition, to effectively customize LSTMs for the enormous number of detectors in the target large-scale transportation networks, we introduce a distributed approach based on the ALC algorithm.

LSTM customization for a single detector

In this section, we introduce how to convert the LSTM customization problem for a single detector into a finite Markov decision process (MDP), and then present the ALC algorithm to achieve automatic customization.

Markov decision process formulation

As mentioned earlier, this paper focuses on customizing an LSTM model for a detector in terms of a two-hyperparameter configuration: the number of hidden layers and the number of epochs. For every detector, its LSTM can have up to H hidden layers, and the maximum allowed training for every different configuration is E epochs, where H ≥ 1 and E ≫ 1. Fig. 1 illustrates the state transition graph for the LSTM customization problem. Each state is a large oval labelled by the number of hidden layers and the number of epochs, except the start state, which is labelled start. We define an LSTM model under configuration ⟨h, k·e⟩ as state s_{h,k·e}, implying that the LSTM model has been trained with the configuration of h hidden layers and k·e epochs, where h ≤ H, e is a fixed integer number (e.g., 100), and k = 1, 2, …, E/e. For instance, when the state is s_{1,e}, it means that the LSTM model has been trained with configuration ⟨1, e⟩. Note that the number of epochs is assumed to start from e, regardless of the value of h. We define the action set for state s to be A_s. Given the current state s_{h,k·e}, two actions, denoted by small solid circles in Fig.
1 i.e., adding one more hidden layer with the initial number of epochs. If the answer is true, the state will transit to /F7,1 . Otherwise, the state will still be /,0•1 . Table 1. Taking action 9 ;,<•= &9 ;,(<@A)•= in state /,0•1 means that the LSTM model needs to be retrained with configuration 〈ℎ, ( + 1) • 〉 , i.e., ℎ hidden layers with ( + 1) • epochs. Hence, the corresponding time consumption is ( + 1) • • / where / is the time for executing an epoch when the number of hidden layers is ℎ. On the other hand, taking action 9 ;,<•= &9 ;@A,= in state /,0•1 means that the LSTM model needs to be retrained with configuration 〈ℎ + 1, 〉, i.e., ℎ + 1 hidden layers with epochs. Therefore, the corresponding time consumption is • /F7 where /F7 is the execution time per epoch when the number of hidden layers is ℎ + 1. The ALC algorithm To find an appropriate hyperparameter configuration for a detector such that the resulting LSTM model is able to provide satisfactory prediction accuracy with low time consumption, we propose the ALC algorithm based on the state transition graph shown in Fig. 1 and Value Iteration [7]. Value Iteration is an iterative method of computing an optimal MDP policy and its value. Let Q ( , ) be the action-value function assuming there are steps to go from state by taking action . Let Q ( ) be the state-value function assuming there are steps to go from state . where is a discount rate, which equals to 1 in this paper so that all the costs can be accumulated as they are. Fig. 2 shows the ALC algorithm. By starting with an arbitrary function ^ (i.e., = 0) and using the above two equations to get the functions for + 1 steps to go from the functions for steps to go (i.e., working backward), the ALC algorithm calculates Q ( ) for each state and then checks if | Q ( ) − QV7 ( )| is larger than for all the states (see lines 3 to 7), where is a predefined threshold with a positive value. If the answer is yes, implying that the difference between the two expected time consumptions is more than we accept, the ALC algorithm terminates its searching. As line 9 shows, for each state , the action leading to the least expected time consumption will be stored as ( ), i.e., ( ) is the action suggested by Value Iteration to take when the state is . The ALC algorithm Input: The training data and testing data associated with a detector Output: An LSTM model with an appropriate hyperparameter configuration for the detector Procedure: Following all suggested actions can lead the total time consumption to the minimum, but it does not guarantee that the resulting configuration can achieve satisfactory prediction accuracy. On the other hand, keep searching for a configuration and use it to retraining the LSTM might be able to keep enhancing the prediction accuracy, but it might take a very long time. To avoid unnecessary time consumption, the ALC algorithm keeps searching for configurations that can enhance prediction accuracy and terminates when the LSTM under a configuration provides satisfactory prediction accuracy or when the prediction accuracy cannot be improved by all possible choices. The detailed process is as follows: The algorithm first uses configuration 〈1, 〉, i.e., one hidden layer with epochs, to train an LSTM model (see line 12). 
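The backward value computation just described (lines 3 to 9 of Fig. 2) can be illustrated with a minimal Value Iteration sketch over the configuration-state graph. This is only an illustration, not the authors' implementation: the maximum depth H, the number of epoch blocks J, the epoch block size E and the per-epoch training times t are made-up example values, and the two action names are labels invented for the sketch.

```python
# Minimal sketch of Value Iteration over the LSTM-configuration states.
# All numeric settings below are illustrative assumptions, not values from the paper.

GAMMA = 1.0                      # discount rate; costs are accumulated as raw time
EPSILON = 1e-6                   # convergence threshold on the value function
E = 100                          # epochs added per training step (the fixed integer e)
H, J = 3, 4                      # maximum hidden layers / epoch blocks (assumed)
t = {1: 2.0, 2: 3.5, 3: 5.5}     # assumed seconds per epoch for h hidden layers

states = [(h, j) for h in range(1, H + 1) for j in range(1, J + 1)]

def actions(state):
    """Available actions with their (next_state, time_cost)."""
    h, j = state
    acts = {}
    if j < J:                    # retrain the same depth with (j+1)*E epochs
        acts["more_epochs"] = ((h, j + 1), (j + 1) * E * t[h])
    if h < H:                    # add a hidden layer and restart with E epochs
        acts["add_layer"] = ((h + 1, 1), E * t[h + 1])
    return acts

V = {s: 0.0 for s in states}     # expected remaining training time per state
policy = {}

while True:                      # iterate the Bellman updates until convergence
    delta = 0.0
    for s in states:
        acts = actions(s)
        if not acts:             # no action left: terminal configuration
            continue
        q = {a: cost + GAMMA * V[nxt] for a, (nxt, cost) in acts.items()}
        best_action, best_value = min(q.items(), key=lambda kv: kv[1])
        delta = max(delta, abs(best_value - V[s]))
        V[s], policy[s] = best_value, best_action
    if delta <= EPSILON:
        break

print(policy[(1, 1)])            # action suggested from configuration <1, e>
```

Following the stored policy greedily from 〈1, e〉 traces the least-total-time path through the graph; as described next, ALC departs from this policy only when the suggested action fails to improve the prediction accuracy.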
If the average absolute relative error (AARE) of the LSTM model (which is calculated based on Equation 1) is less than a predefined threshold θ (i.e., line 14), implying that the prediction accuracy is satisfactory, then the ALC algorithm outputs this LSTM model to be the LSTM model of the detector and sets a boolean flag, which indicates whether the desired LSTM model has been derived, to true so as to terminate the search process. Otherwise, the ALC algorithm takes the action suggested by Value Iteration for state s_{h,j·e}, i.e., π(s_{h,j·e}). If the suggested action is a_{s_{h,j·e}→s_{h,(j+1)·e}}, the LSTM model is retrained with configuration 〈h, (j+1)·e〉; if the resulting model is satisfactory it is outputted, and if it is better than the previous model but its AARE is still not lower than θ, the algorithm increases j by one and continues the search. However, as line 24 shows, if the LSTM model under configuration 〈h, (j+1)·e〉 is worse than the LSTM model under configuration 〈h, j·e〉, implying that the action suggested by Value Iteration is unable to enhance the prediction accuracy, the algorithm will take the other action, i.e., a_{s_{h,j·e}→s_{h+1,e}}. In this case, the LSTM model will be retrained with configuration 〈h+1, e〉 (see line 25). If the prediction accuracy of this new LSTM model is satisfactory, it will be outputted (see line 28). In the case that this new LSTM model is better than the previous one but its AARE is still not lower than θ, the algorithm will try another configuration by increasing h by one and setting j to one (see line 29). The algorithm then goes back to line 17 to see if it can proceed. It might also be possible that the LSTM model under 〈h+1, e〉 is worse than that under 〈h, j·e〉 (see line 30); this means that neither taking action a_{s_{h,j·e}→s_{h,(j+1)·e}} nor taking action a_{s_{h,j·e}→s_{h+1,e}} can further enhance the prediction accuracy, so the search terminates and the best LSTM model found so far is outputted. DALC In this section, we introduce how to customize LSTMs for all detectors in large-scale transportation networks based on the ALC algorithm. This paper focuses on predicting traffic speed in two specific periods on weekdays. One is from 4 am to 10 am. The other is from 2 pm to 8 pm. The reason we choose these two periods is that they cover peak commute hours, which might significantly affect traffic speed. Due to the dynamic nature of large-scale transportation networks, detectors deployed in different places might have diverse traffic-speed patterns in the two abovementioned periods. To demonstrate this, we choose ten detectors deployed between mile 1.14 and mile 14.4 on freeway I5-N in California [21] to compare their traffic-speed patterns in the AM period of a typical weekday. As illustrated in Fig. 3, not all of their patterns are identical. Hence, we propose that each detector should have its own LSTM model in order to achieve fine-grained traffic speed prediction. Furthermore, for any single detector, it is also possible that its traffic-speed patterns in these two periods are completely different from each other. According to our observation, many detectors deployed on freeway I5-N exhibit this phenomenon. For instance, the traffic speed collected by detector 1114190 (which is one of the detectors in Fig. 3) for five consecutive weekdays (from Oct. 16th, 2017 to Oct. 20th, 2017), illustrated in Fig. 4, shows that the pattern in the AM period is totally different from that in the PM period. Therefore, we propose that each detector in each of the two periods should have its own LSTM model in order to achieve our goal. In other words, the total number of LSTM models will be 2N if N is the total number of detectors in the large-scale transportation network. To effectively customize LSTMs for each of the 2N detector-period combinations (DPCs for short) in parallel, we extend ALC in a distributed and parallel manner and call the distributed approach DALC.
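As a compact summary of the single-detector search that DALC will parallelize, the following sketch condenses the ALC loop described above. It is a simplified reading rather than the pseudocode of Fig. 2: train_lstm(h, epochs) and evaluate_aare(model) are hypothetical helpers standing in for training an LSTM with h hidden layers for the given total number of epochs and computing its AARE on the testing data, and policy is the π(·) table produced by Value Iteration (for example, by the sketch above).

```python
# Condensed sketch of the ALC search loop (simplified; helper functions are assumed).

def alc(policy, train_lstm, evaluate_aare, E=100, H=3, J=4, theta=0.05):
    h, j = 1, 1                                   # start from configuration <1, e>
    model = train_lstm(h, j * E)
    aare = evaluate_aare(model)
    while aare >= theta:                          # accuracy not yet satisfactory
        suggested = policy.get((h, j))
        if suggested == "more_epochs" and j < J:  # follow the suggested action
            candidate = train_lstm(h, (j + 1) * E)
        elif h < H:                               # otherwise try adding a layer
            suggested, candidate = "add_layer", train_lstm(h + 1, E)
        else:
            break                                 # no configuration left to try
        cand_aare = evaluate_aare(candidate)
        if cand_aare >= aare and suggested == "more_epochs" and h < H:
            # the suggested action did not help: fall back to the other action
            suggested, candidate = "add_layer", train_lstm(h + 1, E)
            cand_aare = evaluate_aare(candidate)
        if cand_aare >= aare:
            break                                 # neither action improves the AARE
        model, aare = candidate, cand_aare        # accept the improved model
        h, j = (h, j + 1) if suggested == "more_epochs" else (h + 1, 1)
    return model, aare
```

The loop prefers the time-optimal action suggested by π and falls back to the alternative action only when the suggested one fails to lower the AARE, mirroring the behaviour described above.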
DALC utilizes a set of computation nodes to share the workload of the customizations. As long as a computation node is available, DALC requests it to customize an LSTM model for a DPC. In this way, LSTM customization for all the 2N DPCs can be conducted in parallel. Experiment results We validated the prediction accuracy of our proposed approach in comparison with five distributed machine learning approaches provided by Apache Spark MLlib [8], including Linear Regression (LR), Generalized Linear Regression (GLR), Decision Tree Regression (DTR), Gradient Boosted Tree Regressor (GBTR), and Random Forest Regressor (RFR). All six approaches are applied to the traffic data collected by the California Department of Transportation Performance Measurement System [21], which is a consolidated database of traffic data collected at 5-minute intervals by each detector placed on state highways throughout California. In this paper, we concentrate on predicting traffic speed on freeway I5-N. All the approaches were executed on a Hadoop YARN cluster [16]. The reason we chose Hadoop YARN is that it is an open-source software framework with high scalability, efficiency, and flexibility for processing high volumes of data [17][18]. This cluster consists of one master node and 30 slave nodes. Each node ran Ubuntu 12.04.1 LTS with 2 CPU cores, 2 GB of RAM, and 100 GB of storage. To guarantee a fair comparison, no other job or work was executed while each of the abovementioned approaches was running on the cluster. When the five MLlib approaches were employed, they utilized the current traffic flow to predict future traffic speed in 5-minute intervals. For DALC, we used DL4J [19] to implement the corresponding LSTM and adopted the default suggested values for all hyperparameters [19], except the two parameters considered in this paper, i.e., the number of hidden layers and the number of epochs. Recall that the average training time per epoch under different numbers of hidden layers is required. This information, obtained from experiments run on the cluster, is shown in Table 2. We can see that t_1 < t_2 < t_3 < t_4 < t_5, implying that the training time for each epoch increases as the number of hidden layers increases. Following the suggestion from [20] to achieve highly accurate prediction capability, the threshold θ used in the ALC algorithm is 0.05 for our approaches. To extensively measure and compare the effectiveness of all the approaches, one widely used performance metric, i.e., the average absolute relative error (AARE), is employed; it is defined as AARE = (1/n) Σ_{i=1}^{n} |v_i − v̂_i| / v_i, where n is the total number of data samples for comparison, i is the index of the time point, v_i is the observed traffic speed at time point i, and v̂_i is the predicted traffic speed at time point i. In this experiment, we selected 60 detectors deployed on freeway I5-N ranging from mile 0 to mile 150.35 to be our targets. Recall that this paper focuses on providing traffic speed prediction for every detector in the two specific AM and PM periods. Therefore, there are 120 DPCs (detector-period combinations) for the 60 detectors. For each DPC, we chose its traffic-speed data in the corresponding period from five weekdays (Oct. 16th, 2017 to Oct. 20th, 2017) to be the training data of all the approaches, and chose its traffic-speed data in the corresponding period from the next three weekdays (i.e., Oct. 23rd, 2017 to Oct. 25th, 2017) to be the testing data of all the approaches. Fig. 5 illustrates the average AARE results of all approaches for the 120 DPCs.
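As a concrete illustration of this metric, the following minimal sketch computes the AARE for a pair of observed and predicted speed series; the numbers are invented for the example and are not data from the experiment.

```python
import numpy as np

def aare(observed, predicted):
    """Average absolute relative error between observed and predicted speeds."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(observed - predicted) / observed))

# Illustrative values only:
print(aare([62.0, 58.5, 47.0], [60.1, 59.4, 49.3]))   # about 0.032
```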
It is clear that the DALC approach outperforms the remaining approaches. When DALC was employed, the average AARE values were all less than 0.04 with small standard deviations (see Fig. 5). However, when the other five approaches were tested, the corresponding average AARE values were between 0.12 and 0.17 with significant standard deviations, implying that these five approaches provide poor prediction accuracy for the DPCs. In other words, they could not guarantee good prediction accuracy for all the DPCs. Conclusion and future work In this paper, we have introduced the ALC algorithm to achieve automatic LSTM customization for a single detector by automatically configuring the number of hidden layers and the number of epochs. Due to the diverse traffic patterns collected by detectors, we proposed to customize one LSTM model for each detector in each time period (i.e., for each DPC). Furthermore, to effectively customize LSTMs for the enormous number of DPCs in large-scale transportation networks, we have introduced DALC to perform all the customization jobs in a distributed and parallel way. The experimental results based on real traffic data from freeway I5-N in California have demonstrated the outstanding prediction accuracy of DALC as compared with the five approaches provided by Apache Spark MLlib. In our future work, instead of customizing one LSTM for every single DPC, we would like to cluster DPCs into groups if they share a similar traffic pattern and customize one LSTM model for each group so as to speed up LSTM customization for entire large-scale transportation networks.
Chitosan-Gelatin Films Cross-Linked with Dialdehyde Cellulose Nanocrystals as Potential Materials for Wound Dressings In this study, thin chitosan-gelatin biofilms cross-linked with dialdehyde cellulose nanocrystals were obtained as dressing materials. Two types of dialdehyde cellulose nanocrystals, from fiber (DNCL) and from microcrystalline cellulose (DAMC), were obtained by periodate oxidation. An ATR-FTIR analysis confirmed the selective oxidation of cellulose nanocrystals with the creation of a carbonyl group at 1724 cm−1. A higher degree of cross-linking was obtained in chitosan-gelatin biofilms with DNCL than with DAMC. An increasing amount of added cross-linkers resulted in a decrease in the apparent density value. The chitosan-gelatin biofilms cross-linked with DNCL exhibited higher values of roughness parameters and antioxidant activity compared with materials cross-linked with DAMC. The cross-linking process improved the oxygen permeability and anti-inflammatory properties of both measurement series. Two samples cross-linked with DNCL achieved an ideal water vapor transmission rate for wound dressings: CS-Gel with 10% and 15% addition of DNCL (8.60 and 9.60 mg/cm2/h, respectively). The swelling ability and interaction with human serum albumin (HSA) were improved for biofilms cross-linked with DAMC and DNCL. Significantly, the films cross-linked with DAMC were characterized by lower toxicity. These results confirmed that chitosan-gelatin biofilms cross-linked with DNCL and DAMC had improved properties for possible use in wound dressings. Introduction Recently, a substantial number of interesting research works about polysaccharide nanocrystals have been published. They are prepared by acidic hydrolysis of bio-sourced polysaccharides and are characterized by biodegradability, biocompatibility, renewability, and an abundance of functional groups [1]. Nanocrystalline polysaccharides are environmentally friendly nanomaterials, which distinguishes them from inorganic nano-sized particles, such as layered silicates [2], carbon materials [3], and metals [4]. Cellulose nanocrystals are the most common nanocrystalline polysaccharide and excellent nanomaterials for synthesizing advanced biomaterials. Cellulose nanocrystals have some additional properties, such as transparency, high crystallinity, strength, and reactivity. These cellulose nanocrystals have gained emerging applications in papermaking [5], polymers [6], food [7], the pharmaceutical industry [8], and catalysis [9]. Nowadays, numerous studies are focused on the modification of cellulose nanocrystals, such as by esterification [10], oxidation [11], carbamation [12], amidation [13], etherification [14], or nucleophilic substitution [15]. The particle concentrations and size distributions of CNF (cellulose nanocrystals from cellulose fibers) and CNM (cellulose nanocrystals from microcrystalline cellulose) were measured and are presented in Figure 1a,d. The mean sizes of the cellulose nanocrystals from fibrils and from microcrystalline cellulose were 99.5 and 193 nm, respectively. Lu and Hsieh reported a different particle size for cellulose nanocrystals from rice husk (700 nm) [51], and Brito et al. for those from bamboo fiber (130 nm) [49]. The surfaces of the obtained CNF and CNM samples are illustrated by the SEM images shown in Figure 1b,c,e,f. The hydrolysis reaction causes fragmentation of cellulose fibers into crystals with irregularly shaped structures and large aggregates.
In the image of CNF with 1000× magnification, we observed many irregular, small particles (Figure 1c). The morphology of CNM was characterized by round grains with fairly varying diameters and flake-like structures. In the CNM images with larger magnification, numerous loosely bound grains with irregular and rough surfaces were observed. The hydrolysis of cellulose fibers or microcrystalline cellulose with concentrated acid removes an amorphous region and leaves a nano-sized crystalline part [52]. Therefore, an XRD analysis was used to determine the crystallinity of the obtained products (Figure 2a). The patterns of cellulose nanocrystals from fibers display diffraction peaks at 2θ = 15.11°, 16.68°, 20.73°, 23.03°, and 34.80°. These patterns correspond mainly to cellulose type I and were assigned to the (1-10), (110), (102), (200), and (004) planes of the CNF, respectively [53]. In the X-ray diffraction pattern of cellulose nanocrystals from microcrystalline cellulose (CNM), no diffraction angle at about 20° was observed. According to the literature data, this peak is not always present in all samples of cellulose type I [54]. A higher number of diffraction peaks for CNF might indicate a higher degree of crystallinity in this sample. The difference in the crystallinity of the samples may be due to the different origins of the neat celluloses. ATR-FTIR analysis was used to determine the structure of CNF and CNM (Figure 2b). CNF and CNM spectra showed characteristic bands at 3334, 2891, and 1641 cm−1, corresponding to the stretching vibration of O-H, the asymmetric vibration of C-H, and the bending vibration of adsorbed water in the sample, respectively [55]. The band at 1427 cm−1 might be attributed to the symmetric bending of CH2, while the band at 1318 cm−1 corresponds to O-H in-plane bending. The absorbance band observed around 1161 cm−1 was assigned to C-O-C asymmetric stretching vibrations, and the absorption band at 1029 cm−1 was associated with the stretching vibration of the primary hydroxyl group [56]. The obtained spectra also showed a band around 899 cm−1, corresponding to interactions between glycosidic linkages and glucose units of the cellulose [57]. Between the CNF and CNM spectra, no significant differences were observed. Thermogravimetry was used to determine the thermal stability of CNF and CNM. The TG and DTG curves of the obtained samples are shown in Figure 3. CNF and CNM were characterized by two degradation stages, which agreed with previously reported studies [58]. For the CNF sample, an initial weight loss of 3.71% was observed at a temperature of 64 °C, while for the CNM sample, the first step of degradation was noticed at a temperature of 61 °C with a loss of 4.70% of the initial weight. This degradation step was related to the evaporation of water and volatile components present in the samples [59]. The main degradation stage for CNF, with approximately 75% of the weight lost, occurred at 335 °C. A significant weight loss of 87% for the CNM sample occurred at 348 °C. This step of degradation of both samples might be related to the depolymerization of the obtained compounds [60].
Preparation of Dialdehyde Cellulose Nanocrystals The oxidation with sodium periodate is a specific reaction that modifies nanocrystalline polysaccharides. In this reaction, the hydroxyl groups on the second and third carbons (C2 and C3) are converted to aldehyde groups, leading to cross-linking properties. The degree of oxidation for the samples with a cellulose nanocrystals to oxidant ratio of 1:1 was 72% and 75% for DAMC and DNCL, respectively. The sample with the highest aldehyde group content was chosen for further analysis. When using other nanocrystalline cellulose to oxidant ratios (1:0.5 and 1:0.7), the degree of oxidation was much lower: 23% and 51% for DAMC and 24% and 53% for DNCL. Gao et al. reported that the degree of oxidation for dialdehyde cellulose nanocrystals at 60 °C for two hours was 79.2% [43]. Xing et al. determined the impact of reaction time on the degree of oxidation [61]. Within 3 h of the reaction, they obtained only about 30% of aldehyde groups. This result was very different from ours obtained at the same reaction time. The oxidation reaction yield for the obtained cross-linkers DAMC and DNCL was 81% and 78%, respectively. Another research group received similar results regarding the reaction yield [61]. Properties of Dialdehyde Cellulose Nanocrystals The particle concentration and size distributions of DNCL and DAMC are presented in Figure 4a,d. The mean size of dialdehyde cellulose nanocrystals from fibers (DNCL) was 218.5 nm, while for nanocrystals obtained from microcrystalline cellulose (DAMC), the mean size was approximately 234.6 nm. The oxidation process caused an increase in the mean size of the obtained cross-linking agents. Similar particle size results were obtained by Xu et al. for the cross-linking agents from α-cellulose [62]. Huang et al. described that dialdehyde cellulose nanocrystals obtained from softwood (pine) have a mean size of 348-376 nm [54]. The DNCL and DAMC surface morphology was observed by SEM, as shown in Figure 4b,c,e,f. It can be clearly seen that oxidation of cellulose nanocrystals changes the morphology of both DNCL and DAMC.
The surface morphology of DNCL consisted of aggregates of particles with irregular shapes. Their surface was relatively rough and heterogeneous. After magnification (1000×), needle-like particles with various lengths on the surface of DNCL were observed. The DAMC surface morphology was also composed of packed aggregates, but they were much smaller in size. The morphology of DAMC was characterized by particles with fairly varying diameters and rough surfaces. In the DAMC images, at higher magnifications, we could see loosely arranged particles of various sizes and shapes; visible mesopores also appear. Xing et al. described the surface morphology of dialdehyde cellulose nanocrystals from a waste newspaper [61]. They observed a large number of pores and holes, which were formed due to the interweaving of the fibers. The X-ray diffraction patterns of DNCL and DAMC are presented in Figure 5a. Both DNCL and DAMC were characterized by the same X-ray diffraction patterns, with multiple diffraction signals indicating the crystallinity of the obtained products. The database analysis demonstrated that these signals originate from NaIO3, which is a reduced form of the oxidant. Nevertheless, it should be noted that the obtained product was washed three times, yet the diffraction signals of this product were still visible in the diffraction patterns. The obtained results showed that DNCL and DAMC formed a stable complex with a reduced form of the oxidant (IO3−); hence the received materials were characterized by their crystalline nature. A similar effect was previously presented for other dialdehyde polysaccharides [19,63]. According to theoretical data, the oxidation process causes a decrease in the crystallinity of the sample. This is due to the ring-opening of the glucopyranose, collapse of the crystal region, and reduction of the molecular weight. The works of Ma et al. [64] and Nam et al. [65] confirmed this effect. Figure 5b shows the ATR-FTIR spectra of the obtained cross-linking agents. Both DNCL and DAMC have nearly identical absorption spectra.
The DAMC and DNCL spectra have the two new absorption bands at 1724 and 893 cm −1 attributed to C=O stretching vibration originating from the carbonyl group and hemiacetal bond, respectively [43]. The oxidation process changed the position and intensity of individual absorption bands. The hydroxyl band at 3334 cm −1 was changed due to the decrease in the number of C2 and C3 hydroxyl groups after oxidation [47]. In addition, the band of C-H stretching vibration at 2891 cm −1 broadened and shifted since new hydrogen bonds were created after the oxidation reaction [65]. The band intensity in the range of 1300-1400 cm −1 was decreased. It could be caused by the opening of glucoside rings and the oxidation process. The intensity of the band at 1029 cm −1 associated with the stretching vibration of the primary hydroxyl group decreased, while at 793 cm −1 new absorption band was created associated with vibration group C-H at the aldehyde group [45]. The above-described changes in the spectra of the obtained cross-linking agents prove the effective oxidation process of nanocrystalline cellulose. The recorded thermograms of DNCL and DAMC are illustrated in Figure 6. It should be noted that the oxidation process deteriorated the thermal properties of nanocrystalline dialdehyde cellulose from microcrystalline cellulose. By comparison of thermograms of DNCL and DAMC, it can be seen that the thermal stability of dialdehyde cellulose nanocrystals from fibers was improved. The DNCL and DAMC exhibited four steps of thermal degradation. For both samples, the first stage of degradation took place at 60 and 56 • C with 4.9 and 6.5% weight loss for DNCL and DAMC, respectively. This step was related to the dehydration of the obtained cross-linking agents. In the second degradation stage, both DNCL and DAMC were characterized by the peak, with maximum temperatures of 147 and 138 • C, respectively. These degradation steps were accompanied by 4.0% weight loss for the DNCL and 2.9% for the DAMC sample, which could be attributed to the release of more substantial bonded water or breakaway of weaker bonded functional groups in the structure of cross-linkers. A similar interpretation has been recently suggested [66]. The main degradation stage for DAMC occurred at a temperature of 266 • C with 51.3% weight loss. In the DNCL sample, the main decomposition appears at 284 • C and corresponds to a weight loss of 33.2%. For both DNCL and DAMC, the main degradation stage can be attributed to the rupture of polysaccharide chains and the elimination of low-molecularweight degradation products. In the additional degradation step, about 20.2% mass loss was observed. In this stage, more stable structures in DNCL are destroyed (i.e., macromolecules partially degraded in the first stage). The maximum rate of this process occurs at 308 • C. For the DAMC sample, the additional degradation step took place at 234 • C without mass loss in the TG curve. The polymer residue at 600 • C for both samples was comparable and, in both cases, was around 40%. Lu et al. investigated the thermal properties of spherical (SDACN) and rod-like (RDACN) dialdehyde cellulose nanocrystals [16]. They observed that for the RDACN sample, the main decomposition stage with a temperature of 344.7 • C was achieved, while for SDACN, this was achieved at a temperature of 235.3/267.2 • C. The rod-like dialdehyde cellulose nanocrystals were more thermally resistant than the spherical nanocrystals. 
Preparation of Cross-Linked Chitosan-Gelatin Films The design and synthesis of biofilms is an essential part of the research, as they represent model materials that can find a variety of applications. This is particularly important in studying materials with barrier properties, i.e., the vascular system, the skin, or wound dressings. Such materials should have appropriate properties, i.e., biocompatibility, non-toxicity, good mechanical properties, and proper water and oxygen permeability. Biofilms were obtained from a blend of chitosan with gelatin in a volume ratio of 1:1, which were then cross-linked with two different agents (DNCL, DAMC). Here, 5, 10, and 15% addition of DNCL and DAMC was used. The names of all the obtained materials are coded as follows: chitosan-gelatin-percent addition of DAMC/DNCL, e.g., CS-Gel-5%DAMC/DNCL. All obtained biofilms were visually homogeneous (Figure 7), which was attributed to the interaction between the amine groups (NH3+) on chitosan chains in acidic solution and the carboxylic groups (COO−) of gelatin. Additionally, the -NH2 and -OH groups of chitosan may form hydrogen bonds with many polar groups in gelatin, such as -COOH, -NH2, and -OH.
Chitosan and gelatin with the cross-linkers formed covalent bonds. Moreover, strong electrostatic interactions and hydrogen bonds led to the formation of complexes between the components and the formation of the desired chitosan-gelatin materials cross-linked with DNCL and DAMC. Degree of Cross-Linking The gel content in the samples was measured (Figure 8) to study the degree of cross-linking of the obtained materials. As expected, the gel amount increases with the increase in the content of cross-linking agents. The degree of cross-linking of CS-Gel-5%DNCL and CS-Gel-15%DNCL was 44.12% and 74.69%, respectively; these values were higher than those of CS-Gel-5%DAMC and CS-Gel-15%DAMC, at 42.45% and 63.74%, respectively. This can be explained by the formation of a Schiff base in the reaction of the aldehyde groups of DAMC and DNCL with the amine groups of gelatin and chitosan, leading to additional cross-linking. Other research groups also investigated the degree of cross-linking of biopolymers and their mixtures. Kwak et al. studied fish gelatin films cross-linked with dialdehyde cellulose nanocrystals (D-CNC) at 5, 10, 15, and 20 wt% (based on the gelatin weight). They noted that the degree of cross-linking increased with an increased amount of the cross-linker in the gelatin films [45]. In the work of Taheri et al. [67], tannic acid (5 and 8 wt%) was a cross-linking agent for chitosan/gelatin (1:2 w/w) films with or without the addition of bacterial nanocellulose. After modification, the observed degree of cross-linking was higher for materials with tannic acid than in the case of the pure chitosan/gelatin film. ATR-FTIR Spectroscopy The ATR-FTIR spectra of the gelatin-chitosan polymer blend are shown in Figures 9 and S1. The blend of gelatin-chitosan showed characteristic peaks similar to those of pure chitosan and gelatin, with some shifts. After mixing chitosan with gelatin, the absorption bands belonging to the hydroxyl and amino groups are combined into one broad band at 3288 cm−1. This may be due to the creation of strong hydrogen bonds between components of the mixture [68].
The gelatin-chitosan blends led to a slight modification of the spectrum, i.e., a shift of both the carbonyl (from 1634 to 1642 cm−1) and amino bands (from 1561 to 1573 cm−1). The peak shifts in the spectrum of the CS-Gel mixture indicate the formation of a hydrogen bond between chitosan and gelatin, which is supported by other reported results [69]. Other research groups obtained similar spectra with characteristic bands for gelatin-chitosan samples cross-linked with genipin [70] and glutaraldehyde [71]. Apparent Density The apparent density is an essential parameter for wound dressing applications since an ideal material should allow for sufficient gas and nutrient exchange [72]. This parameter for the obtained samples is shown in Table 1. The highest value of apparent density is that of the chitosan-gelatin biofilm. This parameter for all obtained materials is in the range of 0.470-0.186 g/cm3. In most cases, an increase in the amount of added cross-linking agents causes a decrease in the value of apparent density. In the series of samples cross-linked with DAMC, lower values of this parameter were achieved. In the literature, one can find ambiguous results for the effect of the amount of a cross-linking agent on the apparent density. For example, Liu et al. observed a decrease in apparent density with rising cross-linker content [73], contrary to Ahmed et al., who reported the opposite effect [74]. The surface texture analysis can be utilized to plan improved host-material responses in some biomedical applications. The surface topography of materials strongly influences the adhesion, migration, arrangement, and differentiation of cells [75]. The AFM images of the obtained materials are shown in Figure S2, and the roughness parameters are listed in Table 1. The neat chitosan-gelatin film was characterized by a smooth surface with a maximum roughness (Rmax) of 29.4 nm. This could result from the excellent integration of chitosan with gelatin through non-covalent interactions, including electrostatic interactions and hydrogen bonds [76]. The cross-linking process gives rise to higher roughness parameters of the obtained biofilms. When the content of the added cross-linking agent increases, the resulting biomaterials become rougher. This effect was more apparent for chitosan-gelatin samples cross-linked with DNCL.
The chitosan-gelatin sample with the highest amount of DNCL (CS-Gel-15%DNCL) has the roughest surface, with Rq and Ra being 8.29 nm and 4.44 nm, respectively. The opposite results were obtained by Liu et al. [77] for a membrane cross-linked with potassium pyroantimonate (PA) and genipin (GN). It was found in this work that the cross-linked chitosan-gelatin membrane had a more uniform surface with regular small dents. Antioxidant Activity Natural antioxidants are used to accelerate the wound healing process. Antioxidant agents diminish the production of intracellular reactive oxygen, thereby suppressing growth in the activity of toxic nitric oxide synthesis [78]. The DPPH radical scavenging activity test was performed, and the obtained results are shown in Table 1. The pure chitosan-gelatin film showed poor scavenging of the free radical of only 10.3%. The cross-linking process of chitosan-gelatin films with DNCL and DAMC significantly improved the scavenging ability. Moreover, it should be added that in the case of materials cross-linked with nanocrystalline dialdehyde cellulose from cellulose fibers (DNCL), higher values of radical scavenging with the same amount of cross-linker were obtained. In addition, as the amount of cross-linker increased, the degree of DPPH free radical scavenging by the obtained films was enhanced. In the case of chitosan-gelatin samples, the mechanism of free radical scavenging is related to residual free amino groups (NH2). These moieties could react with free radicals to create stable macromolecule radicals and absorb hydrogen ions from the solution to obtain ammonium cations (NH3+) [79]. Kan et al. studied free radical scavenging of chitosan-gelatin films (without cross-linking) with and without hawthorn fruit extract for packaging applications. They obtained similar DPPH free radical scavenging values for all samples with different content of fruit extract from Chinese hawthorn [80]. Oxygen Permeability Oxygen permeability is a fundamental parameter of a wound dressing as it is essential for wound healing, cell growth, and reducing the risk of infection by anaerobic bacteria [81]. The results of oxygen permeability for the obtained samples are presented in Figure 10. Under normal conditions in the temperature range of 0-35 °C, the dissolved oxygen value is 7-14.6 mg/L [82]. In the present work, the dissolved oxygen in the water of an airtight flask (negative control) and an opened flask (positive control) was 7.50 and 11.84 mg/L, respectively. The oxygen permeability value for the chitosan-gelatin mixture was 8.61 mg/L. The highest oxygen permeability was exhibited by the CS-Gel-15%DNCL sample (9.65 mg/L). However, it should be added that in the case of using nanocrystalline dialdehyde cellulose from fibers, higher values of oxygen permeability with the same amount of cross-linker were obtained. In addition, all results of oxygen permeability lie within the range of an ideal dressing [83]. The cross-linking agents contained hydroxyl and carbonyl groups, which make the biofilms hydrophilic. Due to this hydrophilicity, confirmed by contact angle measurement, the biofilms exhibited higher dissolved oxygen permeability. Another important feature in dressing materials is water vapor permeability. A low WVTR causes wound exudate to accumulate, while a high WVTR leads to dehydration of the wound; thus, optimal values of this parameter must be obtained [84].
The WVTR value for healthy skin is 0.85 mg/cm2/h, while in the case of damaged skin, it ranges from 1.16 to 21.41 mg/cm2/h [85]. Therefore, the appropriate value of WVTR for dressings is in the range of 8.33-10.42 mg/cm2/h [86]. The WVTR of the obtained biofilms is presented in Table 2. For all films cross-linked with DAMC and DNCL, the WVTR parameter increases with increasing analysis time. The cross-linking process improves the water permeability compared with pure chitosan-gelatin-based films. The highest value of the WVTR parameter was observed for CS-Gel-15%DNCL. Samples of CS-Gel with 10% and 15% addition of DNCL have WVTR values in the desired range for dressings. However, other samples have values of this parameter relatively close to the desired range. Patel et al. [87] investigated the effect of various ratios of chitosan and gelatin in the mixture on the WVTR parameters. They obtained chitosan-gelatin mixtures cross-linked with glutaraldehyde (3 mL, 0.25%) with the addition of lupeol. This research group observed that the chitosan-gelatin in a 50:50 ratio exhibited the highest WVTR parameter value. Toxicity Studies The prepared films were subjected to a Microtox acute toxicity evaluation. This test uses the bacteria Aliivibrio fischeri, whose bioluminescence decreases linearly after contact with a toxic substance [88]. Microtox has been tested for its use as an assessment of the toxicity of soils and polluted waters. Still, because the organisms used in the test are Gram-negative bacteria, it can also be used for the initial evaluation of the antimicrobial potential of chemical compounds, including the biopolymeric system [89]. The results of the study are presented in Figure 11. Figure 11. Decrease in the A. fischeri bacteria bioluminescence upon contact with the prepared films; n = 2; mean ± SD (SD, standard deviation); statistical significance is indicated with an asterisk: * p < 0.05. It can be seen that the highest decrease in the bacterial luminescence was exerted by un-cross-linked CS-Gel, where the decrease was over 90%. Such an effect is probably caused by the antimicrobial properties of chitosan, as both gelatin and chitosan are known to be highly biocompatible materials [90]. In the case of the films cross-linked with nanocrystalline dialdehyde cellulose derived from microcrystalline cellulose, there is no apparent trend in the change of the bioluminescence as a function of the cross-linking agent content. The addition of 5% decreases the toxicity of the films compared to the un-cross-linked materials, and an increase in the DAMC content further decreases the toxicity of the films. However, after reaching a certain concentration, the toxicity of the CS-Gel film with 15% DAMC rapidly increases, surpassing the values for the 5% DAMC film. In the case of the films cross-linked with the DNCL, the obtained toxicity is higher than for DAMC and lower than for neat CS-Gel, in the range of 70-80%. Interestingly, there seems to be no difference in the toxicities of the films cross-linked with different amounts of DNCL. Both these observations suggest that the different cross-linking agents must induce different changes in the functionalization of the materials, most probably in the surface groups, which would be responsible for the toxic or antibacterial properties of these materials. Human Serum Albumin Adsorption Study Assessing the HSA adsorption capacity of the obtained material is a crucial step in the design of dressing materials. During the contact of the obtained biofilms with blood, proteins are adsorbed, which in turn causes the adhesion and activation of blood elements [91]. According to the literature reports, the dressing material should be highly able to interact with proteins [92]. The amount of adsorbed HSA over the full incubation time is presented in Figure 12 and Table S1, and it ranges from 0.024 to 0.054 mg/cm2 after 1 h and 24 h, respectively. As can be seen, all materials can interact with this protein. Akdogan studied the amount of adsorbed BSA on the surface of poly-3-hydroxybutyrate (PHB) and poly(3-hydroxybutyrate-3-hydroxyvalerate) (PHBV) [93].
The amount of adsorbed protein after 24 h was 0.022 mg/cm2 and 0.0012 mg/cm2 for PHB and PHBV surfaces, respectively. These amounts of adsorbed protein are much less than what we obtained at the same time of incubation. As can be seen, the material is capable of interacting with HSA, which makes it promising as a dressing. Anti-Inflammatory Study Inflammation is the main response to the wound healing mechanism [94]. This process causes the regeneration of impaired cells. The chitosan and gelatin play a crucial role in this mechanism. Protein denaturation is the cause of the inflammatory process; hence, examining the inhibition of protein denaturation in the presence of the studied materials is necessary. The percentage inhibition of the denaturation of BSA (bovine serum albumin) by the obtained samples at various concentrations is presented in Figure 13. The neat chitosan-gelatin samples with different concentrations showed similar anti-inflammatory properties as described earlier by Sakthiguru et al. [95]. Moreover, it should be pointed out that the cross-linking process improved the results of inhibition of denaturation of BSA for all obtained samples. The improvement in these properties is more apparent for the series of samples cross-linked with DNCL. The highest inhibition value was observed for CS-Gel-15%DNCL with a concentration of 500 µg/mL. It should also be emphasized that an increase in the concentration of samples and the amount of added cross-linking agents improves the anti-inflammatory properties of the obtained films. The inhibition values of the obtained samples are more than half lower than for diclofenac sodium. However, these results are promising from the point of view of practical applications, for example, in wound dressings.
Tensile Properties
Wound dressings should have appropriate mechanical properties because they will be exposed to rubbing, pulling, and other activities. Appropriate wound dressings should have sufficient flexibility to prevent breakage when applied to a wound and allow the skin to move freely after their application. Healthy human skin has a tensile strength that varies between 2.5 and 35 MPa [96], while its Young's modulus ranges from 4.6 to 20 MPa [97]. In general, dressing materials should exhibit higher mechanical properties than healthy skin to avoid damage to the dressing, even with slight movement in the vicinity of the wound [98]. The tensile strength, Young's modulus, and elongation at break of all biofilms in dry and wet conditions are shown in Figure 14. The Young's modulus of the neat chitosan-gelatin sample was about 806 MPa. In both cases, the cross-linking process caused higher values of Young's modulus, thus resulting in an increase in stiffness [99]. For both the DNCL and DAMC cross-linked samples, an increase in the amount of added cross-linker resulted in an increase in Young's modulus.
The more visible effect of rigidity among the studied systems was observed for samples cross-linked with DNCL. The highest value of this parameter was achieved for the CS-Gel sample with a 15% addition of DNCL. This phenomenon was caused by the Schiff base formation between the chitosan-gelatin system and cross-linkers, which was confirmed by ATR-FTIR analysis and degree of cross-linking. The wet samples were less rigid than the dry samples. Nevertheless, adding DAMC and DNCL to the chitosan-gelatin sample increases the value of Young's modulus. This effect for wet conditions was more visible for materials cross-linked with DNCL. The stiffest sample was CS-Gel, with a 15% addition of DNCL. The tensile strength of the chitosan-gelatin sample cross-linked with 5% of DNCL was practically the same as a neat one. For a series of samples cross-linked with DNCL, an increase in the amount of added cross-linking agent caused an increase in the tensile strength value. This testifies to the increased resistance to rupture of this series of samples. For samples cross-linked with DAMC, no relationship was found. The samples with the highest amount of DNCL and DAMC had practically the same resistance value to fracture. However, it should be emphasized that the cross-linking process in both series of materials improved the tensile strength values. Taken together, the enhancement of mechanical properties of the chitosan-gelatin mixture cross-linked with DAMC and DNCL suggests that these materials can potentially be used as dressings. In the case of the tensile strength in wet conditions, the samples with a 5% addition of DAMC and DNCL achieved almost the same value compared to the CS-Gel biofilm. An increased amount of DNCL causes higher tensile strength. For the series of materials cross-linked with DAMC, the materials with the addition of 10% and 15% achieved almost the same tensile strength value. All samples had a value of elongation at break less than 3%, which may indicate the low elasticity of the samples. Low values of this parameter may result from the lack of plasticizers in the structure of the obtained mixture [45]. Moreover, the relatively low elongation at break is caused by the cross-linking process of chitosan-gelatin, which restricts the motion of the macromolecules [100]. In all samples, no relationship was observed between the amount of cross-linking agent and the value of elongation at the break of the chitosan-gelatin mixture. The series of samples cross-linked with DAMC was more ductile. The CS-Gel-10%DAMC sample has the highest elongation at break value. Materials in wet conditions achieved a more flexible character. This is due to the plasticizing effect of water molecules. The CS-Gel sample with a 15% DNCL addition was the most flexible. The cross-linking process caused a higher value of sample elasticity for both measurement series. It should be concluded that Young's modulus and tensile strength achieved lower values in wet conditions. Nevertheless, these parameters after DNCL and DAMC crosslinking are higher than those of the CS-Gel sample. Dong and Li [101] achieved lower values of Young's modulus and tensile strength in wet conditions for chitosan dressings crosslinked with dialdehyde cellulose nanocrystals with the addition of silver nanoparticles. Generally, the mechanical properties of these samples were much weaker than those obtained in this work. 
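For readers who wish to reproduce this type of evaluation, a minimal Python sketch of how tensile strength, Young's modulus, and elongation at break can be extracted from a recorded force-extension curve is given below. The specimen width and gauge length follow the dumbbell dimensions listed in the Methods, while the thickness value, the data arrays, and the fraction of the curve used for the modulus fit are illustrative assumptions rather than the exact procedure used in this work.

```python
import numpy as np

def tensile_parameters(force_N, extension_mm, width_mm=4.5, thickness_mm=0.05, gauge_length_mm=50.0):
    """Estimate tensile strength, Young's modulus and elongation at break
    from a force-extension curve of a rectangular film strip.
    The thickness value here is only a placeholder."""
    area_mm2 = width_mm * thickness_mm               # cross-sectional area of the strip
    stress_MPa = np.asarray(force_N) / area_mm2      # N/mm^2 is numerically equal to MPa
    strain = np.asarray(extension_mm) / gauge_length_mm

    tensile_strength = stress_MPa.max()              # maximum stress before rupture
    elongation_at_break = strain[-1] * 100.0         # % strain at the last recorded point

    # Young's modulus from a linear fit to the initial (assumed elastic) part of the
    # curve, here taken as the first 20% of the recorded points.
    n = max(2, len(strain) // 5)
    modulus_MPa = np.polyfit(strain[:n], stress_MPa[:n], 1)[0]
    return tensile_strength, modulus_MPa, elongation_at_break
```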
The study of the mechanical properties of chitosan-gelatin materials has also been the subject of research by other groups. Taheri et al. studied the effect of the addition of tannic acid and/or bacterial nanocellulose on the properties of chitosan/gelatin blend films. The cross-linking process with 5% and 8% of added tannic acid moderately elevated the tensile strength and Young's modulus of the chitosan/gelatin sample. They ascribed this effect to the formation of physical cross-linking through hydrogen bonding in the presence of tannic acid [67]. Akhavan-Kharazian et al. reported that the mechanical properties of chitosan/gelatin systems could be improved by adding nanocrystalline cellulose and calcium peroxide with sodium tripolyphosphate as a cross-linker [102].

Swelling and Degradation Rate
The biomaterials used as dressing materials should maintain an appropriate moisture level on the wound surface. This is mainly affected by the swelling ability of the materials. Therefore, the study of the swelling properties of dressing materials enables the prediction of the amount of wound exudate that can be managed [103]. Hydrophilicity, water content, and porosity affect the swelling capacity of the obtained materials [104]. The results of the swelling rate for chitosan-gelatin biofilms cross-linked with DAMC and DNCL are shown in Figure 15a,b. After the first hour of immersion in the PBS solution, the swelling capacity of the chitosan-gelatin biofilm was 323.08 ± 14.01%. The chitosan-gelatin samples cross-linked with DAMC and DNCL presented a different swelling profile, increasing quickly after 1 h. After this time, the liquid absorption capacity of all obtained samples increased slightly until the end of the measurement. The CS-Gel-15%DNCL sample achieved the highest swelling index of 831.87 ± 7.91%. The samples cross-linked with DNCL achieved higher values of swelling capacity than the samples cross-linked with DAMC at the same time of the measurement. This could be related to the higher hydrophilicity of samples cross-linked with DNCL, which was confirmed by the contact angle measurement of the obtained materials. Other research groups investigated the swelling ability of chitosan-gelatin samples cross-linked with other cross-linking agents. As reported by Cui et al., swelling capacity was closely related to chitosan-to-gelatin weight ratios and genipin content. Moreover, at pH 7.4, a higher percentage content of chitosan caused a lower degree of swelling [105]. Ranjbar et al. studied the swelling ability of chitosan/gelatin/oxidized cellulose sponges. All the samples showed a higher absorption capacity after 24 h compared to the values measured after 30 min [106].
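Before turning to the degradation results, note that the swelling degree and weight loss quoted in this section follow the standard gravimetric definitions (Equation (5) is given in the Methods). A minimal sketch with made-up masses is shown below; the helper names are illustrative only.

```python
def swelling_degree(dry_mass_mg, swollen_mass_mg):
    """Swelling degree (%) by the conventional gravimetric method:
    mass gained by the immersed film relative to its dry mass."""
    return (swollen_mass_mg - dry_mass_mg) / dry_mass_mg * 100.0

def degradation_rate(initial_mass_mg, residual_mass_mg):
    """Weight loss (%) of a film after immersion in PBS (Equation (5) in the Methods)."""
    return (initial_mass_mg - residual_mass_mg) / initial_mass_mg * 100.0

# Made-up example: a 12.0 mg film swelling to 50.8 mg corresponds to a swelling
# degree of about 323%, the order of magnitude reported for the neat CS-Gel film.
print(swelling_degree(12.0, 50.8))
```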
In the next stage of research, the rate of degradation of the systems was determined, which is of key importance for estimating the efficiency of biomaterials in the controlled release of active agents [107]. The degradation rate should also proceed at an appropriate speed for the controlled release of bioactive molecules. The degradation rate of all obtained samples is presented in Figure 15c,d. The chitosan-gelatin sample degraded by 8.21% and 64.3% after 1 and 8 days, respectively. As the concentration of cross-linking agents increased, the degradation of the samples decreased. The samples cross-linked with DNCL were characterized by a lower degree of degradation compared to biofilms with the addition of DAMC. The least degraded sample was chitosan-gelatin with a 15% addition of DNCL. The cross-linking process with DAMC and DNCL caused slower degradation of the materials compared to non-cross-linked chitosan-gelatin films. This may be related to the stable covalent cross-linking bonds. The efficiency of degradation correlates with the degree of cross-linking. Zhang et al. investigated the degradation of chitosan-gelatin-oxidized guar gum hydrogels for 3, 7, 14, and 21 days. They noticed that all hydrogels cross-linked with oxidized guar gum did not exceed 50% maximum degradation. The chitosan-gelatin hydrogels showed a more significant degradation compared to the cross-linked hydrogels [108]. Liu et al. also studied the degradation process of a chitosan-gelatin membrane cross-linked with potassium pyroantimonate (an ionic-bond cross-linker) and genipin (a covalent-bond cross-linker). According to their report, the degradation rate of cross-linked samples increased with increasing content of potassium pyroantimonate [77].

Surface Free Energy and Wettability Characteristics
Contact angle measurement is one of the essential parameters to determine the character of the surface of the obtained materials. From a biomedical point of view, applicable biomaterials should be characterized by suitable wettability properties [109]. The hydrophilic nature of the surface promotes cellular responses, such as adhesion and proliferation. In addition, the wettability of the obtained materials might depend on the chemical composition and topographic structure [110]. The contact angles of the surface of the tested biofilms with the measuring liquids glycerin (polar) and diiodomethane (non-polar), the calculated value of the surface free energy, as well as its polar and dispersive components are shown in Table 3. All of the obtained biofilms have a glycerin contact angle lower than 90°, indicating the hydrophilic nature of the surface. The glycerin contact angle value for the chitosan-gelatin sample is higher than for the DAMC and DNCL cross-linked samples, which indicates an improvement in the hydrophilicity of the cross-linked biofilms. The exception is the CS-Gel-5%DAMC sample, where the measured glycerin contact angle was 74°. The surface free energy of all obtained materials ranges from 36.70 to 38.80 mJ/m². According to literature reports, a surface free energy of 20-30 mJ/m² gives the material a potential ability to adhere cells, while a value of about 40 mJ/m² promotes cell adhesion [111].
In our case, the highest surface free energy has a CS-Gel-10%DNCL sample (38.80 mJ/m 2 ), while the lowest value has a pristine biofilm of chitosan-gelatin. However, the cross-linking process improves the surface free energy value of chitosan-gelatin biofilms relatively close to the desired range. For a series of samples cross-linked with DAMC, an increasing amount of cross-linkers cause an increase in the polar components (γ s p ) of samples. The highest polarity from the series of these samples has CS-Gel-5%DNCL due to the highest value of γ s p . Other research groups investigated the wettability of the surface of chitosan-gelatin samples using water-contact angle measurements. Kenawy et al. studied the impact of cinnamaldehyde content variation on chitosan-gelatin biofilms' wettability [110]. As cinnamaldehyde's content increased, the obtained samples' hydrophilicity decreased. In the work of Whu et al., carbodiimide was used as a cross-linking agent for chitosan-gelatin scaffold [112]. They reported higher hydrophilicity of chitosan-gelatin (1:1) scaffolds crosslinked with water-soluble carbodiimide (57.5 • ) than in a neat chitosan sample (61.9 • ). Preparation of Cellulose Nanocrystals The cellulose nanocrystals were obtained according to the described procedure [113]. Cellulose fibers and microcrystalline cellulose were added to a 3.16M aqueous H 2 SO 4 solution. The hydrolysis process was performed under magnetic stirring at 40 • C for 2 h. After that, the mixture was washed and centrifuged at 12,000 rpm for 15 min to remove acid residues and achieve neutrality of obtained samples. This operation was repeated several times. The gained suspensions were homogenized by an Ultra Turrax T25 homogenizer for 5 min at 13,500 rpm. The cellulose nanocrystals from fibrils (CNF) and microcrystalline cellulose (CNM) were dried for 48 h at room temperature. Preparation of Dialdehyde Cellulose Nanocrystals The previously obtained CNF and CNM suspensions were oxidized using sodium periodate (0.7 M) under magnetic stirring (weight ratio of oxidant/starch nanocrystal = 0.5:1, 0.7:1, and 1:1). The mixture was heated to 40 • C, and stirring was continued in the dark for 3 h. After cooling to room temperature, the appropriate quantity of acetone was added until a white amorphous powder precipitated. The obtained product was isolated by filtration and washed three times with deionized water. Finally, dialdehyde cellulose nanocrystals from fibrils (DCNF) and from microcrystalline form (DAMC) were dried at room temperature for 24 h. Preparation of Cross-Linked Chitosan-Gelatin Films In the first step, 1% acetic acid was used to dissolve chitosan and gelatin separately. Then, both solutions were mixed in the volume ratio of 1:1, and 5%, 10%, or 15% (in relation to the dry weight of the polysaccharide) of the cross-linking agents (DCNF and DCNM) were added. Chitosan-gelatin mixtures with the appropriate amount of cross-linkers were stirred by a magnetic stirrer at room temperature for 2 h. The received mixtures were poured onto the leveled glass plates. The evaporation process was guided for five days at room temperature. 3.5. Properties of Cellulose Nanocrystals, Dialdehyde Cellulose Nanocrystals, and Cross-Linked Biofilms 3.5.1. Content of Aldehyde Groups The number of aldehyde groups was determined by acid-base titration of the dialdehyde cellulose nanocrystals in the presence of phenolphthalein, as described in a previous study [19]. 
Particle Size Distribution
The particle size distribution was analyzed using the Malvern Panalytical NanoSight LM10 instrument (sCMOS camera, 405 nm laser). The samples were diluted with deionized water, and the temperature of the sample chamber was set and maintained at 25.0 °C. Three 60 s videos were recorded for each sample. The measurement was repeated three times.

Thermogravimetry
The thermal properties of cellulose nanocrystals and dialdehyde cellulose nanocrystals were examined by thermogravimetric analysis using a TA Instruments analyzer (SDT 2960 Simultaneous DSC-TGA, New Castle, DE, USA). The thermograms were obtained by heating the samples from room temperature to 600 °C at a heating rate of 10 °C/min under a nitrogen atmosphere.

Cross-Linking Degree and Apparent Density
The cross-linking degree of the films was determined by the extraction method mentioned in a previous literature report [114]. The apparent density of the obtained chitosan-gelatin films was measured using the method previously described by Ediyilyam et al. [115]. The samples of known thickness were cut in a round shape and weighed. The apparent density value was an average of five measurements and was calculated from formula (1):

density = W / (π (D/2)² × H), (1)

where W (g) is the weight, D (cm) is the diameter, and H (cm) is the thickness of the sample.

ATR-FTIR Spectroscopy and X-ray Diffraction
In order to confirm the structure of the cellulose nanocrystals, dialdehyde cellulose nanocrystals, and chitosan-gelatin cross-linked films, ATR-FTIR analysis was used. The spectra were recorded using a Spectrum Two spectrophotometer (Perkin Elmer, USA) equipped with a diamond crystal within a spectral range of 400 to 4000 cm⁻¹ with a scanning rate of 4 cm⁻¹ for 64 scans at room temperature.

Morphology Analysis
The topography of the obtained biofilms was studied using atomic force microscopy (AFM) (MultiMode Nanoscope IIIa, Veeco Metrology Inc., USA). The roughness parameters Ra (arithmetic mean), Rq (root mean square), and Rmax (the highest peak value) were calculated for a scan area of 25 µm² at room temperature and analyzed using NanoScope Analysis software. The morphological features of the samples were observed with a 1430 VP microscope (LEO Electron Microscopy Ltd.) operated at an accelerating potential of 20 kV. The powder samples were sputter-coated with a thin conductive layer of gold to avoid charging under the high electron beam during micrography. Photomicrographs were taken at 200× and 1000× magnification.

Antioxidant Activity
The antioxidant activity of the obtained samples was evaluated by the DPPH free radical scavenging assay with some modification [78]. Briefly, 20 mg of samples were placed in 1 mL of a 1 mM ethanolic solution of DPPH, and the mixture was reacted in the dark at ambient temperature for 30 min. The absorbance of the supernatant was measured at 517 nm. Antioxidant activity was stated as a single measurement. The percentage of the DPPH radical scavenging activity was determined using the following Equation (2):

DPPH scavenging activity (%) = (A_DPPH − A_sample) / A_DPPH × 100, (2)

where A_DPPH is the absorbance of the DPPH ethanolic solution, and A_sample is the absorbance of the sample.

Oxygen Permeability
The oxygen permeability of the chitosan-gelatin biofilms was determined by measuring the amount of dissolved oxygen in distilled water using the Winkler method [116]. Firstly, 200 mL of deionized water was added to the bottles, which were sealed with the obtained biofilms on their tops (test area: 4.9 cm²).
The closed bottles with an airtight cap and the open bottles were the negative and positive controls, respectively. The samples were placed in an open environment for 24 h, and the oxygen permeability value was an average of three measurements. The amount of dissolved oxygen was measured in milligrams per liter (mg/L).

The Water Vapor Transmission Rate (WVTR)
The water vapor transmission rate (WVTR) of the obtained biofilms was investigated gravimetrically using the desiccant method [117]. Firstly, the desiccant (calcium chloride) was prepared by drying at 100 °C for 24 h before use. A weighed amount of the dried calcium chloride was placed in plastic containers with a diameter of 40 mm. The obtained biofilms were placed on top of the vessels and were tightly sealed. A container with calcium chloride but without a cover was left as a control sample. After 24, 48, and 72 h, the biofilms were removed, and the weight of the desiccant was determined. The WVTR is an average of three measurements and was calculated using the following Equation (3):

WVTR = Δm / (A × t), (3)

where Δm (mg) is the weight gain after a fixed time interval t (h) and A (cm²) is the effective transfer area.

Toxicity Studies
The acute toxicity of the prepared materials was assessed using an 81.9% Screening Test procedure with modifications. Briefly, the diluent (2% aqueous NaCl) was added to the bacterial suspension, immediately followed by the addition of film fragments of the same size. The test was performed using a Microtox M500 analyzer with Modern Water Microtox Omni 4.2 software [118]. The analysis was conducted in duplicate.

Human Serum Albumin Adsorption Study
Human serum albumin adsorption of the received biofilms was measured using a spectrofluorometer. In the first step, albumin was dissolved in phosphate buffer of pH = 7.4 (50 mM) at a concentration of 6.05 µM. Samples cut to a size of 2 × 2 cm were mixed with the albumin solution and incubated for different intervals at 36 °C. After the incubation process, fluorescence spectra with excitation at 280 nm were recorded using a Jasco FP-8300 spectrofluorometer (Jasco, Tokyo, Japan). The parameters for registration of the fluorescence spectra were as follows: scanning speed - 100 nm/min, Em/Ex bandwidth - 2.5 nm/5 nm, and registration range - 285-400 nm. The protein adsorption was performed as a single measurement (n = 1).

Anti-Inflammatory Studies
A bovine serum albumin (BSA) denaturation assay was done to examine the anti-inflammatory properties of the obtained materials. The obtained samples and diclofenac sodium were dissolved in a minimal amount of DMF (dimethylformamide). Next, the materials were diluted in 0.2 M PBS solution at physiological pH = 7.4 to the appropriate concentration. The reaction mixture (5 mL) consisted of 1 mL of 1 mM bovine serum albumin (prepared in PBS) and 4 mL of the obtained samples, or diclofenac sodium, with different concentrations, which were incubated at 37 °C for 15 min. The same volume of phosphate buffer was used as a control sample. Then, the reaction mixture was heated to 70 °C for 30 min to induce denaturation. After cooling, the absorbance of the obtained samples was measured at 660 nm using a UV/VIS spectrometer UV-1601 PC (Shimadzu, Kyoto, Japan). The analysis was conducted as a single measurement (n = 1). The following formula (4) is used for calculating the percentage inhibition of protein denaturation:

Inhibition of denaturation (%) = (A_c − A_s) / A_c × 100, (4)

where A_s and A_c are the absorbance of the sample solution and of the control, respectively.
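A minimal sketch of how Equations (2) and (4) translate into the reported percentages is given below; the absorbance readings are hypothetical and serve only to illustrate the arithmetic.

```python
def dpph_scavenging(a_dpph, a_sample):
    """DPPH radical scavenging activity (%), Equation (2):
    decrease of the DPPH absorbance at 517 nm in the presence of the sample."""
    return (a_dpph - a_sample) / a_dpph * 100.0

def denaturation_inhibition(a_sample, a_control):
    """Inhibition of BSA denaturation (%), Equation (4):
    a lower absorbance at 660 nm than the control means less denatured protein."""
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical absorbance readings, for illustration only.
print(dpph_scavenging(0.820, 0.410))          # 50 % scavenging
print(denaturation_inhibition(0.35, 0.50))    # 30 % inhibition
```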
Tensile Properties
The mechanical properties of the obtained chitosan-gelatin biofilms were evaluated using an EZ-Test E2-LX Shimadzu texture analyzer (Shimadzu, Kyoto, Japan). The specimens were cut into a dumbbell shape with initial dimensions of 50 mm in length and 4.5 mm in width. The prepared films were placed between the machine clamps and stretched to break at a speed of 5 mm/min. Next, the obtained samples were immersed in PBS solution for 2 h and analyzed with the same parameters. The tensile strength, Young's modulus, and elongation at break were calculated from 5 measurements for each type of biofilm.

Swelling and Degradation Rate
The swelling rate of the obtained biofilms in PBS solution was measured by the conventional gravimetric method as described in the previous work [114]. The degradation rate of the obtained samples was calculated based on the weight loss after immersion in the PBS solution at appropriate time intervals. The weighed samples were immersed in PBS solution and incubated at 37 °C for 8 days. The samples were removed from the solution, dried, and weighed every day. This analysis was conducted in triplicate, and the degradation rate was calculated using the following Equation (5):

Degradation rate (%) = (m_0 − m_i) / m_0 × 100, (5)

where m_0 and m_i are the initial weight and the weights after removal from the solution at appropriate time intervals, respectively.

Surface Free Energy and Wettability Characteristics
The contact angle was measured using a drop-shape analysis system (DSA, KRÜSS GmbH, Hamburg, Germany) at room temperature. Drops of glycerin (polar) and diiodomethane (non-polar) liquids were placed on the biopolymers' surfaces using a syringe. The contact angle was recorded 3 s after placing the measuring liquids on the surface of the obtained materials. Each θ is an average of five measurements. The surface free energy and its polar and dispersive components were calculated by the Owens-Wendt method [119].
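The Owens-Wendt evaluation reduces to solving a two-equation linear system built from the contact angles of the polar and non-polar test liquids. A minimal sketch is given below, assuming the total, dispersive, and polar surface tension components of glycerin and diiodomethane are supplied from the literature; the function name and interface are illustrative, not the exact routine used in this work.

```python
import numpy as np

def owens_wendt(theta_deg, gamma_l, gamma_l_d, gamma_l_p):
    """Owens-Wendt surface free energy from the contact angles of two test liquids.
    theta_deg, gamma_l, gamma_l_d, gamma_l_p: length-2 sequences with the contact
    angle and the total/dispersive/polar surface tension of each liquid
    (liquid data must be taken from the literature)."""
    theta = np.radians(theta_deg)
    # gamma_l*(1 + cos(theta))/2 = sqrt(g_s_d)*sqrt(g_l_d) + sqrt(g_s_p)*sqrt(g_l_p)
    A = np.column_stack([np.sqrt(gamma_l_d), np.sqrt(gamma_l_p)])
    b = np.asarray(gamma_l) * (1.0 + np.cos(theta)) / 2.0
    x, y = np.linalg.solve(A, b)          # x = sqrt(gamma_s^d), y = sqrt(gamma_s^p)
    g_d, g_p = x**2, y**2
    return g_d + g_p, g_d, g_p            # total, dispersive, polar [mJ/m^2]
```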
Conclusions
In the present study, new cross-linking agents, dialdehyde cellulose nanocrystals, were obtained from two sources and fully characterized. Dialdehyde groups from the cross-linkers interact with amino groups from gelatin and chitosan to create Schiff bonds, which was confirmed by ATR-FTIR spectroscopy. These chitosan-gelatin systems with varying degrees of cross-linking formed thin films by casting from solutions. The obtained biofilms cross-linked with DNCL were characterized by a high degree of cross-linking. The apparent density of the obtained materials decreased with an increased amount of cross-linkers. The cross-linking process improved the roughness and antioxidant activity of the obtained chitosan-gelatin films. All samples exhibit good mechanical properties and swelling ability in PBS solution. Additionally, an increasing amount of cross-linkers caused a decrease in the degradation rate of the obtained samples. The cross-linking process improves the surface free energy, which promotes cell adhesion on the biofilms' surface. The oxygen permeability of all samples is within the range of an ideal dressing. In addition, CS-Gel biofilms with 5 and 10% addition of DNCL have a WVTR value within the desired range for wound dressings. The cross-linked biofilms were characterized by good antimicrobial activity against A. fischeri, adsorption of HSA, and anti-inflammatory properties. However, it should be noted that materials cross-linked with dialdehyde cellulose nanocrystals from fibers showed better properties. Based on the above research, it can be concluded that the obtained CS-Gel biofilms meet the requirements for wound dressing applications.
A New Open-source Geomagnetosphere propagation tool (OTSO) and its applications

We present a new open-source tool for magnetospheric computations, that is, modelling of cosmic ray propagation in the geomagnetosphere, named the "Oulu - Open-source geomagneToSphere prOpagation tool" (OTSO). A tool of this nature is required in order to interpret experiments and study phenomena within the cosmic ray research field. Within this work OTSO is applied to the investigation of several ground level enhancement events. Here, we demonstrate several applications of OTSO, namely the computation of asymptotic directions of selected cosmic ray stations and of the effective rigidity cut-off across the globe at various conditions, alongside a description of the design and general properties of the tool, including the magnetospheric models employed. A comparison and validation of OTSO with older, widely used tools such as MAGNETOCOSMICS was performed and good agreement was achieved. An application of OTSO for providing the necessary background for the analysis of two notable ground level enhancements is demonstrated and their spectral and angular characteristics are presented.

Introduction
The Earth is under constant bombardment by high-energy particles known as cosmic rays (CRs). Primary CRs can have a solar, galactic or extra-galactic origin and are composed of ≈ 90% protons, 9% helium nuclei, and 1% heavier element nuclei (e.g. Gaisser et al., 2016, and references therein). CRs with a solar origin are produced during and following solar eruptions, such as solar flares and/or coronal mass ejections (CMEs) (e.g. Desai & Giacalone, 2016, and references therein), whilst CRs produced outside of the solar system are believed to come primarily from supernova remnants (e.g. Blasi, 2013, and references therein). CRs were first discovered in the early 20th century by Dr. Victor Hess, and since then our knowledge of CRs has been constantly developing through the application of groundbreaking experiments; recent examples include the PAMELA (Payload for Antimatter Exploration and Light-nuclei Astrophysics) detector (Adriani et al., 2017) and the AMS-02 (Alpha Magnetic Spectrometer) (Aguilar et al., 2021) space probes, and a plethora of ground-based experiments. If a CR reaches the Earth's atmosphere it collides with atmospheric constituents. This collision produces numerous secondary particles, which then proceed to collide or decay further into more secondary particles, creating a cascade; the process develops until a threshold energy is reached. This phenomenon is known as an extensive air shower and is widely exploited by ground-based detectors as a mechanism for studying CRs. Within this work neutron monitors (NMs) are used as an example of CR detection, specifically of solar origin. NMs are fixed to a single location on the Earth, making them especially good at studying CRs of solar origin, typically revealing anisotropy (e.g. Moraal & McCracken, 2012; Bütikofer et al., 2009). When NMs detect an increased flux of CRs with solar origin, it is dubbed a ground level enhancement (GLE), a relatively rare event that occurs only a few times per solar cycle (Shea & Smart, 2012). Introduced during the 1957-1958 International Geophysical Year, NMs are standardised and nowadays assembled in a global network (Simpson, 2000; Mavromichalaki et al., 2011).
CRs are charged particles and as such experience the Lorentz force when travelling within a magnetic field. Thus, when a CR encounters the Earth's magnetic field there is the potential for the CR to be deflected by the magnetic field or to penetrate it. The ability of the Earth's magnetic field to deflect certain CRs is known as magnetic shielding. Whether a CR is able to penetrate the magnetosphere depends greatly on the CR's energy, the geomagnetic conditions at the time of arrival, the location of the CR's arrival, and its incidence. Only once a CR has penetrated the magnetosphere can it proceed to reach the Earth's atmosphere. An important characteristic of CRs is their rigidity; this value is typically used instead of CR energy as it is independent of the CR charge and species (Cooke et al., 1991). Rigidity quantifies the impact that magnetic fields have on the propagation of the CR: the larger the rigidity value, the less the particle is deflected by a magnetic field. The rigidity of a CR is calculated using the equation:

P = pc / (Ze), (1)

where P is the rigidity, p is the CR's momentum, c is the speed of light, Z is the atomic number, and e is the elemental charge. Rigidity is important when considering CRs arriving at Earth as it can tell us which CRs are able to penetrate the magnetosphere and which are deflected away as a result of magnetic shielding. The rigidity needed by the CR to penetrate the magnetosphere ranges from 0 GV, at the magnetic poles, to ≈ 17 GV at the magnetic equator; this is due to the increase in magnetic shielding at lower latitudes (Gerontidou et al., 2021). Knowing the rigidity needed by a CR to arrive at different locations on the Earth (known as the cutoff rigidity) is important for analysing both ground-based experiments and space-borne experiments inside the geomagnetosphere. Determining an exact cutoff rigidity can be a difficult task due to the complex nature of CR propagation in the magnetosphere. It is typical to see a collection of CRs with sequential rigidities having a mixture of trajectories that can and cannot penetrate the magnetosphere, referred to as allowed and forbidden respectively (for details see Cooke et al., 1991). This region of allowed and forbidden CR trajectories is known as the penumbra. In order to get a useful quantitative value for the cutoff at given points on the Earth's surface, an effective cutoff rigidity (R_C) is found, which accounts for the effects of the penumbra. As mentioned previously, the trajectories of CRs can be very complex, increasingly so at lower rigidities; this means that CRs that are detected at, e.g., NMs do not necessarily arrive from directly above the station, and therefore each station has its own asymptotic direction of acceptance for CRs, that is, which part of the sky the detector is actually observing (Rao et al., 1963). The complex nature of the Earth's magnetic field structure and CR propagation within it makes modelling CR trajectories very computationally intensive. The equations of motion that describe the trajectory of a charged particle in the Earth's magnetosphere currently have no known closed-form solution. As such, the trajectory of said particle must be determined using numerical integration (e.g. Bütikofer, 2018). It is almost impossible to predict where an arriving CR will encounter the Earth based on its point of arrival at the magnetosphere; therefore, the trajectory is typically computed backwards from just above the Earth's surface, around 20 km in altitude, to the CR's point of entry into the magnetosphere.
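As a worked illustration of equation (1), the rigidity of a proton with a given kinetic energy can be computed from its relativistic momentum; the short helper below is only a sketch and is not part of OTSO.

```python
import math

PROTON_REST_ENERGY_GEV = 0.938272  # m0*c^2 for a proton

def rigidity_gv(kinetic_energy_gev, charge_number=1, rest_energy_gev=PROTON_REST_ENERGY_GEV):
    """Rigidity P = pc/(Ze) for a particle of kinetic energy T and charge Ze.
    pc follows from the relativistic relation (pc)^2 = T^2 + 2*T*(m0*c^2)."""
    pc = math.sqrt(kinetic_energy_gev**2 + 2.0 * kinetic_energy_gev * rest_energy_gev)
    return pc / charge_number  # in GV, since 1 GeV of pc per unit charge is 1 GV

# A 1 GeV proton corresponds to about 1.70 GV, a 10 GeV proton to about 10.9 GV.
print(rigidity_gv(1.0), rigidity_gv(10.0))
```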
During GLEs, SEPs (solar energetic particles) can have energies ranging from 10 MeV/nucleon up to several GeV/nucleon (Biswas, 2000); relativistic effects should thus be accounted for in the model. To resolve any issues that can arise from this, the computation of the particle's trajectory must be done in small steps to avoid the model breaking; however, this exacerbates the computational intensity of the modelling process. Finding a good balance between maintaining the accuracy of the simulation and time efficiency is one of the main tasks of creating a magnetosphere computation tool. The magnetosphere is a complex and dynamic environment, constantly changing in response to external conditions, which makes accurate modelling very challenging. Empirical observations made by spacecraft have been used historically to create models describing the magnetic field structure (Jordan, 1994). At present the field is best described as a combination of the inner magnetic field (created by the dynamo process in the Earth's core) and external magnetic fields (created by the various different currents within the magnetosphere). For the internal field, models such as the IGRF (Alken et al., 2021) and dipole models (Nevalainen et al., 2013) can be used, and for external fields the Tsyganenko models are typically used (Tsyganenko, 1989, 1995, 1996, 2002a, 2002b). A tool that can compute the trajectories of charged particles within an accurate model of the magnetosphere under various conditions is highly valuable within the CR research field. The usefulness of such a tool has led to the creation of multiple tools in the past (see Bütikofer (2018) and references therein); some examples are the tools developed by Smart and Shea (2001), COR by Gecášek et al. (2022), and MAGNETOCOSMICS by Desorgher (2006), the latter taken as a reference tool in this work, being widely used over the years. We emphasise that MAGNETOCOSMICS was designed within the framework of the Geant4 toolkit (Agostinelli et al., 2003), was released in 2006, and is in practice outdated nowadays. The only way to resolve this issue is to either update MAGNETOCOSMICS to be compatible with the newer Geant4 versions or create a new tool. Here we have chosen the latter approach and present the newly developed Oulu - Open-source geomagneToSphere prOpagation tool (OTSO), which can be tailored by the scientific community to meet the corresponding needs.

Formalism of CR Propagation
The trajectories of charged particles are influenced by the magnetic field generated by the Earth's core. This is due to the Lorentz force generated perpendicular to the velocity of a charged particle moving through a magnetic field. The Lorentz force is described by:

F = q(E + v × B), (2)

where F is the force [N], q is the charge [C], E and B are the electric [V m⁻¹] and magnetic [T] fields respectively, and v is the particle's velocity [m s⁻¹].
When considering magnetosphere calculations we can neglect E from the equation, as its influence is negligible due to the high electrical conductivity of the region [for details see Bütikofer (2018) and the discussion therein]. It is important to note that the bulk of CRs are travelling at relativistic speeds, and as such the impact this has on the particle's mass must be considered when calculating the acceleration. This is achieved by incorporating the Lorentz factor γ:

γ = 1 / sqrt(1 − v²/c²). (3)

Combining equation 2 and the Lorentz factor within Newton's second law, we can determine the equations of motion for a relativistic particle as a result of the Lorentz force within the magnetosphere in Cartesian coordinates:

m₀γ dv_x/dt = q(v_y B_z − v_z B_y),
m₀γ dv_y/dt = q(v_z B_x − v_x B_z), (4)
m₀γ dv_z/dt = q(v_x B_y − v_y B_x),

with m₀ being the rest mass of the particle. Knowing the acceleration of the CR at a specific point in the magnetosphere allows the trajectory to be determined by performing numerical integration of the equations of motion, as there is no solution in their closed form. There are multiple methods that can be implemented to achieve this, such as the Runge-Kutta, Euler, Boris, and Vay methods. All of these methods have their own benefits and drawbacks when considering computation speed and accuracy [see R.P. and Leigui (2019) and references therein]. The most widely used method in CR simulations is the 4th-order Runge-Kutta method discussed by Smart et al. (2000), and as such it has been incorporated into the present work. This method offers a good balance between approximation accuracy and calculation time. Other integration methods can be added to OTSO later if required. The trajectory of the particle is then determined up until the allowed or forbidden condition for the trajectory is met. Similarly to Smart and Shea (1981), we employ the approach of starting the particle propagation at 20 km above the surface with the particle's velocity directed vertically at the zenith, yet incidences of varying zenith and azimuth are also possible within OTSO. The trajectory is considered allowed if the particle is then able to reach the model magnetopause boundary, and forbidden if it returns below the 20 km starting altitude. We emphasise that 20 km is selected as this is the typical altitude at which atmospheric cascades start as a result of CR collisions with atmospheric particulates (Grieder, 2001). While collisions are possible above this point, the model assumes the atmosphere is collisionless down to 20 km for simplicity. In addition, the tool ends the simulation once the CR has travelled more than 100 Earth radii, to avoid endless simulation of a trapped particle that neither escapes nor returns to Earth; in this instance the trajectory is assumed forbidden. OTSO can calculate individual trajectories of CRs with any given initial rigidity and start position on the Earth's surface. The user can select a range of rigidity values to test across as well as a rigidity step value, ∆R. OTSO will then repeat the trajectory calculation over all iterations of rigidity within the given range, determining allowed and forbidden rigidity values. Due to the penumbra that is encountered around the region in which rigidities change from allowed to forbidden, it is important to know the uppermost accepted rigidity before the first forbidden value, R_U, and the last allowed value, R_L. These values are recorded during computations and, to account for the effect of the penumbra, an effective cutoff rigidity, R_C, is calculated using a method described in Cooke et al. (1991), in which the number of allowed rigidity values multiplied by ∆R is subtracted from R_U, as seen in equation (5):

R_C = R_U − n_allowed × ∆R. (5)
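A minimal sketch of the allowed/forbidden bookkeeping behind equation (5) is given below. Here `trajectory_is_allowed` stands in for the full backwards trajectory tracing, the rigidity scan is assumed to be descending, and the handling of edge cases (for example a scan with no forbidden values) is simplified.

```python
import numpy as np

def effective_cutoff(rigidities_gv, trajectory_is_allowed):
    """Rigidity scan bookkeeping for the effective cutoff of equation (5).
    rigidities_gv         : descending array of test rigidities with constant step delta_r
    trajectory_is_allowed : placeholder callable returning True for an allowed trajectory
    Assumes the highest tested rigidity is allowed and at least one forbidden value occurs."""
    delta_r = rigidities_gv[0] - rigidities_gv[1]
    allowed = np.array([trajectory_is_allowed(r) for r in rigidities_gv])

    first_forbidden = int(np.argmax(~allowed))              # first forbidden value, scanning downward
    r_u = rigidities_gv[first_forbidden - 1]                # last allowed value before it
    r_l = rigidities_gv[np.where(allowed)[0][-1]]           # lowest allowed value
    n_allowed = int(np.count_nonzero(allowed[first_forbidden:]))  # allowed values inside the penumbra
    r_c = r_u - n_allowed * delta_r
    return r_u, r_l, r_c
```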
Determining Time-step
One of the most important parts of the computation is determining the time-step (∆t) to use during the numerical integration of the equations of motion. If ∆t is too large then errors will accumulate, leading to incorrect trajectories or the simulated particles accelerating to speeds faster than light, breaking the simulation. Vice versa, if ∆t is too small then the computation can take an irrationally long time to complete, making the tool impractical to use. A convenient way to determine ∆t involves using the CR's properties and position in the magnetosphere. This work uses the method developed within Smart and Shea (1981), in which ∆t is the time taken for the particle to travel 1.0% of its gyration distance, making the assumption that the magnetic field is uniform over the step. In order to optimise the computation further, the adaptive time-step method also outlined within Smart and Shea (1981) was utilised. This allows ∆t to grow by a maximum of 10% between Runge-Kutta iterations, only if the previous iteration was completed within an accepted error range, and sets the maximum value of ∆t to be 1.5% of the gyration time. This growth limit prevents any regions of sudden acceleration being skipped by large ∆t values. However, through testing of the new program the 1.5% limit was found to lead to extended computation times and was changed to 15% with marginal impact on the results of the calculations. This limit can be edited or disabled depending on the accuracy of results required by the user and the desired computation time. The error between Runge-Kutta iterations is determined by checking the β value of the CR before and after the step, where β is the speed of the CR in units of c. As a charged particle in a magnetic field, with no other external force, should have a constant speed, we can take the change in β to represent the error between steps. Within the tool this is implemented so that if β has grown by over 0.0001% during a Runge-Kutta step, ∆t is assumed too large and the same iteration is repeated using ∆t/2. This β error check is quite conservative, and reduced error values have been able to reproduce similar results in a vastly reduced computation time, especially at higher-latitude stations. The value of this error check parameter can be selected at the leisure of the user.
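To make the integration scheme concrete, a schematic Python version of a single 4th-order Runge-Kutta step for equation (4), together with the gyration-based time-step of Smart and Shea (1981), is sketched below. This is not OTSO's Fortran implementation: SI units are assumed, `bfield` is a placeholder for the combined internal and external field model, and the adaptive growth and β error checks described above are omitted for brevity.

```python
import numpy as np

def lorentz_acceleration(v, B, q, gamma, m0):
    """dv/dt for a relativistic charged particle in a pure magnetic field
    (E is neglected, so the speed and gamma stay constant)."""
    return (q / (gamma * m0)) * np.cross(v, B)

def rk4_step(r, v, dt, bfield, q, gamma, m0):
    """One classical 4th-order Runge-Kutta step for dr/dt = v, dv/dt = q/(gamma m0) v x B.
    `bfield(r)` returns the magnetic field vector at position r and is a placeholder."""
    k1v = lorentz_acceleration(v, bfield(r), q, gamma, m0)
    k1r = v
    k2v = lorentz_acceleration(v + 0.5 * dt * k1v, bfield(r + 0.5 * dt * k1r), q, gamma, m0)
    k2r = v + 0.5 * dt * k1v
    k3v = lorentz_acceleration(v + 0.5 * dt * k2v, bfield(r + 0.5 * dt * k2r), q, gamma, m0)
    k3r = v + 0.5 * dt * k2v
    k4v = lorentz_acceleration(v + dt * k3v, bfield(r + dt * k3r), q, gamma, m0)
    k4r = v + dt * k3v
    r_new = r + dt / 6.0 * (k1r + 2 * k2r + 2 * k3r + k4r)
    v_new = v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return r_new, v_new

def gyro_timestep(v, B, q, gamma, m0, fraction=0.01):
    """Time needed to cover `fraction` of one gyration, assuming a locally uniform B."""
    gyro_period = 2.0 * np.pi * gamma * m0 / (abs(q) * np.linalg.norm(B))
    return fraction * gyro_period
```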
Employed Magnetosphere Models
In order to model the magnetic field, OTSO uses an internal and an external magnetic field model. There are only two main internal field models included in OTSO, these being the IGRF (Alken et al., 2021) and geodipole field models. The external component of the magnetic field is modelled using the Tsyganenko models: TSY87, TSY89, TSY96, TSY01, and TSY01S (Tsyganenko, 1987, 1989, 1995, 1996, 2002a, 2002b; N. Tsyganenko et al., 2003). We plan to include other models in the future, based on a convenient parameterisation. This allows for easier comparison with e.g. MAGNETOCOSMICS and/or other similar tools. The later Tsyganenko models get increasingly complex and computationally intensive, leading to long simulation times. The use of later Tsyganenko models should be considered during periods of intense geomagnetic activity (e.g. periods of Kp index above 6). Generally, a combination of the IGRF and TSY89 models is sufficient to provide fast and reliable results (e.g. Kudela & Usoskin, 2004; Nevalainen et al., 2013, and the discussion therein). Unless stated differently, the following calculations using OTSO will be performed using this combination of models. In order to determine whether a CR has escaped the magnetosphere, the tool constantly checks the CR's position in relation to the model magnetopause chosen for the simulation. If the CR reaches the magnetopause boundary it is then assumed to have escaped. The TSY96, TSY01, and TSY01S models for the external magnetic field contribution use their own model magnetopause, described within Tsyganenko (1995, 1996, 2002a, 2002b). However, TSY89 has no such empirical magnetopause model within it, and therefore a "de facto" boundary must be selected by the user, which is then applied to the simulation when using the TSY89 model. Models of the magnetopause have historically been produced using empirical methods, utilising data from satellite magnetopause crossings to best fit the shape. The models that are currently included in the tool are a sphere with a 25 Earth radii radius centred on the Earth (for use when not considering any external field models), as well as the Formisano, Sibeck, and Kobel models (Formisano et al., 1979; Sibeck et al., 1991; Kobel, 1992; Flückiger & Kobel, 1990). Due to the differing assumptions made during the creation of these models, the magnetopause shape can vary significantly between them, leading to slightly different simulation outcomes. When using TSY89 within this work the Kobel (1992) model has been used. There are many more advanced magnetopause models that take into account different variables, such as the solar wind conditions. Some examples of newer models are Lin et al. (2010) and Shue et al. (1998). While these models may provide a more accurate portrayal of the magnetopause, they are not included in this tool at present. These models require many more input variables to function, complicating and increasing the computational strain of the simulation. Their inclusion is planned to be accommodated in future versions of the tool if the need arises for the extra accuracy they may provide.
Programming Languages
OTSO has been developed within the framework of both the Python and Fortran programming languages. A compiled language, such as Fortran, was crucial in the development of this tool, as the processing speed offered by compiled languages helps complete the computationally intensive CR trajectory simulations within a reasonable time frame. Fortran is an old and dated language, with limited utility when compared to other, more modern compiled languages, such as C++. However, Fortran does benefit from being an older language by being relatively simple, as well as having many freely accessible and verified libraries previously written for it, which are already extensively used by the CR community, such as the Tsyganenko models, the geopack library, and the IRBEM library (https://prbem.github.io/IRBEM/). For these reasons Fortran was chosen, to utilise these libraries and speed up the development of this tool. Python was also used, as the way to initialise the tool and input the parameters. Python is a very simple programming language and was picked to allow anyone with a basic understanding of programming to use the tool. The installation of the Python language is also simple, being easily achieved through the download of the Anaconda software. Anaconda also includes all the Python modules needed for the tool to run, such as F2PY (which allows Python and Fortran to transfer information between each other). The result of both these decisions is that the tool is simple to obtain, use, and edit.

Example Results, Comparison, and Applications
As this tool is being constructed to be a possible alternative to older programs, namely MAGNETOCOSMICS, the analysis of this new tool relies on the comparison of results between the two programs. To achieve this, several cases were selected, specifically related to GLE analysis, and both programs were used to conduct computations for said GLE(s). The first case is a well-studied GLE event with relatively uncomplicated magnetospheric conditions and well-derived characteristics, namely GLE # 70 (Vashenyuk et al., 2006; Bütikofer et al., 2009; Mishev & Usoskin, 2016), which occurred on 13 December 2006 during the declining phase of solar cycle 23 as the result of an X3.4/4B solar flare, with an associated GLE being detected around 03:00 UTC. NM stations at various latitudes were used to test OTSO by conducting cutoff and asymptotic cone computations. All computations were done using a rigidity step of 1×10⁻³ GV, increasing the precision of the results and allowing the penumbra to be shown in greater detail, and employing a combination of the IGRF and TSY89 models. The magnetic field distortion is described using an IOPT value within the Tsyganenko (1989) model used for these tests; for GLE # 70 the IOPT was set to 5, which corresponds to a planetary Kp index value of 4-, 4, 4+ at the time of the GLE.

Cutoff Rigidity
The results for the vertical cutoff computations can be seen in Table 1, where a generally good agreement between OTSO and MAGNETOCOSMICS is found. The stations with the greatest difference between the two tools were Oulu and Tixie Bay. The slight variation in results can be attributed to the accuracy of the integration methods used within the two tools.

Asymptotic Cones
Once the trajectory of a CR has been simulated, the asymptotic longitude and latitude are computed. The CRs with accepted trajectories then have these values plotted in order to construct the asymptotic cone of acceptance. Figure 1 shows the high-rigidity region of the cones created by OTSO and MAGNETOCOSMICS for three of the NM stations considered: Cape Schmidt, Oulu, and Rome respectively, encompassing the cases of an anti-sunward NM (CAPS), a polar sunward NM (Oulu), and a low-latitude station (Rome).
One can see that the cones are in good agreement with each other; this is particularly true at the higher end of the rigidities. The cones calculated for the Oulu and Rome NMs (see Figure 1) are almost identical, with minor variations; however, Cape Schmidt's cone shows that there can be some deviations in the cone shape between the two tools at lower rigidity values, namely the width of the OTSO cone increased (left panel of Figure 1). This is because the trajectories of lower-rigidity CRs are more complex, especially when their rigidity is close to the cutoff value, making simulation of these CRs more difficult. The accuracy of the integration method is important in these circumstances, and is likely the cause of the difference.

Global Cutoff
OTSO can compute the vertical cutoff rigidity on a global scale. Due to the mixed-language nature of the tool, multi-core processing was implemented using Python to conduct the large number of computations required for this operation in a time-efficient manner. The global cutoff map was created by conducting cutoff calculations at regular intervals of 1° in latitude and longitude. The same computation was done using MAGNETOCOSMICS and, in order to compare the two results, the absolute value of the difference between the computed cutoff rigidities at each point on the Earth was found and plotted in Figure 2. There are two clear results that can be inferred from Figure 2. Firstly, in general the difference between the two tools is minor; this is especially evident in the polar and equatorial regions. Secondly, there are anomalous regions on the Earth where the difference between the two tools is noticeable, with the most prominent region being found over the South Pacific Ocean. Figure 3 looks into this anomalous region in more detail. Within Figure 3, OTSO shows a gradual decrease in rigidity values, with a significant penumbra present across the South Pacific anomaly. In contrast, MAGNETOCOSMICS' plot is much more sporadic, with sudden changes in R_U and R_L and a small penumbra in some regions, leading to the difference in R_C seen in Figure 2 within this region.

Ground level enhancement analysis
OTSO was employed for GLE analysis, namely for computations of the cut-off rigidity and asymptotic directions (ADs) for NMs, used as input for the method deriving the spectral and angular distribution of SEPs. Here, we used a method based on neutron monitor records analysis [e.g. Shea and Smart (1982); Cramp et al. (1997)], whose details and applications are given elsewhere (Mishev et al., 2018; Mishev, Koldobskiy, Usoskin, et al., 2021; Mishev et al., 2022). The method is an unfolding procedure, that is, modelling the global NM network response and optimisation of the model response over experimental data, which involves computation of the ADs and cut-off rigidity of the NM stations used for the data analysis; assuming a convenient initial guess for the optimisation [e.g. Cramp et al. (1995); Mishev et al. (2017)]; selection of model parameters and the optimisation itself (Mishev & Usoskin, 2016). The method was recently verified by direct space-borne measurements [for details see Koldobskiy et al. (2019); Mishev, Koldobskiy, Kocharov, and Usoskin (2021); Koldobskiy et al. (2021)]. The ADs were computed employing the aforementioned field combinations: an internal model, namely the IGRF geomagnetic model (Alken et al., 2021), and an external model, Tsyganenko 89 (Tsyganenko, 1989) or Tsyganenko 01 (Tsyganenko, 2002). The former combination allowed straightforward computation of ADs and rigidity cut-offs with reasonable precision [e.g. Kudela and Usoskin (2004); Kudela et al. (2008); Nevalainen et al. (2013)], whilst the latter is usually employed in the case when the Kp index is greater than 6 [for details see Smart et al. (2000)]. Here we analysed two notable GLEs: GLE # 66 and GLE # 71. The GLE #66 occurred during one of the strongest geomagnetic storms, when the 3-hour planetary Kp index was 9, on October 29, 2003. The event was the second in the sequence of three GLEs, the so-called Halloween events [e.g. Gopalswamy et al. (2005); Liu and Hayashi (2006); Gopalswamy et al.
(2012)], recorded by the global neutron monitor network; the count rate increases are given at http://gle.oulu.fi. In addition to the complicated geomagnetospheric conditions, a strong Forbush decrease was also observed prior to and during this event, which was explicitly considered in our analysis, similarly to Mishev, Koldobskiy, Kocharov, and Usoskin (2021). Hence, the complicated geomagnetospheric conditions and the accompanying Forbush decrease make the analysis of this particular GLE specifically challenging. After computing the ADs with OTSO (see the left panel of Fig. 4), and employing the method described above, we derived the spectral and angular characteristics of the GLE-producing SEPs. The best fit of the derived SEP spectra was obtained by a modified power-law rigidity spectrum [e.g. Vashenyuk et al. (2008); Mishev, Koldobskiy, Kocharov, and Usoskin (2021)], i.e.:

J(P) = J_0 P^-(γ + δγ(P − 1)), (6)

where the flux of particles with rigidity P in [GV] is along the axis of symmetry identified by geographic latitude Ψ and longitude Λ, the power-law exponent is γ with the steepening δγ, and J_0 is the particle flux at 1 GV in [m⁻² s⁻¹ sr⁻¹ GV⁻¹], for SEPs with rigidity P > 1 GV. Accordingly, for SEPs with P ≤ 1 GV, the rigidity spectrum is a pure power law:

J(P) = J_0 P^-γ. (7)

For the angular distribution, the best fit was obtained by a Gaussian-like distribution:

G(α) ∼ exp(−α²/σ²), (8)

where α is the pitch angle and σ accounts for the width of the distribution. Details of the derived spectra and pitch angle distribution (PAD) are given in Tables 2 and 3 for the application of OTSO using IGRF+TSY 89 and IGRF+TSY 01, respectively. The merit function (Equation 9), which characterises the quality of the fit, that is the residual according to Himmelblau (1972) and Dennis and Schnabel (1996), is defined as the normalised residual, D, between the modelled and the measured NM count rate increases. Normally D ≤ 5% for strong events [e.g. see Vashenyuk et al. (2006)], whilst for weak events it is ≈ 10-15%, and in some cases it can even approach 20% [for details see Mishev et al. (2018, 2022)]. The derived GLE spectral characteristics and PAD employing different combinations of magnetospheric models, namely IGRF+TSY 89 and IGRF+TSY 01, are comparable, despite several differences in the asymptotic directions, specifically for the MCMD and TERA NMs; for details see the right panel of Fig. 4. Virtually the same spectra and PAD can be explained by the complexity of the unfolding procedure [see the discussion in Himmelblau (1972); Mishev, Koldobskiy, Usoskin, et al. (2021)]. In general, the employment of IGRF+TSY 01 resulted in slightly harder spectra, wider PAD and reduced D. Note that the asymptotic directions of SOPO, the NM with the greatest count rate increase, which is the station with maximal weight during the optimisation, are in practice the same. OTSO was also used for the analysis of GLE # 71, which occurred on May 17, 2012. The event was observed as a weak enhancement of the count rates at several NMs, with greater signals recorded by the APTY, OULU, and SOPO/SOPB NMs, while the other stations registered marginal count rate increases. This implied large anisotropy of the SEPs, confirmed by the following analysis [e.g. Mishev et al. (2014); Kocharov et al. (2018)]. Here, the angular distribution of the arriving SEPs was fitted by a complicated pitch angle distribution (PAD) with a shape similar to that considered by Cramp et al. (1997), namely a superposition of two Gaussians:

G(α) ∼ exp(−α²/σ₁²) + B·exp(−(α − α′)²/σ₂²), (10)

where α is the pitch angle, σ₁ and σ₂ are parameters corresponding to the width of the pitch angle distribution, and B and α′ are parameters corresponding to the contribution of the second Gaussian, including the direction nearly opposite to the derived axis of symmetry. The best fit for the spectra was obtained by employing a modified power law, details given in Table 4. The derived characteristics of the SEPs during GLE # 71 are in practice the same as those by Mishev, Koldobskiy, Usoskin, et al. (2021), and are in good agreement with the PAMELA direct measurements [for details see Adriani et al. (2015)].
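For completeness, the spectral and angular forms of Equations (6)-(8) and (10), as reconstructed above, can be evaluated as sketched below; the parameter values would come from the fits reported in Tables 2-4, and the function names and default arguments are illustrative.

```python
import numpy as np

def sep_rigidity_spectrum(P, J0, gamma, dgamma):
    """Modified power-law rigidity spectrum, Equations (6)-(7):
    J(P) = J0 * P^-(gamma + dgamma*(P - 1)) for P > 1 GV and a pure
    power law J0 * P^-gamma for P <= 1 GV."""
    P = np.asarray(P, dtype=float)
    exponent = np.where(P > 1.0, gamma + dgamma * (P - 1.0), gamma)
    return J0 * P ** (-exponent)

def pitch_angle_distribution(alpha, sigma, B=0.0, alpha_prime=np.pi, sigma2=1.0):
    """Gaussian PAD of Equation (8); with B > 0 a second Gaussian centred at
    alpha_prime is added, as in Equation (10) used for the GLE #71 fit."""
    return np.exp(-alpha**2 / sigma**2) + B * np.exp(-(alpha - alpha_prime)**2 / sigma2**2)
```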
New models, integration methods, and optimisations will be incorporated into OTSO over time by the community, based on requests driven by scientific goals. Some of the additions to OTSO will allow it to more accurately recreate older tools. The open-source and community-driven element of this new tool will allow it to evolve into a robust magnetospheric computation tool that can facilitate the many needs of the CR research community, including space weather service(s), latitude surveys, etc. [e.g. Mavromichalaki et al. (2018); Nuntiyakul et al. (2020)]. OTSO has been designed to be as user friendly as possible, for both those wishing to edit the program and those with little programming knowledge. The main tool being accessed via Python opens the tool up to computing novices, and the inclusion of libraries such as IRBEM provides a strong foundation for OTSO's further development. As such, the new tool provides a good starting point for a community-driven magnetospheric computation tool.

The creation of OTSO bolsters the CR research field's arsenal of tools that can be used to study CRs in the Earth's magnetosphere, providing the basis for detailed analysis of various CR experiments including GLEs and space weather service(s). Within this work OTSO was successfully used for the analysis of two GLEs, namely the event that occurred during one of the strongest geomagnetic storms, GLE #66 on October 29, 2003, and the widely studied complex event, used for verification of NM data analysis using PAMELA measurements, GLE #71 that occurred on May 17, 2012. OTSO was able to obtain good agreement with prior studies and in-situ space-borne measurements for these two events, proving that it is capable of being used to study complex events, such as those with high anisotropy like GLE #71, as well as events during intense and complicated magnetospheric conditions, such as GLE #66. Hence, it has been demonstrated that OTSO can be used as a reliable tool for geomagnetospheric computations under various conditions and circumstances, providing the necessary basis for strong SEP analysis.

As such, OTSO represents a community-requested, new-generation tool, with the possibility for constant improvement, providing reliable geomagnetospheric computations related to CR research.

GLE #70 occurred on 13 December 2006, during the declining phase of solar cycle 23, as the result of an X3.4/4B solar flare, with the associated GLE being detected around 03:00 UTC. The magnetic field distortion is described using an IOPT value within the Tsyganenko (1989) model used for these tests. For GLE #70 the IOPT was set to 5, which corresponds to a planetary Kp index value of 4-, 4, 4+ at the time of the GLE.

Figure 1. Computed asymptotic cones for three selected NMs during GLE #70 using both OTSO and MAGNETOCOSMICS, as denoted in the legend. The NMs shown are: Cape Schmidt (left), Oulu (middle), and Rome (right).

Figure 2. Absolute difference in calculated effective vertical cutoff rigidities for the entire Earth during GLE #70 between OTSO and MAGNETOCOSMICS.

Figure 3. Cross sections of the cutoff rigidity values over the region of largest difference between MAGNETOCOSMICS (left) and OTSO (right), taken at a longitude of −140°.
Figure 4. Left panel: Asymptotic directions (IGRF+TSY 89) of selected NM stations during GLE #66 at 21:00 UT. The small circle depicts the derived apparent source position, and the cross the interplanetary magnetic field (IMF) direction obtained by the Advanced Composition Explorer (ACE) satellite. The lines of equal pitch angle relative to the derived anisotropy axis are plotted at 30°, 60°, and 90° for sunward directions, and at 120° and 150° for the anti-Sun direction. Right panel: Comparison of computed asymptotic directions of selected NM stations during GLE #66 employing the TSY 89 (solid lines) and TSY 01 (dashed lines) models.

Table 1. Data for the calculated effective vertical cutoff rigidity for selected NMs using both OTSO and MAGNETOCOSMICS.

Table 2. Derived spectral and angular characteristics during GLE #66 on October 29, 2003, fitted with a modified power-law rigidity spectrum employing ADs computed with the IGRF and TSY 89 models.

Table 4. Derived spectral and angular characteristics during GLE #71 on May 17, 2012, fitted with a modified power-law rigidity spectrum employing ADs computed with the IGRF and TSY 89 models.
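For the global cutoff map described in the Global Cutoff section, the parallelisation over a latitude-longitude grid can be organised roughly as below. This is a schematic only: the per-point function here is a crude Størmer-style dipole estimate standing in for the full trajectory tracing that OTSO performs, so only the multiprocessing pattern is meant to be illustrative.

    # Schematic of a parallel global cutoff-rigidity scan; the per-point
    # function is a crude dipole (Stormer-like) placeholder, not OTSO's
    # trajectory tracing.
    import numpy as np
    from multiprocessing import Pool

    def vertical_cutoff_rigidity(point):
        lat_deg, _lon_deg = point
        return 14.5 * np.cos(np.radians(lat_deg)) ** 4   # GV, placeholder estimate

    def global_cutoff_map(step_deg=1.0, processes=4):
        lats = np.arange(-90.0, 90.0 + step_deg, step_deg)
        lons = np.arange(-180.0, 180.0, step_deg)
        points = [(lat, lon) for lat in lats for lon in lons]
        with Pool(processes) as pool:
            cutoffs = pool.map(vertical_cutoff_rigidity, points)
        return np.array(cutoffs).reshape(len(lats), len(lons))

    if __name__ == "__main__":
        cutoff_map = global_cutoff_map(step_deg=5.0)      # coarse grid for a quick run
        print(cutoff_map.shape, float(cutoff_map.max()))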
2023-02-18T16:10:15.426Z
2023-02-16T00:00:00.000
{ "year": 2023, "sha1": "0aa51b2b80a33e1318289d0047c029a1bb4bf6f9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1029/2022ja031061", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "757f5aaa9f304c01d38030e943fd9b4de46cf12a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
8762034
pes2o/s2orc
v3-fos-license
Analysis of Discourse Structure with Syntactic Dependencies and Data-Driven Shift-Reduce Parsing We present an efficient approach for discourse parsing within and across sentences, where the unit of processing is an entire document, and not a single sentence. We apply shift-reduce algorithms for dependency and constituent parsing to determine syntactic dependencies for the sentences in a document, and subsequently a Rhetorical Structure Theory (RST) tree for the entire document. Our results show that our linear-time shift-reduce framework achieves high accuracy and a large improvement in efficiency compared to a state-of-the-art approach based on chart parsing with dynamic programming. Introduction Transition-based dependency parsing using shiftreduce algorithms is now in wide use for dependency parsing, where the goal is to determine the syntactic structure of sentences. State-of-theart results have been achieved for syntactic analysis in a variety of languages (Bucholz and Marsi, 2006). In contrast to graph-based approaches, which use edge-factoring to allow for global optimization of parameters over entire tree structures using dynamic programming or maximum spanning tree algorithms (McDonald et al., 2005) transition-based models are usually optimized at the level of individual shift-reduce actions, and can be used to drive parsers that produce competitive accuracy using greedy search strategies in linear time. Recent research in data-driven shift-reduce parsing has shown that the basic algorithms used for determining dependency trees (Nivre, 2004) can be extended to produce constituent structures (Sagae and Lavie, 2005), and more general de-pendency graphs, where words can be linked to more than one head (Henderson et al., 2008;Sagae and Tsujii, 2008). A remarkably similar parsing approach, which predates the current wave of interest in data-driven shift-reduce parsing sparked by Yamada and Matsumoto (2003) and Nivre and Scholz (2004), was proposed by Marcu (1999) for data-driven discourse parsing, where the goal is to determine the rhetorical structure of a document, including relationships that span multiple sentences. The linear-time shift-reduce framework is particularly well suited for discourse parsing, since the length of the input string depends on document length, not sentence length, making cubic run-time chart parsing algorithms often impractical. Soricut and Marcu (2003) presented an approach to discourse parsing that relied on syntactic information produced by the Charniak (2000) parser, and used a standard bottom-up chart parsing algorithm with dynamic programming to determine discourse structure. Their approach greatly improved on the accuracy of Marcu's shift-reduce approach, showing the value of using syntactic information in discourse analysis, but recovered only discourse relations within sentences. We present an efficient approach to discourse parsing using syntactic information, inspired by Marcu's application of a shift-reduce algorithm for discourse analysis with Rhetorical Structure Theory (RST), and Soricut and Marcu's use of syntactic structure to help determine discourse structure. Our transition-based discourse parsing framework combines elements from Nivre (2004)'s approach to dependency parsing, and Sagae and Lavie (2005)'s approach to constituent parsing. 
Our results improve on accuracy over existing approaches for data-driven RST parsing, while also improving on speed over Soricut and Marcu's chart parsing approach, which produces state-of-the-art results for RST discourse relations within sentences. Discourse analysis with the RST Discourse Treebank The discourse parsing approach presented here is based on the formalization of Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) used in the RST Discourse Treebank (Carlson et al., 2003). In this scheme, the discourse structure of a document is represented as a tree, where the leaves are contiguous spans of text, called elementary discourse units, or EDUs. Each node in the tree corresponds to a contiguous span of text formed by concatenation of the spans corresponding to the node's children, and represents a rhetorical relation (attribution, enablement, elaboration, consequence, etc.) between these text segments. In addition, each node is marked as a nucleus or as a satellite, depending on whether its text span represents an essential unit of information, or a supporting or background unit of information, respectively. While the notions of nucleus and satellite are in some ways analogous to head and dependent in syntactic dependencies, RST allows for multi-nuclear relations, where two nodes marked as nucleus can be linked into one node. Our parsing framework includes three components: (1) syntactic dependency parsing, where standard techniques for sentence-level parsing are applied; (2) discourse segmentation, which uses syntactic and lexical information to segment text into EDUs; and (3) discourse parsing, which produces a discourse structure tree from a string of EDUs, also benefiting from syntactic information. In contrast to the approach of Soricut and Marcu (2003), which also includes syntactic parsing, discourse segmentation and discourse parsing, our approach assumes that the unit of processing for discourse parsing is an entire document, and that discourse relations may exist within sentences as well as across sentences, while Soricut and Marcu's processes one sentence at a time, independently, finding only discourse relations within individual sentences. Parsing entire documents at a time is made possible in our approach through the use of lineartime transition-based parsing. An additional minor difference is that in our approach syntactic information is represented using dependencies, while Soricut and Marcu used constituent trees. Syntactic parsing and discourse segmentation Assuming the document has been segmented into sentences, a task for which there are approaches with very high accuracy (Gillick, 2009), we start by finding the dependency structure for each sentence. This includes part-of-speech (POS) tagging using a CRF tagger trained on the Wall Street Journal portion of the Penn Treebank, and transition-based dependency parsing using the shift-reduce arc-standard algorithm (Nivre, 2004) trained with the averaged perceptron (Collins, 2002). The dependency parser is also trained with the WSJ Penn Treebank, converted to dependencies using the head percolation rules of Yamada and Matsumoto (2003). Discourse segmentation is performed as a binary classification task on each word, where the decision is whether or not to insert an EDU boundary between the word and the next word. In a sentence of length n, containing the words w 1 , w 2 … w n , we perform one classification per word, in order. 
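As a rough illustration of this per-word boundary decision, the toy sketch below trains a small averaged-perceptron classifier on an invented example; the three features it uses are placeholders, whereas the actual feature set (words, POS tags, dependency labels, head directions and their combinations) is described next.

    # Toy sketch of per-word EDU-boundary classification with an averaged
    # perceptron; the three features below are placeholders for the real set.
    from collections import defaultdict

    def features(words, i):
        prev_w = words[i - 1] if i > 0 else "<s>"
        next_w = words[i + 1] if i + 1 < len(words) else "</s>"
        return {f"w0={words[i]}", f"w-1={prev_w}", f"w+1={next_w}"}

    class AveragedPerceptron:
        def __init__(self):
            self.w = defaultdict(float)       # current weights
            self.total = defaultdict(float)   # running sums for averaging
            self.steps = 0

        def update(self, feats, gold):        # gold: +1 = boundary after word, -1 = none
            self.steps += 1
            pred = 1 if sum(self.w[f] for f in feats) >= 0 else -1
            if pred != gold:
                for f in feats:
                    self.w[f] += gold
            for f, v in self.w.items():
                self.total[f] += v

        def averaged_score(self, feats):
            return sum(self.total[f] / self.steps for f in feats)

    # Tiny invented training example: one boundary, after "said".
    clf = AveragedPerceptron()
    sent, gold = ["He", "said", ",", "go", "now"], [-1, +1, -1, -1, -1]
    for _ in range(5):
        for i, y in enumerate(gold):
            clf.update(features(sent, i), y)
    print([clf.averaged_score(features(sent, i)) >= 0 for i in range(len(sent))])
    # -> [False, True, False, False, False]: a boundary is placed only after "said"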
For word w_i, the binary choice is whether to insert an EDU boundary between w_i and w_i+1. The EDUs are then the words between EDU boundaries (assuming boundaries exist at the beginning and end of each sentence). The features used for classification are: the current word, its POS tag, its dependency label, and the direction to its head (whether the head appears before or after the word); the previous two words, their POS tags and dependency labels; the next two words, their POS tags and dependency labels; the direction from the previous word to its head; the leftmost dependent to the right of the current word, and its POS tag; the rightmost dependent to the left of the current word, and its POS tag; whether the head of the current word is between the previous EDU boundary and the current word; whether the head of the next word is between the previous EDU boundary and the current word. In addition, we used templates that combine these features (in pairs or triples). Classification was done with the averaged perceptron.

Transition-based discourse parsing

RST trees can be represented in a similar way as constituent trees in the Penn Treebank, with a few differences: the trees represent entire documents, instead of single sentences; the leaves of the trees are EDUs consisting of one or more contiguous words; and the node labels contain nucleus/satellite status, and possibly the name of a discourse relation. Once the document has been segmented into a sequence of EDUs, we use a transition-based constituent parsing approach (Sagae and Lavie, 2005) to build an RST tree for the document. Sagae and Lavie's constituent parsing algorithm uses a stack that holds subtrees, and consumes the input string (in our case, a sequence of EDUs) from left to right, using four types of actions: (1) shift, which removes the next token from the input string and pushes a subtree containing exactly that token onto the stack; (2) reduce-unary-LABEL, which pops the stack and pushes onto it a new subtree where a node with label LABEL dominates the subtree that was popped; (3) reduce-left-LABEL and (4) reduce-right-LABEL, which each pop two items from the stack and push onto it a new subtree with root LABEL, which has as its right child the subtree previously on top of the stack, and as its left child the subtree previously immediately below the top of the stack. The difference between reduce-left and reduce-right is whether the head of the new subtree comes from the left or the right child. The algorithm assumes trees are lexicalized, and in our use of the algorithm for discourse parsing, heads are entire EDUs, not single words. Our process for lexicalization of discourse trees, which is required for the parsing algorithm to function properly, is a simple percolation of "head EDUs," performed in the same way as lexical heads can be assigned in Penn Treebank-style trees using a head percolation table (Collins, 1999). To determine head EDUs, we use the nucleus/satellite status of nodes, as follows: for each node, the leftmost child with nucleus status is the head; if no child is a nucleus, the leftmost satellite is the head. Most nodes have exactly two children, one nucleus and one satellite. The parsing algorithm deals only with binary trees. We use the same binarization transform as Sagae and Lavie, converting the trees in the training set to binary trees prior to training the parser, and converting the binary trees produced by the parser at run-time into n-ary trees.
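The four actions above can be made concrete with the following minimal sketch, in which a hard-coded action sequence builds a small tree over three EDUs; in the actual parser each action and its label are predicted by the trained model, and the relation labels used here are invented for the demonstration.

    # Minimal illustration of shift / reduce-unary / reduce-left / reduce-right
    # over a queue of EDUs; labels and the action sequence are invented.
    class Node:
        def __init__(self, label, children=None, edu=None, head=None):
            self.label, self.children, self.edu = label, children or [], edu
            self.head = head if head is not None else edu   # head-EDU percolation

        def __repr__(self):
            if self.edu is not None:
                return f"[{self.edu}]"
            return "(" + self.label + " " + " ".join(repr(c) for c in self.children) + ")"

    def parse(edus, actions):
        stack, queue = [], list(edus)
        for act in actions:
            if act == "shift":
                stack.append(Node("EDU", edu=queue.pop(0)))
            elif act.startswith("reduce-unary-"):
                child = stack.pop()
                stack.append(Node(act[len("reduce-unary-"):], [child], head=child.head))
            else:                                   # reduce-left-* or reduce-right-*
                right, left = stack.pop(), stack.pop()
                label = act.split("-", 2)[2]
                head = left.head if act.startswith("reduce-left-") else right.head
                stack.append(Node(label, [left, right], head=head))
        assert len(stack) == 1 and not queue
        return stack[0]

    edus = ["The company reported a loss,", "which surprised analysts,", "but shares rose."]
    tree = parse(edus, ["shift", "shift", "reduce-left-elaboration[N][S]",
                        "shift", "reduce-left-contrast[N][N]"])
    print(tree, "| head EDU:", tree.head)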
As with the dependency parser and discourse segmenter, learning is performed using the averaged perceptron. We use similar features as Sagae and Lavie, with one main difference: since there is usually no single head-word associated with each node, but an EDU that contains a sequence of words, we use the dependency structure of the EDU to determine what lexical features and POS tags should be used as features associated with each RST tree node. In place of the head-word and POS tag of the top four items on the stack, and the next four items in the input, we use subsets of the words and POS tags in the EDUs for each of those items. The subset of words (and POS tags) that represents an EDU contains the first two and the last words in the EDU, and each word in the EDU whose head is outside of the EDU. In the vast majority of EDUs, this subset of words with heads outside the EDU (the EDU head set) contains a single word. In addition, we extract these features for the top three (not four) items on the stack, and the next three (not four) words in the input. For the top two items on the stack, in addition to the subsets of words and POS tags described above, we also take the words and POS tags of the leftmost and rightmost children of each word in the EDU head set. Finally, we use feature templates that combine these and other individual features from Sagae and Lavie, who used a polynomial kernel and had no need for such templates (at the cost of increased time for both training and running).

Results

To test our discourse parsing approach, we used the standard training and testing sections of the RST Discourse Treebank and the compacted 18-label set described by Carlson et al. (2003). We used approximately 5% of the standard training set as a development set. Our part-of-speech tagger and syntactic parser were not trained using the standard splits of the Penn Treebank for those tasks, since there are documents in the RST Discourse Treebank test section that are included in the usual training sets for POS taggers and parsers. The POS tagger and syntactic parser were then trained on sections 2 to 21 of the WSJ Penn Treebank, excluding the specific documents used in the test section of the RST Discourse Treebank. Table 1 shows the precision, recall and f-score of our discourse segmentation approach on the test set, compared to that of Soricut and Marcu (2003) and Marcu (1999). In all cases, results were obtained with automatically produced syntactic structures. We also include the total time required for syntactic parsing (required in our approach). Using our discourse segmentation and transition-based discourse parsing approach, we obtain 42.9 precision and 46.2 recall (44.5 f-score) for all discourse structures in the test set. Table 2 shows the f-score of labeled bracketing for discourse relations within sentences only, for comparison with previously published results. We note that human performance on this task has an f-score of 77.0. While our f-score is still far below that of human performance, we have achieved a large gain in speed of processing compared to a state-of-the-art approach.

Conclusion

We have presented an approach to discourse analysis based on transition-based algorithms for dependency and constituent trees. Dependency parsing is used to determine the syntactic structure of text, which is then used in discourse segmentation and parsing.
A simple discriminative approach to segmentation results in an overall improvement in discourse parsing f-score, and the use of a linear-time algorithm results in a large improvement in speed over a state-of-the-art approach.

            F-score   Time
Marcu99     37.2      -
S&M03       49.0      481s
this work   52.9      69s
human       77.0      -

Table 2: F-score for bracketing of RST discourse trees on the test set of the RST Discourse Treebank, and total time (syntactic parsing, segmentation and discourse parsing) required to parse the test set (S&M03 and our approach were run on the same hardware).
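For reference, the labeled-bracketing scores used in this comparison reduce to set operations over labeled spans; the following sketch computes precision, recall and f-score for a pair of invented span sets (EDU index ranges paired with relation labels).

    # Labeled bracketing evaluation sketch: spans are (start_edu, end_edu, label)
    # triples; the gold/system sets below are invented for illustration.
    def prf(gold, system):
        correct = len(gold & system)
        p = correct / len(system) if system else 0.0
        r = correct / len(gold) if gold else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f

    gold = {(0, 1, "elaboration"), (0, 2, "contrast"), (3, 4, "attribution")}
    system = {(0, 1, "elaboration"), (0, 2, "background"), (3, 4, "attribution")}
    print("P={:.3f} R={:.3f} F={:.3f}".format(*prf(gold, system)))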
2014-07-01T00:00:00.000Z
2009-10-07T00:00:00.000
{ "year": 2009, "sha1": "e365e8af684bd2337cb81f65f90e1597406b5f77", "oa_license": null, "oa_url": "http://dl.acm.org/ft_gateway.cfm?id=1697253&type=pdf", "oa_status": "BRONZE", "pdf_src": "ACL", "pdf_hash": "60f6422be90734e9d8d81f8403181363c08fe895", "s2fieldsofstudy": [ "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
55699035
pes2o/s2orc
v3-fos-license
OMFM: A Framework of Object Merging Based on Fuzzy Multisets Information fusion is a process of merging information from multiple sources into a new set of information. Existing work on information fusion is applicable in various scenarios such as multiagent system, group decision making, and multidocument summarization. This paper intends to develop an effective framework to solve object merging problem based on fuzzy multisets. The objects defined in this paper are data segments in document fusion task, referring to the concepts with semantic-related terms of different semantic relations embedded. The fundamental operation is the merge function mapping data segments in multiple fuzzy multisets onto one object, which is a solution. Under this framework, we define quality measures of purity and entropy to quantify the quality of the solutions, balancing accurateness, and completeness of the results. Merge function that yields this kind of solutions is VI-optimal merge function and a series of theoretical properties concerning it are studied. Finally, we investigate the proposed framework in a special application scenario (i.e., document fusion) which is related to the task of multidocument summarization and show how the framework works with illustrative example. Introduction As an important research area, information fusion is a process of merging information from multiple sources into a new set of information.There are many applications in this research area such as heterogeneous database, multiagent system, group decision making, and multidocument summarization.Under different application scenarios, different principles and procedures are utilized to solve the problems.Many classical mathematical theories of aggregation operators [1][2][3][4][5] have been developed for multiagent system and group decision making system, and the information that aggregation operators try to fuse typically expresses facts of opinion or score of an agent.Besides these researches, a fair amount of work focused on the situation where the source is regarded as a propositional belief [6][7][8].The existence of nonfactual knowledge like integrity constraints and inference rules makes the difference between these two theories.As a result, a lot of work has been done in the heterogeneous database area on first-order theory.Another type of fusion is that each source presents knowledge by means of a possibility distribution [9], in this case, the imperfection of incorrectness, uncertainty, and incompleteness in the data should be coped with.The main challenge is how to deal with conflicting information provided by different sources. 
To address the issues in the third type of fusion, a framework of object merging has been investigated by using multiset theory currently, which could be utilized to solve the problem of multidocument summarization (MDS) [10].Also, object merging is a hot spot of research in many domains with good prospect for application.The framework of multiset merging for MDS has defined the merge function which maps the objects in multisets onto a single object and has got some foregoing results which cannot be considered as a final summarization yet [11,12], owing to the fact that these foregoing results are just some keywords without any relation among them, not mention to context of co-text, context of culture, context of situation, and so forth.The essential reason for this result is that the framework defined the quality measures with the multiplicity of element as the measure of important element.In other words, the multiplicity is equal to term frequency which is just shallow text feature.When performing source selection in MDS, the traditional method transformed one document into the representation of a vector of words or a multiset of words, which are just simple settings.Other progressive approaches should be proposed, which are semantically richer than using words as source representation.In short, the problem of processing coreferent objects has not been deeply investigated at present.On one hand, merging of nonquantitative objects, especially the objects with semantic information, has not been proposed.On the other hand, object merging functions and the rationality of merging still need to be further investigated. Within the scope of our paper, we also focus on the problem of object merging in information fusion, and our work should be treated as an extension of the framework mentioned hereinbefore.There are many differences between these two works.The basic difference concerning the definition of coreferent objects: coreferent objects in paper [11] are the objects describing the same entity in the real world, while in our paper the object we're discussing is a piece of data or information, which could be used to denote the same concept with semantic-related terms of different semantic relations embedded.Then, fuzzy multiset theory is investigated in our paper, in which membership degree function and length function are used to describe both uncertainty and repeatability of the natural language.When performing fusion in practical situations, the object merging process has considered deep text features of semantic relations such as hypernym, synonym, and antonym.Moreover, two quality measures (purity [13] and entropy [14,15]) widely used in the text mining literature are adopted to quantify the result of a merge function.Thus, the behavior of the merge functions we defined in this paper can be characterized by the behavior of the quality measures.With this strategy, we can get an optimal merge result.The possible application of this work is document fusion [16], where a collection of textual documents is used to produce the shortest description containing all information found within the document set, but without repetition.Existing solutions for this problem normally focused on statistical methods or heuristics methods used in multidocument summarization [17,18].In this paper, object merging based on fuzzy multisets (OMFM) is definitely a meaningful attempt, where a source set of multiple documents is denoted as a multiset and each document is denoted as a fuzzy multiset of multiple 
concepts. This paper is organized as follows. In Section 2, we review mathematical preliminaries. Furthermore, the general framework of objects and object merging is proposed in Section 3, and the definition of the quality measures and the construction of merge functions are introduced in Section 4. Next, a demonstration of how our framework works on a practical problem (i.e., document fusion) with an illustrative example is presented in Section 5. Finally, in Section 6, we give the conclusion and future work for the proposed framework OMFM.

Preliminaries

In mathematics, a fuzzy set, introduced by Zadeh in 1965, is a set whose elements have degrees of membership; it is an extension of the classical notion of a set [19, 20]. Fuzzy set theory is very useful for dealing with problems that are not easily handled by classical computing techniques. On the other hand, the use of membership degrees instead of real numbers to represent memberships also provides a means to measure the possible uncertainty in computational language theory. The notion of a multiset is a generalization of the classical notion of a set in which members are allowed to appear more than once. As a data structure, a multiset stands in between strings, where a linear ordering of symbols is present, and sets, where no ordering is considered. Combined with the notion of a fuzzy set, the multiset is generalized to the fuzzy multiset [21], which can describe both the uncertainty and the repeatability of natural language. Consider one language modeling problem: given some sentences, identify the concepts and words which are similar or identical, and merge these objects to get a condensed description. This task is a challenging natural language problem with large amounts of diverse and compositional data. To solve this problem, we extend the fuzzy multiset to produce a language model which maps data segments in multiple fuzzy multisets onto one object, where different semantic relations for one concept are treated as repeated elements with different membership degrees in fuzzy multisets. In this section, the mathematical theories of fuzzy sets, multisets, and fuzzy multisets are briefly reviewed.

Definition 1 (membership function). The membership function μ_A : U → [0, 1] indicates the degree to which an element x of U belongs to A. μ_A(x) = 1 indicates that the element x completely belongs to the set A; that is, x ∈ A in the sense of a traditional set.

Definition 2 (fuzzy set). The membership function μ_A over U = {x_1, x_2, ..., x_n} defines a fuzzy set, which is represented as A. A fuzzy set A with elements x_1, x_2, ..., x_n can be denoted as A = {(x_1, μ_A(x_1)), (x_2, μ_A(x_2)), ..., (x_n, μ_A(x_n))}.

According to the definition of a fuzzy set, the extent to which an object belongs to a set is no longer fixed, and the membership of each object falls in the range of the interval [0, 1].

A multiset A over U can also be denoted by its count function Count_A : U → {0, 1, 2, ...}, where Count_A(x) gives the number of occurrences of the element x in A. There are some basic operators and relations of multisets below:

Inclusion: A ⊆ B if and only if Count_A(x) ≤ Count_B(x) for all x in U.
Equality: A = B if and only if Count_A(x) = Count_B(x) for all x in U.
Intersection: Count_{A ∩ B}(x) = min(Count_A(x), Count_B(x)).
Union: Count_{A ∪ B}(x) = max(Count_A(x), Count_B(x)).
Addition: Count_{A ⊕ B}(x) = Count_A(x) + Count_B(x).

Definition 4 (n-cut set of a multiset). The n-cut set of a multiset A is denoted as A_n and given by A_n = {x | x ∈ A ∧ Count_A(x) ≥ n}.

Note the difference between indexing notation, which assigns an index to a multiset, and the notation A_n, which means the n-cut set of the multiset [23].

The set of all fuzzy multisets drawn from a universe U is denoted as M(U).

Definition 8 (n-cut set of a fuzzy multiset). The n-cut set of a fuzzy multiset M is denoted as M_n.

Note that the difference between indexing notation applied to a fuzzy multiset and the notation M_n is that M_n is reserved for the n-cut set of the fuzzy multiset M, while the former means assigning an index to the fuzzy multiset M.
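As a quick aside, the crisp multiset operations reviewed above map directly onto Python's collections.Counter; the snippet below (only an illustration, not part of the OMFM framework itself) shows inclusion, intersection, union, addition and the n-cut on two small example multisets.

    # Crisp multiset operations from the preliminaries, illustrated with
    # collections.Counter; the element names are arbitrary examples.
    from collections import Counter

    A = Counter({"concept_a": 2, "concept_b": 1})
    B = Counter({"concept_a": 1, "concept_b": 3, "concept_c": 1})

    def included(A, B):                  # A subseteq B iff Count_A(x) <= Count_B(x) for all x
        return all(B[x] >= c for x, c in A.items())

    intersection = A & B                 # min of multiplicities
    union = A | B                        # max of multiplicities
    addition = A + B                     # sum of multiplicities

    def n_cut(M, n):                     # {x : Count_M(x) >= n}
        return {x for x, c in M.items() if c >= n}

    print(included(A, B))                # False: concept_a occurs twice in A but once in B
    print(intersection, union, addition, n_cut(addition, 3), sep="\n")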
Objects and Object Merging 3.1.The General Framework.We have reviewed the most relevant definitions in the previous section.As we've mentioned earlier, the framework in our paper extends the work in paper [11], so now we will introduce some work basis below.The bases involve the redefinitions of coreferent objects and merge function in OMFM, and a brief review of properties of preservation and majority rule in [11]. The bases involve the redefinitions of coreferent objects and merge function in OMFM. Reference function : → is formalized to describe a concept in the real world, where symbolizes the real world.By definition, two concepts are called coreferent if they describe the same real world concept.Definition 9 (coreferent objects).Let be a universe set of concepts.Two concepts 1 and 2 are coreferent if and only if ( 1 ) = ( 2 ).By the definition above, two objects that describe the same real world concept with semantic-related terms of different semantic relations embedded are formalized axiomatically.Here, we consider the context as the baseline: when describing a theme in a document, some semantic-related terms relating to this concept will be used to extend the theme. Definition 10 (merge function). The merge function in OMFM is represented by function 𝜛 : M(𝑈) → 𝑈. Mapping the fuzzy multisets of objects onto a single object is the job of merge function in our work, and these functions are often idempotent; that is, (ũ, ũ, . . ., ũ) = ũ, ∀ũ ∈ .This conclusion is also suitable in this paper and corresponding proof will be given in the following section. A brief review of two important properties. Property 1 (preservation).A merge function is preservative when merge function only selects one of the elements from the source set, the property of preservation in OMFM is denoted as Property 2 (majority rule).If the multiplicity value of an element is larger than the half of cardinality value of the source set then this element must be selected by the merge function, which is denoted as The majority rule above is an important property for merge function in multiset that was further studied in [25] and a weaker version has been proved in [26].By now, the majority rule is not extended deeply in fuzzy multiset as it does not apply in general, but the preservation rule will be elaborated in our paper. Merging of Fuzzy Multisets. Within the scope of OMFM, we focus on the case of object merging of compound a multiset and multiple fuzzy multisets with the function of the type below: where the elements of M() are denoted as M() , and the elements of M ( M ()) are () .Here, the multiset () could be denoted as The fundamental operator is mapping the data segments of fuzzy multisets onto one object, which is called a solution.In following sections the symbol is used to represent a random solution of a given merge function; that is, () = . The case is ( M(), ⊆) is not an upper bounded lattice.The normalization criterion that is needed when performing merge functions is usually omitted by fuzzy multiset theory.Therefore, we show another property below. Property 3 (boundedness). 
A bounded merge function 𝜛 over M() should satisfy the following constraint: It indicates that the merge function selects one of the elements from all the source sets.A corresponding inference is that This inference explains that any element not belonging to any source set should not exist in the outcomes of a bounded merge function.We could easily get this natural property just from the observation, because element with membership degree () should not be mixed into a solution arbitrarily.Also, it is a weaker notion of preservation.Besides, we also formulate the enforcing preservation: Then, Property 3 is equivalent to indicating Paper [11] has pointed out that keeping the weaker version of Property 2 in the situation of multiset is advantageous.They take multidocument summarization (MDS) as an example to explain that keeping a strict preservation would lead to a bad result in practical situations, that is, one of the documents itself would be the summary of the entire document set.While the task of document fusion (DF) is to generate a text containing all the information in entire document set.So, a weaker version of Property 2 is also advantageous in our framework.The bounded merge function of fuzzy multiset will be further elaborated in subsequent sections. Proof.We can get the proof from the case that for any Optimal Merging of Fuzzy Multisets 4.1.Quality Measures.The purpose of defining quality measures is to construct the merge functions that could get good performance for object merging in multiple fuzzy multisets.On one hand, the behavior of the merge functions we defined could be characterized by the value of the quality measures.On the other hand, adjusting a merge function could also optimize a balance between accurateness and completeness of a given solution to get a higher value of quality measures.The relationship between the merge functions and quality measures can be shown in Figure 1. Within the scope of our paper, we adopted two quality measures widely used in the text mining literature: the first one is purity [13], and the second one is entropy [14,15].Information entropy is a concept used to measure the amount of information in the information theory, which is often taken as a measure of "disorder;" that is, the higher the value of entropy, the higher the extent of disorder; information purity is a measure of correlation between a system and its environment, where a higher value of purity means that a system is more relevant to its environment.Both of the two measures fall into range interval [0, 1].Basically, the maximum purity and minimum entropy of results are the goals we try to achieve.Nevertheless, when we try to analyze the effect of a merge function, we should be able to analyze the effect at fundamental level of the elements.So, some local quality measures will be introduced first. Definition 13 (local precision).Given a multiset () = { M( 1 ) , M( 2 ) , . . ., M( ) }, the local precision of the element could be defined as (23) such that Count () ( M() ) . ( The local precision judges the accurateness of adding the element with the membership degree into the solution. Here, () is a multiset of sources. * judges the proportion of fuzzy multisets where the membership degree of element is . 
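For orientation, the snippet below computes purity and a normalised entropy in their standard text-mining form over a single discrete distribution; the local measures defined in this framework are variants of these, and the distribution used here is invented purely for illustration.

    # Standard text-mining purity and normalised entropy over a discrete
    # distribution; illustrative only -- the framework's local measures differ
    # in detail, and the probabilities below are invented.
    import math

    def purity(p):
        return max(p)

    def norm_entropy(p):
        k = len(p)
        h = -sum(pi * math.log(pi) for pi in p if pi > 0)
        return h / math.log(k) if k > 1 else 0.0

    # e.g. the share of sources supporting each membership degree of one element
    p = [0.6, 0.3, 0.1]
    print(f"purity={purity(p):.2f}  entropy={norm_entropy(p):.2f}")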
Property 4 (monotonity of * ).Local precision * is a decreasing function in accordance with the membership degree threshold : The monotonity of * is a natural property.The lower membership degree means more sources will be added into the solution, owing to the fact that higher membership degree indicates relative simple relations related one concept (say the synonym of one word), and lower membership degree indicates more unspecific and more layered descriptions concerning one concept.As a result, we will get more complete information with higher precision. Definition 15 (purity).Purity is computed using the maximal local precision value for each element in the solution as follows: such that Property 5 (monotonity of * ).Local precision * is an increasing function in accordance with the degree of membership threshold when 0.5 ≤ * ≤ 1, a decreasing function in accordance with the degree of membership threshold when 0 ≤ * ≤ 0.5: Property 5 implies that the variation trend of local entropy is impacted by both fuzziness and proportion of an element in a solution; that is, neither excessively detailed or excessively brief information, nor more sources or less sources contained in the solution is appropriate to enrich the information of a fusion system.The proofs of these natural properties are omitted here.Back to our approach, the important connection exists between local precision and local entropy is also reflected by this property. Definition 18 (total entropy).The total entropy of () is calculated as such that The purity and entropy can, respectively, express the quality measures, but the variation scales between them may be unequal.As mentioned above, the maximum purity and minimum entropy of results are the goals we try to achieve.Therefore, we try to investigate an index with the similar variation scales.(39) Next, the rationality of this index will be shown.Generally, a brilliant result is generated by the higher value of the purity and the lower value of the entropy.That is to say, if the discrepancy between these two values is large, the value of the validation index is large and a good result can be determined by this validation index.That is to say, a balance between purity and entropy is expressed by validation index.In the case where the variation scales of these two values are similar, we propose a constant value which could change the similar variation scales of purity value and entropy value.In practice, we determine the most significant singular values by selecting the best VI, and it is kind of an empirical value which could be achieved during the simulation and modified through iterated procedure.But how to determine the value of this constant is not the problem we really care about now, we have not discussed this problem deeply in this paper.In our future work, we will explore this problem deeply with experimental analysis. Note that for any solution , VI( | ) ̸ = 0 if and only if the local precisions of all elements in this solution differ from zero. Optimization of Quality. 
The effect of a merge function can be judged by quality measures introduced in previous phase.And then we try to investigate the solutions optimizing the values of the quality measures.This type of optimization problem also appears in other research fields, paper [27] utilized the transitive closure as the effective mechanism transforming a matrix into fuzzy equivalence relation, by this way, finding the approximate partitions of data sequences.It is a classic example in the field of fuzzy set theory.Another example involved searching approximate minimum-distance by transforming a fuzzy reciprocal relation with a transitive reciprocal relation [28].That is to say, the optimization mechanism could not be one of a kind.At the next step, we will concentrate on maximum quality generated from VI-value (the maximization of the purity and the minimization of the entropy).The difficulty of this step is to find the solution which gets the best VI-value.Therefore, the main task here is to define and investigate a suitable merge function.(41) At this step, some properties of VI-optimal merge function will be studied further.A notable point is that there may appear several solutions sharing one maximum VI-value.With the definition of the merge function, how to select the unique solution is an important task here.Therefore, a selection criterion that selects one solution from the optimal solutions set is needed when performing these merge functions.With the special application area of OMFM, we will Mathematical Problems in Engineering show the details in illustrative examples.Another problem is a solution that has VI( | ) ̸ = 0 does not always exist.Hence, the notion of invalid solution is given below. Definition 22 (invalid solution). Assume a VI-optimal merge function 𝜛 and a fuzzy multiset of sources 𝑀 ∈ M( M(𝑈)). A multiset ∈ M() is defined as an invalid solution of () if A solution of a VI-optimal merge function that is not invalid is called avalid solution.Notice the differences between invalid solution and valid solution.Then, we will introduce another significant theorem. Theorem 23. Any solution that is a real subset of the source intersection or a real superset of the source union has that Proof.Assume a fuzzy multiset of source () = { M( 1 ) , M( 2 ) , . . ., M( ) }. (1) A solution that satisfies Also it satisfies Owing to the case of all the elements of the solution would generate a local precision equivalent to 0, then (2) A solution that satisfies also satisfies Owing to the case of all the elements of the solution would generate a local precision equivalent to 0, then The conclusion here is that a valid solution of VI-optimal merge function should include the intersection of the sources and should be included by the union of the source.In view of this point, we define the intersection of the sources as the lower bound and the union of the sources as the upper bound.The formalized definition is shown as where the lower bound is denoted as and the upper bound is denoted as .Hence, we shall only consider solution that satisfies ⊆ ⊆ in the following section. Theorem 24.An VI-optimal merge function is idempotent. Thus, for = = , we have that The corresponding proof is also shown when applying the previous theorem. Theorem 25. A VI-optimalmerge function 𝜛 is bounded. Proof.With Theorems 12 and 23, we could get this conclusion. 
An important point is that VI-optimal merge functions do not satisfy the property of preservation.Nevertheless, due to the theorem we just proved above, they are bounded undoubtedly and boundedness offering a weaker version of preservation is shown in previous section.Besides the theorem of boundedness, several interesting theorems relevant to VI-optimal merge function need to be mentioned here.One of them is the theorem of VI-optimality invariance when scaling of multiplicity of the sources below. Theorem 26.Assume a fuzzy multiset () = { M( 1 ) , M( 2 ) , . . ., M( ) } and a merge function .A conclusion could be got that Proof.Several facts could be got that Proof.We could get the corollary in last theorem. An Application: Document Fusion with Illustrative Example 5.1.Document Fusion.One possible application for this fuzzy multiset framework is document fusion.It involves the merging of elements with the different relations embedded.When it comes to document fusion, we have to introduce multidocument summarization briefly.Document fusion and multidocument summarization are two relevant areas.The important difference between these two areas is that, for multidocument summarization, the main task is to generate the shortest description containing the most relevant information, while for document fusion, the focus is to generate the shortest description containing all information contained in the whole document set excluding the redundancy [15,16].It is like that multidocument summarization is the intersection of the documents and document fusion is the union of the documents.Unlike multidocument summarization system, there is no organization like DUC (Document Understand Conference) [29] providing "ideal" datasets for document fusion research yet, with which multiple documents under same subject and ideal summarization results for testing can be achieved.In addition, intrinsic and extrinsic evaluations in multidocument summarization system could not be suitable in fusion task: intrinsic evaluation where evaluation is done by human on accessing the quality of the fused documents itself makes the evaluation process subjective [30], and on the other hand, the difficulty in intrinsic evaluation of document fusion systems is that there is no existing collection of human written fusion results of multiple documents, serving as a gold standard for such evaluations by now; and extrinsic evaluation where the result of the document fused is evaluated by the completion of a specific task makes the evaluation process more complicated.Thus, there are no standard methods used to estimate the work in fusion task like in some document summarization tasks [31][32][33].Given the problems we mentioned above, the evaluation that we performed is limited to date.To demonstrate our work, an example of an article cluster concerning the spoilage problem complaints of the dairy products on a particular brand has been selected from "315 consumption complaint" website to show the general fusion process and results by utilizing our framework.Although we use Chinese text for illustration, it is worth mentioning that there is not any fundamental difference between Chinese and English or other language under this framework. 
The work of our paper is to propose a framework for document fusion, so we are not only aiming to get keywords, but for comprehensive information.Here, we just try to consider the situation of fuzzy multiset.With such extensions, the membership degree could be used to show the importance and fuzziness of an element, which makes the document representation more granular and semantically richer than multiset merging model in paper [11].Assigning different weights to the same element also makes sense, when considering the situation that semantic-related terms with different semantic relations are used to identify the concept, which is semantically richer than just using words.Under our framework, semantic methods and statistical methods could be combined and used in many domains. Illustrative Example. The main processing that needs to be performed is to get the Extra Strong, Strong, and Medium Strong relations of every concept in each article by using HowNet [34].As a common-sense knowledge base, HowNet unveils interconceptual and interattribute relations of concepts.In HowNet, every concept of a word or phrase and its description form one entry with relations such as hypernym, hyponym, synonym, antonym, meronym, and Holonym (descriptions for these relations could be seen in Table 1), existing in HowNet and presented in DEF (concept definition) as shown in Box 1. When performing English text, a large lexical database of English, WordNet, could be used to identify these relations instead of HowNet.Here, textual intention structure is determined by three relations of every concept.As an indicator in Relations Descriptions Hypernym Also known as a superordinate, which is a word referring to broad categories or general concepts.For example, "musical instrument" is a hypernym of "guitar" because a guitar is a musical instrument. Hyponym A word or phrase whose semantic field is included within that of another word, a hyponym shares a type-of relationship with its hypernym, for example, "pigeon, " "crow, " "eagle, " and "seagull" are all hyponyms of "bird." Synonym A word with the same or similar meaning of another word. Antonym A word which has the opposite meaning as another word. Meronym Meronym denotes a constituent part of or a member of something.For example, "finger" is a meronym of "hand" because a finger is part of a hand.Similarly, "wheels" is a meronym of "automobile." Holonym Holonym defines the relationship between a term denoting the whole and a term denoting a part of or a member of the whole.For example, "tree" is a holonym of "bark, " of "trunk" and of "limb." linguistic segments, three relation segments of every concept tend to indicate the theme segments.That is to say, once three relations of every concept have been confirmed, the corresponding linguistic segments will have determinate tendency.Each concept is defined as an element in fuzzy multisets and the three relation segments that are included in each concept determine the different membership degree of each element as shown in Table 2. 
As we've mentioned above, document fusion is to produce the shortest description containing all information found within the document set, but without repetition.The solution that we need is the solution concluding all the key concepts (, , and in this example) and these concepts are constructed by three relations (Extra Strong, Strong, and Medium Strong relations).Let us consider the solution = {(, 1), (, 0.5), (, 0.2)}.We get the local precision of all elements in this solution On the same principle, we present the local precision, purity, local entropy, entropy, and VI-value of the solutions with only one semantic relation embedded in each concept.If every concept is treated of equal importance with single relation embedded, in this case, we have got a maximal VIvalue 0.355 (see Table 3). As mentioned in former section, a valid solution of a VIoptimal merge function should follow the constraint below: We should only consider solution that satisfies the constraint mentioned above in practical application.So, more complicated solutions could also be considered.For example, when concept is an important concept which needs to be described explicitly in the fusion result, more details as Medium Strong relation in Table 2 should be contained to construct the description.If the concept is not treated of equal importance with single relation embedded, in this case, we've got two maximal VI-value of 0.491 (see Table 4).Generally, for any solution , we can calculate the VI-values and choose the solution based on the observation of maximal VI-value.Within the scope of this paper, a tie-breaking criterion does not always exist, the accessorial choice criterion that helps selecting a solution from the set of optimal solutions is necessary.From the observations of 10 and 19 , we need to decide the merging function by actual requirement or finding a new solution with more semantic relations embedded. More examples with considering concept and concept as important concepts which needs to be describe explicitly in the fusion result, will be shown in Tables 5 and 6.When concept and concept are with similar scale of multiple relations embedded, we could get equivalent maximum VIvalue 0.410 from 28 and 38 , which means these two strategies get the same effect.If only one concept is with multiple relations embedded, we should consider 10 to 45 and get the maximum VI-value 0.491 from 10 or 19 .Both the situations needs to be further selected by considering the specific application: For 28 and 38 , two solutions selected strong and medium strong relations for concept and , so the importance of concept should be further considered, that is to say, if concept is more importance for fusion, 38 should be selected.For 10 or 19 , when the fusion results need to be described with more details, corresponding solution 10 should be selected; otherwise, 19 would also be a candidate selection for fusion. 
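The selection procedure used in this example can be sketched as follows: score every candidate solution, keep the maximisers, and break remaining ties by the application requirements. The vi function below simply takes purity minus a weighted entropy as a stand-in, since the exact validation index of the framework involves its own local measures and an empirically tuned constant; all numbers are invented.

    # Sketch of VI-based candidate selection: score candidates and keep the
    # maximisers. The candidate quality values and the VI stand-in (purity
    # minus weighted entropy) are illustrative assumptions only.
    def vi(purity_value, entropy_value, c=1.0):
        return purity_value - c * entropy_value

    candidates = {
        "S10": {"purity": 0.80, "entropy": 0.31},   # made-up quality values
        "S19": {"purity": 0.75, "entropy": 0.26},
        "S28": {"purity": 0.70, "entropy": 0.29},
    }
    scores = {name: vi(q["purity"], q["entropy"]) for name, q in candidates.items()}
    best = max(scores.values())
    optimal = [name for name, s in scores.items() if abs(s - best) < 1e-9]
    print(scores, "->", optimal)   # remaining ties are broken by the application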
Figure 2 shows the distribution of the VI-values with different scales of semantic relations embedded using boxand-whisker plot and evaluates the effectiveness of the merge function.The observations on partial of VI-values are presented here.The top bar stands for the maximum observation value; the bottom bar represents the lowest observation value.The bottom of the box is the lower quartile with 25% of We have not performed all the merge functions with corresponding VI-values in our illustrative example yet.The data and explanation we give above is just to show the proposed framework vividly.Now, we've got the conclusion that we could get a corresponding VI-value for any solution.On the other hand, we could use the VI-value to select the best solution.As our framework is flexible enough to generate shortest description containing all information found within the document sets with different levels of details depending on practical requirement.With this framework, neither the relations between the keywords nor the context of co-text will be lost.To generate a moderately fluent semantic fusion result from a collection of documents, sentence planning and regeneration are then used to combine the segments together to form a coherent whole.In our paper, the framework solved basic issues on developing a document fusion system.(1) The documents are fused on more levels of granularity, as we assign different weights to different semantic relations embedding in the same element of concept.(2) Meanwhile, taking into account semantic relations in the fusion progress ensures the readability of the fused document. Conclusion and Future Work We have presented a framework OMFM to map the fuzzy multisets of objects into one object.Our framework for merging multiple fuzzy multisets of documents is an interesting work, where a document set is modeled as a multiset of documents and each document is modeled as a fuzzy multiset of concepts.Also, OMFM is an extension of the work in paper [11], which could describe both uncertainty and repeatability of the natural language by using the membership degree as the semantic fuzziness of the objects.The quality measures widely used in the text mining literature are defined to quantify the result of a merge function: purity (a measure of correctness) and entropy (a measure of completeness), where the maximum purity is got by upper solution, and the minimum entropy is got by lower solution.Then, we have constructed VI-optimal merge function to get the best solution, where both the higher purity and the lower entropy could be achieved simultaneously.Moreover, we have proved the properties related to constraints of merging problem.Finally, how to settle the problem in document fusion application using OMFM has shown the practicality and effectiveness of our work.With comparatively higher theoretical value and prospect of application, object merging problem will become a hot spot of research in many domains.Future work will further focus on experimental research and applying this framework to solving more relevant problems. Figure 1 : Figure 1: The relationship between the merger functions and quality measures. Figure 2 : Figure 2: The distribution of VI-value with different scales of semantic relations embedded. Table 1 : Descriptions for different semantic relations. Table 3 : VI-values of solutions (concept with single relation embedded). Table 4 : VI-values of solutions (concept with multiple relations embedded). 
(upper solution and lower solution). The vertical axis presents the VI-values. By observing the range of VI-values, we see that a wide range of values is achieved by the different merge functions; thus, simple strategies are not likely to work well. The performance of the strategy with multiple relations embedded can likewise be observed in the figure. Table 5: VI-values of solutions (concept with multiple relations embedded). Table 6: VI-values of solutions (concept with multiple relations embedded).
2018-12-13T09:23:38.581Z
2014-11-10T00:00:00.000
{ "year": 2014, "sha1": "6f9935d864abf50d609f16b79cc34c1ad8f8e0e9", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/mpe/2014/304537.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6f9935d864abf50d609f16b79cc34c1ad8f8e0e9", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
53625746
pes2o/s2orc
v3-fos-license
An audit: indications and diagnosis of bone marrow biopsies at a tertiary care hospital in Saudi Arabia Bone marrow aspirate and trephine examination has a significant role in the diagnosis, staging and management of malignant hematological disorders.1 In addition, it is a valuable diagnostic tool in a number of non-hematological diseases or systemic illnesses such as pyrexia of unknown origin (PUO), storage disorders, infectious diseases including Leishmaniasis, or granulomatous lesions1-4 and metastatic solid tumors.1,5,6 Sometimes, systemic illness or non-hematological disorders can mimic the signs and symptoms of hematological diseases, if the patient presents with pallor, bleeding, lymphadenopathy, or hepatosplenomegaly. In such cases bone marrow examination may either confirm the suspected diagnosis or provide a clue to an unsuspected systemic or non-hematological disease.4,7 Sometimes cases are referred by clinicians with a provisional diagnosis but are found to have other diseases on bone marrow biopsy.

Introduction

Bone marrow aspirate and trephine examination has a significant role in the diagnosis, staging and management of malignant hematological disorders. 1 In addition, it is a valuable diagnostic tool in a number of non-hematological diseases or systemic illnesses such as pyrexia of unknown origin (PUO), storage disorders, infectious diseases including Leishmaniasis, or granulomatous lesions 1-4 and metastatic solid tumors. 1,5,6 Sometimes, systemic illness or non-hematological disorders can mimic the signs and symptoms of hematological diseases, if the patient presents with pallor, bleeding, lymphadenopathy, or hepatosplenomegaly. In such cases bone marrow examination may either confirm the suspected diagnosis or provide a clue to an unsuspected systemic or non-hematological disease. 4,7 Sometimes cases are referred by clinicians with a provisional diagnosis but are found to have other diseases on bone marrow biopsy. Bone marrow aspirate is a reliable tool for the evaluation of cellular morphology, and the trephine provides detailed information about bone marrow cellularity, bone architecture, overall hematopoiesis and post-chemotherapy changes. 6,8 The objective of this study was to determine the common indications and bone marrow examination findings at a tertiary care university hospital, and in addition to ascertain the role of this procedure in the diagnosis of non-hematological or benign hematological diseases in our setup. These results might help clinicians in selecting cases for bone marrow biopsy, as many patients are referred by them in a tertiary care hospital.

Materials and methods

This was a descriptive retrospective audit conducted over a period of two years, from January 1st 2014 to December 31st 2015, at the Section of Hematology, Department of Pathology, King Khalid University Hospital in Riyadh, Saudi Arabia. All patients who underwent bone marrow biopsy were either admitted to clinical wards or referred from outpatient clinics at the King Khalid Hospital. We analyzed 481 bone marrow aspirates and trephines across all age groups and both genders. Patient age, sex, clinical history, indication for the procedure and the provisional diagnosis made by the primary clinicians were recorded. After routine hematological investigations, bone marrow specimens were obtained from the posterior iliac crest in all patients according to standard technique.
6,8 Peripheral blood and bone marrow smears were prepared and stained with Wright-Giemsa stain, while trephines were decalcified and the paraffin-embedded blocks were stained with the usual haematoxylin and eosin (H&E) stain and examined. Appropriate marrow immunohistochemical and reticulin stains were used where necessary. Descriptive statistical analysis of the data was performed using SPSS software (version 22.0, SPSS, Chicago, Illinois, USA) to determine the frequencies with percentages of the various diseases involving the bone marrow and to elaborate the indications for the procedure. As this is a tertiary care hospital, 355 (73.8%) patients were referred for bone marrow examination by clinicians with a provisional diagnosis based on their clinical presentation or initial laboratory work-up. Of these, 292 patients were already admitted to clinical wards of the hospital and 63 were referred from outpatient clinics. The other 126 bone marrow specimens were obtained from patients in hematology/oncology wards or clinics. The most frequent indications for bone marrow examination were as follows: 89 patients had the procedure for diagnosis or management of acute leukemia, 75 for staging of lymphoma, and 63 for work-up of pancytopenia (Table 1).

Bone marrow examination findings

Of the 481 bone marrows examined, 22 biopsies were "inadequate or not suitable" for assessment due to clotted, crushed, or dilute specimens. 207 (43%) bone marrows were reported as normal, showing active tri-lineage hematopoiesis, and 17 trephines showed mild to moderate hypocellular changes relative to the patient's age with no other significant findings; most of these bone marrows were performed in follow-up cases receiving therapy, to assess remission or response, or for the work-up of pancytopenia. Non-hematological diseases were reported in 9 (1.8%) of the 481 bone marrow trephines. Six bone marrows were infiltrated with metastatic tumors; three of these, in children, had neuroblastoma. Bone marrow examination in three adults revealed metastatic adenocarcinoma from the prostate, breast and bladder, respectively. All of these cases were diagnosed on bone trephines with the help of a panel of immunohistochemical stains. The other three bone marrows showed granulomatous changes, leishmaniasis and Niemann-Pick disease, respectively.

Discussion

Examination of the bone marrow aspirate and trephine is a valuable tool in diagnosing a suspected disease or in assessing a known case of hematological disorder, and also in the evaluation of patients with non-hematological diseases or systemic illness. 1,7,9,10 This study was conducted to determine the major indications and the frequencies of the various disorders diagnosed on bone marrow aspirates and trephines at a tertiary care university hospital. We observed a variable spectrum of diseases on bone marrow examination in our cohort of patients. This study showed that acute leukemia was the most commonly encountered indication and bone marrow examination finding in our setup, an observation similar to others from the Kingdom of Saudi Arabia (KSA). 11,12 The second most frequent reason for bone marrow referral was staging of lymphoma, which has also been reported as a common indication and malignancy in local reports. 11,12 We evaluated 50 specimens for staging of non-Hodgkin's lymphoma (NHL) and 14 showed bone marrow infiltration, mostly with large B cell lymphoma. Twenty-five trephines were examined for Hodgkin's lymphoma involvement and only 4 were found to have stage IV disease.
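As a side note, the descriptive statistics in this audit reduce to frequencies with percentages over a fixed denominator of 481 examinations. The sketch below is a generic illustration of such a tabulation using counts quoted above; it is not the authors' SPSS workflow, and the grouping of the remaining cases under "other indications" is an assumption for demonstration only.

```python
# Illustrative only: frequencies with percentages, using counts quoted in the
# text (89 acute leukemia, 75 lymphoma staging, 63 pancytopenia work-ups)
# out of 481 bone marrow examinations.
total = 481
indications = {
    "acute leukemia (diagnosis/management)": 89,
    "staging of lymphoma": 75,
    "work-up of pancytopenia": 63,
}
indications["other indications"] = total - sum(indications.values())

for name, count in indications.items():
    print(f"{name}: {count} ({100 * count / total:.1f}%)")
```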
Pancytopenia is a diagnostic dilemma for clinicians and was the third most common reason for bone marrow referral here. Among the 63 patients who presented with pancytopenia, 27 showed normocellular bone marrow with active tri-lineage hematopoiesis, suggesting that the cause of pancytopenia was peripheral destruction; aplastic anemia or hypocellular trephines were reported in five cases, suggestive of idiopathic or drug-induced changes. Table 3 lists the causes of pancytopenia in these 63 patients. We found higher numbers of bone marrows involved with malignant hematological disorders than with benign or non-hematological diseases. Various studies from other regions revealed an incidence of benign hematological disorders in bone marrows of 60-80%, 2,9,10 which can be explained by the prevalence of anemia in those regions. Acute leukemia was the most common indication and bone marrow examination finding in our setup, as well as in other centers in the Kingdom. 11,12 However, in this report acute myeloid leukemia (AML) was more frequently encountered than acute lymphoblastic leukemia (ALL), possibly because our study population comprised more adult patients. Next to this, the other frequently diagnosed hematological malignancies were myeloproliferative disorders (MPD) followed by multiple myeloma (MM); variable incidence rates have been reported in regional reports. 11,13 Among the 50 patients with MPD, 27 had chronic myeloid leukemia (CML), 13 had essential thrombocythemia (ET), and 5 each had polycythemia rubra vera (PRV) and primary myelofibrosis (MF). Patients diagnosed with these disorders were mainly adults >45 years old. The incidence of myeloproliferative disorders and myeloma increases with age and is rare in children, 1,3 so the higher incidence of these disorders in this study reflects the fact that 80% of the study population were adult patients. Among non-malignant hematological diseases, idiopathic thrombocytopenic purpura (ITP) was common, diagnosed mainly in children; these results are comparable with other reports from KSA. 11 We reported a very small number of bone marrows involved with non-hematological disorders, which is not comparable with other series. In contrast to others, 2,9,10 we found that the bone marrow was involved less often with benign hematological and non-hematological disorders. These differences in bone marrow examination findings may be due to our sample size, a study population involving more adults (as benign disorders like ITP frequently occur in children), and/or fewer referrals for the work-up of anemia. In addition, most cases were referred by clinicians who, in a tertiary care hospital, can utilize readily available imaging techniques along with tissue biopsy for the diagnosis of metastatic tumors. These reasons can decrease the yield of bone marrow examination for the diagnosis of benign and non-hematological diseases in our center. In 4.5% of bone marrow biopsies the sample was inadequate for reporting, mostly in children. The reported failure rates of bone marrow biopsy range from 2-10% or even higher. 2,11 Bone marrow biopsy is an uncomfortable procedure for the patient, and in tertiary care centers it should be performed by a trained person to minimize specimen inadequacy. Almost half of the bone marrow specimens were reported as normal, but most of these were performed in known patients with hematological diseases to assess the response to treatment, an observation similar to others.
11 An interesting finding here is that bone marrow biopsy performed for the evaluation of anemia is one of the most frequent indications, accounting for 20-50% of biopsies in multiple studies from different parts of the world. 2,4,5,7,9,10 In contrast, we found that only 8% of bone marrows were referred for the work-up of anemia, in concordance with another study in the Kingdom. 11 We found that 3% of bone marrow biopsies were reported with various types of anemia, which is similar to a local report 11 and a few others. 14 Nevertheless, in contrast to our findings, nutritional anemia has been reported as a common bone marrow examination finding by some. 2,6,7,9,10 These differences can be explained by nutritional deficiencies or H. pylori infections being less common in our region. In addition, we conclude that bone marrow examination was not utilized as a diagnostic tool for anemia in our hospital unless other relevant blood investigations failed to suggest the cause. Thus, the indication of bone marrow examination for the work-up of anemia is not common here, as the majority of patients can be easily diagnosed on peripheral blood smears or other readily available blood tests and treated as outpatients.

Conclusion

The main indications for bone marrow examination here were the evaluation of acute leukemia and lymphoma. The yield of bone marrow examination for the diagnosis of anemia or other benign or non-hematological disorders is very low. The common diagnoses encountered in our setting were acute leukemia and myeloproliferative disorders. Bone marrow examination is an important tool for diagnosis and for assessing the response to therapy in malignant hematological disorders.
2019-03-18T14:04:10.273Z
2018-10-17T00:00:00.000
{ "year": 2018, "sha1": "0a6abc871b5abed825675500d692ed78045426cf", "oa_license": "CCBYNC", "oa_url": "https://medcraveonline.com/HTIJ/HTIJ-06-00181.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ddd9c5fad898645e0eee5d5bf474ea77486818f7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
25971944
pes2o/s2orc
v3-fos-license
Clostridium difficile Hfq can replace Escherichia coli Hfq for most of its function A gene for the Hfq protein is present in the majority of sequenced bacterial genomes. Its characteristic hexameric ring-like core structure is formed by the highly conserved N-terminal regions. In contrast, the C-terminal forms an extension, which varies in length, lacks homology, and is predicted to be unstructured. In Gram-negative bacteria, Hfq facilitates the pairing of sRNAs with their mRNA target and thus affects gene expression, either positively or negatively, and modulates sRNA degradation. In Gram-positive bacteria, its role is still poorly characterized. Numerous sRNAs have been detected in many Gram-positive bacteria, but it is not yet known whether these sRNAs act in association with Hfq. Compared with all other Hfqs, the C. difficile Hfq exhibits an unusual C-terminal sequence with 75% asparagine and glutamine residues, while the N-terminal core part is more conserved. To gain insight into the functionality of the C. difficile Hfq (Cd-Hfq) protein in processes regulated by sRNAs, we have tested the ability of Cd-Hfq to fulfill the functions of the E. coli Hfq (Ec-Hfq) by examining various functions associated with Hfq in both positive and negative controls of gene expression. We found that Cd-Hfq substitutes for most but not all of the tested functions of the Ec-Hfq protein. We also investigated the role of the C-terminal part of the Hfq proteins. We found that the C-terminal part of both Ec-Hfq and Cd-Hfq is not essential but contributes to some functions of both the E. coli and C. difficile chaperons. INTRODUCTION Hfq is a small protein (102 amino acid residues in Escherichia coli) encoded by the hfq gene. Hfq belongs to an ancient family of RNA-binding proteins that is implicated in RNAmediated reactions in all three domains of life and plays a pivotal role in the control of gene expression (for a recent review, see Vogel and Luisi 2011). Eukaryotic and archaeal homologs of Hfq are, respectively, Sm-proteins and the closely related Sm-like (LSm) proteins. Sm proteins are arranged in a multimeric ring-like quaternary structure comprising seven different monomers. In contrast, the bacterial Hfq proteins form homohexameric rings. Hfq is a regulatory protein that has recently received much attention because of its crucial role in cellular processes controlled by small noncoding RNAs (sRNAs). Hfq facilitates the pairing of sRNAs with their target mRNAs, thereby affecting their expression either positively or negatively. Moreover, Hfq plays an important role in modulating mRNA degradation and RNA transcription (Folichon et al. 2003;Le Derout et al. 2010). Hfq is present in half of the bacterial sequenced genomes including many pathogens (Chao and Vogel 2010). Hfq was shown to act as a virulence factor in several bacterial pathogens (for review, see Hajnsdorf and Boni 2012), but it is not clear whether it can be considered as a general factor required for the virulence of all Hfq-containing pathogens. The E. coli and Staphylococcus aureus Hfqs have been crystallized. The ring-like structure (the so-called core made up of two Sm motifs), is formed from the highly conserved N-terminal parts of Hfq molecule (aa 1-65 in E. coli). A number of amino acids in the E. coli protein have been identified as important for interaction with RNA. 
Three RNA-binding sites were defined by groups of residues and called the proximal (Q8, D9, F39, Y55, K56, F42), distal (Y25, I30), and the rim (R16, R19, and R17) RNA-binding surfaces (Mikulecky et al. 2004;Zhang et al. 2013). While the core of the various Hfq paralogs is rather conserved in sequence and structure (residues 7-66 in E. coli) (Fig. 1), in contrast, the C-terminal sequences have different lengths and lack homology in different species (Fig. 1). Hfq proteins of γand β-proteobacteria have an extended C terminus (up to 38, mainly hydrophilic, amino acids in E. coli and close relatives), whereas some Gram-positive bacteria, including Bacillus subtilis and Staphylococcus aureus, have Hfq proteins with short (<10 amino acid) C-terminal extensions (Sauter et al. 2003). The role of the C-terminal part has not been elucidated. In E. coli, it was shown that the hfq2::Ω mutation, which leaves 79 N-terminal amino acids intact out of 102 residues, does not affect its function and had no obvious phenotype . The C-terminal domain of E. coli Hfq is dispensable for hexamer formation, a truncated form of E. coli Hfq deprived of its 19 C-terminal residues is fully able to bind to polyadenylated rpsO mRNA (Arluison et al. 2004) and sRNA DsrA (Sonnleitner et al. 2004). But more recent data show that the C-terminal domain is required for binding to long mRNAs like rpoS (Vecerek et al. 2008;Beich-Frandsen et al. 2011) and to activate GlmS expression by GlmY but not GlmZ (Salim et al. 2012). The C-terminal part possesses no identifiable motif, it is flexible and in the crystallographic structure extends laterally away from the hexameric core and exhibits features typical of intrinsically disordered proteins. Some intrinsically disordered domains have been implicated in facilitating intermolecular interactions (Babu et al. 2011). It has been suggested that a disordered C-terminal domain could participate in the interaction of Hfq with RNA molecules (Beich-Frandsen et al. 2011). In Gram-negative bacteria, it is well-established that Hfq assists sRNAs in their regulatory function, while in Gram-positive bacteria, its role is still poorly characterized (Jousselin et al. 2009;Waters and Storz 2009;Chao and Vogel 2010). Although numerous sRNAs have been detected in many Gram-positive bacteria, it is not yet clear whether Hfq is implicated in sRNA network. In Listeria monocytogenes, Hfq has been shown to play a role in stress tolerance and virulence (Christiansen et al. 2004) and its inactivation does not generally affect sRNAs abundance, only three sRNAs coimmunoprecipitated with Hfq (Christiansen et al. 2006;Mandin et al. 2007;Toledo-Arana et al. 2009) with facilitation of LhrA binding to its targets (Nielsen et al. 2010(Nielsen et al. , 2011. In S. aureus, Hfq is neither required for RNAIII function nor for sRNA stabilization (Boisset et al. 2007;Geissmann et al. 2009). The role of Hfq homologs from several bacteria has been previously tested by complementation of hfq mutations in E. coli. These studies revealed that heterologous hfq genes may or may not reverse phenotypes associated with hfq deficiency in E. coli. As examples, Pseudomonas aeruginosa Hfq, which comprises only 82 amino acids, and the Moraxella catarrhalis hfq gene (210 amino acids) both functionally replace the E. coli protein (Sonnleitner et al. 2002;Attia et al. 2008), but the Hfq homologs of Synechocystis sp.PCC 6803 and Anabaena PCC 7120 (Boggild et al. 2009) are not able to replace E. 
coli Hfq while Staphylococcus aureus Hfq fails to substitute for Salmonella typhimurium Hfq (Rochat et al. 2012). However, the experimental conditions greatly varied in these studies regarding the expression system of the Hfq homologs as well as the functions examined. An increasing amount of data shows that the Clostridia also contain sRNAs (Chen et al. 2011;Mraheil et al. 2011). For example, a great number and a large diversity of regulatory RNAs have been recently identified in the pathogenic clostridium Clostridium difficile (Soutourina et al. 2013). Moreover, a gene (CD1974), homologous to hfq, has been shown to be transcribed in this bacterium (I Verstraete, O Soutourina, pers. comm.). The C. difficile Hfq (Cd-Hfq) protein exhibits 46% identity (31 amino acids out of the first 66 amino acids) with Ec-Hfq. Most of the key residues that have been identified as important for interaction with RNA in the E. coli Hfq protein are present in Cd-Hfq: e.g., in the FIGURE 1. Multiple sequence alignment of Hfq from different species. The ortholog cluster multiple alignment of the amino acids sequences of Hfq proteins of several Clostridia and various Gram-positive or Gram-negative model species was performed by using ClustalW on the website http://mbgd.genome.ad.jp. Proximal face residues important for the interaction with RNA are highlighted in gray, those important for distal face interaction and at the rim are double underlined and underlined, respectively, both in the Ec-Hfq (Mikulecky et al. 2004;Zhang et al. 2013) and Cd-Hfq. The numbering is based on Ec-Hfq. The secondary structure is indicated by boxes (α-helix) and arrows (β sheets). The Sm motifs are boxed with dotted lines. The core structured part of Ec-Hfq and Cd-Hfq are boxed with plain lines. proximal binding surface (Q8, D9, F39, Y55, K56, but not F42 which is replaced by Y, also an aromatic residue), in the distal site (Y25, I30) and at the rim (R16, R19, but not R17, replaced by K and which retains the positive charge) (Mikulecky et al. 2004;Zhang et al. 2013). However, the Cd-Hfq C-terminal domain is very different; it is much shorter (16 aa) and includes an unusual stretch of seven asparagine residues, unique to this species (Fig. 1). To investigate the functionality of the Cd-Hfq protein and its potential role in sRNA regulation of gene expression, we tested the ability of Cd-Hfq to complement various phenotypes associated with the E. coli hfq mutation. We have examined four specific functions of Hfq where the involvement of Ec-Hfq has previously been well-documented. We show that Cd-Hfq is as efficient as Ec-Hfq in the negative control of OppA expression and in the positive control of RpoS expression but with differential effects on the stability of the sRNAs involved. In addition, Cd-Hfq is as proficient as Ec-Hfq in controlling Ec-Hfq synthesis. Surprisingly, Cd-Hfq does not participate in the negative control of PtsG carried out by SgrS. In addition we show that deletion of the C-terminal extensions despite having different primary sequences similarly impact the regulatory function of the two proteins. Cd-Hfq is expressed at similar levels as Ec-Hfq in E. coli The Cd-hfq ORF was cloned in place of the Ec-hfq open reading frame of plasmid pTX381 (a low copy-number derivative of pACYC184) . These two plasmids are hereafter designated as pCd-Hfq and pEc-Hfq, respectively. 
We also cloned the two core proteins without the C-terminal part: pEc-Hfq core and pCd-Hfq core, corresponding to the 65 and 68 N-terminal amino acids of the two proteins, respectively ( Fig. 1). We first verified that the four Hfq proteins were expressed at comparable levels from the plasmids. All hfq variants are expressed from both P2 hfq and P3 hfq promoters present on the original pTX381 plasmid (Tsui et al. , 1996(Tsui et al. , 1997. The relative expression levels of the different hfq variants was estimated by performing an abortive primer extension with an oligonucleotide that hybridizes to the 5 ′ UTR of the hfq gene present in all the constructs. This sequence is still present in the chromosomal hfq deletion and also in the hfq-lacZ chromosomal fusion in strain IBhfq95 Δhfq ( Fig. 2A, lane p; Ziolkowska et al. 2006). The reverse transcript measured in the strain carrying the vector plasmid corresponds to these two transcripts ( Fig. 2A). The presence of the plasmids expressing full-size and core Hfqs increases the amounts of the hfq UTR transcript less than twofold, showing that hfq is expressed from the plasmids at levels comparable to the physiological levels. An Hfq mutant exhibits several phenotypes compared to the wt strain. In LB liquid medium the growth of the mutant is diauxic with a decrease of growth rate and an entry into stationary phase at a lower OD than for the wt . We first compared growth in LB medium and found that the hfq mutant transformed with pCd-Hfq exhibited a growth intermediary between the control strain (Δhfq/p) and the complemented strain (Δhfq/pEc-Hfq) (Fig. 2B), giving the first indication that the Cd-hfq gene is functional in E. coli. Removing the C-terminal tails had only a minimal effect on the complementation efficiency, suggesting that the C-terminal tail of Ec as well as Cd is probably dispensable in the tested conditions. Cd-Hfq is functional for Hfq autoregulation in E. coli To further compare the Cd-Hfq and Ec-Hfq proteins, we examined their ability to negatively affect the expression of an hfq::lacZ fusion under the control of the P3 hfq promoter. Hfq autoregulates its own expression fourfold by binding to its own mRNA, although the mechanism has not been completely elucidated (Vecerek et al. 2005;Ziolkowska et al. 2006). Table 1 shows that Ec-Hfq and Cd-Hfq exhibit the same repression factor (4.0) but that the plasmids expressing just the N-terminal cores of Ec-Hfq and Cd-Hfq were inactive in repression. This implies that the C-terminal unstructured region of the two proteins might be important for their interaction with the 5 ′ UTR of the hfq transcript or for the stability of the proteins. Δhfq transformed with the empty vector (p), pEc-Hfq, pEc-Hfq core, pCd-Hfq, or pCd-Hfq core were used as templates for reverse transcriptase primed with hfg primer, which hybridizes upstream of the hfq ORF, and carried out in abortive conditions. Quantification of the radioactivity (arbitrary units) was indicated below the gel. The transcript detected in the plasmid vector containing strain (IBhfq95 Δhfq/p) corresponds to the 5 ′ end of mRNA covering the hfq deletion on the chromosome and the 5 ′ end of the hfq-lacZ fusion present at lacZ (Ziolkowska et al. 2006); both should respond to Hfq autoregulation. Fainter bands detected on the same gel due to unspecific hybridization of the primer do not vary between samples, revealing that equivalent loading has been applied on the gel. 
(B) Wild-type strain carrying the empty vector (p) (▾) and the Δhfq mutant transformed with pEc-Hfq ( • ) and pCd-Hfq (▪) were grown in LB medium with appropriate antibiotics at 37°C. Turbidity of the cultures was monitored at 37°C. Negative regulation of oppA expression by Cd-Hfq A 56-kDa protein identified as OppA, a periplasmic component of the oligopeptide transport system, was detected only in the hfq mutant (Ziolkowska et al. 2006). We took advantage of this easy assay to examine the effect of the various Hfq derivatives on oppA expression. Coomassie blue staining did not reveal any detectable band corresponding to OppA protein in the strains containing hfq genes expressed from the plasmid (data not shown). This result was confirmed by the absence of oppA mRNA transcript in the same strains indicating that all the Hfq proteins tested inhibit the synthesis of this polypeptide (Fig. 3). OppA repression is achieved by Hfq and GcvB, a conserved sRNA, which targets many ABC transporters of small peptides, amino acids, and also proteins involved in amino acid biosynthesis pathways (Sharma et al. 2007(Sharma et al. , 2011. GcvB is an Hfq-dependent sRNA, its transcription is independent on Hfq and it is unstable in the hfq mutant (Urban and Vogel 2007;Pulvermacher et al. 2009). We analyzed its level by Northern blot and found that the expression of Ec-Hfq increases its level threefold compared with the hfq mutant, and only a slightly reduced level was found in the strain with the deletion of the C terminus domain (Fig. 3). Surprisingly, GcvB levels were higher when the Cd-Hfq protein, with or without the C-terminal region, was expressed from the plasmid. Moreover, a 130-nt processed GcvB form, already observed by Urbanowski et al. (2000), and other degradation products were detected only when the Cd-Hfq core was present (Fig. 3). We conclude that Cd-Hfq is as efficient as the E. coli protein for the negative control of OppA expression and that the C-terminal domains of Ec-Hfq and Cd-Hfq are not necessary for this regulation. However, Cd-Hfq has a greater stabilizing effect on GcvB sRNA than Ec-Hfq with a role of the Cd-Hfq C-terminal domain. Cd-Hfq activates rpoS expression An increase in RpoS expression occurs under the conditions of nutrient deprivation, stress or at the entry into the stationary phase. This accumulation is regulated at multiple levels. Three sRNAs DsrA (Majdalani et al. 1998), RprA (Majdalani et al. 2001), and ArcZ (Mandin and Gottesman 2010) activate the translation of RpoS while OxyS represses it (Zhang et al. 1998), and all of these regulatory processes are Hfq-dependent. Therefore, rpoS appears to be an ideal target to examine the regulatory function of the various Hfq constructs. We used a translational rpoS-lacZ chromosomal fusion (Ziolkowska et al. 2006) and determined its expression in exponential growing bacteria in the presence of the various hfq-expressing plasmids (compared with the plasmid without an Hfq insert). Figure 4A shows that both Ec-Hfq and Cd-Hfq activate the expression of the fusion protein to comparable levels. Deletion of the C-terminal coding sequence of Ec-Hfq and Cd-Hfq partly reduces expression of the fusion, confirming that it is required for optimal regulation of rpoS expression (Vecerek et al. 2008;Beich-Frandsen et al. 2011). This latter result indicates that the sequence of the C-terminal extension is not crucial by itself. 
The various sRNAs that control the expression of RpoS are expressed under different stress conditions, notably in entrance to stationary phase or nutrient stress. We first concentrated on examining ArcZ since it is expressed in rich medium in exponential growth corresponding to our growth conditions and contributes to rpoS expression (Mandin and Gottesman 2010). Hfq coimmunoprecipitates with the ArcZ sRNA (Wassarman et al. 2001;Zhang et al. 2003) and ArcZ is required for RpoS activity (Papenfort et al. 2009). ArcZ (previously named as RyhA/SraH) is transcribed as a 121-nt RNA, which is subsequently processed to generate a IBhfq95 Δhfq strain carrying an hfq-lacZ fusion on the chromosome was transformed with the empty vector (p), pEc-hfq, pEc-Hfq core, pCd-hfq, and pCd-hfq core. β-galactosidase was measured in bacteria harvested in exponential phase. Mean values are calculated from three independent experiments. smaller, stable RNA that consists of the last 56 nt of the original transcript (nt 66-121 of the full-length transcript, hereafter designated as ArcZ * ) (Argaman et al. 2001;Papenfort et al. 2009;Mandin and Gottesman 2010), which base pairs within the rpoS mRNA leader (Mandin and Gottesman 2010). ArcZ * sRNA is abundant, while the full-length transcript is hardly detected in E. coli and S. typhimurium and the 5 ′ part is not detectable (Papenfort et al. 2009). Neither the full-length transcript nor the 5 ′ processed region were detected in S. typhimurium hfq mutant (Papenfort et al. 2009). Figure 4B shows that the full-length ArcZ sRNA is hardly detectable in the hfq mutant and in strains complemented by Ec-Hfq but accumulated in the Cd-Hfq complemented strains. Moreover, both the total amount of the two transcripts and their relative abundance varied depending on the origin of the Hfq protein. Since Hfq deficiency does not impair ArcZ transcription (Papenfort et al. 2009), our results suggest that the origin of the Hfq protein differently affected ArcZ stability and/or its processing into ArcZ * . ArcZ * levels are high in strains growing in early exponential phase complemented with either Cd-Hfq or Ec-Hfq. In both cases, higher levels of ArcZ * were detected in the presence of the full-length Hfq compared with the truncated form. Interestingly, the fulllength sRNA was more abundant in the presence of Cd-Hfq and Cd-Hfq core. RNase E has been proposed to process the ArcZ sRNA (Papenfort et al. 2009). One interpretation of our results is that Cd-Hfq might be less efficient than Ec-Hfq to recruit RNase E. According to this hypothesis, thermal inactivation of RNase E increases the level of ArcZ and decreases that of ArcZ * in the presence of Ec-Hfq whereas there is no effect on the levels of both ArcZ and ArcZ * in the presence of Cd-Hfq (Fig. 4C). However, we cannot exclude that Cd-Hfq strongly interacts with ArcZ in the cell, thus decreasing its turnover. We conclude that Cd-Hfq participates in both RpoS synthesis and ArcZ sRNA stabilization, which are required for the activation of RpoS expression. In Ec as in Cd, the Cterm domain has a minor role but could maximize the effects by favoring contact with rpoS mRNA as previously reported (Vecerek et al. 2008;Beich-Frandsen et al. 2011). Inactivation of ArcZ has only a faint impact on the expression of rpoS in the presence of the different Hfq variants (Fig. 4A). 
This is not surprising since rpoS transcription is very low in our experimental conditions and therefore its expression may be controlled by regulators that are functionally redundant. For this reason we then examined DsrA sRNA, which also contributes to rpoS expression in LB medium at 37°C (Mandin and Gottesman 2010) and is unstable in the hfq mutant when expressed from the chromosome (Sledjeski et al. 2001). DsrA is more abundant in Ec-Hfq complemented strains than in the mutant but as ArcZ, it is even more abundant in Cd-Hfq complemented strains (Fig. 4B). Both RNase and RO91-Δhfq arcZ mutant (dark-gray) were transformed with pACYC184 derivative plasmid expressing either wild-type or C-terminally truncated Hfq proteins from E. coli and C. difficile giving pEc-Hfq, pEc-Hfq core, pCd-Hfq, and pCd-Hfq core, respectively. The empty vector was used as a control (p). β-galactosidase activity from the rpoS-lacZ chromosomal fusion was measured in exponentially growing cells at 37°C (OD 650 of 0.4). Each value is the mean of three independent experiments, with standard deviations not exceeding 15% of magnitude. (B) Northern blots showing ArcZ, ArcZ * , and DsrA in RO91-Δhfq transformed with the same plasmids (as in Fig. 3). ( * * ) A 61 -nt long DsrA processed fragment identified in Repoila and Gottesman (2001). (C ) Northern blots showing ArcZ, ArcZ * , and DsrA in IBPC 928 transformed with the same plasmids, grown at 30°C, and shifted for 15 min to 44°C to inactivate thermosensitive RNase E. Relative amounts of ArcZ and DsrA normalized to 5S RNA are indicated. E and RNase III have been shown to cleave DsrA and DsrA-rpoS duplex, respectively (Moll et al. 2003;Resch et al. 2008). The number of processed forms we detected is higher than that previously reported and their relative abundance greatly varies making it difficult to establish a link between these processing and the Hfq variant acting in the bacteria. As for GcvB and ArcZ, thermal inactivation of RNase E has no effect on the levels of these processing products while it stabilizes DsrA in Ec-Hfq but not Cd-Hfq containing cells (Fig. 4C), reinforcing our assumption that RNase E cannot interact with Cd-Hfq. Cd-Hfq does not trigger ptsG control by SgrS The addition of α-methylglucoside (αMG) to wild-type cells rapidly generates phosphosugar stress, resulting in induction of the SgrS sRNA (Vanderpool and Gottesman 2004), which in turn leads to destabilization of ptsG mRNA. PtsG mRNA degradation is dependent upon RNase E and Hfq in an RNase E-Hfq-SgrS ribonucleoprotein complex (Morita et al. 2003(Morita et al. , 2004(Morita et al. , 2006. As a consequence, an hfq mutant is unable to initiate the phosphostress response and does not grow on plates containing αMG (Fig. 5A). (Vanderpool 2007). The pCd-Hfq protein as well as the Ec-Hfq and Ec-Hfq core proteins were able to restore growth on αMG containing plates but not the Cd-Hfq core carrying strain (Fig. 5A). This indicates that Cd-Hfq deprived of its C-terminal domain cannot fully replace Ec-Hfq, and cannot properly regulate ptsG expression. However, strains containing Ec-Hfq and Ec-Hfq core grew more rapidly in liquid LB medium containing αMG (doubling time of 33 and 35 min, respectively), than strains with Cd-Hfq and Cd-Hfq core (generation time of 68 and 77 min, respectively) (data not shown). The addition of αMG for 15 min in liquid LB medium caused an increased synthesis of SgrS sRNA in the Ec-Hfq complemented strains compared with the control and pCd-Hfq-containing strains (Fig. 5B). 
Interestingly, ptsG mRNA degradation, which takes place in the strain complemented with Ec-Hfq as previously observed (Morita et al. 2005) and with Ec-Hfq core, is strongly reduced or absent in strains containing the Cd-Hfq proteins (Fig. 5B). In addition, we detected in all cases an inverse correlation between ptsG mRNA and SgrS levels. It has previously been postulated that the folded secondary structure of SgrS sRNA has to be modified by Hfq before it can anneal with the ptsG mRNA (Maki et al. 2010). The lower SgrS levels detected in strains containing Cd-Hfq constructs might mean that the C. difficile proteins inefficiently interact with SgrS sRNA in vivo and as a consequence do not provide the formation of the SgrS-Hfq-RNase E ribonucleic complex required for ptsG degradation in the presence of αMG. Our assumption on the inability of Cd-Hfq in binding to RNase E would strengthen this effect. In addition, our results indicate that for the E. coli Hfq, the C-terminal domain is not required for interaction with SgrS and/or ptsG mRNA. E. coli Hfq core protein is equally efficient as the full-length Hfq at controlling the expression of ptsG by SgrS and at promoting the degradation of the ptsG messenger carried out by RNase E. Comparison of Ec-Hfq and Cd-Hfq RNA-binding capacity The RNA-binding capacity of purified His-tagged Ec-Hfq and Cd-Hfq was evaluated in parallel by mobility-shift assay using SgrS, ArcZ, ArcZ * , and GcvB sRNAs and also the 5 ′ -hfq UTR. Determination of the binding constants for half saturation (K 1/2 ) shows that Cd-Hfq binds all RNAs but with a 13to 240-fold higher K 1/2 than Ec-Hfq (Table 2). This may be related to a difference in protein stability and/or hexamer formation of the two purified proteins. The two proteins were purified in parallel with similar yields but unlike the Ec-Hfq protein which, immediately after purification, was 50% recalcitrant to denaturation on an SDS gel, there was little hexamer (15%) detected with Cd-Hfq, (Fig. 6). There seemed to be no direct correlation between sRNA abundance in vivo and in vitro binding affinities for Cd-Hfq. The ortholog cluster multiple alignment of the Hfq proteins of several Clostridia and various Gram-positive or Gram-negative model species (http://mbgd.genome.ad.jp) shows that there is an extra amino acid at the Sm1-Sm2 junction only in Cd-Hfq (NTV in Ec-Hfq and DNRQ in Cd-Hfq) (Fig. 1). The model of Cd-Hfq protein based on the known structure of Ec-Hfq (Protein Data Bank entry 1HK9) (Swiss-Model at http://expasy.org) shows a different orientation of the loop that could propagate along the β3 and β4 sheets and thus alter the interaction of the monomers in the hexameric form of Hfq (data not shown) and affect the stability of the Cd-Hfq hexamer. DISCUSSION Expression of Hfq is tightly regulated at both transcriptional and post-transcriptional levels Vecerek et al. 2005;Ziolkowska et al. 2006), and its availability is a limiting factor in sRNA-mediated silencing and activation of gene expression (Wagner 2013). To avoid any artifact due to overproduction, sudden induction of Hfq synthesis or formation of heterohexamers, we chose to express the Ec-and Cd-hfq genes in the hfq mutant at physiological levels under the control of the native E.coli hfq promoters and translation initiation sites. This is in contrast to previous studies where Hfq was synthesized from inducible promoters (Vecerek et al. 2008;Boggild et al. 2009;Salim et al. 2012). We demonstrate that the Hfq protein of C. difficile can substitute for E. 
coli Hfq in the positive control of RpoS, the negative control of OppA expression, and in hfq autoregulation showing that it does act as an RNA chaperone, at least in E. coli for these functions. However, it is not functionally identical to Ec-Hfq, since Cd-Hfq is unable to carry out the negative control of ptsG expression. A possible explanation for this difference is discussed below. Mutagenesis analysis has identified different conserved amino acids of Hfq implicated in regulation of mRNAs by sRNAs. These amino acids have defined three regions in the Hfq structure, two important for the interaction with sRNAs, the proximal face and lateral surface, while the proximal face is thought to contact the mRNA (Mikulecky et al. 2004;Ziolkowska et al. 2006;Sauer et al. 2012;Zhang et al. 2013). As described above, the Hfq protein of C. difficile retains most but not all the critical residues found to be essential for the function of the E. coli protein. Furthermore, the C-terminal domains are very different. By comparing both the full size and C-terminal deleted versions of Ec-Hfq and Cd-Hfq, we have also investigated whether the C-terminal region has a role in Hfq's RNA chaperon function. We examined the role of Hfq in various sRNA-dependent gene regulations in E. coli, which implicate different binding surfaces of the protein. ArcZ regulation of rpoS expression requires Q8, F39, F42, K56, H57 on the proximal face, Y25 and I30 on the distal face, and R16 at the rim (Zhang et al. 2013). Among these essential residues only F42 is missing in Cd-Hfq but replaced by Y, another aromatic amino acid. We found that RpoS expression is fully activated in Cd-Hfq containing strains, indicating that Cd-Hfq actively cooperates with the sRNAs in charge of regulating RpoS expression. Our previous results indicated that different sites of Hfq were involved in the control of RpoS and OppA, since mutation of Ec-Hfq Valine 43 to Arginine abolished regulation of RpoS expression but did not impair OppA control (Ziolkowska et al. 2006). In Cd-Hfq V43 is replaced by an Isoleucine and in the case of Ec-Hfq a Cystein at this position is fully compatible with RpoS repression (Ziolkowska et al. 2006). While Cd-Hfq is as efficient as Ec-Hfq in oppA repression and rpoS activation of gene expression, it has a greater impact on GcvB, DsrA, and ArcZ sRNAs stability. These three sRNAs are more abundant in Cd-Hfq than in Ec-Hfq containing cells. GcvB coimmunoprecipitates with Hfq and is unstable in its absence, but less than other sRNAs (Zhang et al. 2003(Zhang et al. , 2013Pulvermacher et al. 2009;Busi et al. 2010). We show here that GcvB sRNA accumulates to higher levels in the Cd-Hfq containing cells than in the presence of Ec-Hfq. This indicates that Cd-Hfq has a higher protective effect against GcvB degradation than Ec-Hfq. ArcZ is one out of four sRNAs that contributes to rpoS translation (Mandin and Gottesman 2010). This sRNA is processed, probably 5 ′ end-labeled RNA indicated at the top of each column was mixed with increasing concentrations of Hfq ranging from 5 pM to 100 nM. Complexes were separated on native polyacrylamide gels (data not shown). The data were plotted using KALEIDAGRAPH 3.0.4 (Abelbeck Software) and the generated curves were fitted to Hill plot, the K 1/2 values representing the protein concentration at half-maximal RNA binding. The dissociation constants for Hfq binding to the 4 sRNAs were determined as the K 1/2 s (half saturation values), which were derived from the best fit of the data. 
by RNase E (Papenfort et al. 2009), to ArcZ * , which is the active form of the sRNA. Surprisingly, full-length ArcZ which is normally undetected in hfq+ cells, accumulated to high levels in the presence of Cd-Hfq and the processed form ArcZ * , and was also more abundant especially with the full-size Cd-Hfq. Thus, GcvB, ArcZ, and DsrA are inefficiently processed in the presence of Cd-Hfq. GcvB, DsrA, and ArcZ * are probably degraded by RNase E when associated with Ec-Hfq as a consequence of their mRNA base-pairing activity, as previously described (Massé et al. 2003;Morita et al. 2005). As RNase E does not exist in C. difficile, it is conceivable that Cd-Hfq does not allow recruitment of RNase E in E. coli, thus explaining the enhanced stability of certain mRNAs and small RNAs (Monot et al. 2011). In agreement, inactivation of thermosensitive RNase E that stabilizes ArcZ and DsrA in the presence of Ec-Hfq has no impact on their abundance when Cd-Hfq is substituted for Ec-Hfq (Fig. 4C), suggesting that Cd-Hfq may not be proficient in RNase E binding. In the case of oppA where the mRNA is as unstable with Cd-Hfq as with Ec-Hfq, degradation of the mRNA would be independent of the formation of the Hfq-sRNA-RNase ribonucleoprotein complex and most probably result from translational inhibition and accessibility of the messenger to the endoribonucleases. Alternatively, both GcvB and ArcZ may interact more tightly with Cd-Hfq than with Ec-Hfq, conferring a higher protection against ribonucleases. A role for the C-terminal domain of Hfq We show that both Ec-and Cd-Hfq proteins are able to negatively control oppA expression in the absence of the nonstructured C-terminal part of the proteins. This indicates that the two core proteins are stable and functional enough in vivo to regulate the expression of certain targets. Similarly Ec-Hfq and Ec-Hfq core are equally efficient in the control of ptsG expression. On the other hand, for other targets the Cterminal region seems to play a role with different graduations: the C-terminal domain of both Ec and Cd Hfq increases the activation of RpoS expression and is required for the autoregulation of Hfq. Those differences could be due to different affinities with the mRNAs. Despite the fact that the Cterminal regions are of different lengths and sequences, they confer similar functions that are not exhibited by the core regions alone. It could be the intrinsically disordered character of this part of the proteins that is required for the interaction with the rpoS messenger for both Hfqs. The special case for SgrS Two mechanisms have been proposed to account for sRNAinduced mRNA decay. In the first pathway, interruption of translation frees the messenger of ribosomes, allowing RNase E to access and cleave the mRNA (Morita et al. 2006). In the second pathway, proposed for degradation of ptsG and sodB mRNAs, formation of a sRNA-Hfq-RNase E ribonucleoproteic complex stimulates RNase E cleavage of the target mRNA (Morita et al. 2005) even at distance from the interaction site (Prevost et al. 2011). SgrS interaction with ptsG mRNA inhibits its translation promoting the rapid degradation of the SgrS-ptsG complex (Ikeda et al. 2011). The rapid turnover of ptsG mRNA requires the C-terminal scaffold region of RNase E, which interacts with Hfq as well as with Hfq bound to small RNAs (Morita et al. 2005). In addition, RNase E and Hfq bind similar sites on the RNA (Folichon et al. 
2003) and base pairing of the sRNA and mRNA may allow Hfq displacement from the sRNA and access of RNase E to the duplex (Massé et al. 2003). We show here that Ec-Hfq carried out repression and degradation of ptsG transcript with no role of the C-terminal domain, indicating that the latter is not required for interacting with RNase E. Inhibition of ptsG translation and its degradation are not observed in Cd-Hfq-containing cells because in this case SgrS sRNA is not stabilized by Cd-Hfq. This situation is unique compared with the two other sRNAs we have examined here, ArcZ and GcvB, and also GlmZ (data not shown), because all of them are more stabilized by Cd-Hfq than by Ec-Hfq. SgrS sRNA was the first dual-function Hfq-dependent, sRNA regulator identified. It encodes a peptide, SgrT (Wadler and Vanderpool 2007), which is capable of inhibiting αMG uptake through a mechanism that is independent of the base-pairing function. If translation of SgrT is efficient in the presence of Cd-Hfq even when ptsG mRNA is not degraded, this may explain why Cd-Hfq-containing cells can survive in the presence of α-MG (Fig. 5A). Similarities and differences of Ec-Hfq and Cd-Hfq As discussed above, Ec-Hfq residues are important for ArcZ function, Q8, F39 (proximal face), Y25, and I30 (distal face) ( Fig. 1; Zhang et al. 2013) are also present in Cd-Hfq, except that I30 is substituted by a Valine, which does not impact on ArcZ function in E. coli. The proximal face of Hfq (Q8, D9, F39, F42, Y55, K56, H57) is important for SgrS function. It binds the terminator poly(U) tail. This interaction is crucial for PtsG silencing since a SgrS variant with a short poly(U) tail was inactive (Otaka et al. 2011). Q41 and F42 participate in the direct recognition of uracil (Sauer and Weichenrieder 2011). They are replaced by S41 and Y42 in Cd-Hfq. However, replacement of Q41 by S41 in Ec-Hfq neither stabilizes ptsG transcript nor destabilizes SgrS (Fig. 5). So, loss of one of the two U-contacting amino acids is not sufficient to prevent SgrS stabilization. It has been recently demonstrated that conserved arginines on the rim of E. coli Hfq protein (RRER motif) are required for its chaperon activity. An interesting correlation has been emphasized between the number of positively charged amino acids (arginines or lysines) at the rim and the function of Hfq from Gram-positive bacteria in sRNA regulation (Panja et al. 2013). Indeed, among the Hfq proteins studied in Gram-positive bacteria, L. monocytogenes Hfq carries the RKEK rim motif and stabilizes at least one sRNA-mRNA complex (Nielsen et al. 2010(Nielsen et al. , 2011, while B. subtilis Hfq having a RKEN motif associated with numerous RNAs was not required for sRNA regulation (Heidrich et al. 2007;Gaballa et al. 2008;Dambach et al. 2013). The function of S. aureus Hfq carrying the KANQ rim motif remains unclear (Bohn et al. 2007). Interestingly, the Cd-Hfq carries the most conserved RKER motif with just one amino acid substitution as compared with Ec-Hfq (Fig. 1), and thus maintains a similar surface charge with two arginines on the rim as the E. coli protein. It is interesting to correlate this conserved positive charge with the similar RNA chaperone functions of this Hfq homolog that we have demonstrated in this work. On the other hand, there are some differences between the two Hfq proteins that may account for the reduced stability of the hexameric Cd-Hfq chaperone detected in our preparations. 
First, the presence of an extra amino acid at the junction of the Sm1 and Sm2 motifs in Cd-Hfq as compared with Ec-Hfq may modify the interactions of monomers in the hexameric form (Fig. 1). Second, the sequence of the C-terminal domain may also have an impact on the hexamer organization in agreement with our previous data, suggesting that in Ec-Hfq residues 84-102 protect the interface between monomers and possibly contribute to the thermodynamic stabilization of the hexameric Hfq structure (Arluison et al. 2004). Comparative studies of in vitro properties of these two proteins will be useful to compare the structure and the function of these RNA chaperons. Perspectives While the function of Hfq in Gram-negative bacteria is now well-documented, its role in Gram-positive bacteria is still under debate. Hfq was shown to act as a virulence factor in several bacterial pathogens but it is not at all clear how it acts or whether it cooperates with sRNAs to control physiological functions. In Gram-positive bacteria containing Hfq such as B. subtilis, S. aureus, and L. monocytogenes, the vast majority of the sRNA-mRNA duplexes characterized in vivo do not require Hfq (for review, see Brennan and Link 2007;Toledo-Arana et al. 2007;Repoila and Darfeuille 2009;Hajnsdorf and Boni 2012)). Here we show that Cd-Hfq is functional in sRNA-mediated regulation in E. coli. However, its role in C. difficile remains to be determined. Numerous potential trans-riboregulators have been identified in C. difficile by global deep sequencing (Soutourina et al. 2013). Indeed, among the 250 potential regulatory RNAs detected experimentally (with expression of 35 sRNAs confirmed by gene-specific experimental approaches), about 100 sRNAs are located in intergenic regions and might represent trans-RNA regulators. These identified RNAs could require Hfq for their regulatory action as observed in many other bacterial systems (Vogel and Luisi 2011). The growth phase-dependent expression of some of the identified sRNAs and the conservation of the majority of them within C. difficile strains strongly suggest that they have a functional role and could impact the physiopathology of this pathogen. Together with the present work showing that Cd-Hfq protein can function as an RNA chaperon, these data emphasize the potential importance of RNA-based regulatory mechanisms in C. difficile. Further studies will be required to assess the function of these new sRNA regulators as well as to identify their targets, mechanisms of action, and to determine the role of Hfq protein in related regulatory processes. Bacterial strains and plasmids The strains and plasmids used in this study are listed in Supplemental Table S1 and the primers in Supplemental Table S2. New strains were constructed by P1 transduction. The plasmids pCd-Hfq, pCd-Hfq core, pEc-Hfq core, and pEc-HfqQ41S used to complement the deletion of the hfq gene in E. coli were constructed by a two-step PCR procedure from the original pTX381 plasmid containing the miaA ′ -hfq-hflX ′ region ). The first step was made of individual PCRs. The second step combined the former individual PCRs. The product of this PCR is then cleaved by restriction enzymes and recloned in the same sites of pACYC184. -pCd-Hfq The first step is made of three individual PCRs. The first PCR amplifies the 5 ′ UTR of the E. coli hfq gene (primers JC411 out and JC400 with pTX381 as the template). 
The second PCR allows the amplification of the Cd-Hfq ORF from the ATG to the stop codon with primers JC401 and JC414 and C. difficile chromosomal DNA as the matrix. The third PCR amplifies a sequence corresponding to 30 nt of the 3 ′ part of Cd Hfq ORF together with sequence of the plasmid pCA24N (Kitagawa et al. 2005) with primers JC413 and JC412out and pCA24N as the template. The second step is a PCR combining the three former PCRs with primers JC411out and JC412out. -pCd-Hfq core construction The first step is made of two individual PCRs. The first PCR is performed using primers JC411out and JC416 with pCd-Hfq as the template, which allows the amplification of the Cd hfq gene to the position corresponding to G68 codon with two stop codons. The second PCR is performed with primers JC415 and JC412out and pCd-Hfq DNA as the matrix. The second step combined these two PCR products with primers JC411out and JC412out. -pEc-Hfq core The first step is made of two individual PCRs. The first PCR is performed with primers JC411out and JC418 with pTX381 as the template. The second PCR is performed with primers JC417 and JC412out and pCA24N DNA as the matrix. We then used these two PCR products with primers JC411out and JC412out. For these three constructions the PCR products were then cleaved by BamHI and cloned into the corresponding site of pACYC184 plasmid. -pEc-HfqQ41S The first two PCRs allow the amplification of Ec-Hfq in two parts with the pairs of oligos JC425-JC427 and JC426-JC428 and pTX381 as template, with JC425 and 426 harboring the Q41S mutation. The two PCR products were used with primers JC427 and JC428 in a second set of amplification. The resulting product was cleaved with HindIII and XmaI and cloned into the corresponding sites of pTX381. -pCd-HfqHis6 The C. difficile hfq gene was amplified by PCR from C. difficile strain 630 DNA using oligonucleotides JC407 and JC408. The 280-bp amplified product was cloned into the PCRII blunt vector (Invitrogen), giving pTOPO-Cd-HfqHis6. The 263-bp NcoI-XhoI fragment of pTOPO-Cd-HfqHis6 was then cloned between corresponding sites of the pET28a expression vector downstream from the T7 promoter. The resulting plasmid pET28a-Cd-HfqHis6 allows expression in E. coli BL21 λDE3▵hfq of the C. difficile hfq protein bearing a C-terminal His-tag (Cd-HfqHis6). β-Galactosidase assay ▵hfq derivative of RO91 containing the rpoS::lacZ chromosomal fusion (Lange and Hengge-Aronis 1994) and ▵hfq derivative of IBhfq95 containing the hfq::lacZ chromosomal fusion (Supplemental Table S1) were transformed with the pACYC184 series of hfq-expressing plasmids described above. Cells were grown at 37°C in Luria-Bertani (LB) medium with chloramphenicol. Cells were harvested in exponential phase, disrupted by sonication, and β-galactosidase activities were measured in clarified cell extracts. Specific β-galactosidase activity was expressed as nmol ONPG (o-nitrophenyl-β-D-galactopyranoside) hydrolyzed/min/ mg of total soluble cell proteins (Ziolkowska et al. 2006). Protein concentrations in lysates were measured by Bradford assay (Bio-Rad). Means values are calculated from three independent experiments. Protein purification and band-shift assay Ec-HfqHis6 and Cd-HfqHis6 were overexpressed and purified in parallel as previously described (Ziolkowska et al. 2006). Templates for the synthesis of the different sRNAs were obtained by PCR amplification using the primers described in Supplemental Table S2. Band-shift assays were performed as in Folichon et al. 
(2005), with the Hfq concentration expressed on the basis of the hexamer form. The radioactivity signal above the level of the free RNA was counted as retarded RNA on the gel. Data were analyzed and fitted to the Hill equation using Kaleidagraph, and the K1/2 (half-saturation) values were derived from the fit of the data.

RNA preparation, analysis, and labeling

Total RNA was prepared from bacteria grown to an A650 = 0.35-0.4 in LB medium using the hot-phenol procedure described in Braun et al. (1996). Ten micrograms of total RNA were separated either on 1% agarose formaldehyde gels or on 6% polyacrylamide gels, for mRNA and sRNA analysis, respectively, and analyzed by Northern blotting (Hajnsdorf et al. 1994; Hajnsdorf and Régnier 1999). Templates for the synthesis of ArcZ, GcvB, SgrS, and oppA and ptsG RNA probes were obtained by PCR amplification using an oligonucleotide containing the T7 promoter and an RNA-specific oligonucleotide (indicated with the prefix m) described in Supplemental Table S2. RNAs were synthesized by T7 RNA polymerase with [α-32P]UTP, yielding uniformly labeled RNAs (Hajnsdorf and Régnier 2000). Membranes were also probed for 5S rRNA with a 5′-labeled oligonucleotide (Supplemental Table S2). RNA levels were quantified by a PhosphorImager. hfq RNA levels were determined by abortive reverse transcription using a 5′-labeled hfq primer (Supplemental Table S2) as described in Hajnsdorf et al. (1995), except that reverse transcription was carried out with 0.25 mM dATP, 0.25 mM dGTP, 0.25 mM dTTP, and 0.25 mM ddCTP. RNA levels were quantified using a PhosphorImager.

SUPPLEMENTAL MATERIAL

Supplemental material is available for this article.
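As a rough illustration of the curve-fitting step, the band-shift titrations could also be fitted to the Hill equation with SciPy rather than Kaleidagraph. The sketch below uses placeholder data points and assumed initial guesses; it is not the analysis performed in the study.

```python
# Sketch: fitting band-shift data to the Hill equation to estimate K1/2.
# The data points and starting values are placeholders, not values from the paper.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bmax, k_half, n):
    """Fraction of RNA retarded as a function of Hfq hexamer concentration."""
    return bmax * conc**n / (k_half**n + conc**n)

conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)  # nM hexamer (placeholder)
frac = np.array([0.02, 0.08, 0.25, 0.55, 0.80, 0.92, 0.97])   # fraction shifted (placeholder)

popt, pcov = curve_fit(hill, conc, frac, p0=[1.0, 50.0, 1.0])
bmax, k_half, n = popt
perr = np.sqrt(np.diag(pcov))
print(f"K1/2 = {k_half:.1f} nM (+/- {perr[1]:.1f}), Hill coefficient = {n:.2f}")
```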
2017-10-11T08:27:38.660Z
2014-10-01T00:00:00.000
{ "year": 2014, "sha1": "f2fb9dac394584a92489bfc0f974ccf433add2a7", "oa_license": "CCBYNC", "oa_url": "http://rnajournal.cshlp.org/content/20/10/1567.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "e7e612de2c1a170ece1a30370b806a2fd202f4d2", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
58918048
pes2o/s2orc
v3-fos-license
What are the main local drivers determining richness and fishery yields in tropical coastal fish assemblages?

Seasonal ecological effects caused by temperature and photoperiod are typically considered minimal in the tropics. Nevertheless, annual climate cycles may still influence the distribution and abundance of tropical species. Here, we investigate whether seasonal patterns of precipitation and wind speed influence the structure of coastal fish assemblages and fishing yields in northeast Brazil. Research trips were conducted during the rainy and dry seasons using commercial boats and gear to sample the fish community. Diversity was analyzed using abundance Whittaker curves, diversity profiles and the Shannon index. Principal Component Analysis (PCA) was used to analyze associations between the abundance of species and various environmental variables related to seasonality. A total of 2,373 fish were collected, representing 73 species from 34 families – 20 of which were classified as both frequent and abundant. Species richness was greater and more equitable during the rainy season than the dry season, driven by changes in precipitation rather than by wind speed. Species diversity profiles were slightly greater during the rainy season than the dry season, but this difference was not statistically significant. Using PCA, three groups of species were identified: the first associated with wind speed, the second with precipitation, and the third with a wide range of sampling environments. This latter group was the largest and most ecologically heterogeneous. We conclude that tropical coastal fish assemblages are largely influenced by local variables and are seasonally mediated by annual changes related to precipitation intensity and wind speed, which in turn influence fishery yields.

INTRODUCTION

Water quality and nutrient availability in coastal waters are influenced by environmental factors such as the spatial and temporal patterns of precipitation, land drainage, and wind (Kennedy et al. 2002). Seasonal changes in biophysical factors can modify the composition of communities, affecting the occurrence and distribution of species within and between trophic levels (Blaber et al. 1995, Brown et al. 1997, Walther et al. 2002). In coastal ecosystems, these effects are usually mediated through temporal changes in the temperature profile, the extent and magnitude of precipitation and continental runoff, wind patterns and storm frequency (Bernal-Ramírez et al. 2003, Jury 2011, Kennedy et al. 2002). Studies on the effects of changes in climate and biological productivity in the eastern Atlantic have documented a range of seasonal effects on trophic levels, coral reefs, juvenile dispersal, fishery yields and fish species abundance (e.g. Muehe and Garcez 2005, Ciotti et al. 2010, Costa et al. 2010, Leão et al. 2010, Schroeder and Castello 2010). Even so, these findings cannot be generalized, as climate and local landscape characteristics may drive communities toward different profiles.
In the tropics, where annual changes in temperature and photoperiod are minimal, seasonality is heavily determined by precipitation (Lowe-McConnell 1987), which generates dry and rainy seasons (Figueroa and Nobre 1990). Such seasonal variation, though less apparent than that observed in temperate areas, has the capacity to influence fish behavior (Wootton 1990) and, consequently, may influence the distribution and abundance of species within a given area (Laevastu and Hayes 1981), with a knock-on effect on fishery yields (Hilborn and Walters 1992). The consequences of this variability include the estuarization process, in which coastal waters resemble estuarine conditions (Longhurst and Pauly 1987b), even affecting the functional diversity of fish assemblages (Passos et al. 2016). Even in the face of these large-scale patterns, local effects become detectable once higher-scale effects are known and taken into account.

In this study, the objectives were to determine how seasonal patterns of precipitation and wind influence the temporal structure of tropical fish assemblages and the yields of the coastal gillnet fishery.

MATERIAL AND METHODS

Samples were collected in the central coastal zone of the state of Alagoas, north-eastern Brazil, near the main regional fisheries harbor (Fig. 1). The seasonal patterns are typically tropical and consist of a rainy season from March to August and a dry season from September to February (Macêdo et al. 2004). Precipitation is high on the coastal plain, with an annual average of 1,800 mm. Coastal currents are conditioned by winds and tides and, in the rainy season, trade winds. During the rainy season, the southeast quadrant (SE) experiences more frequent and intense winds, while during the dry season such winds flow from the northeast quadrant (NE) (Araújo et al. 2006).

Trips were made on crescent-moon days in an 8 m wooden boat with an ice capacity of 500 kg and a one-cylinder, diesel-powered B18 engine. Six sampling trips were conducted between October 2010 and August 2011, with three occurring during the rainy season (June/July/August) and three during the dry season (October/December/February). Each trip lasted three days, totaling 30 launches of effective fishing. Sets were distributed between two fishing areas with mud and gravel substrates, known locally as "Lama Grande" and "Tira da Pedra" (Fig. 1). The first area was approximately 11 km from Jaraguá Harbor, at a depth of approximately 12 m; the second was located approximately 14 km from the port of Jaraguá, at a depth of approximately 20 m. The distance between the centroids of the sites was approximately 6.5 km. A commercial gillnet 1,330 m long and 1.5 m high was used, with a mesh size of 40 mm between opposite knots and a nylon thread size of 50 mm. Gillnets were set near the surface parallel to the seabed and were fixed at both ends with anchors. The mean ± standard deviation of the duration of each set was 3.51 ± 1.01 hours. For each launch, we recorded the set position and duration, wind direction and intensity, the type and number of each species caught, and the total length (cm) of each fish caught. All fishes were physically anesthetized by hypothermia on board and killed by freezing on ice.
Additional data on precipitation, wind speed and direction were obtained from INMET/SEMARH (Brazilian government). To facilitate the communication of our results to environmental managers and members of local communities, a scale of wind intensity was created based on the reports of the fishermen. From these data, the following were calculated: the number of species and specimens per set, the mean length per species per set, the catch per set (kg), the CPUE (catch per unit effort) with standardized effort (1,330 m gillnet * set hours), the mean wind speed in m/s (mean of 6 hours per day: 3 hours before + 2 hours during the set + 1 hour after) and the total monthly precipitation (mm). All data were compared between the dry and rainy seasons.

Data analysis

Samples of all species were taken to the laboratory (SISBIO 1837810) after each fishing trip and identified using a variety of keys (Lessa and Nóbrega 2000, Menezes and Figueiredo 1980). One or two type specimens of each species were placed in a standard collection as reference material (see Suppl. material 1). Variables were analyzed by univariate and factorial analyses of variance (ANOVA). The frequency of wind direction during the surveys was analyzed using the chi-square test (Legendre and Legendre 1998). Univariate and factorial analyses were performed in Statistica v. 8. Because the variables used in the exploratory analysis were all quantitative and linear, a Principal Component Analysis (PCA) was performed (Legendre and Legendre 1998). To facilitate the PCA and to focus on species of high commercial value, 16 of the most abundant and frequent species were selected, and precipitation and wind speed data were included. The multivariate model was tested by analysis of similarity (ANOSIM), and the groups with the greatest influence were identified with the SIMPER (similarity percentage) test using the Bray-Curtis distance index (Clarke 1993). Multivariate analysis was performed using Statistica v. 8, and tests were done in PAST.

Diversity

A total of 2,373 fishes were collected, representing 73 species and 34 families (Suppl. material 1). Of these, 39 species were categorized as abundant but uncommon, 51 as occasional, and 51 and 20 as common and as both abundant and common, respectively (the same species can belong to more than one category depending on the season) (Suppl. material 1). The frequent and abundant species, from most to least abundant, belonged to families including the Carangidae. Among these 20 species, seven were prevalent during the rainy season and three in the dry season (Suppl. material 1). Rarefaction curves estimated for the samples caught during the dry and rainy seasons using the bootstrap richness estimator yielded different patterns of relative diversity for the two seasons at all sample sizes (Fig. 2). Although the dry-season richness comes closer at large sample sizes, the asymptotic level was reached for both seasons and the difference remained. The size and slope of the curves in the Whittaker abundance diagram indicated that the rainy season (Fig. 3) had greater species richness and equitability than the dry season (Fig. 4). In the rainy season, the predominant species, in order of decreasing abundance, were C. crysos, E. alletteratus, S. brasiliensis and L. breviceps. Apart from these species, abundance decreased gradually with increasing richness. In the dry season, the predominant species were L. breviceps, C. nobilis, C. chrysurus and C. crysos.
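The standardized CPUE and the Shannon index used above are simple formulas; the sketch below shows one way they could be computed. The catch values and species counts are placeholders, not data from this study.

```python
# Sketch: CPUE standardized to a 1,330 m gillnet * set hours, and Shannon diversity.
# All numeric inputs below are placeholders for illustration only.
import math

def cpue(catch_kg, net_length_m, soak_hours, std_length_m=1330.0):
    """Catch per unit effort, with effort expressed relative to the standard net length."""
    effort = (net_length_m / std_length_m) * soak_hours
    return catch_kg / effort

def shannon(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i) over species abundances."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

print(round(cpue(catch_kg=35.0, net_length_m=1330.0, soak_hours=3.5), 2))
print(round(shannon([120, 80, 40, 25, 10, 5]), 3))
```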
The diversity profiles indicated that diversity was marginally higher in the rainy season than in the dry season. The relevant alpha values are close to one, indicating greater abundance. Diversity varied more in richness than in abundance between seasons (Fig. 5). However, the Shannon diversity index did not differ between seasons (t-test, p > 0.05).

Interaction of physical and biological variables

The univariate ANOVA showed significant differences between seasons in the number of species, number of fish, CPUE and mean species length (Tab. 1). The highest average values were observed in the rainy season (Fig. 6). To identify the variables responsible for these seasonal differences, the influences of precipitation and wind speed were tested using a factorial ANOVA. Neither wind speed nor the interaction between precipitation and wind speed had significant seasonal effects (Tab. 2). However, there were significant effects of precipitation on the number of species and on the number and average size of fish (but not the CPUE) (Tab. 2). The changes were greatest with respect to the number of fish and the number of species, which exhibited linear increasing trends with increasing precipitation. Changes in average length and CPUE, while significant, did not follow the same trend (Fig. 8 and 9). Although the CPUE did not vary seasonally, possibly due to the selectivity of the net, the average catch was higher in the rainy season (p < 0.05; Fig. 7), indicating a possible influence of precipitation on the catch. Although the variance analysis revealed significant differences in average size between seasons, the size range (20 to 44 cm) remained consistent across seasons. The chi-square (χ²) test indicated that winds from the northeast quadrant (NE) predominated during the dry season, and winds from the southeast quadrant (SE) predominated during the rainy season. No quadrant predominated with respect to source winds in December (i.e. during the rainy season) (Fig. 10).

The PCA clustered species into three groups (Fig. 11): Group one was associated with wind speed and consisted of C. chrysurus, L. breviceps, C. parallelus and H. corvinaeformis, with the former two species being dominant in the dry season. The second group was formed by C. crysos, S. brasiliensis, E. alletteratus, O. oglinum and L. synagris (Fig. 11), the first three of which were dominant in the rainy season. Species of Group three were associated with areas of high precipitation and wind speed or with high-turbidity waters: C. nobilis, M. littoralis, C. spixii, C. hippos, C. edentulus, B. bagre, M. ancylodon; these latter two species were only recorded during the rainy season. The first two PCA axes explained 57.32% of the total variation, with factor 1 primarily explained by precipitation (40.43%) and factor 2 by wind speed (16.89%) (Fig. 11). The similarity analysis revealed a significant interaction between precipitation and wind speed (ANOSIM: r = 0.6, p < 0.01). The SIMPER test identified precipitation as contributing the most to the interaction (55.67%) and wind speed the least (0.48%) (Tab. 3).

DISCUSSION

The results support the hypothesis that precipitation and winds are significant drivers of fish species richness and fishing yields in coastal tropical waters. The temporal dynamics and quality of water and nutrients are ultimately affected by climate variation, especially in precipitation and wind, generating environmental conditions that can affect the structure of estuarine and marine systems (Kennedy et al.
2002) and shifts in biotic responses (Lowe-McConnell 1987). In the current study, seasonal differences were observed in the number of species caught, the number of individuals, CPUE and average species length. Moreover, seasonal variation in precipitation appears to influence all measures except for CPUE.

More generally, we found that the communities sampled with a bottom gillnet off the coast of Alagoas were characterized by relatively high species richness (n = 73), corresponding to 72% of the richness estimated by the bootstrap method. The 16 most frequent species (Suppl. material 1) belong to nine families common in several pelagic, reef and estuary regions of the eastern coast of Brazil (e.g. Araújo et al. 2002, Costa et al. 2003, Tubino et al. 2007, Lessa et al. 2009, Rangely et al. 2010, Carneiro and Salles 2011). However, the communities were dominated by a few species (n = 20) that were both abundant and frequent, as observed in other marine areas (e.g. Godefroid et al. 2003, Lira and Teixeira 2008).

Mean yield was higher during the rainy season than during the dry season. This is not straightforward to interpret, and the results of other seasonality studies vary widely with respect to the periods of higher yield, depending on the type of gear used, the amount of effort (Béné and Tewfik 2001, Pet-Soede et al. 2001), and the types of environments and species sampled (Jury 2011, Tubino et al. 2007). In contrast to the current study, yields were found to be higher in the dry season in the bay and estuarine environments of Rio de Janeiro, Brazil (Tubino et al. 2007) and Paranaguá Bay, southern Brazil (Vendel et al. 2003). In these regions, both primary and secondary yields and fish concentrations were higher during the dry season (Allen 1982). Despite such regional differences, Tubino et al. (2007) also reported a predominance of C. crysos during the dry season in tropical estuaries, when high temperatures and biological production make these environments more attractive for the reproduction of marine species (Araújo et al. 1998). With respect to our data, increases in abundance and in the rainy-season catch could be explained by the increase in drainage (Day et al. 2012, Dittmar et al. 2001) originating from the large number of mangroves, lagoons and rivers along the Brazilian coast (Lara 2003, Araújo et al. 2006). The nutrient-rich sediments of these environments are carried to the marine environment by the rains, increasing the biological productivity of coastal waters.

The pattern of wind direction affected the seasonality of assemblage structure, with a predominance of trade winds from the northeast quadrant (NE) in the dry season and a predominance of those from the southeast quadrant (SE) in the rainy season; this is typical of regional atmospheric patterns (Servain and Legler 1986). However, as precipitation modulates seasonality in the tropics (Lowe-McConnell 1987, Macêdo et al. 2004), its effect was stronger than that of wind speed (wind speed was the least influential variable in the multivariate analysis). A weak effect of wind speed on fish assemblages was also reported for Paranaguá Bay (Vendel et al.
2003), where the highest wind speeds occurred in summer. North-eastern and south-eastern trade winds in the study region, which are associated with precipitation in the rainy season, should affect coastal currents (Hazin 2009). These currents then carry nutrients from mangroves, lagoons and rivers to the surface and water column of marine environments (Dittmar et al. 2001, Day et al. 2012), thereby increasing fishing yields.

Figures 6-9. Mean ± sd of the number of fishes, fish total length, species richness, and CPUE by climatic season (6); CPUE (mean ± sd) by month (7); precipitation by month (8); and wind strength by month (9).

Our multivariate analysis separated species into three groups. All species that were both abundant and frequent in the dry season were associated with wind. An example of this group is L. breviceps, a demersal species that is one of the most abundant fish on the Brazilian coast (Lira and Teixeira 2008, Souza et al. 2008). Individuals and species of the family Sciaenidae, with representatives found in all three groups, were very abundant in the present work. Species of this family are very abundant along the northeastern coast (Lessa et al. 2009) and include marine, estuarine and freshwater species throughout the world (Nelson 2006).

Group two contained no estuarine resident species, only pelagic or reef species. Group two species are more flexible with respect to environmental changes related to seasonality and its effects on marine dynamics (Longhurst and Pauly 1987a). Carangidae, Clupeidae, Scombridae, and Lutjanidae were represented in this group. Larvae of the former three families have been recorded along the north-eastern coast of Brazil (Mafalda Jr et al. 2006). The families Carangidae and Clupeidae contain many surface pelagic coastal species (Zavala-Camin 1983). Scombridae contains pelagic ocean species that use coastal waters only as nurseries (Moyle and Cech 1988). However, species of Scombridae, such as E. alleteratus and S. brasiliensis (group 2), are captured at various stages of growth in coastal areas of north-eastern Brazil (Lessa et al. 2009) due to the narrow continental shelf region. Group three contained the greatest number of species, inhabiting the widest variety of habitats. It contained primarily estuarine or estuarine-dependent species, with estuarine, pelagic and reef species also present. Here, the association of the estuarine environment with precipitation patterns and the life cycles of Neotropical species is more evident, confirming the importance of precipitation levels in influencing fish species richness and abundance in the coastal tropics.

Gillnets were chosen to collect data because they are the favored fishing method of artisanal fishers in tropical regions (Castello 2010, Godínez-Domínguez et al. 2000, Hovgård and Lassen 2000), including northeastern Brazil (Lessa et al. 2009). They can also capture several species of varying sizes (Nielsen and Johnson 1983) and tend to be less harmful to the aquatic environment than other methods (Hovgård and Lassen 2000). Moreover, they are also inexpensive, easy to repair and technologically simple (Hovgård and Lassen 2000).

Figure 11. PCA of species abundance associated with precipitation and wind speed.

Species co-occurrence is complex and poorly understood in neotropical waters (Andrade-Tubino et al. 2008, Azevedo et al. 2006, Barletta et al.
2010). Hidden behind precipitation and wind effects may be the influence of trophic relationships and reproductive cycles (Keddy and Weiher 1999), and perhaps also a stochastic component (Grossman et al. 1982). Moreover, this influence may be modulated by specific variables correlated with river input into coastal areas, including salinity, river flow (Gillson et al. 2012, Mitchell et al. 1999), turbidity (Castillo-Rivera et al. 2002, Cyrus and Blaber 1987, 1992, Johnston et al. 2007, Whitfield 1999) or pollution (Lekve et al. 2002). Nevertheless, the evidence provided here indicates consistent seasonal changes, determined by precipitation and by wind direction and intensity, in the distribution and abundance of species within coastal fish assemblages. These ecological changes have inevitable knock-on effects on fishery yields and the composition of the catch, and they should be carefully considered when developing coastal conservation measures, as well as fisheries policy and regulations.

Figure 1. Location of the sampled area within the Alagoas coast. The Jaraguá fishing harbor and the fishery sites are indicated.

Figure 2. Rarefaction curves for the samples caught during the dry and rainy seasons using the bootstrap richness estimator.

Figures 3-4. Whittaker plots of fishes from the Alagoas coast sampled during the rainy season (3) and the dry season (4).

Figure 5. Diversity profile of the dry and rainy seasons on the coast of Alagoas. Alpha = 1 emphasizes the separation of the richness and diversity profiles.

Figure 10. Frequency and direction of winds in each sampled month on the coast of Alagoas.

Table 1. Analysis of variance comparing seasonal effects on species number, number of fishes, CPUE and mean length for fishes caught on the coast of Alagoas.

Table 2. Analysis of variance comparing effects of rainfall and winds on species number, number of fishes, CPUE and mean length for fishes caught on the coast of Alagoas.

Table 3. Similarity percentage analysis (SIMPER) testing for differences in the main species composition and abiotic variables (rainfall and wind strength) on the coast of Alagoas.
2018-12-18T10:16:44.109Z
2018-09-03T00:00:00.000
{ "year": 2018, "sha1": "1e5b8e4d44f31f216864c87e0f336e9707381f89", "oa_license": "CCBY", "oa_url": "https://zoologia.pensoft.net/article/12898/download/pdf/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1e5b8e4d44f31f216864c87e0f336e9707381f89", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
37346219
pes2o/s2orc
v3-fos-license
Disease-specific B cell epitopes for serum antibodies from patients with severe acute respiratory syndrome (SARS) and serologic detection of SARS antibodies by epitope-based peptide antigens

Abstract

Severe acute respiratory syndrome (SARS) has emerged as a highly contagious, sometimes fatal disease. To find disease-specific B cell epitopes, phage-displayed random peptide libraries were panned on serum immunoglobulin (Ig) G antibodies from patients with SARS. Forty-nine immunopositive phage clones that bound specifically to serum from patients with SARS were selected. These phage-borne peptides had 4 consensus motifs, of which 2 corresponded to amino acid sequences reported for SARS-associated coronavirus (SARS-CoV). Synthetic peptide binding and competitive-inhibition assays further confirmed that patients with SARS generated antibodies against SARS-CoV. Immunopositive phage clones and epitope-based peptide antigens demonstrated clinical diagnostic potential by reacting with serum from patients with SARS. Antibody-response kinetics were evaluated in 4 patients with SARS, and production of IgM, IgG, and IgA was documented as part of the immune response. In conclusion, B cell epitopes of SARS corresponded to the novel coronavirus. Our epitope-based serologic test may be useful in laboratory detection of the virus and in further study of the pathogenesis of SARS.

By activating host humoral immunity, viral infections trigger production of antibodies directed against viral protein epitopes. Knowledge of viral protein epitopes is pivotal in understanding the pathogenesis of viral infections and in developing diagnostic reagents and effective vaccines. Phage display has been used to identify immunogenic targets or epitopes recognized by antibodies. Selection of peptide libraries on monoclonal antibodies [22,23] and in complex serum from patients with disease [24,25] has led to the isolation of immunoreactive peptide epitopes. Disease-specific epitopes that are useful for the development of diagnostic or preventive reagents have been identified by screening of phage-displayed peptide libraries in serum or cerebrospinal fluid samples from patients with viral infections [24,25], rheumatoid arthritis [26], multiple sclerosis [27], and autoimmune diseases.

Figure 1. Phage-displayed peptide-library screening of serum samples from patients with severe acute respiratory syndrome (SARS). A, Illustration of the principle of selection of the SARS-specific epitopes used in this study. The phage-displayed peptide library was precleared with normal human serum and then subjected to affinity selection with serum antibodies (Abs) from patients with SARS. After biopanning 3 times, immunopositive phage clones were selected by ELISA. Disease-specific epitopes were further identified and characterized by synthetic peptide binding and competitive-inhibition assay. Phage-displayed disease-specific epitopes can be used to determine microbiologic origins, to study immunotyping, and to provide information for the development of diagnostic reagents and vaccines. B, Selection of a phage-displayed peptide library on immunoglobulins purified from serum samples from patient SP1. After the third round of biopanning, marked enrichment (up to 14,000-fold) was evident, compared with the first round of selection. Ag, antigen; pfu, plaque-forming units.

To identify peptides corresponding to or mimicking natural epitopes, we applied phage-display technology to the characterization of specific antibodies present in serum samples from patients with SARS.
Results of this study will contribute to the development of a sensitive, simple test for diagnosis of SARS and will help in the investigation of the pathogenesis of SARS.

Seven patients with SARS [31] were identified and treated at the National Taiwan University Hospital (NTUH) in Taipei. Serum samples from the 7 patients were sent to the hospital's Department of Laboratory Medicine for routine serologic and biochemical analysis. The laboratory diagnostic test, clinical manifestations of illness, and treatment of these patients have been described in detail in our recently published report [31]. Serum samples were collected from patients with SARS and were tested by use of the following methods [31]: virus isolation in Vero E6 cells (American Type Culture Collection), serotype-specific reverse-transcriptase (RT) polymerase chain reaction (PCR) [16], and a standard indirect fluorescent antibody (IFA) assay for the detection of antibody to SARS-CoV [19]. All 7 patients with SARS were positive for SARS RNA by RT-PCR, and 6 of them were positive for SARS antibody by IFA assay (patient SP7 was seronegative) [31]. Although the patients had received ribavirin, corticosteroid, and intravenous immunoglobulin treatment during the early stage of the disease, the antibody was detected by IFA assay as early as 10-12 days after onset of illness [31]. Written informed consent was obtained from all volunteers. The study was approved by the ethics committee of the College of Medicine, National Taiwan University (Taipei), and the human experimentation guidelines of the US Department of Health and Human Services were followed in the conduct of this research.

Figure 2. Identification of serum antibody-selected phage clones, by ELISA, in serum samples from a patient with severe acute respiratory syndrome (SP1). A phage-displayed random peptide library was screened with serum antibodies from patient SP1. After 3 screening rounds, 49 phage clones from 72 selected phage clones were significantly reactive to antibodies in serum samples from patient SP1 (A) but not to samples of normal human serum (B). OD490, optical density at 490 nm.

Affinity selection of phages, by biopanning. Protein G magnetic microbeads (Dynabeads protein G; Dynal) were used to purify IgG from serum samples from patients with SARS. To enhance selection of peptides binding to IgG specifically associated with patients with SARS, we designed a preclearing step to remove nonspecific clones by preabsorption of the phage-displayed 12-mer peptide library (4 × 10^10 phage particles; New England BioLabs) onto purified IgG from normal serum pooled from 5 control subjects (SARS-CoV seronegative by IFA assay) who were blood donors. Next, the precleared phage library was selected on IgG purified from the serum of patients with SARS. Affinity selection on immobilized IgG from the serum of patients with SARS, for 1 h at 4°C, was used for screening. Unbound phage particles were removed, and magnetic beads were washed extensively with PBST0.5 (PBS plus 0.5% [wt/vol] Tween-20). Bound phage particles were eluted with glycine buffer (pH 2.2), neutralized with Tris buffer (pH 9.1), and amplified for subsequent rounds of selection. Three rounds of selection were performed for each patient. The biopanning protocol for the second and third rounds was identical to that of the first round, with the addition of 2 × 10^11 pfu of phage particles for biopanning.
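For orientation, fold enrichment across biopanning rounds is usually computed from the recovered versus input titers of each round. The sketch below uses placeholder titers, not values from this study.

```python
# Sketch: fold enrichment across biopanning rounds from phage titers.
# recovery = eluted pfu / input pfu per round; enrichment is relative to round 1.
# All titer values are placeholders for illustration only.
def fold_enrichment(input_pfu, eluted_pfu, ref_round=0):
    recovery = [e / i for i, e in zip(input_pfu, eluted_pfu)]
    return [r / recovery[ref_round] for r in recovery]

input_pfu  = [4e10, 2e11, 2e11]   # phage added in rounds 1-3 (placeholder)
eluted_pfu = [2e4, 5e6, 1.4e9]    # phage recovered after each round (placeholder)
print([f"{x:.0f}x" for x in fold_enrichment(input_pfu, eluted_pfu)])  # ['1x', '50x', '14000x']
```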
Titration of unamplified third-round phage eluate was done on Luria broth/isopropyl β-D-thiogalactoside/5-bromo-4-chloro-3-indolyl-β-D-galactoside plates (Falcon; Becton Dickinson). Remaining eluate was stored at 4°C. For phage ELISA, plates were blocked with PBSB (1% bovine serum albumin in PBS) at 4°C overnight. Serially diluted phage particles were added to plates coated with serum antibodies from patient SP1 and were incubated at room temperature for 1 h. The plate was washed 6 times with PBST0.5, and a 5000-fold dilution of horseradish peroxidase (HRP)-conjugated anti-M13 antibody (Pharmacia product no. 27-9411-01) was added. The plate was incubated at room temperature for 1 h, with agitation, and was washed 6 times with PBST0.5. Color was subsequently developed in the dark, with o-phenylenediamine dihydrochloride (Sigma) and hydrogen peroxide. The reaction was stopped with 3 N HCl, and absorbance was measured, at 490 nm, by use of an ELISA reader (Versa Max Tunable Microplate Reader; Molecular Devices). Immunopositive phage clones were further characterized by DNA sequencing, according to our method described elsewhere [22].

DNA sequencing and computer analysis. DNA sequences of purified phage clones were determined according to the dideoxynucleotide chain-termination method, by using an automated DNA sequencer (ABI PRISM 377; Perkin-Elmer). The primer used for phage DNA sequencing was 5′-CCCTCATAG-

Table 2. Alignment of phage-displayed peptide sequences with the complete genome of severe acute respiratory syndrome-associated coronavirus (SARS-CoV). Columns: virus or phage clone(s); peptide sequence. The peptide sequences of SARS-CoV were retrieved from GenBank (accession nos. NC004718, AY278554, and AY278741). Phage-displayed consensus amino acids are indicated by boldface type. a The peptide sequence corresponds to amino acids 1181-1190 of the CDS2 gene. b The peptide sequence corresponds to amino acids 17-29 of the CDS4 gene.

Antibody competitive-inhibition assay. For the competitive-inhibition assay, 10^9 immunopositive phage particles were incubated with individual peptide antigens or with control peptide before being transferred to the antibody-coated plate and incubated for 1 h. The plates were incubated with HRP-conjugated anti-M13 antibody, and the procedures described above were followed.

Detection of serum samples from patients with SARS, by use of immunopositive phage clones. The ELISA plates were coated with 10 µg/mL purified anti-human IgM or anti-human IgG capture antibodies (Jackson ImmunoResearch Labs), were blocked with PBSB, and then were incubated with the tested serum samples, diluted 1:100, at room temperature for 1 h. We added 10^9 immunopositive phage particles to the antibody-coated plates; the plates were washed 6 times with PBST0.5, and the procedures described above were followed. An HRP-conjugated secondary antibody (Jackson ImmunoResearch Labs), diluted 1:20,000, was added to the microtiter plates, following the procedures described above.

Screening of serum samples, with the phage-displayed peptide library. Patient SP1, the first documented case patient with SARS in Taiwan, developed a febrile illness and was admitted to NTUH on 8 March 2003. During the patient's convalescence, serum samples were collected on 8 April 2003, to study disease-specific epitopes of SARS. Selection of immunopositive phage clones by SARS-specific serum antibodies was done by immobilization on protein G magnetic beads; the bound phage clones were selected after biopanning.
To enhance selection of peptides binding to IgG specifically associated with patients with SARS, we designed a preclearing and selection procedure (see Materials and Methods). The strategy for identification of SARS-specific epitopes is shown in figure 1A. A marked enrichment (log10 scale) of as much as 14,000-fold followed the third round of selection, compared with the first round of selection (figure 1B). Individual phage clones from the third round of biopanning were randomly selected. ELISA was performed to determine whether antibodies present in serum samples from patient SP1 were specifically recognized by selected phage clones derived from biopanning. Of 72 selected phage clones, 49 had significant enhancement of binding activity to SARS-specific serum antibodies (figure 2A). These clones did not bind to normal human serum (figure 2B). Alignment of phage-displayed peptide sequences with the complete genome of several viruses, described in Materials and Methods, showed that 2 binding motifs were highly conserved in many immunopositive phage clones and that they corresponded exactly to amino acid residues of SARS-CoV (table 2). The peptide sequences of 2 immunopositive phage clones, SP1-32 and -50 (ISPYNTIVAKLR; table 1), were similar to amino acid residues 375-386 of CDS4 in hMPV (LSPLGALVACYK; consensus sequences are indicated by boldface type). To prove that selected phage clones bound specifically to serum from patients with SARS, antibodies from 1 patient with SARS (SP1) and from 2 healthy control subjects were incubated in a 3-fold serial dilution of immunopositive phage clone SP1-1. SP1-1 bound serum from patients with SARS specifically and dose dependently. Two control serum samples did not react with SP1-1 (figure 3A). For further confirmation that selected phage clones specifically bound serum from patients with SARS, 8 immunopositive phage clones (SP1-1, -8, -29, -30, -32, -39, -54, and -72) were incubated with serum samples from 4 patients with SARS and with serum samples from 4 healthy control subjects. ELISA indicated that all immunopositive phage clones had high antibody specificity to the SP1 phage clones (figure 3B). Three phage clones (SP1-1, -8, and -29) were highly reactive with serum samples from the 4 patients with SARS. None of these phage clones reacted with serum samples from the 4 healthy control subjects (figure 3B). To further confirm that the phage-displayed peptide was the epitope of SARS-specific serum antibodies, a peptide competitive-inhibition assay was performed to determine whether the synthetic peptide MSP1-1 and the selected phage clone SP1-1 competed for the same antibody-binding site. Binding activity of SARS-specific serum antibodies with phage clone SP1-1 was inhibited by the synthetic peptide MSP1-1 in a dose-dependent manner. The arbitrary control peptide P7M-M1 (SLHNTMPSES) had no effect on the ability of the phage particles to bind SARS-specific serum antibodies (figure 4B). One microgram per milliliter of peptide MSP1-1 inhibited 92.4% of the phage-clone binding to SARS-specific serum antibodies. Binding of the other phage clone, SP1-20, to SARS-specific serum antibodies also was inhibited by the synthetic peptide MSP1-20 in a dose-dependent manner. One microgram per milliliter of peptide MSP1-20 inhibited 92.7% of phage-clone binding to SARS-specific serum antibodies (figure 4D). The synthetic peptide SP3M (VKIDNASPAS), which corresponded to amino acid residues 18-27 of CDS4 of SARS-CoV, bound antibody in a concentration-dependent manner (figure 4A).
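Percent inhibition in the competitive-inhibition assay follows from the drop in phage binding signal in the presence of peptide. A minimal sketch with placeholder OD490 readings (not data from the study) is:

```python
# Sketch: percent inhibition in a competitive-inhibition ELISA from OD490 readings.
# The OD values below are placeholders for illustration only.
def percent_inhibition(od_with_peptide, od_without_peptide, od_background=0.0):
    bound_with = od_with_peptide - od_background
    bound_without = od_without_peptide - od_background
    return 100.0 * (1.0 - bound_with / bound_without)

# e.g. phage binding signal drops from 1.30 to 0.10 OD490 in the presence of peptide
print(round(percent_inhibition(0.10, 1.30), 1))  # -> 92.3 (% inhibition)
```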
For further confirmation that phage-displayed peptides were epitopes of SARS-specific serum antibodies, a peptide competitive-inhibition assay was conducted to determine whether the SP3M peptide and phage clone SP1-1 competed for the same antibody-binding site. Our results indicated that the binding activity of SARS-specific serum antibodies with phage clone SP1-1 was inhibited by SP3M in a dose-dependent manner. One microgram per milliliter of peptide SP3M inhibited 83.8% of phage-clone binding to SARS-specific serum antibodies, whereas the arbitrary control peptide P7M-M1 (SLHNTMPSES) had no effect (figure 4C).

Detection of serum samples from patients with SARS, using epitope-based peptide antigens. We evaluated whether epitope-based peptide antigens in immunopositive phage clones or synthetic peptides could be used as diagnostic reagents for the detection of serum samples from patients with SARS. Between 8 March and 8 May 2003, we collected serum samples from 7 patients with illnesses that met the CDC case definition of probable SARS, to test the detection efficacy of the peptide antigens. During the patients' convalescence, serum samples were obtained 118 days after disease onset. For patients SP1-SP7, serum samples were obtained 42, 33, 30, 30, 25, 24, and 19 days, respectively, after onset of illness. To diagnose SARS, we tested these serum samples with phage clone SP1-1, using IgM- or IgG-capture ELISA. Results indicated that, for samples from patients SP1-SP6, detection was positive by IgM-capture ELISA (figure 6A) and that, for samples from patients SP1-SP5 and SP7, detection was positive by IgG-capture ELISA (figure 6B). None of 22 healthy control subjects was detected as having SARS (figure 6A and 6B). The mean optical density plus 3 × SD (0.108 + 0.186, measured at 490 nm) was used to determine the cutoff value (0.294) for IgM-capture ELISA. The sensitivity of this serologic test was 85.7%. The mean optical density plus 3 × SD (0.069 + 0.079, measured at 490 nm) was used to determine the cutoff value (0.148) for IgG-capture ELISA. The sensitivity of this serologic test was 85.7%. If we combined the results of IgM- and IgG-capture ELISAs, samples from all 7 patients would be detected, and sensitivity would increase to 100% (figure 6A and 6B). In contrast, all serum samples obtained from the 22 healthy control subjects were seronegative (figure 6A and 6B). The specificity of this test was 100% for healthy control subjects. To test the reactivity of serum antibodies from patients with SARS, we used epitope-based synthetic peptide SP3M, which contains 10 aa residues in CDS4 of SARS-CoV. SP3M detected samples from all 7 patients with SARS, by ELISA (figure 6C). In contrast, all serum samples obtained from the 22 healthy control subjects were seronegative. The mean optical density plus 3 × SD (0.035 + 0.104, measured at 490 nm) was used to determine the cutoff value (0.139). Sensitivity and specificity were both 100% when we used epitope-based synthetic peptide SP3M for this serologic test (figure 6C).

Kinetics of antibody response to SARS viral infection. We further analyzed antibody-response kinetics, by using ELISA, in 2 patients (SP1 and SP4) who had higher SARS-CoV-specific antibody titers. In patient SP1, IgM antibodies appeared on day 13 after disease onset, and levels peaked on day 17 (figure 7A). IgG antibody titers were not detected until day 17 after disease onset, and levels increased exponentially until day 27 (figure 7C).
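The cutoff and sensitivity/specificity figures above follow directly from the mean-plus-3-SD rule applied to healthy-control readings. A minimal sketch with placeholder OD490 values (not the study's measurements) is:

```python
# Sketch: serologic cutoff (mean + 3*SD of healthy-control OD490) and the
# resulting sensitivity/specificity. All OD values are placeholders.
import statistics

def cutoff(control_ods):
    return statistics.mean(control_ods) + 3 * statistics.stdev(control_ods)

def sens_spec(patient_ods, control_ods, threshold):
    tp = sum(od > threshold for od in patient_ods)
    tn = sum(od <= threshold for od in control_ods)
    return tp / len(patient_ods), tn / len(control_ods)

controls = [0.05, 0.09, 0.12, 0.08, 0.11, 0.10, 0.14]   # healthy-control OD490 (placeholder)
patients = [0.45, 0.62, 0.38, 0.29, 0.18, 0.55, 0.33]   # patient OD490 (placeholder)

thr = cutoff(controls)
sensitivity, specificity = sens_spec(patients, controls, thr)
print(f"cutoff = {thr:.3f}, sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```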
The kinetics of the IgA antibody response was similar to that for IgG: IgA antibody titers were not detected until day 17 after disease onset, and levels increased exponentially until day 27 (figure 7E). In patient SP4, IgM antibodies were not detected until day 11 after disease onset, and levels peaked on day 19 (figure 7B). IgG antibody titers were not detected until day 19 after disease onset, and levels increased exponentially until day 26 (figure 7D). In patient SP4, levels of IgA antibody titers increased exponentially from onset of the disease until day 26 (figure 7F). The kinetics of antibody response also were studied in 2 patients (SP2 and SP3) who had lower SARS-CoV-specific antibody titers (figure 7G and 7H). Patient SP2 tested seronegative for IgM at day 8 after the onset of symptoms (figure 7G), and patient SP3 tested seronegative for IgM and IgG at days 10 and 14, respectively, after the onset of disease (figure 7H).

Figure 7. IgM (A and B), IgG (C and D), and IgA (E and F) responses in patients SP1 and SP4 were measured by using phage clone SP1-1, as described in Materials and Methods. IgM and IgG responses of 2 additional patients with SARS, SP2 (G) and SP3 (H), were also measured by using phage clone SP1-1, as described in Materials and Methods. Control phage clones did not react with serum samples from the patients with SARS. To study the kinetics of antibody response, we used phage clone SP1-1 but not synthetic peptide SP3M, because the background of the ELISA increased when we used synthetic peptides as antigens for the detection of human IgM. OD490, optical density at 490 nm.

DISCUSSION

Identification of viral B cell epitopes is important in understanding virus-antibody interactions at a molecular level and provides information for the development of virus-specific serologic diagnostic reagents and subunit vaccines. This is the first study to characterize serotype-specific B cell epitopes of SARS-CoV, using the phage-display method. Selected phage-displayed peptide sequences had 2 consensus motifs, Pro-Pro-Asn and Val-Lys-Ile-X-Asn, which corresponded to amino acid residues 1184-1186 of CDS2 (PPN) and 18-22 of CDS4 (VKIDN) in SARS-CoV, respectively. Results of synthetic peptide-binding and competitive-inhibition assays further confirmed that patients with SARS generated antibodies against SARS-CoV. We also detected antibodies in serum samples from patients with SARS by using phage-displayed and epitope-based peptide antigens. Tissue culture, electron microscopy, IFA tests, and PCR amplification of specific genomic sequences and of the complete genome of SARS-CoV have provided evidence of novel coronavirus infection in patients with SARS [8,16,34]. Using phage-display methods, we found B cell epitopes in the antibody response of patients with SARS that corresponded to SARS-CoV, further confirming the microbiologic origin of SARS. In the alignment of phage-displayed peptide sequences with the published protein sequences of several viruses (described in Materials and Methods), 16 immunopositive phage clones displayed a consensus motif of PPN, which corresponded only to amino acid residues 1184-1186 of CDS2 of SARS-CoV (table 2). The gene product of CDS2 (GenBank accession no. AAP30029) of SARS-CoV is an orf1a polyprotein containing 4382 aa residues. Another 16 immunopositive phage clones displayed the consensus motif VKIXN, which corresponded only to amino acid residues 18-22 of CDS4 of SARS-CoV (table 2). The gene product of CDS4 (GenBank accession no.
P59632) of SARS-CoV is a hypothetical transmembrane protein containing 274 aa residues. We also found the peptide sequences of 2 immunopositive phage clones, SP1-32 and -50 (ISPYNTIVAKLR; table 1), to be similar to amino acid residues 375-386 of CDS4 in hMPV (LSPLGALVACYK; consensus sequences are indicated by boldface type). The serum samples from all 7 patients with SARS were detected by use of ELISA with phage clone SP1-1, but only the serum samples from patient SP1 were detected by phage clone SP1-32 (figures 3 and 6; data not shown). These findings suggest that SARS is caused by SARS-CoV. We also cannot exclude the possibility that hMPV infection may have exacerbated the disease in some of the patients with SARS. Correlations between SARS and hMPV infection need further investigation. We also compared phage-displayed peptide sequences with those from reference viruses representing each species in the 3 groups of coronaviruses [34], but no similarity was found. Serum samples from patients with SARS were accurately detected only by use of ELISA with phage clone SP1-1 and synthetic peptide SP3M (figure 6). Sensitivity for IgM- or IgG-capture ELISA alone was 85.7%; however, when IgM- and IgG-capture ELISAs were combined, serum samples from all 7 patients with SARS were detected, and sensitivity increased to 100%. Using the same serologic test, we found that serum samples from all 22 healthy adults were seronegative, yielding a specificity of 100% for healthy control subjects (figure 6A and 6B). Epitope-based synthetic peptide SP3M, corresponding to amino acid residues 18-27 of CDS4 in SARS-CoV, detected serum samples from all 7 patients with SARS, by ELISA, and was nonreactive with serum samples from all 22 healthy adults (figure 6C). Sensitivity and specificity were both 100% when we used the 2 methods for this serologic test. For all 7 patients with SARS and the 22 healthy adults, a blinded analysis also was done by ELISA. Serum samples from all 7 patients with SARS were detected, and the 22 healthy adults were found to be seronegative, further confirming the efficacy of this means of detecting the disease (data not shown). In our study of antibody-response kinetics in patients with SARS, antibodies were detectable at 13-18 days after onset of disease (figure 7). IgM was not found earlier than days 8, 10, and 11 after onset of symptoms in patients SP2, SP3, and SP4, respectively (figure 7B, 7G, and 7H). During acute SARS-CoV infection, IgM antibodies did not appear until days 8-11, and levels peaked on days 17-19 after onset of disease. IgG antibody titers were detectable only by days 17-19 and continued to increase exponentially until days 26 and 27 (figure 7). Delayed induction of IgM and IgG responses to SARS needs further investigation. To some extent, this delay may explain the severity of SARS-CoV infections. The experimental approach described in this study identifies peptides that react with disease-specific antibodies. We isolated the B cell epitope of SARS-CoV and confirmed that the Pro-Pro-Asn and Val-Lys-Ile-X-Asn motifs were crucial for peptide antigen-antibody binding. From the study of antibody kinetics (figure 7), we found that the peptides were able to detect patient serum samples during the convalescent phase of SARS but not during the acute phase.
Peptide-binding and competitive-inhibition assays showed that synthetic peptide SP3M corresponded to amino acid residues 18-27 of CDS4 of SARS-CoV, further confirming that patients with SARS generated antibodies against SARS-CoV (figure 4). To further confirm that the epitope-based synthetic peptides could elicit antibodies against SARS-CoV, we generated antibodies against peptides MSP1-1 and SP3M. Both MSP1-1- and SP3M-immunized serum samples were found, by IFA, to contain antibodies that recognized SARS-CoV-infected Vero E6 cells (figure 5), further proving that the phage-displayed epitope was immunogenic to SARS-CoV. We also studied the neutralizing activity of these antibodies (using 100 TCID50 of SARS-CoV). The neutralizing activity of patient SP1 antiserum was ≥1:60. No notable neutralization of SARS-CoV was observed in MSP1-1- and SP3M-immunized mouse serum samples. Identification of neutralizing epitopes by using spike protein-purified antibodies is currently being investigated. Our findings also indicate that B cell epitopes of these antibodies are linear. Typically, B cell epitopes identified by this method have an easily recognizable consensus sequence, often corresponding to the peptide sequence found in the natural antigen [22]. These epitopes can be used to study microbiologic origins, to develop epitope-based diagnostic reagents and immunogens against viral infection, and to dissect the antibody response in SARS. The phage itself is an excellent immunogen. Immunization of mice with phage particles elicits a T cell-dependent response against the phage-displayed epitope [35,36] and antibodies against the antigen [23,24]. Development of a SARS vaccine consisting of a cocktail of phage-displayed neutralizing epitopes may be possible. To date, we have analyzed results for 7 patients with illnesses that met the CDC case definition of probable SARS, and we have studied serum samples obtained from convalescing patients 118 days after onset of disease. Further investigation regarding the applicability of this serologic test to human serum samples is in progress with more cases of SARS. Specifically, we are evaluating the percentage of subclinical or mild infections and the relationship between antibody response and disease severity. Serologic diagnostic reagents for SARS that are highly sensitive, specific, and convenient need to be developed. We found that using synthetic peptides for SARS diagnosis was relatively simple. Because the selected phage clones and epitope-based peptide antigens in our study were highly specific for serum from patients with SARS, we feel that they may be useful not only in developing diagnostic and prognostic reagents but also in understanding the pathogenesis of SARS.
2018-04-03T04:02:11.988Z
2004-08-15T00:00:00.000
{ "year": 2004, "sha1": "290958bacfddb40a96925bd3d7405ce5f31063c0", "oa_license": null, "oa_url": "https://academic.oup.com/jid/article-pdf/190/4/797/2489324/190-4-797.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "502f8c21aca1e2e1f17dc79ee6be922c3e5dcfa6", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
131768229
pes2o/s2orc
v3-fos-license
Flud: a hybrid crowd-algorithm approach for visualizing biological networks Modern experiments in many disciplines generate large quantities of network (graph) data. Researchers require aesthetic layouts of these networks that clearly convey the domain knowledge and meaning. However, the problem remains challenging due to multiple conflicting aesthetic criteria and complex domain-specific constraints. In this paper, we present a strategy for generating visualizations that can help network biologists understand the protein interactions that underlie processes that take place in the cell. Specifically, we have developed Flud, an online game with a purpose (GWAP) that allows humans with no expertise to design biologically meaningful graph layouts with the help of algorithmically generated suggestions. Further, we propose a novel hybrid approach for graph layout wherein crowdworkers and a simulated annealing algorithm build on each other's progress. To showcase the effectiveness of Flud, we recruited crowd workers on Amazon Mechanical Turk to lay out complex networks that represent signaling pathways. Our results show that the proposed hybrid approach outperforms state-of-the-art techniques for graphs with a large number of feedback loops. We also found that the algorithmically generated suggestions guided the players when they are stuck and helped them improve their score. Finally, we discuss broader implications for mixed-initiative interactions in human computation games. INTRODUCTION Many fields of science require meaningful and visually appealing layouts of networks (also known as graphs). A prominent example is the discipline of network biology, where scientists use networks to understand the chemical reactions and protein interactions that underlie processes that take place in the cell [3]. In order to present and analyze these networks, researchers require aesthetic layouts of these networks that clearly convey the relevant biological information. A layout of a network assigns x and y coordinates to each node and routes each edge using a straight line or a curve in order to create a meaningful visual representation. There are two major approaches for creating network layouts. The first approach views humans as the deciding agent and primary creator, with computers seen as support tools. An example is a graph (or network) drawing interface in Cytoscape [64] that provides the layout tools for designers to use while drawing networks. This approach offers creative freedom and allows the user create meaningful layouts by capturing complex, domain-specific constraints. However, it is timeconsuming to create layouts manually. The second approach uses fully-automated methods and does not require human intervention. In contrast to the first approach, it can generate data visualizations at scale [18,32,55,66]. However, these methods lack the ability to capture complex visualization constraints and domain-specific needs. As a result, it is a common for biologists to depend on the time-consuming practice of manually improving automatically generated visualizations. Crowdsourcing has emerged as a promising solution to scale up such manual tasks [63,71,77] by tapping into the human intelligence and creativity of crowdworkers. However, it is unclear how to design interfaces and tasks that will allow novice crowds-who lack expertise in biology and computer science in general-to make domain-specific modifications to network layouts and balance various constraints. 
Our research seeks to bridge this fundamental gap. In this work, we present Flud, an online game with purpose (GWAP) that allows humans with no expertise to design biologically meaningful network layouts with the help of algorithmicallygenerated suggestions. The goal of the game is to move nodes in a given network so as to create a layout that optimizes a score based on pre-specified design criteria. These criteria include four previously defined aesthetic considerations ( Figure 1): 1) minimizing the number of edge crossings, 2) keeping nodes connected by an edge close to each other, 3) dispersing disconnected node pairs, and 4) increasing the separation between nodes and edges. We also introduce a new, fifth criterion inspired by a biological application to cellular signaling: 5) maximize the number of downward pointing paths in the layout ( Figure 1). These types of paths draw visual attention to sequences of edges that lead from receptor proteins in the cell membrane (green triangles, placed at the top of the layout) through internal nodes to effector molecules in the nucleus (transcription factors, yellow squares, placed at the bottom of the layout). The role of the game players is to make modifications to the layout so as to balance the criterionspecific scores based on the direction of flow of information in the network, edge crossings, and relative distances between nodes and edges. Since, these criteria may conflict with each other, Flud allows the players to track the changes in layout score and the corresponding criterion-specific scores with every move. Another important feature of Flud is the availability of a "clue" for each criterion. If the player is stuck, perhaps because the network is complex or it is unclear what the next move should be, the game highlights a small subset of nodes and edges in the network as clues. These nodes and edges are selected by Flud such that changing the positions of these elements is likely to improve the score for the corresponding criterion. With the help of criterion-specific scores, clues, and layout tools in Flud interface, we expect that game players will succeed in generating aesthetic and meaningful layouts. Flud also facilitates a novel mixed-initiative network layout strategy that utilizes a sequential and collaborative process wherein human players and a simulated annealing based layout algorithm [17] build on each other's progress. The simulated annealing algorithm seeks to iteratively reach a layout with higher score by randomly searching the neighborhood of an initial guess. Its strength is that it avoids getting stuck at local optima -a solution that is better than the others nearby but is not the very best -by probabilistically accepting an inferior solution. In contrast, human players use their intelligence and creativity to optimize the score by making modifications that may be global in nature and challenging to achieve using an automated method. To showcase the effectiveness of Flud and the novel mixed-initiative approach, we recruited nearly 2,000 novice crowd workers on Amazon Mechanical Turk to lay out and visualize complex protein networks that represent signaling pathways in human cells. We presented three different versions of the hybrid crowd-algorithm approach and evaluated them against a crowd-only strategy and algorithmic baselines. Our results show that such a collaboration between humans and algorithms leads to higher scoring layouts than either from humans or from algorithms alone. 
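Because the layout score and the annealing procedure are described above only at a high level, the following is a minimal sketch of how a criterion score (using just two of the five criteria: edge crossings and downward-pointing edges) and a simulated annealing loop over node positions could be combined. The function names, weights, jump size, and cooling schedule are illustrative assumptions, not Flud's actual implementation.

```python
# Minimal sketch (not Flud's code): a layout score built from two of the five
# criteria -- fewer edge crossings and more downward-pointing edges -- and a
# simulated annealing loop that perturbs node positions and probabilistically
# accepts worse layouts to escape local optima.
import math
import random
from itertools import combinations

def _ccw(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _cross(p1, p2, p3, p4):
    """True if segment p1-p2 properly intersects segment p3-p4."""
    d1, d2 = _ccw(p3, p4, p1), _ccw(p3, p4, p2)
    d3, d4 = _ccw(p1, p2, p3), _ccw(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def score(pos, edges, w_cross=10.0, w_down=1.0):
    crossings = sum(
        _cross(pos[a], pos[b], pos[c], pos[d])
        for (a, b), (c, d) in combinations(edges, 2)
        if len({a, b, c, d}) == 4  # ignore edge pairs sharing an endpoint
    )
    downward = sum(pos[v][1] > pos[u][1] for u, v in edges)  # y grows downward
    return w_down * downward - w_cross * crossings

def anneal(pos, edges, steps=20000, t_start=1.0, t_end=1e-3, jump=40.0):
    current, current_s = dict(pos), score(pos, edges)
    best, best_s = dict(current), current_s
    nodes = list(pos)
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)  # geometric cooling
        cand = dict(current)
        n = random.choice(nodes)
        x, y = cand[n]
        cand[n] = (x + random.uniform(-jump, jump), y + random.uniform(-jump, jump))
        cand_s = score(cand, edges)
        # Always accept improvements; accept worse layouts with prob. exp(delta / t).
        if cand_s >= current_s or random.random() < math.exp((cand_s - current_s) / t):
            current, current_s = cand, cand_s
            if current_s > best_s:
                best, best_s = dict(current), current_s
    return best, best_s

# Toy signaling-like network: receptor R at the top, transcription factor TF below.
positions = {"R": (100, 0), "A": (60, 80), "B": (140, 80), "TF": (100, 160)}
edges = [("R", "A"), ("R", "B"), ("A", "TF"), ("B", "TF"), ("A", "B")]
layout, s = anneal(positions, edges, steps=2000)
print(round(s, 1))
```

A caller could extend score() with the remaining criteria (dispersing disconnected node pairs, separating nodes from edges, penalizing stretched edges) and reuse the same loop, which mirrors how the hybrid workflow lets players and the algorithm take turns optimizing a shared objective.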
We found that the game elements such as criterion-specific modes and clues supported crowd workers in the visualization tasks. The results also show that the crowd workers who moved one of the nodes suggested by Flud contributed more to the overall layout score in comparison to rest of the nodes in the network. In summary, our contributions include: • A novel mixed-initiative approach that combines crowdsourcing with computational engines to create high-quality visualizations of biological networks. • A game with a purpose, Flud, that facilitates this approach with two components: a) an interface that gamifies the network visualization task to make it more accessible to crowd workers with no biological or computer science expertise, and b) an implementation that allows crowd workers and algorithms to build on each others progress. • Experiments that provide empirical evidence of the benefits of mixed-initiative layout methods compared to algorithmic baselines. Algorithms for network layout Several algorithms exist to automatically compute network layouts. Popular methods include circular [6,30], hierarchical [23,31,65], orthogonal [24], force directed [29,43], and spectra [50,51] layouts. Several of these methods can generate layouts in a few seconds for moderately-sized networks. Implementations are available in various tools or libraries, e.g., Cytoscape [61], Gephi [42], Graphviz [25], NetworkX [34], NetBioV [69], and Pajek [4]. Despite their wide availability, these methods have several drawbacks. They may produce very specific types of layouts, e.g., arrange nodes in circles [6,30], or be appropriate for restricted classes of networks, e.g., directed acyclic networks [23,65]. They may compute layouts that try to optimize specific aesthetic criteria, e.g., prevent edges from being stretched and disconnected nodes from coming too close [29,43] or hierarchically arrange nodes and remove edge crossings [31,65]. One such popular method is Dig-Cola [20], which arranges the nodes in layers while preserving edge length and symmetries. Since highlighting information flow is an important criterion for experiments in this paper, we use Dig-Cola as a baseline to compare performance against Flud. In general, these methods do not have the flexibility to accommodate a diverse variety of layout criteria and domain-specific constraints. As discussed earlier, a general purpose approach to accommodate multiple layout criteria is to formulate network layout as an optimization problem and solve it using simulated annealing [17]. Taking inspiration from these ideas, we propose to examine how a mixed-initiative approach that combines human intelligence with simulated annealing can yield better layouts than either of these methods alone. Leveraging human abilities through crowdsourcing to lay out networks Some research has examined how users lay out networks, including in comparison to automated layouts. Two studies [22,71] asked participants to draw an aesthetic layout of a given network. Dwyer et al. [22] concluded that the best user-generated layouts performed as well as or better than automated layouts based on physical models. Although these studies do not use crowd workers, they provide evidence that non-expert humans can create network layouts comparable to algorithms. Researchers have explored using non-expert volunteers (e.g., citizen scientists) to perform other visualization tasks. 
For example, systems such as ManyEyes [72] and Sense.us [39] have used novice crowds to create, collaborate on, and analyze data visualizations. Connect the Dots [54] and CRICTO [12] ask novice crowds to build social networks for intelligence analysis, but not to create layouts for them. Inspired by these projects, we explore the use of non-expert workers on Amazon Mechanical Turk to create biological network layouts. Most related to this paper, our prior work on CrowdLayout leverages crowd workers to design layouts of biological networks that satisfy domain-inspired guidelines specified in natural language [63]. CrowdLayout also uses crowd workers to evaluate how well these layouts satisfy the guidelines. Our work in CrowdLayout shows that translating domain knowledge into representational guidelines can allow non-expert crowds to create effective layouts. Our work differs from and complements these efforts in several important ways. First, we focus on combining multiple aesthetic criteria with a specific domain knowledge guideline, all of which can be translated into mathematical formulae. Second, we score layouts to provide real-time feedback to the user who is modifying the layout rather than (as in the case of CrowdLayout) wait for other crowd workers to evaluate the layout. Third, we formulate layout creation as a collaboration among multiple crowd workers or between crowd workers and computational engines, in contrast to single-user design tasks. Designing network layouts by combining algorithms with crowdsourcing Prior work has explored using mixed-initiative systems to perform a variety of tasks like data wrangling [44], exploratory analysis [76], and natural language translation [33]. In a recent work, Heer [38] discussed how these systems use shared representations like text-editing interfaces and domain-specific languages through which humans and algorithms can work together to accomplish a goal. In contrast to these systems, the shared representation in Flud involves three components: (i) a network layout interface, (ii) 2D coordinates on the screen as potential actions, and (iii) a scoring scheme as a shared objective. More importantly, we use these shared representations to support mixed-initiative interaction in a novel network layout task. Some mixed-initiative systems also leverage the complementary strengths of human intelligence and automated techniques to help users understand network data. These include a visualization tool that allows users to analyze legal citation networks with degree of interest functions [70], and a system that helps users explore clusters of similar research papers via machine learning and visualization [10]. These systems can be highly effective for individual expert users, but this paper considers how non-expert crowds can work with algorithms to pre-process network layouts for expert users. Researchers have also begun exploring mixed-initiative systems that use crowdsourcing. For example, Flock [11] helps build accurate classifiers by supporting a mixed-initiative interaction where crowd nominates features and machine learning weighs them. Jellybean [59] was able to accurately count objects in images by combining results from crowds and computer vision algorithms. Ideahound [62] combines human judgment with machine learning techniques to create a computational semantic model of the emerging solution space. 
Mobi [78] demonstrated how a mixed-initiative crowdsourcing system could support trip planning by allowing a traveler to specify both quantitative global constraints (e.g., allotted time, number of activities) enforced by the software, and qualitative constraints (e.g., visiting the beach vs. the downtown) satisfied by crowd workers. Subsequent research explored these ideas for other tasks and domains, such as conference planning [47]. Moreover, while Mobi guided worker effort through todo items, systems like Flock, Jellybean, and Ideahound used simple instructions to guide them. In contrast, we consider how gameplay mechanisms can motivate desired behavior in crowd workers. Also, Flud differs in that we adapt these ideas for a novel task type (i.e., network layout) and task domain (i.e., biology). Yuan et al. [77] proposed a strategy that utilized crowd workers to lay out subnetworks of a complex network. Subsequently, the authors used a constrained distance-embedding algorithm to compose the layouts of the subnetworks into one for the entire network. In contrast, our work seeks to facilitate the design process for crowd workers in order to improve the overall quality of layouts they generate. Designing a game with a purpose to lay out networks The idea of using games to solve real-world scientific problems is not new [13,27,60] and has been used to solve a wide range of complex problems, such as protein folding [14], RNA sequence design [53], DNA sequence alignment [46], molecular design [2], and neuron reconstruction [49]. Inspired by the success of these games, we developed Flud, a game with a purpose that leverages the creativity and cognitive abilities of the players to create biological network visualizations, a challenge that has not previously been tackled using games. We also explore a game mechanic that differs from most prior citizen science games. While the players in most of these games compete against each other, Flud players sequentially collaborate on a layout by building on each others' progress. Some computer games have been developed for network drawing purposes [19], including the popular one-player game Planarity [26]. Planarity (also known as UntangleManiak or The Plateau) asks the player to move nodes in order to untangle (remove) the edge crossings in the given network. CycleXing [5] is a two-player game with an adversarial game mechanic where one player tries to maximize the number of crossings, and the other tries to minimize them. Similar to Planarity and CycleXing, Flud asks the players to move the nodes around to manipulate layouts. In contrast, Hashi [37] is a single-player network drawing based game where the players draw an edge between nodes. The goal of a Hashi game is to create a single connected component while following specific rules like the edges cannot cross and should be orthogonal. While minimizing the number of edge crossings is one of our criteria, Flud players need to consider four other layout criteria as well. More importantly, none of these games asks the players to optimize the number of downward pointing paths. In a recent study, Hamari et al. found that leaderboards are the most commonly studied gamification technique for motivating the players [35]. They are generally used to help players judge their success by comparing their performance in against other players (e.g., Foldit [14], Peekaboom [74]) and even themselves (e.g., Planarity [26]). 
In contrast to these games, since Flud players sequentially collaborate by building on the previous best layout, we only show them the best score so far instead of showing a leaderboard. However, it might be interesting to explore the use of leaderboard to motivate the players to play multiple games by aggregating the points. The scoring system is another essential gaming element and is used to reward players for their performance. The scoring system generally depends on more than one component. While games like Phylo [46] provide detailed scoring information by showcasing values for all of the components that make up the overall score, games like Foldit [14] and EyeWire [48] only show the overall score. In Flud, our players need to consider multiple layout criteria while playing the game, and therefore, we show all of the component scores along with the overall scores. Our scoring information differs from these systems in two important ways. First, to help players understand the scoring method, we transparently show the weights of all the component scores. Second, we use visual cues like up/down arrows and green/red colors to allow players to track progress after each move. Game modes is another gaming element used in Flud. They are commonly used in games to offer players different gameplay settings, difficulty levels, tools, rules, and even graphics. A typical example of the game mode is the choice between single-player versus multi-player setting. Expectedly, game modes are common in serious games as well. For example, Phylo has three different games modes (Story, Ribo, Phylo) with different gameplay settings including one for RNA molecules. Foldit has fives game modes: Modeless, Pull, Structure, Note, Design. Each of these modes enable the players with unique abilities, which are otherwise not available in other modes. In Flud, we adapt these ideas and use five modes where we enable players with a game visualization and clues unique to a given mode. However, our use of modes differs from other serious games in that we allocate a mode to players instead of giving them an option to choose it. Moreover, in this paper, we empirically study the best strategy to assign modes to Flud players. DESCRIPTION OF THE FLUD SYSTEM Flud is a web-based game with a purpose that allows a requester -a biologist seeking to visualize his or her network data -to crowdsource the layout design task for biological network visualizations to novice game players. Flud, as a system, has two main components: the requester interface and the game interface. We now describe how Flud assists novice players in visualizing and laying out a network in the context of layout criteria specified by a requester. Flud requester interface This interface allows a requester to send a network to Flud and crowdsource the layout design task. The requester can use the interface to control parameters such as the number of players per game, layout criteria, and the number of minutes a player can play the game. We detail some of the most important parameters below. 3.1.1 Flud layout criteria. A requester can ask Flud players to optimize the network layout for five types of criteria (see Figure 1). The first criterion is domain-inspired, while the other four are aesthetic and have been previously used as design guidelines in network drawing (also called graph drawing). (1) Downward pointing paths: This guideline asks players to maximize the number of downward pointing paths. 
This domain-specific constraint is especially useful for analyzing the flow of information in biological networks that represent cellular signaling. (2) Non-crossing edge pairs [57,67,68]: The goal of this guideline is to maximize the number of edge pairs that do not intersect, i.e., minimize the number of edge crossings. (3) Edge length [67,68]: This guideline asks players to minimize the length of the edges in the network. (4) Node distribution [7,17,67,68]: In this guideline, unconnected nodes should be far apart from each other. (5) Node edge separation [17]: This guideline seeks to create layouts where nodes are positioned away from edges. Requesters may also assign priorities or weights to each layout criterion to convey their relative importance to Flud players. These priorities help players to prioritize layout criteria in case of conflicts. A requester can also exclude a layout criterion by assigning it a priority of zero. 3.1.2 Crowdsourcing approach. One of the challenges in creating network layouts is that different criteria may conflict with each other. Heuristics used by automated methods may compute non-optimal solutions or get stuck in local optima. For instance, the correct orientation of several edges in a path may be required to make it point downwards. Therefore, it is common practice for experts (biologists in our case) to manually improve automatically generated visualizations. In Flud, we use crowdsourcing to leverage the visual and cognitive abilities of humans to observe patterns and identify solutions that escape local optima. Flud allows requesters to specify the total number of game players for a layout design task. They can also select one of two available crowdsourcing approaches: (1) Crowd: In this approach, Flud asks a fixed number of players (specified by the requester) to play the game in a sequence. In each game session, a player starts with the highest-scoring layout created so far, i.e., across all earlier sessions. During a game session, a player may create multiple layouts. Flud stores the highest-scoring layout of these. If this layout scores better than the current leader, Flud updates the best overall layout. In this fashion, players can iteratively improve upon one another's results. (2) Hybrid: Here, Flud alternates sessions of gameplay between players and simulated annealing [17]. In each session, either a player or simulated annealing starts with the highest-scoring layout created so far, with this layout updated as in the crowd-only approach. In this fashion, human players and the algorithm can iteratively improve upon one another's results. Flud allows requesters to specify the initial temperature (default = 100) and number of iterations (default = 500) in a given simulated annealing session.
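To make these two session-sequencing strategies concrete, the sketch below shows how a sequence of sessions can build on the best layout found so far, both in the Crowd setting and in the alternating Hybrid setting. The callables `play_game_session` and `run_simulated_annealing` are hypothetical placeholders for a human game session and an annealing run, each assumed to return a candidate layout with its score; this is a minimal sketch, not Flud's actual implementation.

```python
import random

def run_sequence(initial_layout, initial_score, num_sessions, hybrid,
                 play_game_session, run_simulated_annealing,
                 initial_temperature=100, iterations=500):
    """Sequential collaboration: every session starts from the best layout so far.

    `play_game_session(layout)` and `run_simulated_annealing(layout, t0, iters)`
    are placeholder callables that return a (layout, score) pair.
    """
    best_layout, best_score = initial_layout, initial_score
    for session in range(num_sessions):
        if hybrid and session % 2 == 1:
            # Hybrid: alternate sessions are handed to simulated annealing.
            layout, score = run_simulated_annealing(
                best_layout, initial_temperature, iterations)
        else:
            layout, score = play_game_session(best_layout)
        if score > best_score:            # keep only improvements
            best_layout, best_score = layout, score
    return best_layout, best_score

# Toy usage with dummy sessions that return random scores.
dummy_player = lambda layout: (layout, random.random())
dummy_annealer = lambda layout, t0, iters: (layout, random.random())
print(run_sequence("initial layout", 0.0, num_sessions=6, hybrid=True,
                   play_game_session=dummy_player,
                   run_simulated_annealing=dummy_annealer))
```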
Flud game interface The game interface has three major parts: a visualization of the network being laid out on the left, text panels with game-related tips in the middle, and a sidebar with game controls and score information on the right (Figure 2). Network visualization. Flud provides an interactive network visualization that supports both touch and mouse gestures to select and drag one or more nodes. Since Flud players do not need biological expertise, they do not see any node or edge labels describing their biological meaning. Instead, each node displays the number of edges incident on it as a signal of that node's importance. Scores. The top portion of the sidebar displays the highest score achieved by a layout for the network, the score of the current layout, and a criterion-specific score board. The score board displays the scores and priorities (in coin-shaped badges) for each individual criterion. In the score board, we sort the criteria in decreasing order of the priorities assigned by the requester. We scale the per-criterion scores between 0 and 10,000 to avoid displaying floating point values. After the player makes a move, Flud recalculates and displays, in real time, each per-criterion score as well as the total score. A green up arrow (↑) or a red down arrow (↓) next to each per-criterion score allows players to track the impact of their last move on the score. In addition, we display the change in the overall score with a similar color scheme. Criterion-specific modes and clues. Csikszentmihalyi [16] argues that a user can achieve flow, a state of being "in the zone," if they attain multiple component states. These states include challenge-skill level balance, immediate and unambiguous feedback from the system, concentration on the task at hand, and clarity of goals. Inspired by flow theory, which has been influential in game design [15], we implemented two important gameplay features - criterion-specific modes and clues (Figures 5 and 6) - to help players focus on their goals and make progress without becoming bored or frustrated. At the start of a game session, the player is assigned a single criterion-specific mode. In this mode, to delineate the task, the visual representation of the network highlights (in red) the elements that are relevant to the criterion-specific task. Flud has five such modes: Downward pointing paths, Non-crossing edge pairs, Edge length, Node distribution, and Node edge separation. Moreover, if the player is stuck while playing, Flud reduces the challenge level by presenting them with an algorithmically-generated, mode-specific "clue" that highlights a small subset of nodes and edges in the network and further narrows the focus of a player to a very specific task [73]. Flud selects specific elements in the clue such that changing their positions is likely to improve the score for the corresponding criterion. We now describe each criterion-specific mode and the corresponding clue. (i) Downward pointing paths: We say that an individual directed edge is downward pointing if the y-coordinate of its head is below the y-coordinate of the tail and the angle between the edge and the x-axis is greater than or equal to a fixed threshold (15 degrees in our implementation; Figure 3). In the downward pointing paths mode, Flud highlights every upward pointing edge in red (Figure 5a). A player can reorient such an edge in an attempt to increase the number of downward pointing paths. A clue in this mode (Figure 5b) highlights a path in the network that (a) contains at least one upward pointing edge, and (b) has the property that reorienting all upward-pointing edges in the path is guaranteed to increase the number of downward pointing paths. If there is no such path, the player does not see any clue.
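The per-edge test used in mode (i) can be written down directly. The sketch below checks whether a directed edge is downward pointing given node coordinates; the screen-coordinate convention (y growing downward) and the function name are assumptions of this sketch rather than Flud's actual code, and the 15-degree threshold is the value mentioned above.

```python
import math

MIN_ANGLE_DEGREES = 15  # threshold from the description above

def is_downward_pointing(tail_xy, head_xy, y_grows_downward=True):
    """Return True if the directed edge tail->head points downward.

    The edge qualifies when (a) its head is visually below its tail and
    (b) it makes an angle of at least MIN_ANGLE_DEGREES with the x-axis,
    so nearly horizontal edges do not count.
    """
    (tail_x, tail_y), (head_x, head_y) = tail_xy, head_xy
    drop = (head_y - tail_y) if y_grows_downward else (tail_y - head_y)
    if drop <= 0:
        return False                      # the head is not below the tail
    run = abs(head_x - tail_x)
    angle = math.degrees(math.atan2(drop, run))
    return angle >= MIN_ANGLE_DEGREES

# A steep downward edge qualifies; a nearly horizontal one does not.
print(is_downward_pointing((0, 0), (10, 40)))   # True
print(is_downward_pointing((0, 0), (100, 5)))   # False
```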
(ii) Non-crossing edge pairs: This mode displays every edge crossing as a red point (Figure 5c). When a player hovers on an edge crossing, the mode highlights the intersecting edges. Decreasing the number of edge crossings is perhaps one of the most challenging layout criteria since the player has to manipulate nodes without introducing new intersections. Intuitively, the higher the degree of a node, the harder it may be to remove intersections involving edges incident on that node. Therefore, the clue in this mode highlights a pair of crossing edges such that the total degree of the four nodes involved is the smallest over all the intersections in the layout (Figure 5d). Note that we cannot guarantee that the player will indeed be able to decrease the number of crossings by moving one or more of these nodes. (iii) Edge length: For this criterion, players need to consider the length of all the edges in the network. Therefore, the mode presents all the nodes and edges in blue. The clue highlights an edge that is either very long or very short (Figure 6a). Correcting the length of this edge should lead to an improvement of the edge length score. (iv) Node distribution: In this mode, the clue highlights the unconnected pair of nodes that are closest to each other (Figure 6b). Thus, the clue provides a greedy way for players to consider node pairs that need to be moved further apart. (v) Node edge separation: Here, the clue highlights the node and edge that are closest to each other (Figure 6c). Layout controls. The bottom part of the sidebar contains multiple controls that allow players to rapidly perform non-trivial changes to the layout. These changes include expanding and squeezing the spacing between selected nodes, undoing and redoing earlier actions, and reverting to the layout with the best score. 3.2.5 Game tips. The middle part of the interface contains text panels with game-related tips. These tips include an example of a good layout for a toy network, a link to the tutorial on how to use criterion-specific clues, and specific instructions on how to improve the per-criterion score for the assigned mode. Bounding box. To prevent players from creating pathological layouts that move nodes indiscriminately far apart from each other, we set the scores for each criterion to zero if any node is moved outside a bounding box of fixed size. Flud scoring system In describing the scoring system, we use G = (V, E) to denote the network G being laid out, with V denoting its nodes and E denoting its edges. We use n to denote the number of nodes and m for the number of edges in G. Given the x- and y-coordinates for every node in V, we use d(u, v) to denote the Euclidean distance between nodes u and v and l(e) to denote the length of an edge e (the Euclidean distance between its endpoints). We use w and h to represent the fixed width and height of the bounding box. We now describe how we computed the score corresponding to each criterion (per-criterion score). (1) Downward pointing paths: We compute a normalized downward pointing score as DP(G) = π(G)/ρ(G), where π(G) is the number of downward-pointing paths in the layout and ρ(G) is the approximated upper bound of the number of downward-pointing paths. The closer DP(G) is to one, the larger the number of downward-pointing paths in the layout. (2) Non-crossing edge pairs: Since the maximum number of non-crossing edge pairs possible in G is m(m − 1)/2, we compute the non-crossing edge pairs score as EC(G) = x(G)/(m(m − 1)/2), where x(G) is the number of edge pairs that do not cross in the layout. The closer EC(G) is to one, the smaller the number of edge crossings in the layout. We have intentionally defined EC(G) in this way so that high values reflect good performance and players focus on increasing their scores. (3) Edge length: We define the cost c(e) of an edge to be equal to its length l(e) if l(e) is at least the size of a node, and equal to a large penalty otherwise. We normalize the cost of each edge by the largest possible edge length and, therefore, the penalty should be greater than the diagonal of the screen. We compute the edge length score of a layout as EL(G) = 1 − (1/m) Σ_e c(e)/√(w² + h²), where the sum is over all edges e in E. Thus, the closer EL(G) is to one, the closer the nodes connected by an edge are, on average. (4) Node distribution: Here, we want to maximize the average distance in the layout between a node and its closest unconnected node. Unlike the previous criterion, it applies only to pairs of nodes not connected by an edge. We compute the node distribution score as ND(G) = (1/n) Σ_v min_u d(u, v)/√(w² + h²), where the sum is over all nodes v in V and, for each v, the minimum is over all nodes u not connected to v by an edge. (5) Node edge separation: In this criterion, we want to maximize the average distance between a node and the closest edge to it in the layout. Therefore, we define the node edge separation score as NED(G) = (1/n) Σ_v min_e d(v, e)/√(w² + h²), where, for each node v, the minimum is over all edges e not incident on v and d(v, e) denotes the distance between node v and edge e. We define the overall score OS(G) for a layout as a weighted sum of the five per-criterion scores, OS(G) = p_DP · DP(G) + p_EC · EC(G) + p_EL · EL(G) + p_ND · ND(G) + p_NED · NED(G), where the coefficients are the priorities assigned to each criterion by the requester. We describe the algorithm and parameters used to compute the per-criterion scores in the Appendix.
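As an illustration of how two of these per-criterion scores can be computed from node coordinates, the sketch below counts properly crossing edge pairs for EC(G) and measures node-to-edge distance for NED(G). It is a minimal sketch under stated assumptions: edge pairs that share a node are treated as non-crossing, collinear overlaps are ignored, and the function names are illustrative rather than Flud's actual API.

```python
from itertools import combinations

def _orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def _segments_cross(a, b, c, d):
    """True if the open segments ab and cd properly cross."""
    d1, d2 = _orient(a, b, c), _orient(a, b, d)
    d3, d4 = _orient(c, d, a), _orient(c, d, b)
    return d1 * d2 < 0 and d3 * d4 < 0

def non_crossing_score(edges, pos):
    """EC(G): fraction of edge pairs that do not cross, out of m(m-1)/2 pairs."""
    m = len(edges)
    if m < 2:
        return 1.0
    crossings = 0
    for (u1, v1), (u2, v2) in combinations(edges, 2):
        if {u1, v1} & {u2, v2}:            # shared node: not counted as a crossing
            continue
        if _segments_cross(pos[u1], pos[v1], pos[u2], pos[v2]):
            crossings += 1
    return 1.0 - crossings / (m * (m - 1) / 2)

def node_edge_distance(v_xy, a_xy, b_xy):
    """Euclidean distance d(v, e) from point v to the segment e = (a, b)."""
    (vx, vy), (ax, ay), (bx, by) = v_xy, a_xy, b_xy
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((vx - ax) ** 2 + (vy - ay) ** 2) ** 0.5
    t = max(0.0, min(1.0, ((vx - ax) * dx + (vy - ay) * dy) / (dx * dx + dy * dy)))
    px, py = ax + t * dx, ay + t * dy       # closest point on the segment
    return ((vx - px) ** 2 + (vy - py) ** 2) ** 0.5

# Toy layout: edges a-b and c-d cross; e-f sits off to the side.
pos = {"a": (0, 0), "b": (4, 4), "c": (0, 4), "d": (4, 0), "e": (10, 0), "f": (10, 4)}
print(non_crossing_score([("a", "b"), ("c", "d"), ("e", "f")], pos))  # ~0.667
print(node_edge_distance(pos["e"], pos["a"], pos["b"]))               # distance from e to edge a-b
```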
Simulated Annealing We use simulated annealing [17] as the primary baseline for comparison and as the algorithm in the hybrid approach. We selected this method primarily due to its flexibility to accommodate all of our criteria (Section 3.1.1). We run the algorithm for a fixed number of iterations (e.g., 500) with the temperature decreasing by a factor of 0.995 after every iteration. Each iteration involves 10n steps, where n is the number of nodes in the network. In every step, we move a node to a random position (x_new, y_new) computed as follows from the current position (x_current, y_current): x_new = x_current + (2ρ_x − 1) · w · (T/T_max)² and y_new = y_current + (2ρ_y − 1) · h · (T/T_max)², where T_max = 100 and T represent the maximum and current temperatures, respectively, and we draw ρ_x and ρ_y uniformly at random from the interval [0, 1]. Thus, we select the new position uniformly at random from a rectangle centered at the current location of the node. The size of this rectangle is proportional to the fixed page size and decreases quadratically with the temperature. We compute the score of the layout with this new position. We accept the move if it improves the score. Otherwise, we accept this move, even though it worsens the score, with a probability Pr(∆s) = e^(−|∆s|/T), where ∆s is the change in score. Therefore, as the temperature decreases, so does the probability of accepting a move that worsens the score. Once the algorithm stops, we use the best layout in the entire run as the final layout. Overall, the annealing schedule took approximately 1 hour to cool down from a high initial temperature (T = T_0 = 100) to a low temperature (T ≈ 1).
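The following is a minimal sketch of the annealing procedure just described: temperature-scaled random node moves with Metropolis-style acceptance. The `score` callable is a placeholder for the weighted layout score OS(G), and the exact move-rectangle scaling is an assumption consistent with the description above rather than the exact implementation.

```python
import math
import random

def simulated_annealing_layout(layout, score, width, height,
                               t_max=100.0, iterations=500, cooling=0.995,
                               moves_per_node=10):
    """Minimal simulated-annealing sketch for node placement.

    `layout` maps node -> (x, y); `score(layout)` is a placeholder for the
    overall layout score. Higher scores are better.
    """
    current = dict(layout)
    current_score = score(current)
    best, best_score = dict(current), current_score
    t = t_max
    for _ in range(iterations):
        scale = (t / t_max) ** 2                      # move rectangle shrinks with temperature
        for node in list(current) * moves_per_node:   # roughly 10n steps per iteration
            old = current[node]
            current[node] = (old[0] + (2 * random.random() - 1) * width * scale,
                             old[1] + (2 * random.random() - 1) * height * scale)
            new_score = score(current)
            delta = new_score - current_score
            if delta >= 0 or random.random() < math.exp(-abs(delta) / t):
                current_score = new_score             # accept (possibly worsening) move
                if current_score > best_score:
                    best, best_score = dict(current), current_score
            else:
                current[node] = old                   # reject and undo the move
        t *= cooling                                  # geometric cooling schedule
    return best, best_score

# Toy usage: prefer layouts whose two nodes end up far apart.
toy = {"a": (0.0, 0.0), "b": (1.0, 1.0)}
spread = lambda L: math.dist(L["a"], L["b"])
print(simulated_annealing_layout(toy, spread, width=100, height=100, iterations=50)[1])
```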
Implementation details We implemented Flud in Python using the Django web framework, with network visualization supported by Cytoscape.js [28]. Flud interfaces with GraphSpace [8] for storing and sharing networks and their layouts. To create a game of Flud, the requester interface shares the networks posted by the requester to a public group called 'Flud' on GraphSpace. The Flud system pings GraphSpace for new networks in the 'Flud' group and makes them available on the Flud website for a player to lay out. When a player selects a graph, Flud presents the highest-scoring layout so far for that graph. During a game session, a player may create multiple layouts to maximize their score, and Flud stores the highest-scoring layout. If this layout scores better than the current leader, Flud updates GraphSpace with this layout. In this fashion, players can iteratively improve upon one another's results. Finally, the requesters can access all of the crowd-generated layouts, including the best overall layout, on GraphSpace. EVALUATION Having implemented the Flud system as described above, we aimed to showcase its effectiveness in crowdsourcing the layout design task for biological network visualizations. To this end, we posed the following research questions: RQ1: How should Flud assign criterion-specific modes to players to optimize the scores they achieve? RQ2: How do the crowd and hybrid approaches perform in comparison to automated methods? RQ3: What are the dynamics of the mixed-initiative collaboration in the hybrid approach? In the rest of this section, we describe how we selected crowd workers, the networks we laid out, the task each worker had to perform, and how we compensated them, before presenting our experiment design. Networks We selected three different complex protein networks of similar size (Table 1) that represent signaling pathways in cells. These networks contained both directed and undirected edges. These networks contained a small (G1), medium (G2), and large (G3) number of cycles. The presence of a large number of cycles makes it difficult to create layouts with many downward-pointing paths. Hence, these networks should present different levels of challenge to crowd workers and to automated algorithms. Crowd workers To showcase that Flud can support broad player bases, we did not require the participants to have network or biology expertise. Therefore, we recruited novice crowd workers from the Amazon Mechanical Turk (MTurk) platform as our game players. We used MTurk's built-in qualification types to only recruit workers from the US with a Human Intelligence Task (HIT) approval rate of at least 97% and at least 100 completed HITs. In a pilot study where we asked crowd workers to play the game, we noticed that some players (a) did not carefully go through the tutorial and (b) moved the nodes aimlessly to use up the fixed number of moves required to be paid for completing the game. In order to prevent such unproductive crowd activity, we only invited crowd workers who correctly solved two small puzzles towards the end of the tutorial to play the real game. The first puzzle asks the crowd worker to increase one of the per-criterion scores in a toy network by at least one point with the help of a clue. The second puzzle again asks the worker to increase the per-criterion score for the same network, except that there is no clue available. To avoid learning effects, we recruited the crowd workers such that no individual repeated the same network or criterion-specific task. Crowd worker task We start the task for a crowd worker by randomly selecting a game corresponding to one of the three networks. Next, we assign them one of the layout criteria (using one of the strategies described in Section 4.5.1) to restrict their gameplay to a criterion-specific mode. We ask the crowd worker to go through a two-part interactive tutorial that introduces the visual elements, game rules, and use of the criterion-specific clues. In the first part of the tutorial, we train the worker to improve the criterion-specific subscore both with and without the help of a clue. At the end of part one of the tutorial, we give the worker an option to submit the HIT or to continue with part two of the tutorial and play the game to earn a bonus (refer to Section 4.4).
The second part of the tutorial introduces the worker to the remaining the subscores and the complete rules of the game. During the game session, each worker had one hour to generate an improved layout. A player can quit the game at any point. Irrespective of the assigned mode, the crowd worker's goal is to create a layout that optimizes the total score based on the layout criteria (see Section 3.1.1). Compensation For the first half of the tutorial, we compensate the workers with at least the minimum hourly wage rate ($7.25 per hour) in our region. If a worker elects to continue playing Flud in order to obtain bonus compensation, we pay this amount according to how they increase the score of the layout. We use this strategy to motivate a worker to fully utilize the time available to them. We assign budgets to each criterion in proportion to its priorities. These budgets determine the amount of bonus a player can earn while playing the game for the assigned criterion-specific mode. The bonus amount earned when a player improves the score from s i to s j is where s j > s i , b is the budget assigned to the given criterion and s target is the target score we want the players to achieve. The exponential nature of the bonus computation ensures that the workers earn more money per point as they approach the target score. The goal is to motivate the workers to continue playing the game, even though the task of improving the score gets harder as the player gets closer to the target score. Ideally, we should aim to set s target to an achievable value in order to ensure that the bonus is fair. For example, it is not possible for a player to achieve a score 10,000 (maximum possible score) if the network has many cycles. Therefore, we suggest using an achievable target (say, 2,000). Once the workers achieve the target score, the requester has the option to increase the budget and the desired score. Experiment design Given the central importance of domain-specific downward pointing path criterion, we assigned it a high priority of 400. We assigned the non-crossing edge-pairs criterion a priority of 3 and rest of the criteria a priority of 1. We conducted two experiments to address our research questions. Experiment 1. We designed the first experiment to answer RQ1, where we want to find a good strategy to assign a criterion-specific mode to a game player in a sequence. To this end, we evaluated the performance of two different approaches: Crowd and Crowd-Random. In the Crowd approach, we assigned criterion-specific modes to workers in the order of their priorities, i.e., downward pointing paths (DP), non-crossing edge pairs (EC), edge length (EL), node distribution (ND), and node-edge separation (NED), as shown in Figure 7. We recruited 20 crowd workers for each game sequence where four (N = 4) workers focused on each of the five layout criteria used in Flud. In contrast, for the Crowd-Random approach, we randomly assigned each criterion four (N = 4) times in a sequence of 20 crowd workers (refer to Figure 7). We recruited crowd workers for three game sequences per network for each approach. Overall, we recruited 360 crowd workers for this experiment. Experiment 2. In our second experiment, we evaluated the performance of the Crowd approach described in Experiment 1 and the Hybrid approach against automated baseline methods with the goal of answering RQ2 and RQ3. 
Similar to the Crowd approach, for the Hybrid approach, we recruited 5N crowd workers for each game sequence where N workers focused on each of the five criteria; we provide the values of N below. However, in Hybrid, we alternate sessions of gameplay between crowd workers and simulated annealing (refer to Figure 7). Fig. 7. Figure illustrating the Crowd, Crowd-Random, and Hybrid (Crowd-SA) approaches evaluated in this paper. Each circle represents a crowd worker in a game sequence where a crowd worker is assigned a criterion-specific mode M. Each criterion-specific mode is assigned N times in a game sequence. A rectangle represents a simulated annealing schedule with initial temperature t. Davidson and Harel's simulated annealing based layout algorithm [17] starts with "non-local" moves (T = 100), where a node's next location is not restricted to nearby positions, and ends with "local" moves (T ≈ 1) where a node can move only to a nearby position. In this experiment, we tried three different types of hybrid approaches (Crowd-SA100, Crowd-SA50, and Crowd-SA20), where we alternate the Crowd approach and simulated annealing schedules with three different initial temperatures (refer to Figure 8). • SA100 (High temperature). In this schedule, we started with a high temperature (T_0 = 100) and only ran the initial part of the schedule (approximately 15 minutes). In this schedule, we only made random moves that are "non-local" in nature. • SA20 (Low temperature). Here, we started with a low temperature (T_0 = 20) and only ran the later part of the schedule (approximately 15 minutes). Due to the low initial temperature, we only made random moves that are "local" in nature. Such moves further optimized the score while preserving the overall structure of the layout. Overall, we aimed to recruit N = 10, N = 15, and N = 20 crowd workers per criterion for networks G1, G2, and G3, respectively, for both the Hybrid and Crowd approaches. We stopped recruiting crowd workers once the total gameplay time taken by the workers in the sequence exceeded 24 hours. At the end of the sequence, we fine-tuned the best layout so far to generate the final output layout by using the fine-tuning procedure described in Harel and Sardas's work on incremental improvement of layouts of planar networks. In order to highlight the quality of layouts generated by the Flud system, we compared our results against four automatic layout algorithms. • Simulated annealing (SA) [17]. We selected this algorithm as our primary baseline due to its flexibility to accommodate all of our criteria (Section 3.1.1). We ran the algorithm for a fixed number of iterations (e.g., 500) where the temperature cools down from a high initial temperature (T = T_0 = 100) to a low temperature (T ≈ 1). To compare the performance of SA against Crowd and Hybrid, we ran the simulated annealing schedule multiple times for 24 hours such that each schedule builds upon the best layout created so far. We ran the baseline simulated annealing algorithm on a machine with an Intel Xeon (16 cores at 2.40 GHz) CPU. • Dig-Cola [20]. Our second baseline was Dig-Cola, which may indirectly optimize the downward pointing paths criterion since it lays out nodes in a hierarchical manner. Dig-Cola also provides a way to conserve aesthetic criteria such as edge lengths and symmetries. We used the implementation of Dig-Cola in the Neato program in the Graphviz package [25].
We ran Dig-Cola with nine different values of the minimum gap between levels {0.1, 0.2, 0.5, 1, 2, 5, −0.5, −1, −2}, four different values of the preferred edge length parameter 1, 3.125, 5, and all possible values of the overlap parameter. Finally, we used the layout that produced the largest total score. • IPSEP-Cola [21]. Our third baseline was IPSEP-Cola, which tries to ensure that node s is placed above node t if there is a directed edge from s to t. If the network contains directed cycles, IPSEP-Cola computes the largest acyclic subgraph and relaxes the constraints on the excluded edges. This approach may optimize the number of downward pointing paths. We used the implementation of IPSEP-Cola in the Neato program in the Graphviz package [21]. We ran a parameter search similar to Dig-Cola and used the layout that produced the largest total score in our comparisons. • Spring-electrical model [41]. This force-directed algorithm uses spring elasticity to keep connected nodes closer and electric charge repulsion to keep disconnected nodes away from each other. The method is available as the 'sfdp' program in the Graphviz package [25]. We ran the program with six different values of the spring constant K = {0.1, 0.2, 0.5, 1, 2, 5, 10} and of the power of repulsive force R = {0.1, 0.2, 0.5, 1, 2, 5, 10}. We used the layout from the parameter pair with the highest score in our comparisons. Finally, we also instrumented Flud to record each action taken by the crowd workers during their gameplay as well as the corresponding scores and layout. We used the collected data to analyze how the crowd workers played the game, and understand the impact of features such as criterion-specific modes and clues on the scores. RESULTS 5.1 RQ1: How should Flud assign criterion-specific modes to players to optimize the scores they achieve? We used Experiment 1 to compare the performance of the crowdsourcing approach when the criterion-specific modes are assigned in decreasing order of the priorities (i.e., Crowd) versus a random assignment of modes (i.e., Crowd-Random). Figure 9A shows that for each of the three networks, the median total score achieved by the Crowd approach was greater than the median score achieved by Crowd-Random approach. On further investigation, we found out that the Crowd approach achieved a better median per-criterion score for downward pointing paths (DP), node distribution (ND), and node edge separation (NED) criterion for all three networks ( Figure 9B). On the other hand, the Crowd-Random approach achieved a higher median per-criterion score for non-crossing edge pairs (EC) and edge length (EL) criterion for networks G2 and G3. Since the Crowd approach achieves a higher score for three out of five layout criteria, including the important domain-specific DP criterion (reflected in the total layout score), we decided to assign criterion-specific modes in order of priorities in Experiment 2. In the Crowd approach, we also observed that while four crowd workers were able to lay out as many as 1,303 downward pointing paths in network G1, the same number of crowd workers could only lay out 321 and 131 downward pointing paths in networks G2 and G3, respectively. We attributed this difference in performance to a large number of cycles in G2 and G3 compared to G1 (Table 1). 
We noticed that in the presence of large numbers of cycles, crowd workers had to make more moves in network G2 (mean=899) and G3 (mean=850) compared to network G1 (mean=403) where the number of cycles is very low (Table 1). Therefore, we decided to recruit crowd workers in proportion to the number of simple paths from sources to targets (Table 1) in Experiment 2 to balance out the crowd workers' low throughput for downward pointing paths criterion in networks with large number of cycles. RQ2: How do the crowdsourcing approaches perform in comparison to automated methods? We used the data collected from Experiment 2 to answer this research question. We considered several aspects to answer RQ2: overall scores, number of downward-pointing paths, scores for aesthetic criteria, rate of improvement, and final network layouts. Overall score. First, we compared the performance of all the approaches on the overall score ( Figure 10). We observed that the crowd and hybrid approaches clearly outperformed all the automated techniques for networks G2 and G3. In contrast, for network G1, the automated methods, especially SA, had comparable performance to the crowd and hybrid approaches. We also noticed that Dig-Cola, IPSEP-Cola, and Spring Electrical algorithms hardly improved the starting overall score for networks G2 and G3. However, Dig-Cola and IPSEP-Cola were able to considerably improve the overall score for network G1. We attributed this difference to the much larger number of cycles in G2 and G3 in comparison to G1 (Table 1). To further understand these results, we examined the downward-pointing paths results in detail. Downward-pointing paths criterion. Here, we compared the performance of all the approaches on the number of downward pointing pathways, our sole domain-specific layout criterion ( Figure 11). In Figure 11, we saw a trend similar to overall scores shown in Figure 10. We attribute this similarity to the high priority we assigned to the DP criterion. This explains that DP is the main reason why our baseline methods did not perform well on networks G2 and G3. We believe as the number of cycles increase in a network, it becomes harder for the automated methods to optimize for downward pointing paths. In fact, Dig-Cola, Spring Electrical, and IPSEP-Cola computed very few downward pointing paths (≤ 10) in networks G2 and G3. In contrast, crowd workers seem to be able to observe the direction of the flow along the edges and develop moves that significantly increase the number of downward pointing paths despite the presence of cycles. Overall, these results indicate that crowd workers are better than automated methods at downward pointing paths task, specially for networks with several cycles. Moreover, Crowd-SA100 hybrid approach achieves even more number of downward pointing paths than crowd workers alone (Crowd). We discuss the performance of the hybrid approaches in detail later in the section. Other aesthetic criteria. Next, we compared the distributions of per-criterion scores for the remaining layout criteria of all the approaches ( Figures A1-A4 in Appendix). In these figures, we observed that the simulated annealing (SA) baseline outperformed the Crowd, Hybrid, and other automated approaches on the EC, EL, ND, and NED criteria. 
However, when we analyzed the median scores achieved by the Crowd and SA (refer to Figure 12A), we found that the (positive) percentage difference for the DP criterion was one order of magnitude higher than the (negative) percentage difference for each of the remaining criteria. Since, DP is our sole biology inspired criterion, it is important to achieve high DP score to get a biologically meaning drawing. Therefore, we believe there is an advantage (more biologically meaningful drawing) of using Crowd approach even though it performs slightly worse (less aesthetic) than Simulated Annealing on the EC, EL, ND, and NED criteria. We also observed that the Crowd approach outperformed the Spring Electrical and Dig-Cola methods for all criteria ( Figure 12B-C). Crowd also outperformed IPSEP-Cola on the DP, ND, and NED criteria (refer to Figure 12D). Crowd versus Hybrid. The Crowd approach outperformed the Crowd-SA100 hybrid approach (Figure 12H) on distance-based criteria (EL, ND, and NED). On the other hand, Crowd-SA100 performed better on the DP and EC criteria. We attribute this behavior to assistance from the SA100 simulated annealing component in the Crowd-SA100 hybrid approach. We believe that since SA100 starts at a high temperature, it has the ability to make non-local node movements that could result in the re-orientation of an edge to remove a crossing or to make it downward pointing. These non-local moves allow the Crowd-SA100 approach to use SA100 to further optimize for the DP and EC criteria. In contrast, the hybrid Crowd-SA20 approach performed better than the Crowd approach for distance-based criteria such as EL and ND. Here, SA20 can move a node only to a nearby position and, therefore, allows Crowd-SA20 to explore the local neighborhood of a layout state in a more exhaustive manner compared to crowd workers. Surprisingly, the Crowd approach outperformed the hybrid Crowd-SA20 approach on the NED criterion, despite assistance in local moves from SA20 ( Figure 12F). Overall, the results show that the hybrid Crowd-SA100 approach, where simulated annealing makes non-local jumps, performs as well as or better than the non-hybrid Crowd approach. Rate of improvement. Next, to evaluate the rate of improvement, we computed the average improvement in total score per minute achieved by a crowd worker or a simulated annealing run in the SA, Crowd, and Hybrid approaches ( Figure 13). Here, we observed that the Crowd and Hybrid approaches improved the scores at a faster rate than SA in all three networks, while also considerably outscoring it in networks G2 and G3. We also noted that Crowd-SA100 had a better average improvement in total score per minute than Crowd for all three networks. These results show that while SA and Crowd generate layouts comparable to Crowd-SA100 in some networks, Crowd-SA100 offers a better rate of improvement than these methods. Separately, we also observed that the automated methods -Dig-Cola, IPSEP-Cola, and Spring Electrical-generated the final layout within seconds. The Crowd and Hybrid approaches were slower, but clearly outperformed the automated methods within a few minutes. This observation is supported by Figure A6 in Appendix showing the scores achieved by each approach over time for each network for all game sequences. Qualitative comparisons of layouts. Finally, we qualitatively compared the best layouts generated using Flud and automated approaches (Figure 14 and 15). 
Figure 14 shows the layouts generated for network G1, a network representing crosstalk from the Estrogen signaling pathway to the HIF-1 signaling pathway. While Dig-Cola generated a layout with several downward pointing edges (Figure 14b), it is hard to read the layout because of many edge crossings and node-node and node-edge overlaps. In contrast, the Crowd and SA approaches (Figure 14a and Figure 14c, respectively) created layouts with many downward-pointing paths while achieving better separation among nodes and edges. Figure 15 shows the layouts generated for network G2, a network representing the Epithelial-to-mesenchymal transition (EMT). In the crowdsourced layout (Figure 15a), we can clearly see the downward-pointing path from FGF to SOS/GRB2 on the right, whereas it is hard to see this two-edge path in the layouts generated using the Dig-Cola (Figure 15b) and Spring Electrical (Figure 15d) methods. We also note that Figure 15b gives the false impression that paths from source nodes to the SNAI1 target node are very short. In contrast, it is clear from the crowdsourced layout (Figure 15a) that SNAI1 is regulated by multiple upstream source nodes via multiple paths or mechanisms. RQ3: What are the dynamics of the mixed-initiative collaboration in the hybrid approach? To answer this question, we compared the average improvement in total score contributed by the crowd workers in different approaches (Figure 16), using the data collected in Experiment 2. We found that the contribution by crowd workers is lower in hybrid approaches than in the Crowd approach, where the crowd workers work alone without any help from simulated annealing. We also noticed that, on average, the contribution by simulated annealing is lower in hybrid approaches than in the SA approach for networks G1 and G2. We believe that this decrease in contribution from crowd and simulated annealing in hybrid approaches is because of the distribution of the total work between these two components. However, the Crowd and SA collaborate with each other in the Crowd-SA100 approach, leading to higher scoring layouts than either from Crowd or from SA alone (Figure 10). We also found that simulated annealing's contributions in hybrid approaches increased in comparison to SA for network G3. This observation is supported by Figure 16, which shows that the simulated annealing in Crowd-SA100 achieved 1,875% more average improvement in total score than in SA. To highlight the increase in simulated annealing's performance in Crowd-SA100 in comparison to Crowd, we present an example that shows the progression of the layout as crowd workers and SA100 collaborate back to back during one of the games (Figure 17). In this figure, we have highlighted two paths from source to target for illustration purposes. Figure 17a shows the layout at the beginning, before crowd workers and simulated annealing start optimizing it. There are many upward pointing edges that need corrections to make the highlighted paths downward pointing. During the crowd task, the worker corrected all of these edges except one (Figure 17b). However, one of the highlighted paths was still left with an upward pointing edge. Next, SA100 took over and fixed the other path via two major movements in the layout (Figure 17c-d). We argue that SA100 was able to optimize DP in this scenario (Figure 17b) because the number of moves required to create a downward pointing path was small in comparison to the starting layout (Figure 17a).
Fig. 17. G3 network layouts at four different stages in an example game, illustrating how crowd workers and simulated annealing build on each other's progress with the Crowd-SA100 approach. a) Layout before the crowd worker started playing the game. b) Layout after the crowd worker finished playing the game. c) Layout during the SA100 annealing process. d) Layout after the SA100 annealing process ends. For illustration purposes, the node and edge widths in these layouts have been modified to highlight two representative paths from source node to target node. We regard such hybrid collaborations as the main reason behind simulated annealing's ability to escape local optima (see the nearly vertical lines ending in squares in Figure 18) in the hybrid approaches, unlike in the case of SA. Overall, these results indicate the importance of contributions from the crowd workers in the hybrid approach, since SA100 is similar to the early annealing phase in SA (Figure 8). Fig. 18. Scores achieved by each approach over time for network G3 for a representative sequence. The x-axis represents the time taken in hours to reach a particular score and the y-axis represents the score. The circle and square markers correspond to crowd workers and simulated annealing, respectively. In-figure annotations mark where SA gets stuck in, and later escapes, local optima. How did the players play the Flud game? We analyzed the crowd workers' gameplay, their interaction with the game elements in the Flud interface, and the impact of these elements on their performance (Figures 20 and 19), using the data from Experiment 2. Figure 20A shows the distribution of the number of moves made by the crowd workers in each criterion-specific mode. We found that the crowd workers not only made the highest number of moves (median = 126) in the DP mode, but also used the downward pointing path clue (median = 28) more often in comparison to other criterion-specific clues (median = 19). We attribute this behavior to the higher increase in score when the crowd workers are assigned the DP mode (mean improvement in total score per session ≈ 73,629) and use the downward pointing path clue (mean improvement in DP score per move ≈ 6.3). This observation is supported by Figure 20B and Figure 19. In contrast to the DP criterion, we found that the node-edge separation criterion clue, which gave the lowest average improvement per move (Figure 20B), was used the least across all criteria (Figure 20A). Next, in order to understand if the clues helped the players, we compared the average improvement per move for each criterion (Figure 20B). Our results show that when the crowd workers moved a node not involved in a clue, they lost points and decreased the criterion-specific score. In contrast, when players changed the position of a node that was highlighted in a clue, the criterion-specific score increased, on average. These results indicate that the criterion-specific clues helped the crowd workers to improve the criterion-specific scores. Then, we analyzed the improvement in per-criterion scores made by crowd workers in each criterion-specific mode, as shown in Figure 19. We found that the crowd workers improved the criterion-specific score while playing the game in a criterion-specific mode. DISCUSSION In this paper, we presented Flud, an online game with a purpose that allows humans with no expertise to design biologically meaningful graph layouts with the help of algorithmically generated suggestions.
Below, we discuss some key takeaways from our evaluation, focusing on overall performance differences, reflections, and design implications. Crowd outperforms algorithms We found that crowdsourcing the layout design task on Flud can generate more meaningful layouts than state-of-the-art algorithms for laying out biological networks. Specifically, our results indicated that crowd workers, in general, are considerably better than the state-of-the-art algorithmic approaches at the downward pointing path task, i.e., laying out downward pointing paths from source nodes to target nodes in a directed network. We believe that there are two main reasons for the crowds' superior performance. First, biological networks have feedback loops, and the crowd workers are able to carefully position the nodes in a feedback loop with respect to each other in order to optimize the number of downward paths. In contrast, an optimization method such as simulated annealing may get stuck in local optima in the presence of feedback loops or cycles, as seen in the network G3. Second, algorithmically generated suggestions (clues) played an essential role in the players' performance. These suggestions streamlined the broader layout design goal by asking the players to carefully solve a smaller puzzle (or micro-task), e.g., to move the suggested elements in a certain way in the given network so as to increase the total score. In other words, we were able to reduce the overall challenge level by algorithmically breaking down the layout design task into smaller puzzles that benefit from human judgment and are solvable by untrained crowd workers. In contrast, algorithmic approaches such as Dig-Cola and IPSEP-Cola focus on optimizing the number of downward pointing edges instead of paths. We stress that merely counting the number of downward pointing edges [56] is not sufficient to capture the flow of information. For example, a downward pointing path from a receptor (a protein molecule that receives signals from outside a cell) through intermediary proteins to a transcription factor (a protein that controls the cell's response to the signal) might be crucial to understand the underlying mechanism of cell signaling in a biological network. While the Crowd approach was able to outperform the automated methods, the hybrid Crowd-SA100 approach offered faster rate of improvement of the score. Moreover, our results indicated that there is a symbiotic relationship between crowd workers and simulated annealing in the hybrid approaches. On the one hand, we believe that the crowd workers improve the layouts such that it is in a state where simulated annealing can make a non-trivial modification to the layout that increases the score. On the other hand, simulated annealing relieves crowd workers of redundant and trivial tasks by improving the layout whenever possible between crowd worker sessions. Reflecting on design decisions We now discuss some of the decisions we made while planning the scoring system, experiments, and incentives. 6.2.1 Scoring system. One key decision we made was to normalize the per-criterion scores used to compare the quality of different layouts. While the downward-pointing paths criterion is new, quantitative metrics for rest of layout criteria have been defined in prior work [56,57]. However, many of these metrics are on different scales, which makes it challenging to compare layouts of different network sizes. 
Additionally, we cannot use the same priorities (weights) to compute the weighted layout score for different network sizes. Therefore, we normalized each per-criterion score to lie between 0 and 1, with higher scores being more desirable. This strategy allowed us to use the same priorities for networks G1-G3 in our experiments. Minimizing the number of edge crossings is one of the most popular aesthetic criteria. It has been empirically validated as an important measure for good network drawing [57,67,68]. As an alternative to EC score, we considered crosslessness [52,56]. We discarded it as an option since its values were very close to EC(G) for all the networks we used in our experiments. Moreover, we did not see any benefit of using crosslessness over EC(G) to give feedback to the game players. Hence, we decided to use the EC(G) metric. Another design decision we made was to choose a very high priority (400) for the downward pointing paths criterion score (DP) in our experiments. We decided to assign the highest priority to DP because it is our sole domain-oriented criterion, and we wanted it to be preferred over the aesthetic layout criteria in case of conflicts. Moreover, due to our poor approximation of the maximum possible number of downward pointing paths in networks with cycles, the DP scores were prone to be very low for such networks. If we had selected uniform priorities, due to these low DP scores, algorithms such as Simulated Annealing would choose to optimize for aesthetic layout criteria in case of a conflict, despite even trying the priority of four (greater than the priority of rest of the criteria) for the DP criterion. Therefore, we decided to select a very high priority of 400 so that the weighted contribution of downward pointing paths to the overall layout score is generally higher than other per-criterion scores. Worker incentives. In this work, we empirically showed that paid crowd workers recruited from Amazon Mechanical Turk can help researchers by successfully playing the Flud game and achieving high scores that correspond to improved network layouts. We incentivized these workers to submit high-quality layouts by paying them based on their scores [40]. Paid workers offered useful advantages for this research, allowing us to temporarily sidestep the need to build an online community with a critical mass of game players while rapidly prototyping and iterating on the Flud game mechanics in a scaleable, controlled environment. However, prior work in crowdsourcing suggests [9,58] that unpaid workers or volunteers submit higher quality work in comparison to paid workers. Moreover, the volunteers may be more motivated to perform the task if it supports a worthy cause (for example, scientific research) [45]. Given that Flud aims to aid scientific researchers, we expect an intrinsically motivated subset of volunteers would spend more time playing the game and make more improvements in a game session. Moreover, as the players get familiar with the task, we expect high-performing players to develop layout strategies they can reuse across different games (networks), as has been observed in other games like EteRNA and FoldIt. In future work, we are exploring what modifications are necessary to the Flud platform to motivate volunteer gameplay, and consequently, how volunteer player performance compares with paid crowd workers from this study. 
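Stepping back to the scoring system for a moment, the sketch below shows one way to combine normalized per-criterion scores into a single weighted layout score of the kind described above. The criterion names and all priority values other than the 400 used for downward-pointing paths are illustrative placeholders rather than values taken from this work.

```python
def weighted_layout_score(scores, priorities):
    """Combine normalized per-criterion scores (each in [0, 1]) into one weighted total.

    scores and priorities are dicts keyed by criterion name.
    """
    assert set(scores) == set(priorities)
    return sum(priorities[c] * scores[c] for c in scores)

# Hypothetical example: DP has the very high priority of 400, so improving it
# dominates the total score even when its normalized value is small.
example_scores = {"DP": 0.12, "EC": 0.95, "edge_length": 0.80, "node_distribution": 0.70}
example_priorities = {"DP": 400, "EC": 1, "edge_length": 1, "node_distribution": 1}
total = weighted_layout_score(example_scores, example_priorities)
```

Because every normalized score is at most 1, a priority of 400 for the downward-pointing-path criterion keeps its weighted contribution above that of the aesthetic criteria in most conflicts, which is the rationale stated above.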
Currently, Flud only displays the current top score during a game and does not have a leaderboard that shows the total points earned by the top performers. However, prior work shows that leaderboards can play an important role in motivating high performance in games [35]. In this work, we decided not to use a leaderboard since our participants were crowd workers on MTurk who are primarily motivated by compensation rather than by competitive gameplay. Designing a leaderboard to motivate volunteer gameplay in a context where players are essentially collaborating with each other is a promising future direction. Design implications We now discuss some practical implications of our work. 6.3.1 Mixed-initiative systems are effective in layout tasks. Prior work shows that humans are adept at spatial reasoning and strategic thinking [75]. These abilities allow people to explore a search space effectively and to outperform algorithms at tasks that require visual perception. In our work, we found that humans are good at a task that requires observation and spatial arrangement to maintain a certain direction of network flow. Corroborating prior work, we found that combining human intelligence with automated approaches can achieve results better than what either could produce alone. One implication of this finding is that systems can leverage mixed-initiative settings to solve other types of layout tasks that require careful relative arrangement of entities in a two-dimensional space. Examples of such tasks, which manifest in many creative and analytical disciplines, include drawing integrated circuits, mapping social networks, and designing interior spaces in houses and buildings. 6.3.2 Enable preference elicitation to balance multiple criteria. Balancing multiple criteria in a design task can be challenging for novices. Prior work found that novices are limited by their incomplete knowledge and therefore need preference elicitation support from the system [1]. For example, it may be difficult for a non-expert to tell whether one criterion is more important (and to what extent) than another criterion. It may be even more difficult for a user to understand the consequences of choosing to focus on one constraint before another. Therefore, preference elicitation support is required to help the non-expert user decide on a certain course of action. Flud addresses this challenge by transparently showing the per-criterion scores, their priorities, and the relative change (increase/decrease) after each move. These features allow the players to make informed decisions while trying to balance multiple criteria. 6.3.3 Computationally assign modes to break players out of unproductive play strategies. Prior work shows that in human computation games, players can gravitate towards gameplay strategies that are ill-advised in certain situations [75]. We saw similar behavior when we allowed the players to choose the mode instead of fixing a mode at the start of the game. We found that players gravitate towards distance-based modes such as edge length and node distribution, ignoring higher-priority modes like downward pointing paths. We attributed this behavior to the players' inexpertise in choosing the appropriate criterion to focus on at a given stage in the game. We overcame this challenge by passing control to the Flud system and letting it decide the modes for the player. 
In our work, we empirically showed that assigning the modes in the order of their priorities generates better results than assigning them randomly. 6.3.4 Automatic suggestions offer guidance when players are uncertain. Despite having superior intelligence and cognitive skills, players can sometimes be overwhelmed by the complexity of the game. Under such circumstances, Flud automatically provides players with a suggestion via the clue feature. These suggestions guide players by focusing their attention on a smaller, more tractable puzzle inside the overall game. This approach not only reduces the challenge level, but also helps the player make progress when they get stuck. Limitations One of the limitations of our evaluation is that the number of networks is small. Since we sought to evaluate multiple game sequences for each of the proposed approaches, we were limited to a small number of networks. A larger number of networks would allow us to show the applicability of the proposed methods to various types of networks. However, the main goals of this paper were to show that the proposed approaches perform better than the automated methods for networks with large number of cycles and that our results are replicable. Therefore, we used three networks with different numbers of cycles and repeated the game sequence for each approach three times. Another limitation of our Flud implementation is that its network rendering is currently optimized for small networks (< 100 nodes). Prior work shows that people with no expertise in network drawing find it difficult to handle large networks [22]. Therefore, we believe that optimizing the system to support larger networks is not the way forward. Instead, we propose to split the larger networks and ask the Flud players to work on small subnetworks. Yuan et al. [77] proposed a strategy to combine user-generated layouts into a single layout for a large network. Adapting their method to support the downward pointing path criterion for large networks on Flud would be an important future goal. CONCLUSION In this work, we explored the potential of crowd-algorithm collaboration for visualizing biological networks. We describe how we gamified the visualization task and made it more accessible to humans, even if they have no biological or computer science expertise. Our results show that such a collaboration between humans and algorithms leads to higher scoring layouts than either from humans or algorithms alone. Our contributions include (i) a novel mixed-initiative game with a purpose that combines crowdsourcing with computational engines to create high-quality visualizations of biological networks and (ii) experiments that provide empirical evidence of the benefits of mixed-initiative layout schemes compared to algorithmic baselines. APPENDIX 1 HOW WE COMPUTE PER-CRITERION SCORES? To prevent pathological layouts that move nodes indiscriminately far apart from each other, we set the per-criterion scores to zero if any node is moved outside a bounding box of fixed page width w and height h. We chose w = 5000 and h = 6000 in our implementation. The aspect ratio of our bounding box approximately matches that of a page in a scientific journal, i.e., 0.8. (1) Downward pointing paths: We count the number of downward-pointing paths in a layout using a dynamic program that runs in time linear in the number of edges in the network. 
Note that this algorithm does not assign node positions in order to maximize the number of downward-pointing paths, since that task is undertaken by the Flud players. The subgraph composed only of downward-pointing edges is acyclic. Hence, for any node v in the network, we can compute π(v), the number of downward-pointing paths that start at v, using the following recurrence: π(v) = Σ_u (1 + π(u)), where the sum is taken only over those outgoing neighbors u of v for which the edge (v, u) is pointing downward. The base case is a node v that has no downward-pointing edges leaving it: π(v) = 0 in this case. We can compute the total number π(G) of downward-pointing paths in the graph by summing π(v) over all nodes v that have no downward-pointing edges entering them. Note that the maximum possible value of π(G) is the number of paths (all directed paths, not just downward-pointing ones) in G. We count this number ρ(G) using a depth-first search based algorithm. Although this algorithm has a worst-case running time that is exponential in the graph size, it was very efficient in our experiments. Finally, we compute a normalized downward pointing score as DP(G) = π(G)/ρ(G). (2) Non-crossing edge pairs: We count the number χ(G) of edge pairs that do not cross by checking for each pair of edges whether they intersect or not. Since the maximum number of edge crossings possible in G is m(m − 1)/2, where m is the number of edges, we compute the non-crossing edge pairs score as EC(G) = χ(G)/(m(m − 1)/2). (3) Edge length: A trivial but undesirable solution that satisfies this criterion is to place all nodes at the same location. Note that such a layout may have a high score despite the node distribution constraint (described next) because of the per-criterion priorities. Therefore, we require every edge to have a minimum fixed length (300 pixels, in our implementation). We now define the cost c(e) of an edge to be equal to its length l(e) if l(e) ≥ 300, or equal to a large number (say, 10,000) otherwise. We normalize the cost of each edge by the largest possible edge length (the diagonal of the bounding box, of length sqrt(w^2 + h^2), where w and h represent the width and height of the bounding box, respectively) and compute the edge length score of the layout from these normalized edge costs. FINE-TUNING PROCEDURE At the end of the sequence, we fine-tuned the best layout so far to generate the final output layout by using it as the input to simulated annealing with a small value of the initial temperature (T0 = 10). Due to the low initial temperature, the random moves are "local", i.e., a node can move only to a nearby position. Additionally, we did not accept moves that decrease the score. We expected fine-tuning to improve the distance-based components of the layout scores without dramatically affecting the components for downward pointing paths and edge crossings. In their work on incremental improvement of layouts of planar networks, Harel and Sardas [36] concluded that, when applied to a preprocessed network, fine-tuning can yield significant improvements over simulated annealing without any preprocessing. This work serves as an inspiration for us to use fine-tuning as a way to make local moves that improve a player's layout. Plots showing the rate at which each approach improved the overall layout score. Each row corresponds to a network and shows the time taken by different approaches to increase the total layout score. The x-axis represents the time taken in hours to reach a particular score and the y-axis represents the score. 
The circle and square markers correspond to crowd workers and simulated annealing respectively.
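To make the appendix's path-counting procedure concrete, here is a minimal sketch of the downward-pointing-path score DP(G). The data representation, the screen-coordinate convention for "downward" (larger y means lower on screen), and the assumption that the whole network is acyclic when counting the total number of paths ρ(G) are our simplifying choices; the paper uses a depth-first enumeration to handle networks with cycles.

```python
def dp_score(nodes, edges, pos):
    """Normalized downward-pointing-path score DP(G) = pi(G) / rho(G).

    nodes: collection of node ids
    edges: list of directed (u, v) pairs
    pos:   dict node -> (x, y); edge (u, v) is treated as downward when pos[v][1] > pos[u][1]
    """
    out = {n: [] for n in nodes}
    down_out = {n: [] for n in nodes}
    down_in = {n: 0 for n in nodes}
    for u, v in edges:
        out[u].append(v)
        if pos[v][1] > pos[u][1]:                 # downward-pointing edge
            down_out[u].append(v)
            down_in[v] += 1

    # pi(v): number of downward-pointing paths starting at v.
    # The downward subgraph is acyclic, so this memoized recursion terminates.
    pi_cache = {}
    def pi(v):
        if v not in pi_cache:
            pi_cache[v] = sum(1 + pi(u) for u in down_out[v])
        return pi_cache[v]
    pi_g = sum(pi(v) for v in nodes if down_in[v] == 0)

    # rho(G): total number of directed paths; this simple recursion assumes the
    # network itself is acyclic (networks with cycles need path enumeration instead).
    rho_cache = {}
    def rho(v):
        if v not in rho_cache:
            rho_cache[v] = sum(1 + rho(u) for u in out[v])
        return rho_cache[v]
    rho_g = sum(rho(v) for v in nodes)

    return pi_g / rho_g if rho_g else 0.0
```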
2019-04-20T19:58:13.582Z
2019-08-20T00:00:00.000
{ "year": 2019, "sha1": "cf7099f528d7186379001da009db8b8bf231f00d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1908.07471", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "cf7099f528d7186379001da009db8b8bf231f00d", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
1673584
pes2o/s2orc
v3-fos-license
A new model for self-organized dynamics and its flocking behavior We introduce a model for self-organized dynamics which, we argue, addresses several drawbacks of the celebrated Cucker-Smale (C-S) model. The proposed model does not only take into account the distance between agents, but instead, the influence between agents is scaled in term of their relative distance. Consequently, our model does not involve any explicit dependence on the number of agents; only their geometry in phase space is taken into account. The use of relative distances destroys the symmetry property of the original C-S model, which was the key for the various recent studies of C-S flocking behavior. To this end, we introduce here a new framework to analyze the phenomenon of flocking for a rather general class of dynamical systems, which covers systems with non-symmetric influence matrices. In particular, we analyze the flocking behavior of the proposed model as well as other strongly asymmetric models with"leaders". The methodology presented in this paper, based on the notion of active sets, carries over from the particle to kinetic and hydrodynamic descriptions. In particular, we discuss the hydrodynamic formulation of our proposed model, and prove its unconditional flocking for slowly decaying influence functions. Introduction The modeling of self-organized systems such as a flock of birds, a swarm of bacteria or a school of fish, [1,4,5,12,19,20,21,26], has brought new mathematical challenges. One of the many questions addressed concerns the emergent behavior in these systems and in particular, the emergence of "flocking behavior". Many models have been introduced to appraise the emergent behavior of self-organized systems [2,3,7,13,17,22,25,27]. The starting point for our discussion is the pioneering work of Cucker-Smale, [8,9], which led to many subsequent studies [3,6,14,15,16,23]. The C-S model describes how agents interact in order to align with their neighbors. It relies on a simple rule which goes back to [22]: the closer two individuals are, the more they tend to align with each other (long range cohesion and short range repulsion are ignored). The motion of each agent "i" is described by two quantities: its position, x i ∈ R d , and its velocity, v i ∈ R d . The evolution of each agent is then governed by the following dynamical system, Here, α is a positive constant and φ ij quantifies the pairwise influence of agent "j" on the alignment of agent "i", as a function of their distance, The so-called influence function, φ(·), is a strictly positive decreasing function which, by rescaling α if necessary, is normalized so that φ(0) = 1. A prototype example for such an influence function is given by φ(r) = (1 + r) −s , s > 0. Observe that the C-S model (1.1) is symmetric in the sense that the coefficients matrix φ ij is, namely, agents "i" and "j" have the same influence on the alignment of each other, The symmetry in the C-S model is the cornerstone for studying the long time behavior of (1.1). Indeed, symmetry implies that the total momentum in the C-S model is conserved, Moreover, the symmetry of (1.2) implies that the C-S system is dissipative, Consequently, (1.3) yields the large time behavior, x i (t) ≈ vt, and hence min ij φ ij (t) > ∼ φ(|v|t). 
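Written out explicitly, the C-S dynamics (1.1) and the two symmetry-based identities (1.3) used in the argument above take the standard form (a restatement, with v-bar denoting the mean velocity):

```latex
\dot{x}_i = v_i, \qquad
\dot{v}_i = \frac{\alpha}{N}\sum_{j=1}^{N}\phi\bigl(|x_j - x_i|\bigr)\,(v_j - v_i),
\qquad i = 1,\dots,N,

\frac{d}{dt}\,\bar v = 0, \quad \bar v := \frac{1}{N}\sum_{i=1}^{N} v_i,
\qquad
\frac{d}{dt}\sum_{i=1}^{N} |v_i|^2
  = -\frac{\alpha}{N}\sum_{i,j}\phi_{ij}\,|v_j - v_i|^2 \;\le\; 0 .
```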
This, in turn, implies that the C-S dynamics converges to the bulk mean velocity, provided the long-range influence between agents, φ(|x j − x i |), decays sufficiently slow in the sense that φ(·) has a diverging tail, We conclude that the C-S model with a slowly decaying influence function (1.5), has an unconditional convergence to a so-called flocking dynamics, in the sense that (i) the diameter, max i,j |x i (t) − x j (t)|, remains uniformly bounded, thus defining the domain of the "flock"; and (ii) all agents of this flock will approach the same velocity -the emerging "flocking velocity". , v i (t)} i=1,...,N be a given particle system, and let d X (t) and d V (t) denote its diameters in position and velocity phase spaces, The system {x i (t), v i (t)} i=1,...,N is said to converge to a flock if the following two conditions hold, uniformly in N , holds for all initial data, {x i (0), v i (0)} i=1,...,N , it is referred to as unconditional flocking, e.g., [6,8,14,15,23]. In contrast, conditional flocking occurs when (1.7) is limited to a certain class of initial configurations. The flocking behavior of the C-S model derived in [15] was based on the ℓ 2 -based arguments outlined in (1.3). Other approaches, based on spectral analysis, ℓ 1 -and ℓ ∞ -based estimates were used in [6,8,14] to derive C-S flocking with a (refined version of) slowly decaying influence function (1.5). Though the derivations are different, they all require the symmetry of the C-S influence matrix, φ ij . Despite the elegance of the results regarding its flocking behavior, the description of selforganized dynamics by the C-S model suffers from several drawbacks. We mention in this context the normalization of C-S model in (1.1a) by the total number of agents, N , which is shown, in section 2.1 below, to be inadequate for far-from-equilibrium scenarios. The first main objective of this work is to introduce a new model for self-organized dynamics which, we argue, will address several drawbacks of the C-S model. Indeed, the model introduced in section 2.2 below, does not just take into account the distance between agents, but instead, the influence two agents exert on each other is scaled in term of their relative distances. As a consequence, the proposed model does not involve any explicit dependence on the number of agents -just their geometry in phase space is taken into account. It lacks, however, the symmetry property of the original C-S model, (1.2). This brings us to the second main objective of this work: in section 3 we develop a new framework to analyze the phenomenon of flocking for a rather general class of dynamical systems of the form, which allows for non-symmetric influence matrices, a ij = a ji . Here we utilize the concept of active sets, which enables us to define the notion of a neighborhood of an agent; this quantifies the "neighboring" agents in terms of their level of influence, rather than the usual Euclidean distance. The cornerstone of our study of flocking behavior, presented in section 3.1, is based on a key algebraic lemma, interesting for its own sake, which bounds the maximal action of antisymmetric matrices on active sets. Accordingly, the main result summarized in theorem 3.4, quantifies the dynamics of the diameters, d X (t) and d V (t), in terms of the global active set associated with the model. 
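In symbols, and consistent with the verbal definition above, the diameters and the flocking conditions (1.7) read:

```latex
d_X(t) := \max_{1\le i,j\le N}\,|x_i(t)-x_j(t)|, \qquad
d_V(t) := \max_{1\le i,j\le N}\,|v_i(t)-v_j(t)|,

\sup_{0\le t<\infty} d_X(t) \;<\; \infty, \qquad
d_V(t) \;\longrightarrow\; 0 \quad \text{as } t\to\infty .
```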
We conclude, in section 4, that the dynamics of our proposed model will experience unconditional flocking provided the influence function φ decays sufficiently slowly such that, This is slightly more restrictive than the condition for flocking in the symmetric case of C-S model, (1.5). Another fundamental difference between the flocking behavior of these two models is pointed out in remark 4.2 below: unlike the C-S flocking to the initial bulk velocity v(0) in (1.4), the asymptotic flocking velocity of our proposed model is not necessarily encoded in the initial configuration as an invariant of the dynamics, but it is emerging through the flocking dynamics of the model. The methodology developed in this work is not limited to the new model, whose flocking behavior is analyzed in section 4.1. In section 4.2, we use the concept of active sets to study the flocking behavior of models with a "leader". Such models are strongly asymmetric, since they assume that some individuals are more influential than the others. Finally, in section 5 and, respectively, section 6, we pass from the particle to kinetic and, respectively, hydrodynamic descriptions of the proposed model. The latter amounts to the usual statements of conservation of mass, ρ, and balance of momentum, ρu, We extend our methodology of active sets to study the flocking behavior in these contexts of mesoscopic and macroscopic scales. In particular, we prove the unconditional flocking behavior of (1.9) with a slowly decaying influence function, φ, such that (1.8) holds, sup x,y∈Supp(ρ(·,t)) |u(t, x) − u(t, y)| t→∞ −→ 0. A model for self-organized dynamics In this section, we introduce the new model that will be the core of this work. This model is motivated by some drawbacks of the C-S model. 2.1. Drawbacks of the C-S model. Originally, the C-S model was introduced in [8] to model a finite number of agents. The normalization pre-factor 1/N in (1.1a) was added later in Ha and Tadmor, [15], in order to study the "mean-field" limit as the number of agents N becomes very large. This modification, however, has a drawback in the modeling: the motion of an agent is modified by the total number of agents even if its dynamics is only influenced by essentially a few nearby agents. To better explain this problem, we sketch a particular scenario shown in figure 1. Assume that there is a small group of N 1 agents, G 1 , at a large distance from a large group of N 2 agents, G 2 ; by assumption, we have N 1 << N 2 . If the distance between the two groups is large enough, we have, In this situation, the C-S dynamics of every agent "i" in group G 1 reads, Therefore, since there are only N 1 "essentially" active neighbors of "i", yet we average over the much larger set of N 1 + N 2 ≫ N 1 agents, we would have dv i /dt ≈ 0. Thus, the presence of a large group of agents G 2 in the horizon of G 1 , will almost halt the dynamics of G 1 . Figure 1. A small group of birds G 1 at a large distance from a larger group G 2 (2.1a). Due to the normalization 1/N in the C-S model (1.1a), the group G 1 will almost stop interacting. 2.2. A model with non-homogeneous phase space. We propose the following dynamical system to describe the motion of agents Here, α is a positive constant and φ(·) is the influence function. The main feature here is that the influence agent "j" has on the alignment of agent "i", is weighted by the total influence, N k=1 φ ik , exerted on agent "i". 
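As a concrete illustration of the two weightings, the sketch below simulates the alignment dynamics with either the C-S weights phi_ij/N or the relative-influence weights phi_ij / sum_k phi_ik of the proposed model (2.2), using the prototype influence function phi(r) = (1 + r)^(-s). The function names, the explicit Euler time-stepping, and all parameter values are illustrative choices of ours, not part of the original model.

```python
import numpy as np

def influence(r, s=0.5):
    # Prototype influence function from the text: phi(r) = (1 + r)^(-s), phi(0) = 1.
    return (1.0 + r) ** (-s)

def step(x, v, dt=0.05, alpha=1.0, normalization="relative"):
    """One explicit Euler step of dv_i/dt = alpha * sum_j a_ij (v_j - v_i).

    normalization="cs":       Cucker-Smale weights   a_ij = phi_ij / N
    normalization="relative": proposed weights       a_ij = phi_ij / sum_k phi_ik
    """
    diff = x[:, None, :] - x[None, :, :]
    phi = influence(np.linalg.norm(diff, axis=-1))      # phi_ij, with phi_ii = 1
    if normalization == "cs":
        a = phi / len(x)
    else:
        a = phi / phi.sum(axis=1, keepdims=True)         # row-stochastic, generally asymmetric
    dv = alpha * (a @ v - a.sum(axis=1, keepdims=True) * v)
    return x + dt * v, v + dt * dv

# Toy version of the two-group scenario of Figure 1: a small group far away from a large one.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(1000, 1, (50, 2))])
v = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(0, 1, (50, 2))])
for _ in range(200):
    x, v = step(x, v, normalization="relative")
```

With the relative-influence weights, the small far-away group keeps aligning internally, whereas with the 1/N normalization its dynamics nearly stalls, mirroring the two-group scenario discussed above.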
In the case where all agents are clustered around the same distance, i.e., φ ij ≈ φ 0 , then the model (2.2) amounts to C-S dynamics, But unlike the C-S model, the space modeled by (2.2) need not be homogeneous. In particular, it better captures strongly non-homogeneous scenarios such as those depicted in 2.1: the motion of an agent "i" in the smaller group G 1 will be, to a good approximation, dominated by the agents in group G 1 , Here, φ 0 is the coefficient of interaction inside the nearby group G 1 , i.e., φ ij ≈ φ 0 for i, j ∈ G 1 , whereas the agents in the "remote" group G 2 , will only have a negligible influence, The normalization of pairwise interaction between agents in terms of relative influence has the consequence of loss of symmetry: the model (2.2) can be written as, where the coefficients a ij , given by, , lack the symmetry property, a ij = a ji . Two more examples of models with asymmetric influence matrices will be discussed below. The flocking behavior of a model with leaders, in which agents follow one or more "influential" agents and hence lack symmetry, is analyzed in section 4 below. In section 7 we introduce a model with vision in which agents are aligned with those agents ahead of them, as another prototypical example for self-organized dynamics which lacks symmetry, and we comment on the difficulties in its flocking analysis. Tools for studying flocking behavior of such asymmetric models are outlined in the next section. New tools to study flocking We want to study the long time behavior of the proposed model (2.2). The lack of symmetry, however, breaks down the nice properties of conservation of momentum, (1.3a), and energy dissipation, (1.3b), we had with the C-S model. The main tool for studying the C-S flocking was the variance, ( |v i − v| p ) 1/p , in either one of its ℓ p -versions, p = 1, 2 or p = ∞. But since the momentum is not conserved in the proposed model (2.2), the variance is no longer a useful quantity to look at; indeed, it is not even a priori clear what should be the "bulk" velocity, v, to measure such a variance. In this section, we discuss the tools to study the flocking behavior for a rather general class of dynamical systems of the form, Here, α is a positive constant, and a ij > 0 quantifies the pairwise influence of agent "j" on the alignment of agent "i", through possible dependence on the state variables, {x k , v k } k . By rescaling α if necessary, we may assume without loss of generality that the a ij 's are normalized so that Setting a ii := 1 − j =i a ij , we can rewrite (3.1) in the form where the average velocity, v i , is given by a convex combination of the velocities surrounding agent "i", We should emphasize that there is no requirement of symmetry, allowing a ij = a ji . This setup includes, in particular, the model for self-organized dynamics proposed in (2.2), with asymmetric coefficients a ij = φ ij / k φ ik . In order to study the flocking behavior of (3.1), we quantify in section 3.2, the decay of the diameter, d V (t) using the notion of active sets. The relevance of this concept of active sets is motivated by a key lemma on the maximal action of antisymmetric matrices outlined in section 3.1. This, in turn, leads to the main estimate of theorem 3.4, which governs the evolution of d X (t) and d V (t). 3.1. Maximal action of antisymmetric matrices. We begin our discussion with the following key lemma. 
Let u, w ∈ R N be two given real vectors with positive entries, u i , w i ≥ 0, and let U , W denote their respective sum, U = i u i and W = j w j . Fix θ > 0 and let λ(θ) denote the number of "active entries" of u and w at level θ, in the sense that, Then, for every θ > 0, we have Remark 3.2. Lemma 3.1 tells us that the maximal action of S on u, w, does not exceed which improves the obvious upper-bound, | Su, w | ≤ M U W . Proof. Using the antisymmetry of S, we find and since S is bounded by M , we obtain the inequality, By assumption, there are at least λ(θ) active entries at level θ which satisfy both inequalities, Therefore, by restricting the sum in (3.4) only to the pairs of these active entries we find and the desired inequality (3.3) follows. 3.2. Active sets and the decay of diameters. The concept of an active set aims to determine a neighborhood of one or more agents in (3.1) based on the so-called influence matrix, {a ij }, rather than the usual Euclidean distance. The following definition, which applies to arbitrary matrices, is formulated in the language of influence matrices. , is the set of agents which influence "p" more than θ, The global active set, Λ(θ), is the intersection of all the active sets at that level, This notion of active set, Λ p (θ), defines a "neighborhood" for agent "p", and can be generalized to more than just one agent. For example, is the set of all agents whose influence on both, "p" and "q", is larger than θ, see figure 2. The number of agents in an active set Λ I (θ) is denoted by λ I (θ), e.g. λ pq (θ) = |Λ pq (θ)|. The numbers {λ pq (θ)} pq are difficult to compute for general θ's: one needs to count the number of pairs of agents in the underlying graph G, which stay connected above level θ, One simple case we can count, however, occurs when θ takes the minimal value, θ = min ij a ij . Then, the active sets Λ p (θ) includes all the agents, Λ p (θ) θ=min ij a ij = {1, . . . , N }, and since this applies for every "p", then Λ pq (θ) and the global active set, Λ(θ), include all agents, Armed with the notion of active set and with the key lemma 3.1 on maximal action of antisymmetric matrices, we can now state our main result, measuring the decay of the diameters d X (t) and d V (t) in the dynamical system (3.2). , v i (t)} i be a solution of the dynamical system (3.2). Fix an arbitrary θ > 0 and let λ pq (θ) be the number of agents in the active sets, Λ pq (θ), associated with the influence matrix of (3.2). Then the diameters of this solution, d X (t) and d V (t), satisfy, Since Λ(θ) ⊂ Λ p,q (θ) then λ pq (θ) ≥ λ(θ) and (3.10b) yields the following global version of the theorem above. Theorem 3.5. Fix an arbitrary θ > 0 and let λ(θ) be the number of agents in the global active set, Λ(θ), associated with (3.2). Then the diameters of its solution, d X (t) and d V (t), satisfy, Proof of theorem 3.4. We fix our attention to two trajectories x p (t) and x q (t), where p and q will be determined later. Their relative distance satisfies, Thus, (3.10a) holds. Next, we turn to study the corresponding relative distance in velocity phase space, recall that v p and v p are the average velocities defined in (3.1b). Given that ℓ a kℓ ≡ 1, the difference of these averages is given by, Inserting this into (3.12), we find, To upper-bound the first quantity on the right, we use the lemma 3.1 with u i = a pi , w i = a qi and the antisymmetric matrix Here, λ pq (θ) is the number of agents in the active set Λ pq (θ), λ pq (θ) = |{j a pj ≥ θ and a qj ≥ θ}|. 
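For reference, the count of active entries and the bound of lemma 3.1 that is being invoked at this step can be written as follows; this display is our reconstruction from the proof sketch and from the way the lemma is applied, so the exact constants should be read with that caveat:

```latex
\lambda(\theta) := \#\bigl\{\, i \;:\; u_i \ge \theta \ \text{and}\ w_i \ge \theta \,\bigr\},
\qquad
\langle S u, w\rangle \;\le\; M\bigl(U W - \theta^{2}\,\lambda(\theta)^{2}\bigr),
```

where S is antisymmetric with |S_ij| ≤ M, U = sum_i u_i and W = sum_j w_j; in the application above, u_i = a_pi, w_i = a_qi, U = W = 1, and lambda(theta) is replaced by lambda_pq(theta).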
Therefore, the relative velocity v p − v q in (3.13) satisfies, In particular, if we choose p and q such that |v p (t) − v q (t)| = d V (t), the last inequality reads, and the inequality (3.10b) follows. Indeed, by convexity, v i ∈ Ω(t) for any i, and consequently, if v i is at the frontier of Ω, then the vector (v i − v i ) points to the interior of Ω at v i , see figure 3. More precisely, if n denotes the outward-pointing normal to Ω at v i , thenv i · n = (v i − v i ) · n ≤ 0 Therefore, the frontier of Ω(t) is a "fence" [18] for the vectors v i (t) and The bound of d V (t) implies that the spatial diameter of the flock, d X (t) grows at most linearly in time. Indeed, for agents "p" and "q" which realize the maximal distance, d Theorem 3.4 and 3.5 will be used to prove the flocking behavior of general systems of the type (3.2). The key point will be to make the judicious choice for the level θ = θ(d X (t)), to enforce the convergence d V (t) → 0 through the inequalities (3.10), (3.11). In this context we are led to consider dynamical inequalities of the form, The long time behavior of such systems is dictated by the properties of ψ(·) > 0. Then the underlying dynamical system convergences to a flock in the sense that (1.7) holds, In particular, if ψ(·) has a diverging tail, then there is unconditional flocking. Proof. We apply the energy method introduced by Ha and Liu [14]. Consider the "energy functional", E = E(t), The energy E is decreasing along the trajectory (d X , d V ), and we deduce that, thus improving the linear growth noted in remark 3.6. The uniform bound on d X (t) in (3.21) implies that the velocity phase space of this flock shrinks as the diameter d V (t) converges to zero. Indeed, the inequality (4.5b) yields, and Gronwall's inequality proves that d V (t) converges exponentially fast to zero. Flocking for the proposed model In this section we prove that the model (2.2) converges to a flock under the assumption that the pairwise influence, φ(|x j − x i |), decays slowly enough so that φ(·), has a non square-integrable tail, (1.8), ∞ φ 2 (r)dr = ∞. In section 4.2, we show that the same result carries over the dynamics of strongly asymmetric models with leader(s). We will conclude, in section 4.3, by revisiting the flocking behavior of the C-S model. 4.1. Flocking of the proposed model. Theorem 4.1. Consider the model for self-organized dynamics (2.2) and assume that its influence function φ satisfies, Then, its solution, {(x i (t), v i (t))} i , converges to a flock in the sense that (1.7) holds. In particular, there is unconditional flocking if φ 2 has a diverging tail, Proof. Since φ(d X ) ≤ φ ij ≤ 1, the alignment coefficients a ij in (3.1) are lower-bounded by We now set θ to be this lower-bound of the a ij 's, so that the global active set at that level, Λ(θ(t)), include all agents. Thus, as noted already in (3.9), λ(θ) = N , and the global version of our main theorem 3.5 yields, The result follows from lemma 3.7 with ψ(r) = φ 2 (r). In contrast to the C-S model, however, our model does not seem to posses any invariants which will enable to relate v ∞ to the initial condition, beyond the fact noted in remark 3.6, that v ∞ belongs to the convex hull Ω(0). We can therefore talk about the emergence in the new model, in the sense that the asymptotic velocity of its flock, v ∞ , is encoded in the dynamics of the system and not just as an invariant of its initial configuration. Whether v ∞ can be computed from the initial configuration remains an open question. 4.2. 
Flocking with a leader. In this section, we discuss the dynamical systems with (one or more) leaders. Definition 4.3. Consider the dynamical system (3.1). An agent "p" is a leader if there exists β > 0, independent of N , such that: In other words, an agent "p" is viewed as a leader if its influence on aligning all other agents "i", is decreasing with distance, but otherwise, is independent of the number of agents, N . We illustrate this definition, see figure 4, with the following dynamical system: a leader "p" moves with a constant velocity and influences the rest of the agents with a non-vanishing amplitude 0 < β < 1, He influences every other agents (sheep) more than a certain quantity βφ(|x p − x i |). We note that there could be one or more leaders. The presence of leader(s) in the dynamical system (3.1) is of course typical to asymmetric systems. We use the approach outlined above to prove that the existence of one (or more) leaders, enforces flocking. , v i (t)} the solution of the dynamical system (3.1) and assume it has one or more leaders in the sense that (4.2) holds. Then {x i (t), v i (t)} admits a conditional and respectively, unconditional flocking provided (4.1a) and respectively, (4.1b) hold. Remark 4.5. If the leader p is not influenced by the other agents, then one deduces that the asymptotic velocity of the flock v ∞ will be the velocity of the leader v p . But we emphasize that in the general case of having more than one leader the asymptotic velocity of the flock emerges through the dynamics of (3.1), and as with the model (2.2), it may not be encoded solely in the initial configuration. 4.3. Flocking of the C-S model revisited. We close this section by showing how the flocking behavior of the C-S model (1.1) can be studied using the framework outlined above. By our assumption, the scaling of the influence function φ(·) ≤ 1, we have Hence, we can recast the C-S model (1.1a) in the form (3.2) In this case, a ij ≥ φ(d X (t))/N for j = i. Moreover, the same lower-bound applies for j = i, because of the normalization φ ≤ 1: Therefore, if we now set θ to be this lower-bound of the a ij 's, then Λ p (θ(t)), and consequently, Λ(θ), include all agents, λ(θ) = N , consult (3.9). Theorem 3.5 yields, Now, apply lemma 3.7 with ψ(r) = φ 2 (r) to conclude the following. Corollary 4.6. Consider the C-S model (1.1) with an influence function, φ, that has a non square-integrable tail, (1.8). Then the C-S solution, {(x i (t), v i (t))} i , converges, unconditionally, to a flock in the sense that (1.7) holds. In particular, since the total momentum is conserved, (1.3a), Comparing the quadratic divergence (4.1b) vs. the sharp condition for C-S flocking, (1.8), we observe that the unconditional C-S flocking we derive in this case requires a more stringent condition of the influence function. This is due to the fact that the proposed approach for analyzing flocking is more versatile, being independent whether the underlying model is symmetric or not. From particle to mesoscopic description We would like to study the model (2.2) when the number of particles N becomes large. With this aim, it is more convenient to study the kinetic equation associated with the dynamical system (2.2). The purpose of the section is precisely to derive formally such equation. We introduce the so-called empirical distribution [24] of particles f N (t, x, v), where δ x ⊗ δ y is the usual Dirac mass on the phase space R d × R d . 
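Writing the empirical distribution out explicitly (the 1/N normalization is the usual convention of [24] and is assumed here):

```latex
f^{N}(t, x, v) \;=\; \frac{1}{N}\sum_{i=1}^{N} \delta_{x_i(t)}(x)\,\otimes\,\delta_{v_i(t)}(v).
```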
Integrating the empirical distribution f N in the velocity variable v gives the density distribution of particles ρ N (t, x) in space, Using the distributions f N and ρ N , the particle system (2.2) reads, Therefore, we can easily check that the empirical distribution f N satisfies (weakly) the Liouville equation, where the vector field F [f ] and the total mass ρ are given by, To study the limit as the number of particles N approaches infinity, we first assume that the initial condition f N 0 (x, v) converges to a smooth function f 0 (x, v) as N → +∞. Then it is natural to expect that f N (t, x, v) convergences to the solution f (t, x, v) of the kinetic equation, However, the passage from the discrete system (5.3) to the kinetic formulation (5.4) is more delicate than in the argument for the C-S model [14,15]: here, the vector field F [f ] may not posses enough Lipschitz regularity due to the normalizing factor at the denominator of (5.4b). But since this question does not play a central in the scope of this paper, we leave the study of existence and uniqueness of solution of the kinetic equation (5.4) for a future work, and we turn our focus to the hydrodynamic model. Hydrodynamics of the proposed model and its flocking behavior Having the kinetic description associated with the particle dynamics (2.2), we can derive the macroscopic limit of the dynamics [11,10,15]. We also extend our method developed in section 3 to prove the flocking behavior of the model in the macroscopic case. To this end we extend the notion of active sets from the discrete setup the continuum, and the corresponding key algebraic lemma 3.1 for skew-symmetric integral operators. 6.1. Macroscopic system. To derive the macroscopic model of the particle system (2.2), we just integrate the kinetic equation (5.4a) in the phase space. With this aim, we first define the macroscopic velocity u and the pressure term P, where ρ is the spatial density defined previously (5.4b). Then integrating the kinetic equation (5.4a) against the first moments (1, v) yields the system (see also [15]), where the source term S(u) is given by, (recall the notation of (1.9b), w = φ * (wρ)), The system (6.1) is not closed since the equation for ρu (6.1b) does depend on the third moment of f which is encoded in the pressure term P. In order to close the system, we neglect the pressure, setting P = 0 (in other words, we assume a monophase distribution, ). Under this assumption, (6.1) is reduced to the closed system (1.9), We want to study the flocking behavior of general systems of the form (consult figure 5), The expression on the right reflects the tendency of agents with velocity u to relax to the local average velocity, u(x), dictated by the influence function a(x, y), (6.3c) u(x) = y a(x, y)ρ(y)u(y) dy, y a(x, y)ρ(y) dy = 1. The class of equations (6.3) includes, in particular, the hydrodynamic description of our self-organized dynamics model, (6.2), with ËÙÔÔ(ρ) Figure 5. The quantity a(x, y) (6.4) is the relative influence of the particles in y on the particles in x. We begin with the definition of a flock in the macroscopic case. 6.2. Active sets at the macroscopic scale. To prove that the solution (ρ, u) converges to a flock, we need to show that the convex hull in velocity space, shrinks to a single point, as its diameter, d V (t), converges to zero. To this end, we employ the notion of active sets which is extended to the present context of macroscopic framework. 
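Before turning to the active sets, it may help to record the pressureless macroscopic system (1.9)/(6.2) explicitly. The form of the alignment source term below is our reconstruction, consistent with the particle model and with the stated convolution notation w-tilde = phi * (w rho); it should be read as a sketch rather than a verbatim copy of (6.2):

```latex
\partial_t \rho + \nabla_x\!\cdot(\rho u) = 0,
\qquad
\partial_t (\rho u) + \nabla_x\!\cdot(\rho u \otimes u)
  \;=\; \alpha\,\rho\,\left(\frac{\phi * (\rho u)}{\phi * \rho} \;-\; u\right).
```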
We begin by revisiting our definition of active set using the influence function a(x, y) (6.4). For every x in the support of ρ, we define the active set, Λ x (θ), as The global active set Λ(θ) is the intersection of all the active set Λ x (θ): As before, we let λ I (θ) denote the density of agents in the corresponding active set; thus We would like to extend the key lemma 3.1 from the discrete case of agents to the macroscopic case of the continuum. This is formulated in terms of the maximal action of integral operators which involve antisymmetric kernels, k(x, y). Then, for every positive number θ, we have: Here, λ(θ) is the density of active agents at level θ for u and w, Proof. To simplify, we denote S := x,y k(x, y) u(x)w(y) ρ(x)ρ(y) dxdy. The anti-symmetry of k enables us to rewrite, The bound on k and the identity |a − b| ≡ a + b − 2 min(a, b) yields, Using the notations, we obtain, x,y min u(x)w(y), u(y)w(x) ρ(x)ρ(y) dxdy. We now restrict the domain of integration on the right hand side to (x, y) ∈ Λ θ × Λ θ , where the lower-bounds of u and w yield, and (6.12) follows. 6.3. Decay of the diameters. The diameters (d X , d V ) also satisfy the same inequality at the macroscopic level. We only need to adapt the proof using the characteristics of the system (6.3). Proposition 6.4. Let (ρ, u) the solution of the dynamical system (6.3). Fix an arbitrary θ and let λ(θ) be the density of agents on the corresponding global active set Λ(θ) associated with this system, (6.10). Then, the diameters d X (t) and d V (t) in (6.5) satisfy, Proof. We fix our attention on two characteristicsẊ(t) = u(t, X) andẎ (t) = u(t, Y ), subject to initial conditions, X(0) = x and Y (0) = y for two points (x, y) in the support of ρ(0). Their relative distance satisfy: Since this inequality is true for every characteristics, (6.13a) follows. We turn to study the relative distance in velocity phase space: using (6.3c) we find, and hence, Using the fact that a(X, ·) has a unit ρ-mass, the difference of averages u(Y ) − u(X) can be expressed as We now appeal to the maximal action lemma, 6.3, with anti-symmetric kernel, k(w, z) = u(w) − u(z), and the positive functions u(w) = a(Y, w) and w(z) = a(X, z): since |u(y) − u(z)| ≤ d V , U = y a(Y, y)ρ(y) dy = 1 and W = y a(X, y)ρ(y) dy = 1, Inserted into (6.14), we end up with Finally, since the support of ρ is compact, we can take the two characteristics Y (t) and X(t) such that at time t we have: d V (t) = |u(Y ) − u(X)|, and the last inequality yields (6.13b), 6.4. Flocking in the hydrodynamic limit. Since the diameters d X and d V satisfy the same system of inequalities at the macroscopic level, (6.13), as in the particle level, (3.10), we immediately deduce that theorem 4.1 is still valid for the macroscopic system (6.3). Theorem 6.5. Let (ρ, u) the solution of the system (6.3). If the influence kernel, φ, decays sufficiently slow, then (ρ, u) converges to a flock in the sense of definition 6.1. Proof. For every x and y in the support of ρ(t, ·), we have, Thus, if we take θ(t) = φ(d X (t))/ρ, every point y in the support of ρ(t, ·) belongs to the global active set Λ(θ). Therefore, for this choice of θ, we have, We deduce that, To conclude, we apply lemma 3.7 with ψ(r) = φ 2 (r). Conclusion There is a large number of models for self-organized dynamics [1,2,4,5,7,13,12,17,19,20,22,21,26,25,27]. In this paper we studied a general class of models for self-organized dynamics which take the form (3.1), We focused our attention on the popular Cucker-Smale model, [8,9]. 
Its dynamics is governed by symmetric interactions, a ij = φ ij /N , involving a decreasing influence function φ ij := φ(|x i − x j |). Here we introduced an improved model where the interactions between agents is governed by the relative distances, a ij = φ ij / k φ ik , which are no longer symmetric. To study the flocking behavior of such asymmetric dynamics, we based our analysis on the amount of influence that agents exert on each other. Using the so-called active sets, we were able to find explicit criteria for the unconditional emergence of a flock. In particular, we derived a sufficient condition for flocking of our proposed model: flocking occurs independent of the initial configuration, when the interaction function φ decays sufficiently slow so that its tail is not square integrable, (1.8). Similar results holds for models with one or more leaders. This is only slightly more restrictive than the characterization of unconditional flocking in the symmetric case, which requires a non-integrable tail of φ, (1.5). In either case, these requirements exclude compactly supported φ's: unconditional flocking is still restricted by the requirement that each agent is influenced by everyone else. A more realistic requirement is to assume that φ is rapidly decaying or that the influence function is cut-off at a finite distance. Here, there are two possible scenarios: (i) conditional flocking, namely, flocking occurs if d V (0) and d X (0) are not too large relative to the rapid decay of φ 2 , d V (0) ≤ ∞ d X (0) φ 2 (r)dr; (ii) a remaining main challenge is to analyze the emergence of flocking in the general case of compactly supported interaction function φ. Clearly, this will have to take into account the connectivity of the underlying graph G, (3.8). We expect that the notion of active sets will be particularly relevant in this context of compactly supported φ's. The main difficulty is counting the number of "connected" agents in the corresponding active sets. As a prototypical example for the difficulties which arise with both -asymmetric models and compactly supported interactions, we consider self-organized dynamics which involves vision, where each agent has a cone of vision, Here, κ(ω i , x j − x i ) determines whether the agent "i", heading in direction ω i := v i /|v i |, "sees" the agent "j": with γ being the radius of the cone of vision (see figure 6). The φ ij 's determine the pairwise alignment within the cone of vision, and can be modeled either after C-S (1.1), or after our proposed model for alignment, (2.2) . In either case, the resulting model (7.16) reads, and it lacks symmetry, a ij = a ji . The loss of symmetry in this example reflects possible configurations in which agent "i" "sees" agent "j" but not the other way around. This example demonstrates a main difficulty in the flocking analysis of local influence functions, namely, counting the number of active agents a ij ≥ θ inside the cone of vision. We leave the flocking analysis of this example to a future work. γ j i Figure 6. Adding a cone of vision in the C-S model (7.16) breaks down the symmetry of the interaction. Here, the agent "i" does not "see" the agent "j" whereas the agent "j" sees the agent "i".
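One simple choice for the vision kernel kappa in (7.16), consistent with the verbal description above, is a sharp cone of half-angle gamma around the heading omega_i; the sharp cutoff (rather than a smooth one) is our illustrative assumption:

```latex
\kappa(\omega_i,\, x_j - x_i) \;=\;
\begin{cases}
1, & \Bigl\langle \omega_i,\ \dfrac{x_j - x_i}{|x_j - x_i|} \Bigr\rangle \;\ge\; \cos\gamma,\\[8pt]
0, & \text{otherwise},
\end{cases}
\qquad \omega_i = \frac{v_i}{|v_i|},
```

so that the resulting weights a_ij, proportional to kappa(omega_i, x_j − x_i) phi_ij, are in general not symmetric.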
2011-07-25T15:01:35.000Z
2011-02-28T00:00:00.000
{ "year": 2011, "sha1": "f2cdf1dee1a68f19e70b80f09c2ba91be20e242b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1102.5575", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c094bf9cdf5b716da00fe45924689eaac733b7f1", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
263581607
pes2o/s2orc
v3-fos-license
Predicting the effect of statins on cancer risk using genetic variants: a Mendelian randomization study in UK Biobank Laboratory studies have suggested oncogenic roles of lipids, as well as anticarcinogenic effects of statins. We here assess the potential effect of statin therapy on cancer risk in Mendelian randomization analyses. We obtained genetic associations with the risk of overall and 22 site-specific cancers for 367,703 individuals in UK Biobank. In total, 75,037 individuals had a cancer event. Variants in the HMGCR gene region, which represent proxies for statin treatment, were associated with overall cancer risk (OR per 1 standard deviation increase in LDL-cholesterol 1.32, 95% CI 1.13-1.53, p=0.0003), but variants in gene regions representing alternative lipid-lowering treatment targets (PCSK9, LDLR, NPC1L1, APOC3, LPL) were not. Genetically-predicted LDL-cholesterol was not associated with overall cancer risk (OR 1.01, 95% CI 0.98-1.05, p=0.50). Our results predict that statins reduce cancer risk, but other lipid-lowering treatments do not. This suggests that statins reduce cancer risk through a cholesterol independent pathway. Statins are inhibitors of 3-hydroxy-3-methyl-glutaryl-coenzyme A reductase (HMGCR), which is the rate-limiting enzyme in the mevalonate pathway, a pathway producing a range of cell signalling molecules with the potential to regulate oncogenesis. This is supported by strong laboratory evidence that statins induce anticarcinogenic effects on cell proliferation and survival in various cell lines [1][2][3][4], and reduce tumour growth in a range of in vivo models [5][6][7][8][9][10]. Furthermore, epidemiological studies have associated pre-diagnostic use of statins with reduced risk of specific cancer types [11][12][13]. However, meta-analyses of cardiovascular-focused randomized controlled trials have shown no effect of statins on cancer 14,15. Conclusions from these trials are limited, as they lack the adequate power and longitudinal follow-up necessary for assessing impact on cancer risk. At present, no clinical trials have been designed to assess the role of statins in primary cancer prevention, and their role in chemoprevention remains uncertain. 
A putative protective effect of statins on cancer development could be through either cholesterol dependent or independent effects [16][17][18][19] .Cholesterol is a key mediator produced by the mevalonate pathway and is essential to cell signalling and membrane structure, with evidence demonstrating the potential to drive oncogenic processes and tumour growth 20,21 . However, the epidemiological relationships between circulating cholesterol and cancer risk remain unclear.Individual observational studies have reported positive 22,23 , inverse [22][23][24][25] and no association [26][27][28][29] between circulating levels of total cholesterol, low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, and triglycerides with risk of overall and site-specific cancers.Different cancer types have distinct underlying pathophysiology, and meta-analyses of observational studies highlight a likely complex relationship which varies according to both lipid fraction 30,31 and cancer type 29,[32][33][34] .Furthermore, cancer can lower cholesterol levels for up to 20 months prior to diagnosis 35 .Thus, the true relationship between lipids and cancer development remains equivocal. Mendelian randomization is an epidemiological approach which assesses associations between genetically-predicted levels of a risk factor and a disease outcome in order to predict the causal effect of the risk factor on an outcome 36 .The use of genetic variants minimizes the influences of reverse causality and confounding factors on estimates.Mendelian randomization studies also have the potential to predict the outcomes of trials for specific therapeutic interventions. A limited number of Mendelian randomization studies have investigated the relationship between HMGCR inhibition and cancer [37][38][39][40][41] , with protective associations observed for prostate cancer 37 , colorectal cancer 38 , breast cancer 39,40 , and ovarian cancer 41 .However, no comprehensive Mendelian randomization investigation has evaluated the predicted impact of HMGCR inhibition or the causal role of specific lipid fractions on the risk of many of the most common site-specific cancers. We here investigate the relationship between HMGCR inhibition and the risk of overall cancer and site-specific cancers using genetic variants in the HMGCR gene region.To understand whether statins may influence cancer risk through lipid-related mechanisms, we also assess the relationship between lipids and cancer risk by polygenic Mendelian randomization analyses using common lipid-associated genetic variants.Additionally, to mimic other lipid-lowering pharmaceutical interventions, gene-specific analyses were performed using variants in or near gene regions targeted by these therapies. Baseline characteristics of the participants in the UK Biobank and numbers of outcomes are provided in Table 1.In total, 75,037 of the participants had a cancer event, of which 48,674 participants had one of the 22 defined site-specific cancers.Power calculations for the various analyses are presented in Supplementary Figure 1 (site-specific cancers) and Supplementary Table 1 (overall cancer).Gene-specific analyses were only well-powered for overall cancer. Polygenic analyses were well-powered to detect moderate effects for overall cancer and for common site-specific cancers, but less well-powered for less common site-specific cancers. 
For site-specific cancers, the HMGCR gene region showed positive associations for five of the six most common cancer sites (breast, prostate, melanoma, lung, and bladder; not for bowel), although none of these results individually reached a conventional level of statistical significance.There was little evidence for associations in site-specific analyses for other lipidlowering drug targets. For site-specific cancers, there were positive associations between risk of bowel cancer and genetically-predicted levels of total cholesterol (OR 1.18, 95% CI 1.06-1.32,p=0.002) and LDL-cholesterol (OR 1.16, CI 1.04-1.29,p=0.006).Results were attenuated in robust methods (Supplementary Table 3).No other associations were statistically significant after accounting for multiple testing. Our comprehensive Mendelian randomization investigation shows a positive association between overall cancer and variants in the HMGCR gene region which can be considered as proxies for statin therapy.However, gene regions which can be considered as proxies for alternative lipid-lowering therapies were not associated with cancer risk.Furthermore, there was little consistent evidence of an association between genetically-predicted lipid fractions and cancer outcomes in polygenic analyses either for overall cancer or for any site-specific cancer.Taken together, our findings predict that statins lower the risk of cancer, and provide important mechanistic evidence that this occurs through mechanisms other than lipid lowering. We found that genetic variants in the HMGCR region, serving as proxies for targets of statin therapy, were associated with a 26% decrease in risk of overall cancer per standard deviation (around 39 mg/dL or 1.0 mmol/L) reduction in genetically-predicted LDL-cholesterol.For coronary artery disease, the short-term impact of statins in trials is around one-third of the Mendelian randomization estimate (which represents the impact of lifelong reduced exposure) 42 .This suggests that any reduction in cancer risk from statins is likely to be modest. While our results should be seen as tentative until trials have demonstrated benefit, associations of HMGCR variants show broad concordance with statin therapy for many continuous phenotypes 43 , and suggest that statins reduce risk of coronary artery disease 44 , increase risk of Type 2 diabetes 45 , and increase risk of intracerebral haemorrhage 46,47 , as confirmed in clinical trials [48][49][50] . 
The notion that statins could be used for chemoprevention is longstanding. Nobel Prize winners Goldstein and Brown proposed that this occurs through non-lipid-lowering mechanisms 16. We provide mechanistic evidence using human genetics supporting this theory. Our results suggest that, with respect to genetically predicted HMGCR inhibition and cancer risk, LDL-cholesterol is simply an accessible biomarker of HMGCR inhibition, but the true causal pathway is likely via another molecule whose levels are correlated with its LDL-cholesterol lowering effect. HMGCR catalyses the rate-limiting step of the mevalonate pathway, a pathway with one arm leading to the end point of cholesterol synthesis and another arm leading to isoprenoid synthesis. Measuring levels of intra-cellular isoprenoids is challenging, but these molecules are implicated in cancer via their role as major post-translational modifiers of key oncogenic proteins 17. In particular, mevalonate and other isoprenoid metabolites are required for the prenylation and functioning of the Ras and Rho GTPases, which are oncoproteins involved in important cellular processes including apoptosis, phagocytosis, vascular trafficking, cell proliferation, transmigration, cytoskeleton organisation, and recruitment of inflammatory cells.

Statin inhibition of these metabolites has demonstrated anti-oncological effects in vivo and in vitro 18, including the promotion of tumour cell death and apoptosis 51-54, inhibition of angiogenesis 55, and reduction of tumour cell invasion and metastasis 56,57. Other potential statin-mediated mechanisms of tumour suppression include the reduction of systemic inflammatory mediators like interleukin 1-beta and tumour necrosis factor 55,58, and epigenetic regulation through inhibiting HMGCR-mediated deacetylation 59, which contributes to colorectal cancer in mouse models 60. Thus, our findings based on large-scale human genetic data are consistent with pre-clinical studies on statins in cancer, which have repeatedly argued for a cholesterol-independent mechanism for statin effects on cancer.

Our investigation has many strengths, but also limitations. The large sample size of over 360,000 participants and the broad set of outcomes analysed render this the most comprehensive Mendelian randomization analysis of lipids and cancer outcomes conducted to date. However, for many site-specific cancers there were not enough outcome events to obtain adequate power to rule out the possibility of moderate causal effects. While there is evidence to support our assumption that genetic variants in relevant gene regions can be used as proxies for pharmacological interventions, our findings should be considered with caution until they have been replicated in clinical trials. Our investigation was able to compare subgroups of the population with different lifelong average levels of lipid fractions, but the impact of lowering a particular lipid fraction in practice is likely to differ from the genetic association, particularly quantitatively 61. Finally, analyses were conducted in UK-based participants of European ancestries. While it is recommended to have a well-mixed study population for Mendelian randomization in order to ensure that genetic associations are not influenced by population stratification, it means that results may not be generalizable to other ethnicities or nationalities.
In conclusion, our findings suggest that HMGCR inhibition may have a chemopreventive role in cancer through non-lipid-lowering properties, and that this role may apply across cancer sites. The efficacy of statins for cancer prevention must be urgently evaluated.

Study design and data sources
We performed two-sample Mendelian randomization analyses, taking genetic associations with risk factors (i.e., serum lipid levels) from one dataset, and genetic associations with cancer outcomes from an independent dataset, as performed previously for cardiovascular diseases 62. We obtained genetic associations with serum lipid concentrations (total cholesterol, LDL-cholesterol, HDL-cholesterol, and triglycerides) from the Global Lipids Genetic Consortium (GLGC) on up to 188,577 individuals of European ancestry 63. Genetic associations were estimated with adjustment for age, sex, and genomic principal components within each participating study after inverse rank quantile normalization of lipid concentrations, and then meta-analysed across studies.

We estimated genetic associations with cancer outcomes on 367,703 unrelated individuals of European ancestry from UK Biobank, a population-based cohort recruited between 2006 and 2010 at 22 assessment centres throughout the UK and followed up until 31st March 2017 or their date of death (recorded until 14th February 2018) 64. We defined cancer outcomes for overall cancer and for the 22 most common site-specific cancers in the UK (Supplementary Table 4). Outcomes were based on electronic health records, hospital episode statistics data, national cancer registry data, and death certification data, which were all coded according to ICD-9 and ICD-10 diagnoses. Further cancer outcomes were captured by self-reported information validated by interview with a trained nurse, and from cancer histology data in the national cancer registry. To obtain genetic association estimates for each outcome, we conducted logistic regression with adjustment for age, sex, and 10 genomic principal components using the snptest software program. For sex-specific cancers (breast, uterus, and cervix for women; prostate and testes for men), analyses were restricted to individuals of the relevant sex.

Gene-specific analyses for HMGCR and other drug proxy variants
We performed targeted analyses for variants in the HMGCR gene region that can be considered as proxies for statin therapy. Additionally, we conducted separate analyses for the PCSK9, LDLR, NPC1L1, APOC3, and LPL gene regions, mimicking other lipid-altering therapies (Supplementary Table 5). These regions were chosen as they contain variants that explain enough variance in lipids to perform adequately powered analyses. Variants in each gene region explained 0.4% (HMGCR), 1.2% (PCSK9), 1.0% (LDLR), 0.2% (NPC1L1), 0.1% (APOC3), and <0.1% (LPL) of the variance in LDL-cholesterol. The APOC3 and LPL variants also explained 1.0% and 0.9% of the variance in triglycerides, respectively. We performed the inverse-variance weighted method accounting for correlations between the variants 65. Estimates for the HMGCR, PCSK9, LDLR, and NPC1L1 gene regions are scaled to a 1 standard deviation increase in LDL-cholesterol, whereas estimates for the APOC3 and LPL gene regions are scaled to a 1 standard deviation increase in triglycerides.
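For readers unfamiliar with the correlation-accounting inverse-variance weighted method cited above (reference 65), the estimator can be written as a generalized weighted regression of the variant-outcome associations on the variant-exposure associations, with a covariance matrix built from the outcome standard errors and the linkage-disequilibrium correlations. The sketch below illustrates the algebra only: the three variants, their summary statistics, and the correlation matrix are made up and are not the HMGCR-region data used here, and in practice an established package (for example, the MendelianRandomization R package) would normally be used.

```python
# Minimal sketch of the inverse-variance weighted estimator for correlated
# variants in a single gene region: a generalized weighted regression of
# variant-outcome associations on variant-exposure associations.
# All summary statistics and the correlation matrix below are made up for
# illustration; they are not the HMGCR-region data used in the study.
import numpy as np

beta_exposure = np.array([0.06, 0.08, 0.05])        # per-allele effects on LDL-cholesterol (SD units)
beta_outcome = np.array([-0.010, -0.015, -0.008])   # per-allele log odds ratios for cancer
se_outcome = np.array([0.004, 0.005, 0.004])        # standard errors of the outcome associations
rho = np.array([[1.0, 0.4, 0.2],                    # pairwise LD correlations between the variants
                [0.4, 1.0, 0.3],
                [0.2, 0.3, 1.0]])

# Covariance of the outcome associations implied by the LD structure.
omega = np.outer(se_outcome, se_outcome) * rho
omega_inv = np.linalg.inv(omega)

x = beta_exposure
beta_ivw = (x @ omega_inv @ beta_outcome) / (x @ omega_inv @ x)
se_ivw = np.sqrt(1.0 / (x @ omega_inv @ x))

# Express the estimate as an odds ratio per 1 SD increase in the exposure.
print(f"OR per SD: {np.exp(beta_ivw):.2f} "
      f"(95% CI {np.exp(beta_ivw - 1.96 * se_ivw):.2f}-{np.exp(beta_ivw + 1.96 * se_ivw):.2f})")
```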
Polygenic analyses for all lipid-related variants
We carried out polygenic analyses based on 184 genetic variants previously demonstrated to be associated with at least one of total cholesterol, LDL-cholesterol, HDL-cholesterol, or triglycerides at a genome-wide level of significance (p < 5 × 10⁻⁸) in the GLGC 66. These variants explained 15.0% of the variance in total cholesterol, 14.6% in LDL-cholesterol, 13.7% in HDL-cholesterol, and 11.7% in triglycerides in the GLGC.

To obtain the associations of genetically-predicted values of LDL-cholesterol, HDL-cholesterol, and triglycerides with each cancer outcome while accounting for measured genetic pleiotropy via each other, we performed multivariable Mendelian randomization analyses using the inverse-variance weighted method 67. For total cholesterol, we performed univariable Mendelian randomization analyses using the inverse-variance weighted method 68. To account for between-variant heterogeneity, we used random-effects models in all analyses. For polygenic analyses that provided evidence of a causal effect, we additionally performed robust methods for Mendelian randomization, in particular the MR-Egger 69 and weighted median methods 70. All estimates are expressed per 1 standard deviation increase in the corresponding lipid fraction (in the GLGC, 1 standard deviation was 45.6 mg/dL for total cholesterol, 39.0 mg/dL for LDL-cholesterol, 15.8 mg/dL for HDL-cholesterol, and 90.5 mg/dL for triglycerides).

As power calculators have not been developed for multivariable Mendelian randomization analyses, we performed power calculations for polygenic analyses based on univariable Mendelian randomization for each lipid fraction in turn, and for gene-specific analyses for each gene region in turn 71. We carried out all analyses using R (version 3.4.4) unless otherwise stated. All statistical tests and p-values presented are two-sided.

Table 1: Baseline characteristics of UK Biobank participants included in this study and numbers of outcome events.

Figure 1: Gene-specific Mendelian randomization estimates (odds ratio with 95% confidence interval per 1 standard deviation increase in lipid fraction) for variants in gene regions representing targets of lipid-lowering treatments. Estimates are scaled to a 1 standard deviation increase in LDL-cholesterol for the HMGCR, PCSK9, LDLR, and NPC1L1 regions, and to a 1 standard deviation increase in triglycerides for the APOC3 and LPL regions. A: associations with overall cancer for each gene region in turn. B: associations with site-specific cancers for variants in the HMGCR gene region.
Supplementary Table 1: Power calculations for polygenic and gene-specific analyses, representing the power to detect a given effect size (odds ratio per 1 standard deviation increase in lipid fraction) at a significance threshold of p<0.05 for overall cancer (367,703 total individuals, 75,037 cases).

Supplementary Figure 2: Gene-specific Mendelian randomization estimates (odds ratio with 95% confidence interval per 1 standard deviation increase in LDL-cholesterol) for variants in the PCSK9 gene region.

Supplementary Figure 3: Gene-specific Mendelian randomization estimates (odds ratio with 95% confidence interval per 1 standard deviation increase in LDL-cholesterol) for variants in the LDLR gene region.

Supplementary Figure 4: Gene-specific Mendelian randomization estimates (odds ratio with 95% confidence interval per 1 standard deviation increase in LDL-cholesterol) for variants in the NPC1L1 gene region.

Supplementary Figure 5: Gene-specific Mendelian randomization estimates (odds ratio with 95% confidence interval per 1 standard deviation increase in triglycerides) for variants in the APOC3 gene region.

Supplementary Figure 6: Gene-specific Mendelian randomization estimates (odds ratio with 95% confidence interval per 1 standard deviation increase in LDL-cholesterol) for variants in the LPL gene region.

Supplementary Table 2: Estimates (odds ratio per 1 standard deviation increase in lipid fraction and 95% confidence interval) from polygenic multivariable Mendelian randomization analyses including all lipid-related variants. Estimates with p < 0.05 are reported in bold.
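To make the multivariable inverse-variance weighted analysis described in the Methods above more concrete, the fixed-effect version of the estimator is simply a weighted least-squares regression, without an intercept, of the variant-outcome associations on the variant-exposure associations for the three lipid fractions jointly. The sketch below uses made-up summary statistics for five variants and omits the random-effects (heterogeneity) adjustment and the robust methods described above; it is illustrative only and is not the study's code.

```python
# Minimal sketch of the multivariable inverse-variance weighted estimator:
# a weighted regression (no intercept) of variant-outcome associations on
# variant-exposure associations for several lipid fractions at once.
# The arrays below are made-up summary statistics for a handful of variants;
# they are not the GLGC/UK Biobank data analysed in the study.
import numpy as np

beta_ldl = np.array([0.10, 0.02, -0.01, 0.07, 0.00])   # per-allele effects on LDL-cholesterol (SD)
beta_hdl = np.array([0.00, 0.09, -0.03, 0.01, 0.08])   # per-allele effects on HDL-cholesterol (SD)
beta_tg  = np.array([0.01, -0.02, 0.10, 0.00, 0.03])   # per-allele effects on triglycerides (SD)
beta_cancer = np.array([0.012, 0.001, -0.004, 0.009, 0.002])  # per-allele log odds ratios
se_cancer = np.array([0.004, 0.005, 0.004, 0.006, 0.005])     # standard errors of the above

X = np.column_stack([beta_ldl, beta_hdl, beta_tg])
w = 1.0 / se_cancer**2                       # inverse-variance weights

# Weighted least squares without an intercept: beta = (X'WX)^-1 X'Wy.
XtW = X.T * w
coef = np.linalg.solve(XtW @ X, XtW @ beta_cancer)
se = np.sqrt(np.diag(np.linalg.inv(XtW @ X)))

for name, b, s in zip(["LDL-C", "HDL-C", "TG"], coef, se):
    print(f"{name}: OR per SD = {np.exp(b):.2f} "
          f"(95% CI {np.exp(b - 1.96 * s):.2f}-{np.exp(b + 1.96 * s):.2f})")
```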
Patients who achieved long-term clinical complete response and subsequently terminated multidisciplinary and anti-HER2 therapy for metastatic breast cancer: A case series

Highlights
• Breast cancers that are positive for human epidermal growth factor receptor 2 (HER2) are aggressive and typically associated with a poor prognosis.
• Novel anti-HER2 therapies have recently improved the outcomes in these cases.
• We report a case series in which women were treated for metastatic HER2-positive breast cancer using trastuzumab and various chemotherapies.
• These patients ultimately achieved clinical complete response, and subsequently terminated their systemic therapy after maintenance therapy.
• Our findings indicate that select patients may be suitable for treatment termination if they have achieved a prolonged period of complete response.

Patients
We retrospectively reviewed the medical records of 171 patients with MBC who underwent surgical treatment at our institute between 2011 and 2017. The retrospective protocol was approved by the appropriate ethics review board, and the study complied with the tenets of the Declaration of Helsinki. The patients had either de novo MBC, local recurrence of breast cancer, or distant metastases that appeared after treatment of the primary cancer. Forty patients (23.4%) had a primary tumor that was positive for HER2, although 5 patients (2.9%) had primary HER2-negative disease and metastases that were HER2-positive (1 turned from HER2 score 0, and 4 turned from 2+, FISH negative). Table 1 shows the characteristics of the patients with HER2-positive metastatic or recurrent breast cancer.

Cases with cCR were identified based on no evidence of disease after treatment for MBC (i.e., no clinical or radiological evidence of disease according to the Response Evaluation Criteria in Solid Tumors). These assessments were performed at frequencies and intervals that were selected by the treating physician, using computed tomography (CT), magnetic resonance imaging, and/or positron emission tomography. Nine patients achieved cCR: 4 patients (Cases 1-4) experienced distant metastasis and 5 patients experienced regional recurrence. Since the five patients with regional recurrence obtained cCR by local resection, detailed information about them is omitted from this report. The research work has been reported in line with the PROCESS criteria [16].

Case 1
A 41-year-old woman underwent breast-conserving surgery and axillary dissection in February 2002. The pathological results revealed that she had pT2N2M0 disease (stage IIIA, luminal-HER2 type breast cancer). The patient underwent postoperative chemotherapy using 4 cycles of 5-fluorouracil plus epirubicin plus cyclophosphamide. As trastuzumab had not been approved as an adjuvant therapy in Japan at that time, the patient also received luteinizing hormone-releasing hormone agonist (LHRH-a) with tamoxifen and tegafur plus uracil after the chemotherapy and whole-breast radiotherapy. At 4 years after surgery, and during adjuvant systemic therapy, she experienced recurrence in multiple supraclavicular lymph nodes. Thus, first-line treatment for MBC was started using paclitaxel (PTX; 80 mg/m² on days 1, 8, and 15) and trastuzumab (4 mg/kg as a loading dose followed by 2 mg/kg as a weekly maintenance dose). After 4 cycles of the first-line treatment, the patient achieved a complete radiological response and non-pathological values for CEA and CA15-3.
The patient remained in cCR during 5 years of maintenance therapy using trastuzumab, and subsequently terminated systemic therapy. The last follow-up was in August 2018, and she has survived for 11.5 years after termination of anti-HER2 therapy (Fig. 1).

Case 2
A 41-year-old woman with cT3N2M0 disease (stage IIIA, luminal-HER2 type cancer) underwent preoperative chemotherapy using 2 cycles of epirubicin plus cyclophosphamide followed by 2 cycles of weekly PTX in 2013. Mastectomy and axillary lymph node dissection revealed a Grade 1b therapeutic effect. The association between pathological complete response and long-term outcomes was strongest in patients with triple-negative breast cancer and in those with HER2-positive, hormone-receptor-negative tumors who received trastuzumab [17]. However, the impact of pathological CR on luminal-HER2 type breast cancer patients is currently unknown. The patient subsequently received trastuzumab and LHRH-a with tamoxifen, but did not undergo post-mastectomy radiotherapy. At 2 years after surgery, and during adjuvant endocrine therapy, pathology results revealed lung and internal mammary lymph node metastases. Thus, first-line treatment for MBC was started using docetaxel (75 mg/m² on day 1) with pertuzumab (840 mg as a loading dose followed by 420 mg on day 1 of each subsequent cycle) and trastuzumab (8 mg/kg followed by 6 mg/kg on day 1). After 4 cycles of the first-line therapy, the patient achieved a complete radiological response and non-pathological values for CA15-3 and NCC-ST-439. She subsequently underwent irradiation to the chest wall and internal mammary lymph node region, and received maintenance therapy using pertuzumab plus trastuzumab for approximately 18 months. She stopped maintenance therapy in October 2017. The last follow-up was in August 2018, and she has survived for 10 months after termination of anti-HER2 therapy (Fig. 2).

Case 3
A 32-year-old woman was diagnosed with cT3N3M1 disease (HER2-enriched breast cancer), and multiple lung metastases were detected on CT in 2014. Docetaxel with pertuzumab and trastuzumab was not approved as a first-line treatment in Japan at that time. Radiological evaluations revealed no therapeutic effect from 2 cycles of first-line treatment using epirubicin (90 mg/m²) plus cyclophosphamide (600 mg/m²). Thus, weekly PTX and trastuzumab were administered as second-line therapy, and the patient achieved cCR after 4 cycles. She continued maintenance therapy using trastuzumab for 1 year and subsequently terminated her therapy in December 2015. The last follow-up was in June 2018, and she has survived for two and a half years after termination of anti-HER2 therapy (Fig. 3).

Case 4
A 56-year-old woman with cT4bN2M1 disease (HER2-enriched breast cancer) had contralateral lymph node metastasis that was pathologically detected in 2016. The patient started first-line treatment using docetaxel with pertuzumab and trastuzumab, and achieved cCR after 4 cycles. Although the tumor disappeared from her left chest, an abscess-like secretion persisted from a skin ulcer. Mastectomy and sentinel lymph node biopsy were performed, and confirmed a pathological complete response. The patient continued maintenance therapy using pertuzumab and trastuzumab, but subsequently terminated systemic therapy after approximately 18 months, in November 2017. The last follow-up was in July 2018, and she has survived for 8 months after termination of anti-HER2 therapy (Fig. 4).
The patients in Cases 1 and 2 experienced relapse during adjuvant therapy, and resistance to endocrine therapy was predicted. The patient in Case 1 had never received trastuzumab, while the patient in Case 2 had received trastuzumab without chemotherapy as postoperative therapy [4,5,18-20]. Both patients subsequently achieved cCR during their first-line therapy for MBC. The patients in Cases 3 and 4 were diagnosed with MBC, but subsequently achieved cCR using anti-HER2 therapy combined with chemotherapy (during second-line therapy in Case 3 and during first-line therapy in Case 4) [4,5,19]. All 4 patients subsequently terminated their systemic therapy for MBC. Interestingly, all patients had a relatively small total tumor volume, were asymptomatic during the systemic therapy, and had only 1-2 metastatic sites. After the cCR was confirmed clinically or pathologically, they terminated the chemotherapy and received maintenance anti-HER2 therapy for 1-4 years without any recurrent lesions being detected.

How long the maintenance period should be is an important question discussed among experts. Previous reports of HER2-positive metastatic breast cancer patients who achieved cCR were reviewed [20-27]. Furthermore, all patients underwent tapering of the maintenance therapy dose (Fig. 5), as even low-dose trastuzumab is known to have an antitumor effect [28-31]. If the tumor recurs or regrows during this low-dose trastuzumab period, the cancer cells are resistant to anti-HER2 therapy. However, if patients terminate treatment and experience a long recurrence-free period, it is possible that the cancer may remain susceptible to re-challenge using anti-HER2 therapy [18].

In previous studies, the median OS of HER2-positive breast cancer patients was under 40 months even after the introduction of the new targeted agents, pertuzumab and T-DM1 [4,6-9,11,32,33]. Patients surviving over 4 years are thought to have a relatively longer prognosis. For these patients, avoiding treatment-related complications is as important as prolonging survival. In general, 12 months of adjuvant trastuzumab therapy is the standard treatment duration for patients with HER2-enriched disease, even with locally advanced breast cancer, when they achieved pathological CR after 6 months of preoperative chemotherapy. These patients undergo treatment-free follow-up after a year of maintenance trastuzumab. Patients with de novo stage IV breast cancer or postoperative recurrent disease may have a greater tumor volume than patients with early breast cancer, and distant metastasis involves more complicated mechanisms. Thus, we believe that the maintenance duration in MBC should be longer than that of postoperative adjuvant therapy.

The initial treatment is considered the most important for patients with MBC. However, if the case does not involve de novo stage IV disease, it is possible that the patient may have been treated heavily using anti-HER2 therapy and chemotherapy [32-35]. Therefore, cCR is considered to be relatively difficult to achieve in cases of MBC, compared to de novo stage IV cancer. Although some reports have described patients achieving cCR after treatment for metastatic HER2-positive breast cancer, we are not aware of any reports regarding the termination of systemic therapy for these patients.
Thus, although intensive monitoring is needed after terminating therapy, it is possible that select patients may not need to continue receiving maintenance treatment for MBC.

Conclusions
The present cases highlight the possibility that select patients with MBC may be able to terminate systemic therapy after they have achieved a prolonged period of cCR.

Conflicts of interest
The authors declare that they have no competing interests.

Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Ethics approval
Our institution has exempted ethical approval for this case series as there are no patient identifiers in the images or text.

Consent
Written informed consent was obtained from the patients for publication of this case series and any accompanying images.
Targeted treatment of folate receptor-positive platinum-resistant ovarian cancer and companion diagnostics, with specific focus on vintafolide and etarfolatide

Among the gynecological malignancies, ovarian cancer is the leading cause of mortality in developed countries. Treatment of ovarian cancer is based on surgery integrated with chemotherapy. Platinum-based drugs (cisplatin and carboplatin) comprise the core of first-line chemotherapy for patients with advanced ovarian cancer. Platinum-resistant ovarian cancer can be treated with cytotoxic chemotherapeutics such as paclitaxel, topotecan, PEGylated liposomal doxorubicin, or gemcitabine, but many patients eventually relapse on treatment. Targeted therapies based on agents specifically directed to overexpressed receptors, or to selected molecular targets, may be the future of clinical treatment. In this regard, overexpression of folate receptor-α on the surface of almost all epithelial ovarian cancers makes this receptor an excellent "tumor-associated antigen". With appropriate use of spacers/linkers, folate-targeted drugs can be distributed within the body, where they preferentially bind to ovarian cancer cells and are released inside their target cells. Here they can exert their desired cytotoxic function. Based on this strategy, 12 years after it was first described, a folate-targeted vinblastine derivative has now reached Phase III clinical trials in ovarian cancer. This review examines the importance of folate targeting, the state of the art of a vinblastine folate-targeted agent (vintafolide) for treating platinum-resistant ovarian cancer, and its diagnostic companion (etarfolatide) as a prognostic agent. Etarfolatide is a valuable noninvasive diagnostic imaging agent with which to select ovarian cancer patient populations that may benefit from this specific targeted therapy.

Introduction to ovarian cancer
Ovarian carcinoma is the most lethal gynecological cancer worldwide. 1 The World Health Organization GLOBOCAN database reported a worldwide incidence of around 200,000 cases of ovarian cancer in 2008, with a 5-year survival rate of 30%-92% depending on the disease spread at diagnosis. A variety of factors influence the risk of developing ovarian cancer (Table 1). A positive family history of ovarian or breast cancers is the most important factor, and nulliparity is also associated with an increased risk of ovarian cancer. 2 Evidence concerning the effect of hormone replacement therapy on the risk of developing ovarian cancer has to date been conflicting, although a meta-analysis has associated use of hormone replacement therapy with an increased risk of ovarian carcinoma. 3 Other factors suggested to be associated with an increased risk of epithelial ovarian carcinoma, but for which the evidence is less robust, include infertility, 4 pelvic inflammatory disease, 5 polycystic ovaries, 6 obesity, 7 and animal fat consumption. 8,9 Conversely, oral contraceptive use, pregnancy, and lactation are associated with a reduced risk. 10

About 90% of ovarian tumors are epithelial in origin, while the remainder comprises germ or stromal tumors. The World Health Organization classification describes three major types of epithelial adenocarcinoma, ie, serous, mucinous, and endometrioid. There is some evidence that the prognosis for women with a diagnosis of mucinous epithelial ovarian cancer is worse than for those with a diagnosis of serous histology, and the prognosis of patients with clear-cell histology is unlikely to be better. 11,12
Treatment of ovarian cancer is based on surgery integrated with chemotherapy. 13 Chemotherapy plays a major role both in adjuvant treatment and in the care of patients with advanced disease. Platinum-based drugs (cisplatin and carboplatin) are the core of first-line chemotherapeutics for patients with advanced ovarian cancer. 14 Several drugs have been combined with cisplatin or carboplatin in an attempt to improve survival, and large clinical trials have confirmed the benefits of adding paclitaxel to first-line chemotherapy for women with advanced ovarian cancer; 15 however, ovarian cancer continues to be characterized by stagnant mortality statistics.

A clear difference has been found between serous and nonserous carcinomas in terms of folate receptor (FR) expression, in particular overexpression of the FRα isoform on the surface of almost all epithelial ovarian cancers, making it an excellent "tumor-associated antigen" for tackling one of the most important challenges in ovarian cancer treatment, ie, platinum-resistant disease. 16 For a recent review of current approaches to treating platinum-resistant ovarian cancer, see Leamon et al. 17 This review examines targeted treatment of FRα in women with platinum-resistant ovarian cancer, focusing especially on vintafolide and etarfolatide. The term "platinum-resistant" is now used to describe patients whose disease recurrence is documented within 6 months of platinum-based therapy; 18,19 unfortunately, these patients have a poor prognosis, and thus novel compounds and approaches, including treatment options that are more selective and more individualized, are welcome.

Personalized medicine in oncology
As defined by the US President's Council of Advisors on Science and Technology, "Personalized Medicine refers to the tailoring of medical treatment to the individual characteristics of each patient … to classify individuals into subpopulations that differ in their susceptibility to a particular disease or their response to a specific treatment. Preventive or therapeutic interventions can then be concentrated on those who will benefit, sparing expense and side effects for those who will not." 20 The concept of personalized medicine is closely related to the concept of targeted therapy, given that the possibility to treat each patient in the best way is linked to the possibility of recognizing a specific molecular target to drive selective drugs. Undoubtedly, oncology is a promising field for this kind of approach, because molecular targets that are specific for a particular tumor can frequently be identified. The objective of personalized cancer treatment is to select the ideal therapy for an individual cancer patient, based on knowledge of that patient's tumor characteristics and/or genetics. The first example of application of targeted therapy was imatinib, a tyrosine kinase inhibitor that can improve survival in patients with chronic myelogenous leukemia who carry a particular translocation in their leukemic white blood cells. 21 Another example of an application of targeted therapy concerns colorectal cancer, for which drugs targeting epidermal growth factor receptor, such as cetuximab or panitumumab, or those targeting vascular endothelial growth factor, such as bevacizumab, have entered routine clinical use. 22-24 The FR is thus a valuable therapeutic target in ovarian cancer, since it is highly expressed on a variety of cancers, whereas it is largely absent from normal tissue.
The identification of FRα as a molecular target may lead to the development of drugs specifically targeted to ovarian cancer cells.

Introduction to the folate receptor
FRs are cysteine-rich cell surface glycoproteins that bind folate with high affinity. Three FR isoforms have been identified to date, ie, FRα, FRβ, and FRγ. In 2000, Spiegelstein et al, through genome database mining, identified a fourth isoform, FRδ, but neither its tissue expression nor its functionality as a folate binder has been clearly established. 25 These receptors actually comprise a family of proteins, since they share highly conserved sequences and are all encoded by the folate receptor multigene family, which is localized on chromosome 11q13.3-q14.1. 26 FRα and FRβ are the most studied isoforms; they are membrane-anchored receptors and mediate internalization of receptor-bound folate compounds and folate conjugates. 27-30 FRγ is primarily a secretory protein, in that it lacks an efficient signal for glycosylphosphatidylinositol (GPI) modification. 31 FRα and FRβ, in particular, are colocalized in lipid rafts, ie, membrane microdomains that function as platforms able to recycle GPI-anchored proteins. 32

All FR isoforms bind folic acid with high affinity (Kd <1 nM). In contrast, FRα and FRβ display different affinities towards reduced folate coenzymes; for example, FRα has 50 times greater affinity than FRβ for N5-methyltetrahydrofolate. This difference is correlated with the different amino acid composition of the two receptors, namely Leu-49 in FRβ, and Ala-49, Val-104, and Glu-166 in FRα. 33,34 In 1986, Elwood et al identified a soluble high-affinity folate-binding protein in the KB human nasopharyngeal cell line, 35 which was also isolated from extracellular fluids such as human milk 36 and human placenta. 37 It has been shown that soluble high-affinity folate-binding protein may originate from FRα or from FRβ, as well as from FRγ. 34 That soluble high-affinity folate-binding protein can originate from FRα was demonstrated in KB and placenta cells, where it was derived either by proteolysis mediated via an Mg2+-dependent protease or by phospholipase cleavage of the GPI anchor. 38,39 FRβ is processed intracellularly via two independent pathways; one results in GPI anchor addition and the other results in its secretion. 40

Specific role of FR in ovarian cancer
The significance of this receptor as a tumor marker was discovered in 1991 when, through amino acid sequence analysis, a protein enriched on the surface of a human ovarian carcinoma cell line was shown to be the FR. 41 The FR was later shown to be expressed on the majority of nonmucinous ovarian carcinomas, and subsequent analyses have revealed more marked upregulation of the FRα isoform than of the other isoforms in ovarian carcinoma. 42,43 It has been suggested that FRα might confer a growth advantage on the tumor by modulating folate uptake from serum, which in turn might facilitate rapid cellular growth and division. Alternatively, it has been suggested that FRα might affect cell proliferation via cell signaling pathways, similarly to other cellular membrane proteins with a GPI anchor. 44,45 It also appears possible that FRα levels may be elevated during the early stages of carcinogenesis, when they would increase folate uptake and stimulate cells to repair DNA damage in transcription factors or in other proteins.
46 The inability of these cells to repair the DNA coding for these proteins might lead to continued FRα expression, which could eventually support the transition to a cellular environment favoring tumor progression and increasing the tumor folate requirements for rapid growth. 47 Comparatively little attention has been paid to FRα levels and patient survival in ovarian cancer; in one such study, expression of FRα protein was found to be associated with tumor progression. 48 In another study it was associated with high-grade ovarian cancers, platinum therapy resistance, and poor prognosis, 49 suggesting that metabolic changes related to its upregulation may occur early in carcinogenesis; the study authors offered some hypotheses to explain their findings, including that FRα may increase folate uptake, which could stimulate cells to repair DNA damage caused by platinum, or that FRα involvement in signal transduction could help cells progress through the cell-cycle phases faster than cells with lower levels of FRα, or again that FRα might predispose cells to overcome drug-induced injury, as observed for genes involved in cellular signaling or apoptosis. 50,51 A recent study 16 confirmed an FRα expression rate of roughly 82% in patients with serous ovarian cancer, although expression was marked in a small proportion of these cases. Further, the study authors showed that chemotherapy does not significantly alter FRα expression in vital residual tumor tissue, suggesting an important role for FRα as a target for diagnostic agents and drugs.

The limited tissue-specific expression of the FR isoforms enables FRα to be exploited for the selective delivery of cytotoxic agents into malignant cells, with reduced toxic side effects in nontarget tissues. For these reasons, FRα is an appropriate target for cancer immunotherapy with monoclonal antibody-based reagents. Specific monoclonal antibodies (bearing radioisotopes) may be used for imaging and/or therapeutic purposes (used alone, as bispecific monoclonal antibodies, or after conjugation with toxins, drugs, radionuclides, or cytokines). Several anti-FRα antibodies have been developed, the most interesting being the murine monoclonal antibodies MOv18, MOv19, and LK26. These recognize two noncompeting epitopes of FRα, and have been developed by Miotti et al 42 and Garin-Chesa et al. 52 Guided selection of MOv18 or MOv19 resulted in an optimization process that led to a chemical dimer, AFRA-DFM5.3, now in advanced preclinical evaluation. 53 An optimized process of humanization of LK26 led to farletuzumab (MORab-003; Morphotek Inc., Exton, PA, USA). In this case, the cytotoxicity of the monoclonal antibody is mediated via complement-dependent cytotoxicity and antibody-dependent cell-mediated cytotoxicity. Promising initial findings led to advanced clinical trials (NCT01218516) in platinum-sensitive patients who experienced first relapse to determine the efficacy of farletuzumab as monotherapy and in combination with carboplatin/taxane. 54,55 Meanwhile, Phase II trials of farletuzumab as a first-line agent in combination with traditional platinum-containing chemotherapies in lung adenocarcinoma are ongoing (NCT01218516). 56 Recently, a Phase III trial (FAR-122, NCT00738699) of farletuzumab in combination with paclitaxel in advanced platinum-resistant ovarian cancer has been discontinued because of its limited survival benefit for patients. 57 Additionally, some immune-mediated events were observed.
Furthermore, Morphotek Inc has announced that it is actively developing a companion diagnostic assay to identify patients with high FRα expression because these patients may receive more benefit from farletuzumab therapy than those with low FR expression. 58 The rationale underlying FRα-targeted drug delivery lies in the substrate specificity of folic acid versus FRα. In this type of approach, folate can be linked to various therapeutic agents, namely low-molecular-weight chemotherapeutic agents, liposomes with entrapped drugs, antisense oligonucleotides, and immunotherapeutic agents, that can then target cancer cells that overexpress FR. 59 This is possible because folate is amenable to chemical conjugation with other molecules through its γ-carboxyl group, without decreasing its binding affinity to the FR. 60 In this connection, Leamon and Low have introduced a novel and personalized approach to identifying patients who are most likely to benefit from FR-targeted therapy ( Figure 1). 59 This strategy has led to the development of several small-molecule drug conjugates to target cells that overexpress all the FR isoforms. 61 One of the most promising and the one studied in most depth is vintafolide (originally known as EC145), which combines a water-soluble derivative of folic acid (pteroic acid) and desacetylvinblastine hydrazide, a potent vinca alkaloid ( Figure 2). 62 The two molecules are connected in a regioselective manner via a hydrophilic peptide spacer and a self-immolative group based on disulfides as the cleavable linkage. Desacetylvinblastine hydrazide is prepared from vinblastine-free base by reaction with anhydrous hydrazine, whereas the targeting and spacer components are prepared by assembly, using standard fluorenylmethyloxycarbonylbased solid-phase peptide synthesis. The second step involves inserting the disulfide cleavable linkage on desacetylvinblastine hydrazide by reaction with the heterobifunctional reagent (2-[benzotriazole-1-yl-(oxycarbonyloxy)-ethyldisulfanyl]pyridine). The final reaction comprises a mild thiol-disulfide exchange reaction between the components. The companion imaging agent has also been developed, and is known as etarfolatide (EC20), which contains a 99m Tcbased imaging group 61 ( Figure 2). Through an efficient solid-phase synthetic procedure, a small-molecular-weight peptide derivative of folate (Cys-Asp-Dap-D-Glu-Pte) was produced. A D-Glu enantiomer residue was incorporated into the molecule for the purpose of providing additional metabolic protection against tissue-resident hydrolases, without altering the ability of folic acid to bind to the highaffinity FR. 63 Efficacy and safety of vintafolide and etarfolatide In preliminary work, vintafolide was fully characterized using a KB human nasopharyngeal cancer cell line that overexpresses FR. KB cell lines treated with a short incubation pulse (1-2 hours) of vintafolide showed high cytotoxicity values (around 9 nM, versus 2 nM for free vinblastine). The specificity of vintafolide was demonstrated by two methods, ie, using a free folic acid excess in the KB cell line and testing the compounds in the 4T1 FR-negative cell line, in which activity was either completely blocked or not observed. Vintafolide has been tested in a number of different in vivo models, including M109 mouse lung adenocarcinoma, a KB tumor xenograft model, and aggressive FR-positive J6456 lymphoma. In all these cases, vintafolide exerted a notable antitumor effect. 
The KB tumor model was also used to evaluate the effect of dosage and treatment schedule on therapeutic efficacy, with different schedules evaluated using a fixed total quantity of 12 µmol/kg. The most efficacious (100% cure rate) was found to be that entailing frequent administration of lower doses of vintafolide, ie, once daily for 5 days. Furthermore, etarfolatide, the radiodiagnostic imaging agent, showed that uptake by the liver (a nontargeted organ) increased and was proportional to the dose administered. Increased uptake in the liver and a concomitant drop in uptake by the tumor could explain the observed reduction in antitumor effect of vintafolide when administered using the lower-frequency, higher-dose regimens.

Determination of the toxicity of novel anticancer agents, especially those bearing very potent molecules such as desacetylvinblastine hydrazide, is a difficult challenge. In a study in which KB or M109 tumors were grown in mouse models, aside from minimal-to-moderate weight loss during therapy, no other gross toxicity was observed after administration of 5 µg/kg or 10 µg/kg once daily for 5 days. With the exception of the liver, all other tissues appeared to be normal. An important finding was the lack of renal toxicity, despite the fact that mouse kidneys express very high levels of FR. 64

The first clinical pharmacokinetic evaluation was reported in a single-center, dose-escalation, open-label, Phase I clinical trial (EC-FV-01, NCT00308269) completed in 2007, which involved 32 patients with refractory or metastatic solid tumors (six affected by ovarian cancer). 65 Vintafolide was administered either as an intravenous injection on days 1, 3, 5 (week 1) and days 15, 17, 19 (week 3) of a 4-week cycle at doses of 1.2, 2.5, and 4.0 mg (in three, ten, and three patients, respectively), or as a one-hour infusion administered on the same schedule at doses of 2.5 mg and 3 mg (ten and six patients, respectively). The pharmacokinetic profile is accurately described by a two-compartment model, and is characterized by rapid distribution and elimination (half-life 6 and 26 minutes, respectively). The area under the concentration-time curve values for administration of 2.5 mg as an intravenous bolus or as a one-hour infusion were equivalent (42 and 40 hours*ng/mL), while the peak concentration values were 129 ng/mL and 42 ng/mL, respectively. The same study also included a population analysis, in which vintafolide showed a clearance of 56.1 L/hour, with an interindividual variability of 48% and an interoccasion variability of 8%. Significant covariance of clearance with body surface area was found, although other covariates not tested in the study may account for a larger proportion of the interindividual variability. The volume of distribution at steady-state was 26 L. 66 From the pharmacokinetic and clinically relevant toxicity evaluations performed in this trial, a well tolerated intravenous bolus dose of 2.5 mg was recommended as the dose to be used in the Phase II trial.

Regarding diagnostics for recurrent ovarian carcinoma, 111In-DTPA-folate was the first FR-targeted low-molecular-weight agent to enter clinical trials. 67 Due to the relatively long half-life and high cost of 111In, a 99mTc-based imaging agent (half-life 6 hours) was greatly preferable; etarfolatide was tested and radiopharmaceutical analysis showed it to have a time-dependent and concentration-dependent association with FR-positive cells.
It appeared to accumulate preferentially within FR-positive tumors, and to do so in large amounts. Furthermore, its rapid pharmacokinetics (cleared from the blood with a half-life of 4 minutes) improves its quality for use as a diagnostic imaging agent. 63 An in vivo pilot study was performed to determine the percentages of various solid tumors that accumulate etarfolatide, and to correlate its uptake with immunohistochemistry analysis of FR expression in available biopsied tumor tissue from 154 patients. 68 As determined by immunohistochemistry staining for the FRα isoform, 67% of these patients had FR-positive tumors. Overall, the etarfolatide evaluation corresponded to the immunohistochemistry staining result in 61% of patients. Agreement between etarfolatide-positive results and FR-positive results was 72%, whereas agreement between etarfolatide-negative results and FR-negative results was 38%. This relatively poor agreement between imaging and immunohistochemistry results may be explained in part by the fact that the study was not designed as a lesionto-lesion comparison between the two methods. The study authors suggest that the discrepant results for the two methods may reflect a difference in FR status of the primary neoplasm versus metastatic disease after excision of the primary tumor, or a difference in FR expression between metastatic lesions in the same patient. 68 Administration of etarfolatide was safe, and the investigators considered that none of the 17 serious adverse events were "related" to administration of the imaging agent. Rather than diagnosis, the primary purpose of etarfolatide administration is currently as a companion agent to enable preselection of patients whose tumors are highly FR-positive, and who thus constitute the best candidates for FR-targeted therapy. Etarfolatide has been a component of more than 16 clinical trials in over 500 patients with ovarian, endometrial, renal, pituitary, and pulmonary cancers, and has been shown to be valuable for predicting response to FR-targeted chemotherapy. Clinical studies to evaluate the safety and efficacy of vintafolide started in 2007 with a nonrandomized Phase II clinical trial (NCT00507741, EC-FV-02) 69 in patients with advanced ovarian, fallopian tube, or primary peritoneal carcinoma, after identification of FR expression using etarfolatide (n=47, median age 61 years). The trial, completed at the end of 2012, examined two different doses of vintafolide, administered three times a week on weeks 1 and 3 (4-week cycle). The primary endpoint was the percentage of patients deriving clinical benefit. The disease control rate (complete response + partial response + stable disease) at 8 weeks in patients receiving vintafolide as third-line or fourth-line intravenous therapy was 75%, compared (historically) with a rate of 47% in women receiving second-line or third-line PEGylated liposomal doxorubicin (hereafter PLD). 19,70 There were also three partial responses. From this study, it appeared that vintafolide was very well tolerated, with minimal toxicity. Fatigue was the most common grade 3 toxicity, occurring in 8.2% of patients. 71 In this initial trial, the patients were not preselected by expression of FR; however, the lesional uptake of etarfolatide was assessed retrospectively to determine whether the level of radioactive positivity in tumors correlated with vintafolide response rates. 72 Evaluable tumor lesions (n=145) were classified according to three levels of etarfolatide uptake (ie, ++, +, -). 
The probability of a response was greater with + than with -lesions (P=0.0022). The disease control rate was 57% (++), 36% (+), and 33% (-) for patients with differently responsive lesions, whereas the disease control rate was 42.2% for all lesions regardless of response status. The overall response rate was 14% for patients with the most strongly positive lesions, and 0% for patients with less positive or negative lesions. Among a subgroup of patients who had failed fewer than three previous treatments, a disease control rate of 86% was observed for patients with high etarfolatide uptake (++), compared with 50% (+) and 0% for those with less reactive lesions (-). The group of patients with highly reactive lesions had a median overall survival of 63. An international, randomized Phase II study (EC-FV-04, NCT00722592, Platinum Resistant Ovarian Cancer Evaluation of Doxil and EC145 Combination Therapy [PREC-EDENT]) completed in 2013 compared coadministration of vintafolide and PLD with a liposome formulation alone in women with platinum-resistant ovarian cancer (n=149). 73 Patients were randomized to receive vintafolide (2.5 mg on days 1, 3 and 5 and days 15, 17 and 19 of each 4-week cycle) plus PLD (50 mg/m 2 intravenously, on day 1 of each 4-week cycle) or PLD alone (at the same dosage/schedule) until disease progression or death. No statistically significant difference between the study arms was found with regard to total adverse events. An interim analysis (conducted after the 46th event, of a planned study total of 95 progressions or deaths) indicated that median progression-free survival was 20 weeks for women receiving vintafolide plus PLD (P=0.014), compared with 10.8 weeks in the PLD alone group. 74 Vintafolide plus PLD was the first combination to show a statistically significant increase in progression-free survival (versus controls) for women with platinum-resistant ovarian cancer. Another combination, ie, trabectedin/PLD, appeared only to benefit the partially platinum-sensitive subgroup. The full evaluation of this study has very recently been published. 75 To evaluate the association between progression-free survival, hazard ratio, and level of FR positivity, a threshold analysis was conducted based on etarfolatide scan results (Table 2). Benefit was observed in patients with FR positive disease (10% to 90%, FR 10%-90%), and in patients with 100% of lesions positive for FR (FR 100%); it was greatest in FR 100% patients, with a median progression-free survival of 22 weeks compared with 6.6 weeks for PLD alone. Of note, FR 100% patients in the PLD arm seemed to have a poorer prognosis, with the shortest median progression-free survival of any group (1.5 months); this is consistent with reports regarding the correlation between FR expression and poor outcome. 48 Based on these promising results, a randomized, double-blind, placebo-controlled Phase III study (NCT01170650, Study for Women With Platinum Resistant Ovarian Cancer Evaluating EC145 in Combination With Doxil ® [PROCEED]) is currently recruiting patients with platinum-resistant ovarian cancer. 76 At baseline, patients undergo etarfolatide imaging to identify FR-positive lesions; they are then randomized to vintafolide with or without PLD. PLD 50 mg/m 2 is administered on day 1 of a 4-week cycle and treatment continues until the maximum allowable cumulative dose (550 mg/m 2 ) is reached, or until disease progression or intolerable toxicity. 
Vintafolide 2.5 mg or placebo is administered on days 1, 3, 5, 15, 17, and 19 of a 4-week cycle, and treatment can continue for up to 20 cycles, or until unacceptable toxicity or disease progression. The primary objective is to assess progression-free survival based on investigator assessment (Response Evaluation Criteria In Solid Tumors version 1.1) in FR-positive patients. 77 Secondary objectives include investigation of overall survival, safety/ tolerability, overall response rate, and disease control rate. 78 Table 3 summarizes the main characteristics of the clinical trials described here. The FR-targeted approach is currently being investigated in breast cancer. An open-label, randomized Phase IIa trial is underway to evaluate the safety and efficacy of vintafolide and the vintafolide plus paclitaxel combination in subjects with advanced triple-negative breast cancer; etarfolatide was used for subject selection (NCT01953536). 79 A Phase I study of the safety of vintafolide in combination with carboplatin and paclitaxel in patients with FR-reactive endometrial cancer (NCT01688791) is ongoing. 80 Adverse effects The adverse effects of vinblastine are important, relate to its hematologic toxicity, and are dose-limiting; in addition, nausea, constipation, mucositis, and stomatitis are common. Neurotoxicity occurs less frequently than with vincristine, and is characterized by peripheral neuropathy. 81 Vinblastine is a vesicant, and extravasation precautions must be applied. When evaluating folate-directed vinblastine conjugates, care must be taken to guard against any potential nephrotoxicity due to the high expression of FR in the kidneys. During a doseescalating clinical trial in 32 patients, vintafolide was generally well tolerated. Decreased gastrointestinal motility (constipation) and peripheral sensory neuropathy were reported as adverse events. Twenty-six of the 32 patients reported at least one drug-related adverse effect. Constipation appeared to be dose-dependent, predictors were found to be clearance and area under the concentration-time curve. 65 Dose-limiting toxicity at 4 mg included reversible ileus and neuropathy. The same adverse effects (all grades/grade $3) were observed during the EC-FV-03 study in the 22 patients for whom full toxicity data were available, ie, fatigue (8/1), constipation (6/0), anorexia (5/1), weight loss (3/0), and dyspepsia (2/0). 82 The safety data collected during the NCT00722592 (PRECEDENT) trial showed that there were no cumulative treatment-emergent adverse events except for palmar-plantar erythrodysesthesia syndrome, which is frequently related to PLD. The frequencies of leukopenia, neutropenia, abdominal pain, and peripheral sensory neuropathy were significantly higher in the vintafolide plus PLD arm than in the PLD arm. No drug-related mortality or statistically significant difference in incidence of serious drug-related events was observed between treatment arms, and all adverse events occurred in fewer than 5% of patients, with the exception of small bowel obstruction (vintafolide plus PLD arm, 8.4%; PLD arm, 12%). 75 However, despite clinical efforts to minimize the adverse effects of vintafolide, peripheral neuropathy remains an important toxicity. A possible strategy to avoid peripheral neurotoxicity might be to seek a balance between the potential therapeutic efficacy of high doses and the potential of such doses to cause painful peripheral neuropathy. 
Place in therapy
Epithelial ovarian cancer is the most lethal gynecological malignancy among women worldwide. Most women present with advanced disease, and despite excellent responses to initial surgery and chemotherapy, 5-year survival statistics remain poor. Among several new therapeutic approaches for ovarian cancer, FR-targeted agents show significant promise. 58,83 The fact that FRα is overexpressed in ovarian and other cancer cells, while its expression is limited in normal tissues, accentuates its potential as a diagnostic and therapeutic target. 54,83 Strategies of this sort might allow treatment to be selected based on a tumor's molecular characteristics, advancing therapy from empirical cytotoxic therapies to more individualized ones. Vintafolide, administered in combination with PLD, is the first combination to lead to a statistically significant increase in progression-free survival for women with platinum-resistant ovarian cancer.

Our knowledge of FR has very recently been enhanced through crystallographic models, which reveal representative stages of endocytic trafficking and conformation changes occurring in FRs. 84 These data would appear to provide a platform from which to rationally design drugs as lead compounds with greater selectivity, together with excellent diagnostic agents, which together can greatly reduce nonspecific effects; this may lead to the development of more potent but safer therapeutic agents. However, as may be seen in the case of vintafolide, the time lag between design and clinical use is still several years. 85 In the future, prescreening a patient's FR status using etarfolatide may also become a companion diagnostic tool for other FR-targeted agents. Etarfolatide might be used to select FR-positive patients and, in combination with fluorescent folate-targeted compounds, could allow more precise removal of tumor tissue. 86

In conclusion, vintafolide is showing itself to be an important tool in the treatment of ovarian cancer, particularly for the patient population with platinum-resistant ovarian cancer, for whom the prognosis is very poor. These promising new possibilities, if confirmed, will mark the achievement of a new goal in oncology; by utilizing targeted therapy, there is the possibility of targeting therapy to the area where a specific molecular target is present, in this case a marker of ovarian cancer. 87 The FR-targeting approach is also steadily improving, delivering more than one type of cytotoxic agent to tumors simultaneously. In this next generation of conjugates, folate is tethered to two different drug molecules, eg, mitomycin C and vinca alkaloids, with distinct biological mechanisms of action. 88
Cardiac Tamponade in Concurrent Sickle Cell Disease and Systemic Lupus Erythematosus: An Unusual Association This case report describes a rare occurrence of the coexistence of sickle cell disease (SCD) and systemic lupus erythematosus (SLE) in a 33-year-old female. The overlapping clinical manifestations posed diagnostic challenges, leading to a delayed diagnosis. The patient's presentation with pericardial effusion and tamponade during a concurrent SLE flare highlights the complexity of managing these conditions. The case underscores the importance of heightened clinical awareness and multidisciplinary collaboration for accurate diagnosis and timely intervention in such rare comorbidities. Introduction The coexistence of sickle cell disease (SCD) and systemic lupus erythematosus (SLE) is of interest but seems to be a rare association, as only approximately over 40 similar cases have been reported in the literature in the last 50 years [1].Individuals of African, Afro-Caribbean, or African-American descent, particularly women, are at a higher risk of developing both SCD and SLE as independent diseases.A previous study examining the overlap of these conditions within the same individual found that 73% of affected individuals were Black women [2].Reported cases have shown patients to have been diagnosed with SCD for several years before SLE, with articular involvement as the most frequent lupus-related symptom (85%), followed by serositis (36%) and glomerulonephritis class III or IV (11%) [1]. Individuals with SCD are known to have a higher likelihood of contracting infections.Deficiencies of the complement system were first studied by Johnston et al. to understand this increased risk of infection.Through prevention of the activation of C1 and the classic complement sequence, they observed that individuals with SCD did not fully activate and attach the opsonin C3 to foreign microorganisms via the alternate complement pathway [3].Individuals with SCD were determined to have dysfunctional activation of the alternate pathway of the complement system, which heightens their susceptibility to infections caused by encapsulated bacteria and impedes their ability to clear antigens.Some authors have posited that this increases the susceptibility of those with SCD to developing autoimmune disorders [4,5].This makes the diagnosis of the etiology of pericardial effusion in this population subset complex and interesting to learn about. Cardiac tamponade, a sequela of pericardial effusion, is particularly challenging to diagnose in patients with concurrent SLE and SCD.Patients with SCD often complain of shortness of breath, especially during times of sickle cell crisis.This complaint often alerts physicians to acute chest syndrome, but it is important to evaluate for cardiac tamponade to avoid missing this life-threatening diagnosis.This case will delve deeper into the clinical course of a young female with concurrent SCD and SLE who developed cardiac tamponade. 
Case Presentation
A 33-year-old African American female presented with progressive chest pain, polyarthralgia, and limb swelling for three days. A physical exam was significant for a temperature of 39.4°C, tachycardia with a heart rate of 119 beats per minute (bpm), a systolic murmur, bibasilar rales, bilateral shoulder and hip tenderness, and polyarticular effusions. Her medical history included SCD and SLE on hydroxychloroquine and mycophenolate. The EKG showed sinus tachycardia with low-voltage QRS and no evidence of ST/T wave changes or PR depression (Figure 1). A transthoracic echocardiogram performed at the time of admission revealed a small pericardial effusion (Figure 2). Laboratory results showed mild leukocytosis at 11.5 × 10³/μL and normal procalcitonin and troponin levels. A chest X-ray revealed a left pleural effusion and patchy consolidation concerning for multifocal pneumonia. The CT-pulmonary embolism protocol revealed patchy consolidation, ground-glass opacities within the lingula, a small left effusion, and no evidence of pulmonary artery dilatation or pulmonary embolism. Sepsis protocol was initiated, and she was started on empiric antibiotics and a pain regimen for concern of pneumonia triggering a sickle cell crisis and SLE flare. Her hospital course was complicated by recurrent fevers, tachycardia, dyspnea, and hypoxia with worsened chest pain. A repeat chest X-ray revealed worsening consolidations; a CT of the chest showed a moderate pericardial effusion (Figure 3). A repeat echocardiogram revealed a large pericardial effusion with tamponade physiology showing right atrial free wall buckling, right ventricular systolic obliteration, and inspiratory variations (Figure 4). Cardiology was consulted, and she underwent emergent pericardiocentesis using a subxiphoid approach with fluoroscopy. A decrease in mean pericardial pressure from 10 mmHg to 4 mmHg post pericardiocentesis is shown in Figures 5-6. The patient had marked improvement in her shortness of breath after the pericardiocentesis. Pericardial fluid studies suggested an inflammatory etiology, more likely an SLE flare than infection. After a pulse dose of steroids at 10 mg/kg and a new rituximab infusion, her dyspnea and chest pain improved. Her transthoracic echocardiogram at the time of discharge showed resolution of the pericardial effusion (Figure 7). She was discharged home on a steroid taper with outpatient rituximab infusions and close follow-up with cardiology, rheumatology, and her primary care physician.
FIGURE 3: The chest CT shows bibasilar consolidative opacities indicating an infection. A new moderate pericardial effusion is seen (blue arrow).
Discussion
Initially, there was no strong association in the literature linking SCD and SLE, although they have several overlapping clinical manifestations, including arthritis, anemia, fever, and renal, cardiovascular, and pulmonary involvement. However, co-existence of SCD and SLE is increasingly recognized and no longer appears to be so rare. The incidence of connective tissue diseases such as SLE in adult patients with SCD appears to be increasing. The exact causes underlying this increased risk are still unknown, but a link with B regulatory (Breg) cells is possible, as these cells suppress inflammatory responses and maintain tolerance [6]. A few published case reports have described pericardial effusion as a rare complication of SCD [7,8]. Diagnosing concurrent SCD and SLE is a challenge, as approximately 20% of SCD patients demonstrate positive antinuclear antibodies (ANAs) with titers greater than 1/160 [9]. An observational cohort study in London suggests a significantly increased incidence of connective tissue disorders like SLE in patients with SCD compared to the general population of a similar ethnic background [10]. However, the exact reasons for this heightened risk remain unclear.
Cardiac tamponade has not been described in the literature for SCD patients. The majority of cardiac manifestations of SCD involve pulmonary hypertension and diastolic heart failure [11]. The diagnostic challenge in identifying cardiac tamponade in this patient population is that acute chest syndrome can present similarly to cardiac tamponade, with chest pain and shortness of breath [12]. This is primarily the reason we decided to write this case report: to inform clinicians of this possible association. Earlier recognition of this life-threatening clinical condition could be lifesaving.
Cardiac tamponade is a life-threatening condition in which pericardial effusion impairs diastolic filling of the ventricles, thereby compromising cardiac output. Beck's triad for its recognition on physical exam comprises hypotension, elevated jugular venous pressure, and muffled heart sounds. In the era of point-of-care ultrasound, a large pericardial effusion can be assessed in a time-efficient manner. A combination of either atrial or ventricular collapse can be seen in up to 90% of cases of cardiac tamponade [13]. The recognition of cardiac tamponade prompts an urgent pericardiocentesis to relieve the pressure within the pericardial sac, often accompanied by placement of a pericardial drain. Further laboratory testing of the pericardial fluid specimen can clue the physician in to the etiology; in our case, it showed an inflammatory pattern consistent with an SLE flare. A repeat transthoracic echocardiogram is useful in assessing for re-accumulation of fluid and is used to time the removal of the pericardial drain.
Conclusions
Further investigation into the possible association between SCD and SLE and other connective tissue diseases is warranted to promote greater diagnostic vigilance among providers. It is imperative that clinicians promptly recognize and treat the life-threatening complications associated with each disease when they appear concurrently, as they might require more aggressive management.
FIGURE 1: The EKG shows sinus tachycardia with low voltage and electrical alternans.
FIGURE 2: The initial transthoracic echocardiogram on admission shows a small pericardial effusion (blue arrow) on the parasternal long-axis view.
FIGURE 4: A transthoracic echocardiogram in the apical four-chamber view shows rapid enlargement of a large circumferential pericardial effusion (blue arrow) with right atrial free wall buckling and right ventricular systolic obliteration.
FIGURE 7: The apical four-chamber view shows complete resolution of the pericardial effusion.
DETECTION OF HEAT-STRESSED FURNACE WALL'S AREAS BASED ON RESULTS OF NUMERICAL SIMULATIONS OF THERMAL FLOWS In this paper, a numerical study of the formation of nitrogen oxides in the combustion chamber based on the model created by Mitchellom and Terbellom. The distribution of furnace temperature and the concentration of nitrogen oxides, as well as a comparison of numerical results with the data of field experiment. Introduction In the field of minor power engineering using steam for heating and industrial purposes the steam boilers of DKVR type manufactured by Biysk Boiler Plant became highly popular in the middle of last century.The line of these boilers includes several types with steam producing capacity from 2.5 to 20 tons of steam per hour and using various types of fuel with grate-fired furnaces of different types -for combustion of solid fuel; or with gas-andoil burners for liquid and gaseous fuel combustion. In the course of the long term operation the majority of boilers have been upgraded and modified.The main reasons for this upgrade and modification were as follows: the need to transfer the boiler to some other fuel beyond its design parameters; the attempt to make the boiler more cost-effective; the change in the steam parameters and changes in the boiler load (due to the changes in the technology of the industrial steam use); the need to fix the consequences of operation disorders. Description of the model and the object of research The DKVR-20 boiler has been investigated which has been designed for natural gas flaring.The boiler is equipped with upper and lower drums located along the axis of the boiler.The drums are connected by bent steam generating tubes spread inside the drums and forming advanced convection tube bank.The circulation system of the boiler is fairly complicated: the staged evaporation system is used, which from one hand expands the ranges of natural waters used for boilers, on the other hand reliability parameters of circulation system require special attention in case of long term operation of the boilers. Due to this it is necessary to evaluate the changes in heat absorption by heating surfaces, reliability parameters of water-steam circuit hydrodynamics, temperature mode of pipes within evaporation elements.In order to evaluate the above the first stage includes variation calculations of conditions for fuel combustion, heat exchange and air mechanics in the furnace, results of these calculations are presented in this paper. One of the techniques of variation evaluation of heat-mass-exchange conditions in the furnace is mathematic simulation of furnace processes, and the current level of this method development allows solving this task numerically.The FIRE 3D software [1,2] is used in this paper after its adaptation to small furnaces and gaseous fuel. Numerical simulation allows achieving the following for the DKVR boiler: x Obtaining 3D visual models for each parameter being evaluated, x Visualizing the measurement of each parameter in plane and shifting section planes in depth, width and height within following ranges: Xfrom 0 to 10 m, У from 0 to 5.5 m, Zfrom 0 to 2.7 m. x Building up distribution dependence curves for temperature, radiation density, heat flow, flow rate of fuel-air mixture, pressure, oxygen concentration, density of gases, x Defining extremes of all above parameters, x Varying boiler load within ranges from 15 to 100 %. Results and discussion Fig. 
1 shows temperature and aerodynamic fields in the furnace of the DKVR-20-23 boiler with 100% load in the vertical section through the axis of two burners (z=1.43 m).In this section one can observe the uneven temperature field, which is due to burners location on the front side of the boiler in two rows (у 1 = 1.22 m; у 2 =2.3 m).This distribution of temperatures is defined by two horizontal flows which fill the middle and lower part of the burner in the even manner, and it correlates with the distribution of fuel-air mixture. In the lower part of the burner at the level of y=0.5 m the values of flow rate of combustion gases are within ranges 4-4.5 m/sec.The field of low flow rates is observed in the upper part of the furnace by the front screen and ceiling connection.The average flow rate in this field makes 1-2.5 m/sec.This is due to the fact that the whole flow is drawn to the horizontal gas duct through the furnace throat.The maximum gas flow rate achieves 15 m/sec near the burners. Operation conditions of the boiler have been investigated within the ranges from 15 to 100 % of nominal steam producing capacity through measuring the gas flow rate in case of regular excess of air in the furnace.In case of the load at 50 % and 100 % the fuel supply was through two burners and if the load is 15 % the fuel is supplied only through the lower burner.The obtained result allows analyzing the fuel combustion process, intensity of heat flows in the furnace not only under various operation modes of the boiler but in the local fields of the furnace.The opportunity to evaluate the heat physical conditions in the near wall region for any part of the screen regarding the height and depth of the furnace is especially interesting.Fig. 2. The changes in the average heat flow regarding furnace height for various operation modes of the boiler: 1 -15 % load out of the nominal one, the lower burner is operating; 2 -50 % load out of the nominal one, the lower burner is operating; 3 -50 % load out of the nominal one, two burners are operating; 4 -50 % load out of the nominal one, the upper burner is operating; 5 -100% load of the boiler. Conclusions This model can be used for defining temperature fields, fields of flow rates and heat flows when evaluating the reliability of operation modes of the DKVR boilers, including the head and hydraulic calculations and assessment of operation conditions for furnace screens. Fig. 1 . Fig. 1.Temperature field in the section as per burners' axis. Figure 2 Figure 2 shows the measurement of the average value of the mean heat flow regarding the furnace height as per horizontal section.Changes in the average heat flow have certain typical features.For the minimum selected load of the boiler (15 % out of the nominal one) the maximum of the average heat flow is at the level of 0.6-0.8m from the furnace floor (fig. 1, Curve 1).The heat flow has a similar distribution when the boiler operates at 50% of its nominal load with the lower burner being in operation (fig. 
1, Curve 2).If the installed burners are equally loaded (fig.2,Curves 3 and 5) the field of maximum values shifts to the level between axes of burners and makes 653.4 and 795.7 kW/m 2 respectively.Should the upper burner be involved into operation (fig.2, Curve 4) the maximum value of the average heat flow makes 562.6 kW/m 2 .The obtained result allows analyzing the fuel combustion process, intensity of heat flows in the furnace not only under various operation modes of the boiler but in the local fields of the furnace.The opportunity to evaluate the heat physical conditions in the near wall region for any part of the screen regarding the height and depth of the furnace is especially interesting.
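The height profile of the average heat flow shown in Figure 2 is obtained by averaging the simulated flux field over horizontal sections of the furnace. A minimal post-processing sketch of that averaging step is given below; it assumes the flux field has been exported from the simulation onto a regular grid spanning the furnace extents quoted above (X from 0 to 10 m, Y from 0 to 5.5 m, Z from 0 to 2.7 m, with Y the vertical coordinate). The array layout, function name, and synthetic test profile are our own assumptions for illustration and do not represent the FIRE 3D interface.

```python
# Minimal post-processing sketch: average heat flux versus furnace height,
# as in Figure 2. Assumes the 3D flux field was exported from the simulation
# onto a regular grid; the array layout and names below are illustrative only.
import numpy as np

# Furnace extents quoted in the paper (metres).
X_MAX, Y_MAX, Z_MAX = 10.0, 5.5, 2.7

def mean_flux_vs_height(flux):
    """flux[ix, iy, iz]: heat flux on a regular grid; y (axis 1) is furnace height.

    Returns (heights, mean_flux), where mean_flux[i] is the flux averaged over
    the horizontal section at heights[i].
    """
    ny = flux.shape[1]
    heights = np.linspace(0.0, Y_MAX, ny)
    mean_flux = flux.mean(axis=(0, 2))  # average over x and z at each height
    return heights, mean_flux

if __name__ == "__main__":
    # Synthetic stand-in for an exported field: a flux peak near the lower burner level.
    nx, ny, nz = 50, 28, 14
    y = np.linspace(0.0, Y_MAX, ny)
    profile_kw_m2 = 100.0 + 600.0 * np.exp(-((y - 0.7) / 0.6) ** 2)
    flux = np.broadcast_to(profile_kw_m2[None, :, None], (nx, ny, nz)).copy()

    heights, profile = mean_flux_vs_height(flux)
    peak_height = heights[int(profile.argmax())]
    # cf. the 0.6-0.8 m range reported for the 15% load case (Fig. 2, Curve 1)
    print(f"Averaged flux peaks at y = {peak_height:.2f} m above the furnace floor")
```

Applying the same routine to fields computed at different boiler loads would reproduce the family of curves in Figure 2.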
Economic profitability of crop rotation systems in the Caiuá sandstone area. Economic profitability of crop rotation systems in the Caiuá sandstone area : Even in areas of predominance of Caiuá sandstone, with soils of low natural fertility that are highly susceptible to erosion and degradation processes, farmers have adopted systems with little diversification, because they believe that they provide a greater economic return. However, agricultural practices such as crop rotation can bring agronomic benefits in terms of conservation agriculture, in addition to economic gains, circumventing edaphoclimatic difficulties in the region. In this context, the objective of this study is to verify whether no-till crop rotation systems are economically profitable, in a Caiuá sandstone area in the northwest region of the Brazilian state of Paraná. To this end, an experiment was conducted in the municipality of Umuarama, state of Paraná, in the crop year 2014/15 to 2016/17. The experimental design used random blocks, with four treatments and four repetitions. The treatments consisted of four crop rotation systems, involving wheat, black oats, canola, safflower, rye, crambe, beans, maize, fodder radish, soybean, sorghum, lupin beans, buckwheat, and triticale cultivars. Crop yields, operating costs, income, and net farm income were assessed. From the results, it was reported that the highest income was obtained in the systems that adopted the largest number of winter and summer commercial crops. Only one treatment was profitable, that is, it had a positive net farm income. This scenario may be associated with the fragility of the region’s soil, which having low fertility, requires a high investment in fertilization and liming to ensure adequate production. INTRODUCTION The choice of farmers for systems with little diversification is largely justified by the short-term economic return (MARCELO et al., 2012). However, these systems go against what is recommended from a technical-agronomic point of view and conservation agriculture (KASSAM et al., 2009;TELLES et al., 2019). This is because the adoption of these cultivation systems may lead to several obstacles in the sustainability of agricultural production, such as the development of pests, diseases, weeds, and nematoids, and may result in loss of soil quality, compromising its productive capacity (VEZZANI; MIELNICZUK, 2009). Thus, systems based on specialization in a few crops Volsi et al. are becoming less efficient and sustainable, due to stagnated production and increased production costs (TILMAN et al., 2019). Sustainability in agriculture remains relevant in the Brazilian agricultural scene, with producers increasingly searching for systems aiming to mitigate the environmental impact of extractive agriculture and food security of production (COSTA, 2010). Thus, crop rotation stands out as a sustainable agricultural practice, because when carried out continuously, it leads to structural and physicochemical soil improvements (CASTRO et al., 2011;BORTOLUZZI et al., 2010). The economy of the Umuarama region in the northwest of the Brazilian state of Paraná is directly linked to agriculture. One of the characteristics of the region is the predominance of sandy soils, derived from Caiuá sandstone, which have low natural fertility and are highly susceptible to erosion and degradation processes. One of the highest mean temperatures in Paraná is also recorded there, with a significant annual thermal amplitude (SILVA et al., 2015). 
In addition, according to data obtained from Municipal Agricultural Production (PAM) from the Brazilian Institute of Geography and Statistics (IBGE), in 2017, the cultivation of soybean (38%), sugar cane (23%), and cassava (8%) was predominant in areas occupied with temporary crops. In other words, these crops occupied about 70% of the region's agricultural area, denoting low diversity. In general, these data confirmed the occurrence of poorly diversified systems, a fact that opposes the model recommended by conservation agriculture, namely, crop rotation systems (CHAVAS, 2008). Crop rotation increases the levels of organic carbon, nitrogen, and the overall amount of nutrients readily available to plants, contributing to the preservation of soil quality (LISBOA et al., 2012). However, these benefits depend on the species and crop sequences adopted by the producer (FONSECA et al., 2007;PERIN et al., 2004). Therefore, commercial plants are preferentially recommended and, whenever possible, associated with regionally adapted roofing plant species producing large quantities of dry matter and rapidly developing (MACHADO; ASSIS, 2010). The sowing of grasses and legumes, whether grown alone or in association, can also be considered, as they contribute to a greater balance of this system as a whole. Furthermore, rotation is an alternative to grain producers, because other crops can also be used in the production system, both agronomically benefiting the rotation system and generating economic gains (FONTANELI et al., 2000). Although, the technical-agronomic benefits of no-till crop rotation systems are well described in literature (MALÉZIEUX et al., 2009;BERTOL, 2004;MCGILL et al., 1984;VIEIRA;MUZILLI, 1984), studies on the economic advantages resulting from their adoption are incipient (AL-KAISI et al., 2016;AL-KAISI et al., 2015;GRASSINI et al., 2014;GENTRY et al, 2012), particularly in the Brazilian reality (FUENTES-LLANILLO et al., 2018;LEAL et al., 2005;SANTOS et al., 1999) and especially for the region where Caiuá sandstone is predominant. The low adoption of conservationist production systems may be because farmers do not see an economic return with crop rotation, especially in the short term. In this context, the hypothesis is that more diversified crop rotation systems may be more profitable than less diversified systems. Bearing this in mind, the objective of this study was to verify whether no-till crop rotation systems are economically profitable in a Caiuá sandstone area in the northwest region of Paraná. MATERIALS AND METHODS The experimental area is located in the municipality of Umuarama, State of Paraná, Brazil, and conducted at the Agronomic Institute of Paraná. It is located geographically at 23º 44'South and 53º 17'West, at an altitude of 480 m. The soil is classified as a dystrophic red oxisol, with flat or slightly undulating relief (SANTOS et al., 2013) and sandy and medium texture, associated with the sandstones of the Caiuá Formation (CUNHA et al., 2012). According to the Köppen classification, the region has a climate of Cfa type, humid subtropical, with an annual average temperature of 22.2 °C and annual average rainfall of approximately 1544 mm. As for the climatic conditions of crop years 2014/15, 2015/16, and 2016/17, the graph of maximum and minimum daily temperature and the 10-day water balance of the experiment is presented (Figure 1), using the Tornthwaite and Mather method according to the spreadsheets by ROLIM et al. (1998). 
The experimental design was in random blocks, consisting of four blocks and four treatments. Each treatment relates to a different production system (Table 1), with each plot measuring 10 m × 30 m (300 m 2 ), spaced 10 m from each other to leave room to maneuver machines. In the 11 years before the experiment was installed, the area had been planted in a no-till system. Each production system had a distinct purpose. Treatment I aimed to obtain the maximum amount of straw, giving way in winter to black oats, fodder radish, and rye crops. Treatment II was a little exploited system but had commercial potential. Treatment III aimed to produce crops linked to agroenergy, such as canola, crambe, and safflower. Treatment IV aimed to have the greatest diversification of cultures. From table 2, we can observe the genotypes of the different species used in each production system and their respective sowing dates. The soybean seeds used were of the BMX Potência cultivar in the 2014/2015 and 2015/2016 crops, and of the Ícone cultivar in the 2016/2017 harvest. For the cultivation of maize, the 30A95 cultivar was used. Winter crops were sown between March and May and summer crops in October. For the economic analysis, all services and inputs used in each production system were considered. For the calculation of the cost of machinery operations, such as machinery rental and labor, a medium-sized rural property was assumed, that is, a rural property with an area between 4 and 15 "fiscal modules", one "fiscal module" in the Umuarama In the analysis of operational costs, all of those related to production were considered, that is, expenditure from soil preparation to harvest, following the methodology by KAy et al. (2014). To compose the costs of sowing, spraying, and harvesting operations, the technical coefficients of the Experimental Station were used. The values for the machinery operating and for the inputs used were all extrapolated per hectare. To obtain these costs, a survey on the average values paid by producers in August 2014, 2015, and 2016 was undertaken based on information obtained from at least three cooperatives or companies in the Umuarama region. Net farm income was calculated by subtracting each treatment's operating cost from its income. Revenue calculation was based on the average production obtained in each production system, multiplied by its respective selling price at the time of harvest. Productivity, in turn, was obtained from weighing the harvested grains coming from the useful area of the plots and extrapolating the values to kgha -1 , corrected to 13% humidity (wet weight). All economic indicators were corrected to June 2019 values using the Extended National Consumer Price Index (IPCA), the official inflation index in Brazil. Values were converted to US dollars based on the current exchange rate. Table 3 presents the productivity results for each crop rotation system in the three crop years of the experiment. It was found that overall, soybean productivity was below the average of the state of These low yields are mainly due to the morphological characteristics of the soil derived from the Caiuá sandstone formation, which is highly susceptible to weathering (BARBOSA et al. 2013). The water deficit may also be responsible for low yields, because it has a negative impact on plant growth and development, especially when it occurs in the period of flowering and grain filling (GAVA et al., 2015;TAVARES et al., 2013;SANTOS et al., 2012;FIOREZE et al., 2011). 
Water deficit was observed in the summer of 2014/15, between October and January; in the winter of 2015/16, in March and August; and in the winter and summer of 2016/17, between March and April and between September and January (Figure 1). RESULTS AND DISCUSSION High and low temperatures also contributed to the lower growth of plants. The main crops affected were safflower, triticale, buckwheat, sorghum, and canola. In the case of buckwheat, it was the low temperature observed in the winter of 2015/17, which reached 5 °C. The damage caused by frost starts with an air temperature below 3 °C, since there is a difference of 2.1 °C to 4.8 °C between the air temperature under the cover and the grass (SILVA; SENTELHAS, 2001). Regarding soybean, in the summer of 2016/17, only Treatment IV showed soybean productivity (3,780 kgha -1 ) slightly above the state average (3,741 kgha -1 ). This treatment adopted a rotation system with the greatest crop diversification. The worst performance was obtained in Treatment I (3,285 kgha -1 ). In the case of maize cultivation, only Treatment II in the summer of 2014/15 and Treatment IV in the summer of 2015/16 presented productivity higher than the average of Paraná of 8,025 kgha -1 (CONAB, 2018). The best performance for maize was observed in Treatment II (8,324 kgha -1 ), in which maize was grown shortly after the triticale harvest, while the lowest productivity was in Treatment III in the summer of 2015/16 (7,926 kgha -1 ), shortly after canola cultivation. Crops such as safflower, crambe, and especially triticale, which have productive potential, tolerance to soil acidity, and good cycling and weed suppression capabilities (BRANCALIÃO et al., 2015), presented results below those reported in other studies. However, it is important to emphasize that the benefits of rotation can be significant, since the more diversified systems presented the best results, mainly due to the physical improvements that may occur in the soil, allowing the crop to develop properly and resulting in higher productivity in the following crops (FONSECA et al., 2007). Table 4 presents the income, operating costs, and net farm income per hectare for each crop rotation system for the three crop years of the experiment. From the income data, it was reported that Treatment I, whose winter crops were not marketed, presented the lowest result. Thus, the highest income occurred in Treatment IV (US$ 4,993.07), followed by treatments II (US$ 3,818.11), III (US$ 3,710.87), and I (US$ 2,681.20). Regarding operating costs, the highest accumulated expenditure was observed in Treatment II (US$ 4,617.84), followed by IV (US$ 4,292.86), III (US$ 4,013.13), and I (US$ 3,331.88). On average, the cost of inputs accounted for approximately 54% of production costs, machinery operational costs for around 30%, and other costs for around 15.8%. It is worth noting that machinery operating was accounted for as outsourced services, which may have resulted in higher expenses with this item; and consequently in increase in production costs. Quantification and analysis of the variables that make up production costs and revenues are of utmost importance for the rural producer's decision-making. However, this analysis requires a certain amount of caution, since higher costs do not necessarily mean lower profits, and conversely, lower costs do not necessarily mean higher profits. 
Investments in farming, especially in technologies and inputs, such as genetically modified seeds, may, on one hand, generate higher production costs, but on the other hand, generate higher revenue. This is because these investments, expressed in production costs, can bring improvement in plant development, increasing productivity; and consequently, the producer's income (ARTUZO et al., 2018). Regarding the net farm income, a profitability indicator of production systems, the only positive result was observed in Treatment IV (US$ 700.20), while the others generated losses: III (US$ −302.26), I (US$ −650.68), and II (US$ −799.73). Treatment IV stood out for having presented good soybean productivity in the summer of 2016/17 and for the high market value of maize and beans in the 2015/16 agricultural year, resulting in the highest accumulated revenue. Thus, even with high variable production costs, Treatment IV obtained the highest revenue. This result shows that more diversified production systems are more profitable. The benefits of this more diverse system of production are also expressed in the scope economy (reduction of the cost per unit area due to the production of multiple crops). In the composition of this rotation system's costs, the expenditure on inputs in relative terms was 55.5%. Among input costs, those destined to acquire highertechnology seeds stand out, representing 15.5% of the total cost -a value higher than those observed for the same component in the other treatments evaluated in this study. However, the use of higher-technology seeds, considered an investment, was converted into reduction in spending on fertilizers, agrochemicals (such as herbicides, insecticides, and fungicides), and machinery operating costs. With the reduced use of fertilizers, agrochemicals, and fuel, this agricultural production system also becomes more sustainable. The average revenue was US$ 822.96 for winter crops and US$1,115.71 for summer crops. The average variable cost was US$ 568.19 for winter crops and US$ 862.77 for summer crops. The crop with the highest production costs was Carioca beans, which in the winter of 2015/16 cost US$ 1,056.34 to produce, mainly due to high expenditure on inputs. Overall, crop rotation systems, planned with a wide diversification of commercial plants, as was the case of this treatment, were able to present more profitable results compared with those with fewer commercial crops, in accordance with what was reported in the municipality of Passo Fundo, in the Brazilian state of Rio Grande do Sul (SANTOS et al., 2004), in the Midwest region of the United States (GOPLEN et al. 2018), and in Chile (GONZÁLEZ et al., 2013). In Treatment III, a production system in which winter crops with low production costs were adopted, such as crambe, canola, and safflower, it was not possible to obtain profitable results, even though all winter and summer crops were commercialized. The average revenue was US$ 312.24 for winter crops and US$ 924.72 for summer crops. The average variable cost was US$ 503.32 for winter crops and US$ 834.39 for summer crops. Only maize and soybean crops showed a positive net farm income. The maize grown in the summer of 2015/16 stood out as more profitable due to the high market price during that harvest period. Soybean stood out for its productivity, which was close to the state average. Sorghum grown in the summer of 2014/15, due to its high production cost, had the worst gross-margin result. 
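The net farm income values reported above follow directly from the accumulated income and operating-cost totals quoted for each treatment (Table 4). As a check on that arithmetic, the short sketch below recomputes them; the dictionaries simply restate the per-hectare totals given in the text, and the function name is ours.

```python
# Net farm income = accumulated income - accumulated operating cost,
# per treatment and per hectare (US$), using the totals quoted in the text.

income = {"I": 2681.20, "II": 3818.11, "III": 3710.87, "IV": 4993.07}
operating_cost = {"I": 3331.88, "II": 4617.84, "III": 4013.13, "IV": 4292.86}

def net_farm_income(income_usd, cost_usd):
    """Return net farm income per treatment, rounded to cents."""
    return {t: round(income_usd[t] - cost_usd[t], 2) for t in income_usd}

if __name__ == "__main__":
    for treatment, nfi in sorted(net_farm_income(income, operating_cost).items(),
                                 key=lambda item: item[1], reverse=True):
        print(f"Treatment {treatment}: US$ {nfi:+,.2f} per ha")
    # Only Treatment IV comes out positive (about US$ +700.21); the other three are
    # losses, matching the ranking reported in the text. The one-cent difference from
    # the published US$ 700.20 reflects rounding in the source totals.
```

Revenue itself was grain yield, corrected to 13% moisture, multiplied by the selling price at harvest, with all values deflated by the IPCA and converted to US dollars, as described in the methods.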
Overall, fertilizers were the input with the greatest participation in operating costs, and in the case of sorghum, it represented about 60% of the expenditure on inputs. Regarding the low revenue received from the sale of winter crops, especially crambe and canola, none of them showed a positive net farm income. In Treatment I, the associations of noncommercial crops with all winter crops were the main contributor to the negative profitability result. Even though this treatment comprised different winter crops with low production costs, revenue acquired only from the sale of summer crops was not sufficient to cover all expenses. This system showed the lowest operating costs, averaging US$ 265.02 for winter crops and US$ 845.61 for summer crops. Since winter crops were not commercialized, the phytosanitary management adopted proved to be less rigorous, requiring a lower amount of inputs for the development of the crops and thus reducing spending. As in winter species were cultivated only for plant cover in this system, the execution of the experiment for three crop years may not be sufficient to obtain the expected results, especially regarding productivity gains (NUNES et al., 2006). Thus, even with the system showing a small revenue in this first crop rotation cycle, in the long term, the production of chaff can positively influence the profitability of successor crops (VALICHESKI et al., 2012;LEAL et al., 2005), since the plant cover contributed positively in several factors, such as weed suppression (KOOCHEKI et al., 2009) and reduced soil compaction (DEBIASI et al., 2010). Treatment II was the least profitable, even though all winter and summer harvests were marketed and its revenue was the second highest. The negative result was mainly due to the high production cost of this treatment's crops. The highest operating costs were verified in this system, at US$ 658.31 for winter crops and US$ 880.96 for summer crops on average. Triticale had the highest production costs because of phytosanitary problems in the winter of 2016/17, leading to an about 7.5 times greater expenditure on fungicides compared with that for the winter of 2014/15. Due to the high costs in winter crops, there was loss of profitability in the production system, these crops usually generate a lower income than summer crops. It is worth noting that no winter culture managed to obtain a positive net farm income. In addition, the low market prices of both triticale and sorghum compromised this system's income. Thus, information about production costs and profitability is of paramount importance for producers, since an optimal combination of resources can help in choosing the most appropriate production system to their rural producer reality (GERLACH et al., 2013). Since many producers have the possibility of storing the grain in silos in Southern Brazil, especially in cooperatives, the marketing system has become different from that in the rest of the country. This is because in the first year after the harvest of soybeans and maize, there are no administrative costs with storage, allowing a decision on when to sell stored products, with greater caution on the part of producers. Considering that soybean and maize can be sold at the peak prices recorded in each quarter over the 12 months after harvest, the results could show a differentiated trend for better or worse, considering that current prices at the time of the sale may be higher or lower than at the time of the harvest. 
Figure 2 shows the evolution of the prices of 60 kg sacks (in US$) of soybean and maize from July 2014 to May 2018. Results indicate that, even if soybean and maize were sold at peak prices, the conclusions regarding the economic profitability of the production systems analyzed in this study would not change: only Treatment IV would still have a positive net farm income. From these findings, it is evident that market conditions shape the farmer's profitability and may influence the result of the analysis, both positively and negatively, according to daily variations in the market prices of grains (LEHMANNA et al., 2013). However, for the cycle from 2014/15 to 2016/17, it was reported that in the region of Umuarama, an area where Caiuá sandstone is predominant and which has low-fertility soils, even rotation systems with a broad crop diversification presented negative final results. Results may be associated with the fragility of soils in the region, which, due to low natural fertility, require greater investment in fertilization and liming to ensure adequate production. This is evidenced by the fact that even with high fertilizer spending, soybean production was almost always below the average of Paraná (62 sacks per hectare). In addition, many of the crops selected for winter production had negative profitability and ended up compromising the economic results of production systems. An example of this was the loss generated in winter with crops such as black oats, canola, safflower, crambe, sorghum, buckwheat, and triticale, and the low profit obtained from bean production.
CONCLUSION
Only the rotation system with the greatest crop diversification (Treatment IV) was profitable, with a positive net farm income. Although it had the second highest cost of production, it was also the one that generated the highest income, thus showing that this production system's higher cost may be more than offset by its revenue. The largest revenue was recorded in the most diverse rotation systems, which adopted the largest number of commercial crops both in summer and winter, especially in bean cultivation. Results obtained in this study indicated that more diversified crop-rotation production systems are more profitable, and this is an important indicator to promote and accelerate the adoption of more sustainable technologies. Regarding the limitations of the study, it is worth noting that, although treatments were devised to obtain the best sequence of plants adapted to the region, this represents an experimental condition that may not accurately reflect the reality of rural producers. Moreover, although the results obtained in three agricultural years are consistent, for a more precise analysis of the profitability of crop rotation systems and the definition of their benefits, it would be appropriate to conduct at least one more cycle of the experiment. In addition, disbursements for machinery operations, recorded as outsourced services, were relatively high, which may have increased production costs, negatively affecting income indicators. However, no remuneration has been computed for the farmer. Production cost estimates made without considering the remuneration of the rural owner (or owners, in the case of commercial partnerships) may therefore result in values below the actual cost.
It is important to emphasize that the present study does not consider opportunity costs and does not include an economic viability analysis, both important indicators for the producer's decision-making and for the management of a rural property. The incorporation of these indicators should be considered in future studies.
ACKNOWLEDGEMENTS
To the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) for providing financial support [grant number 429050/2016-0]. To the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for financing part of the study [code 001], and for the scholarship of the first author. To the Fundação Araucária for the scholarship of the third author.
CORPORATE ENVIRONMENTAL REPORTING PRACTICES IN FINLAND : A REVIEW AND AGENDA FOR FUTURE RESEARCH The thinking, as summarized by Jones (2010), of writers such as Naess (1985) and Rolston (1985) sheds light on a relatively new theoretical perspective that humans are both parts of and apart from the natural environment. Jones (2010) explains further that humans, as suggested by the theory of evolution, have evolved through the process of natural selection from within the animal kingdom, but through manipulative technology the natural environment is being shaped by humans increasingly and intermittently and this is how humans are both parts of and apart from the natural environment. Jones (2010) argues that human impact, particularly industrial activity, is directly responsible for incidents (e.g., the Exxon Valdez oil spill in Alaska in 1989; Chernobyl disaster in 1986; Bhopal gas tragedy in 1984) that have put the natural environment under threat. The consequences of industrial activities include global warming, erosion of ozone layer, a decline of biodiversity, acid rain and global water crisis (Balali et al., 2009; DeCanio, 1992; Morisette, 1989; Pretty, 1990; Regens & Rycroft, 1988; Sahay, 2004). These environmental problems or threats which Beck (1992, 1999) theorizes as environmental risks neither observe geographical boundaries nor do they differentiate rich and powerful from poor and powerless (Beck, 1992, 1999; Jones, 2010; Sahay, 2004). In the face of such environmental risks, “managing environmental responsibilities has become an integral part of doing business in the global economy” (Sahay, 2004, pp. 12-13). Moreover, public awareness of the role that corporations play in environmental change is increasing (Braam et al., 2016) and compelling management to build synergy between their economic and environmental policies (Sahay, 2004). Various stakeholders, as evidenced by the worldwide growth in Abstract INTRODUCTION The thinking, as summarized by Jones (2010), of writers such as Naess (1985) and Rolston (1985) sheds light on a relatively new theoretical perspective that humans are both parts of and apart from the natural environment. Jones (2010) explains further that humans, as suggested by the theory of evolution, have evolved through the process of natural selection from within the animal kingdom, but through manipulative technology the natural environment is being shaped by humans increasingly and intermittently and this is how humans are both parts of and apart from the natural environment. Jones (2010) argues that human impact, particularly industrial activity, is directly responsible for incidents (e.g., the Exxon Valdez oil spill in Alaska in 1989; Chernobyl disaster in 1986; Bhopal gas tragedy in 1984) that have put the natural environment under threat. The consequences of industrial activities include global warming, erosion of ozone layer, a decline of biodiversity, acid rain and global water crisis (Balali et al., 2009;DeCanio, 1992;Morisette, 1989;Pretty, 1990;Regens & Rycroft, 1988;Sahay, 2004). These environmental problems or threats which Beck (1992Beck ( , 1999 theorizes as environmental risks neither observe geographical boundaries nor do they differentiate rich and powerful from poor and powerless (Beck, 1992(Beck, , 1999Jones, 2010;Sahay, 2004). In the face of such environmental risks, "managing environmental responsibilities has become an integral part of doing business in the global economy" (Sahay, 2004, pp. 12-13). 
Moreover, public awareness of the role that corporations play in environmental change is increasing (Braam et al., 2016) and compelling management to build synergy between their economic and environmental policies (Sahay, 2004). Various stakeholders, as evidenced by the worldwide growth in The research in the area of corporate environmental accounting and reporting in the context of Finland is scarce. This paper outlines the studies conducted to date on Finnish firms' environmental reporting practices with a view to discovering research gaps in the literature concerning environmental accounting and reporting in the Finnish context. The paper adds to the existing literature by identifying research gaps such as the antiquity of datasets used in the previous studies, the risk of failure to generalize the findings of the prior investigations and most importantly the research negligence towards the impact of Finnish firms' activities and operations on climate change and changes in biodiversity. Hence, the paper has implications for researchers, who could address the identified void in future research and thereby advance further the literature concerned with environmental accounting and reporting. Policy makers could also benefit from this paper as its findings could help them formulate necessary disclosure requirements for the improvement of corporate environmental reporting practices in Finland. This paper focused only on the studies on Finnish firms and thereby limited the scope for any comparison between Finland and other Nordic countries as far as research on environmental reporting practices is concerned; this is the principal limitation of this study. Keywords: Corporate Environmental Reporting, Finland, Review, Agenda for Future Research, Climate Change, Biodiversity corporate responsible investments, are urging companies to become more responsible for the impacts that their decisions and activities have on the environment and are putting pressure on them to assume greater responsibility for sustainable development 1 (Braam et al., 2016). Along with stakeholders, a variety of environmental laws, rules and agreements and market-oriented emission-trading schemes encourage companies to become more accountable for environmental issues (Braam et al., 2016;Sahay, 2004) leading to the demand for increased information transparency regarding environmental concerns (Meng et al., 2014) as such transparency rationalizes the expectations of investors and other stakeholders for the corporate environmental responsibility (Giannarakis et al., 2017;Liao et al., 2015). Stakeholders' demand for environmental information transparency can be met by adopting corporate environmental reporting (CER) practices. CER is a process through which "companies often disclose environmental information to their stakeholders to provide evidence that they are accountable for their activities and the resultant impact on the environment" (Lodhia, 2006, p.65). CER, which is a sub-division of the larger area of corporate social reporting, has attracted attention from researchers for three decades (Sahay, 2004). In the 1970s, the limitations of the traditional management paradigm were being questioned and researchers were exploring the linkages between accounting, organizations and society, but the concern turned more specifically to environmental issues in the 1990s (Jones, 2010). 
Most of the world's biggest companies have already adopted corporate social and environmental reporting practices (KPMG, 2017) and in recent years, improvements have been found in the general quality of the disclosures and comparability of the information reported; the breadth of topics discussed has widened as well (Vinnari & Laine, 2013 Hence, the primary objectives of this paper are two-fold. First, to outline the studies that have been undertaken on corporate environmental reporting practices in the Finnish context. Second, to identify potential avenues for future research on corporate 1 In 1987, the World Commission on Environment of the United Nations Organization defined sustainable development as development that meets the needs of the present generation without compromising the ability of future generations to meet their own needs. environmental reporting in Finland. The paradoxical nature of the natural environment in Finland makes the country a relevant geographical area for this paper. The findings presented in a general report titled 'State of the Environment in Finland 2013' 2 show an improvement in the state of the environment with a decrease in air and water pollution; a decline in emissions has also been reported, the credit thereof being given to advances in fuel technology and improvements in industrial processes and treatment technologies as well as use of natural resources from overseas on which a considerable share of Finland's economic growth in recent decades has been based. The emissions of sulphur and nitrogen oxide have declined by more than three quarters since 1990 (measures taken to reduce ammonia emissions have not been that effective though) and discharges from industry and communities have reduced sharply since 1980 2 . Despite these visible improvements, a number of serious environmental threats still exist. Serious problems like climate change and biodiversity loss remain unresolved 2 . The average temperature has increased by nearly one degree in Finland over the last hundred years, warming is most intense in spring time 2 . Approximately one-tenth of Finnish species were threatened in 2010 2 . In addition, rivers still carry high quantities of nutrients and since the 1990s, the nutrient balance of cropland has declined in Finland, with the phosphorus balance, in particular, falling by up to one quarter from 1996 to 2011 2 . Although the status of the easternmost part of the Gulf of Finland has improved in recent years (thanks to water protection measures and more efficient wastewater treatment), many small lakes in Southern Finland suffer from eutrophication 2 . High nutrient concentrations are also degrading the status of rivers 2 . In the coastal region, the status of the Archipelago Sea and the Gulf of Finland is alarming 2 . The presence of these contradictions in the natural environment in Finland calls for the current review. Though this paper reviews the research on environmental reporting practices in Finland, it does not belittle the importance of such research in other parts of the world as this type of studies have implications for investors, policy makers and corporate managers across the globe. By way of example, the value relevance of environmental information can be considered. 
Research confirms that environmental information is relevant to investors ; the disclosure of environmental information in addition to financial information can decrease the information asymmetries between a company and its external shareholders (Myers & Majluf, 1984) and can lead to a higher market valuation of its shares (Healy & Palepu, 2001). Therefore, corporate managers can increase the informativeness of share prices through environmental reporting. Policy makers can also play an important role in this issue by formulating relevant disclosure policies for the improvement of corporate environmental reporting practices. The rest of the paper proceeds as follows. Section 2 reviews prior studies on environmental reporting in Finland. Section 3 discusses possible avenues for future research. Section 4 concludes the paper. PRIOR STUDIES ON FINNISH FIRMS' ENVIRONMENTAL REPORTING In this section, we aim to review prior studies conducted in the context of Finland. The review begins with the study conducted by Niskala and Pretes (1995), who draw a sample of 75 largest Finnish firms from the most environmentally sensitive industries. They analyze the annual reports of these firms at two points in time: 1987 and 1992. Using the technique of content analysis, these annual reports are scrutinized with a view to determining the type of environmental information disclosed in them. The researchers gather three types of environmental information namely, qualitative, quantitative and financial. Qualitative environmental information refers to all verbal disclosures, whereas quantitative and financial environmental information includes information on environmental measures (e.g., emission levels) and all environmental information expressed in monetary terms respectively. The results of this study reveal that most of the disclosures are qualitative in nature. The findings indicate further that though the disclosure level has increased significantly from 1987 to 1992, less than half of the sampled firms are disclosing environmental information. The results are frustrating as these firms are selected from the highly environmentally sensitive industries. The authors also report that the environmental reporting of Finnish firms is less common compared to other European countries. Halme & Huse (1997) survey annual reports (of 1992) of 140 firms from Finland, Norway, Sweden and Spain in order to examine the influence of corporate governance, industry and country factors on environmental reporting. Their study offers some interesting findings. They find the industry to be the most influential factor in explaining the level of environmental disclosure in corporate annual reports as corporations that have been traditionally heavy polluters report the most on the environment. The researchers have not found corporate governance variables to be significantly associated with the level of environmental reporting. Another interesting finding of the study is that Finnish firms are less attentive to the environment than Norwegian and Swedish firms. The researchers make a mention of Finland's industrial culture as a possible explanation thereof: Finnish firms are "reluctant to use environmental issues as competitive or marketing factors" (p.153); moreover, emissions from industrial plants are closely monitored by authorities for decades and in addition, information on emissions is accessible to the public. 
Niskanen and Nieminen (2001) examine the objectivity of listed Finnish firms' environmental disclosures in their annual reports. For this purpose, the authors review the annual reports of 27 listed Finnish firms (12 firms from the forest industry and 15 from other industries) for a 12-year period from 1985 to 1996. In this study, 'objectivity' is defined as the even-handedness of a firm in reporting positive and negative environmental issues relating to its operations. The findings of the study indicate that the percentage of negative events reported (14.0 percent) in the annual reports of the sampled firms is much smaller than the corresponding percentage of positive events (83.6 percent). The researchers divide the data collection period into two sub-periods, 1985-1991 and 1992-1996, and discover no mention of any negative environmental issue before 1992. The study reveals further that environmental investments are the most reported positive issue, whereas occasional emissions and restrictions set by authorities are rarely disclosed negative issues; most strikingly, the firms make no disclosure at all on legal actions taken against them concerning their environmental behaviour. In a nutshell, the study suggests that the environmental reporting of listed Finnish firms may not be objective. A further study examines the relationship between organization types and corporate social responsibility reporting using three case organizations, including listed firms and a cooperative. The study reveals that all three case organizations have reported their environmental issues for years. An interesting finding of the study is that though all the case organizations have reported negative news on their environmental impact, only the listed firms have reported how they solved the negative issues; the cooperative falls behind them in this regard. Kotonen (2009) conducts a cross-sectional study on the formal corporate social responsibility (CSR) reporting practices of large Finnish listed companies. The sample of the study includes 31 large Finnish companies listed on the OMX Nordic Exchange Helsinki. The author analyzes qualitative data consisting of annual reports and, where applicable, formal CSR reports (for 2006) of the sampled companies. The author reports that most of the companies use the Global Reporting Initiative (GRI) guidelines, either strictly or to an appropriate extent. The study finds that the companies' CSR reporting pays the most attention to environmental responsibilities; the companies are found to have reported on environmental management, strategy, targets and their implementation, environmental investments, environmental risks and environmental certifications. The companies are also found to have disclosed other environmental themes such as emissions, waste, water and electricity consumption, energy efficiency, bioenergy, raw materials, material flows and transportation, recycling, climate change, economic safety and ecological footprint, indicating that the environmental information reported is both qualitative and quantitative in nature. Vinnari & Laine (2013) undertake a qualitative field study to examine the factors contributing to the rise and subsequent fall of environmental reporting practices within the Finnish water sector from the late 1990s onwards.
The researchers study five water utilities and, for the purpose of collecting data, conduct semi-structured interviews with 18 individuals as well as analysing the annual reports and the different types of stand-alone social and environmental reports published by the water utilities under study between 1997 and 2010. They also examine professional journals and event programmes published in this period (i.e., 1997-2010) with a view to obtaining supplementary insights. The findings of the study reveal that a variety of factors contribute to the diffusion and subsequent decline of environmental reporting practices in the Finnish water sector: the initial adoption of environmental reporting may be explained from the perspectives of fad and fashion, while the subsequent decline of such reporting may be driven by internal organizational factors and a lack of outside pressure.

Taken together, the studies reviewed above leave several gaps. First, the datasets they rely on are dated and may no longer reflect current reporting practice. Second, the studies put emphasis on a particular type of company. For example, the study undertaken by Niskala and Pretes (1995) samples only environmentally sensitive companies, whereas Halme & Huse (1997) and Kotonen (2009) concentrate on other specific groups of firms. Third, the issues of climate change and changes in biodiversity due to the industrial activities and operations of Finnish firms have not received adequate attention from researchers in the corporate environmental accounting and reporting field. Climate change, which is thought to be caused by greenhouse gas (GHG) emissions, is one of the principal environmental risks in today's world (Jones, 2010). Growing concern over the issue of climate change, coupled with increasing environmental consciousness among the public, has led firms to adopt environment-friendly strategies contributing to the global target of reducing GHG emissions (Giannarakis et al., 2017). Climate is an integral part of ecosystem functioning, and climate change has impacted ecosystems (e.g., terrestrial and marine ecosystems) and subsequently human lives (Giannarakis et al., 2017). Finland is already affected by climate change, and the effects of such change on weather conditions and biodiversity are clearly visible. For example, many northern and southern species found in Finland are affected by climate change; in winter, many snow- and ice-dependent species are at risk of disappearing altogether, and in spring and summer the probability of forest fires increases due to climate change. Moreover, climate change can also facilitate the spread of foreign species to Finland. Hence, climate change and changes in biodiversity are two crises that must be tackled together. Consequently, researchers in the field of environmental accounting and reporting are increasingly becoming interested in the issues of climate change and accounting for biodiversity (Giannarakis et al., 2017; Schneider et al., 2014), but such research is surprisingly lacking in the Finnish context.

SCOPE FOR FUTURE RESEARCH

The afore-mentioned gaps pave the way for future research that could be conducted in the Finnish context.

CONCLUSIONS

There is a paucity of research in the area of corporate environmental accounting and reporting in the context of Finland. This paper outlines the studies conducted to date on Finnish firms' environmental reporting practices. The paper adds to the existing literature by identifying a number of research gaps concerning the corporate environmental accounting and reporting practices of Finnish firms.
For instance, the datasets used in the previous studies are outdated and hence risk failing to reflect the current status of corporate environmental reporting practices in Finland; the findings of the prior investigations may not be generalized across industrial sectors, as researchers have paid attention only to a particular type of company; and the last, but perhaps the most important, research gap exists because of the research negligence towards the impact of Finnish firms' activities and operations on climate change and changes in biodiversity. Hence, the paper has implications for researchers, who could contribute to and thereby advance the literature concerned with environmental accounting and reporting by addressing the lacunae identified herein. This study would also be useful for policy makers, as they could use its findings to develop related disclosure requirements for the improvement of corporate environmental reporting practices. The government should also take appropriate steps so that Finnish companies disclose more important information about the natural environment. For example, information relating to GHG emissions, water consumption, energy consumption and production of hazardous waste could be of relevance to various stakeholders. However, this paper is not without limitations. First, the paper reviewed the environmental reporting practices of Finnish companies only. For broader comparability purposes, studies on the environmental reporting practices of other Nordic countries could have been reviewed; such a review would have provided a greater understanding of the relative position of each Nordic country as far as research on environmental reporting practices is concerned. Second, this study is purely conceptual and therefore did not perform any statistical analysis. A comprehensive analysis of Finnish data could reveal further the current status of corporate environmental reporting practices in Finland. These shortcomings could be overcome in future research.

Related studies on CSR and environmental disclosure (sample, period and key findings):
- Period 2013-2014. Both the quality and the quantity of CSR disclosure are significantly associated with firm value measured by market capitalization, but when Tobin's Q and return on assets are used as proxies of firm value, no significant relationship is found between them and CSR disclosure quantity and quality.
- Sample: Australia (40 companies; 20 companies that were prosecuted for breach of various environmental protection laws and 20 companies that were not prosecuted). Period 1990-1993. Both prosecuted and non-prosecuted firms are reluctant to disclose negative news about their environmental performance within their annual reports. The prosecuted firms provided significantly more positive environmental disclosures than non-prosecuted firms; a plausible explanation may be the prosecuted firms' belief that they need to legitimize the existence of their operations, the legitimation endeavour taking the form of increased disclosure of positive environmental news.
- The level of CSR disclosure is positively associated with board size. On the contrary, CSR disclosure is negatively linked with the proportion of independent directors, institutional directors and the existence of female directors on the board.
- Guthrie & Farneti (2008), descriptive (based on content analysis of the annual reports and sustainability reports of the selected public organizations). Sample: 7 Australian public organizations. Period 2005/2006. The sampled organizations applied the GRI indicators fragmentarily; they "cherry-picked" the GRI indicators they wanted to disclose, and the disclosures were generally non-monetary and narrative in nature.
A Polyvinyl Alcohol–Tannic Acid Gel with Exceptional Mechanical Properties and Ultraviolet Resistance

Design and preparation of gels with excellent mechanical properties has garnered wide interest at present. In this paper, the preparation of polyvinyl alcohol (PVA)–tannic acid (TA) gels with exceptional properties is documented. Crystalline zones and hydrogen bonds, produced by a combination of freeze–thaw treatment and tannic acid compounding, acted as physical crosslinks. The effect of tannic acid on the mechanical properties of the prepared PVA–TA gels was investigated and analyzed. When the mass fraction of PVA was 20.0 wt% and the soaking time in tannic acid aqueous solution was 12 h, the tensile strength and elongation at break of the PVA–TA gel reached 5.97 MPa and 1450%, respectively. This PVA–TA gel was far superior to a pure 20.0 wt% PVA hydrogel treated only with the freeze–thaw process, as well as to most previously reported PVA–TA gels. The toughness of the PVA–TA gel is about 14 times that of a pure PVA gel. In addition, transparent PVA–TA gels can effectively prevent ultraviolet-light-induced degradation. This study provides a novel strategy and reference for the design and preparation of high-performance gels that are promising for practical application.

Introduction

Due to their outstanding biofunctionality, biocompatibility, and applications in biomedicine and wearable devices, gels have been extensively explored and have attracted continuous interest [1-4]. However, the intrinsic brittleness and nonideal mechanical properties of gels limit their utilization [5-7]. Robust mechanical properties, including high strength, stretchability, and toughness, are critical for meeting practical demands [8-10]. To overcome the challenges above, many efforts have been devoted to improving the mechanical properties of gels through various strategies [11-14]. The mechanical strength and toughness of a gel can be enhanced by block copolymerization; by the construction of interpenetrating, hybrid, and double networks [15-18]; and by multiple interactions (such as electrostatic complexes) that increase the number of crosslinking points [19]. Mechanical properties are enhanced because multiple "sacrificial domains" dissipate mechanical energy, in line with the dissipation-induced toughening theory [20,21]. Despite the improvement in mechanical properties, these changes can lead to a significant decrease in water content and stretchability [22]. Meanwhile, any residual monomers or initiators are toxic, which is not desirable for biomedicine or wearable devices. Therefore, it has been very challenging to develop gels with a combination of superior mechanical strength and stretchability [23,24]. Tannic acid is a natural polyphenol compound. Tannic acid, with its catechol groups, can easily form hydrogen bonds and crosslinked domains with other active groups [1,25,26]. Nevertheless, because tannic acid possesses strong free-radical scavenging ability, it inhibits and retards free-radical polymerization [1]. Therefore, using tannic acid as a crosslinker in free-radical polymerization is difficult. Polyvinyl alcohol (PVA), thanks to its low cost, nontoxicity, and good biocompatibility, is a widely used polymeric material and has attracted extensive interest [27-31]. In the preparation of PVA hydrogels, a repeated freeze-thaw process has been applied to achieve physical crosslinking via the formation of crystalline zones between PVA molecular chains [32-34].
However, the mechanical strength and toughness of PVA hydrogels obtained only by the freeze-thaw cycle are low. The strength and toughness of PVA can be further improved through the creation of abundant hydrogen-bond crosslinking points [23,35]. Introducing hydrogen-bond crosslinking points into a gel is a straightforward strategy to endow that gel with excellent mechanical properties [36-38]. After the freeze-thaw cycle, a PVA gel may be soaked in tannic acid to form hydrogen bonds, which improves the mechanical properties of the gel [39]. However, the processes above often require a long freeze-drying time to obtain an aerogel, and they only enhance strength, with almost no improvement in elongation [39]. As a consequence, the preparation of PVA hydrogels with excellent mechanical properties remains a challenge. Moreover, due to its special structure, tannic acid has free-radical scavenging ability and can effectively shield ultraviolet light [1,40,41]. Nevertheless, the UV-radiation shielding of tannic acid has rarely been reported in previous studies of PVA hydrogels [37,39]. In order to improve the mechanical properties of gels, the study in this paper used PVA as the matrix gel (Figure 1). First, the physical crosslinking region was generated via repeated freeze-thawing treatment. After formation of the physical crosslinking region, the surrounding void region was very conducive to the introduction of tannic acid, and hydrogen-bond crosslinks were formed, which greatly increased the number of crosslinking points in the gel. Because of abundant and effective energy dissipation, the mechanical properties of the PVA-TA gel were very high. The strength and elongation at break of the 20 wt% PVA-TA gel were 5.97 MPa and 1450%, respectively, and the toughness reached 50 MJ/m3. In addition, tannic acid absorbed ultraviolet light so that the transparent gel was protected from ultraviolet-light-induced degradation.

Results and Discussion

Crystalline regions were formed in PVA via the freeze-thaw cycle. Simultaneously, tannic acid molecules were introduced into the space surrounding those crystalline regions. Next, hydrogen bonds were formed between the tannic acid and PVA chains to create multiple hydrogen bonding interactions. In this way, a double crosslink, combining both crystallization regions and multiple hydrogen bonds, was formed to endow each gel with superior mechanical properties. As shown in Figure 2a, for the 15.0 wt% PVA-TA gel, strong absorption peaks appeared at 3400~3200 cm−1 after soaking in tannic acid for 1 h and 7 h; this was caused by the hydrogen bonding association between the phenolic hydroxyl groups of tannic acid and the hydroxyl groups of PVA [42]. The broad and strong absorption peaks of the 15.0 wt% PVA-TA gel imply enhanced hydrogen bonding interactions compared with those of the pure PVA hydrogel [32]. FTIR spectra of tannic acid have been reported in previous studies [32,42,43].
For pure tannic acid, stretching vibrations of C=O, C=C (aromatic groups), and C-O appear at 1730-1705 cm−1, at around 1450 cm−1, and at 1100-1300 cm−1, respectively. For the 15.0 wt% PVA-TA gel, the band located at 1720 cm−1 could be assigned to the C=O stretching vibration. In addition, bands at 1610 cm−1 and 1447 cm−1 could be observed in the 15.0 wt% PVA-TA gel, corresponding to the stretching vibration of C-C in aromatic groups and the distortion vibration of C=C in benzene rings, respectively [42]. Simultaneously, for the 15.0 wt% PVA-TA gel, a new peak appeared at 1030 cm−1 after treatment with TA [32]. The FTIR analysis confirmed the composition of the gel and the formation of hydrogen-bond crosslinking between PVA and TA [32,42,43]. The crosslinking network could be evaluated with a thermogravimetric analyzer (TG). Thermogravimetric analysis (TGA) of tannic acid has also been reported in previous studies [42,43]: two decomposition steps, moisture loss and mass loss, appear at about 38 °C and 255 °C, respectively, and after 500 °C a 28.3 wt% carbonized residue remains. Figure 2b shows the influence of the crosslinking network on the thermal stability of the 15.0 wt% PVA-TA gel and the pure 15.0 wt% PVA hydrogel. Decomposition of the gel can be divided into two steps: water loss and mass loss. The weight loss before about 240 °C for the pure PVA hydrogel was due to the evaporation of water and acetic acid in the hydrogel [44]. For the 15.0 wt% PVA-TA gel, weight loss was not obvious at about 200 °C; this was likely caused by the presence of less free water than in the pure 15.0 wt% PVA hydrogel. In the second stage, thermal decomposition of PVA occurred at about 240-500 °C for the pure 15.0 wt% PVA hydrogel.
Weight loss during the second step for the 15.0 wt% PVA-TA gel was due to pyrolysis of a small amount of unevaporated acetic acid, PVA molecular chains, and TA. The initial weight-loss temperature of the 15.0 wt% PVA-TA gel was lower than that of the pure PVA hydrogel in the second stage, presumably because TA disrupts the crystallization of PVA, implying the formation of interactions between PVA and TA [32]. Furthermore, the weight-loss rate of the 15.0 wt% PVA hydrogel was significantly higher than that of the 15.0 wt% PVA-TA system as the temperature increased. For example, abrupt weight loss started at 240 °C for the 15.0 wt% PVA hydrogel but did not occur until about 300 °C for the 15.0 wt% PVA-TA gel, suggesting that the hydrogen bonds formed between PVA and TA increase the thermal stability of the gel [32]. Finally, the weight-loss curves after 1 h and 24 h soaking times were very similar, returning to a stable plateau once the polymer had almost completely decomposed to carbon residue. X-ray diffraction (XRD) was used to investigate the structure of the pure 15.0 wt% PVA hydrogel and of the 15.0 wt% PVA-TA24 gel. As illustrated in Figure 3a, there were three typical peaks, at 2θ = 19.5°, 2θ = 22.9°, and 2θ = 40.8°, for the pure PVA hydrogel, which can be assigned to the (101), (200), and (102) planes of PVA crystallites [40,45,46]. For the 15.0 wt% PVA-TA1 gel and the 15.0 wt% PVA-TA24 gel, the XRD pattern only displayed a diffuse peak at 2θ = 30.0°, which may simply indicate a smaller or less ordered crystalline structure, wherein the PVA chains are restricted by the strong hydrogen bonds between PVA and TA [40]. Additionally, the PVA microstructure was observed using field-emission scanning electron microscopy (SEM). The 15.0 wt% PVA-TA24 gel was first freeze-dried and then cut with scissors. The fracture and surface structures of the 15.0 wt% PVA-TA24 gel can be seen in Figure 3b,c, respectively. The fracture structure was similar to fiber bundles and delamination with a thickness of about 0.2 µm. The surface was relatively flat with a few layered structures.
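For orientation, the interplanar spacings corresponding to the 2θ peak positions reported above can be estimated from Bragg's law, λ = 2d sin θ. The short sketch below assumes Cu Kα1 radiation (λ ≈ 1.5406 Å), a common default for the diffractometer type named in the Characterization section; it is an illustration rather than part of the original analysis.

```python
import math

# Estimate interplanar spacings d from 2-theta peak positions via Bragg's law,
# lambda = 2 * d * sin(theta). Cu K-alpha1 wavelength is assumed (1.5406 angstrom).
WAVELENGTH = 1.5406  # angstrom

for two_theta_deg in (19.5, 22.9, 40.8, 30.0):
    theta = math.radians(two_theta_deg / 2.0)
    d_spacing = WAVELENGTH / (2.0 * math.sin(theta))
    print(f"2theta = {two_theta_deg:4.1f} deg  ->  d = {d_spacing:.2f} angstrom")
```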
The PVA-TA gel displayed excellent mechanical properties. As can be seen in Figure 4a-c, the 20.0 wt% PVA-TA24 gel could be stretched from 1 cm to about 5 cm and could easily lift a weight of 200 g. The 20.0 wt% PVA-TA24 gel (1.65 cm wide and 0.128 cm thick) could also lift a weight of 4.8 kg, as is evident in Figure 4d. Typical tensile stress-strain curves of each gel are displayed in Figure 5a,d,g. The tensile strength, elongation at break, and toughness of each gel are summarized in Figure 5b,c,e,f,h,i, respectively. The tensile strength and elongation of the pure PVA hydrogel were lower than those of the PVA-TA gel. Strength and toughness values went through a process of increasing, decreasing, and increasing again as the soaking time was extended, according to the curves for the 10% and 15% samples, while the 20% samples showed a monotonic increase. The reason may be that a short immersion time introduces too little tannic acid and too few hydrogen bonds, while a long immersion time leads to tannic acid concentration equilibrium and hydrogen-bond rearrangement (Table 1). When the mass fraction was high, the concentration of tannic acid and the number of hydrogen bonds increased slowly because of the high intermolecular density. Therefore, the tensile strength and elongation at break of the 20.0 wt% PVA-TA gel gradually increased with time. At 24 h, the 20.0 wt% PVA-TA24 gel had the highest strength, elongation, and toughness. The 20.0 wt% PVA-TA gel exhibited a high toughness of 50 MJ m−3, with an ultrahigh ultimate stress of 5.97 MPa and an ultimate strain of 1450% (Figure 5g). The toughness of the 20.0 wt% PVA-TA gel after soaking in tannic acid for 24 h was about 14 times that of the pure 20.0 wt% PVA hydrogel, which is better than previously reported values (Table 2).
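The toughness values quoted above correspond to the area under the tensile stress-strain curve up to fracture; with stress in MPa and strain expressed as a fraction, the integral comes out directly in MJ m−3. A minimal sketch of this calculation is given below; the curve shape is a hypothetical illustration chosen only to reproduce the reported order of magnitude, not the measured data.

```python
import numpy as np

# Toughness = area under the engineering stress-strain curve up to fracture.
# Stress in MPa and dimensionless strain give the integral in MJ/m^3.
strain = np.linspace(0.0, 14.5, 500)           # 0 ... 1450 % expressed as a fraction
stress = 5.97 * (strain / 14.5) ** 0.75        # MPa; assumed curve shape

toughness = np.trapz(stress, strain)           # MJ/m^3
print(f"toughness ~ {toughness:.0f} MJ/m^3")   # ~50 MJ/m^3 for this assumed curve
```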
The gel effectively blocked the penetration of UV rays because of the tannic acid. Herein, the 15.0 wt% PVA-TA24 gel (TA, 24 h) was selected for UV analysis. Figure 6a,b show rhodamine dye handwriting on the pure PVA hydrogel and on the 15.0 wt% PVA-TA24 gel. The three letters "DZU" could be clearly seen, indicating that natural visible light could pass through. A UV lamp was used to test the UV-light absorption or blocking capacity of the 15.0 wt% PVA-TA24 gel. The pure PVA hydrogel, not soaked in tannic acid, did not absorb UV light: the rhodamine writing showed a fluorescence effect under the UV lamp whether covered by the gel or not (Figure 6a). When the 15.0 wt% PVA-TA24 gel was partially placed over the dye writing, the "DZ" letters were darkened and only the "U" showed a fluorescent effect, indicating that the 15.0 wt% PVA-TA24 gel blocked the UV light. Tannic acid endowed the gel with excellent UV blocking performance due to phenolic-quinone reversible tautomerism and a π-electron conjugation effect [48]. Figure 6c shows the mechanism of UV light absorption. The 320-400 nm region of the spectrum is the most penetrating, and our eyes are most sensitive to 550 nm; therefore, 300-550 nm was chosen for further comparison and analysis. As demonstrated in Figure 6d, the pure 15.0 wt% PVA hydrogel had excellent visible light transmittance but did not block ultraviolet light transmission. Intriguingly, the 15.0 wt% PVA-TA24 gel not only showed transparency in the visible light region but also had a certain ability to block ultraviolet light. It could be seen that tannic acid had excellent UV-radiation shielding ability, which could effectively remedy the deficiency of traditional gels in UV resistance.

Conclusions

PVA-TA gels with high elongation, high strength, high toughness, and enhanced UV-radiation shielding were synthesized. The PVA-TA gel provided a high elongation at break of 1450%, a strength of 5.97 MPa, and a toughness of 50 MJ m−3. These good mechanical properties were due to the crystallization regions formed by freeze-thaw cycles and the multiple hydrogen-bond crosslinks between the tannic acid and the molecular chains. In addition, because tannic acid exhibits phenol-quinone reversible tautomerism and a π-electron conjugation effect, a PVA-TA gel can effectively absorb ultraviolet light. Because of its excellent mechanical properties and UV resistance, this gel provides a solid foundation for potential applications, such as load-bearing devices and the fabrication of artificial materials. This study also provides an important reference for the study and improvement of the mechanical properties of gels.

Materials

Polyvinyl alcohol (PVA, 1797, molecular weight 74,800) was purchased from Shanghai Macklin Biochemical Co. LTD. Tannic acid (purity 97.51%) and acetic acid (36%) were supplied by Tianjin Shengao Chemical Reagent Co. LTD and Tianjin Damao Chemical Reagent Factory, respectively. All chemicals were used directly without further purification. Deionized water was used for all experiments.

Preparation of PVA-TA Gel

PVAs with mass fractions of 10.0 wt%, 15.0 wt%, and 20.0 wt% were prepared and uniformly dissolved in water (containing 36% acetic acid; m(H2O) : m(36% acetic acid) = 9:1) at 90 °C. The PVA solution was then poured into a 1 mm thick polyvinylidene fluoride mold and frozen at −20 °C for 2 h. The temperature was raised to 25 °C, then cooled back down to −20 °C and held for another two hours. After three cycles of freeze-thawing, each gel was soaked in saturated tannic acid solution for 0.5 h, 1 h, 12 h, or 24 h, respectively. Finally, each gel was frozen again and then raised to room temperature to obtain the PVA-TA gels (Figure 1). The control sample was prepared with a similar process, but without soaking in tannic acid solution (Table 1).
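As a practical illustration of the recipe above, the sketch below computes the component masses for a casting solution of a given total mass at the stated PVA mass fractions, keeping the 9:1 water to 36% acetic acid ratio. The batch size and the assumption that the mass fraction refers to the whole solution are mine, not stated in the original.

```python
# Component masses for a PVA casting solution of a given total mass.
# Solvent is water : 36% acetic acid = 9 : 1 by mass, as described above.
# Assumption: the PVA mass fraction is taken with respect to the whole solution.
def pva_batch(total_mass_g, pva_wt_frac):
    pva = total_mass_g * pva_wt_frac
    solvent = total_mass_g - pva
    return {
        "PVA (g)": round(pva, 2),
        "water (g)": round(solvent * 0.9, 2),
        "36% acetic acid (g)": round(solvent * 0.1, 2),
    }

for frac in (0.10, 0.15, 0.20):
    print(f"{frac:.0%} PVA:", pva_batch(50.0, frac))  # 50 g batch, assumed size
```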
Characterization

Fourier transform infrared (FTIR) spectra of the gels were collected with a Nicolet iS50 FT-IR spectrometer (Thermo Fisher Scientific, USA) in the range of 500-4000 cm−1. The microstructure of the gel was analyzed by field-emission scanning electron microscopy (SEM, Merlin Compact, Zeiss, Germany); specimens were freeze-dried in vacuum and then sputtered with gold for 30 s. Crystallinity was investigated on an X-ray diffractometer (XRD, D8 Advance, Bruker, Germany) with Cu Kα radiation (the sample-detector distance was about 21 cm). Mechanical property tests were performed under ambient conditions using a universal testing machine (Jinan Marxtest Technology Co., Ltd., Jinan, China) with a 50 N load cell (precision 0.0002 N). Each gel was cut into rectangles (20 mm × 5 mm × 1 mm), and a tensile test with a 10 mm initial gauge length was carried out at a rate of 100 mm/min. The tensile test was carried out three times for each type of gel. To obtain the light-filtering capability measurement, the fluorescent mark "DZU" was irradiated by UV lamps (364 nm) and then covered with the pure PVA hydrogel and the 15.0 wt% PVA-TA gel, respectively; the resultant light was photographed with a mobile phone camera. Visible and UV transmittance of the gels was collected on a UV2700 spectrophotometer (Shimadzu Co., Japan) in the wavelength range of 800 nm to 200 nm. A thermogravimetric analyzer (Netzsch STA 449 F5 Jupiter, Germany) was used to determine the thermal stability of the samples under a 20 mL/min nitrogen flow; the temperature was raised from room temperature to 800 °C at a heating rate of 10 °C/min. Data Availability Statement: The data that support the findings of the current study are listed within the article.
Topographically guided hierarchical mineralization Material platforms based on interaction between organic and inorganic phases offer enormous potential to develop materials that can recreate the structural and functional properties of biological systems. However, the capability of organic-mediated mineralizing strategies to guide mineralization with spatial control remains a major limitation. Here, we report on the integration of a protein-based mineralizing matrix with surface topographies to grow spatially guided mineralized structures. We reveal how well-defined geometrical spaces defined within the organic matrix by the surface topographies can trigger subtle changes in single nanocrystal co-alignment, which are then translated to drastic changes in mineralization at the microscale and macroscale. Furthermore, through systematic modifications of the surface topographies, we demonstrate the possibility of selectively guiding the growth of hierarchically mineralized structures. We foresee that the capacity to direct the anisotropic growth of such structures would have important implications in the design of biomineralizing synthetic materials to repair or regenerate hard tissues. Introduction Mineralized tissues in nature, such as nacre, bone, and dental enamel, exhibit a rich spectrum of remarkable functionalities as a result of their anisotropic structures, which are spatially organized at different length scales [1,2]. Enamel, for example, is composed of hydroxyapatite (HAP) nanocrystals that bundle together to form either thick prisms or differently oriented interprismatic regions [3,4]. This structural anisotropy, resulting from an intricate spatial organization of nanocrystals, provides remarkable mechanical properties [5] and chemical stability [6]. In spite of the realization of the potential of these structures, there remains an unmet need to recreate such functional architectures [7]. Toward this goal, a variety of innovative mineralizing material platforms based on organic matrices are emerging. The supramolecular organization of organic matrices plays a key role in regulating the interaction between organic and inorganic phases that control mineral nucleation and growth [8]. Inspired by the role of amelogenin in enamel development, self-assembled amelogenin nanoribbons [9], synthetic amelogenin-mimetic peptide coatings [10], and phase transited lysozyme films comprising N-terminal amelogenin domain and synthetic peptides [11] have been shown to control apatite nucleation and template crystallites with preferential growth along the c-axis and integrating firmly to etched enamel tissue. The role of collagen has also been explored. For example, dentin phosphophoryn-inspired phosphopeptides with (SSD) 3 motifs were developed to induce biomineralization with spatially organized intrafibrillar apatitic crystallites nucleating from collagen fibrils [12]. Furthermore, self-assembling elastin-like recombinamer (ELR) fibers have been reported to undergo collagen-like intrafibrillar mineralization via spatially confined ELR β-spiral structures [13]. Other examples also based on supramolecular matrix-guided biomineralization have been explored [14][15][16][17]. Despite these advances, however, key challenges remain such as the capacity to control kinetics of crystal precipitation, guide the orientation of crystal growth, and ultimately generate rationally designed hierarchical macrostructures. 
The capacity to guide mineralization with spatial control is critical, as the intricate organization of individual nanocrystals and their arrangement at higher size scales determine the mechanical properties of the resulting material [18]. In this effort, the supramolecular organic matrices play a key role [19]. For example, amelogenin-chitosan hydrogel matrices have been used to stabilize and guide the aggregation of calcium phosphate (CaP) clusters into bundles of co-aligned high aspect ratio crystals [20]. These crystals integrate and grow on an acid-etched enamel substrate via a cluster growth process. In another example, Wang et al. reported biomimetic remineralization of demineralized enamel surfaces using an organic matrix and glycine-guided HAP nanoparticles to form ordered and oriented needle-like elongated apatite crystals through a non-classical crystallization process [21]. Recently, we demonstrated how ELR membranes with tunable levels of conformational order and disorder can nucleate well-defined HAP nanocrystals and guide their growth into hierarchical macrostructures [22]. However, this process does not provide spatial control, which would have important implications in the design of more complex and functional mineralizing materials. The use of volumetrically [23] and topographically [24] confined environments has been explored to guide crystal precipitation and formation with spatiotemporal control [25][26][27]. For example, calcium carbonate precipitation and co-oriented calcite nanocrystals have been grown within confined nanoscale volumes inside collagen nanofibrils [28]. We have previously demonstrated that a volumetrically confined ELR matrix [22] is able to generate hierarchical biomineralization, whereas the same ELRs presented as unconfined surface coatings did not [29]. Others have also reported on the precipitation and transformation of CaP ions during biomimetic apatite formation under confined volumes of cross-linked gelatin at both the nanoscale [23] and microscale [30]. In addition, topographically confined regions of nanoporous [31] and microporous [24] structures exposed to calcium carbonate precipitation have been reported to affect ion diffusion and control growth kinetics of crystal formation. Therefore, taking advantage of patterned templates with well-defined geometrical topographies offers an attractive method to form mineralized macrostructures with spatial control and anisotropic organization. For example, patterned macroporous polymeric templates have been used to precipitate calcium carbonate and trigger diffusion-mediated growth of spatially guided calcite crystals exhibiting porous and sponge-like three-dimensional (3D) morphologies [32]. Nonetheless, despite great interest in mineralizing strategies based on organic matrices [33,34], there is a limited capacity to design such organic materials capable of guiding the orientation and alignment of crystallites within a hierarchical mineralized structure with spatial control. Furthermore, the identification of parameters that play a role in the growth of such structures would shed light on underlying mineralization mechanisms and open opportunities for the synthesis of advanced mineralized materials. In this study, we investigate the use of precise surface microtopographies on ELR membranes to grow apatite nanocrystals with spatial control. Our approach offers a novel platform to integrate the volumetrically and topographically confined organic matrix to spatially guide mineralization. 
Our results demonstrate the capacity to use surface topographies to affect and guide mineralization and the direction of crystal growth both within the bulk and on the surface of the matrix. We envisage that the ability to spatially direct the anisotropic growth of such mineralized structures would open promising avenues for the repair or regeneration of hard tissues such as dental enamel and bone.

Chrome mask photolithography fabrication

A chrome photomask was designed in Tanner L-Edit and fabricated by Micro Lithography Services Ltd. (UK). Briefly, a 4-inch silicon wafer (Pi-kem, UK) was washed sequentially in acetone, methanol, and isopropyl alcohol (IPA) for 5 min each in an ultrasonic bath and dried under nitrogen. The washed wafers were further dehydrated for 2 h at 180 °C and surface treated with oxygen plasma at 100 W for 2 min. Before spinning the S1818 resist, the wafers were coated with hexamethyldisilazane primer, both at 4000 rpm for 30 s. The resist was then soft baked for 2 min at 115 °C on a vacuum hotplate. The resist was patterned through the photomask on an MA6 mask aligner (Süss MicroTec, Germany) with an exposure dose of 42 mJ/cm2, before development in MF-319 developer for 75 s with gentle agitation. Excess developer was washed off in reverse osmosis water and the wafer dried under nitrogen before the wafer and patterned resist were subjected to oxygen plasma at 80 W for 30 s to descum. Next, the patterned wafer was mounted on a carrier wafer (Cool Grease; AITechnology, USA) before being etched in an STS-ICP etch tool (Surface Technology Systems, UK; SF6/C4F8 = 40/50 sccm, coil/platen = 600/12 W, 10 mTorr, 20 °C for 6 min and 30 s) to achieve an etch depth of 5 μm. Finally, the resist was stripped overnight in SVC-14 at 50 °C, before the wafer was washed in acetone, methanol, and IPA for 5 min each and dried completely under nitrogen. Polydimethylsiloxane (PDMS) masters were prepared using the method described previously [35]. Briefly, the components of the PDMS kit (Sylgard 184, Dow Corning, Midland, MI), which included PDMS base and curing agent, were mixed in a ratio of 10:1 and degassed for 7 min, followed by pouring on top of the patterned Si master. After an additional 12 min of degassing, the PDMS on the Si master was cured at 65 °C for 3 h and at room temperature (~25 °C) for 21 h.

ELR membrane fabrication

ELR membranes were fabricated using the procedure described previously by our group [22]. Briefly, ELR molecules (Technical Proteins Nanobiotechnology, Valladolid, Spain) were dissolved in a solvent mixture of anhydrous dimethylformamide (Sigma-Aldrich, UK) and dimethyl sulfoxide (Sigma-Aldrich, UK) at a ratio of 9/1 at room temperature inside a polymer glovebox (BELLE Technology, UK) under controlled humidity (<20%). The resultant solution was then cross-linked with hexamethylene diisocyanate (HDI; Sigma-Aldrich, UK), drop cast, and left to dry overnight on top of the smooth and topographically patterned PDMS masters (Fig. 1A). Dried ELR membranes were carefully peeled off the PDMS masters, washed several times with dimethyl sulfoxide (DMSO) followed by de-ionized water to remove excess HDI, and finally stored at 4 °C until use.

Mineralization experiment

Mineralizing solution was prepared by dissolving HAP powder (2 mM) and sodium fluoride (2 mM) in de-ionized water under continuous stirring. The powder was completely dissolved by adding nitric acid (69%) dropwise into the solution until it became clear. Ammonium hydroxide solution (30%) was later added dropwise into the clear solution to readjust the pH value to 6.0 [36]. Fabricated smooth and topographically patterned membranes were then dipped in the mineralization solution and incubated for 8 days at 37 °C inside a temperature-controlled incubator (LTE Scientific, Oldham, UK).
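For orientation, the amounts of solid corresponding to the 2 mM concentrations above can be estimated from approximate molar masses; the sketch below assumes a 1 L batch and stoichiometric hydroxyapatite, Ca10(PO4)6(OH)2 (~1004.6 g/mol), and is illustrative rather than part of the reported protocol.

```python
# Approximate solid masses for the 2 mM HAP / 2 mM NaF mineralizing solution.
# Molar masses are approximate; hydroxyapatite is assumed stoichiometric.
MOLAR_MASS_G_PER_MOL = {"hydroxyapatite": 1004.6, "NaF": 41.99}

volume_l = 1.0               # assumed batch volume
concentration_mol_l = 0.002  # 2 mM, as stated above

for solid, molar_mass in MOLAR_MASS_G_PER_MOL.items():
    mass_g = concentration_mol_l * volume_l * molar_mass
    print(f"{solid}: {mass_g:.3f} g per {volume_l:.0f} L")
```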
Scanning electron microscopy and energy-dispersive X-ray spectroscopy

Mineralized ELR membranes were dried under nitrogen, mounted on aluminum stubs using self-adhesive carbon tape, coated with a conductive material (gold or carbon) in an auto-sputter coater, and observed using an FEI Inspect F (Hillsboro, OR, USA). The surface topographies of non-mineralized membranes and the morphology of the mineralized microstructures were investigated using scanning electron microscopy (SEM; Hillsboro, OR, USA), and elemental analysis was performed using INCA software through spectral mapping of areas of interest collected with an energy-dispersive X-ray spectroscopy detector (INCA x-act; Oxford Instruments) at an accelerating voltage of 10 kV [37].

Focused ion beam-SEM (FIB-SEM)

Transmission electron microscopy (TEM) specimens were prepared using an FEI Quanta 3D ESEM (Hillsboro, OR, USA). The mineralized structure was milled with a focused gallium beam, the parameters of which were set to 30 kV and 5 nA. The stairs-cut technique, followed by an in situ lift-out operation and polishing, was applied afterward. Cross-section cleaning was performed with a 30 kV ion beam, and the current was reduced from 100 pA to 28 pA for both sides of the specimen at a 2° incidence angle until the lamella was around 150 nm thick. The final polishing was done at 5 kV, 16 pA, and a 1° incidence angle to thin the lamella below 100 nm [38].

Transmission electron microscopy

The TEM study was performed using a JEOL 2010 operated at 120 kV and a double Cs-corrected JEOL JEM-ARM200F (S)TEM operated at 80 kV, equipped with a LaB6 and a cold-field emission gun, respectively. All measurements were made on FIB-SEM lamellae prepared from different samples. The obtained high-resolution images and selected area electron diffraction (SAED) patterns were analyzed using the Gatan Microscopy Suite (GMS 3) software. Interplanar distances calculated from SAED patterns were compared with the standard powder diffraction file PDF2 database (ICDD, USA, release 2009) and were found to be in correspondence with X-ray diffraction (XRD) measurements.

X-ray diffraction

An X'Pert Pro X-ray diffractometer (PANalytical B.V., Almelo, the Netherlands) was used to analyze the phase composition of the mineralized structures at room temperature. The instrument was operated with flat plate θ/θ geometry and Ni-filtered Cu Kα radiation at 45 kV and 40 mA (Kα1 = 1.54059 Å, Kα2 = 1.54442 Å) [39]. The 2θ values were recorded from 5° to 70° with a step size of 0.0334°, and data were obtained continuously via a PANalytical X'Celerator solid-state real-time multiple strip detector with an equivalent step time of 1600 s. X'Pert HighScore (3.0e) with the PDF4 database (ICDD, USA, release 2014) was used for comparison.

Fourier-transform infrared spectroscopy

Fourier-transform infrared (FTIR) spectroscopy analysis of ELR membranes before and after mineralization was performed using an FTIR Spectrum GX (PerkinElmer; Waltham, MA, USA). ELR membranes were placed over the infrared window and 128 scans on average were recorded at a resolution of 4 cm−1 in the wavenumber range of 4000 cm−1 to 450 cm−1, in terms of percentage absorbance and percentage transmittance for organic and inorganic samples, respectively [22]. The spectra were analyzed with OMNIC software, and the original data were processed with Origin software to produce the final spectra.

Nanoindentation

Mineralized membranes with Channel (0, 0) and Channel (2, 2) were glued using Loctite Super Attack glue and left to dry for 24 h. Young's modulus (E) measurements were performed with an iNano nanoindenter (Nanomechanics Inc.) with a sensitivity of 3 nN for the load and 0.001 nm for the displacement. The membranes were analyzed with single indentations and a mapping procedure (NanoBlitz 3D; Nanomechanics Inc.) using methods reported previously [22,40]. A Berkovich tip was mounted in the machine and used to perform the indentations with 1-2 points per micron, indentation depths ranging between 30 and 70 nm, and loads ranging between 0.01 and 0.05 mN, at 20 °C and 30-40% relative humidity. The mechanical properties were computed using the well-established method described by Oliver and Pharr [41].
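For context, the Oliver-Pharr reduction cited above converts the unloading stiffness S and the projected contact area A_c into a reduced modulus, E_r = (sqrt(pi) / (2 beta)) * S / sqrt(A_c), from which the sample modulus follows via 1/E_r = (1 - nu^2)/E + (1 - nu_i^2)/E_i. The sketch below uses assumed input values and a Berkovich geometry factor; it is not the authors' data.

```python
import math

# Oliver-Pharr style reduction of a nanoindentation unloading measurement.
# All numeric inputs are illustrative assumptions, not measured values.
S = 4.0e3        # unloading stiffness, N/m
A_c = 2.0e-14    # projected contact area at peak load, m^2
beta = 1.034     # geometry factor commonly used for a Berkovich tip

E_r = (math.sqrt(math.pi) / (2.0 * beta)) * S / math.sqrt(A_c)  # reduced modulus, Pa

# Remove the diamond indenter contribution to recover the sample modulus.
nu_sample = 0.3                            # assumed Poisson ratio of the membrane
E_indenter, nu_indenter = 1.141e12, 0.07   # diamond tip properties
E_sample = (1.0 - nu_sample**2) / (1.0 / E_r - (1.0 - nu_indenter**2) / E_indenter)

print(f"E_r ~ {E_r / 1e9:.1f} GPa, E_sample ~ {E_sample / 1e9:.1f} GPa")
```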
Rationale of design

Our approach combines a protein-based matrix capable of triggering hierarchical mineralization with well-defined surface topographies. The protein matrix was fabricated using ELR molecules (Fig. 1), which are recombinant elastin-mimicking polypeptides consisting of hydrophobic domains (VPGIG), positively charged domains (VPGKG) with intermittent lysine (K) segments for cross-linking, and a statherin-derived mineralizing sequence DDDEEKFLRRIGRFG (SNA15) [29,42]. We have recently demonstrated that by modulating ELR order and disorder, it is possible to grow high aspect ratio, 50 nm thick apatite nanocrystals that are hierarchically organized into ~5 μm diameter microbundles and spherulitic mineralized structures of up to 1 mm in diameter (Fig. 2A) [22]. We hypothesized that by creating well-defined topographies on the surface of this ELR matrix, geometrically confined ELR volumes would reproducibly affect the growth of the apatite nanocrystals and microbundles. Therefore, topographical patterns comprising posts with star or hexagonal shapes and channels with straight (Fig. 1D) or zig-zag geometries were designed.

Fabrication of membranes with surface topographies

We first fabricated topographically patterned silicon wafers via photolithography followed by reactive ion etching [43]. These patterned silicon wafers were then used to cast PDMS masters [35], which were themselves used to cast ELR membranes via soft lithography at room temperature by cross-linking and drying the ELR solution as previously described [22], but directly on top of the topographically patterned PDMS masters (Fig. 1A). SEM analysis revealed that the patterned ELR membranes did not exhibit any geometrical variations with respect to their PDMS masters (Fig. 1D). However, there were minor defects at the tips of the ELR post and channel topographies, which likely occurred during one of the soft lithographic steps.

Mineralization of topographical membranes

On exposure of the smooth ELR membranes (i.e. without micropatterns) to a supersaturated mineralizing solution at pH 6 and 37 °C, apatite nanocrystals nucleated within the bulk of the ELR matrix to form the root of the mineralized structure.
Nanocrystals from this root grew along their c-axis to emerge and spread radially on the surface of the ELR membrane, as previously reported [22]. When the aligned nanocrystals emerged out of the bulk and onto the membrane surface, they organized into microscopic prisms, which grew together into macroscopic circular mineralized structures (Fig. 2A) [22]. The direction of nanocrystal growth was away from the point at which they emerged onto the surface (Fig. 2A). Similar nucleation of nanocrystals within the ELR bulk and growth of circular mineralized structures were observed on the ELR membranes comprising post topographies (i.e. circular, hexagonal, and star shaped; Fig. 2B, C, and 2D). Therefore, it is likely that, as in the case of the smooth ELR membranes, ions from the mineralizing solution also diffused inside the bulk of the membranes to trigger nucleation. At the macroscale, these mineralized structures grew at different locations on the ELR membranes irrespective of the geometrical shape of the post topographies and on all surfaces (i.e. between adjacent posts as well as on the vertical and top surfaces of the posts; Fig. 2C). At the nanoscale, however, the nanocrystals were organized differently depending on the geometrical shape of the post topographies. Major differences were observed particularly in the level of nanocrystal alignment at the base of the posts, where the vertical walls meet the horizontal space between posts. In these locations on membranes comprising circular posts, nanocrystals maintained co-alignment but together changed direction by 90° as they grew from the post wall to the space between posts (Supplementary Figs. S2A and S2B) and vice-versa (Supplementary Fig. S2C). However, when the post geometry exhibited angles on the vertical walls, such as in the case of hexagonal posts (i.e. 120°), nanocrystal co-alignment changed (Fig. 2C). This effect was particularly pronounced on the star-shaped posts exhibiting 60° angles on the walls (Fig. 2D), which led to less co-aligned nanocrystals (Fig. 2E, indicated with a white arrow). These results suggest that nanocrystals growing on the surface of ELR membranes change their co-alignment depending on the geometrical features they encounter, with sharper topographical features resulting in higher misalignment.

Mineralization results of straight channels

Given these topographical effects on nanocrystal co-alignment, we fabricated ELR membranes exhibiting 4 μm deep microchannels with channel widths and ridge widths varying from 1 μm to 25 μm (Fig. 1D). After exposure to the mineralization solution for 8 days, we again observed effects on the mineralized structures at different size scales. In particular, the morphology of the spherulitic mineralized structures on the surface of the ELR membranes changed from circular to elliptical, with varying aspect ratios depending on the dimensions of the microchannels (Fig. 2I). Here, we define the aspect ratio of the mineralized structures as the ratio between the distance of parallel growth (i.e. structure length) and the distance of perpendicular growth (i.e. structure width; Fig. 1B). However, this aspect ratio does not take into account the actual distance traveled by the nanocrystals in the direction perpendicular to the channels. In this direction, the nanocrystals would travel longer distances than those defined by the aspect ratio as they go up and down the channel geometry (Fig. 2H). Nonetheless, even when the aspect ratio is measured taking into account this actual traveled distance, the results demonstrate that the nanocrystals indeed grow longer along the direction of the channels than perpendicular to them. Thus, based on the measurement of the distance traveled by the nanocrystals in the direction perpendicular to the channels, we define two types of aspect ratio: the 'aspect ratio of growing distance' and the 'aspect ratio of visual distance.' The aspect ratio of growing distance takes into account the actual distance traveled by the nanocrystals as they go up and down the channel geometry in the direction perpendicular to the channel, whereas the aspect ratio of visual distance considers the virtual distance traveled by the nanocrystals, as visualized from the top of the mineralized structures (Fig. 2H).
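To make the two definitions concrete, the sketch below estimates both aspect ratios for a hypothetical elongated structure on a Channel (2, 2)-like topography, where the perpendicular "growing" distance adds the extra path up and down the channel walls; the footprint dimensions and the simple wall-path accounting are assumptions for illustration.

```python
# Two aspect-ratio definitions for a mineralized structure on a channelled
# membrane (see text): "visual" uses the top-view footprint only, while
# "growing" adds the extra path travelled up and down the channel walls in
# the direction perpendicular to the channels. Dimensions in micrometres;
# all values below are illustrative assumptions.
length_along = 250.0      # top-view extent parallel to the channels
width_across = 135.0      # top-view extent perpendicular to the channels
channel_width = 2.0
ridge_width = 2.0
channel_depth = 4.0

# Assumption: each channel period crossed adds two wall heights to the path.
periods_crossed = width_across / (channel_width + ridge_width)
true_width = width_across + 2.0 * channel_depth * periods_crossed

print(f"aspect ratio of visual distance  ~ {length_along / width_across:.2f}")
print(f"aspect ratio of growing distance ~ {length_along / true_width:.2f}")
```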
Morphologies of the mineralized structures

The morphology of the mineralized structures changed from circular on smooth membranes (Channel (0, 0)) to oval on Channel (1, 2) and elliptical on Channel (2, 2), elongating in the direction of the channels (Fig. 2I). Notably, as the sizes of the channels decreased, the aspect ratio of visual distance tended to increase (from 1.22 ± 0.04 to 1.83 ± 0.03), whereas the aspect ratio of growing distance tended to decrease (from 1.43 ± 0.03 to 0.85 ± 0.04). Interestingly, although limited effects on both types of aspect ratio were observed in Channels (25, 10), (10, 5), (1, 2), and (2, 3), a large change was observed in Channel (2, 2), where the aspect ratio of the mineralized structures increased substantially to 8.36 ± 0.88 for visual distance and 4.35 ± 0.34 for growing distance (Fig. 2F). It has been reported that enhanced diffusion and localization of ions can increase the growth of nanocrystals [44,45]. We hypothesize that this enhanced growth of apatite structures along the channel direction may result from an increased localization of Ca and P ions. This is discussed in detail in the section 'Effect of different geometries on crystal co-alignment'. Overall, these results demonstrate the effective guidance that Channel (2, 2) topographies exert on the growth of the mineralized structures.

Hypothesis

To elucidate this effect of the Channel (2, 2) topographies, it is important to take into account the kinetics of growth. The mineralized structures nucleate within the ELR matrix and grow from the bulk toward the surface of the ELR membranes [22]. As the growing nanocrystals inside the matrix reach the ELR membrane surface, they encounter the microchannel topographies, which constrain the ELR matrix within confined volumes exhibiting different angles, including 90°, 180°, and 270° (Fig. 2G). It is possible that these angles at the surface topography regulate the morphology of the mineralized structures by affecting the growth of the nanocrystals in a similar manner to that observed on the post topographies (Fig. 2B, C, and 2D). This effect at the nanocrystal and microbundle levels may play a role in the dramatic elongation exhibited by the mineralized structures on Channel (2, 2), whereas mineralized structures on smooth surfaces (Channel (0, 0)) exhibit symmetrical morphology with no directional growth (Fig. 2A). Based on these observations, we addressed two questions: (1) How robust is this topographical guidance effect? and (2) Can the topographical patterns on the ELR membranes be used to guide the directional growth of the nanocrystals?
Aspect ratio of visual distance on zig-zags

To explore these questions, similar surface topographies as those of Channel (2, 2) were fabricated but in zig-zag patterns with variable corner angles, including 45°, 90°, and 135° (Fig. 3A). As in the smooth ELR membranes and those comprising straight channels, the membranes with zig-zag channel topographies also exhibited the spherulitic mineralized structures emerging onto the surface, but with different aspect ratios of visual distance depending on the corner angles. The aspect ratio of visual distance of these structures increased from 1 on the smooth surface to 1.23 on 90° corner angle patterns and further to 1.40 on 135° corner angle patterns (Fig. 3B and C). This increase in the aspect ratio of visual distance of the structures with increasing corner angle indicates that, as the zig-zag patterns become more like straight channels, a similar trend of mineralized structure alignment along the direction of the channels appears. However, zig-zag channels with 45° corner angles exhibited mineralized structures with the smallest aspect ratio of visual distance (i.e. 0.77) because of mineralized structures growing and elongating in the direction perpendicular to the channel rather than along the channel direction. These results indicate the possibility of switching the direction of mineralization by 90° depending on the zig-zag corner angle.

Aspect ratio of growing distance on zig-zags

In contrast to the aspect ratio of visual distance, we observed completely opposite behavior of the aspect ratio of growing distance, which decreased with the increase in the corner angles of zig-zag channels. Zig-zag channels with 45° corner angles exhibited the highest aspect ratio of growing distance (i.e. 1.38), which decreased to 1.17 on 90° corner angles and further to 0.97 on 135° corner angles (Fig. 3B). Channel ridges of 45° corner angle zig-zag patterns are closer together than those of 90° and 135° corner angles. Therefore, on mineralization, nanocrystals encounter more channel ridges on 45° corner angle zig-zags and thus travel longer distances compared with nanocrystals on 90° and 135° corner angle zig-zags. Notably, the channel geometry appeared to be distorted on all zig-zag patterns in the presence of the mineralized structures, which is likely because of stresses generated on the ELR matrix by the growing nanocrystals (as highlighted by arrows in Fig. 3D). Small isotropic structures grew along the channel direction with time to acquire the elongated anisotropic form, as evident in the low magnification image in Fig. 3D. Furthermore, in some instances, multiple mineralized structures grew and appeared to emerge close to each other and together followed the channel geometry, creating millimeter-long mineralized zig-zag patterns (Fig. 3D). These results not only demonstrate the robustness of the topographical guidance in modulating the morphology of the mineralized structures but also indicate the possibility to guide them directionally.

Confirmation of directional control using zig-zags

Our approach offers the possibility of integrating the supramolecular architecture provided by the spherulitic organization of the organic ELR matrix with the microscale-confined architecture provided by the surface topographies to spatially control mineralization. Surface Channel (2, 2) permitted apatite nucleation from the bulk of the ELR membrane but guided the growth of mineralized structures with a high aspect ratio of visual distance on the surface (Fig. 2I).
Furthermore, by changing the direction of these channels with zig-zag patterns, we further demonstrated that topographies were able to guide the growth of nanocrystals on the surface (Fig. 3A). In addition, the corner angles of these zig-zag patterns affected the mineralization direction (Fig. 3C). These observations confirm the possibility of using surface topographies to guide the growth of the hierarchically organized mineralized structures. However, metallic surfaces with microchannel topographies have been previously reported to not necessarily regulate the directional growth of apatite nanocrystals [46,47]. The effect of confinement on mineralization has been previously reported both in volumetric confinement in ELR [22] and gelatin [30] matrices as well as in topographical confinement in nanoporous [31] and microporous [24] structures. In this study, we integrate both volumetric and topographical confinement using the ELR matrix. The topographical effects reported here emerge from both differences in channel size (ranging between 1 and 25 μm) and angles (i.e. 90°, 180°, and 270°) formed by the microchannels within the ELR matrix (Fig. 4G). We have taken advantage of the ordered and hierarchical nature of the mineralization process (i.e. ~50 nm crystals growing into 5 μm prisms and these into structures hundreds of μm in diameter; Fig. 2A) to investigate the effects on nanocrystal growth at multiple size scales under confined conditions. To the best of our knowledge, this is the first report that highlights the tuneability of an organic matrix to exhibit precise control on directionally guided growth of hierarchically organized fluorapatite mineralized structures over millimeter length scales.

Physical characterization of the bulk

To investigate the underlying mechanism behind this guided mineralization process, we first characterized the crystallographic organization and orientation of the apatite nanocrystals inside the bulk of the ELR matrix. Ultrathin lamellae from mineralized structures grown on smooth membranes (Fig. 4A) and straight Channel (2, 3) (Fig. 4C) were milled out using a FIB and analyzed under a TEM (Fig. 4B and C, respectively). These results revealed the nucleation (i.e. root) of the mineralized structure inside the ELR matrix, with the nanocrystals growing along their c-axis and emerging on the surface of the ELR membrane. The growth front of the nanocrystals corresponding to the c-axis grew on the surface of the membrane but was directed toward the bulk of the ELR matrix, as indicated by the arrow in Fig. 4B. Furthermore, these results showed a different organization of nanocrystals depending on the location inside the ELR matrix, which seemed to be primarily influenced by the angles and confined regions delineated by the surface topographies. For instance, nanocrystals growing inside the 180° angle geometry of the ELR matrix (highlighted by red and green circles in Fig. 4G) were organized co-aligned to each other. However, the 270° angle geometry exhibited more spaced nanocrystals with wider co-alignment angles than the planar (180°) and right angle (90°) geometries. We observed that these nanocrystals tended to rotate and grow toward the spaces with more ELR matrix and bundled up into flower-like structures (highlighted by the yellow circle in Fig. 4G; Fig. 4I). In contrast, nanocrystals at the 90° angle geometry (highlighted by the blue circle in Fig. 4G) exhibited completely different behavior, growing and populating compactly with much smaller co-alignment angles.
These differences in the organization of the nanocrystals depending on the geometry of the matrix (i.e. topographies presenting 90° or 270° angles) further confirm the possibility of using topographies on the surface to guide the growth of the nanocrystals within the bulk.

Physical characterization of nanocrystals

Given these differences in nanocrystal organization, we further characterized the mineralized structures and the crystallographic orientation of nanocrystals in both smooth and patterned membranes. First, the mechanical properties of the mineralized structures grown on Channel (0, 0) and Channel (2, 2) were assessed using nanoindentation. Young's modulus (E) measurements at the center (9.9 ± 3.1 GPa) and at the edges (8.4 ± 4.1 GPa) of the circular mineralized structures on Channel (0, 0) revealed similar E values and were consistent with our previous results [22]. However, for the elliptical-shaped mineralized structures on Channel (2, 2), nanocrystals localized at the edges growing perpendicular to the channels exhibited significantly higher E values (12.7 ± 5.0 GPa) compared with those at the edges growing parallel to the channels (5.1 ± 1.6 GPa; Fig. 5A and B). We hypothesize that these differences might result from distinct nanocrystal organization in different locations within the ELR matrix, which we explore in the next section. Also, FTIR spectroscopy (Fig. 5C) and XRD (Fig. 5D) analysis of topographically patterned membranes with mineralized structures demonstrated non-stoichiometric apatite spectral peaks that exhibit a crystalline phase and structural parameters that match fluorapatite, respectively, as reported previously by our group [22]. Furthermore, TEM imaging of a single nanocrystal and its corresponding fast Fourier transform (FFT) pattern revealed 40-50 nm thick flat-ended nanocrystals with the typical fluorapatite hexagonal morphology growing along the c-axis (Fig. 5E & F). These observations were further confirmed by investigating crystallographic orientation using high-resolution TEM (HRTEM) and SAED. These analyses also exhibited similar fluorapatite characteristics of nanocrystals growing at 90°, 180°, and 270° angle geometries inside the ELR matrix (Fig. 4G). Thus, these results demonstrated that nanocrystals exhibit similar crystallographic characteristics irrespective of the ELR matrix geometry (i.e. topographies presenting 90°, 180°, or 270° angles).

Effect of different geometries on crystal co-alignment

Given these drastic effects of the surface topographies on the growing behavior and morphology of the mineralized structures, we hypothesized that the channel geometry can significantly affect the organization and co-alignment of nanocrystals, which can further regulate the morphology of the mineralized structures. Therefore, we investigated the organization of the nanocrystals growing in Channel (2, 3) at the edges and center of the mineralized structures (Fig. 6). FFT and SAED analysis at these different locations demonstrated that the nanocrystals shared a similar growth orientation (i.e. along the c-axis) but different co-alignment degrees ranging between 2° and 7°. We hypothesized that this considerable difference in nanocrystal co-alignment may play a role in the preferential growth of the nanocrystals along the channels compared with perpendicular to them. We reasoned that differences in nanocrystal co-alignment would have significant effects on the morphology and mechanical properties of the mineralized structures at the macroscale.
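Before turning to those measurements, the sketch below (Python, with invented orientation angles) illustrates one simple way a co-alignment degree between adjacent nanocrystals could be summarized from a list of c-axis orientations read off FFT or SAED patterns; the actual image analysis used in the study may differ.

# Sketch: summarizing the co-alignment degree between adjacent nanocrystals from
# a list of c-axis orientations (e.g. read off FFT/SAED patterns). The input
# angles below are invented; the study reports values of roughly 2-7 degrees
# depending on location within the mineralized structure.
import statistics

def coalignment_degree(c_axis_angles_deg):
    """Mean absolute angle between neighbouring crystals' c-axes."""
    diffs = [abs(b - a) for a, b in zip(c_axis_angles_deg, c_axis_angles_deg[1:])]
    return statistics.mean(diffs)

edge_perpendicular = [0.0, 2.5, 5.0, 8.5, 11.0]    # tightly packed crystals
edge_parallel      = [0.0, 6.0, 13.5, 20.0, 27.0]  # more ELR matrix between crystals

print(coalignment_degree(edge_perpendicular))  # ~2.75
print(coalignment_degree(edge_parallel))       # ~6.75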
FFT analysis of nanocrystals at 180° angle geometries revealed nanocrystal co-alignment of 2.9°, whereas at 270° angle geometries, they co-aligned at 6.7° (Supplementary Fig. S3A and Fig. S3B). This larger co-alignment angle along the ridge or channel direction allows more ELR matrix to be present between nanocrystals. We speculate that this higher availability of ELR matrix between nanocrystals (i.e. separated by an angle of 6.7°) in the direction of the channel, compared with those growing perpendicular to the channel (i.e. separated by an angle of 2.9°), would enhance the growth of the nanocrystals as a result of their affinity for the ELR [22], a higher diffusion of Ca and P ions, and a local increase in ion concentration. Similar observations have been reported on apatite nanocrystal formation during mineralization of collagen fibrils because of an increased concentration and localization of Ca and P ions within confined gap regions [42,43]. In contrast, nanocrystals exhibiting lower amounts of ELR matrix between them (i.e. 2.9°) would allow less CaP diffusion and consequently more restricted crystal growth. Furthermore, as expected, nanocrystals densely packed because of higher co-alignment (2.9°; i.e. at the edges growing perpendicular to the direction of the channels) exhibited higher E values (12.7 ± 5.0 GPa) compared with those with lower co-alignment (6.7°; i.e. at the edges growing parallel to the channels; 5.1 ± 1.6 GPa). Thus, these results demonstrate that differences in nanocrystal alignment in the directions perpendicular and parallel to the channels lead to differences in growth preference and mechanical properties of the mineralized structures at the macroscopic level.

Fig. 6. Co-alignment degree between adjacent nanocrystals. Table summarizing the relation between aspect ratio and the nanocrystal co-alignment at different locations (i.e. at the center and edges) of the mineralized structures grown on a smooth ELR membrane and on Channel (25, 10) and Channel (2, 3) topographies.

Conclusion

The present work reports on the possibility to guide mineralization with spatial control by integrating a mineralizing matrix and surface topographies. Apatite nanocrystals nucleated and grew inside of the ELR matrix, whereas topographical patterns on the surface were able to generate ELR volumes with specific geometries, which dramatically affected mineralization. In summary, (1) we demonstrate that minor changes in single nanocrystal co-alignment led to large mineralization effects at the micro and macroscale, (2) we validate the possibility of using our approach to selectively guide the growth of the hierarchically mineralized structures by systematically modifying surface topographies, and (3) our study provides new knowledge on the role of spatial confinement of the organic matrix on the growth of crystals at the nanoscale and microscale. In addition, our study addresses a major challenge in materials science by establishing the possibility of controlling various structural properties such as crystal orientation, spatial organization, directional growth, and hierarchical organization over a large millimeter scale. We envisage the possibility of developing biomineralizing synthetic materials with advanced functionalities that can offer exciting opportunities for a broad range of fields expanding from materials science to hard tissue (such as dental enamel and bone) regeneration.
Data and materials availability All data are provided in the article or the supplementary file. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2021-07-21T05:21:12.351Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "8141d8b2b75170829092c36ac13cdf202f78bebb", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.mtbio.2021.100119", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8141d8b2b75170829092c36ac13cdf202f78bebb", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
51638306
pes2o/s2orc
v3-fos-license
GtTR: Bayesian estimation of absolute tandem repeat copy number using sequence capture and high throughput sequencing

Background

Tandem repeats comprise a significant proportion of the human genome including coding and regulatory regions. They are highly prone to repeat number variation and nucleotide mutation due to their repetitive and unstable nature, making them a major source of genomic variation between individuals. Despite recent advances in high throughput sequencing, analysis of tandem repeats in the context of complex diseases is still hindered by technical limitations. We report a novel targeted sequencing approach, which allows simultaneous analysis of hundreds of repeats. We developed a Bayesian algorithm – namely, GtTR – which combines information from a reference long-read dataset with a short read counting approach to genotype tandem repeats at population scale. PCR sizing analysis was used for validation.

Results

We used a PacBio long-read sequenced sample to generate a reference tandem repeat genotype dataset with on average 13% absolute deviation from PCR sizing results. Using this reference dataset GtTR generated estimates of VNTR copy number with accuracy within 95% high posterior density (HPD) intervals of 68 and 83% for capture sequence data and 200X WGS data respectively, improving to 87 and 94% with use of a PCR reference. We show that the genotype resolution increases as a function of depth, such that the median 95% HPD interval lies within 25, 14, 12 and 8% of its midpoint copy number value for 30X, 200X WGS, 395X and 800X capture sequence data respectively. We validated nine targets by PCR sizing analysis and genotype estimates from sequencing results correlated well with PCR results.

Conclusions

The novel genotyping approach described here presents a new cost-effective method to explore a previously unrecognized class of repeat variation in GWAS studies of complex diseases at the population level. Further improvements in accuracy can be obtained by improving the accuracy of the reference dataset.

Electronic supplementary material

The online version of this article (10.1186/s12859-018-2282-3) contains supplementary material, which is available to authorized users.

Background

Repetitive DNA sequences make up almost half of the human genome [1]. A subset of these repeats are known as tandem repeats (TRs), in which copies of a stretch of DNA sequence (i.e. the repeat unit) are located next to each other (i.e. in tandem). TRs with less than nine base pair repeat units are classified as microsatellites or short tandem repeats (STRs) and those with more than 10 base pair repeat units are known as minisatellites [2]. TRs which have variable copy number in a population are termed variable number tandem repeats (VNTRs). There are almost 1 million TRs in the human genome encompassing 4% of the entire genome [3], yet only a few of these have been investigated in terms of disease association. Trinucleotide repeats in Fragile X Syndrome [4], Huntington's disease [5], Spinobulbar muscular atrophy [6] and Spinocerebellar Ataxia [7] are a few of the well-documented human diseases associated with TR variation. Notably, these TRs are microsatellites; minisatellites have not been well studied because of the limitations in analyzing longer length repeat units. The contribution of genetic variation to complex disease susceptibility has been extensively studied in recent years.
Single nucleotide polymorphisms (SNPs) and copy number variations (CNVs) have been the major focus of large scale genome wide association studies (GWAS). However, VNTRs have been largely ignored in the context of complex diseases due to their sequence complexity. Historically, VNTRs have been considered non-functional DNA, due to their repetitive and unstable nature. However, VNTRs are prone to high rates of copy number variation and mutation due to this repetitive, unstable nature, which makes them a major source of genomic variation between individuals and could potentially explain some of the phenotypic variation observed in complex diseases [8,9]. Recent studies have shown that 10 to 20% of coding and regulatory regions contain VNTRs, suggesting that repeat variations could have phenotypic effects [10]. Association analyses have identified cis correlations of large tandem repeat variants with nearby gene expression and DNA methylation levels, indicating the functional effects of tandem repeat variations on nearby genomic sequences [11]. These findings show that TRs, which represent a highly variable fraction of the genome, can exert functionally significant effects. However, the possibility of exploring this collection of genetic variation is hindered by the difficulties in sequencing repetitive regions and the limitations of existing tools. As a result, the impact of tandem repeats on genomic variation between individuals as well as complex diseases remains largely unknown. Traditionally, TR analysis has been carried out via restriction fragment length polymorphism (RFLP) analysis, in which restriction enzymes are designed to fragment a target region and genotyping is carried out by separation of fragments on a gel [12]. More recently, PCR amplification of the target loci, followed by capillary electrophoresis analysis, has been used to determine the fragment length of the alleles [13]. However, these techniques are only applicable to a specific target region and are not scalable to high-throughput analysis. Hence, they limit the possibility of TR analysis in large-scale association studies. Recent progress has been made in genotyping STRs using high-throughput short-read Illumina sequence data by use of local assembly techniques [14][15][16], which has led to insights into the role of variation in STR repeat length in controlling expression levels [17][18][19]. However, longer VNTRs remain intractable using these approaches with short to medium length reads. Sequencing reads which span the entire repeat region could be used to accurately genotype repeat copy number variation [20]. However, this is not feasible for large scale analysis of longer TRs due to the high costs associated with long-read sequencing technologies. We propose a novel genotyping approach with targeted capture sequencing, which can be used in combination with short read sequencing technologies to assess TR variation at a population scale. We first demonstrate that targeted sequence capture of repetitive TR regions is feasible. We describe a novel probabilistic algorithm (GtTR) for genotyping TRs from short read sequencing data (targeted capture sequencing or whole genome sequencing) by comparison of regional read-depth with a single long-read reference sample. Our analysis methodology requires long read sequencing for only one sample, to be used as a reference, and can scale to population level with more economical short read sequencing technology.
We demonstrate the accuracy of the estimates from GtTR by comparison with gold-standard PCR sizing analysis. Our novel long read reference based genotyping approach, combining long read sequencing with targeted sequence capture using short read sequencing, enables genotyping of long TRs up to 5Kb in length, and possibly longer with improved long read sequencing methods. It also provides a cost effective approach to genotype TRs for large scale analysis and has the potential to be applied in large scale genome wide association studies to uncover the genetic impact of long TRs on complex traits.

Selection of tandem repeats for analysis

This study was carried out as a pilot study to develop methods to investigate the association between TRs and obesity. The TRs targeted in this study were identified from SNP microarray intensity data collected on childhood obesity case control data (646 cases and 589 controls) and adult obesity case control data (709 cases and 197 controls) which were publicly available [21,22]. Briefly, we selected microarray probes overlapping the VNTRs (as determined from the Tandem Repeats Database (TRDB) (4)) for association analysis. Principal component analysis (PCA) was used to identify associations between obesity and the probe intensity measurements within the VNTR regions. A comparative analysis using Multiphen [23] was used to identify the top 50 VNTRs associated with obesity in each cohort (Child_Gender1, Child_Gender2 and Adult). We selected a combined total of 142 VNTRs for the targeted sequencing analysis. The selected TRs range from 112 bp to 25,236 bp in length in the reference human genome and the number of repeat units ranges from 2 to 2300 repeats (Fig. 1).

Probe design for selected TRs

Agilent SureSelect DNA design was used to design target probes to capture the targeted regions. The 142 selected VNTRs were used as targets for the design. 100 bp flanking regions were also included as part of the target sequence. A high density tiling approach was used for probe design in target regions. Since the design is intended to target repeats, the repeat masker option was avoided to facilitate even probe coverage in the targeted region. Regions flanking the VNTRs were also included in the design. The size of the flanking region was determined by the size of the repeat region; at least 1000 bp of flanking sequence was included for each target. A high density tiling approach was used for probe design in flanking regions as well; however, the repeat masker option was used to identify unique flanking sequences in the flanking region. Therefore, the probe coverage in the flanking region is not evenly distributed.

Capture and Illumina sequencing of targeted TRs

Seven samples were used for Illumina sequencing. Library preparation was performed using the Agilent SureSelectXT Target Enrichment kit according to the manufacturer's instructions. Briefly, DNA was fragmented to 600 to 800 bp using micro-TUBE (Covaris). Fragments were end repaired, adapter ligated and amplified prior to target enrichment. Amplified fragments were hybridized to the designed capture probes for 24 h. After hybridization, streptavidin beads were used to capture the DNA fragments bound to the probes. Captured DNA was amplified using Illumina indexing adapters. Amplified libraries were sequenced on an Illumina MiSeq with 300 bp paired end sequencing.
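As a rough illustration of how pooling and target size translate into per-sample depth for a capture run like the one described above, the sketch below (Python) estimates mean on-target coverage. The run yield, on-target fraction and total target size are assumptions for illustration only, not values reported in this study.

# Back-of-the-envelope sketch of expected mean on-target coverage for a pooled
# capture sequencing run. All input values are illustrative assumptions.

def mean_on_target_coverage(total_read_pairs, pool_size, read_length_bp,
                            target_size_bp, on_target_fraction):
    reads_per_sample = total_read_pairs / pool_size
    on_target_bases = reads_per_sample * 2 * read_length_bp * on_target_fraction
    return on_target_bases / target_size_bp

# Hypothetical numbers: ~20 million read pairs per 2x300 bp MiSeq run, 6 pooled
# samples, ~1 Mb of targeted sequence (VNTRs plus flanks), 50% of reads on target.
print(mean_on_target_coverage(20_000_000, 6, 300, 1_000_000, 0.5))  # ~1000X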
Samples NA12878 and NA12891 were sequenced as a pool of 4 samples in run 1 (the other 2 samples in the pool are not included in the paper), whereas samples NA12877, NA12878, NA12879, NA12889, NA12890 and NA12892 were sequenced as a pool of 6 samples in run 2. Sample NA12878, which was used as the reference sample, was sequenced in both sequencing runs; data from sequencing run 1 was used as the test sample and data from sequencing run 2 was used as the reference sample.

PCR analysis of VNTRs

PCR sizing analysis of VNTRs has inherent limitations due to repetitive sequences and the size limitation of PCR products for fragment analysis. Therefore, only nine targeted VNTR regions, which are less than 1Kb in repetitive sequence, were validated by PCR sizing analysis in this study. Nevertheless, these nine regions include various repeat unit lengths and repeat sequence combinations to assess the accuracy of the genotypes determined from sequencing data. PCRs were performed using HotStar Taq DNA Polymerase (Qiagen) and PCR conditions were optimized for each PCR target. PCR products were purified and subjected to capillary electrophoresis on an ABI3500xL Genetic Analyzer (Applied BioSystems). Fragment sizes were analyzed using GeneMapper 4.0 (Applied BioSystems). Sanger sequencing was performed on PCR products to confirm the sequence of the repeat regions.

Simulation of targeted sequencing data

We used simulated sequencing data to assess the accuracy of our genotyping algorithm. Generation of simulated data is described in Cao et al. (2017) [24]. We first introduced SNPs and small indels to the reference human genome (hg19) to create 4 diploid genomes – Genome1, Genome2, Genome3 and Genome4. The rates for SNPs and indels were 2500 per Mb and 280 per Mb respectively, following the analysis of the 1000 genomes project [25]. We then introduced repeat variations into these genomes (Additional file 1: Table S1). We simulated PacBio whole genome sequencing (WGS) data for Genome 2 and we simulated Illumina targeted sequencing data for these 142 loci for all 4 simulated genomes according to Cao et al. (2017) [24]. For simulation of capture sequencing data, we sampled fragments from each genome according to the length distributions observed in real sequencing data (mean 800 bp and standard deviation 100 bp). We simulated approximately 1000X coverage for Illumina targeted sequencing data; however, for downstream analysis we down-sampled to approximately 200X coverage to achieve comparable depth to the real targeted capture sequencing data.

Public data used in the study

Illumina WGS data on CEPH Pedigree 1463 samples were downloaded from ENA with accession numbers PRJEB3381 and PRJEB3246 [26]. PacBio WGS data on the NA12878 sample was downloaded from SRA with accession numbers SRX627421 and SRX638310 [27].

Genotyping TRs from PacBio sequencing data

PacBio WGS data on the NA12878 sample was mapped to the whole genome hg19 reference using BLASR [28]. We used VNTRTyper (https://github.com/mdcao/japsa), an in-house tool to genotype TRs from long read PacBio sequencing data. Recently, a similar tool, adVNTR, was reported by Bakhtiari et al. (2017) [29]. Briefly, VNTRTyper takes advantage of long read sequencing to identify the number of repeat units in the TR regions. Firstly, the tool identifies reads that span the repeat region and applies Hidden Markov Models (HMM) to align the repetitive portion of each read to the repeat unit. Then it estimates the multiplicity of the repeat unit in a read using a profile HMM.
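As a deliberately simplified illustration of the idea behind this spanning-read genotyping step, the sketch below (Python, with invented sequences) approximates the repeat copy number of a read that spans the whole TR by dividing the length of the segment between the two flanks by the repeat unit length; the real VNTRTyper implementation uses a profile HMM rather than this naive calculation.

# Sketch: a naive stand-in for estimating how many copies of a repeat unit a
# spanning long read contains. Sequences here are invented for illustration.

def naive_repeat_count(read, left_flank, right_flank, unit):
    """Approximate repeat copy number from a read spanning the whole TR."""
    start = read.index(left_flank) + len(left_flank)
    end = read.index(right_flank, start)
    repetitive_segment = read[start:end]
    return len(repetitive_segment) / len(unit)

unit = "ACGT"
read = "TTTTGGGG" + unit * 7 + "CCCCAAAA"   # 7 copies between invented flanks
print(naive_repeat_count(read, "TTTTGGGG", "CCCCAAAA", unit))  # 7.0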
A threshold of 2 supporting reads per genotype was used to estimate genotypes. Details of the VNTRtyper analysis are provided in Additional file 2.

Sequencing analysis of TRs from Illumina sequencing data

Both Illumina targeted capture sequencing data and WGS data were mapped to the human genome hg19 reference using BWA-MEM [30]. We developed "GtTR", which is a read-depth Bayesian model for estimating the maximum a-posteriori repeat count genotype, as well as the standard error and 95% high posterior density (HPD) interval of this estimate. GtTR estimates a scaled relative repeat count of the test sample with respect to a reference sample. Define R_rep, R_flank, S_rep, S_flank as the read counts in the repeat and flanking regions of the reference sample (R) and the test sample (S), respectively. We calculate a frequentist estimate of Relative Copy Number (RCN) as

RCN_est = (S_rep / S_flank) / (R_rep / R_flank)    (1)

We calculate a posterior probability over a discretized set of possible RCN values as follows:

prob(RCN = k | S_rep, S_flank, R_rep, R_flank) ∝ prob(S_rep | S_rep + S_flank, R_rep, R_flank, RCN = k) . prob(R_rep, R_flank, RCN = k)    (2)

where k ∈ {0, 1/1000, 2/1000, ...}. We have assumed that prob(R_rep, R_flank, RCN = k) = prob(R_rep, R_flank) . prob(RCN = k), in other words, that the absolute reference genome read counts are independent of the relative copy number of the test sample relative to the reference. We also place a uniform prior on prob(RCN = k). We model the number of reads in the repeat region of the test sample (S_rep), conditional on the total number of reads in the region (flanking plus repeats), using a beta-binomial distribution:

S_rep | (S_rep + S_flank), RCN = k ~ Beta-Binomial(S_rep + S_flank, k . R_rep, R_flank)    (3)

This is a model in which the proportion of reads expected to come from the repetitive region scales with RCN. From eq. 3 we calculate the maximum a-posteriori RCN (RCN_MAP) as well as the smallest range which contains X% of the posterior probability mass (defined as the high posterior density interval), where the default for X is 95%. Finally, we rescale these values by multiplying by the reference genotype, as determined either by an estimate derived from PacBio data or PCR analysis. Details of the GtTR (https://github.com/mdcao/japsa) analysis are provided in Additional file 2.

Statistical analysis

To calculate the accuracy as a function of HPD interval, we use GtTR to calculate the HPD interval for 10, 20, 30, 40, 50, 60, 70, 80, 90 and 95% of posterior probability mass, respectively, at all loci for which we have gold-standard PCR sizing results. We use the number of PCR sizing results which lie inside and outside these HPD intervals to estimate the accuracy, as well as 95% confidence intervals (CI) in this estimate (using the binom.confint function in the binom R package). We also plot the distribution of the half relative width (HRW) of the 95% HPD intervals. We calculate this value as 95% HRW-HPD = (HPD_upper − HPD_lower) / (HPD_upper + HPD_lower). For a non-skewed posterior distribution, we can interpret 95% HRW-HPD = x as the value x such that 95% of the posterior mass lies within HPD-midpoint ± x · HPD-midpoint. We estimate this distribution over all captured TR regions for which we had sufficient long-read sequence coverage (122 regions). We also estimated this cumulative distribution after partitioning regions based on the average short read coverage depth, in order to investigate the effect of short-read sequence depth on genotype resolution.

Results

We developed a novel approach to genotype tandem repeats from targeted capture sequencing which integrates a single whole-genome long-read reference sequence with short-read sequencing data. We evaluated this approach using a combination of simulated and real sequencing data.
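As an illustration of the read-depth model described in the Methods above, the following sketch (Python, using numpy and scipy) computes a posterior over a discretized RCN grid with a beta-binomial likelihood in which the expected repeat-read fraction scales with RCN, then reports the MAP estimate and a 95% HPD interval rescaled by the reference genotype. The parameterization and all read counts are assumptions consistent with the description above, not the exact GtTR implementation.

# Minimal sketch of a GtTR-style read-depth posterior over relative copy number.
# All counts below are invented; the parameterization is one plausible reading
# of the model described in the text, not the published implementation.
import numpy as np
from scipy.stats import betabinom

def gttr_posterior(s_rep, s_flank, r_rep, r_flank, ref_genotype,
                   grid_max=5.0, step=0.001):
    rcn_grid = np.arange(step, grid_max, step)      # candidate RCN values
    n = s_rep + s_flank                              # total test reads in the region
    # Beta-binomial likelihood: mean repeat fraction = k*r_rep / (k*r_rep + r_flank)
    log_lik = np.array([betabinom.logpmf(s_rep, n, k * r_rep, r_flank)
                        for k in rcn_grid])
    post = np.exp(log_lik - log_lik.max())
    post /= post.sum()                               # uniform prior -> normalized likelihood

    map_rcn = rcn_grid[post.argmax()]
    # 95% HPD: smallest set of grid points holding 95% of the posterior mass
    order = np.argsort(post)[::-1]
    take = np.searchsorted(np.cumsum(post[order]), 0.95) + 1
    kept = rcn_grid[order[:take]]
    # Rescale from "relative to reference" to absolute repeat copy number
    hpd = (kept.min() * ref_genotype, kept.max() * ref_genotype)
    return map_rcn * ref_genotype, hpd

print(gttr_posterior(s_rep=480, s_flank=520, r_rep=400, r_flank=600,
                     ref_genotype=12.0))

The half relative width reported in the text would then simply be (hpd[1] − hpd[0]) / (hpd[1] + hpd[0]) for each locus.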
Using short read sequence to genotype TRs

The short read length of Illumina sequencing (< 300 bp) is not sufficient to span the entire repeat region and flanking region, which presents a hurdle for genotyping repeat regions. Here, we propose a novel algorithm, 'GtTR', to utilize the cost-effective short-read sequencing method for genotyping repeat regions. We use a control sample with a known genotype, determined from long-read sequencing, as a reference to improve the accuracy of genotyping from short-read sequencing data (Fig. 2). Due to the use of a read-depth based approach, genotypes determined from short-read sequencing data will be an average of the two alleles instead of the exact genotype of the two alleles.

Evaluating performance of GtTR using simulated data

We simulated Illumina targeted capture sequencing data from the targeted VNTR regions for Genome1, Genome2, Genome3 and Genome4 (see Methods). We also simulated PacBio WGS data from simulated Genome2, hence the simulated Genome2 sample was used as the reference sample in the GtTR analysis pipeline. VNTRtyper was applied to simulated PacBio data on Genome2 to determine the genotypes of the targeted VNTR regions. PacBio WGS simulated data only had sufficient coverage for 119 targets to determine the genotype. VNTRtyper identified at least one allele correctly for 92 targets (Additional file 1: Table S2). The correlation values between simulated and observed genotypes were 0.9980, 0.9969 and 0.9971 for allele 1, allele 2 and both alleles, respectively, indicating that the VNTRtyper method produces an accurate estimation of the genotypes. The GtTR analysis pipeline was applied to all 4 simulated Illumina targeted sequencing data sets to determine the repeat count genotypes, as well as the relative standard error in the estimate of the genotypes at the targeted VNTR regions. Genotype estimates from GtTR were compared with the simulated genotypes for all 4 simulated data sets (Fig. 3, Additional file 1: Table S3). GtTR estimated genotypes were 96.6% accurate (CI: 94.6-98.0%) within 95% HPD intervals (Fig. 4a). The median half relative width of the 95% HPD interval was 11.2% (based on a median depth of coverage of 578X across 4 samples and 119 targets; Fig. 4b). The half relative width of the 95% HPD interval decreased to 8.8% amongst loci with depth between 800X and 1000X (Fig. 4c).

Fig. 4 (a) Accuracy of the genotype estimates by GtTR at varying HPD intervals for simulated capture data, using genotypes from long read sequence data as the reference genotype. Error bars represent 95% binomial confidence intervals. (b) Overall cumulative distribution of half relative width of the 95% HPD interval in genotype estimates. (c) Cumulative distribution of half relative width of the 95% HPD interval in genotype estimates, stratified by sequence coverage.

Developing a global reference sample for GtTR

GtTR relies on the use of a reference sample with accurate TR genotypes. This could be obtained from long read capture sequencing of a reference sample; however, the drawback of this approach is that long-read sequencing would have to be obtained for each new TR target panel. An alternative to this approach is to use a sample which has been fully sequenced using long-read sequencing. Conveniently, Pendleton et al. (2015) recently released a sample (NA12878) which has been sequenced to over 45X depth using PacBio [27]. VNTRtyper was applied to this sequencing data to calculate the number of repeats in our targeted TR regions.
PacBio WGS data on the NA12878 sample had sufficient coverage on 122 targeted TR regions to determine the genotype (Additional file 1: Table S4). The genotype estimates by VNTRtyper were compared to the genotypes determined by PCR sizing analysis for nine targets (Fig. 5a and b). The correlation values between VNTRtyper and PCR sizing analysis indicate that the genotype predictions on PacBio WGS data by VNTRtyper were comparable to the accuracy of PCR results. On average, the VNTRTyper result was 13% different from the PCR sizing (Table 1), and this was mainly due to the low coverage of the PacBio WGS data.

Genotyping VNTRs using short-read Illumina targeted capture sequencing data

All of the targeted VNTRs were captured successfully by the targeted capture sequencing method and approximately 90% of the targets had greater than 100X coverage (Fig. 6a). The sequence coverage was not affected by the GC content of the repeat sequences (Fig. 6b) or the length of the repeat unit. The GtTR algorithm was applied to Illumina targeted sequencing data to determine the genotypes of the targeted VNTR regions. NA12878 PacBio WGS data (Additional file 1: Table S4) was used to calculate reference genotypes for 122 out of 142 targets; the remainder did not have sufficient depth to calculate accurate genotypes (see Methods). Two technical replicates were included for Illumina targeted sequencing of the NA12878 sample. One of these replicates was used as the reference for RCN calculations and the other was used as a test sample (see Methods). The genotype estimates by GtTR were compared to the average of the two alleles determined by PCR on nine targets for all 7 samples (Table 1). Capillary electrophoresis plots for the PCR sizing analysis are provided in Additional file 2. The correlation values between genotype calls estimated from Illumina targeted sequencing and PCR sizing range from 0.9738 to 0.9930. Genotype estimates from GtTR using the PacBio reference on the 9 loci with PCR sizing analysis were 68% accurate (CI: 55-79%) using 95% HPD intervals (Fig. 7a). However, if we restrict to the 6 loci for which the VNTRTyper estimate from the PacBio reference was concordant with the PCR sizing result, the accuracy was 81% (CI: 65.9-91.4%) (Fig. 7b). If we had accurate reference genotypes for all 9 loci (i.e. the PCR sizing estimate), then the accuracy would be 87.3% (CI 76.5-94.3%) (Fig. 7c). This demonstrates the importance of obtaining highly accurate reference genotypes. The median relative half-length of the 95% HPD interval was 12.1% (across 122 loci and 7 samples, with a median depth of 395X; Fig. 8a). Amongst the 3 loci with depth greater than 800X the median relative half-length was 8.1% (Fig. 8b), demonstrating the influence of read depth on the resolution of the estimates. This was evident with down-sampling analysis on targeted sequencing data, where the median relative half-length increased as the coverage of the sample decreased (Additional file 2: Figure S13).

Genotyping VNTRs using short-read Illumina whole genome sequencing data

We downloaded 30X coverage Illumina WGS data on CEPH Pedigree 1463 for 17 samples, including all 7 samples which were included in this study. Additionally, we downloaded 200X coverage sequencing data for 3 samples, including the NA12877 and NA12878 samples included in this study.
The GtTR algorithm was applied to this WGS data set, and the NA12878 sample with 200X coverage was used as the reference sample for RCN analysis. [Figure caption, in part: the NA12878 test sample is independent of the sample used as the reference sample for RCN analysis; (c) genotype estimates from PacBio WGS on the NA12878 sample are included for comparison.] Genotype estimates by GtTR on Illumina WGS data were compared with PCR sizing analysis for all 7 samples included in this study (Table 2). Comparisons between genotype estimates by GtTR on Illumina WGS data and PCR sizing analysis for all Illumina WGS samples are provided in Additional file 1: Table S5. Noticeably, the correlation values for Illumina targeted capture sequencing vs PCR sizing were higher than those for Illumina WGS vs PCR sizing (refer to Table 1 and Table 2), indicating that the high coverage targeted capture sequencing data improves the accuracy of genotype estimates. The correlation values between genotype estimates from the 30X WGS data and PCR sizing analysis (Additional file 1: Table S6) range from 0.9340 to 0.9765. The low correlation values were likely due to the low accuracy of genotype estimates from low coverage (30X) WGS data. This is evident with the NA12877 sample, where the correlation value improved from 0.9636 to 0.9800 (Additional file 1: Table S6) for 30X and 200X WGS data, respectively.

Fig. 7 Accuracy of the genotype estimates by GtTR on targeted capture sequencing data at varying high posterior density intervals for (a) 9 targets validated by PCR sizing analysis, using genotypes from PacBio sequence data as a reference genotype, (b) 6 targets for which the PacBio genotype estimates were concordant with the PCR sizing analysis, using genotypes from PacBio sequence data as a reference genotype, and (c) 9 targets validated by PCR sizing analysis, using genotypes from PCR analysis as a reference genotype. Error bars represent 95% binomial confidence intervals.

Discussion

There are almost 1 million TRs in the human genome encompassing 4% of the entire genome including coding and regulatory regions [1]. Due to their unstable nature, TRs can lead to high rates of repeat variation and mutation in the genome [8]. Repeat variation in tandem repeats could exert functional consequences on adjacent genes [31]. Furthermore, variation in TRs is a major source of genomic variation between individuals and could possibly explain some of the phenotypic variation observed in complex diseases. However, the analysis of TRs is limited due to the lack of efficient high throughput analysis tools. In this study, we present a novel high throughput targeted sequencing approach to genotype TRs. Our approach, GtTR, uses short-read sequencing in combination with a long-read-characterized reference sample to genotype TRs in a cost-effective, high throughput manner. Long reads, which span the entire repeat region and flanking regions of the TRs, enable accurate estimation of the number of repeats. Therefore, the use of long-read sequencing data to determine the genotype of the reference sample improves the accuracy of the genotype estimates by our 'GtTR' approach. Furthermore, the use of global long-read WGS data as a reference data set (i.e. NA12878) eliminates the need to generate long read sequencing data on the reference sample for each new target panel. The reference sample only needs to be sequenced on a low cost targeted short-read sequencing method along with the test samples for each new target panel. The genotype estimates by GtTR on the targeted VNTR regions had comparable accuracy to PCR sizing analysis.
Although we were only able to include nine regions for PCR validation due to the laborious nature of developing and validating PCR primers, the variation in repeat unit length, number of repeat units and sequence composition of the repeat in these nine regions provides a comprehensive representation of the entire targeted VNTR regions. We have also demonstrated our method GtTR on Illumina WGS data, and the results reveal comparable accuracy to PCR sizing analysis. However, it was evident that the high sequencing coverage achieved from targeted sequencing provided an advantage for accurate genotype estimation in targeted regions. One of the main drawbacks of our GtTR approach is that, due to the use of a read depth based analysis, the genotype estimates are an average of the 2 alleles instead of exact estimates of the 2 alleles as with long-read sequencing data. Although this might prevent the estimation of the exact genotype, we believe this approach might still be applicable in GWAS. Differences in genotype estimates between test and control samples might be sufficient to identify TRs which might be associated with a complex disease. It is also worth noting that the genotype estimates from GtTR are dependent on a reference; therefore, the genotypes are relative to the reference sample and are not exact genotypes. Thus, any errors in the reference sample will affect the estimates in the test samples. Furthermore, the alignment method can affect the results due to multi-mapping reads in repetitive regions. We have assessed other aligners (e.g. Bowtie, Stampy) and found that BWA-MEM performs best in repetitive sequences. However, the use of a reference sample to obtain relative estimates of repeat number would remove any bias caused by multi-mapping reads, hence the impact on estimates in the test samples would be low or negligible. There have been several studies on the use of targeted sequencing of STRs using short-read sequencing [20,[32][33][34][35]. However, the shorter read length of these technologies presents a challenge for genotyping longer TR regions. To our knowledge, our study is the first to successfully demonstrate targeted capture sequencing and genotyping of VNTRs up to 5Kb in length using short read sequencing. Our combination of long-read reference genotyping and short-read sequencing has enabled us to genotype difficult repetitive sequences, and our approach has provided a cost-effective solution to genotype hundreds of TRs simultaneously in multiple samples. Recently, a similar hybrid approach, MixTaR, was published, which combines the high quality of short reads and the longer length of long reads for tandem repeat detection [36]. However, this method requires the sample to be sequenced using both long-read and short-read sequencing methods to genotype TRs. Although MixTaR provides an accurate genotype of the 2 alleles, it is not feasible to apply this approach in population based genotyping studies. In contrast, our approach uses a global reference sample, which eliminates the need to generate long-read sequencing data for each sample.

Fig. 9 Accuracy of the genotype estimates by GtTR on Illumina WGS data at varying HPD intervals for 9 targets validated by PCR sizing analysis, (a) using genotypes from PacBio sequence data as a reference genotype and (b) using genotypes from PCR sizing as a reference genotype. Error bars represent 95% binomial confidence intervals.
(c) Cumulative distribution of half relative width of the 95% HPD interval in genotype estimates for Illumina WGS at 30X and 200X coverage. (d) Cumulative distribution of half relative width of the 95% HPD interval in genotype estimates at varying sequence coverage of Illumina 200X WGS.
2018-07-18T11:26:04.975Z
2018-01-10T00:00:00.000
{ "year": 2018, "sha1": "bfd2a568465fe66507a3667503f51ab81608a078", "oa_license": "CCBY", "oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/s12859-018-2282-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bfd2a568465fe66507a3667503f51ab81608a078", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Computer Science", "Biology" ] }
14942890
pes2o/s2orc
v3-fos-license
Krempfielins N–P, New Anti-Inflammatory Eunicellins from a Taiwanese Soft Coral Cladiella krempfi

Three new eunicellin-type diterpenoids, krempfielins N–P (1–3), were isolated from a Taiwanese soft coral Cladiella krempfi. The structures of the new metabolites were elucidated by extensive spectroscopic analysis and by comparison with spectroscopic data of related known compounds. Compound 3 exhibited activity in inhibiting superoxide anion generation. Both 1 and 3, in particular 1, were shown to display significant anti-inflammatory activity by inhibiting elastase release in FMLP/CB-induced human neutrophils.

Introduction

Soft corals have been known to be rich sources of terpenoid metabolites [1]. For the purpose of discovering bioactive agents from marine organisms, we have previously investigated the chemical constituents and reported a series of bioactive natural products from Taiwanese soft corals [2][3][4][5]. In recent studies, a series of bioactive eunicellin-based diterpenoids have been isolated from the soft corals of the genera Cladiella, Klysum and Litophyton sp. [6][7][8][9][10][11][12][13][14]. The soft coral Cladiella krempfi has been found to produce several types of metabolites including eunicellin-type diterpenoids [15][16][17] and pregnane-type steroids [18,19]. Our previous chemical investigation of the Formosan soft coral Cladiella krempfi also resulted in the isolation of a series of new eunicellin-type diterpenoids, krempfielins A-M [20][21][22]. In this paper, we further report the discovery of three new eunicellin-based diterpenoids, krempfielins N-P (1-3) (Chart 1 and Supplementary Figures S1-S9). The ability of these compounds to inhibit superoxide anion generation and elastase release in FMLP/CB-induced human neutrophils was also evaluated. The results showed that compound 3 could inhibit superoxide anion generation, while 1 and 3, especially 1, effectively inhibited elastase release in FMLP/CB-induced human neutrophils.

Results and Discussion

The new metabolite krempfielin N (1) showed the molecular ion peak [M + Na]+ at m/z 461.2882 in the HRESIMS, establishing a molecular formula of C25H42O6 and implying five degrees of unsaturation. The IR absorption bands at ν max 3445 and 1733 cm−1 revealed the presence of hydroxy and ester carbonyl functionalities. The 13C NMR spectrum measured in CDCl3 showed signals of 25 carbons (Table 1), which were assigned with the assistance of the DEPT spectrum to six methyls (including one oxygenated methyl, δ C 57.0), six sp3 methylenes, one sp2 methylene, eight sp3 methines (including four oxymethines), and four quaternary carbons (including one ester carbonyl). The NMR spectroscopic data of 1 (Tables 1 and 2) showed the presence of one 1,1-disubstituted double bond (δ C 112.5 CH2 and 148.0 C; δ H 5.03 s, and 4.86 s), one methoxy group (δ H 3.34, 3H, s) and one n-butyryloxy group (δ C 172.3 C; 37.4 CH2; 18.4 CH2; and 13.7 CH3; δ H 2.30 m, 2H; 1.67 m, 2H; and 0.98 t, 3H, J = 7.6 Hz). Therefore, taking into account the two degrees of unsaturation from the double bonds, the remaining three degrees of unsaturation suggested that 1 should be a tricyclic compound. The 1H-1H COSY and HMBC correlations (Figure 1) were further used for establishing the molecular skeleton of 1. The COSY experiment assigned three isolated consecutive proton spin systems. The above evidence and the analysis of the HMBC spectrum (Figure 1) suggested that 1 is a eunicellin-based diterpenoid.
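For readers unfamiliar with the arithmetic, the short sketch below (Python) reproduces the ring-plus-double-bond (degree of unsaturation) count quoted above for a CxHyOz formula; the helper function is generic, and the two formulas are those given in the text.

# Sketch: degree-of-unsaturation (ring + double bond equivalent) arithmetic.
# For a CxHyNnOz formula, DBE = C - H/2 + N/2 + 1 (oxygen does not contribute).

def degrees_of_unsaturation(c, h, n=0):
    return c - h / 2 + n / 2 + 1

print(degrees_of_unsaturation(25, 42))  # krempfielin N, C25H42O6 -> 5.0
print(degrees_of_unsaturation(28, 44))  # krempfielin O, C28H44O9 -> 7.0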
Furthermore, the two hydroxy groups attached at C-7 and C-12 were confirmed by the HMBC correlations from one methyl (δ H 1.12 s, H-16) and one oxymethine (δ H 4.12 m, H-6) to the oxygenated quaternary carbon appearing at δ 75.8 (C-7), and from one methine (δ H 2.91 t, H-10) and one proton of H2-17 (δ H 5.03 s) to the oxymethine carbon appearing at δ 71.0 (C-12). Thus, the remaining n-butyryloxy group had to be positioned at C-3, an oxygen-bearing quaternary carbon resonating at δ 86.5 ppm. On the basis of the above analysis, the planar structure of 1 was established. The stereochemistry of 1 was finally confirmed by the very similar NOE correlations of both 1 and krempfielin L [22]. Krempfielin O (2) was shown by HRESIMS to possess the molecular formula C28H44O9 (m/z 547.2880 [M + Na]+). The NMR spectroscopic data of 2 (Tables 1 and 2) showed the presence of two acetoxy groups (δ H 2.07, s and 2.08, s, each 3H; and δ C 170.7, C and 170.2, C; 21.4, CH3 and 21.6, CH3), and an n-butyryloxy group (δ H 2.60 m and 2.50 m, each 1H; 1.67 m, 2H and 1.00 t, 3H, J = 7.5 Hz; and δ C 173.0, C; 36.7, CH2; 18.5, CH2 and 13.5, CH3). As demonstrated by the HMBC correlation from the oxymethine proton H-8 (δ 5.19) to the ester carbonyl carbon appearing at δ C 170.7 (Figure 1), one acetoxy group was positioned at C-8. The position of an n-butyryloxy group at C-3 was established by the NOE interaction between the methylene protons (δ 1.67) of the n-butyryloxy group and H-5 (δ 1.49). The remaining acetoxy group was thus positioned at C-12. The relative configuration of 2 was further confirmed by NOE correlations (Figure 2). (Tables 1 and 2). The 13C NMR spectrum of 3 revealed the appearance of two ester carbonyls (δ C 172.5 and 170.1), which were correlated with one methylene (δ H 2.12 m, 2H; and δ C 37.4) of an n-butyrate and the methyl (δ H 2.05 s, 3H; δ C 21.7, CH3) of an acetate group, respectively. The planar structure of 3 was determined by 1H-1H COSY and HMBC correlations (Figure 1). Comparison of the NMR data of 3 with those of the compound krempfielin A [20] revealed that the only difference is the replacement of one methyl and one hydroxy group at C-7 in krempfielin A by one olefinic methylene (δ C 118.1, CH2; δ H 5.55, s and 5.23, s) in 3. The placement of one n-butyryloxy group and one acetoxy group at C-3 and C-12, respectively, was established by comparison of the spectroscopic data with those of krempfielin A. The relative configuration of 3 was mostly determined to be the same as that of krempfielin A by comparison of the chemical shifts of both compounds and was further confirmed by NOE correlations (Figure 2). Recently, we discovered that several eunicellins showed anti-inflammatory activity by significantly inhibiting superoxide anion generation and elastase release in human neutrophils induced by N-formyl-methionyl-leucyl-phenylalanine/cytochalasin B (FMLP/CB) [22,23]. The same in vitro anti-inflammatory effects of the diterpenoids 1-3 were also tested in this study (Table 3). At a concentration of 10 µM, 1 and 2 could not significantly reduce the generation of superoxide anion,

General Experimental Procedures

Melting point was determined using a Fisher-Johns melting point apparatus. Optical rotations were measured on a JASCO P-1020 polarimeter. IR spectra were recorded on a JASCO FT/IR-4100 infrared spectrophotometer. ESIMS were obtained with a Bruker APEX II mass spectrometer.
The NMR spectra were recorded on either a Varian UNITY INOVA-500 FT-NMR or a Varian 400MR FT-NMR spectrometer. Silica gel (Merck, Darmstadt, Germany, 230-400 mesh) was used for column chromatography. Precoated silica gel plates (Merck, Darmstadt, Germany, Kieselgel 60 F-254, 0.2 mm) were used for analytical thin layer chromatography (TLC). High performance liquid chromatography was performed on a Hitachi L-7100 HPLC apparatus with an octadecylsilane (ODS) column (250 × 21.2 mm, 5 µm).

Animal Material

C. krempfi was collected by hand using scuba off the coast of the Penghu Islands of Taiwan in June 2008, at a depth of 5-10 m, and stored in a freezer until extraction. A voucher sample (specimen No. 200806CK) was deposited at the Department of Marine Biotechnology and Resources, National Sun Yat-sen University.

Extraction and Separation

The octocoral (1.1 kg fresh wt) was collected and freeze-dried. The freeze-dried material was minced and extracted exhaustively with EtOH (3 × 10 L). The EtOH extract of the frozen organism was partitioned between CH2Cl2 and H2O. The CH2Cl2-soluble portion (14.4 g) was subjected to column chromatography on silica gel and eluted with EtOAc in n-hexane (0%-100% of EtOAc, stepwise) and then further with MeOH in EtOAc with increasing polarity to yield 41 fractions. Fraction 31, eluted with n-hexane-EtOAc (1:10), was rechromatographed over a silica gel open column using n-hexane-acetone (3:1) as the mobile phase to afford eight subfractions (A1-A8). Subfraction
2016-03-01T03:19:46.873Z
2014-02-01T00:00:00.000
{ "year": 2014, "sha1": "daa91cceeef3925a24baefd21abdbdac0b3125d1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-3397/12/2/1148/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "daa91cceeef3925a24baefd21abdbdac0b3125d1", "s2fieldsofstudy": [ "Environmental Science", "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
103768137
pes2o/s2orc
v3-fos-license
Combination of atomic force microscopy and mass spectrometry for the detection of target protein in the serum samples of children with autism spectrum disorders

We demonstrate the possibility of detecting target proteins associated with the development of autistic disorders in children using a combined atomic force microscopy and mass spectrometry (AFM/MS) method. The proposed method is based on the combination of affinity enrichment of proteins from biological samples, visualization of these proteins by AFM, and MS analysis with quantitative detection of the target proteins.

Introduction

According to the WHO, about 67 million people worldwide suffer from autism, and this level increases by 14% every year. Due to complications in diagnosis, there are no official data on the number of autistic children in Russia. In the early 2000s, the genes associated with the development of autism spectrum disorders (ASD) were annotated [1]. The genes associated with the development of ASD are, however, heterogeneous and can participate in the development of other psychiatric and neurological disorders [2]. A full understanding of the causes of autism can be achieved by identifying functional protein markers along with behavioral reactions and by developing highly sensitive and efficient methods for their quantitative detection in biomaterial [3,4]. Earlier, the authors carried out a comparative panoramic mass spectrometric (MS) analysis of serum samples from three families in which children with ASD were brought up [5]. As a result of this comparative analysis of the protein composition of the serum samples, a small group of 13 putative marker proteins was identified; in this group, LIM domain-containing protein 1 was identified in four of five samples. To date, changes in the levels of GFAP, the apoptosis factor Bcl-2, the glutamate metabolism factor GAD-2, metallothioneins and thymidylate synthases, etc. [4,6] in blood samples of children suffering from ASD have been reported in the scientific literature. These proteins are regarded as potential markers of the development of autism in children. The aim of the present study is the development of a sensitive multiplexed method for the detection of target proteins in serum samples of children suffering from ASD. The following proteins were used as targets: apoptosis regulator (Bcl-2, UniProt AC P10415), metallothionein-3 (MT3, UniProt AC P25713), and thymidylate synthase (TYMS, UniProt AC P04818). The developed method is based on affinity concentration of target proteins from the serum onto the surface of chips for atomic force microscopy.
Preparation of the AFM chip for MS analysis included hydrolytic cleavage of the proteins on the chip surface according to [7]. Mass spectrometric selected reaction monitoring (SRM) of the target proteins was carried out on an Agilent 6490 mass spectrometer (USA) equipped with an Agilent 1260/1290 HPLC system (USA), targeting unique peptides of the target proteins according to the technique described in [7].

Results and discussion
The AFM chip fabrication and analysis procedure (including the step of specific enrichment of the AFM chip surface with target protein molecules) are described in our papers [5,7,8]. The identification of protein objects captured onto the AFM chip surface was carried out by SRM mass spectrometry [7]. AFM scanning of the working areas indicated that visualization of the affine «antibody/target protein» complex is hindered by the insufficient contrast of the images against the background of antibody molecules. That is, the changes in the heights of the objects visualized in the working areas after incubation of the chip in the target protein solution are insufficient for unambiguous confirmation of complex formation. Earlier, using HIV-1 gp120 glycoprotein, we demonstrated that the best contrast of AFM images is obtained with small probe molecules (aptamers, i.e. short single-stranded DNA sequences specific to the target protein); the contrast of «aptamer/target protein» images is twice as high as that of «antibody/target protein» images [9]. For this reason, in the present research the AFM chip served as an efficient affine reagent [5,8]. Subsequent SRM analysis of the AFM chips revealed the presence of Bcl-2 protein on the surface of three chips and TYMS protein on the surface of one chip at picomolar concentrations (Table 1). Identification of MT3 protein on the surface of the AFM chips was not possible. It is interesting to point out that the AFM/SRM method allowed detection of Bcl-2 protein in the serum samples of two brothers (S2 and S3) and also in sample S5.
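The picomolar concentrations in Table 1 are back-calculated from SRM signal intensities of unique peptides. The paper does not spell out that arithmetic, so the Python sketch below only illustrates one common approach, an external calibration curve; the calibration points, the sample peak area and the helper name srm_concentration are hypothetical and not taken from the study.

```python
# Hypothetical sketch: converting an SRM peak area into a molar concentration
# via an external calibration curve. All numbers are illustrative only.
import numpy as np

# Calibration: known Bcl-2 standard concentrations (pM) vs. measured SRM peak
# areas for a proteotypic peptide (invented values).
standard_conc_pM = np.array([0.5, 1.0, 5.0, 10.0, 50.0])
standard_peak_area = np.array([1.1e3, 2.3e3, 1.05e4, 2.2e4, 1.08e5])

# Ordinary least-squares fit of a linear calibration curve: area = a*conc + b
a, b = np.polyfit(standard_conc_pM, standard_peak_area, 1)

def srm_concentration(peak_area: float) -> float:
    """Back-calculate the concentration (pM) of the target protein captured
    on the AFM chip from the SRM peak area of its unique peptide."""
    return (peak_area - b) / a

print(f"Example Bcl-2 estimate: {srm_concentration(6.4e3):.2f} pM")
```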
Does a Hot Drink Provide Faster Absorption of Paracetamol Than a Tablet? A Pharmacoscintigraphic Study in Healthy Male Volunteers To investigate the hypothesis that paracetamol is absorbed faster from a hot drink than from a standard tablet using simultaneous scintigraphic imaging and pharmacokinetic sampling. Twenty-five healthy male volunteers received both paracetamol formulations in a randomised manner. The formulation administered in the first treatment arm was radiolabelled to allow scintigraphic monitoring. In both treatment arms, blood samples were taken for assessing paracetamol absorption. Following the hot drink, paracetamol absorption was both significantly faster and greater over the first 60 min post-dose compared with the tablet, as evidenced by the median time to reach t0.25 μg/mL of 4.6 and 23.1 min, respectively, and AUC0-60 of 4668.00 and 1331.17 h*ng/mL, respectively. In addition, tmax was significantly shorter for the hot drink (median time = 1.50 h) compared with the tablet (1.99 h). However, Cmax was significantly greater following the tablet (9,077 ng/mL) compared with the hot drink (8,062 ng/mL). Onset of gastric emptying after the hot drink was significantly faster than after the standard tablet (7.9 versus 54.2 min), as confirmed scintigraphically. Compared with a standard tablet, a hot drink provides faster absorption of paracetamol potentially due to more rapid gastric emptying. INTRODUCTION The common cold is one of the most frequent human illnesses worldwide (1) and, although no cure exists, symptoms are treatable. A plethora of cold remedies exist but few have proven effectiveness, although paracetamol has shown greater effectiveness than placebo in treating symptoms associated with upper respiratory tract infection, including sore throat (2), headache (3) and fever (4). Cold remedies are available in a variety of formats, including hot drink and tablet forms. However, there have been very few clinical studies conducted to investigate their potential for rapid symptom control. Whilst capsules and tablets are more convenient for many customers, hot drink remedies are associated with greater comfort and they provide active ingredients in solution that may result in them reaching the bloodstream and being bioavailable faster than tablet formulations. Previous data from a smaller study of the absorption of paracetamol from a hot drink formulation (although not specifically designed to estimate pharmacokinetic parameters) indicated that the paracetamol from a hot drink was absorbed more quickly than historically seen with a solid dose formulation (5). Absorption of paracetamol from the stomach is negligible but is rapid and significant from the small intestine (6), making rapid gastric emptying a key approach to reducing the delay between drug ingestion and onset of symptom control. Fastdissolving tablets have been shown to empty from the stomach more quickly than standard tablets, resulting in earlier appearance of the drug in the plasma (7-9) and most importantly, more rapid pain relief (10). Previous studies using the dual investigative techniques of gamma scintigraphic imaging combined with concurrent pharmacokinetic (PK) assessment have shown that the rate of gastric emptying is directly proportional to the rate of paracetamol absorption (7,8). 
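The proportionality between gastric emptying and paracetamol absorption noted above can be illustrated with a toy one-compartment model with first-order absorption (the Bateman equation), in which a larger absorption rate constant stands in for the faster gastric emptying of a drug already in solution. This is a didactic sketch only; none of the parameter values are taken from the study.

```python
# Minimal simulation sketch (not the authors' model): a one-compartment model
# with first-order absorption, where ka is used as a stand-in for the combined
# effect of gastric emptying and dissolution. All parameters are assumptions.
import numpy as np

def plasma_conc(t_h, dose_mg=1000, F=0.9, V_L=42.0, ka_per_h=3.0, ke_per_h=0.35):
    """Bateman equation: plasma concentration (mg/L) at time t (hours)."""
    return (F * dose_mg * ka_per_h) / (V_L * (ka_per_h - ke_per_h)) * (
        np.exp(-ke_per_h * t_h) - np.exp(-ka_per_h * t_h)
    )

t = np.linspace(0, 3, 181)                 # 0-3 h in 1-min steps
hot_drink = plasma_conc(t, ka_per_h=6.0)   # solution: rapid gastric emptying
tablet = plasma_conc(t, ka_per_h=1.5)      # tablet: disintegration delays absorption

for label, c in (("hot drink", hot_drink), ("tablet", tablet)):
    print(f"{label}: tmax = {t[np.argmax(c)]:.2f} h, Cmax = {c.max():.2f} mg/L")
```

Under these assumptions the faster-emptying formulation reaches its peak earlier and rises more steeply over the first hour, which is the qualitative pattern the study set out to test.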
Since paracetamol is more soluble in hot water but only sparingly soluble in cold water, it is hypothesised that presenting paracetamol as a hot drink will potentially increase the rate of gastric emptying of the drug as it is already in solution form, negating the requirement for prior disintegration and dissolution of conventional tablets. This clinical study was designed to compare the in vivo behaviour of two paracetamol formulations: one a hot drink and the other as a standard tablet. Although the hot drink contained additional ingredients of phenylephrine and ascorbic acid (phenylephrine is commonly used as a nasal decongestant to help relieve a blocked nose and ascorbic acid [Vitamin C] is a common ingredient of cold and flu remedies) the pharmacology of these ingredients does not suggest that any effect on the PK of paracetamol is likely. The simultaneous monitoring of formulation behaviour using gamma scintigraphy and blood sampling for PK analysis was utilised to establish the link between formulation disintegration and gastric emptying with resultant serum concentrations of paracetamol. This study fills a knowledge gap where previously there were no data on the transit rates of hot drink formulations through the gastrointestinal (GI) tract. The primary objective of this healthy volunteer study was to investigate whether paracetamol in a hot drink reaches the plasma faster than from standard tablets, as determined by the time to reach a plasma concentration of 0.25 μg/mL (t 0.25 μg/ mL). Other indicators of the speed of early paracetamol absorption included AUC 0-30 , AUC 0-60 , t max and C max . The use of these concentrations to determine the PK parameters was standard and the lower limit of quantification (LLOQ) was 0.05 μg/mL, so it was proposed that five times the LLOQ was a robust indicator of paracetamol presence in the blood. The scintigraphic data provided information on the in vivo fate of both formulations to allow a correlation to be made between PK parameters and gastric emptying and disintegration profiles. Materials Hot drink sachets (Beechams Flu Plus Hot Lemon Sachets) and standard paracetamol tablets (Panadol Original Tablets) were supplied by the Clinical Supplies Department, GlaxoSmithKline Consumer Healthcare UK. Both products were obtained from a commercially available batch and packaged in commercial packaging. The paracetamol dose was the same for both the tablet and hot drink formulations (1,000 mg). Technetium-99 m diethylenetriamine pentaacetic acid ( 99m Tc-DTPA) was provided by the West of Scotland Radionuclide Dispensary, Glasgow, UK. Lactose monohydrate for radiolabelling procedures was obtained from DMV-Fonterra, The Netherlands. Hot Drink A volume of 99m Tc-DTPA sufficient to provide approximately 3.8 MBq at the target dosing time was added to 150 mL of hot water. The contents of the sachet were mixed with this radiolabelled water. The hot drink was allowed to cool sufficiently to be drinkable and was at a temperature between 48 and 50°C at time of dosing. Standard Paracetamol Tablets Radiolabelled lactose monohydrate was prepared by mixing lactose monohydrate with a volume of 99m Tc-DTPA sufficient to provide approximately 1.9 MBq per tablet at the target dosing time, following drying in hot air. The tablets were drilled to a fixed depth using a microdrill then filled with the required dose of radiolabelled lactose monohydrate (approximately 5 mg) and sealed with a small amount of bone cement. 
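Activities quoted as "sufficient to provide approximately 3.8 MBq at the target dosing time" imply a decay correction when the radiolabel is dispensed earlier. The short sketch below shows that correction for 99mTc (physical half-life about 6.01 h); the 1.5 h preparation lead time is an invented example, not a study value.

```python
# Decay correction sketch for 99mTc-labelled doses: how much activity must be
# dispensed at preparation so that the target activity remains at dosing time.
import math

TC99M_HALF_LIFE_H = 6.01
DECAY_CONST = math.log(2) / TC99M_HALF_LIFE_H  # decay constant, 1/h

def activity_to_dispense(target_MBq: float, hours_before_dosing: float) -> float:
    """Activity (MBq) required at preparation time, `hours_before_dosing`
    before the target dosing time, to leave `target_MBq` when dosing occurs."""
    return target_MBq * math.exp(DECAY_CONST * hours_before_dosing)

# e.g. a hot drink prepared 1.5 h (assumed) before dosing, targeting 3.8 MBq
print(f"{activity_to_dispense(3.8, 1.5):.2f} MBq needed at preparation")
```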
This previously validated radiolabelling methodology (drilling the tablets and filling them with radiolabelled lactose) has been used in other scintigraphic studies (11,12). Unpublished data from work previously conducted within this clinical centre confirmed that the complete release of radiolabel correlated well with complete tablet disintegration. Two 500 mg tablets were given orally with 150 mL water at room temperature.

Study Design
This was a phase IV, single centre, open-label, randomised, two-way crossover study conducted in healthy male volunteers. The study was performed according to the protocol and in accordance with the guidelines of the Declaration of Helsinki and Good Clinical Practice (GCP). The protocol and relevant study documentation were approved by the Scotland A Research Ethics Committee. The Administration of Radioactive Substance Advisory Committee (ARSAC) approved the radiation dosimetry. The following study treatments were administered in a randomised manner based on a Williams Latin Square design:
- Hot drink, i.e. 1,000 mg paracetamol, 10 mg phenylephrine and 40 mg ascorbic acid prepared with 150 mL hot water
- Standard paracetamol tablets, i.e. 2 × 500 mg tablets taken with 150 mL water at room temperature
To minimise radiation exposure to the subjects, only the formulation administered on the first dosing occasion was radiolabelled.

Study Population
A total of 25 healthy male volunteers were enrolled into the study. They provided written informed consent prior to participation in any study-specific investigations and underwent a screening medical investigation to ensure compliance with study criteria. The study population included non-smokers who were in good general health with a body mass index (BMI) in the range 18.0-29.9 kg/m². In addition, it was essential that the subjects did not suffer from any GI disorders that could affect the expected 'normal' behaviour of the formulations following administration. As such, subjects with diabetes and current sufferers of migraine were excluded, as they have been found to have altered gastric emptying (13,14). Vegetarians were also excluded because there is evidence that paracetamol absorption is impaired in this population (15) and because of the standard meals provided on study assessment days. Female subjects were excluded due to the need for exposure to radiation and because the menstrual cycle has been associated with changes in gastric emptying patterns (16). Subjects with egg allergy were also excluded due to the contents of the standardised breakfast, and any subjects with a BMI of ≥ 30 kg/m² were excluded because shielding caused by bone, muscle, other organs and soft tissue can attenuate radioactive counts.

Study Procedures
Eligible subjects attended the study centre on two dosing occasions. On arrival at the study centre, subjects were questioned on adherence to study restrictions, which included pre-breakfast fasting of at least 10 h, of which the final 2 h also required abstinence from fluids. In the 72 h prior to dosing, subjects were not allowed any alcohol. They were also restricted from consuming any caffeine- or xanthine-containing beverages or foods and from undertaking any strenuous physical activity in the 24 h prior to dosing. Food and fluid intake on the study days were monitored by study staff and consisted only of the standard meals supplied. Subjects were also required to abstain from prescribed and over-the-counter medications for 14 days and 48 h pre-dose, respectively, unless the medication was approved by a study physician.
At 2 h pre-dose, subjects consumed a standard breakfast which comprised one scrambled egg, one slice of bacon, one slice of toast with 15 g butter and 5 g jam, 100 g hash browns and 200 mL whole milk. The consumption of this meal at this time was to enable the dosing of study treatments to occur in the 'semi-fed' state, which mimics the normal directions for usage of analgesic products. Approximately 15-30 min pre-dose, a blood sample was taken and, on the first dosing occasion only, external radioactive markers (approximately 0.01 MBq 99m Tc) were taped to the chest and back to enable accurate alignment of sequential images. At the target dosing time, the subjects were given the study treatment and were required to complete dosing within 20 s. The investigator (or designee) collected blood samples from an indwelling cannula placed in the subject's arm at the following times: pre-dosing, then 3,5,7,9,11,15,20,30,45, 90, 120 and 180 min post-dosing to allow an assessment of paracetamol PK. The total blood volume taken at each timepoint was approximately 4 mL. The actual sample times were recorded alongside the nominal times on the Case Report Form (CRF). An acceptable blood sampling time was considered ± 30 s for up to 11 min, ± 1 min for 15, 20, 30 min, then ± 2 min from 30 min onwards. The total amount of blood removed during the two treatment visits for PK analysis was approximate to 104 mL. These blood samples were centrifuged and plasma fractions removed and frozen until shipping to a GSK-approved laboratory for analysis. On the first dosing occasion only, scintigraphic images of 25 s duration each were taken from both anterior and posterior aspects immediately after dosing then every 5 min for a period of 15 min, then every 15 min to 2 h post-dose, every 20 min to 4 h post-dose and hourly to a maximum of 10 h post-dose. An acceptable scintigraphic imaging time was considered ± 2 min throughout the imaging period. The images were acquired using a Siemens eCam gamma camera with a 53.3 cm field of view and fitted with a low energy, high resolution collimator. Imaging was stopped once complete gastric emptying and release of radiolabel from the tablet (if applicable) was confirmed. Scintigraphy Images were collected using the eSoft image acquisition software and subsequently analysed using the WebLink software. The following parameters were derived from the analysis: & Time to onset and completion of gastric emptying of a hot drink and standard paracetamol tablets & Time and site of onset and complete disintegration of standard paracetamol tablets Pharmacokinetics The primary PK variable was the time taken to reach a plasma paracetamol concentration of 0.25 μg/mL (t 0.25 ). Secondary PK variables were plasma concentrations of paracetamol at each PK sampling point, AUC 0-30 , AUC 0-60 , C max and t max . The PK parameters AUC 0-30 , AUC 0-60 , C max and t max were derived from the observed individual subject drug concentration versus time data using non-compartmental methods in WinNonlin® Professional Version 5.0.1 or higher. Statistics All statistical analyses were performed using SAS Version 9.2. Time to onset and completion of gastric emptying were analysed using an ANOVA model appropriate for a parallel group design. The time and site of onset and complete disintegration of standard paracetamol tablets were summarised using descriptive statistics. 
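For readers who want to reproduce the derived quantities on their own data, the sketch below reimplements the non-compartmental parameters named above (AUC over a fixed window by the linear trapezoidal rule, Cmax, tmax and the interpolated t0.25) in Python. The study itself used WinNonlin, so this is only an approximate, illustrative equivalent, and the example concentration-time profile is invented.

```python
# Crude non-compartmental sketch for a single subject's paracetamol profile.
# Assumptions: linear trapezoidal AUC over samples within the window (the
# partial interval up to the window boundary is ignored for simplicity) and
# linear interpolation for the first crossing of 0.25 ug/mL (= 250 ng/mL).
import numpy as np

def nca(times_min, conc_ng_ml, window_min=60, threshold_ng_ml=250.0):
    t = np.asarray(times_min, dtype=float)
    c = np.asarray(conc_ng_ml, dtype=float)

    # AUC(0-window) by the linear trapezoidal rule, reported in h*ng/mL
    m = t <= window_min
    auc_h = float(np.sum((c[m][1:] + c[m][:-1]) / 2.0 * np.diff(t[m]))) / 60.0

    cmax = float(c.max())
    tmax_h = float(t[np.argmax(c)]) / 60.0

    # time to reach the threshold concentration, by linear interpolation
    above = np.nonzero(c >= threshold_ng_ml)[0]
    if above.size == 0:
        t_cross = float("nan")
    elif above[0] == 0:
        t_cross = float(t[0])
    else:
        i = above[0]
        t_cross = t[i - 1] + (threshold_ng_ml - c[i - 1]) * (t[i] - t[i - 1]) / (c[i] - c[i - 1])

    return {f"AUC0-{window_min:g} (h*ng/mL)": auc_h, "Cmax (ng/mL)": cmax,
            "tmax (h)": tmax_h, "t0.25 (min)": t_cross}

# Invented concentration-time profile at the study's nominal sampling times
times = [0, 3, 5, 7, 9, 11, 15, 20, 30, 45, 90, 120, 180]
conc = [0, 40, 120, 310, 700, 1300, 2600, 4100, 6000, 7200, 8000, 7400, 5200]
print(nca(times, conc))
```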
The t 0.25 and t max parameters were subjected to a nonparametric analysis as the assumptions of normality and homogeneity of variance were not satisfied. A series of Wilcoxon rank sum tests as described by Hills and Armitage (17) was conducted and the Hodges-Lehmann estimate of the median difference between treatments was presented with a corresponding 95% confidence interval (CI) according to the method described by Hodges and Lehmann (18). The AUC and C max parameters were transformed prior to analysis using a logarithmic transformation (natural log) and analysed using an ANOVA model including factors for sequence, period and treatment (as fixed effects) and subject within sequence (as a random effect). The difference in log-transformed means and associated 95% CIs were backtransformed (exponentiated). For individual subjects, if AUC could not be calculated due to an insufficient number of quantifiable concentrations, then AUC was set to missing for that subject. If there were fewer than 12 subjects per treatment group for which AUC could be calculated, then a formal statistical analysis of AUC was not performed. Assessment of Safety/Tolerability Safety was assessed by physical examinations, electrocardiogram (ECG), vital signs, laboratory safety evaluations (blood biochemistry, haematology and urinalysis) and adverse event (AE) monitoring. Subjects were actively questioned on AEs before dosing, throughout the study day and at follow-up. AEs spontaneously reported by subjects were also noted. RESULTS Of the 37 subjects screened, 25 were randomised and completed the study. A flow-chart showing the breakdown of subjects screened, randomised and treated is shown in Fig. 1. The subjects had a mean (standard deviation [SD]) age of 30.5 (11.2) years (range: 18-51), a mean (SD) BMI of 24.74 (2.60) kg/m 2 and all of the subjects were Caucasian. The PK and scintigraphy analyses were performed on all subjects who were randomised, had any post-baseline PK or scintigraphic measurement, and had pre-dose plasma paracetamol concentration values of ≤100 ng/mL. One subject had a high pre-dose plasma paracetamol concentration value of 705 ng/mL in the first study period (where he was given the hot drink) and therefore was excluded from the analysis for this period only. Gamma Scintigraphy Results Example scintigraphic images comparing the gastric emptying behaviour of the hot drink and the standard tablets are shown in Fig. 2. At 30 min post-dosing, images clearly indicated that the hot drink had commenced emptying into the small intestine while the tablets were still relatively intact. Results showed that the hot drink had a statistically faster onset of gastric emptying compared with the standard tablets, as observed from the adjusted mean onset times of 7.86 and 54.23 min, respectively (p<0.0001) (Table I). However, there was no statistically significant treatment difference in time to complete gastric emptying, although the completion time was approximately 34 min faster for subjects dosed with the standard tablet due to the fact it started to empty later (Table I). For all 13 subjects dosed with the radiolabelled standard tablets, disintegration of tablets commenced and completed in the stomach. 
It should be noted that some disintegration of the tablets would have occurred prior to observation of gastric emptying of the radiolabel since the radiolabel is centralised in the tablet core so some non-labelled disintegrated material will have been released prior to gastric emptying of the radiolabelled product. However, it has been shown that administration of radiolabelled and non-labelled paracetamol tablets (using the same method of radiolabelling as used in this study) have similar disintegration rates (8). Onset of disintegration occurred at 43 min (SD=18.0) and completion occurred at 63 min (SD=24.8) post-dosing. Pharmacokinetics Results The mean plasma concentration vs. time profiles for both the hot drink and standard tablets are shown in Fig. 3. Appearance of paracetamol in the plasma was more rapid following administration of the hot drink when compared to the standard tablets. Tables II and III detail the PK parameters obtained and derived, as well as the results of the statistical treatments applied. The results demonstrated that t 0.25 was significantly shorter in subjects dosed with the hot drink compared with standard paracetamol tablets, with median times to reach t 0.25 of 4.59 and 23.14 min, respectively (p=0.0004). The hot drink had a median t max of 1.50 h which was significantly shorter than that of standard tablets (1.99 h) (p-value=0.0058). AUC 0-30 could only be calculated for 6 of the 25 subjects dosed with standard paracetamol tablets as there was insufficient quantifiable paracetamol plasma concentrations in the first 30 min for this treatment group. Since less than 12 subjects had this value calculated, statistical analyses were not performed for this parameter. The adjusted geometric means for AUC 0-60 for the hot drink and standard paracetamol tablets were 4668.00 and 1331.17 h*ng/mL respectively, indicating that paracetamol absorption over the first 60 min post-dose was statistically significantly greater with a hot drink compared with a standard paracetamol tablet (p<0.0001). However, C max was significantly higher for the standard paracetamol tablets compared with the hot drink. The adjusted geometric means were 9077.39 and 8061.76 ng/mL, respectively (p=0.0057). These findings may be due to the fact that the hot drink provides a different bioavailability profile. Safety/Tolerability There were no serious AEs (SAEs) or other significant AEs reported during the study and no subjects were discontinued due to AEs. The most commonly reported AE was haematuria in two subjects following the paracetamol hot drink, which was considered to be unrelated to the study product in both cases (in both cases mild haematuria was detected by dipstick and had resolved at the next assessment with no other associated problems reported). There were no significant safety issues with regard to vital signs, ECGs, or safety laboratory tests. DISCUSSION Despite the vast array of cold remedies available, there have been very few clinical studies conducted to investigate their potential for rapid symptom control. Hot drink remedies are associated with providing greater comfort possibly because their intense taste helps stimulate the flow of saliva and mucus which lubricate and soothe the nose and throat, as well as helping to clear bacteria and viruses (19). 
Furthermore, the active ingredients are available in solution, with paracetamol being more soluble in hot water but only sparingly so in cold water, which may result in them reaching the bloodstream and being bioavailable faster than tablet formulations, thereby resulting in a quicker alleviation of discomfort. The premise that a hot drink would result in earlier paracetamol absorption in comparison to a standard tablet was based on previously published data that the rate of appearance of paracetamol in plasma correlated to the rate of gastric emptying of paracetamol. This is because paracetamol absorption depends on the rate of gastric emptying as it is absorbed in the small intestine rather than the stomach (20). A drug in solution will be emptied from the stomach faster (21) hence as the hot remedy is in solution, gastric emptying will be more rapid and absorption from the small intestine will occur sooner. The current study evaluated the formulation behaviour and drug absorption behaviour of both hot drink and standard tablet formulations of paracetamol using simultaneous gamma scintigraphic imaging and blood sampling for PK analysis. The data obtained clearly demonstrated the superiority of the hot drink over the standard tablet in achieving faster exposure of paracetamol, as observed from the median times to reach t 0.25 . Paracetamol absorption over the first 60 min post-dose was statistically significantly greater with a hot drink compared with that of a standard tablet. Furthermore, t max was significantly shorter for the hot drink compared with standard paracetamol tablets. However, the C max observed in the 3-h study period was significantly higher for the standard paracetamol tablets compared with the hot drink. However, it is important to note that total exposure (i.e. AUC 0-inf ) was not assessed in this study and therefore it is inappropriate to conclude that more paracetamol is being delivered with a hot drink compared to tablet formulation. It could perhaps be expected that the tablet produces a higher C max compared to the hot drink formulation because the liquid hot drink is more spread out over the tissue at earlier timepoints and consequently there is a higher absorption rate per unit surface area, which results in C max concentrations being higher at earlier timepoints. It is unlikely that temperature has a key effect on C max . The clinical significance of these PK differences on symptom relief remains to be fully elucidated and future large-scale studies may investigate this finding further, however, based on these results, it is proposed that a clinical benefit would be noted earlier following administration of a hot drink compared with a tablet. In conjunction with the scintigraphic data that indicated that the time to onset of gastric emptying was significantly shorter for the hot drink, it can be inferred that the rapid drug absorption was a consequence of a more rapid onset of gastric emptying of the hot drink. Although the hot drink contained additional ingredients of phenylephrine and ascorbic acid, which might have been a contributing factor to the PK and gastric emptying differences, the pharmacology of phenylephrine and ascorbic acid does not suggest that this is likely. 
This small-scale pilot study demonstrates interesting initial results, but further methodologically rigorous studies comprising large, long-term, prospective, randomised clinical trials are necessary to compare the absorption of different paracetamol formulations, together with further elucidation of the clinical significance of these differences on symptom relief. CONCLUSION A hot drink of paracetamol has been shown to achieve faster and greater early drug absorption in comparison with a standard tablet formulation. Scintigraphic data supports the premise that more rapid gastric emptying of the hot drink contributed to the earlier appearance of paracetamol in the plasma. While comprehensive clinical data is not yet available to support the hypothesis that administering paracetamol in the form of a hot drink could result in more rapid alleviation of cold symptoms, results of this initial study allude to that potential.
Development and validation of a 1 K sika deer (Cervus nippon) SNP Chip

Background: China is the birthplace of the deer family and the country with the most abundant deer resources. However, at present, China's deer industry faces the problem that pure sika deer and hybrid deer cannot be easily distinguished. Therefore, the development of a SNP identification chip is urgently required.
Results: In this study, 250 sika deer, 206 red deer, 23 first-generation hybrid deer (F1), 20 second-generation hybrid deer (F2), and 20 third-generation hybrid deer (F3) were resequenced. Using the chromosome-level sika deer genome as the reference sequence, mutation detection was performed on all individuals, and a total of 130,306,923 SNP loci were generated. After quality control filtering, 31,140,900 loci remained. From molecular-level and morphological analyses, the sika deer reference population and the red deer reference population were established. The Fst values of all SNPs in the two reference populations were calculated. According to customized algorithms and strict screening principles, 1000 red deer-specific SNP sites were finally selected for chip design, and 63 hybrid individuals were determined to contain red deer-specific SNP loci. The results showed that the gene content of red deer gradually decreased in subsequent hybrid generations, and this decrease roughly conformed to the laws of statistical genetics. Reaction probes were designed according to the screened sites. All candidate sites met the requirements of the Illumina chip scoring system; the average score was 0.99, and the MAF was in the range of 0.3277 to 0.3621. Furthermore, 266 deer (125 sika deer, 39 red deer, 56 F1, 29 F2, 17 F3) were randomly selected for 1 K SNP chip verification. The results showed that among the 1000 SNP sites, 995 probes were synthesized, 4 of which could not be typed, while 973 loci were polymorphic. PCA, random forest and ADMIXTURE results showed that the 1 K sika deer SNP chip was able to clearly distinguish sika deer, red deer, and hybrid deer and that this 1 K SNP chip technology may provide technical support for the protection and utilization of pure sika deer species resources.
Conclusion: We successfully developed a low-density identification chip that can quickly and accurately distinguish sika deer from their hybrid offspring, thereby providing technical support for the protection and utilization of pure sika deer germplasm resources.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12863-021-00994-z.

Background
China has one of the largest and most diverse deer populations in the world. In a study on the genetic diversity of Chinese antler deer, Xing [1] proposed that there are 19 deer species in 10 genera in China, including sika deer, red deer, tufted deer, and white-lipped deer. This diversity of deer resources is an important component of the special animal germplasm resources of China and represents an economically important resource. Among these deer, sika deer and red deer are two species belonging to the order Artiodactyla, family Cervidae, and genus Cervus. The high degree of homology between the genomes of these two deer species indicates that their degrees of reproductive and genetic isolation are relatively small [2] and that they have not yet reached the stage of restricted or inhibited gene exchange [3].
In fact, fertile offspring can be produced in the wild and in captivity [4], and hybrid deer exhibit notable velvet quality traits and reproductive traits, indicating heterosis. To pursue greater economic benefits, cross-breeding was applied in the breeding process of antler deer, with the main hybridization method being crossing or progressive crossing between sika deer and red deer [5]. Specifically, the first generation of hybrids was crossed with sika deer to produce a second generation of hybrids, and the second generation of hybrids was crossed with sika deer to produce a third generation of hybrid deer. The phenotype of the second-generation hybrid deer was very similar to that of the sika deer, and the hybrids were difficult to distinguish with the naked eye, enabling the hybrid offspring and pure sika deer to intermingle. This intermingling has posed considerable challenges to the protection and utilization of pure sika deer. As a result, how to effectively identify and protect existing pure sika deer resources has become highly important. Traditional identification of purebred sika deer is primarily based on morphological characteristics. Such characteristics are easily influenced by the environment and seasonal variation, the identification step is timeconsuming, and the work is demanding. Thus, identification using phenotypic traits alone is not accurate, comprehensive or scientific. Subsequently, the identification of purebred sika deer evolved from relying on traditional phenotyping to employing DNA molecular marker technology. DNA is the basic carrier of biological genetic information. The DNA sequence in each organism is unique and can be used as a biological indicator. DNA molecular marker technology has extremely high application value [6], especially for some populations that are difficult to identify on the basis of their appearances, as molecular marker technology can be employed to identify them scientifically and accurately. According to the order of development, DNA molecular markers are divided into the first, second, and third generations. The first generation of DNA molecular markers is represented by restriction fragment length polymorphisms (RFLPs) and random amplified polymorphic DNA (RAPD), the second generation is represented by simple sequence repeats (SSRs), and the third generation is represented by expressed sequence tags (ESTs) and single nucleotide polymorphisms (SNPs) [7]. As a kind of DNA molecular marker, SNPs have the advantages of abundant polymorphisms, large quantities, stable genetics, fast detection, high quality, automatic labeling technology and large-scale detection. Moreover, the dimorphism of these markers is conducive to genotyping and is currently traceable. For these reasons, SNPs are currently the most important and effective genetic marker in use. With the reduction in high-throughput sequencing costs and the development of SNP chips, whole-genome SNP chips have emerged. To date, several SNP chips have been developed in a variety of plants and animals, for example rice [8], grapes [9], the salmon [10], and in livestock species like the pig [11], the cattle [12], the horse [13], the goat [14], the sheep (Illumina Ovine 50 k SNP BeadChip [15] and Illumina Ovine High-Density (HD) SNP BeadChip [16]), the chicken [17], and also in other domestic species like the dog [18] and the cat [19]. 
SNP chips are important tools for genetic diversity analysis, variety relationship analysis, genome-wide association studies (GWASs), and quantitative trait identification [20]. In addition, SNP chips are also used for breed and species identification. For example, the SNP chip of G. hirsutum [21] contains 17,954 interspecific SNPs, which can accurately distinguish land cotton from sea island cotton. The chicken 55 K chip [22] can identify 13 native Chinese breeds of chickens. SNP chips are also widely used in population genomics research. For example, Canas et al., [20] used the Illumina Bovine 777 K HD Bead Chip to analyze the genetic diversity of 7 important breeds of native Spanish beef cattle. The resulting phylogenetic tree showed that the 7 breeds originated from two main groups, and the differences within the breeds were large. Dasilvl et al., [23] used a high-density SNP chip to detect mutations in 2175 robins and identified 41,029 copy number variations (CNVs). The characteristics of these CNVs reflected how robins evolve in constantly changing environments. Talenti [24] used the GoatSNP50 chip to sequence data from 109 highland goats with known pedigrees and developed a new 3-step procedure for low-density SNP panels to support high-precision paternity testing. The RiceSNP50 array was used to genotype 195 rice inbred lines. A neighbor-joining (NJ) tree was constructed using the microarray typing results of these 195 rice inbred lines, with a accurate clustering into three populations (indica, japonica, and intermediate accessions) [25]. These studies have shown the effectiveness of SNP chips in population evolutionary analysis, paternity identification, and phylogenetic tree construction. However, most SNP chips are biased towards use in breeding, with very few used exclusively for provenance identification. Given the current situation of antler deer breeding in China, there is an urgent need for an accurate and rapid method for the identification of pure sika deer, which can be applied during the preservation process. In this study, the first lowdensity genotyping chip for the identification of purebred sika deer was developed; this SNP chip can quickly and accurately distinguish sika deer from hybrid progeny and facilitate the protection of the germplasm resources of sika deer. This study provides a scientific basis for preventing the degradation of germplasm resources due to the hybridization of sika deer resources in China. Results The roadmap of development and validation of 1 K SNP chip is shown in Fig. 1, and the establishment of the 1 K SNP chip is indicated in the following paragraphs. Whole genome sequencing analysis Sequencing of samples from all individuals yielded a total of 14.03 Tb of clean data with an average of 27.73 Gb per sample. Using the chromosome-level sika deer genome as the reference sequence, the clean reads obtained from the sequencing of each sample were aligned back to the genome, and average mapping rate, coverage, and sequencing depth of each sample were determined (Table 1). SNP screening and chip design The sequencing data were compared to the reference genome, and a total of 130,306,923 SNPs were detected. After hard filtering (see methods), 31,140,900 sites were selected for tree building (Fig. 2). The results showed that sika deer and red deer clustered separately at the two ends of the evolutionary tree. F1, F2, and F3 clustered between sika deer and red deer. 
According to the positions of individuals in the evolutionary tree, although three individuals (DF-81, LW-DD-057, and LW-CLW-40) showed phenotypes that matched those of sika deer, they clustered with hybrid deer, so they should be excluded from the sika deer population. Based on the molecular level and phenotypes results, 247 pure sika deer and 206 red deer were selected as the pure sika deer reference population and red deer reference population, respectively. The Fst values of all SNP loci in both reference populations and the heterozygosity of each locus were determined. There were 958,889 loci with Fst values greater than 0.95. According to the screening principles (see methods), 1000 SNP loci were finally selected. Figure 3 shows that some SNP sites (red dots) included in the SNP chip had high Fst values and low heterozygosity. The rest of chromosomes are shown in Additional file 1: Fig. S1. The average Fst of the 1000 SNP loci was 0.997, the minor allele frequency (MAF) was between 0.3277 and 0.3621 (with an average of 0.3483), and the average chip score was 0.99 (Additional file 2: Table S1). The annotation information of all SNP loci is provided in Table 2. A list of related genes of all SNPs that fall in the gene region (exon region and intron region) is given in the attachment (Additional file 3: Table S2). According to Fig. 4, the average proportion of red deer alleles in the F1-generation samples was 0.48 (± 0.008), that in the F2-generation samples was 0.24 (± 0.02), and that in the F3-generation samples was 0.11 (± 0.05) (Additional file 4: Table S3). The gene content of red deer gradually decreased with the hybrid generation, generally reflecting the laws of statistical genetics. Improvement of genotyping chip accuracy GenomeStudio software was used to perform cluster analysis on the genotyping signals detected by oligomer probes, resulting in three groups. In the first group, the default parameters could be used to clearly distinguish the genotypes of most samples (Additional file 5: Fig. S2). The second group consisted of markers for which some or all samples had uncalled genotypes. In addition, data for 4 SNPs were missing from all samples because these SNPs showed complex cluster graphs that could not be accurately clustered even with manual adjustment or a NormR > 0.2 (Additional file 6: Fig. S3). In the third group, some sites required adjustment to obtain accurate genotyping. Figure 5A is a clustering diagram automatically generated using only GenomeStudio software. F1 samples of a known genotype (AB) were not clustered to the corresponding position. To solve this problem, we resequenced samples with known genotypes to correct the genotyping results of the SNP chip and constructed high-quality clustering files. Through this adjustment, the F1 samples were correctly clustered to the corresponding positions, as shown in Fig. 5B. Verification of the 1 K array A significant correlation between the genotyping obtained by resequencing and the genotyping of the SNP chip at all loci was detected (r = 0.6507, p < 0.0001), as shown in Fig. 6. The average agreement was 93.48% (Additional file 7: Table S4). The genotyping results obtained for the same sample with different chips were consistent. Analysis of the SNP chip test data of 266 samples demonstrated that 973 sites were polymorphic. The 833 SNP sites remaining after filtering (see methods) were used for subsequent analysis. (Additional file 8: Fig. 
S4) The average MAF of the remaining loci was 0.38, the average detection rate of SNP loci was 98.7%, and the population average detection rate was 92% (F1)-95.30% (sika deer). These findings indicate that the genotyping results of the SNP chip are reliable. The genotyping data of these samples were analyzed by principal component analysis (PCA) (Fig. 7A). In the figure, the left side of the PC1 axis corresponds to sika deer, and the right side corresponds to red deer. The hybrid deer are located between the two deer species, and there is clear distinction among F1, F2, and F3. The results of the phylogenetic tree analysis (Fig. 7B) and the PCA were generally consistent. The cross-validation program of ADMIXTURE software can help select the best K value and perform cross-validation under the default setting (−-cv). The cross-validation error is lowest when K = 7 (Additional file 9: Fig. S5 A). The ADMIXTURE result (Additional file 9: Fig. S5 B) shows that when the ancestral components come from two populations of sika deer and red deer (K = 2), there are obvious differences between sika deer (red), red deer (blue), and hybrid deer, and the hybrids showed the same ancestry. When K = 3, the F1 hybrid deer is separated from the hybrid population and can be clearly distinguished from other hybrid offspring, while the F2 and F3 hybrid deer have a certain degree of mixing. According to Fig. 8A, the error rate was the lowest when Mtry = 6. Thus, the number of preselected variables for each tree node was set to 6, and Mtry = 6 was selected to construct the random forest model. As shown in Fig. 8B, when Mtry = 6 and the number of decision trees was less than 400, the error of the model fluctuated greatly. When the number of decision trees was greater than 400, the model gradually stabilized, but there were still some fluctuations. Because the error rate of the model was lowest when the number of decision trees was 850, 850 was selected as the number of decision trees in the random forest. Then, the trained random forest model was used for classification, and the out-of-bag (OOB) error rate of these loci was 4.76%, indicating that the accuracy of assigning an unknown individual to its corresponding population was 95.24%. In the receiver operating characteristic (ROC) graph, the area under the curve (AUC) was 0.941, indicating that the model had a better classification effect. Discussion The sika deer subspecies currently found in China include Cervus nippon hortulorum, Cervus nippon sichuanicus, Cervus nippon kopschi, and Cervus nippon taiouanus [26]. After a long period of domestication, Cervus nippon hortulorum has formed a domestic sika deer population, including 7 breeds (Shuangyang sika deer, Dongda sika deer, Aodong sika deer, Dongfeng sika deer, Xifeng sika deer, Xingkai Lake sika deer, and Siping sika deer) and a Changbai Mountain strain. Among these breeds (strains) of sika deer, Shuangyang sika deer have the characteristics of high yield, stable genetic performance, strong adaptability, medium size, no obvious backline and throat spots, short and thick eyebrows and red hair; Siping sika deer exhibit a short and thick antler trunk and a mostly ingot-type mouth with red-yellow antlers; Dongfeng sika deer are characterized by strong limbs with sparse and large motifs, a thick antler body, and a notably round mouth; Dongda sika deer have a strong, thick body, long branch antler trunk, and short and large motifs. 
The common characteristics of these varieties (strains) are high production performance and stable genetic performance. These varieties have been widely used to improve low- and medium-yield deer herds, and they are currently the most commonly used populations for breeding and cross-breeding [27]. Cervus nippon sichuanicus, Cervus nippon kopschi, and Cervus nippon taiouanus are primarily distributed in the wild, their degree of domestication is low, and they are rarely used in cross-breeding [28]. At present, the most common crossbreeding method involves using Cervus nippon hortulorum as the female parent and Cervus canadensis songaricus, Cervus elaphus xanthopygus, or Cervus elaphus yarkandensis as the male parent [29].

The phylogenetic tree was constructed using the genetic distances between individuals belonging to the analysed populations. This method is often used for genetic diversity analysis and parental line selection [25]. The phylogenetic trees of the five populations are shown in Fig. 2. The hybrid deer population clustered between the sika deer and red deer, and different species/subspecies of sika deer and red deer clustered together according to geographical location, such as the red deer from Tahe and Alashan. Japanese sika deer showed similar results: the sika deer populations in northern and southern Japan were located on different branches and later formed one large branch, which further supports the view that the Japanese population is derived from at least two pedigrees [30]. In this study, phenotypes and molecular evolutionary trees were jointly considered, and 247 purebred sika deer and 206 red deer were selected as the reference populations. The SNP loci were strictly screened according to their Fst values by using a customized algorithm, which ultimately yielded a total of 1000 SNP sites for chip development.

Figure 4 shows that as the generation of crosses progresses, the hybrid offspring contain a decreasing number of alleles specific to red deer and an increasing number of alleles specific to sika deer. This phenomenon is observed because the current hybrid deer are mostly produced by progressive crosses between sika deer and red deer. The red deer-specific alleles of the hybrid offspring did not decrease by exactly 50, 25, and 12.5%, which may be due to the difference in chromosome type between red deer and sika deer [31]. Ba et al., [32] employed double-digest restriction-site associated DNA sequencing (ddRAD-seq) technology and detected 320,000 genome-wide SNPs in 30 captive individuals (7 sika deer, 6 red deer and 17 F1 hybrids), screening out 2015 potential diagnostic SNP markers that can be used to evaluate or monitor the degree of hybridization between sika deer and red deer. However, the experimental population in that study was small (30 individuals in only three populations), and no verification in a larger group was carried out. Compared with the research of Ba and collaborators [32], this study employed whole-genome sequencing, and the sequencing depth and coverage were considerably higher than those of ddRAD-seq. Moreover, the size of the reference population selected for this study was relatively large (250 sika deer, 206 red deer, 23 F1, 20 F2, and 20 F3), and the accuracy of the sites was verified using 266 verification samples (5 populations). Therefore, the accuracy of the results of this study is greater than that of the previous study.
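The expectation the results are compared against (a 50, 25 and 12.5% red deer contribution in F1, F2 and F3) follows from halving the expected autosomal red deer ancestry at each backcross to sika deer. The toy calculation below simply restates that expectation next to the observed proportions quoted in the Results; it is purely illustrative.

```python
# Expected red deer ancestry under repeated backcrossing of hybrids to sika
# deer (0.5 ** generation) versus the observed proportions reported in the
# text. Agreement is approximate, as discussed, partly because of karyotype
# differences between the two species.
expected = {f"F{g}": 0.5 ** g for g in (1, 2, 3)}
observed = {"F1": 0.48, "F2": 0.24, "F3": 0.11}

for gen in ("F1", "F2", "F3"):
    print(f"{gen}: expected {expected[gen]:.3f}, observed {observed[gen]:.2f}, "
          f"deviation {observed[gen] - expected[gen]:+.3f}")
```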
To verify the ability of the 1 K SNP chip to detect population structure, a total of 266 samples of sika deer, red deer, and hybrid deer were tested, and the average detection rates of the populations were 92-95.30%. In all individuals, 97.89% of the SNP loci were polymorphic, which indicates that the 1 K sika deer SNP chip can be used to determine the genetic variation among sika deer, red deer, and hybrid deer. According to the PCA results, sika deer, red deer, and hybrid deer clustered at different positions, and the hybrid deer were arranged from left to right according to their degree of sika deer ancestry. The results of the random forest model showed that the accuracy of the 1 K sika deer SNP chip in identifying unknown individuals was 95.24%. Therefore, the 1 K sika deer SNP chip can accurately identify the provenance of a sample to be tested.

There are currently few SNP chips available for deer. Bixley et al., [33] used reduced representational sequencing technology to screen 768 SNPs for the development of a GoldenGate (Illumina™) SNP chip; the authors assembled a mapping pedigree to implement quality control of these and other SNPs and to produce a genetic map, and this SNP chip will serve as a new parentage assignment and breed composition panel. Rowe et al., [34] developed an Illumina SNP chip for New Zealand deer breeding. The chip contains 132 SNP markers for paternity testing, and these markers can identify the New Zealand deer breeds. For deer, 1000 randomly selected SNPs were used to successfully assign samples to genetic groups based on their main genetic and geographic differences. Brauning et al., [35] used next-generation sequencing to sequence seven Cervus elaphus (European red deer and Canadian elk) individuals and aligned the sequences to the bovine reference genome build UMD 3.0; the authors identified 1.8 million SNPs meeting the Illumina SNP chip technical threshold. Genotyping of 270 SNPs on a Sequenom MS system showed that 88% of the identified SNPs could be amplified. Compared with the abovementioned SNP chips, the 1 K sika deer SNP chip is mainly intended to identify domestic deer in China. In addition, whereas the bovine reference genome was used for alignment in the past, in this research the sika deer genome was used for alignment for the first time, to ensure the accuracy of the microarray typing results.
(Fig. 8. The relationship between random forest parameters and error rate.)

Conclusion
In this study, morphological identification combined with molecular-level analysis was used to establish a reference population. A total of 247 purebred sika deer and 206 red deer were selected as the sika deer and red deer reference populations. The Fst value of each SNP site in these two reference populations was calculated. The screening and customization algorithm yielded 1000 SNP sites for the development of the microarray, and the distribution of these 1000 sites in the hybrid deer was examined, producing a result in line with the laws of statistical genetics. In terms of 1 K SNP chip verification, the consistency between the microarray genotyping results and the high-throughput sequencing results was 93.48%, and the consistency of the genotyping results between different chips and for the same individual on the same chip was 100%, indicating that the microarray genotyping results were reliable.
In addition, machine learning algorithms (random forest) and PCA were used to verify the population stratification ability of the SNP sites on the 1 K SNP chip. The accuracy of the 1 K sika deer SNP chip in identifying unknown individuals was as high as 95.24%. In summary, the 1 K sika deer SNP chip can accurately identify pure sika deer, hybrid deer, and red deer, providing technical support for the identification of pure sika deer provenance and laying a solid foundation for the subsequent breeding of sika deer. Ethics statement All procedures concerning animals were organized in accordance with the guidelines of care and use of experimental animals established by the Ministry of Agriculture of China, and all protocols were approved by the Institutional Animal Care and Use Committee of Institute of Special Animal and Plant Sciences, Chinese Academy of Agricultural Sciences, Changchun, China. Animals To increase the accuracy of identification, four existing Chinese sika deer subspecies, Russian sika deer, Japanese sika deer, and all existing Chinese red deer subspecies and North American subspecies were selected. Specifically, the red deer were from Xinjiang, Northeast China, Gansu, Qinghai, Sichuan and Tibet, and the sika deer were from Northeast China, South China, Sichuan, Taiwan, Russia and Japan. See Table 3 for detailed sample information. The appearance of different groups is shown in Additional file 10: Fig. S6 (sika deer and red deer) and Additional file 11: Fig. S7 (F3-generation). Finally, a total of 519 sample (250 sika deer, 206 red deer, 23 F1 hybrids, 20 F2 hybrids, and 20 F3 hybrids) were randomly selected, and phenotypic identification (head length, coat color, backline, tail spots, throat spots and hip spots) was performed following [36]. A total of 785 samples were raised in captivity, all of which were derived from wild-caught deer and were maintained under closed flock breeding for 5-50 generations. Chemical anesthesia was used during deer catching. Lumianning injection (070011777, Jilin Huamu Animal Health Products Co., Ltd., China), an anesthetic, was administered intramuscularly at 1 ml per 100 kg of body weight, and peripheral vein blood of each sample was collected fresh and stored at − 20°C until DNA extraction. Main instruments and reagents The centrifuge (Sigma 1-14 K) was purchased from Sigma-Aldrich (Shanghai) Trading Co., Ltd.;The electrophoresis instrument (EPS-300) was purchased from Shanghai Tianneng Technology Co., Ltd., and the gel imaging system (SYSTEMGelDocXR+IMAGELA) was purchased from Bio-Rad Life Medical Products (Shanghai) Co., Ltd. Whole-genome resequencing (database construction) Blood was collected from the jugular vein of the experimental animals, and a blood genomic DNA extraction kit (DP348-03) and a high-throughput magnetic bead extraction system were used to extract the genomic DNA from the blood samples. The DNA obtained was subjected to Illumina HiSeq 2000 sequencing (Beijing Nuohe Zhiyuan Biological Information Technology Co., Ltd.). Discovery and screening of specific sites Previous studies have pointed out that the morphological characteristics of deer may not correctly reflect their evolutionary relationships, and the phylogenetic relationship between deer species and subspecies should be analyzed by combining the results of morphological studies at the molecular level [37]. Therefore, to screen out specific SNP sites, the reference population of this study was established on the basis of phenotypic and molecular identification. 
Identification at the molecular level was performed using NGS QC Toolkit (default parameters) [38] to filter the genotyping data of the resequenced samples and remove reads meeting any of the following three conditions: 1. reads containing linker sequences; 2. single-end reads in which the number of N bases exceeded 10% of the total number of bases in the read; and 3. single-end reads in which low-quality bases (quality value less than 5) exceeded 50% of the read length. BWA-MEM (v0.7.12) [39] was used to align the filtered reads to the sika deer reference genome (mhl_v1.0), and SAMtools (v1.9) [40] was used to sort the bam files and remove duplicates. Next, GATK 4.0.2.1 was used for variant detection [41], and hard filtering was applied with the following conditions: -filter "QD < 2.0" -filter-name "QD2", -filter "QUAL < 30.0" -filter-name "QUAL30", -filter "SOR > 3.0" -filter-name "SOR3", -filter "FS > 60.0" -filter-name "FS60", -filter "MQ < 40.0" -filter-name "MQ40", -filter "MQRankSum < -12.5" -filter-name "MQRankSum-12.5", -filter "ReadPosRankSum < -8.0" -filter-name "ReadPosRankSum-8". In addition, VCFtools-0.1.13 [42] was used to remove SNPs with a missing rate greater than 0.1, locus coverage less than 5X, or locus quality less than 30, as a further, less stringent filtering step. According to the linear sequence of the filtered SNP sites, Gblocks 0.91 [43] was employed to screen the conserved region sequences in all samples, and TreeBeST 1.9.2 [44] was used to construct a phylogenetic tree with the neighbor-joining (NJ) algorithm. At the same time, phenotypic identification of individuals was performed according to the body appearance of all samples (head length, coat color, backline, tail spots, throat spots and hip spots), and the sika deer reference population and red deer reference population were finally selected based on the cluster positions and phenotypes of the samples.

The Fst between populations is a measure of population differentiation and genetic distance with a value between 0 and 1; the greater the differentiation index, the greater the difference [45]. To screen for red deer-specific sites, the Fst value of each SNP site between the red deer reference population and the sika deer reference population was calculated with VCFtools-0.1.13 [42], and only sites with an Fst > 0.95 were retained. At the same time, the selected SNP loci were required to have mutually exclusive genotypes in red deer and sika deer; in other words, loci in which, for example, the frequency of genotype AA in red deer was 1 and the frequency of genotype CC in sika deer was 1 were given the highest priority. We further filtered the candidate SNP sites according to the customization requirements of the microarray. The filter conditions included the following: 1. the flanking sequence of the site (within 50 bp) contained no interfering SNP, and 2. all [G/C] and [A/T] sites were deleted, so that only SNP types compatible with type II probes were retained. To assess the genetic stability of the selected SNP loci, we used the sequenced F1, F2, and F3 generation samples as test samples: based on the 1000 selected loci, we calculated the frequency of the specific loci in the hybrid deer and the proportion of red deer genetic content in each hybrid sample (Additional file 12: Table S5).

Designing the 1 K genotyping array
Illumina chips have two types of SNP assays [46]: single-bead type II SNPs (A/C, A/G, T/C, and T/G) and two-bead type I SNPs.
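The per-locus screening described above can be summarised as "keep SNPs that are (near-)fixed for alternative alleles in the two reference populations". The Python sketch below mimics that filter with a simple Wright-style Fst computed from population allele frequencies; the study itself used the Weir and Cockerham estimator implemented in VCFtools, and the loci and frequencies shown are invented.

```python
# Simplified Fst screen: Fst = (Ht - Hs) / Ht for two equally weighted
# populations, then keep loci above the Fst > 0.95 threshold used in the text.
import numpy as np

def wright_fst(p_red: float, p_sika: float) -> float:
    """p_* is the reference-allele frequency in each reference population."""
    hs = np.mean([2 * p_red * (1 - p_red), 2 * p_sika * (1 - p_sika)])
    p_bar = (p_red + p_sika) / 2
    ht = 2 * p_bar * (1 - p_bar)
    return 0.0 if ht == 0 else (ht - hs) / ht

# locus -> (frequency in red deer ref. pop., frequency in sika deer ref. pop.)
loci = {"chr1_10234": (0.99, 0.01),   # near-diagnostic
        "chr2_55012": (0.60, 0.35),   # shared polymorphism, filtered out
        "chr7_80411": (1.00, 0.00)}   # fully diagnostic

fst = {name: wright_fst(p_r, p_s) for name, (p_r, p_s) in loci.items()}
candidates = {name: round(v, 3) for name, v in fst.items() if v > 0.95}
print(candidates)  # only the (near-)fixed loci survive the screen
```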
Designing the 1 K genotyping array Illumina chips support two assay types [46]: single-bead type II SNPs (A/C, A/G, T/C, and T/G) and two-bead type I SNPs (A/T, C/G). Because type I SNPs require two bead types, we selected only type II SNPs in order to maximize the number of SNPs that could be genotyped on the array. The selected candidate SNPs (4 K, four times the target size) were provided to Illumina to design 51-mer sense nucleotide sequences, with the target SNP located at the 26th position. A customized algorithm was used to score each submitted SNP sequence, and SNPs with scores less than 0.6 were removed [47]. To ensure the accuracy of the results, each SNP was tested with three probes; during the analysis, the signals from the three detections were summarized to provide a single estimate for each SNP. SNP marker analysis and cluster file construction Illumina synthesized 995 markers, and GenomeStudio software (v2011.1, Illumina, Inc.) was used to perform cluster analysis on the genotyping data of the SNP chip results for the test samples. At the same time, to increase the accuracy of the results, 1 K SNP chip genotyping was applied to the resequenced samples, and the clustering diagram of the chip products was optimized and adjusted based on the high-confidence (e.g., library sequencing depth ≥ 10×) genotyping results of the resequencing analysis [48]. The resequenced samples included 10 F1, 9 F2, and 9 F3 samples, for a total of 28 samples (Additional file 13: Table S6). Verification of the chip First, to verify the accuracy of microarray genotyping, we selected the 28 samples (Additional file 13) that had been resequenced in the previous stage and genotyped them with the microarray to assess the consistency of the two results for each individual and the correlation across all sites. At the same time, four DNA samples from different individuals were selected and run three times on the same chip and on different chips to determine the repeatability of the chip. The second step was to investigate the ability of the 1 K SNP chip to detect population structure. We chose 266 deer with clear pedigrees (three generations) and no close genetic relationships among them (see Table 4 for details). These verification samples were genotyped using the 1 K SNP chip. To ensure that the SNPs to be analyzed met Hardy-Weinberg equilibrium (HWE) (P < 0.01), we filtered the SNP sites according to a call rate > 95% and an MAF > 0.05 [22], and we subsequently deleted samples with a genotype missing rate of more than 10% using SNP Variation Suite v7 (SVS; Golden Helix Inc., Bozeman, Montana: www.goldenhelix.com) [49]. Based on the genotyping data of the samples, PCA was performed using the prcomp function in R-4.0.2 [50], and the ggplot2 package was used for plotting [51]. TreeBeST software was used to construct the NJ tree [52] with 1000 bootstrap replicates, and the tree was drawn with iTOL v4 [53]. ADMIXTURE software was used to analyze the population structure [54], and the clustering model was constructed based on the 1 K SNP chip genotyping data of the 266 verification samples. This was done by assuming different numbers of ancestral sources K (1-8), inferring the ancestral composition of all samples in the population, determining the assignment of each individual, and thereby studying the population structure of the 266 verification samples.
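The PCA step just described can be sketched in a few lines of R. This is a schematic example only: geno012 (a samples-by-SNPs matrix of 0/1/2 genotype counts) and pop (a vector of population labels) are assumed inputs, not objects from the study, and the plot is a generic two-component view rather than the published figure.

```r
library(ggplot2)

# geno012: numeric matrix, one row per animal, one column per SNP,
#          coded as 0/1/2 copies of the alternative allele (assumed input).
# pop:     character vector of population labels for the same animals.

# Drop monomorphic SNPs so that scaling to unit variance is possible.
geno012 <- geno012[, apply(geno012, 2, var) > 0]

pca <- prcomp(geno012, center = TRUE, scale. = TRUE)

# Percentage of variance explained by the first two components.
var_explained <- round(100 * summary(pca)$importance[2, 1:2], 1)

scores <- data.frame(PC1 = pca$x[, 1], PC2 = pca$x[, 2], Population = pop)

ggplot(scores, aes(x = PC1, y = PC2, colour = Population)) +
  geom_point(size = 2) +
  labs(x = paste0("PC1 (", var_explained[1], "%)"),
       y = paste0("PC2 (", var_explained[2], "%)")) +
  theme_minimal()
```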
To ensure the reliability of the selected SNP sites, we also used a machine learning algorithm (random forest) to evaluate their accuracy [55]. From each population, 30% of the samples were randomly selected as the test set for the final classification test, and the remaining 70% were used as the training set. The random forest model has two important parameters: the number of decision trees (ntree) and the number of candidate variables considered at each split node (mtry). Appropriate parameter values were chosen according to the relationship between the parameters and the error rate. The "randomForest" package in R 4.0.2 was used to construct the random forest model [56]. The SNP site data were used for internal evaluation during random forest generation to obtain the corresponding out-of-bag (OOB) error rate [57]. An OOB error rate of 0 indicated that these sites could be used to accurately classify each sample.
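A compact R sketch of this evaluation is given below. It is illustrative only: snp_df (a data frame with one genotype column per SNP and a factor column pop holding the population label), the random seed, and the ntree and mtry values are placeholders rather than the settings actually used in the study.

```r
library(randomForest)

set.seed(42)  # placeholder seed for a reproducible illustration

# snp_df: assumed data frame with SNP genotype columns plus a factor
#         column 'pop' giving each animal's population.
idx   <- sample(seq_len(nrow(snp_df)), size = floor(0.7 * nrow(snp_df)))
train <- snp_df[idx, ]
test  <- snp_df[-idx, ]

# ntree and mtry would in practice be tuned against the OOB error rate.
rf <- randomForest(pop ~ ., data = train,
                   ntree = 500,
                   mtry  = floor(sqrt(ncol(train) - 1)),
                   importance = TRUE)

# Out-of-bag error after the last tree; an OOB error of 0 means every
# training sample was classified correctly by the trees that did not use it.
oob_error <- rf$err.rate[rf$ntree, "OOB"]

# Accuracy on the held-out 30% test set.
pred     <- predict(rf, newdata = test)
accuracy <- mean(pred == test$pop)
```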
Legitimacy, stratification, and internationalization in global higher education: the case of the International Association of Universities The International Association of Universities (IAU) is the only inclusive global university association, its membership barriers are low, yet few universities are members despite considerable benefits. What determines membership in this long-standing international university alliance? Reviewing recent trends toward a more networked, stratified and internationalized global higher education field, we argue that universities with a greater need for legitimation and those ‘born’ into a global era are more likely to become members of an inclusive international network like the IAU. Thus, we expect lower status and younger universities to be more likely to join. We apply regression models to test hypotheses implied by these arguments. Our findings are consistent with these hypotheses, even after controlling for other factors. We discuss these findings using neo-institutional arguments about legitimacy and imprinted logics and suggest potential analytical avenues for further research. Introduction Over the past two decades, higher education has become a highly interconnected and nested global organizational field, in which the UNESCO-affiliated International Association of Universities (IAU) occupies a prominent position as one of the first and the only genuinely global and inclusive meta-organizations, i.e. organization that has other organizations as members (Berkowitz and Bor 2017). Despite its broad range of missions and membership benefits, membership in the IAU is strikingly low, with currently only 3.2%, or N = 551, of the global higher education field (~16,978) being a full (institutional) member of the IAU. This raises the question, what determines universities' decision to join such a meta-organization in global higher education? To explain membership in the IAU, we are guided by two different but overlapping theoretical traditions in organizational analysis. From a neo-institutional perspective, organizations seek legitimacy, which often entails enacting an appropriate or 'proper' organizational identity (Drori et al. 2006;March 1982;Meyer and Rowan 1977;Suchman 1995). The need to display such proper identity is particularly evident in lower status and less affluent organizations. In the realm of higher education, looking like a "real" university is crucial, especially for organizations lacking reputation (Hüther and Krücken 2016;Stensaker et al. 2019). For these universities, joining the easy-to-enter and welcoming IAU is an attempt to enhance legitimacy as well as visibility and to access its resources. By contrast, higher-status universities are less likely to join the IAU. In an era of international rankings and a "world class" university discourse, what constitutes a "real" university becomes more standardized and who are perceived as the higher status universities becomes more evident (Buckner 2020;Frank and Meyer 2020). Thinking about historical eras and their influence on universities leads us to the imprinting perspective. Stinchcombe (1965) argued that organizational structures and strategies were heavily influenced by the conditions in which organizations were born, that is, by the dominant and persistent institutional logics of the specific era in which they came into existence (Oertel 2018;Oertel and Söll 2017). 
Even though there are serious and ongoing debates as to what constitutes internationalization (Buckner 2017, 2019, 2020; Knight 2014; Kosmützky and Putty 2016), the post-World War II era clearly emphasizes internationalization in higher education as a desideratum more so than in earlier time periods (Ramirez 2006;Chou et al. 2017;Parreira do Amaral 2010;Powell et al. 2017;Seeber et al. 2016;Stensaker et al. 2019). In this perspective, one would expect younger universities to be more attracted to membership in the IAU as a way of displaying commitment to internationalization, and thus, to looking like a "proper" university. Older universities are more likely to rely on other and often more exclusive strategies of internationalization, from entering into alliances with peer institutions to marketing themselves for foreign students (Buckner 2020). These perspectives overlap insofar as they both emphasize the importance of managing organizational identity in accounting for organizational decisions. From these perspectives, joining the IAU is organizational behavior influenced by legitimacy-seeking due to lower status and to being born in the era of internationalization as a dominant institutional logic in higher education. In what follows, we first situate the International Association of Universities within global higher education governance. Next, we elaborate the core arguments and their empirical implications. Drawing on the IAU's World Higher Education Dataset, we apply logistic regression models to test these implications, controlling for a number of other variables that may also be influential. Our main findings are consistent with the core arguments. We discuss these findings and conclude by reflecting on the challenges an inclusive meta-organization like the IAU faces in the stratified and competitive global field of higher education. Global higher education governance and the International Association of Universities Higher education worldwide has seen important changes in recent decades concerning both universities themselves and their governance (Schofer and Meyer 2005). Universities have recently been re-conceptualized from 'specific organizations' of public administration (Musselin 2007) to 'autonomous', 'normal', 'complete', 'real', 'formalized' and even 'empowered' organizations (Brunsson and Sahlin-Andersson 2000;Krücken and Meier 2006;Musselin 2009). As part of this process, universities increasingly have to display their legitimacy while facing pressures to become more autonomous, accountable, excellent, relevant, and international (Ramirez 2010). At the level of national governance, policymakers' discourses have shifted from a focus on national development to one of global competitiveness (Buckner 2017). This leads to a situation where universities are increasingly in competition with each other in a (global) race for reputation, revenues and researchers (Brankovic 2018b;Hazelkorn 2015;Musselin 2018;Vukasovic and Stensaker 2018). Situated in such a global competitive field, higher education has become the target of particular interest and policy directives. In particular, national and international ratings and rankings (e.g. the Times Higher Education World University Rankings) create new logics of quantification, comparison, distinction and stratification (Espeland and Sauder 2007, 2016;Espeland and Stevens 2008).
Furthermore, universities increasingly face a new environment characterized by a sprawling governance architecture made of international and regional as well as governmental and nongovernmental actors organized in an international regime. For example, international higher education conferences, national and regional higher education qualification frameworks as well as regional recognition conventions have diffused dramatically in the past two decades (Zapp and Ramirez 2019;Zapp et al. 2018). Operating within this global educational regime, universities become more sensitive to their place in the international status order. Universities are also more aware that internationalization is very much favored in this global educational regime (Buckner and Zapp 2020;Zapp and Lerch 2020;Buckner 2020). Thus, they have started to strategically position themselves globally through inter-university networks or associations 1 and elaborate internationalization strategies (Brankovic 2018b;Seeber et al. 2016;Vukasovic and Stensaker 2018). The mobility of staff and students, programs and campuses, internationalized curricula, international research collaborations and partnerships have become routine features of the modern university (Knight 2014;Powell et al. 2017;Ramirez 2006). As a result, universities, across countries, band together to a degree unseen before. Brankovic (2018a), tracing the emergence of university associations over time, finds 185 associations with most of them being regional or global and burgeoning in the past two decades. Some of these associations are small and exclusive, while others are large and span entire continents. To summarize, universities increasingly operate in a competitive global environment in which their place in an international status order is dramatized through regional and world rankings. Within this global environment, internationalization is clearly favored. As organizational actors, universities have to showcase their legitimacy and are expected to be proactive and make decisions to enhance their status. In this, membership in university alliances has become an important mechanism to display legitimacy and upgrade organizational standing (Gunn and Mintrom 2013;Brankovic 2018a). We now turn to the IAU as a prime example of such alliances. The International Association of Universities as a meta-organization in global higher education In the increasingly dense field of global higher education, the International Association of Universities (IAU) is unique in many regards. Founded in 1950, it is the second oldest global association concerned with universities, 2 established prior to most regional and other international associations that took off in the late 1950s and 1960s. The IAU, an offspring of UNESCO, has been created as a forum for universities to come together without representatives from national governments and it stands for a number of core democratic values, most prominently, academic freedom. With a truly global mission that reflects its affiliation with UNESCO, the IAU had long been the only association that accepts members from all geographic areas of the globe. Further, with its inclusive mandate, the IAU is unique compared to more recent associations whose membership is either confined by geography (e.g. the Network of Universities from the Capitals of Europe -UNICA), mission (Global University Network for Innovation) or "excellence" (e.g. the League of European Research Universities -LERU) (Brankovic 2018a;Gunn and Mintrom 2013). 
Finally, unlike many of the more recent specialized organizations, the IAU is also one of the few major organizations in the field of higher education that works across various policy areas. It has active involvement in discourses linked to internationalization, quality assurance, intercultural learning, the use of technology or sustainable development. Barriers to IAU membership are relatively low. Institutional characteristics of potential members are almost all-encompassing; higher education institutions need to be recognized as a public or private higher education institution and need to have undergone accreditation or quality assurance. They are also required to confer a first terminal degree and need to have completed at least three cohorts of students (IAU 2020). Membership fees are low. Small universities from low-income countries start at €950 p.a. and a maximum of €3250 p.a. is charged for large universities located in high-income countries (IAU 2020). In addition, if a member is unable to pay its membership fee, it can stay listed as a member of IAU for up to three years before its membership will end due to outstanding fees. By contrast, there are several benefits of membership. Members take an active part in IAU's governance through voting and elections. They can access IAU's international networks, conferences, data, libraries and other media. They can take part in workshops and other professional development formats such as leadership trainings and use them to liaise with other universities. They benefit from trends analyses, specialized portals, advisory services, training and peer-to-peer learning as well as global advocacy and representation (IAU 2020). Moreover, the IAU explicitly highlights that members gain visibility through their affiliation, presumably increasing the status of their members. As an inclusive, open, global and inexpensive organization with considerable benefits, we see the IAU as an almost ideal-typical meta-organization, i.e. an organization that has other organizations as members (Ahrne and Brunsson 2005, 2008). Meta-organizations have become an important concept in organization studies, where the concept helps to understand organizational field construction and composition as well as organizational behavior and inter-organizational relations (Ahrne and Brunsson 2005, 2008;Berkowitz and Bor 2017). These are emerging questions also in higher education, where university behavior is undergoing strategic changes and inter-university relations are being globally reshuffled, especially when considering the growing number of higher education meta-organizations in the form of inter-university associations (Brankovic 2018a). Considering IAU's characteristics and status as the only genuinely global higher education meta-organization, it should, in principle, attract members from all countries and all strata of higher education systems alike. However, only a small percentage, 3.2% or N = 551, of the global higher education field (N = 16,978) has joined the IAU as full (institutional) members. It seems that membership in university associations, even in situations of low entrance barriers, involves a more complex process. In what follows we set forth our core arguments about IAU membership determinants and present the data with which we test the related hypotheses.
Explaining membership in the International Association of Universities Following our interest to investigate why some universities are more likely to become members of the IAU, we contend that universities in greater need for legitimacy are especially driven towards membership. This view is based on the two previously outlined arguments from neo-institutional theory and the imprinting perspective in organizational research. Following these approaches, we develop two matching hypotheses that are the basis for our subsequent analysis. Our first argument emphasizes the importance of legitimacy for organizations. This core idea in organizational theory sees meta-organizations as an important means for members to gain legitimacy. It also resonates with neo-institutional theories where organizational structures and policies are legitimation-seeking exercises, not simply technical rational ones (Meyer and Rowan 1977;Bromley and Powell 2012). In this approach, organizations are embedded in a cultural environment that provides models, which indicate appropriate behavior and thus supply legitimacy. As organizations seek legitimacy, they undertake activities in line with these models to act in accordance with generally-accepted rules and norms. In this, legitimacy is the support that the environment provides to an organization based on the appropriateness of its actions (Suchman 1995). In the context of global higher education, top universities enjoy abundant legitimacy; they are even the templates for organizational reform around the globe (Buckner 2020). Those not in the top of global rankings, however, are more likely in need for a strong external basis for their legitimacy. In addition, being on top of international rankings also tends to go hand in hand with more organizational capacity and resources. This means that those universities that are ranked in the top of rankings should be less in need of the services and resources provided by the IAU, especially the increased visibility and the opportunity for networking. Thus, membership in the IAU should be more attractive to those universities that do not do well in international rankings. H1: Universities that are ranked in the top of international rankings are less likely to join the IAU. In a related vein, young universities that have been created in a time when the internationalization trend was especially strong can be expected to be more strongly inclined to join the IAU. As these younger organizations have to catch up to the older and more established universities that shape the dominant models in the global higher education field, they use the globalized narrative of excellent and international universities to signal their legitimacy (Buckner and Zapp 2020;Buckner 2020;Oertel and Söll 2017;Stensaker et al. 2019). This resonates with the concept of imprinted institutional logics, an idea that has proven useful in explaining similarities in organizational structure and behavior within and across fields and periods (Thornton et al. 2012). Combining the imprinting concept with institutional logics, we emphasize the crucial role of inceptive phases together with the importance of ideational and cognitive forces that motivate organizational behavior (Waeger and Weber 2019). In this perspective, logics have strong legacies. For example, analyzing corporate social responsibility activities in a large sample of Chinese companies, Raynard et al. 
(2013) find that cognitive frames reflect a state logic that was more dominant in an earlier era, and argue that the earlier frame, once imprinted, remained salient. Similarly, the time of university creation has an influence on the way it adapts to an existing environment and signals its status and belonging (Zapp and Lerch 2020;Oertel 2018;Oertel and Söll 2017). In this sense, younger universities are more likely to look to international organizations such as the IAU to seek legitimacy and use the IAU's prominent and unique status in global higher education governance and its link to UNESCO to augment their standing and underline their participation in the global organizational field of higher education. H2: Younger universities are more likely to be IAU members. Data and methodology We draw on the World Higher Education Database (WHED), which has been created and updated by the IAU in collaboration with UNESCO since the 1950s. The WHED is the most comprehensive and authoritative dataset on universities worldwide. The data provided is for the most-recent year and thus only cross-sectional. The data for our study refers to the years 2016-2017. The original dataset comprises information on N = 16,978 colleges and universities from 191 countries and independent territories. The information is provided to the IAU by official public sources and complemented by IAU staff through direct contact with higher education institutions. Information comprises, among others, full original and English titles, founding dates, type of funding and legal status (private vs public), student enrollment and formal organizational structure as well as curricula. All higher education institutions included offer at least a 3-4 years first terminal degree (ISCED 6A/B). This rich dataset does, however, have some limitations. 3 First, public institutions may be overrepresented as they more readily enter public records than private higher education institutions. Second, the IAU updates the data every year but uses a specific regional focus each year for the update. The 2018 update, for example, focused on Asia and the Middle East. Consequently, there might be a difference of up to 5 years between regions with regard to the timeliness of the information. Finally, the database relies on data reported by public authorities and higher education institutions themselves. As the IAU is not responsible for data collection, some data might include elements of organizational window-dressing. However, even with these limitations the data from the WHED still represents the most comprehensive and reliable dataset on higher education institutions in the world. Analysis and variables Outcome variable Our dependent variable is institutional membership in the IAU. We only use institutional membership (N = 551) for 2016, and discard all affiliated, associated or organizational members (N = 111). 4 There are three reasons for this sampling strategy. First, higher education institutions make up the largest part of the membership body. Second, all other types of members may also include other meta-organizations (e.g. university associations) or individuals. Third, the IAU was originally created to provide a meta-organization in which universities could organize without the involvement of national governments. Historically, higher education institutions are the main audience of the IAU. 
Consulting with the IAU, we have been reassured that each member re-assesses its membership on a regular basis and that there are no dormant memberships (a situation that would hamper the robustness of our analysis). Key predictors We use the results of the Times Higher Education (THE) World University Ranking (WUR) from 2018 to indicate university status. We draw on the THE WUR as it is one of the most comprehensive rankings, yet does not methodologically overlap with our own predictors (THE WUR 2020). We select the top 500 universities in the world as our cut-off point. Age is defined by years of existence based on the universities' founding date through 2017 (WHED 2020). We use a z-transformed variable for age. Alternatively, we run a model with a binary age/cohort variable using 1990 as a cut-off value (see Appendix table A2), yet results show almost no difference. Control variables We control for a number of other organizational and country-level variables. Regional coding assesses the university's location and is based on the UN classification of world regions. Given IAU's founding context, we use Europe as a reference category. The distinction of university type, i.e. private (not-for-profit) versus public, is based on IAU's classification of universities' legal status. We use public universities as a reference as private universities were less prominent at IAU's onset. We also use a logged student enrollment measure to test for the effect of universities' size. In doing so, we excluded some outliers. We draw on a widely used index to compare political systems, the Polity IV Index, to measure whether the level of democracy of a country in which a university is situated has an impact on its likelihood to join the IAU. Polity IV ranges from −10 (highly autocratic) to +10 (highly democratic) (Center for Systemic Peace 2018). We use a high level of democracy as a reference based on IAU's core values. (Footnote 4: IAU has four categories of members: 1) institutions, which is by far the largest category and the one that we are interested in; this category comprises those universities that are full IAU members; 2) organizations such as other national, regional or global higher education associations; 3) affiliates such as non-governmental organizations or networks in education; and 4) associates such as individuals collaborating with the IAU on a project basis. See also: https://www.iau-aiu.net/Members (06.04.2020).) Lastly, we control for the level of internationalization of a university, measured through three indicators of university organizational structure that cater to the task of internationalization. First, through a binary variable we ascertain whether a university has an international office as indicated in the WHED (WHED 2020). Second, we code the prevalence of a variety of international studies in the curriculum based on WHED (2020) data on degree designations (e.g. international relations, area studies, international business administration). Our variable captures the existence of at least one explicitly international study program as a binary variable. We also built models with a metric variable describing the total number of international study programs as a share of the total teaching portfolio, yet results show no difference (see Buckner and Zapp 2020; Lerch 2020 for details). Third, we include a control for membership in regional university associations (see Buckner and Zapp 2020; Lerch 2020 for details).
Based on membership directories, we coded a binary variable indicating whether a university is a member of a regional association (e.g. the European University Association). Model Logistic regression is appropriate for modeling the likelihood of membership prevalence and estimating the magnitudes of effects for various predictors. However, basic logistic regression models are unable to adequately account for data that result from cluster sampling within universities and countries. The WHED includes such data. We use multilevel modeling to achieve a more accurate estimation of university-level effects within separate countries (i.e., within-country effects), as well as accurate estimations of the unique influences of the university environment. We run multi-level binary logistic regressions with fixed effects in order to handle the nested data (i.e. membership of universities located in countries) (Wong and Mason 1985). We chose fixed effects due to the large sample size and an explicit interest in the covariates. The full estimated model takes the following form: logit[P(Yij = 1)] = β0 + β1 Aij + β2 Bij + β3 IOij + β4 Mij + β5 ISij + γ Zij + δ Wj + μj + εij, where Yij is the measure of membership for university i in country j. Aij describes the age for an institution i in country j and Bij its status rank. IOij describes whether an institution i in country j has an international office, Mij whether an institution i in country j is a member of a regional university association and ISij whether an institution i in country j has an internationalized curriculum. Zij represents controls at the organizational level and εij the residuals at the organizational level. Wj represents control variables at the country level and μj the residuals at the macro-level. Table 1 below provides descriptive statistics for all variables. Table A 1 in the appendix provides a correlation matrix for all independent variables with no problematic associations observed. As we have a very large sample, Appendix A also shows a bootstrapped model to warrant the accuracy of our interpretations. We also recognize the limitations of our cross-sectional design, as it does not allow us to ascertain the direction of some associations. For example, IAU membership may drive some organizational efforts to internationalize or, conversely, these organizational characteristics may have an influence on the likelihood to become a member of the IAU. Therefore, we caution against strong causal interpretations. Table 2 provides a synopsis of IAU members by region. 5 The percentage of membership for each region relates to the proportion of universities from that region which are IAU members. Interestingly, membership is comparatively high in Africa, the Middle East and Northern Africa, Western and Eastern and Central Europe as well as Oceania, while North American and Latin American universities are least represented in the IAU. Results What Table 2 indicates is that membership in the IAU is not driven by those regions with universities that command the most attention in global rankings and in discourses about world-class universities. This descriptive finding suggests that joining the IAU may be due to factors other than the centrality of a national higher education system within a global environment. We present our statistical model to examine the hypotheses advanced earlier. The model tests the legitimacy hypotheses with our controls, including those focusing on university internationalization (Table 3). Our model shows that both of the main hypotheses are confirmed.
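To make the specification above concrete, a model of this general form could be estimated in R roughly as sketched below. This is purely illustrative: the data frame whed_df and all variable names are placeholders, the paper does not report which software or estimator was used, and the random-intercept specification shown here is only one common way of handling the country nesting (the authors describe a fixed-effects multilevel setup).

```r
library(lme4)

# whed_df: assumed analysis data frame, one row per university, with
#   member           0/1 IAU institutional membership (outcome)
#   age_z            z-transformed university age
#   top500           1 if ranked in the THE WUR top 500, else 0
#   intl_office, intl_curriculum, regional_member   0/1 indicators
#   type, enrol_log, polity, region                 controls
#   country          grouping factor (universities nested in countries)
m <- glmer(member ~ age_z + top500 + intl_office + intl_curriculum +
             regional_member + type + enrol_log + polity + region +
             (1 | country),
           data = whed_df, family = binomial(link = "logit"))

summary(m)      # log-odds coefficients (B) and their significance
exp(fixef(m))   # odds ratios, often easier to interpret
```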
Being ranked among the THE Top 500 universities significantly lowers the likelihood of IAU membership, with a moderate effect size (B = −.69***). The effect for age is smaller, yet also significant, and it follows the expected direction (B = −.12**): the younger a university, the more likely it is to join the IAU. (Table note: codes refer to type: 0 = private, 1 = public; area, regional membership, international office, curriculum, excellence: 0 = no, 1 = yes; Polity IV: −10 = most undemocratic, 10 = most democratic.) The results show that the status and age of the university continue to show the expected effects, even when controlling for international office, internationalized curriculum and regional association membership. That is, lower status and younger universities were more likely to join, irrespective of the influence of organizational efforts by the university to internationalize. Controls yield some interesting findings. University size and whether the university is located in a more democratic polity are inconsequential, at least treated as isolated variables. This is a somewhat surprising finding, since the IAU promotes core democratic values that should have made it less attractive to universities in more autocratic polities. However, it may well be that some universities in autocratic polities join because their aspirations are more democratic than those of their political regime. 6 In addition, we find that public universities are more likely candidates for IAU membership. Lastly, relative to Western Europe, being situated in Oceania, Eastern & Central Europe, Africa and the Middle East & Northern Africa shows significant negative effects on the likelihood of membership. This is somewhat surprising compared to the previously presented descriptive statistics on the spatial distribution of IAU membership. Discussion Despite the general trend towards more internationalization, and despite its low entry costs and sizable benefits, the IAU attracts only a few universities as members today. At the same time, the IAU is one of the oldest university associations in the world and it has a solid core of universities as members. This paper offers explanations as to why these universities join and others do not. We rely on ideas from neo-institutional and organizational imprinting theories. We used arguments from sociological neo-institutionalism to focus on the wider environment and see organizational developments as attuned to the changes in the rules or standards of the organizational field, focusing specifically on legitimacy. We also used arguments from organizational imprinting to call attention to the influence of the dominant institutional logic at the founding of the organization. What our findings suggest is that the influence of the wider field is greater on those universities with an increased need for legitimacy and external validation of their university identity. The impact is also higher on those for which the logic of internationalization is most compelling as a means of identity validation. Let us briefly reflect on each of these core findings. First, the need for external legitimacy has been a key determinant of the likelihood of joining the IAU. In this regard, high status universities may be more immune, and even if inclined, may prefer not to join a globally inclusive meta-organization.
In the stratified global field of higher education, the more prestigious universities may prefer to stick to each other in more exclusive networks such as small alliances that only admit very selected and "excellent" members (Gunn and Mintrom 2013;Brankovic 2018a). Birds of the same feathers may indeed flock together, especially if the feathers are already esteemed. This may explain why we find that the less prestigious universities are more likely to become members. The top 500 of the THE WUR do not need the IAU to validate their legitimate identity as a university. Their legitimacy is derived from their globally validated excellence via their standing in world rankings and from the generally more positive perception enjoyed by the more established universities (Christensen et al. 2018). Joining an inclusive and heterogeneous association such as the IAU is not seen as gaining them more status and distinction as suggested by the meta-organization literature (Ahrne and Brunsson 2008). Instead, these organizations create their own associations such as the League of European Research Universities (LERU) signaling exclusiveness and boundarydrawing (Brankovic 2018a).Our analysis of IAU membership reveals the ongoing process of higher education stratification at a global scale by adding "associational structure" to a process that has thus far mainly been analyzed in terms of global rankings (Hazelkorn 2015). While having a recognized status makes IAU membership less likely, young age is positively associated with membership in the IAU. This might be explained by the fact that younger universities are less likely to have these reputational advantages and more likely to display internationalization commitments via membership in an inclusive meta-organization. They see such membership as legitimacy-enhancing and as a potential source of resources to grow their reputation. At the same time, we point to another potential reason for why younger universities join the IAU. We argue that universities 'born' into an internationalized era are confronted with a different institutional logic made of era-specific cognitive frames (Raynard et al. 2013;Thornton et al. 2012). In this perspective, universities reflect the dominant logic that prevails during their formative phase. In the case of universities, internationalization can certainly be considered a powerful narrative that steadily increased its relevance in the last decades (Buckner and Zapp 2020;Zapp and Lerch 2020;Buckner 2017Buckner , 2020. We extend the research on the environmental penetration of universities that has so far shown effects on universities' diversity strategies as well as study programs and structure (Oertel 2018;Buckner and Zapp 2020;Zapp and Lerch 2020) by directing attention to universities' internationalization decisions that can partly be explained by contemporary expectations of what constitutes a 'proper' university. The additional finding that IAU membership is associated with international offices and curricula as well as regional memberships indicates an emerging 'internationalist' type of university. While the literature highlights that internationalization has multiple meanings for different universities and is articulated in different ways (Buckner and Zapp 2020;Zapp and Lerch 2020;Ramirez 2006;Buckner 2020;Seeber et al. 2016;Stensaker et al. 2019), the correlation between these variables indicates that there is a cluster of institutions that increasingly embrace the aim to connect across boundaries. 
Conclusion This paper empirically identified the determinants of meta-organizational membership in the IAU, the only global and inclusive meta-organization in higher education. By focusing on organizational legitimacy and imprinted legacies of institutional logics, we unpacked what type of universities are more likely to join the IAU. Our aim with this study was to better understand the increasing global associational structure and stratification in higher education. Our findings indicate that in a fragmented and stratified global organizational field, of which higher education is a prime example, characterized by increasing internationalization but also growing organizational competition, those universities that are in need of external legitimation are more likely to join the IAU. We explained the relevance of these factors by relating them to theories of organizational legitimacy and imprinted logics. The former looks to the wider environment and sees organizational developments as attuned to the changes in the rules or standards of the organizational field, particularly the search for legitimacy. The latter is used to make sense of why organizations seem to be influenced by the dominant institutional logic at the time of their foundation. Our analysis highlights the challenges that the IAU but also other more inclusive metaorganizations face in an increasingly globally stratified organizational field like higher education. They have to maintain their own legitimacy and increase their constituency through inclusiveness and openness, while remaining attractive to those organizations that embrace global competition and are among the strongest and most prestigious players in the field. This is a delicate balance to strike especially in the context of an increasing number of more exclusive international university networks. Given the limitation of our study, the phenomenon of meta-organizations in higher education clearly demands future research. This includes, for example, longitudinal studies of membership dynamics in different types of alliances that go beyond the cross-sectional assessment that we could provide. Moreover, it is worth investigating the question how membership dynamics influence the activities, policies, and organizational processes of the networks themselves. Overall, a growing focus on university alliances using a meta-organizational perspective can contribute to a better understanding of both key dynamics in the organizational field of higher education especially regarding processes of internationalization and stratification in global higher education.
Quality of Community Based Nutrition of Integrated Refresher Training Provided for Health Extension Workers in Amhara Region, Northwest Ethiopia Improving nutrition contributes to productivity, economic development, and poverty reduction by improving cognitive development, school performance, and physical work capacity, and by maintaining health status through reduced morbidity and mortality. Poor nutrition perpetuates the cycle of poverty. Community-Based Nutrition (CBN) is an important component of the National Nutrition Program, designed to build upon the Health Extension Program packages to improve the nutritional status of under-five children and pregnant and lactating women. As part of this program shift, CBN training modules have been shortened and incorporated into the Integrated Refresher Training (IRT). The nutrition components of the Integrated Refresher Training have not been assessed so far. This study aims to assess the quality of the CBN component of the integrated refresher training, stakeholder perceptions of the quality of the training, and the change in the knowledge of health extension workers (HEWs). An institution-based cross-sectional study with both qualitative and quantitative data collection methods was used. Four woredas were chosen purposively from a listing of all woredas receiving IRT module II in Amhara region from June to July 2012. Many master trainers (MTs) and trainees mentioned difficulty in delivering the training as designed due to the shortage of allocated time. This was also observed in IRT sessions, where MTs used more than the allocated time. Even though most trainees said the training on the CBN component was adequate for providing services to the community, and a significant knowledge change (p<0.05) was seen among participants after the training, it was observed that they failed to give all the appropriate advice related to the CBN component during the field practice. Most of the HEWs reported that there had been no supportive supervision in the last four months. In conclusion, the training given in the four selected woredas of Amhara region was not delivered with adequate quality or as designed. The nutrition component of the IRT lacks reporting and monitoring formats, and in all nutrition components of the IRT the allocated training time is too short. The nutrition component of the IRT is also not adequate for health extension workers to carry out the community-based nutrition program. Thus, there should be additional training for the health extension workers. INTRODUCTION Undernutrition continues to afflict 180 million children worldwide and is responsible for in excess of 3.5 million maternal and child deaths each year. Until recently malnutrition was a neglected issue. However, it has recently begun to rise up the political agenda [http://www.ids.ac.uk, accessed August 12, 2013]. Maternal and child undernutrition account for 11% of the global burden of disease [Black et al., 2008]. Globally, about one in four children under 5 years old is stunted (26 per cent in 2011). An estimated 80 per cent of the world's 165 million stunted children live in just 14 countries. Ethiopia is among the 14 countries with the largest burden and highest prevalence of stunting [UNICEF, 2013]. In Ethiopia, between 2000 and 2011 the prevalence of underweight and stunting declined to 32 and 23 percent respectively. The country needs to accelerate efforts to reach the Health Sector Development Plan's target of reducing the prevalence of stunting to 30 percent by 2015 [FMoH, 2008;GOE, 2013].
In order to prevent malnutrition in children, the family and the community should be the first line of protection. Community-Based Nutrition (CBN) aims to build up the capacity and the ownership of communities and families to make informed decisions on child care practices [FMOH, 2008;FMOH, 2011]. CBN is an important component of the National Nutrition Program (NNP), designed to build upon the Health Extension Program (HEP) packages to improve the nutritional status of under-five children and pregnant and lactating women. CBN was implemented in different phases in Ethiopia [FMOH, 2013]. Although there has been a marked improvement in the level of malnutrition in the country, child malnutrition is still prevalent according to the Ethiopia Demographic and Health Survey (EDHS) of 2011. The prevalence of stunting was 44% among children 6-59 months, underweight was 29%, and 10% of children were wasted [DHS, 2011]. CBN supports improved quality and coverage of a number of preventive and promotive activities at the community level, including: monthly growth monitoring and promotion for under-two children; monthly community dialogues to engage community members in assessing and improving nutrition; quarterly screening of under-five children and pregnant and lactating women for malnutrition (with linkages to targeted supplementary food where available); improving referral practices; and six-monthly campaigns of Vitamin A supplementation and deworming for children 6-59 months [FMoH, 2008;2011]. As part of this programme shift, CBN training modules have been shortened and incorporated into the Integrated Refresher Training (IRT). IRT is a new model developed by the Federal Ministry of Health for delivering in-service training to health extension workers (HEWs) so that they can implement the health extension packages, including the former CBN training module. The community-based nutrition components of the IRT include: maternal nutrition (MN), breastfeeding, complementary feeding (CF), growth monitoring and promotion (GMP) and community health days (CHD) [FMOH, 2011]. The nutrition components of the IRT have not been evaluated so far, so this study will help to improve the component by identifying the gaps and opportunities. Therefore, the current study aimed to assess the quality of the CBN components of the integrated refresher training, stakeholder perceptions of the quality of training, and the change in health extension workers' (HEWs) knowledge in Amhara Region. Study areas The study was conducted in four randomly selected woredas of Amhara region. Amhara region is located 9°-14° N and 36°-40°E in Ethiopia's Northwest. The state shares common borders with the state of Tigray in the north, Afar in the east, Oromiya in the south, Benishangul-Gumuz in the southwest and the Republic of Sudan in the west. The State of Amhara covers an estimated area of 170,752 square kilometres. According to the 2007 census, the region's population was 17,214,056, of which 50.2% were males and 49.8% females. About 85% of the people are engaged in agriculture. The State is one of the major Teff (staple food) producing areas in the country. Barley, wheat, oil seeds, sorghum, maize, oats, beans and peas are major crops produced in large quantities [CSA, 2008].
Instrument, measurements and variables collected In-depth interviews were conducted by researchers who had received training on the instruments, assisted by a note taker and a digital tape recorder. Moreover, observations were also conducted during training sessions. Pre-test and post-test questions covering all CBN components were prepared. Indicators were: observation in training sessions, structured tests, and the perceptions of both MTs and trainees. Observations and participants' perceptions on the breastfeeding training session Breast milk is the only food or drink that a newborn child needs in the first 6 months of his/her life. Exclusivity is a measure of the amount of breastfeeding without supplementation (e.g., infant formula or other breast milk replacements), and 6 months of age is a key marker since complementary foods (i.e., solids) usually begin around 6 months postpartum [WHO, 2011;BCC, 2006]. A shorter duration of exclusive breastfeeding does not protect infant growth as well as exclusive breastfeeding for six months does [WHO and UNICEF, 1998]. Moreover, the early introduction of complementary foods shortens the duration of breastfeeding, interferes with the uptake of important nutrients found in breastmilk and reduces the efficiency of lactation in preventing new pregnancies [Zeitlin and Ahmed, 1998;Oski and Landaw, 1980;Bell et al., 1987]. Around half of the MTs said that the training was delivered based on the manual as designed; they explained the content well and the facilitator guide helped them to do so. On the other hand, the remaining half of them said that there was a shortage of time to strictly follow everything listed in the manual, which made some of them rush, others skip some exercises, and the rest push the session in order to finish the breastfeeding part. Some (5) MTs explained that they were not using breast models and dolls. Moreover, they explained that more time should be given to topics like breast attachment, positioning and expressing breast milk, mentioning that these are new practices for the rural community. This was also observed in IRT sessions, where MTs used, on average, 20 minutes more than the time allocated for the breastfeeding session. As observed in the four sessions, the use of more time than allocated on the above topics could be due to the insufficiency of the allocated time to do all exercises in separate groups and discuss them with the larger group. In addition, on average 65% of the BF session was fully delivered with two-way communication. The rest of the session may not have been fully delivered because MTs rushed to finish the sessions on time and thought that participants already knew the messages. Master trainers from Even though more than 60% of the trainees said the training on BF was adequate for giving services to the community, and a significant knowledge gain (p<0.05) on breastfeeding was seen among participants after the training, it was observed that they failed to give all the appropriate advice related to breastfeeding during the field practice. This could be due to the shortage of time given for the field practice, which prevented them from following all the procedures, and to the fact that the MTs were not supervising them and giving feedback. The classroom practical session was not properly done as per the guideline, which could be the other reason.
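The knowledge-change figures quoted in these sections (e.g. p < 0.05 for breastfeeding) come from comparing trainees' pre-test and post-test scores. The report does not state which statistical test was used; the short R sketch below, with placeholder data, shows one standard way such a paired pre/post comparison can be made.

```r
# Placeholder pre- and post-training CBN knowledge scores for the same
# trainees (illustrative values only, not data from the study).
scores <- data.frame(
  pre  = c(10, 12, 9, 14, 11, 8, 13),
  post = c(14, 15, 12, 16, 13, 11, 15)
)

# Paired t-test of the mean change in knowledge score after training;
# p < 0.05 would indicate a statistically significant knowledge gain.
t.test(scores$post, scores$pre, paired = TRUE)
```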
Observations and participants' perceptions on the complementary feeding training session Complementary feeding (CF) is the process of giving other foods and liquids in addition to breastmilk. Complementary foods can be specially prepared for the infant or can be the same foods available for family members, modified in order to meet the eating skills and needs of the infant [IASC, 2009; Monte and Giugliani, 2004]. The impact of feeding practices on the nutritional status, growth, development, and health outcomes of infants and young children is well documented. The critical window for improving child nutrition is from pregnancy through the first 2 years of life, a period when the transition is made to CF [IASC, 2009]. Most (15) MTs said that they delivered the training based on the manual and reasoned that it was scheduled and prepared in a way that HEWs could understand. However, some (5) MTs mentioned that there was a time shortage, which obliged them to give some topics as a reading assignment and rush through topics of which they thought HEWs had a good understanding. Master trainers from Gidan woreda responded: "It is not possible to say delivered as it was designed; this was mainly because of shortage of time. If we think to deliver it as to the manual we would not be able to cover the portions. That's why some times we gave more time to new topics by reducing some time from the topics which we consider trainees would have a better understanding." Other MTs also explained that there was difficulty in strictly following the manual. Their reason was the shortage of participant manuals and family health cards (FHCs); as a result, they were using the old FHC to fill the gap. [Field practice observation checklist items: hang the weighing scale from a tree branch; attach the weighing basket to the scale and adjust the scale to zero by moving the knob at the back of the scale; remove the child's heavy clothes and shoes; wait for the needle to stop moving before reading the weight; read the weight to the nearest 0.1 kg; read the weight loud enough for the mother to hear; ask about breastfeeding, breastfeeding frequency and breastfeeding problems; ask about other liquids given to the child; explain exclusive breastfeeding; ask about complementary foods, the frequency of CF, CF proportions (3 cereal : 1 legume), the consistency (thickness) of CF, enrichment foods and appropriate snacks; explain that babies have small stomachs and need to eat more frequently; discuss feeding during illness and feeding after illness; actively encourage the baby to eat; wash hands before preparing foods; prepare foods hygienically; be patient and encourage the child to eat; GALIDRAA (for abnormal growth).] In addition, some (5) MTs mentioned the importance of having a poster that contains the procedures of complementary food preparation. The complementary food session did not contain adequate information on iodized salt utilization, and they suggested that information regarding the importance of iodized salt utilization should be included in this session, since utilization, especially in their community, was poor. Master trainers from the Regional health office stated: "When we demonstrate the food preparation, we had material problem. For e.g. the materials for food preparation are not included in the training package. We prepared the cooking materials and food stuffs available in the area by our own initiation. I think it will help to give the training easily if these things are included in the package."
MTs were rushing and skipping some exercises to finish the session, and the use of extra time was observed in sessions; on average, 13 minutes more than the allocated time were used. This was noted while observing the sessions, where in 59% of the CF sessions all messages under the sub-topics were delivered with two-way communication. This shows that, in order to finish sessions on time, MTs were not delivering all the messages as designed. Although most trainees mentioned that the training they received on CF was adequate and a significant knowledge change was seen (p<0.001), gaps in delivering appropriate messages about CF to the caregiver were observed during the field practice. This could be due to the fact that enough time was not allocated for the field practical session, for which only half a day was allocated for growth monitoring, interpersonal communication with the caregiver, conducting community conversation and demonstrating complementary food preparation [FMOH, 2011]. The other reason might be the limited involvement of facilitators in following up the trainees during field practice. Observations and participants' perceptions on the growth monitoring and promotion training session Community-based growth monitoring programs are one of the short-route responses to reduce the prevalence of malnutrition. In order to detect cases of malnutrition and associated illnesses, monthly measurements of children's weight are recorded, compared to previous records and plotted on a chart against an international reference population. The linear growth retardation acquired early in infancy cannot be easily reversed after the second year of life [Monte and Giugliani, 2004;UNICEF, 2004]. Though most (14) MTs complained about time shortage for delivering the GMP session as designed, in all the observations it was finished, on average, 15 minutes early. This could be because none of the woredas practiced proper weighing. Preparation of the weighing basket was also not done in any session except in one woreda. Furthermore, interpreting weight gain and determining nutritional status were skipped in two woredas, and it was observed that, per session, on average 55% of GMP messages were delivered as designed (all messages with two-way communication). This shows that skipping exercises might be the reason the GMP sessions finished early. More than half of the interviewees said the training was not enough for different reasons. They said that additional training is needed as they did not get adequate knowledge. Shortage of time was the main problem raised. They were not able to do the practical weighing and plotting exercise due to the time shortage. They also said that they need more training on interpretation of the result after plotting. A Health Extension Worker from Gidan woreda responded that "For example in the area where I am working there is no separate growth chart for the two sexes; but what I observed here was in blue and red colors. So, it needs additional explanation why we needed to put them separately."
Although the knowledge gained was statistically significant (p < 0.05), the trainees answered only half of the questions correctly on the post-test. This could be a result of skipping practical exercises and not delivering all messages as designed. Moreover, in the field practices most trainees failed to weigh a child appropriately and determine its nutritional status, even though most replied that the training they received on GMP was adequate. Since in some places trainees did not even get the chance to practice weighing a child, because the MTs were doing it themselves, this might affect delivery of the service to the community. Some HEWs were confused by new information in the manual, especially on which children are eligible for growth monitoring, and did not want to adopt the new guidance.

A master trainer from the Regional Health Office responded: "Even though we gave the training based on the manual, there were conflicts among participants and facilitators concerning the age of children eligible for GMP (2 or 3 years). The manual clearly shows that the age of a child for GMP should be under 2 years, but most of the HEWs resist this and have a tendency to follow the previous trend, i.e. under 3 years. They thought their immediate boss should order them to practice based on the new manual."

Almost all MTs complained about the time allocated to this session; they said that the session contains important and practical topics that HEWs are expected to carry out independently in the community, but the time allocated was not adequate to deliver the training while ensuring that each participant understood everything. Understanding invisible malnutrition, properly weighing a child, plotting a child's weight on the growth chart, interpreting the growth trend, and discussing the issue with the mother were the most frequently mentioned topics needing more time for better understanding. Some MTs supported this suggestion and said that practical exercises were skipped due to shortage of time, and as a result most of the participants were unable to perform the growth monitoring and community conversation activities during field practice.

Furthermore, some MTs recommended that the GMP exercise presented as a table in the manual and the interpretation of the growth trend in graph 3 be revised, as they created confusion; they also stressed that the material of the weighing basket should be modified again, as it was not comfortable for children during weighing. Shortages of training materials such as the family health card and participant manual were mentioned, with the suggestion that they be resolved before conducting future training.
Observations and participants' perceptions on the maternal nutrition training session

Maternal and child undernutrition account for 11% of the global burden of disease. Maternal nutrition refers to the nutritional needs of women during the antenatal and postnatal period (i.e., when they are pregnant and breastfeeding) and may also refer to the period before conception (i.e., adolescence). Maternal undernutrition affects the health of both mothers and children and, as a result, has broad impacts on economic and social development. Undernourished pregnant women have higher reproductive risks, including death during or following childbirth [Black et al., 2008; MGUR, 2012]. Studies have shown that maternal and child undernutrition (maternal height, birth weight, intrauterine growth restriction, and weight, height and body mass index (BMI) at 2 years according to the new WHO growth standards) is related to adult outcomes (height, schooling, income or assets, offspring birth weight, BMI, glucose concentrations, blood pressure) [Victoria et al., 2008].

The perceptions of MTs on maternal nutrition were broadly similar; they all explained that they delivered the training according to the manual, and pointed out that it would be better if maternal nutrition were presented separately, just like the other nutrition components. Their complaint was not only about how the section was organized but also about the detail and clarity of its contents. In their opinion, lacteal feeds and food taboos were not presented in detail and might not be sufficient for the trainees to teach the community. They explained that the manual mentions only the types of foods that are culturally accepted or not accepted, but does not clearly state why some are accepted and others prohibited. This deficiency led them to two conclusions: first, that the manual did not consider the reality that the health and growth of a child fully rely on the condition of the mother; and second, that they had to think of ways to address these gaps. In this regard, they reported that they held a thorough discussion using their own experiences and some examples: "We explained what types of foods were prohibited, why they were prohibited, and in what way they could change the community's attitude." (MT)

Nearly half of the MTs explained that even though they gave the training according to the manual, they faced challenges due to shortage of time and unavailability of some materials. As they explained, neither the theoretical nor the practical session had been allocated sufficient time, and because of this they were forced to deliver only basic points, simply telling trainees to cover the rest by themselves. A master trainer from Ebnat woreda said: "We convey only key messages and leave the others as a reading assignment." For these reasons, they were not confident that trainees would bring about the desired behaviour at the community level.

The majority of the trainers reported that they had no adequate knowledge of Outpatient Therapeutic Feeding Program (OTP) services. They said that they were confused by the difference between what the manual says and the reality on the ground. The manual contains eligibility criteria for pregnant and lactating women to receive OTP services but, because of budget constraints and lack of knowledge, it is uncommon for this service to be provided. MTs also indicated that none of the HEWs knew how the service should be given.
In contrast, a few MTs said that the manual has sufficient time and detailed contents; in their view, it has many exercises, and the training was delivered in a participatory and explanatory way.

Observations and participants' perceptions on the Community Health Day training session

In this section, trainers were asked whether they gave the training according to the manual or not, and to explore possible reasons that prevented them from doing so. Almost all MTs said that they delivered the training according to the manual, even though shortage of time was their main problem. Most of them said this shortage was mainly during the practical sessions.

Another master trainer from East Estie woreda responded: "To dig out everything and solve the problems, there wasn't enough time. There were argumentative ideas raised, so this needs more time. Caring for malnourished children by itself needs more than an hour. The time given for the practical part was very short and not even enough to explain the points."

When asked about the delivery of messages with respect to the sequence set out in the manual, trainers cited a range of reasons for skipping some portions or not keeping to the sequence. Most asserted that they had themselves been trained in this way, and assumed that trainees already knew and practiced it.

A master trainer from Meket woreda said: "We were just applying what we have been trained in the training of trainers. That's why we go to the particular community after getting full information about the pertinent problems in that area. In today's session, for instance, first we gathered information about the number of children weighed and their nutritional status from the last month's registry of the health post. Having this information, we just began the session with counselling."

A few interviewees also raised the need to hire additional HEWs, because some of those who have already received the training are leaving for other places: "There should be additional HEWs because nowadays HEWs are migrating to other places." (MT from Regional Health Office)

Others doubted the health extension workers' ability to carry out this work independently in their workplaces: "What we understood from this section is that most HEWs are unable to carry it out independently in their work places. They need support from someone else whenever there are community health days." (MT from Meket woreda)

The majority of the trainers described the training as highly affected by shortage of materials; as a result, they were not able to demonstrate some procedures: "That is because we don't have the materials. We have asked, but only albendazole, vitamin A and MUAC tape were found. We have asked for a tally sheet and registration book, but there isn't any." (MT from Estie woreda)

Training adequacy for non-CBN areas

Most interviewees indicated that the training was not enough for those who had no previous CBN exposure. According to the trainers, this was clearly observed when they received the training of trainers (TOT) together with those who had already been trained in CBN.
Change in knowledge of trainees

This study aimed to assess the change in knowledge of participants. They were given a pre-test before the training and a post-test after the training. The trainees received four days of training from master trainers on the CBN components (breastfeeding, complementary feeding, growth monitoring and promotion, maternal nutrition, and community health day) at woreda level. A paired t-test was used to assess the knowledge change, which was found to be statistically significant for most components. The mean pre-test score for breastfeeding was 5.19 and the post-test score 5.31, an increase of 0.12 over the pre-test score (Table 1). The results also showed an increase of 0.96 for CF: the mean pre-test score for CF was 4.01 and the post-test score was 4.97. For GMP, there was an increase of 0.69 after the intervention. There was a significant change in knowledge for the breastfeeding and community health day training (p < 0.05), and a highly significant change for the complementary feeding and growth monitoring and promotion components (p < 0.001); however, no statistically significant change in knowledge was observed for the maternal nutrition training (p > 0.05) (Table 1).

Health extension workers' perceptions of the quality of training

HEWs were interviewed to assess their perception of the quality of training. More than half of the study subjects responded that the training given on the CBN components was the right amount, whereas about a quarter of trainees perceived that they needed more training on CF, BF, GMP, MN, and CHD to understand the subjects (Table 2).

Field practices

Field practices were done in all four woredas where the nutrition part of the IRT sessions was observed. In all field practice sessions, details of what must be done, when, and how were given to participants before going to the field. Facilitators and co-facilitators were selected in all sessions except in one woreda. The communities were informed about the field practice prior to the date of practice, but they were not divided into two gottes (peasant associations) in any session. Use of community growth charts was observed only in the practical session conducted in one woreda. In addition, inadequacy of FHCs and the unavailability of a checklist to follow the community conversation were noticed in all four practical sessions.

One participant was expected to weigh a child and conduct the interpersonal communication with the caretaker; however, groups of 3 to 4 participants were weighing one child and doing the interpersonal communication, except in one woreda where weighing was done by the master trainers themselves. The reason given was that only a few children were available at the time of field practice; surprisingly, only 3 children were weighed in the field practice held in one (Estea) woreda.

Among the 21 children observed during growth monitoring across the field practices of all sessions, all weighing procedures were followed for only 7 of them, and none of the caretakers received all the appropriate advice. Correct age determination and nutritional status determination were done for only 14 and 11 children, respectively (Table 3). Of the four field practice sessions observed, community conversation (CC) was conducted in three woredas but not in one, as only 3 caretakers were available at the time of field practice. However, none of the community conversations in the three practical sessions was conducted properly.
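For readers who wish to reproduce the type of pre/post comparison described above, the sketch below shows a paired t-test in Python; the score arrays are hypothetical placeholders for illustration only, not the study data.

```python
# Minimal sketch of the paired t-test used for the pre/post knowledge comparison.
# The score arrays below are hypothetical placeholders, not the study data.
from scipy import stats

pre_scores = [4, 5, 3, 6, 5, 4, 5, 6]    # hypothetical pre-test scores (same trainees)
post_scores = [5, 6, 4, 6, 6, 5, 6, 7]   # hypothetical post-test scores (same trainees)

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
mean_change = (sum(post_scores) - sum(pre_scores)) / len(pre_scores)

print(f"mean change = {mean_change:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would be reported as significant and p < 0.001 as highly significant,
# following the convention used in Table 1.
```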
CONCLUSION

In conclusion, the training given in the four selected woredas was not of adequate quality and was not delivered as designed. The nutrition component of the IRT lacks a reporting and monitoring format. The nutrition component of the IRT training manual should be revised based on these findings. The time allocated to all CBN components was not adequate, and the allocated training time for all nutrition components should be rearranged. A reporting format and monitoring and evaluation should be added to the revised training manual. Health extension workers should be given training over a larger number of days, taking new CBN woredas into consideration.

A […]-based cross-sectional study design was employed. Data were collected from June to July 2012. Both qualitative and quantitative data were collected. Observations were conducted during IRT training sessions and field practices. Pre- and post-training tests were administered to participants to evaluate their knowledge change after training. In-depth interviews (IDIs) were conducted with selected trainees and master trainers (MTs) from each training session. A total of 139 HEWs and 22 master trainers (MTs) were involved in the study. The quantitative data exported from Epi-Info were analyzed using SPSS version 17.0. A paired-sample t-test was used to assess knowledge gained after training. HEWs' perceptions of training and observations of training by the research team were summarized as a measure of training quality. Qualitative data from IDIs were reported under the pre-established themes on the quality of training and the barriers and facilitating factors to delivering the training as planned.

[Table 1: Distribution of respondents on pre- and post-training test scores of CBN components in selected woredas of Ethiopia, September 2012. Columns: test type, mean score. * Significant at p < 0.05, ** significant at p < 0.001.]

"… However, I have an idea that it is better if it is included in the content." In contrast to this, most of the trainees said that the training they received on breastfeeding was enough. They mentioned different reasons: some stated that they had gained enough skill to apply it practically in the community, some said it was the right way to address the existing gaps in the community, and others said it was easily understandable and they were trained well. Some said the training on breastfeeding (BF) was not enough. They reasoned that a shortage of materials such as a breast model and doll made it difficult to understand the training on attachment and positioning. They also said it was too theoretical, and the absence of a practical session/demonstration was another shortcoming of the training. […] is not included in the training materials. A health extension worker from Meket woreda said: "Expressing breast milk was not enough, as it was very uncommon in our community. Even now it is not clear to me. Since it is related to bottle feeding, it needs additional and improved training. So more time must be given to this section; we ourselves are lactating and we can practice it. I tried it last night but I couldn't, because I didn't practice it here."
2015-03-19T23:44:59.000Z
2013-12-30T00:00:00.000
{ "year": 2013, "sha1": "d26b497a5d25ebeac62a65f65672479d5ce2d77b", "oa_license": null, "oa_url": "https://doi.org/10.12944/crnfsj.1.2.07", "oa_status": "GOLD", "pdf_src": "Grobid", "pdf_hash": "d26b497a5d25ebeac62a65f65672479d5ce2d77b", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Education", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4668679
pes2o/s2orc
v3-fos-license
Competing Bandits: Learning under Competition Most modern systems strive to learn from interactions with users, and many engage in \emph{exploration}: making potentially suboptimal choices for the sake of acquiring new information. We initiate a study of the interplay between \emph{exploration and competition}---how such systems balance the exploration for learning and the competition for users. Here the users play three distinct roles: they are customers that generate revenue, they are sources of data for learning, and they are self-interested agents which choose among the competing systems. As a model, we consider competition between two multi-armed bandit algorithms faced with the same bandit instance. Users arrive one by one and choose among the two algorithms, so that each algorithm makes progress if and only if it is chosen. We ask whether and to which extent competition incentivizes \emph{innovation}: adoption of better algorithms. We investigate this issue for several models of user response, as we vary the degree of rationality and competitiveness in the model. Effectively, we map out the"competition vs. innovation"relationship, a well-studied theme in economics. Introduction Learning from interactions with users is ubiquitous in modern customer-facing systems, from product recommendations to web search to spam detection to content selection to fine-tuning the interface. Many systems purposefully implement exploration: making potentially suboptimal choices for the sake of acquiring new information. Randomized controlled trials, a.k.a. A/B testing, are an industry standard, with a number of companies such as Optimizely offering tools and platforms to facilitate them. Many companies use more sophisticated exploration methodologies based on multi-armed bandits, a well-known theoretical framework for exploration and making decisions under uncertainty. System that engages in exploration typically need to compete against one another; most importantly, they compete for users. This creates an interesting tension between exploration and competition. In a nutshell, while exploring may be essential for improving the service tomorrow, it may degrade quality and make users leave today, in which case there will be no users to learn from! Thus, users play three distinct roles: they are customers that generate revenue, they are sources of data for learning, and they are self-interested agents which choose among the competing systems. We initiate a study of the interplay between exploration and competition. The main high-level question is: whether and to which extent competition incentivizes adoption of better exploration algorithms. This translates into a number of more concrete questions. While it is commonly assumed that better learning technology always helps, is this so for our setting? In other words, would a better learning algorithm result in higher utility for a principal? Would it be used in an equilibrium of the "competition game"? Also, does competition lead to better social welfare compared to a monopoly? We investigate these questions for several models, as we vary the capacity of users to make rational decisions and the severity of competition between the learning systems. (In our models, the two are coupled as they are controlled by the same "knob".) The relationship between the severity of competition among firms and the quality of technology adopted as a result of this competition is a familiar theme in economics literature, known as "competition vs. innovation". 
We frame our contributions in terms of the "inverted-U relationship", a conventional wisdom regarding "competition vs. innovation" (see Figure 1). Our model We define a game in which two firms (principals) simultaneously engage in exploration and compete for users (agents). These two process are interlinked, as exploration decisions are experienced by users and informed by their feedback. We need to specify several conceptual pieces: how the principals and agents interact, what is the machine learning problem faced by each principal, and details of the game between the principals and the agents. Each piece can get extremely complicated in isolation, let alone jointly, so we strive for simplicity. Thus, the game is as follows: • A new agent arrives in each round, and chooses among the two principals. The principal chooses an action (e.g., a list of web search results to show to the agent), the user experiences this action, and reports a reward. • Each principal faces a very basic and well-studied version of the multi-armed bandit problem: for each arriving agent, it chooses from a fixed set of actions (a.k.a. arms) and receives a reward drawn independently from a fixed distribution specific to this action. • What happens with a given agent is only observed by this agent and the principal chosen by this agent. Principals simultaneously announce their learning algorithms before the agents start arriving, and cannot change them afterwards. All agents share the same Bayesian prior on the rewards and the same "decision rule" for choosing among the principals. Our model side-steps many potential complexities, including, resp.: (i) agents who arrive multiple times and may potentially learn over time and/or manipulate the principals' learning algorithms, (ii) numerous well-motivated generalizations of multi-armed bandits studied in machine learning, particularly ones that concern rewards that change over time. (iii) agents and principals secondguessing and gaming one another as the game progresses. In particular, each agent has welldefined beliefs about the agents that came before, and therefore is capable of making a decision, and each principal's "strategy" boils down to a multi-armed bandit algorithm (which is oblivious to the game-theoretic aspects of the model). Our results Our results depend crucially on agents' "decision rule" for choosing among the principals. The simplest and perhaps the most obvious rule is to select the principal which maximizes their expected utility; we refer to it as HardMax. We find that HardMax is not conducive to innovation. In fact, each principal's dominating strategy is to do no purposeful exploration whatsoever, and instead always choose an action that maximizes expected reward given the current information; we call this algorithm DynamicGreedy. While this algorithm may potentially try out different actions over time and acquire useful information, it is known to be dramatically bad in many important cases of multi-armed bandits -precisely because it does not explore on purpose, and may therefore fail to discover best/better actions. Further, we show that HardMax is very sensitive to tie-breaking when both principals have exactly the same expected utility according to agents' beliefs. If tie-breaking is probabilistically biased -say, principal 1 is always chosen with probability strictly larger than 1 2 -then this principal has a simple "winning strategy" no matter what the other principal does. 
We relax HardMax to allow each principal to be chosen with some fixed baseline probability. One intuitive interpretation is that there are "random agents" who choose a principal uniformly at random, and each arriving agent is either HardMax or "random" with some fixed probability. We call this model HardMax&Random. We find that innovation helps in a big way: a sufficiently better algorithm is guaranteed to win all agents after an initial learning phase. While the precise notion of "sufficiently better algorithm" is rather subtle, we note that commonly known "smart" bandit algorithms typically defeat the commonly known "naive" ones, and the latter typically defeat DynamicGreedy. However, there is a substantial caveat: one can defeat any algorithm by interleaving it with DynamicGreedy (see Section 5 for details). This has two undesirable corollaries: a better algorithm may sometimes lose, and pure Nash equilibrium typically does not exist. We further relax the decision rule so that the probability of choosing a given principal varies smoothly as a function of the difference between principals' expected rewards; we call it SoftMax. For this model, the "better algorithm wins" result holds under much weaker assumptions on what constitutes a better algorithm. This is the most technical result of the paper. The competition in this setting is necessarily much more relaxed: typically, both principals attract approximately half of the agents as time goes by (but a better algorithm may attract slightly more). Economic implications. Our models differ in terms of rationality in agents' decision-making: from fully rational decisions with HardMax to relaxed rationality with HardMax&Random to an even more relaxed rationality with SoftMax. The decision rule also controls the severity of competition between the principals: from cut-throat competition with HardMax to a more relaxed competition with HardMax&Random to an even more relaxed competition with SoftMax. Further, uniform choice among principals corresponds to no rationality and no competition. The results discussed above imply an inverted-U relationship between rationality/competition and innovation, in the spirit of Figure 1, where innovation refers to the quality of multi-armed bandit algorithms selected in an equilibrium. Further, we find another, technically different inverted-U relationship, where we vary rationality/competition inside the HardMax&Random model, and we measure innovation via the marginal utility of switching to a better algorithm. Remark. Much of the challenge in this paper, both conceptual and technical, was in setting up the theorems rather than proving them. Apart from making the modeling choices described in Section 1.1, it was crucial to interpret the results and intuitions from the literature on multi-armed bandits so as to formulate meaningful assumptions which are productive in our setting. Map of the paper. We survey related work (Section 2), lay out the model and preliminaries (Section 3), and proceed to analyze the three main models, HardMax, HardMax&Random and SoftMax (in Sections 4, 5, 6, resp.). We discuss economic implications in Section 7. Appendix A provides some pertinent background on multi-armed bandits. Related work Exploration. Multi-armed bandits (MAB) is a particularly elegant and tractable abstraction for tradeoff between exploration and exploitation: essentially, between acquisition and usage of information. 
MAB problems have been studied in Economics, Operations Research and Computer Science for many decades; see (Bubeck and Cesa-Bianchi, 2012; Gittins et al., 2011) for background on regret-minimizing and Bayesian formulations, respectively. A discussion of industrial applications of MAB can be found in Agarwal et al. (2016). The literature on MAB is vast and multi-threaded. The most related thread concerns regret-minimizing MAB formulations with IID rewards (Lai and Robbins, 1985; Auer et al., 2002a). This thread includes "smart" MAB algorithms that combine exploration and exploitation, such as UCB1 (Auer et al., 2002a) and Successive Elimination (Even-Dar et al., 2006), as well as "naive" MAB algorithms that separate exploration and exploitation, such as Explore-then-Exploit and ε-Greedy.

There is a superficial similarity, in name only, between this paper and the line of work on "dueling bandits" (e.g., Yue et al., 2012). The latter is not about competing bandit algorithms, but rather about scenarios where in each round two arms are chosen to be presented to a user, and the algorithm only observes which arm has "won the duel". Our setting is closely related to the "dueling algorithms" framework (Immorlica et al., 2011), which studies competition between two principals, each running an algorithm for the same problem. However, that work considers algorithms for offline / full-input scenarios, whereas we focus on online machine learning and the explore-exploit-incentives tradeoff therein. Also, that work specifically assumes binary payoffs (i.e., win or lose) for the principals.

Other related work in economics. The competition vs. innovation relationship and the inverted-U shape thereof were introduced (among many other ideas) in a classic book (Schumpeter, 1942), and have remained an important theme in the literature ever since (e.g., Aghion et al., 2005; Vives, 2008). Production costs aside, this literature treats innovation as a priori beneficial for the firm. Our setting is very different, as innovation in exploration algorithms may potentially hurt the firm. A line of work on platform competition, starting with Rysman (2009), concerns competition between firms (platforms) that improve as they attract more users (network effect); see Weyl and White (2014) for a recent survey. This literature is not concerned with innovation, and typically models network effects exogenously, whereas in our model network effects are endogenous (they are created by MAB algorithms, an essential part of the model).

Relaxed versions of rationality similar to ours are found in several notable lines of work. For example, "random agents" (a.k.a. noise traders) can side-step the "no-trade theorem" (Milgrom and Stokey, 1982), a famous impossibility result in financial economics. The SoftMax model is closely related to the literature on product differentiation, starting from Hotelling (1929); see Perloff and Salop (1985) for a notable later paper. There is a large literature on non-existence of equilibria due to small deviations (which is related to the corresponding result for HardMax&Random), starting with Rothschild and Stiglitz (1976) in the context of health insurance markets. Notable recent papers (Veiga and Weyl, 2016; Azevedo and Gottlieb, 2017) emphasize the distinction between HardMax and versions of SoftMax. While agents' rationality and severity of competition are usually modeled separately in the literature, it is not unusual to have them modeled with the same "knob" (e.g., Gabaix et al., 2016).
Basic model and preliminaries

Principals and agents. There are two principals and T agents. The game proceeds in rounds (we will sometimes refer to them as global rounds). In each round t ∈ [T], the following interaction takes place. A new agent arrives and chooses one of the two principals. The principal chooses a recommendation: an action a_t ∈ A, where A is a fixed set of actions (the same for both principals and all rounds). The agent follows this recommendation, receives a reward r_t ∈ [0, 1], and reports it back to the principal.

The rewards are i.i.d. with a common prior. More formally, for each action a ∈ A there is a parametric family ψ_a(·) of reward distributions, parameterized by the mean reward µ_a. (The paradigmatic case is 0-1 rewards with a given expectation.) The mean reward vector µ = (µ_a : a ∈ A) is drawn from prior distribution P_mean before round 1. Whenever a given action a ∈ A is chosen, the reward is drawn independently from distribution ψ_a(µ_a). The prior P_mean and the distributions (ψ_a(·) : a ∈ A) constitute the (full) Bayesian prior on rewards, denoted P.

Each principal commits to a learning algorithm for making recommendations. This algorithm follows a protocol of multi-armed bandits (MAB). Namely, the algorithm proceeds in time-steps: each time it is called, it outputs a chosen action a ∈ A and then inputs the reward for this action. The algorithm is called only in the global rounds when the corresponding principal is chosen.

The information structure is as follows. The prior P is known to everyone. The mean rewards µ_a are not revealed to anybody. Each agent knows both principals' algorithms and the global round when (s)he arrives. Each principal is completely unaware of the rounds when the other is chosen.

Some terminology. The two principals are called "principal 1" and "principal 2". The algorithm of principal i ∈ {1, 2} is called "algorithm i" and denoted alg_i. The agent in global round t is called "agent t"; the chosen principal is denoted i_t. Throughout, E[·] denotes expectation over all applicable randomness.

Bayesian-expected rewards. Consider the performance of a given algorithm alg_i, i ∈ {1, 2}, when it is run in isolation (i.e., without competition, just as a bandit algorithm). Let rew_i(n) denote its Bayesian-expected reward for the n-th step. Now, going back to our game, fix global round t and let n_i(t) denote the number of global rounds before t in which this principal is chosen. Then, if agent t chooses principal i, her Bayesian-expected reward is rew_i(n_i(t) + 1).

Agents' response. Each agent t chooses principal i_t as follows: it chooses a distribution over the principals, and then draws independently from this distribution. Let p_t be the probability of choosing principal 1 according to this distribution. Below we specify p_t; we need to be careful so as to avoid a circular definition. Let I_t be the information available to agent t before the round. Assume I_t suffices to form posteriors for the quantities n_i(t), i ∈ {1, 2}; denote them by N_{i,t}. Then, for each principal i,

PMR_i(t) = E[ rew_i(n_i(t) + 1) | I_t ] = E_{n ∼ N_{i,t}}[ rew_i(n + 1) ].

This quantity represents the posterior mean reward for principal i at round t, according to information I_t; hence the notation PMR. In general, probability p_t is defined by the posterior mean rewards PMR_i(t) for both principals. We assume a somewhat more specific shape:

p_t = f_resp( PMR_1(t) − PMR_2(t) ).    (1)

Here f_resp : [−1, 1] → [0, 1] is the response function, which is the same for all agents. We assume that the response function is known to all agents.
To make the model well-defined, it remains to argue that information I t is indeed sufficient to form posteriors on n 1 (t) and n 2 (t). This can be easily seen using induction on t. Since all agents arrive with identical information (other than knowing which global round they arrive in), it follows that all agents have identical posteriors for n i,t (for a given principal i and a given global round t). This posterior is denoted N i,t . Figure 2: The three models for agents' response function: HardMax is thick blue, HardMax&Random is slim red, and SoftMax is the dashed curve. Response functions. We use the response function f resp to characterize the amount of rationality and competitiveness in our model. We assume that f resp is monotonically non-decreasing, is larger than 1 2 on the interval (0, 1], and smaller than 1 2 on the interval [−1, 0). Beyond that, we consider three specific models, listed in the order of decreasing rationality and competitiveness (see Figure 2): • HardMax: f resp equals 0 on the interval [−1, 0) and 1 on the interval (0, 1]. In words, agents choose the better principal with probability 1. • HardMax&Random: f resp equals ǫ on the interval [−1, 0) and 1 − ǫ ′ on the interval (0, 1], where ǫ, ǫ ′ ∈ (0, 1 2 ) are some positive constants. In words, each agent is a HardMax agent with probability 1 − ǫ − ǫ ′ , and with the remaining probability she makes a random choice. Unless specified otherwise, f resp is symmetric, in the sense that f resp (−x) + f resp (x) = 1 for any x ∈ [0, 1]. This implies fair tie-breaking: f resp (0) = 1 2 , and ǫ = ǫ ′ in the definitions above. MAB algorithms. We characterize the inherent quality of an MAB algorithm in terms of its Bayesian Instantaneous Regret (henceforth, BIR), a standard notion from machine learning: where rew(n) is the Bayesian-expected reward of the algorithm for the n-th step, when the algorithm is run in isolation. We are primarily interested in how BIR scales with n; we treat K, the number of arms, as a constant unless specified otherwise. We will emphasize several specific algorithms or classes thereof: • "smart" MAB algorithms that combine exploration and exploitation, such as UCB1 Auer et al. (2002a) and Successive Elimination Even- Dar et al. (2006). These algorithms achieve BIR(n) ≤ O(n −1/2 ) for all priors and all (or all but a very few) steps n. This bound is known to be tight for any fixed n. 2 • "naive" MAB algorithms that separate exploration and exploitation, such as Explore-then-Exploit and ǫ-Greedy. These algorithms have dedicated rounds in which they explore by choosing an action uniformly at random. When these rounds are known in advance, the algorithm suffers constant BIR in such rounds. When the "exploration rounds" are instead randomly chosen by the algorithm, one can usually guarantee an inverse-polynomial upper bound BIR, but not as good as the one above: namely, BIR(n) ≤Õ(n −1/3 ). This is the best possible upper bound on BIR for the two algorithms mentioned above. • DynamicGreedy: at each step, recommends the best action according to the current posterior: an action a with the highest posterior expected reward E[µ a | I ], where I is the information available to the algorithm so far. DynamicGreedy has (at least) a constant BIR for some reasonable priors, i.e., BIR(n) > Ω(1). • StaticGreedy: always recommends the prior best action,i.e., an action a with the highest prior mean reward E µ∼P mean [µ a ]. This algorithm typically has constant BIR. 
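To make the three response models concrete, the sketch below implements them as plain functions of the PMR difference. The epsilon constant and the logistic-style SoftMax curve are illustrative assumptions; the paper only requires the qualitative properties stated above.

```python
# Illustrative sketch of the three response functions f_resp: [-1, 1] -> [0, 1].
# The epsilon constant and the particular SoftMax shape are assumptions for
# illustration; only the qualitative properties stated in the text are required.
import math

def hard_max(delta: float) -> float:
    """Choose the better principal with certainty; fair tie-breaking at 0."""
    if delta > 0:
        return 1.0
    if delta < 0:
        return 0.0
    return 0.5

def hard_max_and_random(delta: float, eps: float = 0.1) -> float:
    """HardMax agent with probability 1 - 2*eps, uniformly random otherwise."""
    if delta > 0:
        return 1.0 - eps
    if delta < 0:
        return eps
    return 0.5

def soft_max(delta: float, slope: float = 4.0, eps: float = 0.1) -> float:
    """Smooth around 0, bounded away from 0 and 1, fair tie-breaking."""
    p = 1.0 / (1.0 + math.exp(-slope * delta))   # logistic curve through (0, 1/2)
    return eps + (1.0 - 2.0 * eps) * p           # squeeze into [eps, 1 - eps]

# delta = PMR_1(t) - PMR_2(t); each function returns p_t, the probability
# that agent t chooses principal 1, as in equation (1).
```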
We focus on MAB algorithms such that BIR(n) is non-increasing; we call such algorithms monotone. While some reasonable MAB algorithms may occasionally violate monotonicity, they can usually be easily modified so that monotonicity violations either vanish altogether, or only occur at very specific rounds (so that agents are extremely unlikely to exploit them in practice). More background and examples can be found in Appendix A. In particular, we prove that DynamicGreedy is monotone. Competition game between principals. Some of our results explicitly study the game between the two principals. We model it as a simultaneous-move game: before the first agent arrives, each principal commits to an MAB algorithm. Thus, choosing a pure strategy in this game corresponds to choosing an MAB algorithm (and, implicitly, announcing this algorithm to the agents). Principal's utility is primarily defined as the market share, i.e., the number of agents that chose this principal. Principals are risk-neutral, in the sense that they optimize their expected utility. Assumptions on the prior. We make some technical assumptions for the sake of simplicity. First, each action a has a positive probability of being the best action according to the prior: Second, posterior mean rewards of actions are pairwise distinct. That is, for any step and any feasible history h of an MAB algorithm at that step, 3 it holds that In particular, prior mean rewards of actions are pairwise distinct: for any a, a ′ ∈ A. This property is generic, e.g., it can be easily ensured by a small random perturbation of the prior. Some more notation. Without loss of generality, we label actions as A = [K] and sort them according to their prior mean rewards, so that Fix principal i ∈ {1, 2} and (local) step n. The arm chosen by algorithm alg i at this step is denoted a i,n , and the corresponding BIR is denoted BIR i (n). History of alg i up to this step is denoted H i,n . Write PMR(a | E) = E[µ a | E] for posterior mean reward of action a given event E. Generalizations Our results can be extended compared to the basic model described above. First, unless specified otherwise, our results allow a more general notion of principal's utility that can depend on both the market share and agents' rewards. Namely, principal i collects U i (r t ) units of utility in each global round t when she is chosen (and 0 otherwise), where U i (·) is some fixed non-decreasing function with U i (0) > 0. In a formula, Second, our results carry over, with little or no modification of the proofs, to much more general versions of MAB, as long as it satisfies the i.i.d. property. In each round, an algorithm can see a context before choosing an action (as in contextual bandits) and/or additional feedback other than the reward after the reward is chosen (as in, e.g., semi-bandits), as long as the contexts are drawn from a fixed distribution, and the (reward, feedback) pair is drawn from a fixed distribution that depends only on the context and the chosen action. The Bayesian prior P needs to be a more complicated object, to make sure that PMR and BIR are well-defined. Mean rewards may also have a known structure, such as Lipschitzness, convexity, or linearity; such structure can be incorporated via P . All these extensions have been studied extensively in the literature on MAB, and account for a substantial segment thereof; see Bubeck and Cesa-Bianchi (2012) for background and details. 
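As a rough, self-contained illustration of the whole game, the sketch below simulates the competition between two principals. It deliberately simplifies the agents' side by using each principal's empirical mean reward so far as a stand-in for the Bayesian posterior mean reward PMR_i(t), which is harder to compute exactly; the toy "DynamicGreedy-like" algorithm and all numerical choices are illustrative assumptions, not the paper's exact constructions.

```python
# Simplified Monte Carlo sketch of the competition game (assumptions noted above).
import random

def run_game(alg1, alg2, mu, T, f_resp):
    """alg1/alg2: objects with choose() -> arm and update(arm, reward)."""
    algs, totals, counts = [alg1, alg2], [0.0, 0.0], [0, 0]
    market_share = [0, 0]
    for _ in range(T):
        # crude stand-in for PMR_1(t) - PMR_2(t): difference of empirical means
        est = [totals[i] / counts[i] if counts[i] else 0.5 for i in (0, 1)]
        p1 = f_resp(est[0] - est[1])
        i = 0 if random.random() < p1 else 1                  # agent picks a principal
        arm = algs[i].choose()                                # principal recommends
        reward = 1.0 if random.random() < mu[arm] else 0.0    # Bernoulli reward
        algs[i].update(arm, reward)
        totals[i] += reward
        counts[i] += 1
        market_share[i] += 1
    return market_share

class GreedyLike:
    """Toy stand-in for a greedy bandit algorithm: plays the empirically best arm."""
    def __init__(self, k):
        self.sums = [0.0] * k
        self.pulls = [1e-9] * k   # tiny pseudo-count to avoid division by zero
    def choose(self):
        means = [s / n for s, n in zip(self.sums, self.pulls)]
        return max(range(len(means)), key=lambda a: means[a])
    def update(self, arm, reward):
        self.sums[arm] += reward
        self.pulls[arm] += 1

# Example usage (hard_max_and_random is the sketch defined earlier):
# shares = run_game(GreedyLike(2), GreedyLike(2), mu=[0.6, 0.4], T=2000,
#                   f_resp=lambda d: hard_max_and_random(d, eps=0.1))
```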
Chernoff Bounds

We use an elementary concentration inequality known as Chernoff Bounds, in a formulation from Mitzenmacher and Upfal (2005).

Full rationality (HardMax)

In this section, we consider the version in which the agents are fully rational, in the sense that their response function is HardMax. We show that principals are not incentivized to explore, i.e., to deviate from DynamicGreedy. The core technical result is that if one principal adopts DynamicGreedy, then the other principal loses all agents as soon as he deviates. To make this more precise, let us say that two MAB algorithms deviate at (local) step n if there is an action a ∈ A and a realization h of step-n history such that h is feasible for both algorithms, and under this history the two algorithms choose action a with different probability.

Theorem 4.1. Assume HardMax response function with fair tie-breaking. Assume that alg_1 is DynamicGreedy, and alg_2 deviates from DynamicGreedy starting from some (local) step n_0 < T. Then all agents in global rounds t ≥ n_0 select principal 1.

Corollary 4.2. The competition game between principals has a unique Nash equilibrium: both principals choose DynamicGreedy.

Remark 4.3. This corollary holds under a more general model which allows time-discounting: namely, the utility of each principal i in each global round t is U_{i,t}(r_t) if this principal is chosen, and 0 otherwise, where U_{i,t}(·) is an arbitrary non-decreasing function with U_{i,t}(0) > 0.

Proof of Theorem 4.1

The proof starts with two auxiliary lemmas: that deviating from DynamicGreedy implies a strictly smaller Bayesian-expected reward (Lemma 4.4), and that HardMax implies a "sudden-death" property: if one agent chooses principal 1 with certainty, then so do all subsequent agents (Lemma 4.5). We re-use these lemmas in Section 4.2.

Proof (of Lemma 4.4). Since the two algorithms coincide on the first n_0 − 1 steps, it follows by symmetry that histories H_{1,n_0} and H_{2,n_0} have the same distribution. We use a coupling argument: w.l.o.g., we assume the two histories coincide, H_{1,n_0} = H_{2,n_0} = H. At local step n_0, DynamicGreedy chooses an action a_{1,n_0} which maximizes the posterior mean reward given history H: for any realization h ∈ support(H) and any action a ∈ A,

PMR(a_{1,n_0} | H = h) ≥ PMR(a | H = h).    (6)

Since the two algorithms deviate at step n_0, there is a realization h ∈ support(H) and an action a ∈ A such that Pr[a = a_{2,n_0} ≠ a_{1,n_0} | H = h] > 0. Inequality (6) is strict for this (h, a) pair by assumption (4). Integrating (6) over a ∼ (a_{2,n_0} | H = h) and h ∼ H, we obtain rew_1(n_0) > rew_2(n_0). Here (a_{2,n_0} | H = h) denotes the conditional distribution of a_{2,n_0} given H = h.

Proof (of Lemma 4.5). Formally, let us use induction on t, with the base case t = t_0. Let N = N_{1,t_0} be the agents' posterior distribution for n_1(t_0), the number of global rounds before t_0 in which principal 1 is chosen. By induction, all agents from t_0 to t − 1 chose principal 1. Therefore, PMR_1(t) ≥ PMR_1(t_0) > PMR_2(t_0) = PMR_2(t), where the first inequality holds because alg_1 is monotone, and the second is the base case.

Proof of Theorem 4.1. Since the two algorithms coincide on the first n_0 − 1 steps, it follows by symmetry that rew_1(n) = rew_2(n) for any n < n_0. By Lemma 4.4, rew_1(n_0) > rew_2(n_0). Recall that n_i(t) is the number of global rounds s < t in which principal i is chosen, and N_{i,t} is the agents' posterior distribution for this quantity. By symmetry, each agent t < n_0 chooses a principal uniformly at random. It follows that N_{1,n_0} = N_{2,n_0} (denote both distributions by N for brevity), and N(n_0 − 1) > 0.
Therefore,

PMR_1(n_0) = E_{n∼N}[ rew_1(n + 1) ] > E_{n∼N}[ rew_2(n + 1) ] = PMR_2(n_0).    (7)

So, agent n_0 chooses principal 1. By Lemma 4.5, all subsequent agents choose principal 1, too.

HardMax with biased tie-breaking

The HardMax model is very sensitive to the tie-breaking rule. For starters, if ties are broken deterministically in favor of principal 1, then principal 1 can get all agents no matter what the other principal does, simply by using StaticGreedy.

Theorem 4.6. Assume HardMax response function with f_resp(0) = 1 (ties are always broken in favor of principal 1). If alg_1 is StaticGreedy, then all agents choose principal 1.

Proof. Agent 1 chooses principal 1 because of the tie-breaking rule, and the subsequent agents choose principal 1 by an induction argument similar to the one in the proof of Lemma 4.5.

A more challenging scenario is when the tie-breaking is biased in favor of principal 1, but not deterministically so: f_resp(0) > 1/2. Then this principal also has a "winning strategy" no matter what the other principal does. Specifically, principal 1 can get all but the first few agents, under a mild technical assumption that DynamicGreedy deviates from StaticGreedy. Principal 1 can use DynamicGreedy, or any other monotone MAB algorithm that coincides with DynamicGreedy in the first few steps.

Theorem 4.7. Assume HardMax response function with f_resp(0) > 1/2 (i.e., tie-breaking is biased in favor of principal 1). Assume the prior P is such that DynamicGreedy deviates from StaticGreedy starting from some step n_0. Suppose that principal 1 runs a monotone MAB algorithm that coincides with DynamicGreedy in the first n_0 steps. Then all agents t ≥ n_0 choose principal 1.

Proof. The proof re-uses Lemmas 4.4 and 4.5, which do not rely on fair tie-breaking. Because of the biased tie-breaking, for each global round t we have: if PMR_1(t) ≥ PMR_2(t), then Pr[i_t = 1] ≥ f_resp(0) > 1/2.    (8)

Recall that i_t is the principal chosen in global round t. Let m_0 be the first round when alg_2 deviates from DynamicGreedy, or DynamicGreedy deviates from StaticGreedy, whichever comes sooner. Note that rew_1(n) = rew_2(n) for each step n < m_0, by definition of m_0, and rew_1(m_0) ≥ rew_2(m_0) by Lemma 4.4. To summarize: rew_1(n) ≥ rew_2(n) for all steps n ≤ m_0.

We claim that Pr[i_t = 1] > 1/2 for all global rounds t ≤ m_0. We prove this claim using induction on t. The base case t = 1 holds by (8) and the fact that in step 1, DynamicGreedy chooses the arm with the highest prior mean reward. For the induction step, we assume that Pr[i_t = 1] > 1/2 for all global rounds t < t_0, for some t_0 ≤ m_0. It follows that distribution N_{1,t_0} stochastically dominates distribution N_{2,t_0}. Observe that

PMR_1(t_0) = E_{n∼N_{1,t_0}}[ rew_1(n + 1) ] ≥ E_{n∼N_{2,t_0}}[ rew_2(n + 1) ] = PMR_2(t_0).

So the induction step follows by (8).

Now let us focus on global round m_0, and denote N_i = N_{i,m_0}. By the above claim, N_1 stochastically dominates N_2, and moreover N_1(m_0 − 1) > N_2(m_0 − 1).

Relaxed rationality: HardMax & Random

This section is dedicated to the HardMax&Random response model, where each principal is always chosen with some positive baseline probability. The main technical result for this model states that a principal with asymptotically better BIR wins by a large margin: after a "learning phase" of constant duration, all agents choose this principal with the maximal possible probability f_resp(1). For example, a principal with BIR(n) ≤ Õ(n^{−1/2}) wins over a principal with BIR(n) ≥ Ω(n^{−1/3}). However, this positive result comes with a significant caveat detailed in Section 5.1.
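As a quick numerical sanity check of the example just given, the snippet below compares the two BIR rates; the constant factors and the baseline probability eps0 are arbitrary illustrative choices, not quantities fixed by the paper.

```python
# Illustrative comparison of BIR rates n^{-1/2} vs n^{-1/3}: even if the "smart"
# algorithm only receives an eps0 fraction of agents, its BIR eventually drops
# well below the other's. Constants below are arbitrary assumptions.
eps0 = 0.05                                  # assumed baseline probability of being chosen

def bir_smart(n): return 1.0 * n ** -0.5     # ~ n^{-1/2}
def bir_naive(n): return 1.0 * n ** (-1/3)   # ~ n^{-1/3}

for n in (10**3, 10**6, 10**9):
    ratio = bir_smart(eps0 * n) / bir_naive(n)
    print(f"n = {n:>10}: BIR_1(eps0*n)/BIR_2(n) = {ratio:.3f}")
# The ratio tends to 0 as n grows, so a condition of the form
# BIR_1(eps0*n)/BIR_2(n) < 1/2 eventually holds; this is the flavor of
# "BIR-dominance" used in the next section.
```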
We formulate and prove a cleaner version of the result, followed by a more general formulation developed in a subsequent Remark 5.2. We need to express the property that alg_1 eventually catches up with and surpasses alg_2, even if initially it receives only a fraction of the traffic. For the cleaner version, we assume that both algorithms are well-defined for an infinite time horizon, so that their BIR does not depend on the time horizon T of the game. Then this property can be formalized as condition (12). In fact, a weaker version of (12) suffices: denoting ε_0 = (1/2) f_resp(−1), for some constant n_0 we have

(∀n ≥ n_0)   BIR_1(ε_0 n) / BIR_2(n) < 1/2.    (13)

Theorem 5.1. Assume HardMax&Random response function. Suppose both algorithms are well-defined for an infinite time horizon, and satisfy (13) and (14). Then each agent t ≥ n_0 chooses principal 1 with the maximal possible probability f_resp(1).

Remark 5.2. Many standard MAB algorithms in the literature are parameterized by the time horizon T. Regret bounds for such algorithms usually include a polylogarithmic dependence on T; we write BIR(n | T) to emphasize this dependence. We generalize (13) to handle the dependence on T: for some n_0 = n_0(T) ∈ polylog(T),

(∀n ≥ n_0)   BIR_1(ε_0 n | T) / BIR_2(n | T) < 1/2.    (16)

If this holds, we say that alg_1 BIR-dominates alg_2. We prove a version of Theorem 5.1 in which the algorithms are parameterized with time horizon T and condition (13) is replaced with (16); its proof is very similar and is omitted.

To state a game-theoretic corollary of Theorem 5.1, we consider a version of the competition game between the two principals in which they can only choose from a finite set A of monotone MAB algorithms. One of these algorithms is "better" than all others; we call it the special algorithm. Unless specified otherwise, it BIR-dominates all other allowed algorithms. The other algorithms satisfy (14). We call this game the restricted competition game.

Corollary 5.3. Assume HardMax&Random response function. Consider the restricted competition game with special algorithm alg. Then, for any sufficiently large time horizon T, this game has a unique Nash equilibrium: both principals choose alg.

A little greedy goes a long way

Given any monotone MAB algorithm other than DynamicGreedy, we design a modified algorithm which learns at a slower rate, yet "wins the game" in the sense of Theorem 5.1. As a corollary, the competition game with unrestricted choice of algorithms typically does not have a Nash equilibrium. Given an algorithm alg_1 that deviates from DynamicGreedy starting from step n_0 and a "mixing" parameter p, we construct a modified algorithm as follows.

1. The modified algorithm coincides with alg_1 (and DynamicGreedy) for the first n_0 − 1 steps;
2. In each step n ≥ n_0, alg_1 is invoked with probability 1 − p, and with the remaining probability p one makes the "greedy choice": an action with the largest posterior mean reward given the current information collected by alg_1.

For a cleaner comparison between the two algorithms, the modified algorithm does not record rewards received in steps with the "greedy choice". The parameter p > 0 is the same for all steps.

Corollary 5.5. Suppose that both principals can choose any monotone MAB algorithm, and assume the symmetric HardMax&Random response function. Then for any time horizon T, the only possible pure Nash equilibrium is one in which both principals choose DynamicGreedy.
Moreover, no pure Nash equilibrium exists when some algorithm "dominates" DynamicGreedy in the sense of (16) and the time horizon T is sufficiently large.

Remark 5.6. The modified algorithm performs exploration at a slower rate. Let us argue how this may translate into a larger BIR compared to the original algorithm. Let BIR′_1(n) be the BIR of the "greedy choice" after n − 1 steps of alg_1. Then the BIR of the modified algorithm can be expressed in terms of BIR_1 and BIR′_1, as in (17). In particular, suppose BIR_1(n) ∼ n^{−γ} and BIR′_1(n) ≥ c · BIR_1(n), for some constants γ ∈ (0, 1) and c > 1 − γ. Then, using Jensen's inequality, for all n ≥ n_0 and small enough p > 0 the modified algorithm has a strictly larger BIR. (The last inequality in this derivation follows by plugging in BIR_1(n) ∼ n^{−γ} and using the fact that (1 − p)^γ < 1 − pγ.)

Proof of Theorem 5.4. Let rew′_1(n) denote the Bayesian-expected reward of the "greedy choice" after n − 1 steps of alg_1. Note that rew_1(·) and rew′_1(·) are non-decreasing: the former because alg_1 is monotone, and the latter because the "greedy choice" is optimized given an increasing set of observations. Therefore, the modified algorithm alg_2 is monotone by (17).

Let alg denote a copy of alg_1 that is running "inside" the modified algorithm alg_2. Let m_2(t) be the number of global rounds before t in which the agent chooses principal 2 and alg is invoked; in other words, it is the number of agents seen by alg before global round t. Let M_{2,t} be the agents' posterior distribution for m_2(t). We claim that in each global round t ≥ n_0, distribution M_{2,t} stochastically dominates distribution N_{1,t}, and PMR_1(t) < PMR_2(t). We use induction on t. The base case t = n_0 holds because M_{2,t} = N_{1,t} (because the two algorithms coincide on the first n_0 − 1 steps), and PMR_1(n_0) < PMR_2(n_0) is proved as in (7), using the fact that rew_1(n_0) < rew_2(n_0). The induction step is proved as follows. The induction hypothesis for global round t − 1 implies that agent t − 1 is seen by alg with probability (1 − ε_0)(1 − p), which is strictly larger than ε_0, the probability with which this agent is seen by alg_2. Therefore, M_{2,t} stochastically dominates N_{1,t}.

SoftMax response function

This section is devoted to the SoftMax model. We recover a positive result under the assumptions from Theorem 5.1 (albeit with a weaker conclusion), and then proceed to a much more challenging result under weaker assumptions. We start with a formal definition:

Definition 6.1. A response function f_resp is SoftMax if the following conditions hold:
• f_resp(·) is bounded away from 0 and 1: f_resp(·) ∈ [ε, 1 − ε] for some ε ∈ (0, 1/2),
• f_resp(·) is "smooth" around 0, in the sense of condition (20),
• fair tie-breaking: f_resp(0) = 1/2.

Our first result is a version of Theorem 5.1, with the same assumptions about the algorithms and essentially the same proof. The conclusion is much weaker: we can only guarantee that each agent t ≥ n_0 chooses principal 1 with probability slightly larger than 1/2. This is essentially unavoidable in a typical case when both algorithms satisfy BIR(n) → 0, by Definition 6.1.

Theorem 6.2. Assume SoftMax response function. Suppose alg_1 has better BIR in the sense of (16), and alg_2 satisfies the technical condition (14). Then each agent t ≥ n_0 chooses principal 1 with probability at least 1/2 plus the margin given in (21), which is on the order of BIR_2(t).

Proof Sketch. We follow the steps in the proof of Theorem 5.1 to derive a lower bound on the difference in posterior mean rewards, PMR_1(t) − PMR_2(t). This is at least BIR_2(t)/4 by (14). Then (21) follows by the smoothness condition (20).
We recover a version of Corollary 5.3 if the principal's utility is the number of users (rather than the more general model in (5)). We also need a mild technical assumption that the cumulative Bayesian regret (BReg) tends to infinity. BReg is a standard notion from the literature (along with BIR): BReg(n) = Σ_{n′=1}^{n} BIR(n′).

Corollary 6.3. Assume that the response function is SoftMax, and the principal's utility is the number of users. Consider the restricted competition game with special algorithm alg, and assume that all other allowed algorithms satisfy BReg(n) → ∞. Then, for any sufficiently large time horizon T, this game has a unique Nash equilibrium: both principals choose alg.

Further, we prove a much more challenging result in which the "BIR-dominance" (16) is replaced with a much weaker condition: for some n_0(T) ∈ polylog(T) and constants β_0, α_0 ∈ (0, 1/2),

(∀n ≥ n_0(T))   BIR_1((1 − β_0) n | T) < (1 − α_0) · BIR_2(n | T).

If this holds, we say that alg_1 weakly BIR-dominates alg_2. Note that while the BIR-dominance condition (16) involves sufficiently small multiplicative factors (resp., ε_0 and 1/2), the new condition replaces them with factors that can be arbitrarily close to 1. We need a slightly stronger version of the technical assumption (14), stated as (24): for any ε > 0, there exists n(ε) such that […].

Theorem 6.4. Assume SoftMax response function. Suppose alg_1 weakly BIR-dominates alg_2, and the latter satisfies (24). Then there exists some T′ such that each agent t ≥ T′ chooses principal 1 with probability strictly larger than 1/2, with a margin on the order of BIR_2(t).

The main idea behind our proof is that even though alg_1 may have a slower rate of learning in the beginning, it will gradually catch up with and surpass alg_2. We describe this process in two phases. In the first phase, alg_1 receives a random agent with probability at least f_resp(−1) > 0 in each round. Although this may be a slow rate, the difference in BIR between the two algorithms gradually diminishes. After a sufficiently long time, alg_1 attracts each agent with probability at least 1/2 − O(β_0). Then the game enters the second phase: both algorithms receive agents at a rate close to 1/2, and the fractions of agents received by both algorithms, n_1(t)/t and n_2(t)/t, also converge to 1/2. At the end of the second phase, and in each global round afterwards, the agent counts n_1(t) and n_2(t) fit the weak-BIR-dominance condition, in the sense that both are larger than n_0(T) and n_1(t) ≥ (1 − β_0) n_2(t). So from then on alg_1 actually provides better rewards, which eventually gets reflected in the PMRs. Accordingly, alg_1 then attracts agents at a rate slightly larger than 1/2. We prove that the "bump" over 1/2 is at least on the order of BIR_2(t).

It follows that for any t ≥ T_1 + T_2, the difference PMR_1(t) − PMR_2(t) = E[ BIR_2(m_2 + 1) − BIR_1(m_1 + 1) ] is bounded below by a term on the order of BIR_2(t), where the last inequality holds as long as q_2 ≤ α_0 BIR_2(t)/4, and is implied by the condition in (24) as long as T_2 is sufficiently large. Hence, by the definition of our SoftMax response function and the smoothness assumption (20), the claimed lower bound on the probability of choosing principal 1 follows.

Corollary 6.5. Assume that the response function is SoftMax, and the principal's utility is the number of users. Consider the restricted competition game in which the special algorithm alg weakly BIR-dominates the other allowed algorithms, and the latter satisfy BReg(n) → ∞. Then, for any sufficiently large time horizon T, there is a unique Nash equilibrium: both principals choose alg.
Economic implications We frame our contributions in terms of the relationship between competition and innovation, i.e., between the extent to which the game between the two principals is competitive, and the degree of innovation that these models incentivize. Competition is controlled via the response function f resp , and innovation refers to the quality of the technology (MAB algorithms) adopted Competition/Rationality Innovation/alg in equilibrium Uniform SoftMax HardMax&Random HardMax Figure 3: The stylized inverted-U relationship in the "main story" by the principals. The competition vs. innovation relationship is well-studied in the economics literature, and is commonly known to often follow an inverted-U shape, as in Figure 1 (see Section 2 for citations). Competition in our models is closely correlated with rationality: the extent to which agents make rational decisions, and indeed rationality is what f resp controls directly. Main story. Our main story concerns the restricted competition game between the two principals where one allowed algorithm alg is "better" than the others. We measure innovation in terms of whether and when alg is chosen in an equilibrium. We vary competition/rationality by changing the response function from HardMax (full rationality, very competitive environment) to HardMax&Random to SoftMax (less rationality and competition). We find a competition/rationality vs. innovation relationship which goes as follows: HardMax: no innovation: DynamicGreedy is chosen over alg. HardMax&Random: some innovation: alg is chosen as long as it BIR-dominates. SoftMax: more innovation: alg is chosen as long as it weakly- This follows, resp., from Corollaries 4.2, 5.3 and 6.3. We can complete these three bullets to an inverted-U relationship if we include the uniform choice between the principals, which corresponds to the least amount of rationality. When principals' utility is the number of agents, uniform choice provides no incentives to innovate. 7 See Figure 3 for a stylized depiction of the inverted-U relationship. Secondary story. Let us zoom in on the symmetric HardMax&Random model. Competition/rationality within this model is controlled by the baseline probability ǫ 0 = f resp (±1), which goes smoothly between the two extremes of HardMax and the uniform choice (resp., ǫ 0 = 0 and ǫ 0 = 1 2 ). For clarity, we assume that principal's utility is the number of agents. We consider the marginal utility of switching to a better algorithm. Suppose initially both principals use some algorithm alg, and principal 1 ponders switching to another algorithm alg' which BIR-dominates alg. We are interested in the corresponding increase in utility; we refer to this increase as incentive-to-innovate (i2i), and we use it to quantify innovation. • ǫ 0 near 0: only a small i2i can be guaranteed, as it may take a long time for alg ′ to "catch up" with alg, and hence less time to reap the benefits. • ǫ 0 near 1 2 : small i2i, as principal 1 gets most agents for free no matter what. The familiar inverted-U shape is depicted in Figure 4. A Background on multi-armed bandits This appendix provides some pertinent background on multi-armed bandits (MAB). We discuss BIR and monotonicity of several MAB algorithms, touching upon: DynamicGreedy and StaticGreedy (Section A.1), "naive" MAB algorithms that separate exploration and exploitation (Section A.2), and "smart" MAB algorithms that combine exploration and exploitation (Section A.3). As we do throughout the paper, we focus on MAB with i.i.d. 
rewards and a Bayesian prior; we call it Bayesian MAB for brevity.

A.1 DynamicGreedy and StaticGreedy

We provide an example in which DynamicGreedy and StaticGreedy have constant BIR, and prove monotonicity of DynamicGreedy. For the example, it suffices to consider deterministic rewards (for each action a, the realized reward is always equal to the mean µ_a) and independent priors (according to the prior P_mean, the random variables µ_1, . . . , µ_K are mutually independent), each of full support. The following claim is immediate from the definition of the CDF.

Claim A.1. Assume independent priors. Let F_i be the CDF of the mean reward µ_i of action a_i ∈ A. Then, for any numbers z_2 > z_1 > E[µ_2] we have Pr[µ_1 ≤ z_1 and µ_2 ≥ z_2] = F_1(z_1)(1 − F_2(z_2)).

We can now draw an immediate corollary of the above claim. Next, we show that DynamicGreedy is monotone.

Lemma A.4. DynamicGreedy is monotone, in the sense that rew(n) is non-decreasing. Further, rew(n) is strictly increasing for every time step n with Pr[a_n ≠ a_{n+1}] > 0.

Proof. We prove by induction on n that rew(n) ≤ rew(n + 1) for DynamicGreedy. Let a_n be the (random) action recommended at time n, so that E[µ_{a_n} | I_n] = rew(n). By the tower rule, rew(n) = E[ E[µ_{a_n} | I_n, r_n] ] = E[ E[µ_{a_n} | I_{n+1}] ], since I_{n+1} = (I_n, r_n). At time n + 1, DynamicGreedy selects an action a_{n+1} such that E[µ_{a_{n+1}} | I_{n+1}] ≥ E[µ_{a_n} | I_{n+1}], which proves the monotonicity. In cases where Pr[a_n ≠ a_{n+1}] > 0 we have a strict inequality, since with some probability we select an action that is strictly better than a_n given the realization of r_n.

A.2 "Naive" MAB algorithms that separate exploration and exploitation

MAB algorithm ExplorExploit(m) initially explores each action with m agents and, for the remaining T − |A|m agents, recommends the action with the highest observed average. In the explore phase it assigns a random permutation of the mK recommendations.

Lemma A.5. For m = T^{2/3}, ExplorExploit(m) is monotone and, with probability at least 1 − δ, every agent n > mK has BIR(n) = O(T^{−1/3}).

Proof. In the explore phase we approximate, for each action a ∈ A, the value of µ_a by µ̂_a. Using the standard Chernoff bounds, with probability 1 − δ we have |µ_a − µ̂_a| ≤ T^{−1/3} for every action a ∈ A. Let a* = arg max_a µ_a and let a_ee be the action that ExplorExploit selects after the explore phase, i.e., after the first |A| T^{2/3} agents. Since µ̂_{a*} ≤ µ̂_{a_ee}, this implies that µ_{a*} − µ_{a_ee} = O(T^{−1/3}). To show that ExplorExploit(m) is monotone, we only need to show that rew(mK) ≤ rew(mK + 1). This is because for any t < mK we have rew(t) = rew(t + 1), since the recommended action is uniformly distributed at each such time t, and for any t ≥ mK + 1 we have rew(t) = rew(t + 1), since we keep recommending the same exploit action. The proof that rew(mK) ≤ rew(mK + 1) is the same as for DynamicGreedy in Lemma A.4.

We also consider a phased version, which we call PhasedExplorExploit(m_t), where time is partitioned into phases. In phase t we have m_t agents; a random subset of K of them explores the actions (each action explored by a single agent) and the other agents exploit. (This requires m_t ≥ K for all t. We also assume that m_t is monotone in t.)

Lemma A.6. Consider the case that K = 2 and the rewards of the actions are Bernoulli random variables with parameters µ_i and gap ∆ = µ_1 − µ_2. Algorithm PhasedExplorExploit(m_t) is monotone and, for m_t = √t, it has BIR(n) = O(n^{−1/3} + e^{−O(∆² n^{2/3})}).

Proof. We first show that the algorithm is monotone. Recall that µ_1 > µ_2. Let S_i = Σ_{j=1}^{t} r_{i,j} be the sum of the rewards of action i up to phase t. We need to show that Pr[S_1 > S_2] + (1/2) Pr[S_1 = S_2] is monotonically increasing in t.
Consider the random variable Z = S_1 − S_2. At each phase it increases by +1 with probability µ_1(1 − µ_2), decreases by −1 with probability (1 − µ_1)µ_2, and otherwise does not change. Consider the distribution of Z after phase t and how it changes during the next phase; only the probability mass shifted between positive and non-positive values matters. First, consider the probability that Z = 0. We can partition it into the events S_1 = S_2 = r, and let p(r, r) be the probability of such an event. For each such event, a mass p(r, r)µ_1(1 − µ_2) moves to Z = +1 and a mass p(r, r)(1 − µ_1)µ_2 moves to Z = −1. Since µ_1 > µ_2 we have p(r, r)µ_1(1 − µ_2) ≥ p(r, r)(1 − µ_1)µ_2 (note that p(r, r) might be zero, so we do not have a strict inequality). Second, consider the probability that Z = +1 or Z = −1. We can partition it into the events S_1 = r + 1, S_2 = r and S_1 = r, S_2 = r + 1, and let p(r + 1, r) and p(r, r + 1) be the probabilities of those events. It is not hard to see that p(r + 1, r)(1 − µ_1)µ_2 = p(r, r + 1)µ_1(1 − µ_2). This implies that the probability mass moved from Z = +1 to Z = 0 is identical to that moved from Z = −1 to Z = 0. We have shown that Pr[S_1 > S_2] + (1/2) Pr[S_1 = S_2] is non-decreasing in t, and therefore the expected value of the exploit action is non-decreasing. Since the phase sizes are increasing, the expected reward is non-decreasing between phases and constant within each phase, which gives monotonicity.

We now analyze the BIR. Note that agent n is in phase O(n^{2/3}) and the length of its phase is O(n^{1/3}). The BIR has two parts. The first is due to exploration and is at most O(n^{−1/3}). The second is due to the probability that we exploit the wrong action. This happens with probability Pr[S_1 < S_2] + (1/2) Pr[S_1 = S_2], which we can bound by e^{−O(∆² n^{2/3})} using a Chernoff bound, since we have explored each action O(n^{2/3}) times.

Remark A.7. Actually we have a tradeoff, depending on the parameter m_t, between the regret due to exploration and that due to exploitation. (Note that monotonicity is always guaranteed, assuming m_t is monotone.) If we set m_t = 2^t, then at time n an agent performs an explore action with probability about 2/n. For the exploit action we are in phase log n, so each action has been explored about log n times and the probability of exploiting a sub-optimal action is n^{−O(∆²)}. This should give BIR(n) = O(1/n + n^{−O(∆²)}).

A.3 "Smart" MAB algorithms that combine exploration and exploitation

MAB algorithm SuccessiveEliminationReset works as follows. It keeps a set of surviving actions A_s ⊆ A, where initially A_s = A. The agents are partitioned into phases, where each phase is a random permutation of the non-eliminated actions. Let µ̂_{i,t} be the average of the rewards of action i up to phase t and µ̂*_t = max_i µ̂_{i,t}. We eliminate action i at the end of phase t, i.e., delete it from A_s, if µ̂*_t − µ̂_{i,t} > log(T/δ)/√t. In SuccessiveEliminationReset we simply reset the algorithm with A = A_s − A_{e,t}, where A_{e,t} is the set of actions eliminated after phase t. Namely, we restart µ̂_{i,t} and ignore the rewards observed before the elimination.

Lemma A.8. With probability at least 1 − δ, SuccessiveEliminationReset has BIR(n) of at most 6 log(T/δ)/√(n/K) for every agent n.

Proof. Let the best action be a* = arg max_a µ_a. With probability 1 − δ, at any time n we have |µ̂_i − µ_i| ≤ log(T/δ)/√(n/K) for every action i ∈ A_s, and a* ∈ A_s. This implies that any action a with µ_{a*} − µ_a > 3 log(T/δ)/√(n/K) is eliminated. Therefore, any action in A_s has BIR(n) of at most 6 log(T/δ)/√(n/K).

Lemma A.9. Assume that if µ_i ≥ µ_j then the rewards r_i stochastically dominate the rewards r_j. Then SuccessiveEliminationReset is monotone.

Proof. Consider the first time T an action is eliminated, and let T = τ be a realized value of T.
Then, clearly, for n < τ we have rew(n) = rew(1). Consider two actions a_1, a_2 ∈ A such that µ_{a_1} ≥ µ_{a_2}. At time T = τ, the probability that a_1 is eliminated is no larger than the probability that a_2 is eliminated. This follows since µ̂_{a_1} stochastically dominates µ̂_{a_2}, which implies that for any threshold θ we have Pr[µ̂_{a_1} ≥ θ] ≥ Pr[µ̂_{a_2} ≥ θ]. After the elimination we consider the expected reward of the eliminated action, Σ_{i∈A} µ_i q_i, where q_i is the probability that action i was eliminated at time T = τ. Indexing the actions so that µ_i is non-increasing in i, the elimination probabilities above give q_i ≤ q_{i+1}. The sum Σ_{i∈A} µ_i q_i, subject to q_i ≤ q_{i+1} and Σ_i q_i = 1, is maximized by setting q_i = 1/|A|. (If some q_i ≠ 1/|A|, then there is a pair of consecutive indices with q_i < q_{i+1}, and replacing both values by their average (q_i + q_{i+1})/2 does not decrease the sum, and increases it whenever µ_i > µ_{i+1}.) In other words, the eliminated action is, in expectation, no better than a uniformly random one, and therefore rew(τ) ≥ rew(τ − 1). Now we can continue by induction: we show the same property for any remaining set of at most k − 1 actions. The key point is that SuccessiveEliminationReset restarts from scratch after each elimination, which is exactly what allows the induction.
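For concreteness, a minimal sketch of the SuccessiveEliminationReset scheme described in Section A.3 is given below, specialised to Bernoulli rewards. The elimination threshold log(T/δ)/√t follows the text; reading t as the number of phases since the last reset and restarting all statistics after any elimination are assumptions adopted here, not necessarily the authors' exact implementation.

```python
import math
import random

def successive_elimination_reset(mu, T, delta=0.05, seed=0):
    """Sketch of SuccessiveEliminationReset for Bernoulli arms with means `mu`.

    Each phase plays the surviving arms once each, in random order.  At the end
    of a phase, an arm whose empirical mean falls more than log(T/delta)/sqrt(t)
    below the best empirical mean is eliminated (t = phases since last reset);
    after an elimination all statistics are reset.  Returns the played arms.
    """
    rng = random.Random(seed)
    surviving = list(range(len(mu)))
    counts = {a: 0 for a in surviving}
    sums = {a: 0.0 for a in surviving}
    phase = 0
    played = []

    while len(played) < T:
        phase += 1
        for a in rng.sample(surviving, len(surviving)):      # one phase
            counts[a] += 1
            sums[a] += 1.0 if rng.random() < mu[a] else 0.0
            played.append(a)
            if len(played) >= T:
                return played
        means = {a: sums[a] / counts[a] for a in surviving}
        radius = math.log(T / delta) / math.sqrt(phase)
        keep = [a for a in surviving if max(means.values()) - means[a] <= radius]
        if len(keep) < len(surviving):                        # elimination: reset statistics
            surviving = keep
            counts = {a: 0 for a in surviving}
            sums = {a: 0.0 for a in surviving}
            phase = 0
    return played

pulls = successive_elimination_reset([0.9, 0.5, 0.2], T=20_000)
print("fraction of pulls on the best arm:", pulls.count(0) / len(pulls))
```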
2017-02-27T21:13:57.000Z
2017-02-01T00:00:00.000
{ "year": 2017, "sha1": "eeda582fa9864108aa889c20537fb2ea6cd8c0cb", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f9514440866ab62871e810e5618d7732cec40685", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
203593583
pes2o/s2orc
v3-fos-license
Dynamics of spiral waves in the complex Ginzburg-Landau equation in bounded domains Multiple spiral wave solutions of the general cubic complex Ginzburg-Landau equation in bounded domains are considered. We investigate the effect of the boundaries on spiral motion under both homogeneous Neumann boundary conditions, for small values of the twist parameter $q$. We derive explicit laws of motion for rectangular domains and we show that the motion of spirals becomes exponentially slow when the twist parameter exceeds a critical value depending on the size of the domain. The rotational frequency of mutlispiral patterns is also analytically obtained. Introduction The complex Ginzburg-Landau equation has a long history in physics as a generic amplitude equation in the vicinity of a Hopf bifurcation in spatially extended systems (see for instance §2 in [12]). It still remains in the forefront of nonlinear science since it is the generic equation for active media displaying plane and rotating wave patterns [4,11]. The simplest examples of such media are chemical oscillations such as the famous Belousov-Zhabotinsky reaction [20]. More complex examples include thermal convection of binary fluids [19], transverse patterns of high intensity light [13]; more recently, it has also been used to model the interaction of several species in some ecological systems [14]. The general cubic complex Ginzburg-Landau equation is given by where a and b are real parameters and Ψ is a complex field representing the amplitude and phase of the modulations of the oscillatory pattern. The class of solutions that we study in this paper are rotational solutions of (1) with a given frequency ω. In particular, we focus on complex solutions of (1) whose isophase lines have the shape of spiral waves. Factoring out the rotation and introducing the scalings Ψ = e −iωt 1 + ωb 1 + ab ψ, t = t ′ 1 + ωb , (x, y) = 1 + b 2 1 + bω (x ′ , y ′ ) in (1) gives where q = (a − b)/(1 + ab) and k is such that The parameters q and k are usually referred to as the twist parameter and asymptotic wavenumber respectively. When q = 0, spiral wave solutions of (2) have isophase lines that are straight lines emanating from some origin (see [10,17] for more details), while if q = 0, the isophase lines bend to form spirals. Such complex patterns may be understood in terms of the position of the centres of the spirals, which are often known as defects. Thus if the motion of the defects can be determined, much of the dynamics of the solution can be understood. Of particular interest are spiral wave solutions of (1) in R 2 . In particular, patterns with a single spiral are topologically stable solutions characterised by the fact that Ψ has a single zero around which the phase of Ψ varies by an integer multiple of 2π that we shall denote as n, which is the winding number of the spiral. Depending on the sign of n the spiral wave unwinds or winds. The time dependence of this type of solution appears as a global rotation, so these solutions are written as Ψ(x, t) = e −iωt ψ(x). Furthermore, ψ(x) has solutions of the form ψ(x) = f (r)e inφ+iχ(r) , with r and φ the polar radius and azimuthal variables respectively, in which f and χ satisfy a system of ordinary differential equations (see [10] for the derivation and asymptotic properties of these solutions and [15] for a result on existence and uniqueness of solution). The complex Ginzburg-Landau equation has also more general solutions characterised by a set of zeroes of Ψ from which spirals emanate. 
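As an aside, the rotating-wave ansatz just described is straightforward to realise numerically. The sketch below builds an approximate single-spiral field ψ = f(r) e^{inφ} on a grid; the closed-form profile f(r) = r/√(r² + 2) is only a commonly used approximation to the true amplitude (it does reproduce the large-r behaviour f ∼ 1 − 1/r²), and omitting the radial phase χ(r) is reasonable only as an initial condition in the small-twist limit. All numerical parameters are illustrative.

```python
import numpy as np

def single_spiral(nx=256, ny=256, L=100.0, n=1, x0=0.0, y0=0.0):
    """Approximate single-spiral field psi = f(r) * exp(i*n*phi) centred at (x0, y0).

    f(r) = r / sqrt(r**2 + 2) is an assumed closed-form stand-in for the true
    amplitude profile (it only matches the asymptote f ~ 1 - 1/r**2); the radial
    phase chi(r) is omitted, which is appropriate for small twist parameter q.
    """
    x = np.linspace(-L / 2, L / 2, nx)
    y = np.linspace(-L / 2, L / 2, ny)
    X, Y = np.meshgrid(x, y, indexing="ij")
    r = np.hypot(X - x0, Y - y0)
    phi = np.arctan2(Y - y0, X - x0)
    return (r / np.sqrt(r**2 + 2.0)) * np.exp(1j * n * phi)

# A crude two-spiral (n = +1 and n = -1) initial condition can be formed by multiplication.
psi0 = single_spiral(n=1, x0=-20.0) * single_spiral(n=-1, x0=20.0)
```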
Systems with finitely-many zeroes evolve in time in such a way that the spirals preserve their local structure. When the twist parameter vanishes (that is if a = b), multispiral solutions in R 2 move on a time-scale that is proportional to the logarithm of the inverse of the typical spiral separation [16]. As q increases (and so a = b) the interaction weakens and eventually becomes exponentially small in the separation. When q becomes of order one numerical simulations reveal that the dynamics becomes "frozen", with a set of virtually independent spirals separated by shock lines [6,7]. The singular role of the twist parameter, as pointed out in [18], is to interpolate between these two very dissimilar behaviours, namely a strong (algebraic) interaction for small values of q and an exponentially weak interaction as q approaches the critical value of q c = π/(2 log d), where d is the spiral separation, as is shown in [2,3]. For a finite set of spirals in the whole of R 2 , the asymptotic wavenumber k represents the wavenumber of the phase of ψ at infinity that is to say k = lim t→∞ arg(ψ)/r. Therefore, expression (3) represents a dispersion relation, which, when b = 0 reads ω = q(1 − k 2 ). An important property of spiral wave solutions is that the asymptotic wavenumber k is not a free parameter, but is uniquely determined by q and it is therefore another unknown of the problem. Moreover, k happens to be exponentially small in q which increases the complexity of the problem in the small q limit. In [2,3], the authors use non-trivial perturbation techniques to derive the asymptotic wavenumber and to obtain laws of motion for the centres of the spirals in the whole of R 2 . For ease of exposition we shall take b = 0 so that, dropping the primes henceforth, the equation that we shall be considering is In this paper we focus on multiple-spiral-wave solutions of (4) when the equation is restricted to a bounded domain of R 2 . We consider homogeneous Neumann (zero flux) boundary conditions; the extension to periodic boundary conditions is easy to make, and together these cover the vast majority of numerical computations and physical applications. We investigate the effect of a bounded domain on the spiral dynamics when the twist parameter is small. An important motivation of this work is to help elucidate the effect of the boundaries on some features of the dynamics of spirals in order to decide when such interactions are negligible in comparison with the interaction between spirals. We recall that the structural stability of spirals allows us describe the dynamics of spiral wave solutions of (4) in terms of the motion of the centres of the spirals, which are the points at which ψ vanishes. We thus extend our results in [2,3], where we obtained explicit laws of motion for spirals in free space, and we here derive laws of motion for spirals that are now confined to a bounded domain. The law of motion we find will be given in terms of the Green's function for a modified Helmholtz equation on Ω, which encodes how the shape of the domain affects the motion of defects. By way of illustration, we then focus on rectangular domains where we obtain explicit laws of motion for a finite set of spirals. As already mentioned, the limit when q → 0 is highly singular since spiral wave solutions pass from having an algebraic interaction to an exponentially small one. 
The simulation of the entire system of partial differential equations (1) in this regime is therefore tedious and one usually considers large rectangular domains to simulate spirals in free space. When doing so, one sometimes finds interesting patterns, such as bound states or changes in the direction of interaction of the spirals, which may or may not be due to the effect of the boundaries. Our equations for the motion of spirals in rectangular domains show how much richer their dynamics become in the presence of boundaries. Indeed, one of our main results is to show that what drives the change from an algebraic interaction to an exponentially small motion is in fact how close one gets to a critical relation between the size of the domain and the twist parameter. In particular, we find that the motion of spirals becomes exponentially small when the diameter of the domain approaches e π/2q , which gives an indication of the difficulty of approximating the solution on an infinite domain with that on a truncated domain. A second important goal of this paper is to describe the role of the boundaries as a selection mechanism for the rotational frequency ω, and hence for the asymptotic wavenumber k, which we also obtain. In this case we find that as the diameter of the domain approaches e π/2q , the rotation rate of the patterns also shifts from being algebraic to becoming exponentially small in q. The paper is organised as follows. Sections §2 and §3 are devoted to obtaining expressions for the laws of motion of the centres of the spirals in general bounded domains. We start in Section §2 by considering what we denote as the canonical or far-field scale, which corresponds to considering domains of diameter e π/2q . Then, in Section §3, we consider domains of diameter ≪ e π/2q , which provides a new set of equations for spiral motion in what we denote as the near field. In Section §4 we consider the particular case of rectangular domains and we obtain explicit laws of motion in both the far and near field. In particular we find that the interaction between the spirals changes from being exponentially small and mainly in the azimuthal direction when the parameters are in the far field regime to becoming algebraic and with a radial component in the near field. Furthermore, the asymptotic wavenumber and thus the rotational frequency of the patterns is exponentially small in the far-field scaling but proportional to the square root of q and the diameter of the domain in the near field. At the end of Section §4, to reconcile these two regimes, a composite law of motion that is valid in both near and far fields is proposed. This composite law of motion is used to compare the trajectories of the spirals with direct numerical simulations of the whole system of partial differential equations (4). Finally, in Section §5, we present our conclusions. Interaction of spirals in bounded domains at the canonical scale In this section we derive laws of motion for the centres of a finite set of spirals with unitary winding numbers confined in general bounded domains with homogeneous Neumann boundary conditions. The law of motion and the corresponding asymptotic wavenumber, k, are given explicitly in terms of the parameter q, which is assumed to be small. In what follows we assume that the centres of the spirals are separated from each other and from the boundaries by distances which are large in comparison with the core radius of the spirals. 
Since in (4) the core radius is O(1), this requires the domain to be large. We quantify this by introducing the inverse of the domain diameter as the small parameter ǫ, and we suppose that spirals are separated by distances of O(1/ǫ). We therefore consider the system with parameters 0 < q ≪ 1 and 0 < k ≪ 1. As in unbounded domains (see [2] and [3]), the relationship between ǫ, q and k plays a special role giving place to different types of interaction. In particular, we shall show it is the combination α = kq/ǫ that determines the nature of the interaction between spirals. In this section we shall assume that α is an order-one constant, and we shall show that this is equivalent to assuming that 1/ǫ is of order e π/(2q) . Outer solution We follow the same notation as [2] and [3], denoting by X = ǫx the outer space variable and T = µǫ 2 t the slow time scale on which the spirals to interact. At this stage µ is an unknown small parameter. We will later determine that µ = 1/ log(1/ǫ). Since in this section we are assuming that α = kq/ǫ = O(1), we write (4) in the outer region as in Ω along with homogeneous Neumann boundary conditions at the domain boundaries, where ∇ now represents the gradient with respect to X. We express the solution in amplitude-phase form as ψ = f e iχ , giving in Ω, where now the boundary conditions for f and χ are ∂f ∂n = ∂χ ∂n = 0 on ∂Ω. Using the Cole-Hopf transformation χ 00 = log h 0 , equation (9) is transformed into the linear problem In order to match to a spiral solution locally at near the origin h 0 should have the form h 0 ∼ −β log |X| as X → 0 for some constant β [3]. Thus, a solution with N spirals at positions X 1 , . . . , X N should satisfy (10) along with h 0 ∼ −β j log |X − X j | as X → X j , for j = 1, . . . , N. The solution to (10)-(11) is therefore β j G(X; X j ) = G (X; α(T ), β 1 (T ), . . . , β N (T ), X 1 (T ), X 2 (T ), . . . , X N (T )) , (12) say, where G(X; Y) is the Neumann Green's function for the modified Helmholtz equation in Ω, satisfying and we have been explicit about the dependence of G on the value of α, the weights β j , and the position of the sprials X j , all of which may depend on T . Inner solution We rescale close to the centre of a spiral X ℓ by writing X = X ℓ + ǫx to give where ∇ represents now the gradient with respect to the inner variable x. Since the inner region does not see the boundary of Ω (since we assume that the distance between the spiral centre and the boundary is much greater than the core radius), the inner equations are to be solved on an unbounded domain, and the same computations presented in [3] hold. Inner limit of the outer We define the regular part of the outer solution G near the ℓth spiral by setting Then, from (12), as X approaches X ℓ , we find Thus, written in terms of the inner variables, where r = R/ǫ = |X − X ℓ (T )|/ǫ. Outer limit of the inner Using (15) along with the fact that f 00 ∼ 1 − 1/r 2 as r → ∞, it is found that as r → ∞, where c 1 is a constant given by However, in order to match with the outer expansion we need the outer limit of the whole expansion in q. This can be found to be of the form where C i > 0 and D i > 0 are constant values independent of q. The necessity of taking all the terms in q when matching can be seen, since the expansion in q is valid only when q(log(r) + c 1 ) ≪ 1. When α = O(1), q turns out to be O(1/ log ǫ) and thus all the terms in (21)-(22) are the same order. 
We can sum all these terms in the outer limit of the inner expansion using the same method as in Section 3.3.1 in [3]. The idea is to rewrite the leading-order inner equations in terms of the outer variable R = ǫr to obtain We now expand in powers of ǫ as χ 0 ∼ χ 00 (r, φ; q) + ǫ 2 χ 01 (r, φ; q) + · · · and f 0 ∼ f 00 (r, φ; q) + ǫ 2 f 01 (r, φ; q) + · · · . The leading-order term in this expansion χ 00 (r, φ; q) is just the first term (in ǫ) in the outer expansion of the leading-order inner solution, including all the terms in q. Substituting these expansions into (23), (24) gives f 00 = 1, f 01 = − 1 2 |∇ χ 00 | 2 and 0 = ∇ 2 χ 00 + q|∇ χ 00 | 2 , that is a Riccati equation which can be linearised with the change of variable where A ℓ and B ℓ are constants that depend on q which may be different at each vortex, and the factors ǫ ±iqn ℓ are included to facilitate their determination by comparison with the solution in the inner variable. To determine A ℓ and B ℓ we need to writeχ 00 in terms of r, expand in powers of q, and compare with (20). Writing the constants in powers of q as A ℓ (q) ∼ A ℓ0 /q + A ℓ1 + qA ℓ2 + · · · and B ℓ (q) ∼ B ℓ0 /q + B ℓ1 + qB ℓ2 + · · · , and expressing H 0 in terms of r we find Comparing with (20) (and recalling that n ℓ = ±1) we see that The remaining equations determining A ℓ and B ℓ will be fixed when matching with the outer region. Outer limit of the first-order inner We do the same with the first-order inner solution. Motivated by the transformation we applied to χ 00 we write Writing Writing the velocity as and recalling that h 0 = e qn ℓ φ H 0 (R), the left hand side of (29) gives Therefore, writing yields a system of equations for g 1 and g 2 , whose solution gives where γ 1 and γ 2 are unknown constants that will be determined by matching to the inner limit of the outer solution. Leading order matching: determination of the asymptotic wavenumber Using (25) and (26), the leading-order outer limit of the inner expansion is found to be, while the leading-order inner limit of the outer, according to (19) reads Hence, in order to match, the order 1/q term inside the logarithm in the outer limit of the inner must vanish, so that where ν is an order one constant and |n ℓ | = 1. This expression provides a relation between the two small parameters q and ǫ, and it is needed in order for α to be an order one constant. It is equivalent to assuming that the typical size domain is 1/ǫ = O(e π/2q ). The outer limit of the inner now reads and matching with (31) provides the conditions A 0ℓ = β ℓ /2 and Eliminating A 1ℓ − B 1ℓ using (27) gives With ν given by (32), this provides a set of N equations for the N + 1 unknowns α and β ℓ , ℓ = 1, . . . , N . However, since G ℓ reg (X ℓ ) is a homogeneous, linear function of β 1 , . . . , β N (see (12)), the system (33) is a homogeneous linear system of N equations for β 1 , . . . , β N . There exists a solution if and only if the determinant of the system is zero, which provides an equation for α. This in turn determines the asymptotic wavenumber, k = αǫ/q, and therefore the rotational frequency ω. The coefficients β 1 , . . . , β N are then determined only up to some global scaling (which is equivalent to adding a constant to χ 00 ). First order matching: law of motion for the centres of the spirals We now compare one term of the outer expansion with two terms of the inner expansion (in the notation of Van Dyke [9]). This matching will eventually provide a law of motion for the spirals. 
The two-term inner expansion of the one-term outer solution for χ is given in (19). We must compare this with the one-term outer expansion of the two-term inner solution χ 0 + ǫχ 1 . From §2.4 the one-term (in ǫ) outer expansion of this is Comparing this with (19) gives the matching condition Note that this equation implies that µ = O(q), as we have been supposing. Solving for γ 1 and γ 2 , substituting into (30), writing χ 10 in terms of the inner variable and expanding in powers of q finally gives, to leading order in q, Solvability condition and law of motion Equation (35) provides a boundary condition on the first-order inner equation (16). However, there is a solvability condition on (16) subject to (35), which determines V 1 and V 2 , thereby providing our law of motion for the spiral centres. The analysis in this section summarises the corresponding analysis in [3]. Multiplying equation (16) by the conjugate v * of a solution v of the adjoint problem integrating over a ball B r * of radius r * , and using integration by parts gives, after some manipulation, where ℜ denotes the real part. A straightforward calculation shows that directional derivatives of ψ 0 are solutions of the adjoint problem if q is replaced by −q, i.e. v = d · ∇ψ 0 | q→−q , where d is any vector in R 2 . To leading order in q and µ the solvability condition (36) is Letting the ball radius r * tend to infinity gives lim r→∞ 2π 0 (e φ · d) ∂χ 10 ∂r + χ 10 r dφ = 0. Now using (35) gives the law of motion where ⊥ represents a positive rotation by π/2. Summary The parameter α and the coefficients β j are determined (up to a scaling) by the linear system (33), which is where is the regular part of the Neumann Green's function G for the modified Helmholtz equation (13), and ν = log(1/ǫ) − π/2q. The law of motion (38) may be written, to leading order in q, as As the size of the domain tends to infinity, and equation (40) agrees with that given in [2] for spirals in an infinite domain. Interaction of spirals in bounded domains in the near-field In the previous section we assumed the parameter α is order one as ǫ → 0, which led to q and ǫ being related by (32), which implies that the separation of spirals, and therefore the size of the domain, is exponentially large in q. We now consider smaller domains, in which α will be small. We will find that α = O(q 1/2 ) in this new scale in contrast to spirals in the near field in the whole of R 2 , where α is found to be exponentially small in q [2]. Outer region As before we rescale time as T = µǫ 2 t and use X = ǫx as the outer variable, to give Recall that 1/ǫ is the typical domain diameter in x, so that the diameter of the domain is O(1) in terms of X. Expressing the solution in amplitude-phase form as ψ = f e iχ yields in Ω, where, as before, the boundary conditions for f and χ are ∂f ∂n = ∂χ ∂n = 0 on ∂Ω. Expanding in asymptotic power series in ǫ as f ∼ f 0 + ǫ 2 f 1 + . . . and χ ∼ χ 0 + ǫ 2 χ 1 + . . ., the leading-and first-order terms in f give The equation for the leading-order phase function, χ 0 , is So far the analysis is exactly the same as before. However, we know that α cannot be O(1) this time, and so must be some lower order in q. The natural assumption is that α 2 = O(q), which we will verify a posteriori. We thus rescale α = q 1/2ᾱ . We note that α being of order q 1/2 is consistent with the value of α that is found in [1] for a single spiral in a finite disk with homogeneous Neumann boundary conditions. 
Expanding χ 0 in terms of q as χ 0 ∼ 1 q (χ 00 + qχ 01 + . . .) as in §2 1 gives, at leading and first order in q, µ ∂χ 00 ∂T = ∇ 2 χ 01 + 2∇χ 00 · ∇χ 01 −ᾱ 2 , in Ω, with homogeneous Neumann boundary conditions, whereμ = µ/q. Integrating (44) over Ω and using the divergence theorem and the boundary conditions gives Ω |∇χ 00 | 2 dS = 0, so that in fact χ 00 = C 1 (T ). Now (42)-(43) are invariant with respect to the transformation so that we may take C 1 ≡ 0 without loss of generality. In fact, if C ′ 1 (T ) = 0 it means we have not factored out all the bulk rotation when making the change of variables which leads to (2). However, we must be careful when matching with the inner region near each spiral, since changing C 1 is equivalent to scaling A ℓ in the inner region. With C 1 = 0 we will find that the inner expansions for A ℓ and B ℓ start at O(1) rather than O(1/q) as they did in §2.4. The first-order equation (45) becomes in Ω, ∂χ 01 ∂n = 0 on ∂Ω, where R j = |X − X j (T )| and φ j are polar coordinates centred on the jth spiral, and we have assumed that the singularities due to the spirals are locally of the same form as the corresponding singularities when Ω = R 2 [2]. We thus have a set of unknown slow-time-dependent parameters, C 2j (T ), one for each spiral, which are determined by matching at each spiral core. To determineᾱ we integrate equation (46) over the domain V δ = Ω\ N j=1 B δ (X j (T )), which is the domain that is left after removing disks of radius δ centred at each spiral. Applying the Divergence Theorem on this domain (on which solutions are regular), and then taking the limit δ → 0, gives is the area of the domain in terms of the outer variable X. Inner region The inner region is exactly the same as in §2.2. Inner limit of the outer The solution to (46) may be written as n j H(X; X j ) = G, say, where G n (X; Y) is the Neumann Green's function for Laplace's equation in Ω, satisfying and H satisfies where φ is the azimuthal angle centred at Y. If G d (X; Y) is the Dirichlet Green's function, satisfying then H is its harmonic conjugate, so that Defining the regular part of G n , H and G d as we find that as X → X ℓ (T ), Written in terms of the inner variable ǫx = X − X ℓ (T ) this is where r and φ are the polar representation of x. Outer limit of the inner We sum the q-expansion of the outer limit of the inner in exactly the same was as in §2.4 to give χ 00 = n ℓ φ + (1/q) log H 0 with To determine A ℓ and B ℓ we need to writeχ 00 in terms of r, expand in powers of q, and compare with (20). Crucially though, as mentioned in §3.1, and in contrast to §2.4, the expansions for A ℓ and B ℓ proceed now as A ℓ (q) ∼ A ℓ0 + qA ℓ1 + · · · and B ℓ (q) ∼ B ℓ0 + qB ℓ1 + · · · . Expressing H 0 in terms of r we find Comparing with (20) (and recalling that n ℓ = ±1) we see that The remaining equations determining A ℓ and B ℓ will be fixed when matching with the outer region. Using (53) we now find that (52) gives the outer limit of the leading-order inner expansion as Similarly, the leading-order outer limit of the first correction to the inner expansion χ 10 is, as before, The asymptotic wavenumber is related to α by k = αǫ/q and so, since α = q 1/2ᾱ , As q log(1/ǫ) → π/2 this expression matches smoothly into that given by (33); we demonstrate this in Section 4.3 when we develop a uniform composite approximation. 
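The near-field law of motion derived below is expressed through the regular parts of these Laplace Green's functions. As a self-contained illustration of what the "regular part" is, the sketch below evaluates it for the Dirichlet Green's function of the unit disk, where a single image gives a closed form; the normalisation ∇²G = δ, so that G ∼ (1/2π) log|X − Y| near the singularity, is assumed here and may differ from the convention in the text by a sign or constant factor.

```python
import numpy as np

def dirichlet_green_disk(x, y):
    """Dirichlet Green's function of the Laplacian on the unit disk (assumed
    convention: Laplacian G = delta, G = 0 on the boundary):
        G(x; y) = (1/2pi) [ log|x - y| - log( |y| * |x - y/|y|^2| ) ].
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    y_image = y / np.dot(y, y)                       # image point outside the disk
    direct = np.log(np.linalg.norm(x - y))
    image = np.log(np.linalg.norm(y) * np.linalg.norm(x - y_image))
    return (direct - image) / (2.0 * np.pi)

def dirichlet_green_reg(y):
    """Regular part at the singularity: G_reg(y; y) = -(1/2pi) log(1 - |y|^2)."""
    y = np.asarray(y, float)
    return -np.log(1.0 - np.dot(y, y)) / (2.0 * np.pi)

def grad_dirichlet_green_reg(y):
    """Gradient of the regular part, grad G_reg(y; y) = y / (pi (1 - |y|^2));
    it points radially outward and blows up as the point approaches the boundary."""
    y = np.asarray(y, float)
    return y / (np.pi * (1.0 - np.dot(y, y)))
```

Gradients of regular parts of exactly this type are the quantities entering the near-field law of motion; for a rectangle they are evaluated in Section 4.2 by the method of images.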
First order matching: law of motion for the spirals Matching (51) with (34) gives Solving for γ 1 and γ 2 and substituting into (34) using (30) gives, finally, The compatibility condition (37) then gives the law of motion as Using (49) and (57) we may write this as Thus we see the motion due to each spiral is a mix of the gradient of the Dirichlet Green's function and the perpendicular gradient of the Neumann Green's function. Since we are considering only the case that |n j | = 1 for all j we may simplify to n ℓ µ 2q tan(q log ǫ) dX ℓ dT = 2π tan(q log ǫ)∇ ⊥ G n,reg (X ℓ ; X ℓ ) − 2πn ℓ ∇G d,reg (X ℓ ; X ℓ ) As the size of the domain tends to infinity both the Neumann and Dirichlet Green's functions tend to 1 2π log |X − Y|. Comparison with direct numerical simulations In this section we compare our results with direct numerical simulations in a rectangular domain with sides of length a and b. As we have shown in the previous sections, we find two different laws of motion for spirals depending on the relative sizes of the domain and the parameter q. We first evaluate these two laws of motion for the case of a rectangle, before comparing the trajectories and velocity of spirals with those obtained by numerically solving the partial differential equation (5). Canonical scale For spirals in a rectangular domains in which a, b ∼ 1/ǫ ∼ e π/2q the motion takes place in the canonical scaling. Recalling that the outer variable is defined as X = ǫx, equation (13) for the Neumann Green's function G(X;X) for the modified Helmholtz equation is, in this case where X = (X, Y ) andX = (X,Ŷ ). Using the method of images, and noting that the free space Green's function is given by (41), the solution is The series are rapidly convergent since K 0 (z) decays exponentially for large z. We also defined the regular part of the Green's function by In order to compare with direct numerical simulation, we rewrite G in terms of the original variable x by setting , η), and we have written ǫα = qk. Then With a single spiral. In the particular case where there is only one spiral at position X 1 with unitary winding number n 1 , the law of motion simply reads and α is given by −2πG reg (X 1 ; Written in terms of the original variables x, t and k equation (65) becomes where ∇ now represents the gradient with respect to x. Equation (66) becomes −2πG ′ reg (x 1 ; x 1 ) + c 1 − π/2q = 0. Note that neither of these equations depends on the scaling parameters ǫ or µ, as expected. With two spirals Written in terms of the original coordinate x, with spirals at positions x 1 and x 2 , (39) gives The equation for k is thus Written in terms of the original variables x and t the law of motion (65) for two spirals is Remark 1 We note that if initially x 1 +x 2 = (a, b), so that the spirals are placed symmetrically with respect to the centre of the domain, then if n 1 = n 2 they keep this symmetry during the motion. In this case G ′ reg (x 1 ; x 1 ) = G ′ reg (x 2 ; x 2 ) so that β 2 /β 1 = 1. Near-field scale In the near field scaling the relevant Green's functions are the Neumann and Dirichlet Green's functions for Laplace's equation. We rewrite these in the original variables as G ′ n (x; ξ) = G n (ǫx; ǫξ), G ′ d (x; ξ) = G d (ǫx; ǫξ). As before, we evaluate the Green's functions by the method of images. However, we must be a little careful, because the sums over images for the Green's functions themselves do not converge. 
However, the corresponding sums over images for the derivatives of the Green's functions do converge, and these are what we need for the law of motion. Defining Note that the final sums above again converge exponentially quickly. In terms of x and t the law of motion is Recall also that k = 2πN q|Ω| tan(q log(1/ǫ)) where |Ω| is the area of Ω in the original variable x. With a single spiral Written out in component form, the law of motion for a single spiral at x 1 with winding number |n 1 | = 1 is With two spirals Written out in component form, the law of motion for spirals at positions x 1 and x 2 with winding numbers |n 1 | = |n 2 | = 1 is A composite expansion To compare with direct numerical simlulations we combine the expansions of Sections 4.1 and 4.2 into a single composite expansion valid in both regions. We first consider the asymptotic wavenumber. As α → 0 in (13) we find where G n (X; Y) is the Neumann Greens function for Laplace's equation given by (48). Thus (33) implies that the β ℓ are all equal to leading order and α is given by . We see that this matches smoothly into the near-field α we found in (59), since as q| log ǫ| → π/2. We may generate a uniform approximation to α by taking . The corresponding uniform approximation to k is given by . For the law of motion the simplest composite expansion is to use the near-field law of motion (64) but with the Neumann and Dirichlet Green's functions for the modified Helmholtz equation (13) in place of those for Laplace's equation. In order to validate our results, numerical simulations of (1) were carried out using finite difference schemes applied to the coupled reaction-diffusion equations for the real and imaginary parts of ψ. The choice of the second-order accurate uniform spatial discretization follows other studies of spiral wave dynamics [8,5]. Explicit timestepping using Euler's method with a small timestep, ∆t = ∆x 2 /20, was found to be stable and computationally efficient. Trajectories of the spiral cores were obtained by tracking the minima of |ψ|. Initial conditions were chosen to have zeros with a unit winding number at the desired initial location of the sprials. Following initial transients, the numerical solutions converged locally to stable spiral-wave structures, maintained for long times. Because of this transient we might expect a small change in the initial position when comparing spiral trajectories with the solution of the asymptotic law of motion. In order to plot the composite solution we need to make one final choice as to the value of ǫ, which is the inverse of the typical separation between spirals (and their images). In principal there should be no choice in this parameter (note that ǫ disappears from the approximation for k, for example, when it is rewritten in the original variables)-this is reflected in the law of motion by the fact that ǫ only appears as log(ǫ): multiplying ǫ by any factor does not change the law of motion asymptotically. However, for finite values of q the choice of ǫ is important. The simplest choice would simply be the inverse of the domain diameter, i.e. ǫ = 1/a. However, we find that a better match with the direct numerical simulations is achieved if ǫ is taken to be proportional to the inverse distance to the nearest image. For a single spiral we approximate this by setting where we take p to be 1,2 or 3 and λ is an O(1) constant chosen to give a good fit. For two sprials we take and the results are fairly insensitive to p so we take p = 1 and λ = 0.52 for all q. 
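The simulations just described might be implemented along the following lines: second-order central differences for the Laplacian, zero-flux boundaries imposed by reflection, explicit Euler stepping with Δt = Δx²/20, and spiral cores tracked as minima of |ψ|. The governing equation is written here in the standard cubic form ∂_tΨ = Ψ + (1 + ia)∇²Ψ − (1 + ib)|Ψ|²Ψ, which is an assumption consistent with the parameters a and b introduced at the start of the paper, since the displayed equations (1)-(5) do not survive in this copy. An initial condition with unit-winding zeros, such as the approximate spiral profile sketched earlier, can be passed in as psi0.

```python
import numpy as np

def laplacian_neumann(psi, dx):
    """Five-point Laplacian with homogeneous Neumann (zero-flux) boundaries,
    imposed by reflecting the field at the edges."""
    p = np.pad(psi, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * psi) / dx**2

def run_cgle(psi0, a, b, dx, n_steps, record_every=100):
    """Explicit Euler integration of the assumed form
        psi_t = psi + (1 + i a) Lap(psi) - (1 + i b) |psi|^2 psi,
    with dt = dx^2 / 20 as in the text.  The spiral core is located, crudely,
    as the grid point minimising |psi|; with several spirals a local search
    around each previous core position would be needed instead.
    """
    psi = psi0.astype(np.complex128).copy()
    dt = dx**2 / 20.0
    trajectory = []
    for step in range(n_steps):
        lap = laplacian_neumann(psi, dx)
        psi += dt * (psi + (1.0 + 1j * a) * lap - (1.0 + 1j * b) * np.abs(psi) ** 2 * psi)
        if step % record_every == 0:
            i, j = np.unravel_index(np.argmin(np.abs(psi)), psi.shape)
            trajectory.append((step * dt, i * dx, j * dx))
    return psi, trajectory
```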
In Figure 1 we compare the trajectories provided by a direct numerical simulation of (1) (dashed lines) and those given by the uniform asymptotic approximation described above (solid lines) for a single spiral in a square domain of side 200. Numerical trajectories starting from positions (10, 0), (20, 0), . . . , (70, 0) are shown. The starting points for the asymptotic trajectories are perturbed slightly to account for the initial transient in the numerical simulation. These were determined by solving backwards from a point on the numerical trajectory near the boundary of the domain. While the qualitative behaviour of the trajectories is the same whatever value for ǫ is chosen (since the law of motion depends only on log ǫ as we have noted), the excellent quantitative agreement shown relies on a careful choice of the parameters in (68). For small q we see that the spiral is attracted to the boundary whatever its inital position. However, as q is increased there is a Hopf bifurcation with the appearance of an unstable periodic orbit. Trajectories starting outside this periodic orbit are attracted to the boundary of the domain, but those starting inside it spiral in to the origin. As q is increased further the periodic orbit grows in size and becomes more square. This can be understood as the spiral interacting with its images predominantly in the near-field limit, in which the motion is perpendicular to the line of centres. With the motion dominated by the nearest image the spiral will move parallel to the nearest boundary until it nears the corner, when a second image takes over. In Figure 2 we compare the trajectories provided by a direct numerical simulation of (1) (dashed lines) and those given by the uniform asymptotic approximation (solid lines) for a pair of +1 spirals in the same square domain of side 200. We position the spirals symmetrically at positions (−x, 0) and (x, 0), where we choose x = 10, 20, . . . , 70. In this case we use the expression (69) for ǫ with p = 1 and λ = 0.52 for all q. As in Figure 1, the starting points of the asymptotic trajectories are perturbed to account for the initial transient in the numerical simulation, and were determined by solving backwards from a point on the numerical trajectory near the boundary of the domain. We see that the agreement is qualitatively very good, but is less quantitative than in the single spiral case. An examination of the numerical trajectories indicates that there must be a stronger initial transient in this case. For example, in Figure 2(a) the numerical trajectories from initial positions (50, 0) and (60, 0) practically overlap at late times, and in fact cross each other. Since the asymptotic law of motion gives velocity as a function of position, such behaviour is not possible when the evolution is quasistatic. In the trajectories of Figure 2 we see that the spirals attempt to circle around each other, as the near-field interaction would indicate, but gradually drift apart until the image spirals take over and force the pair to rotate in the opposite direction. Conclusions We have developed a law of motion for interacting spiral waves in a bounded domain in the limit that the twist parameter q is small. We find that the size of the domain is crucial in determining the form of this law of motion. When the domain is large (specifically when the diameter is O(e π/2q )) the motion is given in terms of the Neumann Green's function for the modified Helmholtz equation (40). 
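The composite law of motion used for these comparisons requires evaluating the rectangle Green's function of Section 4.1 by its image sum. Since the displayed image formulas do not survive above, the sketch below only illustrates the numerical point made in the text, namely that sums of K_0 terms over reflected images converge very rapidly; the free-space kernel −K_0(α|X − Y|)/2π and the all-positive-sign Neumann image pattern at (2ma ± x̂, 2nb ± ŷ) are stated here as assumptions consistent with standard constructions, not as formulas quoted from the paper.

```python
import numpy as np
from scipy.special import k0

def helmholtz_neumann_green(X, Xs, alpha, a, b, M=3):
    """Truncated image-sum evaluation of a Neumann Green's function for the
    modified Helmholtz operator on the rectangle [0, a] x [0, b].

    Assumed free-space kernel: G_free(R) = -K_0(alpha R) / (2 pi); images of the
    source (xs, ys) are placed at (2 m a +/- xs, 2 n b +/- ys), all with the same
    sign, and the sum is truncated at |m|, |n| <= M.
    """
    x, y = X
    xs, ys = Xs
    total = 0.0
    for m in range(-M, M + 1):
        for n in range(-M, M + 1):
            for sx in (+1, -1):
                for sy in (+1, -1):
                    R = np.hypot(x - (2 * m * a + sx * xs), y - (2 * n * b + sy * ys))
                    if R > 0.0:        # skip the singular term when X coincides with an image
                        total += -k0(alpha * R) / (2.0 * np.pi)
    return total

# The truncation error is controlled by the exponential decay of K_0: a few image
# shells already give many correct digits.
for M in (1, 2, 3, 5):
    print(M, helmholtz_neumann_green((60.0, 80.0), (100.0, 100.0), alpha=0.05, a=200.0, b=200.0, M=M))
```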
The asymptotic wavenumber, which is exponentially small in q, is determined as the solvability condition on a set of linear equations involving the positions of all the spirals (39). When the domain is not so large, the motion is given in terms of both the Neumann and Dirichlet Green's functions for Laplace's equation (64). The asymptotic wavenumber is now algebraically small in q, and depends only on the number of spirals and not their position (60). Although we have focussed on Neumann boundary conditions for the complex Ginzburg-Landau equation (4), the extension to periodic boundary conditions is straightforward.
2019-09-30T09:38:16.000Z
2019-09-30T00:00:00.000
{ "year": 2020, "sha1": "347ac7dbe8faa69aef9f84282fc720bc8a7638a6", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1909.13554", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8a6bcf5624a0a2f7a0363eb86809c0569b4ca309", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
18862204
pes2o/s2orc
v3-fos-license
Renal deposition of soluble immune complexes in mice bearing B-16 melanoma. Characterization of complexes and relationship to tumor progress. Histologic and immunofluorescence studies of the kidneys of mice bearing a progressive melanoma show a proliferative glomerulonephritis associated with immune complex deposition in the mesangium and along the glomerular basement membrane This immune complex disease is distinct from the age-associated disease of the C57BL/6J host strain and the complexes can be shown to consist of soluble tumor antigen and antitumor antibody. Furthermore, the intensity of IgG complex deposition correlates directly with tumor progress (size and metastases) and inversely with mononuclear leukocyte infiltration of the tumor. In vitro assays for lymphocyte cytotoxicity and humoral antibody were found to be less reliable indicators of tumor progress. The possible role of circulating soluble tumor antigen in modifying the immune response to tumors is discussed. In the course of examining tissues for microscopic metastases in C57BL/6J mice bearing a strain-specific melanoma (B-16), we routinely found histologic alterations of the renal glomeruli characterized by a mild proliferative glomerulonephritis (Fig. 1). These mice had slightly elevated urine protein levels, but they were not significantly above normal limits. However, the histologic alterations of the kidneys were striking and were not due to metastatic infiltration or impairment of the renal blood supply by the tumor. It was of interest to determine in what way this asymptomatic renal damage was related to the presence and progress of the tumor; especially, whether it reflected the generation and deposition of soluble complexes of tumor antigen and antitumor antibody. Such complexes have been implicated in the blockade of cell-mediated tumor rejection in other systems (11,12). Materials and Methods The B-16 melanoma is a transplantable, strain-specific tumor of C57BL/6 mice, which was spontaneous in origin. Subcutaneous inoculation of l X 105 cells produces a progressive, local, lethal growth in at least 70% of mice in our experience. The mice in this study were male C57BL/6 obtained from the Jackson Laboratories, Bar Harbor, Maine. Suspensions of tumor cells used in all assays and for in vivo immunization or challenge were derived from nontrypsinized B-16 cells maintained in vitro in RPMI-1640 medium containing 15% heat-inactivated fetal calf serum, penicillin-streptomycin (50 U/ml), and tylocine (anti-PPLO) at 37°C in 5% CO2 atmosphere. Detection of Renal Immunoglobulin Deposits.--Kidneys obtained from animals killed by cervical dislocation were washed in cold saline, frozen in a dry-ice acetone bath, and stored at --70 ° until sectioned. 6/z cryostat sections were fixed and stained as previously described (13). Fhiorescein-conjugated, monospecific antimouse IgG, IgM, and IgA were obtained from the Meloy Laboratory, Springfield, Va. Fluorescence was examined with a Zeiss Photomicroscope (Carl Zeiss, Inc., New York) equipped with an HBO 200 mercury vapor lamp, darkfield condenser, UG-1 exciter filter, and 47/65 barrier filter. Histology.--Histologic examination of kidneys and tumors was done on formalin-fixed, paraffin-embedded tissues. The sections were cut at 3 # and stained with hematoxylin and eosin. 
In Vitro Assays for Cellular and Humoral Reactivity.--Plasma and peripheral blood leukocytes from tumor-sensitized, tumor-progressor, and tumor-rejector mice were obtained at intervals by bleeding from the retroorbital sinus under light ether anesthesia using sterile heparinized capillary tubes. 0.4-0.5 ml of blood could be safely taken from individual animals at 10-day intervals by this method. The blood was centrifuged and the plasma removed. Erythrocytes in the cell pellet were lysed by incubation with Tris-ammonium chloride for 10 min at 37°C. The leukocytes were washed four times with Hank's balanced salt solution to remove residual serum and lysing solution. The cell number and viability were determined by hemocytometer counting in an eosin Y solution (14). Wright's stained smears were prepared on random samples to determine the differential cell count. Leukocyte preparations showed greater than 95% viability after lysing and washing, and were found to consist of 97-100% lymphocytes. A yield of 1-2 X 106 viable lymphocytes could be obtained from 0.4-0.5 ml of blood by this technique. Each animal's plasma and leukocyte preparations were tested individually. In vitro cytotoxic activity of the peripheral blood lymphocytes (PBL) 1 was determined by admixture of 2 X 105 PBL with 1 X 105 [3H]TdR-labeled (1.9 Ci/mmol, Schwartz-Mann Div., Becton, Dickinson & Co., Orangeburg, N. Y.) B-16 cells (2:1 PBL: target cell ratio). The mixture was incubated at 37°C for 72 h in RPMI-1640 medium containing 5% fetal calf serum and the antibiotics described previously. During the incubation time the viable B-16 cells adhere to the glass culture tube. The number of viable tumor cells remaining at the end of the incubation period was estimated by the retention of [3H]TdR in trichloroacetic acidprecipitable material in adherent tumor cells. Isotope counting was done in a liquid scintillation system. Antibody cytotoxicity was determined in a similar fashion. The noncomplement inactivated plasma was diluted 1 : 10 in culture medium and the labeled tumor cells (1 X 105) were suspended in 0.25 ml of the diluted serum and incubated for 1 h at 37°C. The volume was then brought up to 1.5 ml by addition of culture medium and the incubation was continued for 72 h. The viable target cells remaining were estimated by determining isotope retention as described above. The "blocking activity" of plasma was determined by first incubating the target cells with plasma as described for 1 h and then adding the PBL (2 X 10 ~) in a volume of 1.25 ml. Cytotoxicity was measured at 72 h as before. The relative titers of anti-B-16 membrane reactivity of the plasma were determined using a mixed hemadsorption assay. B-16 cells were allowed to adhere to the surface of the wells of Trans-Type leukocyte-typing slides (Hyland Div., Travenol Laboratories, Inc., Costa Mesa, Calif.) and were subsequently incubated for 30 min at room temperature with dilutions of complement-inactivated mouse plasma. The monolayers were washed gently in phosphatebuffered saline (PBS) and two drops of a 0.5% suspension of doubly sensitized (mouse anti-SRBC/rabbit antimouse Ig) sheep red blood cells (SRBC) were added. Mter an additional 30-min incubation at room temperature the monolayers were again gently washed by dipping in PBS and were read for adherence of SRBC. The titer was taken as the last dilution of plasma showing significant SRBC adherence to target cells. 
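The cytotoxicity readout described above is a counts-based comparison against control cultures. The paper does not spell out the arithmetic, so the helper below uses the conventional expression of a ³H-TdR retention assay, percent cytotoxicity as the fractional reduction in retained label relative to controls, purely as an illustration of how such values might be computed; the example counts are invented.

```python
def percent_cytotoxicity(cpm_experimental, cpm_control):
    """Conventional (assumed) readout of a 3H-TdR retention assay:
    100 * (1 - experimental counts / control counts).
    Values near 0 mean the labelled target cells were unaffected; values
    approaching 100 mean few viable, adherent tumour cells remained.
    """
    return 100.0 * (1.0 - cpm_experimental / cpm_control)

# Example: wells retaining 6,400 cpm after incubation with lymphocytes, against
# 9,800 cpm in control wells, correspond to roughly 34.7% cytotoxicity.
print(round(percent_cytotoxicity(6400, 9800), 1))
```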
Elution of Renal Immunoglobulin.--Immunoglobulin was eluted from the kidneys of 9-wk old tumor-progressor mice by acid citrate or chaotropic ions using the method described by Linder, et al. (15). The eluates are designated citrate eluate and KI eluate, respectively. Detection of Tumor Antigen in Plasma.--Individual plasma pools were prepared from five groups of mice. The first pool was derived from mice sensitized with soluble B-16 antigen but not challenged with tumor. 2 These mice had demonstrable antitumor antibody. The second and third pools were derived from tumor-progressor mice at day 10 and 20 post-challenge, respectively. The day 10 plasma had no anti-B-16 antibody while the day 20 plasma did show significant antibody activity. The fourth pool was derived from day 20 plasma of animals which subsequently spontaneously rejected a tumor challenge of 1 X 105 cells and had no demonstrable antibody activity. The fifth pool was derived from control (nonsensitized, nonchallenged) mice. Each whole plasma pool was reacted in Ouchterlony double diffusion plates with rabbit antiserum against soluble B-16 antigen prepared as previously described. 2 In addition, a 3-ml sample of each plasma pool was applied to a Sephadex G-150 column (2.5 × 45 cm) and eluted with PBS pH 7.4 at a flow rate of 10 ml/h. The excluded protein peak from each sample was concentrated by negative pressure dialysis and dialyzed against pH 3.0 saline to split antigen-antibody complexes. The samples were then dialyzed through an XM-50 Amicon membrane (Amicon Corp., Lexington, Mass.) with 10 vol of pH 3.0 saline. The dialysate, containing materials of less than 50#00 mol wt, was concentrated to one-half the original serum volume, dialyzed against pH 7.4 saline, and tested for the presence of B-16 specific antigen in double diffusion against rabbit anti-B-16SA antiserum. RESULTS The intensity of immunoglobulin deposition in representative tumor-progressor and age-matched control mice is shown in Table I. IgG, IgM, and IgA deposits in the kidneys of tumor-progressor mice were always greater than in age-matched controls, although age-related, spontaneous complex deposition involving all three immunoglobulin classes did occur. IgG deposits showed the most consistent and most profound elevation in the tumor-progressor mice and always involved the glomerular basement membrane (GBM) as well as the mesangium; whereas in control mice the deposits, when present, were confined to the mesangium. The immunofluorescence staining pattern of glomeruli from tumor-progressor mice is shown in Fig. 2. The deposits were clearly granular in nature and appeared in 100% of the glomeruli in affected mice. Since an underlying, age-associated immune-complex disease has been described in this mouse strain (15,16) and was confirmed by these data, further studies were confined to mice younger than 16 wk of age. It was necessary to determine whether the increased intensity and GBM involvement of immune-complex deposition in tumor-progressor mice represented acceleration of an ongoing disease due to the physiologic stress of a growing The values are shown for individual animals. Tumor-bearing animal's kidneys were examined 20-25 days after s.c. challenge with 1 X l0 s B-16 cells. * ra indicates mesangial localization of deposits, g indicates GBM localization of deposits. tumor, or whether it represented the generation of new immune complexes involving soluble tumor antigen. 
Table II shows the relative intensity of IgG deposition and antitumor serum antibody titers in 11-wk old animals which had progressive tumors, in animals which were sensitized with soluble tumor antigen 2 but not challenged, in immunosuppressed tumor-progressor animals, and in animals which spontaneously rejected a tumor challenge. It appears that the complexes are, in fact, related to the presence of the tumor as an antigen rather than as a physiologic stress. Animals sensitized and challenged with soluble tumor antigen all showed greater than 2+ IgG immune complex deposition, as did animals bearing progressive tumor. However, animals which spontaneously rejected the tumor and tumor-progressor animals which had been pretreated with a dose of irradiation which preferentially suppresses primary immune responses all showed renal IgG deposits of 1+ or less. These results suggested that the renal deposits might represent soluble circulating complexes of tumor antigen and antitumor antibody. Immunoglobulin was eluted from pooled kidney tissue of 9-wk old tumor-progressor mice and examined by indirect immunofluorescence for reactivity with the B-16 tumor cell membranes. Tumor-bearing mice in this age group showed only IgG in renal deposits. The fluorescent staining pattern is shown in Fig. 3 (see Table III for titers and specific activity). Table III shows the relative activity and specificity for B-16 cells of renal eluates and serum derived at the same time from the same animals. The specific activity for B-16 cell membranes of IgG in the pooled KI and citrate eluates was 32 times greater than in the serum (relative to the IgG concentration). Neither the serum nor the pooled eluate reacted with C57 liver cells or erythrocytes in an indirect immunofluorescence assay and the activity against B-16 cells could not be removed by absorption with C57 tissue powder. IgM and IgA could not be detected in the eluates and no IgM or IgA antibody was detected in the indirect fluorescence assay with B-16 cells, C57 liver cells, or erythrocytes. The relationship of the renal IgG deposits to tumor size, intensity of mononuclear leukocyte (MNL) infiltration, and the presence of lung metastases is shown in Table IV. In general, mice which showed only small amounts (< 1+) of renal deposits of IgG had relatively small tumors at the injection site and the tumors had moderate to pronounced MNL infiltration (Fig. 4 a). A correlation between a favorable prognosis and intense lymphocytic infiltration has been reported in human malignant melanoma (17). None of these animals had metastases to the lungs. Mice which showed moderate (> 1+, < 2+) renal IgG deposits had somewhat larger tumors with relatively less MNL infiltration. One animal had lung metastases. Mice which had large (> 2+) amounts of renal IgG had the largest tumors with little or no MNL infiltration (Fig. 4 b) and 5 out of 10 had microscopic lung metastases. All parameters were examined at 24-26 days after tumor inoculation. The extreme variability (none to pronounced) of the intensity of MNL infiltration of the tumors suggests that there is a difference in the cell-mediated tumor rejection response in these animals which is related to immune complex deposition. However, in vitro testing of PBL from similar animals (Table V) revealed that all tumor-progressor animals had some degree of PBL cytotoxicity against B-16 cells at day 20 after tumor inoculation. PBL cytotoxicity was not found in any animals at day 10.
The intensity of day 20 PBL reactivity did not correlate with the tumor size at day 20 or with the time of death. Furthermore, animals which spontaneously rejected the tumor had essentially identical PBL reactivity as did animals which succumbed to the tumor. Humoral antibody activity in the tumor-progressors and tumor-rejectors is shown in Table VI. Day 20 post-tumor inoculation antibody titers of individual tumor-progressor plasmas did not differ significantly. Both cytotoxicity and blocking activity could be demonstrated in the same plasma at the same dilution in some of the progressor animals. In some cases, where blocking activity was not found, the plasma and PBL combination resulted in enhanced (greater than additive) cytotoxicity, implying perhaps an antibody-dependent cellular cytotoxicity. Animals which had cytotoxic antibody or whose plasma enhanced cellular cytotoxicity did not survive significantly longer than those which did not; nor did the presence of plasma blocking activity correlate with shortened survival time. Tumor-rejectors, as a group, had no detectable antitumor antibody at day 20 by mixed hemadsorption, cytotoxic, or blocking assays. Mice sensitized to the B-16 tumor with either killed (irradiated or frozen) tumor cells or with soluble tumor antigen 2 developed both cellular and humoral responses comparable to those in tumor-progressor animals. When the sensitized animals were subsequently challenged with a tumor inoculum (5 × 10^4 cells) which did not take in control animals, they all developed progressive, lethal tumors (Table VII). These animals showed the most pronounced (all > 2+) renal IgG complex deposition and the least amount of MNL infiltration of the tumors of any animals tested. Of the five unfractionated plasma pools tested, tumor-specific antigen was found only in one. This was the pool consisting of day 10 serum from tumor-progressor mice which had not shown antibody activity. However, after fractionation and low-pH immune complex dissociation, free antigen was detected in both the day 10 and day 20 tumor-progressor plasma. Tumor antigen could also be detected in the citrate (pH 3.0) eluates from day 20 tumor-progressor kidneys.
DISCUSSION
We have reported previously 2 that in vitro cultured B-16 cells spontaneously release a soluble, tumor-specific membrane antigen (20,000-27,000 mol wt), and that antigen release occurs both through active metabolism and autolysis. It appears that a similar antigen (less than 50,000 mol wt) is shed in vivo, as evidenced by the detection of free or antibody-complexed antigen in the serum and in renal deposits of tumor-progressor animals. Furthermore, the intensity of renal immune complex deposition showed a more consistent correlation with in vivo parameters of tumor growth (size and metastatic spread) and host rejection response (MNL infiltration) than did in vitro assays for peripheral blood lymphocyte cytotoxicity, antibody titer, cytotoxic antibody or blocking antibody. Lymphocyte cytotoxicity assays at a low lymphocyte:target cell ratio (2:1) failed to discriminate between tumor-progressors and rejectors, and the level of cytotoxicity did not correlate with survival time in progressors. Tumor-reactive antibody (mixed hemadsorption assay) was consistently found in progressor animals; however, functional assays for cytotoxicity and blocking activity were poor indicators of tumor progress with regard to the most significant criterion, namely, survival.
The existence of tumor antigen in the serum of tumor-bearing individuals has been described previously (11,12,18,19) and such serum has been shown to block in vitro lymphocyte cytotoxicity, presumably by interaction with the lymphocyte antigen receptors (11,12,18). In other systems, however, humoral blocking activity appears to be a property of noncytotoxic antibody which masks the tumor cell antigens (20). The definition of "blocking" activity is quite dependent on the way the serum is assayed. Thus, if serum is preincubated with target cells and removed before addition of the lymphocytes, only "masking antibody" can be detected. If the serum is preincubated with the lymphocytes, on the other hand, only antigen blockade of T-lymphocyte receptors can be detected. Fig. 5 shows the interactions which might occur between soluble circulating tumor antigen and immune effectors. In this scheme, the biological effects of antibody and T lymphocytes are proposed to be controlled by fluctuating levels of soluble antigen. In conditions of antigen excess, free or antibody-complexed antigen may bind to T-lymphocyte antigen receptors and thus activate the cells. These lymphocytes would be stimulated at a distance from the tumor mass and would therefore expend their short-range effectors (MIF, lymphotoxin, etc.) without causing damage to the tumor. Thus, one might expect that in individuals with massive tumor burdens delayed hypersensitivity responses to additional tumor antigen might be minimal or absent. In a study of eight patients with malignant melanoma, Fass et al. (21) found that only those with small, localized tumors (3/8) showed delayed hypersensitivity reactions to autologous tumor cells. The five patients with disseminated disease were unreactive. These results were not a reflection of general anergy, as all eight patients showed normal delayed hypersensitivity reactions to other antigens to which they were previously sensitive. Furthermore, antigen-antibody complexes bound by T lymphocytes might result in complement fixation and lysis of the cells. This could account for the lesser reactivity in vitro of washed lymphocytes from melanoma patients with disseminated tumors (22). Thus, it is possible to deplete the effective T-lymphocyte population to the point where even a strongly sensitized individual will appear unreactive. In the B-16 system, the C57BL/6 host does not have a complete hemolytic complement system and so depletion of reactive lymphocytes by lysis may not occur. In fact, when peripheral blood lymphocytes were removed from animals with varying degrees of tumor mass and washed thoroughly, they behaved very similarly. On the other hand, the in vivo effectiveness of the lymphocytes as measured by MNL infiltration of the tumor was markedly different in animals with large and small tumors. Thus, the potential activity of the lymphocytes was the same regardless of tumor mass, but their actual effectiveness was diminished in animals with larger tumors. This suggests the possibility that diminished lymphocyte reactivity in vivo could be an effect rather than a cause of rapid tumor growth and consequent release of large amounts of tumor antigen. Referring again to Fig. 5, in conditions of slight antibody excess, the phenomenon of masking tumor cell antigens by antibody present in insufficient quantities to be cytotoxic (by complement fixation or antibody-dependent cellular cytotoxicity) may occur.
In contrast, in conditions of extreme antibody excess, brought about by removal of part of the tumor mass and decrease in available circulating antigen, the same antibody which previously "masked" might become cytotoxic. For example, Lewis et al. (23) have reported the appearance of cytotoxic antibody in two previously negative patients following partial excision of their malignant melanoma. Furthermore, inhibition of serum-blocking activity by the serum of patients in remission (24) may reflect the attainment of a balance of antigen and antibody in the mixture of the two types of sera, resulting in effective neutralization of both antigen and antibody. The postulated interaction presented in Fig. 5 is not meant to represent alternative static conditions, but rather a cycle which is controlled chiefly by the availability of circulating antigen. Since we have shown 2 that soluble antigen can be released by autolysis following tumor cell death as well as by living cells, it is possible that vigorous immunologic attack of the tumor may be ultimately self-defeating by accelerating the release of soluble antigen. Monitoring of renal immune complex deposition in one reported case of Hodgkin's disease (5) proved to be useful for following and, in fact, predicting the course of the disease. At present, renal biopsies are usually performed only on cancer patients with obvious renal complications; however, the study reported here suggests that significant immune complex deposition can occur in tumor-bearing individuals without clinical evidence of renal disease. Examination of this phenomenon in a selected series of cancer patients could establish its general occurrence and possible prognostic value.
SUMMARY
Histologic and immunofluorescence studies of the kidneys of mice bearing a progressive melanoma show a proliferative glomerulonephritis associated with immune complex deposition in the mesangium and along the glomerular basement membrane. This immune complex disease is distinct from the age-associated disease of the C57BL/6J host strain, and the complexes can be shown to consist of soluble tumor antigen and antitumor antibody. Furthermore, the intensity of IgG complex deposition correlates directly with tumor progress (size and metastases) and inversely with mononuclear leukocyte infiltration of the tumor. In vitro assays for lymphocyte cytotoxicity and humoral antibody were found to be less reliable indicators of tumor progress. The possible role of circulating soluble tumor antigen in modifying the immune response to tumors is discussed.
Dynamics of warped accretion discs

Accretion discs are present around both stellar-mass black holes in X-ray binaries and supermassive black holes in active galactic nuclei. A wide variety of circumstantial evidence implies that many of these discs are warped. The standard Bardeen-Petterson model attributes the shape of the warp to the competition between Lense-Thirring torque from the central black hole and viscous angular-momentum transport within the disc. We show that this description is incomplete, and that torques from the companion star (for X-ray binaries) or the self-gravity of the disc (for active galactic nuclei) can play a major role in determining the properties of the warped disc. Including these effects leads to a rich set of new phenomena. For example, (i) when a companion star is present and the warp arises from a misalignment between the companion's orbital axis and the black hole's spin axis, there is no steady-state solution of the Pringle-Ogilvie equations for a thin warped disc when the viscosity falls below a critical value; (ii) in AGN accretion discs, the warp can excite short-wavelength bending waves that propagate inward with growing amplitude until they are damped by the disc viscosity. We show that both phenomena can occur for plausible values of the black hole and disc parameters, and briefly discuss their observational implications.

INTRODUCTION

The study of warped discs dates back to Laplace's (1805) study of the motions of the satellites of Jupiter, in which he showed that each satellite precessed around an axis on which the orbit-averaged torques from the quadrupole moment of the planet and the tidal field from the Sun cancelled. The locus of the circular rings defined by these axes, now called the Laplace surface, is the expected shape of a dissipative low-viscosity disc in this potential (for a review see Tremaine et al. 2009). More recent studies of warped accretion discs began with Bardeen & Petterson (1975), who pointed out that an accretion disc orbiting a spinning black hole (BH) would be subject to Lense-Thirring torque if its orbital axis were not aligned with the spin axis of the BH; this torque leads to precession of the axis of a test particle on a circular orbit of radius r at an angular speed ω = 2GL•/(r^3 c^2), where L• is the angular momentum of the BH. We call discs 'quadrupole' or 'Lense-Thirring' discs depending on which determines the torque from the central body. There are fundamental differences in the behavior of warped quadrupole and Lense-Thirring discs. The first is that if the spin axis of the central body is reversed, the Lense-Thirring torque is also reversed (eq. 3) but the quadrupole torque is not (eq. 2). A second and more fundamental difference is the sign of the torque: for small inclinations the quadrupole torque induces retrograde precession of the angular momentum of the disc around the spin axis of the central body, whereas the Lense-Thirring torque induces prograde precession. The shape of a steady-state warped disc is determined by the requirement that the sum of the torques from all external sources equals the divergence of the angular-momentum currents from transport within the disc. As a preliminary step, §§ 1.1 and 1.2 derive the steady-state properties of warped discs in which viscosity is negligible. Then § 1.3 provides a broad-brush overview of the competing effects that determine the behavior of warped discs.
Section 2.1 derives the equations of motion for a thin, viscous disc subjected to external torques, following Pringle (1992) and Ogilvie (1999), and § 2.3 describes our numerical methods and the results for both quadrupole and Lense-Thirring discs in systems with a binary companion. Section 3 describes the behavior of self-gravitating warped discs. Section 4 relates our findings to earlier work on warped accretion discs. Sections 5.1 and 5.2 apply our results to accretion discs around stellar-mass BHs in binary systems and around supermassive BHs in AGN. Finally, § 6 contains a brief summary of our conclusions.

External torques

In this paper we consider three types of external torque that can warp an accretion disc. In each case we shall assume that the torque is weak - the fractional change per orbit in the angular momentum of an orbiting fluid element is small - so we can work with the orbit-averaged torque. In particular we define T(r, n̂, t) to be the torque per unit mass averaged over a circular orbit at radius r with orbit normal n̂.

Quadrupole torque: In the system examined by Laplace, the central body is a planet of mass M, radius Rp, and quadrupole gravitational harmonic J2. If the planet's spin axis is along n̂p, the torque per unit mass on an orbiting test particle is
\[ \mathbf{T}_{\rm p} = \frac{\epsilon_{\rm p}}{r^{3}}\,(\hat{n}\cdot\hat{n}_{\rm p})\,\hat{n}\times\hat{n}_{\rm p}, \qquad \epsilon_{\rm p} = \tfrac{3}{2}\,GMJ_2R_{\rm p}^{2}. \tag{2} \]
The quadrupole torque is also relevant to circumbinary accretion discs; in the case of a binary with masses M1 and M2 on a circular orbit with separation a ≪ r, we replace M by M1 + M2 and J2Rp^2 by (1/2) M1M2 a^2/(M1 + M2)^2.

Lense-Thirring torque: The central body can also be a BH of mass M and angular momentum L• = GM^2 a• n̂•/c, where c is the speed of light, n̂• is the spin axis of the BH and 0 ≤ a• < 1 is the dimensionless spin parameter of the BH. The angular momentum of a test particle orbiting the BH precesses as if it were subject to a classical torque (the Lense-Thirring torque; see Landau & Lifshitz 2007)
\[ \mathbf{T}_{\rm LT} = -\frac{\epsilon_{\rm LT}}{r^{5/2}}\,\hat{n}\times\hat{n}_{\bullet}, \qquad \epsilon_{\rm LT} = \frac{2(GM)^{5/2}a_{\bullet}}{c^{3}} = 2R_{\rm g}^{5/2}c^{2}a_{\bullet}, \tag{3} \]
where Rg ≡ GM/c^2 ≪ r is the gravitational radius of the BH.

Companion torque: The central body, whether a planet or a BH, may be accompanied by a companion star of mass M⋆, on a circular orbit with radius r⋆ ≫ r. Then the gravitational potential of the companion can be approximated by its quadrupole component, which after averaging over the companion orbit yields a torque
\[ \mathbf{T}_{\star} = \epsilon_{\star}\,r^{2}\,(\hat{n}\cdot\hat{n}_{\star})\,\hat{n}\times\hat{n}_{\star}, \qquad \epsilon_{\star} = \frac{3GM_{\star}}{4r_{\star}^{3}}. \tag{4} \]

Inviscid discs

Following Laplace, we first consider a thin disc of material orbiting a planet with non-zero obliquity (the obliquity is cos^{-1} n̂p · n̂⋆). The disc is subject to torques from the quadrupole moment of the planet, Tp (eq. 2), and from the companion star around which the planet orbits, T⋆ (eq. 4). In the absence of pressure, viscosity, self-gravity, or other collective effects in the disc, the fluid rings at different radii precess independently, so the disc cannot retain its coherence unless the total torque T⋆ + Tp = 0 at each radius. This requires
\[ r^{5}\,(\hat{n}\cdot\hat{n}_{\star})\,\hat{n}\times\hat{n}_{\star} + \frac{\epsilon_{\rm p}}{\epsilon_{\star}}\,(\hat{n}\cdot\hat{n}_{\rm p})\,\hat{n}\times\hat{n}_{\rm p} = 0, \tag{5} \]
which can be rewritten as
\[ \left(\frac{r}{r_{\rm w}}\right)^{5}(\hat{n}\cdot\hat{n}_{\star})\,\hat{n}\times\hat{n}_{\star} + (\hat{n}\cdot\hat{n}_{\rm p})\,\hat{n}\times\hat{n}_{\rm p} = 0, \qquad r_{\rm w}^{5} \equiv \frac{\epsilon_{\rm p}}{\epsilon_{\star}}, \tag{6} \]
which defines the characteristic radius rw at which the warp is most prominent (Goldreich 1966). We restrict ourselves to the usual case in which the disc normal n̂(r) is coplanar with n̂p and n̂⋆ (for a more general discussion see Tremaine et al. 2009). Then the unit vectors n̂(r), n̂p, n̂⋆ can be specified by their azimuthal angles in this plane, φ(r), φp, φ⋆. Without loss of generality we may assume φ⋆ = π/2, so the obliquity is φp − φ⋆ = φp − π/2.
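As a concrete illustration of how the Laplace surface follows from the balance condition (6), the short sketch below (not the authors' code; the 60° obliquity is an assumed example value) writes the condition in terms of the azimuthal angles defined above and solves for φ(r) by simple root finding.

```python
# Minimal sketch: solve the coplanar torque balance (eq. 6 in terms of azimuthal
# angles) for the classical Laplace surface, assuming phi_star = pi/2 and an
# example obliquity of 60 degrees.
import numpy as np
from scipy.optimize import brentq

phi_star = 0.5 * np.pi
phi_p = phi_star + np.radians(60.0)      # assumed obliquity

def balance(phi, x):
    """x = r/r_w; companion term plus quadrupole term, each ~ sin*cos of the tilt."""
    return (x**5 * np.sin(phi - phi_star) * np.cos(phi - phi_star)
            + np.sin(phi - phi_p) * np.cos(phi - phi_p))

for x in np.logspace(-1.0, 1.0, 9):      # radii from 0.1 r_w to 10 r_w
    phi = brentq(balance, phi_star, phi_p, args=(x,))
    print(f"r/r_w = {x:6.2f}   phi = {np.degrees(phi):6.2f} deg")
```

At small r the root approaches φp (the planet's equator) and at large r it approaches φ⋆ (the companion's orbital plane), which is the smooth twist described in the next paragraph.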
Then equation (6) can be rewritten as
\[ \left(\frac{r}{r_{\rm w}}\right)^{5}\sin(\phi-\phi_{\star})\cos(\phi-\phi_{\star}) + \sin(\phi-\phi_{\rm p})\cos(\phi-\phi_{\rm p}) = 0. \tag{7} \]
The solutions to equation (7) are shown in the left panel of Fig. 1 for obliquity φp − φ⋆ = 60°. The 'classical' Laplace surface, shown as solid black circles, is aligned with the planet's orbit around the star at large radii (φ → π/2 as r → ∞). The surface shown by solid blue circles is similar, but composed of retrograde orbits (the disc angular-momentum vector is anti-aligned with the planetary orbital angular momentum at large radii, and anti-aligned with the planetary spin at small radii). The surfaces shown by open red circles are also solutions of equation (7) but they are unstable to small perturbations in n̂ (Tremaine et al. 2009), and we will not consider them further. On the classical Laplace surface, the azimuth of the disc normal φ increases smoothly and continuously from φ⋆ to φp, so that the disc plane gradually twists from the orbital plane of the planet to the equatorial plane of the planet as its radius shrinks. We next carry out the analogous derivation for an inviscid thin disc orbiting a spinning BH with a companion star. The disc is subject to Lense-Thirring torque, TLT (eq. 3), and torque from the companion star, T⋆ (eq. 4). The equilibrium shape defined by T⋆ + TLT = 0 is given by
\[ r^{9/2}\,(\hat{n}\cdot\hat{n}_{\star})\,\hat{n}\times\hat{n}_{\star} - \frac{\epsilon_{\rm LT}}{\epsilon_{\star}}\,\hat{n}\times\hat{n}_{\bullet} = 0, \tag{8} \]
which can be rewritten as
\[ \left(\frac{r}{r_{\rm w}}\right)^{9/2}(\hat{n}\cdot\hat{n}_{\star})\,\hat{n}\times\hat{n}_{\star} - \hat{n}\times\hat{n}_{\bullet} = 0, \qquad r_{\rm w}^{9/2} \equiv \frac{\epsilon_{\rm LT}}{\epsilon_{\star}}. \tag{9} \]
The analog to equation (7) is
\[ \left(\frac{r}{r_{\rm w}}\right)^{9/2}\sin(\phi-\phi_{\star})\cos(\phi-\phi_{\star}) - \sin(\phi-\phi_{\bullet}) = 0, \tag{10} \]
where φ• is the azimuthal angle of the BH spin axis. The obliquity is φ• − φ⋆ = φ• − π/2. The solutions to equation (10) are shown in the right panel of Fig. 1 for obliquity φ• − φ⋆ = 60°. In contrast to the quadrupole case, the solution that is aligned with the companion-star orbit at large radii (φ → π/2 as r → ∞, shown as black filled circles) terminates just outside the characteristic radius rw (this solution is mirrored by an unstable solution, shown by open red circles, that has no relevance to our discussion). The solution that is aligned with the equator of the BH at small radii, shown as the upper set of filled blue circles, approaches φ = π at large radii; in other words the disc is perpendicular to the companion-star orbital plane, which is inconsistent with the expectation that the disc is fed by material lost from the companion. Material spiraling in from the companion star along the black sequence of points in the right panel of Fig. 1 must therefore jump to one of the two blue sequences before proceeding inwards to the BH 3. The lower blue sequence represents a solution in which the disc angular momentum is anti-aligned with the BH spin at small radii (φ = φ• − π) and anti-aligned with the orbital angular momentum of the companion at large radii. This is equivalent to a solution in which the obliquity is 120° and the disc angular momentum is aligned with the BH spin at small radii and the companion's orbital angular momentum at large radii. Thus a smooth surface similar to the classical Laplace surface seen in the left panel of Fig. 1 exists around a spinning BH if and only if the obliquity exceeds 90°. These conclusions raise two obvious questions: how is this unusual behavior related to the standard Bardeen-Petterson analysis of a warped accretion disc orbiting a spinning BH? And how do warped accretion discs actually behave in real astrophysical systems?

An approximate analysis of viscous warped discs

To show the relation between the findings of the preceding subsection and the Bardeen-Petterson treatment of viscous warped discs, we examine the approximate strength of the torques from various sources.
Suppose that the disc is strongly warped near some radius r. The torque per unit mass due to a companion is (eq. 4) where we have neglected all factors of order unity. Similarly, the torque from the quadrupole moment of the central body is (eq. 2) and the Lense-Thirring torque is (eq. 3) The torque per unit mass due to viscous stress is Tv ≃ ηΩ/ρ where η is the viscosity and ρ is the density in the disc. In the Shakura-Sunyaev α-model of viscosity (eq. 26) η = αρc 2 s where cs is the sound speed, and α is a constant, typically assumed to be ∼ 0.1. However, the Shakura-Sunyaev model was developed to model viscous forces in the disc arising from Keplerian shear, whereas the warp shape is determined by viscous forces due to much smaller shears normal to the disc plane. To represent the second kind of force we use an α-model with a different parameter α ⊥ (for small-amplitude warps α ⊥ = 1 2 α −1 ; see eq. 33). Thus For simplicity we shall usually assume that the disc is isothermal, in which case the viscous torque is independent of radius. Finally, the torque per unit mass due to the self-gravity of the disc is roughly where Σ is the surface density near radius r. Viscous quadrupole discs with a companion The quadrupole torque Tp decreases with radius, while the torque from the companion T⋆ increases with radius. The two are equal at which agrees with the precise definition of the warp radius in equation (6) to within a factor of order unity. Since the viscous torque Tv is independent of radius in an isothermal disc, and one of T⋆, Tp is always larger than T⋆(rw), the viscous torque is always smaller than the torque due to the central body or the companion if βα ⊥ < 1, where This agrees with the precise definition of β that we give later in the paper (eq. 30) to within 1 per cent. In the terminology introduced at the start of the paper, a disc with βα ⊥ 1 is a 'low-viscosity' disc. Viscous Lense-Thirring discs with a companion The Lense-Thirring torque TLT and the companion torque T⋆ are equal at and the ratio of the viscous torque to the Lense-Thirring or companion torque at rw is then βα ⊥ where consistent with the precise definition in equation (30) to within 15 per cent. We expect that the shape of a low-viscosity disc (βα ⊥ 1) is determined by the competition between the torque from the central body (quadrupole or Lense-Thirring torque) and the torque from the companion, rather than by viscous torques. On the other hand the surface-density distribution in a warped disc is always determined by the viscous torque, no matter how small, since the other two torques both scale linearly with the surface density and hence do not establish the surface-density distribution. The usual Bardeen-Petterson description implicitly assumes that βα ⊥ ≫ 1 and neglects the companion torque. In this case the warp will be strongest at a smaller radius r ′ w given by Viscous Lense-Thirring discs with self-gravity In accretion discs surrounding supermassive BHs at the centres of galaxies, there is no companion body (except in the case of a binary BH; see §5.2.1) . Thus the torque T⋆ can be neglected. However, the disc can be massive enough that its self-gravity plays a role in determining its shape. In plausible disc models the surface density falls off slowly enough that this torque increases outward (see §5.2), and equals the Lense-Thirring torque at note that this is an implicit equation for the warp radius rw since the surface density depends on radius. 
The ratio of the viscous torque, equation (14), to the Lense-Thirring and self-gravity torques at rw is then γα ⊥ , where Note that γ ≃ Q(H/r) where Q is Toomre's parameter (eq. 69) and H = cs/Ω is the disc thickness. Thus the viscosity becomes low (in the sense that γ ≪ 1) in thin discs (H/r ≪ 1) long before they become gravitationally unstable (Q < 1). Evolution equations The equations that describe the evolution of a warped, thin accretion disc are derived by Pringle (1992), Ogilvie (1999), and Ogilvie & Latter (2013a). Our starting point is Ogilvie (1999)'s equations (121) and (122). The first of these is the equation of continuity where Σ(r, t) is the surface density, vr(r, t) is the radial drift velocity, and CM (r, t) is the mass current (rate of outward flow of disc mass through radius r). The second is an equation for angular momentum conservation, where Ω(r) ≡ (GM/r 3 ) 1/2 is the Keplerian angular speed, L = Σr 2 Ωn is the angular momentum per unit area, T is the torque per unit mass from sources external to the disc, and CL is the angular-momentum current, given by the sum of advective and viscous currents, Dynamics of warped accretion discs 7 Here cs is the sound speed, which is constant in an isothermal disc (as we shall assume from now on), and as usual 2n (r, t) is the unit vector normal to the disc at radius r. The dimensionless coefficients Q1, Q2, Q3 depend on the equation of state, the viscosity, and the warp ψ ≡ r|∂n/∂r|. For a flat Keplerian disc, Q1 is related to the kinematic viscosity by ν = − 2 3 Q1c 2 s /Ω and the mean-square height of the disc above the midplane is H 2 = c 2 s /Ω 2 . These equations are based on the assumptions (Ogilvie 1999) that (i) the disc is thin, H/r ≪ 1; (ii) the fluid obeys the compressible Navier-Stokes equation; (iii) the fluid equation of state is barotropic, i.e., the viscosity is dynamically important but not thermodynamically important; (iv) the disc is non-resonant in the sense of equation (1). In the calculations below we shall also assume that (v) the viscosity is described by the Shakura-Sunyaev α-model, that is, the shear and bulk viscosities η and ζ are related to the pressure p by where α and α b are constants. For a flat, isothermal disc the kinematic viscosity is ν = η/ρ = αc 2 s /Ω, so α = − 2 3 Q1. Now take the scalar product of (24) withn. Sincen ·n = 1,n · ∂n/∂t =n · ∂n/∂r = 0. Moreovern · T = 0 for the Lense-Thirring torque and for any torque arising from a gravitational potential, so we shall assume that this condition holds in general. We also use equation (23) to eliminate ∂Σ/∂t. The result is an expression for the mass current, We now introduce several new variables: the dimensionless radius x ≡ r/rw with the warp radius rw given by (6) or (9); the dimensionless time τ ≡ t c 2 s /(GM rw) 1/2 (roughly, for a Shakura-Sunyaev disc with α ∼ 1 this is time measured in units of the viscous diffusion time at the warp radius); and y(r, t) ≡ Σ(r, t)(GM rw) 1/2 (with dimensions of angular momentum per unit area). Equation (24) becomes The dimensionless parameter β is given by Lense-Thirring and represents the ratio of the strength of the viscous torque to the external torque at the characteristic warp radius rw (cf. eqs. 17 and 19). Equation (28) is a parabolic partial differential equation for the three components of L. The dimensionless viscosity coefficients Qi are functions of the equation of state and of the warp ψ ≡ x|∂n/∂x| (Ogilvie 1999). 
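To give a feel for the numbers entering the dimensionless parameter β, the following sketch evaluates the warp radius from rw^{9/2} = εLT/ε⋆ (eq. 9) and the torque ratio βα⊥ ≈ α⊥ cs^2/(ε⋆ rw^2), approximating the viscous torque per unit mass by α⊥ cs^2 as in the order-of-magnitude discussion of § 1.3. All parameter values below are assumed, illustrative choices, and factors of order unity are dropped throughout.

```python
# Order-of-magnitude sketch with assumed fiducial parameters (illustrative only).
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33        # cgs units
M, a_bh = 7.0 * Msun, 0.5                        # BH mass and spin (assumed)
Mstar, rstar = 1.0 * Msun, 3.0e11                # companion mass and separation (assumed)
cs, alpha_perp = 1.0e6, 2.5                      # sound speed in cm/s; alpha_perp ~ 1/(2*alpha) for alpha = 0.2

eps_LT = 2.0 * (G * M)**2.5 * a_bh / c**3        # eq. (3)
eps_star = 3.0 * G * Mstar / (4.0 * rstar**3)    # eq. (4)
r_w = (eps_LT / eps_star)**(2.0 / 9.0)           # eq. (9)

# Viscous torque ~ alpha_perp*c_s^2 compared with the companion torque at r_w, eps_star*r_w^2.
beta_alpha_perp = alpha_perp * cs**2 / (eps_star * r_w**2)

R_g = G * M / c**2
print(f"r_w ~ {r_w:.2e} cm  (~{r_w / R_g:.0f} R_g)")
print(f"beta * alpha_perp ~ {beta_alpha_perp:.1f}")
```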
Ogilvie shows that for an isothermal α-disc and small warps (ψ ≪ 1), We shall also examine a simplified set of equations that appear to contain most of the important physics of equations (28)-(30). In these equations (i) we examine only the steady-state disc, that is, we set ∂L/∂t = 0 in equation (28); (ii) we set Q3 = 0, since it appears to play no important role in the dynamics; and (iii) we neglect the dependence of Q1 and Q2 on the warp ψ, that is, we treat them as constants. The steady-state assumption implies that the mass current cM is a constant of the problem, independent of radius. We have dy dx + y 2 The three components of the unit vectorn are related by the constraint |n| = 1. This simplified model is similar to Pringle's (1992) equations of motion, in which there are two viscosities η and η ⊥ (in Pringle's notation, these are ρν1 and ρν2), the first of which is associated with the Keplerian shear and the second with shear perpendicular to the disc caused by a warp. In an α-disc model η = αρc 2 s and η ⊥ = α ⊥ ρc 2 s and the two models are equivalent if If α ≪ 1 and the warp is small, equation (31) implies that α ⊥ = 1 2 α −1 (Papaloizou & Pringle 1983;Ogilvie 1999). Although we adopt this formalism, one should keep in mind that angular-momentum transport in real accretion discs is thought to be driven by MHD turbulence, which may not be well approximated by an isotropic viscosity -or if it is, the viscosity may not be well approximated by the Shakura-Sunyaev α-model. Some support for this formalism is provided by local, non-relativistic MHD simulations that examine the decay of an imposed epicyclic oscillation (Torkelsson et al. 2000). Global, general-relativistic MHD simulations have tended to show solid-body precession rather than Bardeen-Petterson alignment, although most of these correspond to the resonant regime α < H/r (cf. eq. 1), which we exclude (e.g. Fragile et al. 2007). More recently, global but non-relativistic MHD calculations with an approximate treatment of Lense-Thirring precession have been performed by Sorathia et al. (submitted to ApJ; see also Sorathia et al. 2013). They find that diffusive damping of vertical shear is much less important than the derivation of the Pringle-Ogilvie equations implies. This in turn implies that the Pringle-Ogilvie + Shakura-Sunyaev formalism overestimates the strength of viscous torques when α ≪ 1 and so the importance of tidal torques and self-gravity in accretion discs is even greater than we find below. Numerical methods Steady-state discs We have solved the simplified ordinary differential equations (32) for steady-state discs with constant viscosity coefficients and Q3 = 0. We find the numerical solution over a range of dimensionless radii [xa, x b ]; typically we choose x b = 1/xa = 30, although in some cases where the viscosity is large we cover a larger range to ensure that the disc is not still warped at either end of the integration range. The viscosity coefficients Q1 and Q2 are usually fixed at their values for an unwarped disc with α = 0.2, α b = 0, in which case Q1 = −0.3, Q2 = 1.58416. The equations are unchanged under the rescaling y(x) → λy(x), cM → λcM , so the normalization of the mass current cM can be chosen arbitrarily apart from the sign. We are interested in the case in which mass flows into the BH, so we set cM = −1. Seven boundary conditions are required for the one first-order and three second-order equations. 
In the region x ≪ 1 where external torques are negligible, the disc is assumed to be flat, dn/dx = 0. Then the first of equations (32) has the solution where k is an integration constant. We assume a no-torque boundary condition at the radius xISCO of the innermost stable circular orbit, which is close to the BH; this requires that the viscous angular-momentum current cvisc = 0 at xISCO and from the first of equations (29) this in turn requires y = 0 at xISCO. Thus We assume that the inner boundary of our integration region xa is much larger than xISCO so in the region of interest which provides one boundary condition at x = xa. At the outer radius x b the disc should lie in the plane of the companion-star orbit, as we would expect if the disc is fed by mass loss from the companion. Thusn =n⋆ at x = x b , which provides three additional boundary conditions. Moreover since |n| = 1 at all radii, we must haven · ∂n/∂x = 0 at x = x b , which provides another boundary condition (it is straightforward to show from the second of eqs. 32 that these conditions are sufficient to ensure that |n| = 1 at all radii). Note that we do not require that the disc lies in the equator of the central body for x ≪ 1, although it turns out to do so in all of our numerical solutions. Let us assume for simplicity that (i) inside the inner integration boundary xa the external torques on the right side of the second of equations (32) vanish; (ii) the disc normaln is nearly constant,n(x) =n0 + ǫn1(x) where ǫ ≪ 1. Then to first order in ǫ the first of equations (32) is the same as for a flat disc, yielding the solution (36). Substituting this result into the second of equations (32) and working to first order in ǫ we find where a and b are constants. To avoid an unphysical solution that grows as x → 0 we must have b = 0. The component of b alongn is already guaranteed to be zero because our earlier boundary conditions ensure thatn · dn/dx = 0. Thus the Figure 2. The viscosity coefficients −Q 1 , Q 2 , Q 3 for an isothermal disc with viscosity described by a Shakura-Sunyaev α-model (eq. 26) having α = 0.2, α b = 0 (solid lines) or α = 0.1, α b = 0.1 (dashed lines). The horizontal coordinate is the dimensionless warp ψ ≡ r|dn/dr|. We plot −Q 1 because Q 1 is normally negative for small warps; for α = 0.2, α b = 0 Q 1 is negative for all ψ while for α = 0.1, α b = 0.1 Q 1 is positive for ψ > 1.106. The calculations follow the precepts of Ogilvie (1999) and employ a code provided by G. Ogilvie. two components of dn/dx perpendicular ton must vanish at the inner boundary xa, which provides the final two boundary conditions. Note that there is no similar requirement at the outer boundary, since the parasitic solution bx −1/2 decays as x → ∞. The resulting boundary-value problem is solved using a collocation method with an adaptive mesh (routine d02tvf from Numerical Algorithms Group). To improve convergence we start with zero obliquity and increase the obliquity in steps of 1 • , using the converged solution from each value of the obliquity as the initial guess for the solution for the next. Time-dependent discs We have solved the partial differential equations (28), typically over the interval [xa, x b ] with x b = 1/xa = 30. Usually the viscosity coefficients Qi are chosen to be appropriate for a disc with α = 0.2, α b = 0. The coefficients are determined as functions of the warp ψ ≡ x|∂n/∂x| using a code generously provided by G. Ogilvie (see Fig. 
2); the coefficients are tabulated on a grid 0 ψ 10 and interpolated using cubic splines. Mass, and the corresponding angular momentum for circular orbits, are added at a constant rate with a Gaussian distribution in radius centred at x = 10 (i.e., well outside the warp) and the disc is followed until it reaches a steady state. The integration is carried out using the routine d03pcf from Numerical Algorithms Group. A complication is that the dependence of the coefficient Q1 on ψ means that equation (28) is third-order in the spatial derivative; to reduce this to a second-order equation we treat the mass current cM as a fourth dependent variable in addition to the three components of the angular momentum L and integrate the second of equations (29) along with equations (28). As in the steady-state case we assume that the disc is aligned with the companion-star orbit at large radii, son =n⋆ at the outer boundary x = x b . We also assume that the steady-state relation (36) between the surface density and the mass current in a flat disc applies at the inner boundary xa; this is plausible since we expect the disc to achieve an approximate Figure 3. (left) The orientation of a stationary disc orbiting a planet that has an obliquity of 60 • (from eqs. 32). The viscosity coefficients are Q 1 = −0.3, Q 2 = 1.58416, appropriate for a flat disc with α = 0.2, α b = 0, and the mass current is c M = −1. The solutions shown have the parameter β (eq. 30) representing the ratio of viscous torques to external torques equal to 1000 (cyan), 100 (green), 10 (magenta), 1 (blue), 0.1 (yellow), 0.01 (red), 0.001 (black). The solid black circles represent the inviscid solution (the Laplace surface), given by equation (6) steady-state most rapidly at small radii. We assume that there is an outer disc boundary xo ≫ x b at which a no-torque boundary condition applies. In the steady-state disc, arguments analogous to those leading to equations (34)-(36) imply This implies in turn that at the outer boundary Typically we use xo = 10x b . Finally, the angular-momentum current at xISCO is cL = x 1/2 ISCO cMn which can be taken to be zero since xISCO is very small. Since the disc is flat inside the warp radius and the inner integration boundary xa is much less than the warp radius, we may assume that cL is constant between xISCO and xa so we set cL(xa) = 0. We usually start with a low-density disc and zero obliquity, and add mass and angular momentum outside the warp radius at a constant rate until the disc reaches a steady state; then we slowly increase the obliquity to the desired value. Results Quadrupole discs The left panel of Fig. 3 shows the solutions of equation (32) for a planet obliquity of 60 • and a range of viscosity parameters β from 1000 to 0.001. As one might expect, very viscous discs (β ≫ 1) exhibit a smooth, gradual warp while low-viscosity discs (β ≪ 1) are close to the inviscid disc (eq. 6), shown as the solid circles. The right panel shows the surface density y(x). Here the behavior is more interesting. While the surface density in very viscous discs is close to that of a flat disc (dashed line, from eq. 36), as the viscosity is lowered the disc develops a sharp valley -almost two orders of magnitude -in the surface density near the warp radius rw. The valley presumably occurs because the viscous stresses are larger when the warp ψ = x|dn/dx| is large, so the mass and angular-momentum current can be carried by a smaller surface density. 
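The continuation strategy described above (stepping the obliquity up in 1° increments and warm-starting each solve from the previous converged solution) can be summarized in a short driver sketch. This is illustrative only: solve_steady_disc below is a hypothetical stand-in, not the collocation solver actually used (NAG routine d02tvf).

```python
# Sketch of continuation in obliquity with warm starts (illustrative only).
import numpy as np

def solve_steady_disc(obliquity_deg, initial_guess):
    """Hypothetical stand-in for the real solver. A real implementation would solve
    the steady-state equations on [x_a, x_b] with the boundary conditions described
    in the text, starting from initial_guess; here it simply returns the guess so
    the driver loop runs end to end."""
    return initial_guess + 0.0 * obliquity_deg

x = np.logspace(np.log10(1.0 / 30.0), np.log10(30.0), 400)   # dimensionless radii, x_b = 1/x_a = 30
solution = np.zeros((4, x.size))                              # y and the three components of the disc normal, flat disc

for obliquity in range(1, 61):                                # raise the obliquity in 1-degree steps
    solution = solve_steady_disc(obliquity, solution)         # warm start from the previous converged solution
```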
The asymptotic behavior of the surface density as the viscosity becomes small is obtained from the first of equations (32) by substituting for |dn/dx| the value from the inviscid solution (6); this is shown as the solid circles in the right panel of Fig. 3. The nature of the surface-density valley associated with the warp is illustrated further in Fig. 4, which shows the surfacedensity profile for low-viscosity discs (β → 0) for obliquities 10 • , 20 • , . . . , 80 • . As the obliquity grows the valley becomes deeper: at an obliquity of 80 • the surface density is only 0.2 per cent of the surface density in an unwarped disc at the bottom of the valley, near radius 1.00rw. The steady-state warped discs also exhibit some spirality or twisting; this is shown in Fig. 5 by plotting the horizontal components (nx, ny) of the unit vector normal to the disc. Fig. 6 is analogous to Fig. 3: it shows the solutions of equation (32) for a Lense-Thirring disc when the BH obliquity is 60 • . The viscosity parameter β ranges from 1000 to 0.333; for β < 0.333 no steady-state solution exists. Similarly, the right panel of Fig. 6 shows the horizontal components of the unit normal in Lense-Thirring discs with 60 • , to be compared with the left panel of the same figure for quadrupole discs. Lense-Thirring discs The absence of steady-state solutions for Lense-Thirring discs for viscosity less than some critical value at fixed obliquity -or obliquity larger than a critical value at fixed viscosity -is a novel feature not seen in the quadrupole discs, and presumably related to the jump seen in the orientation of inviscid Lense-Thirring discs ( §1.2). Fig. 7 illustrates how the critical obliquity and viscosity parameter are related. The black curve shows the critical values for the simplified steady-state equations (32), with Q1 = −0.3, Q2 = 1.58416, Q3 = 0. The critical values are defined here by the point where the maximum warp ψ = 10; this is generally close to the curve with ψ → ∞ and for ψ 10 it is unlikely that our model is accurate in any case. The red curve in Fig. 7 shows the critical values obtained by solving the time-dependent equations (28) for the same constant values of Qi; in this case the critical values are defined by the obliquity at which the maximum warp of the timedependent solution exceeds ψ = 10. The agreement of the red and black curves is partly a successful check of our steady-state and time-dependent numerical codes, but more importantly it implies that time-dependent discs with obliquity above the Figure 5. The horizontal components (nx, ny) of the unit normal vector for quadrupole discs (left panel) and Lense-Thirring discs (right panel). The obliquity is 60 • and the other parameters are as described in Fig. 3 (left panel) or 6 (right panel). In both panels the parameter β (eq. 30), representing the ratio of viscous torques to external torques, is equal to 1000 (cyan), 100 (green), 10 (magenta), 1 (blue); in the left panel there are additional curves for β = 0.1 (yellow), 0.01 (red), 0.001 (black) and in the right panel there is an additional curve for the critical value β = 0.333 (black). Figure 6. (left) The orientation of a stationary disc orbiting a BH that has an obliquity of 60 • (from eqs. 32). The parameters are the same as in Fig. 3, except that the parameter β (eq. 30) representing the ratio of viscous torques to external torques equals 1000 (cyan), 100 (green), 10 (magenta), 1 (blue), and 0.333 (black). For β < 0.333 no solution exists. 
The solid black circles represent the inviscid solution, given by equation (9) Above the critical obliquity shown here, steady-state Lense-Thirring disc solutions do not exist. The parameter β measures the strength of the viscous forces (eq. 30). The solid lines are for Shakura-Sunyaev discs with α = 0.2, α b = 0 and the dashed line is for α = α b = 0.1. The black and red curves are derived from steady-state and time-dependent disc models (eqs. 28 and 32) with the viscosity parameters Q 1 and Q 2 set to their unwarped values and Q 3 = 0. The green curves are for Q i depending on the local warp, as in Fig. 2. critical value will develop singular warps -that is, for example, there is no oscillating solution of the time-dependent Pringle-Ogilvie equations that remains non-singular. The green curve shows the critical values obtained from equations (28) with viscosity parameters Qi that depend on the warp as shown in Fig. 2. This exhibits the same qualitative behavior as the black and red curves, demonstrating that the critical values are not strongly dependent on the variation of viscosity parameters with the strength of the warp. Finally, the green dashed curve is the same as the green solid curve, but for parameters Qi appropriate for Shakura-Sunyaev parameters α = 0.1, α b = 0.1. What happens to a Lense-Thirring accretion disc when the obliquity exceeds the critical value is not understood. Finite-time singularities ('blow-up') are a common feature of non-linear parabolic partial differential equations such as the Pringle-Ogilvie equations and it is likely that the absence of a solution reflects the approximation of the correct, hyperbolic, fluid equations with diffusion equations. The limitations of the diffusion approximation in warped discs are well-known: Papaloizou & Pringle (1983) argue that a transition from diffusive to wavelike behavior occurs when α decreased below H/r (see also Lin 1995 andOgilvie 2006). In this regime, bending waves governed by the pressure in the disk could transport angular momentum to connect smoothly the inner and outer disks. The behavior of such waves in Lense-Thirring discs is described by Lubow et al. (2002) but only to linear order in the warp amplitude, where the singular behavior is not present. For finite-amplitude warps, it is far from clear how to incorporate the required extra physics into the Pringle-Ogilvie equations or what behavior we might expect. The sharp changes in disc orientation seen in Fig. 6 are reminiscent of the phenomenon of 'breaking' in which the orientation of the accretion disc changes almost discontinuously , although there are substantial differences in the phenomenology and interpretation (see §4 for further discussion). The behavior of the disc at the critical obliquity At the critical obliquity or viscosity there is a radius (the 'critical radius') at which the surface density approaches zero and the disc warp ψ = r|dψ/dr| changes from near zero to a very large value (black curves in Fig. 6). We can offer some analytic insight into this behavior. Since the behavior of the disc changes sharply in a small radial distance, this change is unlikely to be due to the external torques, which vary smoothly with radius. Thus we examine the governing differential equations (28) with the right-hand side and ∂/∂τ set to zero. Then this equation states that the total angular-momentum current cvisc + x 1/2 cMn must be independent of radius x. 
We erect a coordinate system specified by the triple of unit vectorsê1,ê2,ê3 withê3 parallel to the angular-momentum current, so cvisc + x 1/2 cMn = cLê3 with the mass and angular-momentum currents cM and cL constants. For simplicity we assume that the viscosity coefficients Q1, Q2 are constants, and Q3 = 0. Then Sincen is a unit vector,n · dn/dx = 0, we may take the dot product withn to obtain The components of (40) alongê1 andê2 are Combining equations (41) and (42) The interesting behavior occurs if the mass and angular-momentum current have the same sign. In this case the non-linear differential equation (43) has a critical point at f = 1, x = (cL/cM ) 2 ≡ xc. If we restrict ourselves to the usual case in which Q1 < 0, Q2 > 0, then near the critical point solutions must take one of the following two forms: (i) f = 1; this implies an unwarped disc with normal parallel to the angular-momentum current. The surface density is given by equation (41) as In the usual case where the mass current cM < 0 this solution is physical (positive surface density) for x > xc, i.e., outside the critical point. (ii) In this case Since f < 1 and y > 0 this solution is only physical when the mass current cM < 0 and then only for x < xc, i.e., inside the critical point. The angle between the angular momentum current and the disc normal is θ where cos θ = f so θ ∼ (xc − x) 1/2 and the warp ψ = x|dn/dx| ∼ (xc − x) −1/2 . Thus the warp angle ψ is singular at the critical point. The behavior of these solutions is consistent with the behavior seen in Fig. 6 at the critical obliquity: outside the critical radius, the disc is flat and the surface density decreases linearly to zero as the radius decreases to the critical radius (eq. 44), while inside the critical radius the azimuthal angle φ − 1 2 π of the warp normal varies as (xc − x) 1/2 , and the surface density decreases linearly to zero as the radius increases to the critical radius (eq. 45). Since the surface density is zero at the critical point, there is no viscous angular-momentum transport across it, only advective transport. EVOLUTION OF VISCOUS DISCS WITH SELF-GRAVITY Our treatment of accretion discs with self-gravity will be briefer and more approximate than the treatment of discs with a companion in the preceding section, for three main reasons: (i) AGN accretion discs are the only ones in which self-gravity is likely to be important, and these are less well-understood than accretion discs around stellar-mass BHs; (ii) the theory of bending waves in gas discs is remarkably sensitive to small deviations from Keplerian motion (cf. eq. 1); (iii) we found that warped steady-state accretion discs around a spinning BH with a companion do not exist for some values of the obliquity and viscosity, and this finding requires the best available disc models to be credible. In contrast we shall find that warped discs with self-gravity exhibit interesting but physically plausible behavior even in relatively simple disc models, and there is no reason to believe that this behavior will change qualitatively in more sophisticated treatments. We shall assume that the warp is small so that linearized theory can be used, and that the disc surface-density distribution is the same as in a flat disc. We shall also assume a simple model for the viscous damping of the warp. We also ignore the effects of pressure in the disc. This assumption is problematic because Papaloizou & Lin (1995) showed that in gravitationally stable Keplerian discs (Q > 1 in eq. 
69) the dispersion relation for bending waves is dominated by pressure rather than self-gravity. However, (i) this result depends sensitively on whether the disc is precisely Keplerian, and small additional effects such as centrifugal pressure support or relativistic apsidal precession can dramatically reduce the influence of pressure on the dispersion relation; (ii) modifying the Pringle-Ogilvie equations to include pressure is a difficult and unsolved problem. The normal to the disc at radius r isn = (nx, ny, nz). We choose the axes so that the BH spin is along the positive z-axis; then since the warp is small |nx|, |ny| ≪ 1. Write ζ(r, t) ≡ nx + iny; then neglecting all terms quadratic in ζ the Lense-Thirring torque (3) causes precession of the angular momentum at a rate The equations of motion due to the self-gravity of the warped disc are given by classical Laplace-Lagrange theory (Murray & Dermott 1999), where Σ(r) is the surface density, χ = min (r, r ′ )/max (r, r ′ ) and the Laplace coefficient with K(χ) and E(χ) complete elliptic integrals. The equations of motion due to viscosity are derived by simplifying equations (24) and (25). The angular-momentum current proportional to Q1n and the mass current CM determine the steady-state surface density in a flat disc, which we assume to be given, so we drop these terms. The current proportional to Q3 appears to play no essential role, so we drop this term as well. Furthermore we assume that the sound speed cs is independent of radius (isothermal disc), and we replace Q2 by 1 2 α ⊥ (eq. 33). Thus we find We now look for a steady-state solution in which dζ/dt|LT +sg+v = 0. We replace the radius by the dimensionless variable x = r/rw where rw is defined for a self-gravitating disc by equation (21), and we assume that the surface density is a power law, Σ(r) = Σ0/x s . The equations above simplify to 4 where γ is the viscosity parameter defined in equation (22). We impose the boundary conditions dζ/dx = 0 as x → 0 and x → ∞ (the disc is flat near the BH, and flat far outside the warp radius) and ζ → ζ0 at x → ∞ (at large distances the normal to the disc is inclined to the spin axis of the BH by an angle θ = |ζ0| ≪ 1). Since equation (50) is linear, there is no loss of generality if we set ζ0 = 1. In these dimensionless units, the shape of the warp is determined by only two parameters, the logarithmic slope of the surface-density distribution s, and the viscosity parameter γα ⊥ . The relation between α and α ⊥ is discussed after equation (33). Fig. 8 shows the solutions of equation (50) for the surface-density slope s = 3 5 appropriate for a gas-pressure dominated disc (eq. 66). The solid and dashed lines show the real and imaginary parts of ζ(x). For low-viscosity discs (γα ⊥ ≪ 1) we find that the disc develops bending waves inside the warp radius, and if the viscosity is sufficiently small the bending waves can grow in amplitude by orders of magnitude as the radius shrinks (the disappearance of the bending waves at x < 0.18 in the lower right panel is a numerical artifact, which arises because the wavelength of the bending waves becomes shorter than the resolution of the numerical grid, ∆ log 10 x = 0.002). Many of the properties of the bending waves can be understood using a WKB analysis (Shu et al. 1983, hereafter SCL83). We shall quote the results from this paper without derivations. If we assume that the waves have the form ζ = A ζ (r) exp[iΦ(r)] with radial wavenumber k ≡ dΦ/dr, then the dispersion relation is (SCL83 eq. 
22, with ω = 0 and m = 1) The WKB approximation is valid if the waves have short wavelengths, |k|r 1, which in turn requires that the radius is less Figure 8. The steady-state shape of warped discs including Lense-Thirring torque, self-gravity, and viscosity (eq. 50). The four panels show four different values of the viscosity parameter γα ⊥ (eq. 22). The figures plot the real and imaginary parts of the complex inclination ζ (solid black and dashed green lines) as a function of the radius in units of the warp radius rw (eq. 21). At large radii the disc is assumed to be flat with ζ = 1; since eq. (50) is linear the results can be scaled to any (small) inclination. At small radii the disc is found to lie in the BH equator, ζ = 0. Note the different vertical scales in the four panels. The disappearance of the oscillations at x < 0.18 in the lower right panel is a numerical artifact due to limited resolution. than the warp radius rw defined in equation (21); and this in turn requires that the dimensionless variable x in Fig. 8 is small compared to unity. For plausible variations of the surface density Σ(r), the wavelength 2π/|k| gets shorter and shorter as the radius shrinks. In the absence of viscosity, the maximum inclination of the bending wave varies as A ζ (r) ∝ [r 3/2 Σ(r)] −1 (SCL83, eq. 34, with the inclination amplitude A ζ = A(r)/r) so if the surface density falls as r −s then the amplitude of the warp grows as the radius shrinks whenever s < 3 2 , which is true for most disc models. The waves are spiral, as may be deduced from the offset between the solid (real) and dashed (imaginary) curves in Fig. 8 (except in the lower right panel, where the viscosity is zero). The dispersion relation (51) does not distinguish leading and trailing waves but causality arguments do: trailing waves propagate inward (i.e. negative group velocity, see SCL83 eq. 23) while leading waves propagate outward. Waves excited by the warp in the outer part of the disc and damped at small radii by viscosity must propagate inward and hence are trailing. In the case of low-viscosity Lense-Thirring discs that are warped because of a companion, we found that no solutions of the Pringle-Ogilvie equations existed above a critical obliquity. These calculations suggest that self-gravitating discs are more well-behaved -that the long-range nature of the gravitational force allows a smooth transition from the outer to the inner orientation for any viscosity and obliquity, through the excitation of bending waves that are eventually damped by viscosity as they propagate inward. However, we caution that the analysis of this section is linear in the warp amplitude and it is possible that non-linear effects will prohibit a continuously varying warp shape once the obliquity is large enough. This physical picture needs to be modified for AGN discs dominated by radiation pressure, where the surface density varies as Σ(r) ∝ r 3/2 (eq. 65) out to a radius rpr (eq. 68) where gas pressure begins to dominate, after which the surface density declines as r −3/5 . If rpr rw, the bending waves are launched as usual at the warp radius rw and propagate smoothly into the region r < rpr, although their dispersion relation will change once they enter the radiation-dominated region. 
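A one-line worked scaling, using only the inviscid amplitude relation quoted above, Aζ(r) ∝ [r^{3/2} Σ(r)]^{-1}, together with the gas-pressure surface-density slope s = 3/5, makes the inward growth of the bending waves more concrete:
\[ A_{\zeta}(r) \propto \frac{1}{r^{3/2}\Sigma(r)} \propto r^{\,s-3/2} = r^{-9/10} \;\;(s=\tfrac{3}{5}), \qquad \frac{A_{\zeta}(r/10)}{A_{\zeta}(r)} = 10^{\,3/2-s} \simeq 8, \]
so in the absence of viscous damping the warp amplitude grows by roughly a factor of eight for every decade the radius shrinks.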
If rpr is larger than rw, the gravitational torque will include a significant contribution from material in the accretion disc near rpr (the torque from material between R ≫ r and 2R varies as GΣ(R)r 2 /R ∼ R 1/2 ) in addition to the gravitational torque from local material. This extra torque will tend to counter-act the Lense-Thirring torque, and if it is large enough will prevent the excitation of bending waves. In summary, for low-viscosity discs in which self-gravity is important, misalignment of the disc axis at large radii with the BH spin axis can excite bending waves inside the warp radius (21). For discs dominated by gas pressure, where the surface density Σ(r) ∝ r −0.6 , Fig. 8 shows that the condition for exciting oscillatory waves is γα ⊥ ≃ 0.05. For warps of sufficiently small amplitude, α ⊥ = 1 2 α −1 (eq. 33) so the condition for exciting bending waves is γ 0.01(α/0.1). RELATED WORK Most treatments of warped Lense-Thirring discs neglect torques from the companion in determining the shape and evolution of the disc; we may call this the Bardeen-Petterson approximation since it first appears in Bardeen & Petterson (1975). The approximation is only valid if the torque associated with viscous angular-momentum transport exceeds the Lense-Thirring and companion torques at the point where the latter two are equal, the warp radius rw (eq. 10), which in turn requires β 1 (eq. 30). One of the few treatments of warped AGN accretion discs to include both Lense-Thirring and tidal torques is Martin et al. (2009). In fact the warp radius rwarp defined in their equation (15) is almost the same as the radius rw defined in our equation (9), rwarp = rw/2 2/9 . Martin et al. also define a tidal radius r tid and a Lense-Thirring radius rLT where viscous torques balance tidal and Lense-Thirring torques, respectively. Our parameter β, defined in equation (30), is just 2 1/9 (r tid /rLT) 10/9 . Martin et al. find numerical solutions for steady-state discs with obliquities up to 80 • but all their models have r tid /rLT 1 and their models with obliquities > 20 • have r tid /rLT = 10. Therefore they do not explore the regime with β 1 where the critical obliquity becomes apparent. Scheuer & Feiler (1996) give a simple analytic description of warped accretion discs, derived from the Pringle-Ogilvie equations by linearizing in the warp angle. The main focus of their analysis is on estimating the rate at which the BH aligns its angular momentum with that of the accreting material. Unfortunately, the linearization drops the term proportional to |∂n/∂x| 2 in equation (29), and without this term low-viscosity Lense-Thirring discs develop a thin boundary layer in which the warp angle jumps sharply, so the linearization is not self-consistent when β is sufficiently small. and have argued that warped discs described by the Pringle-Ogilvie equations can 'break' or 'tear' -divide into inner and outer parts with discontinuous orientations -if the obliquity 45 • . As described in their papers, this phenomenon does not appear to be directly related to our critical obliquity, for several reasons: (i) Nixon & King do not include torques from a companion in their analysis, i.e., the parameter β in equation (30) is very large, whereas we find that the critical obliquity is important only for β 1 (Fig. 7). 
(ii) Nixon & King argue that the breaking phenomenon arises through the dependence of the viscosity parameters Qi on the warp ψ, whereas we have found that the critical obliquity is almost the same whether or not this dependence is included in the differential equations. (iii) We do not see breaks in our high-viscosity (β = 1000) solutions, even for obliquities exceeding 88 • , probably because our expression for Q2(ψ) is relatively flat (Fig. 2) whereas Nixon & King's falls sharply toward zero for ψ 1 (their Fig. 1) 4 . APPLICATION TO OBSERVED ACCRETION DISCS The accreting BHs found in astrophysical systems span a wide range of inferred mass, from M• ∼ 5 M⊙ up to ∼ 10 10 M⊙. Within this range they mostly fall -so far -into one of two distinct classes. At the low-mass end, M• ∼ 10 M⊙, the BHs all belong to close binary systems. The BH accretes mass from its companion star, either by Roche-lobe overflow or by capturing a fraction of the mass lost in a wind. Roche-lobe overflow tends to occur in low mass X-ray binaries (LMXBs), in which the companion is an evolved star with M⋆ 1.5 M⊙. Wind-driven accretion is found in high mass X-ray binaries (HMXBs), where the companion is an O or B star with M⋆ 10 M⊙. The secondary star provides the tidal torque in equation (4), which is also thought to set the outer radius of the accretion disc. The dynamics and geometry of accretion in these systems is relatively well-understood and useful summaries are found in Frank et al. (2002) and . The second class consists of supermassive BHs, with M• ∼ 10 5 -10 10 M⊙, which are found -so far -at the centres of galaxies and primarily accrete gas from the interstellar medium of their galaxy. When mass is supplied at sufficiently high rates, these are observed as AGN (Krolik 1999). The properties of these systems and how they are fed from the interstellar medium are less well understood than binary systems and there are fewer empirical constraints on the properties of the disc 5 . We discuss these two classes of Lense-Thirring discs in the next two subsections. Stellar-mass black holes in binary star systems In these binaries the X-ray emission comes from the vicinity of a neutron star or BH (the 'primary'), while the accreted mass and the tidal torque (4) comes from the companion star (the 'secondary'). The masses of the primary and secondary, M and M⋆, and their orbital separation r⋆ are inferred from the orbital period, the spectral type and velocity semi-amplitude of the secondary, periodic variations in the flux from the secondary due to its tidal distortion by the primary, eclipses, etc. In most cases the main evidence that the primary is a BH rather than a neutron star is that its mass exceeds the upper limit to the mass of a neutron star, ∼ 3 M⊙ (Lattimer & Prakash 2005). Compilations of BH X-ray binary system parameters can be found in Tables 4.1 and 4.2 of and Table 1 of . The inferred BH masses have a relatively narrow distribution -the best estimates in ∼ 20 systems range from 4.5 to 14 M⊙ -with a mean near M• ∼ 7 M⊙. The BH spin a• is more difficult to measure. The two most commonly used methods are continuum fitting (e.g. McClintock et al. 2011) and Fe line modeling (Tanaka et al. 1995). Only a range of plausible spins can be inferred, even for the best systems, and both methods are subject to systematic uncertainties. For our purposes, the most important result is that the majority of systems are not consistent with a• = 0, implying that Lense-Thirring precession can be significant. 
Since the parameter β (eq. 30) depends relatively weakly on a• (β ∝ a −4/9 • ), we simply adopt a• = 0.5 as a characteristic value. There is strong circumstantial evidence for warps in several X-ray binaries. The jets in the eclipsing X-ray binary SS 433 precess with a 162 d period, likely because the jet direction is normal to a precessing warped accretion disc. The 35 d period of Her X-1 is believed to be due to eclipses by a warped disc, and this is also the likely explanation for some of the long-term periodicities observed in other X-ray binaries, such as LMC X-4 and SMC X-1 (Charles et al. 2008). There is also evidence for misalignment between the binary orbital angular momentum and BH spin angular momentum in GRO J1655−40 and V4641 Sgr, if one assumes that the jet axis is aligned with the BH spin axis (e.g., Fragile et al. 2001;Maccarone 2002). Most BH candidates with mass estimates are LMXBs, and only a handful are HMXBs. In the Roche-lobe overflow systems that comprise the bulk of LMXBs, it is thought that the tidal torque from the companion truncates the accretion disc at an outer radius rout ≃ 0.9rL1 where rL1 is the Roche radius 6 of the primary (Frank et al. 2002). Fitting of ellipsoidal variations of LMXBs with BH primaries generally yields rout values consistent with this assumption (J. Orosz, private communication). In LMXB systems, the secondaries are generally evolved F-K spectral types with M⋆ ∼ M⊙, so we scale the companion mass M⋆ to M⊙. Orbital periods P range from a few hours to several days so we scale the period to 10 5 s = 27.8 h. Then the separation or semimajor axis is The large range of P translates into a fairly broad range in r⋆. At the lower end of the range, corresponding to periods of a few hours, we expect r⋆ ≃ 2-3 R⊙, although r⋆ can be much larger than this estimate in some cases such as . (53) 5 We do not consider the ultraluminous X-ray sources with L 10 40 erg s −1 . If these radiate isotropically and do not exceed the Eddington limit, they require BHs with M• 100 M ⊙ . Whether or not these are, in fact, intermediate-mass BHs or normal HMXBs, the implied accretion rates suggest that ultraluminous X-ray sources arise from a short-lived phase of rapid mass transfer in a close binary (King et al. 2001). 6 'Roche radius' is defined as the radius of a sphere with the same volume as the Roche lobe; the distance to the collinear Lagrange point from the centre of the star is larger by ∼ 25-40 per cent, depending on the mass ratio. An analytic approximation to the Roche radius as a function of mass ratio is given by Eggleton (1983). Assuming a mass ratio M•/M⋆ = 7 the primary's Roche radius is rL1 = 0.55r⋆, so if the outer disc edge is at rout ≃ 0.9rL1 we have rout ≃ 0.5r⋆. Hence, for typical LMXBs the warp radius (53) is well inside the outer disc radius (cf. eq. 52). Similar conclusions hold for HMXBs. We consider the specific example of M33 X-7 since it is the best-understood HMXB system due to its X-ray eclipses and well-determined distance (Orosz et al. 2007;Liu et al. 2008). In this case we have M⋆ = 70 ± 7 M⊙, M• = 15.7 ± 1.5 M⊙, r⋆ = 42 ± 2 R⊙, a• = 0.84 ± 0.05, yielding a warp radius rw = 0.34 R⊙. Orosz et al. also find that the outer radius of the disc is rout = (0.45 ± 0.04)rL1; for the observed mass ratio rL1 = 0.5r⋆ (Eggleton 1983) so rout = 9.5 R⊙. Again, the warp radius is well inside the outer disc radius 7 . 
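The separation and Roche-lobe numbers used above follow from Kepler's third law and Eggleton's (1983) fitting formula; a quick check with representative LMXB parameters (a 7 M⊙ black hole, a 1 M⊙ companion, and the orbital periods quoted in the text) is sketched below. This is only an illustration, not a fit to any particular system.

```python
# Sketch: binary separation from Kepler's third law and the primary's Roche radius
# from Eggleton's (1983) approximation, for typical LMXB parameters (cgs units).
import numpy as np

G = 6.674e-8          # gravitational constant
MSUN = 1.989e33       # solar mass, g
RSUN = 6.957e10       # solar radius, cm

def separation(P_sec, M1_msun, M2_msun):
    """Semimajor axis from Kepler's third law, a^3 = G (M1 + M2) P^2 / (4 pi^2)."""
    M = (M1_msun + M2_msun) * MSUN
    return (G * M * P_sec**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)

def roche_radius_fraction(q):
    """Eggleton (1983): r_L/a = 0.49 q^(2/3) / [0.6 q^(2/3) + ln(1 + q^(1/3))],
    where q is the mass ratio of the star whose lobe is wanted to its companion."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + np.log(1.0 + q ** (1.0 / 3.0)))

if __name__ == "__main__":
    print(f"a(P = 1e5 s) = {separation(1e5, 7.0, 1.0) / RSUN:.1f} Rsun")        # ~9 Rsun
    print(f"a(P = 4 h)   = {separation(4 * 3600, 7.0, 1.0) / RSUN:.1f} Rsun")   # ~2.5 Rsun
    fL1 = roche_radius_fraction(7.0)          # BH's lobe for q = M_BH/M_star = 7
    print(f"r_L1/a = {fL1:.2f}; r_out ≈ 0.9 r_L1 ≈ {0.9 * fL1:.2f} a")          # ~0.55 and ~0.5
```

The few-hour periods indeed give r⋆ of a couple of solar radii, and q = 7 reproduces rL1 ≃ 0.55 r⋆ and rout ≃ 0.5 r⋆ as quoted above.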
The strength of the viscous torque can be parametrized through the disc aspect ratio H/r, which is related to the sound speed through cs = ΩH. The aspect ratio can be estimated using the standard thin-disc model of Shakura & Sunyaev (1973). In BH X-ray binaries, the warp radius is much larger than the BH event horizon, so we can ignore relativistic effects and corrections due to the inner boundary condition; moreover at the warp radius the radiation pressure is negligible. We can therefore use equation (66) We assume that the Shakura-Sunyaev parameter α (eq. 26) is approximately 0.1, based on modeling of dwarf novae and soft X-ray transients (King et al. 2007). This equation is determined by balancing local viscous heating with radiative cooling. However, the spectra from the outer regions of discs in LMXBs show evidence that irradiation by X-rays dominates over local dissipation (van Paradijs & McClintock 1994). Simple models of the X-ray irradiated outer disc imply only a weak dependence of H/R on R (e.g., Dubus et al. 1999). So we make an alternative estimate of the aspect ratio, valid for the outer parts of the disc, by scaling to a characteristic temperature T and assuming hydrostatic equilibrium. Then we have approximately Soft X-ray transient LMXBs are believed to be triggered by a disc instability associated with hydrogen ionization (Lasota 2001) so one expects the outer disc has T 10 4 K at the beginning of an outburst, but the temperature may rise to as high as T ∼ 10 5 K during outburst. Taken together equations (54) and (55) imply (H/R) 2 ≃ 10 −5 -10 −3 in most discs. Inserting the above estimates into equation (30) where H/r is evaluated at the warp radius. Therefore, we generally expect β ≫ 1, that is, viscous torques are more important than the torque from the secondary star in determining the warp shape. In order to have the companion torque dominate the warp dynamics, we need α ⊥ β 1, which requires a nearby companion (the shortest orbital periods of X-ray binaries are a few hours, corresponding to r⋆ ∼ 3 R⊙) and, more importantly, a cool disc with H/r 10 −3 . This is plausible for quiescent discs, with low accretion rates, as long as irradiation by the central X-ray source does not enforce a larger H/r at the radius of the warp. One might even speculate that the absence of a steady-state solution for warped discs with β 1 is the process that drives disc instability and outbursts in some X-ray binaries. Warped discs in active galactic nuclei There is strong circumstantial evidence that warps are common in AGN accretion discs. Maser discs having modest warps on 0.1-1 pc scales are present in NGC 4258 (Herrnstein et al. 2005), Circinus (Greenhill et al. 2003), and four of the seven galaxies examined by Kuo et al. (2011). Warped discs may obscure some AGN and thus play a role in unification models of AGN based on orientation (Nayakshin 2005). The angular-momentum axis of material accreting onto the AGN, as traced by jets or other indicators, is not aligned with the axis of the host galaxy on large scales (Kinney et al. 2000). Radio jets from AGN often show wiggles or bends that may arise from precession of the jet source (e.g., 3C 31). Finally, frequent and variable misalignments of the BH spin axis with the angular momentum of accreted gas are expected theoretically because of clumpy gas accretion, inspiral of additional BHs, and rapid angular-momentum transport within gravitationally unstable gas discs (Hopkins et al. 2012). 
AGN accretion discs are much less well-understood than X-ray binary discs. There is no obvious source of external torque analogous to the companion star in X-ray binaries -except in the case of binary BHs, which we defer to §5.2.1. In the absence of external torques, warping can arise from a misalignment between the orbital angular momentum of the inflowing material at the outer edge of the disc and the spin angular momentum of the BH at its centre. Then in the absence of other torques the shape of the warp is determined by the competition between viscous torques and the Lense-Thirring torque (the Bardeen-Petterson approximation). However, AGN discs are much more massive than X-ray binary discs relative to their host BH, and this raises the possibility that the self-gravity of AGN discs plays a prominent role in determining the shape of the disc. Self-gravitating 9 warped discs have mostly been investigated in the context of galaxy discs, which are sometimes warped in their outer parts. There is a large literature on the dynamics of galactic warps (e.g., Hunter & Toomre 1969;Sparke & Casertano 1988;Binney 1992;Nelson & Tremaine 1996;Sellwood 2013). Very few authors have examined the properties of self-gravitating warped discs in the context of AGN. One notable exception is Ulubay-Siddiki et al. (2009), who computed the shapes of warped self-gravitating discs orbiting a central mass, modeling the disc as a set of concentric circular rings and computing the gravitational torques between each ring pair. However, they did not include either Lense-Thirring or viscous torques so their calculations do not address the issues that are the focus of the present paper. We first describe a simple analytic model for flat AGN accretion discs, which we shall use to estimate the relative importance of self-gravity and viscous stresses in warped discs. Our model is similar to earlier analytic models by Shakura & Sunyaev (1973), Pringle (1981), Collin-Souffrin & Dumont (1990), and others. We assume that the density ρ(r, z) in the disc is small compared to M•/r 3 . Then hydrostatic equilibrium requires where pt = pg + pr is the sum of the gas and radiation pressure, Ω 2 = GM•/r 3 , and Rz is a dimensionless factor discussed below. The equation of energy conservation is where Fr is the emissivity from one surface of the disc and τ rφ is the viscous stress tensor. Together with Rz above, RR and RT are dimensionless factors that depend on radius and the BH spin parameter a• and approach unity for r ≫ Rg, where as usual Rg = GM•/c 2 is the gravitational radius of the BH. These quantities, defined in Chapter 7 of Krolik (1999), account approximately for general-relativistic effects and incorporate the assumption of no torque at the radius rISCO of the innermost stable circular orbit. Coupling equation (58) to the equation for conservation of angular momentum in a flat steady-state disc allows one to solve for Fr, where L/L Edd is the ratio of the bolometric luminosity of the disc to the Eddington luminosity, κ is the electron scattering opacity (assumed to be ≃ 0.34 cm 2 g −1 ), and ǫ = L/(Ṁ•c 2 ) is the radiative efficiency. We now make the standard α-disc approximations that the stress has the form (eq. 26) and that the rate of energy dissipation per unit mass is independent of z. Then the radiation pressure and the temperature at the midplane of the disc are where σB is the Stefan-Boltzmann constant. The gas pressure at the midplane is where kB and ρ0 are Boltzmann's constant and the midplane density. 
The mean particle mass µ is taken to be the proton mass times 0.62, appropriate for fully ionized hydrogen plus 30 per cent helium by mass. In the last equation we have replaced ρ0 by Σ/(2H) where H is the disc thickness. We now substitute these results into equations (57) and (58) with the replacements d/dz → 1/H, z → H, and dz → 2H, 9 As described in the Introduction, by 'self-gravitating' we mean that the self-gravity of the warped disc dominates the angular-momentum precession rate, not that the disc is gravitationally unstable or that its mass is comparable to the BH mass. to obtain FrκΣ and HFrκΣ For given values of the radius r, the gravitational radius Rg, the efficiency ǫ, and the Eddington ratio L/L Edd , the second of these equations can be solved for the disc thickness H. Then the result can be substituted into the first equation to yield a tenth degree polynomial in Σ 1/4 , which can be solved numerically to find the surface density (Zhu et al. 2012). The analysis is simpler when the accretion disc is dominated by radiation pressure or gas pressure. For radiation-pressure dominated discs we set pg = 0 in equations (63) and (64). We then find Similarly, when radiation pressure is negligible, Σg = 2 14/5 π 3 7/5 5 1/5 and Hg = 3 1/5 5 1/10 2 2/5 π 1/2 h 3/10 (GM•) 9/10 µ 2/5 κ 1/10 c 21/10 α 1/10 ǫ 1/5 R 1/10 R With these scalings, we can compute most properties of interest in the disc. For example, radiation pressure dominates when Hr > Hg which occurs for radii less than The disc is gravitationally unstable if Toomre's (1964) Q parameter is less than unity; this parameter is approximately In the radiation-and gas-pressure dominated regimes (respectively) we have Equation (71) gives implicit relations for rw because of the radial dependence of the relativistic factors. However, this dependence is rather weak for typical AGN disc models: for the case a• = 0.5, M = 10 8 M⊙, α = 0.1 and L/L Edd = 0.1, we have RR = 0.81, RT = 0.81, and Rz = 1.01 at rw, corresponding to values of 0.96 and 1.05 for the products of relativistic factors in the radiation-pressure and gas-pressure dominated limits of equation (71). where as usual the two equations correspond to the radiation-pressure dominated and the gas-pressure dominated regions. Thus, in our fiducial case -a disc surrounding a 10 8 M⊙ BH radiating at 10 per cent of the Eddington luminosity, with spin parameter a• = 0.5, efficiency ǫ = 0.1, and Shakura-Sunyaev parameter α = 0.1 -the gravitational radius is Rg = 1.48 × 10 13 cm; the warp radius is just inside the radiation-pressure dominated region at rw = 4.3 × 10 15 cm = 290Rg ; the disc becomes gas-pressure dominated outside rpr = 5.0 × 10 15 cm ≃ 340Rg ; the disc becomes gravitationally unstable outside 3.4 × 10 16 cm ≃ 2300Rg ; and the disc warp is governed by Lense-Thirring and self-gravitational torques, with viscous torques smaller by a factor of γα ⊥ ≃ 0.14α ⊥ where α ⊥ ∼ 1 for a Shakura-Sunyaev parameter α ≃ 0.1. We supplement these formula with three sets of plots. These plots are based on the analysis in equations (57)- (64) with three refinements to the analytic formulae (65)-(72): (i) we include both gas and radiation pressure at all radii; (ii) we include the effects of the relativistic parameters Rz, RT , and RR; (iii) we compute the efficiency ǫ from the spin parameter a• using the estimates from Novikov & Thorne (1973). 
Thus the plots assume thin-disc accretion with no torque at the inner boundary, which is assumed to lie at rISCO, the radius of the innermost stable circular orbit. Fig. 9 shows Toomre's Q (eq. 69), the aspect ratio H/r, the surface density Σ, and the ratio γ of viscous and self-gravity torques for BH masses of 10^7 M⊙, 10^8 M⊙, and 10^9 M⊙. Fig. 10 shows a similar plot for Eddington ratios L/L Edd of 1, 0.1, and 0.01. Figs. 9 and 10 show that the transition from radiation pressure to gas pressure dominance occurs in the range of 100 to 10^4 Rg, and depends more strongly on L/L Edd than M•. The radii where Q declines below unity (onset of local gravitational instability) and γ declines below unity (self-gravity torque stronger than viscous torque) are not very different, so care must be taken when applying analytic formulae that assume either radiation or gas pressure to dominate. Fig. 11 compares the warp radius rw to three characteristic disc radii for a range of disc parameters. We have defined the self-gravity radius rQ as the radius where Q = 1, rpr as the radius where the gas and radiation pressure are equal (cf. eq. 68), and r5000 as the half-light radius for emission at 5000Å, assuming that the disc radiates locally as a blackbody. Since γ is smaller than Q by a factor of H/r (see discussion following eq. 22), we always have rw < rQ. The disc is generally in the radiation-dominated regime at rw, but can fall in the gas-pressure dominated region for smaller BH mass M•, smaller Eddington ratio L/L Edd, or spin parameter a• near unity. The dependence of all the characteristic radii on a• is rather weak, except for a• → 0 or 1. Note that for α ≃ 0.1 all of the discs shown in these figures have α ≫ H/r (except for r ≲ 100Rg when L/L Edd = 1), so the condition (1) for non-resonant warp behavior is satisfied by a large margin. For most of the parameter space we have examined, the warp radius rw is just outside (1-3 times larger than) the optical radius r5000. However, if warping causes the disc to intercept a larger fraction of the emission from smaller radii, the region where the warp is strong may dominate the optical emission. The flux of radiation coming from the inner disc that irradiates the outer disc is approximately where Lin is the characteristic luminosity from the inner disc and θ is the angle between the normal to the warped outer disc and the incoming flux. For thin discs, cos θ ≃ H/r ≪ 1 and, since H is independent of r in the radiation-dominated regime, Firr ∝ r^{-3}. This is the same scaling as the intrinsic disc emission (eq. 59), so disc irradiation has little effect on the radial emission profile of an unwarped disc. However, if the disc has a significant warp, cos θ ≫ H/r and the irradiating flux can exceed the intrinsic disc emission. In this case the characteristic disc temperature will be where χ is a (poorly constrained) reduction factor added to account for the fraction of the disc luminosity intercepted by the warp, the characteristic emitting area of the warp, and the albedo. The wavelength at which blackbody emission peaks for Tirr = 1.1 × 10^4 K is λ ≃ ch/(3kB Tirr) = 4400Å. Since rw exceeds the nominal half-light radius of the unirradiated disc, the reradiated emission at the warp can easily dominate. If so, the true half-light radius for optical emission should be roughly given by rw rather than r5000.
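The fiducial numbers quoted above are simple unit conversions once Rg = GM•/c^2 is fixed; the short sketch below reproduces them, together with the Wien-like peak wavelength λ ≃ ch/(3kB Tirr), taking the centimetre radii and the irradiation temperature directly from the text (cgs constants are assumed).

```python
# Sketch: gravitational radius, fiducial disc radii in units of Rg, and the
# blackbody peak wavelength for the quoted irradiation temperature (cgs units).
G, c, MSUN = 6.674e-8, 2.998e10, 1.989e33
h, kB = 6.626e-27, 1.381e-16

M_bh = 1e8 * MSUN                       # fiducial case: 1e8 Msun, a = 0.5, L/L_Edd = 0.1
Rg = G * M_bh / c**2
print(f"Rg = {Rg:.3g} cm")              # ~1.48e13 cm, as quoted

quoted_radii_cm = {                     # centimetre values quoted in the text
    "warp radius r_w": 4.3e15,          # ~290 Rg
    "radiation/gas transition r_pr": 5.0e15,   # ~340 Rg
    "gravitational-instability radius": 3.4e16,  # ~2300 Rg
}
for name, r in quoted_radii_cm.items():
    print(f"{name}: {r / Rg:.0f} Rg")

T_irr = 1.1e4                           # K, quoted irradiation temperature
lam = c * h / (3.0 * kB * T_irr)        # Wien-like estimate, lambda ~ ch/(3 kB T)
print(f"peak wavelength ≈ {lam * 1e8:.0f} Angstrom")   # ~4400 Å
```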
This result is relevant to recent constraints on the size of quasar emission regions obtained by modeling the variability due to gravitational microlensing in an intervening galaxy. In the majority of cases that have been studied, the sizes inferred from microlensing exceed the predicted half-light radii of flat α-disc models by factors of ∼ 3-10 (e.g. Mortonson et al. 2005; Pooley et al. 2007).
[Figure caption fragment: "...(63) and (64), while the dashed and dotted curves show the analytic approximations assuming that radiation and gas pressure (respectively) dominate. The warp radii are marked by filled circles."]
Morgan et al. (2010) find a best fit in which the microlensing size at 2500Å scales as M•^{0.8} for a sample of 11 sources with estimated M• = 4 × 10^7 M⊙ to 2.4 × 10^9 M⊙. This is the same scaling as rw,r with M• in equation (71) and also agrees well with the dependence of the warp radius on M• found in Fig. 11. Unfortunately this is not a very sensitive test: for a flat disc, the radius at a given temperature scales as M•^{2/3}, and in the Bardeen-Petterson approximation the warp radius scales as M•^{9/8}. The absolute scale for the microlensing size at 2500Å is a factor of ∼ 6 smaller than our estimate for rw,r, but this is subject to some uncertainty and might be accounted for by bending waves excited interior to rw (compare Fig. 8). An important but poorly understood issue is what fraction of AGN accretion discs are likely to be warped. Over long times, warps are damped out as the BH spin axis aligns with the outer disc. A rough estimate of this time-scale is t_align ≃ [L•/(πr^2 Σ TLT)]_rw, where L• is the spin angular momentum of the BH and the quantity in parentheses is the Lense-Thirring torque per unit mass TLT times the disc mass evaluated at the warp radius rw. Using equation (3) and the expression for L• given just above it, we find where in the second expression we have used (21) to eliminate the surface density. For our fiducial case - M• = 10^8 M⊙, L = 0.1L Edd, a• = 0.5, ǫ = 0.1, α = 0.1 - the warp radius is ∼ 300Rg and t_align = 1.3 × 10^5 yr (rw/300Rg)^4, much shorter than the typical AGN lifetime (the Salpeter time, 5 × 10^7 yr for ǫ = 0.1). Much more uncertain is the time-scale on which warps are excited. High-resolution simulations of the centres of galaxies show order unity variations in the gas inflow rate at 0.1 pc on time-scales less than 10^5 yr (Hopkins & Quataert 2010, fig. 6), and these are presumably accompanied by similar variations in the angular momentum of the inflowing gas.
Figure 10. As in Fig. 9, except for BH mass 10^8 M⊙ and Eddington ratios of 1 (black), 0.1 (red), and 0.01 (blue).
In such an environment the orientation of the outer parts of the accretion disc is likely to vary stochastically on time-scales less than the damping time, and in this case most AGN accretion discs will be warped. Binary black holes Most galaxies contain supermassive BHs at their centres, and when galaxies merge these BHs will spiral to within a few parsecs of the centre of the merged galaxy through dynamical friction (e.g., Begelman et al. 1980; Yu 2002). Whether they continue to spiral to smaller radii remains unclear, but if the binary decays to a sufficiently small semimajor axis - typically 0.1-0.001 pc, depending on the galaxy and the BH mass ratio - the loss of orbital energy through gravitational radiation will ensure that they merge.
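For a circular binary the gravitational-radiation merger time is the standard Peters (1964) result, t = (5/256) c^5 a^4/[G^3 m1 m2 (m1+m2)], which is what equation (76) in the next paragraph refers to. The sketch below evaluates it and inverts it for the separation at a fixed merger time; the equal-mass 10^8 M⊙ pair with tmerge = 10^10 yr anticipates the fiducial case discussed there.

```python
# Sketch: Peters (1964) merger time for a circular binary, and the separation
# corresponding to a chosen merger time (cgs units).
G, c, MSUN = 6.674e-8, 2.998e10, 1.989e33
YR = 3.156e7    # seconds per year

def t_merge(a_cm, m1_msun, m2_msun):
    """t = (5/256) c^5 a^4 / [G^3 m1 m2 (m1 + m2)] for a circular orbit."""
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    return 5.0 / 256.0 * c**5 * a_cm**4 / (G**3 * m1 * m2 * (m1 + m2))

def a_for_t_merge(t_sec, m1_msun, m2_msun):
    """Invert the expression above for the semimajor axis."""
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    return (256.0 / 5.0 * G**3 * m1 * m2 * (m1 + m2) * t_sec / c**5) ** 0.25

if __name__ == "__main__":
    M = 1e8                                  # equal-mass pair, i.e. mu = 1
    a = a_for_t_merge(1e10 * YR, M, M)
    Rg = G * M * MSUN / c**2
    print(f"a = {a:.2e} cm = {a / Rg:.2e} Rg")          # ~2.4e17 cm ~ 1.6e4 Rg
    print(f"consistency: t_merge(a) = {t_merge(a, M, M) / YR:.2e} yr")
```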
If one of the BHs (the primary) supports an accretion disc, and the spin axis of the primary is misaligned with the orbital axis of the binary, the accretion disc will be warped 10 . In this case both the self-gravity of the disc and the tidal field from the secondary, as well as viscous stresses and the Lense-Thirring effect, can play important roles in shaping the warp. For the sake of simplicity, we do not examine all of these torques simultaneously: here we first consider an AGN accretion disc without self-gravity orbiting one member of a binary BH, then compare the strength of the torques and the characteristic warp radius to those in an accretion disc with self-gravity orbiting an isolated BH. Let M• be the mass of the primary and µM• the mass of the other BH (the secondary). We assume for simplicity that the orbit is circular, with semimajor axis r⋆. The time required for the two BHs to merge due to gravitational radiation is (Peters 1964) tmerge = (5/256) c^5 r⋆^4 / [G^3 µ(1+µ) M•^3].
Figure 11. Characteristic disc radii versus BH mass (top left panel), Shakura-Sunyaev parameter α (top right), Eddington ratio (bottom left), and BH spin (bottom right). The curves represent the warp radius rw (eq. 21; solid black line), radius rQ at which the disc becomes gravitationally unstable (dotted red line), transition radius from radiation-pressure to gas-pressure dominated rpr (dashed blue line) and the half-light radius at 5000Å (dot-dashed green line). The fiducial model has M• = 10^8 M⊙, a• = 0.5, L/L Edd = 0.1, and α = 0.1, and is marked by filled circles on each curve. Only a single parameter is varied away from the fiducial value to produce each panel. All radii are measured in units of the gravitational radius Rg = GM•/c^2.
The numbers and orbital distribution of binary BHs are not well-constrained, either observationally or theoretically (see, for example, Shen et al. 2013). In the absence of other information, a natural place to prospect for binary BHs is where the merger time (76) is equal to the Hubble time. Thus we will use equation (76) to eliminate the unknown semimajor axis r⋆ in favor of the ratio tmerge/10^10 yr. With this substitution and using the accretion disc models from earlier in this Section, most properties of interest are straightforward to calculate. The binary semimajor axis is The warp radius (9) The viscosity parameter β (eq. 30) depends on whether the warp radius is in the radiation-pressure dominated or the gas-pressure dominated regime. In these two cases: For our fiducial case - M• = 10^8 M⊙, L = 0.1L Edd, a• = 0.5, ǫ = 0.1, α = 0.1, tmerge = 10^10 yr, µ = 1 - the disc becomes gas-pressure dominated at ∼ 330Rg (eq. 68), the warp radius is ∼ 700Rg, the disc becomes gravitationally unstable at 2300Rg (eq. 70), the binary semimajor axis is 1.6 × 10^4 Rg, and the viscosity parameter is βg = 0.094. For comparison, including self-gravity leads to a warp radius of ∼ 300Rg in an isolated disc (see discussion following eq. 72), so self-gravity is likely to have a stronger influence on the warp shape than torques from the companion BH, at least in the fiducial disc. Companion torques become stronger relative to self-gravity in binary BHs with shorter merger times tmerge; of course, such systems are relatively rare because they last for less than a Hubble time. SUMMARY Warped accretion discs exhibit a remarkably rich variety of behavior. This richness arises for several reasons.
First, a number of different physical mechanisms can lead to torques on the disc: the quadrupole potential from the central body (e.g., an oblate planet or a binary black hole), Lense-Thirring precession, the self-gravity of the disc, the tidal field from a companion, angular-momentum transport by viscous or other internal disc stresses, radiation pressure, and magnetic fields (we do not consider the latter two effects). Second, the geometry of the disc depends critically on whether the competing mechanisms lead to prograde or retrograde precession of the disc angular momentum around their symmetry axes. Third, a disc can support short-wavelength bending waves even when the disc mass is much smaller than the mass of the central body (as in Saturn's rings). Most previous studies of warped accretion discs around black holes have focused on Lense-Thirring and viscous torques (the Bardeen-Petterson approximation). If a companion star is present in the system, as in X-ray binary stars, the Bardeen-Petterson approximation is valid (a 'high-viscosity' disc) only if the disc viscosity is sufficiently high, βα ⊥ 1 where β is given in equation (56) for typical X-ray binary parameters and α ⊥ ∼ 1 is the Shakura-Sunyaev α parameter for the internal disc stresses that damp the warp. Our results suggest that the Bardeen-Petterson approximation is not valid (a 'low-viscosity' disc) for quiescent X-ray binaries. Models of such low-viscosity discs using the Pringle-Ogilvie equations of motion exhibit remarkable behavior: for a given obliquity (angle between the black-hole spin axis and companion orbital axis) there is no steady-state solution for β smaller than some critical value. We have argued at the end of §2.4 that the failure of these equations probably arises because they do not allow hyperbolic behavior but the question of how warped low-viscosity Lense-Thirring discs actually behave remains to be answered. The behavior of warped accretion discs around massive black holes is equally rich. Here there is no significant companion torque (unless the black hole is a member of a binary system), but the Bardeen-Petterson approximation remains suspect because it neglects the self-gravity of the disc. In fact we find that most plausible models of AGN accretion discs have low viscosity in the sense that viscous torques are smaller at all radii than one or both of the Lense-Thirring and self-gravity torques. If the viscosity is sufficiently small, spiral bending waves are excited at the warp radius and propagate inward with growing amplitude until they are eventually damped by viscosity or non-linear effects. The presence of such waves may contribute to obscuration of the disc and the illumination of the warped disc by the central source may affect the disc spectrum or apparent size at optical wavelengths. It is worth re-emphasizing that many of our conclusions are based on a simple model of the internal stresses in the disc -the stress tensor is that of a viscous fluid and the viscosity is related to the pressure through the Shakura-Sunyaev α parameter -that does not correspond to the actual stress tensor, which probably arises mostly from anisotropic MHD turbulence. 
The available evidence on the validity of this model from numerical MHD simulations, discussed at the end of §2.1, suggests that it overestimates the rate of viscous damping of warps; if correct, this would strengthen our conclusions about the limited validity of the Bardeen-Petterson approximation and the importance of tidal torques and self-gravity in shaping warped accretion discs. Our results suggest several avenues for future work. A better treatment of self-gravitating warped discs would merge the Pringle-Ogilvie equations (28) with a description of the mutual torques due to self-gravity as in Ulubay-Siddiki et al. (2009). Generalizing the Pringle-Ogilvie equations to include wavelike behavior is also a necessary step for a complete description of warped accretion discs. Understanding the actual behavior of low-viscosity Lense-Thirring discs that exceed the critical obliquity is important and challenging. Simple models of the emission from warped discs may help to resolve current discrepancies between simple flat α-disc models and observations of AGN spectra and sizes. We thank Julian Krolik, Jerome Orosz, and Jihad Touma for illuminating discussions. We thank Gordon Ogilvie for many insights and for providing the program used to calculate the viscosity coefficients Qi. ST thanks the Max Planck Institute for Astrophysics and the Alexander von Humboldt Foundation for hospitality and support during a portion of this work. This research was supported in part by NASA grant NNX11AF29G.
2013-08-08T20:07:09.000Z
2013-08-08T00:00:00.000
{ "year": 2014, "sha1": "6e00dadd5d2a374997721250d9a7b91d7dad1153", "oa_license": null, "oa_url": "https://academic.oup.com/mnras/article-pdf/441/2/1408/3724550/stu663.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "6e00dadd5d2a374997721250d9a7b91d7dad1153", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
255329458
pes2o/s2orc
v3-fos-license
A Multicenter Survey Study of Lung Transplant Program Staffing INTRODUCTION Approximately 2500 lung transplants are performed in the United States annually, yet there remains a significant shortage of suitable donor organs, resulting in lung transplant waitlist mortality of 15%. To meet demand, lung transplant programs are expanding in size and scope. Unfortunately, instruction on optimal program staffing structure is lacking. Furthermore, existing data are dichotomized into large (>40 transplants/y) or small (<40 transplant/y) programs, limiting program-specific interpretation for process improvement. Moreover, there are few details on procedure volumes, inpatient census, and planning for anticipated growth. Consequently, many lung transplant programs look to data in other solid organ fields for staffing guidance. However, such models may not be generalizable to the lung transplant community because lung allograft recipients have greater mortality, acute rejection, and comorbidities. To address this knowledge gap, we performed a prospective, multicenter, survey assessment of staffing in lung transplant programs across the United States and Canada. Program leadership was surveyed about clinical volumes, staffing adequacy, anticipated growth, and accommodations for coordinator workplace flexibility. These results provide a critical overview of staffing and identify key opportunities for improving logistics. INTRODUCTION Approximately 2500 lung transplants are performed in the United States annually, yet there remains a significant shortage of suitable donor organs, resulting in lung transplant waitlist mortality of 15%. 1,2 To meet demand, lung transplant programs are expanding in size and scope. Unfortunately, instruction on optimal program staffing structure is lacking. Furthermore, existing data are dichotomized into large (>40 transplants/y) or small (<40 transplant/y) programs, limiting program-specific interpretation for process improvement. Moreover, there are few details on procedure volumes, inpatient census, and planning for anticipated growth. Consequently, many lung transplant programs look to data in other solid organ fields for staffing guidance. [3][4][5] However, such models may not be generalizable to the lung transplant community because lung allograft recipients have greater mortality, acute rejection, and comorbidities. 6,7 To address this knowledge gap, we performed a prospective, multicenter, survey assessment of staffing in lung transplant programs across the United States and Canada. Program leadership was surveyed about clinical volumes, staffing adequacy, anticipated growth, and accommodations for coordinator workplace flexibility. These results provide a critical overview of staffing and identify key opportunities for improving logistics. MATERIALS AND METHODS A 29-item survey ( Results were stratified into thirds by program size based on transplant procedures performed in 2021 with <30, 30-65, and >65 transplants/y defining small, medium, and large programs, respectively. Results were adjusted to 2021 transplant rates or total living cohort size, to facilitate comparisons between programs. Workload was calculated by dividing the metric of interest by number of full-time equivalents for the respective role; full-time equivalent equals scheduled hours divided by number of full-time workweek hours. Kruskal-Wallis testing was used to compare continuous variables across program sizes. 
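A minimal sketch of the workload normalization and the nonparametric comparisons described here and in the next sentence (the staffing hours, patient counts, and per-program values below are hypothetical, not survey data; scipy is assumed):

```python
# Sketch: full-time equivalents, workload per FTE, and nonparametric comparisons
# across program-size groups (all numbers below are hypothetical, not survey data).
from scipy.stats import kruskal, mannwhitneyu

def fte(scheduled_hours, full_time_week_hours=40.0):
    """FTE = scheduled hours / full-time workweek hours."""
    return scheduled_hours / full_time_week_hours

def workload(metric, n_fte):
    """Workload = metric of interest (e.g. patients followed) divided by FTEs in the role."""
    return metric / n_fte

# Example: two posttransplant coordinators at one hypothetical program
coord_fte = fte(32) + fte(40)                        # 0.8 + 1.0 = 1.8 FTE
print(f"patients per coordinator FTE: {workload(140, coord_fte):.1f}")

# One continuous metric per program, grouped by program size (hypothetical values)
small, medium, large = [95, 110, 80, 70], [75, 85, 60, 90, 72], [55, 65, 70, 58, 62]
H, p = kruskal(small, medium, large)                 # comparison across all three groups
U, p_pair = mannwhitneyu(medium, large)              # pairwise follow-up comparison
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p:.3f}; medium vs large: U = {U:.1f}, p = {p_pair:.3f}")
```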
Pairwise comparisons were performed using Mann-Whitney U and Chi-square testing for continuous and categorical variables, respectively. A P of <0.05 was considered statistically significant. Statistical analysis was performed using StataBE v17.0 (College Station, TX). Patient Metrics by Program Size The survey was distributed to 63 active adult lung transplant programs, with 39 (62%) responding. Responses were received by 11 of 27 (41%), 13 of 19 (68%), and 15 of 17 (88%) of small, medium, and large programs, respectively. Medical directors (85%), other physicians (13%), or administrators (3%) provided responses. The median number of new transplants performed in 2021 was 46 (interquartile range 28-70). There was a significant difference in the number of transplants performed between small, medium, and large programs. […] visits, and bronchoscopy volume). Across all programs, 15% of referred patients were added to the transplant waitlist, whereas 40% of patients who completed evaluation were waitlisted (Figure 1A). There was no difference in progression from referral through evaluation and waitlisting by program size. Daily inpatient census was 33% of the annual transplant volume and was significantly higher for smaller programs (Figure 1B). The ratio of inpatient volume relative to the census of alive patients was 5% and did not vary by program size (P = 0.56). Ambulatory weekly volume was 103% of the annual transplant volume (14% of total living recipients) and did not vary by program size. Weekly bronchoscopy volume was 18% of annual transplant volume, with small programs performing more bronchoscopies than large programs (Figure 1B). Program metrics normalized to annual transplant volumes are in Figures S1, S2, SDC, http://links.lww.com/TP/C658.
FIGURE 1. Transplant productivity by program size. A, Monthly waitlist additions, evaluations, and referrals stratified by program size. B, Outpatient clinic weekly volumes, bronchoscopy weekly volumes, and inpatient daily census, stratified by program size. For all panels, values are median with interquartile range. n = 11 small, 13 medium, and 15 large programs. Comparisons between program sizes were performed using Kruskal-Wallis testing or Mann-Whitney U testing. txp, transplant.
Staffing by Program Size Professional staffing by each position is in Table S3, SDC, http://links.lww.com/TP/C658. There were significant differences in the number of surgeons, pulmonologists, inpatient advanced practice providers, coordinators, social workers, administrative assistants, and nutritionists by program size. There were no significant differences in the number of outpatient advanced practice providers, pharmacists, psychiatrists, or physical therapists by program size. There were significant differences in the number of pulmonologists and posttransplant coordinators between medium and large programs (Figure 2A). There were significant differences in the number of pulmonologists, pretransplant coordinators, and posttransplant coordinators between small and medium programs. Relative workloads for each staff position are described in Figure 2B and Table S4, SDC, http://links.lww.com/TP/C658. Across all programs, there was 1 surgeon per 16 new lung transplants and 1 pulmonologist per 13 new transplants. Each posttransplant nurse coordinator provided care for a median of 77 patients, with 15% being within the first-year posttransplant.
There were significant relative workload differences for pretransplant coordinators, posttransplant coordinators, pharmacists, and social workers between medium and large programs. The only role with a difference in workload between medium and small programs was nutritionists. When referenced to total cohort size, there was a relative workload difference between medium and large programs for postcoordinators and between small and large programs for pharmacists ( Figure S3, SDC, http://links.lww.com/TP/C658). Remote Work Environment for Transplant Coordinators The COVID-19 pandemic provided an impetus to improve workplace flexibility for transplant staff. Regardless of program size, most programs polled (59%) instituted remote capabilities for nurse coordinators. Most programs allowed for 40% of the work week to be spent working remotely (Table S5, SDC, http://links.lww.com/TP/C658). Although remote work options for nurse coordinators were numerically greater in large versus small programs (60% versus 20%), this difference was not statistically significant. Nursing and medical leadership had differing perspectives as to the ideal remote work schedule, with nursing suggesting that 2 d/wk (40%) was adequate, whereas medical leadership felt that 1 d/wk (20%) was sufficient (P = 0.04). DISCUSSION In this article, we report current staffing practices across lung transplant programs in North America, using a prospective, multicenter survey study. We report significant differences in staffing between small, medium, and large transplant programs and identify significant differences in relative workload across different staff positions. These data will enable programs to adequately staff provider resources to meet lung transplant patient needs. Our study builds on limited United Network of Organ Sharing data by categorizing data into clusters based on program size, addressing inpatient and bronchoscopy volumes, and reflecting on anticipated program growth and coordinator workplace flexibility. Our study provides key statistics to aid in logistical planning for lung transplant care delivery. We found that 15% of referred patients and 40% of patients completing transplant evaluations were waitlisted, findings that did not vary by program volume. Thus, to reach a set goal of transplants/year, transplant programs should plan referral visits for 7.5-times and transplant evaluations for 2.5times as many patients as the goal number of transplants. These data can be used to assess needs for outreach and ensure that sufficient resources are dedicated to pretransplant workflow. We also report that inpatient daily census, weekly ambulatory clinic volumes, and weekly bronchoscopy needs are highly correlated with the annual transplant volume, which further enables advanced allocation of patient care resources. This study provides important data on relative workloads of teams providing care for lung transplant patients. These data may help identify which additional resources will provide the best value during program expansion. Most programs reported plans to increase transplant volume by approximately one-third within 5 y, despite program leadership citing existing staffing insufficiencies. Although pulmonologist and surgeon roles were felt to be most needed, our data suggest greater variability in workload among coordinators, pharmacists, social workers, and nutritionists. Finally, we highlight that most programs provide flexibility for nurse coordinators to work remotely at least 2 d (40%) of the week. 
Further assessment is necessary to determine the ideal balance between staff flexibility and patient outcomes. Our study has multiple strengths. The 62% response rate shows that our data represent most North American transplant programs and exceeds the survey response rate of 30%-40% seen in similar studies. 3,4 High participation was achieved because of the concise design, use of proactive weekly reminders to facilitate responses, and recognition among program leadership about the need for granular staffing data. Another strength is the breadth of representation of programs of different sizes. We also included workload assessments for a wide variety of clinical and nonclinical transplant team roles. Our study also has some limitations. Response rate of small programs was lower than large programs. In addition, we were unable to account for program-specific variation in staff role responsibilities; for example, pharmacists might have an outpatient presence in some programs, but only work with inpatient teams at others. It is also possible that staffing needs vary by patient demographics, geographic region, urban versus rural environment, patient travel distance, or involvement of medical housestaff. Notably, this work does not directly assess the influence of staffing ratios on patient outcomes, which would be an important natural extension in future studies. In conclusion, our study provides a detailed description of staffing resources for lung transplant programs across North America, highlighting current staff workloads and providing critical data to provide logistical support for program optimization.
2023-01-01T16:02:41.472Z
2022-12-21T00:00:00.000
{ "year": 2022, "sha1": "0d68c025ebea92bf304de1f3911eb8ffc6ff011d", "oa_license": "CCBY", "oa_url": "https://journals.lww.com/transplantjournal/Fulltext/9900/A_Multicenter_Survey_Study_of_Lung_Transplant.284.aspx", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "0e3ab1ee8022cab94b83e424cac2d82c3765f9ce", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
207814634
pes2o/s2orc
v3-fos-license
Effects of Caffeine on Auditory- and Vestibular-Evoked Potentials in Healthy Individuals: A Double-Blind Placebo-Controlled Study Background and Objectives The blockage of adenosine receptors by caffeine changes the levels of neurotransmitters. These receptors are present in all parts of the body, including the auditory and vestibular systems. This study aimed to evaluate the effect of caffeine on evoked potentials using auditory brainstem responses (ABRs) and cervical vestibular-evoked myogenic potentials (cVEMPs) in a double-blind placebo-controlled study. Subjects and Methods Forty individuals (20 females and 20 males; aged 18-25 years) were randomly assigned to two groups: the test group (consuming 3 mg/kg pure caffeine powder with little sugar and dry milk in 100 mL of water), and the placebo group (consuming only sugar and dry milk in 100 mL water as placebo). The cVEMPs and ABRs were recorded before and after caffeine or placebo intake. Results A significant difference was observed in the absolute latencies of I and III (p<0.010), and V (p<0.001) and in the inter-peak latencies of III–V and I–V (p<0.001) of ABRs wave. In contrast, no significant difference was found in cVEMP parameters (P13 and N23 latency, threshold, P13-N23 amplitude, and amplitude ratio). The mean amplitudes of P13-N23 showed an increase after caffeine ingestion. However, this was not significant compared with the placebo group (p>0.050). Conclusions It seems that the extent of caffeine’s effects varies for differently evoked potentials. Latency reduction in ABRs indicates that caffeine improves transmission in the central brain auditory pathways. However, different effects of caffeine on auditory- and vestibular-evoked potentials could be attributed to the differences in sensitivities of the ABR and cVEMP tests. Introduction Caffeine (1,3,7-trimethylxanthine) is believed to be the most widely and frequently consumed psychoactive substance in the world and is a natural constituent of numerous available foods and beverages, such as coffee, tea, cocoa products, and cola products [1]. Various hypotheses have been postulated for the mechanisms of action of caffeine, includ-ing blocking of adenosine receptors, mobilization of intracellular calcium, inhibition of phosphodiesterases, and binding of caffeine to benzodiazepine receptors [2]. However, the chief effect of caffeine is the blocking of adenosine receptors [3,4]. These properties allow caffeine to affect many human tissues, including those of the central nervous system, cardiovascular system, and both smooth and skeletal muscular systems [4,5]. Additionally, several studies have suggested that some effects of caffeine could be due to its effects on formation and release of neurotransmitters [2], for example, levels of neurotransmitters such as glutamate, serotonin, noradrenaline, acetylcholine, and dopamine change due to adenosine blockage [1,4]. Adenosine not only acts as a neurotransmitter and neuromodulator, but is also a constituent of other important bioactive molecules including adenosine triphosphate, ribonucleic acid, and secondary messengers such as cyclic adenosine monophosphate [4]. There is extensive literature examining the effects of caffeine on physiological, electrophysiological, and cognitive functions, and it is believed to influence mood and cognitive performance [1,3,4]. However, there is limited research focusing on the effects of caffeine on the auditory and especially vestibular system. 
Previous studies have evaluated effects of caffeine on the organ of corti [6], auditory-evoked potentials [7][8][9][10], and vestibular system [11][12][13][14][15][16]. They showed that caffeine significantly suppressed the compound action potential of the auditory nerve and summation potential at low intensity, and increased the N1 latency at high and low intensities [6]. Additionally, caffeine reduced the distortion product otoacoustic emissions at low intensities and increased it at high intensities, leading to the shortening of the outer hair cells [6]. In auditory brain stem evoked responses (ABRs), caffeine ingestion significantly reduced the absolute and inter-peak latencies [7,10] and increased the amplitude of wave V [7]. However, there are conflicting results from numerous studies with respect to upper level potentials. In different studies, caffeine invoked reduced P1 middle latency response [7], reduced P300 latency and amplitude [8], and increased P1, P2, and P3b amplitude without any effect on the latency [9]. There are also few studies on the effect of caffeine on the vestibular system. There were no significant effects found in studies performed for tests of caloric [11,12], posturography [13], rotary chair [14], and vestibular-evoked myogenic potentials (cVEMP) [12,15,16]. However, studies wherein oculomotor tests were performed showed that caffeine treatment significantly reduced the saccadic eye movements of smooth pursuit in schizophrenic patients [17]. McNerney, et al. [14] also reported statistically significant differences in the results of several oculomotor tests after caffeine ingestion, including vertical saccades, horizontal saccades, and optokinetics. Furthermore, Pilli, et al. [18] reported that caffeine could reduce the saccade latency significantly. It has been hypothesized that caffeine causes an increase in the release of neurotransmitters, such as glutamate, by interacting with the adenosine receptors, thus improving the sensory perception of auditory stimuli after caffeine intake [7]. This is important because adenosine is a constitutive metabolite of all cells and is present in both the auditory [19] and vestibular systems [20]. However, the effects of caffeine on auditory and vestibular systems is still under debate, and there are no placebo-based clinical trials evaluating its effect on these systems and specially evoked potentials. Therefore, the main purpose of this study was to evaluate the effects of caffeine on auditory-and vestibular-evoked potentials in a randomized, double blind, placebo-controlled trial. Since caffeine affects the central nervous system as well as neuromuscular function, and cVEMP is result of the coordination between sensory and neuromuscular functions, we aimed to assess the effects of caffeine on cVEMP and ABRs. Subjects and Methods The present study was designed as a randomized, double blind, placebo-controlled, interventional study approved by the Ethics Committee of Tehran University of Medical Sciences with code of BP-QP-110-01. Participants The study group recruited 40 individuals (20 males, 20 females) aged 18-25 years from the Rehabilitation Faculty of Tehran University of Medical Sciences, Tehran, Iran. Individuals with no history of neurologic, myogenic, balance and cervical disorders, ear diseases, psychiatric illness, and habitual smoking and drinking, as well as those consuming vestibulotoxic drugs, such as gentamycin and neomycin, were included. 
Additionally, to eliminate the effect of weight, body mass index of 18.5-24.9 kg/m 2 was considered as an inclusion criterion. Moreover, individuals with low caffeine intake (<200 mg/day of caffeine-containing substances) were eligible to participate in this study. All participants were asked to abstain from caffeine-containing substances (tea, coffee, and cola) for at least 6 hours before the test. Informed consent was obtained from all participants. To assure the health of the auditory system, participants underwent otoscopy (Reister, Jungingen, Germany), immittance audiometry (Zodiac, Madsen Co., Taastrup, Denmark), and pure tone audiometry (AC40, Interacoustics Co., Middelfart, Denmark) examinations. The inclusion criteria were normal tympanogram (static compliance between 0.3-1.6, middle ear pressure between -100-+50), acoustic reflexes with thresholds between 70-100 decibels in hearing level (dB HL) in immittance audiometry, and air conduction and bone conduction thresholds of ≤15 dB in pure tone audiometry. Experimental procedure Initially, the ABRs and cVEMP were recorded in all participants by the experimenter. The participants were then randomly assigned to one of two groups, and both participants and experimenter were blind to the group allotment. The caf-feine and placebo allotment was written in 40 similar envelopes, and 3 mg/kg caffeine (n=20) and 0 mg/kg caffeine or placebo (n=20) were placed in a container by a research collaborator. The standard dose of caffeine used in most studies is 3 mg/kg. The participants chose one of the envelopes randomly and gave it to the collaborator without opening it, and were thus allotted to one of two groups: test group and placebo group. Following this, the body weights of participants were measured using a weighing scale. The collaborator measured the caffeine amount (Human Pharmaceutical, Roma, Italy) per kg of body weight using the analyzer research scale and dissolved it in 100 mL of water. Moreover, a little powdered milk and sugar were added to improve the flavor, and the powdered milk helped achieve similar appearance of the drinks in both groups. The used cups were disposable and non-transparent. The collaborator wrote the name of the participants and the weight of the materials in a separate list not accessible to the participants. Since caffeine reaches its highest concentration in blood plasma within 30-60 min of ingestion, the tests were repeated after about 40 min. In order to eliminate the order effect, the test was conducted randomly once for the right ear and once for the left ear. Recording procedure The cVEMP was recorded with 500 hertz (Hz) tone burst stimuli (2-1-2 duration) using insert earphone, at a repetition rate of 5.1/s, intensity of 95 dB HL, 10-1,500 Hz band-pass filter, 100 stimuli, 5000X gain, and with the active surface electrode placed over the upper one-third of the sternocleidomastoid muscle, reference electrode over the upper sternum, and ground electrode on the forehead. P13 and N23 latency, P13-N23 amplitude, threshold, and asymmetric ratio (AR) parameters were recorded. For equal contraction of muscles on both sides, the feedback method was adopted. ABRs were recorded at a repetition rate of 9.1/s, with rarefaction click stimuli, intensity of 90 dB peSPL (peak sound pressure level), 100-3,000 Hz filter, with the active surface electrode placed over the stimulated ear, reference electrode on the forehead, and ground electrode over the contralateral ear. 
The skin electrode contact impedance was maintained at <5 kΩ. Statistical analysis The normality of variables was assessed using the Kolmogorov-Smirnov Goodness-of-Fit test. The significance level was set at p<0.01 for ABR and p<0.05 for cVEMP. Wilcoxon test and Mann-Whitney U tests were used for within-group and between-group comparisons, respectively. Results This study included 40 individuals of both sexes with a mean age of 23 years. Statistical analysis during the pre-ingestion session revealed that all parameters of cVEMP (P13, N23 latency, P13-N23 amplitude, AR, and threshold) and ABRs (absolute I, III, V, I-III, III-V and I-V inter-peak latency, and V/I amplitude ratio) were homogenous in both groups, and there was no significant difference between the groups for any parameters (p>0.05). There was no significant difference between both ears; hence, their results were combined. The relative changes in parameters of ABRs and cVEMP are presented in Table 1 and 2. Relative changes were calculated as the difference between the pre-and post-data, divided by the standard deviation of pre-data. In ABRs, a significant reduction was found in the test group compared with placebo group in absolute latencies of I, III, and V and inter-peak latencies of III-V and I-V ( Table 1). The relative changes in absolute latencies of I, III, and V and inter-peak latencies of III-V and I-V are shown in Table 1. They increased or remained unchanged in the placebo group, but decreased in the test group. In the placebo group, the latencies of wave I, III, and V were 2.47±0.14, 4.57± 0.13, and 6.54±0.20 milliseconds (ms) before and 2.48± 0.14, 4.66±0.14, and 6.54±0.20 ms after the intervention, respectively. In the test group, the latencies of I, III, and V were 2.51±0.16, 4.56±0.15, 6.53±0.21 ms before the intervention, respectively, and reduced to 2.46±0.16, 4.53±0.11, and 6.45±0.20 ms, respectively, after caffeine intake. The inter-peak latencies of III-V and I-V in the test group were 1.97±0.13 ms and 4.03±0.17 ms before the intervention, which decreased to 1.91±0.15 ms and 3.97±0.20 ms, respectively. However, in the placebo group, the latency of III-V (1.97±0.14 ms) remained unchanged, while the latency of I-V increased slightly (from 4.06±0.20 ms to 4.07±0.20 ms). The latency of I-III did not show a significant difference between the groups, but showed a slight decrease in both groups. The latency of I-III before and after caffeine intake was 2.06±0.13 ms and 2.04±0.13 ms in the test group, and 2.09±0.15 ms and 2.08±0.13 ms in the placebo group, respectively. No significant difference was found between the groups for V/I amplitude of ABRs. V/I amplitude of ABRs increased in both groups, from 6.95 µv to 12.09 µv in the test group and from 5.22 µv to 6.30 µv in the placebo group. Fig. 1 and 2 illustrate the relative changes of absolute and inter-peak latencies in both groups. As shown, all absolute latencies of ABRs decreased in the test group and increased in the placebo group. In the test group, maximum decrease was observed in the absolute latencies of wave I and V and interpeak latency of III-V, and minimum decrease was noted in absolute latency of III and inter-peak latency of I-III. No statistically significant differences were observed in any of the cVEMP parameters in the test group compared with the placebo group (p>0.05). The mean P13-N23 amplitude of cVEMP increased in both groups. However, the difference between the groups was not statistically significant. 
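To make the relative-change metric and the nonparametric comparisons described under Statistical analysis concrete, a minimal sketch (not the authors' code) is given below; the latency arrays are hypothetical placeholders whose means and standard deviations only loosely echo the reported values.

```python
import numpy as np
from scipy import stats

def relative_change(pre, post):
    """Relative change: (post - pre) divided by the standard deviation of the pre-ingestion data."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return (post - pre) / pre.std(ddof=1)

# Hypothetical wave V latencies (ms) for 20 participants per group
rng = np.random.default_rng(42)
caf_pre, caf_post = rng.normal(6.53, 0.21, 20), rng.normal(6.45, 0.20, 20)
plc_pre, plc_post = rng.normal(6.54, 0.20, 20), rng.normal(6.54, 0.20, 20)

# Within-group (pre vs. post) comparison: Wilcoxon signed-rank test
within_caffeine = stats.wilcoxon(caf_pre, caf_post)

# Between-group comparison of the relative changes: Mann-Whitney U test
between_groups = stats.mannwhitneyu(relative_change(caf_pre, caf_post),
                                    relative_change(plc_pre, plc_post))

print(within_caffeine.pvalue, between_groups.pvalue)
```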
The only considerable finding was the changes in P13-N23 amplitude that bordered the significance level (p=0.061). The mean P13-N23 amplitude in the test group was 172.02±103.42 µv during pre-ingestion session that increased to 197.74±101.27 µv after caffeine intake. However, the mean P13-N23 amplitude in the placebo group was 137.95±78.17 µv that increased slightly to 138.04±80.83 µv. Fig. 2. The effects of caffeine and placebo on inter-peak latencies of auditory brain stem responses. Relative changes in the inter-peak latencies of III-V and I-V waves of auditory brain stem responses were significantly greater in caffeine group compared with the placebo group. No significant changes was seen in I-III inter-peak latency in caffeine group compared with the placebo group ( † p<0.001). The P13 and N23 latency also increased slightly in the test group after caffeine ingestion. In the test group, the latency of P13 and N23 increased from 16.61±0.84 ms and 24.54± 1.45 ms to 16.67±0.89 ms and 24.84±1.38 ms, respectively. In the placebo group, the latency of P13 increased from 16.91± 1.69 ms to 16.92±1.56 ms, but that of N23 fell from 24.89± 1.41 ms to 24.85±1.28 ms. The threshold of cVEMP showed a slight decrease in both groups. Discussion In general, the findings showed a significant reduction in the absolute latencies of I, III, and V and in the inter-peak latencies of III-V and I-V of the ABRs. However, there was no significant difference noted in the parameters of cVEMP in the test group compared with the placebo group. There is limited research focusing on the effects of caffeine on ABRs and cVEMP. Latency reduction results in ABR in the present study are concurrent with the study by Dixit, et al. [7] and Shalini, et al. [10]. Dixit, et al. [7] observed significant reduction in absolute latencies of waves IV and V along with I-V inter-peak latency of ABRs. Although they found latency reduction in I-III and III-V waves, the changes were not statistically significant. This was probably because of the low sample size in both studies. However, in the study by Shalini, et al. [10], all waves showed absolute and inter-peak latency reduction, but there was no control group. The findings on the latency reduction also concurred with the studies showing that caffeine decreased the reaction time [21][22][23][24]. Seidl, et al. [21] found that the reaction time improved in response to target stimuli after administration of a drink containing caffeine. In other evoked potentials, Deslandes, et al. [25] found P300 latency reduction at the Fz electrode when the participants consumed caffeine. However, Barry, et al. [9] showed that caffeine had no effect on P1, P2, and P3b latency, but a significant increase was observed in the amplitude of P1, P2, and P3b at a dose of 250 mg. In contrast, no significant increase in V/I amplitude of ABRs was found in the present study. In the study by Dixit, et al. [7,24], a significant increase was detected in wave V amplitude of ABRs, while in wave I there was no significant increase. Amplitude rise after caffeine intake have been also reported for other evoked potentials such as event-related potentials [26] and visual evoked potentials [27]. In studies using visual-evoked potentials, 3 mg/kg caffeine increased the amplitude of P2 and n2b [27]. Although in event related potentials studies, other factors such as higher levels of attention and arousal were triggered by caffeine, they were not comparable with subcortical responses such as ABRs. 
We analyzed amplitude ratio of V/I in the present study and not the absolute amplitude of waves, because amplitude is a variable parameter and amplitude ratio is a more suitable parameter than absolute amplitude, which has not been evaluated in other studies. It is suggested that latency reduction is caused by a boost in information processing speed [27]. Caffeine stimulates the central nervous system initially at the higher-level functions of the brain, including cognition, memory, attention, and concentration, where altering the peripheral motor responses results in ergogenic action. Caffeine acts mainly by blocking the adenosine receptors, which are responsible for the "fine-tuning" of the neuronal communication. The stimulatory effect of caffeine as represented by the shorter latencies could also represent the speed at which the sensory information is being transmitted to their respective cortices of sensory memory, a part of the brain's cognition [10]. Additionally, it has been reported that caffeine can improve the performance in central auditory behavioral tasks [28]. Accordingly, Taghavi, et al. [28] evaluated the effects of short-term caffeine consumption on speech and sound reception in noise using the acceptable noise level test in healthy individuals. Speech perception in noise is one of the central auditory functions that depends on the interaction of sensory and cognitive processing. They found that the individuals tolerated higher levels of speech in noise [28]. In the present study, no statistically significant difference was found in the parameters of cVEMPs in the test group compared with the placebo group. There is not enough information regarding the effects of caffeine on the vestibular system, particularly on cVEMPs. The absence of significant effect of caffeine on cVEMPs in this study is in agreement with previous studies on the effect of caffeine on vestibular system employing different techniques, especially cVEMPs. There were no significant effects reported in the studies wherein tests of caloric [11,12], posturography [13], rotary chair [14], and cVEMP [12,15,16] were performed. de Sousa and Suzuki [15] administered 420 mg of caffeine, and the cVEMP parameters were compared before and after caffeine intake. No statistically significant difference was found in the test results before and after caffeine ingestion. Similarly, in the study by McNerney, et al. [12], 30 young healthy participants were tested with and without the consumption of moderate amount of caffeine before undergoing caloric and cVEMP tests. The results revealed that a moderate amount of caffeine did not have a clinically significant effect on the results of caloric and cVEMP tests in young healthy adults. However, another study reported statistically significant differences in the results of several oculomotor tests after caf-feine ingestion [14]. de Sousa and Suzuki [15] suggested that there is probably a negligible influence of adenosine receptors on the sacculocollic pathway, and further studies are necessary to clarify the distribution of these receptors in the peripheral vestibular pathway. Another probable cause is that the cVEMP is not sufficiently sensitive to caffeine effects. There are also differences in the auditory brain stem responses and cVEMP. 
Perhaps, the difference between the findings of ABRs and cVEMP studies is due to the wider normal range of latency for the cVEMP than ABRs, where ABRs are a completely neurologic response but cVEMP is a neurologic as well as myogenic response. Thus, ABRs could have higher sensitivity to the caffeine-induced changes as compared to cVEMP. It seems that the extent of caffeine effects on the various evoked potentials is presumably different, probably because caffeine has complex psychophysiological roles and there are evident differences in the body responses of individuals to caffeine. The findings of the present study about the effects of caffeine on auditory and vestibular systems in particular could be affected by the low statistical power and the small sample size. Moreover, because caffeine dose-response curve is like an inverted "U", and it has different effects at different doses, further studies with larger sample size and higher doses are needed to evaluate the effects of caffeine on auditory and vestibular systems in particular. In conclusion, it seems that the extent of caffeine's effects on various evoked potentials is different owing to the complex psychophysiological roles of caffeine and differences in the body responses of individuals evoked by it. Latency reduction in ABRs indicated that caffeine improved transmission in the central brain auditory pathways. However, different effects of caffeine on the auditory-and vestibular-evoked potentials are probably because of the different sensitivities of ABRs and cVEMP. To understand the effect of higher doses of caffeine on cVEMP and ABRs, further studies with a larger sample size are required.
Distinct chromatin functional states correlate with HIV latency reactivation in infected primary CD4+ T cells Human immunodeficiency virus (HIV) infection is currently incurable, due to the persistence of latently infected cells. The ‘shock and kill’ approach to a cure proposes to eliminate this reservoir via transcriptional activation of latent proviruses, enabling direct or indirect killing of infected cells. Currently available latency-reversing agents (LRAs) have however proven ineffective. To understand why, we used a novel HIV reporter strain in primary CD4+ T cells and determined which latently infected cells are reactivatable by current candidate LRAs. Remarkably, none of these agents reactivated more than 5% of cells carrying a latent provirus. Sequencing analysis of reactivatable vs. non-reactivatable populations revealed that the integration sites were distinguishable in terms of chromatin functional states. Our findings challenge the feasibility of ‘shock and kill’, and suggest the need to explore other strategies to control the latent HIV reservoir. Introduction Antiretroviral therapy (ART) has transformed HIV infection from a uniformly deadly disease into a chronic lifelong condition, saving millions of lives. However, ART interruption leads to rapid viral rebound within weeks due to the persistence of proviral latency in rare, long-lived resting CD4 + T cells and possibly in tissue macrophages . HIV latency is defined as the presence of a transcriptionally silent but replication-competent proviral genome. Latency allows infected cells to evade both immune clearance mechanisms and currently available ART, which is based solely on the elimination of actively replicating virus. An extensively investigated approach to purging latent HIV is the 'shock and kill' strategy, which consists of forcing the reactivation of latent proviruses ('shock' phase) with the use of latency reversing agents (LRAs), while maintaining ART to prevent de novo infections. Subsequently, reactivation of HIV expression would expose such cells (shocked cells) to killing by viral cytopathic effects and immune clearance ('kill' phase). A variety of LRAs have been explored in vitro and ex vivo, with only a few candidates being advanced to testing in pilot human clinical trials. Use of histone deacetylase inhibitors (HDACi: vorinostat, panobinostat, romidepsin, and disulfiram) in clinical studies has shown increases in cell-associated HIV RNA production and/or plasma viremia after in vivo administration (Archin et al., 2012a;Elliott et al., 2015;Elliott et al., 2014;Rasmussen et al., 2014;Søgaard et al., 2015). However, none of these interventions alone has succeeded in significantly reducing the size of the latent HIV reservoir (Rasmussen and Lewin, 2016). Several obstacles can explain the failure of LRAs, as reviewed in (Margolis et al., 2016;. However, the biggest challenge to date is our inability to accurately quantify the size of the reservoir. The absolute quantification (number of cells) of the latent reservoir in vivo (and ex vivo) has thus far been technically impossible. The most sensitive, quickest, and easiest assays to measure the prevalence of HIV-infected cells are PCR-based, quantifying total or integrated HIV DNA or RNA transcripts. However these assays substantially overestimate the number of latently infected cells, due to the predominance of defective HIV DNA genomes in vivo (Bruner et al., 2016;Ho et al., 2013). 
The best currently available assay to measure the latent reservoir is the relatively cumbersome viral outgrowth assay (VOA), which is based on quantification of the number of resting CD4 + T cells that produce infectious virus after a single round of maximum in vitro T-cell activation. After several weeks of culture, viral outgrowth is assessed by an ELISA assay for HIV-1 p24 antigen or a PCR assay for HIV-1 RNA in the culture supernatant. Importantly, the number of latently infected cells detected in the VOA is 300-fold lower than the number of resting CD4 + T cells that harbor proviruses detectable by PCR. This reliance on a single round of T-cell activation likely incorrectly estimates the viral reservoir for two reasons. First, the discovery of intact non-induced proviruses indicates that the size of the latent reservoir may be much greater than previously thought: the authors estimate that the number may be at least 60 fold higher than estimates based on VOA (Ho et al., 2013;Sanyal et al., 2017). This work and that of others highlight the heterogeneous nature of HIV latency and suggest that HIV reactivation is a stochastic process that only reactivates a small fraction of latent viruses at any given time (Dar et al., 2012;Ho et al., 2013;Singh et al., 2010;Weinberger et al., 2005). Second, the ability of defective proviruses to be transcribed and translated in vivo (Pollack et al., 2017): this study shows that, although defective proviruses cannot produce infectious particles, they express viral RNA and proteins, which can be detectable by any p24 antigen or PCR assays used for the reservoir-size quantification. Thus, current assays underestimate the actual number of latently infected cells, both in vivo and ex vivo, and the real size of HIV reservoir is still to be determined. Therefore, it has been difficult to judge the potential of LRAs in in vitro (latency primary models), ex-vivo (patients' samples) and in vivo (clinical trial) experiments. HIV latency is a complex, multi-factorial process (reviewed in [Dahabieh et al., 2015]). Its establishment and maintenance depend on: (a) viral factors, such as integrase that specifically interacts with cellular proteins, including LEDGF, (b) trans-acting factors (e.g., transcription factors) and their regulation by the activation state of T cells and the environmental cues that these cells receive, and (c) cis-acting mechanisms, such as the local chromatin environment at the site of integration of the virus into the genome. Recent evidence has also highlighted the association of specific HIV-1 integration sites with clonal expansion of latently infected cells (reviewed in [Maldarelli, 2016]). The role of the site of HIV integration into the cellular genome in the establishment and maintenance of HIV latency has remained controversial. While early studies found that the HIV integration site does affect both the entry into latency Jordan et al., 2003;Jordan et al., 2001), and the viral response to LRAs , other studies have failed to find a significant role of integration sites in regulating the fate of HIV infection (Dahabieh et al., 2014;Sherrill-Mix et al., 2013). In this study, we have used a new dual color reporter virus, HIV GKO , to investigate the reactivation potential of various LRAs in pure latent population. We find that latency is heterogeneous and that only a small fraction (<5%) of the latently infected cells is reactivated by LRAs. 
We also show that both genomic localization and chromatin context of the integration site affect the fate of HIV infection and the reversal of viral latency. Results A second-generation dual-fluorescence HIV-1 reporter (HIV GKO ) to study latency Our laboratory reported the development of a dual-labeled virus (DuoFluoI) in which eGFP is under the control of the HIV-1 promoter in the 5 0 LTR and mCherry is under the control of the cellular elongation factor one alpha promoter (EF1a) . However, we noted that the model was limited by a modest number of latently infected cells (<1%) generated regardless of viral input (Figure 1-figure supplement 1A-1C), as well as a high proportion of productively infected cells in which the constitutive promoter EF1a was not active (GFP+, mCherry-). To address these issues, which we suspected were due to recombination between the 20-30 bp regions of homology at the N-and C-termini of the adjacent fluorescent proteins (eGFP and mCherry) (Salamango et al., 2013), we generated a new version of dual-labeled virus (HIV GKO ), containing a codon-switched eGFP (csGFP) and a distinct, unrelated fluorescent protein mKO2 under the control of EF1a ( Figure 1A). First, titration of HIV GKO input revealed that productively and latently infected cells increased proportionately as the input virus increased ( Figure 1B Figure 1C). A small proportion of csGFP+ mKO2-cells were still visible in HIV GKO infected cells. We generated a HIV GKO virus lacking the U3 promoter region of the 3 0 LTR (DU3-GKO), resulting in an integrated virus devoid of the 5' HIV U3 region. This was associated with a suppression of HIV transcription and an inversion of the latency ratio (ratios latent/productive = 0.34 for HIV GKO-WT-LTR and 8.8 for HIV GKO-D U3-3'LTR - Figure 1D). Finally, to further characterize the constituent populations of infected cells, double-negative cells, latently and productively infected cells were sorted using FACS and analyzed for viral mRNA and protein content. (Figures 1E and F, Figure 1-source data 1). As expected, productively infected cells (csGFP+) expressed higher amounts of viral mRNA and viral proteins, but latently infected cells (csGFP-mKO2+) had very small amounts of viral mRNA and no detectable viral proteins. Based on all these findings, the second-generation of dual-fluorescence reporter, HIV GKO , is able to more accurately quantify latent infections in primary CD4 + T cells than HIV DuoFluoI , and thus allows for the identification and purification of a larger number of latently infected cells. Using flow cytometry, we can determine infection and HIV productivity of individual cells and simultaneously control for cell viability. To measure reactivation by LRAs in patient samples, we treated 5 million purified resting CD4 + T cells from four HIV infected individuals on suppressive ART (participant characteristics in Table 1) with single LRAs, combinations thereof, or vehicle alone for 24 hr. LRAs efficacy was assessed using a PCR-based assay, by measuring levels of intracellular HIV-1 RNA using primers and a probe that detect the 3 0 sequence common to all correctly terminated HIV-1 mRNAs (Bullen et al., 2014). Of Figure 2A), showed expected fold induction value (10 to 100-fold increases of HIV RNA in PBMCs [Bullen et al., 2014;Darcis et al., 2015;Laird et al., 2015]). 
Combinations of the PKC agonist bryostatin-1 with JQ1 or with panobinostat (fold-increases of 126.2-and 320.8-fold, respectively, Figure 2A), were highly more effective than bryostatin-1, JQ1 or panobinostat alone (fold-increases of 6.8, 1.7-and 2.9-fold, respectively, Figure 3A), and even greater than T-cell activation with aCD3/CD28. This observation is consistent with previous reports Jiang et al., 2015;Laird et al., 2015;Martínez-Bonet et al., 2015). The same LRAs and combinations were next tested after infection of human CD4 + T cells in vitro with HIV GKO . Measurement of intracellular HIV-1 mRNA in HIV GKO latently infected cells showed an expected fold induction of latency in response to aCD3/CD28 (11.3-fold, Figure 2B, Figure 2source data 1). Second, JQ1, panobinostat, and bryostatin-1 alone all caused limited reactivation of latent HIV (fold-increases of 1.1-, 5.6-and 6.2-fold, respectively, Figure 2B), as observed in patients' samples. Finally, we observed low synergy when combining bryostatin and JQ1 (8-fold increase), but high synergy between bryostatin and panobinostat (67.3-fold increase). These data together demonstrate that HIV GKO closely mimics in vitro what is observed in ex vivo patients' samples (correlation rate r 2 = 0.88, p=0.0056 - Figure 2C), and validate the robustness and reliability of the dual-florescence HIV reporter as a model to study HIV-1 latency. HIV-1 LRAs target a minority of latently infected primary CD4 + T cells Current assays have relied on PCR-based assays to measure HIV RNA, and to evaluate the efficacy of different LRAs ( Figure 2A). The use of dual-fluorescent HIV reporters, however, provides a tool to quantify directly the fraction of cells that become reactivated. Small fractional rate of latency reactivation is not explained by low cellular response to activation signals These data highlight two important facts: a) cell-associated HIV RNA quantification does not reflect the absolute number of cells undergoing viral reactivation, and b) induced cell-associated HIV RNA, in response to all reversing agents, comes from a small fraction of reactivated latent cells. This was particularly surprising with aCD3/CD28 stimulation, as a currently accepted model for HIV latency is that the state of T cell activation dictates the transcriptional state of the provirus. Treatment of latently infected primary CD4 + T cells with aCD3/CD28 stimulated HIV production in less than 5% of the cells, while the other 95% remained latent, even though after 24 hr of treatment nearly all of the cells had upregulated the early T cell activation marker CD69 ( +/HLA-DR+/-). We only observed a statistically significant increase of NRLIC compared with RLIC in the CD69+/CD25-/HLA-DR+ population, however this small increase in a relatively minor population is insufficient to explain the low reactivation rate of latently infected cells. Overall, comparison of both reactivated and non reactivated latent populations showed little difference in their activation state. Integration sites, gene expression, transcription units and the fate of HIV infection The role of the site of HIV integration into the genome in latency remains a subject of debate Dahabieh et al., 2014;Jordan et al., 2003;Jordan et al., 2001;Sherrill-Mix et al., 2013). To identify possible differences in integration sites between reactivated and nonreactivated HIV genomes, primary CD4 + T-cells were infected with HIV GKO . At 5 days post-infection, productively infected cells (GFP+, PIC) were sorted and frozen. 
The GFP negative population (consisting of a mixture of latent and uninfected) was isolated and treated with aCD3/CD28. 48 hr postinduction, both non reactivated (NRLIC) and reactivated (RLIC) populations were isolated. Nine libraries (three donors, three samples/donor: PIC, RLIC, NRLIC) were constructed from genomic DNA as described (Cohn et al., 2015) and analyzed by high-throughput sequencing to locate HIV proviruses within the human genome. A total of 1803 virus integration sites were determined: 960 integrations in PIC, 681 in NRLIC, and 162 in RLIC (Integration Sites Source data). To determine whether integration within genes differentially expressed during T-cell activation predicted infection reactivation fate, we compared our HIV integration dataset with a published dataset for gene expression in resting and activated (48 hr -aCD3/CD28) CD4 + T cells from healthy individuals (Ye et al., 2014). The analysis revealed that most of the aCD3/CD28-induced latent proviruses were not integrated in genes responsive to T-cell activation signals ( Figure 5A and B, Figure 5-source data 1). Interestingly, PIC and RLIC integration events were associated with genes whose basal expression was significantly higher than genes targeted in NRLIC, both in activated and resting T cells ( Figure 5C, Figure 5-source data 2). Next, we investigated whether different genomic regions were associated with productive, inducible or non-inducible latent HIV-1 infections. In agreement with previous studies (Cohn et al., 2015;Dahabieh et al., 2014;Maldarelli et al., 2014;Wagner et al., 2014), the majority of integration sites were found within genes in each population ( Figure 6A, Figure 6-source data 1), although the proportion of genic integrations in NRLIC was significantly lower than in PIC and RLIC samples. Moreover, integration events in the PIC and RLIC populations were more frequent in transcribed regions (64% and 58%, respectively, [sum of low + medium + high transcribed regions] ( Figure 6B), Figure 6-source data 1), while these regions were significantly less represented in the NRLIC (31%) ( Figure 6B). As expected since introns represent a much larger proportion of genes, genic integration events were more frequent in the introns for each population (>65%, Figure 6C Chromatin modifications at the site of HIV integration and latency Chromatin marks, such as histone post-translational modifications (e.g., methylation and acetylation) and DNA methylation, are involved in establishing and maintaining HIV-1 latency (De Crignis and Mahmoudi, 2017). We examined 500 bp regions centered on all integration sites in each population for several chromatin marks by comparing our data with several histone modifications and DNaseI ENCODE datasets. We first looked at distinct and predictive chromatin signatures, such as H3K4me1 (active enhancers), H3K36m3 (active transcribed regions), H3K9m3 and H3K27m3 (repressive marks of transcription) (reviewed in [Kumar et al., 2015;Shlyueva et al., 2014]). All three populations exhibited distinct profiles, although productive and inducible latent infections profiles appeared most similar ( Figure 7A, Figure 7-source data 1). The analysis showed that PIC integration events were associated with active chromatin (i.e., transcribed genes -H3K36me3 or enhancers -H3K4me1), while NRLIC integration events appeared biased toward heterochromatin (H3K27me3 and H3K9me3) and non-accessible regions (DNase hyposensitivity). Marini et al. 
recently reported that HIV-1 mainly integrates at the nuclear periphery (Marini et al., 2015). We therefore examined the topological distribution of integration sites from each population inside the nucleus by comparing our integration site data with a previously published dataset of lamin-associated domains (LADs) (Guelen et al., 2008). LADs consist of H3K9me2 heterochromatin and are present at the nuclear periphery. This analysis showed that latent integration sites from both RLIC and NRLIC were in LADs to a significantly higher degree (32% and 30.4%) than productive integrations (23.6%) (p<0.05, Figure 7B, Figure 7-source data 1). Overall, these data show similar features between productively infected cells and inducible latently infected cells, while non-reactivated latently infected cells appear distinct from the other populations. These findings support a prominent role for the site of integration and the chromatin context for the fate of the infection itself, as well as for latency reversal. Discussion Dual-color HIV-1 reporters are unique and powerful tools Dahabieh et al., 2013), that allow for the identification and the isolation of latently infected cells from productively infected cells and uninfected cells. Latency is established very early in the course of HIV-1 infection (Archin et al., 2012b;Chun et al., 1998;Whitney et al., 2014) and, until the advent of dualreporter constructs, no primary HIV-1 latency models have allowed the study of latency heterogeneity at this very early stage. Importantly, the comparison of data obtained from distinct primary HIV-1 Integration sites displayed outside of the two solid gray lines were targeted genes whose expression is at least ± twofold differentially expressed after 48 hr stimulation. Plot points size can be different, the bigger the plot point is, the more integration events happened within the same gene. (B) Fraction of integration sites from the different populations PIC, RLIC or NRLIC, integrated within genes whose expression is at least ± twofold differentially expressed after 48 hr of aCD3/CD28 stimulation (**p<0.01; ***p<0.001; two-proportion z test) ( Figure 5-source data 1). (C) Relative expression of genes targeted by HIV-1 integration in PIC, RLIC or NRLIC before TCR stimulation and after aCD3/CD28 stimulation (n = 3, mean +SEM, paired t-test). ***p<0.001; ****p<0.0001. (Figure 5-source data 2). DOI: https://doi.org/10.7554/eLife.34655.013 The following source data is available for figure 5: Source data 1. Fraction of integration sites from the different populations PIC, RLIC or NRLIC, integrated within genes whose expression is at least ± twofold differentially expressed after 48 hr of aCD3/CD28 stimulation. latency models is complicated as some models are better suited to detect latency establishment (e. g., dual-reporters), while others are biased towards latency maintenance (e.g., Bcl2-transduced CD4 + T cells). The use of env-defective viruses limits HIV replication to a single-round and, thereby limits the appearance of defective viruses (Bruner et al., 2016). In this study, we describe and validate an improved version of HIV DuoFluoI , previously developed in our laboratory , which accurately allows for: (a) the quantification of latently infected cells, (b) the purification of latently infected cells, and (c) the evaluation of the 'shock and kill' strategy. 
Our data highlight two important facts: (a) cell-associated HIV RNA quantification does not reflect the number of cells undergoing viral reactivation, and (b) a small portion of the cells carrying latent proviruses (<5%) is reactivated, although LRAs target the whole latent population. Hence, even if cells harboring reactivated virus die, this small reduction would likely remain undetectable when quantifying the latent reservoir in vivo. Our data are in agreement with previous reports, which show that levels of cellular HIV RNA and virion production are not correlated, and that the absolute number of cells being reactivated by aCD3/CD28 is indeed limited to a small fraction of latently infected cells (Cillo et al., 2014;Sanyal et al., 2017;Yucha et al., 2017). Using our dual-fluorescence reporter, we confirm these findings, and extend these observations to LRAs combinations. However, although LRAs combinations show synergy when measuring cell-associated HIV RNA, we do not find such synergy at the level of individual cells, but rather only partial additive effect. Our work, as well as that of others (Cillo et al., 2014;Sanyal et al., 2017;Yucha et al., 2017), demonstrate the importance of single cell analysis when it comes to the evaluation of potential LRAs. Indeed, it is necessary to determine wheter potential increases in HIV RNA after stimulation in a bulk population result from a small number of highly productive cells, or from a larger but less productive population, as these two mechanisms likely have very different impacts on the latent reservoir. Our data further highlight the heterogeneous nature of the latent reservoir Ho et al., 2013). We currently have a limited understanding of why some latently infected cells are capable of being induced while others are not. It is possible that different chromatin environments impose different degrees of transcriptional repression on the integrated HIV genome, with the non reactivatable latent HIV corresponding to the most repressive environment. . Since HIV GKO allows for the isolation of productively infected cells and reactivated latent cells from those that do not reactivate, it provides a unique opportunity to explore the impact of HIV integration on the fate of the infection. Different integration site-specific features contribute to latency, such as the chromatin structure, including adjacent loci but also the provirus location in the nucleus Lusic et al., 2013). Viral integration is a semi-random process (Bushman et al., 2005) in which HIV-1 preferentially integrates into active genes (Barr et al., 2006;Bushman et al., 2005;Demeulemeester et al., 2015;Ferris et al., 2010;Han et al., 2004;Lewinski et al., 2006;Mitchell et al., 2004;Schrö der et al., 2002;Sowd et al., 2016;Wang et al., 2007). LEDGF, one of the main chromatin-tethering factors of HIV-1, binds to the viral integrase and to H3K36me3, and to a lesser extent to H3K4me1, thus directing the integration of HIV-1 into transcriptional units (Daugaard et al., 2012;Eidahl et al., 2013;Pradeepa et al., 2012). Also CPSF6, which binds to the viral capsid, markedly influences integration into transcriptionally active genes and regions of euchromatin (Sowd et al., 2016), explaining how HIV-1 maintains its integration in the euchromatin regions of the genome independently of LEDGF (Quercioli et al., 2016). Several studies have characterized the integration sites, however, these analyses have been restricted to productive infections. 
Using ENCODE reference datasets, our data are consistent with previous results, showing that HIV-1 preferentially targets actively transcribed regions (Marini et al., 2015;Wang et al., 2007;Chen et al., 2017). However, non-inducible latent proviruses are observed to be integrated to a higher extent into silenced chromatin. In addition, even though HIV integration is normally strongly disfavored in the heterochromatic condensed regions in LADs due to low chromatin accessibility, we show that some HIV integration does occur in LADs when using a previously published dataset of LADs (Guelen et al., 2008;Marini et al., 2015), and that latent proviruses that are not readily reactivatable are integrated at higher extent in LADs. Importantly, we identify a unique rare population among the latent cells that can be reactivated. In contrast to the non-inducible latent infections, the latency reversal of inducible latent proviruses might be explained by integration in an open chromatin context, similar to integration sites for productive proviruses, followed by subsequent heterochromatin formation and proviral silencing. As a consequence, the distinct integration sites between induced and non-induced latent proviruses highlight new possibilities for cure strategies. Indeed, the 'shock and kill' strategy aims to reactivate and eliminate every single replication-competent latent provirus, since a single remaining cell carrying a latent inducible provirus could, in theory, reseed the infection. However, our study and others point out several significant barriers to successful implementation of the 'shock and kill' strategy. First, LRAs only reactivate a limited fraction of latent proviruses. It is likely that some of the non-induced proviruses, such as those integrated into enhancers and transcriptionnal active regions of the genome, will reactivate after several rounds of activation, due to the stochastic nature of HIV activation (Dar et al., 2012;Ho et al., 2013;Singh et al., 2010;Weinberger et al., 2005). It is also likely that better suited LRAs combinations (two or more LRAs) will reactivate some of the non-induced proviruses integrated into silenced chromatin marked by H3K27me3 and H3K9me3. Indeed, several studies have shown that the pharmaceutical inhibition of H3K27me3 and H3K9me2/3 could sensitize latent proviruses to LRAs (Friedman et al., 2011;Nguyen et al., 2017;Tripathy et al., 2015). Second, Shan et al. have shown that latently reactivated cells are not cleared due to cytopathic effects or CTL response implying that immunomodulatory approaches, in addition of more potent LRAs, are likely required to achieve a cure for HIV infection (Shan et al., 2012). In conclusion, the heterogeneity of the latent reservoir calls for therapies addressing the different pools of latently infected cells. While 'shock and kill' might be helpful in reactivating and possibly eliminating a small subset of highly reactivatable latent HIV genomes, other approaches will be necessary to control or eliminate the less readily reactivatable population identified here and in patients. Perhaps, this latter population should rather be 'blocked and locked' using latency-promoting agents (LPAs), as described by several groups (Besnard et al., 2016;Kessing et al., 2017;Kim et al., 2016;Vranckx et al., 2016). For a functional cure, a stably silenced, non-reactivatable provirus is preferable to a lifetime of chronic active infection. 
Patients' samples Four HIV-1-infected individuals, who met the criteria of suppressive ART, undetectable plasma HIV-1 RNA levels (<50 copies/ml) for a minimum of six months, and with CD4 + T cell count of at least 350 cells/mm 3 , were enrolled. The participants were recruited from the SCOPE cohort at the University of California, San Francisco. Table 1 details the characteristics of the study participants. Of note, the Envelope open reading frame was disrupted by the introduction of a frame shift at position 7136 by digestion with KpnI, blunting, and re-ligation. Virus production The production of HIV GKO and the assessment of HIV Latency Reversal Agents in Human Primary CD4+ T Cells are described in more detail at Bio-protocol (Battivelli and Verdin, 2018). Pseudotyped HIV DuoFluoI and HIV GKO viral stocks were generated by co-transfecting (standard calcium phosphate transfection method) HEK293T cells with a plasmid encoding HIV DuoFluoI or HIV GKO , and a plasmid encoding HIV-1 dual-tropic envelope (pSVIII-92HT593.1). Medium was changed 6-8 hr posttransfection, and supernatants were collected after 48 hr, centrifuged (20 min, 2000 rpm, RT), filtered through a 0.45 mM membrane to clear cell debris, and then concentrated by ultracentrifugation (22,000 g, 2 hr, 4˚C). Concentrated virions were resuspended in complete media and stored at À80˚C. Virus concentration was estimated by p24 titration using the FLAQ assay (Gesner et al., 2014). Primary cell isolation and cell culture CD4 + T cells were extracted from peripheral blood mononuclear cells (PBMCs) from continuous-flow centrifugation leukophoresis product using density centrifugation on a Ficoll-Paque gradient (GE Healthcare Life Sciences, Chicago, IL). Resting CD4 + lymphocytes were enriched by negative depletion with an EasySepHuman CD4 + T Cell Isolation Kit (Stemcell Technologies, Canada). Cells were cultured in RPMI medium supplemented with 10% fetal bovine serum, penicillin/streptomycin and 5 mM saquinavir. Primary CD4 + T cells were purified from healthy donor blood (Blood Centers of the Pacific, San Francisco, CA, and Stanford Blood Center), by negative selection using the RosetteSep Human CD4 + T Cell Enrichment Cocktail (StemCell Technologies, Canada). Purified resting CD4 + T cells from HIV-1 or healthy individuals were cultured in RPMI 1640 medium supplemented with 10% FBS, L-glutamine (2 mM), penicillin (50 U/ml), streptomycin (50 mg/ml), and IL-2 (20 to 100 U/ml) (37˚C, 5% CO 2 ). Spin-infected primary CD4 + T cells were maintained in 50% of complete RPMI media supplemented with IL-2 (20-100 U/ml) and 50% of supernatant from H80 cultures (previously filtered to remove cells) without beads. Medium was replenished every 2 days until further experiment. Cell infection Purified CD4 + T cells isolated from healthy peripheral blood were stimulated with aCD3/CD28 activating beads (Thermofisher, Waltham, MA) at a concentration of 0.5 bead/cell in the presence of 20-100 U/ml IL-2 (PeproTech, Rocky Hill, NJ) for three days. All cells were spinoculated with either HIV-DuoFluoI , HIV GKO or HIV D3U-GKO at a concentration of 300 ng of p24 per 1.10 6 cells for 2 hr at 2000 rpm at 32˚C without activation beads. Infected cells were either analyzed by flow cytometry or sorted 4-5 days post-infection. 
Latency-reversing agent treatment conditions CD4 + T cells were stimulated for 24 hr unless stipulated differently, with latency-reversing agents at the following concentrations for all single and combination treatments: 10 nM bryostatin-1, 1 mM JQ1, 30 nM panobinostat, aCD3/CD28 activating beads (1 bead/cell), or media alone plus 0.1% (v/ v) DMSO. For all single and combination treatments, 30 mM Raltregravir (National AIDS Reagent Program) was added to media. Concentrations were chosen based on Laird et al. paper (Laird et al., 2015). Sorting of infected CD4 + T cells was performed with a FACS AriaII (BD Biosciences, Franklin Lakes, NJ) based on their GFP and mKO2 fluorescence markers at 4/5 days post-infection, and placed back in culture for further experimentation. In the experiments shown in Figures 2B and 4, we isolated both HIV GKO latently infected cells (GFP-, mKO2+, 3%) and uninfected cells (csGFP-, mKO2-, 97%) five days post-infection, before treating cells with LRAs. In the experiment shown in Figure 3, we isolated pure latent cells (GFP-, mKO2+) five days postinfection, before treating this pure population with LRAs. DNA, RNA and protein extraction, qPCR and western blot RNA and proteins ( Figure 1B and C) were extracted with PARIS TM kit (Ambion, Thermofisher, Waltham, MA) according to manufacturer's protocol from same samples. RNA was retro-transcribed using random primers with the SuperScript II Reverse Transcriptase (Thermofisher, Waltham, MA) and qPCR was performed in the AB7900HT Fast Real-Time PCR System, using 2X HoTaq Real Time PCR kit (McLab, South San Francisco, CA) and the appropriate primer-probe combinations described in . Quantification for each qPCR reaction was assessed by the ddCt algorithm, relative to Taq Man assay GAPDH Hs99999905_m1. Protein content was determined using the Bradford assay (Bio-Rad, Hercules, CA) and 20 mg were separated by electrophoresis into 12% SDS-PAGE gels. Bands were detected by chemiluminescence (ECL Hyperfilm Amersham, GE Healthcare Life Sciences, Chicago, I) with anti-Vif, HIV-p24 and a-actin (Sigma, Saint-Louis, MO) primary antibodies. Total RNA (Figure 2A and B) wasextracted using the Allprep DNA/RNA/miRNA Universal Kit (Qiagen, Germany) with on-column DNAase treatment (Qiagen RNase-Free DNase Set, Germany). cDNA synthesis was performed using SuperScript IV Reverse Transcriptase with a combination of random hexamers and oligo-dT primers (ThermoFisher, Waltham, MA). Relative cellular HIV mRNA levels were quantified using a qPCR TaqMan assay using primers and probes described in (Bullen et al., 2014) on a QuantStudio 6 Flex Real-Time PCR System (Thermofisher, Waltham, MA). Relative cell-associated HIV mRNA copy numbers were determined in a reaction volume of 20 mL with 10 mL of 2x TaqMan Universal Master Mix II with UNG ( Thermofisher, Waltham, MA), 4 pmol of each primer, 4 pmol of probe, 0.5 mL reverse transcriptase, and 2.5 mL of cDNA. Cycling conditions were 50˚C or 2 min, 95˚C for 10 min, then 60 cycles of 95˚C for 15 s and 60˚C for 1 min. Real-time PCR was performed in triplicate reaction wells, and relative cell-associated HIV mRNA copy number was normalized to cell equivalents using human genomic GAPDH expression by qPCR and applying the comparative Ct method (Livak and Schmittgen, 2001). HIV integration site libraries and computational analysis HIV integration site libraries and computational analysis were executed in collaboration with Lilian B. 
Cohn and Israel Tojal Da Silva as described in their published paper (Cohn et al., 2015), with a few small changes added to the computational analysis pipeline. First, we included integration sites with only a precise junction to the host genome. Second, to eliminate any possibility of PCR mispriming, we have excluded integration sites identified within 100 bp (50 bp upstream and 50 bp downstream) of a 9 bp motif identified in our LTR1 primer: TGCCTTGAG. Thirdly we have merged integration sites within 250 bp and have counted each integration site as a unique event. The list of integration sites for each donor and each population can be found as a source data file linked to this manuscript (Integration Sites Source data 1). We calculated expression (GSM669617) and chromatin mark abundance (the remaining ENCODE datasets) at the integration sites as bins of 500 bp centered on the integration site (read count quantification in Seqmonk: all non-duplicated reads regardless of strand, corrected per million reads total, non-log transformed). Gene annotations were not taken into account. Thresholds for expression values (upper 1/8th, upper quarter, half, and above 0) were set to distinguish five different categories, set as the upper 1/8th of expression values (high), upper quarter-1/8th (medium), upper half-quarter (low), lower half but above 0 (trace), 0 (silent). Statistical analysis Significance was analyzed by either paired t-test (GraphPad Prism) or proportion test (standard test for the difference between proportions), also known as a two-proportion z test (https://www.medcalc.org/calc/comparison_of_proportions.php), and specified in the manuscript. with help from the University of California San Francisco-Gladstone Institute of Virology and Immu- Data availability All sequencing data generated during this study are included in the Integration sites Source data file 1
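As a rough illustration of two of the post-processing rules described in the computational pipeline above (merging integration sites that fall within 250 bp into single events, and binning read-count quantifications into the high/medium/low/trace/silent categories), a minimal Python sketch follows; the function names and example inputs are illustrative and are not the published analysis code.

```python
import numpy as np

def merge_sites(positions, window=250):
    """Merge integration sites (bp positions on one chromosome) lying within
    `window` bp of the previous site; each cluster counts as one event."""
    clusters = []
    for pos in sorted(positions):
        if clusters and pos - clusters[-1][-1] <= window:
            clusters[-1].append(pos)
        else:
            clusters.append([pos])
    return [int(np.mean(c)) for c in clusters]

def expression_category(value, all_values):
    """Bin a read-count quantification into the five categories named in the text:
    upper 1/8th = high, upper quarter = medium, upper half = low, >0 = trace, 0 = silent."""
    v = np.asarray(all_values, float)
    if value >= np.quantile(v, 7 / 8):
        return "high"
    if value >= np.quantile(v, 3 / 4):
        return "medium"
    if value >= np.quantile(v, 1 / 2):
        return "low"
    return "trace" if value > 0 else "silent"

# Hypothetical usage
print(merge_sites([10_000, 10_120, 55_300, 55_450]))   # -> two merged events
reads = [0, 3, 12, 40, 150, 700, 2_500, 9_000]
print([expression_category(x, reads) for x in reads])
```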
Least Squares Monte Carlo Simulation-Based Decision-Making Method for Photovoltaic Investment in Korea : Solar power for clean energy is an important asset that will drive the future of sustainable energy generation. As interest in sustainable energy increases with Korea’s renewable energy expansion plan, a strategy for photovoltaic investment (PV) is important from an investor’s point of view. Previous research primarily focused on assessing and analyzing the impact of the volatility but paid little attention to the modeling decision-making project to obtain the optimal investment timing. This paper utilizes a Least Squares Monte Carlo-based method for determining the timing of PV plant investment. The proposed PV decision-making method is designed to simulate the total PV generation revenue period with all uncertain PV price factors handled before determining the optimal investment time. The numerical studies with nine different scenarios considering system marginal price (SMP) and renewable energy certificate (REC) spot market price in Korea demonstrated how to determine the optimal investment time for different PV capacities. Therefore, the proposed method can be used as a decision-making tool to provide PV investors with information on the best time to invest in the renewable energy market. Introduction Over the past decades, energy transition has been growing following the diffusion of renewable energy resources worldwide [1]. Korea started actively participating in climate-related activities aiming for a higher penetration level of renewable sources. Such activities include operating a renewable energy supply policy referred to as the Renewable Portfolio Standard (RPS), obligating electricity suppliers to source a specified proportion of the electricity they provide to customers from entitled renewable sources [2]. This can be conducted and fulfilled using a renewable energy certificate (REC). The REC is issued to energy generators according to the amount of eligible renewable electricity they generate. Moreover, the REC is weighted based on the source they use to generate electricity [3]. Regarding electricity suppliers, demonstrating compliance with renewable obligations at a minimum REC allows them to improve portfolio profitability. However, oftentimes newly installed renewable energy generators hold little data, making modeling the RPS market investment portfolio strenuous. The driving force for investors to invest in the renewable energy market is the guaranteed yield. Various methods, such as net present value, internal rate of return, and discounted cash flow methods, have been developed to calculate the profit of newly installed solar power plants. After deciding whether to invest in PV, the next step is to decide when to invest in PV. Although these methods can be used to evaluate the economics of a PV investment, they can only be evaluated based on a predefined investment year for the PV investment. In the REC market, the timing of investment is important because the economic feasibility of PV investment depends on SMP and REC price. Since SMP and REC prices fluctuate frequently over time, investors should consider when to invest in order to achieve high returns. Investors can effectively manage their risk by considering when to invest in PV. The proposed LSMC-based method can be used as a decision-making tool to provide PV investors with information on the best time to invest. By considering when to invest, investors can manage their risk more easily. 
The study examines all the most essential uncertain features such as PV generation, SMP, and REC prices in Korea. The probability density function based on historical data derived the expected values of long-term, annual, and monthly PV generation. SMP was computed by solving the Lagrangian relaxation and dynamic programming. REC price was estimated following the current renewable expansion policy to ensure feasibility and accuracy, as renewable energy is heavily dependent on the renewable policy. The model then employed the LSMC method to simulate the total period of PV earnings in stocks and determine Korea's optimal investment time. The dynamic investment approach solves the value of options through a backward induction process, evaluating each trading point of the optimal decision between selling REC or holding REC. The proposed method was designed to determine the investment timing using Real Option. Moreover, the model used several scenarios to determine the optimal condition for profitable solar energy investment. The rest of this paper proceeds as follows. Section 2 discusses the existing literature on investment assessment, while Section 3 presents the formulations of the generation profit of PV and LSMC. Section 4 models an investment assessment procedure based on the LSMC method. Section 5 demonstrates the effectiveness of the proposed LSMC method from various case studies. Section 6 further concludes the results. Literature Review Sustainable development is a key goal of human development [4]. Research on sustainability has traditionally focused on changing the way societies produce and consume to achieve global sustainable development [5]. In recent years, sustainability has been addressed in social [6][7][8], economic [9][10][11], and environmental issues [12][13][14] to achieve the Sustainable Development Goals (SDGs). Various studies have been conducted on the analysis and impact of renewable energy on the short-term operation and long-term planning of the power grid. Sun and Nie [15] analyzed the impact of government energy policies on increasing renewable energy installations. Zhang Y. et al. [16] found that subsidy policies effectively promote innovation in new energy companies. Chen et al. [17] analyzed the development and policy of renewable energy utilization. Hong et al. [18] used an integrated analysis of Korea's energy system at the national level. In the past few years, numerous studies have been conducted on photovoltaic (PV) energy as a representative of clean and renewable resources [19]. Several studies have been conducted on PV project assessment using traditional investment techniques such as net-present value and the discounted cash flow method [20][21][22][23]. However, these methods disregard two factors: (1) handling the uncertain factors and (2) presenting the optimal time to invest. Several studies aim to overcome uncertainty issues by utilizing various methods. The factors relevant to the RPS market investment portfolio are categorized as (1) generated PV electricity, (2) electricity price, and (3) REC price. First, support vector machine, intelligence models such as fuzzy logic, adaptive neural-fuzzy-inference system or artificial neural network, and Markov Chain are used to forecast the short-term PV generation under grid stability [24][25][26][27]. Second, locational marginal price is predicted using the artificial neural network [28][29][30] with various data such as factors of generators and pumped storage power plants. 
Finally, REC pricing schemes are applied by determining the REC price [31][32][33]. However, all the above-mentioned models have limitations in predicting long-term results. Optimal investment timing is crucial for making profitable investment decisions. The real options (RO) theory presents the optimal investment timing as it can adapt the substantiated financial options theory to the investment decision. Various fields in the power industry use RO approaches for renewable energy [34], P2G (Power-to-Gas), or transmission lines [35][36][37][38][39]. In references [40,41], nuclear power plant investment is evaluated by real option analysis. These studies practically exercise the binomial tree model and the simulation method. The binomial tree model was first employed by Hoff et al. [42]. Martinez-Cesena et al. [43] employ the simulation method, which focuses on the effect of technological impacts on the project value. Reference [44] used the decision-making tree model to assess wind power productivity at different sites. The same technique is further employed in [45], which focuses on demand uncertainties. Previous research focuses on the solar PV project investment decision, which is only a now-or-never option. However, the REC spot market's unique characteristics work like a typical stock market. Consequently, decisions on whether to hold or sell REC can be exercised. Therefore, the main contribution of this paper is demonstrating decision making on the timing of PV generation plant investment, taking into consideration all uncertain PV price factors. The proposed LSMC-based method can be used as a decision-making tool to provide PV investors with information on the best time to invest. Problem Formulation In this section, a mathematical model is proposed to derive the optimal PV investment plan for PV generators participating in REC markets. Note that three core elements that determine the investors' profit should be designed, namely, solar power generation patterns (generation patterns for 24 h), SMP (system marginal price paid for energy production), and REC (certification for 1 MWh of generated renewable energy). Figure 1 demonstrates the schematic diagram of the proposed PV investment plan method.
PV power generation is derived stochastically from the probabilities of an hourly normalized generation profile constructed with a kernel function. The kernel curve represents the relative likelihood of output for each data sample, and the kernel density estimate (KDE) is constructed by stacking these kernel functions. The Epanechnikov kernel is selected because it is regarded as the most efficient for estimating the distribution [46]. The probability density function f_n obtained with the Epanechnikov kernel is defined by

f_n(P) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{P - P_i}{h}\right),

where P_i, h, K(\cdot), and n are the historical PV generation samples, the bandwidth, the kernel function, and the number of hourly historical solar generation data, respectively. The Epanechnikov kernel is K(y) = \frac{3}{4\sqrt{5}}\left(1 - \frac{y^2}{5}\right) for |y| \le \sqrt{5}, and zero otherwise. In this case, the probability density function is a nonparametric estimate of f from the samples P_i (a one-dimensional random variable, i = 1, ..., n) with n data points. Based on the estimated density, the Mersenne twister random number generator, whose output is well equidistributed [47], is used; owing to its long period and fast computation, it generates the expected values of PV generation. In this paper, the SMPs of the Korean wholesale electricity market are simulated until the year 2040 using the single unit dynamic programming (SUDP) algorithm. The SUDP algorithm solves the unit commitment problem with dynamic programming combined with Lagrangian relaxation, optimizing the subproblems of the individual generators separately [48][49][50]. Factors such as generating capacity, cost functions, physical constraints of generators, fuel costs, transmission constraints, and the load pattern are used by the SUDP algorithm to obtain the long-term SMPs; these factors are estimated based on Korea's Long-term Basic Plan for Electricity Supply and Demand.
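To make the kernel density step above concrete, the following is a minimal sketch of an Epanechnikov kernel density estimate over normalized hourly PV output. The sample array, the bandwidth value, and the evaluation grid are illustrative assumptions, not values from the paper.

```python
import numpy as np

def epanechnikov(y):
    """Epanechnikov kernel K(y) = 3/(4*sqrt(5)) * (1 - y^2/5) with support |y| <= sqrt(5)."""
    y = np.asarray(y, dtype=float)
    inside = np.abs(y) <= np.sqrt(5)
    return np.where(inside, 3.0 / (4.0 * np.sqrt(5)) * (1.0 - y**2 / 5.0), 0.0)

def kde(samples, grid, h):
    """Kernel density estimate f_n(p) = 1/(n*h) * sum_i K((p - P_i)/h)."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    # Broadcast grid against samples: shape (len(grid), n)
    u = (grid[:, None] - samples[None, :]) / h
    return epanechnikov(u).sum(axis=1) / (n * h)

# Illustrative use with assumed data: normalized hourly generation samples in [0, 1]
rng = np.random.RandomState(0)            # NumPy's legacy RandomState uses the Mersenne Twister
samples = rng.beta(2.0, 5.0, size=1000)   # assumed stand-in for historical hourly data
grid = np.arange(0.0, 1.0 + 1e-9, 0.01)   # domain 0 to 1 at 0.01 intervals, as in the paper
density = kde(samples, grid, h=0.05)      # h = 0.05 is an assumed bandwidth
```

With the density in hand, samples of normalized generation would be drawn with the Mersenne twister and scaled by installed capacity, as the paper describes.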
There is a causal relationship between the wholesale price of electricity in Korea and the price of liquefied natural gas (LNG). According to research conducted by the Korea Power Exchange, domestic LNG prices are mainly affected by the Dubai oil price, the Japan Crude Cocktail (JCC), and the Indonesia Crude Price (ICP). In this study, several fuel cost scenarios were created to determine the SMPs. The REC predictive model derives the monthly REC price based on Korea's renewable energy policy and the distinctive characteristics of the RPS. The demand of the REC spot market is inflexible, as electricity suppliers are obligated to source a growing proportion of the electricity supply from renewable energy, unless buyers postpone their obligated RECs or pay a penalty. Accordingly, the balance of supply and demand in the REC spot market drives the price changes. In this study, the model computed the prices by comparing supply and demand in the REC spot market: when supply exceeds demand, the REC spot market price decreases at a specified rate, and vice versa. REC supply is quantified using the generation blueprint for renewable energy and the REC weights planned by the government and the Korea Energy Agency. REC demand is computed as a specified ratio of total generation:

Q_{i,spot\_d} = SOR \times (SG - G_{L-Hydro} - G_{Tidal}),

where Q_{i,spot_d} is the REC demand of year i in the REC spot market, SG denotes the total generation of electricity suppliers in the previous year, SOR denotes the ratio used to determine the amount of RECs that must be purchased, and G_{L-Hydro} and G_{Tidal} are the generation of large-scale hydropower and tidal power, respectively. The annual expected trading volume of the total RPS market is calculated by comparing the demand and supply of the RPS market in the applicable year. Based on each market's share, the model computes the settled REC amount of each RPS market. The self-construction and in-house contract markets' percentages for each year are fixed to the average of the last five years' ratios, as their proportions have remained steady since the launch of the RPS policy. As the fixed-price contract market ratio increases steadily by approximately 1% each year, this tendency is applied to the corresponding variable. The remaining volume determines the supply and demand of the REC spot market under the assumption that REC transactions are fully settled at the smaller of the monthly supply and demand. The supply and demand of the REC spot market obtained from the previous steps are randomly spread over months and years. As the monthly trading volumes of the coming years cannot be observed, the SoftMax function [51] is used to normalize the trading possibilities of the 12 months using the standard exponential of each element. The monthly negotiable quantity of supply and demand is computed as follows:

C_{i,m,spot\_market} = C_{i,spot\_market} \times \frac{e^{rand_m}}{\sum_{j=1}^{12} e^{rand_j}},

where C_{i,m,spot_market} is the REC spot market quantity of supply or demand for month m of year i, C_{i,spot_market} is the REC spot market quantity for year i, and e^{rand_m} / \sum_{j=1}^{12} e^{rand_j} is the expected ratio of RECs to be traded in month m of year i. The model passes any quantity difference on to the next month, and untraded supply and demand are carried over to the following year, so the expected trading volume for the following year is updated with the untraded RECs. Following these steps, the model predicts the REC price for each year until 2040.
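As a sketch of the SoftMax spreading step just described, the snippet below allocates an annual REC spot-market quantity across 12 months using random logits. The annual quantities and the use of uniform random draws for rand_j are assumptions for illustration only.

```python
import numpy as np

def monthly_allocation(annual_quantity, rng):
    """Spread an annual REC spot-market quantity over 12 months with a SoftMax:
    C_{i,m} = C_i * exp(rand_m) / sum_j exp(rand_j)."""
    rand = rng.rand(12)                      # assumed: one random logit per month
    weights = np.exp(rand) / np.exp(rand).sum()
    return annual_quantity * weights

rng = np.random.RandomState(42)              # Mersenne Twister, as used in the paper
monthly_supply = monthly_allocation(1_000_000, rng)   # illustrative annual quantity (RECs)
monthly_demand = monthly_allocation(1_200_000, rng)

# Assume trades settle at the smaller of monthly supply and demand;
# untraded volume would be carried over to the next month or year.
traded = np.minimum(monthly_supply, monthly_demand)
```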
The LSMC simulation is processed at the final stage of the algorithm to obtain the optimal investment strategy for PV generators. It has been used in previous studies such as those of Zhu and Fan [52], Rigter and Vidican [53], Ryan et al. [54], Lee and Shih [55], and Zhang et al. [56]. When processing the investment strategy for PV generators, the Monte Carlo method simulates Ω sample paths over the performance warranty time PW_pv in N discrete time intervals. Along each path, SP^MC_l is generated as the PV net investment profit following the time evolution of the underlying asset. The net profit of PV generation drifts up at the risk-free rate of the PV investment and is randomly shocked by the standard deviation of returns in each time period dt. The expected PV investment profit at time dt on the l-th simulated path is expressed in terms of the following quantities: SP^MC_t(t), the expected PV investment profit at time t; PW_pv, the performance warranty period of the PV plant; G^pv_m, the generated PV power in month m; SMP_{t,m}, the system marginal price in month m of year t in KRW/MWh; μ_pv, the risk-free drift rate of the PV investment; σ_pv, the standard deviation of the PV plant profit; ε, a standard normal random variable; r, the discount rate; dt, the time step at which an investor can exercise the option, obtained by dividing PW_pv by N; and l, which indexes one of the Ω sample paths. For each time step dt, the exercise value of the PV investment indicates whether the investor should exercise the investment option or hold the investment plan. The immediate exercise value is calculated by comparing SP^MC_t(dt) with the investment cost: when the PV investment profit exceeds the investment cost at time t, the exercise value is positive, and otherwise it is zero,

IE_{pv}(dt) = \max\!\left(SP^{MC}_t(dt) - Cap_{pv} \times IC_{pv},\; 0\right),

where IE_pv(dt) is the immediate exercise value at time step dt, Cap_pv is the PV unit investment cost, and IC_pv is the installed capacity of the PV plant. The PV unit investment cost tends to decrease with the falling price of PV panels. Equation (6) derives the total payoff at time step dt by averaging the obtained net profit and discounting the payoffs back to present value. When IE_pv(dt) is positive, the continuation value is estimated using least squares regression [57]; this starts from the terminal payoff PO_pv and works backwards. The option holder then compares the obtained continuation value with the immediate exercise value and updates the payoff. Finally, the value of the PV investment is obtained as the average over all paths:

V^{MC} = \frac{1}{\Omega}\sum_{l=1}^{\Omega} PO_{pv}(l),

where V^MC is the expected PV investment value and PO_pv(l) is the payoff of the l-th sample path.

Investment Assessment of a Volatile PV Generator Based on LSMC with Stopping Rule

The RO model, especially the American option, estimates the value of renewable electricity investment because investors may exercise the option at any time point. In particular, Monte Carlo simulation has commonly been used to evaluate electricity investment projects, as the method captures both the uncertainty and the benefits of PV generators [58]. The LSMC, in addition, estimates the lifetime revenue of the PV generators and values the PV generator investment simultaneously. In the investment assessment process, the expected net profit of the PV generator is regarded as the initial stock value, and whole paths are produced in each trial.
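The path-generation step can be sketched as a geometric-Brownian-motion style simulation of the net profit process. The drift, volatility, horizon, number of paths, and initial profit below are placeholder assumptions, and the exact stochastic form used in the paper's formulation may differ.

```python
import numpy as np

def simulate_profit_paths(s0, mu, sigma, years, steps_per_year, n_paths, rng):
    """Simulate n_paths of a profit process that drifts at rate mu and is
    shocked by volatility sigma (geometric Brownian motion discretization)."""
    dt = 1.0 / steps_per_year
    n_steps = years * steps_per_year
    paths = np.empty((n_paths, n_steps + 1))
    paths[:, 0] = s0
    for t in range(1, n_steps + 1):
        eps = rng.standard_normal(n_paths)   # standard normal shocks
        paths[:, t] = paths[:, t - 1] * np.exp((mu - 0.5 * sigma**2) * dt
                                               + sigma * np.sqrt(dt) * eps)
    return paths

rng = np.random.RandomState(7)
# Assumed values: initial annual net profit (KRW), risk-free drift, volatility,
# 20-year performance warranty, annual exercise dates, 10,000 paths.
paths = simulate_profit_paths(s0=2.0e8, mu=0.02, sigma=0.15,
                              years=20, steps_per_year=1, n_paths=10_000, rng=rng)
```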
The option value for investors to exercise their investment is computed by comparing the net profit with the investment cost. Optimizing the stopping time is difficult, as the option can be exercised only once. Our proposed model optimizes the stopping time by comparing the investor's options of postponing the investment or investing now. The decision-making process runs from the last step back to the first step using backward dynamic programming, so that the discount rate is taken into account. In each calculation, the continuation value is estimated using least squares regression, as it signals the decision time that maximizes the payoff. Therefore, the LSMC approach estimates the optimal stopping time and maximizes the payoff over the whole observed horizon. Figure 2 displays the entire framework of our proposed investment assessment method, which proceeds as follows.

Step (1). The separate PV profit component model simulates the entire life span of the PV generator and sums the monthly components into the annual net profit, with the corporate tax rate, operation and maintenance cost, and annual solar panel degradation rate all taken into account. Here NP_ij is the net profit in month j of year i, CS_ij is the generated PV power in month j of year i, IC_sol,ij is the installed capacity of the PV plant, SMP_ij is the SMP in month j of year i, REC_ij is the REC price in month j of year i, OM_i is the operation and maintenance cost, and CT_i is the corporate tax cost.

Step (2). The algorithm then computes the total revenue up to the planned retirement of the PV generator. As an input to the LSMC, the investor's gross profit TP for year w is computed using the annual discount rate r and the PV unit investment cost Cap_pv.
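A minimal sketch of Steps (1) and (2) is given below. It assumes that monthly revenue equals generated energy times (SMP + REC price), that O&M and corporate tax are then subtracted, and that the panel degrades at a flat rate; none of the parameter values are taken from Table 1, and the exact form of the paper's profit equations may differ.

```python
def annual_net_profit(year_index, monthly_gen_mwh, smp, rec,
                      om_cost, tax_rate, degradation=0.005):
    """Assumed form of Step (1): sum monthly revenue over 12 months,
    apply panel degradation, then subtract O&M cost and corporate tax."""
    derate = (1.0 - degradation) ** year_index
    revenue = sum(g * derate * (p_smp + p_rec)
                  for g, p_smp, p_rec in zip(monthly_gen_mwh, smp, rec))
    taxable = max(revenue - om_cost, 0.0)
    return revenue - om_cost - tax_rate * taxable

def gross_profit(net_profits, discount_rate, unit_cost, capacity_kw):
    """Assumed form of Step (2): discount annual net profits over the plant
    lifetime and subtract the initial investment (unit cost x capacity)."""
    discounted = sum(profit / (1.0 + discount_rate) ** (i + 1)
                     for i, profit in enumerate(net_profits))
    return discounted - unit_cost * capacity_kw

# Illustrative usage with flat monthly values (all figures assumed, not from Table 1)
profits = [annual_net_profit(y, [11.0] * 12, [80_000] * 12, [35_000] * 12,
                             om_cost=5.0e6, tax_rate=0.22) for y in range(20)]
tp = gross_profit(profits, discount_rate=0.045, unit_cost=1.4e6, capacity_kw=99)
```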
Step (3). n random stock paths are generated using the Monte Carlo method; the LSMC method thus simulates stochastic chronological paths of the option that investors can exercise.

Step (4). Backward dynamic programming takes the higher of the discounted exercise value and the continuation value estimated with least squares regression. The regression function using Laguerre polynomials is as follows:

\hat{V}(S_t) = a L_0(S_t) + b L_1(S_t) + c L_2(S_t) + d L_3(S_t),

where \hat{V}, L_0, ..., L_3, and S_t denote the estimated continuation value, the Laguerre polynomials, and the stock price at time t, respectively, and a, b, c, and d are the coefficients of the regression function. The expression above ignores the exponential weighting term for ease of calculation. If the exercise value is larger than the continuation value, the option is exercised.

Step (5). The optimal option stopping time is obtained from Step (4): it is determined by continuously comparing the updated exercise value with the continuation value. Because one stock path may have several candidate stopping times, the last stopping time found in the backward process is the most profitable one.

Step (6). The average of the first-step values gives the expected profit of PV generator investors. The proposed model can thus be used to analyze the profit of a PV generator at a given capacity and to calculate the optimal stopping time using the LSMC method.
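The following is a compact sketch of the LSMC backward induction in Steps (4)-(6), regressing discounted continuation payoffs on the first Laguerre polynomials without the exponential weighting, as in the text above. The path data, strike (total investment cost), and discounting details are illustrative assumptions.

```python
import numpy as np

def laguerre_basis(s):
    """First four Laguerre polynomials L0..L3, without the exp(-s/2) weight."""
    return np.column_stack([
        np.ones_like(s),
        1.0 - s,
        1.0 - 2.0 * s + 0.5 * s**2,
        1.0 - 3.0 * s + 1.5 * s**2 - s**3 / 6.0,
    ])

def lsmc_value(paths, strike, r, dt):
    """Least-squares Monte Carlo for an American-style investment option.
    paths: (n_paths, n_steps+1) simulated profit values; strike: investment cost."""
    n_paths, n_cols = paths.shape
    cashflow = np.maximum(paths[:, -1] - strike, 0.0)    # terminal payoff
    stop_step = np.full(n_paths, n_cols - 1)
    for t in range(n_cols - 2, 0, -1):                   # backward induction
        exercise = np.maximum(paths[:, t] - strike, 0.0)
        itm = exercise > 0
        if itm.sum() > 4:
            # Discount each path's future cash flow back to time t
            disc = cashflow[itm] * np.exp(-r * dt * (stop_step[itm] - t))
            X = laguerre_basis(paths[itm, t])
            coef, *_ = np.linalg.lstsq(X, disc, rcond=None)
            continuation = X @ coef
            take = exercise[itm] > continuation           # exercising beats waiting
            idx = np.where(itm)[0][take]
            cashflow[idx] = exercise[itm][take]
            stop_step[idx] = t
    value = np.mean(cashflow * np.exp(-r * dt * stop_step))
    return value, stop_step                               # option value and per-path stopping step
```

With assumed figures, a call such as `lsmc_value(paths, strike=3.0e9, r=0.045, dt=1.0)` would return the expected investment value, and a histogram of `stop_step` corresponds to the stopping-time histograms discussed later in the results.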
Numerical Results

A case study is presented to demonstrate the RO valuation of PV investment using the LSMC simulation. The model used input data from 2016 to 2020, obtained from KEPCO and relevant studies. The data for the PV investment plan algorithm, which include financial variables and maintenance durations, are fixed values. This study modeled PV generators located in Jeollanam-do, the southwestern part of Korea, as it has the highest PV insolation of all regions. Table 1 details the values of the corresponding parameters. The initial PV panel cost in Table 1 assumes installation in 2020. In addition, it is assumed that each year the cost decreases by 3.9% for PV installations of 100 kW or less, 4.3% for installations between 100 kW and 3 MW, and 4.5% for installations above 3 MW (an average CAPEX decrease rate of 3.9-4.5%). MATLAB was used to implement the model.

Modeling of PV Revenue

A probability density function is constructed from the historical PV generation data. For the estimation, the algorithm selected hourly data with an average utilization factor of over 1%. Figure 3 illustrates the January result of the estimated probability density function over the domain from zero to one at 0.01 intervals. Although the historical data contain no negative values, the estimated density extends into negative values because the density is estimated from relative likelihoods; these negative values correspond to no power generation and were replaced by zero. As the PV generation model is normalized by the capacity of the PV generator, the simulation results must be multiplied by the PV capacity to obtain the PV generation. The generator portfolio and power load data used for SMP forecasting were taken from the ninth long-term basic plan for power supply and demand. The SUDP cost functions used historical data, and generators under construction were assumed to have the same parameters as brand-new generators using the same fuel. Fuel costs and transmission constraints were assumed to be similar to those in 2019. Renewable energy capacity data predicted up to 2040 were used as input data, with the following utilization factors: 14.6% for PV, 23% for onshore wind power, 30% for offshore wind power, and historical values for other renewable energies. The load pattern was assumed to grow gradually each year at a fixed ratio following the total load increase rate. The RPS market supply and demand quantities were estimated under the assumption that the REC weights maintained their 2018 values. The volume of the REC spot market was determined by deducting the transaction volumes of the other REC markets from the annual RPS market volume and updating it with the previous year's surplus. Figure 4a shows the shift of the REC spot market volume compared with the whole transaction volume of the RPS markets operating in Korea. In the RPS market, both demand and supply increased, with an inflection point at which demand became higher than supply. In the REC spot market, as shown in Figure 4b, this inflection point was reached considerably earlier, and the gap between the two was considerably larger. Note that the REC spot market graph ignores the effect of electricity suppliers paying the penalty surcharge and waiving their obligated RECs.
Investment Assessment Based on the LSMC

This section presents the optimal investment plan using the LSMC. It outlines three scenarios for each price element, SMP and REC, to account for their volatility, and the LSMC model proposes the optimal PV capacity and investment timing based on the resulting nine scenarios. To show the tendency of the investment value, four representative PV capacities are reported as results of our investment model. The REC price scenarios consider the impact of the penalty cost; they capture the buyers' behavior when the penalty cost is higher than the cost of fulfilling their mandate. Figure 5 shows the REC price scenarios used to demonstrate the PV investment simulation. Although the overall upward price trend was similar in all scenarios, they differed in the final price reached in 2040. The REC scenarios were as follows:
1. REC-1: Obligated REC duty case. The REC spot market price frequently rises above the penalty, which leads to higher REC price volatility.
2. REC-2: Base case reflecting the current price fluctuation.
3. REC-3: Pay-penalty case. REC price increases are mitigated as REC buyers pay fines instead of buying their obligated RECs.
Table 2 presents the optimal total net profit and the optimal investment period for all nine cases. The larger the PV plant capacity that investors are willing to install, the sooner the investment must start: the optimal time predicted by the LSMC was 2024 for 3 MW and 2035 for 99 kW. In the base case scenario, the average net profit increased as the capacity grew. The return on investment (ROI) of the PV investment decreased as the solar capacity expanded, owing to the increasing cost of keeping the solar panels in shape to maintain the performance yield. The 99 kW solar capacity reached the highest ROI of 1.72 with an average net profit of 2.1 hundred million KRW, while the 500 kW solar system reached an ROI of 1.16 with an average net profit of 0.93 hundred million KRW. As the solar capacity expanded, the ROI fell drastically, reaching an average of 0.03 at a PV capacity of 1 MW. The 3 MW case had a negative ROI in most SMP and REC scenarios, with the most extreme case reaching an ROI of −80.4. The higher-capacity cases are poor investments, as the profit is not enough to recoup the initial investment.
The REC weight had a significant influence on PV profit, as all three scenarios showed a significant decrease in net profit when the PV capacity was 3 MW. This decline in revenue resulted from the REC weight applied to installations greater than 3 MW: a REC weight of 0.8 was applied to PV plants with capacity greater than 3 MW, a weight of 1.0 to capacities under 3 MW but larger than 100 kW, and a weight of 1.2 to capacities under 100 kW. This shows that the application of the REC weight according to PV capacity had the greatest impact on the total PV investment profit. Figures 6-8 present the best timing for new solar investors to enter the REC spot market to yield the optimal profit. The histograms, with the y-axis expressed in percentage points, give the frequency of each optimal stopping time over 1,000,000 LSMC simulations; the most frequently indicated year is therefore the signal to start investing. The figures only show the results using SMP-3 with REC-1, as these gained the highest profit; however, the overall stopping-time trend was similar when simulated under different scenarios. Investors willing to install a higher PV capacity must start investing sooner than those intending to install a lower capacity; the higher the investment, the higher the profit. However, the case of solar capacity near 500 kW showed a trend that differed from the others: Figure 7 shows that the optimal investment time was the year 2023, but starting to invest in 2026 was profitless. Therefore, a PV capacity of 500 kW has a sparsely distributed overall investment timing, which adds another precarious factor to the venture. Overall, the PV investment analysis based on the LSMC indicates that investing around 2035 with a capacity of 99 kW could achieve the optimal return.
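To make the capacity-dependent REC weighting discussed above concrete, the helper below applies the weights quoted in the text (1.2 under 100 kW, 1.0 from 100 kW to 3 MW, 0.8 above 3 MW) to REC revenue. The treatment of exactly 3 MW and the generation and price inputs are illustrative assumptions.

```python
def rec_weight(capacity_kw):
    """REC weight by PV capacity, as described in the text.
    The boundary at exactly 3 MW is an assumption (grouped with the 0.8 band)."""
    if capacity_kw < 100:
        return 1.2
    if capacity_kw < 3000:
        return 1.0
    return 0.8

def rec_revenue(generation_mwh, rec_price, capacity_kw):
    """Weighted REC revenue: issued RECs scale with the capacity-dependent weight."""
    return generation_mwh * rec_weight(capacity_kw) * rec_price

# Illustrative comparison of a 99 kW and a 3.2 MW plant (assumed yields and REC price)
small = rec_revenue(generation_mwh=130.0, rec_price=35_000, capacity_kw=99)
large = rec_revenue(generation_mwh=4_100.0, rec_price=35_000, capacity_kw=3_200)
```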
Conclusions

The proposed LSMC-based method can be used as a decision-making tool for PV investors to decide whether and when to invest in PV, enabling them to manage the risks associated with PV investments. The LSMC model analyzes the profit of a PV generator at a given capacity and calculates the optimal investment time for participation in the REC spot market. Specifically, the work is conducted by administering several scenarios reflecting the RPS market attributes.
The model examines the prediction of PV revenue under the uncertainty of the price elements, such as generated solar power, SMP, and REC, before examining the investment plan. The expected PV revenue over the performance warranty time is computed using each predicted element. It is important not only to determine the appropriate investment capacity, but also to determine when to invest. One of the key factors in determining the PV investment timing is the REC weight in the RPS policy. The REC weight affects the price of RECs in the spot market, as it determines the number of RECs supplied to the market. With a moderate REC price, more renewable energy can be supplied. To supply sustainable energy, the government should carefully adjust the REC weights of all renewable energy sources, not just PV. In a future where renewables and conventional generators can compete in terms of LCOE, the REC weight should be set to zero. This study has limitations in a few respects. The revenue earned over the lifetime of the PV plant is highly dependent on renewable generation curtailment and on RPS policies such as changes in REC weights.
It is very difficult to accurately estimate the amount of renewable generation curtailment and the RPS policy changes over the lifetime of a PV plant. Therefore, it is also difficult to obtain the year-to-year volatility of PV revenues. These limitations can be addressed by estimating the annual amount of renewable generation curtailment and treating future RPS policy changes as multiple scenarios. Further research is needed on the development of an economic evaluation method for PV investment that incorporates renewable generation curtailment. For a sustainable energy supply, Korea is promoting policies to expand not only solar power generation but also wind power generation. Thus, the proposed LSMC-based method should be extended to handle wind power investment.
Mucociliary Clearance: Measures and Therapies

Mucociliary clearance can be measured clinically with the saccharine test in the sino-nasal system. When clearance is slow, treat mucus viscosity and slow ciliary beat frequency. Chronic sinusitis and bronchitis are often due to poor MCC, and therapy directed at either the upper or the lower respiratory system benefits both. Therapy for mucociliary clearance includes proteolytic enzymes to reduce mucus viscosity, vibration to improve ciliary beat frequency, irrigation with Locke-Ringer's solution for ciliary beat frequency, glucosteroids, and surfactants. The saccharine test can help determine the toxic effects of chromium, sulphur dioxide, benzines, and other toxic products. In acute allergy, mucociliary clearance speeds up, but sinusitis may follow when mucociliary clearance slows down later. When mucociliary clearance is slow, bacteria remain in place and are able to multiply. This is a significant factor in recurrent sinus infections and points to a method of treatment that includes restoring normal mucociliary clearance (MCC). Measuring MCC in the lower respiratory tract is complex and includes inhalation of radioactive particles. Clinically, the results of the saccharine test of the nasal system may be an indicator of MCC in the lower respiratory system.

Introduction

The human airway epithelium is characterized by ciliated cells with mobile cilia, specialized cell surface projections containing axonemes composed of microtubules and dynein arms, which provide ATP-driven motility. In the respiratory system, the combination of mobile cilia and mucus makes up mucociliary clearance (MCC), a means of clearing away inhaled particles and pathogens [1]. MCC refers to the process in the respiratory system by which pathogens, allergens, debris, and toxins are trapped and then moved out by ciliary action [2]. Cilia beat in synchrony to perform effective clearance; when the ciliary beat is asynchronous, MCC may be ineffective [3,4]. In primary ciliary dyskinesia, the asynchronous movements can be visualized on microscopic analysis [5].

Functions of MCC

When MCC is normal, bacteria are moved out of the upper and lower respiratory tracts before they have a chance to multiply. When MCC is slowed, bacteria remain in place and multiply to cause illness. The thickened mucus may block sinus openings and Eustachian tube orifices in the head and may block bronchial passages in the chest [6,7]. Mucus lies in two layers: the outer gel layer is thick and traps bacteria, dust, and other particles; the lower, or inner, sol layer is thin and contains the cilia, which move in synchrony to propel the outer layer in a single direction. A thin layer of surfactant separates the two layers. In the upper respiratory system, the mucus layer is moved from the nose to the nasopharynx and down the throat to the stomach. In the lower system, mucociliary clearance is accomplished by airflow and ciliary beat frequency: the mucus layer is moved up and over the larynx and is then swallowed into the stomach, where stomach acid generally inactivates the bacteria [8].

MCC and sinuses

Normal sinus drainage occurs by MCC, so whichever sinus is draining poorly may be aided by improving MCC. After most sinus surgery, MCC slows down, and post-sinus-surgery infection is frequently related to decreased MCC. Methods to reduce post-operative sinus infections by restoring MCC are recommended here [9,10].

Example of MCC post sinus surgery

John Smith, a 33-year-old man, had sinus surgery four months ago by a top surgeon.
He had expected to be free of sinus disease after his surgery, yet he had been sick with sinusitis ever since, despite antibiotics. Pulsed irrigation in the office removed thick mucus and colored exudate. He was continued on pulsed irrigation twice a day in order to thin his mucus and restore his cilia, and he was free of sinus symptoms in three weeks. His post-surgery sinus infection was due to reduced MCC; slowed MCC is seen frequently after sinus surgery.

MCC in the lower respiratory system

We know that in chronic bronchitis MCC slows. A typical viral infection slows MCC, and this is responsible for the cough and the buildup of toxins [11]. Many studies have examined the effect of air pollution, diesel fumes, and industrial "smoke" on the lower airway. With cigarette smoke, MCC is impaired, and this impairment may continue for months after smoking ceases. MCC is affected by the thickness and viscosity of the mucus itself: if it is thin and easily moved, the cilia beat easily; if it is thick and sticky, clearance suffers. The viscosity of mucus varies significantly; for example, in acute allergy the mucus is of low viscosity. To improve mucus viscosity, increased fluid intake is a crucial factor.

Viscosity therapy

Thick bronchial mucus benefits from inhalation of 7% saline and dornase alfa. Dornase alfa (Pulmozyme) is of particular benefit in CF because of its enzymatic action on DNA material in the bronchial passages. Protease products such as papain and bromelain help thin mucus, as does Mucinex [12]. Thick nasal/sinus mucus is associated with biofilm and benefits from the use of surfactin products to remove thick mucus effectively. Adding dilute Johnson's Baby Shampoo to the nasal irrigation solution is reported to help reduce biofilm; another product is hyaluronic acid. Protease products such as papain and bromelain are also of value in delivering antibiotics into the biofilm [13]. In summary, measures to thin the mucus layer include:
• Adequate intake of water and liquids
• Green tea with lemon or lime [14]
• Irrigation with a surfactin such as Johnson's Baby Shampoo (Isaacs) [15]
• Pulsed irrigation
• Mucinex
• Pulmozyme
• Inhaling warm moist air for nasal mucus
• Inhaling warm moist air with the tongue extended for bronchial mucus
• Bromelain and papain [16]

The sol layer: The other main factor is the cilia in the sol layer. Do they move fast enough and in synchrony to move the mucus? Or are they slowed by cold or by toxins such as chlorine? [5]. Example: a 35-year-old woman developed a sinus infection after each spring allergy season; it is common for MCC to slow at the end of a seasonal allergy. She was prescribed pulsed irrigation, humming "om" at a low tone, and green tea, and these measures helped her avoid sinus infection following allergy in the future.

Chemistry of tea and chicken soup: Intake of green tea, with its anti-inflammatory effect, benefits all levels of the respiratory system. Green tea contains phenols; its polyphenols include EGCG, epicatechin gallate, epicatechins, and flavanols. Chicken soup also contains L-cysteine that is released when the soup is taken; this amino acid thins mucus in the lungs, aiding the healing process.

Measuring mucociliary clearance: A clinical method of measuring nasal MCC is to place a particle of saccharin onto the medial surface of the inferior turbinate, one cm back, and time how long it takes for the particle to reach the tongue, where it is tasted.
The saccharine test has established diagnostic standards. The test can be used to identify workplace toxins, such as chromium or sulphur dioxide, that are causing illness, and inhaled solvents should be tested as well. The saccharine test can also be used to evaluate air pollution. Hyperbaric oxygen slows cilia, as does high altitude [17]. The saccharin test is also very useful for evaluating therapies: if a medication speeds MCC, that is a significant benefit. Unfortunately, MCC is not tested when new drugs or therapies are evaluated [18,19]. Measurement of MCC in the lower respiratory tract is complicated, consisting of inhalation of radioactive products and measurement of them as they are extruded. When lower-tract MCC is impaired, deep breathing, bronchodilators, and chicken soup can be effective. In many respiratory conditions, the viscosity of the mucus, the ciliary beat frequency, and the secretion of the goblet cells are often similar throughout the respiratory system, so therapy addressed to nasal MCC can benefit the MCC of the lower respiratory tract [8].

Mucociliary therapies

These therapies include lowering the viscosity of mucus: increased fluid intake, glucosteroids, surfactants, and proteolytic enzymes reduce mucus viscosity and affect the entire respiratory tract. In the studies by Workman, mechanical stimulation of airway epithelial cells causes apical release of ATP, which increases ciliary beat frequency and speeds MCC. This is illustrated by:
• Singing "ooommm"
• Humming
• Jumping jacks and running
• External chest thumping
• Breathing exercises
• Special coughing
• Pulsed irrigation
• Oropharyngeal exercises

In the respiratory system, humming at a low tone, "oooommmm," can benefit by affecting the mucus and speeding the cilia. Pulsed nasal/sinus irrigation affects biofilm, thick mucus, and ciliary movement. Green tea with lemon or lime is of benefit, as are topical glucosteroids, proteolytic enzymes, Mucinex, surfactants, and xylitol. Irrigating with Locke-Ringer's solution improves ciliary movement. It is important to differentiate between mucus that is too thick and cilia that are too slow. In fact, thick mucus does slow ciliary movement, and bacteria trapped there are able to multiply; hence fluids, green tea, and lemon/lime are very important for thickened mucus.

Cold slows MCC: Temperature affects MCC, and freezing cold slows the cilia. This is a primary reason why colds and bronchitis are more common in the winter; good advice is to warm the nasal passages before entering the elevator or classroom. A common cause of impaired cilia is irrigation solutions that contain benzalkonium, which slows ciliary beat frequency. In cystic fibrosis, it is the Na ion that causes the increased viscosity and immobility of the cilia due to thickening of the mucus.

Special factors for the lower respiratory system: In the lower respiratory system, corrective breathing exercises, bronchodilators, inhaled glucosteroids, and surfactants are of benefit. Rhythmic pounding on the chest to break up viscid mucus is a standard therapy for pneumonia and chronic bronchitis, and pulsed nasal irrigation may be of direct or indirect benefit; a deep-throated "ooommm" also helps. When MCC fails, the patient coughs: for example, a healthy miner may not cough in ordinary dust, but he coughs when he inhales a particle too large for mucociliary clearance. Inhaling warm moist air speeds ciliary beat frequency. For the lower respiratory system, the tongue should be extended while inhaling warm moist air; otherwise the warm air is lost in the upper throat.
Conclusion

It is useful to measure MCC and to differentiate between mucus viscosity and ciliary mobility. The saccharine test of MCC gives an indication of the state of the respiratory system and indicates when to emphasize methods of reducing mucus viscosity and increasing ciliary beat frequency. In most patients with chronic respiratory illness, improving MCC may be of significant benefit and a means of avoiding antibiotic therapy. Enhancing MCC is clearly of value for preventing as well as treating many respiratory conditions, and clinically, improving sinus disease is of value in conditions of the lower respiratory tract. The question that needs to be researched is whether simply improving nasal CBF aids conditions such as asthma, CF, and bronchitis; I recommend this as an important research study.
Differential Effects of a Mutation on the Normal and Promiscuous Activities of Orthologs: Implications for Natural and Directed Evolution Neutral drift occurring over millions or billions of years results in substantial sequence divergence among enzymes that catalyze the same reaction. Although natural selection maintains the primary activity of orthologous enzymes, there is, by definition, no selective pressure to maintain physiologically irrelevant promiscuous activities. Thus, the levels and the evolvabilities of promiscuous activities may vary among orthologous enzymes. Consistent with this expectation, we have found that the levels of a promiscuous activity in nine gamma-glutamyl phosphate reductase (ProA) orthologs vary by about 50-fold. Remarkably, a single amino acid change from Glu to Ala near the active site appeared to be critical for improvement of the promiscuous activity in every ortholog. The effects of this change varied dramatically. The improvement in the promiscuous activity varied from 50- to 770-fold, and, importantly, was not correlated with the initial level of the promiscuous activity. The decrease in the original activity varied from 190- to 2,100-fold. These results suggest that evolution of a novel enzyme may be possible in some microbes, but not in others. Further, these results underscore the importance of using multiple orthologs as starting points for directed evolution of novel enzyme activities. This construction results in incorporation of Met-Gly 2 -Ser-His 6 -Gly-Met-Ala-Ser before the initial Met of ProA. For cloning into pETcoco-2, the sequences encoding the tagged ProA enzymes were amplified from the corresponding pTrcHis constructs using the following primers: forward: 5' CAG CCT GAT ACA GAT TAA ATC AGA GCG GCC GCA TCG 3'; reverse : 5' CGA TGC GGC CGC TCT GAT TTA ATC TGT ATC AGG CTG 3'. The amplified fragments were then digested with NheI and NotI for 5 hr at 37 ºC. The resulting fragments were ligated into pETcoco-2, which had been linearized by digestion with NheI and NotI, using DNA ligase for 20 min at 16 °C. Generation of competent cells. Five mL cultures of ΔargC::kan ΔproA::cat (DE3) cells were grown overnight at 37 °C in LB containing 50 µg/mL kanamycin. The next morning, the cells were harvested by centrifugation at 9,500 x g for 5 min at 4 ºC. The cell pellet was resuspended in 100 µL of LB. Five µL of this cell suspension was inoculated into 500 mL of LB containing 50 µg/mL kanamycin. The cells were grown at 37 °C until the OD 600 reached 0.6. The cultures were incubated on ice for 20 min prior to centrifugation at 3800xg for 15 min at 4 °C. The cells were washed with 500 mL of 10% glycerol. The pellet was resuspended in 50 mL of 10% glycerol and centrifuged at 3800 x g for 15 min at 4 °C. The pellet was then resuspended in 5 mL of 10% glycerol and centrifuged at 1,900 x g for 15 min at 4 °C. The cell pellet was resuspended in 1 mL of 10% glycerol. Fifty µL aliquots were flash frozen in liquid nitrogen and stored at -80 °C. Purification of ProA enzymes. pTrcHis plasmids encoding proA alleles were introduced into competent ΔargC::kan ΔproA::cat (DE3) cells by electroporation and the transformants were spread onto LB plates containing 100 µg/mL ampicillin. After growth, a single colony was inoculated into 5 mL of LB containing 100 µg/mL ampicillin and the cells were grown with shaking for 14 hrs at 37 °C. The cells were harvested by centrifugation at 9,500 x g for 5 min. The cell pellet was resuspended in 1 mL LB. 
A 100 µL aliquot of the cell suspension was inoculated into 1 L of LB containing 100 µg/mL ampicillin. IPTG was added to a concentration of 1 mM when the OD 600 was 0.7 and the culture was grown with shaking for 14 hrs at 37 ºC. The cells were harvested by centrifugation at 3800 x g for 15 min at 4 °C. The cell pellet was resuspended in lysis buffer (50 mM sodium phosphate, pH 8.0, 10 mM imidazole, 300 mM sodium chloride, 20 mM DTT) containing 10% glycerol (2 mg cells per mL of buffer) and stored at -80 °C. The suspended cells were lysed by two passes through a French press at 12,000 psi, and the enzymes were purified as described in the Ni-NTA Purification System Handbook (Invitrogen). The supernatant was loaded onto a 12 cm x 2 cm glass column containing 8-10 mL of Ni-NTA agarose (Invitrogen) that had been pre-equilibrated with lysis buffer. ProA. We added a large amount of E. coli ProA (15 µM) to a solution of NAGSA or GSA in 100 mM potassium phosphate, pH 7.6, containing 1 mM NADP + , in 1 mL. The absorbance at 340 nm due to formation of NADPH exhibited a burst followed by a linear phase. The magnitude of the burst was proportional to the total amount of GSA and P5C, whereas the slope of the linear phase was constant, regardless of the amount of GSA and P5C. We conclude that the burst represents consumption of the free aldehyde and hydrated forms of the substrate, which we assume are in rapid equilibrium, while the linear phase represents the slower rate at which the P5C ring opens to form GSA. The magnitude of the burst, which typically represented 1-2% of the total concentration of GSA+P5C, was measured before each set of kinetic assays. NAGSA and GSA dehydrogenase activities were measured by monitoring the appearance of NADPH at 340 nm in reaction mixtures containing 100 mM potassium phosphate, pH 7.6, 1 mM NADP + , varying concentrations of NAGSA or GSA, and catalytic amounts of ProA or ProA*. All kinetic measurements were done at room temperature. Apparent values of K M based upon the total concentration of GSA+hydrate+P5C were adjusted based upon the concentration of GSA+hydrate measured as described above. Purification of N-succinyldiaminopimelate aminotransferase/acetylornithine transaminase (ArgD). E. coli argD was cloned into pET-21d in order to add a sequence encoding an N-terminal His 6 -tag and the resulting plasmid was introduced into electrocompetent E. coli DH5α cells (New England Biolabs) according to the manufacturer's protocol. Transformants were selected on LB plates containing 100 µg/mL of ampicillin. A single colony from the plate was inoculated into 5 mL LB containing ampicillin (100 µg/mL) and the culture was grown overnight with shaking at 37 °C. Plasmid DNA was purified using the QIAprep Spin Miniprep protocol (Qiagen). The purified plasmid was introduced into 50 µL of 10-β cells (New England Biolabs). The cells were allowed to recover for 1 hr at 37 °C in 1 mL of SOC medium (New England Biolabs). A 50 µL aliquot was then spread onto a plate of LB agar containing 50 µg/mL ampicillin. After overnight growth at 37 ºC, a single colony was used to inoculate 5 mL of LB containing 100 µg/mL ampicillin. The cells were grown overnight at 37 °C with shaking. The following morning, the cells were harvested by centrifugation at 4 °C for 15 min at 1900 x g. The cell pellet was resuspended in 100 µL of LB, and a 50 µL aliquot was inoculated into 500 mL of LB containing 100 µg/mL ampicillin.
The cells were grown until the OD 600 was 0.6-0.8, at which time IPTG was added to a final concentration of 1 mM. Cell growth was continued for an additional 3 hrs. The cells were harvested by centrifugation at 3800 x g for 15 min at 4 °C. ArgD was purified using the protocol described above for purification of ProA. Synthesis of N-acetylglutamate 5-semialdehyde. NAGSA was synthesized enzymatically using N-succinyldiaminopimelate aminotransferase/acetylornithine transaminase (ArgD) in a 300 mL reaction mixture containing 20 mM potassium phosphate, pH 8.5, 100 mM N-acetyl ornithine, 100 mM α-ketoglutarate, 0.01 mM pyridoxal-5'-phosphate, and 50-100 mg ArgD. After incubation for 5 hrs at 37 ºC, the reaction mixture was loaded at room temperature onto a 500. Generation of proA libraries. proA orthologs were amplified from the pTrcHisB plasmids into which they had been cloned using error-prone PCR with Mutazyme II by the following amplification protocol: Step 1, 95 °C for 2 min; Step 2, 95 °C for 30 s; Step 3, 50 °C for 30 s; Step 4, 72 °C for 1 min 30 s; Step 5, repeat steps 2-4 30 times; Step 6, 72 °C for 5 min. The PCR products were digested with NheI, BamHI and DpnI overnight at 37 °C and then purified by gel extraction prior to ligation into pTrcHisB that had been linearized by digestion with NheI and BamHI. Ligation was carried out at 10 ºC overnight. The libraries were introduced into electrocompetent 10-β cells. The transformants were incubated in 1 mL SOC medium (New England Biolabs) at 37 ºC with shaking for one hour prior to plating 200 µL aliquots onto LB agar containing ampicillin (100 µg/mL). Tens of thousands of colonies from each plate were recovered in LB medium and plasmids were isolated from each sample. Each library was introduced into ΔargC::kan ΔproA::cat cells by electroporation. After the transformants were allowed to recover in LB at 37 ºC for one hour with shaking, the cells were recovered, washed twice with PBS and resuspended in 200 µL PBS. A 1 µL aliquot was spread onto agar plates containing LB and ampicillin (100 µg/mL); in each case, more than 10 4 colonies grew. The remaining cells were spread onto agar plates containing M9/glucose and 1 mM proline. Plasmids were isolated from several colonies that grew on the M9/glucose/proline plates and the inserted proA genes were sequenced.
Orbital metastasis of breast carcinoma. We report a case of orbital metastasis from previously diagnosed metastatic breast cancer in a 46-year-old woman presenting with diplopia and proptosis of the left globe. Orbital computed tomography (CT) and magnetic resonance imaging (MRI) both revealed an intra-orbital, extra-bulbar mass measuring 1.5 × 3 cm in the left orbit. The patient had been diagnosed with stage IV breast cancer 4 years before. She had received chemotherapy with docetaxel and was on hormone therapy at the time of presentation of her eye symptoms. Treatment of the orbital lesion included systemic combination therapy with docetaxel and capecitabine as well as local irradiation with stereotactic radiosurgery (CyberKnife). There was gradual improvement of local symptoms and signs. Metastatic involvement of the orbit in malignant tumors is a rarely diagnosed condition. Breast cancer accounts for the majority of these cases. The appearance of eye symptoms in patients with a history of cancer should always be investigated with consideration of ocular metastatic disease. Introduction Breast cancer can metastasize to many sites, but the orbit is an infrequent location and a comparatively rare site of distribution among ocular structures. Longer survival of patients with metastatic disease, as well as advances in diagnostic imaging, may explain the increasing frequency of ocular involvement, 1 which occurs in up to one third of breast cancer patients. 2 Bone metastases as a sole metastatic site in breast cancer portend a better prognosis than visceral disease and are seen frequently in the ER/PR-positive, HER2/neu-negative subset of the disease. Nevertheless, they may present a particular clinical problem if they neighbor sensitive structures such as the spine or the eye, as in this case, and may need urgent treatment to preserve the patient's quality of life and function. Case Report A 46-year-old woman with a history of a grade 2, hormone receptor-positive, HER2-negative ductal adenocarcinoma of the left breast presented with diplopia, exophthalmos, decreased visual acuity, and pain in her left eye. The initial diagnosis was made 4 years previously, when the patient suffered a pathologic fracture of the right femur. Clinical examination revealed skin retraction and an estimated 4 × 5 cm palpable mass in the left breast. The area of the right femur was treated with one dose of analgesic external-beam radiotherapy. The patient subsequently underwent a lumpectomy. A total hip arthroplasty was performed a few days later. Breast carcinoma metastatic to the right femur was confirmed histopathologically. Staging CT scans of the thorax and abdomen and a bone scan were negative for other metastatic lesions. Preoperatively elevated serum CA 19-9 and CA 15-3 levels normalized immediately after surgery, and the patient was started on docetaxel at 30 mg/m² weekly for 12 weeks, followed by hormonal therapy consisting of goserelin and tamoxifen, as well as zoledronic acid. Further metastatic bone lesions developed in the spine, and the patient received analgesic radiotherapy (30 Gy) to the lumbar spine. Due to progression of bone disease, hormonal therapy was switched to anastrozole and then to letrozole. Four years after the initial diagnosis, the patient presented with diplopia in all gaze directions, exophthalmos, and proptosis of the globe. Ophthalmologic examination revealed visual acuity reduced to 4/10 in the left eye.
A CT and an MRI of the orbits and head were performed, both showing a solid, intra-orbital, extra-bulbar, 1.5 × 3 cm mass occupying the inferior quadrant of the left orbit (Figs. 1-3). Concomitant serum tumor marker elevation, together with the imaging findings, was most compatible with metastatic disease in the orbit. Combined chemotherapy with docetaxel 75 mg/m² intravenously on day 1 and capecitabine 1000 mg/m² per os twice daily for 14 out of every 21 days was started. Additionally, the orbital mass was irradiated with a CyberKnife image-guided stereotactic radiosurgery system in a single session, with a total dose of 1,700 cGy delivered to the tumor with 6 MV photons. Eye symptoms resolved almost completely during the following weeks, and there was also a gradual decrease in serum tumor marker levels. An orbital CT performed 7 months after the diagnosis of orbital involvement disclosed regression of the tumor, which then measured 0.6 cm by 1.7 cm (Fig. 4). The patient remains free of ocular symptoms 18 months after stereotactic treatment. Discussion We describe a case of orbital metastasis presenting as a relapse of a known, previously treated, metastatic breast carcinoma. Orbital metastases represent a small but increasing percentage of all orbital tumors, reported in different case studies and series to have an incidence of 1% to 13%. Breast cancer is by far the most common primary site, accounting for 28.5%-58.8% of cases of orbital metastases, followed by lung, prostate, gastrointestinal, kidney and skin (melanoma) cancers. [1][2][3][4][5] Unilateral disease is the usual presentation, and the intra-orbital anatomical distribution predominantly involves the lateral and superior quadrants. 1 Orbital metastatic lesions usually present in patients with an established diagnosis of disseminated cancer, and for breast carcinoma there is a long median time interval of 4.5-6.5 years from diagnosis. The longest reported intervals from the diagnosis of primary breast cancer to the presentation of orbital metastasis are 25 and 28 years, respectively. 6,7 However, in up to 25% of cases, orbital metastasis is the initial finding of a previously undetected primary cancer. 1,[8][9][10][11][12] Due to a tissue-specific preference of breast cancer for extra-ocular muscle and surrounding orbital fat, diplopia resulting from motility deficits is a prevalent symptom. Other common symptoms and signs include proptosis, eyelid swelling or a visible mass, pain, palpebral ptosis, globe divergence, and blurred vision, caused by infiltration or compression. Enophthalmos is a less common but distinctive sign of orbital infiltration by scirrhous breast adenocarcinoma. 1,5,13,14 In a recently reported case, orbital metastasis presented as neurotrophic keratitis. 15 Definitive diagnosis of an orbital lesion requires an orbital biopsy (either FNA or open biopsy). However, in patients with known metastatic cancer, as in our case, biopsy may be avoided if there is strong clinical and imaging suspicion of metastatic disease. It should be reserved for patients with no known previous history of cancer and for patients in whom the orbit is the only site of suspected metastasis and a definitive diagnosis would change overall management. 1 Metastatic lesions to the orbit usually present as irregularly shaped masses on noncontrast CT that are isodense to muscle. With contrast injection, they show slight enhancement.
Orbital bony wall involvement is also a common finding, especially in prostate cancer. On MRI, metastatic disease is usually hypointense to fat on T1-weighted images (T1WI) and hyperintense to fat on T2WI. This appearance may help to differentiate it from an orbital pseudotumor, which is usually isointense to fat on T2WI. When hyperintense lesions are seen on T1WI, a very vascular metastasis (e.g., thyroid, renal) or a melanoma metastasis should be suspected. 16 The combined involvement of the orbit and adjacent structures, such as the paranasal sinuses, is a rare condition revealed by imaging studies. 17 In addition to metastasis, the differential diagnosis of an orbital process should include inflammatory lesions, benign tumors (such as hemangiomas), and lymphoproliferative disorders. Idiopathic orbital inflammatory syndrome (IOIS, or orbital pseudotumor), sarcoidosis, and Wegener granulomatosis are inflammatory conditions that may present in a similar manner. Given that inflammatory signs are common in orbital metastases from breast cancer, they can be misdiagnosed as thyroid orbitopathy, cellulitis, myositis, scleritis, or endophthalmitis. The distinguishing feature of orbital metastases is a rapid onset and progressive course with combined motor and sensory deficits that do not respond to antibiotics or steroids. 18,19 Treatment of orbital metastases is inevitably palliative, given that hematogenous spread of cancer to the orbit is a sign of systemic disease and involvement of other sites. Surgical intervention is generally not recommended unless it is performed for diagnostic purposes (biopsy) in patients with no previous history of cancer [20][21][22][23][24][25][26][27][28] or as palliation (tumor resection or enucleation) in cases of unmanageable local symptoms. 1 The main treatment option is radiotherapy, with high rates (60%-80%) of clinical improvement of local symptoms and vision. External-beam irradiation is the most common and accessible modality, with a total dose of 20-40 Gy delivered in fractions over 1-2 weeks. 1,5 Stereotactic radiation therapy (SRT) has recently evolved as an alternative modality, in an effort to apply high doses of radiation to a well-defined volume with steep dose gradients outside the target volume. 20 The combination of image-guided radiation using CT, MRI, and stereotactic localization defines stereotactic radiosurgery (SRS). Although not available in all treatment settings, SRT and SRS require a shorter treatment course compared with external-beam irradiation, thus contributing to a better quality of life. 20 To our knowledge, only two other cases of orbital metastases from breast cancer treated with the stereotactic method have been reported. 21 Because most patients have concomitant progressive systemic disease, chemotherapy, followed by hormone therapy in cases of hormone-sensitive tumors, is indicated in patients with good performance status. A contribution to the palliative result obtained by radiotherapy can be expected with systemic treatment. 22 In contrast, responses with systemic chemotherapy alone have been reported in choroidal metastases. In one recent case of choroidal metastasis of breast cancer, a dramatic response was observed with trastuzumab and vinorelbine. 23 The combination of radiotherapy, delivered in eight fractions of 4 Gy, and hyperthermia was recently proposed as a treatment for patients with recurrent breast cancer in the orbital region.
The feasibility of local hyperthermia in the orbit is limited by the depth of the tumor from the skin and the need to avoid microwave-induced high temperatures in the lens. 24 The prognosis of patients with metastatic orbital tumors is rather poor, with a median survival ranging from 22 to 31 months for breast cancer. 1,5 Nevertheless, rare cases of long-term survival after a diagnosis of breast cancer presenting as an orbital mass have been reported. 10,11,18
Human Anthrax Transmission at the Urban–Rural Interface, Georgia Human anthrax has increased dramatically in Georgia and was recently linked to the sale of meat in an urban market. We assessed epidemiological trends and risk factors for human anthrax at the urban–rural interface. We reviewed epidemiologic records (2000–2012) that included the place of residence (classified as urban, peri-urban, or rural), age, gender, and self-reported source of infection (handling or processing animal by-products and slaughtering or butchering livestock). To estimate risk, we used a negative binomial regression. The average incidence per 1 million population in peri-urban areas (24.5 cases) was > 2-fold higher compared with rural areas and > 3-fold higher compared with the urban area. Risk from handling or purchasing meat was nearly 2-fold higher in urban areas and > 4-fold higher in peri-urban areas compared with rural areas. Our findings suggest a high risk of anthrax in urban and peri-urban areas, likely as a result of spillover from contaminated meat and animal by-products. Consumers should be warned to purchase meat only from licensed merchants. Anthrax, caused by the bacterium Bacillus anthracis, is a widely distributed zoonotic disease that primarily afflicts herbivorous animals. 1 Human transmission is typically associated with rural agricultural activities such as slaughtering cattle or industrial processing. 1 However, anthrax outbreaks and the spread of infection have also been documented in urban markets and livestock trading centers from the illegal sale of contaminated animal by-products. 2,3 In Georgia, the incidence of anthrax has increased dramatically (> 5-fold during 2010-2012) and expanded geographically; evidence suggests urban areas were also at high risk. 4,5 Recently, human anthrax was linked to the sale of contaminated meat at an urban market in Tbilisi, 6 the Georgian capital, highlighting the potential for disease spillover into uncharacteristic areas at risk for anthrax transmission. In this instance, the sale of meat occurred at the Navtlugi market in the Isani District without proper inspection; the meat was then transported ~12 km to the Dezertirebi agrarian market in Tbilisi, where it was resold. 6 An individual subsequently contracted cutaneous anthrax after preparing the purchased meat for consumption; an epidemiological investigation traced the meat back to the informal meat merchant and halted sales. Given this recent event and the status of the disease in the country, we assessed epidemiological characteristics of human anthrax at the urban-rural interface during the period 2000-2012 in Georgia. We reviewed epidemiologic records from the National Centers for Disease Control and Public Health that included the case patients' place of residence, age, gender, and self-reported source of infection. Place of residence was mapped at the village level and classified as urban (> 800 people/km²), peri-urban (250-800 people/km²), or rural (< 250 people/km²) using population estimates from the World Population Mapping Project (WorldPop; http://www.worldpop.org.uk/) in ArcGIS (Esri, Redlands, CA) (Figure 1A). Annual incidence rates per 1 million person-years were calculated for urban, peri-urban, and rural areas using Georgian national census data (GeoStat, www.geostat.ge) and WorldPop estimates. Associations between the classified place of residence and the self-reported source of infection were analyzed using a χ² test in SAS (SAS Institute, Cary, NC; PROC FREQ).
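As a minimal illustration of the classification and rate calculations described above, the sketch below applies the stated population-density cutoffs and computes an average annual incidence per 1 million person-years. The function names and example numbers are assumptions for illustration only, not the study's data or code.

```python
# Hedged sketch (not the authors' code): applying the stated density thresholds
# and computing average annual incidence per 1 million person-years.

def classify_residence(people_per_km2: float) -> str:
    """Urban > 800, peri-urban 250-800, rural < 250 people per km^2."""
    if people_per_km2 > 800:
        return "urban"
    if people_per_km2 >= 250:
        return "peri-urban"
    return "rural"

def incidence_per_million(cases: int, population: int, years: int) -> float:
    """Average annual incidence per 1 million person-years."""
    return cases / (population * years) * 1_000_000

print(classify_residence(620))                              # 'peri-urban'
print(round(incidence_per_million(310, 975_000, 13), 1))    # hypothetical inputs
```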
We estimated the risk associated with urban, rural, and peri-urban communities and assessed two self-reported sources of infection: slaughtered/butchered livestock and handled/processed/ purchased meat or livestock by-products. We used a generalized linear model (GLM) with a negative binomial distribution in SAS (PROC GLM); because of overdispersion in the number of anthrax cases (ratio of the mean/variance was > 1) a negative binomial distribution was selected over a Poisson distribution. 7 We ran two models: model 1 with case patients' risk factors associated with slaughtering/butchering and model 2 with risk factors associated with handling/processing/ purchasing meat. Risk factors included age, gender, and community classification (urban, peri-urban, or rural). Incidence risk ratios (IRRs) were derived for each variable by exponentiation of the GLM model coefficients (SAS Institute; PROC GENMOD). We ran two separate regression models since risk varied across levels of the classified place of residence and the self-reported source of infection. Of the 592 cases, 497 (84%) reported either an exposure from slaughtering/butchering livestock (318 cases) or handling/ processing meat or animal by-products (179 cases) ( Table 1). Of the cases that reported exposure from handling/processing/ purchasing meat, 100 (56%) reported purchasing meat. The proportion of self-reported exposures differed between rural, peri-urban, and urban areas (χ 2 = 49.3, df = 2, P < 0.001); slaughtering/butchering livestock was more common in rural areas (78% We provide preliminary evidence of epidemiologic differences in human anthrax risk related to the place of residence in Georgia. Our findings indicated that reported exposure risks varied among rural, peri-urban, and urban areas. Transmission of human anthrax is typically associated with rural agriculture and slaughtering of livestock, as documented in Turkey. 2,8,9 In contrast, the spread of cases have also been linked to the sharing or selling of meat; in Paraguay, > 90% of cases were linked to the carrying of meat among individuals not involved with slaughtering or butchering. Consistent with these findings, we documented a majority of cases that reported slaughtering or butchering livestock; however, we showed a higher risk from handling/processing/purchasing meat or animal by-products in urban and peri-urban areas compared with rural areas in Georgia. One hypothesis to explain the high urban risk in Georgia is the spillover of anthrax across the urban-rural interface from the sale or sharing of contaminated meat and animal by-products; the recent dramatic increase in anthrax in Georgia has likely facilitated this process. 4 Although reports of urban anthrax are uncommon, human transmission has been documented in urban areas of Brisbane (Australia), 10 Almaty (Kazakhstan) 6,11 and in Europe from injection drug use. 12 Informal or illegal meat markets are often used to sell contaminated livestock by-products or meat to recoup economic losses. 3 Agrarian markets and livestock production are often situated at the fringes of urban areas where they are more accessible, possibly explaining the high peri-urban incidence we observed. In Ukraine, informal meat markets are a common occurrence, including major cities such as Kyiv (M. Bezymennyi, personal communication). Anthrax was recently confirmed in Ukraine in a backyard dog that was fed contaminated meat, 13 and that same contaminated meat was illegally sold at an urban market. 
14 As was previously documented, urban outbreaks in Tbilisi in 1995 and again in 1999 likely involved the sale and distribution of contaminated meat; the latter outbreak involved up to 42 individuals. 15 Our findings substantiate an earlier study that suggested contaminated meat sales were associated with the geographic clustering of human anthrax around urban areas in Georgia 4 and are also in keeping with research linking the spread of human anthrax between communities and transnationally via the sharing or sale of infected meat. 5,16 Changes to veterinary health policy and the cessation of compulsory livestock vaccination in the mid-2000s have also likely contributed to the current situation. Efforts to increase the number of official slaughtering plants may help ease barriers to slaughterhouse access and reduce the occurrence of illegal "shade tree" livestock slaughtering. However, indemnity programs that reimburse all or part of a sick or dying animal's value may go a long way in alleviating the economic burden. The true level of exposure risk in urban areas is unknown, since handling and cooking B. anthracis-contaminated meat may not lead to clinical infection. 17 Classifying urban and rural communities is difficult. Although we used established methodologies from the scientific literature, 18 our technique may have misclassified some communities. Additional research is needed to corroborate epidemiological records with geographic patterns of transmission. More stringent regulation and education about the disease are needed, as agricultural retail products that bypass inspection and the purchase of meat via informal markets without knowledge of the condition of the animal may increase risk. 16 Sustained livestock vaccination campaigns remain the most effective way to reduce human anthrax, as shown elsewhere in the region, 19 and efforts may be needed in or around uncharacteristic hot spots such as urban areas. Consumers should be warned to purchase meat only from licensed merchants with proper documentation.
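As an illustration of the modeling approach described in this report (a negative binomial regression for case counts, with incidence risk ratios obtained by exponentiating the coefficients), the sketch below fits such a model in Python with statsmodels. The data frame, column names, and numbers are invented for illustration; they are not the study's data, and the original analysis was carried out in SAS.

```python
# Hedged sketch (not the authors' SAS code): negative binomial regression of case
# counts with a log-population offset, and IRRs from exponentiated coefficients.
# All data values and column names below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "residence": ["urban", "peri-urban", "rural"] * 4,
    "gender":    (["male"] * 3 + ["female"] * 3) * 2,
    "age_group": ["<45"] * 6 + ["45+"] * 6,
    "cases":     [12, 30, 9, 6, 18, 5, 15, 34, 11, 8, 20, 6],
    "pop":       [500_000, 380_000, 700_000, 520_000, 400_000, 720_000,
                  450_000, 350_000, 650_000, 470_000, 370_000, 680_000],
})

# Offset by log(population) so coefficients are on the incidence-rate scale.
# The dispersion parameter is left at the statsmodels default for simplicity;
# in practice it would be estimated from the data.
model = smf.glm(
    "cases ~ C(residence, Treatment(reference='rural')) + gender + age_group",
    data=df,
    family=sm.families.NegativeBinomial(),
    offset=np.log(df["pop"]),
).fit()

irr = np.exp(model.params)          # incidence risk ratios
irr_ci = np.exp(model.conf_int())   # 95% confidence intervals on the IRR scale
print(pd.concat([irr.rename("IRR"), irr_ci], axis=1))
```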
Community-level evolutionary processes: Linking community genetics with replicator-interactor theory Understanding community-level selection using Lewontin's criteria requires both community-level inheritance and community-level heritability, and in the discipline of community and ecosystem genetics, these are often conflated. While there are existing studies that show the possibility of both, these studies impose community-level inheritance as a product of the experimental design. For this reason, these experiments provide only weak support for the existence of community-level selection in nature. By contrast, treating communities as interactors (in line with Hull's replicator-interactor framework or Dawkins's idea of the "extended phenotype") provides a more plausible and empirically supportable model for the role of ecological communities in the evolutionary process. community genetics | multilevel selection theory | replicator/interactor | extended phenotype Evolutionary processes in multispecies assemblages have far-reaching scientific, policy, and even ethical ramifications. Symbioses such as lichens and eukaryotic cells demonstrate that new Darwinian individuals can evolve from once-separate evolutionary lineages, given "vertical inheritance" (1, 2). Such transitions are limited, however, to only a few species. Whether higher-level ecological structures comprising many species could equally be subject to natural selection remains an open question in "macrobial" (3)(4)(5)(6)(7) or microbial communities (8)(9)(10)(11). The emerging field of community and ecosystem genetics, focused on genetic interactions in manipulated and natural environments and communities of many species of multicellular eukaryotes, specifically addresses the role of selection operating at multiple levels of organization (reviewed by Whitham et al. in 12). A novel aspect is the application of the tools of multilevel selection theory [MLST (13)] to communities without any expectation that they have undergone an evolutionary transition in individuality (2). There is little debate about individual-level selection in a community context. Such selection can drive lineage-specific adaptation and reciprocal evolution between species (coevolution). Further, multispecies systems of genes are involved in ecosystem engineering and likely evolve according to the ecological constraints affecting individual-level fitness (14). However, do complex ecological assemblages form entities subject to evolution by natural selection at their own level, as "units of selection"? If ecological assemblages are higher-level units of selection, their collective ability to respond to their environment could be significant for surviving climate change (12, 15).
Multispecies evolutionary dynamics can often be explained by selection on individuals.So whether communities act as cohesive wholes or collections of independent populations has been debated since the 1920s (16)(17)(18).Even detecting whether populations causally influence each other's distribution and abundance is challenging, let alone whether their covariation is due to communitylevel selection (19)(20)(21).Statistical techniques have been developed in attempts to parse the effects of selection into individual-and higher-level components (22,23).Another approach, common in community and ecosystem genetics, employs "community heritability" to identify whether community species composition is associated with genetic variation in a foundation species (12).Here, investigators use well-established heritability measures, which indicate the fraction of total phenotypic variation due to a species' population's genes, to assess the extent to which community traits could respond to selection. This forms a radical extension of the "community genetics" research program first outlined by Janis Antonovics (24)(25)(26)(27), as now communities are being treated as units of natural selection (e.g., 12).Advocates have extended heritability measures to include genetic interactions between species putatively subject to natural selection (12,28,29).Some interpret such extended heritability to imply that communities can also have fitness (differential survival and proliferation) and that such fitness covaries with community traits.The conceptual link between population genetic variation within a single species (from which heritability is directly measured) and the differential survival and proliferation of whole communities hinges on the premise that genetic variation within a foundation species is causally responsible for the fidelity of other species actively associating with, or avoiding, a given community during its assembly. We are concerned whether this causal connection can be inferred from heritability analyses and whether this approach can show that communities are units of selection themselves rather than reflecting in their composition the foundation species' "extended phenotype" (30).We begin by rationally reconstructing what community reproduction would be in nature by articulating an account of community phenotypes and community inheritance mechanisms.We aim to be charitable, providing a bestcase scenario for communities as units subject to natural selection. Mainstream formulations of evolution by natural selection (ENS) follow Richard Lewontin's "recipe" (31), which requires populations of entities that must exhibit variation, inheritance, and differential fitness.To quote Levins' and Lewontin's updated version of the recipe (32), three considerations are necessary and sufficient for ENS to occur, namely that "(i) There is variation in morphological, physiological, and behavioral traits among members of a species (the principle of variation).(ii) The variation is in part heritable, so that individuals resemble their relations more than they resemble unrelated individuals and, in particular, offspring resemble their parents (the principle of heredity).(iii) Different variants leave different numbers of offspring either in immediate or remote generations (the principle of differential fitness)." 
For this recipe to be applied to communities, it must be "substrate neutral," so that it can be applied to multiple levels of the biological hierarchy, removing the necessity of ENS occurring within a population of a single species but requiring that something like level-specific reproduction occur (13,33).Importantly, community genetics presently does endorse Lewontin's recipe, within a multilevel selection (MLS) setting, as the basis of community-level ENS.Whitham et al. (12), for instance, say that For evolution to occur at the group level, variation must occur in average group phenotype, heritability must exist such that progeny groups inherit their parent groups' traits, and selection must ensue whereby a covariance between group phenotype and group fitness allows certain group phenotypes to propagate in disproportionate numbers. So not only must communities have phenotypic traits distinguishable from those of their lower-level constituents, some of those traits must effect differences in fitness that allow for the community to reproduce (34).We introduce a plausible description of community-level phenotypes, then reconstruct the account of community-level inheritance of these phenotypes implicit in community genetics.We then explain why we remain skeptical as to whether one can infer community selection from such heritability measures.We articulate the relationship between community inheritance and heritability, as these two concepts can be conflated.This matters: inheritance may be imposed by experimental design and, therefore, heritability measures may lack natural ecological (external) validity.We suggest that a version of David Hull's replicatorinteractor framework for ENS (35) and/or Richard Dawkins's concept of the "extended phenotype" (30) better serve the purposes of community genetics (33). Community-Level Phenotype and Lewontin's Principle of Variability For natural community assemblages to be differentially selected, there must be a general and unified account of community-level phenotypes that can make a causal difference to the survival and/or proliferation of communities in nature, and this phenotype must vary between communities in a relevant population of communities.Experimental studies of group selection provide some guide to the different relevant higher-level properties, although most studies of group selection are conducted on single-species groups, limiting their applicability here (36). Lean (37) categorizes community properties as follows: "the maintenance of multispecies interaction networks such as food webs (community network structures), the maintenance of compositional identity or aggregative features (emergent community properties), or the various material outputs that the joint assemblage creates (community outputs)."These are ways to describe properties at the community level, not necessarily the sort of properties that could be selected community-level phenotypes.For this, these properties must function to favor differential reproduction or (arguably) persistence of the ecological community that possesses them.Any or all of emergent properties, food webs, or ecological outputs could be properties that would allow the communities to be replicated or maintained in the face of disturbance or perturbation.If functional properties alone are considered, a proposal addressing the latter has been made in the case of holobionts (38,39). 
Equally, community properties must warrant being described as phenotypes, serving shared purposes within the community.The mere presence of community-level properties does not indicate the community is a functional collective with shared unity of purpose [i.e., Type I Agency (40)].In having community-level properties be the result of differential selection on the genetic variation in a foundation species rather than the whole community, advocates of community and ecosystem genetics have jettisoned the requirement that there is unity of purpose between the populations in the community.The apparent higher-level adaptation of the community can be the result of the foundation species cultivating a community that will support its fitness.In the common garden experiments we are directly engaging with, there is a positive effect of foundation species variation on the species it recruits.However, this is not evidence that communities are entities capable of limiting lower-level selfishness to effect differences in community fitness (in accordance with some unity of purpose at the community level). Experimental inquiries into purported multispecies group selection based on community phenotypes exist.Bangert and Whitham (41) consider arthropod community composition as a community phenotype, which is influenced by cottonwood genetic diversity, independently from any effects the arthropod composition has on the abiotic output of the community system.Indeed, community and ecosystem genetics considers the population size of multiple species and their genotypes as a community phenotype (12,41).Studies of the Gaia hypothesis (42) or ecosystem evolution often consider instead community phenotypes that comprise the outputs of the assemblages (43,44).In the ecosystem services literature, these ecosystem outputs, which act to maintain biotic systems, are called regulatory services.When these self-reflexively maintain a community, community selection could occur due to this phenotype.It has been suggested that the birth-death dynamics of community network structures evolve by evolutionary dynamics (45). We accept that such collective-level phenotypes could be responsible for differential community persistence and/or recurrence.In addition, of course, these phenotypes can vary community to community, even when populations of communities are circumscribed quite tightly.However, according to Lewontin's criteria, for selection to impact the distribution of such phenotypes in future "generations" of communities, they must be transmitted to whole communities as descendents by some inheritance mechanism. 
Heritability, Inheritance, and Lewontin's Principle of Heredity The principle of heredity is especially problematic for communities.Simply interacting as a whole to produce a phenotype is not sufficient.A multispecies assemblage must also have the capacity to reproduce and transmit a phenotype to offspring assemblages.This concept is further challenging to apply here because it is derived from a synthesis of two different, but related, aspects of heredity.The first is the requirement that there exists an entity-level mechanism for the transmission of a trait from parent to offspring (in accordance with "offspring resemble their parents").The second is that some fraction of population-level phenotypic variation must be reliably transmittable to future generations as genetic effects on phenotype (in accordance with "the variation is in part heritable").Note that these two aspects of heredity have very similar terminology, with the first referred to as "inheritance" and the second as "narrow-sense heritability."The distinction is important, as heritability of the kind routinely measured within-community genetics (i.e., associated with genetic variation within a foundation species) does not depend on the existence of an inheritance mechanism for communities.Thus, establishing that community composition is associated with genetic variation in a foundation species might be necessary (e.g., case 1 below), but it is not sufficient (e.g., case 2 below) for communities to satisfy Lewontin's principle of heredity.It is also necessary that we establish "community inheritance." We begin with the property of inheritance, which is more challenging to apply to higher levels of biological organization (33).The intrinsic mechanisms of inheritance for lower-level reproducers like bacteria and multicellular (often sexual) organisms are widely understood, and consequently, their existence is taken for granted within Lewontin's principle of heredity (e.g., DNA replication, germ cell production and fertilization in diploid organisms need no justification).However, for higher levels of organization such as multispecies assemblages, the mechanisms for reproduction and inheritance are often speculative, if present at all.The inheritance criterion requires that there exists some causal relationship between the entities whereby those related by common descent are phenotypically more similar compared with unrelated entities-that "offspring resemble their parents" in Lewontin's words (32).In many lower-level settings, inheritance is trivial to explain or establish (e.g., Mendelian transmission genetics for diploid organisms).However, the mere existence of lower-level genetic inheritance among the constituents of a higher-level entity is insufficient to cause, on its own, phenotypic covariance between higher-level entities (33,38).There must exist some additional biological or experimenter-imposed mechanism to support inheritance that defines parent-offspring lineages at that level (33).Without such a mechanism, higher-level phenotypic covariance could be a consequence of individual or species-level inheritance, just as organism-level inheritance might be seen as the consequence of gene-level inheritance mechanisms (46).However, organism-level inheritance does approach 100% for asexuals and 50% for each of the parents of a sexual organism, because there are chromosomes and other apparatuses of reproduction that serve as such mechanisms or devices.Genetically encoded information will, at least some of the time, be passed 
directly from parent organisms to offspring organisms.What are the comparable structures or devices for communities? In contrast to the causal relationship of inheritance between individual parents and offspring, the concept of heritability refers to a statistical property of a population.As routinely used in population genetics, heritability refers to the particular fraction of total population variation in phenotype that is due to genes (the actual partitioning of variance will be discussed below).It simply indicates that for ENS to occur, there must exist some population genetic variation associated with parent-offspring covariance.Thus, Lewontin's recipe is aligned with Fisher's Fundamental Theorem of Natural Selection, another well-known expression of ENS, which indicates that populations cannot evolve if there is no reliably transmittable (narrow-sense) genetic variance in fitness (47).For reproducers such as bacteria and multicellular organisms, the coupling of individual inheritance to the heritability of phenotypic variance within a species is largely guaranteed since genes are the material basis of both.However, the relationship is more complex when lower-level reproducers are the components of a higher-level entity and selection is on phenotypes definable only at that level.When additional levels are involved, the property of inheritance and the observation of heritability can become decoupled. We present below two hypothetical cases to illustrate the distinction between inheritance and heritability in the community genetics context.The first case extends a classic problem to communities such that ENS cannot operate at that level because there is no genetic variance (47).The second case illustrates a unique problem for community genetics.Here, the community cannot evolve by ENS because it lacks an inheritance mechanism, despite having positive heritability in the community genetics context.These cases reveal the difficulty of interpreting the evolutionary significance of community-level heritability.Such interpretation, we will see, requires additional knowledge about the operational level of inheritance. 
in each of its constituent species.Further, the assumed mechanism is accurate because species reproduction is perfectly coordinated with community reproduction such that descendent communities have every species represented exactly as in the parent community.Such a mechanism of reproduction would yield parent-offspring lineages of communities and consequently would permit phenotypic covariance at the community level.Given a population of such communities, on average, there will be greater phenotypic resemblance among those communities that share common ancestry as compared with unrelated communities.This type of community has the property of inheritance.Now consider that in communities of this sort, a particular community-level phenotype is due to the expression of a gene in one of the constituent species (the "focal" species).Consider that this community phenotype varies across a population of communities according to local environmental influences on gene expression, but all members of the focal species are genetically identical at the locus.Here, the community trait is passed on to descendent communities (inheritance) because the focal species is always transmitted to the next generation.However, the lack of genetic variation at the focal-species locus means that variation in this community phenotype has zero heritability.Clearly, zero heritability is not evidence that the gene has no causal contribution to the community trait.Moreover, it is also not evidence that a community would not inherit any local phenotypic influences on this trait due to niche construction.To conclude that this community phenotype cannot now respond to ENS, but that it would if sufficient genetic variation were to arise (giving rise to positive heritability), requires additional knowledge of the community inheritance mechanisms.Since natural mechanisms for community inheritance are often speculative and less precise than here, empirical estimates of community heritability are more challenging to interpret than heritability for lower-level traits. 
Hypothetical Case 2: Community-Trait Heritability without Community Inheritance.Consider another type of community where there is no mechanism for community reproduction.There is, however, extensive redundancy for ecological roles among potential member species.Although species recruitment is ecologically constrained, it remains plastic in terms of species composition [e.g., there is "functional redundancy" as in (48)].Assembly yields communities that vary in composition, but share properties determined by ecological constraints.Critically, because there is no mechanism of community inheritance, the constituent species disperse upon dissolution of communities and are recruited randomly with respect to parentage in the formation of future communities.Thus, there is no way for communities to faithfully transmit community-level traits to new communities.There will be no parent-offspring phenotypic covariance at the group level.However, due to sampling variation among ecologically redundant species during assembly [an important source of community variation, (49)], ecologically neutral variation in community traits such as species richness and evenness is expected.Now consider the measurement of heritability for a community phenotype in this setting.Recall that within the field of community genetics, heritability is estimated by selecting a focal species so that genetic variation can be precisely circumscribed and tested for association with a community trait.If genetically divergent lineages within the focal species covary with traits sensitive to sampling variation during assembly, there would exist positive heritability for a community trait which cannot be inherited because there are no parent-offspring relationships at the community level.It is unsurprising that ecological assembly rules would dictate the association of species of similar function, but substituting such rules for evolutionary processes requiring inheritance violates the evolutionary principles on which community and ecosystem genetics rests. Comparison of cases 1 and 2 illustrates why correct evolutionary interpretation of community heritability requires independent knowledge of any community-level inheritance mechanisms.In case 1 we require independent knowledge of community inheritance mechanisms to correctly interpret zero heritability as merely a problem of no fitness-affecting genetic variance (47).In case 2, we require independent knowledge of ecological assembly mechanisms to correctly interpret positive heritability as decoupled from the notion of community-level reproductive fitness.Case 2 highlights a unique challenge in community genetics: positive community-level heritability is not evidence of community inheritance. 
Inheritance in Ecological Systems.Debate over inheritance between ecological systems goes back a long way.Many have harkened back to Fredrick Clement's vision of ecological communities being akin to organisms, with reproduction and development (16).However, more widely, most ecologists do not consider natural ecological systems as having a tightly integrated and reproduced identity (50).The difficulty is defining parent-offspring ecological lineages and describing a mechanism for sufficient parentoffspring phenotypic covariance.Either is difficult, and, as we will note, sometimes ensured only by the experimental set-up itself.We have already described (and will explore further below) one method of suggesting community inheritance, which implicitly appears in community and ecosystem genetics, that of a "community propagule."This is a member of a "foundation" or "keystone" species which, in some manner, recruits a community around it. Ecological systems, within an area, often maintain their higher-level properties and species compositions over time.The persistence of such features may, however, be solely a result of the spatial autocorrelation between the lower-level (species) populations that comprise those ecosystems (51,52).In such cases there is no intrinsic mechanism for ecological inheritance despite the presence of geospatial boundaries (53)(54)(55). An inheritance mechanism of some kind might involve a lower-level community propagule causally producing a new community with the same higher-level properties as the parent system.All reproduction is subject to environmental influence, so perfect inheritance from a parent system is too stringent a criterion for such community inheritance.Instead, a propagule-based mechanism for generating new communities need only ensure that communities related by common descent are phenotypically more similar than unrelated communities.The most suggestive examples of a propagulelike reproduction of communities are the dispersal of "foundation species," sometimes known as a "keystone species" (56).Foundation species according to Whitham et al. "define much of the structure of a community by creating locally stable conditions for other species" (12).The dispersal of a foundation species is considered to function to create a higher-level process analogous to reproduction of the lowerlevel entities (a higher-level process we would call, instead, re-production).Consider the case where the reproductive excess of a foundation species disperses to a new location, and this founder event is reliably followed by a process of ecological recruitment, facilitating the introduction of other populations and results in a community phenotype. A mechanism that could lead to community re-production in this sense would be the Mendelian transmission of genes to descendent foundation species that influence how other species actively associate with, or avoid, the community during its assembly.Community genetics, specifically, presents evidence for a genetic basis of community assembly, and structure, as caused by interspecies indirect genetic effects (IIGEs) mediated by genetic variation within a foundation species (e.g., 57).This suggested mechanism of community inheritance thus depends on reliably coupling the lower-level inheritance mechanism of a foundation species to the control of higher-level ecological processes. 
For IIGEs within a community to be the target of ENS, according to Lewontin's recipe, the community must have a sufficient mechanism for the reproduction and transmission of IIGEs to future generations of communities.Therefore, what is at stake here is community inheritance.We contend that community heritability, as inferred from focalspecies genetic variation, is not direct evidence of community inheritance.Moreover, without some mechanism for community inheritance, such a measure of community heritability can be an inadequate predictor of community evolution via changes in IIGEs (elaborated below).Without a mechanism to accurately reproduce (rather than re-produce) the community and its phenotype, a unified adaptive response to external pressures is not possible because the IIGEs cannot be reliably transmitted to descendent communities and there is no mechanism of control over selfish species that disrupt IIGEs when they maximize their fitness.Thus, the application of Lewontin's recipe to community evolution hinges on the prior assumption that the lower-level reproduction and inheritance mechanism of a foundation species is causally responsible for a process of ecological recruitment resulting in predictable community inheritance of IIGEs.Below, we will present an alternative model for group-level selection of IIGEs that does not require strong assumptions about the existence of a reliable mechanism of community inheritance. The reliability of the propagule-like mechanism is important.Ecological interactions are highly contingent (58).The interactions between species are often highly dependent on background conditions, such as the abiotic environment and the order of species appearance ("priority effects") (59).Consequently, inferences made about species recruitment in controlled experiments could lack validity in the wild.Another difficulty with empirically determining whether a higher-level entity is a unit of selection is the ability to provide an identity condition for the community so as to determine what has actually been reproduced: who is the parent and who the offspring?One solution is to use the indexical community framework (e.g., 37).The reproduced community identity is described through indexing the composition to a focal population, in this case, a foundation species, and then identifying the network of populations that are causally connected to the focal population (37,60,61).However, higher-level ecosystem properties might be the product of multiple foundation populations.If propagule-like community reproduction required dispersal of a network of genes spread across multiple foundation species, the community propagule would then be indexed to this more complex cluster of populations (37), whose reproduction as a cluster is problematic.A further complication is the possibility of temporal variability.The composition of the index might be time dependent, with some species even leaving and rejoining a community when causal connections to the foundation species are plastic and subject to environmental modification.Clearly, identity conditions for such cases will be more challenging than for a singular foundation species.We have some difficulty, then, equating reproduction and re-production.Although we do not doubt that foundation species (one or a few) might sometimes determine what species are subsequently recruited, the principles of ecology, not evolutionary biology, are relevant here: communities are not "units of selection" (see discussion below on the principle of 
differential fitness). Heritability in Community Genetics. The use of heritability scores has been widespread in population genetics since they were devised by R. A. Fisher (62). Broad-sense heritability (H²) is a score between 0 and 1 representing the proportion of the variance of a phenotype within a population that is attributable to variation in all genetic factors. Through common garden experimental designs, heritability measures previously used on organismal phenotypes have been extended to community phenotypes. Such experiments yield estimates of the fraction of phenotypic variation among communities (V_P) that is associated with genetic variation (V_G) within a single foundation species (e.g., 63, 64). Broad-sense heritability is standardly identified through the equation H² = V_G / V_P, where V_G is the genetic variation in phenotype and V_P is the total variation in phenotype. This includes the assumption that V_P = V_G + V_E, or that the phenotypic variation (V_P) is a simple sum of variation in the genes (V_G) and the environment (V_E). However, this additivity assumption can create well-documented problems when the relationship is more complex. Variation in a phenotype is more realistically represented as V_P = V_G + V_E + V_G×E + cov_G×E, where V_G×E represents the nonadditive interactions of genes and environment and cov_G×E represents the degree to which genetic variation covaries with the environment experienced by the organism. Furthermore, V_G represents the sum of additive genetic effects (V_A) and interactive genetic effects such as dominance and epistasis. Critically, in the traditional setting, only the V_A are responsible for predictable phenotypic changes in response to ENS. This is the reason for the restricted form of heritability, h² = V_A / V_P, referred to as narrow-sense heritability. Community genetics employs heritability in the broad sense. Here, heritability represents the fraction of community-level trait variation attributable to any sort of genetic variation within the foundation species. Thus, community heritability (hereafter denoted H²_C) includes all genetic factors in a focal species, both additive and interactive, that affect a multispecies trait. The community compositional effects captured by H²_C are significant because the composition of a group can strongly influence individual fitness. Within multispecies groups, gene-mediated interactions come in two forms: 1) within-species indirect genetic effects (IGEs), and 2) interspecies indirect genetic effects (IIGEs). The latter underpin the genetic component of community-trait variation (12, 65). Thus, H²_C represents a significant extension of the traditional notion of broad-sense heritability (H²), which recognizes only the intragenomic interactions (dominance and epistasis). Estimates of H²_C for a multispecies phenotype are obtained from common garden experiments where the fraction of among-group trait variance (presumably due to variation in IIGEs) can be attributed to genetic polymorphisms within the foundation species.
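To make the logic of H² = V_G / V_P concrete in the common-garden setting described above, the sketch below estimates a broad-sense heritability for a community trait from clonally replicated foundation-species genotypes, using standard one-way ANOVA variance components. It is a generic illustration under a balanced clonal design, not the estimator used in the cited studies; genotype labels and trait values are invented.

```python
# Hedged sketch: broad-sense heritability of a community trait from a balanced
# common-garden design with clonally replicated foundation-species genotypes.
# Illustrates H2 = V_G / V_P via one-way ANOVA variance components only.
import numpy as np

def broad_sense_h2(trait_by_genotype: dict) -> float:
    """H2 = V_G / (V_G + V_E), with V_G from the among-genotype variance component."""
    groups = [np.asarray(v, dtype=float) for v in trait_by_genotype.values()]
    n = len(groups[0])                                     # replicates per genotype
    ms_among = n * np.var([g.mean() for g in groups], ddof=1)
    ms_within = float(np.mean([np.var(g, ddof=1) for g in groups]))
    v_g = max((ms_among - ms_within) / n, 0.0)             # among-genotype component
    v_e = ms_within                                        # within-genotype component
    return v_g / (v_g + v_e)

# Hypothetical community-trait scores (e.g., arthropod richness) on four clonal
# replicates of each of three foundation-species genotypes:
data = {"G1": [14, 15, 13, 16], "G2": [22, 20, 23, 21], "G3": [17, 18, 16, 19]}
print(round(broad_sense_h2(data), 2))
```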
In the traditional setting (diploid transmission genetics), broad sense heritability (H 2 ) is an inappropriate predictor of the response to ENS because sexual parents cannot reliably transmit intragenomic interactions (dominance and epistasis) to their offspring via haploid gametes.For this reason, narrow-sense heritability is used instead.Likewise, in the absence of higher-level trait transmission, H 2 C would be an inappropriate predictor of the phenotypic response to selection.H 2 C would become relevant to ENS, according to Lewontin's recipe, when the IIGEs responsible for a community trait are reliably transmitted between parent and offspring communities.While a positive estimate of H 2 C from a common garden experiment is consistent with this as a possibility, it is not evidence that reproduction and dispersal of foundational species play this role over the natural scale of environmental and genetic variation (66,67).Resemblance between community phenotypes could be due to factors outside of the variation in a foundation species.Although artificial selection experiments confirm that group-level ENS can produce significant evolution in multispecies systems, those experimental designs ensured that the IIGEs were reliably transmitted from parent to offspring collectives (4,5,8).It is noteworthy that these experiments validate theoretical predictions about group selection being more effective than individual selection when it can target indirect effects (68,69).However, the capacity for natural assemblages of species to evolve as units of selection (under Lewontin's recipe) remains an outstanding question.The answer to this question does not depend on the existence of IIGEs (they have been empirically confirmed), but rather whether communities have an intrinsic capacity to transmit them to future generations. Community Selection and Lewontin's Principle of Differential Fitness Consider Fig. 1A, which is a multispecies version of MLST.The letters (A,B,C, … Z) represent different species, of which organisms are members.Call these organisms "particles."They make up "collectives" of many such particles, representing many species.The circles and ellipse are multispecies "collectives" or "communities."For convenience, only three species are shown in each, but there can be many more species present.Collectives with organisms from species A, B, and C grow larger-so that the ellipse on the left comes to harbor more particles of all species contained in the collective (A, B, and C included) than those with representation of only one of these three species.We think multispecies MLS1 and MLS2 are analogous to the uses of Heisler and Damuth (22), writing about organisms and groups within a species: "Of interest in the former case are the effects of group membership on individual fitnesses, and in the latter the tendencies for the groups themselves to go extinct or to found new groups (i.e., group fitnesses)."So in MLS1, there need be no collective or community "fitnesses" in Lewontin's sense-"different variants must leave different numbers of offspring either in immediate or remote generations."In MLS1, the phenotypic variation is indeed at the level of communities, but communities do not leave offspring communities. 
Instead, as in more typical trait-group selection ( 70), all communities of whatever size dissolve, releasing their constituent organisms.These are then randomly recruited from a common pool to form the next generation of communities.Since there are more organisms of species A, B, and C in this pool because of their effect when together on the productivity of collectives, the second generation of collectives will have more ABC collectives than the first.The phenotypes of populations (their propensity to grow) could well be due to interactions (IIGEs) between individuals of different species, but no community in one generation would be the parent of any community in the next. It is to the advantage of organisms in species A to associate with (or "recruit") organisms of species B and C in MLS1, and many interspecies associations will indeed qualify as IIGEs (12).If such interactions entail that the A offspring of a parent A organism wind up preferentially bound to the B offspring of that A parent's partner B (and similarly C), then we have MLS2 (Fig 1B).Collectives will reproduce at least in part (organisms of those three species, if no others) as collectives and conform to Lewontin's recipe. In MLS1, IIGEs are only potential interactive properties experienced by the individual species, affecting individual selection within groups, as above.In MLS2, to the extent that there is vertical inheritance (collectives reproducing as collectives), IIGEs can be seen as transmittable properties of collectives.Analogously, although mitochondrial and nuclear mutations are sometimes opposed in eukaryotic cells, most of the time, there are positive interactions.This is why artificially imposed vertical group inheritance (MSL2) is such an effective means of producing a phenotypic response to selection that depends on positive IGEs or IIGEs (4,5,8,68,69). In MLS2, the differential survival and reproduction of descendent collectives (community or collective-level fitness) will ultimately favor the reproduction of beneficial IIGEs and disfavor the reproduction of deleterious IIGEs.With MLS1, although ABC collectives differentially grow, they do not reproduce, and fitness (as Lewontin defines it) can only be attributed to organisms within species.Note that while MLS1 and MLS2 represent distinct processes, a given natural collective could simultaneously express the characteristics of both to some degree. Replacing Lewontin's Recipe with Hull's Replicator-Interactor Framework Evidence that multispecies assemblages have the capacity to evolve as a natural unit comprised of dozens, and perhaps thousands, of species would support a major expansion of Darwinian theory, and proponents of community and ecosystem genetics are excited by the possibility, as it would provide a means of evolution for holistic adaptation otherwise inaccessible to individual-level selection.Their enthusiasm is further encouraged by experimental studies of community-level selection demonstrating that it can yield efficient and rapid evolution of holistic traits in a controlled setting (e.g., 8, 49, 71).However, in a natural setting, it is not sufficient that such traits have been shown to vary among communities and are influenced by genetically encoded interactions between species (i.e., community heritability).Their evolution by natural selection according to Lewontin's recipe can only happen if community-level IIGEs are transmitted largely intact from parent communities to offspring communities (i.e., if there is community inheritance). 
In an influential commentary on an experimental paper by Swenson et al. (8) showing "heritability at the ecosystem level," Charles Goodnight (72) writes: "In the first article of the first volume of Annual Review of Ecology and Systematics, Lewontin points out that any level of organization that can be grouped into a population of units has the potential to evolve by natural selection. Evolution by natural selection has been seen in experimental studies of individual and group selection, and now Swenson et al. have demonstrated that selection acting at the level of the ecosystem can cause evolutionary change" (emphases ours). At issue, really, is the distinction between demonstrating ENS, meaning change as a consequence of that process at some level, and evolving as a result of natural selection acting at the ecosystem level. We submit that the experiments reviewed by Goodnight (72) and others often cited (e.g., 49, 71) do demonstrate the former but show the latter only because MLS2-like inheritance has been imposed by the investigator. In order to allow for interactions between species to be transmitted to the next generation, the experimental design creates ecosystems with individual-like transmission dynamics that they are not known to possess under natural conditions. For instance, Swenson et al. (8) conducted one of their ecosystem inheritance experiments as follows: Each line consisted of 15 units and the 3 units with the highest (or lowest) value of the phenotypic trait were used as parents by combining the soil from the 3 units into a slurry that was used to inoculate the "offspring" generation of units. It is surely unsurprising that the "offspring" so defined resemble their "parents" more than they do all parents (including those with the lowest value). What is transferred between pots with Arabidopsis seedlings (mass of plants is the measured phenotype) is a sample of microbes, and when enough are transferred, the progeny communities cannot help but resemble parental communities. The experimental procedure ensures inheritance of material that, via the IIGEs preserved within the inoculum, affects Arabidopsis growth.
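To see why this design cannot help but produce parent-offspring resemblance, here is a toy re-enactment of the protocol quoted above (our sketch; the latent "microbiome effect," noise levels, and other parameters are invented and are not the authors' model or data): each generation, the three highest-phenotype pots out of fifteen are pooled into a slurry that seeds every offspring pot.

```python
# Toy re-enactment of the quoted protocol (our sketch, invented parameters, not the
# authors' model or data): 15 pots per generation, each pot's measured phenotype is a
# latent "microbiome effect" plus noise, the top 3 pots are pooled into a slurry, and
# the slurry's mean effect is copied (with transfer noise) into every offspring pot.
import random

random.seed(1)
N_POTS, N_PARENTS, GENERATIONS = 15, 3, 10
effects = [random.gauss(0.0, 1.0) for _ in range(N_POTS)]   # latent microbiome effect per pot

for gen in range(GENERATIONS):
    phenotypes = [e + random.gauss(0.0, 0.5) for e in effects]
    print(f"generation {gen}: mean phenotype {sum(phenotypes) / N_POTS:.2f}")
    top = sorted(range(N_POTS), key=lambda i: phenotypes[i], reverse=True)[:N_PARENTS]
    slurry = sum(effects[i] for i in top) / N_PARENTS        # pooled "parent" material
    # imposed vertical inheritance: every offspring pot is inoculated from the slurry
    effects = [slurry + random.gauss(0.0, 0.3) for _ in range(N_POTS)]
```

The steady rise of the mean phenotype depends entirely on the line that copies the slurry value into the offspring pots; delete that imposed inheritance step and the response to selection disappears.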
The extent to which natural ecosystems might evolve by natural selection depends on the extent to which vertical inheritance exists and dominates over natural processes, such as priority effects on ecosystem assembly, sampling variation during assembly, horizontal migration between ecosystems, and variability in the capacity of descendant systems to inherit critical biotic and abiotic material produced by niche construction activities. Ecosystems in nature may behave more like horizontally acquired microbiomes, in which one lineage (often, and sometimes arbitrarily, designated the host or a foundation species) recruits other lineages by a combination of direct (organism-organism recognition processes) and indirect methods analogous to "ecosystem engineering." Given that community heritability does not indicate the level at which fitness variation might be relevant to ENS, we advocate for caution in extrapolating H2C from common garden experiments to natural systems. In the simple case of individual selection within a single (enduring) community, species will interact and influence each other's fitness landscape (Fig. 1C). Genetic variation within such species can be the target of individual selection, and in that case, those species would coevolve. While the IIGEs within such a community can change over time according to this process, there is no re-productive (Fig. 1A) or reproductive (Fig. 1B) feedback affecting a distribution of IIGEs across a larger set of communities, so those IIGEs cannot be the target of selection at the community level. Nevertheless, positive H2C could be obtained from common garden experiments for any species-level polymorphisms that happen to be associated with some aspect of within-community composition. Here, broad sense heritability would be a poor predictor of any response to within-community ENS. A narrower sense of heritability would be more suited to this setting. For selection to produce an evolutionary sorting of alternative systems of IIGEs, there must be some mechanism whereby fitness effects (individual level in MLS1 or community level in MLS2) can feed back to a distribution of IIGEs among groups (Fig.
1 A or B).In case of MLS2, variation in community traits captured by H 2 C can be directly transmitted to descendent communities if the foundation species is part of the transmission "propagule."Here, since H 2 C does summarize genetically based interactions with potential to affect differential fitness at the group level, it should be a good predictor of the evolutionary response under MLS2, if this is imposed.Alternatively, in the case of MLS1, H 2 C can be interpreted as summarizing genetically based interactions with potential to feed back to individual-level fitness.However, since there is no community inheritance mechanism for IIGEs in MLS1, the evolutionary process depends on horizontal rather than vertical inheritance and fails to meet Lewontin's heredity criterion.Nonetheless, through effects on individual-level fitness, an evolutionary response in the genetically based interactions between species is possible via MLS1.One implication is that sets of genes residing in different species could experience a degree of coordinated evolution [reminiscent of Dawkins's "genes-as-oarsman" analogy (46)] according to the extent that their lower-level fitness effects are additive and are compatible with a given IIGE environment.The membership and stability of gene sets having such community genome dynamics should be the focus of future community genetics investigation. Foundation species play a role in community and ecosystem research very similar to that played by the host in "the hologenome theory of evolution" (73).Unsurprisingly, the objections to that claim (38,(74)(75)(76) focus on the problematic relationship between the re-production (rather than reproduction) of multispecies collectives and Lewontin's criteria.A solution might be realized in both settings if the standard view of ENS built around Lewontin's recipe were replaced by David Hull's replicator-interactor framework (35,55).In Hull's conception of ENS, holistic interactions between complex entities and their environment are the causal basis of differential fitness, which is manifested as differential reproduction of lower-level replicators. In such a framework, ephemeral entities like the ellipse in Fig. 1A illustrating MLS1 would be cast as "interactors" and could be organisms [as in Dawkins's The Selfish Gene (46)] but could as well be communities or ecosystems, while the cognate replicators could be genes (as in Dawkins's book) but also organisms or species whose differential reproduction is facilitated by being part of a better-growing or more persistent community or collective.Such a solution has been hinted at before and recently made more explicit (55,77).Here, we develop this idea further by applying the replicator-interactor framework to IIGEs as an expansion of the extended phenotype concept to include multispecies MLS1.We acknowledge that multispecies interactions do occur within a single collective, and extended phenotypes can evolve by nothing more inclusive than individual selection and coevolution in this context (Fig. 
1C).Under MLS1, communities evolve as interactors, having unique IIGEs as potential targets of trait-group selection.The effects of IIGEs on individual fitness feed back to their distribution among communities.According to Hull's replicator-interactor framework, this would lead to selection of beneficial IIGEs, and differential growth of communities (interactors) would cause differential survival and reproduction of those organisms (replicators) most relevant to beneficial community-level interactions.The advantage of switching to the replicator-interactor framework is that it can accommodate IIGEs that coordinate community composition without requiring all populations to have unified fitness gain.The foundation population can cultivate a community where only some of the populations have a fitness gain or even none other than itself. The hierarchical structure of MLS models allows for a variety of evolutionary processes to operate concurrently across levels (55).Indeed, organismal coevolution is expected to occur in MLS1 whenever organism generation times permit mutation-drift-selection dynamics to play out within the lifespan of a community.When coevolution of mutualistic IIGEs does occur within a community (Fig. 1C) and this in turn causes an increase in the frequency of the genetic environment in which the individual genes are favored (Fig. 1A), further evolution of mutualistic IIGEs could be accelerated (54).A side effect of withincommunity coevolution could be the evolution of genetic mechanisms whereby species having mutualistic IIGEs "assemble" more frequently than expected by random.Evolution of such assembly mechanisms implies a more complex version of MSL1, showing assembly bias.The latter can be viewed as an analog to the linkage disequilibrium parameter in classical evolutionary genetics, as it sets the degree to which mutualistic IIGEs might occur in excess of pure blending as depicted in Fig. 1A.However, coevolution does not affect every gene in every species, which might limit the opportunity of otherwise neutral specieslevel assembly mechanisms to hitchhike to fixation in concert with community-level selection for mutualistic IIGEs. 
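One conceivable way to put a number on such assembly bias, in the spirit of the linkage-disequilibrium analogy (this is our illustrative construction, not a statistic from the cited literature), is to compare how often the mutualist species actually co-occur within a collective with the co-occurrence expected under purely random assembly.

```python
# One conceivable index of "assembly bias" (our illustrative construction, not a
# statistic from the cited literature), by analogy with linkage disequilibrium:
# compare how often A, B and C actually co-occur within a collective with the
# co-occurrence expected if members were recruited at random from the pooled frequencies.
import random

random.seed(3)
SPECIES, GROUP_SIZE, N_GROUPS = list("ABCDEF"), 12, 500

def co_occurrence_bias(groups):
    observed = sum({"A", "B", "C"} <= set(g) for g in groups) / len(groups)
    pool = [s for g in groups for s in g]
    expected = 1.0
    for sp in "ABC":
        p_absent = (1 - pool.count(sp) / len(pool)) ** GROUP_SIZE
        expected *= 1 - p_absent
    return observed - expected    # > 0: mutualists assemble together more often than chance

random_groups = [[random.choice(SPECIES) for _ in range(GROUP_SIZE)] for _ in range(N_GROUPS)]
print(round(co_occurrence_bias(random_groups), 3))   # close to 0 under random assembly
```

A value near zero corresponds to the pure blending depicted in Fig. 1A; persistently positive values would indicate that assembly mechanisms bring mutualistic IIGEs together more often than chance.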
Interestingly, the extended phenotype was originally the core framework of community genetics (25, 28), and we suggest that a return to this framework, expanded to include Hull's replicator-interactor formulation, may be a more successful conceptual framing for community and ecosystem genetics. Community-level vertical inheritance would no longer be a necessary condition for the evolution of community-level mutualism (as underpinned by IIGEs). Interactive properties, like IIGEs, become a target of selection when communities that embody them function as interactors, regardless of whether such interactions can be passed intact to future generations of communities. Of course, some degree of vertical inheritance would very likely enhance the effectiveness of selection operating at the level of interactor, but the only necessary condition is that some component of fitness is unique to the community as an interactor. In sum, we suggest that accounts of evolution in a community context are not well served by an exclusive commitment to Lewontin's recipe-based formulation of ENS (i.e., MLS2-thinking). Hull's replicator-interactor framework is more inclusive by admitting interactors as potential targets of selection concurrent with evolutionary processes operating at a variety of other levels. Because the communities of interest here are not expected to make the evolutionary transition to individuality, it works to the advantage of community and ecosystem genetics that differential fitness of either genes or organisms (as replicators) can explain the evolution of community traits (i.e., MLS1-thinking). Data, Materials, and Software Availability. There are no data underlying this work. Fig. 1. Evolution of IIGEs by multilevel selection (MLS) versus organismal coevolution. Circles and ellipse represent multispecies collectives or "communities," and letters (A, B, C …) represent different species. Each letter represents a "dose" of individual organisms belonging to that species, with no necessary implication that each came from the same collective, that only the three indicated species are in the collective or that many species affect the presence of others. Interspecies genetic interactions (IIGEs) can have positive, neutral, or negative effects on individual fitness (depicted in green, black, and red, respectively). (A) MLS1. Mutualistic interactions between organisms of different species provide a collective benefit (e.g., cross-feeding) that manifests as greater growth (size) of the collective. An evolutionary effect is realized in future "generations" of communities through the greater numbers of individuals contributed by larger communities (e.g., those having A + B + C in green). Despite stochastic blending of many species in each "generation," the overall distribution of IIGEs evolves toward greater representation of the mutualistic interactions. (B) MLS2. Here, it remains an advantage for organisms of species A to interact with organisms of species B and C.
Because communities are reproducing as communities, multispecies interactions can be transmitted directly to offspring communities. In this way MLS2 conforms to Lewontin's recipe. However, under MLS2, greater representation of beneficial IIGEs in future generations requires greater community-level reproductive rates. (C) Coevolution. Individual selection and coevolution of mutualistic IIGEs occur within a single, enduring community. Species interact and influence each other's fitness landscape, leading to a sequence of adaptive changes in IIGEs over time (indicated by integers). Because there is no feedback to a distribution of IIGEs across a larger set or population of communities, the IIGEs cannot be the target of selection in this scenario.
Osmoregulants Involved in Osmotic Adjustment for Differential Drought Tolerance in Different Bentgrass Genotypes Compatible solute accumulation regulating osmotic adjustment (OA) is associated with drought tolerance. The objectives of this study were to examine genetic variations in OA among a diverse group of bentgrass (Agrostis sp.) genotypes or lines with differential drought tolerance, and determine major types of organic osmoregulants contributing to OA and accounting for the genetic variations in drought tolerance. A wild type cultivar of creeping bentgrass [Agrostis stolonifera (Penncross)], a transgenic line of creeping bentgrass (SAGIPT41), and four hybrid bentgrass lines [Agrostis capillaris · Agrostis stolonifera (ColxCr14, ColxCr190, ColxCr481, and ColxCr679)] were exposed to drought stress by withholding irrigation for 17 days in growth chambers. Among genotypes, ColxCr14, ColxCr190, and SAGIPT41 showed superior drought tolerance, as manifested by higher turf quality (TQ), leaf relative water content (RWC), and OA than 'Penncross', ColxCr679, and ColxCr481 under drought stress. SAGIPT41 leaves accumulated greater content of soluble sugars (glucose, sucrose, and fructose), proline, glycine betaine (GB), and spermine; ColxCr190 had higher content of soluble sugars and spermidine; and ColxCr14 accumulated more soluble sugars and GB, compared with the three drought-sensitive genotypes. Soluble sugars were predominant contributors to OA, followed by GB and proline, with all three forms of polyamine (PA) as minor contributors in bentgrass genotypes. The osmolytes highly correlated to OA and superior drought tolerance could be used as biomarkers to select for drought-tolerant germplasm of bentgrass and other cool-season turfgrass species. Drought stress damage in plants is characterized by leaf dehydration, and therefore, increasing the capacity of water retention in leaves or improving leaf tolerance to dehydration to maintain cell turgor is critically important for plant survival of drought stress. Dehydration tolerance has been associated with the accumulation of compatible solutes in plant cells, which accounts for OA (Chaves et al., 2003). Osmotically active solutes or osmoregulants for OA regulate either movement of water into cells or reduced water efflux from cells, and thus help to maintain cellular turgor, enabling tissues to sustain growth and metabolic and physiological functions under drought stress (Bohnert and Jensen, 1996; Hare et al., 1998). In addition to the involvement in OA, the accumulation of compatible solutes also plays roles in stabilizing proteins and membranes against desiccation injury, as well as protection against oxidative damage (Arakawa et al., 1991; Hoekstra et al., 2001; Rhodes and Hanson, 1993). The types of osmoregulants reported in OA are diverse, and typically include low molecular weight compounds such as amino acids (e.g., proline), ammonium compounds (e.g., GB and PA), sugars (e.g., fructose and sucrose), and organic acids (e.g., malate), as well as inorganic ions (e.g., potassium and calcium) (Chaves et al., 2003). Improvement in drought tolerance has been positively correlated with OA in leaves of many plant species, including perennial grasses used as turfgrasses (DaCosta and Huang, 2006; Qian and Fry, 1997; White et al., 1992).
Qian and Fry (1997) reported interspecific variations in OA among three warm-season and one cool-season turfgrass species, with OA ranking as 'Prairie' buffalograss (Buchloe dactyloides) = 'Meyer' zoysiagrass (Zoysia japonica) > 'Midlawn' hybrid bermudagrass (Cynodon dactylon · Cynodon transvaalensis) > 'Mustang' tall fescue (Festuca arundinacea). Interspecific variations in OA associated with drought tolerance have also been reported among cool-season turfgrass species, such as greater OA in velvet bentgrass (Agrostis canina) leaves than in creeping bentgrass (DaCosta and Huang, 2006). White et al. (1992) found genotypic variations in OA within a species, such as for tall fescue leaves, which was positively associated with tiller survival of drought stress. Despite knowledge of the diverse genetic variability in OA among turfgrass species and within genotypes of the same species in the degree of OA, the specific types of osmoregulants accounting for the genetic variations in OA and associated with differential drought tolerance are not well documented.
Understanding major osmoregulants contributing to OA may identify potential biomarkers or candidate genes to select for germplasm with high capacity of OA and improved drought tolerance. Bentgrass species widely used on golf courses, including creeping bentgrass and colonial bentgrass, vary genetically in drought tolerance. Some hybrid lines of colonial bentgrass · creeping bentgrass exhibited superior drought tolerance relative to traditional creeping bentgrass genotypes, such as 'Penncross' (Merewitz et al., 2012). Transgenic creeping bentgrass over-expressing a gene controlling cytokinin synthesis was found to exhibit improvement in OA and drought tolerance compared with the wild type 'Penncross' (Merewitz et al., 2011). However, which major osmoregulants account for the genetic variations in OA and associated drought tolerance among the diverse germplasm of bentgrass is yet to be determined. The objectives of this study were to examine genetic variations in OA among a diverse group of bentgrass genotypes or lines with differential drought tolerance, and determine major types of organic osmoregulants contributing to OA and accounting for the genetic variations in drought tolerance. Six genotypes of bentgrass that exhibited diverse genetic variations in drought tolerance levels were examined in the study, including a wild type cultivar of creeping bentgrass (Penncross), a transgenic line of creeping bentgrass overexpressing a gene (ipt, encoding isopentenyl transferase) controlling cytokinin synthesis (SAGIPT41), and four hybrid lines of colonial bentgrass · creeping bentgrass (ColxCr14, ColxCr190, ColxCr481, and ColxCr679). SAGIPT41 plants have been previously reported to exhibit superior drought tolerance to 'Penncross' (Merewitz et al., 2011), but their relative drought tolerance to the hybrid genotypes is unknown. By using genotypes with a wide range of drought tolerance levels, specific osmoregulants associated with improved OA could be identified. Materials and Methods PLANT MATERIALS AND GROWTH CONDITIONS. Sods of SAGIPT41, four hybrid lines of colonial bentgrass · creeping bentgrass (ColxCr14, ColxCr190, ColxCr481, and ColxCr679), and 'Penncross' were transplanted from field plots into polyvinyl containers (40-cm wide, 80-cm long, and 40-cm deep) in a greenhouse at Rutgers University (New Brunswick, NJ) on 3 Mar. 2013. SAGIPT41 and the four hybrid lines were selected at Rutgers University and 'Penncross' was developed by The Pennsylvania State University. Each container was separated into 12 compartments with polyethylene foam plates, which were filled with fritted clay (Turface Green Grade; Profile Products, Buffalo Grove, IL). Two sets of plants of each of the six genotypes were randomly planted in the 12 compartments within each container. Holes (≈1.5 mm diameter) were drilled in the polyethylene dividers, so water could move freely across compartments within each container while roots of different genotypes were prevented from tangling together. This setup ensured that all genotypes were exposed to an equivalent level of soil water content during drought stress, as water could move or diffuse across different compartments.
Plants were maintained in a greenhouse under 10 to 12 h natural light conditions and average temperatures of 21/13°C (day/night) for 3 months during Mar. to May 2013 and then moved to a walk-in growth chamber (Environmental Growth Chamber, Chargrin Falls, OH) where treatments were imposed on 6 May 2013. The growth chamber was maintained at 20/15°C (day/night), 70% relative humidity, 12-h photoperiod, and photosynthetically active radiation of 650 mmolÁm -2 Ás -1 at canopy height. Plants in the containers were watered three times per week to maintain soil moisture at field capacity until drainage occurred from the bottom of the containers, and fertilized weekly with half-strength Hoagland's solution (Hoagland and Arnon, 1950) before exposure to drought. Grasses were cut at 6 cm height every 2 d with scissors, with clippings removed. TREATMENTS AND EXPERIMENTAL DESIGN. The six bentgrass genotypes were exposed to two soil water treatments: 1) wellwatered control: plants were watered every other day to soil reaching field capacity (soil volumetric water content at 28%); and 2) drought stress: irrigation was withheld for 17 d until soil volumetric water content declined to 10%, which was monitored using the time domain reflectometer (Trase; Soil Moisture Equipment, Santa Cruz, CA). Each water treatment was replicated in four containers. Each container included two sets (or subsamples) of plants for each genotype. The genotype and water treatments were arranged as a split-plot design with water treatments as main plots and genotype as subplots. Containers with well-watered or drought-stressed plants were rotated in the growth chamber to minimize variability due to the environment. Statistical significance of data were tested using the analysis of variance procedure (SAS version 9.0; SAS Institute, Cary, NC). Differences between treatment means and genotype were separated by Fisher's protected least significance difference test at the 0.05 P level. GROWTH AND PHYSIOLOGICAL EVALUATION OF GENETIC VARIATION IN DROUGHT TOLERANCE. Several commonly used parameters were used to evaluate genetic variations in growth and physiological response to drought stress, including visual evaluation of TQ, leaf RWC, electrolyte leakage (EL), and leaf photochemical efficiency. Measurements were taken during the month of June and July. TQ was visually rated on a scale of 1 to 9 with a rating of 1 being a completely desiccated brown turf canopy and a rating of 9 representing healthy plants with dark green, turgid leaf blades, and a dense turf canopy (Turgeon, 2008). A rating of 6 was considered the minimal acceptable TQ level. RWC was calculated using the formula: 100 · [(FW -DW)/ (TW -DW)] ( Barrs and Weatherley, 1962), where FW is the leaf fresh weight, TW is the leaf turgid weight, and DW is the leaf dry weight after oven-drying leaf samples for 72 h at 80°C. TW was determined as weight of fully turgid leaves after soaking leaves in distilled water in the refrigerator for 24 h. Cellular membrane stability was evaluated based on EL (Blum and Ebercon, 1981). Leaves (%0.15 g) were cut to 0.5-cmlong segments, soaked in 30 mL of deionized water, and placed on a conical shaker. Following 12-h incubation, initial level of EL (C i ) was measured using a conductance meter (model 132; Yellow Springs Instrument Co., Yellow Springs, OH). 
Leaf samples were then killed at 120°C for 15 min in an autoclave, incubated in deionized water on the conical shaker for 12 h, and final level of conductance of the incubation solution (C max ) was measured. Leaf EL was calculated as (C i /C max ) · 100 (Blum and Ebercon, 1981). OA ANALYSIS. OA (Ѱ p100 ) was determined according to the rehydration method, in which Ѱ p100 of leaves was determined after soaking in water for full rehydration (Blum, 1989;Blum and Sullivan, 1986). Turgid leaf samples were frozen in liquid nitrogen and subsequently stored at -20°C until analysis of leaf Ѱ s at full turgor (Ѱ p100 ). Frozen leaf samples were thawed and cell sap was pressed from leaves using a laboratory press (Fred S. Carver, Wabash, IN), which was subsequently analyzed for osmolality [C (millimoles per kilogram)] using a vapor pressure osmometer (Vapro model 5520; Wescor, Logan, UT). Osmolality of cell sap was converted from millimoles per kilogram to Ѱ p [megapascals (MPa)] using the formula: MPa = -C · 2.58 · 10 -3 . OA was determined as the difference in Ѱ p100 between well-watered and drought-stressed plants. QUANTIFICATION OF SOLUTE CONTENT. Approximately 1-2 g fresh leaves from each replicate for each treatment were sampled at 0 d (wellwatered conditions) and 13 d of drought stress. Samples were frozen in liquid nitrogen for the analysis of content of major osmoregulants, including soluble sugars, proline, PAs [putrescine (Put), spermidine (Spd), and spermine (Spm)], and GB. Soluble sugars were determined using the phenol-sulfuric acid method (Buysse and Merckx, 1993). Soluble sugars were extracted from leaves (%1.0 g FW). An aliquant of 1-mL leaf extractant was added to 1 mL phenol solution and mixed well, in which 5 mL concentrated sulfuric acid (95%) was added. The reaction solution was incubated in water bath at 30°C for 30 min and then was cool down for 15 min. The absorbance of the reaction solution was read with a spectrophotometer (Spectronic Instruments, New York, NY) at 490 nm. A calibration curve with D-glucose was done as a standard. Proline content was determined using the ninhydrin method (Magn e and Larher, 1992). Fresh leaf samples (%300 mg) were ground in liquid nitrogen and then a 5 mL sulfosalicylic acid was added into the ground powders; 2 mL acetic acid and 2 mL ninhydrin were added to the extractant in test tubes. The test tubes with extractants were placed in a water bath with boiling water for 45 min and then cooled. The final reaction was completed by adding 4 mL toluene in the extractants. The absorbance of the resulting organic layer was measured with a spectrophotometer at 520 nm. A calibration curve was made with L-Proline as a standard. The content of PAs (Put, Spd, and Spm) was determined according to the method of Liu et al. (2002). Dry leaf tissue power (0.1 g) was extracted in 2 mL precooled perchloric acid, ice-bath for 60 min and centrifuged at 15,000 g n for 30 min. The supernatant was transferred into a centrifugal tube and stored at -20°C. PAs contained in the supernatant were subjected to a benzoylation reaction in the alkaline medium. Benzoyl PA derivatives were extracted by diethyl ether. Ether fraction was evaporated to dryness and dissolved in methanol. Highperformance liquid chromatography (HPLC) was performed on the liquid chromatograph (Thermo Scientific, Waltham, MA) using a 5-mm, 250 · 4.6-mm column [Diamonsil C 18 (2); Alltech Associates, Deerfield, IL]. 
A 10-µL aliquot of the methanol solution of benzoyl PAs was injected into an autosampler (Surveyor, Thermo Scientific) every 35 min. Samples were eluted from the column with 70% methanol with the temperature maintained at 30°C, and the flow rate was 0.7 mL·min-1. PA peaks were detected with an ultraviolet detector at 230 nm. The applied standards were Put, Spd, and Spm. GB was determined using the method described by Wang et al. (2010). Dry leaf tissue powder (0.1 g) was extracted in 12.5 mL water, shaken for 30 min, and then centrifuged at 2270 gn for 5 min. The supernatant was filtered and the filtrate was transferred to solid-phase cartridges [150 mg/6 mL (Poly-Sery MCX; CNW Technologies, Dusseldorf, Germany)]. The extraction cartridges were then rinsed with methanol/water (85/15 v/v) and methanol. Elution was finished by rinsing twice with a mixture of ammonia water/methanol (5:95 v/v). The eluent was condensed to dryness and diluted with acetonitrile/water (50% v/v), followed by passing through a 0.45-µm membrane (Merck Millipore, Darmstadt, Germany) for further analysis by HPLC. A solution of acetonitrile/water (50% v/v) was the mobile phase for HPLC analyses. GB was analyzed and quantified by HPLC using an Atlantis HILIC Silica column (4.6 · 150 mm, filled with 5-µm particle diameter; Waters Corp., Milford, MA). The peak areas were integrated and compared with a standard curve constructed with a GB standard. The contribution of each solute to OA, calculated as osmolarity [C (millimoles per kilogram)], was expressed as a percentage of Ѱp100 measured from the same sample, where Ѱp100 = -0.1013 × R × T × iC (R is the gas constant, with the value 0.08314; T is the temperature in degrees Kelvin, T = 273 + room temperature; iC is the value shown on the Ѱs meter). GENOTYPIC VARIATIONS IN PHYSIOLOGICAL RESPONSES TO DROUGHT STRESS. TQ was visually rated to evaluate overall turf performance. TQ was maintained at 7.2 or more throughout the treatment period in all genotypes under well-watered conditions (Fig. 1A). Under drought stress, TQ exhibited a steady decline, and the rate of decline varied among genotypes (Fig. 1B). By 13 d of drought treatment, TQ of ColxCr481, ColxCr679, and 'Penncross' declined to below the minimum acceptable level (6.0) whereas ColxCr14, SAGIPT41, and ColxCr190 maintained TQ above the minimum acceptable level. During the 17-d drought period, ColxCr14, SAGIPT41, and ColxCr190 had higher TQ than that of ColxCr481, ColxCr679, and 'Penncross'. Leaf RWC was determined to evaluate leaf hydration status. All well-watered plants maintained RWC above 90% throughout the treatment period (Fig. 2A). At 13 and 17 d of drought stress, significant decreases in RWC were observed in all genotypes. During prolonged periods of drought (13 and 17 d), ColxCr14, ColxCr190, and SAGIPT41 showed significantly higher RWC than ColxCr679, 'Penncross', and ColxCr481 (Fig. 2B). Leaf EL was measured to estimate cell membrane stability. All well-watered plants maintained low EL (below 24%) throughout the treatment period (Fig. 3A). Leaf EL increased with drought stress in all genotypes, but genotypes did not exhibit significant differences in EL during most of the drought period (Fig. 3B). GENOTYPIC VARIATIONS IN OSMOTIC ADJUSTMENT AND CORRELATION TO PHYSIOLOGICAL TRAITS. OA level in leaves increased during drought stress in all genotypes (Fig. 4). The OA was significantly greater in ColxCr14, ColxCr190, and SAGIPT41 than ColxCr679, 'Penncross', and ColxCr481 during the entire drought period (Fig. 4).
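The arithmetic behind these OA values follows directly from the conversions given in the methods above. In the sketch below, only the conversion factor (−2.58 × 10⁻³ MPa per mmol·kg⁻¹) and the definition of OA are taken from the text; the osmolality readings are invented example values, and expressing a solute's osmolar concentration as a share of total osmolality is used as a rough stand-in for the paper's "percentage of Ѱp100" (the two are nearly equivalent because the conversions are linear in concentration).

```python
# Sketch of the OA arithmetic described in the methods; only the conversion factor
# (-2.58e-3 MPa per mmol/kg) and the definition of OA come from the text above,
# while the osmolality readings are invented example values, not data from this study.
def osmotic_potential_mpa(osmolality_mmol_per_kg):
    """Convert cell-sap osmolality (mmol/kg) to osmotic potential (MPa)."""
    return -osmolality_mmol_per_kg * 2.58e-3

def osmotic_adjustment(osmolality_well_watered, osmolality_drought):
    """OA = difference in full-turgor osmotic potential, well-watered minus drought."""
    return (osmotic_potential_mpa(osmolality_well_watered)
            - osmotic_potential_mpa(osmolality_drought))

def solute_contribution_percent(solute_osmolarity, total_osmolality):
    """Share of total osmolality attributable to one solute, in percent (a rough
    stand-in for the paper's percentage of the full-turgor osmotic potential)."""
    return 100.0 * solute_osmolarity / total_osmolality

print(f"OA = {osmotic_adjustment(380.0, 620.0):.2f} MPa")          # ~0.62 MPa
print(f"soluble sugars: {solute_contribution_percent(310.0, 620.0):.0f} % of osmolality")
```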
Correlation analysis between OA and other physiological parameters was conducted to determine the relationship of OA to TQ, RWC, and EL. Leaf OA was positively correlated to TQ and RWC, and negatively correlated to EL. TQ, RWC, and EL were also significantly correlated with each other (Table 1). CONTRIBUTION OF DIFFERENT OSMOLYTES TO OSMOTIC ADJUSTMENT. Content of osmolytes, including soluble sugars (glucose, sucrose, and fructose), proline, GB, and PA (Put, Spm, and Spd) were measured in leaves of well-watered plants and drought-stressed plants at 13 d of treatment when genotypic differences in physiological parameters were most pronounced during drought periods. Different genotypes exhibited variations in the content of different osmolytes under either wellwatered or drought conditions (Figs. 5-9). For soluble sugar content under well-watered conditions, 'Penncross' had highest glucose, sucrose, fructose, and total soluble sugar content while other genotypes were not significantly different from each other for glucose and fructose; sucrose and total soluble sugar content was lower in ColxCr679 than 'Penncross' but lower than other genotypes (Fig. 5). Under drought stress, SAGIPT41 had greatest content of glucose, sucrose, fructose, and total soluble sugar content, which was followed by ColxCr14 and ColxCr190, whereas ColxCr679 had the lowest content of those sugars. The content of glucose, sucrose, and fructose, as well as the total content of soluble sugars in ColxCr481 and 'Penncross' was significantly lower than in SAGIPT41, ColxCr14, or ColxCr190, but greater than that in ColxCr679 under drought stress (Fig. 6). Proline content for well-watered plants was ranked as SAGIPT41, ColxCr190, ColxCr481, ColxCr679, ColxCr14, 'Penncross' = SAGIPT41 > ColxCr190 > ColxCr481 = ColxCr679 = ColxCr14 (Fig. 7A). Under drought stress, SAGIPT41 had highest proline content whereas ColxCr481 had the lowest proline content and others were intermediate (Fig. 7B). GB content was lowest in 'Penncross', intermediate in ColxCr679, and no significant differences were among other genotypes under well-watered conditions (Fig. 10A). Under drought stress, GB content was ranked in the order of ColxCr14 = SAGIPT41 > ColxCr481 > ColxCr190 = 'Penncross' > ColxCr679. Table 2 showed the contribution of each solute to OA under drought stress. Soluble sugars were the predominant osmolytes in all genotypes and lines, contributing to 43.62% to 59.32% of OA, which was followed by GB with 4.91% to 8.46% contribution and proline with 4.06% to 4.81% contribution. Three forms of PA are minor contributors to OA, with less than 1% contribution in all genotypes. Discussion The maintenance of adequate leaf water status is important for proper physiological and biochemical functioning. Plants that can maintain adequate RWC for a longer period during drought stress will have the greatest likelihood of continued metabolic activities and survival. Maintaining cell membrane stability is also crucial for proper cellular functions, and EL has been widely used to estimate cell membrane stability (Blum and Ebercon, 1981;Rachmilevitch et al., 2006). 
The analysis of TQ, RWC, and EL demonstrated genotypic variations in drought tolerance among a transgenic line (SAGIPT41) of creeping bentgrass, a wide type of creeping bentgrass ('Penncross'), and creeping bentgrass · colonial bentgrass hybrids (ColxCr14, ColxCr190, ColxCr481, and ColxCr679), with ColxCr14, SAGIPT41, and ColxCr190 exhibiting superior drought tolerance relative to ColxCr481, ColxCr679, and 'Penncross'. Previous studies with various physiological and biochemical analysis also reported improved drought tolerance of the transgenic creeping bentgrass SAGIPT41 compared with 'Penncross' (Merewitz et al., 2011(Merewitz et al., , 2012. OA has been regarded as a drought tolerance mechanism in many plants (Bohnert and Jensen, 1996;LaRosa et al., 1987), including turfgrasses (DaCosta andWhite et al., 1992). Increasing OA facilitates the maintenance of cell turgor under conditions of limited water availability. Our study found that drought-tolerant genotypes ( C o l x C r 1 4 , S A G I P T 4 1 , a n d ColxCr190) had higher OA than drought-sensitive ones (ColxCr481, ColxCr679, and 'Penncross'). Correlation analyses between OA and physiological traits demonstrated that OA was positively correlated to TQ and RWC and negatively correlated to EL. These results indicated that greater level of OA could at least partially contribute to the maintenance of better TQ and water hydration levels, and cell membrane stability under drought stress, although many physiological factors, such as photosynthesis also play essential roles in regulating plant drought tolerance (Chaves et al., 2003). OA involves the accumulation of solutes in cells that lower Ѱ s , facilitating water retention or maintaining cell turgor (Boyer et al., 2008). Major types of osmolytes include soluble sugars, proline, GB, and PA (Bouchereau et al., 1999;Chaves et al., 2003;Gomes et al., 2010;Kumar et al., 1997;Martinez et al., 2004;Travert et al., 1997;Yancey et al., 1982). Soluble sugars act as osmoregulants and also play roles in stabilizing membrane structures, contributing to plant tolerance to drought stress (Bartels and Sunkar, 2005;Spollen and Nelson, 1994). Proline typically accumulates in the cytosol, and it contributes to the cytoplasmic OA in response to drought or salinity stress (Ashraf and Foolad, 2007). Many studies showed that proline was positively correlated with OA during drought stress in many other plant species (Keyvan, 2010;Quan et al., 2010;Xiong et al., 2012). GB is synthesized in response to saline or drought stress in some plant species, which regulates Ѱ s and facilitate cellular turgor maintenance (Munns, 2002). Put, Spd, and Spm are the major types of PA in plants, which play roles in OA, membrane stability, free radical scavenging and regulation of stomatal movements (Liu et al., 2008;Nayyar and Chander, 2004;Sanchez et al., 2005). In our present study, the predominant forms of osmolytes in bentgrass was soluble sugars, which contributed to 43.62% to 59.32% of OA, followed by GB and proline whereas PA were relatively minor contributors to OA with less than 1% contribution. Differential OA among genotypes could be due to the accumulation of specific or unique types of solutes in different genotypes. DaCosta and Huang (2006) reported that creeping bentgrass plants osmotically adjusted to dehydration stress by accumulating soluble carbohydrates. 
Jiang and Huang (2001) found kentucky bluegrass of improved drought tolerance by drought preconditioning had 21% to 44% higher soluble sugar content in leaves than nonpreconditioned, drought-sensitive plants. Fu et al. (2010) found that decreases in Ѱ s was accompanied by higher sucrose levels, which were the result of the increased level of sucrose phosphate synthase and sucrose synthase activity and a decline in acid invertase activity in response to drought stress. In this study, increased OA associated with better drought tolerance could be attributed to greater accumulation of soluble sugars, proline, Spm, and GB in SAGIPT41, soluble sugars and Spd in ColxCr190, and soluble sugars and GB in ColxCr14 under drought stress, compared with the drought-sensitive genotypes, ColxCr481, ColxCr679, and 'Penncross'. The content of all solutes for plants under well-watered conditions did not exhibit consistent patterns of higher values in the drought-tolerant genotypes (SAGIPT41, ColxCr190, and ColxCr14) compared with the drought-sensitive genotypes (ColxCr481, ColxCr679, and 'Penncross'), suggesting that increased OA under drought stress was not associated with genotypic differences in solute accumulation of plants under well-watered conditions, but due to increased accumulation of solutes in response to drought stress. Although PA was found to be a minor contributor to OA in bentgrass species in this study, the significant positive correlation of Spm with OA suggested that Spm is a major form of PA associated with OA and improved drought tolerance in bentgrass species. It is worthy of noting that the sum of the contribution of osmolytes examined in this study was less than 100%. That might be due to the accumulation of other solutes in the cell contributing to OA, such as polyols (e.g., mannitol), ions (e.g., potassium), and organic acids (e.g., malate) (Chaves et al., 2003;Hare et al., 1998) which were not detected in this study, and further research is needed to examine the undetected metabolites that may also contributes to OA in bentgrass. In summary, physiological analysis of genotypic variations in drought tolerance demonstrated the genetic potential for developing drought-tolerant bentgrass by selecting for OA. To our knowledge, this is the first report of different types or specific osmolytes involved in regulating OA under drought stress in turfgrasses differing in drought tolerance, with greater accumulation of soluble sugars, proline, GB, and Spm in SAGIPT41, soluble sugars and Spd in ColxCr190, and soluble sugars and GB in ColxCr14, compared with the droughtsensitive genotypes, ColxCr481, ColxCr679, and 'Penncross', as shown by the physiological analysis in this study. This study also first found that soluble sugars were major contributors to OA, followed by GB and proline, with all three forms of PA as minor contributors in bentgrass genotypes. The osmolytes highly correlated to OA and superior drought tolerance could be used as biomarkers to select for drought-tolerant germplasm of bentgrass and other cool-season turfgrass species. .32 a z Means followed by the same letters within a line indicate no significant differences between treatments base on least significant difference test at P = 0.01.
Miniature nanoparticle sensors for exposure measurement and TEM sampling Nanoparticles in workplaces may pose a threat to the health of the workers involved. With the general boom in nanotechnology, an increasing number of workers is potentially exposed, and therefore a comprehensive risk management with respect to nanoparticles appears necessary. One (of many) components of such a risk management is the measurement of personal exposure. Traditional nanoparticle detectors are often cumbersome to use, large, heavy and expensive. We have developed small, reliable and easy to use devices that can be used for routine personal exposure measurement in workplaces. Introduction Exposure measurements should be a part of a general risk-management strategy for workplaces where nanoparticles are produced or processed. Since the prime exposure route is over inhalation, measuring airborne nanoparticles in such workplaces seems like a sensible idea, and it can be done in different ways: (I) measurements by an industrial hygienist (II) with stationary equipment that is always present at a sensible location (similar to a fire detector), or (III) as personal exposure measurement with small instruments carried by the workers, similar to dosimeters in the nuclear industry. Ideally, for personal monitoring, the instruments should be small, lightweight, affordable, simple to use and reliable, and provide an online measurement so that an alarm can be raised if a threshold value is exceeded. Unfortunately, most current particle detectors have severe drawbacks for this type of application: Optical detectors are generally simple to use, and low-maintenance, but are not sensitive to nanoparticles -sub-100nm particles cannot be detected. Condensation particle counters that first grow the particles to larger sizes before an optical detection are very precise devices, but need a working fluid that needs regular replenishment, and they need to be held level to prevent internal spilling of the working fluid. They are thus reasonable for occupational hygiene professionals, but cannot be used for measurements of types II and III. Filter samples can be taken, but are not real-time, and need a lot of effort after the sampling. Fortunately, there is one class of particle instruments that is nano-sensitive, reliable, and real-time -electrical particle detectors. This class of instrument has been known for a long time, and already exists in the market. Our new instruments are similar in performance to existing electrical particle detectors, but much smaller and easier to use. Standard electrical detection by unipolar charging of aerosols Unipolar charging of nanoparticles is at the basis of many current aerosol instruments (e.g. TSI FMPS, Cambustion DMS500, Dekati ELPI, Pegasor PPS etc). The simplest possible instrument that can be built based on unipolar charging is known as a diffusion charger (DC). A scheme of such an instrument is shown in figure 1. The diffusion charger consists of 3 basic elements: the unipolar charger (usually a corona charger), where ions are generated and mixed with the particles to charge them, the ion trap, where excess ions are removed, and finally a filter, where all particles are captured and the current deposited by the particles onto the filter is detected with a sensitive electrometer, which must be able to measure currents in the order of fA. This type of instrument has been commercially available for some time (Matter LQ1-DC, Ecochem 2000-DC, TSI EAD, TSI NSAM, TSI Aerosotrak 9000). 
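To get a feeling for why the electrometer must resolve femtoampere currents, a one-line calculation is enough; the particle arrival rate below is an arbitrary example, and the assumption of one elementary charge per particle is a simplification.

```python
# A back-of-the-envelope illustration (ours, not from the paper) of why the filter
# electrometer must resolve femtoampere currents: a few thousand singly charged
# particles arriving per second deposit only about one femtoampere.
ELEMENTARY_CHARGE = 1.602e-19   # coulombs

def filter_current(particles_per_second, mean_charges_per_particle=1.0):
    """Current (A) deposited on the filter by the arriving charged particles."""
    return particles_per_second * mean_charges_per_particle * ELEMENTARY_CHARGE

print(f"{filter_current(6_000):.2e} A")   # ~9.6e-16 A, i.e. roughly 1 fA
```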
It measures the final charge state of the aerosol after the unipolar diffusion charger. The particles take up a charge that can well be described by a power law, q ∝ d^x, where x is usually in the range of 1.1 - 1.2. The signal of the DC is thus nearly proportional to an "aerosol length concentration" (with units e.g. mm/cm³), and this is true over a very wide size range, from about 10 nm to 10 µm. Diffusion charging is unspecific, i.e. it hardly depends on the particle material, so all particles are detected with this instrument. An interesting interpretation of the DC signal was given by Wilson et al. [1]: for particles in a more limited size range (about 20 nm - 350 nm), the DC signal is approximately proportional to the lung-deposited surface area (LDSA) concentration of nanoparticles in the human lung, expressed in units of µm²/cm³, indicating how much particle surface area in µm² is deposited in the lung for every cm³ of inhaled air. Since many toxicological studies have shown that particle surface area usually correlates better than particle mass or particle number with health effects, the LDSA concentration is a potentially highly interesting particle metric. In summary, the DC is thus an interesting instrument, because it is simple (and thus robust and potentially low-cost), measures a probably relevant quantity, operates over a very large size range and needs no consumables. Electrical detection by induced currents We have developed a novel technique to measure LDSA by induced currents, rather than particle-deposited currents on a filter, and implemented it in an instrument called the partector. It works as follows: The only visible difference to the DC is the lack of the particle filter. It is replaced by an empty Faraday cage, which is connected to an electrometer, and thus virtually grounded. Since a grounded Faraday cage will not allow any electric field lines to escape, the total charge inside the cage and on the cage must be zero by the law of Gauss. Therefore, by measuring the charge on the cage, an indirect measurement of the charge in the cage is made. However, in our setup we are only measuring the current flowing to/from the cage, i.e. the time derivative of the charge, dQ/dt. Thus, a constant charge in the cage is not detectable. In order to always have a measurable signal, a temporal variation of the charge must be introduced. We do this by pulsing the unipolar charger, which turns on and off at a rate of 0.5 Hz. In this manner, we can detect an oscillating signal at the electrometer, whose amplitude is proportional to the charge transfer to the aerosol (see Fig. 3). This approach has some important differences to the standard DC approach: • It measures the charge transferred to the aerosol in the unipolar charger, and not the final charge state. In practice, this is mostly identical, since aerosols are usually neutral. However, if a highly charged aerosol is measured, this difference will be clearly visible. In particular, negatively pre-charged aerosols will give a larger signal than neutral aerosols, while positively pre-charged aerosols will give a smaller signal (since the unipolar charger uses positive charging). • An amplitude of an AC signal is measured rather than a DC signal. This is much more robust, since all electrometers exhibit some zero offset that may drift with time and/or ambient conditions (such as temperature and humidity).
In traditional electrometer-based aerosol instruments, warmup periods and regular zero checks are necessary, which are not needed here. • A better monitoring of the instrument health status is possible in this version: (1) the electrometer zero offset is constantly monitored, because it is just the period-average of the electrometer signal. Although a drift or high offset in itself does not influence the measurement, it is an indicator that something bad is happening to the instrument. The firmware can thus display a service warning while the instrument is still operational. (2) In the unipolar charger, the charging current reaching the ion trap is measured, and the high voltage is regulated for constant current. It is possible that a leak current flows to this counterelectrode via insulator due to water condensation or, over long times, contamination with particles. In traditional instruments, such leak currents are indistinguishable from real currents, whereas here, the charger is in an off-state half of the time, where we can monitor if the charging current is really 0 as it should be. • Service intervals are longer. Since no particles are collected nominally, there is no need for cleaning or filter exchanges. Of course, particles are always deposited by diffusion and impaction in any instrument, so some service is required at some point. Nevertheless, service intervals are much longer than in previous instruments we have built. • Finally, because this is a non-contact method of aerosol detection, the particles are still available after the measurement in a charged state, and can be further manipulated. We have built a version of the instrument where the particles can be deposited on a TEM grid for further analysis. This is described in section 4. Overall, the Faraday cage variant of a DC seems to have more advantages to us than drawbacks. We used this principle to build an instrument designed for occupational hygiene / workplace safety, i.e. our goal was to build a small instrument that would be simple to use. The partector, has the following specifications: • Small (13.4 x 7.8 x 2.9 cm) • Size range: 10nm -10µm; however, the interpretation as LDSA is only valid in a more limited size range of about 20-350nm. Nevertheless the instrument can be used to measure larger particles. Summing up, the partector has a very wide detection range both in terms of particle concentration and particle diameter, making it suitable even for workplaces with very high concentrations of nanoparticles (e.g. welding). The electrical charging principle is very sensitive even to nanoparticles, which cannot be detected with optical means. TEM sampling The partector can serve as a simple instrument in occupational hygiene to easily make a quick survey of nanoparticle concentrations. However, like all instruments measuring physical parameters (mass, number, size distribution etc), it cannot distinguish between different types of particles -whether they are harmless (e.g. salt particles) or potentially dangerous (e.g. metal particles, fibers), they are all seen by the instrument. Furthermore, as always in occupational hygiene, there is the general difficulty of distinguishing between the always present background aerosol and nanoparticles released by a process in the workplace. In such cases, a deeper analysis of the particles is necessary. 
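Because the charger is pulsed at 0.5 Hz, the partector evaluates the amplitude of an oscillating electrometer signal rather than a DC level, and the period-average of the same trace doubles as a zero-offset health check. A minimal demodulation sketch is shown below; the sampling rate, noise level and the square-wave stand-in for the induced-current waveform are simplifying assumptions for illustration only:

```python
import numpy as np

fs = 100.0     # electrometer sampling rate in Hz (assumed)
f_mod = 0.5    # charger modulation frequency given in the text
t = np.arange(0.0, 20.0, 1.0 / fs)

# Synthetic electrometer trace: a zero-mean square wave at the modulation
# frequency stands in for the particle-induced current (a simplification of
# the true induced-current waveform), plus a drifting zero offset and noise.
true_amplitude = 5.0                                   # fA, assumed
particle_part = 0.5 * true_amplitude * np.sign(np.sin(2 * np.pi * f_mod * t))
offset = 2.0 + 0.05 * t                                # drifting offset, assumed
trace = particle_part + offset + np.random.normal(0.0, 0.2, t.size)

# Demodulate: average the two half-periods of every modulation cycle
period = int(fs / f_mod)
cycles = trace[: (t.size // period) * period].reshape(-1, period)
first_half = cycles[:, : period // 2].mean(axis=1)
second_half = cycles[:, period // 2 :].mean(axis=1)

amplitude = (first_half - second_half).mean()   # particle-related AC amplitude
zero_offset = cycles.mean()                     # period average ~ offset monitor

print(f"Recovered amplitude: {amplitude:.2f} fA (true {true_amplitude} fA)")
print(f"Monitored zero offset: {zero_offset:.2f} fA")
```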
The transmission electron microscope (TEM) is the most powerful analytical technique for analysing nanoparticles; in particular for morphology and single-particle chemical analysis. We thus designed a second instrument (partector TEM) which adds a sampling stage for TEM probes. Since the particles passing through the partector are already charged, it is natural to use electrostatic deposition to collect the particles on the TEM grid. The partector TEM contains a small insert that holds the TEM grid, and a strong electric field can be applied to deposit the particles on the grid. Since TEM grids are notoriously difficult to handle due to their small size, the entire insert is exchanged at once when a new sample needs to be taken -the instrument comes with 6 inserts which can be prepared with TEM grids before the actual sampling. One of the biggest challenges in TEM sampling is to produce samples with optimal coverage. If too many particles are sampled, they will form agglomerates on the grid, and it remains unclear whether they were agglomerated in the gas phase. If only very few particles are sampled, then the sample is not very representative. For traditional methods to collect TEM samples (on holey grids by flow-through filtering, by thermophoretic or electrostatic deposition), it is therefore necessary to either take multiple samples with different sampling time, or to estimate the necessary sampling time from a concentration measurement made with a second instrument. The partector TEM is unique, because it can directly estimate the area of the grid covered by nanoparticles by integrating the electrometer signal. This works because the time-integrated signal is approximately proportional to the area coverage of the grid: for the area A covered with N nanoparticles of diameter d we can write: Where n is the nanoparticle concentration in the air, η is the deposition efficiency, and t is the sampling time. Because smaller particles have a higher electric mobility after diffusion charging than larger particles, the deposition efficiency is approximately proportional to d -1 , and thus the area covered is proportional to the instrument signal integrated over time. This calculation once again breaks down for large particles (d >~ 300nm) where the mobility becomes more or less constant, so large particles will lead to a higher coverage of the grid. Nevertheless, especially for particles in the nano size range, integrating the LDSA signal over time gives an excellent estimate for when to stop sampling. The partector TEM is thus a very interesting instrument, as it allows an occupational hygienist to first make a quick survey of nanoparticle concentrations in a workplace, and then, if there is some concern about the levels or the types of particle that might be produced, to take a sample with a single push of a button. TEM analysis is expensive due to the high cost of the instrument and the skilled operator involved, so, although the partector TEM is much more expensive than simple filter samplers, it can easily pay off through decreased operating costs in the TEM analysis. Applications and Conclusions We have developed a new method of aerosol detection, by induced currents. This method is sensitive to nanoparticles, and non-collecting, thus leading to long service intervals. As demonstrated, it can be used to create miniature nanoparticle detectors that are very easy to use. 
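Relating back to the TEM sampling described above: since the deposition efficiency scales roughly as 1/d while the charge-based signal per particle scales roughly as d, the covered grid area is approximately proportional to the time-integrated electrometer signal. The sketch below shows how one might integrate the live signal and stop sampling at a target coverage; the proportionality constant and the target coverage are assumptions, not partector TEM calibration data:

```python
def sample_until_coverage(signal_stream, dt, k_coverage, target_fraction):
    """Integrate the LDSA-like signal over time and stop at a target grid coverage.

    signal_stream   -- iterable of instantaneous signal values (e.g. um^2/cm^3)
    dt              -- time step between readings in seconds
    k_coverage      -- assumed proportionality constant (covered fraction per signal*s)
    target_fraction -- desired fraction of the grid covered by particles
    """
    integrated = 0.0
    elapsed = 0.0
    for value in signal_stream:
        integrated += value * dt
        elapsed += dt
        if k_coverage * integrated >= target_fraction:
            break
    return elapsed, k_coverage * integrated

# Example: a constant reading of 40 (arbitrary LDSA-like units), 1 s readings,
# a placeholder constant of 1e-4 per unit*s and a 5 % target coverage.
readings = iter([40.0] * 10_000)
t_stop, coverage = sample_until_coverage(readings, dt=1.0, k_coverage=1e-4,
                                          target_fraction=0.05)
print(f"Stop sampling after {t_stop:.0f} s at ~{coverage:.1%} coverage")
```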
These instruments can, for example, be used for personal exposure monitoring in the workplace but also, thanks to the long service intervals, for ambient monitoring, where continuous (24/7) operation for about one year appears feasible. Another application is to combine the partector with a CPC: the ratio of the two instrument signals is then proportional to the average particle diameter over a very wide size range, allowing crude real-time (1 s) particle sizing from 10 nm to 10 µm [2].
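Since the electrical signal scales roughly with the sum of particle diameters while a CPC counts particles, the ratio of the two indeed yields a crude mean diameter. A minimal illustration, with an assumed calibration factor:

```python
def mean_diameter_nm(electrical_signal, number_concentration, k_cal):
    """Crude average particle diameter from the ratio of an electrical
    (length-proportional) signal to a CPC number concentration.

    electrical_signal    -- e.g. partector signal, proportional to sum of diameters
    number_concentration -- CPC reading in particles/cm^3
    k_cal                -- assumed calibration factor converting the ratio to nm
    """
    return k_cal * electrical_signal / number_concentration

# Illustrative numbers only:
print(mean_diameter_nm(electrical_signal=2500.0,
                       number_concentration=50_000.0,  # particles/cm^3
                       k_cal=1000.0))                   # -> 50.0 (nm)
```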
Common Neurologic Features of Lyme Disease That May Present to a Rheumatologist Lyme disease, caused by Borrelia burgdorferi (Bb) infection, has a broad spectrum of clinical manifestations and severity. Patients with possible Lyme disease may seek out or be referred to rheumatologists. Today, the most common reason to engage a rheumatologist is due to complaints of arthralgia. After skin, neurologic manifestations of Lyme disease are now among the most common. Therefore, it is important for rheumatologists to be aware of clues that suggest neurologic Lyme disease and prompt help from a neurologist experienced with Lyme disease. Purpose Rheumatologists may see patients from Lyme disease-endemic areas who complain about joint pain or who are referred to them by another physician. Today, Lyme disease with true arthritis occurs far less than cases with arthralgias and neurologic features. Neurologic manifestations are now recognized as among the most common extracutaneous manifestations. Recognition of suggestive neurologic features may warrant a referral to a neurologist who is familiar with Lyme disease. The purpose of this paper is to alert the rheumatologist, particularly in the United States, of the common neurologic Lyme disease features which should trigger a referral. Background Lyme disease is a tick-borne illness caused by Borrelia burgdorferi (Bb). In 1977 "Lyme arthritis" was first recognized as neighborhood outbreak of what was believed to be idiopathic juvenile rheumatoid arthritis [1]. The fact that a systemic infectious disease caused these manifestations and that the etiologic agent was B. burgdorferi, transmitted through a tick bite [2,3] was not realized until later. In particular, the cases were recognized to be arthritis, especially of the knee. The belief that arthritis and arthralgias, primarily of the knee in endemic areas, may be due to Lyme disease has encouraged primary care providers and patients themselves to seek out a rheumatologist. Rheumatologists have played a major role in diagnosing and treating Lyme disease, and can be an informed gatekeeper when neurologic Lyme disease may be present. They can also be a source of well-characterized body fluid samples and specimens to help research on Lyme disease and other fields of medicine. Lyme disease is highly seasonal in temperate climates. Approximately two-thirds of the cases from 1992 to 2006 have reported onset dates in June, July, or August. The seasonality of case occurrence varies geographically, with the beginning of the main transmission season occurring earlier in southern endemic states and later in the northern endemic states [1]. The increasing prevalence of Lyme disease further solidifies the importance of disease symptom and sign recognition [4]. Exposed patients may complain about pain (in joints, especially large joints, muscles, spine, and head), fatigue, and cognitive impairment. They may also exhibit skin lesions. However, because skin lesions related to Lyme disease occur at the onset of the disease, and the main symptoms of arthralgias occur later, it is not likely that the rheumatologist would see them when the patient first presents. Hence, a good history regarding this and exposure is important. Gathering subjective and objective data is the first step in differentiating Lyme disease from other diseases, such as a viral syndrome. 
Immediate suspicion for Lyme disease should occur when there is potential tick exposure in an endemic area, and the subject has an expanding skin lesion known as erythema migrans (EM) [5][6][7][8][9][10]. The lesion often has varied appearances [9][10][11] (see below). Further information about symptoms, history of Lyme disease, and family members with similar symptoms and signs should be obtained. Awareness of suggestive joint/musculoskeletal (Table 1) and neurologic clues for Lyme disease are helpful (Table 2). However, due to the non-specific nature of these manifestations, accurate diagnosis will generally involve a laboratory investigation. [12], cerebrovascular events [13], and vasculitis [14,15] all of which are likely to come to the more direct attention of a neurologist or dermatologist rather than a rheumatologist. For a rheumatologist, a detailed history and comprehensive physical exam is pertinent to make a diagnosis of Lyme disease and rule out other disorders. The most common neurologic features of early disseminated Lyme disease are shown in Table 2. Although the initial discovery of Lyme disease involved arthritis, this feature has now diminished markedly [16]. The manifestations are still frequently reported, but largely involve subjective joint pain (arthralgias) rather than true arthritis. This leads to an overestimation of Lyme arthritis incidences. In a recent Canadian study of 1230 patients reported to have Lyme disease, the overall incidence of arthritis was 0.028%. Early disseminated infections had 94% of neurologic complaints, while late disseminated infections had a 55% rate of neurologic complaints compared to 93% of arthralgias [16]. Of the 475 cases reported to have late-stage Lyme disease, only 35 (7.4%) manifested true arthritis, while 440 (92.6%) had arthralgias. Neurologic manifestations were noted in 259 (54.5%) cases [16]. Thus, common extracutaneous manifestations are now found to be neurologic. The rheumatologist may be sought out directly by patients or referred by primary care physicians. Because the nervous system is among the most commonly involved body system other than the skin, the remainder of this article is oriented to the neurologic clues a rheumatologist is likely to encounter, especially in North American cases. Elicitation of a History of Skin Lesions Resembling a Form of Erythema Migrans (EM) Several studies with microbiologic proof of B. burgdorferi infection, demonstrated that the often described "classic" bull's-eye lesion with central clearing occurred far less than "non-classic" atypical EM lesions [9,10]. In fact, the classic appearance occurs more frequently in southern tick-associated rash illness (STARI) than Lyme disease [17]. Mimics, such as a drug eruption, may appear as a classic lesion [6]. Photographs of the classic and varied appearances are shown on the CDC website [11]. Although it is unlikely that patients will be seen by a rheumatologist for an asymptomatic skin lesion, such as EM, it is worth seeking a possible occurrence in the patients' history. EM can be present initially with or without symptoms. An EM lesion may not be noticed because it is usually painless and does not itch. Atypical EM appearances are observed in 25-30% of all cases, even in PCR-positive cases [10]. The EM lesion may occur 4-14 days after a tick bite [6]. The appearance of a rash within 24 h of a suspected tick bite supports a hypersensitivity reaction rather than EM. EM can be present as many various appearances. 
The recognition of EM is often dependent on the knowledge and experience of the clinician looking at the skin lesion, which now can be supported by adjunctive laboratory tests [18,19]. Neurologic Features of Lyme Disease The more common neurologic features of early disseminated Lyme disease are shown in Table 2. These include focal weakness due to a cranial nerve VII palsy (rarely other cranial nerves are involved), aseptic meningitis syndrome, and acute painful radiculoneuritis. Very infrequent manifestations include cerebrovascular issues, including vasculitis, encephalomyelitis, and intracranial hypertension syndromes, in adolescents. Patients with these rare manifestations would likely present to neurologists directly. Facial nerve palsy is a major manifestation of neurologic Lyme disease. It is important to determine if there is unilateral weakness (with inability to close the accompanying eye lid or wrinkle the forehead). One should ask about tearing, hearing, or taste abnormalities involved on the same side of the weakness and anterior portion of the tongue. These signs may be subtle but confirm a peripheral cranial nerve VII involvement. When there is bilateral facial nerve involvement in a patient with endemic area exposure, Lyme disease is among a differential diagnosis list. This list includes Guillain-Barré syndrome, HIV, sarcoidosis, Epstein-Barr virus infection, and lymphoma. Lyme disease can sometimes involve other cranial nerves (including III, IV, VI) to produce double vision. Symptomatic lymphocytic/mononuclear meningitis due to Lyme disease is largely indistinguishable from viral meningitis, with headache, fever, photosensitivity, and stiff neck. In a patient with a clinical presentation suggesting acute meningitis, cerebrospinal fluid (CSF) examination is mandatory to guide diagnosis and therapy, including those due to other pathogens. The third neurologic syndrome of early dissemination is acute painful radiculoneuritis. This is more commonly seen in Europe. The term "Garin-Bujadoux-Bannwarth syndrome" (or "Bannwarth syndrome") has been applied to the constellation of painful radiculoneuritis (the hallmark of the syndrome, with severe spinal pain) with variable motor weakness, sometimes accompanied by facial nerve palsy. There is a robust CSF pleocytosis [20], despite absence of headache and meningeal signs. Fuller descriptions are published [20][21][22]. Spine pain (neck or mid/lower back pain along the spine) is typically prominent and may have radicular features, such as scapular winging and dermatomal sensory loss. Imaging will likely be unable to diagnose painful radiculoneuritis. The clinical manifestations of late neurologic Lyme disease include subtle encephalopathy, rare encephalomyelitis (most cases are European), and possible neuropathies, such as mononeuropathy multiplex, or a subtle sensory axonal peripheral neuropathy. A mild chronic encephalopathy may be the most common neurologic manifestation in patients with late-stage Lyme disease. The symptoms tend to be diffused and nonspecific, and patients typically report memory loss, sleep disturbance, fatigue, and depression [23]. There are currently debates on whether this represents a central nervous system (CNS) infection, or a systemic mechanism. Laboratory Tests as an Adjunct to Diagnosis of Lyme Disease Involving the Nervous System It is certainly helpful if one can document characteristic involvement, such as EM. However, this may not be apparent. 
Laboratory tests can be useful to document exposure to B. burgdoferi. Currently, the only FDA-approved tests are antibody tests. These are indirect tests that measure the host humoral response to the pathogen. A single test cannot prove active infection, but rather exposure. Limitations and caveats have been discussed elsewhere [18,24]. However, it should be noted these types of laboratory tests are undergoing relatively fast change, so it is important to keep abreast of the field [18,24]. Currently, two types of two-tiered tests are approved. The older one is now designated as the standard two-tiered test (or two-step approach) (STTT). The first tier has commonly been an ELISA assay, and if positive or borderline, is followed by a second test. For many years, the second test has been a Western immunoblot. Visual interpretation of the blot is subjective and involves counting different protein "bands" thought to represent specific B. burgdorferi antigens. However, it is now known that many of those bands represented more than one protein and were cross-reactive. Using the STTT, the presence of two out of three protein bands (23, 39, 41 kilodalton) to which IgM antibody reacts, is considered positive. IgM blot alone should not be used to diagnose patients who are symptomatic over 6 weeks. Towards 4 weeks or later, IgG reactivity to 5 of 10 bands is considered positive (but note the caveats above and described in detail elsewhere [18,24]). As an improvement, a modified two-tier test (MTTT) [19,25] has been approved by the FDA. It substitutes a "first-tier-like" immunoassay for the Western blot as the second step. Equal or improved sensitivity, without degradation of specificity or subjectivity, has been achieved. One caveat affecting both tests is that early antibiotics may blunt an expected antibody response and cause an apparent seronegative response. A likely explanation is that early antibiotic therapy leads to clearance of the pathogen prior to the development of a class-switched antibody response. As a result, antibody responses either do not develop, or individuals do not seroconvert from IgM to IgG. Therefore, it is important to know if a patient has received antibiotics. During the early phase of the disease, often between the third and sixth week, there is a robust IgM response. It is likely that the Western blot tests will be used less and less over the next few years, and replaced by a recombinantbased immunoassay. Despite limitations of the two-tiered serologic assays [18,24], the majority of suspected cases of Lyme disease should be borderline or positive in a patient who has not received treatment and more than a month or two has elapsed since possible infection. These tests have a significant negative predictive value toward ruling out the disease in endemic regions in such patients. Clinicians should enhance their interpretation of laboratory tests by consulting with the laboratory technical director where they send their tests. Direct tests can measure specific and active infections in many cases [18]. They are often offered by clinical laboratories, but they are not yet FDA-approved. Finding pathogen nucleic acid, especially if circulating in blood or CSF, is strong evidence of an active infection. As a caveat, tissue bound pathogen nucleic acid may be remnant material and not necessarily be a measure of active infection. Direct Lyme PCR has a sensitivity of approximately 50-70% in a true EM lesion, and 20% in synovial fluid, from true Lyme arthritis. 
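The band-counting rules for the standard two-tiered test summarised above lend themselves to a simple decision rule. The sketch below encodes only the simplified criteria stated in this article (IgM: at least 2 of the 23, 39 and 41 kDa bands, not to be used beyond roughly 6 weeks of symptoms; IgG: at least 5 of 10 bands, following a positive or borderline first-tier assay). It is an illustration of the stated rule, not a clinical decision tool; real interpretation must follow the full laboratory criteria and caveats cited in the text:

```python
IGM_BANDS = {23, 39, 41}   # kilodalton bands scored on the IgM blot
IGG_REQUIRED = 5           # of 10 scored IgG bands

def interpret_sttt(first_tier_positive_or_borderline, igm_bands_present,
                   igg_bands_present_count, weeks_of_symptoms):
    """Simplified standard two-tiered test (STTT) interpretation.

    Illustrative only: follows the band-count rules summarised in this article,
    not the complete laboratory criteria.
    """
    if not first_tier_positive_or_borderline:
        return "negative (first tier negative, no blot performed)"

    results = []
    if weeks_of_symptoms <= 6:
        igm_hits = len(IGM_BANDS & set(igm_bands_present))
        results.append(f"IgM {'positive' if igm_hits >= 2 else 'negative'}")
    else:
        results.append("IgM not interpretable beyond ~6 weeks of symptoms")

    igg_positive = igg_bands_present_count >= IGG_REQUIRED
    results.append(f"IgG {'positive' if igg_positive else 'negative'}")
    return "; ".join(results)

print(interpret_sttt(True, igm_bands_present=[23, 41],
                     igg_bands_present_count=3, weeks_of_symptoms=4))
```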
The rheumatologist is in an excellent position to further research endeavors in Lyme disease and autoimmune disorders, as they can conduct careful examinations and sample acquisitions from this relatively restricted joint compartment. When a patient comes in with monoarthritis to an academic center, a rheumatologist is almost always consulted. The academic rheumatologist can then perform an arthrocentesis at bedside for synovial fluid analysis and possibly a biopsy of the synovium. In the case of Lyme disease, a previous study conducted during a 17-year period had samples of synovial fluid collected from 127 patients with Lyme arthritis who were seen in the Lyme disease clinics. The study found that B. burgdorferi DNA was detected in 75 of 88 patients with Lyme arthritis (85 percent), but in none of the 64 control patients. This presented evidence that PCR is a useful method for detecting B. burgdorferi DNA in synovial fluid from patients with Lyme arthritis. Although PCR testing of synovial fluid has not been standardized for widespread clinical use, B. burgdorferi DNA is detectable in synovial fluid by PCR in about 70 percent of patients with untreated Lyme arthritis [26]. As for a neurological workup, imaging is usually found to be normal in up to 75% of cases. Sometimes imaging shows an enhancement of the facial nerve, but this is non-specific. CSF should be obtained from a suspected neurologic Lyme disease subject, especially those with headache, fever, and neck stiffness or spinal pain. The rheumatologist should be prepared in advance to have an identified neurologist for referral. CSF results are likely to influence antibiotic choice, as mentioned below. In consideration of other diseases in the differential, CSF studies should include cell count and differential, protein and glucose concentrations, and Gram stain and bacterial cultures. Intrathecal Lyme antibody testing for CSF serum indices should be mandatory, and checked routinely in anyone who has CSF examined for possible neurologic Lyme disease [27]. Syphilis testing can be obtained as well. Viral studies and cultures should be obtained, with testing for herpes simplex virus. Patients presenting with Lyme meningitis typically have a modest CSF pleocytosis of up to several hundred mononuclear cells per mi-croL; the median count in acute neurologic Lyme disease is approximately 160 cells/microL (160 × 10 6 cells/L). The CSF protein concentration is usually moderately elevated. The CSF glucose concentration is generally normal. In North American cases, a bland CSF picture may be common using traditional CSF tests [28]. Treatment of Neurologic Lyme Disease: Consideration of CNS-Penetrating Antibiotics A full discussion of this topic is beyond the intended scope of the article. Published guidelines [29,30] cite that patients with CNS disease are likely to benefit from known CNSpenetrating antibiotics, such as intravenous therapy with ceftriaxone for 14-21 days. The rheumatologist with limited experience in treating and following patients with neurologic Lyme disease is encouraged to confer with a neurologist or an infectious disease physician experienced with neurologic Lyme disease. With appropriate antibiotic therapy for early Lyme disease, persisting neurologic sequelae have been minimized. Nevertheless, 10% or more of early treated patients may not return to their baseline. 
We call attention to patients with CNS involvement, and the need to discriminate between recommendations from the literature based on European patients and those for North American patients, as the disease may be different. Neurological involvement involves discussions on using oral medications, such as doxycycline and amoxicillin, for some forms of systemic or neurologic Lyme disease. Steere et al. [31] noted that in treating Lyme arthritis without neurologic symptoms at the onset, 1/18 (5.5%) patients treated with oral doxycycline and 4/16 (25%) patients treated with oral amoxicillin, later developed neurologic Lyme disease, despite resolution of the arthritis. This suggests that the nervous system was infected early and that the oral medication was ineffective against the neurologic seeding [31]. In treating systemic illness when meningitis is involved, intravenous ceftriaxone is recommended over oral doxycycline [30]. When treated early, neurologic Lyme disease has a favorable prognosis. However, it can be difficult to determine the efficacy of antibiotic therapy during treatment, as improvement may occur over weeks to months, particularly in late stage infection. In patients with post-treatment Lyme disease with persistent symptoms, long-term antibiotics over weeks to months have not been shown to yield sustained resolution [32][33][34]. Conclusions For a multitude of reasons, patients with possible Lyme disease may present themselves to the rheumatologist. The rheumatologist is in an excellent position to evaluate patients, especially those who may have been missed at the earliest stage of Lyme disease and present with rheumatologic and/or neurologic symptoms. Accompanying suggestive neurologic symptoms should raise the possibility of neurologic Lyme disease, with further assessments. It can be difficult to diagnose a patient with neurologic Lyme disease. Therefore, it is important for a rheumatologist to initially gather a carefully elicited history from the patient. Preparedness can maximize favorable outcomes for the patients. This includes having a go-to experienced neurologist for prompt referral and interdisciplinary management. A relationship with the laboratory servicing patients is also important to determine the best test and know when to use that test when considering neurologic Lyme disease. When examining a patient with possible endemic area exposure to B. burgdorferi with rheumatologic complaints, it is important to consider Lyme disease and particularly neurologic Lyme disease as a possible diagnosis. Current knowledge and the landscape of Lyme disease are changing. Factors that favor exposure to ticks (encroachment of residences near wooded areas and climate change) may play a role in future to enhance the incidence of Lyme and tick-borne diseases [35]. Key guiding points in suspected Lyme disease are as follows: arthralgias are far more common than arthritis, neurologic manifestations are far more frequent than arthritis, EM in true cases of Lyme disease most often has a non-classic appearance more frequently than a classic bull's-eye lesion, and even the classic appearing EM is not totally pathognomonic because of mimicking lesions. Because neurologic involvement in Lyme disease is so common, recognition and timely treatment should be encouraged. Conflicts of Interest: The authors declare no conflict of interest.
Evaluation of Some Vegetal Colloids on the Quality Attributes of Beef Sausage Colloids are of vital role for improving the quality of foods including that of psyllium, locust bean and pectin which is found in orange peel albedo. These colloids are also of value for clinical nutrition. The last opinion could be confirmed by the chemical analysis which revealed that locust bean seeds had higher total phenolic compounds (485.28 mg/100 g) while psyllium seeds (297.54 mg/100 g) and orange peel albedo (246.11 mg/100 g) showed nearly the same level. Major phenolic compound was pyrogallol for locust bean, being cholchecein for other two colloids sources. Total flavonoid compounds were higher for psyllium seeds (536.46 mg/100 g) and locust bean seed (275.76 mg/100 g), being less for orange peel albedo (113.65 mg/100 g); major flavonoid in all sources was the hesperidin. The best eating qualities recoded for psyllium sausage followed by locust bean sausage. Generally, all three colloids sources improved the eating quality of beef sausage. Plasticity confirmed the results of sensory evaluation where the best sample was that of psyllium sausage. Higher pH value after 6 months storage at-18oC was in line with the best Water Holding Capacity (WHC) and plasticity levels recorded for psyllium sausage. Color intensity and TBA value were best for locust been followed by psyllium treatments. The lowest color intensity was in line with the highest TBA value. The keeping quality was better when adding the tested colloids; TVN, TBA value, Total Bacterial Count (TBC), Yeast and Mold (Y and M) count was lowest for psyllium followed by locust bean treatment. Other colloids showed the same trend but at lower degree. INTRODUCTION There are few of processed foods that do not contain one or more hydrocolloids in the formulation.Hydrocolloids are generally polysaccharide extracts, obtained from plants, which have a great affinity for water at relatively low concentrations with production of high viscosity system.Hydrocolloids are broadly used in food systems for various purposes, for example as thickeners, gelling agents, texture modifiers and stabilizers.Large, linear and flexible polysaccharides increase viscosity even at low concentrations.This property allows hydrocolloids to be the major ingredient in liquid and semisolid type foods.Recently there has been an increase in the demand of hydrocolloids (Williams and Phillips, 2000;Nishinari, 2008). Pectin a polysaccharide derived from plant material, mainly citrus fruit peel, apple peel or sugar beets.Pectin is widely used to impart formulation, thickening and physical stability to wide range of foods as confectioneries and is mostly used in fruit-based products including jams, jellies, fruit drinking and also dairy products as yoghurt (Nassinovitch, 1997;Ramírez et al., 2011).It should be noted that orange peel albedo (citrus sinensis) which is a good source of pectin was not used as it is in foods.At the same time according to El-Naqib (2010) orange albedo powder when fed to hepatointoxicated/diabetic rats lowered serum glucose and improved the liver and kidneys functions. 
Psyllium is an annual plant from the Plantago genus (Craeyveld et al., 2009).Around 200 species of this genus broadly distributed all over the moderate regions of the world (Guo et al., 2009).Psyllium is also called Isabgol meaning ''horse ear'' in Indian, which describes the shape of the seed.The psyllium seed husk which is a well-known source for the production of psyllium hydrocolloid (Craeyveld et al., 2009) is widely utilized in pharmaceutical and food industries (Singh, 2007;Yu et al., 2003).The psyllium is utilized in pharmaceutical industries as a medicinally bioactive polysaccharide and used for the medical treatment of constipation, colon cancer, diarrhea, high cholesterol, diabetes and inflammation bowel diseases-ulcerative colitis (Singh, 2007).Additionally, it is also used in food industries as constituting the gel and enhancing the consistency and stability (Bemiller and Whister, 1996).In this concermn, Abou-Moussa (2009) found that addition of Plantage psyllium seed improved the water holding capacity and plasticity of raw and roasted beefburgers, the sensory characteristics were also enhanced. The carob tree (Ceratonia siliqua L.), also called algarroba, locust bean and St. John's bread, is a leguminous evergreen tree which grows throughout the Mediterranean region, mainly in Spain, Italy, Portugal and Morocco.The fruit pod (containing sweet pulp) gives, after removal of the seeds, the carob powder (Yousif and Alghzawi, 2000).The seeds, covered with a tight-fitting brown coat, contain a white and translucent endosperm (containing galactomannans), also called Carob gum, Locust Bean Gum (LBG) or E411 (Dakia et al., 2007).The seed coat contains antioxidants (Batista et al., 1996) Seed of carob after decortications may be split and milled then used or extracted with hot water to obtain the Locust Bean Gum (LBG).The most familiar use of carob bean gum (LBG) is for food.Dairy products, sauces and dressing contain LBG.Meat products, breads and breakfast cereals may contain it, as well LBG can be used to replace fat and lower cholesterol and it has even been associated with decreased diarrhea in infant, children and adults (El-Hajaji et al., 2010;Milani and Maleki, 2012).).The aim of current research was to evaluate the effect of some vegetal colloids extracts such as psyllium seeds, locust bean seeds and orange albedo compared with pure pectin extract on quality attributes of beef sausage during frozen storage at-18 ºC for 6 months. MATERIALS AND METHODS Lean beef and fat tissues: Fresh lean beef from boneless round and fat tissues (sheep tail) were purchased from the private sector shop in the local market at Giza, Egypt. Pure pectin: It was obtained from El-Gomhouria Co. for Trading pharmaceutical, Chemicals and Medical Equipments, Cairo, Egypt.It was used at 5% solution by dissolving in hot water at 80ºC up to completely dissolving and allowed to cool to room temperature and kept overnight in refrigerator (5±1ºC). Carob seeds or locust bean seeds: (Ceratonia siliqua) purchased from a spices shop at Giza, Egypt.Seeds were milled and extracted with hot distilled water (5 g: 95 mL) for 2 h at 80ºC under constant stirring and allowed to cool to room temperature and kept overnight in refrigerator (5±1ºC). 
Orange fruit: (Citrus sinensis) peeled and inner white part separated, washed, dried, milled into powder form and then extracted with hot distilled water (5 g: 95 mL) at 80 ºC for 2 h under constant stirring and allowed to cool to room temperature and kept overnight in refrigerator (5±1ºC). Psyllium seeds: (Plantago psyllium) purchased from a spices shop at AL-Azhar, Cairo, Egypt.Seeds were milled into powder and then dispersed in distilled water (5 g: 95 ml) at 80 for 2hr under constant stirring; the dispersion became a homogenous gel and then cooled to room temperature.The dispersion was then kept overnight in refrigerator (5±1ºC) All above extracts added during formulation of raw sausage mixture by replacement of added water used in control sample with these extracts by the same percent. Other ingredients: Other ingredients such as defatted soy flour were obtained from Food Technology Research Institute, Agricultural Research Center, Giza, Egypt.Also, food grade sodium tripolyphosphate and sodium nitrite (El-Gomhoriya Company for Drugs, Chemical and Medical equipments, Cairo, Egypt).Salt, dried garlic and spices were obtained from local market at Giza, Egypt.The spices were powdered in a laboratory mill and a mixture of the powdered spices was prepared as follows: 7.27% laurel leaf powder; 4.37% cardamom; 5.22% nutmeg; 13.12% Arab yeast, 12.44% cinnamon, 9.58% clove, 7.50% thyme, 27.75% cubeb and 12.75% white pepper. Preparation of beef sausage: Five main formulas of beef sausages were prepared in this study.The first formula was prepared with water according to the traditional formula, to serve as the control sausage sample and consisted of the following ingredients: lean beef (60%), fat tissue (17.0%), water (15.0%),defatted soy flour (4.90%) dried garlic (0.20%), tripolyphosphate (0.30%), sodium chloride (1.50%), nitrite (0.01%) and spices (1.09%).Other four formulas were prepared by using the same ingredients with the replacement of added water by psyllium extract, locust bean extract, orange albedo extract and pure pectin, respectively.Sausages were prepared as described by Heinz and Hautzinger (2007) and stuffed into a nature casings which were hand linked at about 15 cm intervals.Sausages were aerobically packaged in a foam plate, wrapped with polyethylene film and stored at -18°C for 6 months.Samples were taken for analysis every month periodically. ANALYTICAL METHODS Chemical and physicochemical: Phenolic compounds of psyllium seeds, locust bean seeds and orange peel albedo were fractioned and determined by HPLC according to the methods of Goupy et al. (1999), while flavonoid compounds fractioned and quantified by HPLC according to the method of Mattila et al. (2000). Total Volatile Nitrogen (TVN) and Thiobarbituric Acid value (TBA) of sausage samples were determined using the method published by Kirk and Sawyer (1991).The pH value was measured by a pH m (Jenway, 3510, UK) on suspension resulting from blending 10 gm sample with 100 mL deionized water for 2 min (Fernández-López et al., 2006).Color of beef sausage formulas was determined by measuring the absorbance at 542 nm according to Husaini et al. (1950). 
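The control formulation given above sums to 100%, with the 15% added water replaced one-for-one by each colloid extract in the test formulas. A small sketch of how the recipe translates into batch weights (the batch size is chosen arbitrarily for illustration):

```python
# Control sausage formulation from the text (percentages of the total mix)
FORMULA = {
    "lean beef": 60.0, "fat tissue": 17.0, "water": 15.0,
    "defatted soy flour": 4.90, "dried garlic": 0.20,
    "tripolyphosphate": 0.30, "sodium chloride": 1.50,
    "sodium nitrite": 0.01, "spices": 1.09,
}
assert abs(sum(FORMULA.values()) - 100.0) < 1e-6  # percentages sum to 100

def batch_weights(batch_kg, colloid_extract=None):
    """Ingredient weights (kg) for one batch; optionally replace the added water
    with a colloid extract (psyllium, locust bean, orange albedo or pectin)."""
    recipe = dict(FORMULA)
    if colloid_extract is not None:
        recipe[f"{colloid_extract} extract"] = recipe.pop("water")
    return {name: round(batch_kg * pct / 100.0, 3) for name, pct in recipe.items()}

print(batch_weights(10.0, colloid_extract="psyllium"))
```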
Texture Profile Analysis (TPA) was determined by a universal testing machine (Cometech, Btype, Taiwan) .providedwith software.An Aluminum 25 mm diameter cylindrical probe was used in a ''Texture Profile Analysis'' (TPA) double compression test to penetrate to 50% depth, at 1 mm/s speed test.Hardness (N), gumminess, chewiness, adhesiveness, cohesiveness and springiness were calculated from the TPA graphic.Hardness = maximum force required to compress the sample; Cohesiveness = extent to which sample could be deformed prior to rupture; Springiness = ability of sample to recover to its original shape after the deforming force was removed; Gumminess = force to disintegrate a semisolid meat sample for swallowing (hardness×Cohesiveness); and Chewiness = work to masticate the sample for swallowing (springiness×gumminess) were determined as described by Bourne (2003). Physical properties: Water Holding Capacity (WHC) and plasticity were measured according to the filterpress method of Soloviev (1966).The cooking loss of prepared sausages were determined and calculated as described by AMSA (1995).This measurement was carried out after cooking in hot water at 85ºC the boiling in water for 15 min. Microbiological methods: According to the procedures described by Difico-Manual (1984), total bacterial count and yeast and mold counts of beef sausage were determined by using nutrient agar and potato dextrose agar media respectively.Incubations were carried out at 37ºC/48 h for total bacterial count and 25 ºC/5 day for yeasts and molds counts. Statistical analysis: Data were subjected to Analysis of Variance (ANOVA).Means comparison was performed using Duncan's test at the 5% level of probability as reported by Snedecor and Cochran (1994). RESULTS AND DISCUSSION Phenolic and flavonoids compounds: From results of Table 1, it could be noticed that, the major phenolic compound in psyllium and orange peel albedo was the cholchecien (234.01 and 229.68 mg/100 g respectively) which was pyrogallol for locust bean (388.95mg/100g), the latter contained also significant amount of chochecein (88.46 mg/100 g) and tangible amount of catechol and coumaria (2.85 and 1.79 mg/100 g respectively) while psyllium had significant amount of catechol and pyrogallol (37.67 and 12.01 mg/100g respectively) and tangible amount of coumarin, cafien, syringic acid and cinnamic acid (2.73, 1.90, 1.60 and 1.49 mg/100 g respectively) .It could be observed that total phenolic compounds was higher for locust bean (485.28 mg/100 g), followed by psyllium (297.54 mg/100 g) and orange peel albedo (246.11mg/100 g). The same table revealed that the major flavonoid was hesperidin for these studied sources, being highest for psyllium extract (442.40 mg/100 g) and locust bean extract (265.18mg/100 g) while was lower for orange peel albedo extract (65.14 mg/100 g).For psyllium total quercitrn, hespertin, rosmarinic acid and rutin were 90.88 mg/100 g which was 41.66 mg/100 g for locust bean extract and 39.80 mg/100 g for orange peel albedo.Total flavonoids were highest for psyllium (536.46 mg/100 g) and locust bean (275.76 mg/100 g) and lower for orange peel albedo (113.65 mg/100 g). 
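The derived TPA indices follow directly from the definitions given in the methods above (gumminess = hardness × cohesiveness; chewiness = springiness × gumminess). A minimal helper, run here with made-up rather than measured input values:

```python
def tpa_derived_indices(hardness_n, cohesiveness, springiness):
    """Derived Texture Profile Analysis indices as defined in the methods:
    gumminess = hardness x cohesiveness, chewiness = springiness x gumminess."""
    gumminess = hardness_n * cohesiveness
    chewiness = springiness * gumminess
    return {"gumminess": gumminess, "chewiness": chewiness}

# Example with illustrative (not measured) values for a cooked sausage sample
print(tpa_derived_indices(hardness_n=17.0, cohesiveness=0.45, springiness=0.80))
# -> {'gumminess': 7.65, 'chewiness': 6.12}
```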
Physical and physiochemical properties: Water Holding Capacity (WHC) and plasticity: From Table 3, it could be noticed that water holding capacity and plasticity of different sausage treatments were significantly (p<0.05)affected by the type of colloids immediately after processing.The highest or best water holding capacity (i.e., lowest value) and plasticity was recorded for sausage made with psyllium extract (0.40 and 4.50cm 2 /0.3 g, respectively) followed by locust bean sausage (0.65 and 4.10 cm 2 /0.3 g, respectively) with non significant differences between them (p>0.05).Also, no significant differences in WHC and plasticity were found between orange peel albedo sausage and pectin sausage.On the other hand, control sample had significantly lower WHC and plasticity when compared with other sausage treatments.These results are in agreement with that reported by Abou-Moussa (2009) found that addition of Plantage psyllium seed improved the water holding capacity and plasticity of raw and roasted beefburgers. During frozen storage, the water holding capacity and plasticity were decreased (i.e., separated free water increased) with advancement of storage time for all treatments (Fig. 1 and 2).The loss of WHC and plasticity during storage may be attributed to protein denaturation and loss of protein solubility.The rate of decrease in WHC and plasticity was lower for sausage treatments prepared with different type of colloid especially psyllium and locust bean when compared with control.This may be due to these colloids are polysaccharides which have a great affinity for water at relatively low concentrations with production of high viscosity system (Nishinari, 2008). Cooking loss and cooking yield: Significant differences (p<0.05) in cooking loss and cooking yield were observed between all sausage treatments at zero time (Table 3).Cooking loss was significantly decreased by replacement of added water with different colloids.Cooking loss decreased from 22.75% for control to 12.27% for psyllium, 13.70% for locust bean, 15.73% for pectin and 17.19 % for orange peel albedo.From Fig. 3, it could be noticed that cooking loss of all treatments increased as the period of storage increased.This may be due to decreased water holding capacity.The lowest increment of cooking loss during storage was recorded for psyllium.The highest cooking loss in control sample after 6 months of frozen storage was in line with the lowest WHC and pH values (Fig. 1 and 5).Color intensity: Data presented in Table 3, showed that color intensity slightly improved due to adding of the colloids sources in sausages, provided that best color recorded for locust bean, followed by psyllium treatments.It showed be mentioned that lowest color intensity was found for pectin sausage was also better than that of control.Generally, no significant differences were recorded in color intensity between treatments at zero time.Also, from Fig. 
4, it could be observed that, by advancement of frozen storage time, the color intensity was decreased for all treatment.This decrease in color intensity may be due to oxidation of oxymyoglobin to metmyoglobin beside lipid oxidation as reported by Osheba (2003).It seems possible that color intensity is affected by antioxidant efficiency; it was better for locust bean and psyllium sausages than other treatments during frozen storage.This may be due to locust bean and psyllium had higher photochemical compounds such as phenolic and flavonoids compounds which have antioxidants.The lowest color intensity of control sample was parallel to the highest TBA value. pH value: pH values of different sausage treatments ranged from 6.16 to 6.21 showed no significant differences between all treatments at zero time.Also, differences between all treatments were actually slight at any time of frozen storage (Table 3 and Fig. 5).By advancement of storage period, the pH values were slightly decreased for all treatments.This may be due to the breakdown of glycogen to produce lactic acid and consequently decreased pH value (Darwish et al., 2012).The highest pH value after 6 months of frozen storage (5.68) was in line with the best WHC (1.7cm 2 /0.3 g) value (Fig. 1) and the highest plasticity (3.40 cm 2 /0.3 g) for psyllium sausage (Fig. 2). Chemical properties: Thiobarbituric Acid (TBA): Figure 6, show thiobarbituric acid (mg malonaldhyde/kg) of different sausage treatments as affected by type of colloids during frozen storage at -18ºC up to 6 months.From these results it could be noticed that, TBA values of different sausage treatments ranged from 0.349 to 0.361 mg malonaldhyde/kg showed no variations at zero time.These values were gradually increased with advancement of frozen storage period.This increase in TBA value during storage could be indicating continuous oxidation of lipids and consequently the production of oxidative by products (Brewer et al., 1992).The highest increment of TBA value was recorded for control sample which reached 0.947 and 1.194 mg malonaldhyde/kg after 5 and 6 months of frozen storage respectively being exceeded the maximal permissible limit of 0.9 mg malonaldhyde/kg for TBA in frozen sausage (Egyptian Standards, 2005).On the other hand, the lowest increment of TBA values was observed for locust bean sausage followed by psyllium sausage with slight differences between them.This may be due to the locust bean and psyllium extract contain many phenolic and flavonoids compounds (Table 1) which have antioxidant activity.TBA values of locust bean and psyllium sausages reached 0.789 and 0.815 mg malonaldhyde/kg respectively after 6 months of frozen storage being not exceeded the maximal permissible limit.TBA values of sausage prepared with orange peel albedo and pectin exceeded the maximal permissible limit of TBA with only 0.042 and 0.18 mg malonaldhyde/kg respectively after 6 months of frozen storage. Total Volatile Nitrogen (TVN): Total volatile nitrogen of different sausage treatments did not affected by the type of colloids immediately after processing which ranged from 12.50 to 12.68 mg/100g as shown in Fig. 7. 
Total volatile nitrogen of all treatments progressively increased as the time of frozen storage increase.After 5 months of storage, TVN of all treatments were in the range of permissible level reported by Egyptian Standards (2005) being not more than 20 mg/100 g with exception of the control sample and pectin sausage.The increase of TVN in these treatments than allowance was 1.32 and 0.85 mg only respectively.Meanwhile, after 6 months of storage TVN of all treatments was higher than permissible level of TVN, except the treatments prepared with psyllium (18.73 mg/100g) and locust bean (19.84 mg/100g).Generally, at any time of frozen storage, psyllium sausage had the lowest TVN followed by locust bean sausage.This indicates the effectiveness of psyllium and locust bean extracts for inhibiting many types of microorganisms which caused protein hydrolysis, this may be due to these colloids contain many phenolic and flavonoids compounds (Table 1) which have antimicrobial. Microbial load: Total bacterial count and yeast and mold counts of different sausage treatments as affected by type of colloids during frozen storage at-18ºC for 6 months are presented in Values are mean and SD (n = 10); where: Mean values in the same row with the same letter are not significantly different at 0.05 level Fig. 8: Organoleptic evalution of sausage as affected by type colloid period.Total bacterial count of all sausage treatments was also increased by increasing storage time.Total bacterial count of control sample and pectin sausage reached 1.65×10 6 and 1.13×10 6 cfu/g, respectively after 6 months of storage.It exceed the maximal permissible limit of 10 6 cfu/g for total bacterial count in frozen sausage (Egyptian Standards, 2005) while total bacterial count of other sausage treatments reached to 4.31×10 5 for psyllium, 5.85×10 5 for locust bean and 7.18×10 5 cfu/g for orange peel albedo was not exceeded the maximal permissible limit at the end of frozen storage. Organoleptic evaluation: The organoleptic evaluation of different sausage treatments as affected by type of colloid was tabulated in Table 6 and graphically in Fig. 8. From statistical analysis of these data it could be noticed that there were significant differences (p<0.05) in texture and color scores between different sausage treatments, but other sensory properties i.e., taste, odor and overall acceptability showed no significant differences (p>0.05). Texture score of control sample (7.5) was significantly lower when compared with other sausage treatments, except orange peel albedo sausage which had no significant differences with control.The highest texture (more tender) and color scores were recorded for psyllium sausage followed by locust bean sausage with non significant differences between them.Also, no significant differences in the same parameters were recorded between orange albedo sausage and pectin sausage. Moreover, no significant differences (p>0.05) in taste, odor and overall acceptability scores were recorded between all sausage treatments.Generally, results indicated that psyllium sausage was the best one considering texture, odor, taste, color and overall acceptability, followed by locust bean sausage, pectin sausage, orange peel albedo sausage and control sample.It concluded also that all three colloid sources slightly improved the beef sausage quality specially the psyllium.In this concern, Abou-Moussa (2009) found that addition of Plantage psyllium seed enhanced the sensory characteristics of beefburgers. 
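The shelf-life discussion above repeatedly compares measured values against the permissible limits cited from the Egyptian Standards (2005): TBA not more than 0.9 mg malonaldehyde/kg, TVN not more than 20 mg/100 g and total bacterial count not more than 10^6 cfu/g for frozen sausage. A small sketch of such a compliance check, using the 6-month values reported above for the psyllium sausage as the example input:

```python
# Permissible limits for frozen sausage cited in the text (Egyptian Standards, 2005)
LIMITS = {
    "TBA (mg malonaldehyde/kg)": 0.9,
    "TVN (mg/100 g)": 20.0,
    "Total bacterial count (cfu/g)": 1e6,
}

def check_shelf_life(measurements):
    """Return pass/fail per quality index against the cited permissible limits."""
    return {index: ("within limit" if value <= LIMITS[index] else "exceeds limit")
            for index, value in measurements.items()}

# 6-month frozen-storage values reported in the text for the psyllium sausage
psyllium_6_months = {
    "TBA (mg malonaldehyde/kg)": 0.815,
    "TVN (mg/100 g)": 18.73,
    "Total bacterial count (cfu/g)": 4.31e5,
}
print(check_shelf_life(psyllium_6_months))
```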
Judging by the organoleptic test, the best treatment was the psyllium sausage; the plasticity values (Table 3 and Fig. 2) agreed, being best for psyllium sausage both fresh and after frozen storage. Taken together, the results show that the colloid sources used improved the quality attributes of beef sausage and extended its shelf life.

CONCLUSION Any of the colloid sources tested may be recommended during processing of beef sausage to improve the quality attributes of the end product, especially psyllium and locust bean. This practice may also confer a therapeutic benefit owing to the pronounced polyphenol and flavonoid levels.

Fig. 1: Water holding capacity (cm2/0.3 g) of different sausage treatments as affected by type of colloid during frozen storage
Fig. 4: Color intensity of different sausage treatments as affected by type of colloid during frozen storage
Fig. 6: Thiobarbituric acid of different sausage treatments as affected by type of colloid during frozen storage
Table 1: Phenolic and flavonoid compounds (mg/100 g) of the colloid extracts prepared in this study
Table 2: Texture profile analysis of different cooked sausage treatments as affected by type of colloid (values are mean and SD, n = 3; means in the same row with the same letter are not significantly different at the 0.05 level). The data in Table 2 show that the texture indices differed with colloid source: hardness decreased significantly from 20.79 N for the control to 15.10, 17.06, 17.85 and 18.84 N when the added water was replaced with the different colloid extracts.
Table 3: Physical and physicochemical properties of different sausage treatments as affected by type of colloid at zero time (values are mean and SD, n = 3; means in the same row with the same letter are not significantly different at the 0.05 level)
Table 4: Total bacterial count (cfu/g) of different sausage treatments as affected by type of colloid during frozen storage up to 6 months
Table 5: Yeast and mold counts (cfu/g) of different sausage treatments as affected by type of colloid during frozen storage up to 6 months
Table 6: Organoleptic evaluation of different sausage treatments as affected by type of colloid
Patellar Tendon Properties and Lower Limb Function in Rheumatoid Arthritis and Ankylosing Spondylitis versus Healthy Controls: A Cross-Sectional Study Objective. Rheumatoid arthritis (RA) and ankylosing spondylitis (AS) lead to inflammation in tendons and peritendinous tissues, but effects on biomechanical tendon function are unknown. This study investigated patellar tendon (PT) properties in stable, established RA and AS patients. Methods. We compared 18 RA patients (13 women, 59.0 ± 2.8 years, mean ± SEM) with 18 age- and sex-matched healthy controls (58.2 ± 3.2 years), and 12 AS patients (4 women, 52.9 ± 3.4 years) with 12 matched controls (54.5 ± 4.7 years). Assessments with electromyography, isokinetic dynamometry, and ultrasound included quadriceps muscle force and cross-sectional area (CSA), PT stiffness, and PT CSA. Additionally, measures of physical function and disease activity were performed. Results. PT stiffness and physical function were lower in RA and AS patients compared to healthy controls, without a significant difference in force production. PT CSA was significantly larger leading to reduction in Young's modulus (YM) in AS, but not in RA. Conclusion. The adverse changes in PT properties in RA and AS may contribute to their impaired physical function. AS, but not RA, leads to PT thickening without increasing PT stiffness, suggesting that PT thickening in AS is a disorganised repair process. Longitudinal studies need to investigate the time course of these changes and their response to exercise training. Introduction Chronic autoimmune arthritides are characterised by joint inflammation and progressive joint destruction and are accompanied by impaired physical function [1]. Inflammation also affects other musculoskeletal structures including tendons and their insertions into bone (entheses), but whether this leads to chronic alterations in the biomechanical function of the tendon-muscle complex is unknown. The function of a tendon is determined by its stiffness, that is, its elastic properties, which in turn influence skeletal muscle force output and function. When the force of the contracting muscle is transmitted via the tendon, the resulting elongation of the tendon attenuates the impact of the contraction on the connected bone. The force output is thereby reduced by a small amount, but this is stored as elastic energy and released on relaxation of the muscle [2]. Thus, this mechanism plays an essential part in the efficient performance of complex movements. Tendon properties also influence joint stability and the ability to make postural adjustments [3] and consequently play a major role in maintaining balance and preventing falls. In exercise physiology, ultrasound is used to investigate the biomechanical properties of healthy tendons (especially the load-bearing patellar and achilles tendons) and how they adapt to high intensity exercise, immobilisation, and changes with ageing [3][4][5]. In the elderly and after immobilisation, alterations in collagen content and cross linking lead to reduced tendon stiffness and size with a consequent reduction in collagen fibril diameter and number [6,7]. 2 The Scientific World Journal Ankylosing spondylitis (AS) and rheumatoid arthritis (RA) are autoimmune inflammatory arthropathies with distinct pathology. An inflammatory process involving the entheses, that is, the tendon insertions to bone, is characteristic for AS [8]. 
The enthesis is the site where stress is concentrated in the tendon-muscle complex and therefore is prone to microdamage [9]. It is assumed that genetic factors in spondylarthropathies such as HLA-B27 lead to preferential deposition of adjuvant molecules derived from bacteria at the damaged enthesis, followed by abnormal tissue repair responses [9]. These in turn lead to thickening of the tissue and fibrocartilage formation at the tendon insertions (enthesophytes) and ligaments (such as syndesmophytes in the axial skeleton) and account for the gradual ankylosing of joints and vertebrae with loss of movement. Corresponding structural alterations found on magnetic resonance and ultrasound (US) imaging include thickening and hypervascularity of the tendon and enthesis [10,11]. While enthesitis is the primary feature of AS, a secondary inflammatory reaction in the synovium and tenosynovium can occur [9]. In contrast, in RA the joint synovium is the primary antigenic target. Local diffusion of inflammatory cells and molecules from the synovium is thought to be responsible for inflammatory changes seen in and around adjacent tendons RA [12,13]. The close proximity of the patellar and achilles tendons to the synovial spaces of the knee and ankle joint facilitates their direct exposure to the local inflammatory process. Enthesitis has also been demonstrated in these tendons in RA in connection with the joint synovitis [14]. It is thought that the high mechanical load that the patellar and achilles tendons undergo predisposes to this process, since entheseal involvement is not usually seen in other tendons of RA patients [14]. The primary aim of this research was to investigate the biomechanical properties of the human patellar tendon (PT) in the context of chronic inflammatory arthritis in vivo. Secondarily, we aimed to determine whether RA and AS have different effects on tendon size and function and therefore conducted two separate studies comparing stable RA patients with matched healthy controls, and stable AS patients with matched healthy controls. To our knowledge, this is the first time in vivo assessment methods of biomechanical PT properties with ultrasound have been applied to populations with arthropathies. Additionally, assessment of muscle size, muscle specific force (muscle force normalised to muscle size), and neural activation of the muscle with electromyography was performed. Participant Characteristics and Disease Activity. Eighteen patients with RA according to the American Rheumatism Association 1987 revised criteria [15] and 12 patients with AS according to the European Spondylarthropathy Study Group criteria [16] were recruited from the rheumatology outpatient clinics of the local health board, as were, respectively, 18 and 12 age-and sex-matched healthy volunteers. Inclusion criteria for all patients were: disease duration of at least three years and stable disease activity (i.e., no flare or change in medication for the past three months). Exclusion criteria were the presence of any other catabolic disease, high dose steroid therapy (i.e., >10 mg prednisolone daily) or a recent steroid injection, and joint replacement or current pain or swelling in the right knee. The study was approved by the local research ethics committee and conducted in compliance with the Helsinki declaration. 
Disease activity was assessed in RA patients by the modified Rheumatoid Arthritis Disease Activity Index (RADAI-5) [17] and in AS patients by the Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) [18]. RADAI-5 measures global RA disease activity over the previous six months and current disease activity in terms of swollen and tender joints, arthritis pain, general health, and duration of morning stiffness. BASDAI measures AS disease activity of the past week in terms of fatigue, spinal pain, peripheral joint pain and swelling, areas of localised tenderness (e.g., at the site of tendons and ligaments), and duration and severity of morning stiffness. Both RADAI-5 and BASDAI are scored from 0 (no disease activity) to 10 (maximal disease activity). Habitual Physical Activity and Physical Function. A questionnaire on habitual physical activity [19] was administered to all participants, with separate scores (1 = sedentary to 4 = vigorous physical activity) for work and leisure time summed to a final score of 2-8. Objective physical function of the lower body was assessed by the 30-second chair sit-to-stand, the 8-foot-up-and-go [20], the 50-foot-walk, and single-leg balance tests [21]. The Modified Health Assessment Questionnaire (MHAQ) [22] and the 36 questions of the Short-Form Health Survey (SF-36) [23] provided information on subjective physical function and health-related quality of life (QoL), respectively. These questionnaires and physical function tests have been used in RA and AS populations before [24][25][26][27][28]. Setup for Quadriceps Muscle and Patella Tendon Measurements. Participants sat upright on an isokinetic dynamometer (CSMI Medical Solutions, Stoughton, MA, USA) with their right leg strapped to the dynamometer arm above the ankle. Additional straps were secured to prevent extraneous movement at the hips and shoulders. The knee joint angle was fixed at 90° from full leg extension and the hip angle at 90° [29]. PT stiffness was then determined using the method of Onambele-Pearson and Pearson [2]. After a set protocol of warm-up contractions, participants performed three ramped maximal voluntary isometric knee extension contractions (MVC), building up to maximum force with increasing effort over 4-5 seconds. During these contractions, participants crossed their arms over their chest to avoid the addition of arm muscle force to the quadriceps force measurements. Verbal encouragement was given. The 7.5 MHz linear US probe (MyLab50, Esaote, Firenze, Italy) was positioned sagittally over the PT and three video clips were recorded of PT excursion from the proximal and the distal attachments of the tendon to the bone, respectively (Figure 1). An external marker was fixed on the skin to detect accidental movement of the probe against the skin; when this occurred, recordings were repeated. The recordings were aligned by synchronization of force and US data. Muscle torque displayed on a computer screen provided feedback to the participants, and at least 1 minute of rest between each MVC helped to minimise fatigue. US images were analysed using digitizing software (ImageJ, NIH, Bethesda, MD, USA), with the assessor blinded to the participant's disease status. Calculation of Quadriceps Muscle Force. The MVC with the highest torque was used for analysis. Calculation of quadriceps muscle force accounted for torque, PT moment arm length [2], and antagonist cocontraction (which was estimated from electromyographic (EMG) activity [2,30]).
EMG activity (root mean square of the raw EMG signal) was recorded through self-adhesive Ag-AgCl electrodes (Ambu, Denmark) over the vastus lateralis (VL) and the long head of the biceps femoris (BF) during the extension MVCs and during three subsequent maximal isometric knee flexions. The latter data were used to correct extension torque for the effect of knee flexor muscle cocontraction. The following equations were used to calculate quadriceps force: Quadriceps force = (BF torque + knee extension torque)/estimated PT moment arm length [30], where BF torque = (BF EMG during knee extension/BF EMG during knee flexion) × knee flexion torque [31]. Calculation of Patellar Tendon Stiffness. The tendon force-elongation relationship was assessed at intervals of 12.5% of maximal force and fitted with second-order polynomial functions forced through zero [2]. Tendon stiffness for each participant was calculated at the level of each individual's maximum force from the slope of the tangent (first derivative of the polynomial function) at this force level. Patella Tendon Length, Patella Tendon Cross-Sectional Area, and Young's Modulus. PT length was defined as the distance between the apex of the patella and the superior aspect of the tibial tuberosity visualised on sagittal-plane ultrasound images with the knee joint at 90°. Three ultrasound images taken in the axial plane at 25%, 50%, and 75% of the patella tendon length were averaged to determine PT cross-sectional area (CSA). The mean CSA measurements at all levels were then averaged for the calculation of Young's modulus (YM = (tendon stiffness) × (tendon length/tendon CSA)) [5]. Quadriceps Muscle Cross-Sectional Area and Muscle Specific Force. To estimate muscle size, quadriceps anatomical CSA (ACSA) was measured by ultrasonography at 50% of the muscle length [30]. ACSA was measured separately for each of the four muscles of the quadriceps (VL, vastus medialis (VM), vastus intermedius (VI), and rectus femoris (RF)) and then summed. In this way, noncontractile tissue between the muscles was not included in estimations of muscle tissue. Muscle specific force was calculated by normalising force to quadriceps ACSA. Statistics. All statistical analyses were performed using SPSS software v. 14.0 (Chicago, IL, USA). Depending on normality of distribution of the data, differences between the patient and matched control groups were determined by either Student's paired t-test or Wilcoxon test. Unless otherwise stated, values are presented as means ± SEM. Significance was accepted at p < 0.05. Participant Characteristics and Disease Activity. The anthropometric characteristics of the participants are summarised in Table 1. All patients had stable disease with low disease activity scores (DAS). In the AS group, disease duration was 20.7 ± 3.9 years and DAS by BASDAI 3.0 ± 0.6. Three patients were on DMARDs (1 MTX, 2 SSZ), two in combination with an NSAID. Five patients had conditions that are typically associated with spondylarthropathy: ulcerative colitis (n = 1), Crohn's disease (n = 1), and psoriasis (n = 3). AS patients were significantly shorter than their healthy counterparts, which is a consequence of the axial involvement of the disease leading to kyphosis and loss of body height; this explains the apparently large BMI of the AS patients.
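To make the chain of calculations described in the Methods above concrete, the following is a minimal Python sketch of how the cocontraction-corrected quadriceps force, the tangent stiffness from a second-order polynomial forced through zero, and Young's modulus could be computed. The function names and the numerical values are made up for illustration only and are not the authors' analysis code.

    import numpy as np

    def bf_cocontraction_torque(bf_emg_ext, bf_emg_flex, knee_flexion_torque):
        # Antagonist (biceps femoris) torque during extension, estimated from the
        # ratio of BF EMG in extension to BF EMG in flexion [31].
        return (bf_emg_ext / bf_emg_flex) * knee_flexion_torque

    def quadriceps_force(knee_extension_torque, bf_torque, pt_moment_arm_m):
        # Quadriceps force = (BF torque + knee extension torque) / PT moment arm length [30].
        return (bf_torque + knee_extension_torque) / pt_moment_arm_m

    def tendon_stiffness(elongation_mm, force_n):
        # Fit force = a*x^2 + b*x (second-order polynomial forced through zero) to the
        # force-elongation data, then take the tangent slope dF/dx at the elongation
        # reached at the individual's maximum force.
        x = np.asarray(elongation_mm, dtype=float)
        f = np.asarray(force_n, dtype=float)
        design = np.column_stack([x**2, x])          # no intercept column -> curve through zero
        (a, b), *_ = np.linalg.lstsq(design, f, rcond=None)
        x_at_max = x[np.argmax(f)]
        return 2.0 * a * x_at_max + b                # N/mm

    def youngs_modulus(stiffness_n_per_mm, length_mm, csa_mm2):
        # YM = tendon stiffness * (tendon length / tendon CSA); with these units the result is in MPa.
        return stiffness_n_per_mm * length_mm / csa_mm2

    # Illustrative (hypothetical) numbers only, with force sampled at 12.5% steps of its maximum:
    force = [0, 500, 1000, 1500, 2000, 2500, 3000, 3500, 4000]   # N
    elong = [0.0, 1.1, 2.0, 2.8, 3.4, 3.9, 4.3, 4.6, 4.9]        # mm
    k = tendon_stiffness(elong, force)
    ym = youngs_modulus(k, length_mm=48.0, csa_mm2=110.0)
    print(f"stiffness ~ {k:.0f} N/mm, Young's modulus ~ {ym:.0f} MPa")

With these invented inputs the sketch yields values in the range typically reported for the human patellar tendon (on the order of 1000-2000 N/mm and roughly 0.5-1 GPa), which is the sense in which the group comparisons below should be read.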
Habitual Physical Activity and Physical Function. There were no significant differences in habitual physical activity levels between either patient group and their respective matched controls. Figure 2 shows increased elongation of the PT of the patient groups relative to their respective control groups at defined force levels, as demonstrated by a right shift of the force-elongation curves of the patient groups, indicating a reduction in tendon stiffness (i.e., the gradient of the curve). The calculated PT stiffness was significantly reduced in both the RA and AS patients compared to their controls (Table 3). This is consistent with the interpretation of the force-elongation curves. However, while the PT CSA of the RA group and their healthy control group was similar, it was increased in AS patients compared to their controls. There were no differences in PT length between the patient groups and their controls. Young's Modulus, which normalises PT stiffness to PT CSA, was therefore reduced in AS patients, but not in RA patients, relative to healthy controls. Quadriceps Muscle Cross-Sectional Area and Muscle Specific Force. There were no differences in quadriceps muscle force or CSA between RA and AS patients and their respective matched controls. Consequently, muscle specific force was not compromised for either patient group (Table 3). Discussion This study is to our knowledge the first to investigate the physiological properties of patellar tendons in patients with stable RA or AS. Compared to healthy age- and sex-matched controls, tendon stiffness in both patient groups is significantly reduced, and whereas the size of the PT was unchanged in RA, there was PT thickening in the AS group, resulting in pronounced reduction of YM. Despite preserved muscle force and size, these changes in tendon properties were accompanied by significant impairments in physical function. The reduction in PT stiffness is likely due to local and systemic effects of cytokines on the tendon, since proinflammatory cytokines are known to alter tendon structural characteristics in inflammatory arthropathies. The main drivers of the local inflammatory process are TNF-α, interleukin-1 (IL-1), and IL-6, which produce proteolytic enzymes such as matrix metalloproteinases that lead to collagen destruction [32], and the proangiogenic vascular endothelial growth factor, which evokes synovial hyperplasia and infiltration of macrophages and T cells into synovium [14]. According to the different pathologies of RA and AS, inflammatory molecules target primarily the enthesis in AS, whereas in RA tendon involvement is thought to be secondary through the proximity to inflamed synovium [10,11]. Systemically circulating cytokines [33] could have an additional detrimental effect on the tendon in both RA and AS. In addition to the effects of inflammation, disuse can be a contributor to reduced PT stiffness due to chronic reduction of the loading of the tendons [4,34]. In the current study, however, there were no differences in habitual physical activity levels between the patient groups and their controls. It is therefore unlikely that disuse was causing the differences we observed in PT stiffness. Tendon mechanical properties are essential for proprioception and for the reflex responses involved in rapid adjustment of muscle tension to positional changes [3], as well as for the storing of elastic strain energy, which is key to efficient locomotion.
The reduced PT stiffness observed for both our patient groups was accompanied by significantly impaired physical function, despite no differences in muscle strength or size. This is in agreement with previous data that found that the decline in postural stability in the elderly correlated with reduced gastrocnemius tendon stiffness and YM [3]. The underlying biomechanical explanation is that increased compliance of the tendon reduces muscle fascicle length changes in response to passive joint movements and thereby impairs recognition of small movements by the muscle spindle [35]. The finding that the PT CSA was increased in the AS patients but not the RA patients reflects the difference in the pathologies. The enthesis where a disorganised repair process takes place is the primary target organ in AS [9]. With MRI imaging, McGonagle et al. demonstrated characteristic entheseal inflammatory changes of perientheseal swelling and oedema and bone marrow oedema associated with knee synovitis in spondylarthropathies. These changes are not seen in RA, in particular, adjacent to entheseal insertions [36]. Histological findings from cadavers indicate the underlying structural changes at and around the enthesis including periosteal bone reaction, alterations of the bone structure, and increased bone formation, endochondral ossification, and vascular invasion of the fibrocartilage that facilitates access for inflammatory cells [37]. Our results are in agreement with Balint et al. who demonstrated thickening on US of the infrapatellar and tibial entheseal insertions in patients with SpA [38], and with other authors describing tendon thickening on US in SpA [11,39]. A further indication of the ineffective repair process in the tendon in AS is the fact that although PT CSA was increased in our AS patients this did not attenuate loss of PT stiffness, and resulted in the strikingly low YM. In RA, YM was not reduced significantly because of the unchanged tendon size. This emphasizes the differences in structural adaptive responses of the tendon in these different conditions. In ageing, a different process leads to degenerative changes of the tendon, with variable effects on tendon size having been demonstrated in tendon CSA with age [5,40,41]. Increases in tendon stiffness in healthy individuals following exercise training have been primarily attributed to intrinsic adaptations of the tendon material properties [5,42]. These adaptations and additional increases in PT CSA with exercise [5,40] (e.g., a heterogeneous, region-specific increase of PT CSA at the enthesis) possibly provide protection to the stressed tendon [42,43]. As the current study was cross-sectional in design, our results do not provide information on the time course of tendon changes. However, in a case report on unilateral inflammatory knee effusion in a patient with newly diagnosed RA [28], we found reduction of PT stiffness initially only in the leg affected by knee joint effusion, but one year later both PTs were affected despite controlled disease activity and maintenance of regular physical activity. Loss of muscle specific force and muscle CSA in the affected leg in the acute stage of knee effusion was also observed. Whilst muscle specific force and muscle size showed signs of partial recovery following resolution of the joint effusion by intraarticular corticosteroid injection and stabilisation of disease activity, there was no recovery of the PT biomechanics. 
This corresponds to the results now presented in moderately physically active patients with controlled, established RA and AS, where tendon stiffness is reduced. Previous publications showed that whereas stable RA patients are characterised by attenuated muscle mass and consequently reduced physical function [44,45], their muscle specific force and activation capacity are preserved [26,27]. In RA, high intensity exercise has been shown to restore muscle quantity, strength and function [25,33,[44][45][46]. Similarly, high intensity exercise training may be required to achieve beneficial adaptations of tendon properties. In healthy populations, this form of exercise is associated with increases in tendon stiffness and rate of force development [5,42]. Additionally, intensive exercise training has been shown to reverse the loss of tendon stiffness consequent of either immobilization or ageing [5,30,34,47]. In particular, eccentric exercise, which is characterised by high frequency fluctuations of force and transfers higher loads through the tendons than concentric exercise, has shown clinical effectiveness in tendinopathies [48] and is thought to promote tendon remodelling through increased cross-linking of collagen fibres [49]. Intermittent loading has been shown to reduce inflammation in tendon tissue in vitro [50], and thus it is possible that eccentric exercise would be beneficial for tendons affected by inflammatory arthropathies. Future studies should investigate the response of RA and AS to tendon-specific training. There are several limitations to our study. Firstly, higher participant numbers would have been helpful to clarify if, in the context of the loss of tendon stiffness, the YM in the RA group would have reached significantly low levels. Secondly, although we assessed disease activity through patient questionnaires and inflammatory markers in the blood, we did not have an objective measure of the local inflammation of the PT or enthesis. Both MRI and US can provide a detailed assessment of tendinopathic features in different regions along the tendon and at the enthesis; however, we had no access to a clinician trained in clinical ultrasound or MRI evaluation of tendinous structures. Similarly, histological data on the inflammatory processes in the tendon alongside our tests would have enhanced understanding of the relationship between inflammation of the tendinous and peritendinous structures and their biomechanical properties. This was judged to be unjustifiably invasive. A possible future project could assess tendon biomechanical properties in patients with RA and AS awaiting tendon surgery, whereby biopsy material could be gained without inconveniencing patients. Finally, a more detailed assessment of proprioception in future studies could further elucidate the functional implications of tendon abnormalities in RA and AS. Conclusions In summary, the present study reveals that PT properties are adversely affected in RA and AS and possibly contribute to the disability associated with these conditions. The demonstration of different changes in tendon structure add to our increasing understanding of the differences between the pathologies of RA and AS. Tendinopathies can be asymptomatic and therefore may go unnoticed in the context of inflammatory arthropathies. 
However, further research is needed to elucidate the role of tendon properties in the impact of chronic arthropathies, and to develop and evaluate treatments for preserving and restoring function of the muscle-tendon complex. Conflict of Interests. The authors declare that they have no conflict of interests.
2018-04-03T00:31:34.929Z
2013-06-05T00:00:00.000
{ "year": 2013, "sha1": "6983ac97994544fa2b06c11dc8a7c02bfd150c0f", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/tswj/2013/514743.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "13e85f7da1696ca3b501049e16f9f768746d4b00", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
150975464
pes2o/s2orc
v3-fos-license
Saints, Hagiographers, and Religious Experience: The Case of Tukaram and Mahipati One of the most important developments in Hinduism in the Common Era has been the rise of devotionalism or bhakti. Though theologians and others have contributed to this development, the primary motive force behind it has been poets, who have composed songs celebrating their love for God, and sometimes lamenting their distance from Her. From early in their history, bhakti traditions have praised not only the various gods, but also the devotional poets as well. And so hagiographies have been written about the lives of those exceptional devotees. It could be argued that we find the religious experience of these devotees in their own compositions and in these hagiographies. This article will raise questions about the reliability of our access to the poets' religious experience through these sources, taking as a test case the seventeenth century devotional poet Tukaram and the hagiographer Mahipati. Tukaram is a particularly apt case for a study of devotional poetry and hagiography as the means to access the religious experience of a Hindu saint, since scholars have argued that his works are unusual in the degree to which he reflects on his own life. We will see why, for reasons of textual history, and for more theoretical reasons, the experience of saints such as Tukaram must remain elusive. One of the most important developments in Hinduism in the Common Era has been the rise of devotionalism or bhakti. Though theologians and others have contributed to this development, the primary motive force behind it has been poets, who have composed songs celebrating their love for God, and sometimes lamenting their distance from Her. From early in their history, bhakti traditions have praised not only the various gods, but also the devotional poets as well. And so hagiographies have been written about the lives of those exceptional devotees. It could be argued that we find the religious experience of these devotees in their own compositions and in these hagiographies. This article will raise questions about the reliability of our access to the poets' religious experience through these sources, taking as a test case Tukaram (1608-1649; Tukaram 1991, p. vii). Though Tukaram is remembered for poems that dramatically describe his encounters with God, we will focus here on his conflictful relationship with his wife. It may seem as if this is something that does not belong in an article about religious experience. Yet we will see that in the hagiographies, Tukaram's conflict with his wife is presented as the result of his devotion to God, as she upbraids him for failing to provide for his family because of this devotion. So this article is at least about the social impact of Tukaram's religious experience. And even if Tukaram's marriage is categorized as a secular matter, it is still a good place to begin a discussion about how confidently we know any aspect of his experience. Tukaram was one of the four great poets in the Varkari tradition, which is the most popular bhakti movement in the Indian state of Maharashtra. In the Marathi language, a "Varkari" is a person who performs a specific type of pilgrimage. And this label is appropriate for this movement, since its central ritual is a twice annual pilgrimage.
Devotees from around Maharashtra travel on foot, sometimes for more than a hundred miles, to the town of Pandharpur to worship at the temple of Vitthal (more Encountering Saints through Hagiography A recent book about the comparative study of hagiography includes an article about another Varkari saint, Jnaneshvar, that describes a temple dedicated to him in his home village of Alandi. This is said to be the location of Jnaneshvar's samjivan samadhi, that is, the place where the saint voluntarily had himself entombed while still alive in 1296. Contemporary devotees hope to obtain blessings by worshiping at this site, Mark McLaughlin says in this article, as they believe that Jnaneshvar is still present here. This belief is illustrated strongly in an earlier century by a story that is told about yet a third Varkari saint, Eknath. A hagiography reports that Eknath had a dream in which Jnaneshvar called to him to come and restore the shrine of his samadhi. McLaughlin writes that "He entered the cavern and there he found Jnanesvar seated in meditation, as young and alive as the day that he had entered the cavern nearly three hundred years before" (McLaughlin 2016, p. 77). And even now, some four centuries after Eknath, here Jnaneshvar still sits. At the end of his analysis of the contemporary worship at Jnaneshvar's temple, McLaughlin takes a detour in the direction of theory. He writes: "The compound does not simply memorialize past events. The ritual activities of the space are not simply forms of remembrance of something that has gone. To perceive this space through a lens of absence, as the historiography of modern discourse offers us, is a mistake. This is a culture of presence-a presence anchored by the perceived occurrence of Jnanesvar having taken samjivan samadhi in the space. Such happenings Orsi calls abundant events because the foundation event that establishes the presence in the space informs all subsequent events there" (pp. 87-88). There may be a naïve positivist historiography that would describe this temple as only a memorial, since the historian's metaphysics cannot accommodate a saint meditating in a subterranean room for the better part of a millennium. But the metaphysics of contemporary devotees are more capacious, or so McLaughlin argues. In making this argument about presence, and even in invoking Robert Orsi, Mark McLaughlin is following the editors of the book in which his article appears. In their introduction, the editors write that they seek to move beyond a framework in which hagiography is disparaged as "mere myth or legend" (Monge et al. 2016, p. 1). In a separate theoretical chapter, one of the editors, Rico G. Monge, attacks a dichotomy between hagiography and history, in which "history is construed as representing objective truth. Hagiography, on the other hand, is that which dissembles, whitewashes, and idealizes, and thus carries with it the connotation of falsehood" (Monge 2016, p. 9). Citing the work of Hayden White, Monge highlights "the fundamentally fictive character of historiography," as historians rely on some of the same devices to construct their narratives as novelists. Monge argues that just as "Marxists, feminists, deconstructionists, psychoanalytic theorists, queer theorists, and the like" all compose accounts of past events according to the canons each of her own methodology, so does the hagiographer (p. 17). 
Monge insists that "hagiographies in fact do what any historian does-they interpret the data about their subject's lives in a way that is intended to provide real, meaningful knowledge about them" (p. 18). At least Monge is arguing that believers can find hagiographies meaningful even if they recount tales about which historians are skeptical-that the analysis of the historian may not capture the power of the hagiography for the believer. But then at the end of his article, Monge recommends an approach that follows on the work of Robert Orsi in which "hagiographic modes of discourse would no longer be simply demythologized or mined for their anthropological value; rather, they would be allowed to speak truths on their own terms as manifestations of an 'abundance' that exceeds the limitations of modern historical-critical methodology" (p. 20). Here, I believe that Monge is going beyond my earlier second-hand academic formulation that hagiographies recount things that believers take to be true, or that even believers take to be the Truth, but that they also convey truth to the scholarly analyst as well. As they make the case for a metaphysics of presence, both Mark McLaughlin and Rico Monge cite an article by Robert A. Orsi, "Abundant History: Marian Apparitions as Alternative Modernity." There, Orsi argues that in the west before the modern period, "the woods, homes, and forests of Europe, its churches, statues, relics, holy oils, and waters, and its shrines were filled with the presence of saints" (Orsi 2009, p. 218). By contrast, "Western modernity exists under the sign of absence. Time and space are emptied of presence" (p. 219). Yet a sense of presence persists at the site of Marian apparitions, even in our modern age, and this is something that should challenge modernity's exclusions. These are the sites of what Orsi calls "abundant events," where believers are transformed by their face-to-face encounter with Mary's power, which "radiates out from the really-real event along a network of routes, a kind of capillary of presence, filling water, relics, images, things, and memories" (p. 223). Orsi argues at least that the scholar must take into account this sense of presence in her analysis of Marian sites. But he also suggests that for the scholar (and here, he writes about a historian, but it could be an anthropologist, a sociologist, and so forth), abundant events "may very well draw the historian himself or herself, too, into an unexpectedly immediate and intimate encounter with the past" (p. 225). As with Monge, for Orsi, too, it seems an encounter with the saint's presence is not only possible for the believer but also for the scholar. Perhaps these themes are carried to their logical conclusion, or perhaps they are pushed beyond breaking point, at the end of the "Afterword," by the comparative theologian Francis X. Clooney. The very last sentence of Hagiography and Religious Truth reads: "If, as the volume tells us, boundaries among the several important understandings of truth and value ought now to be recognized as in fact permeable, it seems plausible, even if not explicitly stated in the volume, that collecting in one place these studies in hagiography might also draw readers (along with authors) into a kind of interfaith communion of the saints that is indebted to each saint's tradition but reducible to no one community, religious or academic" (Clooney 2016, pp. 203-4). 
Of course, "the communion of saints," a phrase found in the Apostles' Creed, is generally understood to refer to the church, especially as it includes not only those Christians now living, but to those in heaven and even in purgatory. As Hagiography and Religious Truth is about saints in the "Abrahamic Traditions" (here, Islam and Christianity) and the "Dharmic Traditions" (here, Hinduism and Buddhism), clearly the communion Clooney conjures is "interfaith," "reducible to no one community, religious." And to the extent that scholars are invited to share in this communion, as well as believers, it is also "reducible to no one community, . . . academic." It is the church of the Islamic/Christian/Hindu/Buddhist/believer/scholar. Practically speaking, our encounter with a saint, regardless of whether as believers or scholars or some hybrid of the two, is dependent upon the sources that give us access to the saint. The next section is about the problems with those sources, particularly for Tukaram. Tukaram and His "Cantakerous Wife" Good people will have a great regard for you; And the world will view you with growing respect. Think of your cattle as dead And of your pots and pans as stolen by a thief. Think of your children as though they were never born. Give up all desires and make your mind Hard as Indra's warhead. Spit out all mean pleasures And receive pure bliss. Says Tuka, you will be rid of great turmoil Once you break free from the bonds of this world. (Tukaram 1991, p. 48) This appears in the collection of poems by Tukaram translated by Dilip Chitre. Though most of the poems in this book do not have a title, this is the ninth of ten headed "Advice to an Angry Wife." In the first five of this set, the speaker is mostly not Tukaram, but his wife, who complains about his failure to provide for his family. For example, in the first poem, she asks rhetorically, "What can I feed these starving children?" (p. 42). Each poem ends with a response by Tukaram, which is either angrily or ironically dismissive. In the sixth through the eighth poem in this series, the tone changes, as the poet himself appears to complain that he is the victim of God's heavy demands, for example, asking, "Who would protect me from Him?/Where else can we go to escape Him?" (p. 47). Finally in the ninth poem, the one quoted above, there is yet another shift, as the poet seems to embrace a life "free from the bonds of this world," and urges his wife to do the same. Here, the poet's attitude seems to be more positive both toward his own self-denial and toward his wife, though the sacrifice he asks of her is extreme. His wife must regard her own "children as though they were never born." In his earliest and most important hagiographical compendium, the Bhaktavijay, Mahipati appears to paraphrase this poem, including using many of the same words. 1 This hagiographer offers a much longer biography of Tukaram in a later compendium, the Bhaktalilamrt, and there, too, he seems to summarize this poem. 2 As Mahipati tells the story, this preaching was remarkably effective. According to both the hagiographies, this dialogue occurs when Tukaram's wife, Jijai, confronts him and demands that he return home. Though Tukaram would regularly come to their village to sing the praises of Vitthal, he spent the rest of his time out in the forest. Tukaram accedes to Jijai's demand, but with a condition. 
He says, "If you listen to my advice, and if you give me your word for it, then I will now come home at once" (Abbott and Godbole 1999, vol. 2, p. 222). As soon as Jijai agrees to this and they return home, Tukaram gives her a lengthy sermon about detachment from the things of this world, which includes the paraphrase of the poem that I quoted in its entirety above. The wife is so moved by Tukaram's teaching that she invites Brahmans to her home to take whatever they want and they pick the place clean. But then Jijai immediately begins to regret that she has done this. When Tukaram gives away her last piece of clothing to a woman from the untouchable Mahar caste who comes begging, Jijai "flew into a rage" (Abbott and Godbole 1999, vol. 2, p. 229). Actually the Bhaktavijay says that Krishna's wife, the goddess Rukmini, merely dons the disguise of a Mahar (vol. 2, p. 228). Mahipati praises Tukaram's lack of concern with worldly life, and this is repeatedly dramatized by conflicts with his wife such as the one just described. Tukaram receives a sack of grain, which he 1 Compare (Chitre 2001, p. 55)) (poem 1987) with Mahipati (1974, p. 383) (49.92-96), available in English in (Abbott and Godbole 1999, vol. 2, p. 225)). immediately gives away, and his wife "flew into a rage" (Abbott and Godbole 1999, vol. 2, p. 215). Jijai miraculously receives a handful of silver coins, but then donates them to Brahmans at her husband's urging (vol. 2, p. 237). When the saint is given a load of sugarcane, all of it but one stalk is taken from him by the children of the village-he does not bother to save more for his own children. When Jijai sees that, she, once again, "flew into a rage," and hits Tukaram with the cane so hard that it breaks into pieces (vol. 2, p. 242). It may be that Jijai really was quarrelsome, but it is also true that Mahipati admits that her worldliness is a lesson for Tukaram and for his reader. In his later hagiography the Bhaktalilamrt, which is also full of such conflicts, Mahipati praises God's providence: "When Thou lookest on Thy bhaktas with the look of mercy, they at once break friendship with their worldly affairs, and Thou dost break the net that binds them to this earthly life. If Thou shouldst give to any bhakta, a wife who was all goodness, his love would bind him to her. So Thou dost give him as companion a cantankerous wife" (Abbott 2000, pp. 162-63). And Jijai is not the only "cantankerous wife" that we find in the lives of the saints that Mahipati recounts. In terms of the length of his biography, the chief saint in the Bhaktavijay is the fourteenth-century poet Namdev. He is criticized for failing to provide for his family both by his mother Gonai, and by his wife Rajai, who complains to her mother-in-law, "my garments are torn and exceedingly old. I have not enough to eat. I have come, therefore, to your house to live my poverty-stricken life" (Abbott and Godbole 1999, vol. 1, p. 65). In an incident that seems to be clearly composed to parallel a story about Tukaram's wife mentioned earlier in this article, gold coins are given to Rajai by no less than Krishna himself, when he visits her disguised as a wealthy merchant. But then Namdev cannot keep this wealth, despite its divine source, and he "called the Brahmans of the town and gave to these twice-born the money, garments, and the ornaments" (vol. 1, p. 76). It is not the case that all of the male saints in Mahipati's books struggled with worldly wives. For example, in his Ph.D. 
dissertation about the Varkari poet and saint Eknath, Jon Keune notes, "In stark contrast to the wives of the Marathi sants Namdev and Tukaram, Girija is remembered to have supported and encouraged Eknath's activities with unwavering enthusiasm" (Keune 2011, p. 27). But some of the saints did struggle with their wives. Perhaps it is obvious, but it is worth noting that there is a certain gender bias in these stories. Though one encounters women saints in India today, and even women who have renounced the world, this is something that is relatively uncommon historically, with the classical law codes only permitting renunciation for men. The authors of the law codes probably could not brook the idea of women enjoying that much independence, but they may have also assumed that the religious life was not something that women would even aspire to, mired as they were in their attachment to their children and other things of this world. If these gender prejudices helped to shape Mahipati's stories of domestic conflict, it should also be pointed out that he does tell tales of women saints as well as men, though these are relatively uncommon. Of the fifty-four saints whose stories are prominent enough to appear in the chapter titles of the English translation of the Bhaktavijay, only six are women. It may have been the case that Tukaram's wife was coincidentally vehemently opposed to his way of living out devotion. However, it is also certainly the case that there is a broader motif of spiritual saints fighting with worldly wives that recurs in Mahipati's hagiographies. If this motif was common at the time that Tukaram lived, we might even argue that, driven by this understanding, he chose a "cantankerous wife." However, the details of Tukaram's life story militate against this, if we are to accept Mahipati's accounts. In both the Bhaktavijay and in the Bhaktalilamrt, Tukaram seems relatively satisfied with worldly life when he marries Jijai. It is only some time later, when Tukaram sees his first wife die during a famine, and then he endures a series of business failures, that he realizes that there is no lasting joy to be found in this world. It is then that Tukaram says to himself, "This earthly life is unreal. It is the outcome of maya (illusion). The human body is perishable, I have spent my life for nothing, and I have forgotten the Lord of Pandhari [that is, the god Vitthal, worshiped in Pandharpur]" (Abbott and Godbole 1999, vol. 2, p. 204). 3 The Bhaktalilamrt actually says that Tukaram lost not only his wife, but also a son in the famine (Abbott 2000, p. 80; 28.123). The Bhaktavijay does not mention the son's death. In his book Religion and Public Memory, which is a study of the historical development of the biography of another Varkari poet-saint, the aforementioned Namdev, Christian Novetzke contrasts hagiography and history. He describes works such as the key north Indian hagiographical compendium the Bhaktamala (whose title could be translated as "The Garland of Devotees"), as "splendidly circular compositions, like a necklace" (Novetzke 2008, p. 36). It is no surprise to find themes reiterated in a hagiographical compendium, as the stories of various saints are assimilated to a common model of saintliness. The reader might even find whole incidents recurring in the life of more than one character. 4 History, by contrast, Novetzke says, aims "to ward off the nonrational, replicative, and mimetic" (p. 36).
Seeing the same incident repeated is something that is liable to make a historian suspicious of the veracity of the hagiographer's account. Yet there is evidence that Tukaram had a conflictful relationship with Jijai, not only in Mahipati's hagiographies, but also in Tukaram's poems themselves, as in the poem quoted earlier. While they may be a reliable indicator of the saint's experience, there is no guarantee of that. In the introduction to a translation of some of his poems, Dilip Chitre admits that Tukaram's oeuvre has dramatically expanded over time. Chitre begins his introduction by describing a manuscript in Tukaram's home village of Dehu, which is said to be in the poet's own hand, and which contained, at the beginning of the twentieth century, "about 700 poems." By the time of Chitre's writing, "[s]ome scholars believe Tukaram's available work to be in the region of about 8000 poems" (Tukaram 1991, p. vii). If those seven hundred poems are the only ones certainly by Tukaram, that would mean that over 90 per cent of the poems now attributed to the seventeenth century saint were not written by him. In the history of devotional poetry in India, it is quite common for later followers to write in the style of a revered saint, even explicitly claiming that their work is by their illustrious forebear (Hawley 1988). As was the practice by the Varkaris, and in devotion poetry in Hindi and in other Indian vernaculars, each poem by Tukaram ends with his name written into the last verse. However, it appears to be the case that later followers would style themselves as Tukaram, working his name into the close of their poems. As Religion and Public Memory describes it, the study of this process is particularly compelling for Namdev. Scholars have theorized that they have put their finger on later poets who identified themselves with Namdev in their poems, but who also left traces of their individual identity. So, for example, at the end of some of the poems, the author is identified as Vishnudas Nama. This could simply be the Namdev of the fourteenth century, since it is conceivable that he might call himself a "servant of Vishnu," which is what Vishnudas means. And this is probably how most devotees have understood this signature. But Vishnudas could also be a personal name, and there are some scholars who believe that this was a poet in the sixteenth century who wrote in the fashion of Namdev. Styling himself Vishnudas Nama, he was simultaneously claiming that he was the same Namdev and telegraphing that he was different. And Vishnudas Nama is not the only separate author that scholars believe they have isolated in the Namdev corpus. So Novetzke heads his chapter about this "Namdev and the Namas." This is a tradition of "remembering through imitation with variation" (Novetzke 2008, p. 137). Religion and Public Memory further clarifies: "This is not a case simply of borrowing a portion of a previously famous author, but a method of tapping into a complex cultural system of public memory that uses authorship to maintain a performative genealogy, and interconnection of authors over centuries" (p. 150). 
So later authors, steeped in the stream of Namdev's compositions, drew poems from it, but also poured their own compositions back into it, compositions that would be recognized as appropriately labeled Namdev's because they were in his style. 4 An anonymous reviewer of this article suggested that there may be hints here of a darker South Asian conception of repetition in history, as articulated in David Shulman's analysis of the Mahabharata, in "which time cooks all creatures, and time crushes them" (Shulman 2001, p. 26). And the same has apparently happened in the case of Tukaram. Crucial to Namdev's legacy, and that of other Varkari saints such as Tukaram, according to Religion and Public Memory, is the practice of kirtan, described as follows: "A kirtan performance in Marathi involves a lead performer, a kirtankar, who invokes one or two famous songs or stories and gives a narrative philosophical interpretation of selected texts. This is combined with music, dance, theatrical flourishes, and often a call and response with the audience" (Novetzke 2008, p. 81). To range backward chronologically, Novetzke notes that modern printed editions of Namdev's songs have been compiled from notebooks of kirtan performers. Before those printed versions (and even since), it is primarily through kirtan that devotees would have come to know of Namdev. And Namdev himself was a kirtankar: he composed for performance and was the originator of this genre of performance. Kirtan is largely the medium that has shaped the saint's public memory. As with Namdev, kirtankars learn Tukaram's poems from other performers, sometimes adding their own compositions. The primary limit to the creativity of the kirtankar would be a general sense of the kind of poetry that Tukaram wrote, including a sense of who Tukaram was. For most devotional saints, the poetry they wrote or at least the poetry attributed to them is constitutive of their hagiography. Mahipati's accounts include frequent reference to his subjects' compositions; he sometimes admits that he is paraphrasing their poems. It is significant that when he introduces Mahipati, Christian Novetzke labels him first a "kirtan performer" and then an "author" (Novetzke 2008, p. xii). Mahipati often depicts the Marathi saints in performance. Religion and Public Memory argues that Mahipati's hagiographies not only contain kirtan, but that they were composed to be presented as kirtan. It is certainly the case, as Novetzke notes, that Mahipati frequently addresses his audience directly. He ends each chapter in the Bhaktavijay with an exhortation to pay attention, for example: "Therefore listen, O pious ones, to the deeply delightful forty-eighth chapter; it is an offering to Shri Krishna" (Abbott and Godbole 1999, vol. 2, p. 217). Novetzke even claims that Mahipati's text "appear[s] to be a transcribed kirtan, word for word" (Novetzke 2008, p. 121). Regardless of whether Novetzke is right about the extent to which the written text preserves an original performance, it is certainly the case that, once Mahipati's hagiographies were written about Tukaram, they would have provided one of the criteria by which the later tradition would have judged the appositeness of poems attributed to him. There is a particularly suggestive analysis of the circular relationship between the compositions of the poet saints and their hagiographies in John Stratton Hawley's book about the sixteenth century north Indian devotee Surdas.
To a Krishna worshiper today, there is something poignant about the rich imagery of Surdas's poems, as he is remembered to have been blind from birth. And there are poems attributed to Surdas in which he speaks of his blindness. However, Hawley's conclusion is that there is no unambiguous evidence of this condition in the earliest collections of Surdas's work. There are early poems in which the author calls himself blind, but in them "the blindness the poet bemoans in himself is of a spiritual, not physical nature" (Hawley 1984, p. 29). It may have been the case that "the historical Surdas" was not blind, but that his sight was taken away by the later tradition, both in hagiographies about him and even in poetry attributed to him, which read his metaphors too literally. To return to Tukaram, we cannot know for certain that his wife Jijai was "cantankerous," even though Mahipati says so, especially given that this is a theme in the lives of some of his other saints, as noted previously in this article. And we cannot know this for sure, even though Tukaram's own poems say this, since it is almost certain that many of the poems now attributed to him were written after his time. Perhaps the compositions that excoriate his wife are the product of a certain gender bias in the greater Varkari tradition, which is also found in Mahipati. This is a particularly interesting problem for Tukaram, because his poetry is often characterized as marked by a certain modern self-expressiveness. In the critical introduction that precedes his translation of a selection of Tukaram's poems, Dilip Chitre writes: Tukaram gave Bhakti itself new existential dimensions. In this he was anticipating the spiritual anguish of modern man two centuries ahead of his time. He was also anticipating a form of personal confessional poetry that seeks articulate liberation from the deepest traumas man experiences and represses out of fear. Tukaram's poetry expresses pain and bewilderment, fear and anxiety, exasperation and desperateness, boredom and meaninglessness-in fact all the feelings that characterize modern self-awareness. (Tukaram 1991, p. xx) For our purposes, particularly interesting is Chitre's claim that we find in Tukaram's work a new "form of personal confessional poetry." Perhaps it would be appropriate in Tukaram's case to speak of his poetry as conveying his experience, at least according to Chitre's analysis, since we find in that poetry an expression of the poet's individuality, a reflection upon his inner life. One thing that is problematic in this usage is that Chitre himself admits that Tukaram's oeuvre has dramatically expanded over time, as we have noted. When Chitre claims that he finds in this tradition "personal confessional poetry," he may mean that he has developed the hermeneutical skills to identify the works that were really by the seventeenth century devotee. But I doubt that he means this, as he implies that even the experts have not had much success in isolating the historical core of Tukaram's work. On the other hand, Chitre may mean that there is a kind of confessional tone to many of the poems attributed to Tukaram, whether an individual work is by him or not. 
However, if this is what Chitre intends, it seems to stretch what might be called "personal confessional poetry," since that may have become self-conscious adherence to a style that is only apparently personal, expressing the anxiety and desperateness that the audience has come to regard as characteristic of Tukaram, whoever is the author of specific poems. Like Dilip Chitre, in their translation of a selection of the poems of Tukaram, Gail Omvedt and Bharat Patankar credit the devotee with a certain modern sensibility. They write, "Tuka, however-perhaps as befitting a poet of the seventeenth century, a century emerging into modernity-gives his own extended commentaries on his life. More than any other sant, his song-poems are personal and compelling" (Tukaram 2012, pp. 21-22). Later, they say that Tukaram "expresses his own subjectivity, something that might be taken as a sign of modernism and a new concern with the individual" (p. 27). The Songs of Tukoba describes the poet as from "an age entering into modernity," a time of "rational questioning," which here is "turned into themes of questioning the divine" (pp. 35-36). The book goes on to note that this was during the same period when traditions in Europe were shaken up by Descartes, Galileo, Hobbes, and Locke. Omvedt and Patankar assert that "[t]hese trends had their correlates in India, though we find more of the themes emerging from the subalterns rather than from a dissident elite, and being cut-off by their conflict with an establishment" (pp. 36-37). Central to Omvedt and Patankar's argument here is that they present Tukaram as a strong critic of caste oppression. In fact, he is only the most prominent representative of what Omvedt and Patankar argue is a tradition of "radical bhakti," that also included other poet-saints such as Basava, Namdev, Kabir, Nanak, and Ravidas. Within the context of this article, it should be noted that Omvedt and Patankar argue that the eighteenth century was a time of "conservative consolidation" in Maharashtra, and they see Mahipati's hagiographies as significant evidence of this (p. 44). "Mahipati's Tukaram tells Shivaji to follow varnashrama dharma" and return to his royal responsibilities, for example, while in his own poems, Tukaram wants nothing to do with kings, Omvedt and Patankar argue. And Tukaram humbly accepts the chastisement of the Brahman Rameshvar Bhat (p. 18). In The Songs of Tukoba, Dilip Chitre comes in for criticism similar to Mahipati. Omvedt and Patankar argue that Tukaram not only criticizes caste, but also rejects "the traditional goals of Brahminism of absorption in the divine" (p. 37). Then they add in an endnote, "Here we are disagreeing with most of the interpretations of Tuka by scholars including Chitre and More. The fact is that to maintain their position of Brahminized orthodoxy, the Brahminic scholars simply have to ignore a large number of songs" (p. 50n6, and see also p. 19). So Chitre represents, according to this, a category of "Brahminic scholars" who advance "Brahminized" interpretations of Tukaram. Within the context of an article about hagiography, an interesting example of the conflict among Tukaram's biographers is the story of the end of the saint's earthly existence. In the Bhaktavijay, the hagiographer Mahipati recounts that Tukaram was miraculously taken to heaven even while still alive: "Then (later) Tuka went to Vaikunth (Vishnu's heaven) with his body" (Abbott and Godbole 1999, vol. 2, p. 294). 
In his later work, the Bhaktalilamrt, Mahipati tells this story in much greater and more dramatic detail, as God calls Tukaram to "the luxurious Pushpak chariot of light" (Abbott 2000, p. 313). Chitre admits that "Varkaris believe that Vitthal Himself carried Tukaram away to heaven in a 'chariot of light'" (Tukaram 1991, pp. xiii-xiv), and he lists some other opinions about the poet's end. He concludes: "Reading his farewell poems, however, one is inclined to imagine that Tukaram bade a proper farewell to his close friends and fellow-devotees and left his native village for some unknown destination with no intention of returning" (p. xiv). When Omvedt and Patankar take up the question of Tukaram's end, they present three alternatives. The first is the "orthodox account," "that he was carried off to heaven directly" (Tukaram 2012, p. 41). The second alternative, for which they cite Chitre's book, is "the modern Brahminic secularist explanation," "that he simply bid everyone good-by and wandered off on some unnamed pilgrimage" (p. 41, citing Tukaram 1991, pp. xiii-xiv). It is apparent why The Songs of Tukoba labels Chitre's account "secularist," since it seeks to offer a stand-in for Mahipati's miracle story. Why Chitre's position is "Brahminic" is only clear when Omvedt and Patankar present their third alternative: "Non-brahmins believe that he was murdered by his enemies." This seems to be the conclusion that Omvedt and Patankar lean toward. They refer to a book by A. H. Salunkhe, which "cites many bits of circumstantial evidence in favour of this" (Tukaram 2012, p. 41). Omvedt and Patankar stigmatize Chitre's position as "Brahminic," as he seeks to absolve the Brahmans of the crime of Tukaram's murder. 5 For his part, Chitre mentions the theory that Tukaram was murdered, but dismisses it. He was "phenomenally popular," such that his assassination "would not have escaped the keen and constant attention of his many followers." Since they did not pass down any account of this, Chitre concludes that "such speculations seem wild and sensational" (Tukaram 1991, p. ix). On the issue of the end of Tukaram's earthly existence, Chitre and Omvedt and Patankar have something important in common: they are secularists, in that they seek a nonmiraculous explanation. But there is also a substantial difference between them, which turns on Omvedt and Patankar's more basic critique that Chitre's Tukaram is too "Brahminic," that Chitre does not sufficiently emphasize the devotional poet's radical critique of caste. And there is something similar and characteristically different in how Says Tuka and The Songs of Tukoba deal with the question of the modernism of Tukaram's poetry. Though there is no sign of Chitre's existentialist "anxiety" and "desperateness" and "meaningless" in Omvedt and Patankar, they do allow that Tukaram "expresses his own subjectivity" in his poetry. However, they argue that this subjectivity includes a strong caste consciousness that is watered down in Says Tuka. In other words, in both of these readings, something that is characteristic of Tukaram's work is its self-expression, but it seems that the selves expressed are somewhat different, one with a greater sense of anti-Brahman resentment. Or, to shift to the rhetoric of experience, it would seem that for Omvedt and Patankar, Tukaram's experience was more strongly marked by caste oppression and opposition to it. 
This is something that you will not find in Mahipati's hagiographies, according to The Songs of Tukoba, because the hagiographer's work represented "The Conservative Consolidation of the Eighteenth Century" (Tukaram 2012, p. 40). And this is something that you will not even find in Chitre's translation of a selection of Tukaram's own poems, since he "ignore[s] a large number of songs" (p. 50n6). It seems that in a corpus of thousands of poems, the translator enjoys some latitude to constitute the self and experience of the poet. 6

5 The opposition between the "Brahminic" view and that of "Non-brahmans" serves Omvedt and Patankar's overall activist agenda, but also lumps people together into groups that they might not otherwise choose to join. Certainly, that Tukaram was murdered is not a view that is subscribed to by all non-Brahmans. There is evidence of conflict with sanctimonious Brahmans throughout Varkari history. Tussles with Brahmans are common in Mahipati's hagiographies, though they generally lead to a reconciliation in the end. Omvedt and Patankar's more radical view of this may be in part a product of the development of a more militant Dalit consciousness in Maharashtra in the late twentieth century.

Religious Experience as a Problem

One of the tricks to getting at the religious experience of Tukaram is that we do not have reliable sources. But another trick may concern the nature of experience itself. An influential critique of experience is a 1998 article by Robert H. Sharf. Before turning to that article directly, we might take a moment to consider a point that he makes, that "experience" as an analytical category has a history. In his book Shelter Blues, Robert Desjarlais provides a brief genealogy of "experience." It is something that is not only internal, but also personal and unique to the individual. It is authentic and true, as opposed to artificial. Within a self that is understood to be composed of a complex of multiple interior layers, experience is deep and meaningful, subject to reflective interpretation by the experiencer. At a time in recent history marked by the rise of the modernist novel and of psychoanalysis, experience is assigned its meaning in a narrative of the development of the person. So, Desjarlais concludes, "In much the same way that the truth of sexuality grew out of an economy of discourses that took hold in seventeenth-century Europe, so discourses of depth, interiority, and authenticity, sensibilities of holism and transcendence, and practices of reading, writing, and storytelling have helped to craft a mode of being known in the modern West as experience: that is, an inwardly reflexive, hermeneutically rich process that coheres through time by way of narrative" (Desjarlais 1997, p. 17). 7 My account of the progress of this notion of experience implies something that Desjarlais argues explicitly: there is nothing about this understanding of the self, or of a life narrative, or of experience itself that is universal across cultures. This is a conclusion that is significant for Desjarlais as an anthropologist. Writing about "experience" in the analysis of another culture, the ethnographer may be unwittingly imposing an alien life world. Let us now return to the critique of religious experience expressed by Robert H. Sharf. The rhetoric of experience, his essay begins, valorizes "the subjective, the personal, the private," over "the 'objective' or the 'empirical'" (Sharf 1998, p. 94).
This is a particularly appealing rhetorical move, since it saves both religious people and scholars of religion from an empiricism, which deconstructs religion, and a cultural pluralism, which delegitimizes parochial western claims. However, Sharf comes to the conclusion that it is a mistake for scholars to rely on this rhetoric. For Sharf, it is not just that "Scholars of religion are not presented with experiences that stand in need of interpretation, but rather with texts, narratives, performances, and so forth" (p. 111). So we do not have access to the experience, but only to discourse about it. And it is not just that "a given individual's understanding and articulation of . . . an experience will be conditioned by the tradition to which he or she belongs" (p. 96). So there is no such thing as a "raw experience" that is not culturally mediated. It is not even just that the contemporary discourse of religious experience only developed since the beginning of the nineteenth century, so that this rhetoric "anachronistically imposes the recent and ideologically laden notion of 'religious experience' on our interpretations of premodern phenomena" (p. 98). Rather, for Sharf, the problem with the rhetoric of religious experience is that it is incoherent. "The word 'experience,' in so far as it refers to that which is given to us in the immediacy of perception, signifies that which by definition is nonobjective, that which resists all signification" (p. 113). So the rhetoric of experience is "a mere place-holder that entails a substantive if indeterminate terminus for the relentless deferral of meaning" (p. 113). 6 It is noteworthy in this context that Omvedt and Patankar do not include any of Chitre's "Advice to an Angry Wife" poems in their collection, though they do note in passing that she is "depicted . . . as an almost complete shrew" (Tukaram 2012, p. 32). 7 In the sentence quoted, Desjarlais cites Foucault (1978). Though I refer to Desjarlais's original here, I initially found this anthologized in Martin and McCutcheon (2012). June McDaniel takes on Sharf's critique of experience in her very recent book, Lost Ecstasy: Its Decline and Transformation in Religion. Here, McDaniel argues that experience and, especially, ecstatic and mystical experiences have been marginalized in both religious studies and theology. One consequence of this is that people in the modern west have come to seek ecstasy outside religious channels, even in ways that are self-destructive, such as through violence, or so McDaniel states. Lost Ecstasy deals with Sharf most directly in the eighth chapter, "The Case of Hinduism: Ecstasy and Denial." McDaniel opens the discussion by boiling down Sharf's critique to three points: 1. Ideas of religious experience are not really indigenous ideas-they are "a relatively late and distinctively Western invention." 2. What earlier ideas exist in Asia about religious experience show that it is unimportant. There is no pre-colonial emphasis on experience; its importance only comes from Western-trained writers such as Radhakrishnan. Religious authority is rarely based on "exalted experiential states." 3. There are false, inconsistent or dubious claims about religious experience, such as claims of alien abduction. Since some claims of subjective religious experience are false, therefore, all claims on the topic are false. (McDaniel 2018, p. 
235;citing Sharf 1998) Most of the chapter is given over to a refutation of the second point, with McDaniel presenting accounts of authoritative mystical experience from a broad range of Hindu literature from the Vedic period to the present. In a section about "The Dharma Tradition," the author admits that there is little room for mysticism in works about one's worldly obligations based on caste and stage of life, but this is presented as being exceptional. The phrase "exalted spiritual states" appears in Sharf's article in a brief discussion of Buddhist works about meditation, which concludes, "the authority of exegetes such as Kamalasila, Buddhaghosa, and Chih-i [who wrote such works] lay not in their access to exalted spiritual states but in their mastery of, and rigorous adherence to, sacred scripture" (Sharf 1998, p. 99;citing Sharf 1995). Here, the critic admits that the Buddhist tradition has preserved accounts of meditative states, but that those accounts are represented in works that depend on a kind of scholastic authority. This kind of interdependence of spiritual experience and scholastic authority is not a very clear subject of McDaniel's Chapter 8, though throughout the book she charges that contemporary Christian theology has worked to narrow the range of accepted experience. About McDaniel's third point summarizing Sharf's article, it is certainly true that it includes accounts of alien abductions. Sharf does not claim that these are religious experiences, but he does treat them as analogous. John Mack and other scholars argue that these accounts are so consistent that this is proof that something close to the events described did take place. However, Sharf is skeptical about this. It is more likely that there is no "experience" at all at the basis of these accounts. After citing this example, Sharf critiques the conclusion of a book about possession by Felicitas D. Goodman. That author agnostically admits that she cannot know whether her research subjects were actually possessed: "No one can either prove or disprove that the obvious changes in the brain map in possession . . . are produced by psychological processes or by an invading alien being" (Sharf 1998, p. 112;quoting Goodman 1988, p. 126). About this tergiversation, Sharf comments derisively, "Goodman's agnosticism is but a small step away from John Mack's qualified acceptance of existence of alien abductors" (Sharf 1998, p. 112). Here, it seems that Sharf is attaching to Goodman's possession the opprobrium that seems to go with the accounts of alien abductions-we all know that these things did not happen. Perhaps Sharf reaches this conclusion because he is a thoroughgoing materialist, beginning his analysis with the assumption that there are no supernatural beings, so all the tales of encounters with them must be subject to some other kind of explanation. If that is Sharf's starting point, I cannot follow him that far. As a professor of religious studies in a state university in the United States, I am not prepared to issue a universal declaration of my own view about the existence or nonexistence of supernatural beings. However, it is not clear if that is Sharf's starting point-he does not make that explicit. In my reading of McDaniel's book, there is little in Chapter 8 about the first point in her summary of Sharf. Lost Ecstasy does claim that "As we compare understandings of religious experience in Hinduism and in the Judaeo-Christian West, what is striking is their similarity" (McDaniel 2018, p. 251). 
But nowhere here or elsewhere is there a discussion of the kind of genealogy of experience in the modern west that we find in Sharf and Desjarlais. I assume that this is because McDaniel does not find such a genealogical analysis to be necessary to compare accounts of ecstatic experience across cultures, accounts that are manifestly similar. Though I do not agree with all of Sharf's critiques of religious experience, I do think that there is at least one that is worthy of serious consideration. If Sharf and Desjarlais's genealogy of experience is correct, we should be careful about imposing our own cultural framework on Tukaram, if we seek access to his experience. Since the claim is made that Tukaram's poetry is particularly modern, it might seem as if this problem should not arise, that this collapses the cultural difference between the seventeenth century poet and the contemporary scholarly analyst. Yet, as noted in the second section of this essay, there are problems with this claim. Omvedt and Patankar do briefly discuss the emergence of a kind of modern critique in seventeenth century Europe, but provide no analysis of how this development occurred in South Asia. In Chitre's book, the context in cultural history becomes irrelevant, since he argues that in its "new existential dimensions," Tukaram's poetry "was anticipating the spiritual anguish of modern man two centuries ahead of his time" (Tukaram 1991, p. xx). Tukaram has apparently somehow leaped over the cultural changes that have led in the west to a modern sensibility. Of course, the historical rupture that Chitre suggests is cast into some doubt by the fact that the body of Tukaram's poetry has been added to over the centuries. More fundamentally, as discussed above, it seems that the modern self that Chitre finds in Tukaram is substantially different from the one that Omvedt and Patankar uncover. The contemporary devotee's encounter with Tukaram, through his poetry and through the hagiographies of Mahipati, particularly during the Varkari pilgrimage, deserves to be labelled an "abundant event," in the language of Robert Orsi, as described in the first section of this essay. It is certainly the case that the devotee believes that she knows Tukaram and his experience. Orsi and Monge insist that such an encounter can have a transformative effect on the scholar as well. But it seems to me that whether the scholar is encountering Tukaram at all must depend at least to some extent on the sources that provide access to him. As noted above, we cannot rely entirely on Mahipati's hagiographies, since they were written a century after Tukaram lived, on the basis of uncertain sources, and by an author who did not have a contemporary historian's concern about his sources. We cannot even depend on Tukaram's own poems to get back to him, at least without a successful critical study of their history, since it is likely that many of the poems that are now attributed to him were written later. At best, Tukaram's oeuvre as it currently exists only gives us access to how the Varkari community has represented his experience over time. At the beginning of this essay, I argued that we find in the story of Tukaram's relationship with Jijai an account of the social impact of his religious experience. This is a story that celebrates indifference to worldly concerns, embodied by Tukaram, however much that may provoke the censure of the worldly, represented by his wife. 
However, because of the nature of the sources that (may or may not) take us back to the seventeenth century, we cannot be sure that this social teaching is really based on Tukaram's experience. There have been devotional poet-saints in the history of Hinduism whose compositions were fixed in their own lifetime, but this has been relatively unusual. More common is that a saint's output has expanded over the centuries. For some poets, there are manuscripts that can be studied to analyze this development. But even this will not lead to certain knowledge of the original works of most saints. So, for reasons of textual history, if not for more theoretical reasons, the experience of saints such as Tukaram must remain elusive. Funding: This research received no external funding.
Generalized Uncertainty Principle, Extra-dimensions and Holography

We consider Uncertainty Principles which take into account the role of gravity and the possible existence of extra spatial dimensions. Explicit expressions for such Generalized Uncertainty Principles in 4 + n dimensions are given and their holographic properties investigated. In particular, we show that the predicted number of degrees of freedom enclosed in a given spatial volume matches the holographic counting only for one of the available generalizations and without extra dimensions.

Introduction

In recent years many efforts have been devoted to clarifying the role played by the existence of extra spatial dimensions in the theory of gravity [1,2]. One of the most interesting predictions drawn from the theory is that there should be measurable deviations from the 1/r^2 law of Newtonian gravity at short (and perhaps also at large) distances. Such new laws of gravity would imply modifications of those Generalized Uncertainty Principles (GUP's) designed to account for gravitational effects in the measurement of positions and energies. On the other hand, the holographic principle is claimed to apply to all gravitational systems. The existence of GUP's satisfying holography in four dimensions (one of the main examples is due to Ng and Van Dam [3]) led us to explore the holographic properties of the GUP's extended to the brane-world scenarios [4]. The results, at least for the examples we considered, are quite surprising. The expected holographic scaling indeed seems to hold only in four dimensions, and only for Ng and van Dam's GUP. When extra spatial dimensions are admitted, the holography is destroyed. This fact allows two different interpretations: either the holographic principle is not universal and does not apply when extra dimensions are present; or, on the contrary, we take seriously the holographic claim in any number of dimensions, and our results are therefore evidence against the existence of extra dimensions. The four-dimensional Newton constant is denoted by G_N throughout the paper.

Linear GUP in four dimensions from micro black holes

In this Section we derive a GUP via a micro black hole gedanken experiment, following closely the content of Ref. [5]. When we measure a position with precision of order ∆x, we expect quantum fluctuations of the metric field around the measured position with energy amplitude

∆E ≳ ℏc / (2 ∆x).   (1)

The Schwarzschild radius associated with the energy ∆E,

R_S = 2 G_N ∆E / c^4,

falls well inside the interval ∆x for practical cases. However, if we wanted to improve the precision indefinitely, the fluctuation ∆E would grow and the corresponding R_S would become larger and larger, until it reaches the same size as ∆x. As is well known, the critical length is the Planck length, ℓ_p = (G_N ℏ / c^3)^{1/2}, and the associated energy is the Planck energy

ε_p ≡ (1/2) (ℏ c^5 / G_N)^{1/2}.   (4)

If we tried to further decrease ∆x, we should concentrate in that region an energy greater than the Planck energy, and this would enlarge further the Schwarzschild radius R_S, hiding more and more details of the region beyond the event horizon of the micro hole. The situation can be summarized by the inequalities ∆x ≳ ℏc / (2 ∆E) for ∆E ≲ ε_p, and ∆x ≳ 2 G_N ∆E / c^4 for ∆E ≳ ε_p, which, if combined linearly, yield

∆x ≳ ℏc / (2 ∆E) + 2 G_N ∆E / c^4.   (6)

This is a generalization of the uncertainty principle to cases in which gravity is important, i.e. to energies of the order of ε_p. We note that the minimum value of ∆x is reached for (∆E)_min = ε_p and is given by (∆x)_min = 2 ℓ_p.
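As a quick consistency check on the linear GUP above, the following sketch (ours, not part of the paper; standard SI constants; the numerical factors follow the reconstruction of Eqs. (1), (4) and (6) given here, which is itself pinned down only by the stated minimum) scans the bound ∆x(∆E) and confirms numerically that the minimum sits at ∆E ≈ ε_p with (∆x)_min ≈ 2 ℓ_p.

```python
import numpy as np

# Physical constants (SI units)
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m / s
G_N  = 6.67430e-11       # m^3 / (kg s^2)

l_p   = np.sqrt(G_N * hbar / c**3)           # Planck length
eps_p = 0.5 * np.sqrt(hbar * c**5 / G_N)     # Planck energy in the convention used above

def delta_x(dE):
    """Linear GUP of Eq. (6): quantum term plus Schwarzschild-radius term."""
    return hbar * c / (2.0 * dE) + 2.0 * G_N * dE / c**4

# Scan fluctuation energies around eps_p and locate the minimal resolvable length
dE = np.logspace(-2, 2, 2001) * eps_p
dx = delta_x(dE)
i  = dx.argmin()
print(f"(dE)_min / eps_p = {dE[i] / eps_p:.3f}")   # ~1.0
print(f"(dx)_min / l_p   = {dx[i] / l_p:.3f}")     # ~2.0
```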
Holographic properties In this section, we investigate the holographic properties of the GUP proposed above. We shall estimate the number of degrees of freedom n(V ) contained in a spatial volume (cube or "hypercube") of size l. The holographic principle claims that n(V ) scales as the area of the (hyper-)surface enclosing the given volume, that is (l/ p ) 2+n in 4 + n dimensions. For the four-dimensional GUP considered in the previous section, Eq. (6), we find that this scaling does not occur. In fact, (∆x) min ∼ p and a cube of side l contains a number of degrees of freedom equal to We then conclude that this GUP, obtained by linearly combining the quantum mechanical expression with gravitational bounds, does not imply the holographic counting of degrees of freedom. Ng and Van Dam GUP in four dimensions An interesting GUP that satisfies the holographic principle in four dimensions has been proposed by Ng and van Dam [3], based on Wigner inequalities about distance measurements with clocks and light signals [6]. Suppose we wish to measure a distance l. Our measuring device is composed of a clock, a photon detector and a photon gun. A mirror is placed at the distance l which we want to measure and m is the mass of the system "clock + photon detector + photon gun". We call "detector" the whole system and let a be its size. Obviously, we suppose which means that we are not using a black hole as a clock. Be ∆x 1 the uncertainty in the position of the detector, then the uncertainty in the detector's velocity is After the time T = 2 l/c taken by light to travel along the closed path detector-mirror-detector, the uncertainty in the detector's position (i.e. the uncertainty in the actual length of the segment l) has become We can minimize ∆x tot by suitably choosing ∆x 1 , Hence This is a purely quantum mechanical result obtained for the first time by Wigner in 1957 [6]. From Eq. (13), it seems that we can reduce the error (∆x tot ) min as much as we want by choosing m very large, since (∆x tot ) min → 0 for m → ∞. But, obviously, here gravity enters the game. In fact, Ng and van Dam have also considered a further source of error, a gravitational error, besides the quantum mechanical one already addressed. Suppose the clock has spherical symmetry, with a > r g . Then the error due to curvature can be computed from the Schwarzschild metric surrounding the clock. The optical path from r 0 > r g to a generic point r > r 0 is given by (see, for example, Ref. [7]) and differs from the "true" (spatial) length (r − r 0 ). If we put a = r 0 , l = r, the gravitational error on the measure of (l − a) is thus where the last estimate holds for l > a r g . If we measure a distance l ≥ 2a, then the error due to curvature is Thus, according to Ng and van Dam the total error is This error can be minimized again by choosing a suitable value for the mass of the clock, and, inserting m min in Eq. (17), we then have The global uncertainty on l contains therefore a term proportional to l 1/3 . Holographic properties We now see immediately the beauty of the Ng and van Dam GUP: it obeys the holographic scaling. In fact in a cube of size l the number of degrees of freedom is given by as required by the holographic principle. Models with n extra dimensions We shall now generalize the procedure outlined in a previous section to a space-time with 4 + n dimensions, where n is the number of space-like extra dimensions [4]. 
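Before following the generalization to extra dimensions, it may help to restate the two four-dimensional counting arguments above in a concrete form. The sketch below is an illustration only (the expressions for the minimal resolvable length are the ones quoted above, with order-one factors dropped): with the linear-GUP resolution (∆x)_min ~ ℓ_p the count of distinguishable cells scales as the volume, whereas the Ng and van Dam resolution δl ~ (l ℓ_p^2)^{1/3} reproduces the holographic area scaling (l/ℓ_p)^2.

```python
import numpy as np

hbar, c, G_N = 1.054571817e-34, 2.99792458e8, 6.67430e-11
l_p = np.sqrt(G_N * hbar / c**3)   # Planck length

def dof_linear_gup(l):
    """Counting with (dx)_min ~ l_p: the number of cells scales as the volume."""
    return (l / l_p) ** 3

def dof_ng_vandam(l):
    """Counting with the Ng-van Dam resolution dl ~ (l * l_p**2)**(1/3)."""
    dl = (l * l_p**2) ** (1.0 / 3.0)
    return (l / dl) ** 3          # equals (l / l_p)**2, i.e. area (holographic) scaling

for l in (1.0, 10.0, 100.0):      # box sizes in metres
    print(f"l = {l:6.1f} m: volume-like count = {dof_linear_gup(l):.3e}, "
          f"Ng-van Dam count = {dof_ng_vandam(l):.3e}, "
          f"area scaling (l/l_p)^2 = {(l / l_p) ** 2:.3e}")
```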
The first problem we should address is how to relate the gravitational constant G N in four dimensions with the one in 4 + n, henceforth denoted by G (4+n) . This of course depends on the model of space-time with extra dimensions that we consider. Models recently appeared in the literature mostly belong to two scenarios: • the Arkani-Hamed-Dimopoulos-Dvali (ADD) model [1], where the extra dimensions are compact and of size L; • the Randall-Sundrum (RS) model [2], where the extra dimensions have an infinite extension but are warped by a non-vanishing cosmological constant. A feature shared by (the original formulations of) both scenarios is that only gravity propagates along the n extra dimensions, while Standard Model fields are confined on a fourdimensional sub-manifold usually referred to as the braneworld. In the ADD case the link between G N and G (4+n) can be fixed by comparing the gravitational action in four dimensions with the one in 4+n dimensions. The space-time topology in such models is M = M 4 ⊗ n , where M 4 is the usual four-dimensional space-time and n represents the extra dimensions of finite size L. The space-time brane has no tension and therefore the action S (4+n) can be written as whereR,g are the projections on M 4 of R and g. Here L n is the "volume" of the extra dimensions and we omitted unimportant numerical factors. On comparing the above expression with the purely four-dimensional action we obtain The RS models are more complicated. It can be shown [2] that for n = 1 extra dimension we have G (4+n) = σ −1 G N , where σ is the brane tension with dimensions of length −1 in suitable units. The gravitational force between two point-like masses m and M on the brane is now given by where the correction to Newton law comes from summing over the extra dimensional graviton modes in the graviton propagator [2]. However, since Eq. (24) is obtained by perturbative calculations, not immediately applicable to a nonperturbative structure such as a black hole, we shall consider only the ADD scenario in this paper. To be more precise, from table-top tests of the gravitational force one finds that n ≥ 2 in ADD [1,8]. On the other hand, black holes with mass M σ −1 are likely to behave as pure five-dimensional in RS [9], therefore our results for n = 1 should apply to such a case. Ng and Van Dam GUP in 4 + n dimensions Ng and van Dam's derivation can be generalized to the case with n extra dimensions. The Wigner relation (13) for the quantum mechanical error is not modified by the presence of extra dimensions and we just need to estimate the error δl C due to curvature. We ought not to consider micro black holes created by the fluctuations ∆E in energy, as in Section 2, but we have rather to deal with (more or less) macroscopic clocks and distances. This implies that we have to distinguish four different cases: 1. 0 < L < r g < a < l; 2. 0 < r (4+n) < L < a < l; where r (4+n) is the Schwarzschild radius of the detector in 4 + n dimensions, and of course r g = r (4) . The curvature error will be estimated (as before) by computing the optical path from a ≡ r 0 to l ≡ r. Of course, we will use a metric which depends on the relative size of L with respect to a and l, that is the usual four-dimensional Schwarzschild metric in the region r > L, and the 4 + n dimensional Schwarzschild solution in the region r < L (where the extra dimensions play an actual role). In cases 1. and 2. 
the length of the optical path from a to l can be obtained using just the four-dimensional Schwarzschild solution and the result is given by Eq. (19). In cases 3. and 4. we instead have to use the Schwarzschild solution in 4 + n dimensions [11], at least for part of the optical path. In the above, and A n+2 is the area of the unit (n + 2)-sphere, that is Besides, we note that, for n = 0, that is, C coincides in four dimensions with the Schwarzschild radius of the detector. The Schwarzschild horizon is located where (1 − C/r n+1 ) = 0, that is at r = C 1/(n+1) ≡ r (4+n) , or is an unimportant numerical factor. Since measurements can be performed only on the brane, to the uncertainty ∆x in position we can still associate an energy given by Eq (1). The corresponding Schwarzschild radius is now given by Eq. (29) with m = ∆E/c 2 and the critical length such that ∆x = r (4+n) is the Planck length in 4 + n dimensions, For sake of simplicity (because α(0) = 1 and in any case α(n) ∼ 1) we define the Planck length in 4 + n dimensions as The energy associated with (4+n) is analogously the Planck energy in 4 + n dimensions, where p is the Planck energy in four dimensions given in Eq. (4). In case 3. we obtain the length of the optical path from a to l by adding the optical path from a to L and that from L to l. We must use the solution in 4 + n dimensions for the first part, and the four-dimensional solution for the second part of the path, It is not difficult to show that from r (4+n) < L (which holds in cases 3. and 4.) we can infer Now, suppose a n+1 C = r n+1 (4+n) , that is a r (4+n) , so that we are not doing measures inside a black hole. Then r g < r (4+n) a < L < l and The error caused by the curvature (when a < L < l) is therefore linear in m, We recall that the curvature error in four dimensions does not contain the size of the clock. On the contrary, this error in 4 + n dimensions depends explicitly on the size a of the clock and on the size L of the extra dimensions. Hence the total error is given by where J = 2 ( l/c) 1/2 and K is defined above. This error can be minimized with respect to m, Finally, where we used the definition of J and K. In case 4., the optical path from a to l can be obtained by using simply the Schwarzschild solution in 4+n dimensions. We get Suppose now, as before, that a n+1 C = r n+1 (4+n) , that is a r (4+n) (i.e. our clock is not a black hole). We then have If the distance we are measuring is, at least, of the size of the clock (l ≥ 2 a), we can write The error caused by the curvature is therefore (when a < l < L) Here we again note that the curvature error in 4 + n dimensions explicitly contains the size of the clock. The global error can be computed as before where C is linear in m. Minimizing δl tot with respect to m can be done in perfect analogy with the previous calculation. The result is We note that the expression (40) coincides in the limit L → a with Eq. (19) (taking l ≥ 2 a), while, in the limit L → l, we recover from Eq. (40) the expression (46) (of course, supposing also that l ≥ 2 a). Holographic properties We finally examine the holographic properties of Eq. (46) for the GUP of Ng and van Dam type in 4 + n dimensions. We just consider the expression in Eq. (46) because it also represents the limit of Eq. (40) for L → l and l ≥ 2 a. Moreover, for n = 0, Eq. (46) yields the four-dimensional error given in Eq. (19). 
Since we are just interested in the dependence of n(V ) on l and the basic constants, we can write We then have that the number of degrees of freedom in the volume of size l is and the holographic counting holds in four-dimensions (n = 0) but is lost when n > 0. In fact we do not get something as as we would expect in 4 + n dimensions. Even if we take the ideal case a ∼ (4+n) we get and the holographic principle does not hold for n > 0. Concluding remarks In the previous Sections, we have shown that the holographic principle seems to be satisfied only by uncertainty relations in the version of Ng and van Dam and for n = 0. That is, only in four dimensions we are able to formulate uncertainty principles which predict the same number of degrees of freedom per spatial volume as the holographic counting. This could be evidence for questioning the existence of extra dimensions. Moreover, such an argument based on holography could also be used to support the compactification of string theory down to four dimensions, given that there seems to be no firm argument which forces the low energy limit of string theory to be four-dimensional (except for the obvious observation of our world). In this respect, we should also say that the cases 3. and 4. of Section 5 do not have a good probability to be realized in nature since, if there are extra spatial dimensions, their size must be shorter than 10 −1 mm [8]. Therefore, cases 1. and 2. of Section 5 are more likely to survive the test of future experiments. A number of general remarks are however in order. First of all, we cannot claim that our list of possible GUP's is complete and other relations might be derived in different contexts which accommodate for both the holography and extra dimensions. Further, one might find hard to accept that quantum mechanics and general relativity enter the construction of GUP's on the same footing, since the former is supposed to be a fundamental framework for all theories while the latter can be just regarded as a (effective) theory of the gravitational interaction. We might agree on the point of view that GUP's must be considered as "effective" (phenomenological) bounds valid at low energy (below the Planck scale) rather than "fundamental" relations. This would in fact reconcile our result that four dimensions are preferred with the fact that string theory (as a consistent theory of quantum gravity) requires more dimensions through the compactification which must occur at low energy, as we mentioned above. Let us also note that general relativity (contrary to usual field theories) determines the space-time including the causality structure, and the latter is an essential ingredient in all actual measurements. It is therefore (at least) equally hard to conceive uncertainty relations which neglect general relativity at all. This conclusion would become even stronger in the presence of extra dimensions, since the fundamental energy scale of gravity is then lowered [1,2] (possibly) within the scope of present or near-future experiments and the gravitational radius of matter sources is correspondingly enlarged [10]. A final remark regards cases with less than four dimensions. Since Einstein gravity does not propagate in such space-times and no direct analogue of the Schwarzschild so-lution exists, one expects a qualitative difference with respect to the cases that we have considered here. For instance, a point-like source in three dimensions would generate a flat space-time with a conical singularity and no horizon [12]. 
Consequently, one does expect that the usual Heisenberg uncertainty relations hold with no corrections for gravity.
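As a compact numerical restatement of the paper's central claim, the sketch below (ours, with order-one factors dropped) checks the exponent bookkeeping: assuming the minimal uncertainty keeps the Ng and van Dam form δl ∝ l^{1/3} in 4 + n dimensions, the number of distinguishable cells (l/δl)^{3+n} grows as l^{(2/3)(3+n)}, which matches the holographic exponent 2 + n only for n = 0. It also evaluates the higher-dimensional Planck length implied by the ADD relation G_(4+n) = L^n G_N, assuming the usual definition ℓ_(4+n)^{n+2} = G_(4+n) ℏ / c^3 and a hypothetical compactification size L = 0.1 mm.

```python
import math
from fractions import Fraction

# --- Part 1: exponent bookkeeping ---------------------------------------
# With dl ~ l**(1/3), a box of side l holds (l/dl)**(3+n) cells, i.e.
# l**((2/3)*(3+n)); holography instead demands l**(2+n).
for n in range(4):
    cells = Fraction(2, 3) * (3 + n)
    holo = Fraction(2 + n)
    print(f"n = {n}: cell count ~ l^{cells}, holography ~ l^{holo}, match = {cells == holo}")

# --- Part 2: higher-dimensional Planck length in the ADD picture --------
# Assuming G_(4+n) = L**n * G_N and l_(4+n)**(n+2) = G_(4+n) * hbar / c**3
# (order-one factors dropped); L = 0.1 mm is an assumed compactification size.
hbar, c, G_N = 1.054571817e-34, 2.99792458e8, 6.67430e-11
l_p = math.sqrt(G_N * hbar / c**3)
L = 1e-4  # metres
for n in (1, 2, 3):
    l_4n = (l_p**2 * L**n) ** (1.0 / (n + 2))
    print(f"n = {n}: l_(4+n) ~ {l_4n:.2e} m  (compare l_p ~ {l_p:.2e} m)")
```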
Management of Pulmonary Arterial Hypertension in Patients with Systemic Sclerosis Abstract Systemic sclerosis (SSc) is a rare and complex immune-mediated connective tissue disease characterized by multi-organ fibrosis and dysfunction. Systemic sclerosis-associated pulmonary arterial hypertension (SSc-PAH) is a leading cause of death in this population. Pulmonary arterial hypertension (PAH) can coexist with other forms of pulmonary hypertension in SSc, including pulmonary hypertension related to left heart disease, interstitial lung disease, chronic thromboembolism and pulmonary venous occlusive disease, which further complicates diagnosis and management. Available pulmonary arterial hypertension therapies target the nitric oxide, endothelin and prostacyclin pathways. These therapies have been studied in SSc-PAH in addition to idiopathic PAH, often with different treatment responses. In this article, we discuss the management as well as the treatment options for patients with SSc-PAH. Introduction Systemic sclerosis (SSc), also called scleroderma, is a complex immune-mediated connective tissue disease characterized by fibrosis and thickening of the skin and internal organs as well as vascular abnormalities that ultimately leads to multiorgan dysfunction. 1 These immune, fibrotic and vascular abnormalities, including pulmonary arterial hypertension, are highlighted in the revision of the American College of Rheumatology/European League against Rheumatism (ACR/EULAR) criteria for SSc diagnosis. 2 SSc is classified into diffuse and limited cutaneous forms based on the extent of skin involvement. Limited cutaneous systemic sclerosis (lcSSc) has skin involvement distal to the elbows and knees, whereas the skin is involved proximal to the knees and elbows, including the trunk in diffuse cutaneous systemic sclerosis (dcSSc). The term systemic sclerosis sine scleroderma is used when there is no skin involvement, but the patient meets the other criteria for SSc. 3 The definition of pulmonary arterial hypertension (PAH) was recently modified by the sixth World Symposium on Pulmonary Hypertension (WSPH) proceedings to include an elevation in the mean pulmonary arterial pressure (mPAP) >20 mmHg, a pulmonary vascular resistance (PVR) ⩾3Wood units (WU) and a pulmonary artery wedge pressure (PAWP) ⩽15 mmHg. 4 This change was based on data obtained from healthy individuals, showing that a normal mPAP at rest is 14 ± 3.3 mmHg. 5 Two standard deviations from this mean give the current mPAP cut-off for upper limit of normal. Evidence from large databases indicate that patients with a mPAP between 20 and 25 mmHg have worse outcomes than those with a mPAP ≤20 mmHg, further supporting the modification by the 6 th WSPH. 4 Two studies in patients with systemic sclerosis-associated pulmonary arterial hypertension (SSc-PAH), with mPAP between 21 and 24 mmHg, demonstrated a decrease in functional capacity as shown by an abnormal six-minute walk test (6MWT), even when considering a PVR of ⩾ 2 (instead of ≥3) Wood units. 6,7 There are five groups of pulmonary hypertension (PH) that are based on the mechanisms of disease, clinical presentation, hemodynamic characteristics, and therapeutic response. 8 Groups 1 to 5 include patients with 1) PAH, 2) PH due to left heart disease, 3) PH due to lung disease and/ or hypoxia, 4) PH due to pulmonary artery obstruction and 5) PH due to unclear or multifactorial mechanism, respectively. 
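For readers who prefer the updated hemodynamic definition in operational form, a minimal illustration is sketched below. The thresholds are those quoted above from the sixth WSPH; the function itself, and the derivation of PVR from its usual formula ((mPAP − PAWP) divided by cardiac output, in Wood units), are our own simplification and not a validated clinical tool.

```python
def classify_precapillary_ph(mpap_mmhg, pawp_mmhg, cardiac_output_l_min):
    """Illustrative check against the 6th WSPH hemodynamic definition of PAH:
    mPAP > 20 mmHg, PVR >= 3 Wood units, PAWP <= 15 mmHg.
    Not a clinical decision tool."""
    pvr_wu = (mpap_mmhg - pawp_mmhg) / cardiac_output_l_min  # Wood units
    meets_definition = mpap_mmhg > 20 and pvr_wu >= 3 and pawp_mmhg <= 15
    return pvr_wu, meets_definition

# Example: mPAP 35 mmHg, PAWP 10 mmHg, CO 4 L/min -> PVR 6.25 WU, meets the definition
print(classify_precapillary_ph(35, 10, 4.0))
```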
The prevalence of pulmonary arterial hypertension (PAH) in SSc is estimated to be around 6-12%, a percentage that may increase when using the modified definition for PAH. 9,10 It is the second most frequent cause of PAH in both US and European registries following idiopathic pulmonary arterial hypertension (IPAH). 11 PAH is more common in lcSSc but can be seen in the other variants. Furthermore, PAH can coexist with other forms of PH in SSc, including PH related to left heart disease, interstitial lung disease/hypoxemia, chronic thromboembolism and pulmonary venous occlusive disease (currently included in group 1PH), 8 which further complicates diagnosis and management. PAH results from an imbalance between vasoconstrictors and proliferative mediators (such as endothelin-1) and vasodilators (such as nitric oxide and prostacyclin). Endothelial injury and intraluminal micro-thrombosis lead to progressive pulmonary arterial remodeling and increase in PVR. The progressive increase in PVR affects the right ventricular function, leading to right ventricular failure and death. 9,12 SSc-PAH is a leading cause of mortality, with a mortality rate of 50% within the first 3 years; 9,13 which is worse than the one observed in patients with IPAH. 14 Types of Pulmonary Hypertension in Systemic Sclerosis Due to the systemic nature of SSc, an overlap between more than one type of PH is common (Table 1 and Figure 1), making the identification of the predominant type of PH not always straightforward, 15 and complicating the management of these patients. Systemic sclerosis associated with left heart disease (SSc-LHD, group 2 PH) is defined by an elevated mPAP with left heart disease (characterized by PCWP >15 mmHg). Cardiac dysfunction may be seen in >40% of patients with SSc and includes primary myocardial fibrosis, fibrosis of the conduction system leading to arrhythmias, microvascular and atherosclerotic coronary vessel disease and hypertensive crisis. [16][17][18] Fibrosis of the myocardium may result in either diastolic (heart failure with preserved ejection fraction which is reported in around 18% of SSc patients) or less commonly systolic heart failure (heart failure with reduced ejection fraction which is described in around 2% patients). 16 Studies comparing echocardiography findings in patients with SSc-PAH and IPAH with similar hemodynamics showed that patients with SSc-PAH are more likely to have left atrial enlargement and other indications of left ventricular diastolic impairment. Exercise and/or fluid challenge during RHC may be useful in differentiating group 1 and group 2 PH. In the case of elevated PCWP, a markedly elevated transpulmonary (mPAP -PCWP) or diastolic pulmonary gradient (pulmonary artery diastolic pressure -PCWP) suggests the possibility of combined group 1 and group 2 PH. 19 The treatment consists of volume status optimization, rate control and heart failure medications. 20 Treatment with PAH-specific therapy in group 2 PH or combined group 1 and group 2 is not recommended, as it may result in fluid retention and pulmonary edema (such as the use of macitentan in combined pre and post-capillary pulmonary hypertension, CpcPH). 21 Interstitial lung disease (ILD) is common in SSc, with evidence of interstitial changes on imaging in up to 90% and chronic respiratory failure in approximately 10% of patients. PH is seen in up to 31% of patients with clinically significant SSc-ILD and results in higher mortality than in SSc-ILD patients without PH. 
22 SSc-ILD is more common in the dcSSc type, especially in patients who have positive Scl-70 (anti-topoisomerase) 23 but also may occur in all variants. The presence of a positive Scl-70 antibody is somewhat protective against PAH, but these patients may still develop group 3 PH. ILD may be classified as "limited" or "extensive" based on high resolution computed tomography (HRCT) and pulmonary function testing (PFT). It is suggested that PH associated with the extensive form of SSc-ILD (>20% fibrosis on HRCT or forced vital capacity (FVC) <70% in indeterminate HRCT) could be classified as group 3. 15 Although the etiology of PH is associated with extent of lung disease, the mPAP does not seem to correlate with extent of fibrosis on imaging or the forced vital capacity. 22,24 In patients with SSc-ILD, a diffusion capacity (DLCO) <40% of predicted or a reduction in DLCO out of proportion to FVC (FVC/ DLCO ratio >1.6) also suggests the presence of pulmonary vascular disease. 11 The PH in group 3 is typically modest with a mPAP <35 mmHg. Likewise, the presence of a mPAP "out of proportion" or >35 mmHg suggests the possibility of concomitant PAH. Treatment of SSc-ILD mainly consists of immune-suppressive therapy, most commonly cyclophosphamide and mycophenolate. 25 Most recently, the antifibrotic nintedanib, a tyrosine kinase inhibitor, was shown to decrease progression of SSc-ILD in the SENSCIS trial, 26 and murine models have suggested a possible additional benefit of nintedanib on the pulmonary vasculature. 27 Treatment of PH due to ILD with PAHspecific therapies may result in worsening oxygenation due to deterioration of the ventilation-perfusion mismatch and is not recommended outside of clinical trials ( Figure 2). However, a more recent study demonstrated that the coexistence of type I and type III is common in patients with SSc, and such patients tolerated concomitant targeted therapy and immunosuppressive therapy. 28 Pulmonary veno-occlusive disease (PVOD) is underrecognized in SSc and its presence carries a very poor prognosis. 29 Although difficult to distinguish from PAH, it should be suspected in patients who develop pulmonary edema after the initiation of PAH-specific therapies. Other indications are a DLCO <50% on PFTs, severe hypoxemia, and the presence of septal lines, centrilobular ground-glass opacities and lymph node enlargement on HRCT. 4 Pathologic findings include intimal fibrosis and obstruction of small pulmonary veins and venules in addition to the arteriopathy seen in PAH. 30 Treatment mainly consists of diuretics to optimize fluid status, very careful use of PAH-specific therapies and ultimately lung or heart-lung transplantation. Patients with SSc have a 3-fold increased risk of pulmonary thromboembolic disease, especially if antiphospholipid antibodies are present. 31 Furthermore, SSc is a potential risk factor for developing chronic thromboembolic pulmonary hypertension (CTEPH, group 4 PH), 32 that could be related, at least in part, to higher levels of vWF in patients with SSc. 33,34 As such, patients with SSc-PH should be screened for CTEPH PAH LHD ILD PVOD CTEPH Figure 1 Overlap between different phenotypes of pulmonary hypertension in SSc. Abbreviations: CTEPH, chronic thromboembolic pulmonary hypertension; ILD, interstitial lung disease; LHD, left heart disease; PAH, pulmonary arterial hypertension, PVOD, pulmonary veno-occlusive disease; SSc, systemic sclerosis. SSc-PAH Targeted therapy Extensive d as a potential etiology of PH. 
Screening involves obtaining a ventilation/perfusion (V/Q) scan in all patients newly diagnosed with SSc-PH, even in patients without prior history of pulmonary embolism, as approximately 25% of patients diagnosed with CTEPH have no known history of pulmonary embolism. 35 In the case of abnormal V/Q scan, a pulmonary angiogram should follow to confirm the diagnosis. 36 Screening for Pulmonary Hypertension in Systemic Sclerosis All SSc patients should have pulmonary function testing (PFT), consisting of spirometry, lung volumes and DLCO, to screen for both ILD and PH. A decrease in DLCO <60% or >20% in one year in the absence of significant lung volume abnormalities, or a FVC/DLCO percent >1.6 suggests PH. 11 An elevated NT-proBNP showed a sensitivity and specificity of 90% for the presence of SSc-PAH in one study, and may suggest PAH if elevated > two-fold the upper limit of normal. 37 However, the role of pro-BNP in screening for SSc-PAH is yet to be determined. Transthoracic echocardiography is the best screening tool for PH. It assesses right-and left-sided morphology and function, detects valvular abnormalities and is useful for the estimation of right ventricular pressures. 11,38 When PH is suggested by echocardiography (RVSP > 40 mmHg and/or any degree of RV dysfunction) a RHC is warranted. 39 HRCT is performed mainly to screen for interstitial lung disease and/or PVOD. However, a HRCT might show an enlarged pulmonary artery and right ventricle in advanced PAH. 38,40 A cardiopulmonary exercise test (CPET) can suggest pulmonary vascular disease in cases of low end-tidal partial pressure of carbon dioxide (EtPCO 2 ), high ventilator equivalents for carbon dioxide (VE/VCO 2 ), low oxygen pulse (VO 2 /HR) and low peak oxygen uptake (VO 2 ). 20,41 The six-minute walk test (6MWT) is a non-invasive submaximal functional test that correlates with maximum exercise capacity as measured by CPET. 42 In addition, heart rate recovery (HRR, as measured by the difference of heart rate at the end of 6MWT and after 1 min of resting) may predict clinical worsening in patients with connective tissue disease associated PAH (CTD-PAH), including SSc. 43 However, the performance characteristics of the 6MWT in SSc are less robust than in IPAH secondary to non-cardiopulmonary limitations of exercise, such as joint pain, skin contractures and muscle weakness. Recently, a combination of features obtained during the 6MWT, ie, distance walked, degree of oxygenation and Borg dyspnea index was found useful in identifying patients who need further cardiopulmonary evaluation. 44 Compared with echocardiography, cardiac MRI (CMR) may provide information on the underpinnings of cardiac involvement in patients with SSc, including inflammatory, microvascular and fibrotic mechanisms. 45 Furthermore, CMR provides a more comprehensive assessment of the right ventricular function. 46 Due to the high morbidity and mortality of PAH, patients newly diagnosed with SSc should be screened for PAH at presentation and annually thereafter for life. 41 Data suggest that SSc-PAH patients who are diagnosed and treated early as a result of screening have improved mortality as compared to patients diagnosed as a result of clinical suspicion. [47][48][49] The European Society of Cardiology/European Respiratory Society recommends annual echocardiography. 
20 If features suggestive of PH are noted, ie, elevated tricuspid regurgitant jet velocity or abnormal right ventricular morphology and function, a right heart catheterization is recommended. The Australian Scleroderma Interest Group (ASIG) recommends screening with NT-proBNP and PFT. An elevated NT-proBNP and/or an elevated FVC/DLCO ratio is an indication to consider RHC. 50 The 2-step DETECT algorithm combines clinical, physiologic and laboratory data to decide who should get an echocardiogram and based on its result who should undergo a RHC to confirm the presence of PH. 13 All three screening strategies 13,20,50 have similar sensitivity, specificity, positive and negative predictive values ( Table 2). Treatment As SSc is yet to be a treatable disease, the treatment of SSc-PAH is directed at controlling the progression of PAH. 51 General measures and supportive therapy should be offered to all patients. PAH-specific treatment is generally offered for patients with WHO functional class (WHO FC) II, III or IV. 9 Currently available treatments target the nitric oxide, endothelin or prostanoid pathways ( Table 3). The 6th World Symposium Proceedings in PH recommend a treatment strategy based on a multiparametric risk stratification approach, 52 with the main objective of achieving a low-risk status that is associated with a reduced mortality (annual mortality of <5%). 20 Several risk stratification strategies have been used, including the REVEAL 2.0, 53 French Pulmonary Hypertension Network (FPHN), 54 COMPERA 55 and the Swedish Pulmonary Arterial Hypertension Registrar (SPAHR), 56 (Table 4). The two methodologies most commonly used to assess risk are the FPHN and REVEAL 2.0. The FPHN risk assessment totals the number of low-risk criteria (WHO functional class I or II, 6MW distance >440 m, RA pressure <8 mmHg, and cardiac index ≥2.5 L/min/m 2 ). 57 The REVEAL 2.0 risk score includes a larger number of variables but provides a greater risk discrimination than the FPHN. 53 Vasoreactivity Testing Vasoreactivity testing is recommended predominantly for patients with idiopathic PAH to identify individuals that can benefit from calcium channel blockers. However, vasoreactivity testing in SSc-PAH is not mandated in recent guidelines as most patients are non-reactive. 58 Therefore, calcium channel blockers are not recommended as they may worsen right heart failure; nevertheless, and under careful vigilance, some SSc individuals receive this medication for their Raynaud phenomenon. 59 Treatment Approach After confirmation of the diagnosis of SSc-PAH, the 6th World Symposium proceedings recommend initiating PAH targeted therapy based on risk stratification (Figure 3). 60 Patients with low-intermediate risk are started generally on combination therapy, 52 with a few exceptions in which monotherapy is an adequate alternative. Choice of medication is usually based on a number of factors, including comorbidities, side effects, route of administration and patient preference. 52 High-risk patients should be treated with combination therapy that includes a parenteral prostacyclin analogue. 52 Patients are usually followed up within 1-3 months after initiating therapy to evaluate treatment response, and thereafter, every 3-6 months depending on the patient. 9 Tests suggested during follow up include clinical assessment (WHO FC), 6MWT, NT-proBNP and echocardiogram. A RHC should be considered 3-6 months after initiation or change in therapy, and yearly thereafter. 
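The FPHN assessment described above amounts to a simple count, which the following sketch restates. The four criteria are exactly those listed in the text; the function itself is only an illustration and does not reproduce REVEAL 2.0, which incorporates many more variables.

```python
def fphn_low_risk_count(who_fc, six_mwd_m, rap_mmhg, cardiac_index):
    """Number of FPHN low-risk criteria met (0-4), per the criteria listed above."""
    criteria = [
        who_fc in (1, 2),        # WHO functional class I or II
        six_mwd_m > 440,         # six-minute walk distance, metres
        rap_mmhg < 8,            # right atrial pressure, mmHg
        cardiac_index >= 2.5,    # L/min/m^2
    ]
    return sum(criteria)

# Example: FC II, 6MWD 470 m, RAP 6 mmHg, CI 2.8 -> all 4 low-risk criteria met
print(fphn_low_risk_count(2, 470, 6, 2.8))
```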
20 Treatment should be escalated in patients who fail to achieve a low-risk status within 3-6 months. Those failing triple therapy should be considered for lung transplantation. 52 Targeted Therapy Phosphodiesterase-5 Inhibitors Both sildenafil and tadalafil are approved for the treatment of SSc-PAH. 51 SUPER-1/SUPER-2 trials demonstrated the efficacy of sildenafil therapy in improving the 6MW distance, cardiac hemodynamics and WHO FC in patients with PAH, including those with CTD-PAH (45% of whom had SSc). Improvements in the distance walked during the 6MWT were similar for patients with CTD-PAH or IPAH. 64,65 Tadalafil was studied in the PHIRST- Guanylate Cyclase Stimulator Riociguat was evaluated in the PATENT-1/PATENT-2 trials and demonstrated considerable efficacy in CTD-PAH including SSc-PAH, improving the 6MW distance, pulmonary hemodynamics (such as PVR and the cardiac index) and WHO FC. These improvements were less pronounced in patients with CTD-PAH than IPAH; however, 2-year survival rates were similar in both PAH types. 68,69 In addition, a case series demonstrated that in patients with SSc-PAH with unsatisfactory response to PDE-5 inhibitors, switching to riociguat was associated with improved respiratory and cardiac hemodynamics. 70 Long-term treatment with riociguat has been associated with a reduction in right heart size and an improvement in right ventricular function in patients with PAH (14% of patients with CTD-PAH) and CTEPH. 71,72 Endothelin Receptor Antagonists Endothelin-1 binds to endothelin receptors on the pulmonary vasculature and results in vasoconstriction. 73 Bosentan, ambrisentan and macitentan are endothelin receptor antagonists (ERA) approved for the treatment of SSc-PAH. In the BREATHE-1 trial, bosentan prevented the deterioration of walking distance in 6MWT, predominantly in patients with IPAH (3-m improvement in SSc-PAH vs 46 m in IPAH). 74 In the ARIES-1/ARIES-2 trials, ambrisentan improved the 6MW distance and slowed clinical worsening in CTD-PAH, though survival was better in the IPAH population. 75,76 The SERAPHIN trial showed that macitentan reduced morbidity and mortality in PAH patients, after-which a meta-analysis showed similar outcomes between IPAH and CTD-PAH. 77,78 ERA side effects include elevated liver function tests (bosentan), peripheral edema and anemia (all). 20 In the RAPIDS-2 study, 79 bosentan prevented the formation of new digital ulcers but did not improve the rate of ulcer healing. Prostacyclin Pathway Agonists Prostacyclin is produced by endothelial cells and causes potent pulmonary artery vasodilation. Dysregulation of the prostacyclin pathway has been shown in patients with PAH. 80 Epoprostenol, treprostinil and iloprost are approved for the treatment of SSc-PAH. 51 Epoprostenol is given intravenously due to its short half-life. Treprostinil may be administered intravenously, subcutaneously, orally or by inhalation. 81 Iloprost is only administered by inhalation in the US, although an intravenous formulation is Variables Included 82,83 Similar results were demonstrated with intravenous and subcutaneous treprostinil as well as inhaled iloprost. [84][85][86][87][88] Intermittent intravenous iloprost infusion (composed of iloprost infusion for 6 hrs per day for 5 days every 6 weeks) decreased the systolic PAP, improved the distance walked during the 6MWT and protected against PAH progression in patients with SSc. 89 Prostaglandins are also commonly used to treat ischemic digital ulcers in patients with SSc. 
51 Selexipag, an oral selective IP-receptor agonist, was evaluated in the GRIPHON trial, where it decreased the risk of death or complications related to PAH patients including SSc, findings that were consistent with those seen in patients with IPAH. 90,91 Combination Therapy versus Initial Monotherapy Combination therapy is the approach of choice for drug naïve SSc-PAH patients. The ATPAHSS-O and the AMBITION trials showed that upfront combination therapy with ambrisentan and tadalafil in SSc-PAH improved cardiac hemodynamics (PVR and stroke volume, NT-proBNP) and 6MW distance, and lowered the risk of clinical worsening, when compared to monotherapy with either drug alone. 92,93 More recently, the SERAPHIN and the long-term GRIPHON trials demonstrated lower morbidity and mortality rates with the addition of macitentan and selexipag on a background therapy, respectively. 94,95 General Measures Oral Anticoagulation Data on the use of oral anticoagulation remain conflicting for patients with IPAH; 96 recent data have discouraged the use of anticoagulation for patients with SSc-PAH, since these patients are at increased risk of bleeding secondary to gastric antral vascular ectasia and arterial vascular malformations in the intestines. In the COMPERA trial, oral anticoagulation was associated with a survival benefit in IPAH, but not in other forms of PAH, such as SSc-PAH. 97 In fact, long-term use of warfarin was associated with worse prognosis in patients with SSc-PAH in the REVEAL Registry. 98 Furthermore, anticoagulation with warfarin was not associated with an effect on survival in PAH (12% had SSc-PAH) or idiopathic PAH patients treated with SQ treprostinil. 99 A prospective cohort study showed that anticoagulation with warfarin was associated with improved survival; however, 46% of SSc-PAH received treatment for indications other than PAH. 48 There is an ongoing trial (SPHInX) that aims to evaluate the efficacy and safety of apixaban, a direct oral anticoagulant, in SSc-PAH. 100 Immunosuppressive Therapy While immunosuppressive therapy has been associated with improved clinical outcome and survival in PAH associated with some forms of CTD (such as glucocorticoids and cyclophosphamide in systemic lupus erythematosus and mixed connective tissue disease), 101 SSc-PAH is refractory to corticosteroids or immunosuppressive therapy. 102 Supportive Therapy Despite the lack of specific data, experts recommend the use of diuretics to manage volume overload and digoxin for atrial arrhythmias and right heart failure. Long-term oxygen therapy is recommended to maintain arterial blood oxygen pressure above 60 mm Hg or oxygen saturation above 90%. 103,104 In terms of non-medical management, exercise training was found to improve work capacity, quality of life and possibly survival in patients with CTD-PAH including SSc. 105 Pregnancy is contraindicated in SSc-PAH given substantial mortality rate, estimated to be up to 50%, 106 and teratogenic effect of some PAH-specific medications such as riociguat and ERAs. If a patient becomes pregnant, a discussion about pregnancy termination should ensue. Patients who decide to continue with pregnancy are optimized on PAH therapies, followed-up closely in an expert center with experience in managing this high-risk pregnancy. 20 Other general measures also include immunization against influenza and pneumococcal infections. 
Refractory Disease
Despite continuous advances in the management of PAH, a significant proportion of patients with SSc-PAH experience disease progression. 107 Atrial septostomy has been investigated as a palliative measure for patients with SSc-PAH who continue to deteriorate despite maximal medical treatment, and was found to improve exercise capacity and possibly survival in selected cases. 108 In patients with SSc-PAH refractory or poorly responsive to PAH therapy, lung transplantation is currently the only option. At least one study showed that patients with SSc-PAH experience similar 2-year survival rates after lung transplantation when compared to IPAH or idiopathic pulmonary fibrosis. 109,110 Unfortunately, patients with SSc are commonly deemed inappropriate candidates for lung transplantation due to the risk of post-transplant aspiration in the setting of esophageal dysmotility, severe renal impairment, severe Raynaud phenomenon, non-healing digital ulcerations that pose a risk of infection, and, very rarely, severe chest wall skin thickening leading to restriction. 111 Available data have mainly focused on lung transplantation for patients with SSc interstitial lung disease with or without PH rather than isolated SSc-PAH. 112,113

Prognosis
Despite important advances in the treatment of PAH in recent years, SSc-PAH still carries a poor prognosis, with survival rates of 81%, 64% and 52% over the first, second and third years, respectively. 114 Moreover, the mortality rate of SSc-PAH is worse than that of IPAH and non-SSc CTD-PAH, probably due to the multi-organ involvement of SSc (when compared to IPAH) and the poorer response to treatment (when compared to IPAH and CTD-PAH). 14,115 Predictors of worse outcome include age >60 years, male sex, WHO FC IV, higher mPAP, systolic blood pressure <110 mmHg, 6MW distance <165 m, DLCO <39% predicted, presence of pericardial effusion and anti-U1 ribonucleoprotein (RNP) negative status. 114,116,117

Future Studies and Medications
Several medications are currently being studied for the treatment of SSc-PAH. One of these is ifetroban, a thromboxane A2/prostaglandin H2 receptor antagonist. Ifetroban alleviates blood vessel contraction and increases vasodilation, thereby decreasing PAH. 118 Another potential treatment is rituximab, a monoclonal antibody against CD20, a protein found on the surface of B-cells. It is thought that rituximab may slow the progression of fibrosis in the lungs by lowering antibodies against platelet-derived growth factor. Rituximab is currently being tested for this indication in a Phase 2 clinical trial in patients with SSc-PAH, though preliminary data did not show a statistically significant improvement in the 6MW distance. 119,120 Bardoxolone methyl is currently being studied in patients with PAH including CTD-PAH. It works by inducing nuclear factor erythroid 2-related factor 2 (Nrf2) and suppressing nuclear factor-κB (NF-κB), and is thought to target several cell types involved in SSc-PAH, such as smooth muscle cells, endothelial cells and macrophages. 121

Conclusion
Systemic sclerosis-associated pulmonary arterial hypertension is a devastating complication carrying a poor prognosis. Current therapies targeting SSc-PAH have resulted in improved quality of life, cardiac hemodynamics and survival.
Most recent guidelines focus on routine screening for PAH in patients with SSc, and when SSc-PAH is diagnosed, initial aggressive combination treatment is important to reach a low-risk status. Several promising medications are still being studied in this patient population.
Stability Analysis of Sheared Thermal Boundary Layers and its Implication for Modelling Turbulent Rayleigh-Bénard Convection

Predicting the heat flux through a horizontal layer of fluid confined between a hot bottom plate and a cold top one has always spurred theoretical, numerical and experimental work on Rayleigh–Bénard convection. Customarily, the Nusselt number (the heat flux in non-dimensional form) has been modelled in the form of one or several power-laws of three parameters, the Rayleigh, Prandtl and Reynolds numbers. Quantifying the large-scale flow that spontaneously develops in a turbulent Rayleigh–Bénard cell, the Reynolds number, unlike the Rayleigh and Prandtl numbers, is not a control parameter strictly speaking and, depending on the model, is sought as another power-law or introduced as an external input. Whereas balancing the different transport mechanisms can predict the exponents in these power laws, experimental and numerical results are required to adjust the various prefactors. The early and simple model of Malkus [1] and Howard [2] assumed that the value of the Nusselt number could be directly deduced from the marginal stability of the two sheared thermal boundary layers along the upper and lower plates, interacting via the large-scale flow. Maintaining this simplicity, this work shows that in the classical regime of turbulent convection, considering the linear critical conditions of absolute (as opposed to convective) thermo-convective instabilities alleviates the flaws of the original model. Revisiting available Direct Numerical Simulations from which a Reynolds number can be unambiguously extracted, the present approach then yields the Nusselt number as a function of the Rayleigh and Prandtl numbers agreeing well with the numerical results.
Introduction
Rayleigh-Bénard (RB) convection has attracted the interest of the scientific community for more than a century as, on the one hand, its simple configuration (a fluid confined between two, hot and cold, horizontal plates) facilitates its study and, on the other hand, it encompasses the basic physics occurring in numerous natural or industrial flows. Rayleigh-Bénard convection has therefore played a crucial role in the study of hydrodynamic instabilities, from the early concepts to more involved ones such as spatio-temporal chaos [3], and constitutes a convenient set-up to experimentally and numerically investigate turbulent processes. The driving force of thermal convection, the buoyancy, is quantified by the Rayleigh number Ra = gα∆h³/(νκ), based on the thermal expansion coefficient α, kinematic viscosity ν and thermal diffusivity κ of the fluid, the acceleration due to gravity g, and the vertical distance and the temperature difference between the hot, bottom and cold, top plates, h and ∆, respectively. The main outcome and metric of RB convection, the vertical heat flux across the cell, is quantified by the Nusselt number Nu = Φh/(λ∆), based on the horizontally averaged heat flux Φ and the thermal conductivity λ of the fluid. Moreover, the fluid-specific competition between heat diffusion and viscosity is quantified by the Prandtl number Pr = ν/κ. For Rayleigh numbers between 10^5 and 10^11-10^12, experimental and numerical results all converge towards a universal functional dependence of Nu with Ra and Pr [4]. In addition to Nu, a secondary response parameter is the Reynolds number Re = Vh/ν, characterizing the velocity field observed in the RB cell. A Reynolds number is more difficult to define, investigate and measure than Nu. Indeed, for fixed control parameters and in steady state, Φ is constant and equal to the heat flux through the two horizontal plates, whereas the velocity field results from the complex combination of turbulent motions and coherent structures, such as plumes, boundary layers and the large-scale circulation in the bulk of the cell, also known as the "wind" [5]. Consequently, several definitions are possible for the characteristic velocity V, such as the root-mean-square velocity averaged over the whole fluid, V_r.m.s. = √⟨V_x² + V_y² + V_z²⟩, or some magnitude of the wind flow (V_w). Nevertheless, a functional dependence of Re with Ra and Pr also seems to prevail [6,7]. These converging results are noteworthy for such a system, in which boundary layers (BLs) and turbulent flows in the bulk interact in a complex fashion.

On the other hand, in configurations where Ra is larger than 10^11-10^12, a controversy has been going on for more than twenty years on the existence and conditions for the appearance of a convection regime called the "ultimate regime" [8,9]. For these large Rayleigh numbers, experimental results seem to call into question a functional dependence of Nu and Re with Ra and Pr. The present model covers the classical convection regime (Ra ⪅ 10^10) and there is no evidence that it might be applicable to the ultimate regime. Indeed, the ultimate regime is associated with boundary layers having transitioned from laminar to turbulent behaviour [8,9], which contradicts the assumptions used in the present theory.
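As a quick numerical illustration of these definitions, the short script below evaluates Ra, Pr, Nu and Re for an arbitrary water layer; all property values, the imposed heat flux and the velocity are illustrative assumptions chosen for the example, not values taken from the studies cited above.

```python
# Illustrative evaluation of the control and response parameters of a
# Rayleigh-Benard cell; all numerical values below are assumed for the example.
g     = 9.81       # gravity [m/s^2]
alpha = 2.1e-4     # thermal expansion coefficient [1/K] (water near 20 C)
nu    = 1.0e-6     # kinematic viscosity [m^2/s]
kappa = 1.4e-7     # thermal diffusivity [m^2/s]
lam   = 0.6        # thermal conductivity [W/(m K)]
h     = 0.1        # plate separation [m]
Delta = 5.0        # temperature difference [K]

Ra = g * alpha * Delta * h**3 / (nu * kappa)   # buoyancy forcing
Pr = nu / kappa                                # fluid-property ratio

Phi = 500.0        # horizontally averaged heat flux [W/m^2], assumed
Nu  = Phi * h / (lam * Delta)                  # non-dimensional heat flux

V  = 5.0e-3        # characteristic (r.m.s. or wind) velocity [m/s], assumed
Re = V * h / nu    # response Reynolds number

print(f"Ra = {Ra:.2e}, Pr = {Pr:.1f}, Nu = {Nu:.1f}, Re = {Re:.0f}")
```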
Numerous theories have been developed to predict the functional dependence Nu(Ra, Pr) in the classical regime [see 5, 10, 3, for detailed and thorough accounts]. The widely accepted Grossmann and Lohse (GL) model [11,12] is based on splitting the mean kinetic energy and thermal dissipation rates into two contributions each, one from the bulk and one from the boundary layers. It assumes that RB convection is a mixture of eight convection regimes, Nu and Re_w = V_w h/ν being described by power-laws of Ra and Pr for each of these regimes. The Nusselt and Reynolds numbers are then obtained as functions of Ra and Pr using the two equations of the model, hereafter referred to as (1). The function f captures the cross-over from a thermal BL of thickness δ_T nested in the kinetic one of thickness δ_V to the reciprocal situation. For very large Pr's, the function g describes the saturation of the kinetic BL thickness as Re_w decreases below a critical Reynolds number. This critical Reynolds number and the four prefactors c_1-c_4 have been computed in [13] by fitting (1) to experimental and numerical results. It was recently proposed [7] to use functional forms for the prefactors c_1-c_4 to improve the predictions of (1). Note that, whether from experiments or simulations, obtaining the velocity of the wind V_w remains a major challenge. It is generally assumed that V_w = ξ V_r.m.s., with ξ = 1, the value of ξ only impacting the model constants [11,12].

Before these fine-tuned quantitative models, Malkus [1] and Howard [2] had suggested that in turbulent convection (Ra ⪆ 10^5), the lower and upper thermal boundary layers would self-adjust to reach the critical conditions above which thermal convection rolls develop. These critical conditions are accessible by linear stability analysis. They deduced that Nu ∝ Ra^(1/3) (Eq. (5)), in agreement with the dimensional analysis proposed by Priestley [14]. This exponent 1/3 is indeed the only scaling for Nu ensuring that the heat flux is independent of the height of the cell h. To capture the departure from the 1/3 exponent observed in the experimental results, Castaing et al. [5] proposed a mixing zone model, distinguishing three regions in the RB cell. In addition to the two thermal boundary layers and the centre of the cell, they considered an intermediate region where sheets of fluid are ejected from the BLs. The scalings of the velocities of the ejected fluid with Ra then led to the prediction Nu ∝ Ra^(2/7). This mixing zone model was then further developed to take into account the Pr-dependence [15,16], predicting Nu ∝ Ra^(2/7) Pr^(2/7) for small Pr and Nu ∝ Ra^(2/7) Pr^(-1/7) for large enough Pr. In order to agree with experimental and numerical results, these models have been increasingly complexified, departing from the seminal simplicity of Malkus and Howard's interpretation. Furthermore, these improvements do not fix the original flaw of this interpretation: whatever the shear profile and boundary conditions, the linear threshold of thermo-convective instability of the thermal BL is unaffected by the shear and independent of the Prandtl number. This work establishes that it is possible to eliminate this flaw by considering the absolute instabilities of the sheared thermal BLs instead of the convective ones, i.e. the impulse response instead of infinitely extended perturbations. After revisiting the early Malkus-Howard model and its shortcomings, this work alleviates the latter by specifically considering the marginal stability of the thermal BLs in the framework of absolutely (as opposed to convectively) unstable sheared Rayleigh-Bénard (sRB) convection. The prediction for Nu as a function of Ra, Pr and Re_w is then shown to compare very favourably with existing results from three-dimensional Direct Numerical Simulations from which both the Nusselt and the Reynolds numbers are available, over the ranges 10^5 < Ra < 2·10^9 and 10^-2 < Pr < 10^3. The paper finally concludes and further discusses the pivotal roles of the large-scale flow and Reynolds number, which, in our model, remains an input that has to be inferred from numerical simulations.

Revisiting the Malkus and Howard model
Malkus and Howard assumed that, for turbulent convection, the temperature averaged over time is almost uniform in the bulk flow and that the mean temperature increases by ∆/2 when crossing the thickness δ_T of each thermal boundary layer (see Fig. 1a).
They also assumed that the bulk flow only serves to transmit the constant heat flux between the two boundary layers and has no impact on their stability. More precisely, we postulate that the bulk could be seen as a 'conveyor belt' advecting the perturbations of temperature and velocity from one boundary layer to the other. This crude picture advantageously discards the need for a mixing zone and, as underlined in [5], makes the Reynolds number of the boundary layers, Re_bl, built on the wind velocity corrected by a factor F, the relevant shear parameter (Eqs. (2)-(3) define Ra_bl and Re_bl from the thickness δ_T). The correction factor F comes from the fact that the wind velocity at the edge of each thermal BL is not systematically equal to V_w (see Fig. 1 in [11]). For Pr ≤ 1, as δ_V ≤ δ_T, F should be equal to 1. For larger Pr, the relevant velocity should be less than V_w, namely about V_w δ_T/δ_V, i.e. F ≃ δ_T/δ_V. Even for large Pr numbers, though, Direct Numerical Simulations (DNS) in [7] have shown that δ_V remains very close to δ_T, leading to F very close to unity. We will show later that variations in F have little impact on the results of the model.

For Ra ⪅ 10^10, previous experimental and numerical works have confirmed the validity of (4) (see Fig. 2a of [7] for example). In contrast, for larger Ra numbers or for the ultimate regime, as a logarithmic mean temperature profile is expected and reported in experiments [18,19], (4) certainly no longer holds. Equations (2) and (4) then result in Nu = (Ra/Ra_bl)^(1/3). (5)

Irrespective of the velocity profile and boundary conditions, when thermo-convective instabilities of infinite extension along the direction of the shear are considered, rolls whose axes are aligned with this direction are the first to become linearly unstable. Their critical Rayleigh number is found to be independent of the shear flow, the well-known value Ra_c^0 ≈ 1708 being retrieved for Dirichlet boundary conditions. Assuming Ra_bl = Ra_c^0, equation (5) then shows that Nu must vary as Ra^(1/3). This approach, however, has three caveats. First, the linear stability analysis of these longitudinal rolls shows that Ra_c^0 is independent of Re_bl, so the stability of the BLs is obviously not affected by the wind flow. Then, as Ra_c^0 is also found to be independent of the Prandtl number, so is the scaling (5). Finally, experiments usually observe that Ra_bl > Ra_c^0 and conclude that the BLs should be unstable [20], calling the initial assumption of this model into question.

We show in the forthcoming that it is possible to fix these caveats by considering the linear impulse response of the sheared thermal BLs instead of their stability to infinitely extended perturbations. The Green function of the linearized dynamic equations is the natural framework to address the stability of open flows, as these BLs swept by the wind are. It leads one to distinguish, among the instabilities, the convective ones, which grow in space and time but are eventually swept away by the wind, and the absolute ones, the growth of which is vigorous enough to overcome this wind and propagate both up- and downstream [21]. Absolute instabilities are doubly relevant here. First, whereas convective instabilities are usually extrinsic and observed as the result of an external forcing, absolute instabilities are driven by the intrinsic dynamics of the flow and are more robust. Then, whereas convective instabilities in sheared Rayleigh-Bénard convection retrieve the critical Rayleigh number Ra_c^0 ≈ 1708, unaffected by the shear flow and independent of the Prandtl number, it is established hereinafter and shown in Fig. 2 that the critical Rayleigh number of the absolute instabilities, Ra_c^abs, substantially varies with Pr and Re_bl. So, following the idea of Malkus and Howard, we still suppose that the thermal BL thickness δ_T adjusts itself so as to set Ra_bl at the threshold of instability, but we further claim that the relevant threshold pertains to the emergence of absolute instabilities: Ra_bl = Ra_c^abs(Re_bl, Pr). (6)

Combining (2) and (3) fixes the thickness δ_T, and the condition of absolute instability (6) becomes Ra_c^abs(Re_bl, Pr)/Re_bl³ = Ra/(Re_w F)³. (7) Once Ra_c^abs as a function of Re_bl and Pr has been computed, as elaborated in the next section, solving equation (7) yields Re_bl as a function of the three parameters Ra, Pr and Re_w, and equations (5) and (6) yield the Nusselt number: Nu = [Ra/Ra_c^abs(Re_bl(Ra, Pr, Re_w), Pr)]^(1/3). (8) Equation (8) expresses Nu as a function of the three parameters Ra, Pr and Re_w. Whereas Ra and Pr are external control parameters, Re_w results from the dynamics of the flow. Computing Nu from this model thus requires plugging in a value for Re_w. Strictly speaking, this model cannot predict the Nusselt number out of the external parameters alone. It is nonetheless possible to check the validity of the basic assumption of our interpretation: that the dynamics of turbulent Rayleigh-Bénard convection is driven by the marginal stability of the thermal boundary layers with respect to absolute modes and independent of the bulk. This amounts to checking that, in existing numerical and experimental results, Nu and Re_w are related by equations (7)-(8).
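The display equations (2)-(4) referenced above did not survive the extraction of the text. The forms below are a reconstruction based on the conventions stated in the text (a merged layer of height 2δ_T carrying the full temperature difference ∆ and swept by the corrected wind velocity F V_w); the exact normalizations are therefore an assumption, chosen so that they combine consistently into the relations (5), (7) and (8) quoted above rather than being the authors' original typography.

\[
Ra_{bl} \;=\; \frac{g\,\alpha\,\Delta\,(2\delta_T)^3}{\nu\kappa} \quad (2), \qquad
Re_{bl} \;=\; \frac{2 F V_w \delta_T}{\nu} \;=\; 2 F\,Re_w\,\frac{\delta_T}{h} \quad (3), \qquad
Nu \;=\; \frac{h}{2\delta_T} \quad (4).
\]

With these conventions, eliminating δ_T between (2) and (4) gives Nu = (Ra/Ra_bl)^(1/3), i.e. (5), while eliminating δ_T between (2) and (3) gives Ra_bl/Re_bl³ = Ra/(F Re_w)³, which becomes (7) once Ra_bl is replaced by the absolute threshold Ra_c^abs(Re_bl, Pr).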
Absolute instability of sheared Rayleigh-Bénard convection
To check the validity of our model, the critical threshold of absolute thermo-convective instabilities developing in the merged boundary layers, Ra_c^abs(Re_bl, Pr), must be computed beforehand. Although this analysis should be restricted to configurations that are homogeneous in the direction of the mean flow, it is assumed here that changes in the BLs along this direction remain sufficiently weak to be neglected in the analysis.

The configuration whose stability is computed then consists of a fluid confined between two horizontal plates at rest, the lower one at temperature ∆/2 while the upper one, at distance 2δ_T above, is set at temperature −∆/2, and swept by a mean shear flow, as shown in Fig. 1(b). For the sake of simplicity, this shear flow is chosen in the shape of a piece-wise linear profile, and it has been further checked that the results of the stability analysis were only minutely impacted by changing this profile to a Poiseuille flow. The Rayleigh and Reynolds numbers are defined as above, using respectively (2) and (3), with F = 1. Using the streamwise, spanwise and wall-normal coordinates (x, y, z) and the related basis, the velocity, pressure and temperature fields are decomposed into the steady base state (V_x,b, 0, 0, P_b, Θ_b), consisting of the laminar shear flow and the conduction solution, and temporally evolving perturbations (V_x,p, V_y,p, V_z,p, P_p, Θ_p).

Following the procedure and non-dimensionalization scheme detailed in [22,23] for Rayleigh-Bénard-Poiseuille convection, the double curl of the Navier-Stokes equation and the heat equation, both linearized about the base state, together with the continuity equation in the Boussinesq approximation, yield a system of partial differential equations satisfied by the wall-normal velocity and temperature fields of the perturbation. Seeking these perturbations in the form (V_z,p, Θ_p) = (v_z(z), θ(z)) exp(−iωt + ik_x x + ik_y y) recasts these PDEs into a generalized eigenvalue problem (9), supplemented by its boundary conditions (10), the operators involving d_z, the z-derivative, k² = k_x² + k_y² and ∆ = d_z² − k². The eigenvalues of (9) are the complex frequencies ω, the imaginary parts of which are the growth rates of the instabilities.

For a given set of parameters and wavenumbers (Re_bl, Pr, Ra_bl, k_x, k_y), problem (9) is solved by a tau-collocation spectral method using Chebyshev polynomials on 32 Gauss-Lobatto collocation points. For fixed values of Re_bl and Pr, the critical conditions for the absolute instability are sought as the set (Ra_c^abs, k_x, k_y), with complex k_x and k_y, ensuring that the growth rate vanishes at a saddle point of ω(k_x, k_y), i.e. ∂ω/∂k_x = ∂ω/∂k_y = 0 with Im(ω) = 0. These critical conditions are computed using Newton-Raphson algorithms, and all the linear algebra involved in those computations is accomplished using NAG (Numerical Algorithms Group, Oxford, UK) routines.

Whereas convective instabilities are known to take the form of longitudinal rolls, the axis of which is aligned with the direction of the wind, i.e. k_x = 0, absolute instabilities are always found in the form of transverse rolls, i.e. k_y = 0. Figure 2(a) shows the results of this parametric stability analysis, namely the function Ra_c^abs(Re_bl, Pr) to be used in equations (7)-(8). Note that, besides the Reynolds number Re_bl, this function would only marginally be affected by changes in the velocity profile V_x,b(z) of the stability problem (9)-(10).
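To make the numerical machinery concrete, the sketch below builds the Chebyshev differentiation matrix on Gauss-Lobatto points and solves a generalized eigenvalue problem of the same type as (9)-(10), using SciPy instead of the NAG routines mentioned above. The operators assembled here are a simple advected-diffusion stand-in with Dirichlet conditions, not the full coupled velocity-temperature operators of the sheared Rayleigh-Bénard problem, so the snippet only illustrates the collocation, boundary-condition and eigenvalue steps.

```python
import numpy as np
from scipy.linalg import eig

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto points (Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 32                    # collocation points, as in the analysis above
D, z = cheb(N)
D2 = D @ D
I = np.eye(N + 1)

kx, Re_bl = 1.0, 50.0     # illustrative wavenumber and Reynolds number
U = z                     # stand-in shear profile U(z) = z on [-1, 1]

# Temporal modes q(z) exp(-i*omega*t + i*kx*x) of
#   dq/dt = -U dq/dx + (1/Re_bl) (d2/dz2 - kx^2) q,   q(+/-1) = 0,
# written as the generalized problem  A q = omega B q.
A = np.diag(kx * U) + 1j * (D2 - kx**2 * I) / Re_bl
B = I.astype(complex)

# Dirichlet boundary conditions by row replacement (tau-like treatment):
for row in (0, N):
    A[row, :] = 0.0
    A[row, row] = 1.0
    B[row, :] = 0.0

omega = eig(A, B, right=False)
omega = omega[np.isfinite(omega)]       # drop the spurious boundary modes
print("largest growth rate Im(omega):", omega.imag.max())
```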
Comparison between model predictions and DNS results
To put our interpretation on firmer grounds, we now proceed to test the relation (8) between Nu and Re_w against existing numerical results. As for the GL model, it is assumed hereafter that V_w = ξ V_r.m.s., or equivalently Re_w = ξ Re_r.m.s., with ξ independent of the Ra and Pr numbers. The stability analysis of the previous section assumes a two-dimensional base flow but, spanning both wavenumbers k_x and k_y, is fully three-dimensional. Selecting k_y = 0, the outcome of this analysis is a two-dimensional flow in the form of transverse rolls. Thus, our model should be able to retrieve both 2D and 3D DNS results of Rayleigh-Bénard convection. However, perhaps counter-intuitively, the bulk flow is more complex in 2D than in 3D because of recirculation loops in the corners [26,27] and the coexistence of multiple statistically stable states when the ratio between the horizontal and vertical extensions of the 2D cell is greater than one [26,28]. For these reasons, in what follows, we will compare our model with the results of 3D DNS for which we have both: (i) the Reynolds number based on the root-mean-square velocity (V_r.m.s.), and (ii) a large enough aspect ratio and a three-dimensional flow, so that a one-to-one dependence of Nu and Re_r.m.s. with Ra and Pr prevails [29].

The Reynolds numbers reported in Fig. 3(a) are consistent with Re_r.m.s. behaving as the square root of Ra: Re_r.m.s. = b Ra^(1/2), (11) with b = 0.15. Using (11), ξ = 0.22 and F = 1, a nice agreement is observed for Nu between the DNS data and the predictions of Eqs. (7)-(8), depicted by the thick solid blue curve in Fig. 3(b). This agreement was also obtained and extended to Pr ≠ 1 using a fit of the numerically computed values for Re_r.m.s. instead of (11). Figure 3(b) also shows that variations of ξ impact this result, particularly as Ra increases. There are no DNS results available so far to unambiguously calculate ξ. It seems reasonable, however, to find a typical wind flow velocity V_w lower than the root-mean-square velocity. The velocity V_w could actually be closer to the characteristic horizontal velocity, V_h,r.m.s. = √(⟨V_x² + V_y²⟩/2). Using the results of [30,31], it is found that V_h,r.m.s./V_r.m.s. ≈ 0.5, a value closer to ξ = 0.22.

Figure 4(b) then shows that the present model also retrieves with good accuracy the variations of Nu with both Ra and Pr, using again F = 1, ξ = 0.22 and Re_r.m.s. fitted from the DNS results as plotted in Fig. 4(a). For Ra Pr ≤ 10^9, as δ_V ≤ δ_T, F(δ_T/δ_V) = 1 [7] and changes in the ratio δ_T/δ_V have no effect on the theoretical results. For Ra Pr ≥ 10^9, F(δ_T/δ_V) = δ_T/δ_V can be calculated using the numerical results of [7]. Figure 4(b) shows that the effect of the ratio δ_T/δ_V is significant only for Ra ≥ 10^9 and Pr ≥ 10 (magenta solid line: F = 1; dash-dotted line: F = δ_T/δ_V). Even in this case, the effect of the ratio δ_T/δ_V is limited and the numerical results are inconclusive to discriminate between the theoretical curves. In agreement with experimental, numerical and previous theoretical approaches, this model shows that Nu does not strictly behave as a power-law of Ra and Pr.

Figure 4 (partial caption): curves: GL theory [12] with prefactors from [13]. Solid curves in (b): present model (7)-(8) with F = 1, ξ = 0.22 and Re_r.m.s. fitted from DNS results depicted by the solid curves in (a). Dash-dotted magenta curve in (b): (7)-(8) with ξ = 0.22, Re_r.m.s. represented by the magenta curve in (a) and the ratio F = δ_T/δ_V as given in [7].
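The way the model is confronted with the DNS data can be condensed into a few lines: given Ra, Pr and the fit (11) for Re_r.m.s., one computes Re_w = ξ Re_r.m.s., solves (7) for Re_bl and evaluates Nu from (8). The sketch below is only schematic: the function standing in for Ra_c^abs(Re_bl, Pr) is a crude placeholder (the actual tabulated thresholds of Fig. 2(a) are available from the authors on request), and the normalizations of (7)-(8) are the reconstructed ones discussed above.

```python
import numpy as np
from scipy.optimize import brentq

def Ra_abs_c(Re_bl, Pr):
    """Placeholder for the absolute-instability threshold of the merged BLs.
    Only the Re_bl -> 0 limit (~1708) is genuine; the quadratic stabilisation
    by the shear and its Pr dependence are assumed for illustration."""
    return 1708.0 * (1.0 + 0.01 * Re_bl**2 / np.sqrt(Pr))

def nusselt(Ra, Pr, Re_w, F=1.0):
    """Solve Eq. (7) for Re_bl, then evaluate Nu from Eq. (8)."""
    def residual(Re_bl):
        # Eq. (7):  Ra_abs_c(Re_bl, Pr) / Re_bl^3  =  Ra / (F * Re_w)^3
        return Ra_abs_c(Re_bl, Pr) / Re_bl**3 - Ra / (F * Re_w) ** 3
    Re_bl = brentq(residual, 1e-6, F * Re_w)   # Re_bl < F*Re_w since 2*delta_T < h
    return (Ra / Ra_abs_c(Re_bl, Pr)) ** (1.0 / 3.0)   # Eq. (8)

# Pr = 1 example, with Re_r.m.s. = b*Ra^(1/2) (Eq. (11), b = 0.15) and xi = 0.22.
b, xi = 0.15, 0.22
for Ra in (1e6, 1e7, 1e8, 1e9):
    Re_w = xi * b * np.sqrt(Ra)
    print(f"Ra = {Ra:.0e}  ->  Nu ~ {nusselt(Ra, Pr=1.0, Re_w=Re_w):.1f}")
```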
Conclusions
It has been found in this study that the linear threshold of absolute instability of the sheared thermal boundary layers could quantitatively relate the heat flux through a RB cell to the magnitude of the large-scale wind in this cell. On the basis of computing the critical Rayleigh number of absolute instability, Ra_c^abs(Re_bl, Pr), of the merged sheared thermal boundary layers (these numerical results are available upon request), equations (7) and (8) predict the variations of Nu as a function of the three parameters Ra, Pr and Re_w. Whereas our approach supports the pivotal role of absolute thermo-convective instabilities in the dynamics of turbulent Rayleigh-Bénard convection, it does not yield, nonetheless, a quantitative model predicting the Nusselt number as a function of the Prandtl and Rayleigh numbers, as the wind Reynolds number Re_w remains an external input. Moreover, whereas our results support the idea that the boundary conditions between the bulk and the thermal boundary layers might be unimportant to model the heat flux, they do not exclude that these boundary conditions could be pivotal for the wind, the velocity of which, V_w, remains an external input in our model. Unfortunately, the wind characteristic velocity V_w remains ambiguous to define and is seldom reported in the literature. On the contrary, V_r.m.s. is unambiguously defined and frequently addressed, but its exact relation to V_w remains an open question.

While two- and three-dimensional DNS and experiments with aspect ratios close to unity exhibit a single large-scale circulation in the bulk, several cells separated by plumes have been reported in large-aspect-ratio two-dimensional DNS [19], and a complex network of plumes circumscribing regions of obvious sheared thermal boundary layers is observed experimentally [34]. These plumes obviously carry the heat through the bulk, but it remains unclear whether or not they contribute to the heat flux close to the plates, and the latter could still be mostly imposed by the heat flux through the thermal boundary layers. Besides, two-dimensional DNS have also shown that this large-scale organization in cells and plumes is non-unique [28], though the three-dimensional generalization of this result remains an open question. The relation between the wind and V_r.m.s. is very likely non-trivial, and V_w should probably be described in a statistical fashion [35]. Our approach could nonetheless remain valid and useful as an "elementary brick" upon which to build a more complex model. Albeit crude, assuming a linear relation V_w = ξ V_r.m.s. to compute Re_w from values of Re_r.m.s. reported in the literature, our model still retrieves the corresponding Nu. The coefficient ξ is so far the only parameter whose adjustment is required to validate the model against experimental or DNS results. In the ranges 10^5 ≤ Ra ≤ 10^9 and 0.02 ≤ Pr ≤ 200, setting ξ = 0.22 was found to be good enough to capture Nu as a function of Pr and Ra with reasonable accuracy. This agreement could suggest that thermal plumes may have a limited impact on the total thermal flux. This ad hoc value ξ = 0.22 seems sensible since we find a typical wind flow velocity V_w close to the mean horizontal velocity V_h,r.m.s., as reported in DNS [30,31]. Though always of order unity, the value of ξ nevertheless depends on the actual velocity profile used in the stability analysis of the thermal boundary layers.
Despite the hefty amount of existing experimental and numerical results, the characterization of the wind, its typical velocity, its fluctuations and its relation to the averaged velocity still require further work. Experimental measurements of the velocity and temperature in the thermal boundary layers remain particularly delicate, and three-dimensional DNS in cells with large aspect ratios are still limited by their numerical cost.

Figure 1: (a) Schematic view of the flow in a RB cell for Ra ⪆ 10^5. Also shown: the mean temperature profile and the two thermal boundary layers of thickness δ_T close to the bottom and top plates. ∆ is the difference of temperature between the two horizontal plates. In the forthcoming analysis, the top and bottom boundary layers will be modelled by sheared Rayleigh-Bénard flows, the schematic view of which is shown in (b).
Figure 3: Reynolds (a) and Nusselt (b) numbers in compensated form, as functions of Ra, for Pr = 1. Symbols: DNS from the respective references.
Plastoquinone and Ubiquinone in Plants: Biosynthesis, Physiological Function and Metabolic Engineering

Plastoquinone (PQ) and ubiquinone (UQ) are two important prenylquinones, functioning as electron transporters in the electron transport chain of oxygenic photosynthesis and the aerobic respiratory chain, respectively, and play indispensable roles in plant growth and development through participating in the biosynthesis and metabolism of important chemical compounds, acting as antioxidants, being involved in plant response to stress, and regulating gene expression and cell signal transduction. UQ, particularly UQ10, has also been widely used in people's lives. It is effective in treating cardiovascular diseases, chronic gingivitis and periodontitis, and shows a favorable impact on cancer treatment and human reproductive health. PQ and UQ are made up of an active benzoquinone ring attached to a polyisoprenoid side chain. Biosynthesis of PQ and UQ is very complicated, with more than thirty-five enzymes involved. Their synthetic pathways can be generally divided into two stages. The first stage leads to the biosynthesis of the precursors of the benzene quinone ring and the prenyl side chain. The benzene quinone ring for UQ is synthesized from tyrosine or phenylalanine, whereas the ring for PQ is derived from tyrosine. The prenyl side chains of PQ and UQ are derived from glyceraldehyde 3-phosphate and pyruvate through the 2-C-methyl-D-erythritol 4-phosphate pathway and/or acetyl-CoA and acetoacetyl-CoA through the mevalonate pathway. The second stage includes the condensation of the ring and side chain and subsequent modification. Homogentisate solanesyltransferase, 4-hydroxybenzoate polyprenyl diphosphate transferase and a series of benzene quinone ring modification enzymes are involved in this stage. PQ exists in plants, while UQ is widely present in plants, animals and microbes. Many enzymes and their encoding genes involved in PQ and UQ biosynthesis have been intensively studied recently. Metabolic engineering of UQ10 in plants, such as rice and tobacco, has also been tested. In this review, we summarize and discuss recent research progress in the biosynthetic pathways of PQ and UQ and the enzymes and their encoding genes involved in side chain elongation and in the second stage of PQ and UQ biosynthesis. Physiological functions of PQ and UQ in plants as well as the practical application and metabolic engineering of PQ and UQ are also included.

INTRODUCTION
Plastoquinone (PQ) and ubiquinone (UQ) are two important prenylquinones functioning as electron transporters in plants. They are involved in photophosphorylation and oxidative phosphorylation located in chloroplast thylakoids and the mitochondrial inner membrane, respectively (Swiezewska, 2004). PQ and UQ are both made up of an active benzoquinone ring attached to a polyisoprenoid side chain. The length of the polyisoprenoid side chain determines the type of PQ and UQ. The difference between PQ and UQ in chemical structure lies mainly in the substituent groups of the benzoquinone ring (Figure 1). Plant PQ and UQ usually contain nine or ten isoprenoid units in the side chain. For instance, the main PQ and UQ in Arabidopsis thaliana have nine such units, known as PQ9 and UQ9, respectively. PQ and UQ are localized in different organelles of plant cells. PQ is located on the thylakoids of chloroplasts, while UQ is located on the inner membrane of mitochondria.
The locations of PQ and UQ are consistent with their roles in photophosphorylation and oxidative phosphorylation. The lifespan of PQ and UQ in cells is very short. The half-time of PQ and UQ in spinach cells is about 15 and 30 h, respectively (Wanke et al., 2000). Therefore, to maintain a stable concentration and dynamic balance for normal plant photosynthesis and respiration, PQ and UQ need to be continuously synthesized in cells. In addition to the significance of PQ and UQ in plants, UQ10 has also been used in people's lives. Significant progress has been made recently in elucidating PQ and UQ biosynthetic pathways and genes associated with PQ and UQ production. The biosynthesis and functions of UQ in Escherichia coli, Saccharomyces cerevisiae and animals (Clarke, 2000; Meganathan, 2001; Tran and Clarke, 2007; Bentinger et al., 2010; Aussel et al., 2014), UQ10 production in plants (Parmar et al., 2015) and metabolic engineering of UQ10 in microbes (de Dieu Ndikubwimana and Lee, 2014) have been reviewed. Here we mainly summarize and discuss recent advances in the biosynthetic pathways, key enzymes and their encoding genes, physiological functions, and metabolic engineering of PQ and UQ in plants.

BIOSYNTHETIC PATHWAYS OF PQ AND UQ
The biosynthetic pathways of PQ and UQ can be generally divided into two stages (Figures 2 and 3). The first stage leads to the biosynthesis of the precursors of the benzene quinone ring and the prenyl side chain. The second stage includes the condensation of the ring and side chain and subsequent modification. The benzene quinone ring precursor for UQ is 4-hydroxybenzoic acid (4HB), synthesized from tyrosine or phenylalanine under the catalysis of various enzymes, including phenylalanine ammonia-lyase (PAL), cinnamic acid 4-hydroxylase (C4H), 4-coumarate CoA ligase (4CL), and other, as yet unknown, enzymes. The benzene quinone ring precursor for PQ is homogentisic acid (HGA). It is synthesized from tyrosine under the catalysis of tyrosine aminotransferase (TAT) and 4-hydroxyphenylpyruvate dioxygenase (HPPD). The prenyl side chains of PQ and UQ are derived from glyceraldehyde 3-phosphate (G3P) and pyruvate through the 2-C-methyl-D-erythritol 4-phosphate (MEP) pathway and/or acetyl-CoA and acetoacetyl-CoA through the mevalonate (MVA) pathway. The universal isoprene precursor isopentenyl diphosphate (IPP, C5) and its isomer dimethylallyl diphosphate (DMAPP), synthesized through the MEP and MVA pathways, are converted into intermediate diphosphate precursors, including geranyl diphosphate (GPP, C10), farnesyl diphosphate (FPP, C15), and geranylgeranyl diphosphate (GGPP, C20). Enzymes catalyzing this conversion are a group of polyprenyl diphosphate synthases (PPSs), including geranyl diphosphate synthase (GPPS), farnesyl diphosphate synthase (FPPS), and geranylgeranyl diphosphate synthase (GGPPS) (Zhang and Lu, 2016). PQ and UQ share the same prenyl side chain precursors. The side chain may be elongated to solanesyl diphosphate (SPP, C45) or decaprenyl diphosphate (DPP, C50) under the catalysis of solanesyl diphosphate synthase (SPS) and decaprenyl diphosphate synthase (DPS), respectively. In the second stage, SPP is attached to HGA by homogentisate solanesyltransferase (HST) to produce the intermediate 2-demethylplastoquinol, which is then methylated by a methyltransferase to form the end-product PQ9 in plants. The condensation of 4HB and the corresponding prenyl side chain is catalyzed by 4HB polyprenyltransferase (PPT). It leads to the formation of 3-polyprenyl-4-hydroxybenzoate.
After three methylations, three hydroxylations and one decarboxylation, ubiquinol-n and ubiquinone-n are eventually formed. Enzymes catalyzing these modification steps are currently not well understood.

Polyprenyl diphosphate synthases (PPSs), also known as isoprenyl diphosphate synthases (IDSs), are a group of enzymes widely distributed in organisms. They catalyze the formation of polyprenyl diphosphates with various chain lengths through consecutive condensation of IPP and are key enzymes involved in the biosynthesis of isoprenoid compounds, including monoterpenes, diterpenes, triterpenoids, carotenoids, natural rubber and many derivatives, such as PQ and UQ, vitamin E, prenylflavonoids, and shikonin (Kellogg and Poulter, 1997). Polyprenyl diphosphate synthases possess seven common conserved domains, I-VII, of which domain II is characterized by the first aspartate-rich motif (FARM), DDX(2-4)D, while domain VI is characterized by the second aspartate-rich motif (SARM), DDXXD (Wang and Ohnuma, 1999; Phatthiya et al., 2007). These aspartate-rich motifs are involved in substrate binding and catalysis via chelating Mg2+, a cofactor required for enzyme activity (Wang and Ohnuma, 1999). Based on the chain length of the final products, PPSs can be divided into three subfamilies: short- (C15-C25), medium- (C30-C35) and long-chain (C40-C50) PPSs (Hemmi et al., 2002). The most common PQ and UQ in plants have a C45 or C50 prenyl side chain moiety. For instance, Oryza sativa PQ9 and UQ9 contain a C45 prenyl side chain (Ohara et al., 2010), while cauliflower and pea UQ10 have a C50 prenyl side chain (Mattila and Kumpulainen, 2001). Thus, long-chain PPSs, such as SPS, catalyzing the formation of solanesyl diphosphate (SPP, C45), and decaprenyl diphosphate synthase (DPS), involved in decaprenyl diphosphate (DPP, C50) production, are significant to PQ and UQ biosynthesis in plants. Database mining of fully sequenced genomes showed that Chlamydomonas reinhardtii, Cyanidioschyzon merolae, Cucumis sativus, Vitis vinifera and Hordeum vulgare contained two SPSs, Physcomitrella patens, Arabidopsis, Glycine max, Oryza sativa and Zea mays had three, and Brachypodium distachyon contained four (Block et al., 2013). Among the three Arabidopsis SPS genes, AtSPS1 (At1g78510) and AtSPS2 (At1g17050) are highly expressed in leaves, with the level of AtSPS1 transcripts higher than that of AtSPS2 (Hirooka et al., 2003, 2005), whereas AtSPS3 (At2g34630) is ubiquitously expressed with peaks in seeds and shoot apical meristems (Ducluzeau et al., 2012). Although AtSPS1 had been shown previously to be localized in the ER (Hirooka et al., 2003, 2005; Jun et al., 2004), recent analyses have clearly demonstrated that both AtSPS1 and AtSPS2 are targeted exclusively to plastids and contribute to the biosynthesis of PQ9 (Block et al., 2013). Overexpression of AtSPS1 resulted in the accumulation of PQ9 and its derivative plastochromanol-8 (PC8) (Ksas et al., 2015). Moreover, the enzymatic activity of AtSPS1 and AtSPS2 is stimulated by a member of the lipid-associated fibrillin protein family, fibrillin 5 (FNB5-B), which physically binds to the hydrophobic solanesyl moiety and helps to release the moiety from the enzymes in Arabidopsis cells (Kim et al., 2015). AtSPS3 had been shown previously to be targeted to plastids and to contribute to gibberellin biosynthesis (Bouvier et al., 2000). However, recent results suggest that it is actually dual-targeted to mitochondria and plastids and is very likely bifunctional (Ducluzeau et al., 2012).
AtSPS3 is able to complement a yeast coq1 knockout lacking mitochondrial hexaprenyl diphosphate synthase. Silencing of AtSPS3 using RNAi technology led to a 75-80% reduction of the UQ pool size. AtSPS3 overexpression resulted in a 40% increase in UQ content. No significant alteration of PQ levels was observed in AtSPS3-silenced or -overexpressing lines. Therefore, AtSPS3 seems to be the main contributor to the SPS activity required for UQ9 biosynthesis in Arabidopsis (Hsieh et al., 2011; Ducluzeau et al., 2012). Similarly, three SPS genes exist in O. sativa (Ohara et al., 2010; Block et al., 2013). OsSPS1 is highly expressed in roots, whereas OsSPS2 is highly expressed in both leaves and roots. TargetP prediction and transient expression of GFP fusion proteins showed the localization of OsSPS1 in mitochondria and OsSPS2 in plastids. Recombinant proteins of both OsSPS1 and OsSPS2 catalyzed the formation of solanesyl diphosphates. The enzyme activity of OsSPS1 was stronger than that of OsSPS2. OsSPS1 complemented the yeast coq1 disruptant and produced UQ9 in yeast cells, whereas OsSPS2 weakly complemented the growth defect of the coq1 mutant (Ohara et al., 2010). These results suggest that OsSPS1 and OsSPS2 are involved in the supply of solanesyl diphosphate for UQ9 production in mitochondria and PQ9 biosynthesis in chloroplasts, respectively. OsSPS3 is less studied compared with OsSPS1 and OsSPS2. Since it shows high homology with OsSPS2 (Block et al., 2013), OsSPS3 also appears to be involved in PQ9 formation. Analysis of the fully sequenced tomato genome showed that Solanum lycopersicum contained two long-chain PPS genes (Block et al., 2013). Jones et al. (2013) cloned and termed them SlSPS and SlDPS, respectively. SlSPS is targeted to plastids, whereas the fluorescence signal of SlDPS-GFP may resemble the mitochondrial localization reported for rice OsSPS1 (Ohara et al., 2010; Jones et al., 2013). In E. coli, SlSPS and SlDPS extend the side chain of endogenous UQ to nine and ten isoprene units, respectively (Jones et al., 2013). Overexpression of SlSPS elevated the content of PQ in immature tobacco leaves. Silencing of SlSPS resulted in a photobleached phenotype and phytoene accumulation. SlSPS and SlDPS could not complement silencing of each other. SlDPS can use GPP, FPP or GGPP in SPP and DPP biosynthesis. Silencing of SlDPS did not affect leaf appearance, but impacted primary metabolism (Jones et al., 2013). The roles of SlSPS and SlDPS in PQ and UQ biosynthesis need to be further investigated. Although long-chain PPSs play significant roles in PQ and UQ biosynthesis, the corresponding genes have so far been identified from only a few plant species, such as Arabidopsis (Hirooka et al., 2003, 2005; Jun et al., 2004; Ducluzeau et al., 2012; Block et al., 2013), rice (Ohara et al., 2010), tomato (Jones et al., 2013), and Hevea brasiliensis (Phatthiya et al., 2007). Greater efforts are required for molecular cloning and functional characterization of the genes in other plant species and for utilization of plant-derived long-chain PPSs in PQ and UQ production through biotechnology. Compared with other genes involved in the upstream steps of PQ biosynthesis, HSTs are less studied, although they play an important role in the biosynthesis of PQ9, GAs and carotenoids, of which carotenoids are precursors of ABA and strigolactones. Genes encoding HSTs have only been reported in C. reinhardtii and Arabidopsis (Sadre et al., 2006; Venkatesh et al., 2006; Tian et al., 2007; Sadre et al., 2010; Chao et al., 2014).
The deduced C. reinhardtii CrHST and Arabidopsis AtHST proteins contain a chloroplast targeting sequence (Sadre et al., 2006; Tian et al., 2007). Confocal images of the AtHST-GFP fusion protein showed that the AtHST protein is located in the chloroplast, most likely on the envelope membrane (Tian et al., 2007). Overexpression of AtHST caused a modest elevation of PQ9 levels (Sadre et al., 2006). The pds2 mutant, with an in-frame 6 bp deletion in AtHST, showed an albino phenotype, and the mutation can be functionally complemented by constitutive expression of AtHST (Norris et al., 1995; Tian et al., 2007). Similarly, a T-DNA insertion mutant of the HST gene displayed albino, dwarf and early-flowering phenotypes, with arrested chloroplast development, absence of chlorophyll (Chl) and defective stomatal closure (Chao et al., 2014). The GA and ABA levels were very low in the mutant. Exogenous GA could partially rescue the dwarf phenotype and the root development defects. Exogenous ABA could rescue the stomatal closure defects (Chao et al., 2014). The expression levels of many genes involved in flowering time regulation and in PQ, Chl, GA, ABA and carotenoid biosynthesis were changed in the mutant, suggesting key roles for AtHST in chloroplast development and plant hormone biosynthesis (Chao et al., 2014). HST genes in other plant species remain to be isolated and characterized.

PPTs are members of the polyprenyl diphosphate transferase family. Although proteins in this family share sequence homology and a relatively close phylogenetic relationship, only a small subset catalyzing the condensation of 4HB and the prenyl chain is involved in UQ biosynthesis (Ohara et al., 2006). Other members may be involved in other metabolic pathways. For instance, the cyanobacterial hydroxybenzoate solanesyltransferase Slr0926, which is highly specific for 4-hydroxybenzoate and has a broad specificity with regard to the prenyl donor substrate, including SPP and a number of shorter-chain prenyl diphosphates, is actually involved in PQ9 biosynthesis (Sadre et al., 2012). Lithospermum erythrorhizon 4HB geranyltransferases (LePGT1 and LePGT2), with a strict specificity for GPP as a prenyl substrate, are involved in shikonin biosynthesis and are not relevant to UQ formation (Heide and Berger, 1989; Yazaki et al., 2002; Ohara et al., 2006, 2009). Arabidopsis AtPPT1 is the first PPT gene identified in plants (Okada et al., 2004). It is predominantly expressed in the flower cluster. The deduced AtPPT1 protein is localized in mitochondria and can complement the yeast coq2 disruptant. AtPPT1 has broad substrate specificity in terms of the prenyl donor. The T-DNA insertion mutant of AtPPT1 shows arrest of embryo development at an early stage of zygotic embryogenesis (Okada et al., 2004). The other mutant, known as hypersensitive response-like lesions 1 (hrl1), was identified in an ethyl methanesulfonate (EMS) mutagenesis screen (Devadas et al., 2002). The hrl1 mutant spontaneously develops HR-like lesions and shows enhanced resistance against bacterial pathogens (Devadas et al., 2002). Positional cloning and subsequent DNA sequencing showed that the mutant had a single base change in an exon of AtPPT1. The mutation results in a leucine to phenylalanine substitution at position 228. Leucine 228 is not part of the active site of the enzyme but is conserved across PPT sequences from various organisms (Dutta et al., 2015). Overexpression of HRL1 in A. thaliana leads to elevated UQ and decreased ubiquinol levels (Dutta et al., 2015).
The O. sativa genome contains three PPT genes, including OsPPT1a, OsPPT1b and OsPPT1c. However, only OsPPT1a was found to be expressed (Ohara et al., 2006). The deduced OsPPT1a protein contains a putative mitochondrial sorting signal at the N-terminus. Consistently, GFP-PPT fusion proteins are mainly localized in mitochondria (Ohara et al., 2006). Like AtPPT1 and other PPT proteins, OsPPT1a can complement the yeast coq2 mutant, accepts prenyl diphosphates of various chain lengths as prenyl donors, and shows strict substrate specificity for the aromatic substrate 4HB as a prenyl acceptor (Ohara et al., 2006).

The Modification Enzyme of the PQ Benzene Quinone Ring
2-Demethylplastoquinol, formed through the condensation of HGA and SPP by HSTs, is a key intermediate in PQ synthesis. This intermediate can be converted to the final product PQ under the catalysis of a modification enzyme, termed albino or pale green 1 (APG1) or methyl-phytyl-benzoquinone (MPBQ)/methyl-solanesyl-benzoquinone (MSBQ) methyltransferase (Shintani et al., 2002; Motohashi et al., 2003; Cheng et al., 2003). Arabidopsis APG1 was identified through characterization of the Ds-tagged albino or pale green mutant 1 (apg1) (Motohashi et al., 2003). This mutant lacks PQ, cannot survive beyond the seedling stage when germinated on soil, and contains decreased numbers of lamellae with reduced levels of chlorophyll (Motohashi et al., 2003). The insertion of the Ds transposon disrupts a gene encoding a 37 kDa polypeptide precursor of the chloroplast inner envelope membrane. The 37 kDa protein has partial sequence similarity to S-adenosylmethionine-dependent methyltransferases (Motohashi et al., 2003). Because of the lack of PQ in the apg1 mutant and the putative methyltransferase activity of the 37 kDa protein, APG1 appears to be involved in the methylation step of PQ biosynthesis (Motohashi et al., 2003). Almost at the same time, using a combined genomic, genetic and biochemical approach, Cheng et al. (2003) isolated and characterized the Arabidopsis VTE3 (vitamin E defective) locus. This locus is actually identical to APG1. In vitro enzyme assays showed that VTE3 is the plant functional equivalent of the MPBQ/MSBQ methyltransferase from Synechocystis sp PCC6803 (Shintani et al., 2002). In cyanobacteria, various genes involved in the PQ biosynthetic pathway and the vitamin E biosynthetic pathway are highly conserved. The biosynthesis of PQ and vitamin E shares a common precursor, HGA, which is prenylated with different substrates in the biosynthesis of the different products. HGA is prenylated with phytyl diphosphate (PDP) or GGPP in tocochromanol formation, whereas it is prenylated with SPP in PQ9 biosynthesis. Thus, the MPBQ/MSBQ methyltransferase identified from Synechocystis sp PCC6803 has a dual function in the final methylation step of PQ and vitamin E biosynthesis (Shintani et al., 2002). Most genes involved in PQ and vitamin E synthesis are homologous, but the MPBQ/MSBQ methyltransferase from Synechocystis sp PCC6803 and VTE3 from Arabidopsis are highly divergent in primary sequence (Cheng et al., 2003). Orthologs of the Synechocystis MPBQ/MSBQ methyltransferase exist in C. reinhardtii and Thalassiosira pseudonana but are absent from vascular and non-vascular plants. VTE3 orthologs exist in C. reinhardtii and in vascular and non-vascular plants but are absent from cyanobacteria (Cheng et al., 2003). This suggests that VTE3 originated evolutionarily from archaea rather than cyanobacteria.
Interestingly, two mutants of VTE3, vte3-1 and vte3-2, which show partial or total disruption of MPBQ/MSBQ methyltransferase activity, respectively, have different effects on PQ and vitamin E biosynthesis. vte3-1 mainly impairs the methylation of tocopherol substrates. It has little effect on the methylation of MSBQ to PQ. On the contrary, vte3-2 completely disrupts MPBQ/MSBQ methyltransferase activity (Shintani et al., 2002). The underlying mechanism remains to be elucidated. Although genes encoding MPBQ/MSBQ methyltransferase are largely unknown in plant species other than Arabidopsis, engineering of VTE3 has been tested (Van Eenennaam et al., 2003; Naqvi et al., 2011). Seed-specific expression of Arabidopsis VTE3 in transgenic soybean reduced seed delta-tocopherol from 20 to 2%. Coexpression of Arabidopsis VTE3 and VTE4 (the gamma-tocopherol methyltransferase gene) resulted in a greater than eightfold increase of alpha-tocopherol and an up to fivefold increase in vitamin E activity in transgenic soybean seeds (Van Eenennaam et al., 2003). Simultaneous expression of Arabidopsis p-hydroxyphenylpyruvate dioxygenase and VTE3 in transgenic corn kernels triples the tocopherol content (Naqvi et al., 2011). It is currently unknown whether it is possible to increase the content of PQ in plants through VTE3 overexpression.

Modification Enzymes of the UQ Benzene Quinone Ring
Compared with the modification of the PQ benzene quinone ring, the process of UQ aromatic ring modification is complex. It includes three methylations (two O-methylations and one C-methylation), three hydroxylations, and one decarboxylation. UQ biosynthesis in prokaryotes and eukaryotes is similar. The difference lies in the reaction order of the modifications. In eukaryotes, the proposed modifications start with hydroxylation, followed by O-methylation, decarboxylation, two additional hydroxylations, C-methylation and one additional O-methylation (Tran and Clarke, 2007), whereas the reaction order of modifications in prokaryotes is decarboxylation, three hydroxylations, O-methylation, C-methylation and then an additional O-methylation (Meganathan, 2001).

Hydroxylation
The modification of the UQ aromatic ring requires a total of three hydroxylations. The first hydroxylation of 3-polyprenyl-4-hydroxybenzoate in eukaryotes occurs before decarboxylation (Goewert et al., 1977). Two additional hydroxylations occur after decarboxylation in eukaryotes. Enzymes involved in hydroxylation are cytochrome P450 monooxygenases, known as COQ6 and COQ7 in yeast (Olson and Rudney, 1983). Genes encoding these enzymes have been identified from various organisms, such as S. cerevisiae (Marbois and Clarke, 1996; Kawamukai, 2000; Ozeir et al., 2011), rat (Jonassen et al., 1996), Caenorhabditis elegans (Ewbank et al., 1997), human (Vajo et al., 1999; Lu T.T. et al., 2013), and E. coli (Hajj Chehade et al., 2013). Homologs of the enzymes involved in UQ aromatic ring hydroxylations have also been found in various plants, such as Arabidopsis (Lange and Ghassemian, 2003), and in algae (Blanc et al., 2010, 2012). Arabidopsis contains a COQ6 homolog but lacks COQ7 (Hayashi et al., 2014). Although AtCOQ6 cannot complement the yeast coq6 disruptant, the addition of a mitochondrial targeting signal to AtCOQ6 enables the production of UQ10 (Hayashi et al., 2014). Except for AtCOQ6, the other plant COQ6 and COQ7 homologs were identified based on genome sequence searches and computational annotation, and their roles in UQ biosynthesis need to be further confirmed using experimental approaches.
Decarboxylation
The proposed modifications of the UQ aromatic ring include a decarboxylation step. It occurs before the three hydroxylation steps in prokaryotes, whereas in eukaryotes, decarboxylation occurs after a hydroxylation step and an O-methylation step. It has been shown that the ubiX and ubiD genes are involved in the decarboxylation of the UQ aromatic ring in bacteria (Meganathan, 2001; Gulmezian et al., 2007; Aussel et al., 2014). ubiX encodes a flavin mononucleotide (FMN)-binding protein with no decarboxylase activity detected in vitro (Gulmezian et al., 2007; Aussel et al., 2014), and UbiX proteins are metal-independent and require dimethylallyl-monophosphate as substrate. During the biosynthesis of UQ, UbiX acts as a flavin prenyltransferase, producing a flavin-derived cofactor required for the decarboxylase activity of UbiD (White et al., 2015). pad1 and fdc1 are two fungal genes related to bacterial ubiX and ubiD (Mukai et al., 2010; Lin et al., 2015; Payne et al., 2015). Although the Pad1 and Fdc1 proteins are homologous with UbiX and UbiD, respectively, and UbiX can activate Fdc1, they are not essential for UQ synthesis in yeast (Mukai et al., 2010; Lin et al., 2015). Thus, the decarboxylation step is largely unknown in eukaryotes. Feeding isolated mitochondria from potato tubers with 14C-labeled IPP, 4HB and S-adenosylmethionine shows the accumulation of methoxy-4-hydroxy-5-decaprenylbenzoate. This indicates the occurrence of decarboxylation after the first hydroxylation and subsequent O-methylation in plants; however, the intermediate methoxy-4-hydroxy-5-decaprenylbenzoate cannot be detected in potato tubers (Lütke-Brinkhaus et al., 1984; Lütke-Brinkhaus and Kleinig, 1987), and no plant genes or enzymes involved in the decarboxylation of the UQ aromatic ring have been identified. In addition to the enzymes mentioned above, various other enzymes, such as yeast COQ4, COQ8, and COQ9, are potentially involved in UQ benzene quinone ring modification, although their actual functions are currently unknown (Tran and Clarke, 2007). Moreover, it seems that the enzymes involved in modification of the UQ benzene quinone ring form a multi-subunit complex. The interaction among subunits guarantees the normal function of the enzymes.

PHYSIOLOGICAL FUNCTIONS OF PQ AND UQ IN PLANTS
Both PQ and UQ are functionally important electron transporters in plants. PQ is involved in the electron transport chain of oxygenic photosynthesis, whereas UQ works exclusively as an electron carrier in the aerobic respiratory chain (Cramer et al., 2011; de Dieu Ndikubwimana and Lee, 2014; Parmar et al., 2015). In addition to their basal functions in photophosphorylation and oxidative phosphorylation, PQ and UQ also play indispensable roles in plant growth and development through participating in the biosynthesis or metabolism of various important chemical compounds, acting as antioxidants, being involved in plant response to stress, and regulating gene expression and cell signal transduction.

Involved in Biosynthesis or Catabolism of Chemical Compounds
It has been shown that PQ and UQ are involved in the biosynthesis or metabolism of various important chemical compounds in plants. For instance, PQ participates in the biosynthesis of carotenoids (Norris et al., 1995), abscisic acid (ABA) (Rock and Zeevaart, 1991) and gibberellin (GA) (Nievelstein et al., 1995), whereas UQ is involved in branched-chain amino acid metabolism (Ishizaki et al., 2006; Araújo et al., 2010).
Carotenoids are C40 tetraterpenoids that function as accessory light-harvesting pigments in photosynthetic tissues. In non-photosynthetic tissues, such as fruits and flowers, high levels of carotenoids often bring intense orange, yellow and red colors (Pfander, 1992). During carotenoid biosynthesis, the phytoene desaturation reaction is a rate-limiting step. A certain quinone/hydroquinone balance is necessary for optimal phytoene desaturation (Mayer et al., 1990). In an anaerobic environment, the oxidized quinones rather than the reduced quinones are involved in the desaturation of phytoene (Mayer et al., 1990). pds1 and pds2 are two Arabidopsis mutants showing an albino phenotype (Norris et al., 1995). The mutations affect phytoene desaturation and cause accumulation of phytoene, but they do not reside in the phytoene desaturase enzyme itself. Analysis of pds1 and pds2 shows that pds1 is deficient in 4-hydroxyphenylpyruvate dioxygenase (Norris et al., 1998), whereas pds2 is deficient in HST, a critical enzyme involved in PQ biosynthesis (Tian et al., 2007). Both of these mutations lead to the absence of plastoquinone/tocopherol in different ways in Arabidopsis, providing conclusive evidence that PQ is an essential component in phytoene desaturation (Nievelstein et al., 1995; Norris et al., 1995). Since the plant hormone ABA is synthesized by oxidative cleavage of epoxy-carotenoids (Rock and Zeevaart, 1991), it is reasonable that PQ is also important for ABA biosynthesis. In a T-DNA insertion mutant of the HST gene (pds2-1), not only PQ but also carotenoid, ABA and GA3 levels are dramatically reduced (Chao et al., 2014). PQ works as the co-factor of phytoene desaturase and ζ-carotene desaturase and is the immediate electron acceptor in carotenoid and ABA biosynthesis (Chao et al., 2014). Disruption of the HST gene results in a decrease of PQ content, which subsequently affects carotenoid and ABA biosynthesis. On the other hand, the biosynthesis of PQ, carotenoids, ABA and GA shares the common precursor GGPP (Chao et al., 2014; Du et al., 2015; Zhang and Lu, 2016). In the pds2-1 mutant, expression of GA biosynthesis genes, such as GA1, GA2, and GA3, is significantly downregulated (Chao et al., 2014). Consistently, in the Arabidopsis phytoene desaturase gene mutant (pds3), gibberellin biosynthesis is impaired (Qin et al., 2007). This indicates that PQ may affect the biosynthesis of other chemical compounds through negative feedback regulation or other indirect mechanisms. The relationship between UQ and amino acids lies in two aspects: (1) the precursors of UQ biosynthesis are derived from amino acids, including phenylalanine and tyrosine; and (2) UQ is involved in the catabolism of some branched-chain amino acids in mitochondria. It is well known that the mitochondrion is an integration point of cellular metabolism and signaling. Amino acids are not only metabolized in peroxisomes but also broken down in mitochondria, which provide carbon skeletons for the biosynthesis of many important compounds, such as vitamins, amino acids, and lipids (Sweetlove et al., 2007). In Arabidopsis, leucine is catabolized to form isovaleryl-CoA in the mitochondrial matrix, and then the intermediate of the leucine catabolic pathway, isovaleryl-CoA, is dehydrogenated to 3-methylcrotonyl-CoA by isovaleryl-CoA dehydrogenase (IVD).
This process occurs on the matrix face of the inner mitochondrial membrane and an electron is transferred through the electron-transfer flavoprotein/electron-transfer flavoprotein:ubiquinone oxidoreductase (ETF/ETFQO) system, first to the flavoprotein and then to the flavoprotein:ubiquinone oxidoreductase (Ishizaki et al., 2006; Araújo et al., 2010). UQ is the final acceptor of electrons in the decomposition of leucine. Similarly, electrons produced during the catabolism of lysine can also be channeled to the mitochondrial electron transport chain (Araújo et al., 2010).
Act as Antioxidants and Involved in Plant Response to Stress
Plastoquinone and ubiquinone can scavenge free radicals to prevent lipid peroxidation, protein oxidation and DNA damage in plant responses to biotic and abiotic stresses. They exert antioxidant activity in their reduced forms, plastoquinol and ubiquinol, located in the chloroplast thylakoid membrane and the mitochondrial membrane, respectively. Analysis of the antioxidant effect of reduced PQ in isolated spinach thylakoid membranes showed that the reduced PQ acted as a scavenger of toxic oxygen species generated in the thylakoid membranes under strong illumination stress (Hundal et al., 1995). Reduced PQ inhibits lipid peroxidation and pigment bleaching, whereas oxidized PQ plays an opposite role (Hundal et al., 1995). In PQ-depleted spinach PSII membranes, exogenously added plastoquinol serves as an efficient scavenger of singlet oxygen (Yadav et al., 2010). Similarly, in C. reinhardtii cells, the level of reduced PQ markedly increased under high-light stress. When pyrazolate, an inhibitor of PQ and tocopherol biosynthesis, was added, the content of reduced PQ quickly decreased (Kruk et al., 2005). Further analysis of the turnover of plastoquinol showed that, due to the scavenging of singlet oxygen, reduced PQ underwent a high turnover rate under high-light conditions (Kruk and Trebst, 2008). Moreover, the redox state of the PQ pool was found to be an upstream master switch associated with programmed cell death in Arabidopsis leaves in response to excess excitation energy and may play a central role in the light acclimation of diatoms (Mühlenbock et al., 2008; Lepetit et al., 2013). PSII photoinhibition occurred as a consequence of a more reduced PQ pool (Darwish et al., 2015). In addition to abiotic stress, PQ is also involved in plant responses to biotic stress. When Solanum nigrum was treated with a pathogen Phytophthora infestans-derived elicitor, reactive oxygen species (ROS) production, lipid peroxidation and lipoxygenase activity were elevated (Maciejewska et al., 2002). These events were accompanied by a significant increase in the PQ level. The increase of the PQ level was more significant in plants growing in darkness than under continuous light. This suggests that PQ may be involved in maintaining a tightly controlled balance between the accumulation of ROS and antioxidant activity (Maciejewska et al., 2002). Mesembryanthemum crystallinum performs C3 and CAM carbon metabolism. Analysis of M. crystallinum plants infected with the pathogen Botrytis cinerea showed that the redox state of the PQ pool modifies the plant response to biotic stress and that the hypersensitive-like response is accelerated when the PQ pool is in the reduced state (Nosek et al., 2015). Ubiquinone is an obligatory element of mitochondrial functions in both animals and plants. The antioxidant activity of UQ has been extensively characterized in animals Tiano, 2007, 2010).
It prevents DNA damage and cell membrane lipid peroxidation through the elimination of ROS. Like PQ, UQ also has two forms: the reduced type (ubiquinol) and the oxidized one (ubiquinone), of which ubiquinol is the form exerting antioxidant activity. Overexpression of yeast coq2 (p-hydroxybenzoate polyprenyltransferase) in tobacco resulted in an increase of UQ in the transgenic lines. Transgenic lines with a higher UQ level showed greater tolerance to oxidative stresses caused by methyl viologen or high salinity. Analysis of suspension-cultured Chorispora bungeana cells showed that the redox transition of UQ plays key roles in the adaptation of cellular regulation under chilling stress (Chang et al., 2006). In addition, the redox state of UQ determines the levels of ROS and plays a key regulatory role in Arabidopsis basal resistance against bacterial pathogens and in the response to high oxidative stress environments (Dutta et al., 2015). Actually, the PQ and UQ pools play a dual role: (1) reducing O2 to superoxide by the semiquinone; and (2) reducing superoxide to hydrogen peroxide by the hydroquinone. In plant cells, the predominant ROS involved in plant defense include superoxide and hydrogen peroxide, which are distributed in different pools. Moreover, ROS are generated in two ways. One is elicited by external stresses, such as environmental stresses and biotic stresses. The other is production within the cell through metabolic processes, such as the electron transport chains in mitochondria and chloroplasts (Mubarakshina and Ivanov, 2010; Tripathy and Oelmuller, 2012). An unknown interaction may exist between different pools in the modulation of ROS generation and plant responses to stress.
Regulate Cell Signal Transduction and Gene Expression
Plastoquinone and ubiquinone may regulate cell signaling and gene expression indirectly through the generation of hydrogen peroxide, an important signaling molecule in plant resistance and cell metabolism. They can also directly regulate the expression of genes involved in cell metabolism. For instance, UQ10 influences the expression of hundreds of human genes involved in different cellular pathways. Among them, seventeen are functionally connected by the signaling pathways of G-protein coupled receptor, JAK/STAT, integrin and β-arrestin (Schmelzer et al., 2007). These UQ10-inducible genes possess a common promoter framework with binding domains of the transcription factor families EVII, HOXF, HOXC and CLOX (Schmelzer et al., 2007). Moreover, UQ10-mediated gene-gene networks are involved in inflammation, cell differentiation, and peroxisome proliferator-activated receptor signaling (Schmelzer et al., 2011). In plants, the Chl a/b-binding protein complex II (LHC II) and the NADPH dehydrogenase complex are two important protein complexes in photosynthesis. The cytochrome b6f-deficient mutant of Lemna perpusilla maintains a low level of the light-harvesting Chl a/b-binding protein complex II (LHC II) at low-light intensities (Yang et al., 2001). Inhibiting the reduction of the PQ pool increases the level of LHC II in the mutant at both low- and high-light intensities, whereas the level of LHC II is increased in wild-type plants only under high-light conditions (Yang et al., 2001). This suggests that the redox state of PQ is an important signal-transducing component in the plant photoacclimation process (Yang et al., 2001). Analysis of gene expression using DNA microarray technology showed that 663 genes were differentially expressed in A.
thaliana under low, medium, high and excessive irradiances, of which 50 genes were reverted by 3-(3,4-dichlorophenyl)-1,1-dimethylurea (DCMU), an inhibitor of the photosynthetic electron transport chain (Adamiec et al., 2008). This indicates that the expression of these 50 genes is regulated by the redox state of the PQ pool. Hierarchical clustering and promoter motif analysis showed that the promoter regions of PQ-regulated genes contain conserved cis-acting elements involved in signal transduction from the redox state of the PQ pool (Adamiec et al., 2008). An active NADPH dehydrogenase complex is necessary for cyclic electron transport in photosystem I (PSI) and respiration. In glucose-treated cells of the cyanobacterium Synechocystis sp. strain PCC 6803, NADPH dehydrogenase complex activity was inhibited and the cyclic PSI rate was decreased. In contrast, when the cells were treated with DCMU, the activity of NADPH dehydrogenase was significantly stimulated (Ma et al., 2008). Glucose treatment causes partial reduction of the PQ pool, whereas DCMU results in overoxidation. The differential responses of enzyme activity and cyclic PSI rate to glucose and DCMU treatments indicate that the redox state of the PQ pool controls the NADPH dehydrogenase complex activity and further influences cyclic PSI (Ma et al., 2008). In addition to PQ, UQ is also involved in signal transduction in plants. Comparative analysis of the hypersensitive tobacco Nicotiana tabacum L. variety Samsun NN treated with UQ10 and TMV and plants treated with TMV only showed that the UQ10- and TMV-treated tobacco had fewer lesions, less TMV, and greater changes of plant hormone levels, including a decrease of ABA and an increase of the IAA level (Rozhnova and Gerashchenkov, 2006). This indicates that UQ10 has a protective antiviral effect through controlling plant hormonal status (Rozhnova and Gerashchenkov, 2008). On the other hand, it has been reported that the ROS level generated by the UQ redox state is a threshold for a successful basal resistance response in plants (Dutta et al., 2015). In plant defenses, ROS act as signaling molecules directly, or mediate the generation of phytoalexins or serve as a source for the activation of further defenses indirectly (Kovtun et al., 2000; Thoma et al., 2003; Mur et al., 2008). Plants employ both pathogen-associated molecular pattern (PAMP)-triggered immunity (PTI) and effector-triggered immunity (ETI) in basal and R gene-mediated defense responses. ROS is induced rapidly and transiently and then mediates signaling during PTI and ETI (Frederickson Matika and Loake, 2013). The induction of ROS is considered a defining hallmark of pathogen identification and subsequent defense activation (Torres et al., 2006; Dutta et al., 2015). Moreover, UQ is involved in the mitochondrial glycerol-3-phosphate shuttle for redox homeostasis in plants (Shen et al., 2007) and participates in the mitochondrial permeability transition pore in cell metabolism (Amirsadeghi et al., 2007; Reape et al., 2007).
Utilization of PQ and UQ
Plastoquinone is specific to plants. It has not been directly utilized for humans. However, various synthesized PQ derivatives, such as SkQ1 (plastoquinonyl-decyl-triphenylphosphonium), SkQR1 (the rhodamine-containing analog of SkQ1) and SkQ3 (methylplastoquinonyl-decyl-triphenylphosphonium), have been reported to show antioxidant and protonophore activity (Skulachev et al., 2011). They are able to penetrate cell membranes and are potentially useful in anti-aging treatment (Anisimov et al., 2008; Obukhova et al., 2009).
SkQ1 is currently under clinical trials for glaucoma treatment in Russia (Iomdina et al., 2015). A phase 2 clinical trial indicates that SkQ1 is safe and efficacious in treating dry eye signs and symptoms (Petrov et al., 2016). In plants, SkQ1 and SkQ3 can retard the senescence of Arabidopsis rosette leaves and their death, increase the vegetative period, and improve the crop structure of wheat (Dzyubinskaya et al., 2013). In addition, SkQ1 effectively suppresses the development of p50-induced PCD in tobacco plants through inhibiting ROS production (Solovieva et al., 2013; Samuilov and Kiselevsky, 2015). The role played by SkQ1 and SkQ3 in cells is mainly based on their antioxidant activity. Compared with PQ, the practical application of UQ, particularly UQ10, has attracted more attention. UQ10 is effective in treating cardiovascular diseases, particularly in preventing and treating hypertension, hyperlipidemia, coronary artery disease and heart failure (Tran et al., 2001; Moludi et al., 2015). In the past decades, many studies have reported the remarkable clinical benefits of UQ10 and illuminated its antioxidant activity as the basis of pathology (Lee et al., 2012). Moreover, UQ10 controls energy metabolism and regulates cell death via redox signaling, indicating its potential in cancer treatment (Chai et al., 2010). Moderate UQ10 levels have a favorable impact on breast cancer (Folkers et al., 1993; Lockwood et al., 1994; Premkumar et al., 2008). In addition, UQ10 can enhance viral immunity and affect the development of AIDS (Folkers et al., 1988, 1991). UQ10 is also closely related to human reproductive health. Moderate UQ10 supplementation can effectively reduce the risk of spontaneous abortion and of developing pre-eclampsia in pregnant women (Palan et al., 2004; Teran et al., 2009). Exogenous administration of UQ10 can increase sperm cell motility and the mean pregnancy rate. Its positive role in the treatment of male infertility also relies on its antioxidant properties and bioenergetics (Balercia et al., 2004; Mancini et al., 2005; Balercia et al., 2009; Mancini and Balercia, 2011; Safarinejad, 2012). As a good immunomodulator, UQ10 has been used to treat chronic gingivitis and periodontitis (Chatterjee et al., 2012; Hans et al., 2012). A series of toothpastes containing UQ10 have been developed and are sold on the market. Other UQ10-containing chemical products include cleansers, cosmetic products, and healthy foods. Similar to the PQ derivatives, the role played by UQ in cells relies on its antioxidant activity. As an antioxidant, UQ effectively scavenges ROS and prevents ROS-induced damage to membrane lipids, DNA, and proteins.
Metabolic Engineering of UQ
Plastoquinone plays significant roles in plants; however, it is not directly used in human life. Moreover, PQ derivatives are mainly synthesized by chemical methods. Metabolic engineering of PQ and its derivatives is rarely reported. Differing from PQ, metabolic engineering of UQ, particularly UQ10, has been conducted in prokaryotes and eukaryotes, including bacteria, yeast and plants. Currently, the majority of commercially available UQ10 comes from yeast fermentation and chemical synthesis. Compared with microbial fermentation, chemical synthesis of UQ10 is more expensive and produces environmentally harmful waste products. Additionally, the scalability of both yeast fermentation and chemical synthesis is limited. Thus, plants are thought to be an attractive alternative source of UQ10.
The natural UQ producers, such as Agrobacterium tumefaciens, Paracoccus denitrificans, Rhodobacter sphaeroides, and their chemical mutants have been successfully used for commercial production of UQ; meanwhile, with increasing knowledge about the enzymes involved in UQ biosynthesis and the regulatory mechanisms modulating UQ production, opportunities have arisen for UQ metabolic engineering in other organisms. For instance, overexpression of some key genes, such as ubiA encoding p-hydroxybenzoate-polyprenyl pyrophosphate transferase, ispB encoding polyprenyl pyrophosphate synthetase and ubiCA, in E. coli can raise the UQ content to 3-4 times that of wild-type cells (Zhu et al., 1995; Jiang et al., 2006). Even so, UQ production using these methods does not meet industrial needs, which require a yield higher than 500 mg/L (Cluis et al., 2007). Therefore, in addition to a highly efficient microbial system, growth condition optimization and alteration of cellular regulatory mechanisms are important for UQ production (Sakato et al., 1992; Zhang et al., 2007a,b). Recently, multiple strategies have been employed to improve UQ production. A 180% increase of UQ8 content was achieved in E. coli (ΔmenA) through a comprehensive approach, including blocking the menaquinone pathway, coexpressing dxs-ubiA, and supplementing PYR and pHBA (Xu et al., 2014). The highest UQ10 titer and yield, 433 mg/L, was obtained in engineered E. coli through integrating dps into the chromosome of E. coli ATCC8739, modulating the dxs and idi genes of the MEP pathway and the ubiCA genes, and recruiting the glucose facilitator protein of Zymomonas mobilis to replace the native phosphoenolpyruvate:carbohydrate phosphotransferase system (PTS) (Dai et al., 2015). Metabolic engineering of UQ in plants has mainly concentrated on UQ10 production. Although UQ widely exists in plant cells, most cereal crops produce mainly UQ9. Tomato, Datura tatula and tobacco BY-2 cells can produce UQ10 naturally; however, the yield is very limited (Ikeda and Kagei, 1979; Ikeda et al., 1981; Matsumoto et al., 1981). It has been shown that overexpression of rate-limiting genes and an increase of UQ precursors can improve UQ production in plants. Expression of ddsA from Gluconobacter suboxydans in rice leads to efficient production of UQ10 in rice seeds (Sakiko et al., 2006, 2010). Since PPT plays the catalyzing role in a rate-limiting step of the UQ biosynthesis pathway, it is a significant target for UQ metabolic engineering in plants (Stiff, 2010; Parmar et al., 2015). For instance, expression of the yeast coq2 gene resulted in a sixfold increase of UQ10 levels in transgenic tobacco plants. Compared with wild-type plants, coq2 transgenic tobacco with high UQ10 levels is more resistant to oxidative stresses caused by methyl viologen or high salinity. Similarly, overexpression of AtPPT1 in tobacco increases the UQ10 content and enhances tolerance to oxidative stress caused by high NaCl (Stiff, 2010). An increase of UQ precursors, such as 4HB and/or PPS, may potentially improve UQ production in plants (Sommer and Heide, 1998; Viitanen et al., 2004). However, due to the complex relationships among precursors, UQ production and many other intersecting metabolic pathways, the expected goal of improving UQ production is difficult to achieve.
Further improvement of the UQ content in plant cells may be expected using comprehensive approaches (Kumar et al., 2012), such as improving the amount of UQ precursors combined with overexpression of rate-limiting genes.
CONCLUSION AND PERSPECTIVES
Plastoquinone and ubiquinone are two important compounds in plants. They function as electron transporters in the electron transport chain of oxygenic photosynthesis and the aerobic respiratory chain, respectively, and play indispensable roles in plant growth and development. UQ, particularly UQ10, has also been widely used in people's lives. Great efforts have been made to elucidate their biosynthetic pathways and the genes associated with PQ and UQ production. As shown in Figure 2 and Table 1, significant achievements have been made. However, there are still several issues related to the biosynthetic pathways, regulatory mechanisms and metabolic engineering that need to be addressed. Although a great number of studies have been conducted on the MEP and MVA pathways and on various enzymes, such as PPT and HST, involved in the attachment of the isoprenoid side chain to the benzoquinone ring, the pathways and enzymes involved in isoprenoid side chain elongation, UQ benzoquinone ring biosynthesis, and PQ and UQ benzoquinone ring modification are largely unknown. The key enzyme responsible for 4HB production has not been identified (Block et al., 2014). PPSs are a group of enzymes converting IPP and DMAPP to diphosphate precursors. Each PPS may play a different role in isoprenoid side chain elongation, which needs to be clarified. Although various enzymes involved in benzoquinone ring modifications have been identified in bacteria and yeast, few achievements have been made in plants. With more and more transcriptome and whole genome sequences available, gene network reconstruction becomes possible and can be used to address these problems. Non-coding RNAs, including small RNAs and long non-coding RNAs, play significant regulatory roles in many aspects of plants (Lu et al., 2005, 2007, 2008; Wang et al., 2015). Various microRNAs have been identified to be associated with secondary metabolism (Fan et al., 2015; Wei et al., 2015). However, the regulatory roles of non-coding RNAs in PQ and UQ biosynthesis have not been revealed. In addition to non-coding RNAs, transcription factors, such as MYB, WRKY and SPL, potentially play significant regulatory roles in PQ and UQ biosynthesis (Li and Lu, 2014a,b; Zhang et al., 2014; Li C. et al., 2015), which needs to be further demonstrated. Understanding the regulatory mechanisms of PQ and UQ is important for manipulating the content of PQ and UQ in plants. Metabolic engineering of UQ10 has been successfully performed in bacteria and yeast. However, these systems suffer from low yields and high production costs. UQ10 metabolic engineering in plants has various advantages and great perspectives, whereas current efforts are limited to a few plant species (Parmar et al., 2015). Increasing the UQ10 content in UQ10-producing plant species and engineering UQ10 in non-UQ10-producing plant species are two routes for UQ10 metabolic engineering in plants. With more and more genes involved in UQ10 biosynthesis and regulation being identified, great achievements may be expected for UQ10 production in plants.
AUTHOR CONTRIBUTIONS
Both authors listed have made substantial, direct and intellectual contributions to the work, and approved it for publication.
2017-05-05T05:38:30.009Z
2016-12-16T00:00:00.000
{ "year": 2016, "sha1": "10d0c5e9235f7c304284345b81ac9f714ad5ba6b", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2016.01898/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "10d0c5e9235f7c304284345b81ac9f714ad5ba6b", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Engineering" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
11993370
pes2o/s2orc
v3-fos-license
User Cooperation in Wireless Powered Communication Networks
This paper studies user cooperation in the emerging wireless powered communication network (WPCN) for throughput optimization. For the purpose of exposition, we consider a two-user WPCN, in which one hybrid access point (H-AP) broadcasts wireless energy to two distributed users in the downlink (DL) and the users transmit their independent information using their individually harvested energy to the H-AP in the uplink (UL) through time-division-multiple-access (TDMA). We propose user cooperation in the WPCN where the user which is nearer to the H-AP and has a better channel for DL energy harvesting and UL information transmission uses part of its allocated UL time and DL harvested energy to help to relay the far user's information to the H-AP, in order to achieve more balanced throughput optimization. We maximize the weighted sum-rate (WSR) of the two users by jointly optimizing the time and power allocations in the network for both wireless energy transfer in the DL and wireless information transmission and relaying in the UL. Simulation results show that the proposed user cooperation scheme can effectively improve the achievable throughput in the WPCN with desired user fairness.
I. INTRODUCTION
Energy harvesting has recently received a great deal of attention in wireless communication since it provides virtually perpetual energy supplies to wireless networks through scavenging energy from the environment. In particular, harvesting energy from far-field radio-frequency (RF) signal transmissions is a promising solution, which opens a new avenue for the unified study of wireless energy transfer (WET) and wireless information transmission (WIT), as radio signals are able to carry energy and information at the same time. There are two main paradigms of research along this direction. One line of work aims to characterize the fundamental trade-offs in simultaneous WET and WIT with the same transmitted signal in the so-called simultaneous wireless information and power transfer (SWIPT) systems (see e.g., [1]-[3] and the references therein). Another line of research focuses on designing a new type of wireless network termed the wireless powered communication network (WPCN), in which wireless terminals communicate using the energy harvested from WET (see e.g., [4]-[6]). In our previous work [6], we have studied a typical WPCN model, in which a hybrid access-point (H-AP) coordinates WET/WIT to/from a set of distributed users in the downlink (DL) and uplink (UL) transmissions, respectively. It has been shown in [6] that the WPCN suffers from a so-called "doubly near-far" problem, which occurs when a far user from the H-AP receives less wireless energy than a near user in the DL, but needs to transmit with more power in the UL to achieve the same communication performance due to the distance-dependent signal attenuation in both the DL and UL. As a result, unfair rate allocations among the users are incurred when the sum-throughput of the near and far users is maximized. In [6], we have proposed to assign different time to the near and far users in their UL WIT to solve the doubly near-far problem, which is shown to achieve fair rate allocations among the users in a WPCN. On the other hand, user cooperation is an effective way to improve the capacity, coverage, and diversity performance in conventional wireless communication systems.
Assuming constant energy supplies at user terminals, cooperative communication has been thoroughly investigated in the literature under various protocols such as decode-and-forward and amplify-and-forward (see e.g., [7], [8] and the references therein). Recently, cooperative communication has been studied in energy harvesting wireless communication and SWIPT systems in e.g. [9] and [10], respectively. However, how to exploit user cooperation in the WPCN to overcome the doubly near-far problem and further improve the network throughput and user fairness still remains unknown, which motivates this work. In this paper, we study user cooperation in the WPCN for throughput optimization. For the purpose of exposition, we consider a two-user WPCN, as shown in Fig. 1, where one H-AP broadcasts wireless energy to two distributed users with different distances in the DL, and the two users transmit their independent information using individually harvested energy to the H-AP in the UL through time-division-multiple-access (TDMA). To enable user cooperation, we propose that the near user, which has a better channel than the far user for both DL WET and UL WIT, uses part of its allocated UL time and DL harvested energy to first help to relay the information of the far user to the H-AP and then uses the remaining time and energy to transmit its own information. Under this protocol, we characterize the maximum weighted sum-rate (WSR) of the two users by jointly optimizing the time and power allocations in the network for both WET in the DL and WIT in the UL, subject to the given total time constraint. The achievable throughput gain in the WPCN by the proposed user cooperation scheme is shown both analytically and through simulations over the baseline scheme in [6] without user cooperation. The rest of this paper is organized as follows. Section II presents the system model of the WPCN with user cooperation. Section III presents the time and power allocation problem to maximize the WSR in the WPCN, and compares the solutions and achievable throughput regions with versus without user cooperation. Section IV presents more simulation results under practical fading channel setups. Finally, Section V concludes the paper.
II. SYSTEM MODEL
As shown in Fig. 1, this paper considers a two-user WPCN with WET in the DL and WIT in the UL. The network consists of one hybrid access point (H-AP) and two users (e.g., sensors) denoted by U1 and U2, respectively, operating over the same frequency band. The H-AP and the users are assumed to be each equipped with one antenna. Furthermore, it is assumed that the H-AP has a constant energy supply (e.g., battery), whereas U1 and U2 need to replenish energy from the received signals broadcast by the H-AP in the DL, which is then stored and used to maintain their operations (e.g., sensing and data processing) and also to communicate with the H-AP in the UL. We assume without loss of generality that U2 is nearer to the H-AP than U1, and hence denote the distance between the H-AP and U1, that between the H-AP and U2, and that between U1 and U2 as D10, D20, and D12, respectively, with D10 ≥ D20. We also assume that D12 ≤ D10 so that U2 can more conveniently decode the information sent by U1 than the H-AP, to motivate the proposed user cooperation to be introduced next.
Assuming that channel reciprocity holds between the DL and UL, the DL channel from the H-AP to user Ui and the corresponding reversed UL channel are both denoted by a complex random variable h̃i0 with channel power gain hi0 = |h̃i0|^2, i = 1, 2, which in general should take into account the distance-dependent signal attenuation and long-term shadowing as well as the short-term fading. In addition, the channel from U1 to U2 is denoted by a complex random variable h̃12 with channel power gain h12 = |h̃12|^2. If only the distance-dependent signal attenuation is considered, we should have h10 ≤ h12 and h10 ≤ h20 due to the assumptions of D10 ≥ D20 and D10 ≥ D12. Furthermore, we consider block-based transmissions over quasi-static flat-fading channels, where h10, h20, and h12 are assumed to remain constant during each block transmission time, denoted by T, but can vary from one block to another. In each block, it is further assumed that the H-AP has perfect knowledge of h10, h20, and h12, and U2 knows h12 perfectly. We propose to employ a harvest-then-transmit protocol similar to that in [6] for the two-user WPCN with user cooperation, as shown in Fig. 2. In each block, during the first τ0 T amount of time, 0 < τ0 < 1, the H-AP broadcasts wireless energy to both U1 and U2 in the DL with fixed transmit power P0. The far user U1 then transmits its information with average power P1 during the subsequent τ1 T amount of time in the UL, 0 < τ1 < 1, using its harvested energy, and both the H-AP and U2 decode the received signal from U1. To overcome the doubly near-far problem [6], during the remaining (1 − τ0 − τ1) T amount of time in each block, the near user U2 first relays the far user U1's information and then transmits its own information to the H-AP using its harvested energy, with average power P21 over τ21 T amount of time and with average power P22 over τ22 T amount of time, respectively, where τ21 + τ22 = τ2. Note that we have a total time constraint, given by (1). For convenience, we normalize T = 1 in the sequel without loss of generality. During the DL phase, the transmitted complex baseband signal of the H-AP in one block of interest is denoted by an arbitrary random signal x0 satisfying E[|x0|^2] = P0. The received signal at Ui, i = 1, 2, is then expressed as in (2), where y_r^(k) denotes the received signal at U_r during τk, with k ∈ {0, 1, 21, 22} and r ∈ {0, 1, 2} (with U0 denoting the H-AP here). In (2), zi denotes the received noise at Ui, which is assumed to be zi ∼ CN(0, σi^2), i = 1, 2, where CN(µ, σ^2) stands for a circularly symmetric complex Gaussian (CSCG) random variable with mean µ and variance σ^2. It is assumed that P0 is sufficiently large such that the energy harvested due to the receiver noise is negligible and thus is ignored. Hence, the amount of energy harvested by each user in the DL can be expressed as in (3) (assuming unit block time, i.e., T = 1), where 0 < ζi < 1, i = 1, 2, is the energy conversion efficiency at the receiver of Ui. After the DL phase, each user uses a fixed portion of its harvested energy, denoted by ηi, with 0 < ηi ≤ 1, i = 1, 2, for the UL WIT, i.e., transmitting its own information (by both U1 and U2) or relaying the other user's information (by U2 only) to the H-AP.
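The displayed equations (1)-(3) and (6) referenced in this system model are not reproduced in the text above. As a hedged reconstruction built only from the definitions given here (and from the relation P1 = η1 ζ1 P0 h10 τ0/τ1 quoted later from (4)), the harvest-then-transmit relations can be summarized as follows; the exact numbering and form in the original paper may differ.

```latex
% Hedged reconstruction of the harvest-then-transmit relations implied by the
% surrounding definitions; equation numbering in the original paper may differ.
\begin{align}
  \tau_0 + \tau_1 + \tau_{21} + \tau_{22} &\leq 1
    && \text{total time constraint with } T = 1 \\
  E_i &= \zeta_i\, h_{i0}\, P_0\, \tau_0, \quad i = 1,2
    && \text{energy harvested in the DL phase} \\
  P_1 &= \frac{\eta_1 E_1}{\tau_1} = \frac{\eta_1 \zeta_1 h_{10} P_0 \tau_0}{\tau_1}
    && \text{average UL transmit power of } U_1 \\
  \tau_{21} P_{21} + \tau_{22} P_{22} &\leq \eta_2 E_2 = \eta_2 \zeta_2 h_{20} P_0 \tau_0
    && \text{UL energy budget of } U_2
\end{align}
```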
Within the first τ1 amount of time allocated to U1, the average transmit power of U1 is given by P1 = η1 ζ1 h10 P0 τ0/τ1, as in (4). We denote x1 as the complex baseband signal transmitted by U1 with power P1, which is assumed to be Gaussian, i.e., x1 ∼ CN(0, P1). The received signals at the H-AP and U2 in this UL slot for U1 are expressed, respectively, as in (5), where z0 ∼ CN(0, σ0^2) denotes the receiver noise at the H-AP. During the last τ2 amount of time allocated to U2, the total energy consumed by U2 for transmitting its own information and relaying the decoded information for U1 should be no larger than η2 E2, i.e., τ21 P21 + τ22 P22 ≤ η2 E2, as in (6). We denote the complex baseband signals transmitted by U2 for relaying U1's information and transmitting its own information as x21 with power P21 and x22 with power P22, respectively, where x21 ∼ CN(0, P21) and x22 ∼ CN(0, P22). During the τ21 and τ22 amounts of time allocated to U2, the corresponding received signals at the H-AP can be expressed as in (7). Denote the time allocations to DL WET and UL WIT as τ = [τ0, τ1, τ21, τ22], and the transmit power values of U1 and U2 for UL WIT as P = [P1, P21, P22]. From [8], the achievable rate of U1 for a given pair of τ and P can be expressed from (5) and (7) as in (8), in terms of the achievable rates of the transmissions from U1 to the H-AP, from U2 to the H-AP, and from U1 to U2, respectively, which are given in (9)-(11). Furthermore, the achievable rate of U2 is expressed from (7) as in (12).
III. OPTIMAL TIME AND POWER ALLOCATIONS IN WPCN WITH USER COOPERATION
In this section, we study the joint optimization of the time allocated to the H-AP, U1, and U2, i.e., τ, and the power allocations of the users, i.e., P, to maximize the weighted sum-rate (WSR) of the two users. Let ω = [ω1, ω2], with ω1 and ω2 denoting the given non-negative rate weights for U1 and U2, respectively. The WSR maximization problem, denoted by (P1), is then formulated from (8)-(12), subject to the constraints in (1), (4), and (6). Notice that if we set τ21 = 0 and P21 = 0, then (P1) reduces to the special case of the WPCN without user cooperation studied in [6], i.e., the near user U2 only transmits its own information to the H-AP, but does not help the far user U1 by relaying its information to the H-AP. Note that (P1) can be shown to be non-convex in the above form. To make this problem convex, we change the variables as t21 = τ21 P21/(η2 ζ2 h20 P0) and t22 = τ22 P22/(η2 ζ2 h20 P0). Since P1 = η1 ζ1 P0 h10 τ0/τ1 as given in (4), R1(τ, P) and R2(τ, P) in (9)-(12) can be re-expressed as functions of t = [τ, t21, t22], where the time constraint (19) can be shown to be equivalent to the power constraint originally given in (6). It is worth noting that t21 and t22 denote the amount of time in the DL slot duration τ0 in which the energy harvested by U2 is later allocated to relay U1's information and to transmit its own information in the UL, respectively. By introducing the new variables t21 and t22 in t and the corresponding re-expressed rates, the joint time and power allocation in problem (P1) is converted to time allocation only in problem (P2). Proof: Due to the space limitation, the proof is omitted here, and is given in a longer version of this paper available online [11]. To summarize, one algorithm to solve problem (P1) is given in Table I.
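Since the closed-form rate expressions (8)-(12) and the algorithm of Table I are not reproduced here, the following is only a minimal numeric sketch of the kind of time-allocation search that (P1)/(P2) formalize. It assumes standard decode-and-forward relaying rates (U1's rate limited by what U2 can decode and by the sum of the direct and relayed contributions), splits U2's energy budget between its two slots with a simple heuristic rather than optimizing it jointly, and uses hypothetical channel values; it is not the authors' algorithm.

```python
import itertools
import numpy as np

# Hypothetical link/channel parameters (not taken from the paper).
P0, sigma2 = 1.0, 1e-9            # H-AP transmit power, receiver noise power
h10, h20, h12 = 1e-6, 1e-4, 1e-4  # channel power gains: U1-AP, U2-AP, U1-U2
eta1 = eta2 = 0.5                 # fraction of harvested energy used for UL
zeta1 = zeta2 = 0.5               # energy conversion efficiency
w1, w2 = 0.5, 0.5                 # rate weights

def rates(tau0, tau1, tau21, tau22):
    """Achievable rates under a standard decode-and-forward assumption."""
    if min(tau0, tau1, tau21, tau22) <= 0:
        return 0.0, 0.0
    # Harvested energy in the DL and the resulting UL transmit powers.
    E1 = zeta1 * h10 * P0 * tau0
    E2 = zeta2 * h20 * P0 * tau0
    P1 = eta1 * E1 / tau1
    # Heuristic: split U2's energy budget equally over relaying and own-data slots.
    P21 = 0.5 * eta2 * E2 / tau21
    P22 = 0.5 * eta2 * E2 / tau22
    r10 = tau1 * np.log2(1 + h10 * P1 / sigma2)    # U1 -> H-AP direct link
    r12 = tau1 * np.log2(1 + h12 * P1 / sigma2)    # U1 -> U2 (must be decodable by U2)
    r20 = tau21 * np.log2(1 + h20 * P21 / sigma2)  # U2 relays U1's data to the H-AP
    R1 = min(r12, r10 + r20)                       # decode-and-forward bound for U1
    R2 = tau22 * np.log2(1 + h20 * P22 / sigma2)   # U2's own data
    return R1, R2

# Coarse grid search over the time allocation (tau0, tau1, tau21, tau22) summing to <= 1.
grid = np.linspace(0.05, 0.85, 17)
best = (0.0, None)
for tau0, tau1, tau21 in itertools.product(grid, repeat=3):
    tau22 = 1.0 - tau0 - tau1 - tau21
    if tau22 <= 0:
        continue
    R1, R2 = rates(tau0, tau1, tau21, tau22)
    wsr = w1 * R1 + w2 * R2
    if wsr > best[0]:
        best = (wsr, (tau0, tau1, tau21, tau22))

print("best weighted sum-rate:", best[0], "at time allocation:", best[1])
```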
Fig. 3. Throughput region comparison for WPCN with versus without user cooperation.
Fig. 3 shows the achievable throughput regions of the two-user WPCN with user cooperation obtained by solving (P1) with different user rate weights, as compared to that of the baseline scheme in [6] without user cooperation, for different values of the path-loss exponent α. It is assumed that D10 = 10 m, and D12 = D20 = 5 m. The channel power gains in the network are modeled as hij = 10^-3 θij Dij^-α, ij ∈ {10, 20, 12}, for distance Dij in meters, with the same path-loss exponent α and 30 dB signal power attenuation for both users at a reference distance of 1 m, where θij represents the additional short-term channel fading. We ignore the effects of short-term fading in this case by setting θ10 = θ20 = θ12 = 1, to focus on the effect of the doubly near-far problem due to distance-dependent attenuation only. Moreover, it is assumed that P0 = 30 dBm and the bandwidth is 1 MHz. The AWGN at the receivers of the H-AP and U2 is assumed to have a white power spectral density of −160 dBm/Hz. For each user, it is assumed that η1 = η2 = 0.5, ζ1 = ζ2 = 0.5, and Γ = 9.8, assuming that uncoded quadrature amplitude modulation (QAM) is employed with a target bit-error rate (BER) of 10^-7 [14]. From Fig. 3, it is observed that the throughput region of the WPCN with user cooperation is always larger than that without user cooperation, which is expected as the latter case only corresponds to a suboptimal solution of (P1) in general. Let δ denote the ratio of R(wc)1,max to R(nc)1,max, with R(wc)1,max and R(nc)1,max denoting the maximum achievable throughput of the far user U1 in the WPCN with and without user cooperation, respectively. It is then inferred from Fig. 3 that δ = 1.33, 1.92, and 3.60 when α = 2, 2.5, and 3, respectively, which implies that user cooperation in the WPCN is more beneficial in improving the far user's rate as α increases, i.e., when the doubly near-far problem is more severe. This is because the achievable rate for the direct link from U1 to the H-AP decreases more significantly than that of the other two links with increasing α. Next, Fig. 4 compares the achievable throughput regions of the WPCN with versus without user cooperation with α = 2. In this case, the H-AP and the two users are assumed to lie on a straight line with D20 = κD10 and D12 = (1 − κ)D10, 0 < κ < 1. It is observed that when κ is not large (i.e., κ ≤ 0.7), R1,max decreases with decreasing κ. This is because when the near user U2 moves farther away from the far user U1 (and thus closer to the H-AP), the degradation of the U1-to-U2 link rate with decreasing κ is more significant than the improvement of the U2-to-H-AP link rate under the optimal time allocations t*. On the other hand, when κ is larger than a certain threshold (e.g., κ = 0.9), R(wc)1,max decreases with increasing κ, since in this case not only the far user U1 but also the relatively nearer user U2 suffers from significant signal attenuation from/to the H-AP. Finally, Fig. 5 shows the optimal time allocations in t* for (P2) when R1(t*) = R2(t*), i.e., when the common-throughput [6] is maximized, with α = 2 and κ = 0.3, 0.5, 0.7. It is observed that τ*1 decreases but both τ*21 and τ*22 increase with increasing κ. This is because when the near user U2 moves farther away from the H-AP, U2 suffers from more severe signal attenuation as κ increases, and thus it is necessary to allocate more time to U2 for both transmitting its own information and relaying information for U1 in order to maximize the common throughput.
Fig. 6. Maximum common-throughput versus P0 with α = 2 and κ = 0.5.
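To make the simulation setup above concrete, the short sketch below evaluates the stated channel model hij = 10^-3 θij Dij^-α with θij = 1 and converts the dBm quantities to linear units; the printed values (received and harvested power per unit of WET time, noise power over 1 MHz) are simple consequences of those stated parameters, not results taken from the paper.

```python
import numpy as np

# Parameters as stated in the simulation setup (distances in meters).
alpha = 2.0
D = {"10": 10.0, "20": 5.0, "12": 5.0}
theta = {k: 1.0 for k in D}                                # short-term fading ignored here
h = {k: 1e-3 * theta[k] * D[k] ** (-alpha) for k in D}     # h_ij = 1e-3 * theta_ij * D_ij^-alpha

P0_dBm = 30.0
P0 = 10 ** (P0_dBm / 10) / 1e3                             # 30 dBm -> 1 W
bandwidth = 1e6                                            # 1 MHz
noise_psd_dBm = -160.0                                     # dBm/Hz
noise_power = 10 ** (noise_psd_dBm / 10) / 1e3 * bandwidth # W over the full band
zeta = 0.5                                                 # energy conversion efficiency

# DL received power and harvested power per unit of WET time at each user,
# illustrating the doubly near-far effect (the far user U1 harvests less).
for user, key in (("U1", "10"), ("U2", "20")):
    p_rx = h[key] * P0
    print(f"{user}: channel gain = {h[key]:.2e}, received power = {p_rx * 1e6:.2f} uW, "
          f"harvested power = {zeta * p_rx * 1e6:.2f} uW")
print(f"noise power over 1 MHz = {noise_power:.2e} W")
```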
IV. SIMULATION RESULTS
In this section, we compare the maximum common throughput in the WPCN with versus without user cooperation under a practical fading channel setup, while the other system parameters are set similarly as for Figs. 3 and 4. The short-term fading in the network is assumed to be Rayleigh distributed, and thus θ10, θ20, and θ12 in the previously given channel models are exponentially distributed with unit mean. Fig. 6 shows the maximum average common-throughput versus the transmit power of the H-AP, i.e., P0 in dBm, with α = 2 and κ = 0.5. It is observed that the maximum common-throughput in the WPCN with user cooperation is notably larger than that without user cooperation, especially when P0 becomes large. This result shows the effectiveness of the proposed user cooperation in the WPCN to further improve both the throughput and user fairness as compared to the baseline scheme in [6] with optimized time allocation only but without user cooperation. Fig. 7 shows the maximum average common-throughput versus different values of κ with P0 = 30 dBm. It is observed that the maximum common-throughput in the WPCN with user cooperation is always larger than that without user cooperation. Furthermore, the common-throughput in the WPCN with user cooperation first increases with κ, but decreases with increasing κ when κ is larger than a certain threshold. The threshold value of κ that maximizes the average common-throughput of the WPCN with user cooperation is observed to increase with α.
V. CONCLUSION
This paper studied a two-user WPCN in which user cooperation is jointly exploited with resource (time, power) allocation to maximize the network throughput and yet achieve the desired user fairness by overcoming the doubly near-far problem. We characterized the maximum WSR in the WPCN with user cooperation via a problem reformulation and by applying tools from convex optimization. By comparing the achievable throughput regions as well as the maximum common-throughput in the WPCN with versus without user cooperation, it is shown by extensive simulations that the proposed user cooperation is effective in improving both the throughput and user fairness. In future work, we will extend the results of this paper to other setups, e.g., when there are more than two users, alternative relaying schemes are applied, and/or other performance metrics are considered.
Fig. 7. Maximum common-throughput versus κ with P0 = 30 dBm and α = 2, 2.5, 3.
2014-04-07T03:30:05.000Z
2014-03-27T00:00:00.000
{ "year": 2014, "sha1": "0de32e4fc09b3c18853d21f7710b804d046fa801", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1403.7123", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0de32e4fc09b3c18853d21f7710b804d046fa801", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
264554565
pes2o/s2orc
v3-fos-license
Transglutaminase Type 2-MITF axis regulates phenotype switching in skin cutaneous melanoma
Skin cutaneous melanoma (SKCM) is the deadliest form of skin cancer due to its high heterogeneity that drives tumor aggressiveness. Melanoma plasticity consists of two distinct phenotypic states that co-exist in the tumor niche, the proliferative and the invasive, respectively associated with a high and low expression of MITF, the master regulator of melanocyte lineage. However, despite efforts, melanoma research is still far from exhaustively dissecting this phenomenon. Here, we discovered a key function of Transglutaminase Type-2 (TG2) in regulating melanogenesis by modulating MITF transcription factor expression and its transcriptional activity. Importantly, we demonstrated that TG2 expression affects melanoma invasiveness, highlighting its positive value in SKCM. These results suggest that TG2 may have implications in the regulation of the phenotype switching by promoting melanoma differentiation and impairing its metastatic potential. Our findings offer potential perspectives to unravel melanoma vulnerabilities via tuning intra-tumor heterogeneity.
INTRODUCTION
Melanoma is the deadliest sub-type of skin cancer, mainly due to its high metastatic potential. Although immunotherapies and MAP-kinase-targeted drugs have widened the treatment options of metastatic patients, the development of resistance and tumor recurrence is still limiting the clinical benefits of these novel approaches [1]. The main process that influences the ability to evade therapeutic treatments and fosters tumor aggressiveness is melanoma intra-tumor heterogeneity [2]. Based on gene signature profiling, melanoma has been divided into two prevalent phenotypically distinct sub-populations of cells that co-exist in bulk tumor tissues: the proliferative and the invasive states [3][4][5][6]. The proliferative motif is characterized by rapidly proliferating cells and high expression of the microphthalmia-associated transcription factor (MITF), the master regulator of pigmentation. Conversely, a melanoma cell needs to push invasiveness at the expense of proliferation to drive metastasis formation, increasing cellular plasticity and stem-like features. These signatures distinguish the invasive state, which is associated with low expression of MITF [1,[7][8][9][10]. MITF has been extensively studied in melanoma cells, where its transcriptional activity modulates target genes involved in pigmentation, such as TYR, MC1R, DCT, and MLANA [11]. Noticeably, MITF is not only relevant for melanogenesis, as it exerts numerous functions in melanoma cell homeostasis by modulating proliferation, migration, immunosuppression, and many other cancer hallmarks [12]. The balancing of MITF expression is complex and tightly restrained, exhibiting both transcriptional and post-translational control [13]. In this setting, MITF's upstream regulators (SOX10, PAX3, EDNRB and CREB) are known as drivers of the invasive-to-proliferative switch, while MITF's downstream targets (MLANA, PMEL, DCT, TYRP) work as markers of the proliferative signature [14].
The high mutational burden of melanoma distinguishes this type of cancer as one of the most immunogenic, making it a suitable target for anti-cancer immunotherapy. However, immunotherapy-based approaches represent a relatively late discovery for melanoma. Nevertheless, a significantly higher success rate of this type of treatment in combination with chemotherapy, radiotherapy, or targeted molecular therapy has been observed, even if it is associated with a panel of side effects that can affect one or more organs and may limit its use [15]. Transglutaminase type-2 (TG2) is the most well characterized and studied member of a family of eight isoenzymes (TG1-7 and coagulation factor XIII) that catalyse the crosslinking between glutamine and lysine residues on peptides or proteins. TG2 is ubiquitously expressed and its localization within cells includes all cellular districts, being found in the nucleus, cytosol, mitochondria, endoplasmic reticulum, and extracellular environment [16]. However, beside this primary function, TG2 also displays a wide variety of different activities, like deamidase, GTPase, isopeptidase, adapter/scaffold, protein disulfide isomerase, kinase, hypusination regulation, and serotonylation activities [17]. Thanks to its multifunctionality, TG2 is involved in many cellular processes, like cell growth and differentiation, cell death, autophagy, inflammation, macrophage phagocytosis, tissue repair, fibrosis and wound healing, and ECM assembly and remodelling [18]. The TG2 level of expression is sensitive to changes in physiological conditions, since the TGM2 gene (which encodes TG2) is regulated by several agents and stimuli, like apoptotic signals, viral infections, ER stress, hypoxia, inflammation, and cancer-activated pathways. As several molecules impact TGM2 activation, many signalling pathways are involved in its regulation and consequent behaviour. For instance, the TGM2 promoter region contains various responsive elements able to induce or inhibit TG2 expression during inflammation and hypoxia, two key oncogenic processes characterizing the tumour microenvironment (TME) [19]. However, the role of TG2 in cancer is still controversial and far from being fully elucidated, since it has been reported as both a potential tumour-suppressor and a tumour-promoting factor [20]. Certainly, the function of TG2 is tissue-specific [20]: in particular, by analysing public transcriptomic databases, we recently demonstrated a correlation between TG2 expression, good SKCM overall survival, and a positive regulation of the immune response in SKCM [21]. These results indicate that TG2 expression might serve as a good prognostic factor in patients with SKCM, as well as a biomarker for the therapeutic strategy to be adopted [21]. In the present work, we gained new insight into the function of TG2 in cutaneous melanoma. We discovered that TG2 expression is required during the process of melanoma pigmentation by modulating MITF expression and activity. In turn, we have shown that TG2 expression is also related to a reduced capacity of melanoma cells to form metastasis both in vitro and in vivo, highlighting that TG2 is involved in melanoma invasion and could be associated with the phenotype switching of melanoma cells. These findings could help to better understand the intra-tumor variability of cutaneous melanoma, unveiling novel vulnerabilities and offering unexplored treatment perspectives for the cure and improvement of the prognosis of metastatic patients.
TG2 positively correlates with better prognosis in SKCM patients only
Transglutaminase Type-2 (TG2) is a multifunctional enzyme that is reported to be involved in all the stages of carcinogenesis. Its impact on oncogenic processes is still highly debated, as it has been described both as a tumor-promoting and a suppressor factor [20]. To fill this knowledge gap, we analyzed the effect of TG2 expression in 32 histotypes of tumors in the TCGA dataset by taking advantage of the GEPIA software [22]. Among all the cancer types, Kaplan-Meier analysis revealed that TG2 expression correlates with patient prognosis in only 5 out of 32 tumors (Fig. 1a-f). In particular, a high TG2 expression is correlated with a poorer survival rate in LUSC (Lung squamous cell carcinoma) (p = 0.002) (Fig. 1b), GBM (Glioblastoma multiforme) (p = 0.0027) (Fig. 1c), KIRC (Kidney renal clear cell carcinoma) (p = 0.029) (Fig. 1d), and LGG (Brain Lower Grade Glioma) (p = 0.0035) (Fig. 1e). These data are in agreement with previous literature [23][24][25][26]. Interestingly, the only tumor type in which TG2 has a positive value on Overall Survival is SKCM (Skin Cutaneous Melanoma) (p = 0.0043) (Fig. 1a). Hazard Ratio analyses obtained using the Survival Genie software [27] confirmed that TG2 has a positive prognostic role in SKCM primary tumors only (Fig. 1g). These results are in line with our previous research, in which we outlined a novel promising positive clinical value for TG2 in skin cancer [21], though the molecular mechanism behind TG2 function was not fully elucidated.
TG2 ablation impairs melanoma pigmentation and increases its invasiveness
We decided to employ CRISPR/Cas9 technology to generate TG2 knock-out cell lines and to consequently identify differentially expressed genes and the molecular pathways that regulate and/or are regulated by TG2 ablation. For this purpose, we took advantage of a multi-omics approach, combining data from Proteomics and RNAseq analyses (Fig. 2a). At first, we generated single clones from a CRISPR/Cas9-mediated TG2 KO B16F10 cell line by adapting the protocol of F. Ann Ran et al., 2013 [28] (Fig. S2a). We used pairs of gRNAs targeting the promoter region of the gene, including the 5'UTR and the transcription initiation site, to prevent both the transcription and translation of TG2 (Fig. S2c). This strategy was adopted as the 3′ TG2 coding region partially overlaps with one of the isoforms of the RPRDIP gene, so that the complete deletion of TGM2 would generate off-target effects (Fig. S2b). Through this procedure, we were able to obtain significant TG2 protein and mRNA KO in two independent clones, named "TG2 KO 1" and "TG2 KO 2" (Fig. 2b, c), which were also validated by PCR (Fig. S2d, e) and selected for further experiments. Subsequently, we proceeded with a Mass Spectrometry analysis to determine the differences in the Proteomic profiles of the TG2 KO clones against the wild-type B16F10 line. Proteins classified as "hits" (FDR < 5%, Fold Change of at least 100%) and "candidates" (FDR < 20%, Fold Change of at least 50%) were taken into consideration. Overall, we found 50 proteins whose expression was downregulated in the TG2 KO 1, 54 in the TG2 KO 2, and 22 downregulated proteins common to both clones. Concerning the upregulated targets, 51 proteins were found in TG2 KO 1, 96 in TG2 KO 2, and 69 in common (Fig. 2d). A comprehensive list of both up- and downregulated Proteomic targets is reported in Supplementary Tables 1-4.
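The TCGA survival comparison described above was run through the GEPIA and Survival Genie web tools. As an illustration of the underlying workflow (quartile stratification, Kaplan-Meier estimation with a log-rank test, and a univariate Cox model for the hazard ratio), here is a minimal sketch using the Python lifelines package on a purely hypothetical dataframe; the column names, values, and cut-off handling are assumptions, not the authors' pipeline.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient data: TG2 (TGM2) expression, overall survival in months,
# and an event flag (1 = death observed, 0 = censored).
df = pd.DataFrame({
    "tg2_expr":  [5.1, 2.3, 7.8, 1.2, 6.4, 3.3, 8.9, 0.9, 4.4, 7.1],
    "os_months": [48, 12, 60, 9, 30, 20, 72, 6, 26, 55],
    "event":     [0, 1, 0, 1, 1, 1, 0, 1, 1, 0],
})

# Stratify into high (top 25%) and low (bottom 25%) expression groups, mirroring the
# Cutoff-High = 25% / Cutoff-Low = 75% setting used on GEPIA.
hi = df[df.tg2_expr >= df.tg2_expr.quantile(0.75)]
lo = df[df.tg2_expr <= df.tg2_expr.quantile(0.25)]

# Kaplan-Meier estimates for the two strata.
km_hi = KaplanMeierFitter().fit(hi.os_months, hi.event, label="TG2 high")
km_lo = KaplanMeierFitter().fit(lo.os_months, lo.event, label="TG2 low")
print(km_hi.median_survival_time_, km_lo.median_survival_time_)

# Log-rank test between the two groups.
res = logrank_test(hi.os_months, lo.os_months, hi.event, lo.event)
print("log-rank p-value:", res.p_value)

# Univariate Cox proportional-hazards model; exp(coef) is the hazard ratio,
# analogous to the forest-plot values retrieved from Survival Genie.
cph = CoxPHFitter().fit(df, duration_col="os_months", event_col="event")
print(cph.summary[["exp(coef)", "p"]])
```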
To infer the biological significance of TG2 in our model, we performed Gene Ontology (GO) enrichment analysis on the list of the identified Proteomics targets. In Figs. 2e, g we report the first 15 statistically significant downregulated (Fig. 2e) and upregulated (Fig. 2g) Biological Process categories in the TG2 KO clones. In particular, by taking into consideration the union of the downregulated targets in both clones, we found that the main affected Biological Processes in the TG2 KO cell lines are linked to the process of melanogenesis (Fig. 2e-S3a). Interactome analyses conducted on the downregulated targets with STRING and clustered using the ClusterONE software on Cytoscape demonstrated that several targets are relevant for the differentiation of melanocytes and endocytosis, like Kit [29], Dct [30], Tyr [31], Pmel [32], Rab27a [33], Rab38 [34], and Sytl2 [35] (Fig. 2f-S3b). These targets are mainly distributed in melanosomes (the cellular organelles responsible for the synthesis of melanin [36]) and their membranes, in the pigment granules, and in the vesicle systems that are necessary for the transport of melanin, as reported in the GOs of Cellular Components (CC) (Fig. S3c). Interestingly, alterations in these proteins are found in Hair Hypopigmentation (HP) (Fig. S3c).
Fig. 1 Analysis of the TG2 significant clinical value in TCGA cancer datasets. a-e Overall survival based on TG2 expression level in SKCM (Skin cutaneous melanoma), LUSC (Lung squamous cell carcinoma), GBM (Glioblastoma multiforme), KIRC (Kidney renal clear cell carcinoma), and LGG (Brain lower grade glioma) was obtained through Kaplan-Meier analysis by sorting samples into high and low TG2 expression groups according to the quartile (Cutoff-High=25%; Cutoff-Low=75%) on GEPIA. Percent survival was plotted, and p-values are shown as per figure specification. f Schematic representation of the impact of TG2 expression on LUSC, GBM, KIRC, LGG, and SKCM. TG2 expression is a worse prognostic signature in LUSC, GBM, KIRC, and LGG (represented in blue), whereas it has a positive clinical value in SKCM only (in red). g Forest plot showing the detailed table of the Univariate Cox-Regression Survival Analysis of TG2 expression in LGG, LUSC, GBM, KIRC, and SKCM, retrieved using the Survival Genie software. The plot shows the hazard ratio and 95% confidence intervals associated with the two considered groups of patients (high and low expression of TG2), along with Wald test and log-rank p-values. Cut-off values applied to the two subsets of patients and the sample number in each group are also shown. To assess the Hazard Ratio (HR) based on TG2 expression in LGG, GBM, KIRC, LUSC, and SKCM primary tumors, we used the Cutp option for the cut-off establishment (the cut-point is estimated based on martingale residuals using the survMisc package to stratify patients into high and low groups). Squares represent the Hazard Ratio (HR), while the horizontal lines depict the upper and lower limits of the HR 95% confidence interval (an arrow pointing to the upper limit indicates that the interval is higher than the maximum shown). Confidence Interval (CI). Likelihood Ratio (LR). Negative significant prognostic values are represented in blue squares, while positive associations in red.
On the other hand, considering the Biological Processes associated with the upregulated targets, numerous categories referring to chemotaxis, cell adhesion, and remodeling of the cytoskeleton that supports cell movement have been identified (Fig. 2g).
2g). Reflecting their function, these proteins are mainly localized at the level of cell junctions, on cell protrusions, in the cytoskeleton, and in focal adhesions (Fig. S3e).

To further investigate the role of TG2 in melanoma, we decided to accompany the Proteomics data with an RNAseq-based transcriptomic profiling of our model. Considering the similarity in the Proteomic profiles of the two TG2 KO clones, we decided to conduct the RNAseq analysis only on the TG2 KO 2 clone with respect to the WT. In agreement with the Proteomics, RNAseq analyses revealed among the significantly downregulated GO terms the "phenol-containing compound biosynthetic process", "secondary metabolic processes", and "secondary metabolite biosynthetic process" (Fig. 2i). These categories include transcripts whose expression is necessary for the pigmentation process, such as Cited1 [13], Snca [37], Slc24a5 [38], and the aforementioned Tyr and Pmel (Fig. S3d). At the same time, the most upregulated categories of transcripts in the TG2 KO are involved in the positive regulation of chemotaxis and in cell junction regulation (Fig. 2l). Among them, we found Sparc [39], Cd47 [40], Plec [41], Anxa1 [42], Anxa3 [43], and Anxa5 [44] (Fig. S3f-h), genes strongly involved in the metastatic processes of melanoma. A large part of these up- or downregulated transcripts is present among the targets identified in the Proteomics of both KO clones (Tables 1-4). As a further confirmation, some transcripts identified as differentially expressed with respect to the WT line were validated by qRT-PCR analyses on both KO clones (Fig. S4a-f).

These results suggest that TG2 can modulate targets involved in the pigmentation and migration capacity of melanoma cells, two key processes for melanoma plasticity.

Lack of TG2 affects melanin synthesis in vitro and in vivo
The hallmark of differentiated melanocytes and melanoma cells, which are derived from the neural crest, is the presence of melanin pigment, which strongly impacts the mechanical abilities of melanoma cells to spread [45]. Given the omics results, we hypothesized that TG2 could play a role in melanoma differentiation. To validate this hypothesis, we verified the pigmenting capacity of the TG2 KO clones with respect to the WT cells, adapting the protocol by Skoniecka et al., 2021 [46]. To induce melanogenesis, cells were cultured in a melanin-precursor enriched medium (DMEM: L-Tyr = 72 mg/l, Phe = 66 mg/l) instead of the normal growth medium of B16F10 (MEM: L-Tyr = 52 mg/l, Phe = 32 mg/l) (Fig. 3a). Also, DMEM without phenol red was used for melanin quantification to prevent any interference with melanin absorbance measurements [47]. After subjecting the cells to the induction of pigmentation, we observed that only the WT cells can synthesize melanin, which gives a brown color to the pellet, whereas the pellets of the two KO clones remain white (Fig. 3b). Also, only the WT medium acquires the typical brown color due to melanin release (Fig. 3b). The quantification of the extracellular melanin content demonstrates that the KO clones exhibit a severe alteration in the ability to secrete pigment. However, no significant alterations in intracellular melanin levels were observed (Fig. 3c). The typical dendritic morphology and dark pigmentation that distinguish the differentiated state of melanoma cells were observed in the pigmenting WT condition only (Fig.
3d). Also, the number of melanin granules quantified at the electron microscope is reduced by approximately 50% in the TG2 KO condition (Fig. 3e).

Melanin is formed through the activity of several melanogenesis-related proteins such as tyrosinase (Tyr) and dopachrome tautomerase (DCT, Trp-2) [48]. In this regard, the expression of Tyr and Dct was found to be strongly and significantly reduced in the TG2 KO clones both at the protein and at the transcript level (Fig. 3f, Fig. S4a). Melan-A, also known as MART-1 (Melanoma Antigen Recognized by T cells 1), is a protein found both in the melanosomes and in the endoplasmic reticulum, which aids in the processing and transportation of PMEL (pre-melanosome protein), a key factor in the creation of melanosomes [49]. Here, we found a reduction of Melan-A at the protein level of approximately 40% for the TG2 KO 1 clone and 60% for the TG2 KO 2 clone (Fig. 3f).

Intriguingly, TG2 levels in the B16F10 WT cells significantly increase by a 3.5-fold change at the protein level (Fig. 3g and S5c) and a 4-fold change at the mRNA level (Fig. 3h) following pigmentation induction. Importantly, the pigmentation markers Tyrosinase and DCT were reduced, both at the protein (Fig. 4a) and mRNA level (Fig. 4b), in the human melanoma cell lines Mel JuSo, IPC-298 and SK-MEL-3 following downregulation of TG2 by siRNA. Altogether, these results demonstrate that TG2 is induced when the pigmentation cascade is triggered in melanoma.

In addition, to corroborate our findings, we checked the impact of TG2 expression on melanogenesis in two in vivo models. At first, we evaluated the development of melanophores in zTg2b morphant Danio rerio embryos. The injection of 0.1 pmol of antisense TG2 morpholino resulted, in 48 hpf zebrafish embryos, in an evident defect in pigmentation, similar to an albino-like phenotype, as well as a significant reduction in the number of melanophores per larva (Fig. 4c). Also, we performed a histological analysis to compare the epidermis of C57BL/6 WT mice to that of TG2 KO mice. Under physiological conditions, interfollicular melanin-producing melanocytes display melanin granules distributed along the dendritic structures of the cells, as indicated by the black arrows (Fig. 4d, panel C). On the contrary, in the skin from TG2 KO mice, in which no morphological alterations were observed compared to WT mice (Fig. 4d, panels A-B), melanin granules were mostly restricted to the cell body, in the perinuclear area (Fig. 4d, panel D), suggesting an impairment in melanin secretion.

Taken together, these data support our hypothesis that Transglutaminase type 2 expression is involved and required in the process of melanogenesis and that this mechanism is also conserved in vivo.

Fig. 2 Generation and multi-omics characterization of TG2 knock-out melanoma B16F10 cells. a Schematic representation of the employed strategy. B16F10 TG2 KO clones were generated by means of the CRISPR/Cas9 genomic editing tool. After obtaining the clones, they were subjected along with the B16F10 WT cell line to Proteomics profiling and RNA-seq analyses. b Immunoblot analyses showing the obtained TG2 KO clones, namely TG2 KO 1 and TG2 KO 2. Actin was used as loading control. c TGM2 expression evaluated by qRT-PCR analysis in B16F10 WT cells and TG2 KO clones (number of independent biological replicates = 8). Statistical significance was calculated with One-Way ANOVA and specified with asterisks (**** p < 0.0001). Data are represented as mean ± SEM.
d Venn diagrams showing the differentially expressed proteins from the comparative Proteomic analyses of the TG2 KO clones. Comparisons were divided into up- and downregulated Proteomics targets (hit and candidate proteins). Areas of overlap indicate shared protein targets. Statistically significant targets were defined based on adj. p < 0.05 (adjusted p < 0.05). Proteins were annotated as "hits" with FDR < 5% and a fold change of at least 100%, and as "candidates" with FDR < 20% and a fold change of at least 50%. e Bar plot representative of the GO enrichment analyses of the top 11 downregulated Biological Processes (BPs). Bar color represents the adj. p-value (dark blue = most significant). Bar lengths refer to the proportion of enriched proteins for each term. f Heat map of the comparative proteomic analysis of the melanogenesis-related proteins, generated using the pheatmap R package. g Bar plot representative of the GO enrichment analyses of the top 15 upregulated Biological Processes (BPs). Bar color represents the adj. p-value (dark blue = most significant). Bar lengths refer to the proportion of enriched proteins for each term. h Heat map of the comparative proteomic analysis of the migration and adhesion proteins, generated using the pheatmap R package. Dot plots representing the downregulated (i) and upregulated (l) GO Biological Process analyses performed on significantly differentially expressed genes (DEGs) obtained from RNAseq profiling of the TG2 KO 2 clone (FDR < 0.01). Bubble colors represent the adj. p-value (red = most significant). The rich factor refers to the proportion of enriched genes for each term.

TG2 interacts with MITF enabling its nuclear translocation
Considering the significant downregulation of the melanogenesis-related genes and the consequent albino-like phenotype of the TG2 KO mutants in vitro and in vivo, we hypothesized that TG2 could play a role in the regulation of the MITF transcription factor. MITF is the master regulator of cell differentiation in melanocytes and melanoma. The activation of more than 100 genes required to regulate changes in cellular programs depends on MITF transcriptional activity [12]. In turn, MITF is regulated by numerous cellular pathways (Fig. S6a), on whose modulation the process of melanogenesis ultimately depends. Thus, we decided to investigate the molecular bases behind the observed phenotype induced by loss of TG2 by evaluating the impact of TG2 expression on the MITF-activating pathways.

The α-melanocyte-stimulating hormone (α-MSH) is an endogenous peptide hormone of the melanocortin family that binds to the melanocortin-1 receptor (MC1R) on melanocytes to activate the transcription of the MITF gene via the PKA signaling cascade [50]. In 2014, Kim and colleagues reported that TG2 is required for the α-MSH mediated activation of melanin biosynthesis in human melanoma [51]. By contrast, we observed that upon stimulation with α-MSH the TG2 KO clones, as well as the WT line, are able to correctly synthesize and secrete melanin (Fig. S6b). Moreover, the usual dendritic shape of differentiated melanocytes is visible in the KO condition (Fig. S6b). Thus, we excluded that TG2 expression may have an impact on melanogenesis via the α-MSH cascade.

Furthermore, we did not detect significant alterations in any of the other known pathways that lead to the activation of MITF. In fact, neither the MAPK ERK1/2-p38 nor the canonical Wnt signaling cascade seems to be affected by loss of TG2 (Fig.
S6c). Also, treatment with the CHIR99021 activator of the canonical Wnt signaling pathway can induce pigmentation in the TG2 KO clones (Fig. S6d), corroborating the idea that this pathway is also not affected by loss of TG2.

After excluding the involvement of TG2 in the canonical mechanisms that lead to melanogenesis, we hypothesized a direct regulation of the MITF transcription factor by TG2. Consistent with the downregulation of melanogenesis-related genes, ablation of TG2 leads to a reduction of MITF mRNA levels, both in B16F10 melanoma TG2 KO clones (Fig. 5a) and in the human melanoma cell lines Mel JuSo, IPC-298 and SK-MEL-3 following downregulation of TG2 by siRNA (Fig. 5b). However, we did not detect significant alterations in MITF protein levels (Fig. 5c). This result could be explained by the fact that, being a transcription factor, MITF has a high turnover within melanocytes. Having found no significant differences in MITF protein levels, we wondered whether there was an alteration in its transcriptional activation. To address this issue, we first performed a cell fractionation assay to study MITF subcellular localization and to check whether an impairment of its nuclear translocation may have occurred. By cyto-nuclear fractionation, we observed that MITF significantly accumulates in the nuclear fraction of B16F10 WT pigmenting cells (Fig. 5d). This is in line with what is expected, since during pigmentation MITF becomes transcriptionally active to induce the synthesis of the melanogenesis-related genes. Intriguingly, MITF does not accumulate at the nuclear level following the induction of pigmentation in the TG2 KO cells, suggesting that loss of TG2 may impair its nuclear translocation and activation. These data could explain the downregulation of the expression of the melanogenesis genes and the subsequent loss of pigmentation in both TG2 KO cells (Fig. 5d).

Interestingly, we noticed that upon induction of pigmentation, TG2 is also found in B16F10 WT nuclei (Fig. 5d). In this regard, we recently demonstrated that TG2 can play a scaffold role by binding to β-catenin and allowing its transport to the nucleus, contributing to the regulation of Wnt signaling [52]. Given the colocalization between TG2 and MITF in the pigmenting B16F10 WT nuclear fraction, we hypothesized that TG2 could also play a shuttle function to regulate MITF nuclear transport and the subsequent activation of the melanogenesis-related genes. To check this hypothesis, we addressed the presence of a direct interaction between TG2 and MITF both by Proximity Ligation Assay (PLA) and by Co-IP. Consistent with our hypothesis, PLA analyses demonstrated that a co-localization between TG2 and MITF occurs in B16F10 WT cells and significantly increases during melanogenesis (Fig. 5e). This result was further confirmed by the Co-IP assay between TG2 and MITF on the nuclear enrichment of the pigmented B16F10 WT cells (Fig. S7).

Overall, these data suggest that TG2 ablation could disrupt the correct nuclear translocation of MITF during pigmentation, which in turn leads to an impairment in melanin production in the TG2 KO clones.

TG2 ablation leads to increased invasiveness both in vitro and in vivo
Besides its role as master regulator of pigmentation, MITF is central to the control of melanoma plasticity and heterogeneity, a process named "phenotype switching", introduced by Hoek et al.
in 2008 [53]. According to the phenotype switching model, there exist two main programs between which melanoma can interconvert, namely the differentiated/proliferative and the undifferentiated/invasive [54].

Given the loss of pigmentation (Figs. 3-4) and the low levels of nuclear MITF (Fig. 5), as well as the increase in metastatic and invasive markers that characterize the TG2 KO clones (Fig. 2g, l), we speculated that TG2 could play a role in regulating the transition between melanoma plasticity signatures. For instance, TG2 KO clones display a significant downregulation of Cited1 mRNA levels, a marker of the proliferative state (Fig. S4b). Loss of Cited1 is correlated with reduced MITF expression and worse prognosis for patients with primary SKCM [13]. Also, AXL, a member of the TAM tyrosine kinase receptor family, plays a central role in the mesenchymal motif by regulating cell proliferation, EMT, migration, and immune responses in melanoma cells. The expression of AXL is inversely related to that of MITF, so that MITF high /AXL low (proliferative) and MITF low /AXL high (invasive) populations contribute mostly to intratumor heterogeneity in melanoma and, thereby, to resistance to therapy [55]. In line with this, we found a significant upregulation of AXL mRNA levels in both TG2 KO clones (Fig. S3i). In addition, TG2 KO clones display a significant increase of the EMT marker FN1, both at the mRNA (Fig. S4c) and protein (Fig. S3j) levels.

Given these premises, we hypothesized that TG2 expression could prevent the onset of the mesenchymal state of melanoma cells. Thus, we proceeded with the evaluation of the impact of TG2 expression on tumor growth in vitro and in vivo. As reported in Fig. 6A, in vitro analysis of the clonogenic potential showed a significant increase in the number of TG2 KO colonies (about double those formed starting from WT). By contrast, no significant changes were found in proliferation levels between the three cell lines (Fig. 6B). The discrepancy between these two results can be explained by the very definition of "proliferation assay" and "colony formation assay", two techniques that highlight different aspects of tumor growth. Indeed, the colony formation assay is based on the ability of a single cell to grow into a colony, undergoing unlimited divisions [56]. Conversely, the proliferation rate concerns the capability of the entire population to increase its number by continuously doubling. Furthermore, we found no differences in the growth, volume, and weight of primary tumors injected in C57BL/6 WT mice (Fig. 6C-E). Noticeably, the primary tumors deriving from the TG2 KO 1 and TG2 KO 2 clones are also de-pigmented compared to the counterparts deriving from the B16F10 WT cells (Fig. 6C).

To measure the invasive capacity of the clones in vivo, we injected tumor cells into the tail vein of the mice and followed the formation of lung experimental metastases. In line with our hypothesis, individual KO experimental metastases appear larger and paler than the WT ones (Fig. 6F). Given the difficulty in quantifying white metastases from KO clones, we decided to take advantage of immunohistochemical staining to precisely quantify the metastatic area covering the lungs. In accordance with our hypothesis, immunofluorescence analysis with specific labeling for melanoma cells (anti-Melan-A staining) revealed that the area covered by TG2 KO experimental metastases in the lungs is larger than that formed after injection of WT cells (Fig. 6G).

Overall, these data suggest that TG2 expression has a positive role in preventing and regulating melanoma invasive capability, by supporting the differentiated/proliferative state and consequently reducing metastasis formation.

Fig. 3 TG2 expression is required for melanogenesis in B16F10 cells. a Schematic representation of the employed pigmentation induction protocol, adapted from Skoniecka et al., 2021 [46]. b Pictures of B16F10 WT and TG2 KO cell pellets (upper panel) and media color (bottom panel) showing differential melanin (dark color) retainment and secretion between the samples. c Quantitative analyses of extracellular and intracellular melanin content, expressed in (µM)/(µg/cells/mL). Extracellular and intracellular melanin content was normalized on each well protein content. B16F10 WT was used as control during statistical analysis (number of independent biological replicates = 6). d Morphological analysis of B16F10 WT and TG2 KO clones following pigmentation induction by optical microscopy. Melanin granules are indicated with black arrows. Cellular shape is highlighted in blue, red, and green in WT, TG2 KO 1, and TG2 KO 2, respectively. Particularly, in the B16F10 WT sample, cells acquire the typical differentiated dendritic shape with protrusions. Conversely, B16F10 TG2 KO cells maintain the typical melanoma spindle-like shape. Scale bar = 200 μm. e Transmission electron microscopy (TEM) images of ultrathin sections of B16F10 WT and TG2 KO cells showing melanin-containing granules in the cytosol, with relative granules-per-cell quantification. A higher magnification is reported in the right part of the panel (scale bars indicated in the pictures). Melanin granules (black) are enriched in the perinuclear area of the WT cell line. Isolated and dispersed fewer granules are visible in the KO condition. f Immunoblot analyses and relative densitometry of melanogenesis-related targets (Melan-A, Tyrosinase, and DCT) in B16F10 WT, TG2 KO 1, and TG2 KO 2 cells. Vinculin was used as loading control (number of independent biological replicates = 5). Immunoblot analysis (g) and relative mRNA levels quantified by qRT-PCR analysis (h) of TG2 expression in WT samples, following (PIGM.) or not (N.PIGM.) pigmentation. β-actin was used as a loading control in both immunoblot (number of independent biological replicates = 3) and qRT-PCR (number of independent biological replicates = 5). Statistical analyses of three or more groups were performed with One-Way ANOVA. Two-way ANOVA with Bonferroni's test was used to compare the data with two variables. Statistical significance is specified with asterisks (* p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001). Data are represented as mean ± SEM.
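For reference, the sketch below illustrates in R the kind of One-Way and Two-way ANOVA comparisons reported in the figure legends above. The data frame values are placeholders, and the Bonferroni-adjusted pairwise tests shown are only one possible implementation; the original analyses may have been run in dedicated statistics software.

```r
# Illustrative sketch of the statistical comparisons described in the legends
# (One-Way ANOVA across cell lines; Two-way ANOVA with Bonferroni correction).
# All values below are placeholders, not measured data.
df <- data.frame(
  line      = factor(rep(c("WT", "TG2_KO_1", "TG2_KO_2"), each = 6)),
  condition = factor(rep(rep(c("N.PIGM", "PIGM"), each = 3), times = 3)),
  value     = c(1.0, 1.1, 0.9, 3.4, 3.6, 3.5,    # WT
                1.0, 0.9, 1.1, 1.2, 1.0, 1.1,    # TG2 KO 1
                1.1, 1.0, 0.9, 1.1, 1.2, 1.0)    # TG2 KO 2
)

# One-way ANOVA across the three cell lines (single condition)
one_way <- aov(value ~ line, data = subset(df, condition == "PIGM"))
summary(one_way)

# Two-way ANOVA (cell line x pigmentation) with Bonferroni-adjusted pairwise tests
two_way <- aov(value ~ line * condition, data = df)
summary(two_way)
pairwise.t.test(df$value, interaction(df$line, df$condition),
                p.adjust.method = "bonferroni")
```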
DISCUSSION Melanoma progression, metastasis formation, and therapy resistance are related to the capacity of cells to switch from a differentiated towards a dedifferentiated/invasive phenotype.In many cancers, the acquisition of invasive hallmarks is a process called "epithelial to mesenchymal transition" (EMT).In melanoma, the plasticity between proliferative and invasive states is defined as "phenotype-switching".The first, which has also been described as "differentiated", "epithelial-like", and often "therapy-sensitive", is characterized by a hyper-activation of MITF (MITF high ) that supports a strong degree of cellular differentiation and proliferation.Conversely, the invasive phenotype defined as "undifferentiated/dedifferentiated", "mesenchymal-like", and often "therapy-resistant", is characterized by low levels of MITF (MITF low ) which lead to an increase in metastatic potential at the expense of cell proliferation [1,7,8,10]. The regulation of the phenotypic plasticity of melanoma is very complex: only in recent years cancer research has attempted to better define its functioning, which has led to the identification of numerous factors that take part in such process.For instance, the antithetical expression between receptor tyrosine kinase AXL and MITF has led to the definition of the proliferative phenotype as MITF high /AXL low , and the invasive one as MITF low /AXL high [55].Another relevant modulator of SKCM plasticity is represented by Cited1, a non-DNA binding transcriptional co-regulator whose expression can distinguish the "proliferative" from "invasive" signature, so that loss of Cited1 is correlated with reduced MITF expression and with a worse outcome [13].However, we are still far from understanding all the factors that take part in the regulation of this intricate process. In this study, we identified a new player in the management of melanoma heterogeneity, namely Transglutaminase type-2 (TG2).TG2 is a multifunctional enzyme, whose role in cancer is controversial and tissue dependent [20].Still, the part played by TG2 in EMT processes has already been highlighted in several contexts [20].To shed light on the controversies regarding its role in cancer disease, we performed bioinformatics analyses on public cancer datasets which unveiled that the prognostic role of TG2 is generally negative, except for SKCM, the only tumor type in which TG2 expression is associated with a positive prognosis, as we already mentioned in Muccioli et al., 2022 [21].In the same article, we reported that TG2 expression was upregulated in metastatic samples compared to primary tumors.This apparent discrepancy between the level of TG2 expression in primary and metastatic melanoma and its prognostic role in patient survival can be explained by taking two key elements into consideration: I) the melanoma RNAseq data contained in the TCGA database refer to bulk tumors, in which the expression of genes by not only tumor cells but also all stromal components is taken into account, including endothelial cells, fibroblasts, CAFs, etc.This means that the level of expression of a gene can be influenced not only by the tumor but also by the cumulative effect of all the components of the tumor nest [57].II) In Muccioli et al., 2022 we speculated that the beneficial role of TG2 expression in melanoma is also due to an effect of recruitment and activation of the immune system which, in this way, could favor a positive response to therapeutic treatment [58]. 
In the present work, we tried to further dissect the role of TG2 in SKCM. First, we ablated its expression in a commonly used murine melanoma model. Our analyses performed on the generated model unveiled the involvement of TG2 in the regulation of two processes determining the phenotypic status of melanoma. On the one hand, we showed that TG2 is required for pigmentation and that its function is conserved in animal models. In 2014, Kim and colleagues suggested TG2 involvement in the regulation of melanogenesis in human melanoma cells, without delving into the molecular mechanism [51]. According to the phenotype switching model [53], pigmentation is a hallmark discriminating the melanoma differentiated state, as melanoma cells acquiring the invasive signature decrease the expression of melanogenesis markers, which strongly impact the mechanical ability of melanoma cells to spread [45]. Also, since melanoma is the only tumor type capable of pigmenting, this result could partially explain why the positive role of TG2 is exerted in SKCM only.

Fig. 6 Characterization of TG2 KO melanoma tumorigenic potential in vitro and in vivo primary tumors and metastatic formations. A Colony formation assay with quantification of the number of colonies per sample (number of independent biological replicates = 5). The number of colonies was assessed with ImageJ (One-Way ANOVA, * p < 0.05). B Growth curve comparing the proliferation rate (expressed in growth percentage/hours) between B16F10 WT and the two TG2 KO clones. Parental B16F10 and Cas9-transfected cells displayed the same proliferation rate (not shown; number of independent biological replicates = 3). C-E Analysis of in vivo primary tumor growth from C57BL/6 WT orthotopic mouse models after injection with the indicated cell lines. Four animals were injected for each group. Excised tumors are reported (C), showing a difference in pigmentation attributable to the different melanin content of WT and TG2 KO clones. Tumors were measured daily to assess the growth volume (D) and were weighed after the excision (E). F Analysis of lung experimental metastasis formation in C57BL/6 mice induced by B16F10 WT and TG2 KO tail vein injection. A picture of the front and back of mice lungs is reported for each condition. White arrows point to the experimental metastatic processes. G Multiplex IHC on lung experimental metastasis tissue of C57BL/6 mice induced by B16F10 WT and TG2 KO tail vein injection, and relative tumor area quantification. Melanoma cell infiltration in tissues was visualized by anti-Melan-A staining (in yellow). DNA was stained with DAPI (in blue). Multiple 4 µm sections from 4 mice per condition were used for the statistical analyses. Statistical analyses of three or more groups were performed with One-Way ANOVA. Two-way ANOVA with Bonferroni's test was used to compare the data with two variables. Statistical significance is specified with asterisks (* p < 0.05). Data are represented as mean ± SEM.
On the other hand, we demonstrated that loss of TG2 expression leads to an increase in the invasive capacity and in the extracellular environment remodeling that characterize the melanoma invasive state [53]. These findings correlate with the increase in the number of lung metastases found in TG2 KO melanoma-injected mice compared to the WT line. In addition, the loss of Cited1 expression, accompanied by an increase in the EMT hallmark Fibronectin 1, and the antithetical expression of AXL and MITF that we found in the TG2 KO model collectively support the idea of a possible participation of TG2 in the modulation of melanoma heterogeneity via phenotype switching.

To shed light on the molecular mechanisms behind our findings, we focused our attention on MITF. Indeed, both melanogenesis and invasive capabilities depend on MITF levels and activation in melanoma [53].

Particularly, we observed that during pigmentation TG2 translocates into the nucleus of melanoma cells following an increase in its expression levels. Furthermore, we demonstrated that during this process TG2 binds MITF to facilitate its nuclear translocation, acting as a protein scaffold and indirectly favoring its transcriptional activity (Fig. 7). This shuttle/scaffold function of TG2 between the cytosol and the nucleus has already been observed for other proteins that need to reach the nucleus to exert their activity [59]. For instance, we recently demonstrated that TG2 can support the nuclear localization of β-catenin, a key player of the canonical Wnt signaling pathway. As observed with MITF, TG2 can physically interact with β-catenin [52]. Although one of the best-studied pathways involved in melanoma cell plasticity is represented by the canonical Wnt signaling-dependent up-regulation of MITF expression [60], this effect has not been observed in our experiments, since β-catenin expression was not depleted in the TG2 KO clones (Fig. S6c). In addition, increased nuclear TG2 can bind to HIF1β, thus decreasing the hypoxia-responsive element-dependent upregulation of pro-apoptotic proteins and thereby protecting neuronal cells from hypoxia-induced death in ischemia and stroke [61,62]. Moreover, TG2 can associate with and mediate the nuclear translocation of the p65 subunit of NF-κB [63], the receptor VEGFR-2 [64], and HSF-1 [65].
Even though nuclear TG2 comprises only a small proportion (around 5-7%) of the total cellular amount, it has attracted increasing interest because of its great importance in modulating various cellular processes. Indeed, being associated with the euchromatin [66], even minor changes in nuclear TG2 levels and/or its activities may result in significant effects on gene regulation, and thereby on cellular responses, during the pathogenesis of diseases and their treatment.

Given the growing understanding of the regulatory networks that drive phenotype switching and its role in melanoma metastasis and therapy resistance, one strategy is to block or reverse the invasive switch by directing melanoma cells towards the more therapy-sensitive proliferative/melanocytic state [67]. Considering our work, tuning of cytosol-to-nucleus TG2 translocation could offer new perspectives to address melanoma vulnerabilities. Although more studies are needed to better evaluate the potential of TG2 in SKCM, we believe that our work may pave the way for identifying novel winning strategies to target melanoma phenotype switching and sensitize this tumor to treatments.

MATERIALS AND METHODS
Ethics approval and consent to participate
Zebrafish (animals and embryos) were maintained according to standard rules and procedures (https://zfin.org).

Kaplan-Meier curves and Hazard Ratio assessment
Kaplan-Meier curves were retrieved using GEPIA (gene expression profiling interactive analysis) (http://gepia.cancer-pku.cn/). GEPIA is an interactive web application for gene expression analysis based on 9736 tumors and 8587 normal samples [22]. GEPIA was used to analyze the expression of TG2 in 32 different histotypes of tumors and its effects on survival rate by means of the Kaplan-Meier analysis tool. We divided samples between high and low TG2 expression groups according to the quartile TG2 mRNA levels to analyze overall survival (Log2FC cutoff: 1; p-value cutoff: 0.01, group cutoff selected median, cutoff-high (%): 25; cutoff-low (%): 75).

To assess the Hazard Ratio (HR), the Survival Genie software was employed (https://bbisr.shinyapps.winship.emory.edu/SurvivalGenie/). Survival Genie is an open-source tool that contains 53 datasets of 27 distinct malignancies from 11 different cancer programs related to adult and pediatric cancers [27]. The tool provides results with a univariate Cox proportional hazards model. To assess the HR based on TG2 expression in LGG, GBM, KIRC, LUSC, and SKCM primary tumors, we used the Cutp option for the cut-off establishment (the cut-point is estimated based on martingale residuals using the survMisc package to stratify patients into high and low groups).
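A minimal R sketch of the survival analyses described above is given below, using the survival package. The cohort is simulated placeholder data, the quartile stratification mirrors the GEPIA cut-offs, and GEPIA/Survival Genie perform the equivalent computations (including the Cutp-based cut-point) server-side, so this is an illustration rather than the published pipeline.

```r
library(survival)

set.seed(1)
# Placeholder data standing in for a TCGA-like cohort (not real patient data):
df <- data.frame(
  tg2_expr = rnorm(200),                 # expression values (arbitrary units)
  os_time  = rexp(200, rate = 0.02),     # overall survival time (months)
  os_event = rbinom(200, 1, 0.6)         # 1 = death, 0 = censored
)

# Quartile stratification as in the GEPIA setting (high = top 25%, low = bottom 25%)
hi <- df$tg2_expr >= quantile(df$tg2_expr, 0.75)
lo <- df$tg2_expr <= quantile(df$tg2_expr, 0.25)
km_df <- rbind(transform(df[hi, ], group = "TG2 high"),
               transform(df[lo, ], group = "TG2 low"))

fit <- survfit(Surv(os_time, os_event) ~ group, data = km_df)
survdiff(Surv(os_time, os_event) ~ group, data = km_df)   # log-rank p-value
plot(fit, col = c("red", "blue"), xlab = "Months", ylab = "Overall survival")

# Univariate Cox proportional hazards model for the hazard ratio
cox <- coxph(Surv(os_time, os_event) ~ group, data = km_df)
summary(cox)   # reports HR, 95% CI, Wald and likelihood-ratio tests
```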
Generation of TG2 KO B16F10 clones using the CRISPR/Cas9 technology
The CRISPR/Cas9 approach followed the guidelines described by Ran and colleagues [28] using the pSpCas9(BB)-2A-Puro (PX459) plasmid (Addgene Plasmid #48139, Zhang Lab). The sgRNA probes were designed to excise the portion of the TGM2 gene that includes the transcription initiation site (TSS), the translation initiation site, and all of exon 1. Particularly, the sgRNAs used are listed in Table 1. Single guides were ligated into the PX459 vector at the BbsI (NEB) restriction site under the U6 promoter and verified by sequencing. Different pairs of single guides were used to transfect cells. sgRNAs 1 and 2 map upstream of the TSS, while sgRNA 3 maps downstream of TGM2 exon 1. Pair 1 + 3 was used to generate the TG2 KO 2 clone and creates a deletion of approximately 264 base pairs, while pair 2 + 3 was used to generate the TG2 KO 1 clone and creates a deletion of 397 bp.

For B16F10 wild-type cell line transfection, the TransIT®-LT1 transfection reagent (Mirus) was used. Briefly, cells were seeded in a 6-well plate. After 24 h, cells were transfected with 1.25 µg of plasmid DNA containing the gRNAs. Puromycin 2.5 µg/ml (Gibco) was added to the cell culture medium 48 h after transfection and kept for 96 h. After selection, cells were serially diluted to obtain single-cell clones. Clones were expanded and cells were collected and centrifuged for 5 min at 600 g. After removing the media, DNA was extracted from the pellet using the MyTaq Extract-PCR kit (Meridian Bioscience) following the manufacturer's instructions. A control PCR was performed with the primers reported in Table 2.

To test TG2 expression in WT vs KO clones, qRT-PCR analysis was employed. Briefly, 500,000 cells were seeded in 6-well plates. One day after seeding, total RNA was extracted using the Qiagen RNeasy Mini Kit (Qiagen, #74104). Genomic DNA was digested following the manufacturer's instructions. 2 µg of RNA were retrotranscribed using the SuperScript IV Reverse Transcriptase (Thermofisher, #18091050) following the manufacturer's instructions. cDNA was diluted 1:20 and amplified in qRT-PCR using SYBR Green PCR Master Mix (Thermofisher, #4309155). Actin was used as housekeeping control. CT values were first normalized with respect to the housekeeping genes (ΔCT) and next compared to the control sample (ΔΔCT). The relative normalized expression is indicated in the figures. The primers listed in Table 3 were used for qRT-PCR amplification.

As further validation, B16F10 KO clones were also tested through Western Blot analyses. Vinculin, β-actin, GAPDH or Lamin A/C were used as loading controls. TG2 protein level was normalized on the B16F10 WT cell line.
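As a small worked example, the ΔCT/ΔΔCT normalization described above can be written as follows. The Ct values in the example call are placeholders chosen for illustration only.

```r
# Minimal sketch of the 2^-ddCt relative quantification described above.
ddct_rel_expr <- function(ct_target, ct_housekeeping,
                          ct_target_ctrl, ct_housekeeping_ctrl) {
  dct      <- ct_target - ct_housekeeping            # normalize to the housekeeping gene
  dct_ctrl <- ct_target_ctrl - ct_housekeeping_ctrl  # same for the control sample
  2^-(dct - dct_ctrl)                                # relative normalized expression
}

# Example with placeholder Ct values: a target gene in a KO clone vs the
# B16F10 WT control, with actin as housekeeping gene
ddct_rel_expr(ct_target = 32.0, ct_housekeeping = 17.5,
              ct_target_ctrl = 24.0, ct_housekeeping_ctrl = 17.4)
# ~ 0.004, i.e. a strong reduction relative to the WT control
```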
Mass spectrometry Sample preparation SP3 and TMT labeling, OASIS.For the Proteomics sample preparation, two technical replicates and three biological replicates for each sample condition (B16F10 WT, TG2 KO 1, and TG2 KO 2) were collected.Briefly, 5 × 10 5 cells were seeded in a 6-well plate.The following day, cells reaching 90% confluence were washed and proteins extracted using the LUC lysis buffer (25 mM Tris, 2.5 mM EDTA, 10% glycerol, 1% NP-40) supplemented with DTT and protease inhibitors.Samples were centrifuged 15 min at 15,000 g, and supernatant was subjected to Benzonase (Cat# 9025654, Sigma-Aldrich) treatment (30 min at 37 °C) to remove nuclei acids.The reduction of disulphide bridges in cysteinecontaining proteins was performed with dithiothreitol (56 °C, 30 min, 10 mM in 50 mM HEPES, pH 8.5).Reduced cysteines were alkylated with 2-chloroacetamide (room temperature, in the dark, 30 min, 20 mM in 50 mM HEPES, pH 8.5).Samples were prepared using the SP3 protocol [68] and trypsin (sequencing grade, Promega) was added in an enzyme to protein ratio 1:50 for overnight digestion at 37 °C.The peptides were labelled with TMT11plex [69] Isobaric Label Reagent (ThermoFisher) according to the manufacturer's instructions.Samples were combined for the TMT11plex and for further sample clean up an OASIS® HLB µElution Plate (Waters) was used.Off-line high pH reverse phase fractionation was carried out on an Agilent 1200 Infinity high-performance liquid chromatography system, equipped with a Gemini C18 column (3 μm, 110 Å, 100 × 1.0 mm, Phenomenex). LC-MS/MS acquisition.An UltiMate 3000 RSLC nano LC system (Dionex) fitted with a trapping cartridge (µ-Precolumn C18 PepMap 100, 5 µm, 300 µm i.d.x 5 mm, 100 Å) and an analytical column (nanoEase™ M/Z HSS T3 column 75 µm x 250 mm C18, 1.8 µm, 100 Å, Waters) were used.The trapping was carried out with a constant flow of trapping solution (0.05% trifluoroacetic acid in water) at 30 µL/min onto the trapping column for 6 min.Subsequently, the peptides were eluted via analytical column running solvent A (0.1% formic acid in water, 3% DMSO) with a constant flow of 0.3 µL/min, with an increasing percentage of solvent B (0.1% formic acid in acetonitrile, 3% DMSO).The outlet of the analytical column was coupled directly to an Orbitrap Fusion™ Lumos™ Tribrid™ Mass Spectrometer (Thermo) using the Nanospray Flex™ ion source in positive ion mode.The peptides were introduced into the Fusion Lumos via a Pico-Tip Emitter 360 µm OD x 20 µm ID; 10 µm tip (New Objective) and an applied spray voltage of 2.4 kV.The capillary temperature was set at 275 °C.The full mass scan was acquired with mass ranges of 375-1500 m/z in profile mode in the orbitrap with a resolution of 120,000.The filling time was set at a maximum of 50 ms with a limitation of 4 × 10 5 ions.Data-dependent acquisition (DDA) was performed with the resolution of the Orbitrap set to 30,000, with a fill time of 94 ms and a limitation of 1 × 10 5 ions.A normalized collision energy of 38 was applied.The MS2 data was acquired in profile mode. 
MS data analysis - IsobarQuant
IsobarQuant and Mascot (v2.2.07) were used to process the acquired data, which were searched against a UniProt Mus musculus proteome database (UP000000589) containing common contaminants and reversed sequences. The following modifications were included in the search parameters: Carbamidomethyl (C) and TMT11 (K) (fixed modifications); Acetyl (Protein N-term), Oxidation (M) and TMT11 (N-term) (variable modifications). For the full scan (MS1), a mass error tolerance of 10 ppm was set, and for MS/MS (MS2) spectra one of 0.02 Da. Further parameters were established: trypsin as protease with an allowance of a maximum of two missed cleavages; a minimum peptide length of seven amino acids; at least two unique peptides were required for protein identification. The false discovery rate at the peptide and protein level was set to 0.01.

Mass spectrometry data analysis
The raw IsobarQuant output files (protein.txt files) were processed using the R programming language (www.r-project.org). Only proteins that were quantified with at least two unique peptides and identified in all mass spec runs were considered for the analysis. Raw reporter ion intensities (signal_sum columns) were first cleaned for batch effects using limma [70] and further normalized using vsn (variance stabilization normalization) [71]. Missing values were imputed with the 'knn' method using the MSnbase package [72]. The differential expression of the proteins was tested using the limma package. The replicate information was added as a factor in the design matrix given as an argument for the limma 'lmFit' function. Furthermore, the imputed values were given a weight of 0.05 in the 'lmFit' function. A protein was annotated as a hit with a false discovery rate (FDR) smaller than 5% and a fold change of at least 100%, and as a candidate with an FDR below 20% and a fold change of at least 50%.

Bioinformatic analysis
The list of the selected proteins was used to identify significantly enriched functional categories. Enrichment analyses were performed using the clusterProfiler R package [73] on Gene Ontology (GO) categories of Biological Processes (BP). GO categories of Molecular Function (MF) and Cellular Component (CC) were retrieved using the online software g:Profiler [74], a web server for functional enrichment analysis and conversion of gene lists. The false discovery rate (FDR) was used to control for multiple testing. A threshold of 0.01 (FDR < 0.01) was used to identify significantly enriched GO terms. Semantic similarity distance as implemented in the rrvgo R package (https://ssayols.github.io/rrvgo) was used to reduce redundancy of the significant GO terms (a minimal sketch of this enrichment step is reported below). Bar plots, dot plots, heat maps, and tables were used to graphically summarize and report the results.

RNA-seq analyses: sample preparation, alignment, preprocessing and differential gene expression
For the RNAseq sample preparation, two technical replicates and three biological replicates for each sample condition (B16F10 WT and TG2 KO 2) were collected. Briefly, 5 × 10⁵ cells were seeded in a 6-well plate. The following day, cells reaching 90% confluence were washed and total RNA was extracted with the Qiagen RNeasy Mini Kit (Qiagen, #74104). Genomic DNA was digested following the manufacturer's instructions.
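The snippet below is a minimal sketch of the GO Biological Process enrichment and rrvgo redundancy-reduction steps described in the "Bioinformatic analysis" paragraph above. The input gene list and universe are illustrative placeholders, not the published target lists, and cut-offs are shown only to mirror the thresholds stated in the text.

```r
library(clusterProfiler)
library(org.Mm.eg.db)
library(rrvgo)

# Placeholder inputs: a few downregulated targets and a simplistic background
# of all mouse gene symbols (in the real analysis, the background would be the
# set of quantified proteins).
down_genes <- c("Tyr", "Dct", "Pmel", "Kit", "Rab27a", "Rab38", "Sytl2")
background <- keys(org.Mm.eg.db, keytype = "SYMBOL")

ego <- enrichGO(gene          = down_genes,
                universe      = background,
                OrgDb         = org.Mm.eg.db,
                keyType       = "SYMBOL",
                ont           = "BP",
                pAdjustMethod = "BH",
                qvalueCutoff  = 0.01)

res <- as.data.frame(ego)

# Reduce redundancy among significant terms by semantic similarity (rrvgo)
sim     <- calculateSimMatrix(res$ID, orgdb = "org.Mm.eg.db",
                              ont = "BP", method = "Rel")
scores  <- setNames(-log10(res$p.adjust), res$ID)
reduced <- reduceSimMatrix(sim, scores, threshold = 0.7, orgdb = "org.Mm.eg.db")

dotplot(ego, showCategory = 15)   # top enriched BP terms
```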
Reads were aligned to the reference genome with STAR (v 2.7.10a) [75,76] and quantified with RSEM (v1.3.1).The indexed genome was built with RSEM starting from Ensembl's Mus Musculus DNA primary assembly (release 106).For all of the 9 aligned samples we obtained a percentage of uniquely mapped reads between 73.12% and 79.93%, while the numbers of uniquely mapped reads were between 24600173 and 74021101. After the quantification, data were filtered keeping only genes with at least 20 counts in three different samples. To identify the differentially expressed genes we used the edgeR R package [77].We provided as input the filtered raw counts with the design matrix defined by the dichotomous variables for the different clones.The TMM normalization was applied to the samples.False Discovery Rate (FDR) less than 0.01 was used to select significantly differential genes (DEG). DEG were used to perform enrichment analysis (separately for up and down-regulated genes) on Gene Ontology with clusterProfiler and ReactomePA R packages.For the enrichment analysis the universe was set as the list of the genes with at least one count in one sample.Adjusted p-values less than 0.1 was used to select significant gene sets and pathways. Cluster analysis was performed using the pheatmap R package with the complete linkage method with euclidean distances. Raw data have been deposited at SRA and are available at the accession number reported as follow: RNAseq raw data -SubmissionID: SUB12302131, BioProject ID: PRJNA904573 Pigmentation inducing treatments on B16F10 cells B16F10 are cultured in MEM (Minimum Essential Medium).To induce pigmentation, B16F10 cells were grown in DMEM (Dulbecco Modified Minimal Essential Medium), according to Skoniecka et al., 2019 [46].Both media are supplemented with 10% FBS and antibiotics: penicillin (100 U/ ml) and streptomycin (100 μg/ml).The cultures were maintained at 37 °C in 5% CO2.Both media are recommended for in vitro melanoma cells culture.DMEM medium contains more (72 mg/l) L-tyrosine, the basic amino acid for melanin synthesis, than MEM (52 mg/l).Media differ also in phenylalanine level (66 mg/l in DMEM, 32 mg/l in MEM), which could be hydroxylated into L-tyrosine in the presence of L-phenylalanine hydroxylase.DMEM, as a medium with higher L-tyrosine content, is indicated as a factor able to induce melanization in amelanotic melanoma cells [45,78].DMEM without phenol red was used for melanin quantification to prevent any interference with melanin absorbance measurements [47]. 20.000 cells were seeded in 6-well plate in MEM.The following day, media was changed to white DMEM.Cells were cultured for 5 days and then harvested to analyze the level of both intra and extracellular melanin, for Immuno Blot analyses, qRT-PCR, PLA, CO-IP, cytosolic-nuclear fractionation, and TEM imaging. 
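The sketch below outlines, under simplifying assumptions, the edgeR differential-expression step described above (count filtering, TMM normalization, FDR < 0.01). The count matrix is simulated placeholder data, and the quasi-likelihood F-test shown is one reasonable implementation rather than necessarily the exact call used for the published analysis.

```r
library(edgeR)

set.seed(1)
# Placeholder count matrix standing in for the RSEM expected counts
# (3 WT vs 3 TG2 KO 2 samples); not the real data.
counts <- matrix(rnbinom(6 * 5000, mu = 50, size = 10), ncol = 6,
                 dimnames = list(paste0("gene", 1:5000),
                                 c("WT1", "WT2", "WT3", "KO1", "KO2", "KO3")))
group <- factor(c("WT", "WT", "WT", "KO", "KO", "KO"), levels = c("WT", "KO"))

keep <- rowSums(counts >= 20) >= 3          # keep genes with >= 20 counts in >= 3 samples
y <- DGEList(counts = counts[keep, ], group = group)
y <- calcNormFactors(y, method = "TMM")     # TMM normalization

design <- model.matrix(~ group)
y   <- estimateDisp(y, design)
fit <- glmQLFit(y, design)
qlf <- glmQLFTest(fit, coef = 2)            # KO vs WT contrast

deg <- topTags(qlf, n = Inf)$table
deg <- deg[deg$FDR < 0.01, ]                # significantly differential genes (FDR < 0.01)
head(deg)
```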
For the pharmacological induction of B16F10 pigmentation, the following compounds were obtained from Sigma-Aldrich: α-Melanocyte stimulating hormone (α-MSH) and CHIR99021. α-MSH is a hormone that stimulates the synthesis of melanin in B16F10 murine melanoma models [47]. A stock solution of 0.5 mM α-MSH was prepared in deionized water and then diluted in a phenol red-free cell culture medium to a final concentration of 100 nM. Melanoma cells were incubated with α-MSH for 96 h. CHIR99021 is a well-established activator of canonical Wnt signaling.

Measurement of the intracellular and extracellular amount of melanin
According to Chung et al., 2019 [47], for the intracellular melanin content measurement from a cell pellet, 100 μL of 1 N NaOH containing 10% DMSO was added to the pellet and heated at 80 °C for 90 min. Absorbance was then measured at 490 nm using a Tecan Infinite F200PRO micro-plate reader (TECAN). To convert the absorbance value to the amount of melanin, a standard curve was obtained from 0 to 500 μg/mL of synthetic melanin (Cat# 8049-97-6, Sigma-Aldrich) dissolved in 1 N NH4OH. For the extracellular melanin quantification, 200 μL of the cell culture medium was transferred to a 96-well plate and the absorbance was read. The absorbance was averaged from three wells, and each experiment was performed in duplicate or triplicate.

Electron microscopy images to quantify the amount of intracellular melanin granules
Cells grown in 24-well plates were fixed for 1 h at 4 °C with freshly prepared 2.5% (V/V) glutaraldehyde in 0.1 M sodium cacodylate, pH 7.4. After washing with 0.1 M sodium cacodylate, cells were post-fixed in 1% OsO4, 1.5% K4Fe(CN)6 in 0.1 M sodium cacodylate pH 7.4, stained with 0.5% uranyl acetate, dehydrated in ethanol and embedded in Embed 812. For the samples that underwent pigmentation induction, see the relative description above. Thin sections were imaged on a Tecnai-12 electron microscope (Philips-FEI) equipped with a Veleta (Olympus Imaging System) digital camera at the BioImaging Facility of the Dept. of Biology (University of Padua). The experiment was repeated three times. The number of intracellular melanin granules of 15 distinct cells was counted for each biological replicate.

SDS-PAGE and immunoblot analysis
To obtain cell lysates, freshly harvested cells were washed in 1X PBS, detached and centrifuged at 500 g for 5 min. To extract whole-cell protein lysates, cold Lysis Buffer (20 mM Tris-HCl pH 7.4, 1% Triton X-100, 150 mM NaCl) supplemented with 100X Phosphatase inhibitor cocktails 2 and 3 (P5726-1ml, Sigma; P0044-5ml, Sigma) and 100X Protease inhibitor (P8340-1ml, Sigma) was added. After 30 s on/off pulse sonication at high power using the Branson 250 standard sonifier (Branson), proteins were quantified using BCA assays (Bicinchoninic Acid Assay kit, Cat# 23225, Thermo Fisher Scientific). 50 µg of protein samples were loaded on a 4-12% SDS-PAGE gel and transferred onto nitrocellulose membranes in wet conditions (Transfer buffer: Tris Glycine 1x, 20% methanol, no SDS). Membranes were later blocked with 5% dried milk powder for 20 min followed by 5% BSA for 40 min, both resuspended in Tris-buffered saline (TBS), and probed with the primary antibodies reported in Table 4.
Incubation with primary antibodies was performed overnight at +4 °C. Then, membranes were incubated with a goat anti-rabbit (1:10000) or goat anti-mouse (1:5000) antibody conjugated to horseradish peroxidase (both from BioRad) for 1 h at room temperature, and protein bands were visualized with ECL (Clarity Western ECL Substrate, BioRad). Immunodetection was performed using the ChemiDoc MP Imaging System (BioRad). The uncropped western blots are shown as Supplementary Material.

Immunoblot densitometry
The software ImageJ was used to analyze the profiles of each lane of the blotted nitrocellulose membrane. The size of the lane selection tool was 8 pixels wide. The lanes' shapes were represented as the average of the grayscale values or the uncalibrated optical density along a one-pixel-height horizontal lane. Protein intensity was calculated as a function of the HRP-band signal. Enrichments in percentage were assigned by normalizing on the housekeeping protein (vinculin, β-actin, GAPDH, or lamin A/C), and then on the reference control sample.

Cytosolic-nuclear fractionation
B16F10 cells were rinsed in ice-cold PBS and collected in lysis buffer containing 20 mM Tris-HCl pH 7.4, 150 mM NaCl, and 1% Triton X-100 with protease inhibitor cocktail. Nuclear and cytosolic extracts were obtained using the NE-PER Nuclear and Cytoplasmic Extraction Kit (Thermo Fisher Scientific, Cat# 78833). Protein concentrations were determined by BCA assay, using bovine serum albumin as a standard. 20 µg of protein extracts from the different conditions were resolved on a sodium dodecyl sulfate (SDS)-polyacrylamide gel and transferred to a nitrocellulose membrane, as previously described.

Co-immunoprecipitation
After performing the cyto-nuclear fractionation, 700 µg of protein from the nuclear and the cytosolic extracts of the different conditions was subjected to immunoprecipitation using 6 μg of specific antibodies in combination with 20 μl of Dynabeads™ Protein G (Invitrogen), according to the manufacturer's instructions. LDS sample buffer 4× (Life Technologies) containing 2.86 M 2-mercaptoethanol (Sigma-Aldrich) was added to the beads, and samples were boiled at 95 °C for 10 min. Supernatants were analyzed by immunoblot.

Real-time reverse transcription PCR (RT-qPCR)
RNA was extracted from cells using 1 ml of PRImeZOL (Canvax, AN1100), according to the manufacturer's guidelines. RNAs were then quantified using the Nanodrop spectrophotometer ND-1000 (Thermo Fisher Scientific, Waltham, MA) to determine the RNA concentration. A DNase treatment was later performed to digest the contaminant genomic DNA. The reaction was carried out taking advantage of the DNA-free kit (Ambion - Life Technologies) using 1 µL of recombinant DNase I and 5 µg of RNA. The reaction was conducted at 37 °C for 30 min. The recombinant DNase I was later inactivated with the DNase Inactivation Reagent (0.1 volume). cDNA was obtained using the SuperScript IV Reverse Transcriptase (Thermofisher, #18091050) following the manufacturer's instructions. 2 µg of RNA was retrotranscribed for each reaction according to the manufacturer's instructions. cDNA was diluted and amplified in qRT-PCR using SYBR Green PCR Master Mix (Thermofisher, #4309155). Actin was used as housekeeping control. CT values were first normalized with respect to the housekeeping genes (ΔCT) and next compared to the control sample (ΔΔCT). This relative normalized expression is indicated in the figures. No template controls were used to detect any non-specific
amplification. The sequences of the RT-qPCR primers are reported in Table 5 for mouse melanoma cells and in Table 6 for human melanoma cell lines.

Proliferation and colony formation assay
To assess cell proliferation, B16F10 wild-type cells and the two TG2 KO clones were seeded in MEM medium in 96-well plates (2,000 cells/well) and allowed to grow at 37 °C and 5% CO2. Cells were blocked at different time points (6 h, 24 h, 48 h, and 72 h) to stop and monitor their growth. The wells were washed three times with PBS and fixed with 4% paraformaldehyde (PFA), then stained with 0.1% Crystal Violet for 30 min and washed multiple times with ddH2O to remove the excess color. The plate's absorbances were read at 595 nm using the Tecan Infinite F200PRO microplate reader (TECAN). The collected data were analyzed in Excel. Sample absorbances were normalized on the 6 h condition and then on the control sample. For colony formation, 500 B16F10 cells were seeded in a 6-well plate and allowed to grow for 6 days in standard culture medium. The medium was removed, cells were washed twice with PBS and fixed with 3.8% paraformaldehyde for 30 min. After three washes with PBS, cells were stained with 0.1% Crystal Violet for 15 min at room temperature and washed with PBS until the colonies cleared. Images were taken with a Leica MZ16 F stereomicroscope with 1 × 0.5 magnification. The number and dimension of the colonies were quantified with the ImageJ software.

siRNA in human melanoma cell lines
TG2 was knocked down in the three human melanoma cell lines by means of a small interfering RNA (siRNA) silencing technique. siRNA constructs were purchased from ORIGENE (SR322028A) and the cell lines were transfected with a 10 nM TG2-targeting siRNA using the RNAiMAX transfection reagent (Thermofisher Scientific, #13778150), according to the manufacturer's protocols for 6-well or 12-well plate formats. The sense strand of the Transglutaminase 2 human siRNA Oligo Duplex was AGCAACCUUCUCAUCGAGUACUUCC. A non-targeting siRNA (Universal Scramble negative control) was used as a negative control at a final concentration of 10 nM and was purchased from ORIGENE (SR30004).

Orthotopic B16F10 melanoma model injection for the generation of primary tumors and lung metastases
Animal experiments were carried out according to the Local Ethics Committee of the University of Padua and the National Agency, and under the supervision of the Central Veterinary Service of the University of Padova (in compliance with Italian law DL 116/92 and further modifications, embodying UE directive 86/609), authorization n.
111/2017-PR. Wild-type mice (12 weeks old) in the C57BL6/J background were kept on a 12 h light/dark cycle at controlled temperature and humidity, with standard food (4RF21, Mucedola Srl, Italy) and water provided ad libitum and environmental enrichments. Sub-confluent wild-type murine melanoma B16F10 cells and the B16F10 TG2 KO 1 and 2 clones (70% confluence) were trypsinized, washed and resuspended in PBS. For primary tumor formation, the cell suspension (5 × 10⁴ cells in 100 µl PBS) was injected subcutaneously into the right flank of each mouse. The tumor growth of wild-type and KO clones was assessed by measuring the length and width of each tumor every day and calculating the tumor volume using the formula: tumor volume = [length × (width)²] × 0.5 (a short worked example is reported below). Fifteen days after tumor cell injection, when the tumors impacted on the quality of life of the mice, the animals were euthanized, and their tumors were weighed and harvested. For lung experimental metastasis formation, 2 × 10⁵ B16F10 WT or TG2 KO 2 cells resuspended in 100 µl PBS were injected into the caudal vein of the mice. Twenty-one days after injection, mice were euthanized, and their lungs harvested. The number of experimental metastases was counted under an optical microscope.

Multiplex Immunofluorescence (mIF)
The Tyramide Signal Amplification (TSA)-based Opal method (Akoya Biosciences) was used for mIF staining on the Leica BOND RX automated immunostainer (Leica Microsystems). Prior to staining, all 4 µm-thick FFPE tissue sections were deparaffinised by baking overnight at 56 °C, soaking in BOND Dewax Solution at 72 °C, and then rehydrating in ethanol. Heat-induced epitope retrieval (HIER) pretreatments were applied using BOND Epitope Retrieval (ER) Solutions, citrate-based pH 6.0 ER1 or EDTA-based pH 9.0 ER2 (both Leica Biosystems). Tissue sections were blocked with Normal Goat Serum (Vector Laboratories) for 10 min before applying each primary antibody. A fluorescent singleplex was carried out for the melanoma cell biomarker to determine the optimal staining conditions. The rabbit anti-mouse Melan-A (Abcam, clone EPR20380) primary antibody was subsequently added to the slides. The HRP-conjugated goat anti-rabbit secondary antibody (Vector Laboratories) was incubated as appropriate for 10 min. The TSA-conjugated fluorophore was then added for 10 min. Slides were rinsed with washing buffer after each step. Finally, the spectral DAPI (Akoya Biosciences) was used as nuclear counterstain, and slides were mounted in ProLong Diamond Anti-fade Mountant (Life Technologies).

Multispectral imaging
Multiplex-stained slides were imaged using the Mantra Quantitative Pathology Workstation 2.0 (Akoya Biosciences). The inForm Image Analysis software (version 2.4.9, Akoya Biosciences) was used to unmix multispectral images using a spectral library built from the acquisition of single fluorophore-stained control tissues and containing the fluorophore-emitting spectral peaks. A selection of representative multispectral images was used to train the inForm software to create algorithms to apply in the batch analysis of all acquired multispectral images. The whole metastasis area was calculated with respect to the surrounding healthy lung tissue.
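As a small worked example, the caliper-based tumor volume formula reported in the orthotopic model section above can be computed as follows; the measurements used in the call are placeholders.

```r
# Worked example of the caliper-based tumor volume formula used above:
# tumor volume = [length x (width)^2] x 0.5. Measurements are placeholders.
tumor_volume <- function(length_mm, width_mm) {
  0.5 * length_mm * width_mm^2      # result in mm^3
}

tumor_volume(length_mm = 9, width_mm = 6)   # 0.5 * 9 * 36 = 162 mm^3
```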
Preparation and analyses of the epidermal skin of C57BL/6 WT and TG2 KO mice

Zebrafish morpholino injection and pigmentation analysis
WT zebrafish were from the Tübingen (Tü) or AB strains. All transgenic lines were collected from the original laboratories which developed the lines and are currently housed at the zebrafish facility of the University. Fish housing was carried out at 28.5 °C according to standard rules and procedures (https://zfin.org). All animal manipulation procedures were conducted according to the Local Ethical Committee at the University of Padua and the National Agency (Italian Ministry of Health).

Fig. 4 TG2 expression is required for pigmentation in human melanoma cell lines, and in vivo zebrafish and mouse models. a Immunoblot analyses and relative densitometry of melanogenesis-related targets (TG2, Tyrosinase, and DCT) in human melanoma cell lines Mel JuSo, IPC-298 and SK-MEL-3. GAPDH was used as loading control (number of independent biological replicates = 3). Statistical significance is specified with asterisks (* p < 0.05, *** p < 0.001, **** p < 0.0001). Data are represented as mean ± SEM. b Relative mRNA levels quantified by qRT-PCR analysis of TG2, DCT and TYRP1 expression in human melanoma cell lines Mel JuSo, IPC-298 and SK-MEL-3. β-actin was used as housekeeping gene in qRT-PCR (number of independent biological replicates = 3). Statistical significance is specified with asterisks (* p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001). Data are represented as mean ± SEM. c Photos of zebrafish morphology at 48 hpf comparing melanophore formation in TG2 KD and Ctrl morphants, with relative quantification. The zebrafish larvae were injected with 0.1 pmol of zTg2b antisense morpholino/embryo. A control morpholino (CtrlMO) was used as reference. Images were acquired with the same exposure, at 3.2X magnification. Statistical significance is specified with asterisks (**** p < 0.0001). Data are represented as mean ± SEM. d Histology of mouse skin: representative light micrographs of paraffin sections from C57BL/6 WT (A) and KO (B) mouse skin, stained with hematoxylin and eosin, where the surface cornified layer and the numerous hair follicles are clearly identifiable; no detectable abnormalities are present in KO (B). C and D depict magnifications. Melanin granules, visible along the dendritic extensions of melanocytes (arrowheads), are found in WT skin (shown in C). In TG2 KO skin, melanin granules are mostly found in the perinuclear region of melanocytes (arrowheads) (shown in D). Scale bar: A, B = 150 µm; C, D = 12.8 µm.

Fig. 5 TG2-MITF interaction is required for MITF nuclear translocation. Relative mRNA levels quantified by qRT-PCR analysis of MITF expression in B16F10 WT, TG2 KO 1 and TG2 KO 2 (a) and in human melanoma cell lines Mel JuSo, IPC-298 and SK-MEL-3 (b). β-actin was used as housekeeping gene in qRT-PCR (number of independent biological replicates = 3-5). c Immunoblot analysis of MITF in B16F10 WT, TG2 KO 1 and TG2 KO 2. Vinculin was used as loading control (number of independent biological replicates = 3). d Cytosolic-nuclear fractionation assay and relative densitometric analyses evaluating the expression and localization of MITF and TG2 in WT and KO clones, following (PIGM.) or not (N.PIGM.)
pigmentation induction. Vinculin and Lamin C were used as loading controls, marking the cytosolic and the nuclear fractions, respectively (number of independent biological replicates = 3). e In situ Proximity Ligation Assay (PLA) showing the interaction between MITF and TG2 in B16F10 WT and TG2 KO conditions, following or not pigmentation induction. Each red spot represents a single interaction. DNA was stained with DAPI (in blue). Quantification of dots per cell is represented in the graph on the right. Statistical analyses were performed with one-way ANOVA and specified with asterisks (** p < 0.01, *** p < 0.001, **** p < 0.0001). Data are represented as mean ± SEM.

Fig. 7 Schematic representation of the hypothesized working model. a TG2 expression increases during pigmentation of melanoma cells. Following pigmentation, the interaction of TG2 with MITF allows the nuclear translocation of the transcription factor and its subsequent transcriptional activity, with synthesis of the melanogenesis-related genes (Tyr, Dct, Melan-A, etc.), which allow intracellular melanin synthesis and extracellular secretion. Thanks to TG2, high MITF levels enable the maintenance of the melanocytic/differentiated state. b Loss of TG2 inhibits correct MITF nuclear translocation, contributing to a downregulation of melanogenesis and to melanoma de-differentiation. Loss of TG2 increases the MITF-low/AXL-high ratio, switching melanoma cells to the mesenchymal/invasive phenotype and increasing their metastatic capacity by promoting cell motility, alteration of cell-adhesion molecules, and extracellular remodeling. The schematic representation was created with BioRender.com.

Mice experiments were carried out according to the Local Ethics Committee of the University of Padua and the National Agency, and under the supervision of the Central Veterinary Service of the University of Padova (in compliance with Italian law DL 116/92 and further modifications, embodying UE directive 86/609), authorization n. 144/2022-PR. Male wild-type mice (8 weeks old) in the C57BL6/J background were kept on a 12 h light/dark cycle at controlled temperature and humidity, with standard food (4RF21, Mucedola Srl, Italy) and water provided ad libitum, and with environmental enrichment.

Table 2. PCR primers used for KO validations on genomic DNA after

Table 4. Antibodies used for Western Blot protein band detection.

humidification chamber for 1 h at 37 °C. Ligation was later performed by adding Ligase into the ligation solution buffer and incubating the slides in a preheated humidity chamber for 30 min at +37 °C. After the amplification and probing processes (100 min at 37 °C), slides were washed five times with PBS, Wash Buffer A and Wash Buffer B, and prepared for imaging. DAPI was used to stain nuclei. Cells were imaged by placing the slides on the stage of an LSM700 (Zeiss) confocal microscope equipped with a Zeiss Plan-Apochromat 63x/1.4 oil objective and excited using the appropriate laser line. Images were acquired at 1048 × 1048 resolution with the ZEN software (Zeiss).

Table 5. Primers used for qRT-PCR analyses of mouse melanoma cells.

Table 6.
Primers used for qRT-PCR analyses of human melanoma cells.

Samples were washed three times with PBS and dehydrated (50% EtOH 1 h at room temperature, 70% EtOH overnight at +4 °C, 80% EtOH 1 h at room temperature, 90% EtOH 1 h at room temperature, 100% EtOH 2 h at room temperature, Xylene X-free 2 h at room temperature). Dehydrated samples were then embedded in paraffin and cut into 4 µm-thick slices with a microtome.

Skin samples from C57BL/6 WT and TG2 KO mice were fixed with 10% neutral formalin for 16-24 h at room temperature, dehydrated and embedded in paraffin. For histopathological analysis, hematoxylin and eosin (H&E)-stained tissue sections (4 μm) were used. Animal experiments were carried out according to the Local Ethics Committee of the University of Rome Tor Vergata and the National Agency, and under the supervision of the Central Veterinary Service of the University of Rome Tor Vergata (in compliance with Italian law DL 116/92 and further modifications, embodying UE directive 86/609), authorization n. 111/2017-PR.
2023-10-30T06:17:18.997Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "77b17f6805fdce8e5fa9de27aadf2ec8fcf2d8d8", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "8ad8c6d6a3a3ee2b8ffe3e72609d1b4fdb2866ab", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
227305845
pes2o/s2orc
v3-fos-license
Universality of active and passive phase separation in a lattice model The motility-induced phase separation (MIPS) is the spontaneous aggregation of active particles, while equilibrium phase separation (EPS) is thermodynamically driven by attractive interactions between passive particles. Despite such difference in the microscopic mechanism, similarities between MIPS and EPS like free energy structure and critical phenomena have been discussed. Here we introduce and analyze a 2D lattice gas model that undergoes both MIPS and EPS by tuning activity and interaction parameters. Based on simulations and mean-field theory, we find that the MIPS and EPS critical points are connected through a line of nonequilibrium critical points. According to the size scaling of physical quantities and time evolution of the domain size, both the static and dynamical critical exponents seem consistent with the 2D spin-exchange Ising universality over the whole critical line. The results suggest that activity effectively enhances attractive interactions between particles and leaves intact the critical properties of phase separation. It is interesting to consider how the concepts of critical phenomena and universality [23] can be applied to active matter systems [24,25]. According to numerical studies of active lattice gas models [26] and Active Ornstein-Uhlenbeck particles [27], the MIPS critical point in two dimensions seems to belong to the 2D Ising universality class, which is the same as for the EPS critical point. Theoretically, the perturbative renormalization group (RG) analysis of the Active Model B+ has shown that weak activity does not change the universality class of phase separation [28]. On the other hand, the critical exponents of MIPS observed in simulations of Active Brownian particles have been incompatible with the Ising universality [29,30]. Additionally, in simulations of Active Brownian particles with attractive interactions, phase separation is stabilized for weak or strong activity but suppressed for moderate activity [31], suggesting that activity can also effectively suppress the attractive interaction. Thus, it is still unclear if there exists a microscopic model that shows MIPS and EPS with the same Ising universality. To clarify the relation between the MIPS and EPS critical points, it is natural to ask if we can find a critical line which connects them by tuning parameters of a microscopic model [32,33]. If the critical line exists, the next question is whether the whole line, which corresponds to nonequilibrium critical points for any nonzero activity, belongs to the Ising universality class. In this Letter, we address these questions by constructing and analyzing a lattice gas model with both activity and attractive interactions, which undergoes both MIPS and EPS. First, based on numerical simulations and meanfield theory, we find that the MIPS and EPS critical points are indeed connected through a critical line. Then, using the finite-size scaling analysis and examining time evolution, we conclude that the whole critical line belongs to the 2D Ising universality class, which suggests that activity-induced violation of detailed balance is irrelevant for critical properties of phase separation. Model. To discuss both MIPS and EPS within a single framework, we consider a lattice gas model with both activity and nearest-neighbor interaction [ Fig. 1(a)]. 
In this model, each particle with a spin s (= x̂, ŷ, −x̂, or −ŷ, where â is the unit translation parallel to the a-axis) can stochastically (i) hop to a nearest-neighbor site if empty or (ii) flip the spin with a rate h. For hopping from site i to an adjacent site j, we set a higher rate (1 + ε)w_{i→j}J if the hopping is in the same direction as the spin, and a lower rate w_{i→j}J otherwise, using the activity parameter ε (≥ 0). We set w_{i→j} = 1 − tanh(ΔE_{i→j}/2) with k_B T = 1, where ΔE_{i→j} is the increase in the total interaction energy due to the hopping, with the nearest-neighbor interaction energy U (repulsive for U > 0 and attractive for U < 0). Note that the equilibrium heat-bath dynamics [35-37] is recovered for ε = 0. Following previous studies [26,27,29], we refer to the phase separation that occurs under U ≥ 0 (with no attractive interactions) as MIPS. As expected, EPS occurs for large negative U [Fig. 1(b)(i)] in the case with ε = 0, whereas in the case with U ≥ 0, MIPS occurs for large ε [Fig. 1(b)(ii)]. The effective parameters in this model are ε, U, h/J, and the average density ρ (0 < ρ < 1). In the following Monte Carlo (MC) simulations [34], we set h/J = 0.01. To reduce interface effects and to apply the sub-box method in the finite-size scaling analysis, we consider rectangular systems with an aspect ratio of 10:1, except when measuring the dynamical critical exponent.

Connection between MIPS and EPS critical points. In Fig. 2, we show the steady-state phase diagrams in the (a) ρ-U and (b) ρ-ε planes. The heatmap represents the density difference between the high-density and low-density phases (ρ_h − ρ_l), which is the order parameter for phase separation. [Figure caption fragment: Low-density phase / High-density phase. (21) configurations at intervals of 10^5 MC steps after 2 × 10^6 (6 × 10^6) MC steps in each simulation with the random (fully phase-separated) initial configurations [34]. For all figures, we used h/J = 0.01 and ρ = 0.5.] Note that Fig. 2(a)(i) is the phase diagram for EPS since ε = 0, and Figs. 2(b)(ii) and (iii) are the phase diagrams for MIPS since U ≥ 0. From Fig. 2(a) [Fig. 2(b)], we find that the critical point, located at the tip of the phase boundary in the ρ-U (ρ-ε) plane, moves continuously as we change ε (U). Consequently, in the ρ-U-ε space, there is a critical line which connects the EPS and MIPS critical points.

In the following, we consider the qualitative behavior of the critical line by a mean-field approximation [34]. From the master equation, by neglecting the microscopic fluctuations and correlations [3,4], we obtain the time evolution equation, Eq. (1), for the local density ρ_{i,s}(t) at a site i with a spin s, where ρ_i := Σ_s ρ_{i,s} and w^{MF}_{i→j} is the mean-field version of w_{i→j} [34]. Focusing on moderate spatial variation of ρ_{i,s} with respect to the lattice constant a, we may replace ρ_{i,s} by ρ_s(x) and expand ρ_{i+l,s} as ρ_{i+l,s} ≈ [1 + a l·∇ + (a l·∇)²/2] ρ_s(x). In the same spirit, we may expand w^{MF}_{i→i+l} to the same order. Further, we focus on the temporally slow mode, i.e., the density field ρ(x, t) := Σ_s ρ_s(x, t), which is important around the critical point, and use the adiabatic approximation [38,39]. Finally, we obtain the equation for ρ(x, t) as ∂_t ρ = ∇·[M(ρ) ∇(δF_eff/δρ)] (2). Here, M(ρ) := (1 + ε/4)Ja²(1 − ρ)ρ represents the mobility, and F_eff := ∫ dx f(ρ) denotes the effective free energy, with the free-energy density f(ρ) given by Eq. (3), showing that the activity simply works as an additional attractive interaction [38-41], as well as breaking the particle-hole symmetry in the entropic terms.
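The statement that the equilibrium heat-bath dynamics is recovered for ε = 0 can be checked directly from the form of w_{i→j} defined above: for ε = 0 the forward and backward rates of any hop satisfy detailed balance with respect to the interaction energy. A short numerical check (ours, not from the paper):

import numpy as np

# Check that the rate w(dE) = 1 - tanh(dE/2) satisfies detailed balance at eps = 0:
# w(dE) / w(-dE) should equal exp(-dE) for every energy change dE (k_B T = 1).
for dE in np.linspace(-4.0, 4.0, 9):
    w_fwd = 1.0 - np.tanh(dE / 2.0)
    w_bwd = 1.0 - np.tanh(-dE / 2.0)
    assert np.isclose(w_fwd / w_bwd, np.exp(-dE))
print("heat-bath rates satisfy detailed balance for eps = 0")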
To investigate the mean-field critical point, we expand f(ρ) in powers of the density deviation φ, omitting the O(φ⁰, φ¹) terms since they do not contribute to Eq. (2). The spinodal line is obtained by requiring the coefficient of φ² to vanish [Eq. (4)], and the critical point by further requiring the coefficient of φ³ to vanish [Eq. (5)]. Based on Eqs. (4) and (5), we obtain the mean-field critical points and spinodal lines in the ρ-U plane.

Universality of the critical line. By using a modified version of the recently proposed sub-box method [26,27,29], we calculate the critical exponents of the critical line, especially for two cases with both activity and attractive interaction: varying ε with U = −1 and varying U with ε = 0.1. For both of these cases, the critical density ρ_c is around 0.5 based on Figs. 2(a) and (b), and we set ρ = 0.5 in the following. Considering rectangular systems of size 10L × L, we take the steady-state configurations from four sub-boxes of size L × L [Fig. 3(a)]; ⟨···⟩ represents the average over all the independent samples and sub-boxes. The corresponding results for varying U with ε = 0.1 are shown in Figs. 3(c) and (e), from which we find that the critical exponents at (U_c, ε_c) ≈ (−1.76, 0.1) are also consistent with the Ising universality. Further, as is well known [29], the EPS critical point with ε = 0 belongs to the Ising universality class (see [34] for confirmation in our model). Lastly, also for the MIPS critical point with U = 0, the obtained size scalings seem consistent with the Ising universality [34], as observed in similar active lattice gas models [26] and in Active Ornstein-Uhlenbeck particles [27]. These results imply that the whole critical line, which connects the EPS and MIPS critical points, belongs to the 2D Ising universality class.

For the critical points obtained above, we examine the dynamical scaling of the domain size R(t) ∼ t^{1/z} after a quench from a random configuration in a square system with ρ = 0.5, where z is the dynamical critical exponent. Here we define R(t) as the first zero of C_a(r, t) := [C(r x̂, t) + C(r ŷ, t)]/2, where C(r, t) := L^{-2} Σ_{r_0} ⟨ρ(r + r_0, t) ρ(r_0, t)⟩ − ρ̄² is the density correlation function and ⟨···⟩ represents the average over all the independent samples. The time evolution of R(t) at both (U, ε) = (−1, 1.07) [Fig. 4(a)] and (−1.76, 0.1) [Fig. 4(b)] is consistent with z = 15/4, the exponent for the 2D spin-exchange Ising universality [42], as observed in active lattice gas models [26].

Discussion and conclusions. In this Letter, we have studied the lattice gas model with activity and nearest-neighbor interaction. By MC simulations, we have found that the MIPS and EPS critical points are connected by a critical line, which we can qualitatively reproduce within the mean-field approximation. We have also investigated both the static and dynamical critical exponents along the critical line by finite-size scaling analysis, and found that the whole critical line belongs to the 2D spin-exchange Ising universality class. Further, we confirmed that the LSW law appears for a deep quench toward both attractive interaction and activity. Our results suggest that the activity-induced violation of detailed balance is inessential for the critical phenomena in motility-induced phase separation; the activity ε only enters as a parameter in the mean-field free energy [Eq. (3)], which is consistent with the RG analysis of the Active Model B+ [28].
This picture is consistent with the observed LSW law, which, in the case of EPS, reflects the process of reducing the interface free energy between the high-density and low-density phases [45]. Recently, intracellular phase separation of proteins/mRNAs has been observed, and the functions and mechanism of liquid droplet formation have been discussed [48-50]. Our result clarifies that MIPS and EPS are indistinguishable at the macro-scale observed in common cell experiments, indicating a potential role of activity, fueled for instance by enzyme catalysis [51,52], in liquid droplet formation in cells.

Supplemental Material for Universality of active and passive phase separation in a lattice model

By discretizing time, we perform Monte Carlo (MC) simulations corresponding to the lattice gas model [Fig. 1(a) in the main text]. In this model, each particle with a spin s (= x̂, ŷ, −x̂, or −ŷ, where â is the unit translation parallel to the a-axis) can stochastically (i) hop to a nearest-neighbor site if empty or (ii) flip the spin with a rate h. For hopping from site i to an adjacent site j, we set a higher rate (1 + ε)w_{i→j}J if the hopping is in the same direction as the spin, and a lower rate w_{i→j}J otherwise, using the activity parameter ε (≥ 0). We set w_{i→j} = 1 − tanh(ΔE_{i→j}/2) with k_B T = 1, where ΔE_{i→j} is the increase in the total interaction energy due to the hopping, with the nearest-neighbor interaction energy U (repulsive for U > 0 and attractive for U < 0). In all the simulations, we set h/J = 0.01. First, we randomly choose a particle, say, at site i with spin s. Then, we randomly choose a direction from {x̂, ŷ, −x̂, −ŷ} \ {s} and update s to the chosen direction with a probability 3h/[8J(1 + ε)]. Lastly, we randomly choose a direction (call it l) from {x̂, ŷ, −x̂, −ŷ} and move the particle to the adjacent site i + l, if empty, with a probability w_{i→i+l}/2 or w_{i→i+l}/[2(1 + ε)] for l = s or l ≠ s, respectively. We repeat this procedure N (the total particle number) times as 1 MC step. Note that each flipping/hopping probability is smaller than 1 since 0 < w_{i→j} < 2. A sketch of one such MC step is given below.

For EPS with ε = 0, the estimated critical point is (U_c, ε_c) ≈ (−1.76, 0). The obtained U_c is close to the exact value [53], U_c^exact = 2 ln(1 + √2) = 1.7627..., which suggests that the sub-box method [Fig. 3(a) in the main text] is working. As expected, the critical exponents are consistent with the 2D Ising universality [Fig. S1(b)]. For MIPS with U = 0, we set ρ = 0.55 considering the shift of ρ_c [Fig. 2(b) in the main text]. Based on the crossing of the Binder ratio Q_L for L ≥ 10 [Fig. S1(c)], we estimate the critical point as (U_c, ε_c) ≈ (0, 1.12), although the crossing is not as clear as in the cases with negative U. The size scalings seem consistent with the 2D Ising universality [Fig. S1(d)], though we do not reach the scaling regime due to the limited system size.

C. Relaxation dynamics In the finite-size scaling analysis, we sample configurations of the steady state, which is realized after relaxation from the initial configuration. In Fig. S2, we show the typical time evolution of the Binder cumulant for two parameter sets around the critical line: (a) (U, ε) = (−1, 1.067) and (b) (U, ε) = (−1.767, 0.1). For U = −1, the dynamics of Q_L shows the relaxation to the steady state from the random configuration [Fig. S2(a)]. For ε = 0.1, we perform simulations from the fully phase-separated configuration to accelerate the relaxation for negatively large U, and the dynamics of Q_L represents this relaxation process [Fig. S2(b)].
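Referring back to the update procedure quoted earlier in this Supplemental Material, the rule translates almost verbatim into code. The following Python sketch implements one MC step on a periodic square lattice; the data structures, the periodic boundaries, and the handling of particles that have already moved within the step are our choices, not specified in the text:

import numpy as np

rng = np.random.default_rng(1)
DIRS = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # +x, +y, -x, -y

def mc_step(occ, spin, U, eps, h=0.01, J=1.0):
    """One MC step = N single-particle updates (N = particle number).
    occ: L x L occupancy array (0/1); spin: L x L direction index (0-3), used where occ = 1."""
    L = occ.shape[0]
    xs, ys = np.nonzero(occ)
    N = len(xs)
    for _ in range(N):
        k = rng.integers(N)
        x, y = int(xs[k]), int(ys[k])
        if occ[x, y] == 0:
            continue  # this particle has already hopped away during this step
        # (i) spin flip: pick one of the three other directions with prob 3h / [8J(1+eps)]
        if rng.random() < 3 * h / (8 * J * (1 + eps)):
            spin[x, y] = rng.choice([d for d in range(4) if d != spin[x, y]])
        # (ii) hop attempt in a uniformly random direction l
        l = rng.integers(4)
        dx, dy = DIRS[l]
        nx, ny = (x + dx) % L, (y + dy) % L
        if occ[nx, ny] == 1:
            continue  # target site occupied
        # energy change from the nearest-neighbor interaction U (count occupied neighbors)
        def bonds(i, j, exclude):
            return sum(occ[(i + a) % L, (j + b) % L]
                       for a, b in DIRS if ((i + a) % L, (j + b) % L) != exclude)
        dE = U * (bonds(nx, ny, (x, y)) - bonds(x, y, (nx, ny)))
        w = 1.0 - np.tanh(dE / 2.0)
        # accept with prob w/2 if the hop is along the spin, w/[2(1+eps)] otherwise
        p = w / 2.0 if l == spin[x, y] else w / (2.0 * (1 + eps))
        if rng.random() < p:
            occ[nx, ny], occ[x, y] = 1, 0
            spin[nx, ny] = spin[x, y]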
Similarly, we use the fully phase-separated initial configuration in the simulations for Figs. 3(c) and (e) in the main text and Figs. S1(a) and (b). The domain size R(t) is determined by the first zero of the correlation function C_a(r, t) defined in the main text. Figure S3 is an example of the time dependence of C_a for the parameters corresponding to Fig. 4(c) in the main text, and we see the growth of R(t) as time passes.

S2. MEAN-FIELD APPROXIMATION We explain the details of the mean-field approximation used in the main text. In the following, we use ⟨···⟩_t as the average with respect to the probability P({n_{i,s}}, t) for the configuration {n_{i,s}} at time t, where n_{i,s} (= 0 or 1) is the local occupancy. Based on the master equation, the exact evolution equation for the occupancies is Eq. (S1), where n_i := Σ_s n_{i,s}. We neglect the second- and higher-order correlations within the mean-field approximation [3,4], which leads to Eq. (1) in the main text.

[Fig. S3 caption: C_a(r, t) at (U, ε) = (−1, 2), which corresponds to Fig. 4(c) in the main text. The first zero of C_a(r, t) at each time t represents the domain size R(t). Note that C_a(0, t) = ρ(1 − ρ), and thus C_a(0, t) = 0.25 for ρ = 0.5.]
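The definition of R(t) used above translates directly into a short numerical routine. The following Python sketch computes C_a(r, t) for a single configuration and locates its first zero; the paper averages over independent samples, and the variable names and the linear interpolation of the zero crossing are our choices:

import numpy as np

def domain_size(occ, rho_bar):
    """Domain size R(t): first zero of the axis-averaged correlation function
    C_a(r) = [C(r x) + C(r y)] / 2, with
    C(r) = L^-2 * sum_{r0} rho(r0 + r) rho(r0) - rho_bar^2 (single configuration).

    occ: L x L array of site occupancies (0 or 1)."""
    L = occ.shape[0]
    corr = np.empty(L // 2)
    for r in range(L // 2):
        cx = np.mean(occ * np.roll(occ, -r, axis=1)) - rho_bar**2
        cy = np.mean(occ * np.roll(occ, -r, axis=0)) - rho_bar**2
        corr[r] = 0.5 * (cx + cy)
    # first sign change gives the first zero (linear interpolation between bins)
    for r in range(1, L // 2):
        if corr[r - 1] > 0 >= corr[r]:
            return (r - 1) + corr[r - 1] / (corr[r - 1] - corr[r])
    return np.nan  # no zero crossing found within L/2

# Example: a pattern of alternating 8 x 8 domains has its first zero at r = 4.
occ = ((np.add.outer(np.arange(64) // 8, np.arange(64) // 8)) % 2).astype(float)
print(domain_size(occ, rho_bar=0.5))   # -> 4.0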
2020-12-07T02:01:09.483Z
2020-12-04T00:00:00.000
{ "year": 2020, "sha1": "f82b7dc752f10598a1f99bfcd40486e438a60b23", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f82b7dc752f10598a1f99bfcd40486e438a60b23", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119231188
pes2o/s2orc
v3-fos-license
Monotonicity of a Class of Integral Functionals

In this note we prove a condition of monotonicity for the integral functional $ F(g) = \int_a^b h(x)\, d[-g(x)] $ with respect to $g$, a function of bounded variation. This condition is applied to analyze the behavior of a generalized structured population model.

Introduction In the article [1] ("Nontrivial Equilibria of a Quasilinear Population Model", in progress), I study a functional R(u) (u ∈ L^1(0, ∞)), called the generalized net reproduction rate, to prove the existence of non-zero equilibria in a general structured population model. The monotonicity of R(u) is used in a Corollary to prove the non-existence of a non-zero stationary population if R(0) < 1 (a sufficient condition for existence being R(0) > 1). The original proposition about monotonicity, which is not so immediate, will be reduced to the integration by parts of an improper Stieltjes integral. From now on we denote by G(b) the value of G(b) if b < ∞ and lim_{x→∞} G(x) if b = ∞; the interval I denotes [a, b] or [a, ∞), respectively, in the two cases.

Proposition 1 Let H, G be two given functions on I. Let H be increasing (non-decreasing), bounded, non-negative. Let G be continuous and of bounded variation. Define $F(G) = \int_a^b H(x)\, d[-G(x)]$; the claim is that F(G) is monotone with respect to G. Proof. a) Consider first the case b < ∞. F(G) is well-defined; integrating by parts we obtain formula (2), and the conclusion is immediate. b) Consider the case b = ∞. For H bounded and G(x) converging as x → ∞, we obtain immediately the existence of the improper integral and extend the formula of case a). If H(x) is not strictly increasing but only non-decreasing, the functional F is only non-decreasing with respect to G.

Corollary 2 Let H, G be given functions on I. Let H be decreasing (non-increasing), bounded, non-negative. Let G be continuous and of bounded variation.

Example 1. Consider a functional of the above form in which h is positive, increasing and bounded.

Corollary 3 Consider u ∈ L^1(0, ∞) and a functional of the same form, where h and f are defined on (0, ∞) × L^1(0, ∞) and $e^{-\int_0^\infty f(y)\, dy} = 0$; the exponential term is decreasing with respect to f, that is, non-decreasing in u: therefore this integral is non-increasing in u. As f is decreasing with respect to u, we obtain R(u_1) > R(u_2). (The case of the alternative conditions, given in parentheses, is analogous.)

Example 2. Corollary 3 is applied to a model of population dynamics: let u = u(t, x) ≥ 0 be a population density with respect to age or size x ≥ 0. The existence of stationary solutions (i.e. equilibria) u = u(x) is related to a functional R(u), the net reproduction rate. In a generalized model (see [1]), where g and µ depend on u in an infinite-dimensional way, R(u) is represented by $R(u) = \int_0^\infty \beta(x, u(\cdot))\, \frac{1}{g(x, u(\cdot))}\, e^{-\int_0^x \frac{\mu(y, u(\cdot))}{g(y, u(\cdot))}\, dy}\, dx$, where β represents fertility, µ mortality and g a coefficient of growth (the detailed model is given and discussed in [1]). The condition for the existence of a nonzero steady solution (with suitable regularity conditions) is R(u) = 1; see [2,3] and [1]. See also [4,5,8]. If R(0) < 1 and the monotonicity conditions hold, the zero solution is the unique equilibrium. I prove in [1] that R(0) > 1 is a sufficient condition for the existence of nontrivial stationary solutions. If the monotonicity conditions do not hold, then R(0) > 1 is sufficient but not necessary, and it is simple to give a counterexample.
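Since the displayed formula (2) is not reproduced above, it may help to recall the standard integration-by-parts identity for Riemann-Stieltjes integrals, which is presumably what formula (2) expresses; this is a sketch under the hypotheses above (H monotone and bounded, G continuous and of bounded variation on [a, b]), not a verbatim restatement of the note's equation:

\[
F(G) = \int_a^b H(x)\, d[-G(x)] = H(a)\,G(a) - H(b)\,G(b) + \int_a^b G(x)\, dH(x) .
\]

The last integral exists because G is continuous and H, being monotone, induces a non-negative Stieltjes measure dH; the boundary terms are the ones to be controlled when b = ∞, using the convention on G(b) stated above.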
More about the Application The model is a generalized version of the classic Lotka-MacKendrick population model: consider a population density u = u(t, x), where t ∈ [0, T] represents time, x ∈ (0, ∞) is age or size, and the total population P(t) is $P(t) = \int_0^\infty u(t, x)\, dx$. Consider the following functions: growth/diffusion g = g(x, u), mortality µ = µ(x, u), fertility β = β(x, u), depending on x and depending infinite-dimensionally on the population density u(t, ·). The model consists of a balance equation (8) for u together with a nonlocal boundary condition (9); in particular, Eq. (9) gives the newborns. The generalized net reproduction rate is defined as $R(u) = \int_0^\infty \beta(x, u)\, \Pi(x, u)\, dx$, where $\Pi(x, u) = \frac{1}{g(x, u)}\, e^{-\int_0^x \frac{\mu(y, u)}{g(y, u)}\, dy}$ is an auxiliary function, called the generalized survival probability; it represents a stationary solution of Eq. (8), i.e. of the differential part of the model. In general β and Π depend on u in a functional way: for instance, in Calsina and Saldana [2,3] the dependence is given through a weighted integral; in my paper [1] the dependence is infinite-dimensional in a more general way, to manage hierarchical models. Examples are populations where fertility or mortality are influenced only by the immediately superior size: for instance a population of trees in a forest, where the contested resource is light, which is intercepted by trees immediately taller than those of size x but not by trees that are much taller than x. (For a tree population model, see [7].) A stationary solution u of (8)-(9) exists if and only if u satisfies the functional equation (11), where $G(u(\cdot)) = \int_0^\infty \beta(x', u(\cdot))\, u(x')\, dx'$. Eq. (11) is related to the condition R(u) = 1 that is used to prove the existence of a nontrivial (that is, nonzero) stationary solution. Under suitable regularity conditions, R(0) > 1 is a sufficient condition. With additional conditions on the monotonicity of β/g and µ/g, the reproduction rate R(u) is monotone decreasing, and we exclude the existence of a nontrivial solution if R(0) < 1. This is a recurrent condition in the dynamics of populations.

Other Recurrences of the Functional in the Literature The conditions on H and G in Prop. 1 are analogous to the conditions given in [6], Teorema 2.1, b):

Teorema ([6]) Let −∞ < a < b ≤ ∞ and let h and g be positive functions on (a, b), where g is continuous on (a, b). Assume that h is increasing on (a, b) and g is decreasing on (a, b) with g(b−) = 0. Then, for any p ∈ (0, 1], the inequality (1.2) of [6] holds; if 1 ≤ p < ∞, then the inequality (1.2) holds in the reversed direction.

In [9], the theorem above is extended from t^p to concave and convex functions φ, when they are positive and differentiable. At present I have no idea whether this fact has any meaning for R(u), or possibly for estimates of it in the L^p spaces; however, I think that the similarity of the conditions is not a coincidence. Heinig and Maligranda's original paper [6] treats monotone functions and Hölder inequalities on Hardy spaces. A related field may be that of Fredholm-Volterra equations.
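For orientation, one standard form that such a structured model can take is sketched below; this is consistent with the definitions of Π and G above, but it is not the exact system (8)-(9) of [1]:

\[
\partial_t u(t,x) + \partial_x\bigl[g(x, u(t,\cdot))\, u(t,x)\bigr] + \mu(x, u(t,\cdot))\, u(t,x) = 0, \qquad
g(0, u(t,\cdot))\, u(t,0) = \int_0^\infty \beta(x, u(t,\cdot))\, u(t,x)\, dx .
\]

For a stationary solution of the differential part, setting $v(x) = g(x,u)\,u(x)$ gives $v' = -(\mu/g)\,v$, hence $u(x) = v(0)\,\Pi(x,u)$; the boundary condition identifies $v(0)$ with $G(u(\cdot))$, so that $u = G(u(\cdot))\,\Pi(\cdot,u)$ and, for a nonzero solution, $\int_0^\infty \beta\,\Pi\,dx = R(u) = 1$, which matches the role of the condition R(u) = 1 described above.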
2015-02-17T17:32:39.000Z
2015-02-17T00:00:00.000
{ "year": 2015, "sha1": "6ae326e0c093c1a1cfd7ce0a5f1322ef1fc86ed1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6ae326e0c093c1a1cfd7ce0a5f1322ef1fc86ed1", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }