Preparation of High-Purity Trilinolein and Triolein by Enzymatic Esterification Reaction Combined with Column Chromatography

High-purity trilinolein and triolein were prepared by Novozym 435-catalyzed esterification combined with column chromatography purification in this study. Firstly, linoleic acid and oleic acid were extracted from safflower seed oil and camellia seed oil, respectively, by the urea adduct method. Secondly, trilinolein and triolein were synthesized through Novozym 435-catalyzed esterification of glycerol and fatty acids. The best synthesis conditions were: reaction temperature 100 ℃, residual pressure 0.9 kPa, enzyme dosage 6%, molar ratio of glycerol to linoleic acid 1:3 and reaction time 8 h. Crude trilinolein and triolein were further purified by silica gel column chromatography. Finally, high-purity trilinolein (95.43 ± 0.97%) and triolein (93.07 ± 1.05%) were obtained.

Introduction

Trilinolein and triolein are important chemical and pharmaceutical raw materials [1]. Trilinolein can be used as a lubricant in the textile industry and as a smoothing agent in the metal processing industry. Triolein can be used as an emulsifier, emulsifying stabilizer and wetting agent in food and cosmetics [2-4]. Trilinolein and triolein can be obtained either by extraction from natural products or by artificial synthesis. However, the cost of extracting trilinolein and triolein from natural oils is too high for industrial use. At present, there are two kinds of triglyceride (TAG) synthesis methods: esterification and transesterification. It is difficult for the transesterification method to yield high-purity TAG [5,6]. A large number of papers on TAG synthesis by esterification are published every year.
Compared with traditional chemical esterification, lipase-catalyzed esterification has the advantages of a higher reaction rate, higher efficiency, higher product purity and environmental friendliness [2]. Lipase-catalyzed esterification is especially suitable for food and medicine synthesis [7]. The esterification degree of the enzyme-catalyzed reaction of fatty acids with glycerol can reach above 95%. However, the purity of TAG synthesized by esterification is usually less than 90%, because of the presence of partial glycerides such as monoglyceride (MAG) and diglyceride (DAG) [8-10]. Chemical structures of TAG, MAG and DAG are shown in Fig. 1. Liu studied the optimal conditions for enzymatic esterification synthesis of triglyceride; under the optimal conditions the total triglyceride content reached 90.77 ± 0.85% [11]. However, because low-purity oleic acid was used as the raw material, the obtained triglyceride was a mixture of several different triglycerides. Therefore, it is necessary to use high-purity fatty acid as the raw material to obtain high-purity TAG. Also, the crude TAG product can be further purified by removing the free fatty acids (FFA), MAG and DAG. Commonly used purification methods are column chromatography and molecular distillation, which can purify the TAG by decolorization, deacidification and further enrichment of TAG [12]. However, high-temperature molecular distillation is required for further purification of TAG, which would lead to oxidation and isomerization of TAG [13]. It is therefore necessary to explore practical purification methods. The objective of this paper was to study the synthesis and purification of trilinolein and triolein. Firstly, high-purity linoleic acid and oleic acid were prepared by urea adduction fractionation. Then trilinolein and triolein were synthesized through Novozym 435-catalyzed esterification of glycerol and FFA.
Finally, high-purity trilinolein and triolein were obtained after purification.

2.2 Preparation of linoleic acid and oleic acid by urea adduction fractionation

A fatty acid mixture was prepared from safflower oil according to a previous method [14]. The fatty acid mixture, urea and 95% ethanol were mixed in a 500 mL glass container at a molar ratio of 1:2:10, and urea adduction was performed at 60 ℃ for 90 min. When the reaction was complete, the container was placed in a refrigerating circulation pump for the urea to crystallize for 12 h at 10 ℃. Finally, high-purity linoleic acid was separated after rotary evaporation according to a previous method [15]. For oleic acid preparation, a fatty acid mixture was prepared from camellia seed oil and a two-stage urea adduction fractionation was performed. The fatty acid mixture, urea and 95% ethanol were mixed at a molar ratio of 3:4:10, and urea adduction was performed at 60 ℃ for 120 min. When the reaction was complete, the container was placed in the refrigerating circulation pump for the urea to crystallize for 14 h at 3 ℃. The first-stage urea adduction product was mixed with urea and 95% ethanol at a molar ratio of 1:2.75:10 for the second-stage urea adduction, which was performed at 60 ℃ for 2 h, followed by urea crystallization for 12 h at 0 ℃. Finally, high-purity oleic acid was obtained after rotary evaporation [16,17].

Fatty acid analysis

The fatty acid composition was determined by gas chromatography (GC) after derivatization to fatty acid methyl esters with 2 N KOH in methanol, according to the IUPAC method [18]. The analysis of fatty acid methyl esters was performed on an Agilent 7890B GC (Agilent, USA) equipped with a BPX-70 capillary column (30.0 m × 320 μm × 0.50 μm, SGE, Australia) and a flame ionization detector (FID, Agilent, USA). Nitrogen was used as the carrier gas at a flow rate of 1.0 mL/min.
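The molar ratios used in the urea adduction steps above can be turned into working masses with simple stoichiometry. The sketch below is illustrative only: the average molar mass assumed for the fatty acid mixture (280 g/mol, close to the C18 acids) is not given in the paper, and 95% ethanol is treated as pure ethanol for simplicity.

```python
# Hypothetical helper: convert the molar ratios of the urea adduction
# protocol into masses. The fatty-acid molar mass is an assumption.

M_UREA = 60.06      # g/mol
M_ETHANOL = 46.07   # g/mol, treating 95% ethanol as pure for simplicity

def adduction_masses(fa_mass_g, fa_molar_mass=280.0, ratio=(1, 2, 10)):
    """Return (urea_g, ethanol_g) needed for a given fatty acid mass."""
    n_fa = fa_mass_g / fa_molar_mass              # moles of fatty acids
    r_fa, r_urea, r_etoh = ratio
    urea_g = n_fa * r_urea / r_fa * M_UREA
    etoh_g = n_fa * r_etoh / r_fa * M_ETHANOL
    return urea_g, etoh_g

# First-stage safflower adduction (1:2:10) for a 100 g fatty acid charge:
urea_g, etoh_g = adduction_masses(100.0, ratio=(1, 2, 10))
print(f"urea: {urea_g:.1f} g, ethanol: {etoh_g:.1f} g")
```

The same function covers the camellia stages by passing `ratio=(3, 4, 10)` or `ratio=(1, 2.75, 10)`.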
The column temperature was initially held at 180 ℃ for 5 min, then programmed to 230 ℃ at a rate of 3 ℃/min and maintained isothermally for 15 min. The temperatures of the injector and the FID were set at 230 ℃ and 300 ℃, respectively. Injections were performed with a split ratio of 1:50. GC peaks were identified by comparison with reference standards.

Enzymatic synthesis of trilinolein and triolein

In this study, trilinolein and triolein were synthesized by Novozym 435-catalyzed esterification of glycerol and linoleic acid/oleic acid under vacuum. The reaction was conducted in an oil-bathed glass vessel with a volume of 100 mL containing glycerol and linoleic acid/oleic acid (a total amount of 40 g), and was initiated by the addition of 0.8 g Novozym 435 (2% of the total weight of substrates) with stirring at 400 rpm, 0.9 kPa and 100 ℃. Samples were withdrawn periodically to monitor the composition of the reaction mixture.

Determination of trilinolein and triolein

The esterification products were analyzed on the same GC using a DB-1ht capillary column (30 m × 0.25 mm × 0.1 μm, Agilent, USA). The injection volume was 1 μL and the carrier gas was nitrogen at a flow rate of 4.41 mL/min. The initial column temperature was 100 ℃ and a temperature gradient was applied: from 100 ℃ to 290 ℃ at 50 ℃/min, from 290 ℃ to 320 ℃ at 40 ℃/min with a hold at 320 ℃ for 8 min, then from 320 ℃ to 360 ℃ at 20 ℃/min with a final hold at 360 ℃ for 15 min. The temperatures of the injector and the FID were set at 350 ℃ and 400 ℃, respectively. Injections were performed with a split ratio of 1:20. GC peaks were identified by comparison with reference standards [11,19]. Data acquisition and processing were accomplished with Agilent OpenLAB CDS software (Agilent, USA). TAG purity and esterification degree were defined as Eq. 1 and Eq.
2 in this study; the esterification degree was calculated as:

Esterification degree (%) = (1 − FFA) × 100    (2)

where FFA is the fraction of residual free fatty acids.

Purification of trilinolein and triolein

To obtain highly pure trilinolein and triolein, silica gel column chromatography was used to further remove MAG, DAG and FFA from the glyceride mixtures. Trilinolein or triolein was weighed, dissolved in a small amount of n-hexane and loaded onto the silica gel column. The elution gradient was: 100% n-hexane, 3 column volumes; ethyl ether/n-hexane (2/98, v/v), 5 column volumes; ethyl ether/n-hexane (5/95, v/v), 10 column volumes; 100% ethyl ether, 10 column volumes. The effluent was collected and monitored with an ultraviolet detector at 254 nm.

Statistical analysis

All experiments were performed in triplicate. Significant differences among means were determined by ANOVA (p < 0.05) [20].

Preparation of linoleic acid and oleic acid

Urea adduction fractionation is commonly used to extract fatty acids. Urea adducts precipitate both saturated and monounsaturated hydrocarbon chains as urea complexes, leaving solubilized polyunsaturated fatty acids in the non-urea-adducted fraction [21]. With a linoleic acid content of 73-85%, safflower oil is an ideal raw material for preparing linoleic acid. Firstly, the mixed fatty acids were obtained by saponification of safflower oil, and then high-purity linoleic acid was directly prepared by the urea adduction method. Table 1 shows the fatty acid composition of the safflower oil raw material and the linoleic acid product. With a purity of 99.18 ± 0.94%, the linoleic acid can be used as the raw material for the subsequent preparation of trilinolein. Camellia seed oil, with an oleic acid content of 74-85%, was selected as the raw material for preparing oleic acid. A two-stage urea adduction method was used to prepare oleic acid. Firstly, the fatty acid mixture was obtained by saponification of camellia seed oil.
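The two product-quality metrics can be sketched numerically. Eq. 1 (TAG purity) is not reproduced in the text, so the percentage-of-total-mixture form below is an assumption; the esterification-degree function follows Eq. 2 as defined above.

```python
# Minimal sketch of the quality metrics. The TAG-purity form is an
# assumption (Eq. 1 is not reproduced in the text); Eq. 2 is as defined.

def esterification_degree(ffa_fraction):
    """Eq. 2: esterification degree (%) = (1 - FFA fraction) * 100."""
    return (1.0 - ffa_fraction) * 100.0

def tag_purity(tag, mag, dag, ffa):
    """Assumed Eq. 1: TAG as a percentage of the whole reaction mixture."""
    return tag / (tag + mag + dag + ffa) * 100.0

# The 8 h reaction mixture reported later in the text
# (69.36% trilinolein, 7.98% MAG, 12.04% DAG, 10.62% FFA):
purity = tag_purity(69.36, 7.98, 12.04, 10.62)
degree = esterification_degree(0.1062)
print(f"purity {purity:.2f}%, degree {degree:.2f}%")
```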
Then most saturated fatty acids were removed from the fatty acid mixture by the first-stage urea adduction, yielding oleic acid with a purity of 88.72 ± 0.73%, as shown in Table 1. During the second-stage urea adduction, urea and oleic acid formed a more stable crystal clathrate, while polyunsaturated fatty acids combined with urea only with difficulty [22]. Therefore, the oleic acid was further purified by removing most of the polyunsaturated fatty acids. As Table 1 shows, the contents of both C18:2 and C18:3 in the oleic acid fraction decreased dramatically after the second-stage urea adduction. The resulting oleic acid (97.27 ± 0.98%) can be used as the raw material for the subsequent synthesis of triolein.

Preparation of trilinolein

3.2.1 Influence of reaction temperature

Trilinolein was synthesized through Novozym 435-catalyzed esterification of glycerol and linoleic acid under vacuum. As shown in Fig. 2 and Fig. 3, both trilinolein content and esterification degree gradually increased with the reaction temperature. At a reaction temperature of 100 ℃ under negative pressure, the water produced by the esterification reaction boils and volatilizes rapidly, shifting the esterification equilibrium to the right [23]. When the reaction temperature was further raised to 110-120 ℃, there was no significant difference in trilinolein content or esterification degree (p > 0.05). However, at a reaction temperature of 130 ℃, both trilinolein content and esterification degree dropped dramatically because of a decreased reaction rate, probably attributable to inactivation of the enzyme at high temperature.

3.2.2 Influence of reaction time

Fig. 4 shows the time course of the esterification reaction at 100 ℃ and 0.9 kPa. As the reaction proceeded, the trilinolein content and esterification degree increased rapidly during the first 8 h (p < 0.001) and then grew slowly until reaching equilibrium. Both MAG and DAG contents increased quickly during the first 2 h (p < 0.05) and then decreased to 7.98% and 12.04%, respectively.
The triglyceride content increased very slowly after 8 h of reaction (p = 0.012). However, too long a reaction time can easily lead to oxidation and isomerization of the triglyceride, as well as discoloration of the oil [24]. At the best reaction time of 8 h, the reaction mixture was composed of 69.36% trilinolein, 7.98% MAG, 12.04% DAG and 10.62% FFA.

3.2.3 Influence of the molar ratio of glycerol to linoleic acid

As shown in Fig. 5, the molar ratio of glycerol to linoleic acid had a substantial influence on the composition of the esterification products. Increasing the ratio from 1:2 to 1:3 caused the trilinolein purity to increase significantly (p < 0.001). The maximum trilinolein purity was 69.36%, obtained at a glycerol-to-linoleic-acid molar ratio of 1:3. However, the trilinolein purity decreased significantly when the ratio was increased from 1:3 to 1:3.5 (p < 0.05). At molar ratios of 1:2 and 1:2.5, because of the excess glycerol in the system, both MAG and DAG contents were relatively higher than at the other molar ratios, keeping the trilinolein purity below 60%. At a molar ratio of 1:3.5, the reaction was incomplete and the residual linoleic acid content was up to 20%, owing to the linoleic acid excess. Moreover, excess linoleic acid made the subsequent deacidification of the product difficult. Therefore, the molar ratio of glycerol to linoleic acid was fixed at 1:3 for the subsequent experiments.

3.2.4 Influence of enzyme loading

As shown in Fig. 6 and Fig. 7, the enzyme loading had a significant effect on both trilinolein purity (p < 0.05) and esterification degree (p < 0.05). Trilinolein purity rose rapidly with reaction time during the first 4 h and leveled off at around 8 h. The trilinolein content was only 64.38% at a reaction time of 8 h with 2% enzyme loading. With an enzyme loading of 4%, the trilinolein content reached 69.36%. A further increase in the enzyme loading from 4% to 6% caused the trilinolein purity to increase significantly, from 69.36% to 70.26%.
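The 1:3 molar ratio fixed above can be translated into the actual substrate masses for the 40 g charge described in the methods. This is a hedged arithmetic check, not the authors' calculation: the molar masses are standard values (glycerol 92.09 g/mol, linoleic acid 280.45 g/mol), not taken from the paper.

```python
# Illustrative stoichiometry: masses of glycerol and linoleic acid for a
# 40 g total charge at a 1:3 glycerol:linoleic acid molar ratio.
# Molar masses are standard literature values (an assumption here).

M_GLYCEROL = 92.09    # g/mol
M_LINOLEIC = 280.45   # g/mol

def charge_masses(total_g=40.0, fa_per_glycerol=3.0):
    """Split a total substrate mass according to the molar ratio."""
    n_gly = total_g / (M_GLYCEROL + fa_per_glycerol * M_LINOLEIC)
    return n_gly * M_GLYCEROL, n_gly * fa_per_glycerol * M_LINOLEIC

gly_g, la_g = charge_masses()
print(f"glycerol {gly_g:.2f} g + linoleic acid {la_g:.2f} g")
```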
Therefore, an enzyme loading of 6% was considered suitable.

Best conditions and trilinolein yield

The best conditions were: reaction temperature 100 ℃, reaction pressure 0.9 kPa, enzyme loading 6%, molar ratio of glycerol to linoleic acid 1:3 and reaction time 8 h. Under these conditions, the crude trilinolein product was obtained with a purity of 70.26 ± 0.74% and an esterification degree of 91.23 ± 0.88%.

Preparation of triolein

At a reaction temperature of 100 ℃, reaction pressure of 0.9 kPa, enzyme loading of 6%, glycerol/oleic acid molar ratio of 1:3 and reaction time of 8 h, triolein was produced with a content of 68.19 ± 0.62% and an esterification degree of 91.64 ± 1.03%.

Purification of trilinolein and triolein

In addition to 70.26% trilinolein, the esterification reaction mixture also contained 7.69% MAG, 11.87% DAG and 10.18% FFA. To achieve highly pure trilinolein, silica gel column chromatography was employed. The different components of the esterification mixture could be separated by gradient elution with eluents of different polarity [25]. Fatty acids were eluted by ethyl ether/n-hexane (2/98, v/v), TAGs were eluted by ethyl ether/n-hexane (5/95, v/v), while MAG and DAG were washed out by the 100% ethyl ether eluent. The effluent was collected and the solvent was removed by reduced-pressure distillation. Finally, purified trilinolein (95.43 ± 0.97%) was obtained. The crude triolein was further purified in the same way, and its final purity was 93.07 ± 1.05%.

Conclusions

The preparation of high-purity trilinolein and triolein by Novozym 435-catalyzed esterification combined with column chromatography purification was reported in this study. Firstly, linoleic acid and oleic acid were extracted from safflower seed oil and camellia seed oil, respectively, by the urea adduct method. Then, trilinolein and triolein were synthesized from glycerol and linoleic acid/oleic acid by Novozym 435-catalyzed esterification.
Crude trilinolein and triolein were further purified by column chromatography. Finally, high-purity trilinolein (95.43 ± 0.97%) and triolein (93.07 ± 1.05%) were obtained. Overall, the proposed multi-step process proved to be a promising approach for the synthesis of trilinolein and triolein.

Financial support granted by the Fundamental Research Funds for the Henan Provincial Colleges and Universities (2016QNJH19), the Scientific and Technological Project of Henan Province (162102310408), the Province Key Laboratory of Cereal Resource Transformation and Utilization, Henan University of Technology (001256, 001251) and the Fundamental Research Funds for Special Projects of Henan University of Technology (2018RCJH04) is gratefully acknowledged. No conflict of interest is associated with this work.
Value Orientations and Fertility Intentions of Finnish Men and Women

In this paper we examine how personal values and attitudes are related to childbearing intentions among 18-40-year-old Finnish men and women. We focus on religious and individualistic values and on attitudes towards children and the family, as well as attitudes towards work and gender roles. The impact of value and attitude orientations and situational factors on fertility decision-making is investigated separately at parities 0, 1 and 2 using logistic regression. Our study uses a subsample of 1,237 men and women drawn from the PPA2 survey of the attitudes of Finns towards family and children, family policy measures and values in life, as well as their fertility intentions. We find that information on personal values and attitudes does increase our knowledge of the determinants of childbearing intentions and decision-making, although not all our initial hypotheses concerning the association, or the direction of the association, between certain attitudes and fertility intentions were confirmed in the data. Religious values, as well as work-related attitudes and individualistic values, appeared to have little bearing on childbearing intentions, while various attitudes towards children were related to intentions to have (more) children. In addition, a conservative familistic attitude was related to intentions, as were gender role attitudes. The impact of values and attitudes varied by parity, providing support for the notion that childbearing decisions are made sequentially.

All over Europe after the Second World War, though at a varied pace, fertility has continued to decline and in many countries has reached a level clearly below replacement (2.1). Most European countries are currently also experiencing similar changes*

*We have not included all the tables from the logistic regression analyses performed for this study in this article. Additional tables can be obtained from the authors.
in the family and household structure, in childbearing and in the age structure of the population. These trends include an increasing age at first birth and a decreasing number of children being born, an increasing proportion of children being born outside marriage, a rise in different forms of cohabitation, an increasing number of single-person households among young adults and an increasing number of one-parent and reconstituted families.

In Finland today, childbearing decisions are made in a context where the majority of women, and especially mothers, are employed, and the younger generations, particularly young women, spend a long time in education and end up with a relatively high educational degree. The labor force participation rate of women aged 20-40 is over 77 percent, and the proportion of women among persons completing a university degree rose to 60 percent during the 1990s. The state has also substantially reduced the costs related to children and childbearing and has supported the reconciliation of parenthood and employment with various family policy measures. Compared to many European countries, fertility in Finland has remained rather stable and at a relatively high level since the mid-1970s (1.6-1.8). Despite this, fertility has continued to be below replacement level, leading to a diminishing number of children being born.

A large part of the fertility decrease during the last decades can be explained by the postponement of the first birth to a later age and a decrease in third- or higher-order births. The median age of women at first birth has risen from about 23 in the late 1960s to well over 27 in 2001. The proportion of women still childless at age 35 was about 25 percent in 2001, seven percentage points higher than in the mid-1980s (Statistics Finland 2002 and 2001).
Although fertility levels have fallen below replacement, they are not close to zero and, in fact, in many countries the decrease has been leveling off and period fertility rates have stabilized around 1.5-1.9. Among those who become parents, a family with two children seems to have become the norm. In fertility surveys the ideal family size has settled at a little over two children, and the proportion of those who wish to have no children at all, or only one child, has remained at a low level (Coleman 1996). Also in Finland, the proportion of women progressing towards higher parities, three or four children or more, has decreased continuously, although in the 1990s a slight increase in third- and fourth-birth intensities could be noticed (Vikat 2002).

Women's labor force participation and increasing educational participation have received some attention in research on fertility trends and behavior in Finland. So far, relatively little research has focused on the impact of values and attitudes on fertility decision-making in Finland (except, for example, Ruokolainen & Notkola 2002). With the exception of FFS studies from the 1970s and 1980s, we have little knowledge of values related to childbearing or of family size ideals among Finns.

In this paper we examine the association of personal value orientations and attitudes with childbearing intentions among 18-40-year-old men and women, using data from the PPA2 survey conducted in Finland in spring 2002. This study is motivated by the idea that if we want to understand why men and women decide to have or not to have a child, it is important to incorporate value and attitude orientations into the study of fertility decision-making. Information on personal aspirations and attitudes can provide additional information beyond the other fertility determinants, especially when individuals and couples are behaving 'against the norm', i.e.
when they decide not to have children at all or stop at parity 1, or, respectively, when they want to continue towards third or higher-order parities.

Rational choice and ideational theories on fertility

According to rational choice or economic theories of fertility, reproductive decisions are based on rational thinking and calculation, in which children are regarded as only one of many possible ways of self-fulfilment in life (Easterlin 1966; Becker 1993). When deciding whether to have a child, individuals and couples weigh the pros and cons of childbearing against available alternative activities. Having children may involve considerable costs in the form of employment opportunities, income spending, partnership behavior, etc. Children involve not only opportunity costs and direct expenditures; their utility has also declined further as children are no longer required to support their parents. The emotional satisfaction from children can be achieved most economically by having one or two children.

On the other hand, ideational or normative theories argue that norms and values play a central role in fertility behavior. Sociological studies have suggested that cultural or ideological climates can, in the absence of sanctions, have an impact on fertility similar to that of norms and values (Lesthaeghe 1983; Preston 1986). Value-of-children studies have stressed the role of non-instrumental motives and the immanent values which children satisfy that lie behind parents' desire for children. Children are valued as sources of feelings of accomplishment, creativity and stimulation (Palomba & Moors 1995; Hoffman & Manis 1979), as sources of social capital (Schoen et al. 1997), or as sources for reducing uncertainty and increasing marital solidarity (Friedman et al. 1994; Myers 1997).
On the macro level, secularization, the ideology of responsible parenthood, growing individualism or post-materialism, the empowerment of women and changing expectations towards motherhood and parenthood are believed to be the underlying causes of low fertility in the Western world. Having children may still well form part of a postmodern idea of self-fulfilment. But at very low fertility levels, the timing of births clearly becomes exceedingly important. The crucial factor that appears to determine the completed family size of modernists and postmodernists is not that they differ substantially in stated ideas, wishes, expectations or preferences. Most likely, postmodernists simply have important competing preferences and priorities. They begin childbearing late: at every age they have below-average numbers of children born (van de Kaa 2001).

It can be assumed that both sets of elements, the ideational elements and the economic constraints, contribute to family formation and fertility behavior. The recent evolution of society has brought about a great variety of opportunities in education, work and leisure time. People's standard of living is largely determined by the level and quality of education, by the degree of commitment to societal goals and by the motivation for self-realization. In addition to economic costs, social and cultural changes play a very meaningful role in encouraging people to react in an individualistic manner and to break with longstanding behavioral patterns. Effective contraception has made it possible to plan if and when to become a parent. It may be expected that motivational factors, personal values and considerations have more importance in determining fertility behavior when social norms are losing their predictability in describing the family formation process.
Much research has focused on 'hard facts' behind fertility behavior. This can be partly explained by the difficulty of establishing a causal link between values and fertility in cross-sectional or retrospective studies. Studies on values and fertility have demonstrated that ideas about appropriate ways of living are linked to life course decisions, but values may also adapt to changes in family life (Thomson 2002; Moors 2002). It can be expected that personal values and attitudes affect how economic and situational constraints to childbearing are perceived, as well as the assessment of the rewards and costs of alternative activities. In this respect, studies on childbearing decision-making and fertility intentions provide one possible way to include values and attitudes in the study of fertility behavior.

Fertility intentions and realized fertility

Research on fertility behavior and fertility intentions has shown that the link between expressed fertility intentions and subsequent fertility is often very loose. Often, the number of children intended or desired by the respondents is used to measure fertility intentions. Preferences and intentions concerning childbirth and the number of children may well reflect a person's general ideas about children and childbearing, but their validity in predicting actual fertility behavior is often questioned. With increasing awareness of and access to reliable contraceptive methods, actual fertility has approached the ideal, and gradually the ideal family size has come to exceed the actual number of children in families. In Finland, the actual number of children women had was found to be smaller than their ideals already in the early 1970s. The ideal number of children has been found to depend on the phase of life, and the ideals of the young are often unrealistic and may change later along with experience (Ritamies et al. 1984).
Although it is agreed that individuals are poor predictors of whether they will initiate childbearing, the same factors that predict fertility behavior are found to predict fertility intentions (Rindfuss et al. 1988). In this line of thinking, intentions are important because they synthesize the influences of an individual's background and attitudes and mediate between those characteristics and behavior, the transition to parenthood.

According to Miller and Pasta (1995), fertility behavior can be conceptualized as a general psychological sequence leading from latent motivational traits to realized fertility behavior. In the first phase, latent fertility-related traits and motivations are activated into conscious desires, which, in turn, are translated into intentions. Desires express personal wishes and as such do not lead to realized behavior. Intentions differ from desires in that they take into account the behavioral control or context relevant to childbearing behavior. This means that childbearing intentions are formulated in relation to perceived constraints that prevent a person from doing what he or she desires. Where desires represent the integration of antecedent motivations and attitudes, intentions represent the integration of antecedent personal desires and of perceived situational, interpersonal, social and other constraints on behavior. In the next phase, intentions generate instrumental behavior for the achievement of the intended goal. Miller and Pasta have identified three types of desire, and corresponding intention, which are relevant to fertility: the desire/intention for a certain number of children, the desire/intention for the timing of a birth, and the desire/intention for a child, or for another child if there are already children present.

A number of longitudinal studies have provided evidence that fertility intentions can, indeed, have predictive value concerning future behavior (Schoen et al.
1999; Monnier 1989; Miller & Pasta 1995). Contrary to the mediation hypothesis, Schoen et al. (1999) argue that intentions have independent value in explaining subsequent fertility. Timing expectations, and especially the certainty of intentions, were found to be strongly related to future fertility behavior, especially among married persons. Research has also pointed to the importance of time as an intervening variable: the more time that has elapsed between the measurement of intentions and the behavior, the less predictive the intentions are (Miller & Pasta 1995; Thomson 1997).

We assume, in accordance with White & Kim (1987), that childbearing decision-making is sequential and that individuals and couples proceed towards their final family size via consecutive choices, in which they consider the alternatives in light of their experiences and situational factors. Family formation intentions reflect these considerations. While the decision to have a first child is a choice of parenthood over non-parenthood, decisions to have subsequent children are essentially different, in that parents already have the experience gained from their previous children. As circumstances and alternatives are expected to vary by parity, so too are the factors related to childbearing decisions. Accordingly, personal values and attitudes enter at every stage of family formation, but the relevant values may change during the family formation process, as may the relative importance of values/attitudes and other factors. In his study, Bulatao (1981) found that the values and disvalues related to having a(nother) child in the family varied according to the prospective birth order. Personal affection and closeness to one's spouse were related to lower birth orders, while gender preferences and financial concerns were expressed more often in relation to higher-parity births.
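The parity-specific analysis strategy described above, fitting a separate logistic regression of the intention to have a(nother) child at each parity, can be sketched as follows. This is not the authors' code: the data are synthetic, the single "attitude" predictor is a stand-in for the survey items, and the parity-dependent effect sizes are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(X, y, lr=1.0, steps=5000):
    """Plain gradient-ascent maximum-likelihood fit of a logistic model.
    Returns the weight vector (intercept first)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted probabilities
        w += lr * Xb.T @ (y - p) / len(y)       # average log-likelihood gradient
    return w

# Synthetic survey: one attitude score per respondent, parity 0/1/2, and an
# intention whose dependence on the attitude weakens at higher parities
# (a made-up pattern, only to illustrate effects varying by parity).
n = 3000
attitude = rng.normal(size=n)
parity = rng.integers(0, 3, size=n)
true_slope = np.array([1.5, 1.0, 0.3])[parity]
p_intend = 1.0 / (1.0 + np.exp(-(-0.5 + true_slope * attitude)))
intend = (rng.random(n) < p_intend).astype(float)

# Fit a separate model at each parity, as in the analysis described above.
coefs = {}
for k in range(3):
    mask = parity == k
    w = fit_logistic(attitude[mask, None], intend[mask])
    coefs[k] = w[1]
    print(f"parity {k}: attitude coefficient ~ {w[1]:.2f}")
```

Fitting per-parity subsamples, rather than pooling with interaction terms, mirrors the sequential-decision view: each model conditions on the choices already made.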
Values, attitudes and fertility behavior

The general attitudes of the population towards the family and children form the context in which subjective preferences and assessments regarding family formation are made. There has been a dramatic and pervasive weakening of the normative imperative to marry, to have children and to maintain separate roles for males and females. The power of socially shared beliefs that individuals should follow these particular family patterns is diminishing. However, while more people now accept diversity of behavior, they still value and desire marriage, parenthood and family life for themselves (Thornton 1989; Palomba 1998; Palomba & Moors 1998).

A decision to become a parent is one of the most complex lifetime judgements that individuals or couples are called upon to make. Becoming a responsible parent involves a sustained commitment to the economic, social and psychological support of the child for at least fifteen, and often for more than twenty, years. Individuals and couples must assess their current and likely future circumstances over a series of domains, including partnership, employment and income, housing and time commitments (Hobcraft & Kiernan 1995).
A value is an enduring belief that a specific mode of conduct or end-state of existence is personally or socially preferable to an opposite mode of conduct or end-state of existence. Values are understood as enduring dispositions which guide the choices and decisions of individuals (Helkama 2001). The individual's relationship to the surrounding reality is reflected in his or her values. Values are expressed through attitudes, which provide models of behavior and which develop through life experience (Puohiniemi 1996). There are often logical relations between values and attitudes. For example, values that are identified as being within the specialized concern of a particular institution should be the best predictors of the attitudes and behaviors that are also within that domain: thus religious values should be most associated with religious attitudes and behaviors (Rokeach 1973).

In this paper we focus on value orientations and attitudes which we presume to have some influence on fertility decision-making. We examine whether men and women who intend to have fewer or more than the normative two children hold different values and attitudes in life from others. Are situational factors, such as one's economic situation, more important in determining fertility intentions than values and attitudes? Or are we living in a society with such a diversity of values that conclusions such as this are difficult to draw, because, for example, work, family, children and time for oneself are all highly valued by most people?
Religious values. Religion is a symbol of the past, the legacy of traditional society (Goldscheider 1999). Church attendance has decreased considerably in most countries; however, the timing and pace of this process differ from one country to the next and from one faith to another. The overarching and transcendent religious system has been reduced to a subsystem of society alongside other subsystems, and its overarching claims have a shrinking relevance (Dobbelaere 1995). Traditional religious values are replaced by secular orientations that emphasize the centrality of the individual in decision-making processes and the deliberate or conscious choices people make about the number of children that are appropriate for their economic circumstances (Goldscheider 1999). While society in general is becoming more secular, religious values are still found to be related to fertility behavior in many fertility studies.

Individualistic values. Having a child bonds individuals and couples both emotionally and legally for the rest of their lives. Having children requires giving up past liberties, for example free time and work. In an individualized style of living, the birth of a child may stand in the way of people's individual freedom. An individualized lifestyle may therefore go hand in hand with postponing the birth of a child, or perhaps with not having children at all (Jansen & Kalmijn 2002). Moors (2002) found strong support for the idea that autonomy values develop in the process of family formation. Autonomy values are specifically relevant in partially explaining the transition to motherhood and the preference for living independently as a woman.
Attitudes towards gender roles. In modern society men and women are expected to show more liberal attitudes towards gender roles. Traditional gender role attitudes prescribe motherhood as an essential characteristic of being a woman, and can thus be expected to bear some association to fertility intentions. Thomson (2002) found in her study that the transition to first- or second-time motherhood was not associated with gender role traditionalism, but second-time fatherhood was. Berrington (2002) found in her study that women's entry into parenthood was associated with the adoption of more traditional attitudes towards women's work. However, the effect differed according to the subsequent labor market experiences of the woman: leaving the labor market to undertake family care was associated with greater approval of traditional family attitudes, while re-entry into the labor market was related to increased egalitarianism.

Attitudes towards children. In former societies parenthood was beyond dispute, but today it is a matter of free choice, the outcome of comparing pros and cons which are personally defined. Nevertheless, children remain important to large parts of the population. The proportion of women and men who do not want any children is small in all European countries, even though there is no longer social pressure to have children (van den Akker, Halman & de Moor 1993). Schoen et al. (1997) found strong support in their study for their hypothesis that persons for whom the relationships created by children are important are more likely to intend to have a(nother) child.
Attitudes towards work. Work in modern society is not only an economic necessity, but also an intrinsically rewarding and creative human activity. In 1990, work was one of the most important things in life after family in Western countries. Work in the information society is very different from the traditional work roles of the industrial period. Work is now more flexible, more abstract and more demanding mentally. As a consequence, new qualifications are required from workers, with the emphasis on commitment, motivation and teamwork (Zanders 1993). A comparison of the importance of work values in Europe and North America demonstrated that North Americans clearly demand more from a job than Europeans, but for both continents good pay is of the highest importance (Zanders & Harding 1995).

Attitudes towards family/familistic values. According to the familistic view, the family is a value in itself and includes all thoughts, demands and activities which are directed at making the family stronger (Jallinoja 1984). The birth of the first child is likely to be most strongly related to family values (Thomson 2002). Moors (2002) pointed out in his study that traditional family values increased the likelihood of choosing traditional patterns of family formation such as marriage and motherhood. From the traditional point of view, having children is seen even as the ultimate expression of a bond and the fulfilment of a relationship (Jansen & Kalmijn 2002). More than 80 percent of people in Western countries find the family very important in their lives. It is more important than friends, acquaintances, leisure time, work, religion or politics (van den Akker, Halman & de Moor 1993).
Study hypotheses

In this paper we examine whether information on personal value orientations and attitudes adds to our knowledge of the determinants of fertility intentions and decision-making. We will focus on religious values, attitudes towards children, family and work, attitudes towards gender roles and individualistic values, and assess their impact in addition to demographic and situational factors.

We expect that individualistic values and the drive for self-realization, as well as non-traditional sex role attitudes and the importance and meaning given to work, reduce childbearing intentions in Finland. We presume that persons who are more individualistic or work-oriented are more actively seeking alternatives to childbearing, while persons who hold more familistic attitudes (centrality of children, preference for traditional family modes) or religious values derive more satisfaction from children and the parental role and are thus more likely to intend to have (more) children. In addition, it can be expected that persons who highly value satisfaction and success in working life are more likely to perceive the costs related to childbearing and children as being higher, and are consequently more likely to show intentions to stop childbearing at lower parities. We assume, in accordance with other studies indicating a negative relationship between higher-order parities and economic constraints, that financial concerns related to children are more relevant in decisions concerning third or higher parity births than at lower parities. In general, we expect that values and attitudes will increase our understanding of fertility intentions and behavior, especially in explaining non-normative behavior, e.g. the intention to stay childless and the intention to have a third or higher order child.
Data and methods

Data from the Finnish Population Policy Acceptance Survey was used for this study. The Finnish survey is part of the Population Policy Acceptance Survey (PPA2), a comparative cross-sectional survey of Europeans' (12 countries) attitudes and opinions concerning demographic changes, demographic behavior and population-related policies. The survey focused on values and attitudes towards the family and family formation, on perceptions of the advantages of having children, the meanings given to the family and parenthood, and aspirations in life, as well as on opinions and attitudes towards population policy issues, family policy measures and the role of government in providing social security. The survey also included questions on fertility intentions.

The Finnish survey was conducted in spring 2002. A simple random sample of 7,000 men and women aged 18-69 years and living in Finland (excluding the Province of Åland) was drawn from the population register by the Population Register Center. The questionnaire was mailed twice, with one additional mailing of a letter including only a return request to all persons in the sample. The overall response rate achieved was 55.7 percent, which is relatively low compared to the response rates received from interview studies. For this study the sample was restricted to 18-40-year-old women and men with 0-2 children. Pregnant women, and men whose partner was pregnant, were excluded from the study. The size of the sub-sample for this study is 1,237 persons. In the Finnish data, men below 40 years of age and women aged 18-19 and 30-34 years were somewhat underrepresented. There was also a slight underrepresentation of married and divorced men aged 30-34 and of single women aged 18-19 and 30-34. Persons with a university degree were clearly overrepresented in the data. Men and women without children and men with two children were slightly underrepresented, and men and women with one child overrepresented, in the data set.
Our focus of interest in this study is on questions concerning values in life, such as religiousness and individualism, and attitudes towards children, family, work and gender roles, and the impact these have on birth intentions.

The PPA2 questionnaire included a number of questions related to values and attitudes. In a preliminary analysis, we conducted factor analyses to construct indicators of values and attitudes that were as reliable as possible. If it was not possible to create a factor, a single indicator was used instead.

Logistic regression is the main analytical tool used in this study. We first describe men's and women's fertility intentions according to parity, and second, analyze the determinants of those intentions. We examine parities 0, 1 and 2 separately to test the hypothesis of parity-specific associations.

First, we examine factors associated with a decision to stop childbearing versus a more or less certain intention to continue childbearing (in logistic regression terms, '0' included those who had given a 'no' response to the question on childbearing intention and '1' those who had responded either 'uncertain' or 'yes'). In the next phase, we focus on those who had indicated at least some potential to continue childbearing, by opposing 'uncertain' to 'yes' responses in order to investigate factors related to the certainty of the intention.
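The two-phase dichotomization of the three-category intention item described above can be sketched in code. This is only a minimal illustration of the recoding logic, not the authors' actual analysis; the response labels and the toy data are invented for the example.

```python
# Recode the three-category intention item ("no", "uncertain", "yes")
# into the two binary outcomes used in the two analysis phases.
# Phase 1: stop (0 = "no") vs. some potential to continue (1 = "uncertain"/"yes").
# Phase 2: among continuers only, certainty (0 = "uncertain", 1 = "yes").

def phase1_outcome(response: str) -> int:
    """0 = intends to stop childbearing, 1 = some potential to continue."""
    return 0 if response == "no" else 1

def phase2_outcome(response: str):
    """Certainty of intention among continuers; 'no' responses are excluded."""
    if response == "no":
        return None  # excluded from the second-phase model
    return 1 if response == "yes" else 0

responses = ["no", "uncertain", "yes", "yes", "uncertain"]

phase1 = [phase1_outcome(r) for r in responses]
phase2 = [phase2_outcome(r) for r in responses if r != "no"]

print(phase1)  # [0, 1, 1, 1, 1]
print(phase2)  # [0, 1, 1, 0]
```

Each recoded vector would then serve as the dependent variable of its own logistic regression model, fitted separately within each parity group.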
Logistic regression analyses are performed separately for each parity group. For each parity group, the models include control variables (age, gender, type of union and, for parities 1 and 2, age of the youngest child) and other background variables describing situational factors (educational level, employment and income). The association of the value orientation and attitude variables with the dependent variable is examined each in turn in models which include the control and situational variables. Our focus is on the net association of values and attitudes with childbearing intentions when the impact of demographic and other background variables has been controlled for.¹ In our examination we have also included variables which did not prove to have a significant impact, or which even exceeded the criterion (p-value <0.25) suggested by Hosmer & Lemeshow (2000) in the preliminary analyses (in which we examined models including only age/age of the youngest child as a control variable and each value/attitude variable in turn), in order to examine factors that have proven significant in other studies.

¹ Alternative models were tested to examine whether the results presented here would hold up. Especially since the number of cases in each parity group was relatively small, we examined the impact of the factors on intentions in models for all parities 1+ and in models for parities 2+, and included parity*factor interaction terms. None of these models provided additional information to the models presented here. Also, collapsing union categories did not change the results markedly. When focusing only on persons in a union, we found that attitudes towards gender roles (the more traditional the attitude towards women's role) significantly increased the certainty of intention, and the intention to stop or continue was significantly associated with a positive attitude towards children (the more negative the attitude, the more likely the person was to say 'no' to (further) childbearing intentions).

Only the coefficients of the value/attitude variables included in the paper are presented in Table 3. The impact of the control and situational variables is discussed only briefly (models which present the impact of the control and situational variables only are given in Appendix table 3). The figures in Table 3 are odds ratios, obtained from the estimated logit coefficients (b) by the transformation e^b. While the interpretation of the impact of a continuous variable is generally of the form 'b gives the change in the log odds for an increase of one unit in the independent variable', i.e. the odds of exhibiting the examined outcome change by a factor of e^b for every unit increase in the independent variable, a one-unit change in an attitude/value variable is more difficult to interpret. As the magnitude of the change depends on the measurement scale of the independent variable, we will therefore focus on the significance and direction of the association.

Dependent variable

The dependent variable used in this analysis was the intention to have a child. Fertility intentions were measured by the question: "Do you intend to have a(nother) child in the future?" The response options were: 1) No, 2) Don't know, uncertain and 3) Yes. The fourth category, '4) I am/my partner is pregnant', was excluded from the study.
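The odds-ratio transformation e^b used for Table 3 can be illustrated with a small numeric sketch; the coefficient values below are invented for the example and do not come from the study.

```python
import math

def odds_ratio(b: float) -> float:
    """Convert a logit coefficient b into an odds ratio via e^b."""
    return math.exp(b)

# Hypothetical logit coefficients (not from the study):
# b > 0 -> odds ratio > 1 (the outcome becomes more likely per unit increase)
# b < 0 -> odds ratio < 1 (the outcome becomes less likely per unit increase)
# b = 0 -> odds ratio = 1 (no association)
for b in (0.7, 0.0, -0.7):
    print(f"b = {b:+.1f}  ->  OR = {odds_ratio(b):.2f}")

# prints:
# b = +0.7  ->  OR = 2.01
# b = +0.0  ->  OR = 1.00
# b = -0.7  ->  OR = 0.50
```

Because the size of a "one unit" step depends on each attitude scale, the magnitudes of such odds ratios are hard to compare across variables, which is why the paper focuses on the sign and significance of the association.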
We understand uncertainty as a state between yes and no, in the sense that there is at least a potential 'mental state' for yes, but the person's life situation is not right just now, for example because of an insecure job or economic situation. Since the PPA2 questionnaire did not include any additional questions concerning the certainty of the 'yes' and 'no' options, nor any indicators of timing expectations, we treat the dependent variable as a two-category variable. First we examine the association of the covariates with the propensity to say no versus uncertain/yes, and in the second phase, uncertain versus yes, to the question about birth plans. This means that we first examine the factors related to the intention to stop or continue childbearing, and next, the factors related to the certainty of the intention to continue.

Determinants of fertility intentions

The independent factors can be divided into three groups: 1) control variables (sex, age, marital status, interval between births), 2) situational factors reflecting the individual's social and economic circumstances (education, at work or not at work, and income) and 3) value and attitude factors.
Control variables

We included some characteristics of the respondents in the models as control variables in order to eliminate some potential direct effects on childbearing intentions. A cross-sectional study of fertility intentions among women and men of different ages and numbers of children reflects the respondents' current stage of reproductive and family life, as Ruokolainen and Notkola (2002) have also pointed out. Consequently, some of those who had intended to have a(nother) child at the beginning of their fertile age, or soon after marriage/the previous child, had already done so, and had proceeded to higher parities by the time of the survey. Those who remain in the lower parity group have either not yet realized their intention to have another child, or do not intend to have (subsequent) children. Accordingly, the age of the respondent (used in the models as a three- or two-category variable: 18-25, 26-33, 34-40; or 18-33, 34-40 at parities 1 and 2) and, in the models for parities 1 and 2, the age of the youngest child (0-5 years and 6+ years) attempt to eliminate the bias caused by the respondents being in different stages of life.

The PPA2 data also provided the opportunity to examine men's childbearing intentions. Studies of couples' decision-making have pointed to the importance of also including men's fertility desires (cf. Thomson et al. 1990), although unfortunately we could not benefit from couple data in this study. We expected that gender would also influence the impact of values and attitudes on childbearing intentions, and performed an additional analysis with models which included interaction terms for gender and the value/attitude variables.

Marital status has been found to have an impact upon childbearing intentions (for example Thomson 1997; Schoen et al. 1999), and we included it as a three-category variable (married, in a consensual union, not living in a union). Persons living in a consensual union and those who did not live in a union were kept in separate categories, since the proportion of births to cohabiting couples increased rapidly in Finland at the turn of the 1990s. A preliminary analysis also showed that the impact of these two groups on childbearing intentions differed depending on whether we were analyzing 'no' versus 'uncertain/yes' or 'uncertain' versus 'yes' responses. In addition to the time variables, marital status reflects the family formation stage. In most cases, persons living in a consensual union or not in a union are in the beginning phase of their family life, while married persons have lived in their union longer and are more likely to have had a(nother) child. We conducted an additional analysis for respondents living in a marriage or consensual union only, but the results did not differ markedly from those presented in this paper.

Situational variables

Studies have suggested that economic and employment-related considerations (situational factors) can be more important when a couple is planning a third or subsequent child than in lower-parity births (Namboodiri 1974; Schoen et al. 1997; Ruokolainen & Notkola 2002). Education is generally believed to influence childbearing, although its impact may be more visible in the timing of births, with the more highly educated starting later and ending up with a lower number of children.
In this study we used educational attainment, work attainment and income as measures of the respondent's socioeconomic situation. Because of the relatively small number of respondents in the relevant age groups, we created rather crude categories for the situational factors. We divided educational level into two groups: university degree, and education below university degree. Work attainment was also divided into two groups: at work, and not at work, the latter also including persons working less than 10 hours a week (only 1.2 percent of the respondents were working less than 10 hours per week). Especially among women, a higher educational level and employment are generally expected to have a negative relationship to childbearing. The economic situation was measured by the monthly income of the total household, which was divided into three groups of equal size: the lowest, middle and highest income group.

Appendix table 1 presents the distribution of the respondents by the control and situational variables.

Values and attitudes

We evaluate the impact of several different values/attitudes on childbearing intentions: religious values, individualistic values, attitudes towards children (economic considerations and personal pleasure), family values (children as a social resource and attitudes towards family forms), attitudes towards work (money, success and satisfaction) and gender roles (equality and role model). The composition of the value and attitude variables is described in Table 1, and information on the related factor analyses is presented in Appendix table 2.
Table 2 presents the mean scores for the values and attitudes used in the study, by sex. There were some differences in opinions between men and women (the statistical significance of the difference is indicated in Table 2). Men valued religion a little less than women and thought more often that children mean an economic burden to their parents. Women had a slightly more modern attitude towards family forms, but men more often thought of children as a social resource. For women it was more important to be satisfied at work than it was for men, and women were more modern in their gender role attitudes and valued equality in the family more than men did.

(1) A factor 'Individualistic values' was created from three items: (a) "Having enough time for yourself and for your own interests", (b) "Having enough time for your friends" and (c) "Self-realization". Measurement scale from 1=very important to 5=very unimportant.

(1) A single item, "Children mean an economic burden to their parents", to indicate the importance of economic considerations related to children. Measurement scale from 1=strongly agree to 5=strongly disagree.

(2) A factor 'Children as personal pleasure' was created from five items: (a) "I believe that in our modern world the only place where you can feel completely happy and at ease is at home with your children", (b) "I always enjoy having children near me", (c) "I believe you can be perfectly satisfied with life once you have been a good mother or father", (d) "I like having children because they really need you" and (e) "I do not believe that you can be really happy if you do not have children." Measurement scale from 1=strongly agree to 5=strongly disagree.

(1) A factor 'Children as a social resource' was created from three items: (a) "I believe it is your duty towards society to have children", (b) "Children make a family" and (c) "Children mean security for old age". Measurement scale from 1=strongly agree to 5=strongly disagree.
(2) A factor 'Attitude towards family forms' was created from five items: (a) "If a woman wants to have a child as a single parent, and she doesn't want to have a stable relationship with a man, she should be able to", (b) "People who want children ought to get married", (c) "It is all right for a couple to live together without intending to get married", (d) "Marriage is the only acceptable way of living together for a man and a woman" and (e) "It is totally acceptable that young people have many relationships before a stable relationship and having a family". Measurement scale was 1=agree and 2=disagree.

(1) A single item: "How important is it to you to be satisfied in your job?" (2) A single item: "How important is it to you to be successful in your work?" (3) A single item: "How important is it to you to have enough money/income?" Measurement scale from 1=very important to 5=very unimportant in all three.

(1) A single item, "How important is it to you to have an equal division of work between the man and woman in the family?", to indicate the importance of equality in the family. Measurement scale from 1=very important to 5=very unimportant.

(2) A factor 'Role model attitude' was created from two items: (a) "In their job women are less ambitious than men" and (b) "No one can take care of a child as well as the mother". Measurement scale from 1=strongly disagree to 5=strongly agree.

Fertility intentions among men and women

Figures 1 and 2 present the distribution of fertility intentions among men and women according to parity.
Only eight percent of the men and nine percent of the women without children said they did not intend to have children at all; about every third was uncertain, and half intended to have children. One child is also not often the desired number of children in a family: only 16 percent of the mothers of one child, and 19 percent of the fathers, did not intend to have more children. Although childlessness or having only one child is not wished for by many, still about 15 percent of Finnish women are childless and 20 percent have only one child at the age of 50 (Statistics Finland 2001). About two children has been the average family size in Europe for the last sixty years (Coleman 1996). The norm of two children in a family was visible in this study as well.

After having two children, women's intentions in particular to have more children drop, and uncertainty and the intention not to have more children grow. Among parents with two children there are more men than women who intend to have more children, and more women than men who are uncertain about having more children.

Control and situational factors

The age of the respondent and the age of the youngest child were negatively associated with the intention to have (more) children (models which include both the control and the background variables, but not the value/attitude variables, are presented in Appendix table 3). Since these variables reflect the respondents' stage of life, the results are as expected. Gender affected intentions in only two cases: women were more likely than men to say no to childbearing at parity zero, and also less certain to proceed towards a third birth.
There was no marked difference between marriage and a consensual union in childbearing intentions. On the other hand, the lack of a suitable partner was associated with intentions, but the pattern was mixed. At parities zero and one, the lack of a suitable partner decreased the certainty of the intention to bear a child, but it was not significantly associated with the intention to stop/continue childbearing. At parity two, not living in a union increased the likelihood of planning a third child.

Education was positively associated with plans to continue childbearing and with the certainty of the intention to have (more) children (the association was statistically significant only at parity zero). Persons with a university degree were more likely to plan for (more) children. This again may reflect the respondents' stage of life, since education is related to the postponement of childbearing, and persons with a higher educational degree have started to proceed towards their desired number of children later than the others.

Neither employment status nor income had a marked impact upon intentions. Employment was significantly related to childbearing intentions only at parity zero, where being employed increased the likelihood of planning childbearing. Only the certainty of intention at parity one was significantly, and surprisingly, negatively associated with income. It may be that state policies are able to reduce the costs related to childbearing to the extent that neither employment status nor income plays a significant role in childbearing intentions. It is also possible that employment-related factors are more important in determining the timing of births.
Values and attitudes

Contrary to our expectations, religion was not generally associated with the intention to have children (Table 3). Only at parity zero was the respondent more likely to plan on having children the more important religion was in her/his life. We were particularly surprised to find that, at least in these data, the intention and certainty to proceed towards a third birth failed to be significantly associated with religiousness. While the impact of religiousness was not apparent in the overall intention to have more children, it may have an impact on the timing of parenthood by encouraging earlier and faster childbearing.

Values and attitudes related to children had a significant association with intentions in four of our models. Personal pleasure and affection related to children was associated with childbearing intentions and the certainty of the intentions at parities zero and two. The more the respondents valued children as sources of personal affection and pleasure, the more likely they were to plan to have (more) children, and the more certain they were in their intentions. Financial considerations associated with children were present in decision-making at parity zero. Persons who feared the economic costs related to children were less likely to plan for a first child, or were less certain in their intentions. Again, it was somewhat surprising that there was no significant association between financial considerations and childbearing intentions at higher parities.
The "Children as a social resource" -variable was also associated with childbearing intentions.The importance of children as a social resource had a significant and positive relationship at parity zero to the intentionto start childbearingand to the certainty of the intention.At higher parities, the social resource variable was not significant.It is possible that already one child is enoughto fulfil the meaning of children as constitutive of a family while the importance of children in providing old-age security is diminished in modem society.On the other hand, another variable measuring traditional attitudes towards family and family förms was significant at parity two.Those who held more conservative attitudes towards family förms were more likely to plan för a third child, and more certain in their intentions.Attitudes towards work were generally not related to fertility intentions.We explored a number of factors related to work in addition to single indicators, but could not find any significant relationships.Only at parity one did we find a weak association of the money variable with intentions.The less important it was to the respondent to have enough money, the more likely he/she intended to have a second child.However, a number ofwork-related factors were differently related to intentions among men and women at lower parities.At parity zero, the importance ofbeing satisfied in one's job was significantly and negatively related to women's and positively to men's intention to start childbearing.The more important it was to have enough money,the less likely women were to plan childbearing (significant at p<0.05 level), while among men there was a positive association with the importance of money and childbearing intentions.The certainty of intention at parity zero was, on the other hand, negatively associated with the importance of money among men and positively among women.At parity one, the importance of being successful in work was positively associated 
with the intention to proceed towards a second birth among women, and negatively among men.

While the work-related attitude factors could be understood as indicators of individualistic values, we also examined a separate factor measuring the importance of self-realization in life. This factor had, however, only a weak relationship to intentions. At parity zero, the individualism variable was negatively associated with plans to have a first child.

Finally, we also investigated variables measuring attitudes towards gender roles. We expected that more modern views about gender relations would imply intentions to have fewer children. However, only the equality variable had a significant association with fertility intentions. The importance of an equal division of work in the family was positively associated with the certainty of intention at all parities. It may be that couples who value equality highly also have a predisposition to behave accordingly, and the costs related to child care, which usually fall on the mother, are reduced by a more equal sharing of household tasks. One reason for the failure of the role model variable to explain intentions may be that the factor was created from only two indicators, which did not correlate very strongly. The interaction of sex and the gender role variables was significant in only one case: at parity zero, women who held more modern role attitudes were more likely to plan to stay childless.

Summary and discussion

In conclusion, the results from the Finnish PPA2 survey provided support for the hypothesis that values and attitudes, too, are associated with childbearing intentions and decisions.
Religious values seemed to have only a minor impact in explaining fertility intentions among Finnish men and women. Attitudes related to children, and especially the meaning of children as personal pleasure, on the other hand, had a marked effect on childbearing intentions. Financial considerations influenced intentions at parity zero, as did the attitude towards children as a social asset. Familistic values were related also to third-birth intentions. Work-related attitudes and individualistic values appeared to have hardly any impact upon fertility intentions in these data. However, the analysis of the models which included interaction terms provided some evidence that the effect of work values may differ between men and women. Attitudes towards gender roles had an impact only when the attitude towards the equal division of work was examined.

Situational factors, and especially those related to employment and the economic situation of the family, were only marginally related to fertility intentions. Employment status was associated with fertility intentions at parity zero, where persons who were unemployed were more likely to intend to stay childless. Since we had only a very crude indicator for employment, it is possible that an indicator which would better account, for example, for employment history, terms of employment or other employment characteristics would be more useful in explaining fertility intentions and behaviors. Income had a significant and negative impact only on the certainty of the intention to continue childbearing at parity one. The impact of education may be more a reflection of the respondent's stage of life and the fact that persons with more education postpone their childbearing to a later age.
The variation in the impact of both situational and value/attitude variables also supports the notion that intentions and decision-making concerning childbearing are parity-specific. The norm of two children in the family may partly explain the fact that value/attitude variables had hardly any role at all in explaining intentions to proceed towards a second birth. It seems that today people plan the timing of the first birth very carefully, and often want to postpone it until they have finished their education and found a job (Paajanen 2002). Thanks to reliable and effective contraceptive methods, the timing of births is more controllable than ever before. While situational and economic factors did not have as marked a role in determining intentions to have children as we would have expected, it may well be that they are more important in determining the timing of the first or subsequent birth, and not the general intention to have (more) children.

Children and the family are still widely valued in today's society, and only a few wish to have no children or family at all. Having children and a family may be part of the idea of self-fulfilment in postmodern society, but they compete with other preferences and priorities, as already pointed out by van de Kaa (2001). Treating work and family as opposite expressions of some inherent value orientation may also not be very fruitful, since modern men and women appear to value them both very highly in their lives. It is possible that the diverse results from studies on fertility and values may be partly caused by indicators for attitudes and values being neither very well developed nor widely shared. In this study too, more detailed and specified indicators on, for example, sex roles and the importance of different aspects of work in one's life might have provided additional information on how attitudes towards changing sex roles, as well as towards work, interfere with fertility decisions.
Future investigation of intentions should also include more detailed analysis of timing intentions, as well as provide information on the impact of intentions on actual fertility behavior. In such studies, data on a couple's decision-making and the characteristics of the union would also increase our knowledge of fertility determinants (see for example Thomson 1997).

While the connection between values and attitudes and fertility behavior is not always clear, and the direction of the impact is difficult to establish in a cross-sectional study, we think that research on personal values and attitudes in relation to fertility behavior can give new and important views in addition to other fertility determinants. In particular, values and attitudes may have a meaningful role in explaining childbearing decisions at the 'marginal' parities zero and three or more. In the future, PPA2 studies conducted in other European countries will provide data to examine whether the same attitudes and values are related to fertility intentions in a similar fashion among other Europeans.

Table 1. Composition of value and attitude variables.

Table 3. Odds ratios of intending to stop childbearing versus intending to continue (no vs. uncertain/yes), and of the certainty of the intention to continue (uncertain vs. yes) by parity.
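Table 3 reports the results as odds ratios. As a minimal sketch of how such a ratio is computed, the following uses made-up counts (not the PPA2 data) for a 2x2 cross-tabulation of intention against a dichotomized attitude score; the Woolf log-method confidence interval is one standard textbook choice.

```python
import math

def odds_ratio(table):
    """Odds ratio from a 2x2 table [[a, b], [c, d]] = (a*d) / (b*c)."""
    (a, b), (c, d) = table
    return (a * d) / (b * c)

def log_or_ci(table, z=1.96):
    """Approximate 95% confidence interval via the Woolf (log) method."""
    (a, b), (c, d) = table
    log_or = math.log(odds_ratio(table))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)

# Hypothetical counts: rows = high vs. low score on an attitude factor,
# columns = (intend to continue, intend to stop).
table = [[40, 10],
         [25, 25]]
print(odds_ratio(table))  # 4.0: continuing is 4x as likely in the high-score group
```

An odds ratio above 1 with a confidence interval excluding 1 is what the text describes as a "significant and positive" association.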
Entry Pathways of Herpes Simplex Virus Type 1 into Human Keratinocytes Are Dynamin- and Cholesterol-Dependent

Herpes simplex virus type 1 (HSV-1) can enter cells via endocytic pathways or by direct fusion at the plasma membrane, depending on the cell line and receptor(s). Most studies of virus entry have used cultured fibroblasts, but since keratinocytes represent the primary entry site for HSV-1 infection in its human host, we initiated studies to characterize the entry pathway of HSV-1 into human keratinocytes. Electron microscopy studies visualized free capsids in the cytoplasm and enveloped virus particles in vesicles, suggesting viral uptake both by direct fusion at the plasma membrane and by endocytic vesicles. The ratio of the two entry modes differed between primary human keratinocytes and the keratinocyte cell line HaCaT. Inhibitor studies further support a role for endocytosis during HSV-1 entry. Infection was inhibited by the cholesterol-sequestering drug methyl-β-cyclodextrin, which demonstrates the requirement for host cholesterol during virus entry. Since the dynamin-specific inhibitor dynasore and overexpression of a dominant-negative dynamin mutant blocked infection, we conclude that the entry pathways into keratinocytes are dynamin-mediated. Electron microscopy studies confirmed that virus uptake is completely blocked when the GTPase activity of dynamin is inhibited. Ex vivo infection of murine epidermis treated with dynasore further supports the essential role of dynamin during entry into the epithelium. Thus, we conclude that HSV-1 can enter human keratinocytes by alternative entry pathways that require dynamin and host cholesterol.

Introduction

Herpes simplex virus type 1 (HSV-1) enters its human host via the epithelia of mucosa, skin or cornea, where keratinocytes represent the primary entry site. Cellular entry of HSV-1 involves multiple steps.
Initial virus-cell contact is mediated by the interaction of the HSV-1 envelope glycoproteins gC and/or gB with cell surface heparan sulfate proteoglycans, which facilitates subsequent binding to coreceptors. The viral envelope glycoprotein gD serves as the major virus ligand for all known HSV coreceptors, and the best-studied gD coreceptor is the immunoglobulin-like cell-cell adhesion molecule nectin-1 (named HveC) [1]. Depending on the cell line, HSV-1 can enter cells either by direct fusion of the viral envelope with the plasma membrane or by endocytic pathways [2,3,4,5], which can be both pH-dependent and pH-independent [6]. Entry into neurons and Vero cells can occur via fusion at the plasma membrane at neutral pH, while fusion with HeLa and CHO cells involves pH-dependent endocytosis, and fusion with C10 cells (B78-H1 mouse melanoma cells expressing nectin-1) involves pH-independent endocytosis. Interestingly, expression of nectin-1 in CHO cells correlates with endocytic uptake, while expression of PILRα (paired immunoglobulin-like type 2 receptor α) in CHO cells points to HSV-1 uptake via fusion, suggesting that the entry pathway into the same cell line depends on the cellular entry coreceptor used [7]. Furthermore, the same receptor may initiate different entry pathways, depending on the cell in which it is expressed. When expressed in the J1.1-2 cell line, nectin-1 mediates entry that is not blocked by endosome acidification inhibitors; however, nectin-1-mediated entry into CHO cells is dependent on endosome acidification [2]. After additional overexpression of αvβ3-integrin, HSV-1 entry into J1.1-2 nectin-1 cells is cholesterol- and dynamin-independent, whereas cholesterol and dynamin play a role in CHO-nectin-1-expressing cells [8]. A phagocytosis-like uptake, in which dynamin-mediated processes have been implicated, has also been suggested for CHO-nectin-1-expressing cells [9].
Dynamin is a multidomain GTPase that controls several distinct endocytic pathways, with clathrin-mediated endocytosis being the best studied [10]. Dynamin plays a direct role in catalyzing membrane fission. During clathrin-mediated endocytosis, dynamin forms a helical polymer around the vesicle neck and, upon GTP hydrolysis, mediates the fission of the vesicle from the plasma membrane [11]. Recent studies have also implicated dynamin in further cellular processes, such as the regulation of actin assembly and reorganization via its interactions with many actin-binding proteins [12,13]. Furthermore, dynamin can function in the process of fusion pore expansion and in postfusion events in exocytosis [14,15]. HSV-1 seems to be capable of using a variety of entry mechanisms, which may reflect an adaptation to differences in its target cells. The goal of this study was to characterize the HSV-1 entry mechanisms into human keratinocytes, since little is known about this entry portal in the human host. There has been one report that HSV-1 may enter keratinocytes via a pH-dependent endocytic pathway [4]. The authors showed that treatment with agents that elevate endosomal pH inhibits entry, and that cellular tyrosine kinase activity is selectively required for efficient entry by the low-pH, endocytic pathway [4]. Our results suggest that HSV-1 enters human keratinocytes both by direct fusion of virions at the cell surface and by an endocytic pathway. As dynamin is an important player during endocytic uptake, we addressed its impact during entry into keratinocytes. Interestingly, dynamin inhibitors blocked infection by interfering with penetration of the virions at the plasma membrane, which in turn inhibited both fusion at the plasma membrane and vesicle formation. Furthermore, we provide the first evidence that host cholesterol plays an important role during entry into keratinocytes.
Uptake of HSV-1 into human keratinocytes

We infected HaCaT cells, representing undifferentiated human keratinocytes, and primary human epidermal keratinocytes to analyze the mode of virus uptake using electron microscopy. All studies were performed at high MOI (200 or 1500 PFU/cell) to achieve infection of all cells at rather high cell density. Primary keratinocytes were cultured in calcium-reduced medium to minimize cell-cell contacts and thereby enhance infectivity. At 2 min p.i. most virions were observed at the cell surface of HaCaT cells, while 24% of the virus particles were internalized. Interestingly, 5% were found in vesicles and 19% were detectable as free capsids underneath the plasma membrane (Fig. 1A, C). The same ratio of free capsids to enveloped particles in vesicles was present in primary keratinocytes, although at much lower percentages, suggesting that virus uptake was initially delayed compared to HaCaT cells (Fig. 1B, C). At 10 min p.i. equal quantities of free capsids and of particles in vesicles were observed in HaCaT cells, while by 30 min p.i. free capsids were more abundant than particles in vesicles (Fig. 1C). In contrast, the percentage of free capsids in primary keratinocytes was significantly lower than the percentage of particles in vesicles at 10 and 30 min p.i. (Fig. 1C). The results suggest that uptake of HSV-1 into keratinocytes can occur both via direct fusion of the viral envelope with the plasma membrane and via an endocytic pathway. Interestingly, the ratio between the two uptake modes differed in primary cells compared with HaCaT cells, suggesting that endocytic uptake is more pronounced in primary keratinocytes. We assume that the free capsids observed in the cytoplasm at 2 min p.i. were released following very rapid fusion at the plasma membrane. Many of the capsids observed at 2 and 10 min p.i.
were located just underneath a region of the plasma membrane with a distinctive staining pattern resembling that of the viral envelope (Fig. 1A, b, c; B, b). This is highly suggestive of a direct fusion process. However, some of the free capsids observed at 10 and 30 min p.i. might have been released from endosomes.

Role of endocytic pathways

The potential role of endosomal HSV-1 uptake into keratinocytes was analyzed by inhibitor studies. Successful infection in individual cells was visualized by staining with an antibody directed against the viral immediate-early protein ICP0. The cellular localization of ICP0 passes through distinct phases during early infection, in which ICP0 in nuclear foci indicates an early stage of viral gene expression, followed by cytoplasmic relocalization of ICP0 [16]. Therefore, a reduction in the amount of cytoplasmic ICP0 following drug treatment suggests delayed infection, although we cannot exclude the possibility that any one drug could have an effect on export of ICP0 from the nucleus. Subconfluent cells were infected with 20 PFU/cell, which led to ~80-90% of HaCaT cells and ~40-60% of primary keratinocytes becoming infected, based on ICP0 expression visualized at 2 h p.i. When the microtubule-depolymerizing drug nocodazole, which inhibits trafficking from early to late endosomes [17], was added prior to infection, the number of infected HaCaT cells was reduced in a concentration-dependent manner from 83% to 58%. The reduction was more marked when the MOI was lowered from 20 to 5 PFU/cell. In addition, the proportion of infected cells with cytoplasmic ICP0 decreased from 40% to 6% (20 PFU/cell) in drug-treated cells, suggesting a delay of infection during the early phase (Fig. 2A). When primary human keratinocytes were treated with the same amounts of nocodazole, infection was reduced from 51% to 33% at 20 PFU/cell and from 42% to 15% at 5 PFU/cell (Fig. 2B).
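The EM results are reported as percentages of all particles classified per time point (surface-bound, in vesicles, or free capsids). A minimal tally sketch of that bookkeeping, using hypothetical per-particle calls rather than the actual raw counts:

```python
from collections import Counter

def uptake_percentages(observations):
    """Per-category percentages from a list of per-particle EM classifications."""
    counts = Counter(observations)
    total = sum(counts.values())
    return {category: round(100 * n / total, 1) for category, n in counts.items()}

# Hypothetical per-particle calls shaped like the 2 min p.i. HaCaT data
# (76% at the surface, 5% in vesicles, 19% free capsids).
calls = ["surface"] * 76 + ["vesicle"] * 5 + ["free_capsid"] * 19
print(uptake_percentages(calls))
# {'surface': 76.0, 'vesicle': 5.0, 'free_capsid': 19.0}
```

Summing the "vesicle" and "free_capsid" categories gives the internalized fraction (24% in this example), matching how the text reports internalization.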
Thus, our results support a role for trafficking via the microtubule network in the entry pathway into keratinocytes. However, this experiment does not distinguish between trafficking of free capsids and trafficking of vesicles containing enveloped particles from early to late endosomes. Cells were treated with lysosomotropic agents to address the impact of endosomal acidification during HSV-1 entry into keratinocytes. While treatment of HaCaT cells with the carboxylic ionophore monensin (40 μM) did not interfere with infectivity, addition of the weak base ammonium chloride (NH4Cl) reduced the level of infection from 87% to 47% of cells in a concentration-dependent manner (Fig. 2C). In addition, the decrease in cytoplasmic ICP0 localization in NH4Cl-treated HaCaT cells suggested delayed early infection (Fig. 2C). In primary keratinocytes the concentration-dependent effect of NH4Cl was much stronger. We observed a reduction in the number of infected cells from 39% to 3% in the presence of 75 mM NH4Cl (Fig. 2D). In contrast to HaCaT cells, monensin treatment of primary keratinocytes also decreased the number of infected cells, from 45% to 26% (Fig. 2D). There was a concomitant reduction in the number of infected cells with cytoplasmic ICP0 in all drug-treated primary cells. Taken together, the results demonstrate that NH4Cl produced a limited reduction and monensin no reduction in virus infectivity in HaCaT cells, while both agents had much greater effects in primary keratinocytes. Control experiments indicated that the reduction in infectivity by NH4Cl was reversible when the weak base was removed just prior to infection (data not shown). Based on DAPI staining of the cell nucleus, the chromatin seemed to be changed in the presence of NH4Cl. To exclude the possibility that the observed effects were due to a transcriptional block, GFP-expressing plasmids were transfected, which demonstrated unchanged GFP expression in NH4Cl-treated HaCaT cells (data not shown).
Furthermore, we analyzed the localization of HSV-1 in NH4Cl-treated primary keratinocytes to confirm that elevated pH in intracellular compartments interferes with the delivery of capsids to the nuclear periphery, as recently described [4]. Since nuclear accumulation of newly synthesized capsid protein VP5 was observed already early during infection in untreated control cells, we inhibited protein expression with cycloheximide to visualize input capsids in close proximity to the nucleus (Fig. 3). In contrast, input capsids were widely dispersed in NH4Cl-treated cells and localized only rarely to the nuclear periphery (Fig. 3). In summary, we conclude that endosomal acidification contributes to HSV-1 infection of HaCaT cells and may play a more prominent role in primary keratinocytes. To gain further insights into potential endocytic pathways, keratinocytes were treated with chlorpromazine, which leads to misassembly of clathrin-coated pits by inhibiting the assembly of the clathrin adaptor protein AP2 [18]. Since treatment of HaCaT cells with 28 μM chlorpromazine had no influence on infectivity (Fig. 2E), we infer that clathrin-mediated endocytosis does not contribute to HSV-1 entry. We then treated HaCaT cells with the sodium-proton exchange inhibitor 5-(N-ethyl-N-isopropyl)amiloride (EIPA), which is used as a specific inhibitor of macropinocytosis [19]. The number of infected cells dropped from 90% to 61% when 75 μM EIPA was added prior to infection (Fig. 2E). Recent studies indicate that EIPA has significant effects on various endocytic processes, such as relocalization of early and late endosomes [20]. Thus, the reduction in infectivity caused by EIPA may simply reflect interference with an endocytic pathway. Since macropinosome formation involves filamentous actin (F-actin), keratinocytes were treated with cytochalasin D (CD), which blocks actin polymerization at the barbed ends of F-actin.
In both HaCaT cells and primary keratinocytes, CD treatment did not reduce the number of infected cells, although early infection seemed to be delayed in a concentration-dependent manner (Fig. 2E, F). These results suggest that macropinocytosis is not favored during HSV-1 entry into either HaCaT cells or primary keratinocytes. In summary, the inhibitor studies suggest that the microtubule network and endosomal acidification contribute to the HSV-1 entry pathway in keratinocytes. The ion-transport inhibitor EIPA had some inhibiting effect on infectivity, whereas disassembly of actin filaments correlated with only a minor effect on infection, which does not support a major role for the macropinocytic pathway but indicates the involvement of other endocytic processes. Taking these results together, we conclude that endocytic pathways play a role during HSV-1 entry into keratinocytes.

Impact of cholesterol

Many endocytic pathways require cholesterol-rich lipid rafts for their function. The essential role of cholesterol for HSV-1 entry into Vero cells and mouse melanoma cells expressing either nectin-1 or HVEM has been shown previously [21]. We treated keratinocytes with methyl-β-cyclodextrin (MβCD), which depletes cholesterol from the plasma membrane, to address the requirement for cholesterol during HSV-1 entry into keratinocytes [22,23,24]. The functionality of MβCD in keratinocytes was initially confirmed by visualizing the uptake of cholera toxin B, a glycosphingolipid-binding ligand that is known to be internalized by caveolae-mediated and lipid-raft-dependent endocytosis [25,26]. When HaCaT cells were pretreated with 15 mM MβCD for 30 min, cholera toxin B uptake was efficiently blocked, as visualized by the loss of cytoplasmic cholera toxin B (Fig. 4D). Upon HSV-1 infection, a concentration-dependent inhibition of infectivity was visible in both MβCD-treated HaCaT cells and primary keratinocytes.
Upon pre-treatment with 10 mM MβCD, the number of infected cells was reduced from 88% to 16% in HaCaT cells and from 64% to 22% in primary keratinocytes (Fig. 4B). Prior to infection, MβCD was removed to avoid any depleting effect on cholesterol in the viral envelope. Control experiments indicated that preincubation of HSV-1 particles with 10 mM MβCD reduced infectivity (Fig. 4A). This observation is in agreement with previous results demonstrating a reduced infection rate of pseudorabies virus (PrV) when viral cholesterol was depleted with MβCD [27]. When 10 mM MβCD was added to the cells at 1 h p.i., we observed no effect on the number of infected cells (Fig. 4C), suggesting that depletion of cholesterol in the plasma membrane interferes directly with HSV-1 uptake and that MβCD does not disturb subsequent steps during early infection. In addition, we addressed whether cholesterol depletion was reversible by adding 50 or 200 μg/ml cholesterol to MβCD-treated cells. Following infection we observed a concentration-dependent increase in infectivity (Fig. 4C), demonstrating that replenishment of cholesterol restored infectivity. When filipin, which binds cholesterol, was used, we observed a reduction in the number of infected primary keratinocytes; however, no effect was visible in HaCaT cells (data not shown). Since filipin also failed to inhibit cholera toxin B uptake into HaCaT cells, we conclude that filipin insufficiently sequestered cholesterol in these cells. Taken together, these results indicate that cholesterol in the plasma membrane is required for HSV-1 uptake into keratinocytes.

Impact of dynamin

The GTPase dynamin controls several distinct endocytic pathways [11]. To determine the role of dynamin during HSV-1 uptake, we performed overexpression, RNA interference and inhibitor studies.
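Concentration-dependent inhibition, as in the methyl-β-cyclodextrin experiments, is conveniently summarized as percent reduction relative to the untreated control. A minimal sketch of that normalization; the 88%-vs-16% pair is taken from the text, while the intermediate doses and readouts are invented for illustration:

```python
def inhibition_profile(control_pct, treated):
    """Percent reduction in infection at each drug concentration,
    relative to the percent of infected cells in the untreated control."""
    return {conc: round(100 * (1 - pct / control_pct), 1)
            for conc, pct in treated.items()}

# 88% infected without drug vs. 16% at 10 mM MbCD (from the text);
# the 2.5 and 5 mM readouts are hypothetical.
profile = inhibition_profile(88, {2.5: 70, 5: 45, 10: 16})
print(profile)  # {2.5: 20.5, 5: 48.9, 10: 81.8}
```

A monotone increase of the reduction with dose, as here, is what the text calls concentration-dependent inhibition.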
To overexpress the dominant-negative dynamin mutant K44A [28], HaCaT cells were transfected with plasmids expressing either the GFP-tagged dynamin mutant or GFP alone, followed by infection. At 2 h p.i. we observed a reduced number of infected cells in the presence of the overexpressed dynamin mutant. Whereas 78% of the GFP-expressing cells became infected, the presence of the overexpressed dominant-negative dynamin mutant reduced the number of infected cells to 16% (Fig. 5A, B). In contrast, overexpression of wt dynamin had no inhibitory effect (data not shown). In keratinocytes the ubiquitously expressed dynamin 2 is present, while dynamin 1 expression is restricted to neurons and dynamin 3 is expressed in lung, heart, brain and testis. When we reduced dynamin 2 expression in HaCaT cells, almost no effect on infection was observed (data not shown). Since dynamin 2 expression was only reduced by ~80% after silencing, we conclude that the residual amount of dynamin 2 was still sufficient to allow viral entry. To further analyze the impact of dynamin, we pretreated keratinocytes with dynasore, a small-molecule inhibitor of the dynamin GTPase activity [29]. As a control, we confirmed that dynamin-dependent transferrin uptake was blocked in HaCaT cells pretreated with 40 μM dynasore (Fig. 6A) [30,31]. When dynasore was added either to HaCaT cells or to primary keratinocytes, a concentration-dependent inhibition of HSV-1 infection was observed. However, while 20 μM dynasore almost completely blocked infection of HaCaT cells (Fig. 6C, D), 80 μM dynasore was needed to produce the same level of inhibition in primary keratinocytes (Fig. 6E, F). When dynasore (80 μM) was washed out prior to infection, no inhibitory effect on the number of infected cells was observed (data not shown). In contrast to keratinocytes, no inhibitory effect was detectable when murine hippocampal primary neurons were pretreated with up to 80 μM dynasore (Fig. 6G).
These results suggest that dynamin is essential for HSV-1 entry into keratinocytes but does not play a role during entry into neurons. To exclude any direct effects of dynasore on virus particles, we analyzed the infectivity of dynasore-pretreated virions. No difference in the number of infected cells was visible when HaCaT cells were infected with untreated or dynasore-pretreated particles (Fig. 6B), confirming that dynasore interfered only with cellular functions. As a further control, to rule out possible adverse actions of dynasore itself, we analyzed the effect of MiTMAB, a surface-active inhibitor that blocks dynamin's interactions with phospholipids [32]. Upon pre-treatment of HaCaT cells with MiTMAB (1-20 μM), a concentration-dependent inhibition of infection was detectable which was comparable to the effects observed in dynasore-treated cells (data not shown). EM studies were performed to determine how dynasore affected the uptake mechanisms during HSV-1 entry into keratinocytes. As a precondition for the EM studies, we tested whether dynasore blocked infection at the high MOIs needed for EM analysis of incoming virus particles. Infection studies with 1500 PFU/cell showed that 120 μM dynasore still blocked infection, while DMSO alone had no effect (Fig. 7B). As expected, the patterns of virus uptake at 10 min p.i. in cells pretreated with DMSO were comparable with those shown for untreated cells, with both free capsids and particles in vesicles being observed in the cytoplasm (Fig. 7A a-b, D a-b). In contrast, when primary keratinocytes were pretreated with dynasore, we observed virus particles almost exclusively on the outside of the cells at both 10 and 30 min p.i. (Fig. 7A e-h). These particles were predominantly located in invaginations of the plasma membrane, unlike particles that were attached to the cell surface upon incubation for 1 h at 4°C (Fig. 8A c-d).
Quantification revealed that there was no increase in the low number of internalized particles in dynasore-treated cells between 10 and 30 min p.i., by which time about 81% of the observed particles had been taken up in control cells (Fig. 7C). Surprisingly, the few internalized particles in dynasore-treated cells included both free capsids underneath the plasma membrane and enveloped particles in vesicles. Dynasore has been reported to stabilize pit formation at the plasma membrane at early and late stages [29]. Therefore, we had assumed that the uptake of particles via fusion of their envelopes with the plasma membrane would not be blocked in dynasore-treated keratinocytes. Since cytoplasmic capsids were more often found in infected HaCaT cells than in primary keratinocytes, we looked for the presence of free capsids in dynasore-treated HaCaT cells at 10 min p.i., by which time approximately 20% of particles in untreated cells would be free capsids in the cytoplasm (Fig. 1C). Surprisingly, as in primary keratinocytes, virus particles were only found on the cell surface in invaginations of the plasma membrane (Fig. 7D, E). Thus, we conclude that dynasore blocks both modes of uptake: fusion with the plasma membrane and endocytosis. Using a recently established protocol for ex vivo infection of murine epidermal sheets [16], we investigated whether infection of a target tissue is also dynamin-dependent. Skin from the backs of newborn mice was prepared, and the epidermis was separated from the dermis by dispase treatment. The epidermal sheets were allowed to float on virus-containing medium supplemented with DMSO alone, or with 40 or 120 μM dynasore. At 3 h p.i. co-staining of keratin 14 and ICP0 revealed infection throughout the basal layer of keratinocytes in the DMSO-treated epidermis (Fig. 8). In contrast, infection was blocked in a concentration-dependent manner when the epidermal sheets were pretreated with dynasore (Fig. 8).
These results support the essential role of dynamin for HSV-1 entry into basal keratinocytes of the epidermis. In summary, we conclude that dynamin plays an essential role during HSV-1 uptake into keratinocytes.

Discussion

HSV-1 can use a variety of entry modes, all depending on a set of viral envelope glycoproteins [1]. Whether the virus can use more than one entry pathway to infect any particular target cell is still unclear. In this study, we investigated the entry pathway(s) into human keratinocytes, which represent one of the natural target cells for HSV-1. Initial EM studies revealed free capsids underneath the plasma membrane in addition to enveloped virus particles in vesicles, suggesting that HSV-1 can enter keratinocytes both by fusion with the plasma membrane and by endocytosis. The challenge is to distinguish whether both or only one of these entry pathways led to productive infection. We initially investigated the impact of endocytic uptake on the initiation of infection using a variety of pharmacological inhibitors. In general, the major advantage of pharmacological inhibitors is the short exposure time, which delays compensatory responses of the cells. However, the poor specificity of many commonly used drugs can hamper the identification of the precise endocytic pathway involved in virus uptake, since they may perturb multiple cellular processes. To minimize this problem we used a range of drugs at concentrations that have been shown to interfere with virus uptake without major side effects. We confirmed that the concentrations of the drugs used in our assays neither had cytotoxic effects nor caused morphological changes in keratinocytes. Our data support endocytic uptake as contributing to HSV-1 entry, although we are only just beginning to identify the components which characterize the route(s) of uptake.
Our studies in chlorpromazine-treated cells showed no effect on the number of infected cells, suggesting that clathrin-mediated endocytosis does not play a role in HSV-1 entry into keratinocytes. In contrast, we observed a decreased number of infected cells after treatment with EIPA. EIPA can inhibit the enhanced fluid-phase uptake that is associated with particle invagination during macropinocytosis [19]. Macropinocytosis is utilized for entry by a number of pathogens [33,34], and has been suggested to be involved in Kaposi's sarcoma-associated herpesvirus entry [35]. To further examine the putative role of macropinocytosis in HSV-1 entry, we investigated the role of F-actin, which is mostly associated with macropinocytic activity, using cytochalasin D [33]. Since interference with actin polymerization had only minor effects on infection, our inhibitor studies do not support macropinocytosis as a major uptake mechanism in keratinocytes. This is in line with our previous findings that HSV-1 entry into keratinocytes is independent of Rac1 signaling [16], which participates in the regulation of macropinosome formation [33]. Based on reports showing that EIPA mediates a number of effects on endocytic pathways [20,36], we conclude that the inhibitory effect of EIPA points in general to the involvement of endocytic uptake but not to macropinocytosis specifically. A decreased number of infected cells was observed when we analyzed the effects of the microtubule-disrupting agent nocodazole in both HaCaT cells and primary keratinocytes. The importance of the microtubule network for the transport of capsids to the nucleus during HSV-1 entry has been shown previously [37]. Concomitantly, endosomal trafficking also relies on the integrity of the microtubule network [38].
Thus, our studies support a role for microtubules during HSV-1 entry into keratinocytes but do not distinguish between a pathway initiated by fusion with the plasma membrane, releasing capsids into the cytosol, and one involving endocytic uptake and vesicle transport. It has previously been reported that endosomal acidification is required to release HSV-1 after endocytic uptake in keratinocytes [4]. We also observed that lysosomotropic agents such as NH4Cl and monensin reduced infection in primary keratinocytes. However, although NH4Cl also reduced infectivity in the keratinocyte cell line HaCaT, only minor effects were observed with monensin. These studies suggest that endosomal acidification plays a more prominent role in primary keratinocytes. This is in line with our EM studies, which showed that more virus particles were found in vesicles than as free capsids in primary cells as compared to HaCaT cells. Although our results are consistent with the previously described effect of NH4Cl, they differ from those described for monensin [4], which may be explained by the different experimental setting. Taken together, our inhibitor studies support pH-dependent endocytic pathway(s) as a route for HSV-1 uptake into keratinocytes leading to productive infection. Interestingly, endocytic uptake seems to be more pronounced in primary keratinocytes than in the keratinocyte cell line, highlighting the importance of carrying out studies in primary cells. Although our EM studies suggest that uptake of HSV-1 by fusion with the plasma membrane occurs alongside endocytosis, it remains to be determined whether and to what extent the direct fusion pathway leads to successful infection. The observation that the inhibitors of endocytosis never blocked infection completely but only reduced the number of infected cells or simply delayed infection may be an early indication that fusion at the plasma membrane can also lead to infection.
Our studies revealed a requirement for cholesterol for HSV-1 uptake into keratinocytes. The depletion of cholesterol from the plasma membrane by MbCD resulted in an inhibition of infectivity that was slightly stronger in HaCaT cells than in primary keratinocytes; the availability of cholesterol may differ between primary keratinocytes and HaCaT cells. The requirement for cholesterol suggests that lipid rafts may play an essential role in HSV-1 uptake into keratinocytes. Recent studies suggest that lipid rafts act as platforms for HSV-1 entry into Vero cells and mouse melanoma cells expressing either nectin-1 or HVEM, involving the interaction of gB with cellular components in the rafts [21]. Interestingly, the HSV-1 receptors nectin-1 and HVEM were not found to be associated with lipid rafts when expressed in mouse melanoma cells. Thus, Bender et al. [21] argued that cholesterol may be required for fusion with the plasma membrane independent of whether the virus receptors are present in lipid rafts or not. Whether nectin-1, a potential HSV-1 receptor in HaCaT cells [39], is localized to lipid rafts in human keratinocytes is still unknown. Our results suggest that cholesterol is essential during HSV-1 uptake into keratinocytes, and we hypothesize that cholesterol supports both fusion with the plasma membrane and endocytic uptake. In addition to cholesterol, our studies demonstrate the essential role of dynamin during HSV-1 entry into keratinocytes. Dynasore, a specific inhibitor of the dynamin GTPase activity [29,40], blocked HSV-1 infection in HaCaT cells at low concentrations. Interestingly, we observed that infection of primary keratinocytes was less sensitive to dynasore inhibition, requiring concentrations four times higher to achieve a reduction comparable to that seen in HaCaT cells. This was unexpected, since we assumed a more prominent role of endocytic uptake in primary cells and expected dynamin to be involved in endocytic pathways.
The higher dynasore concentration was also tested in primary murine neurons, where no effect on HSV-1 infection was observed. In principle, dynasore can block endocytic pathways in hippocampal neurons [41]. Thus, dynamin seems to be nonessential for HSV-1 entry into neurons, but plays a major role during the uptake mechanism(s) into keratinocytes. We also examined the requirement for dynamin in murine epidermis using an ex vivo infection assay. After treatment of epidermal sheets with dynasore, we observed a block of ICP0 expression in the basal keratinocytes, suggesting that dynamin also plays a role during entry of the virus into intact tissue. EM studies in primary keratinocytes and HaCaT cells confirmed that neither free capsids nor enveloped particles in vesicles were present inside dynasore-treated cells. The only particles seen were enveloped virions trapped in plasma membrane invaginations at the cell surface. These results suggest that HSV-1 entry is completely dependent on dynamin-mediated pathway(s), which appear to include both early fusion events at the plasma membrane and vesicle scission. HIV, another enveloped virus, has long been assumed to fuse directly at the plasma membrane. Recent findings support HIV entry via endocytosis and suggest a role of dynamin in HIV release from endosomes [42]. The authors argue that the dynamin-dependent fusion with endosomes could rely on the ability of dynamin to regulate actin remodeling and/or to associate with membrane-bending proteins which might facilitate endosomal fusion [42]. However, a recent study suggests a role of dynamin in pore expansion following hemifusion [15], which provides a possible reason why HSV-1 fusion at the plasma membrane was blocked by dynasore. Probably, we are only beginning to understand the precise mechanisms underlying dynamin function during viral uptake, and its role may be more diverse than presently perceived.

Figure 7. Uptake of HSV-1 into dynasore-treated keratinocytes. Primary human keratinocytes and HaCaT cells were pretreated with 120 or 80 µM dynasore, and correspondingly with 1.2% or 0.8% DMSO, for 30 min at 37 °C followed by 15 min at 4 °C to precool the cells. Cells were incubated with HSV-1 (1500 PFU/cell) for 1 h at 4 °C followed by incubation at 37 °C. (A) Infected primary human keratinocytes were fixed and prepared for electron microscopy after 10 min (a, b, e, f) or 30 min at 37 °C (g, h). As control, DMSO-pretreated cells were incubated with HSV-1 (1500 PFU/cell) for 60 min at 4 °C (c, d). Bar, 0.2 µm. (B) At 2 h at 37 °C, infected primary human keratinocytes were fixed and costained with TRITC-phalloidin (red) to visualize F-actin and mouse anti-ICP0. Single immunofluorescence analyses are shown. Bar, 40 µm. (C) Percentages of particles on the surface, and of particles inside, including free cytoplasmic capsids and enveloped particles in vesicles, are shown for DMSO- or dynasore-treated primary keratinocytes at 10 and 30 min at 37 °C. In two independent experiments 108 (DMSO) and 122 (dynasore) particles in total were evaluated for the 10 min time point, and in one experiment 52 (DMSO) and 62 (dynasore) particles were analyzed for the 30 min time point. (D) Infected HaCaT cells pretreated with DMSO or dynasore were fixed and prepared for electron microscopy at 10 min at 37 °C (a-c). Bar, 0.2 µm. (E) Percentages of particles on the surface and particles inside are shown for DMSO- or dynasore-treated HaCaT cells at 10 min at 37 °C. In two independent experiments 78 (DMSO) and 76 (dynasore) particles in total were evaluated. Results are mean ± standard deviation values. doi:10.1371/journal.pone.0025464.g007

Figure 8. HSV-1 infection of murine epidermis pretreated with dynasore. Epidermal sheets prepared from newborn mouse skin were separated from the dermis by dispase II treatment, followed by incubation on medium containing 40 or 120 µM dynasore or DMSO. After 1 h at 37 °C, HSV-1 was added at 100 PFU/cell. At 3 h p.i., epidermal whole mounts showing the basal keratinocyte layer and developing hair follicles were costained with mouse anti-ICP0 (red) and rabbit anti-keratin 14 (green), visualized with AF555-conjugated anti-mouse (Molecular Probes) and AF488-conjugated anti-rabbit (Molecular Probes) antibodies, respectively. Single immunofluorescence analyses are shown. Bar, 80 µm. doi:10.1371/journal.pone.0025464.g008

In summary, we suggest that HSV-1 uptake into human keratinocytes involves endocytic pathway(s) and fusion at the plasma membrane, and that both routes are dynamin-mediated and cholesterol-dependent. To understand the underlying mechanisms it will be important to characterize the contribution of nectin-1 and other HSV-1 receptors in human keratinocytes. Hippocampal murine neuron cultures were prepared as described [45]. In brief, hippocampi were dissected from embryonic day 9 mice. After treatment with 0.1% trypsin and 150 µg/ml DNase for 30 min at 37 °C, cell suspensions were mechanically dissociated by pipetting and finally centrifuged at 400 g for 5 min. About 20,000 cells were plated on poly-lysine-coated coverslips in B27 neurobasal medium supplemented with 1% L-glutamine. Cultures were maintained for about 10 days before infection. Murine epidermal sheets were taken from the back skin of wild-type (C57BL/6) newborn mice. At 3 days after birth, mice were decapitated and skin pieces of about 15 mm diameter were taken. After incubation for 30 min at 37 °C with 5 mg/ml dispase II (Roche) in PBS, the epidermis was washed three times in PBS, gently removed from the underlying dermis as an intact sheet using forceps, and used immediately for infection studies. Infection studies were performed with purified preparations of HSV-1 wild-type strain 17 as described [46].
In general, virus inoculum was added to the cells at 37 °C, defining time point 0. In addition, virus was preadsorbed for 1 or 2 h at 4 °C as indicated. Virus titers were determined on Vero cells. Pretreatment of virus with 40 µM dynasore or 10 mM MbCD was performed for 30 min at 37 °C or room temperature, respectively. Ethics statement The preparation of neuronal cells and epidermal sheets from sacrificed animals was carried out in strict accordance with the recommendations of the Guide of the Landesamt für Natur, Umwelt und Verbraucherschutz, Nordrhein-Westfalen (Germany). The study was approved by LANUV NRW (Number 8.84-02.05.20.11.058). Inhibitor studies Cytochalasin D and nocodazole (Sigma), and the dynamin inhibitors dynasore (Tocris) and MiTMAB (Calbiochem), were dissolved in dimethyl sulfoxide (DMSO). Methyl-β-cyclodextrin (MbCD) (Sigma) and chlorpromazine (Sigma) were dissolved in water; monensin (Sigma) and 5-(N-ethyl-N-isopropyl)amiloride (EIPA) (Sigma) were dissolved in ethanol. Cells were treated with the appropriate drugs for 30 min followed by infection at 37 °C in the continued presence of the drug. Only MbCD was removed prior to infection by washing the cells three times with medium. Cholesterol (Sigma) was used to replenish depleted cholesterol in the plasma membrane; cells pretreated with MbCD for 30 min at 37 °C were washed three times, and cholesterol was added for 30 min at 37 °C followed by three further washing steps prior to infection. Alexa Fluor 594-conjugated cholera toxin B (Molecular Probes) served as control for the cholesterol-depleting function of MbCD; cells pretreated with MbCD for 30 min at 37 °C were washed and incubated for 15 min at 4 °C. After addition of AF594-conjugated cholera toxin B, cells were incubated for 10 min at 4 °C followed by 10 min at 37 °C and fixation. AF488-conjugated transferrin (Molecular Probes) was used as a control for dynasore inhibition.
Transferrin was added to dynasore-treated or untreated cells for 15 min at 37 °C, and removed from the cell surface prior to fixation by washing with 0.1 M glycine, 150 mM NaCl (pH 2.5) [47]. The effects of the dynamin-1 mutant K44A were quantified by counting about 300 transfected cells, visualized by GFP fluorescence, in three independent experiments and calculating the number of infected cells visualized by ICP0 staining. Electron microscopy Infected cells were prepared for electron microscopy as described [50]. Thin sections were cut, stained with uranyl acetate and lead citrate, and analyzed in a JEOL 1200 EX II transmission electron microscope. For quantification we examined sections of 0.1 µm. In each section we analyzed 80-100 cells, with every third cell showing at least one virus particle. For each time point and experiment, sections of about 30 cells with 60-90 virus particles in total were evaluated.
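The per-experiment quantification described above reduces to simple proportions and their spread across experiments. A minimal sketch of such a tally (the counts below are invented placeholders, not the published data):

```python
from statistics import mean, stdev

# Hypothetical particle counts per independent experiment:
# (on_surface, free_cytosolic_capsids, enveloped_in_vesicles)
experiments = [(30, 12, 18), (25, 15, 20)]

def percentages(surface, free, vesicle):
    """Return (% on surface, % inside) for one experiment."""
    total = surface + free + vesicle
    inside = free + vesicle
    return 100 * surface / total, 100 * inside / total

per_exp = [percentages(*counts) for counts in experiments]
surf = [p[0] for p in per_exp]
inside = [p[1] for p in per_exp]
print(f"surface: {mean(surf):.1f} ± {stdev(surf):.1f} %")
print(f"inside:  {mean(inside):.1f} ± {stdev(inside):.1f} %")
```

Reporting the mean ± standard deviation of the per-experiment percentages, rather than pooling all particles, is what "results are mean ± standard deviation values" implies for two independent experiments.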
DELight: a Direct search Experiment for Light dark matter with superfluid helium. To reach the ultra-low detection thresholds necessary to probe unprecedentedly low Dark Matter masses, target material alternatives and novel detector designs are essential. One such target material is superfluid 4 He, which has the potential to probe so far uncharted light Dark Matter parameter space at sub-GeV masses. The new "Direct search Experiment for Light dark matter", DELight, will use superfluid helium as active target, instrumented with magnetic micro-calorimeters. It is being designed to reach sensitivity to masses well below 100 MeV in Dark Matter-nucleus scattering interactions. Introduction Dark Matter (DM) is a well-established concept in particle physics, astrophysics and cosmology and a key component of the Lambda Cold Dark Matter (ΛCDM) model of Big Bang cosmology. A fit of the ΛCDM model to the power spectrum of the cosmic microwave background (CMB) anisotropies predicts that DM constitutes about 85% of all matter in the Universe [1]. Still, although there is compelling evidence for DM [1][2][3][4], its nature is unknown and DM particles have yet to be discovered. A great variety of theoretical DM particle candidates exists, spanning many orders of magnitude in mass and coupling strength [5]. Ongoing and planned experiments are only sensitive to a subset of these candidates, one of which is the so-called weakly interacting massive particle (WIMP), with mass and coupling(s) around the weak scale, produced in the early Universe in thermal equilibrium. Experimental efforts to detect WIMPs include direct detection experiments, designed to measure WIMP scattering off nuclei in the laboratory. Those searches have not discovered the WIMP to date, but have excluded most simple WIMP models with DM masses close to the weak scale [5].
One possible explanation is that DM is lighter than predicted by the standard WIMP paradigm, well below masses of a few GeV, a possibility that is currently the subject of much theoretical and experimental interest [6]. This WIMP-like sub-GeV DM particle candidate is commonly referred to as Light DM (LDM), and its potential couplings to standard matter have barely been probed by direct detection experiments thus far. The signal resulting from DM scattering off nuclei at such low masses is too small to be observed in typical direct DM search experiments, and new detector concepts have to be developed to gain the necessary sensitivity. The Direct search Experiment for Light DM, DELight, will be built to thoroughly explore the LDM region well below the GeV mass scale in DM-nucleus scattering searches. Superfluid helium The DELight detector concept exploits the superfluid phase of the noble gas 4 He, which has many attractive features as a target material for an LDM search experiment. It is very light compared to typical direct detection target elements such as xenon, argon, germanium and silicon, naturally providing sensitivity to lower DM masses due to the better kinematic matching between the DM particle and the target nucleus. Liquid helium is furthermore easily scalable and inexpensive, and standard commercial methods exist for its handling. It has no long-lived radioisotopes of its own, it is self-cleaning at superfluid temperatures in that all other atomic species freeze out, and it has a high impedance to external vibration noise. Its first excited nuclear state is comparatively high at about 21 MeV, excluding all backgrounds from nuclear excitation below this energy [7]. Background events near the helium-to-cell interface induced by radioactivity of the surrounding material can be mitigated by fiducializing the monolithic superfluid helium target.
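The kinematic-matching advantage mentioned above can be made concrete: for elastic scattering, the maximum nuclear recoil energy is E_R,max = 2μ²v²/m_N, with μ the DM-nucleus reduced mass. A back-of-the-envelope comparison (the velocity and mass values here are illustrative assumptions, not numbers from this paper):

```python
def max_recoil_eV(m_dm_GeV, m_nucleus_GeV, v_over_c=2.5e-3):
    """Maximum elastic recoil energy 2*mu^2*v^2 / m_N, returned in eV."""
    mu = m_dm_GeV * m_nucleus_GeV / (m_dm_GeV + m_nucleus_GeV)
    return 2 * mu**2 * v_over_c**2 / m_nucleus_GeV * 1e9  # GeV -> eV

m_dm = 0.1  # a 100 MeV dark-matter candidate
for name, m_n in [("He-4", 3.73), ("Xe-131", 121.9)]:
    print(f"{name}: E_R,max ~ {max_recoil_eV(m_dm, m_n):.1f} eV")
```

With these inputs, a 100 MeV particle can deposit at most a few tens of eV in a helium nucleus but only about 1 eV in xenon, which is why a light target combined with a 10-20 eV threshold opens the sub-GeV regime.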
Superfluid helium thus provides an extremely radiopure and compact low-background target with means to suppress various external background sources. A key feature for efficient event classification is the presence of three independent and distinguishable signal channels and the fact that the energy partitioning among these channels depends on the ionization density resulting from the initial particle interaction. The initial interaction prompts a cascade of processes, eventually terminating with the total energy distributed among 1) phonons and rotons (collectively referred to as "quasiparticles"), 2) infrared (IR) and especially ultraviolet (UV) photons, and 3) long-lived triplet helium excimers. Quantum evaporation enables the detection of the quasiparticles via liberation of 4 He atoms into a vacuum [8].

Figure 1: a) schematic of the planned detector, also illustrating the main signal channels; b) schematic of a large-area MMC-based sub-detector with dielectric wafer as absorber (gray), phonon collectors (yellow), paramagnetic temperature sensor (orange) and superconducting pickup coil (black) connected to a separately mounted readout SQUID [19].

Quasiparticles with enough energy to overcome the helium binding energy of ∼ 0.7 meV lead to the evaporation of 4 He atoms with an efficiency of about 30% [9,10]. The typical quasiparticle energies involved are ≥ 0.8 meV. Thus particle interactions with recoil energies of O(10 keV) yield a large evaporation burst. The UV photons and triplet excimers result from the production of excimers in each event. Excimers in the triplet state are long-lived, with a half-life of about 13 s, and are observable as ballistic molecules. Excimers in the singlet state decay within about 1 ns, emitting UV photons with a broad distribution peaking at 16 eV. Since the first excited state of atomic helium is at ∼ 20 eV, the liquid is transparent to these UV photons. It was demonstrated in Ref.
[11] that the singlet and triplet excimer signals can be separated and that single 15 eV photons can be detected. Because of these favorable properties, superfluid 4 He was considered early on as a unique target material for LDM searches [12,13] and remains a target of interest to date, for example in the HeRALD project using superfluid 4 He instrumented with transition edge sensors (TESs) [14]. Careful studies have been carried out to simulate possible backgrounds and to explore DM sensitivities [14][15][16][17], and advanced detection schemes involving field ionization have been proposed [18]. All these studies underscore the vast opportunities of such a detector. Detector concept The basic principles of a particle detector based on superfluid helium were already developed and demonstrated in the 1990s within the solar neutrino project HERON [20]. The DELight project follows the same basic idea and concept. For HERON, a detector was built with a superfluid 4 He volume of 3 L, operated at 20 mK. Using movable radioactive sources, the underlying detection scheme was established, including the quasiparticle generation followed by the liberation of helium atoms through quantum evaporation [9,10]. The evaporated atoms are subsequently adsorbed onto a thin silicon wafer positioned directly above the liquid surface. The adsorption energy of a 4 He atom onto silicon is about 10 times larger than the binding energy of that atom on the liquid helium surface, which provides an effective signal amplification by an approximate factor of 10. As long as helium remains superfluid, though, it creeps up walls, including those of the detector cell, until it eventually reaches the wafer. To maintain the amplification factor and to reduce the overall heat capacity of the calorimeters, the wafer surface must be kept free of helium, which is achievable with a film burner.
The film burner is a helium film removal device using heated baffles, as demonstrated by the HERON collaboration [21]. Figure 1a shows a draft schematic of the planned DELight detector together with the signal channels described in Sec. 2. The first DELight detector cell will hold a 4 He volume of 10 L. In later phases of the experiment larger cell volumes up to O(100 L) are anticipated. 4 He remains liquid down to zero kelvin, enabling energy measurements with ultra-sensitive cryogenic calorimeters. DELight will employ magnetic micro-calorimeters (MMCs) and will be operated below 20 mK to reduce quasiparticle scattering within the liquid and to enhance the sensitivity of the MMCs. The principal components of an MMC are a particle absorber and a paramagnetic temperature sensor [22,23]. The absorber is in tight thermal contact with the sensor and its material is matched to the particles to be observed. The sensor is placed in a weak magnetic field to create a temperature dependent sensor magnetization. The temperature and magnetic field dependence and the total heat capacity of the micro-calorimeter can be calculated and thus optimized using simulations. A change of sensor magnetization resulting from a particle-induced energy deposition, and thus from a rise in temperature, can be measured very precisely as a change of magnetic flux using a superconducting quantum interference device (SQUID). The outstanding performance of MMCs was demonstrated in Ref. [24], where 6 keV X-rays were detected with an energy resolution of 1.6 eV. To keep the MMC in a well-defined state in the absence of an energy deposition, the sensor is weakly linked to a thermal bath with constant temperature. Fundamentally, the energy resolution of MMCs is limited by thermal fluctuations between the absorber, the sensor, and the thermal bath, which are very small at the typical operating temperature of less than 20 mK [24,25]. 
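The calorimetric principle described above boils down to ΔT = E/C_tot: a small energy deposition raises the sensor temperature, which changes its magnetization. A toy estimate (the heat-capacity value is an assumed round number for illustration, not a DELight design figure):

```python
E_dep = 6e3 * 1.602e-19   # 6 keV energy deposition, converted to joules
C_tot = 1e-11             # assumed total heat capacity of ~10 pJ/K at mK temperatures
dT = E_dep / C_tot        # temperature rise of the calorimeter
print(f"temperature rise ~ {dT * 1e6:.0f} µK")
```

A rise of order 100 µK on a ~10-20 mK operating point is readily measurable as a magnetization change, which is why low operating temperatures (small heat capacities, steep magnetization curves) are essential for MMC performance.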
The DELight detector will be instrumented with large-area MMC-based sub-detectors, each consisting of a dielectric handling wafer that also acts as absorber, phonon collectors, a distributed paramagnetic temperature sensor, and a superconducting pickup coil that is connected to a readout SQUID. A schematic of such a sub-detector is shown in Fig. 1b. Each flux change in one of the pickup coils is transferred to the SQUID and converted into a voltage signal. At the same time, the pickup coils are used to generate the magnetic field required to bias the paramagnetic sensors. The detection system for the first phase of DELight consists of about 50 large-area sub-detectors, each having an area of about 40 cm². One fifth of the sub-detectors are placed above the liquid, for the detection of the evaporated He atoms and of UV photons, and four fifths are submerged into the superfluid to also collect UV photons and to additionally observe the long-lived triplet excimers (see Fig. 1a). Using Si wafers with a thickness of 300 µm operated at 10 mK, a detector-intrinsic energy resolution of about 3-6 eV can be estimated [24], corresponding to a threshold of about 10-20 eV. Ultimately, at later phases of the experiment, a threshold below 10 eV is expected to be achieved by optimizing the sub-detector design and the instrumentation layout of the entire detector. Science goals The current status of spin-independent DM-nucleus scattering searches spanning a wide mass range is indicated in Fig. 2. Also shown are the neutrino discovery limit in helium calculated as described in Ref. [26] (gray area), the parameter space excluded by CRESST-III [27], DarkSide [28], and XENON1T [29,30] (blue area), and the projected limits of DARWIN [31] and SuperCDMS [32] (blue dashed lines). At high DM masses, at the GeV to TeV scale, DARWIN is foreseen as the ultimate xenon-based detector, which will reach the neutrino fog [31].
Towards lower masses, existing constraints extend to approximately 200 MeV, but at a substantial loss of sensitivity to the interaction strength. Improvements in the sensitivity at masses around 1 GeV are expected especially from upcoming solid-state detectors such as CRESST-III and SuperCDMS [27,32]. Key limitations for an extension towards lower masses are the low signal amplitudes expected for massive target nuclei and the dark-current level originating largely from the application of high external fields. The DELight experiment will overcome these limitations by using one of the lightest elements and by not relying on an electric field in the baseline design. Already with a small exposure of O(kg·day) and a moderate threshold of about 20 eV, as planned for the first phase of DELight, unprecedented sensitivity to the scattering cross section is possible at masses below about 1 GeV based on zero-background projections (see Fig. 2). This is supported by HeRALD projections calculated under background assumptions that also apply to DELight [14]. The long-range plan of DELight targets an O(kg·year) exposure acquired with a helium volume of up to 200 L in an underground laboratory. With this exposure and an anticipated threshold of < 10 eV, an LDM mass as low as 30 MeV becomes accessible. Exploring the sub-100 MeV mass range forms a milestone in direct DM-nucleus scattering searches. Conclusion Despite great advances in direct DM searches, DM remains elusive to date using traditional detector materials. Most simple WIMP models have been excluded, which motivates searches for models beyond the traditional WIMP. Future projects must thus not only focus on enhancing detector sensitivity to weaker coupling strengths but also on enlarging the parameter space towards LDM masses well below the GeV scale. A very promising target material for direct LDM-nucleus scattering searches is superfluid 4 He.
It is a very light, ultra-pure, easily scalable, inexpensive target that allows for energy measurements with extremely sensitive micro-calorimeters. The DELight experiment will use superfluid helium as target material instrumented with MMC sensors and a SQUID readout system. DELight is currently in its planning phase and will be designed to probe LDM down to sub-100 MeV masses. It will become a leading experiment in direct searches for LDM in nuclear scattering interactions.
Passivation Characteristics of Alloy Corrosion-Resistant Steel Cr10Mo1 in Simulating Concrete Pore Solutions: Combination Effects of pH and Chloride. The electrochemical passivation behaviour of the new alloy corrosion-resistant steel Cr10Mo1, immersed in alkaline solutions with different pH values (13.3, 12.0, 10.5 and 9.0) and chloride contents (0.2 M and 1.0 M), was investigated by various electrochemical techniques: linear polarization resistance, electrochemical impedance spectroscopy and capacitance measurements. The chemical composition and structure of the passive films were determined by XPS. The morphological features and surface composition of the immersed steel were evaluated by SEM together with EDS chemical analysis. The results show that pH plays an important role in the passivation of the corrosion-resistant steel and that the effect depends strongly on the chloride content. In solutions with low chloride (0.2 M), the corrosion-resistant steel shows notably enhanced passivity as the pH falls from 13.3 to 9.0, but the opposite behaviour in the presence of high chloride (1.0 M). The passive film on the corrosion-resistant steel presents a bilayer structure: an outer layer enriched in Fe oxides and hydroxides, and an inner layer rich in Cr species. The film composition varies with pH value and chloride content. As the pH drops, more Cr oxides are enriched in the film while Fe oxides gradually decompose. Increasing chloride promotes the transformation of Cr and Fe oxides into their hydroxides, which offer little protection, and this is more significant at lower pH (10.5 and 9.0). These changes explain the passivation characteristics of the corrosion-resistant steel in the different electrolyte solutions. Introduction Most reinforced concrete structures are expected to remain in service for at least 75 years without major repairs [1]. However, this is difficult to achieve, not because of structural problems but because of durability issues.
Corrosion of reinforcing steel inside concrete is one of the most important factors that reduce the durability of concrete structures. In order to minimize or prevent steel corrosion, various methods and techniques [2,3] have been developed and applied, the most important being concrete cover optimization, electrochemical protection (including cathodic protection, electrochemical realkalization and electrochemical chloride extraction), chemical inhibitor incorporation and epoxy coating on rebar. Results of Optical Microscopy (OM) observation of the steel are shown in Figure 1, which indicates granular bainite with ferrite between the grains as the microstructure of the steel. Steel samples of 10 mm length were cut from ribbed rebars with a diameter of 25 mm. The cross-sections of the steel samples were mechanically ground with grade 200, 600, 1000 and 2000 SiC emery papers successively, and polished with alumina paste up to 2.5 μm grit to eliminate the heterogeneities of the steel surface. After polishing, the samples were degreased with alcohol, rinsed with distilled water and dried with a stream of air just before immersion, to ensure identical initial surface states. The samples were kept in the test solutions for up to 10 days, during which electrochemical responses were recorded for all electrodes after 6 h, 1 day, 3 days, 7 days and 10 days of immersion, to track the formation process of the passive films on the exposed surfaces [16,17]. Electrochemical Measurements The electrochemical responses of the surface films formed on the steel in solutions with different pH and chloride contents were monitored by electrochemical tests.
The electrochemical tests, including linear polarization resistance (LPR), electrochemical impedance spectroscopy (EIS) and capacitance measurements (Mott-Schottky approach), were performed at room temperature (25 °C) under natural aeration in a classical three-electrode electrochemical cell, in which the steel sample was installed as the working electrode with an exposed area of 1 cm², a saturated calomel electrode (SCE; all electrode potentials reported in this study are referred to the SCE) served as the reference electrode, and a platinum counter electrode was used. The LPR measurements were carried out with polarization within ±20 mV of the open-circuit potential (OCP), in the anodic direction, at a scan rate of 0.1667 mV/s. The EIS response was recorded following LPR, in a frequency range from 10⁴ Hz down to 10⁻² Hz with an applied AC amplitude of 10 mV at OCP. Capacitance measurements [18] were performed at a fixed frequency of 1000 Hz (the parameters obtained from Mott-Schottky plots are almost frequency-independent when frequencies on the order of kHz are used, according to our previous trials and some references [19]) with a sinusoidal signal of 10 mV, with the polarization applied in successive steps of 50 mV in the cathodic direction from +0.25 V to −1.5 V vs. SCE. There were 3 replicates for each specimen. The equipment used was a PARSTAT 4000 electrochemical system (Princeton Applied Research Inc., Oak Ridge, TN, USA). Surface Analysis Steel samples immersed in the test solutions for 7 d at OCP were withdrawn, rinsed with distilled water and dried with ethanol, and then kept in a vacuum dryer. The chemical composition and thickness of the surface films on the steel samples were determined by X-ray photoelectron spectroscopy (XPS).
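The Mott-Schottky capacitance measurements described above extract the semiconducting properties of the passive film from the potential dependence of the space-charge capacitance, via the linearity of C⁻² versus E. A generic sketch on synthetic data (the permittivity, flat-band potential and donor density below are illustrative textbook-style values, not results from this paper):

```python
import numpy as np

# Mott-Schottky relation for an n-type passive film:
#   C^-2 = 2/(e*eps*eps0*Nd) * (E - Efb - kT/e)
# so the donor density Nd follows from the slope of C^-2 vs E.
e = 1.602e-19        # elementary charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m
eps = 12.0           # assumed relative permittivity of the passive film
kT_over_e = 0.0257   # thermal voltage at 25 °C, V

# Synthetic capacitance data per unit area (F/m^2) over a potential window
Efb, Nd_true = -0.60, 5e26   # flat-band potential (V vs SCE), donor density (m^-3)
E = np.linspace(-0.3, 0.3, 13)
C = (2 * (E - Efb - kT_over_e) / (e * eps * eps0 * Nd_true)) ** -0.5

slope = np.polyfit(E, C**-2, 1)[0]    # d(C^-2)/dE from a linear fit
Nd = 2 / (e * eps * eps0 * slope)
print(f"recovered donor density ~ {Nd:.2e} m^-3")
```

A positive slope of the C⁻² vs E plot indicates n-type semiconducting behaviour of the film, and a steeper slope corresponds to a lower donor (defect) density, i.e., a more protective film.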
A PHI Quantera SXM X-ray photoelectron spectrometer (ULVAC-PHI Inc., Chigasaki, Japan), equipped with a monochromatic Al Kα radiation source (1486.6 eV), a hemispherical electron analyser operating at a pass energy of 55 eV and an analytical chamber with a base pressure of 10⁻⁷ Pa, was used to collect XPS spectra. The depth profile information was obtained by sputtering the specimens with a scanning argon-ion gun operating at an ion energy of 2 keV. The sputtering rate was estimated to be about 0.055 nm·s⁻¹. The spectra were calibrated by setting the main line of the C 1s signal of adventitious carbon to 284.6 eV. All XPS spectral analysis was performed with the commercial software XPSpeak version 4.1, which contains the Shirley background subtraction and Gaussian-Lorentzian tail function for better spectra fitting. The surface morphologies of the samples were examined by scanning electron microscopy (SEM), using a FEI 3D microscope (FEI, Hillsboro, OR, USA) equipped with energy dispersive spectrometer (EDS) microanalysis hardware, which aims to examine the chemical composition of the surfaces. Surface Film Composition and Structure (XPS Analysis) The survey spectra (not shown) from samples exposed to all media only exhibit signals from the alloy constituents, carbon and oxygen. No traces of components from solution (Na⁺, K⁺, Ca²⁺ and Cl⁻) were detected.
The high-resolution XPS spectra of the Fe 2p, Cr 2p and O 1s signals were deconvoluted into the most probable chemical states for the corresponding chemical assignments using the deconvolution software XPSpeak version 4.1, based on the average binding energies reported in the Handbook of X-ray Photoelectron Spectroscopy [20] and previous works [21]. According to the deconvolution results (Figure 2), the Fe 2p3/2 signal (Figure 2a) consists of four components: the metallic state (Femet, 706.5 eV), Fe²⁺ in oxide form (FeO, 709.5 eV), and Fe³⁺ in oxide (Fe2O3, 710.6 eV) and hydroxide (FeOOH/Fe(OH)3, 712.0 eV) forms. The Cr 2p3/2 spectrum (Figure 2b) presents three constituent peaks, assigned to Crmet (573.6 eV), Cr2O3 (576.3 eV) and CrOOH/Cr(OH)3 (577.1 eV), respectively. The peak intensity of Cr2O3 is apparently higher than that of CrOOH/Cr(OH)3, indicating that Cr2O3 is the dominant Cr species in the passive film on the steel. The O 1s signal (Figure 2c) was fitted with two contributions: one peak at 530.2 eV corresponding to O²⁻ in the Fe and Cr oxides, and another at 531.8 eV corresponding to OH⁻ in the hydroxides. The XPS measurements did not detect Mo in the passive film on the Cr10Mo1 steel; its behaviour is therefore not discussed in this paper, although Mo may considerably affect the electronic properties and electrochemical processes of the passive film, as presented in some works [22]. Figures 3 and 4 display the XPS depth profiles of the surface films formed on the corrosion-resistant steel in all test solutions. As can be observed, the Fe-metal and Cr-metal concentrations increase progressively with depth, and do so dramatically where the oxidation-state components almost disappear, as expected. This indicates that the predominant metallic-state signal comes from the substrate once the surface film is sputtered away.
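The deconvolution step described above can be imitated numerically. The sketch below fits a synthetic O 1s envelope with two components at the binding energies quoted in the text (530.2 eV for O²⁻, 531.8 eV for OH⁻); it uses pure Gaussian line shapes rather than the Gaussian-Lorentzian shapes XPSpeak applies, and the amplitudes, widths and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, sigma):
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def two_peaks(x, a1, c1, s1, a2, c2, s2):
    # O 1s envelope modelled as an oxide (O2-) plus a hydroxide (OH-) component
    return gaussian(x, a1, c1, s1) + gaussian(x, a2, c2, s2)

# Synthetic O 1s spectrum: O2- at 530.2 eV, OH- at 531.8 eV (positions from the
# text; amplitudes, widths and noise are illustrative)
be = np.linspace(526.0, 536.0, 400)
truth = two_peaks(be, 1000.0, 530.2, 0.6, 600.0, 531.8, 0.7)
rng = np.random.default_rng(0)
spectrum = truth + rng.normal(0.0, 5.0, be.size)

p0 = [800.0, 530.0, 0.5, 500.0, 532.0, 0.5]  # initial guesses for the fit
popt, _ = curve_fit(two_peaks, be, spectrum, p0=p0)
print(f"fitted centers: {popt[1]:.2f} eV, {popt[4]:.2f} eV")
```

With reasonable starting values the fit recovers the two component positions even though the peaks overlap, which is the essence of the deconvolution used for the chemical assignments.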
The concentration of oxygen, on the other hand, increases initially (over about 1~2 nm) and then decreases significantly on approaching the metal surface. The high carbon concentration at the surface of each specimen is mainly due to organic carbon contamination (the absorbed alcohol), as noted in other studies [12]. The percentage of Fe in oxidized form rises sharply at first, then reduces gradually and vanishes. Similarly, the Cr-oxidation component also increases initially and decreases afterwards. However, the cut-off points are not identical: for the Fe species the cut-off point lies at about 1 nm, while for the Cr species it lies at a greater depth of about 2~3 nm. This important difference reveals that the constituents within the passive film of the corrosion-resistant steel vary with depth into the layer: the inner region adjacent to the metal substrate is a layer concentrated in Cr species, while the outer layer is mostly composed of Fe oxides and hydroxides. The depth at which the Fe-oxidation or Cr-oxidation content tends to zero was taken as the interface where the layer is sputtered through [23]. On this basis, the surface films formed on the steel in the solutions with 0.2 M chloride are approximately 5 nm, 5 nm, 6 nm and 6 nm thick. It is noteworthy that, in solutions with 0.2 M Cl⁻, the Cr-oxidation content shows an increasing trend with falling pH, indicating a gradual enrichment of the Cr species in the surface film. The literature [24,25] attributes this to the preferential solubility of Cr/Fe oxides at different pH: at high pH, Cr species become more soluble as chromite ions (CrO₂⁻ and CrO₃³⁻), whereas Fe oxides are more stable; at low pH, the Cr³⁺ concentration within the surface film increases owing to the higher stability of Cr oxides together with the extensive dissolution of Fe oxides into the aqueous solution as the pH drops.
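The film-thickness criterion above (the depth at which the oxidized-metal signal tends to zero) is easy to apply numerically: sputter time is converted to depth with the 0.055 nm·s⁻¹ rate quoted in the Surface Analysis section, and the cut-off depth is interpolated from the profile. The sketch below uses an illustrative Cr-oxidation profile, not the measured data.

```python
import numpy as np

def cutoff_depth(depths, oxide_pct, threshold=1.0):
    """Depth at which the oxidised-metal signal first drops to `threshold` %
    (linear interpolation between profile points). The text takes this
    cut-off as the point where the surface film is sputtered through."""
    for i in range(1, len(depths)):
        if oxide_pct[i] <= threshold:
            d0, d1 = depths[i - 1], depths[i]   # bracketing depths
            p0, p1 = oxide_pct[i - 1], oxide_pct[i]
            return d0 + (p0 - threshold) * (d1 - d0) / (p0 - p1)
    return None  # film thicker than the profiled depth

# Illustrative Cr-oxidation depth profile: depth (nm) = 0.055 nm/s x sputter time
depths = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
cr_ox = np.array([12.0, 18.0, 15.0, 8.0, 3.0, 0.5, 0.0])  # at.%, illustrative
print(f"film thickness ~ {cutoff_depth(depths, cr_ox):.1f} nm")
```

For this synthetic profile the oxide signal crosses the 1 % threshold at about 4.8 nm, comparable to the 5~6 nm thicknesses reported for the 0.2 M chloride solutions.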
However, the Cr species enrichment at low pH cannot be explained solely by the relative decrease of Fe oxides through selective dissolution. It is most probably the consequence of further formation and growth of Cr oxides, because the Cr-concentrated inner layer of the surface film becomes thicker when the pH drops from 13.3 to 9.0 (Figure 3), which might result from excessive dissolution of Fe and Cr from the substrate. When exposed to less alkaline media, the Fe oxides in the outer layer decompose gradually and release more soluble ions into the solution, promoting dissolution of the metallic Fe beneath the film into the film layer as Fe(OH)aq [26]. Although lower pH facilitates dissolution of this excess Fe, few new Fe species form in the film layer because of the high dissolution rate of Fe oxides at the film/solution interface, so the Fe-based outer layer does not grow, as illustrated in Figure 3. Following the metallic Fe dissolution, Cr from the metal also dissolves excessively and produces new Cr oxides owing to their very high stability. The newly formed Cr species precipitate in the inner layer, and the thickness of the Cr-oxides layer therefore increases. Freire et al. [18] investigated the effect of pH on the passive behaviour of AISI 316 stainless steel in alkaline media in the absence of chloride, and recognized an increasing enrichment of Cr oxides accompanied by passive film thickening at lower pH. It should be stated that this is also the case for the Cr10Mo1 corrosion-resistant steel, at least when the environment is only moderately contaminated by chloride. When the chloride reaches 1.0 M (Figure 4), there is still a gradual enrichment of the Cr species in the surface film with decreasing pH; however, the film exhibits important changes: as the pH falls below 10.5, the film thickness declines to 4 nm, in contrast with the 6 nm thickness of the layer in the presence of 0.2 M Cl⁻.
This indicates that increasing chloride has some negative effect on the passivation of the steel, and this effect is more obvious in low pH (10.5 and 9.0) media.
Figures 5 and 6 show the composition profiles (atomic ratios Fehy/Feox and Crhy/Crox at various sputtered depths) of the surface films formed on the corrosion-resistant steel in the different test solutions, obtained from quantitative XPS analysis based on the component peak intensities. Generally, with increasing sputtered depth, the contents of Fe and Cr hydroxides decrease gradually and finally disappear, indicating that the hydroxides are the predominant components near the free surface of the film. It can be observed that, with decreasing pH, the Fehy/Feox ratio rises continuously at the same depth into the film (Figures 5a and 6a). In fact, less alkaline media favour the formation of Fe hydroxides [26]. When the surface film is exposed to lower pH, FeO (Fe3O4) and Fe2O3 decompose gradually, converting into porous and loose FeOOH/Fe(OH)3. This makes the Fe-oxides layer more and more defective.
Comparing the Fehy/Feox ratios in Figures 5a and 6a at identical pH, increasing chloride also brings about some rise of the Fehy/Feox ratio at the same depth into the film, an effect similar to that of decreasing pH. This is expected, because Cl⁻ tends to adsorb on the passive film and then occupy oxygen vacancies by taking the place of OH⁻ in the oxides. The continuous increase in vacancy production induces the formation of cavities in the oxides, eventually leading to their destruction [27]. Thus, both carbonation and chloride can damage the Fe-oxides layer and thereby reduce its protection, as mentioned in references [26,28]. Similar to the evolution of the Fehy content, the Crhy concentration among the Cr species also increases with falling pH (Figures 5b and 6b), but the change depends on the chloride content. In solutions with 0.2 M Cl⁻, the Crhy/Crox value remains small, below 0.25 (Figure 5b), indicating that most of the Cr oxides remain intact. However, with the Cl⁻ content increasing to 1.0 M, this ratio at the film surface rises sharply from 0.24 to 0.58 when the pH drops from 13.3 to 9.0 (Figure 6b), meaning that high chloride promotes the substantial formation of Cr hydroxides at low pH, which are also poorly protective [29].
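The Fehy/Feox and Crhy/Crox ratios above come from the fitted component intensities. Because both components belong to the same element and the same photoemission line, the XPS sensitivity factors cancel and the atomic ratio reduces to a ratio of fitted peak areas. A minimal sketch (the areas are illustrative, not fitted values from the study):

```python
def hydroxide_oxide_ratio(area_hydroxide, oxide_areas):
    """Fehy/Feox (or Crhy/Crox) from fitted XPS component peak areas.
    Same element and photoemission line, so sensitivity factors cancel
    and the atomic ratio is simply the area ratio."""
    return area_hydroxide / sum(oxide_areas)

# Illustrative fitted Fe 2p3/2 component areas (arbitrary units):
# FeOOH/Fe(OH)3 hydroxide area vs. the FeO and Fe2O3 oxide areas
ratio = hydroxide_oxide_ratio(120.0, [300.0, 180.0])
print(f"Fehy/Feox = {ratio:.2f}")  # -> 0.25
```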
Figure 7 presents the growth processes of the passive films on the steel in solutions of different pH and chloride contents. The thermodynamic stability of the oxides and hydroxides, which is affected by the concentrations of hydroxyl and chloride ions [30], determines passive film formation and growth, as confirmed by the XPS results of the present study. The following mechanism for passive film formation and growth on Cr-series stainless steels in alkaline solutions has been proposed in published works [31,32]. When the alloy is exposed to an alkaline electrolyte, Fe preferentially dissolves and diffuses faster into the solution, and is therefore enriched at the film/solution interface, while a slowly dissolving and slowly diffusing component such as Cr remains in the region nearer the metal, with little movement. In contact predominantly with OH⁻, the dissolved Fe/Cr initially forms hydroxides. With increasing passivation time, the hydroxides dehydrate into oxides and the oxides layer then grows gradually. The outer Fe-oxides layer grows by the diffusion of metallic ions through micropores in the film, and the inner Cr-oxides layer grows by access of OH⁻ to the film/alloy interface through these micropores. The continuously growing oxides layer blocks the transport of metallic and hydroxyl ions and thereby further slows its own growth. As a result, the passive film consists of an inner layer concentrated in Cr oxides and hydroxides (grown from the alloy matrix) and an outer layer enriched in Fe oxides and hydroxides (formed by a diffusion and precipitation process).
It should be noted that in solutions of high pH (13.3), chloride contents up to 1.0 M depress the growth and formation of the passive film on the corrosion-resistant steel very little (Figure 7a,b). In solutions of low pH (9.0), metallic Fe/Cr dissolves and diffuses faster, but the precipitation of the hydroxides and oxides, especially the Fe hydroxides and oxides, becomes more difficult owing to the diluted OH⁻ concentration. This allows excessive dissolution of Fe and Cr; in other words, more metallic Fe and Cr dissolve from the metal. Thus, the inner Cr-oxides layer grows further and becomes thicker, but the outer Fe-species layer does not, owing to its very high solubility at low pH (Figure 7c). However, in the presence of 1.0 M chloride, the inner Cr-oxides layer does not grow as it does with 0.2 M chloride but decreases dramatically in thickness, together with a further reduction of the Fe-species outer layer (Figure 7d), because the standard free energies of the Fe and Cr oxides are substantially raised in more severe environments [30].

Capacitance Measurements (Mott-Schottky Plots)

The semi-conductive behaviour and electronic properties of the passive oxide film can be assessed by measuring the capacitance of the electrode/electrolyte interface. When the oxide film is in contact with an electrolyte, the capacitance of the electrode/electrolyte interface (C) can be described by the capacitance of the space charge layer (Csc) and the capacitance of the Helmholtz layer (CH) as two capacitors in series, neglecting the contributions of the Gouy-Chapman diffuse layer and surface states to the total capacitance [33]. When the measuring frequency is high enough (on the order of kHz), the measured C is considered approximately equal to Csc, because the CH contribution is then negligible [34].
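The series-capacitor approximation above is easy to check numerically: when CH is much larger than Csc, the series combination is dominated by Csc. The sketch below uses illustrative orders of magnitude (a few µF/cm² for the space charge layer, tens of µF/cm² for the Helmholtz layer), not measured values from this study.

```python
def series_capacitance(c_sc, c_h):
    """Two capacitors in series: 1/C = 1/Csc + 1/CH."""
    return 1.0 / (1.0 / c_sc + 1.0 / c_h)

# Illustrative values (uF/cm^2); CH >> Csc is the usual case for passive films
c_sc, c_h = 5.0, 40.0
c_total = series_capacitance(c_sc, c_h)
print(f"C = {c_total:.2f} uF/cm^2")  # close to Csc, justifying C ~ Csc
```

Here the combined capacitance is about 4.4 µF/cm², within roughly 11 % of Csc, which is why the kHz-frequency measurement effectively probes the space charge layer alone.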
Assuming that the space charge layer of the semiconductor is under depletion conditions, the measured capacitance of the electrode/electrolyte interface, C, and the applied potential, E, can be described by the Mott-Schottky relationship:

1/C² = 2/(εε₀eNd) · (E − EFB − kT/e)   for an n-type semiconductor

1/C² = −2/(εε₀eNa) · (E − EFB + kT/e)   for a p-type semiconductor

where ε is the dielectric constant of the semiconductor (usually taken as 15.6 [33,35] for the oxide films formed on alloys), ε₀ is the vacuum permittivity (ε₀ = 8.85 × 10⁻¹⁴ F·cm⁻¹), Nd and Na are the donor and acceptor densities, respectively, e is the elementary charge (e = 1.602 × 10⁻¹⁹ C), k is the Boltzmann constant (k = 1.38 × 10⁻²³ J·K⁻¹), T is the absolute temperature, and EFB is the flat-band potential. The kT/e term can be neglected, as it is only about 25 mV at room temperature. The carrier density can be calculated from the slope of the experimental C⁻² versus E plot: a negative slope corresponds to a p-type semiconductor response and is inversely proportional to the acceptor density Na, while a positive slope corresponds to an n-type response and is inversely proportional to the donor density Nd. Figure 8 displays the Mott-Schottky plots recorded for the passive films formed on the steel after 7 d of immersion in all test solutions. Two linear regions are present in the Mott-Schottky plots, indicating that the passive film formed on the steel exhibits both n-type (positive slope) and p-type (negative slope) semiconducting characteristics irrespective of the exposure conditions. This duplex semiconductor character of the passive film is related to its chemical composition: the n-type behaviour can be attributed to the Fe oxides and hydroxides, and the p-type behaviour to the Cr species, according to previous investigations [33,35]. Table 1 lists the carrier concentrations (donor and acceptor densities, Nd and Na, respectively) of the semiconducting passive films.
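The carrier-density extraction described above reduces to a linear fit of C⁻² against E. The sketch below uses the constants quoted in the text; the synthetic n-type branch (the Nd value, flat-band potential and potential range) is an illustrative assumption, constructed from a known Nd and then recovered by the fit.

```python
import numpy as np

EPS = 15.6            # dielectric constant of the passive film (from the text)
EPS0 = 8.85e-14       # vacuum permittivity, F/cm (from the text)
E_CHARGE = 1.602e-19  # elementary charge, C (from the text)

def donor_density(E, C):
    """Donor density Nd (cm^-3) from the positive-slope (n-type) branch of a
    Mott-Schottky plot: the slope of C^-2 vs E equals 2/(eps*eps0*e*Nd),
    neglecting the kT/e term."""
    slope, _ = np.polyfit(E, C ** -2, 1)
    return 2.0 / (EPS * EPS0 * E_CHARGE * slope)

# Synthetic n-type branch built from a known Nd (illustrative value)
Nd_true = 5e20                          # cm^-3
slope = 2.0 / (EPS * EPS0 * E_CHARGE * Nd_true)
E = np.linspace(-0.2, 0.4, 13)          # V vs. SCE, illustrative range
E_fb = -0.5                             # flat-band potential, V, illustrative
C = 1.0 / np.sqrt(slope * (E - E_fb))   # capacitance, F/cm^2
print(f"recovered Nd = {donor_density(E, C):.2e} cm^-3")
```

The acceptor density Na is obtained the same way from the magnitude of the negative slope of the p-type branch.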
In solutions with 0.2 M Cl⁻, the carrier concentrations of the steel decline as the pH varies from 13.3 to 9.0, meaning the film behaves as a semiconductor with progressively poorer electrical conductivity. This evolution indicates more effective corrosion protection by the surface film at lower pH. According to the point defect model (PDM) proposed by Macdonald and co-workers [36], the passive film contains a number of point defects, such as oxygen vacancies and cation vacancies, which act as donors and acceptors, respectively. Carbonation promotes the excessive dissolution of Fe and Cr from the metal and results in more Fe and Cr species being formed in the film, with decreased oxygen and cation vacancy concentrations. In test solutions with 1.0 M Cl⁻, however, this trend is reversed: Na and Nd of the steel are far higher at pH 9.0 than at pH 13.3. This substantial change in carrier concentration corresponds to non-stoichiometric defects in the space charge region or to a disordered character of the surface film [37]. Thus, the surface film becomes more and more defective with decreasing pH, which is associated with the gradual destruction of the protective oxides, as shown by the XPS measurements.

Linear Polarization Resistance

Linear polarization resistance (LPR) monitoring is a non-destructive technique for measuring the corrosion rate of reinforcing steel and accurately evaluating its condition, and has been discussed in detail in many works [38]. From the LPR tests, the corrosion potential, Ecorr, which directly indicates the corrosion state, and the polarization resistance, Rp, which is related to the rate of a corrosion process, can be obtained directly with the built-in fitting software.
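The link between Rp and corrosion rate can be made quantitative through the Stern-Geary relation, i_corr = B/Rp; this relation and the B coefficient are standard assumptions from the LPR literature, not values taken from this study. A minimal sketch, assuming a typical B of 26 mV:

```python
# Convert polarization resistance to corrosion current density via the
# Stern-Geary relation i_corr = B / Rp. Both the relation and the B value
# are standard assumptions, not results reported in this study.
B_V = 0.026  # Stern-Geary coefficient, V (commonly assumed for active steel)

def corrosion_current_density(rp_ohm_cm2: float) -> float:
    """i_corr in A/cm^2 from Rp in ohm*cm^2."""
    return B_V / rp_ohm_cm2

# Rp ~ 10 kOhm*cm^2, the order reported for the steel in 1.0 M Cl- at low pH
print(f"i_corr = {corrosion_current_density(10e3):.1e} A/cm^2")
```

This shows why a low Rp (the 5~20 kΩ·cm² range discussed below) corresponds to a much higher corrosion rate than the hundreds of kΩ·cm² seen for well-passivated surfaces.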
The Ecorr and Rp values of the steel in all test solutions against time (6 h, 1 day, 3 days, 7 days and 10 days) are presented in Figures 9 and 10. In the low pH solutions with 1.0 M Cl⁻, the Ecorr values show a slight rise and fall over the immersion time but remain below −350 mV, suggesting difficult passivation of the steel. At early immersion, Rp is relatively small; it then increases gradually and tends to remain stable after 7 d of immersion. This behaviour is attributed to the fact that at the beginning of immersion the surface of the steel sample is active, so Rp is low; as the exposure time increases, the surface becomes covered by a corrosion-product film, so Rp rises and stabilizes once a basically mature passive film has formed (after about 7 days). This trend holds in all cases except at low pH (10.5 and 9.0) with 1.0 M Cl⁻. Generally, at the same pH, the Rp values obtained in the 0.2 M Cl⁻ condition are higher than those in solutions containing 1.0 M Cl⁻. The higher the Rp value, the stronger the corrosion-prevention capability of the surface film [40]. Chloride therefore reduces the passivity of the steel, and this effect is more significant at low pH. It is noteworthy that Rp evolves differently with pH depending on the chloride content. In solutions with 0.2 M Cl⁻, the Rp values rise markedly at lower pH, showing an increment of almost one quarter when the pH drops from 13.3 to 9.0. With exposure to high chloride (1.0 M), however, Rp decreases prominently with pH. When the pH is at or below 10.5, Rp fluctuates in the range of 5~20 kΩ·cm², very low values, with little enhancement and even some decline after 7 d of immersion, revealing that the steel hardly passivates because the surface films are highly electrically conductive, consistent with the Mott-Schottky analysis.
Electrochemical Impedance Spectroscopy

Figures 11 and 12 show the EIS spectra, in Nyquist and Bode forms, obtained for the corrosion-resistant steel after 7 d of immersion in solutions of pH from 13.3 to 9.0 with different Cl⁻ contents. It is evident that the EIS spectra of the steel as the pH varies are materially different for the two Cl⁻ contents.
In solutions with 0.2 M Cl⁻, the steel shows capacitive-like behaviour, with maximum phase angles close to −90° and |Z| values above 300 kΩ·cm² in the low-frequency region of the Bode plots, suggesting that the passive films formed on the steel at all pH offer high corrosion resistance. Notably, the impedance response increases as the pH drops, and the capacitive arc radius and overall impedance are even larger at pH 9.0, indicating that the film formed at lower pH is more protective. In contrast, when the chloride rises to 1.0 M, the capacitive arc radius decreases markedly as the pH does.
The overall impedance has very low values in the order of 10~20 kΩ·cm 2 when the pH falls below 10.5, signifying that the steel hardly passivates in low pH media. These results are in good agreement with the Mott-Schottky approach and LPR observation. Indeed, this change is ascribed to the chemical composition evolution of the surface film formed on the steel with pH and Cl − contents varying, as mentioned above. The Zsimpwin program was used to fit the EIS data. Based on some trials and references [18,23], the equivalent circuit depicted in Figure 11a was adopted to fit the experimental data, which provided a right fitting with errors within 10%. The constant phase element (CPE) is used to descript the frequency dispersion behaviour of non-ideal capacitors with its impedance (ZCPE) defined by Equation (9) Z = 1 Y jw (9) contents. It is evident that the EIS spectra profiles of the steel with the pH varying for the two Cl − contents are materially different. In solutions with 0.2 M Cl − , the steel shows capacitive-like behaviour with the maximum phase angles close to −90° and high values of |Z| above 300 kΩ·cm 2 in the region of low frequencies, with regard to the Bode plots, suggesting that the passive films formed on the steel at all pH offer high corrosion resistance. It is noted that the impedance response is increasing following the pH dropping, and the capacitive arc radius and overall impedance are even larger at pH 9.0, indicating the film formed at lower pH exhibits more protective behaviour. In contrast, when chloride rises to 1.0 M, the capacitive arc radius markedly decreases as the pH does. The overall impedance has very low values in the order of 10~20 kΩ·cm 2 when the pH falls below 10.5, signifying that the steel hardly passivates in low pH media. These results are in good agreement with the Mott-Schottky approach and LPR observation. 
Indeed, this change is ascribed to the chemical composition evolution of the surface film formed on the steel with pH and Cl − contents varying, as mentioned above. The Zsimpwin program was used to fit the EIS data. Based on some trials and references [18,23], the equivalent circuit depicted in Figure 11a was adopted to fit the experimental data, which provided a right fitting with errors within 10%. The constant phase element (CPE) is used to descript the frequency dispersion behaviour of non-ideal capacitors with its impedance (ZCPE) defined by Equation (9) Z = 1 Y jw (9) The Zsimpwin program was used to fit the EIS data. Based on some trials and references [18,23], the equivalent circuit depicted in Figure 11a was adopted to fit the experimental data, which provided a right fitting with errors within 10%. The constant phase element (CPE) is used to descript the frequency dispersion behaviour of non-ideal capacitors with its impedance (Z CPE ) defined by Equation (9) Z CPE = 1 Y 0 (jw) n (9) where Y 0 is the CPE electrical constant admittance, ω is the angular frequency (in rad/s), j is the imaginary number (j 2 = −1) and n is the CPE exponent, as an adjustable parameter affected by non-homogeneities and roughness of the surfaces, that always lies from 0 to 1. For the meaning of the circuit elements in this circuit model, the following physical interpretation is adopted [18,23]: the resistance connected in series with the two time constants corresponds to the ohmic resistance of the solution (R sol ), which changes with ion concentrations of the test solution. The high frequency time constant (R 1 , CPE 1 ) can be attributed to the charge transfer processes in the active surface areas (film defects/pores) and it is represented by the charge transfer resistance (R 1 ) coupled with the double layer capacitance (simulated by CPE 1 ). 
The low-frequency time constant (R2, CPE2) was assigned to the redox processes taking place in the areas covered by the passive film (protective oxides) and comprises the passive-layer resistance (R2) and the passive-film capacitance (CPE2). Table 2 presents the fitting parameters obtained from the experimental EIS spectra of the steel in all test solutions at 7 d of immersion. It can be observed that, in the presence of 0.2 M Cl−, both R1 and R2 have their highest values at pH 9.0. This is ascribed to the further growth and formation of protective Cr oxides on the metal as the pH decreases, which provides higher resistance to the corrosion processes, as proposed in the literature [18]. The admittance evolution of CPE2 agrees well with that of R2: it varies in the range 2.1 × 10⁻⁵ to 1.7 × 10⁻⁵ Ω⁻¹·cm⁻²·sⁿ as the pH drops from 13.3 to 9.0, reflecting that the capacitance of the passive film is somewhat lower at lower pH. The CPE1 values, however, show an upward tendency as the medium becomes less alkaline, suggesting that the dispersion effect of the double-layer capacitance becomes more marked. This reveals that the overall film surface becomes rougher and more heterogeneous, because more Fe hydroxides with a porous and loose structure form in the outer layer at lower pH. Even so, the film formed on the steel at lower pH offers greater protection, as evidenced by the higher R1 and R2 values, in good agreement with the LPR and Mott-Schottky analysis results. On the contrary, in the presence of 1.0 M Cl−, R1 and R2 drop sharply with pH and exhibit extremely low values, less than 10 kΩ·cm² at pH 9.0, indicating a very fast electrochemical corrosion process. Certainly, this is related to the formation of a surface film with more defects. CPE1 increases by about one order of magnitude and CPE2 exhibits a similar trend when the pH is below 10.5, indicating that depolarization of the steel occurs more readily [41].
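As a numerical illustration of Equation (9) and of a two-time-constant circuit of this kind, the sketch below computes the CPE impedance and the total impedance of an assumed series topology R_sol + (R1 ‖ CPE1) + (R2 ‖ CPE2). The topology and all parameter values are illustrative assumptions, not the fitted values of Table 2 or the exact circuit of Figure 11a:

```python
# Hedged sketch (assumed topology and parameters, not the paper's fit):
# CPE impedance per Equation (9) and a simple series two-time-constant circuit.

def z_cpe(y0, n, omega):
    """Z_CPE = 1 / (Y0 * (j*omega)^n), Equation (9)."""
    return 1.0 / (y0 * (1j * omega) ** n)

def z_parallel(z_a, z_b):
    """Impedance of two elements in parallel."""
    return z_a * z_b / (z_a + z_b)

def z_total(omega, rsol=50.0,
            r1=5e4, y1=2e-5, n1=0.9,     # charge transfer + double layer (assumed)
            r2=3e5, y2=1.7e-5, n2=0.85):  # passive-film resistance + capacitance (assumed)
    return (rsol
            + z_parallel(r1, z_cpe(y1, n1, omega))
            + z_parallel(r2, z_cpe(y2, n2, omega)))

# |Z| tends to Rsol at high frequency and toward Rsol + R1 + R2 at low
# frequency, reproducing the capacitive-like rise seen in the Bode plots.
for f in (1e4, 1e2, 1e-2):
    omega = 2 * 3.141592653589793 * f
    print(f"f = {f:8.2e} Hz  |Z| = {abs(z_total(omega)):.3e} Ohm*cm2")
```

With these assumed parameters, the low-frequency impedance is dominated by R1 and R2, which is why the fitted resistances track the overall corrosion protection discussed above.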
All these signify that the film becomes less and less protective owing to its deterioration.

Surface Morphology

Materials 2016, 9, 749

Figure 13 presents several representative images of the steel samples under different exposure conditions. It is observable that, in the presence of 0.2 M Cl−, the SEM images show a clean and bright steel surface at all pH values, with almost no spots. When exposed to 1.0 M Cl−, the steel behaves likewise at pH 13.3, but at low pH values (pH 9.0) some pits appearing as black dish-like holes were observed on the metal surface, and different attack morphologies could also be found. Typical black dish-like holes were selected as areas (e.g., area A) for EDS chemical analysis. The results reveal that the chemical species inside area A include iron (36.0%), calcium (19.4%), chromium (1.1%) and oxygen (18.6%), as well as small amounts of other inclusion elements from the metal (the considerable amount of Al (6.61%) may originate from alumina paste adhering to the surface of the steel during polishing). This indicates that corrosion products (Fe/Cr hydroxides) and calcium hydroxide crystals (from the solution) attached inside the pitting hole, suggesting the area has suffered some degree of corrosion.

Conclusions

The passive behaviour of the alloy corrosion-resistant steel Cr10Mo1 in simulated concrete pore solutions with different pH values (from 13.3 to 9.0) and chloride contents (0.2 M and 1.0 M) was investigated.
Analytical and electrochemical results proved that the exposure conditions modify the chemical composition and electrochemical responses of the surface film. Surface composition analysis performed by XPS revealed that the passive film formed on the corrosion-resistant steel consists of Fe and Cr oxides/hydroxides arranged in two layers, with the outer layer mainly composed of Fe oxides and hydroxides and the inner layer enriched with Cr species. In the presence of 0.2 M chloride, as the pH drops, the Fe oxides in the outer layer become more soluble but the Cr oxides in the inner layer maintain good stability and grow further, on account of the enhanced dissolution of metallic Fe and Cr from the substrate promoted by lower pH. However, in the presence of 1.0 M chloride, both the Fe-oxide outer layer and the Cr-oxide inner layer suffer a significant decrease in thickness, and substantial Fe and Cr hydroxides form in the surface film when the pH drops below 10.5, indicating that high chloride strongly hinders passive-film formation on the steel in low-pH (10.5 and 9.0) media. Mott-Schottky analysis suggests the passive film exhibits n-type and p-type semiconductor behaviour related to its bilayer structure composed of Fe and Cr species. In the presence of 0.2 M chloride, the surface film becomes less electrically conductive as the pH drops, mainly because highly protective Cr oxides become enriched in the film.
However, high chloride (1.0 M) facilitates the conversion of Cr oxides into their defective hydroxides, and the trend reverses: the lower the pH, the more electrically conductive the film. LPR and EIS tests evidence that, in solutions with relatively moderate chloride (0.2 M), Cr10Mo1 steel attains good and stable passivity after 7 d of immersion regardless of the pH drop; judging from the electrochemical responses, lower pH even provides better conditions for passivation of the steel, owing to the formation and precipitation of more protective Cr oxides in the inner layer. However, in the presence of high chloride (1.0 M), this trend is inverted. The passivity is dramatically weakened with increasing carbonation, and pits even occur when the pH falls below 10.5, as shown by SEM/EDS analysis. This can be related to the large-scale decomposition of the protective Fe and Cr oxides, which induces the formation of a defective and porous surface layer rich in hydroxides.
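As a quantitative aside to the LPR observations above: a measured polarization resistance Rp is conventionally converted into a corrosion current density via the Stern-Geary relation. This conversion is not performed in the text, and the Tafel slopes used below are illustrative assumptions, so the sketch is indicative only:

```python
# Hedged sketch (not from the paper): Stern-Geary conversion of Rp into a
# corrosion current density, i_corr = B / Rp, illustrating why a higher Rp
# implies a lower corrosion rate. Tafel slopes are assumed values.

def corrosion_current_density(rp_kohm_cm2, beta_a=0.12, beta_c=0.12):
    """Return i_corr in uA/cm^2 for Rp in kOhm*cm^2.

    B = beta_a * beta_c / (2.303 * (beta_a + beta_c)), with Tafel slopes
    beta_a, beta_c in V/decade (assumed, not measured in this study).
    """
    b = (beta_a * beta_c) / (2.303 * (beta_a + beta_c))  # Stern-Geary constant, V
    rp_ohm_cm2 = rp_kohm_cm2 * 1e3
    return b / rp_ohm_cm2 * 1e6  # A/cm^2 -> uA/cm^2

# Rp ~ 10 kOhm*cm2 (low pH, 1.0 M Cl-) vs. ~300 kOhm*cm2 (well-passivated film)
for rp in (10, 300):
    print(f"Rp = {rp:3d} kOhm*cm2 -> i_corr ~ {corrosion_current_density(rp):.3f} uA/cm2")
```

With these assumed slopes, the roughly 30-fold gap between a passive-film Rp (~300 kΩ·cm²) and the low-pH, high-chloride values (~10 kΩ·cm²) maps directly onto a ~30-fold higher corrosion current.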
The effect of immunosuppressive agents on the induction of nuclear factors that bind to sites on the interleukin 2 promoter.

Cyclosporin A (CSA), FK506, and glucocorticosteroids all inhibit the production of lymphokines by decreasing lymphokine gene expression. Previous experiments have defined six different sites that may contribute to the transcriptional control of the interleukin 2 (IL-2) promoter, and for each, active nuclear binding factors are induced upon mitogenic stimulation. While dexamethasone markedly blocks the increase in IL-2 mRNA in stimulated human blood T cells, we found that the drug does not block the appearance of factors that bind to the transcriptional control sites termed AP-1, AP-3, NF-kB, OCT-1, the B site, and NF-AT. In contrast, both CSA and FK506 have similar effects: the drugs cause modest decreases in AP-3 and NF-kB, and marked decreases in the activity of AP-1 and NF-AT. Therefore, CSA and FK506, while chemically different, seem to act upon a similar pathway that leads to IL-2 gene expression, whereas glucocorticoids do not affect this pathway.

Immunosuppressive drugs inhibit T cell activation and lymphokine production. A major site of action of these drugs is at the level of lymphokine gene expression. This was noted first for cyclosporin A (CSA) (1-4), which acts primarily at the level of lymphokines rather than other components of T cell activation, such as the p55 IL-2R and c-fos (2). Two other drugs, dexamethasone and FK506, also act primarily at the level of IL-2 gene expression (5-7). It is likely that glucocorticoids have a different mechanism of action than CSA and FK506, since steroids affect many cell types, whereas CSA and FK506 are more T cell restricted. To gain more insight into the mechanism of action of immunosuppressive drugs, we have taken advantage of recent progress in defining nuclear factors that bind to the IL-2 promoter.
An activation-dependent enhancer within sequences -326 to -52 of the 5' flanking region of the IL-2 gene has been identified (for review see reference 8). In this enhancer reside several elements common to other genes, such as the NF-kB, AP-1, AP-3, and OCT-1 sites, as well as a site that seems restricted to activated lymphoid cells and is called the nuclear factor of activated T cells (NF-AT) (9). Here, we report the induction of these nuclear factors in primary populations of human blood T cells. We show that FK506 and CSA markedly inhibit the activation of factors that bind to the AP-1 and NF-AT sites, whereas dexamethasone has no effect on all six nuclear binding factors tested.

Materials and Methods

Cell Cultures. Briefly, human mononuclear cells were isolated from buffy coats on Ficoll-Hypaque density gradients, washed in PBS, and rosetted with neuraminidase-treated sheep erythrocytes. The rosette-positive fraction was further purified by passage over a nylon wool column and used as a source of T cells. T cells were cultured at 5 × 10⁶/ml in RPMI 1640 supplemented with 10% heat-inactivated FCS, 20 µg/ml gentamicin sulfate, and 5 × 10⁻⁵ M 2-ME. Cells were stimulated with PHA (Gibco Laboratories, Grand Island, NY) at 1 µg/ml and PMA (Sigma Chemical Co., St. Louis, MO) at 5 ng/ml in the presence or absence of CSA or CSH (Sandoz, Basel, Switzerland) at 1 µg/ml; FK506 (Fujisawa Pharmaceutical Co. Ltd., Osaka, Japan) at 100 ng/ml; or dexamethasone (Sigma Chemical Co.) at 10⁻⁷ M.

Nuclear Extracts. These were prepared from 2-4 × 10⁸ T cells by homogenization in two cell-pellet volumes of 10 mM Hepes, pH 7.9, 10 mM KCl, 1.5 mM MgCl₂, 1 mM EDTA, 0.5 mM DTT, 0.5 mM PMSF, and 10% glycerol (10). Nuclei were centrifuged at 1,000 g for 5 min, washed, and resuspended in two volumes of the above solution. 3 M KCl was added drop by drop to reach 0.39 M KCl. Nuclei were extracted at 4°C for 1 h and centrifuged at 100,000 g for 30 min.
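The dropwise KCl addition above is a standard dilution calculation. The helper below is not from the paper; it simply solves the mass-balance equation for the volume of concentrated stock needed to reach a target final concentration, with the 3 M KCl step as the example:

```python
# Hedged helper (not from the paper): volume of concentrated stock to add so
# a suspension reaches a target concentration, as when 3 M KCl is added
# dropwise to nuclei in ~10 mM KCl buffer to reach 0.39 M KCl for extraction.

def stock_volume_to_add(v0_ml, c0, c_target, c_stock):
    """Solve c_target = (v0*c0 + v*c_stock) / (v0 + v) for v.

    All concentrations in the same units (here mM); volumes in ml.
    """
    if not c0 < c_target < c_stock:
        raise ValueError("need c0 < c_target < c_stock")
    return v0_ml * (c_target - c0) / (c_stock - c_target)

# e.g. 1.0 ml of nuclei in 10 mM KCl, brought to 390 mM with a 3000 mM stock
v = stock_volume_to_add(1.0, 10, 390, 3000)
print(f"add {v:.3f} ml of 3 M KCl per ml of suspension")
```

The mass balance checks out: mixing 1.0 ml at 10 mM with this volume of 3000 mM stock yields exactly 390 mM.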
The supernatants were dialyzed against 20 mM Hepes, pH 7.9, 50 mM KCl, 20% glycerol, 0.5 mM PMSF, and 1 mM EDTA, and then clarified by centrifugation and stored at -80°C. Protein concentration was determined using the Bradford method.

DNA-Protein Binding Assay. 0.2 ng (~10⁴ cpm) of end-labeled DNA fragments were incubated at room temperature for 20 min with 5-10 µg of nuclear protein in the presence of 2 µg poly(dI-dC) in 20 µl of 10 mM Tris-HCl, pH 7.5, 50 mM NaCl, 1 mM EDTA, 1 mM DTT, and 5% glycerol (this buffer was used for the NF-kB, NF-AT, and B sites). For the AP-1, AP-3, and OCT-1 sites, the buffer was 20 mM Hepes, pH 7.9, 4% Ficoll, 2.5 mM MgCl₂, 1 mM DTT, and 40 mM KCl. Protein-DNA complexes were separated from free probe on a 4% polyacrylamide gel in 0.25× TBE at 150 V for 1.5 h at room temperature. The gels were dried and exposed to X-ray film. For each site, we verified that a 20-fold molar excess of the specific cold oligonucleotide competed the binding of proteins to a radiolabeled probe, whereas a similar excess from another site did not.

Figure 2 legend: as in Fig. 1, except that FK506 (100 ng/ml) was used as the immunosuppressant, and nuclear extracts from HeLa cells were also tested; arrows indicate the specific DNA-protein complexes. Figure 3 legend: as in Fig. 1, except that an anti-CD28 mAb, 9.3 (ascites, kindly provided by Dr. P. Martin, Seattle, WA, and used at 1:1,000), was used instead of PHA as the mitogen.

Results and Discussion

DNA-nuclear protein interactions were monitored with standard electrophoretic mobility shift assays (EMSA). Extracts of nuclei from mitogen-stimulated T cells were prepared 5 h after application of the mitogen in the presence or absence of an immunosuppressive drug. Resting T cells did not contain active factors that bind to the NF-AT, NF-kB, AP-1, AP-3, and B sites, but these activities were induced by stimulation with PHA and PMA (Fig. 1).
The immunosuppressive CSA, but not the nonimmunosuppressive analogue CSH, markedly inhibited the induction of NF-AT and AP-1, but only partially blocked the induction of NF-kB and AP-3 (Fig. 1). Inhibition of NF-AT by CSA was also observed in stimulated Jurkat cells (14). The fact that CSA primarily acts on NF-AT and AP-1 is of interest. Both are distinct from other transcription factors in requiring new protein synthesis as well as two signals, in this case lectin plus PMA (Granelli-Piperno, A., manuscript submitted for publication).

Figure 4 legend: as in Fig. 1, except that dexamethasone (10⁻⁷ M) was used as the immunosuppressant; arrows indicate the specific DNA-protein complexes.

We next examined the effect of another immunosuppressive drug, FK506. FK506, like CSA, inhibits T cell proliferation and IL-2 gene expression (5), which we confirmed. FK506 proved to be similar to CSA at the level of nuclear transcription factors. The inductions of NF-AT and AP-1 were markedly reduced, whereas AP-3, NF-kB, OCT-1, and the factor that binds to the B site were not (Fig. 2). We simultaneously evaluated extracts of HeLa cell nuclei. These contained all the nuclear factors that were inducible in T cells, except for NF-AT, thus confirming that NF-AT is a T cell-restricted activity. In additional experiments, CSA and FK506 blocked the induction of nuclear factors in response to the other mitogens, anti-CD3 (not shown) or anti-CD28 mAb (Fig. 3). Some authors find that stimulation with anti-CD28 is CSA resistant (15), but we noted that CSA reduced the induction of IL-2 mRNA as well as of nuclear factors that bind the IL-2 promoter (16) (Fig. 3). We last tested dexamethasone, a glucocorticosteroid that also inhibits the increase in IL-2 mRNA that occurs during mitogenesis (6,7). While the drug clearly blocks the induction of IL-2 mRNA (not shown), dexamethasone did not alter the induction of any of the factors that bind to the elements we have examined (Fig. 4).
These data indicate that the IL-2 transcriptional control pathway that is suppressed by glucocorticoids is different from that of CSA and FK506.
The Efficacy of Immunotherapy in Long-Term Survival in Non-Small Cell Lung Cancer (NSCLC) Associated with the Syndrome of Inappropriate Antidiuretic Hormone Secretion (SIADH)

Introduction: The syndrome of inappropriate antidiuretic hormone secretion (SIADH) is the most common cause of hyponatremia in cancer patients, occurring most frequently in patients with small cell lung cancer. However, this syndrome occurs extremely rarely in patients with non-small cell lung cancer. The results of clinical trials have revealed that immuno-oncological therapies are effective for long periods of time, providing hope for long survival with a good quality of life. Case Presentation: We present the case of a female patient who was 62 years old at the time of diagnosis in 2016, who underwent surgery for a right pulmonary tumor (pulmonary adenocarcinoma) and subsequently underwent adjuvant chemotherapy. The patient had a left inoperable mediastinohilar relapse in 2018, which was treated using polychemotherapy. The patient also had an occurrence of progressive metastasis and a syndrome of inappropriate antidiuretic hormone secretion (SIADH) in 2019, for which immunotherapy was initiated. The patient has continued with immunotherapy until the time this study began to be written (April 2023), the results being the remission of hyponatremia, clinical benefits and long-term survival. Discussion: The main therapeutic option for SIADH in cancer patients is the treatment of the underlying disease, and its correction depends almost exclusively on a good response to oncological therapy. The initiation of immunotherapy at the time of severe hyponatremia occurrence led to its remission, as well as the remission of the other two episodes of hyponatremia that the patient presented throughout the evolution of the disease, demonstrating an obvious causal relationship between SIADH and the favorable response to immunotherapy.
Conclusions: Each patient must be approached individually, taking into account the various particular aspects. Immunotherapy proves to be the innovative treatment that contributes to increasing the survival of patients with metastatic non-small cell lung cancer and to increasing their quality of life.

Introduction

Lung cancer is the leader in terms of incidence and mortality worldwide, with an estimated two million newly diagnosed cases and 1.8 million deaths. It is the second most frequent form of cancer, after prostate cancer in men and after breast cancer in women [1]. According to the WHO 2015 classification, lung cancer is divided into two large categories: non-small cell lung cancer, subdivided into adenocarcinoma and squamous carcinoma; and neuroendocrine lung cancer, subdivided into small cell carcinoma, large cell neuroendocrine carcinoma and carcinoid tumors [2]. Smoking, both active and passive, is considered the main risk factor, being responsible for more than 80% of all cases [3]. It is followed by exposure to radon (approximately 10% of all lung cancers, and over 30% of lung cancers in non-smokers), asbestos, air pollution, arsenic and other lung diseases [4]. Among the non-modifiable risk factors, we mention age, male gender (at least twice the risk), African-American ethnicity among men in the USA, and genetic factors [5].
The five-year survival rate is low: 64% for localized and 8% for metastatic non-small cell lung cancer, and 29% for localized and 3% for metastatic small cell lung cancer [6]. This makes lung cancer a pathology with a major impact on health worldwide, requiring that each aspect, both general and particular, be given special attention; it represents an extremely important chapter in oncological pathology. In the field of oncology, new treatments and evidence-based medicine have brought major progress both in terms of survival and of quality of care, with patients' quality of life a major objective in patient care [7,8]. The current paper aims at highlighting the effectiveness of immunotherapy, combining theoretical and practical notions by presenting a clinical case in an extremely particular situation, namely the occurrence of paraneoplastic SIADH in a female patient with non-small cell lung cancer. The syndrome of inappropriate antidiuretic hormone secretion (SIADH) is characterized by euvolemic hypotonic hyponatremia and may be a consequence of the following factors: hyper-production of arginine vasopressin (the antidiuretic hormone, ADH) at the level of the pituitary gland in response to a stimulus from tumor cells, ectopic production of ADH-like peptides by the tumor cells, or an adverse reaction to drugs that stimulate the production of the antidiuretic hormone [9]. The diagnosis is made after ruling out other causes of hyponatremia, such as hypovolemia, hypervolemia (organ failure), hypothyroidism and adrenal cortex failure [10]. SIADH is the most common cause of hyponatremia in cancer patients. More than 70% of the cases are attributed to small cell lung cancer (with a frequency of 7-16% among this group of patients), while non-small cell lung cancer accounts for a very low percentage, approximately 0.4-2% [11].
Other cancers in which it is found are head and neck cancers (approximately 3%), breast cancer, genitourinary cancers and sarcomas. In oncological pathology, SIADH may be a paraneoplastic syndrome, in which case there is no direct relation to the location of the primary tumor or metastases but to the ectopic activity of the cancer cells. An optimal response to the oncological treatment can normalize the level of arginine vasopressin [12]. Moreover, certain cytostatic drugs, and some other drugs used for palliative purposes, can be responsible for the occurrence of the syndrome; we mention cyclophosphamide, vinorelbine, vincristine, cisplatin, ifosfamide, methotrexate, imatinib, non-steroidal anti-inflammatory drugs, opioids, some antidepressants and haloperidol. Even immunotherapy, through the immune-mediated disorders that occur as adverse reactions, can cause inflammation of the pituitary gland, increasing the level of the antidiuretic hormone [13]. Clinically, hyponatremia becomes manifest at serum sodium levels below 120 mEq/L. The symptoms are signs of water intoxication (hypoosmotic hyponatremia): apathy, confusion, convulsions and even coma. Focal neurological deficits may also be present [10]. The main therapeutic option for cases of SIADH is the treatment of the underlying disease, and its correction depends almost exclusively on a good response to oncological therapy. In 2018, the Nobel Prize in medicine was awarded to James Allison and Tasuku Honjo for the description of the PD-1 and cytotoxic T-lymphocyte antigen 4 (CTLA-4) pathways, showing their role as inhibitory immune checkpoints whose blockade improves the antitumor response of T-lymphocytes [15]. Ipilimumab, an anti-CTLA-4 antibody, has proven effective in the treatment of malignant melanoma [16].
Nivolumab, the first monoclonal antibody antagonist of PD-1, initially showed its effectiveness in non-small cell bronchopulmonary cancers, melanoma and renal cancers, subsequently proving its value in other cancers, such as ENT cancers [17]. Immunotherapy has revolutionized oncology and continues to do so through the emergence of ever newer therapeutic agents.

Case Presentation

The second part of the paper, actually its main topic, presents a clinical case closely related to the theoretical notions above. It concerns a female patient, 62 years old at the time of diagnosis, in the records of the Oncology Department of the "Sf. Luca" Chronic Disease Hospital in Bucharest. This patient was diagnosed in February 2016, after computed tomography of the head, thorax and abdomen, with a right lung tumor, on which surgery was performed: a right upper lobectomy with mediastinal lymphadenectomy. The histopathological diagnosis indicated papillary lung adenocarcinoma, stage IIA (T2b N0 M0), with negative molecular markers (EGFR, ALK, PD-L1). Postoperatively, adjuvant chemotherapy was administered in four series of carboplatin and paclitaxel. The follow-up tomography did not reveal any sign of local tumor recurrence or other oncological suspicion, and the patient remained under observation, undergoing regular check-ups every six months. Two years after the completion of the adjuvant chemotherapy, a follow-up tomography (July 2018) revealed confluent adenopathies in the aorto-pulmonary window and in the left hilar and left bronchial regions, extending precarinally, indistinguishable from each other and measuring up to 20/27 mm and 32/22 mm. A PET-CT was performed, revealing metabolically active left mediastinal and hilar adenopathy. The thoracic surgical examination concluded an inoperable left mediastinal-pulmonary relapse.
The case was discussed by a multidisciplinary board and conformal or stereotactic radiotherapy was proposed, but the patient declined this option. Chemotherapy was resumed with four series of gemcitabine and platinum salts. The follow-up tomography in January 2019 revealed slight disease progression. Some of the previous adenopathies had decreased in size (in the aorto-pulmonary window, from an 18 mm to a 14 mm short axis) but others had increased (left inferior bronchus, from 12/16 mm to 22/27 mm), with a newly appeared adenopathy of 14/17 mm in the prevascular space (Figure 1). Chemotherapy continued with vinorelbine in monotherapy, administered weekly. In June 2019, the CT scan revealed unfavorable findings compared with the previous examinations: an increase in the mediastinal mass to 50/42 mm, enlargement of the mediastinal adenopathies, the occurrence of peribronchovascular iodophilic tissue densifications adjacent to the left inferior lobe, the appearance of numerous nodules and micronodules at the level of the left inferior lobe (lung metastases), hepatic cysts, a right renal cortical cyst, splenic hemangioma, and osteocondensation at the level of the left iliac wing, to be correlated with a bone scan, which subsequently ruled out secondary bone determinations. Immediately after these investigations and examinations, the patient presented to the hospital with an altered general condition: confusional syndrome, nausea, vomiting, an ECOG 4 performance status and, from a biohumoral point of view, severe hyponatremia (sodium 117 mmol/L, with a lower normal limit of 135 mmol/L), mild hypokalemia (K 3.3 mmol/L, with a lower normal limit of 3.5 mmol/L) and hyperthyroidism (low TSH with increased T3 and T4). After the administration of sodium and potassium chloride, the hypokalemia remitted but the hyponatremia worsened, reaching 113 mmol/L.
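The hypotonic (hypoosmotic) nature of hyponatremia of this kind is usually confirmed with the standard calculated serum osmolality formula. The formula is not stated in the paper, and the glucose and urea values below are illustrative defaults, not values from this patient's chart:

```python
# Hedged sketch (standard clinical formula, not from the paper): calculated
# serum osmolality, used to confirm hypotonic hyponatremia as seen in SIADH.
# Conventional US units: Na in mEq/L, glucose and BUN in mg/dL (assumed values).

def serum_osmolality(na_meq_l, glucose_mg_dl=90.0, bun_mg_dl=14.0):
    """Osmolality (mOsm/kg) ~ 2*Na + glucose/18 + BUN/2.8."""
    return 2 * na_meq_l + glucose_mg_dl / 18.0 + bun_mg_dl / 2.8

# the patient's nadir of 113 mmol/L implies a clearly hypotonic serum
osm = serum_osmolality(113)
print(f"calculated osmolality ~ {osm:.0f} mOsm/kg (normal ~ 275-295)")
```

With these assumed glucose and urea values, a sodium of 113 mmol/L yields an osmolality far below the normal range, consistent with the euvolemic hypotonic hyponatremia that defines SIADH.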
An endocrinology consultation was requested for the severe hyponatremia and for the initiation of immunotherapeutic treatment for the metastatic disease. The patient was transferred to the "C.I. Parhon" National Institute of Endocrinology, where the diagnosis of hyperthyroidism in the context of polynodular goiter and of paraneoplastic SIADH was made. Adrenal cortex failure was ruled out, given the increased basal cortisol level in the context of acute stress. She received recommendations for the treatment of hyperthyroidism; fluid restriction was recommended for the hyponatremia. Approval was given to initiate oncological treatment, considering that the first therapeutic intention in the occurrence of a paraneoplastic syndrome is the treatment of the underlying disease. Nivolumab was initiated for the progressive metastatic disease associated with SIADH, with 240 mg administered once every two weeks. The patient remained under observation, with monitoring of the ionogram, so that at the next administration of immunotherapy the sodium and potassium levels were within normal limits. The patient was in a good general condition, without any complaints, with an ECOG 1 performance status. Practically, we found ourselves in front of a dramatic, extremely favorable and rapid response, monitoring the daily evolution of a patient who started at ECOG 4, a state caused by the severe hyponatremia, and who reached ECOG 1 without obvious symptoms in just a few days, following a single administration of the immunotherapy treatment. The subsequent evolution of the patient was clinically favorable, with a constant ECOG 1 and a general condition without any significant complaints. The imaging examinations were repeated approximately once every six months.
A CT scan performed in 2020 revealed a right adrenal nodule, requiring a differential diagnosis between an adenoma and a secondary determination and, consequently, monitoring. Additionally, the CT scan revealed the regression of the previously described micronodules and lung nodules and the regression of the mediastinal adenopathy. Considering the good general condition, the following imaging investigation was awaited in order to define the prognosis according to the RECIST criteria. It was performed in August 2021 and revealed bilateral pulmonary micronodules to be monitored, mediastinal adenopathy (Figure 2), a hepatic lesion of segment V (Figure 3) and bilateral adrenal nodules suspicious for secondary determinations (Figure 4). The patient continued the treatment with Nivolumab considering the clinical benefit criterion, despite the suspicion of disease progression. The next CT scan, performed in March 2022, revealed the dimensional progression of the pulmonary secondary determinations (Figure 5), liver metastases (Figure 6) and adrenal secondary determinations, the occurrence of abdominal adenopathy (retroperitoneal and mesenteric) and newly occurring peritoneal carcinomatosis (Figure 7). At that point, the question arose whether to continue with immunotherapy or to switch to palliative treatment. However, the Oncology Therapeutic Indication Board favored the clinical benefit over the imaging progression. The patient could move by herself and continued with various light activities; she remained conscious and cooperative and was still coming for treatment almost a year after the previously mentioned decision.
It is worth mentioning that along the way, including in the last six months before the date of writing this paper (April 2023), there were two episodes of hyponatremia: the first, in October 2022, was moderate hyponatremia with a sodium level of 124 mmol/L, and the second, in January 2023, had a sodium level of 122 mmol/L; both were corrected after the administration of immunotherapy. The next CT scan, performed in February 2023, revealed an aspect suggestive of oncological disease progression, with the appearance of a nodule in the right cerebellar parenchyma suspicious of a secondary determination, to be further evaluated with brain MRI (in the absence of contraindications) (Figure 8). Additionally, the CT scan revealed the numerical and dimensional progression of the secondary lung lesions (Figure 9), the marked dimensional progression of the secondary adrenal tumors, the right one invading the liver parenchyma, the progression of the secondary liver lesions (Figure 10), the progression of the mediastinal and abdominal adenopathies, the disappearance of the secondary splenic lesions and peritoneal carcinomatosis lesions and the dimensional regression of the secondary peritoneal lesion in the left hypochondrium (Figure 11). The brain MRI performed in March 2023 did not reveal brain metastases. Discussion SIADH is a syndrome characterized by water retention associated with inappropriate antidiuretic hormone (ADH) secretion [21]. It can appear as a paraneoplastic syndrome through the ectopic production of an arginine vasopressin-like peptide by cancer cells, through an endogenous hyperproduction of ADH from the pituitary gland, or it can be iatrogenic, as a reaction to the administration of certain drugs, including cytostatic drugs such as cyclophosphamide, cisplatin, methotrexate and vinorelbine. Generally, the time of onset of this syndrome, or its remission time, guides us in clarifying its etiology.
Hyponatremia that develops gradually over a period of time is most likely the result of an endogenous production. Hyponatremia is the most common complication of solid tumors, having a negative impact on patients' quality of life [21]. The discontinuation of a drug known to cause this reaction, followed by the remission of the hyponatremia, indicates an iatrogenic cause [21]. Moreover, the remission of the syndrome after specific interventions, such as surgery, radiotherapy or chemotherapy, suggests an ectopic production by cancer cells, with a favorable response to the oncological treatment. Obvious cases of hyponatremia remission after surgical interventions in ENT cancers have been reported [22]. A case of NSCLC with a good response to chemotherapy, with sodium levels normalizing immediately after administration, was also reported for the first time [23]. Severe SIADH is associated with an increased mortality rate among hospitalized patients. Patients with obvious neurologic symptoms, such as cerebral edema, require rapid intervention in the first 48 hours. The objective is to increase the sodium level by 1-2 mmol/L through the administration of a 3% hypertonic saline solution. Special attention should be paid not to correct by more than 8-10 mmol/L in the first 24 hours and 18-25 mmol/L in the first 48 hours [24]. Non-small cell lung cancer is a very rare cause of SIADH; only a few cases have been highlighted in the specialty literature in the last decades. In a study conducted on 427 patients with NSCLC, only 0.7% presented this syndrome [26]. The case presented in the current paper is a first in the specialty literature, given the following factors: 1. Initiation of immunotherapy at the occurrence of severe hyponatremia, with its remission, and also the remission of two other episodes of hyponatremia (no other cases with an obvious causal relationship between SIADH and a favorable response to immunotherapy have been reported); 3.
Long-term survival in the context of lung cancer diagnosed in 2016 and metastasized in June 2019, with the patient undergoing immunotherapy with Nivolumab for three and a half years. Conclusions Lung cancer is the cancer with the highest incidence and the highest associated mortality rate in oncological pathology, a statement that triggers particular interest in the research of this disease. Although it cannot be considered statistically significant, it is important to also know rare situations, including their approach and evolution, such as the case presented in this paper: non-small cell lung cancer and SIADH, with a favorable response to specific immunotherapy treatment. The already proven success of immunotherapy is reinforced by the clinical evolution of this case, with a much longer survival when compared to the norm of previous decades: lung cancer diagnosed seven years ago, which metastasized three and a half years ago, and a patient alive at the time of writing this paper (April 2023). Another important aspect is the decision to continue with the oncological treatment, given the progression of the disease from an imaging point of view but with obvious control of the hyponatremia, which is associated with a clinical benefit, a decision that most probably led to the prolongation of survival, versus switching to a palliative treatment. The conclusion drawn is that each case must be approached individually, taking into account the particular aspects of each patient. Each newly emerged situation that is studied will definitely lead to an improvement in knowledge and to an increase in safety and progress, which will contribute to increasing overall survival and improving quality of life. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient to publish this paper.
Social Capital Sports Oriented and Workers' Participation in Tehran Newspaper. The present study examined the role of sport-oriented social capital in workers' participation in Tehran newspapers. Sport is assumed to be one of the principal factors in physical health, and it can be promoted through different factors; the social-cultural context is one of them. Recognizing the role of social-cultural factors in participation, and also in the motivation of workers, is therefore necessary. On the other hand, the lack of consideration of sport's supportive factors raises many concerns. In the current study, the sample comprised 400 workers at Tehran newspapers. A questionnaire comprising two main parts, assessing demographic factors and sport-oriented social capital, was applied. The results revealed that sport-oriented social capital has a meaningful relationship with workers' participation in physical exercise. As well, champion patterns were identified as a key motivator for sport amongst workers. Regarding the main role of sport in workers' lives, attention to enhancing factors such as social capital is valuable. Accordingly, workers' supervisors should establish appropriate methods for workers' participation in sport and stimulate them in sports accomplishments. Keywords: physical activity, social capital, sport, workers' participation. Received 22 June 2017/Accepted 25 March 2018 © JEHCP All rights reserved. Introduction Sport is assumed to be a key factor in advancing human performance and also in developing mental activities. In reality, sport has a direct relationship with health and exhilaration, which has been considered by different health institutions, and several studies based on this productive activity have been started.
Journal of Educational, Health and Community Psychology, Vol 7, No 1, 2018, E-ISSN 2460-8467. Shahram Alam, Abbas Gholizadeh. Sport as a basic activity appears in diverse forms amongst individuals in all developed and developing countries (Squire, 2000). In fact, it spans various cultures and social classes. Physical activity reflects realistic rather than mentalistic thinking; in this regard, developing sports activity in daily life supports social, emotional, and cognitive development (Fathi, 2004). Workers are an important segment of society, and their mental and physical health has a significant effect on work outcomes. Therefore, considering workers' presence in sport and physical activities is very important. Nowadays, workers' presence is visible in various sectors in Iran, but compared to other countries it is not widespread; accurate ways are therefore needed to increase their participation in exercise and physical activity (Thompson, Allen, Cunningham-Sabo, Yazzie, Curtis, & Davis, 2002). In this regard, Sanderson, Littleton, and Vonne Pulley (2002) argue that cultural and social factors have a role in physical activity. Similarly, Vosoughi and Khosravinezhad (2009) found a role of cultural and social factors in football fans' reactions in Iran, describing that these factors make a considerable contribution to sports performance. In the same vein, Medina and Messias (2011) focused on cultural, social, and economic factors in physical activity among adolescents, explaining that all of these factors play a critical role in physical movement. Still, the literature on workers' participation in sport remains remarkably insufficient. Sport-oriented social capital is one of the vital factors that can shape workers' attitudes and presence.
This factor refers to the person (age, gender, marital status, etc.), family, friends, and also colleagues who motivate the individual's tendency to properly manage his or her body and physical activity. In truth, this culture can create motivation among individuals to choose a sport-oriented lifestyle, in which sports activity is a primary requirement that persuades the individual to exercise during the day (Coakley & White, 1992). Family, friends, colleagues, and also workplaces have a noticeable role in sports activity; likewise, they can provide an appropriate basis for sport-oriented social capital. In effect, this social-cultural capital includes social support, social class, and champion patterns (Moienoldini & Sanatkhah, 2013). Social-Cultural Sport Orientation Sport-oriented social capital is one of the leading motivators of a sport-oriented lifestyle. In this regard, the importance of social factors in sport is substantial and cannot be ignored. Furthermore, sport cannot be separated from social attitudes in society, and these attitudes vary across societies. Facilities and social and economic conditions can determine the levels of sports activity in a society; the presence of facilities emphasizes the role of sport and physical activities and places them at a high level (Coakley & White, 1992). Social and family support have a significant impact on an individual's tendency to exercise and on a better understanding of his or her identity. Social attitudes and support derived from different parts of society can be effective in directing the individual toward sports activities and create a motivational state (Moflehi & Ghahreman Tabrizi, 2008).
Therefore, the present study focused on workers' participation in physical exercise based on sport-oriented social-cultural capital, since no scientific research on this topic has been done at Tehran newspapers in Iran. Consequently, the current study was conducted to fill this gap in the literature. Method Overview The current study applied a quantitative approach to the research objective and determined the association between the investigated variables (sport-oriented social capital and workers' participation). As well, the study used a cross-sectional design and focused on a sample at one point in time. Participants The sample size of the present investigation was determined based on Cochran's sample size formula. Based on this formula, 400 workers who worked in newspaper offices in Tehran, Iran, were studied. Measurement The questionnaire of this study comprised two parts: 1) demographic factors (age, economic status, work experience, etc.), and 2) sport-oriented social capital (social support, social class, and champion patterns), developed by the researcher. The questionnaire involved 29 items measured on a 5-point Likert scale (5 = completely agree to 1 = completely disagree). Cronbach's alpha for these items was .76, revealing sufficient reliability (alpha > .70). Also, the data met the normality criteria of skewness (-3 < Sk < +3) and kurtosis (-10 < Kr < +10). For measuring worker participation in physical exercise, the researchers selected those who exercise regularly in a specific gym club where their names are registered. Data Analysis This study applied descriptive statistics and a t-test to analyze the data via SPSS version 20 statistical software. Result Table 1 displays the descriptive statistics of sport-oriented social capital and its dimensions.
The table shows overall sport-oriented social capital with a mean value of 3.79 (SD = 0.460). Champion patterns, with a mean value of 4.38 (SD = 0.751), has the highest mean among the dimensions of sport-oriented social capital, while social support (M = 3.89, SD = 0.841) ranks second and social class (M = 3.11, SD = 0.584) has the lowest mean. Table 1: Descriptive Statistics of Sport-Oriented Social Capital and Its Dimensions (N = 400). Discussion Based on the results of the study and the role of sport-oriented social capital, champion patterns, as one of the main dimensions, play an imperative role in workers' perception and performance; in fact, this factor motivates individuals to attend sports activities. In addition, champion patterns, as an effective factor in the sports area, have the highest mean value among the dimensions of sport-oriented social capital. In reality, this result indicates a specific goal for those who want to exercise and likewise increases their interest in sport, so it can be an important element in the orientation of athletes' desires. In other words, the champion pattern was valuable in the population of workers at Tehran newspapers; it also illustrates the high level of their tendency toward, and specific goals in, sports activity. Table 2 shows the descriptive statistics of the sub-dimensions of social support, social class, and champion patterns. The table reveals that the champion-pattern sub-dimensions of tendency, with a mean value of 4.38 (SD = 0.760), and goal, with a mean value of 4.38 (SD = 0.832), have the highest means among the sub-dimensions. In addition, for the social support dimension, family support had a mean value of … Table 3: The Results of Sport-Oriented Social Capital and Its Dimensions on Workers' Participation Based on a t-Test (N = 400).
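The Cochran sample-size calculation mentioned in the Participants section can be sketched as follows. The 95% confidence level (z = 1.96), maximum variability (p = 0.5) and 5% margin of error are assumed values, since the paper does not report the parameters actually used.

```python
import math

def cochran_n0(z: float, p: float, e: float) -> float:
    """Cochran's sample-size formula for large populations: n0 = z^2 * p * (1 - p) / e^2."""
    return z ** 2 * p * (1 - p) / e ** 2

n0 = cochran_n0(z=1.96, p=0.5, e=0.05)
print(round(n0, 2))   # 384.16
print(math.ceil(n0))  # 385, consistent with the study's rounded-up sample of 400
```

Under these common defaults the formula yields a minimum of 385 respondents, which the study appears to have rounded up to 400.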
Hand-hygiene mitigation strategies against global disease spreading through the air transportation network Hand hygiene is considered an efficient and cost-effective way to limit the spread of diseases and, as such, it is recommended by both the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC). While the effect of hand washing on individual transmissibility of a disease has been studied through medical and public-health research, its potential as a mitigation strategy against a global pandemic has not been fully explored yet. In this study, we investigate contagion dynamics through the world air transportation network and analyze the impact of hand-hygiene behavioural changes of the airport population on the spread of infectious diseases worldwide. Using a granular dataset of the world air transportation traffic, we build a detailed individual mobility model that controls for the correlated and recurrent nature of human travel and the waiting-time distributions of individuals at different locations. We perform a Monte Carlo simulation study to assess the impact of different hand-washing mitigation strategies at the early stages of a global epidemic. From the simulation results we find that increasing hand cleanliness homogeneously at all airports in the world can inhibit the impact of a potential pandemic by 24 to 69%. By quantifying and ranking the contribution of the different airports to the mitigation of an epidemic outbreak, we identify ten key airports at the core of a cost-optimal deployment of the hand-washing strategy: increasing the engagement rate at those locations alone could potentially reduce a world pandemic by 8 to 37%.
This research provides evidence of the effectiveness of hand hygiene in airports against the global spread of infectious diseases, and has important implications for the way public-health policymakers may design new effective strategies to enhance hand hygiene in airports through behavioral changes. Introduction In past centuries, contagious diseases would migrate slowly and rarely across continents. The Black Death, for example, the second recorded pandemic in history after the Justinian Plague, originated in China in 1334 [1], and it took almost 15 years to propagate from East Asia to Western Europe. While contagious diseases then affected more individuals within countries due to poor hygiene and underdeveloped medicine, the means of transportation of that era, sea and land, hindered the range and celerity of disease spreading. Nowadays, in contrast, transportation means allow people to travel more often (either for business or for leisure) and over longer distances. In particular, the aviation industry has experienced fast and continuing growth, permitting an expanding flow of air travelers. In 2017 alone, around 4.1 billion people traveled through airports worldwide [2], while the International Air Transport Association (IATA) expects that the number of passengers will roughly double to 7.8 billion by 2036 [3]. Transportation hubs such as airports are therefore playing a key role in the spread of transmittable diseases [4]. In severe cases, such disease-spreading episodes can cause global pandemics and international health and socio-economic crises. Recent examples of outbreaks show how quickly contagious diseases spread around the world through the air transportation network; examples include the epidemic of SARS (Severe Acute Respiratory Syndrome) and the widespread H1N1 influenza. The initial SARS outbreak occurred in February 2003, when a guest at a hotel in Hong Kong transmitted the infection to 16 other guests in a single day.
The infected guests then transmitted the disease in Hong Kong, Toronto, Singapore and Vietnam during the next few days, and within weeks the disease became an epidemic affecting over 8,000 people in 26 countries across 5 continents.

Each record in the air-travel dataset denotes a trip from an origin airport to a destination airport, and indicates any intermediate connecting flights (see Table 1): X1 individuals traveled from PEK (Beijing Capital International Airport, Beijing, China) to HND (Haneda Airport, Tokyo, Japan), X2 individuals traveled from PEK to HND with a layover at PVG (Shanghai Pudong International Airport, Shanghai, China), and X3 individuals traveled from ATL (Hartsfield-Jackson Atlanta International Airport, Atlanta, USA) to ABV (Nnamdi Azikiwe International Airport, Abuja, Nigeria) with connecting flights at JFK (John F. Kennedy International Airport, New York, USA) and CDG (Charles de Gaulle Airport, Paris, France). From the dataset, we observe that all trips in September 2017 were operated through a network of 3621 unique airports. For each airport, we estimate the total traffic by adding the number of passengers for the trips where the airport is denoted as 'Origin', the number of passengers for the trips where the airport is denoted as 'Destination', and twice the number of passengers for the trips where the airport is denoted as 'Connection' (either Connection 1 or 2). For subsequent computational efficiency, we restrict our analysis to the subset of the dataset corresponding to traffic among the 2500 busiest airports (by total traffic); this subset accounts for 98.25% of the total trips and 99.8% of the total traffic. Computational model We build a computational model that simulates the mobility of travelers through the air transportation system, coupled with the propagation of a hypothetical infectious disease. Using the OAG data, we first generate the worldwide air transportation network, where the nodes are the 2500 busiest airports and the links between them are given by the connections between airports for which flights exist in the dataset.
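The traffic-aggregation rule described above (origin and destination counted once, each connection counted twice, for arrival and departure) can be sketched as follows; the trip records and passenger counts are hypothetical stand-ins for the proprietary OAG data.

```python
from collections import defaultdict

# Each trip: (origin, [connections], destination, passengers).
# Hypothetical records mirroring the itinerary structure of Table 1.
trips = [
    ("PEK", [], "HND", 120),
    ("PEK", ["PVG"], "HND", 80),
    ("ATL", ["JFK", "CDG"], "ABV", 40),
]

traffic = defaultdict(int)
for origin, connections, destination, pax in trips:
    traffic[origin] += pax           # counted once as origin
    traffic[destination] += pax      # counted once as destination
    for hub in connections:
        traffic[hub] += 2 * pax      # arrival plus departure at the hub

print(dict(traffic))
# {'PEK': 200, 'HND': 200, 'PVG': 160, 'ATL': 40, 'JFK': 80, 'CDG': 80, 'ABV': 40}
```

Ranking airports by this `traffic` total is then enough to select the 2500 busiest ones.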
The network describes a heterogeneous metapopulation of airports, where each individual airport is a subpopulation of individuals [30, 31]. We further develop a human mobility model to track the stochastic routes of traveling agents through the air transportation network. We finally implement a compartmental epidemic model to track the reaction dynamics of infection contagion as well as the hand-washing-related behavior of the traveling agents. Mobility Model The human mobility model has the form of a stochastic agent-based tracking system [32, 33] that accounts for the spatial distribution of airports, detailed air-traffic data, the correlated and recurrent nature of human mobility, and the waiting-time distributions of individuals at different locations. We first generate the origin-destination flow matrix OD_f = [od_f_ij], where od_f_ij is the number of passengers that traveled in September 2017 from origin i to destination j, and the origin-destination probability matrix OD_p = [od_p_ij], where od_p_ij is the probability that an agent travels from origin i to destination j. Each element of the OD_p matrix is calculated as od_p_ij = od_f_ij / Σ_j od_f_ij, where Σ_j od_f_ij is the total number of passengers that traveled from origin i. We then assign a 'home' population P_i to each subpopulation i following the nonlinear empirical relation P_i = α √T_i, where T_i is the total traffic at airport i and α is a constant parameter identified so as to give a total population size of N = Σ_i P_i individuals. In other words, each individual agent is initially assigned to its 'home' subpopulation i. Within the mobility route, the agent assigned to home i chooses to travel to a 'destination' airport j with probability extracted from the OD_p matrix. If the two nodes i and j are connected by more than one path (i.e.
direct, when the two airports are connected with direct flights, and indirect, when the two airports are connected only with connecting flights), then the probability that the agent selects a given path is proportional to the relative number of passengers traveling in each direct or indirect flight from origin i to destination j. After each trip (from origin i to destination j), the agent returns back to its home airport. Thus, the stochastic mobility model generates the spatial trajectory for all agents. In addition, using realistic waiting times at the three distinct locations where an agent can be (i.e. home, connecting airport or destination) and actual flight times required to travel between the airports, we express the spatio-temporal patterns of all the agents at the granularity of an hour. The waiting times at home airports, connecting airports and destinations are provided by the Bureau of Transportation Statistics 2010 34, and follow right-skewed distributions with means 897.87 hours (∼37 days), 1.33 hours, and 127.36 hours (∼5 days) respectively. The average flight times between airports i and j are estimated as the ratio of the geographical distance d_ij between the two airports, calculated by the spherical law of cosines, over the average velocity of an airplane, which is assumed to be constant and equal to 640 km/h to account for the changes in takeoff, climb, cruise, descent and landing speeds.

Epidemic Model

The conventional SIR model in epidemiology describes the reaction kinetics of an infection within a closed population 35. According to the SIR model, each individual is considered as either susceptible (S), infected (I) or recovered (R). The sum of the compartments at any given time t is equal to the total population size (S(t) + I(t) + R(t) = N).
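The mobility-model ingredients above (trip probabilities od_p_ij, home populations P_i = α√T_i, and flight times from the spherical law of cosines at a constant 640 km/h) can be sketched as follows; the three-airport frequency matrix and passenger counts are illustrative toy values, not OAG data.

```python
import math
import numpy as np

# Toy origin-destination frequency matrix od_f (passengers/month).
od_f = np.array([[0, 800, 200],
                 [500, 0, 500],
                 [100, 300, 0]], dtype=float)

# od_p[i, j]: probability that an agent with home i travels to destination j.
od_p = od_f / od_f.sum(axis=1, keepdims=True)

# Home populations P_i = alpha * sqrt(T_i), with alpha fixed so that the
# total number of agents equals N.
T = od_f.sum(axis=1) + od_f.sum(axis=0)   # toy proxy for total traffic
N = 100_000
alpha = N / np.sqrt(T).sum()
P = alpha * np.sqrt(T)

EARTH_RADIUS_KM = 6371.0

def flight_time_h(lat1, lon1, lat2, lon2, speed_kmh=640.0):
    """Great-circle distance (spherical law of cosines, coordinates in
    degrees) over a constant average airplane speed of 640 km/h."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    cos_c = (math.sin(p1) * math.sin(p2)
             + math.cos(p1) * math.cos(p2) * math.cos(dlon))
    d_km = EARTH_RADIUS_KM * math.acos(min(1.0, max(-1.0, cos_c)))
    return d_km / speed_kmh
```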
The SIR reaction kinetics model two distinct processes: the infection process, S + I →(β) 2I, where an infected individual transmits the infection to a susceptible individual with rate β, and the recovery process, I →(µ) R, where an infected individual recovers with rate µ (µ^-1 is the average time required for an infected individual to recover). The ratio R_0 = β/µ defines the basic reproductive number of the infection, denoting the average number of secondary infections an infected individual causes before it recovers. For a closed subpopulation the disease dies out exponentially fast when R_0 < 1, while it grows and potentially causes a pandemic for R_0 > 1. In this study, we modify the conventional SIR model to reflect the effects of hand washing behavior in the infection process. We formulate the SIR_WD model where each individual is placed in one of the three epidemic compartments (susceptible, infected, recovered) but is also characterized by one of the two hand cleanliness states, namely washed (W) or dirty (D) (Figure 1A). The SIR_WD epidemic model is then expressed by the reactions S + I_D →(β1) I + I_D and S + I_W →(β2) I + I_W for infection, I →(µ) R for recovery, and X_D →(p) X_W and X_W →(θ) X_D for the hand-state transitions (with X any epidemic compartment), where β1 is the infection rate with which an infected individual with dirty hands transmits the infection to a susceptible individual (β1 is equal to the infection rate β of the conventional SIR model), β2 is the infection rate with which an infected individual with washed hands transmits the infection to a susceptible individual (β2 < β1), µ is the recovery rate (equal to the recovery rate of the conventional SIR model), p is the hand washing engagement rate (denoting the percentage of individuals with non-clean hands that move to the washed state within the next hour) and θ is the hand washing effectiveness rate (θ^-1 denotes the average time after which an individual with washed hands returns to the 'dirty' state). The infection reactions that are described in the first two expressions of the SIR_WD model are shown in the diagram of Figure 1B.
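A minimal deterministic (mean-field) sketch of the SIR_WD kinetics can help fix ideas. It ignores the network and mobility structure of the full stochastic model, and the value of β2 below is illustrative, since the text only states that β2 < β1.

```python
import numpy as np

# Parameter values per hour, following the text where stated.
mu = 1.0 / (4 * 24)       # recovery rate (4-day recovery)
beta1 = 3 * mu            # infection rate of 'dirty' infected (beta = mu * R0, R0 = 3)
beta2 = 0.4 * beta1       # reduced rate for 'washed' infected (illustrative value)
theta = 1.0 / 1.5         # washed -> dirty rate
p = 0.12                  # dirty -> washed engagement rate (status quo)

N = 1e5
# Compartments: S_D, S_W, I_D, I_W, R (hand state of R is irrelevant here).
y = np.array([0.8 * (N - 10), 0.2 * (N - 10), 8.0, 2.0, 0.0])

def step(y, dt=1.0):
    """One forward-Euler hour of the mean-field SIR_WD equations."""
    S_D, S_W, I_D, I_W, R = y
    lam = (beta1 * I_D + beta2 * I_W) / N          # force of infection
    new_inf_D = lam * S_D * dt
    new_inf_W = lam * S_W * dt
    dS_D = -new_inf_D - p * S_D * dt + theta * S_W * dt
    dS_W = -new_inf_W + p * S_D * dt - theta * S_W * dt
    dI_D = new_inf_D - mu * I_D * dt - p * I_D * dt + theta * I_W * dt
    dI_W = new_inf_W - mu * I_W * dt + p * I_D * dt - theta * I_W * dt
    dR = mu * (I_D + I_W) * dt
    return y + np.array([dS_D, dS_W, dI_D, dI_W, dR])

for _ in range(14 * 24):  # two weeks of hourly steps
    y = step(y)
```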
To get infected, a healthy individual needs to touch a contaminated surface or an infected person directly. If the individual is healthy and touches a contaminated surface (independent of how long ago he/she washed his/her hands), he/she will get the bacteria on his/her hands. However, if he/she washes hands soon after getting contaminated, there is a high probability of removing the bacteria from the hands before they are transmitted to body fluids. Therefore, the hand washing rate of healthy individuals affects the transmissibility of a disease as well. Our SIR_WD model takes into account only the interdependence between the disease transmission probability and the hand cleanliness of the infected individuals. To model the process where the hand washing behavior of susceptible/healthy individuals has a role in the infection process, we would need to build a more sophisticated model based on SEIR reaction kinetics, where the extra epidemic compartment E indicates individuals that are exposed to bacteria or viruses 36. The SEIR epidemic model describes the following three processes: (a) a susceptible individual comes in contact with an infected individual and becomes exposed to the disease with some rate β (S + I →(β) E + I), (b) an exposed individual becomes infected with some rate γ (E →(γ) I), and (c) an infected individual recovers with rate µ (I →(µ) R). Both rates β and γ are affected by the hand washing levels. Here, we keep our analysis simple by using the conventional SIR model with the assumption that if infected individuals wash hands frequently, there is a smaller probability of contaminating surfaces or other healthy people directly.

Initial conditions and assumptions

We assume a flu-type disease, where the recovery rate is µ = 1/4 days^-1 (i.e. on average each infected individual recovers after four days) and the reproductive number is R_0 = 3 (i.e. on average each infected individual transmits the disease to three other individuals).
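For comparison, the three SEIR processes listed above can also be sketched deterministically; the latency rate γ below is an assumed value, since the paper does not parameterize an SEIR model.

```python
# Minimal forward-Euler sketch of the SEIR processes (a)-(c) above.
mu = 1.0 / (4 * 24)   # recovery rate per hour, as in the text
beta = 3 * mu         # exposure rate (R0 = 3)
gamma = 1.0 / 48      # exposed -> infected rate (assumed 2-day latency)

N = 1e5
S, E, I, R = N - 10, 0.0, 10.0, 0.0
for _ in range(14 * 24):                 # two weeks of hourly steps
    new_exposed = beta * S * I / N       # process (a): S + I -> E + I
    S, E, I, R = (S - new_exposed,
                  E + new_exposed - gamma * E,   # process (b): E -> I
                  I + gamma * E - mu * I,        # process (c): I -> R
                  R + mu * I)
```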
The infection rate for the SIR model is β = µR_0, which is equal to the infection rate β1 of the SIR_WD model; the reduced infection rate β2 of infected individuals with washed hands is set accordingly, as effective hand washing has been proven to be able to prevent around 50-70% of infections 37. The hand washing effectiveness rate, which indicates the average time after which washed hands become contaminated again, is set to θ = 1/1.5 hours^-1 (i.e. the rate of changing from the 'washed' to the 'dirty' state). We also consider that at most 1 in 5 people in an airport have clean hands at any given moment in time (i.e. 20% of the airport population). This is equivalent to a hand washing engagement rate among the non-cleaned individuals equal to p = 0.12 per hour (i.e. every hour about 12% of the non-cleaned individuals wash their hands). We declare this hand washing engagement rate (p = 0.12 hours^-1) as the status quo (see next section). We vary p to analyze and quantify the effect of hand washing engagement on different scenarios of epidemic spreading.

Status quo of hand washing engagement rate

To derive an approximation of the status quo level of hand cleanliness (i.e. the percentage of people with clean hands) in the population of an airport at any given moment, we simulate a closed population following some assumptions derived from the literature. We use data from a survey performed by the American Society for Microbiology 38, which revealed that 30% of travelers do not wash their hands after using public toilets at airports, denoting that the remaining 70% are compliers with hand washing. Following a study in a college town environment, we consider that only 67% of the compliers wash their hands properly (i.e. with water and soap and for the duration recommended by the CDC 39), while the remaining 33% wet their hands quickly and/or without soap 40. Therefore, we assume that in an airport population of N individuals only 70% · 67% = 46.9% of N are compliers with effective hand washing.
Furthermore, we assume that each individual washes their hands on average between 4 and 10 times per day 41, which means that in a 24-hour timeframe one hand washing event takes place every 2.5-6 hours. We assume that the frequency of hand washing follows a normal distribution with mean equal to 4.5 hours and standard deviation equal to 1 hour. We also consider that the duration of cleanliness of hands after hand washing follows an exponential distribution with mean value equal to 1.5 hours. Using the above approximations, we find that at any given moment the percentage of passengers in an airport that have clean hands has an upper bound of 24%. Given that this is a very optimistic upper bound of the reality, we assume and use in the simulations that the status quo for the percentage of individuals with clean hands in an airport at any given moment is 20%. To preserve a stable 20% hand cleanliness level over time in an airport, the hand washing engagement rate in the compartmental SIR_WD model, which indicates the rate of hand washing per hour among individuals with non-cleaned hands, is calculated to be equal to p = 0.12 h^-1 (i.e. 12% of 'dirty' individuals wash their hands within an hour). This indicates the status quo of the hand washing engagement rate. To increase the level of hand cleanliness in an airport to 30%, 40%, 50% or 60%, we need to increase the hand washing engagement rate to 0.21 h^-1, 0.32 h^-1, 0.49 h^-1 or 0.73 h^-1 respectively.

Methodology

We implement the epidemic model within the mobility model using Monte-Carlo simulations to track the mobility and contagion dynamics through the air transportation network. In the simulations, we consider different hand-hygiene mitigation strategies and we study their effects on the propagation and the diffusion of a disease at the global scale.
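A closed-form check of the link between the engagement rate p and the steady-state cleanliness level is possible if one treats the hand state as an isolated two-state hourly Markov chain. This ignores travel and population turnover, so it only approximates the engagement rates quoted above; the full simulation accounts for those effects.

```python
import math

theta = 1.0 / 1.5  # washed -> dirty rate per hour (from the text)

def steady_clean_fraction(p, theta=theta):
    """Steady-state fraction of 'washed' individuals in the two-state
    (dirty/washed) hourly Markov chain."""
    pi_dw = 1 - math.exp(-p)       # P(dirty -> washed within one hour)
    pi_wd = 1 - math.exp(-theta)   # P(washed -> dirty within one hour)
    return pi_dw / (pi_dw + pi_wd)

def engagement_rate_for(target, theta=theta):
    """Invert the steady state: the p needed for a target clean fraction."""
    pi_wd = 1 - math.exp(-theta)
    pi_dw = target * pi_wd / (1 - target)
    return -math.log(1 - pi_dw)
```

The two functions are exact inverses of each other within this simplified chain, so they are useful for sanity-checking how a cleanliness target translates into an engagement rate.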
We first study the conventional SIR epidemic model to identify the spatio-temporal structure of the disease for different seeding scenarios and to identify the most influential spreaders within the air transportation network. Furthermore, we study four hand-hygiene scenarios and their effectiveness in inhibiting disease spreading: a. homogeneous increase of hand washing engagement at all airports, b. increased hand washing engagement at the ten most influential airports in the network, c. increased hand washing engagement at the ten most influential airports for each source of the disease, and d. increased hand washing engagement only at the source of the disease.

Monte-Carlo simulations

At the initial time step of each simulation, t = 0, we declare an airport i as the source of the disease, where we randomly choose ten individuals to seed the infection. For each analysis we run 500 realizations of 10^5 traveling agents each. At each time step, which corresponds to one hour, we let individuals travel, wash their hands, and recover or transmit the disease to susceptible agents when those individuals are infected. At each time step, an infected individual recovers with probability Π_I→R = 1 − exp(−µ). When the transmission of an infection is associated with the hand cleanliness of the infected individuals (as described by the SIR_WD model), the probability of a susceptible individual at airport i to get the infection is Π_S→I,i = 1 − exp(−(β1·I_D,i + β2·I_W,i)/N_i), where I_D,i and I_W,i are the numbers of 'dirty' and 'washed' infected individuals respectively at airport i and N_i is the total population at airport i. The probability that an individual with washed hands becomes dirty is Π_W→D = 1 − exp(−θ) and the probability that an individual with dirty hands will wash his/her hands, within each one-hour time step, is Π_D→W = 1 − exp(−p). Using these probabilities, the computational model generates the stochastic epidemic transitions for the traveling agents over time.
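The hourly stochastic transitions just described can be sketched for the agents at a single airport; the agent representation (dicts with 'state' and 'hands' keys) is an implementation choice for illustration, not from the paper.

```python
import math
import random

def hourly_update(agents, beta1, beta2, mu, p, theta, N_i):
    """One stochastic hour at a single airport. Each agent is a dict with
    'state' in {'S','I','R'} and 'hands' in {'D','W'}."""
    I_D = sum(a['state'] == 'I' and a['hands'] == 'D' for a in agents)
    I_W = sum(a['state'] == 'I' and a['hands'] == 'W' for a in agents)
    pi_inf = 1 - math.exp(-(beta1 * I_D + beta2 * I_W) / N_i)
    pi_rec = 1 - math.exp(-mu)
    pi_dw = 1 - math.exp(-p)       # dirty -> washed
    pi_wd = 1 - math.exp(-theta)   # washed -> dirty
    for a in agents:
        if a['state'] == 'S' and random.random() < pi_inf:
            a['state'] = 'I'
        elif a['state'] == 'I' and random.random() < pi_rec:
            a['state'] = 'R'
        # Hand-state transitions are independent of the epidemic state.
        if a['hands'] == 'D' and random.random() < pi_dw:
            a['hands'] = 'W'
        elif a['hands'] == 'W' and random.random() < pi_wd:
            a['hands'] = 'D'
    return agents
```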
In our analysis, we vary the model parameter p, considering different hand-hygiene interventions, and analyze their impact on global disease spreading.

Evaluating the early-time impact of the disease

We evaluate the early-time impact of the disease by measuring two quantities that are correlated with each other: the disease prevalence and the Total Square Displacement two weeks after the disease is deliberately seeded in a source. The disease prevalence (PREV) is given by the total number of affected individuals (infected plus recovered) 42. However, as we want to evaluate not only the total number of infected individuals but also how well spread they are across the globe, we use the Total Square Displacement (TSD) of the infected individuals as a simulation metric 32. This metric is given by the formula TSD = Σ_{j=1}^{I(t)} |L_j − L̄|^2, where I(t) is the number of infected individuals at time t = 2 weeks, L_j is the geographic location of the j-th infected individual and L̄ is the position of the geographic centre of the infection. The geographic centre is the centre of gravity (aka the centre of mass) of the locations of all infected individuals. To find the geographic centre, we first convert the latitude and longitude of each location L_j from degrees to radians, and then into Cartesian coordinates using the formulas x_Lj = cos(lat_Lj·π/180)·cos(lon_Lj·π/180), y_Lj = cos(lat_Lj·π/180)·sin(lon_Lj·π/180) and z_Lj = sin(lat_Lj·π/180).

Conventional SIR Model

The contagion dynamics of infectious diseases are broadly described by the basic SIR model. However, the concept of SIR reactions excludes the effects of individual hygiene activities (like hand washing) from the model of infection transmission. In that case, the infection reaction process is considered as independent of the hand cleanliness of the infected individuals.
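The geographic-centre and TSD computations described above can be sketched as follows, using the stated degree-to-Cartesian conversion; here the displacement |L_j − L̄| is assumed to be the great-circle distance from the centre.

```python
import math

def geographic_centre(points):
    """Centre of gravity of (lat, lon) points in degrees, computed by
    averaging in Cartesian coordinates and converting back."""
    xs = ys = zs = 0.0
    for lat, lon in points:
        la, lo = math.radians(lat), math.radians(lon)
        xs += math.cos(la) * math.cos(lo)
        ys += math.cos(la) * math.sin(lo)
        zs += math.sin(la)
    n = len(points)
    x, y, z = xs / n, ys / n, zs / n
    lon = math.degrees(math.atan2(y, x))
    lat = math.degrees(math.atan2(z, math.hypot(x, y)))
    return lat, lon

def tsd(points, radius_km=6371.0):
    """Total square displacement: sum of squared great-circle distances of
    the infected individuals' locations from the geographic centre."""
    clat, clon = geographic_centre(points)
    total = 0.0
    for lat, lon in points:
        p1, p2 = math.radians(clat), math.radians(lat)
        dlon = math.radians(lon - clon)
        c = math.acos(min(1.0, max(-1.0,
            math.sin(p1) * math.sin(p2)
            + math.cos(p1) * math.cos(p2) * math.cos(dlon))))
        total += (radius_km * c) ** 2
    return total
```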
In our initial analysis, we first use the SIR model (disregarding the impact of hand-hygiene behavior) to estimate the capacity of airports to spread an infectious disease globally. We seed the disease in each of the world's major airports and through simulations we track the contagion dynamics two weeks after the outbreak. We rank the airports according to their spreading capacity as quantified by the TSD of infected individuals 32 (Figure 2, middle). From the analysis, it is observed that the total traffic alone cannot predict the power of an airport to spread the disease (comparing left and middle panels in Figure 2), but should be considered alongside the location of each spreader airport and its spatial correlations with other influential airports in the network. NRT (Narita International Airport, Tokyo, Japan) and HNL (Honolulu International Airport, Honolulu, USA) are indicative examples, as they ranked in the 46th and 117th place by total traffic respectively, but they contribute substantially to the acceleration and expansion of a disease contagion globally (ranked by TSD in the 7th and 30th place respectively). This happens because NRT and HNL combine three important features with high impact on disease spreading: (i) they have direct connections with the world's biggest mega-hub airports, (ii) they operate long-range in- and out-bound international flights, and (iii) they are located at geographical conjunctive points between East and West 32. The bar plot on the right of Figure 2 shows the two-week prevalence of the disease, as measured by the percentage of the world population that has been affected by the disease two weeks after a disease started from each of the major airports.
The two-week prevalence is highly correlated with the total traffic of the airport (the Pearson correlation coefficient is equal to 0.88), indicating that large airports have a big impact in terms of the absolute number of affected (infected plus recovered) individuals.

SIR_WD Model: Worldwide homogeneous hand washing intervention

The effects of hand-hygiene are then embedded in the computations and we focus the analysis on the epidemic reaction kinetics as described by the SIR_WD model. For each simulation, the disease is seeded in one of the major airports (ten randomly chosen individuals are infected at t = 0) and the epidemic expansion all over the world due to the mobility of infected agents is recorded.

Figure 2. The impact of the source of the disease on its global spread. (middle) Ranking of the 40 most influential airports in the world with respect to the TSD of the infected individuals two weeks after the disease started from each one of these airports. (right) The two-week prevalence of the disease as measured by the percentage of world population that have been affected (infected plus recovered) by the disease two weeks after a disease started from each of these major airports. (left) The total monthly traffic of each of those major airports as has been calculated using the world air-traffic dataset from September 2017.

All rights reserved. No reuse allowed without permission. The copyright holder for this preprint (which was not peer-reviewed) is the author/funder. https://doi.org/10.1101/530618 doi: bioRxiv preprint

In the status quo scenario, we consider that the hand cleanliness level is at a 20% steady state at each airport in the world. This percentage represents the fraction of individuals with washed hands at any moment. The rate of hand washing per hour that corresponds to 20% cleanliness is equal to 0.12 h^-1 (see Table 2).
We rank the airports with respect to the TSD metric and we observe that LHR has the greatest impact, while LAX, JFK, SYD and CDG are among the five most influential spreaders worldwide. Using the same order of airports, we repeat the simulations, increasing the hand washing engagement rate homogeneously at all airports to achieve global hand cleanliness levels of 30%, 40%, 50% and 60%. For each hand washing engagement rate (or hand cleanliness level) we analyse the changes in the impact of contagion. Figure 3A shows the early-time evolution of the fraction of affected individuals over the first two weeks after a disease is seeded at DXB (Dubai International Airport). It is observed that with the increase of the hand cleanliness level at all airports from 20% to 60% there is a significant reduction in the percentage of affected individuals in the total population, from around 1.5% to less than 0.5%. In Figure 3B we demonstrate the spreading power of the most influential spreader airports, measured by the TSD of infected individuals two weeks after a disease was initiated at each of these major airports, at 20% (status quo), 30%, 40%, 50% and 60% homogeneous hand cleanliness. A drastic reduction in TSD is observed with the increase of the cleanliness level, verifying that hand-hygiene is one of the most important factors to control or even prevent an infection. For example, an infection seeded at LHR covered about 5·10^5 square meters around the mass centre of the infection within two weeks, while the infected area was reduced to less than 2·10^5 square meters when the cleanliness level increased from 20% to 60% globally. The reduction of the disease impact relative to the status quo is calculated by (TSD_20% − TSD_X)/TSD_20% for the TSD metric or (PREV_20% − PREV_X)/PREV_20% for the disease prevalence metric, where the cleanliness level X increases from 30% to 60% worldwide.
The results, shown in Table 2, indicate a significant reduction of the impact of a disease worldwide, by 24% to 69% depending on the worldwide hand washing engagement rate, as calculated by the TSD (or by 18% to 55% as calculated by the global prevalence of the disease).

SIR_WD Model: Strategic hand washing policies

While increasing the level of hand washing engagement homogeneously at all airports is very costly and maybe infeasible, we test some other, less costly intervention strategies. These interventions consider the increase of the hand washing engagement rate only at a small number of 'key' airports. We test three different intervention strategies that consider the increase of the hand washing engagement rate: (i) at the ten pre-identified key airports worldwide, (ii) at the ten key airports of each source of the disease, and (iii) only at the source of the disease.

Figure 3. The effect of a global, homogeneous hand washing strategy on the impact of a disease spreading. (A) The fraction of affected (infected plus recovered) individuals worldwide over the first two weeks after the infection was initiated at Dubai International Airport at different levels of hand cleanliness. (B) Airports are ranked according to their spreading power to transmit a disease faster and further across the globe, measured by the total squared displacement of infected individuals two weeks after a disease started from each individual airport. From left to right the hand cleanliness level increases from 20% (status quo) to 60%.

Table 2. Reduction of the disease impact with a homogeneous increase of hand washing engagement worldwide. These are point estimates and 95% Confidence Intervals calculated across 120 disease spreading scenarios. In each scenario, the source of the disease is one of the 120 largest airports in the world. Each spreading scenario is evaluated over 100 mobility and epidemic realizations.

For intervention scenario (i), we pre-identify the ten key airports of the world air transportation network by multiplying the susceptibility of each airport by the strength of the airport to spread an infection globally. The strength of airport i is calculated from the total outgoing traffic T_i from airport i, the number of connections k_i of i (i.e. the degree of node i in the network), and the effective length Σ_{j=1}^{k_i} w_ij·d_ij of all links of i, which is the weighted sum of the actual distances d_ij between nodes i and j; the weights w_ij are the fractions of passengers traveling from i to j. The susceptibility of airport i is calculated using the conventional SIR simulations as the weighted average fraction of infected individuals that arrive at i over all the seeding scenarios considered in the SIR model described above. Using the above combined metric (susceptibility × strength), we identify the ten 'key' airports of the world air transportation network as being LHR, LAX, JFK, CDG, DXB, FRA, HKG, PEK, SFO and AMS. For intervention scenario (ii), we identify ten 'key' airports for each source of the disease, by multiplying the airport strength by the source-dependent susceptibility. The source-dependent susceptibility of airport i for the seeding of the disease at airport j is calculated as the fraction of infected individuals that arrive at i when the disease is initiated at airport j. Therefore, for this intervention scenario, knowledge of the source of the disease is required and for different sources of the disease we have different sets of 'key' airports (see Figure 4).
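The combined susceptibility × strength ranking can be sketched as follows. Since the exact functional form of the airport strength is not fully reproduced above, the product of outgoing traffic and effective link length used below is an assumed proxy, and the airport numbers are toy values.

```python
# Illustrative ranking of 'key' airports by susceptibility x strength.
airports = {
    # code: (T_out, effective_length_km, susceptibility)  -- toy numbers
    "LHR": (6.0e6, 3500.0, 0.012),
    "JFK": (5.2e6, 4200.0, 0.010),
    "DXB": (5.8e6, 4600.0, 0.008),
}

def key_airports(airports, top=10):
    """Rank airports by susceptibility times an assumed strength proxy
    (outgoing traffic times effective link length)."""
    scores = {code: T * eff_len * susc
              for code, (T, eff_len, susc) in airports.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top]
```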
Finally, for intervention scenario (iii), since we increase the hand washing engagement rate only at the source of the disease, prior knowledge of the source is required. The results, shown in Figure 4, indicate that a less costly (compared to the homogeneous one) strategic plan for hand washing intervention only at the ten pre-identified 'key' airports worldwide (scenario (i)) could lead to a significant reduction of the disease impact, from ∼8% to ∼37% as calculated by the TSD (or from ∼7% to ∼29% as calculated by the world prevalence). If the strategic plan is deliberately implemented only at the ten most important airports for each source of the disease (scenario (ii)), we observe a further reduction of the disease impact. However, this further reduction is statistically different from that of scenario (i) only in terms of the prevalence of the disease, and not in terms of geographical spreading as calculated through the TSD. Intervention scenario (iii), which considers enhancing hand washing engagement only at the source of the disease, also has a significant effect on the reduction of the disease impact; yet, this effect is smaller than that of intervention scenarios (i) and (ii).

Discussion

In this work we have analysed contagion dynamics through the world air transportation network and the impact of hand-hygiene behavioural changes of air travelers against global epidemic spreading. Using well-established methodologies, we have applied simulations to track traveling agents and their hand washing activity and analysed the expansion of flu-type epidemics through the world air transportation network. From the simulation results, we have measured the early-time spreading power of the major airports in the world under different hand-hygiene interventions. Using data-driven calculations, we estimated that at most 1 in 5 people have clean hands at any given moment in time (i.e. 20% of the airport population).
This translates to a hand washing engagement rate among the non-cleaned individuals equal to 0.12 per hour (i.e. every hour about 12% of the non-cleaned individuals wash their hands). From the simulation results we have shown that, if we are able to increase the level of hand cleanliness at all airports in the world from 20% to 30% (or equivalently to increase the hand washing engagement rate from 0.12 to 0.21 per hour), either by increasing the capacity for hand washing, by increasing awareness among individuals, or by giving the right incentives to individuals, a potential infectious disease will have a worldwide impact that is about 21.2% smaller compared to the impact that the same disease would have at the 20% level of hand cleanliness (or 0.12 per hour hand washing engagement rate). Increasing the level of hand cleanliness to 60% (or equivalently the hand washing engagement rate among non-cleaned individuals to 0.73 per hour) at all airports in the world would reduce the impact of a potential disease spreading by 64.6%. Moreover, we have identified the ten most important airports of the network; increasing the level of hand cleanliness (or hand washing engagement rate) only at those would decrease the impact of the disease spreading by 9% to 37%. Our current analysis has some limitations, which can be addressed in the future. A first limitation is the use of the simple SIR reaction kinetics, while a more complicated model, like the SEIR, would provide inferences on how the hand washing behavior of individuals exposed to the disease affects the expansion of epidemics. A second limitation is that we use data from the air transportation system as a proxy for human mobility. A complete analysis should focus on the spread of infections through a more realistic human mobility network that includes daily commuting patterns and travel through other means of transportation.
A third limitation is the assumption of a homogeneous hand-hygiene behavior of air travelers, as we do not know the actual hand washing activity, which varies between individuals within a local population and between individuals from different societies and cultures. Future research can be designed to understand human hand-washing behavior and provide insights on what and how social effects can change it. Epidemiological outbreaks not only increase global mortality rates, but they also have a large socio-economic impact that is not limited to those countries that are directly affected by the epidemic. Outbreaks reduce the consumption of goods and services, negatively affecting the tourism industry, increasing businesses' operating costs, and speeding the flight of foreign capital, generating massive economic costs globally. For instance, even the relatively short-lived SARS epidemic in 2003 led to the cancellation of numerous flights and to the closure of schools, wreaking havoc in Asian financial markets and ultimately costing the world economy more than $30 billion 43. Hypothetical scenarios of future global pandemics provide estimates of the economic effects. The worldwide spread of a severe infectious disease is estimated to cause approximately 720,000 deaths per year and an annual reduction of economic output of $500 billion (i.e. ∼0.6% of the global income) 44. In such severe scenarios where markets shut down entirely, a massive global economic slowdown is expected to occur, shrinking the GDP of national economies. Of course, wealth and income effects are expected to differ sharply across countries, with a major shift of global capital from the affected economies (i.e. of developing countries) to the less-affected economies (i.e. of North America and Europe). The effectiveness of mitigation strategies against global pandemics is evaluated through the total expected cost versus the total public health benefit 45.
The target of each strategy is to maximise social welfare while incurring the minimum economic cost. For interventions where travel restrictions are implemented 46, the cost increases with the number of closed airports and the number of individuals that get stranded in those airports. The reward is related to the relative decrease in the global footprint of the disease, compared with the null case of non-intervention. In contrast to the mobility-driven strategies that change the population's mobility patterns, other solutions such as hand washing appear to be more cost- and reward-effective. Future research on the socio-economic impact of global pandemics and the cost-effectiveness ratio of different mitigation strategies (e.g. hand washing, vaccination, airport closures, mobility routing diversions) against disease spreading would evaluate the efficiency and significance of hand-hygiene interventions. However, while hand hygiene is considered the first prevention step in the case of an epidemic emergency, the capacity of hand washing facilities in crowded places, including airports, is limited to wash basins at restrooms. It is not known, however, if increased capacity would enhance hand washing engagement by air travelers. New technology is being developed aiming to increase the capacity of facilities even outside restrooms, thus expanding the options for hand hygiene and the solutions for air and surface sterilization. Airbus 47, for example, is exploring an innovative antimicrobial technology that is able to eliminate viruses and pathogens from aircraft surfaces (e.g. tray tables, seat covers, touch screens, galley areas). Boeing is also exploring a prototype self-sanitizing lavatory that uses ultraviolet light to kill 99.99% of pathogens 48.
At the same time, robotic systems for dirt detection and autonomous cleaning of contaminated surfaces 49 and smart touch-free hand washing systems 50 are promising tools in the evolution of cleaning technologies. An important question is how such smart technologies are adopted by the general public, and what incentives can promote hand washing behavioral changes. Do digital nudges (motivation messages) make health-related establishments attractive to individuals? A recent study has found that nudges have been effective at improving outcomes in a variety of health-care settings, including a significant increase in influenza vaccination rates 20. Can social influence or peer effects improve smart hand-washing engagement? Recent works have identified that social influence plays an important role in many behaviors like exercise or diet 51,52, and there is some initial evidence that it can play a role in individual hygiene 53. There is certainly a need for rigorous and carefully designed field experiments at large population scale to identify and measure the causal effect of digital nudges, incentives and peer influence on the public hand washing engagement of air travelers, as well as the mechanisms of health-enhancing human behavior change. This research can potentially shape the way policymakers design and implement strategic interventions based on promoting hand washing in airports, which will lead to hindering any infection within a confined geographical area in the early days of an outbreak and inhibit its expansion into a pandemic. The most important outcome derived from our study is the conclusion that proper hand-hygiene, with regular and efficient hand washing, is the simplest and most effective solution for preventing the transmission of infections and reducing the chances of massive epidemics spreading globally.
This should be followed up by the design of strategic mechanisms able to increase the capacity of hand washing facilities in public places, and nudges that will enhance the adoption of hand-hygiene-related behaviors.
The Cholesterol-Lowering Effect of Oats and Oat Beta Glucan: Modes of Action and Potential Role of Bile Acids and the Microbiome

Consumption of sufficient quantities of oat products has been shown to reduce host cholesterol and thereby modulate cardiovascular disease risk. The effects are proposed to be mediated by the gel-forming properties of oat β-glucan, which modulates host bile acid and cholesterol metabolism and potentially removes intestinal cholesterol for excretion. However, the gut microbiota has emerged as a major factor regulating cholesterol metabolism in the host. Oat β-glucan has been shown to modulate the gut microbiota, particularly those bacterial species that influence host bile acid metabolism and production of short chain fatty acids, factors which are regulators of host cholesterol homeostasis. Given a significant role for the gut microbiota in cholesterol metabolism, it is likely that the effects of oat β-glucan on the host are multifaceted and involve regulation of microbe-host interactions at the gut interface. Here we consider the potential for oat β-glucan to influence microbial populations in the gut, with potential consequences for bile acid metabolism, reverse cholesterol transport (RCT), short-chain fatty acid (SCFA) production, bacterial metabolism of cholesterol and microbe-host signaling.

INTRODUCTION

A significant body of evidence demonstrates that consumption of oat products is linked to a reduction in serum LDL cholesterol, a risk factor for the development of cardiovascular disease (CVD) (1)(2)(3). Oats are a source of soluble fiber in the form of β-glucan (as well as arabinoxylan, xyloglucan, and other minor components), insoluble fiber, protein, lipids, phenolic compounds, vitamins, and minerals. 
Whilst other constituents in oats may also have an impact, the cholesterol-lowering activity of oats has been demonstrated to be associated with an increase in viscosity of the gut contents (4), which enhances excretion of bile acids and cholesterol in the feces (5). Indeed, consumption of β-glucan alone can reduce serum cholesterol (6). The weight of evidence in support of a beneficial role of oat β-glucans led the US Food and Drug Administration (FDA) to authorize the use of health claims on oat products attributing lowering of CVD risk to consumption of at least 3 g per day of β-glucan. Cholesterol-lowering claims have also been approved in the EU by the European Commission (7)(8)(9) and in a number of other jurisdictions including Australia and New Zealand (10), Canada (11), Brazil (12,13), Malaysia (14), Indonesia (15), and South Korea (16). We, and others, consider that the cholesterol-lowering properties of oats may not be solely attributable to the viscous properties of β-glucans (17)(18)(19). Recent research suggests a significant role for the gut microbiota in the maintenance of cholesterol homeostasis in the host (20). A number of studies have demonstrated the efficacy of probiotics [in particular, probiotic strains with an ability to metabolize host bile acids through bacterial bile salt hydrolase (BSH) activity] in lowering cholesterol in animal models or in humans (21)(22)(23)(24)(25). Microbial metabolism of bile acids is known to influence systemic cholesterol metabolism. As cholesterol is a precursor of bile acids, influencing bile acid synthesis provides a means for enhanced excretion of cholesterol, thereby lowering serum cholesterol levels in the host (26,27). Bile acid signaling through the farnesoid X receptor (FXR) and other receptors may also influence host metabolism of cholesterol, for example through the induction of cholesterol transport in the gut (27,28). 
Alterations to microbial production of short chain fatty acids (SCFA), including propionate, are also likely to have consequences for cholesterol metabolism in the host (29), though the precise mechanisms remain to be elucidated (30). Importantly, oat products and oat β-glucans have been shown to modulate the gut microbiota in human, animal and in vitro fermentation systems (19,31,32). Therefore, oats (including oat β-glucans) may have a dietary influence upon the host gut microbiota with consequences for bile acid signaling, SCFA signaling, and other effects that are known modulators of host cholesterol homeostasis. Herein we review the evidence linking components of oats with alterations to host microbiota and discuss potential mechanisms by which such microbiota changes may influence host cholesterol metabolism, with a particular focus upon bile acid metabolism. Whilst we appreciate that oat β-glucans may also play a role in post-prandial glucose homeostasis (33), in this review we will predominately focus upon mechanisms by which they lower host cholesterol. Our focus is primarily on the effects of oat β-glucans. However, some reference will be made to the mechanistic effects of barley β-glucans, notably in instances where relevant studies have not yet been performed using oat β-glucans.

CLINICAL EVIDENCE FOR CHOLESTEROL-LOWERING PROPERTIES OF OATS

A large number of individual randomized-controlled trials and subsequent meta-analyses have established a significant effect of consumption of oats or oat β-glucans in reducing LDL cholesterol and improving other markers of cardiovascular disease (CVD) risk (1)(2)(3)(34). A meta-analysis of 126 individual studies by Tiwari and Cummins (1) examined the effect of β-glucan intake on measures of blood cholesterol [total cholesterol (TC) and low-density lipoprotein (LDL)-cholesterol] as well as blood glucose levels. 
The study demonstrated a significant reduction of TC (by 0.6 mmol/L), LDL cholesterol (by 0.66 mmol/L), and triglycerides (TGL/TAG) (by 0.04 mmol/L) and an increase in HDL cholesterol (by 0.03 mmol/L) following consumption of oat or barley β-glucan (oat and barley β-glucans are considered bioequivalent with respect to cholesterol-lowering properties). A dose-response model demonstrated a decrease in TC with an increase in β-glucan dose but no increased effect in individuals consuming over 3 g/day β-glucan (1). This finding supports FDA recommendations relating to consumption of 3 g/day β-glucan to lower CVD risk (35). Similarly, a meta-analysis of randomized-controlled trials by Whitehead et al., which focused upon consumption of ≥3 g/day oat β-glucan, showed a significant reduction in both TC (by 0.30 mmol/L) and LDL cholesterol (by 0.25 mmol/L) (but no effect on HDL cholesterol or TGL) (2). The study found no increased effect in those consuming higher doses of β-glucan, again suggesting that a minimum recommended dose of 3 g/day is sufficient for the cholesterol-lowering effect and is not enhanced through consumption of higher doses. AbuMweis et al. (36) combined the data from 11 randomized-controlled trials that fitted their weighted criteria based on dose, duration, source of β-glucan, population characteristics and sample size to report that interventions did elicit changes in total and LDL cholesterol levels relative to control subjects, but no dose-response was observed. Reductions in TC of 0.30 mmol/L and reductions in LDL cholesterol of 0.27 mmol/L were reported in response to consumption of ≥3 g/day barley β-glucan. The lack of a dose-response when consuming levels of β-glucan >3 g/day was noted above. This lack of dose-response may reflect variation in the physico-chemical properties of β-glucans used in individual randomized-controlled trials and included in the above meta-analyses. 
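The plateau behaviour reported in these meta-analyses (TC falls with increasing dose, but with no added benefit beyond roughly 3 g/day) is the shape of a saturating dose-response. A minimal sketch, using an Emax-type curve whose parameters are illustrative choices rather than values fitted to any of the cited trials:

```python
# Saturating (Emax-type) dose-response: effect rises with dose, then plateaus.
# emax and ed50 are illustrative placeholders, not fitted trial parameters.
def ldl_reduction(dose_g_per_day, emax=0.30, ed50=1.0):
    """Predicted LDL-C reduction (mmol/L) for a given beta-glucan dose."""
    return emax * dose_g_per_day / (ed50 + dose_g_per_day)

# Increments shrink as dose rises past a few g/day, mimicking the
# reported lack of extra benefit above the 3 g/day recommendation.
for dose in (1, 3, 6, 10):
    print(f"{dose} g/day -> {ldl_reduction(dose):.3f} mmol/L reduction")
```

The diminishing increments above 3 g/day in such a curve mirror the meta-analytic finding that the recommended minimum dose captures most of the achievable effect.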
It is known that highly water-soluble β-glucan of medium to high average molecular weight (Mw) is more effective in reducing serum cholesterol than poorly water-soluble β-glucan of low Mw (37). However, the precise Mw of β-glucan can be difficult to establish and may not be accurately reported in randomized-controlled trial data (2). It has also been suggested that the individual food matrix and/or food processing procedures may influence the Mw (and therefore bioactivity) of β-glucan and that this is a further confounding factor when comparing data from individual trials (2). The influence of so many variables may suggest that particular meta-analyses are not sufficiently powered to detect a dose-effect when comparing studies which use differing forms of β-glucan with variations in viscosity and bioactivity, where such parameters remain unknown (2,36).

BILE ACID SYNTHESIS IN THE HOST

Bile acid synthesis and excretion is the main route by which cholesterol is effectively eliminated from the body. In the following sections we provide a basic overview of bile acid metabolism in the host, with a particular emphasis upon how the gut microbiota contributes to metabolism of the bile acid pool. These concepts are further expanded in sections Alterations to Gut Microbiota and Effects on Cholesterol Metabolism and Mechanisms by Which Oat β-Glucan May Influence Host Cholesterol Metabolism Through Alterations in BSH Activity of the Microbiome below.

The Bile Acid Cycle, Cholesterol, and the Role of Microbial BSH

Bile acids are synthesized in liver hepatocytes from cholesterol by cytochrome P450 enzymes (CYPs). Approximately 500 mg of cholesterol is converted to bile acids (BA) on a daily basis (38). Prior to secretion and storage in the gall bladder, the primary bile acids chenodeoxycholic acid (CDCA) and cholic acid (CA) are conjugated to either a taurine or a glycine molecule to aid their solubility and excretion from the liver. 
The majority of conjugated bile acids are reabsorbed in the terminal ileum, with 5% excreted in the feces [see (39) for a review]. Conjugated bile acids are released postprandially from the gall bladder into the small intestine and are subject to enzymatic modification by the bile salt hydrolase (BSH) activity of the microbiota, which liberates them from their cognate amino acid. This renders them susceptible to further microbial modification to form the secondary bile acids lithocholic acid (LCA) from CDCA and deoxycholic acid (DCA) from CA. This activity is carried out by specific members of the colonic microbiota [the Eubacterium and Clostridium XIVa clusters (40)], although gene analyses suggest that other microbial representatives may be capable of carrying out these reactions [reviewed by Long et al. (39)]. Therefore, while the liver dictates bile acid production, the gut microbiota is responsible for the diversity of BAs derived from the CA and CDCA bile acid families, and it also influences reuptake, or enterohepatic circulation. Alterations to the range and relative profile of bile acids are a reliable readout of microbial changes in the gut, and such changes are particularly evident in disease states including metabolic syndrome, inflammatory bowel diseases and Type II diabetes [see (41,42) for reviews]. Therefore, the dietary effects of oat β-glucan on the microbiota (outlined in section Alterations to Gut Microbiota and Effects on Cholesterol Metabolism) are likely to impact bile acid profiles in the host, with potential consequences for metabolism and signaling. Bile acids are ligands for the farnesoid X receptor (FXR), which is a nuclear receptor that is central to energy and metabolic regulation in a range of different tissues (43). 
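The figures quoted above (roughly 500 mg of cholesterol converted to bile acids daily, with about 5% of circulating bile acids lost in feces) imply a simple steady-state balance: de novo synthesis must replace fecal losses, so any drop in reabsorption efficiency draws more cholesterol into bile acid synthesis. A back-of-the-envelope sketch, treating these two approximate numbers as exact:

```python
# Steady-state enterohepatic balance: daily synthesis replaces fecal loss.
# Inputs are the approximate figures quoted in the text.
DAILY_SYNTHESIS_MG = 500    # cholesterol converted to bile acids per day
FECAL_LOSS_FRACTION = 0.05  # ~5% of bile acids escape ileal reabsorption

# Daily bile acid flux through the ileum implied by those two numbers:
daily_flux_mg = DAILY_SYNTHESIS_MG / FECAL_LOSS_FRACTION
print(f"implied daily bile acid flux: {daily_flux_mg / 1000:.0f} g/day")  # 10 g/day

def required_synthesis_mg(loss_fraction, flux_mg=daily_flux_mg):
    """Cholesterol needed per day to replace fecal bile acid losses."""
    return flux_mg * loss_fraction

# Doubling the fecal loss fraction (e.g. via reduced reabsorption)
# doubles the cholesterol consumed by de novo bile acid synthesis:
print(f"{required_synthesis_mg(0.10):.0f} mg/day at 10% fecal loss")
```

This is only an order-of-magnitude illustration: bile acid pool size, the number of enterohepatic cycles per meal and individual variation are all ignored.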
Microbially modified and unconjugated bile acids are the most potent natural FXR ligands (CDCA > LCA > DCA > CA in order of affinity and activation strength), while ursodeoxycholate (UDCA) and murine tauro-β-muricholic acid can hinder FXR activity (27,44,45). FXR is widely distributed in tissues including the intestine and the liver. FXR therefore acts as a bile acid sensor in the intestine and as a controller of bile acid synthesis in the liver (46). It controls bile acid synthesis by a variety of mechanisms. Agonism of FXR in the intestine induces production of the endocrine hormone fibroblast growth factor 19 [FGF19 (in humans); FGF15 (in mice)], which enters the circulation and activates specific receptors on hepatocytes to reduce bile acid synthesis (through down-regulation of a key enzyme, CYP7A1) (47)(48)(49). Alternatively, a reduction in engagement of the FXR may enhance expression of regulatory networks that are inhibited by FXR, such as the liver X receptor (LXR) regulon (24). Another layer of cross-talk from the intestine to the liver acts through enterohepatic re-circulation of bile acids. On reaching the liver, circulating bile acids activate FXR directly to ultimately inhibit CYP7A1 transcription, again reducing bile acid synthesis (44). The importance of FXR in host cholesterol metabolism is highlighted by studies using FXR knock-out mice or specific chemical agonists of the FXR [reviewed in Li and Chiang (27)]. Knock-out of FXR in mice results in elevated LDL-C (50), whereas stimulation of the FXR in hypercholesterolaemic mice (using bile acids or specific agonists) results in a lowering of HDL-C and LDL-C (51). More recently, the intestinal FXR agonist Fexaramine (Fex) was shown to induce FGF15 and to lead to broadly beneficial metabolic effects including reduced weight gain in mice fed a high-fat diet (52) and reduced serum cholesterol in a mouse model of diabetes (53). 
The precise mechanisms by which FXR contributes to cholesterol metabolism in the host remain unclear but are thought to involve regulation of fatty acid metabolism as well as reverse cholesterol transport (RCT) and trans-intestinal cholesterol excretion (TICE) (outlined below) (27,54).

Reverse Cholesterol Transport (RCT) and Trans-intestinal Cholesterol Excretion (TICE)

In addition to the incorporation of cholesterol into bile acids and subsequent bile acid excretion, other mechanisms contribute to the systemic control of host cholesterol. RCT is a mechanism for directly transporting cholesterol from the tissues to the liver for excretion into bile, and ultimately in feces. RCT relies upon cholesterol loading onto HDL particles, which can remove cholesterol from the tissues, notably from macrophage foam cells in the artery wall [reviewed in Temel and Brown (55) and Tall and Yvan-Charvet (56)]. HDL-cholesterol then enters hepatocytes via specific receptors and the cholesterol is secreted directly into bile for excretion via the specific transporters ABCG5/G8. This represents a mechanism by which HDL-cholesterol is thought to be associated with reduced cardiovascular disease risk. More recent work has revealed a supplemental system for trans-intestinal cholesterol excretion (TICE) directly into feces through enterocytes in the proximal small intestine. The model proposes that cholesterol is removed from HDL particles in the liver and loaded onto ApoB-containing lipoproteins which migrate to the small intestine, where the particles are transported across enterocytes and the cholesterol is excreted into the intestinal lumen. Again, cholesterol excretion is via the ABCG5/G8 transport system, in this case expressed in enterocytes (55). Importantly, genes that encode essential components of both RCT and TICE are regulated via FXR (27). These include ApoA1, which encodes a component of HDL particles, and the ABCG5/G8 transport system (28). 
This suggests that bile acid signals (and therefore the microbiota) can modulate both RCT and TICE (see section Mechanisms by Which Oat β-Glucan May Influence Host Cholesterol Metabolism Through Alterations in BSH Activity of the Microbiome below).

THE VISCOUS NATURE OF β-GLUCAN AND EXISTING PROPOSED MECHANISM FOR CHOLESTEROL-LOWERING

The β-glucan polysaccharide forms a viscous liquid suspension in solution, a characteristic which is predicted to occur under the physico-chemical conditions encountered in the GI tract. Intestinal viscosity of β-glucan is determined by its concentration, solubility and Mw, features that may underlie the variation in clinical effects seen across different controlled trials. Indeed, recent studies have determined the effects of increasing viscosity upon physiological efficacy. In a large randomized controlled trial the capacity of oat products to lower serum cholesterol was directly proportional to the Mw of the β-glucan component, with high (2.2 million g/mol) and medium (850,000 g/mol or 530,000 g/mol) Mw β-glucans significantly reducing LDL cholesterol and a low (210,000 g/mol) Mw β-glucan proving ineffective (4). Furthermore, both increasing viscosity (57) and increasing Mw (58) of β-glucan have been shown to increase the ability to regulate post-prandial glucose concentrations in human subjects. The beneficial effects of high-Mw oat β-glucan are therefore thought to be related to an ability to form a viscous solution in the intestine. The mechanisms by which viscous β-glucans modulate host cholesterol are thought to be linked to modulation of host bile acid metabolism (59). Viscous β-glucan is hypothesized to interact with bile acids and prevent their reabsorption in the terminal ileum. This results in increased fecal excretion of bile acids, thereby increasing the requirement for de novo synthesis of bile acids from cholesterol, a mechanism which lowers systemic LDL cholesterol (59). 
In support of this, a number of animal studies (60,61) and human intervention studies have shown elevated fecal bile acid excretion following consumption of oats or β-glucan (5,(62)(63)(64). This is matched by evidence for elevated de novo bile acid synthesis following consumption of oats, both in animals (through measurement of the activity of relevant liver enzymes including Cyp7A1) (61) and in humans (through measurement of 7 alpha-hydroxy-4-cholesten-3-one (HC), a marker for bile acid synthesis) (61,64). A comprehensive study in pigs demonstrated that oat β-glucan feeding increased bile acid excretion during the early feeding period but that bile acid excretion actually decreased in this group following dietary adaptation. The study pointed to alterations in gut physiology, reduced bile acid uptake, and a reduction in cholesterol absorption, along with possible microbiota changes, that could explain the reduction in systemic cholesterol levels in the β-glucan-fed group (65). The authors indicated that oat β-glucan significantly influenced bile acid and cholesterol metabolism in the host along with a likely beneficial (prebiotic) effect on the gut microbiota which enhanced both the generation of the secondary bile acid UDCA and cholesterol digestion in the gut (65). The possible effects of such microbiota-mediated mechanisms are outlined further in the sections below.

Alterations to Gut Microbiota and Effects on Cholesterol Metabolism

Whilst the precise mechanisms remain to be elucidated, it is clear that the gut microbiota plays a significant role in host cholesterol homeostasis. Very early studies indicate that antibiotic treatment of mice inhibits cholesterol metabolism, leading to accumulation of systemic cholesterol (66). Also, germ-free rats accumulate greater levels of cholesterol from elevated cholesterol diets compared to conventionally raised animals (67). 
Germ-free rats demonstrated lower levels of systemic catabolism of dietary cholesterol (68) and also showed reduced fecal excretion of both total sterols and bile acids in particular (69). The data suggest that increased bile acid synthesis from cholesterol is a mechanism for lowering of systemic cholesterol levels (68) and is influenced by the activities of the gut microbiota. Furthermore, there is significant evidence that transient alteration of the microbiota through the administration of probiotic bacteria can be beneficial in lowering systemic cholesterol (see sections below). The data suggest a role for the microbiota in the maintenance of cholesterol homeostasis in the host and suggest that alteration of the community structure of this microbial population has the capacity to influence cholesterol metabolism (70). More detailed studies are necessary in order to pinpoint specific microbial genera or species in the gut which may influence host cholesterol metabolism. Such information is emerging for models of lipid metabolism, weight gain and adiposity. For instance, murine studies pointed to alterations in the relative ratios of two major phyla, Bacteroidetes and Firmicutes, in promoting weight gain (71,72). The findings also correlated with human studies in obese volunteers subjected to a calorie-restricted diet (73), and the obesity phenotype was transferable by transplant of the microbiota from either obese mice or obese humans to microbiota-naïve mice, thereby showing a functional role of the microbiota in this phenomenon (74)(75)(76). Other studies have shown a clear link between microbial gene richness and metabolic health. Individuals with low gene richness in the microbiota are more likely to display increased adiposity and dyslipidemia (77). β-glucan is resistant to depolymerization by gastric and pancreatic enzymes and transits to the colon for microbial fermentation. 
Alteration of bile acid metabolism is also known to impact the microbiota (78) and is therefore a further possible mechanism by which β-glucans may modulate microbial gut populations. There is significant evidence from models of increasing complexity (from in vitro fermentation models, to rodent and porcine models) and from human intervention studies that oat fibers have a significant impact upon the compositional structure of the gut microbial community. For critical reviews of the effects of β-glucan upon the microbiota the reader is referred to Jayachandran et al. (79) and Sanders et al. (80). Relatively simple in vitro fermentation studies, which mimic the human colon using human fecal bacterial populations, allow for highly controlled analyses of bacterial responses to dietary components but lack the biological complexity of in vivo models. In vitro fermentation studies have shown that addition of oat or barley β-glucan directly promotes the growth of gut bacterial populations (including the Clostridium histolyticum subgroup and Bacteroidetes/Prevotella groups) (81). A recent study using an in vitro batch culture system demonstrated that oat β-glucan induced proliferation of Bacteroidetes but was not bifidogenic. In contrast, growth of Bifidobacteria was stimulated by oat-derived polyphenols (82). In another study, oat flakes promoted growth of the Bacteroides/Prevotella group or Bifidobacterium group in a fecal slurry, with effects related to the size of the oat flakes (31). A recent study indicates the ability of oat β-glucan to promote growth of Prevotella and Roseburia species in an in vitro fermentation, with concomitant production of the short chain fatty acids (SCFA) propionate and butyrate (83). Overall these studies indicate that oat β-glucan and other components of whole oats can influence populations of biologically relevant bacterial taxa. 
Recent studies in mice demonstrated that oat β-glucan feeding increases populations of Bacteroides species and Prevotella species, whereas bacteria from the phylum Firmicutes were decreased (84). Zhou et al. similarly showed that whole grain oat flour causes significant alterations to microbiota community structure relative to a control diet, with alterations to the Prevotellaceae, Lactobacillaceae, and Alcaligenaceae families (85). Importantly, the microbiota changes correlated with a significant lowering of total cholesterol and non-HDL cholesterol in animals fed whole grain oat flour (85). Ryan et al. showed a significant reduction in markers of cardiovascular disease risk in an apoE-deficient mouse model following feeding with oat β-glucans, which correlated with an increase in the population of the phylum Verrucomicrobia and elevated production of n-butyrate (32). This is particularly interesting as Akkermansia muciniphila (a key member of the Verrucomicrobia) has been functionally linked to improved gut barrier function, reduction in obesity, and improved metabolic health (86). Early studies in rats utilized a culture-based approach and demonstrated that feeding of oat flour formulations led to an increase in Bifidobacteria populations in the gut (87). Insoluble high-viscosity oat β-glucan enriched for Clostridium cluster I in a pig model, with associated increases in butyrate production (88). When microbial composition studies were performed in humans, low-Mw barley β-glucan did not appear to alter microbial representation; however, high-Mw barley β-glucan was associated with higher levels of the phylum Bacteroidetes while Firmicutes levels were reduced (89). 
These alterations were accompanied by reductions in CVD risk factors, including BMI, blood pressure and circulating triacylglycerol (TAG), over the 35-day study period, and the authors identified specific microbial taxa whose abundance correlated with markers of disease risk (including total cholesterol and LDL-C) (89). Another study used fluorescence in situ hybridization with probes specific for selected bacterial genera and showed that consumption of an oat-based granola breakfast cereal was associated with a cholesterol-lowering effect concomitant with elevated Bifidobacterium and Lactobacillus species. As these species are associated with BSH activity, and as members of these species have been previously used as probiotics which can lower serum cholesterol levels, the authors suggested that the lowering of serum cholesterol in this study may be linked to alterations to bile acid metabolism and that further studies are warranted. No significant changes were seen in the particular species of Bacteroides, Atopobium, or Clostridium targeted in this study (19). Overall, intervention studies utilizing sources of β-glucan suggest that consumption can promote alterations to the gut microbiota, with some studies suggesting a potentially beneficial (prebiotic) effect (19).

Mechanisms by Which Oat β-Glucan May Influence Host Cholesterol Metabolism Through Alterations in BSH Activity of the Microbiome

Evidence from in vitro fecal fermentation studies and rodent and human intervention studies suggests that oat β-glucan consumption increases levels of bacteria in the gut with known BSH activity (reviewed above). A variety of studies have demonstrated an elevation of Bifidobacterium, Bacteroides, and Lactobacillus species in the gut following oat β-glucan consumption. These bacterial genera are known to predominately contain BSH-positive species (90). 
There is therefore good evidence for an effect of oat β-glucan on the host microbiota, with a predicted influence upon those species that are BSH-positive. This would suggest that consumption of oat β-glucan has the capacity to alter host bile acid profiles. However, further work is necessary to determine whether oat β-glucan can effectively modulate host bile acid profiles in humans as predicted by these microbiota analyses. There is good evidence that BSH-active probiotic interventions can reduce serum LDL-C, providing a direct link between elevated BSH activity and regulation of host cholesterol [reviewed in Jones et al. (24) and below]. Whilst the evidence for a cholesterol-lowering activity of BSH is strong, the precise mechanisms remain elusive and most likely reflect an alteration in both the physico-chemical properties of bile acids and the molecular signaling potential of the bile acid pool for FXR. Bacterial BSH activity is known to alter the host bile acid signature through deconjugation of conjugated bile acids. Unconjugated bile acids have reduced micellar activity and therefore are less effective mediators of cholesterol absorption in the host relative to conjugated bile acids (24). In support of this, administration of the strongly BSH-positive probiotic L. reuteri NCIMB 30242 strain to humans lowered serum LDL-C and lowered absolute plasma concentrations of plant sterols (surrogate markers of cholesterol), suggesting decreased inward transport of cholesterol in the gut (23). Therefore, elevated bacterial BSH activity is likely to directly reduce cholesterol uptake from the lumen, and this may provide a general mechanism by which BSH regulates systemic cholesterol in the host (24). Unconjugated bile acids are also more likely to be eliminated in the feces, thereby driving a requirement for de novo bile acid synthesis and an associated reduction of systemic cholesterol (68). Indeed, Joyce et al. (25) showed that expression of highly active L. 
salivarius BSH could significantly reduce LDL cholesterol, total cholesterol and also serum triglycerides in mice. In humans, BSH-active L. acidophilus administered over 6 weeks could reduce plasma levels of both total cholesterol and LDL-cholesterol (91). BSH-active L. reuteri NCIMB 30242 significantly reduced LDL-C and total cholesterol in a human randomized controlled study, with elevated free bile acid levels detected in circulation (22,23). The reduced reabsorption of bile acids also reduces their potential to interact with FXR and may lead to a reduction in stimulation of FXR. However, unconjugated bile acids can also act as potent ligands for the FXR and are also the substrates for further bacterial conversions of bile acids to secondary bile acids, which are also potent FXR agonists. Therefore, another possible hypothesis is that FXR is activated in the gut by BSH activity, leading to increased FXR signaling in the gut and expression of the hormone FGF19 by enterocytes, leading to a reduction in hepatic bile acid synthesis. More research is necessary to understand the chemical and physiological parameters which dictate whether FXR is stimulated through local bacterial BSH activity. In the absence of such research we herein consider the evidence for two potential mechanisms by which elevated BSH activity may impact host systemic cholesterol levels. In hypothesis 1, FXR is not activated and de novo bile acid synthesis is increased. In hypothesis 2, FXR is activated and FGF19 is elevated, resulting in a reduction in bile acid synthesis and an increase in other mechanisms by which cholesterol levels are potentially modulated in the host. Given a number of recent studies which show a decrease in FXR activation in the gut following administration of BSH-active probiotics, we favor hypothesis 1 as most likely to represent the scenario in which BSH activity is increased in the gut microbiota following dietary interventions (as is potentially the case for β-glucan consumption). 
Hypothesis 1: Elevated Bacterial BSH Activity Can Reduce Engagement of FXR in the Intestine, Increase Bile Acid Excretion and Increase de novo Synthesis of Bile Acids in the Liver (Figure 1)

Conjugated bile acids are actively reabsorbed via specific transport systems into enterocytes in the ileum, whereas unconjugated bile acids are not subject to this specific reuptake system and are passively absorbed at a lower rate (92). BSH activity decreases the levels of conjugated bile acids which can be actively transported, and recent in vivo evidence suggests that the resulting deconjugated bile acids are less efficiently reabsorbed into enterocytes (93). Unconjugated bile acids then enter the colon, where conversions to secondary bile acids can take place (94), or are excreted in feces. As the FXR is an intracellular nuclear receptor, lower levels of cellular absorption of bile acids will lead to a lowering of FXR activation in the terminal ileum. In support of this hypothesis, it has been shown that monocolonization of germ-free rats with BSH-active bacterial species significantly promotes fecal excretion of bile acids (95). More recently, oral inoculation of rodent models with BSH-producing probiotic bacteria was shown to reduce intestinal FXR activation relative to controls, in tandem with an alteration of local bile acid profiles. Inoculation of mice with a highly BSH-active polybiotic mixture of organisms (VSL#3) significantly reduced FGF15 (a marker of FXR activation), increased expression of hepatic Cyp7a1 and Cyp8b1, and increased bile acid synthesis (93) (see Figure 1). In a separate study, oral administration of L. plantarum CCFM8661 to mice induced similar effects: inhibition of the FXR-FGF axis, elevated Cyp7a1 expression and elevated bile acid synthesis (96). In another study in mice administered a high-fat diet, oral inoculation with BSH-active Lb. 
rhamnosus LGG reduced serum cholesterol in concert with a downregulation in FXR transcription in the liver and increased expression of hepatic Cyp7a1 (but not Cyp8b1) (97). An earlier study also linked the cholesterol-lowering effect of a Lb. plantarum probiotic strain to an increase in expression of Cyp7a1 in mice, indicative of downregulation of FXR-mediated feedback (98). Furthermore, a BSH-active probiotic, Lb. reuteri NCIMB 30242, has been shown to reduce serum LDL-C in humans with a concomitant increase in total bile acid levels (99). As bile acids are synthesized from cholesterol, increased de novo synthesis of bile acids contributes to cholesterol catabolism in the host, leading to a lowering of systemic cholesterol levels (68). Another nuclear receptor, LXR, is indirectly repressed by the FXR. Therefore, another consequence of reduced FXR signaling is an elevation of LXR activity. This stimulation of LXR leads to an increase in expression of the cholesterol efflux system ABCG5/8 in enterocytes (100) and increased excretion of cholesterol (54,101). A recent study demonstrated that the BSH-active probiotic strain Lactobacillus plantarum LRCC 5273 reduces serum cholesterol in mice along with an increase in expression of Cyp7a1, an increase in both hepatic and gut LXR activity and elevated expression of gastrointestinal ABCG5/8, allied with a decrease in expression of the gene encoding NPC1L1 (a cholesterol uptake system) (102). The authors propose a model in which elevated BSH activity promotes TICE mediated through LXR activation, which involves elevated excretion of cholesterol from the system and reduced cellular uptake (102). In support of this, another study in mice demonstrated that an increase in BSH activity in the lumen can increase transcription of Abcg5/8 concomitant with a reduction in serum cholesterol (25).
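The causal chain proposed under hypothesis 1 (greater fecal excretion of deconjugated bile acids, weaker FXR/SHP feedback, higher Cyp7a1-mediated bile acid synthesis, net consumption of hepatic cholesterol) can be caricatured with a toy two-pool mass-balance model. To be clear, the model structure and every rate constant below are illustrative assumptions made only for this sketch; none of them is taken from the studies cited above.

```python
# Toy two-pool mass-balance sketch of hypothesis 1 (illustrative only;
# all parameters are arbitrary, dimensionless choices, not measured values).
# B = bile acid pool, C = cholesterol pool.
#   dB/dt = r0 / (1 + B/K) - f_excrete * B   (Cyp7a1 flux repressed by B, i.e.
#                                             FXR/SHP feedback; loss = excretion)
#   dC/dt = intake - r0 / (1 + B/K) - elim * C

def steady_state_cholesterol(f_excrete, r0=2.0, K=1.0, intake=3.0,
                             elim=1.0, dt=0.01, t_end=200.0):
    """Euler-integrate the two pools to steady state and return C."""
    B, C = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        synthesis = r0 / (1.0 + B / K)   # bile acid synthesis from cholesterol
        B += dt * (synthesis - f_excrete * B)
        C += dt * (intake - synthesis - elim * C)
    return C

c_low_bsh = steady_state_cholesterol(f_excrete=0.5)   # little fecal excretion
c_high_bsh = steady_state_cholesterol(f_excrete=2.0)  # BSH-enhanced excretion
```

At steady state the sketch reproduces the qualitative claim of hypothesis 1: raising the excretion rate lowers the bile acid pool, weakens the synthesis feedback, and so drains the cholesterol pool (from about 2.2 to about 1.8 in these arbitrary units).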
Whilst bile acid synthesis is controlled by the gastrointestinal-hepatic FXR-FGF axis, another mechanism of feedback inhibition is mediated directly via circulating bile acids. Bile acids entering the enterohepatic circulation have the potential to directly influence FXR signaling in the liver, a process which influences de novo bile acid synthesis and cholesterol metabolism (103). A consequence of the elevated excretion of bile acids in feces as outlined above may be reduced re-circulation of bile acids and downregulation of liver FXR activity (24). Downregulation of FXR leads to a reduction in SHP and consequently elevated activity of LXR (24). Indeed, the recent study cited above suggests that elevated BSH activity in the gut results in elevated LXR expression in the liver in mice (102). The consequences of reduced hepatic FXR signaling include increased expression of Cyp7a1 and therefore increased de novo bile acid synthesis. In addition, LXR activation leads to the expression of hepatic ABCG5/8 that promotes cholesterol excretion into bile (104).

Hypothesis 2: Elevated Bacterial BSH Activity Can Activate FXR in the Intestine, Elevate FGF19 and Reduce de novo Synthesis of Bile Acids in the Liver (Figure 2)

Whilst both conjugated and unconjugated bile acids can activate the FXR, unconjugated bile acids generated through elevated bacterial BSH activity have a greater ability to enter target cells without a specific transport mechanism (106). There is evidence that FXR is activated via BSH activity in the gut. One study indicates that administration of a BSH-active probiotic, Lb. reuteri NCIMB 30242, to humans lowers LDL-C with levels of FGF19 trending toward an increase (though the increase was not statistically significant). Furthermore, somewhat counterintuitively, there was evidence for an increase in bile acid synthesis in subjects receiving the probiotic, so it is difficult to determine whether the FXR-FGF axis was indeed engaged in this study (99). This is in contrast to numerous studies in animals indicating the opposite effect (see mechanism 1).
Therefore, further human intervention studies are needed to determine whether FXR may be stimulated by microbial BSH activity in humans. Other studies demonstrate that administration of antibiotics (107) or the antioxidant Tempol (108) decreases gut bacterial BSH activity, leading to a reduction in gastrointestinal FXR signaling in mice. The corollary of these findings would suggest that a physiological role of microbial BSH is to enhance FXR signaling in the gut (106,108). However, the effects of Tempol or antibiotics upon the microbiota are so profound in these experiments that it is difficult to equate these results to those expected following consumption of oat β-glucan, where subtle increases in BSH activity, akin to probiotic treatments, are expected.

FIGURE 2 | Under specific circumstances which remain unclear, the Farnesoid X receptor (FXR) may be stimulated in the gastrointestinal tract. There is some evidence from a single human probiotic study that FXR may be stimulated through BSH activity, but further studies are warranted. (A) In a model where gastrointestinal FXR is stimulated, unconjugated bile acids (UC-BA) may access the FXR through non-specific passage through cell membranes. (B) There is good evidence that intestinal FXR activation promotes the Transintestinal Cholesterol Excretion (TICE) system for the net efflux of cholesterol into the feces. FXR activation leads to elevated intestinal production of FGF15/19, which feeds back to inhibit bile acid synthesis. Via a process that involves FXR, this results in a net reduction in hydrophobic BA species but a relative increase in hydrophilic BA species which are released into the small intestine. As hydrophilic BAs poorly associate with cholesterol, this may reduce cellular uptake of cholesterol in the gut. (C) There is also good evidence that elevated hepatic FXR activation increases systemic reverse cholesterol transport (RCT) for the mobilization of cholesterol from macrophages (as HDL-C) to the liver for excretion. (D) There is evidence to suggest that engagement of the FXR reduces the secretion of VLDL into the circulation, thereby reducing the systemic circulation of this atherogenic molecule. (E) There is also evidence to suggest that engagement of the FXR increases the uptake of LDL into the liver from the circulation, thereby reducing the systemic circulation of this atherogenic molecule. (F) Finally, there is evidence to suggest that FXR activation increases hepatic ABCG5/8 with potential to promote biliary secretion of cholesterol.

There is significant evidence to support a role for FXR signaling in cholesterol homeostasis. However, most of this evidence has been generated through the use of potent FXR agonists or through experiments using knock-out mice. There is relatively little evidence directly linking these effects to microbiota changes. Oral delivery of FXR agonists has been shown to decrease systemic LDL-C or non-HDL-C (68,69) and to decrease atherosclerotic plaque formation in mouse models of atherosclerosis (66,68,69). Furthermore, mice deleted in FXR display hypercholesterolemia (23,50,109). The mechanisms by which the FXR regulates systemic cholesterol metabolism are thought to include regulation of cellular LDL-C uptake, reduction in plasma VLDL, modulation of plasma HDL-C levels, regulation of reverse cholesterol transport (RCT) and possible regulation of trans-intestinal cholesterol excretion (TICE) [reviewed in Li and Chiang (27)]. Studies in animal models suggest that FXR agonists can lower plasma VLDL levels (110,111). Recent studies demonstrate that FXR agonists reduce hepatic secretion of VLDL by suppressing expression of PLA2G12B, a protein involved in assembly and secretion of potentially atherogenic VLDL (111).
In addition, FXR knockout mice demonstrate reduced clearance of plasma HDL-C, and further studies suggest that hepatic FXR activation increases RCT from macrophages and fecal excretion of cholesterol (69,100). The cholesterol transport system ABCG5/8 is positively regulated by FXR (in addition to regulation via LXR) and there is evidence to suggest that FXR activation increases hepatic ABCG5/8 with potential to promote biliary secretion of cholesterol [reviewed in Li and Chiang (27)]. ABCG5/8 is also expressed in enterocytes, where it plays a significant role in the elimination of cholesterol into the gut lumen through TICE [reviewed in de Boer et al. (54)]. In mice, engagement of hepatic FXR using an agonist significantly reduced serum cholesterol, increased RCT and increased the hydrophilic bile acid pool. As hydrophilic bile acids less efficiently associate with cholesterol, this was thought to be a factor which reduced intestinal absorption of cholesterol, thereby lowering systemic cholesterol levels (112). Other studies have indicated that FXR agonists stimulate TICE to significantly increase cholesterol excretion. TICE was not evident in knock-out mice lacking intestinal FXR, indicating that this pathway is highly dependent upon FXR activation in the gut (101). However, it should be appreciated that activation of LXR in the gut (allied to downregulation of FXR) can also induce the TICE system (see mechanism 1) (101).

Propionate and Other Short-Chain Fatty Acids (SCFA)

SCFAs are microbial metabolites that are particularly associated with fermentation of dietary fibers. Exposure of fecal bacteria to oat bran fractions using in vitro model systems has demonstrated an ability of oat β-glucan to stimulate SCFA production by the gut microbiota (81,113-115). In many of the studies, propionate predominated amongst the SCFAs stimulated by oat bran fermentation (81,113,114).
Animal studies support the finding that oat fermentation alters the microbiota and elevates SCFA production in the colon. In mice, oat-derived β-glucan consumption led to an alteration to the fecal microbiota and an elevated level of propionate in the colon (84), whilst in another recent study in ApoE− mice, feeding of oat β-glucan led to elevated n-butyrate levels (32). In rats, oat β-glucan feeding also increased overall SCFA levels (87,116,117). Similarly, porcine feeding studies indicate an increase in overall SCFA levels following consumption of oat β-glucans or similar feed additives (118,119), with elevated butyrate in particular being evident in some studies (120,121). In one study, SCFAs were lower in pigs fed oat products relative to the control (122). However, overall the animal feeding studies suggest an influence of oat β-glucan on the gut microbiota which results in an elevated production of SCFAs. Whilst studies have examined the effects of oats on human-derived microbial populations in ex vivo models, relatively few studies have examined the effects of oat consumption on SCFA production in human intervention studies. In a randomized clinical trial, oat β-glucan resulted in reduced cholesterol concomitant with an increase in total SCFA and in particular butyrate (123). A similar study determined that total SCFAs were elevated in subjects fed a β-glucan rich oat bran for 8 weeks (124). Another randomized clinical trial demonstrated the efficacy of bran β-glucans in lowering cholesterol, with effects linked to an increase in SCFAs (in particular propionate) concomitant with changes to the microbiota (125). Another study demonstrated an increase in fecal SCFAs in subjects consuming a high Mw barley β-glucan, in concert with an increase in fecal bile acid excretion (126). The same effects were not seen in subjects consuming a low Mw barley β-glucan (126).
In contrast, a recent study which investigated the effects of a whole grain oat granola upon microbiota markers failed to show an influence upon fecal SCFA levels, despite a significant lowering of TC and LDL cholesterol levels (19). The authors suggested that in future studies measurement of circulating SCFAs would be more informative in order to determine physiologically relevant systemic effects. Indeed, as SCFAs are rapidly absorbed by enterocytes in the gut, their presence may be a rather transient marker of gut microbial activity. In this respect, fecal fermentation studies with controlled human microbiota samples may represent an accurate measure of the influence of biotic factors on SCFA production in the gut (as the SCFA will not be absorbed in this model). A study by Carlson et al. recently demonstrated that a commercially available source of oat β-glucan significantly increased propionate production by the microbiota in a human fecal fermentation system (127). The work confirms other earlier studies which demonstrated that addition of sources of oat β-glucan to in vitro microbial fermentation systems can increase SCFA (in particular propionate) production (81,114,128). The signaling and health-promoting effects of SCFAs are relatively well-established (129). Luminal propionate engages specific receptors (GPR41 and GPR43) to influence local production of hormones, and regulates satiety and intestinal transit times (129). Propionate and butyrate also mediate anti-inflammatory effects in the host through interaction with GPR43 expressed in Treg cells (in the case of propionate) or interaction with GPR109A on dendritic cells (in the case of butyrate) (129,130). Of the SCFAs, propionate in particular plays a significant role in modulation of cellular lipid metabolism, resulting in effects that may be linked to the proposed cholesterol-lowering effect of propionate (30).
However, more studies are required to definitively prove these links and address mechanisms (30). Exposure of rat hepatocytes to propionate in culture resulted in a reduction in cellular cholesterol synthesis (131), an effect that was potentially linked to reductions in acetyl-CoA synthase activity or acetate uptake, both of which are features of cholesterol metabolism [reviewed in Hosseini et al. (30)]. A number of studies have proposed an effect of SCFAs (including propionate) in the lowering of cholesterol markers in animal or human systems. Positive correlations between the cholesterol-lowering properties of probiotics and elevated SCFAs (notably propionate and butyrate) have been made in murine and rat intervention studies (132,133). Positive correlations have also been made between the cholesterol-lowering properties of fibers other than β-glucan and elevated levels of SCFAs (134,135). More direct causal effects can be seen when subjects either consume dietary SCFAs or they are directly infused. In rats, dietary supplementation with propionate led to a significant decrease in plasma TC levels (136). A more recent study demonstrated that dietary feeding of individual SCFAs (propionate, acetate or butyrate) was sufficient to lower TC and non-HDL cholesterol in hypercholesterolaemic hamsters (137). The effects were correlated with increased bile acid excretion in the feces and elevated expression of enzymes involved in bile acid synthesis (137). A recent study demonstrated that oral infusion of a mixture of acetate, butyrate and propionate can reduce serum cholesterol levels in pigs (138). In contrast, a previous study in which pigs were infused with propionate directly into the caecum failed to show a cholesterol-lowering effect (139). To our knowledge, studies investigating the effects of dietary supplementation with SCFAs on cholesterol levels in humans are relatively limited.
In two separate studies, consumption or infusion of additional dietary propionate did not alter markers of lipid metabolism (140) or cholesterol (141) in healthy volunteers.

Microbial Exopolysaccharide (EPS) in Cholesterol Homeostasis

In addition to modulation of bile acid profiles and production of SCFAs, gut microorganisms can influence the host through toll-like receptor agonists and other microbial components (including EPS). EPS is composed of repeating carbohydrate moieties, either strongly or loosely associated with the peptidoglycan layer of many lactic acid bacteria (including Lactobacillus and Bifidobacterium species) (142,143). Given that these bacterial populations may be altered by consumption of oat β-glucans (19), we predict that EPS is likely to play a role as an effector of microbe-host crosstalk influenced by potential prebiotic effects of β-glucans. EPS is thought to protect the bacterial cell from environmental stressors and to improve survival in the GI tract, but also plays a role in microbe-host interactions [reviewed in Ryan et al. (143)]. Production of EPS has been associated with the immunoregulatory properties of specific strains used as probiotics (144) and also plays a role in lowering of cholesterol. The Pediococcus parvulus strain 2.6 produces an EPS that resembles the structure of oat β-glucan (143), and the strain has been shown to regulate serum cholesterol in hypercholesterolaemic volunteers consuming a fermented beverage made with P. parvulus 2.6 (145). London et al. showed that a Lactobacillus strain engineered to produce EPS demonstrated a greater cholesterol-lowering effect in a mouse model of atherosclerosis than an isogenic non-producer (146). Furthermore, a Lb. mucosae DPC6426 strain which naturally produces high levels of EPS was capable of reducing lipid markers (TC and serum triglyceride) in the same model system (146).
EPS extracted from Lactobacillus strains caused a reduction in triacylglycerol lipid accumulation in an in vitro adipocyte model, and a reduction in levels of triacylglycerol and cholesterol in murine fat tissue when mice were injected with EPS. The work demonstrated a role for TLR2 in the cholesterol- and lipid-lowering effects of EPS (147). Overall, the data suggest that alteration to the relative levels and chemical isotypes of EPS in the GI tract through alterations to the microbiota may have the potential to modulate host cholesterol metabolism, potentially through a TLR2-mediated mechanism. However, further mechanistic studies are required.

Microbial Cholesterol Assimilation and Metabolism

Numerous bacterial genera found throughout the biosphere have the capacity to metabolize cholesterol. Genomic approaches have identified likely mechanisms by which some species can degrade cholesterol, but others remain uncharacterized [reviewed in Garcia et al. (148) and Bergstrand et al. (149)]. A number of gut-dwelling bacterial species have the capacity to transport and/or metabolize cholesterol, with the potential mechanisms being established in Eubacterium coprostanoligenes (148), an organism that can actively metabolize cholesterol to coprostanol in the GI tract in animal models (150,151). Lactobacillus acidophilus, Lb. casei, and Lb. bulgaricus have been shown to assimilate cholesterol and to reduce cholesterol to coprostanol through the activity of a cholesterol reductase (152). Rationally selected Lactobacillus strains were capable of reducing serum TC and LDL cholesterol in rats fed a lipid-rich diet, a finding that correlated with elevated SCFAs and bile acid excretion in these animals (132). Recent work has identified that Bacteroides spp. isolated from the gut can produce a compound called commendamide which has the capacity to degrade cholesterol and may represent a bacterial adaptation to the gut environment (153).
Human intervention studies have indicated an increase in Bacteroidetes in humans following consumption of β-glucan (89), so there is potential for this to represent a mechanism by which microbiota changes may influence cholesterol metabolism in the host. More work is necessary to establish the cholesterol-metabolizing activities of the gut microbiota in health and disease. However, it is clear that alterations to gut microbial community structure have the potential to alter this important physiological function.

CONCLUSIONS AND FUTURE DIRECTIONS

The significant clinical evidence for the cholesterol-lowering effects of β-glucan has led health authorities in the US, Europe and elsewhere to permit health claims attributing a lowering of CVD risk to consumption of specific amounts (generally 3 g per day) of β-glucan. The mechanisms by which β-glucan may lower host cholesterol levels are thought to be linked to an ability to prevent re-circulation or enhance excretion of bile acids, effects that are potentially related to the gel-forming properties of β-glucan. As bile acids are a major repository of cholesterol in the host, this leads to an overall reduction in cholesterol from the system. However, in recent years our knowledge of both cholesterol metabolism and the physiological role of the gut microbiota has increased significantly. It has become clear that diet (including consumption of β-glucans) has the potential to significantly alter the composition of the gut microbiota. In turn, studies have shown that the composition of the gut microbiota is a major regulator of both cholesterol and bile acid metabolism in the host. Studies in pigs have shown that β-glucan feeding alters the ability of intestinal cells to reabsorb bile acids and also alters the bile acid profile in the host, suggesting that changes in the microbiota are concomitant with the cholesterol-lowering effect (65).
Other studies have confirmed an apparent "prebiotic" effect whereby the microbiota is altered through consumption of oat β-glucan in a manner that is suggestive of an ability to alter the bile acid metabolizing potential of the gut microbial community (19). In the absence of studies which precisely analyze the effect of β-glucan consumption on both the microbiota and bile acid profiles, we outlined two hypotheses by which cholesterol metabolism may be impacted by gut microbiota-mediated alterations (section Mechanisms by Which Oat β-Glucan May Influence Host Cholesterol Metabolism Through Alterations in BSH Activity of the Microbiome). We propose a microbe-centered model in which microbial bile acid metabolism results in reduced engagement of the host bile acid receptor FXR, stimulating enhanced de novo bile acid synthesis and enhanced TICE (Figure 1). Furthermore, in this review we outline that other microbe-host interactions may contribute to the cholesterol-lowering effects of β-glucan through stimulation of SCFA production, cholesterol degradation or via the effects of microbial EPS. We propose that future studies should utilize a systems biology approach toward understanding the complex interplay between β-glucan, the microbiota and mechanisms in the host that regulate serum cholesterol levels. Data which links consumption of β-glucan to bile acid changes in the host and identifies host metabolic changes (including to levels of FGF19) will be invaluable for enhancing our understanding of the mechanisms by which oat β-glucan mediates its cholesterol-lowering effects.

AUTHOR CONTRIBUTIONS

SJ and CG wrote and edited the manuscript. AK and LF edited, provided critical feedback, and contributed significantly to the writing of the manuscript.

ACKNOWLEDGMENTS

SJ and CG acknowledge the funding of APC Microbiome Ireland by the Science Foundation of Ireland Centers for Science, Engineering and Technology (CSET) programme (Grant Number SFI/12/RC/2273).
SJ is also funded by SFI-EU Cabala 16/ERA-HDHL/3358. SJ and CG received a contribution to research costs for the writing of this article from PepsiCo. The views expressed in this manuscript are those of the authors and do not necessarily reflect the position or policy of PepsiCo Inc.
Design of Medium Depth Drainage Trench Systems for the Mitigation of Deep Landsliding

For those slopes where the piezometric regime acts as internal landslide predisposing factor, drainage may represent a more effective mitigation measure than other structural interventions. However, drainage trenches have been generally considered as mitigation measure solely for shallow landslides. More recently, instead, some authors show that the variation in piezometric conditions at large depth is not negligible when medium depth drainage trenches are involved. The paper presents the results of finite element analyses of the transient seepage induced by the installation of systems of drainage trenches of different geometric parameters, and of the effect of the drainage system on the stability factor of the slip surface, through 2D limit equilibrium analyses. The pilot region is the Daunia Apennines, where field studies have led to recognize for most of the landslides a "bowl-shaped" slip surface; the results for the Fontana Monte slope at Volturino (Italy), selected as prototype landslide in the assessment of the stabilization efficacy of deep drainage trench systems, are discussed in the following. The study aims at providing indications about the design of the drainage trenches to reduce the pore water pressures on a deep slip surface of such type.

Introduction

Historically, the management of landslide risk within chain areas has been hardly sustainable, due to the cost of the engineering interventions [1-5]. This is especially the case in regions where deep slow landslides are widespread, as with deep landslides, the installation of earth retaining structures, representing the traditional mitigation measure, does not provide long-lasting successful mitigation effects.
This is because of the large size of deep landslide bodies (maximum depth higher than 30 m, according to Cruden & Varnes [6]), whose kinematics may not be influenced significantly by the installation of either transversal pile diaphragms (e.g., at the landslide toe, or at mid-height; Figure 1a,b), or longitudinal ones (Figure 1a-c [7]). Eventually, a combination of various retaining diaphragms of significant depth (e.g., piles of more than 40 m depth in Figure 1) might provide a mitigation effect. Such intervention strategy, though, is recognizably highly expensive and does not necessarily restrain the development of new shear bands and further progressive failure [8]. For those slopes where the piezometric regime acts as landslide predisposing factor [9], drainage may represent a more effective mitigation measure. This is the case, for example, in deep landslides whose displacement rates are related to the piezometric excursions at depth, which are in turn connected to the slope-atmosphere interaction [10,11]. However, the traditional drainage systems used for the mitigation of deep landsliding, such as deep drainage wells combined with sub-horizontal drainage pipes (Figure 2 [3,12]), are also costly and of expensive maintenance. Furthermore, the drainage capacity of the sub-horizontal drainage pipes is easily jeopardized by their interaction with the moving landslide, which may cause the pipe failure. In the past decades, significant research has been addressed to the hydraulic efficiency of drainage trench systems, which are a robust and relatively inexpensive engineering work [3,12]. In particular, since the late 1970s, research in the field of slope stabilization has paid attention to the effects of drainage trench systems on the slope stability factor, F, in order to optimize the design of such mitigation measure. However, these drainage trenches have been generally considered as mitigation measure solely for shallow landslides [1,3,13-15], based on in situ monitoring after the installation of longitudinal trench systems.
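The role that the pore water pressure u on the slip surface plays in the stability factor F can be illustrated with the simplest limit equilibrium scheme, the infinite slope with a planar slip surface. The parameters below are arbitrary illustrative values, not those of the Fontana Monte slope; design verifications for bowl-shaped surfaces require 2D limit equilibrium analyses, as noted above.

```python
from math import cos, sin, tan, radians

def infinite_slope_F(c_kPa, phi_deg, gamma, z, beta_deg, u_kPa):
    """Limit-equilibrium stability factor of an infinite slope with a planar
    slip surface at depth z (m) and pore water pressure u on that surface:

        F = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')]
            / [gamma*z*sin(beta)*cos(beta)]
    """
    beta, phi = radians(beta_deg), radians(phi_deg)
    resisting = c_kPa + (gamma * z * cos(beta) ** 2 - u_kPa) * tan(phi)
    driving = gamma * z * sin(beta) * cos(beta)
    return resisting / driving

# Hypothetical clayey slope: drainage lowers u on the slip surface
# from 200 kPa to 100 kPa (illustrative values only).
F_before = infinite_slope_F(c_kPa=10, phi_deg=20, gamma=20, z=15,
                            beta_deg=10, u_kPa=200)
F_after = infinite_slope_F(c_kPa=10, phi_deg=20, gamma=20, z=15,
                           beta_deg=10, u_kPa=100)
```

With these numbers, halving u on the slip surface takes F from below unity to well above it, which is precisely the mechanism a drainage system aims to exploit.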
Figure 2. Drainage wells with sub-horizontal drainage pipelines [12].

Later on, based on the results of a set of parametric numerical analyses performed with reference to the same scheme used by Stanic [13] and sketched in Figure 3, Desideri et al. [16], following Di Maio et al. [17], proposed design charts relating the hydraulic efficiency at small depth of shallow longitudinal trench systems (Figure 3) with their geometric parameters. Moreover, these authors assumed fully saturated conditions; furthermore, they used the Terzaghi-Rendulic approach to compute the transient seepage in a deformable soil and calculated the variation with time of the drainage efficiency accordingly. More recently, the mitigation effects of medium depth drainage trenches, e.g., of 10-12 m depth, have been formulated for medium depth landsliding [12,18], according to the same modeling approach and calculation scheme used by Desideri et al. [16]. Di Maio et al. [17] highlighted how the drainage efficiency of the trench system is affected by the hydraulic boundary conditions at the ground surface, hence by the slope-atmosphere interaction.
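The Terzaghi-Rendulic approach mentioned above can be illustrated with the classic one-dimensional series solution for the average degree of consolidation, read here as a proxy for the time-dependent dissipation of excess pore pressure midway between two trenches. This is only a 1D sketch of the analogy, not the actual 2D calculation scheme of Desideri et al. [16].

```python
from math import exp, pi

def avg_consolidation(Tv, n_terms=50):
    """Average degree of consolidation U(Tv) for 1D Terzaghi consolidation
    with drainage at both boundaries, Tv = cv * t / H^2, where H is here
    read as the longest drainage path between two adjacent trenches:

        U = 1 - sum_{m=0..inf} (2 / M^2) * exp(-M^2 * Tv),  M = (2m+1)*pi/2
    """
    s = 0.0
    for m in range(n_terms):
        M = (2 * m + 1) * pi / 2.0
        s += (2.0 / M ** 2) * exp(-M ** 2 * Tv)
    return 1.0 - s

# Interpreting U as the mean dissipation of excess pore pressure (a proxy
# for the time-dependent hydraulic efficiency of the trench system):
U50 = avg_consolidation(0.197)   # about 50% dissipation
U90 = avg_consolidation(0.848)   # about 90% dissipation
```

Because Tv scales with cv/H^2, the sketch also makes the design trade-off explicit: halving the trench spacing quarters the time needed to reach a given efficiency.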
In 2005, D'Acunto & Urciuoli [19] proposed a parametric numerical analysis (in saturated conditions) of the effects of drainage trenches on the stability of shallow landslides, showing that implementing realistic infiltration at the ground level provides the prediction of larger efficiency than that calculated in the presence of a permanent film of water at the ground surface.
The development of numerical modeling of partially saturated soil behavior has allowed for more accurate predictions of the transient seepage due to rainfall infiltration in slopes. Accordingly, today the transient seepage triggered by the installation of the trench systems can be predicted more accurately, accounting for partial soil saturation and the slope-atmosphere interaction (i.e., unsteady boundary conditions at the ground level). Furthermore, the features of the trench system input in the analyses can be more realistic than those assumed in all the studies quoted above. In particular, the effects of the trench system have, so far, always been evaluated with reference to shallow depths (no more than 5-6 m b.g.l.), i.e., where such effects are not much influenced by the transversal extension of the system, ΣS (Figure 3), so that an infinite number of trenches has always been assumed in the modeling. Therefore, the seepage has always been modeled with reference to a set of two trenches, as shown in Figure 3b. According to the results of the numerical modeling following such an approach, the effects of the drainage trench system on the piezometric conditions at depths larger than twice the trench depth have always been considered negligible [16]. Therefore, the trench system has always appeared to be an engineering measure not useful for the mitigation of deep landslide activity. More recently, Cotecchia et al. [11] have proposed the finite element modeling of the transient seepage triggered by a finite number of medium depth drainage trenches, making reference to the whole system (n being the finite number of trenches). In the analyses, the authors implemented partially saturated conditions above the water table and unsteady boundary conditions at the ground level, to represent the effects of climate.
The modeling results provide evidence of piezometric head reductions at large depth which vary with the distance "x" from the plane of symmetry of the system (Figure 4) and are not always negligible. The authors have shown that the maximum drop in piezometric head caused by the trench system occurs below the center of the system; they have defined such lateral variability in the piezometric drop caused by the trench system installation as the "group effect". The authors have then shown that the group effect is beneficial for the mitigation of the activity of deep landslides having a "bowl-shaped" sliding surface (see Figure 4a), even in low permeability slopes (e.g., clayey slopes). This is because, if the central plane of the trench system corresponds to the central longitudinal section of the landslide (Figure 4), the group effect makes the reduction in piezometric head at depth maximum where the bowl-shaped slip surface reaches its maximum depth. In this case, the deepest portion of the slip surface benefits from the "group effect" (Figure 4). With regard to the technical feasibility of excavating deep trenches in slopes, according to Pun et al. [12], drainage trenches can reach depths as high as 25-30 m with the currently available technologies. They are usually excavated by means of grab shells, using slurry (e.g., polymeric mud) to sustain the vertical walls of the trench. Urciuoli & Pirone [18] report a couple of innovative construction technologies. In particular, for deep trenches the most recent construction procedure makes use of aerated concrete precast panels [20,21], manufactured by mixing gravel of high permeability and cement of good compression strength. This system can also be installed through the excavation and filling of a diaphragm made of secant piles.
Its beneficial features are high permeability, a filtering capacity preventing the internal erosion of the filling soils, and sufficient shear strength after a short curing time, avoiding the instability of adjacent previously built panels. Therefore, much progress has been made in developing both the efficiency prediction and the installation technique of medium depth to deep longitudinal drainage trench systems. Such progress makes the mitigation of deep landslide activity through such a measure more sustainable, given the lower cost and the longer life of drainage trenches. The drainage effects can nowadays be predicted more accurately and controlled through the monitoring of the indicators of the landslide activity, such as the pore water pressures.
The aim of the present paper is to provide a methodology for an innovative design of systems of longitudinal medium depth drainage trenches, showing that these can allow for pore water pressure reductions at depth, relevant to the slope stabilization. In particular, through 2D finite element analyses in the transversal slope section (e.g., Figure 4), the study is aimed at exploring the variation of the hydraulic efficiency, E, on deep sliding surfaces, with the change in the geometric and hydraulic parameters of the trench system. From the analysis results, the assessment of the increase of the slope stability factor, F, is derived. As the research work is framed in a larger study about the most sustainable mitigation measures in a pilot region, the Daunia Apennines (Southern Italy), the results will make reference to the slope features in such region. Nevertheless, the results can be extended to contexts of geo-hydro-mechanical features similar to those of the Daunia Apennines.

The Deep Landsliding of Reference in the Study

The Daunia Apennines (Figure 5) are located in the eastern sector of the southern Italian Apennines.
Here, clayey slopes have been involved in intense tectonic processes, so that the clays are often intensely fissured and characterized by rather low strength parameters, with fractured rocks floating in the clay matrix. Furthermore, deep slow historical landslides recur in such slopes, whose activity is often connected to the slope-atmosphere interaction [10,22-25]. In particular, Cotecchia et al. [26] recognized three recurrent geo-hydro-mechanical set-ups in the region, named GM1, GM2, and GM3 (Figure 6), characterized by the alternation of rock layers and clay layers. These set-ups differ mainly in the trend of the contact between the rock layer and the clay layer. Furthermore, based on extensive field surveys and monitoring, Cotecchia et al.
[26] recognized four main classes of landslide mechanism in the region: class M1 (Figure 6) includes compound slides, usually deeper than 30 m, whose length is comparable with their width; class M2 (Figure 6) corresponds to mudslides that can have one or more source areas and whose body can be either elongate or lobate; class M3 (Figure 6) includes the most complex landslides, such as shallow earth sliding-flows or flow-slides [27]; and finally, M4 (Figure 6) is represented by deep rotational landslides evolving into an earth-flow downslope. Inclinometer monitoring has given evidence of the evolution with time of the sliding rates of several bodies, of either M1 or M2 type (Figure 6), with slip surfaces reaching 30 to 50 m maximum depth. Moreover, the piezometric monitoring has given evidence of the concurrence of a slow rise in piezometric head (from the end of August to late winter/early spring) and the increase in sliding rate of the cited landslide bodies. At the stage of maximum piezometric head, the landslide rates reach maximum values [10], showing that the seasonal piezometric rise causes acceleration of landsliding. Cotecchia et al. [22,28] have demonstrated, based on both the above cited field data (Figure 7) and numerical modeling, that the quoted relation between the piezometric head rise and the sliding acceleration is consequent to the seasonal infiltration in the slopes of the region of the net rainfalls, equal to the difference between the total rainfalls, the runoff, and the evapotranspiration rates triggered by the regional climate. The infiltration in the fissured clay slopes generates, over the year, seasonal fluctuations of the pore water pressures down to large depth, given the hydraulic properties (water retention curve and permeability function) of the clayey soils forming the flysches.
The excursions in the available shear strength consequent to the pore water pressure variations result in accelerations and decelerations of pre-existing landslide bodies (Figure 7). As can be observed at the two pilot sites, Volturino and Pisciolo (Figure 7a,b), representative of several other hillslopes in the region where monitoring was carried out (Figure 5), the maximum piezometric heads concur with both the maximum values of the long term cumulative rainfalls (90 to 180 days) and the maximum deep displacement rates. The latter are measured in correspondence of the shear bands intercepted by the inclinometers, at the end of winter (Figure 7). At Pisciolo, the maximum displacement rate of the landslide body at the end of winter, detected through inclinometric monitoring, is also confirmed by means of GPS monitoring (sensor S2 in Figure 7b).

Figure 5. Schematic geological map of the Southern Apennines and location of the study region (included in the ellipse). Key: 1-marine and continental deposits, wedge basin deposits; 2-Apenninic units; 3-carbonate platform units; 4-main thrust (a) and buried overthrusting (b); 5-case studies: B-Bovino, Pi-Pietramontecorvino, V-Volturino (from work in [11]).

Figure 7. Volturino slope (a) (from work in [29]); Pisciolo slope (b): displacement rates measured through the GPS sensor S2 and the inclinometer I12, at 19 m depth; piezometric levels along P7; 180 days cumulative rainfalls (from work in [22]).
Given the diagnosis of the deep landslide mechanisms discussed so far [10], the adoption of drainage measures to reduce the high piezometric heads in the slopes represents a rational strategy to mitigate the landsliding. In particular, following the earlier work by Cotecchia et al. [11] and accounting for the geometric features of the landslide bodies in the region, a system of longitudinal medium depth drainage trenches (Figure 4a), of depth H0 ranging between 12 and 22 m, is here proposed as a landslide mitigation technique that conjugates mitigation efficacy, durability, and sustainability. As field studies have led to recognizing, for most of the landslides in the pilot region, a "bowl-shaped" slip surface of the type sketched in Figure 4a, the study aims at providing indications about the design of the drainage trenches to reduce the pore water pressures on a slip surface of such type. In the following, the results of finite element analyses of the transient seepage induced by the installation of systems of drainage trenches of different geometric and hydraulic parameters are discussed. The Fontana Monte landslide at Volturino has been selected as the prototype landslide in the assessment of the stabilization efficacy of the deep drainage trench systems discussed in the following (Figure 8 [30]). It is an M2 type active landslide, lying in a slope formed of fissured stiff clays, the Toppo Capuana clays, alternating with fractured rock layers. In the following, the analysis of the seepage through the slope after the installation of various drainage trench systems is presented. By comparing the piezometric heads post-installation, hw (after 5 years since the trench system setting up, see Figure 8b), and the initial piezometric heads, hw0, the efficiency E(t) of the trench system across the slope is discussed.
Thereafter, the effects of the piezometric head reduction generated by the trench system on the stability factor F of the Fontana Monte landslide (Figure 8b) are examined. First, in the following, the calculation strategy for both E(t) and the stability factor F is presented.
Calculation Strategy

The effect on the landslide body stability factor, F, of the installation of a drainage trench system has been derived through numerical calculations, following the procedure discussed hereafter and referring to the calculation model sketched in Figure 9. First, transient seepage analyses have been carried out in two dimensions (2D), within the transversal section of the slope model including the drainage trench system, shown in Figure 9b. In the slope model in Figure 9, the plane seepage in the transversal section is assumed to be the same in all the transversal sections: a, b, c, and d. Therefore, the piezometric heads, both before and after the installation of the trenches, have been calculated with reference to a single transversal section (Figure 9b). The seepage analysis in such section has been carried out following the same procedure presented by Cotecchia et al. [11]. According to such procedure, for each set of geometric parameters of the trench system (number of trenches n, depth of the trenches H0, distance between the trenches S), the value of the piezometric head reduction after 5 years since the installation of the system has been determined across the whole section.
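The parametric runs over the trench system geometry described above can be organized as a simple Cartesian sweep over (n, H0, S); the candidate values in this minimal Python sketch are hypothetical, chosen only to illustrate how the combinations are enumerated:

```python
from itertools import product

# Hypothetical candidate values for the trench system geometry.
n_values = [3, 5, 7]       # number of trenches
H0_values = [12.0, 22.0]   # trench depth (m)
S_values = [13.0, 22.0]    # spacing between trenches (m)

# One transient seepage analysis is run for each (n, H0, S) combination.
cases = list(product(n_values, H0_values, S_values))
print(len(cases))  # 12 combinations
```

Each tuple in `cases` would define one 2D transient seepage analysis, from which the piezometric head reduction after 5 years is extracted.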
Therefore, from such seepage analysis, the piezometric head reductions along the slip surface shown in Figure 9c could be derived. Thereafter, the safety factor of the landslide body has been evaluated through the limit equilibrium method for three longitudinal sections: 1-1', 2-2', and 3-3' in Figure 9. In the prototype transversal section (Figure 9b), section 2-2' and section 3-3' are located at 48 m and 96 m from the central axis (i.e., from section 1-1'), respectively. To such aim, for section 1-1', the piezometric head reductions at points 1a, 1b, 1c, and 1d in Figure 9c were derived from the seepage analysis at the corresponding points in the transversal section shown in Figure 9b. The piezometric heads along the longitudinal section between these four points have been assumed to vary linearly. The same procedure was also used for sections 2-2' and 3-3'.
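The assumed linear variation of piezometric head between the four sampled points can be sketched with a piecewise-linear interpolation; the chainages and head values below are hypothetical placeholders, not the Fontana Monte data:

```python
import numpy as np

# Hypothetical chainages (m) of points 1a, 1b, 1c, 1d along section 1-1',
# and the piezometric heads (m) obtained there from the 2D seepage analysis.
x_points = np.array([0.0, 120.0, 240.0, 360.0])
hw_points = np.array([18.0, 42.0, 35.0, 12.0])

# Heads at intermediate stations, assumed to vary linearly between the
# four sampled points (piecewise-linear interpolation).
x_stations = np.linspace(0.0, 360.0, 13)
hw_stations = np.interp(x_stations, x_points, hw_points)

print(hw_stations[3])  # head at x = 90 m, on the 1a-1b segment -> 36.0
```

The interpolated heads would then feed the pore water pressures used in the limit equilibrium calculation of each longitudinal section.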
In the following, the pore water pressure reductions generated by the trench system are presented in terms of the hydraulic efficiency E(t) = [1 − hw(t)/hw0] [16,31], where hw(t) is the pressure head varying with consolidation after the trench installation at time t, hw0 is the initial pressure head, and Ē(t) is the average value of E(t) over a horizontal segment of 15 m including the numerical point. In particular, in the following, the discussion will focus on the Ē(t) values calculated for point A (Figure 9b), with reference to different trench systems. Point A (i.e., point 1b in Figure 9c) represents the deepest point of the slip surface, at z = 45 m depth, therefore the point where the piezometric head reduction is minimum. Therefore, the E(t) in A is the minimum possible on the slip surface (Figure 9c). The safety factors for the longitudinal sections 1-1', 2-2', and 3-3' were derived using the limit equilibrium method [32]; in particular, the morphology and the geotechnical parameters of the Fontana Monte landslide body were accounted for in such analyses (Figure 8), as discussed later in the paper. The increase in safety factor, F, is discussed for different trench systems.
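The hydraulic efficiency E(t) = 1 − hw(t)/hw0 and its 15 m average Ē(t) defined above can be illustrated numerically; the head profile and the 1 m grid spacing in this sketch are hypothetical:

```python
import numpy as np

hw0 = 45.0  # initial pressure head (m), hydrostatic, at the depth of point A

# Hypothetical post-installation pressure heads hw(t) on a 1 m spaced
# horizontal grid of numerical points at the same depth.
x = np.arange(-30.0, 31.0, 1.0)
hw = hw0 - 8.0 * np.exp(-(x / 20.0) ** 2)  # "necklace"-like head drop

# Local efficiency E(t) = 1 - hw(t)/hw0 at each numerical point.
E = 1.0 - hw / hw0

# E_bar(t): average of E over a 15 m horizontal segment centred on
# point A (x = 0), following the definition of the averaged efficiency.
window = np.abs(x) <= 7.5
E_bar = E[window].mean()
```

The same averaging would be applied around any other numerical point of interest on the slip surface.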
It is worth clarifying that the approach explained here for the assessment of the slope stability, both before and after the intervention, is rather simple (e.g., it disregards a probabilistic analysis), because the values of the stability factor for the three sections represent just a means of comparison of the increment of the degree of stability of the landslide body achieved with the trench system installation.

Analysis of the Transversal Seepage Determined by the Drainage Trench System

Ē(t) has been calculated at any point of the transversal section, through 2D seepage analyses (Figure 9b), for five years after the activation of the drainage trenches. It is assumed to coincide with the Ē(t) in the slope, assuming that the influence on Ē(t) of the component of the flow rate normal to the transversal section can be disregarded, according to Stanic [13] and as done previously in a similar calculation by Cotecchia et al. [11]. The numerical FE modeling of the seepage in the transversal section (Figure 9b) was carried out with reference to the mesh shown in Figure 10. In the numerical model, the trenches have been simulated as rectangular clusters of 1 m width, filled with a coarse-grained soil of low retention capacity (Figure 11). In Figure 10, a prototype system is shown, made of three trenches (n = 3); the mesh is coarser at the bottom of the model and finer in the upper part, discretized by means of 18,000 4-noded quadrilateral elements, with four Gauss points, and 18,281 nodes.
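As a consistency check, the quoted element and node counts match a structured quadrilateral grid; the 100 × 180 subdivision below is an assumption used only to show the arithmetic:

```python
# Hypothetical structured-grid subdivision reproducing the quoted counts.
nx, ny = 100, 180

elements = nx * ny             # 4-noded quadrilateral elements
nodes = (nx + 1) * (ny + 1)    # one extra node row/column per direction

print(elements, nodes)  # 18000 18281
```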
The drainage in each trench has been guaranteed by setting zero pore water pressure at the base level of the trench and applying the water retention function and the permeability function of a coarse soil filling the trench. The section of a slip surface with the seepage plane is also shown in Figure 10, where point A represents the deepest point of the slip surface (see also point 1b in Figure 9c). In the 2D FE seepage model, the ground surface is assumed to be horizontal and the hydraulic properties of the slope soil, where the trenches are installed, are set to be uniform. The numerical modeling has been carried out by means of the finite element code Seep/w [32], which allows for a full numerical integration of Richards' equation,

∂θ/∂t = ∇ · [K(s) ∇(hw + z)],

assuming partially saturated conditions for the soil above the water table; hw is the pore water pressure head, z is the elevation, and θ = S·n is the volumetric water content, where S is the degree of saturation and n is the porosity. The modeling has implemented the soil water retention curve (WRC), θ(s) (where s is the soil suction, equal to (hw·γw) < 0), and the soil hydraulic conductivity function, K(s), which has been assumed to be related to θ(s) according to Mualem [33]. For θ(s), the adopted WRCs refer to interpolation of laboratory test results, as shown in Figure 11, whereas for K(s), the expression from Mualem-Van Genuchten [34] has been adopted. In particular, the WRC used for the clay was derived from experimental laboratory tests carried out by Cafaro & Cotecchia, 2001 [35], on an unfissured high retention overconsolidated clay, whereas the WRC of the soil in the trenches has been taken as that typical for gravel [14]. In Figure 11, the WRCs of the two soils, clay and gravel, used in the model are compared with the WRC measured for a silty sand by Bottiglieri et al. [36].
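A water retention curve and conductivity function of the Mualem-Van Genuchten type can be sketched as follows; the parameter values are illustrative placeholders, not the calibrated curves of Figure 11:

```python
import numpy as np

def vg_wrc(s, theta_r, theta_s, alpha, n):
    """van Genuchten water retention curve: volumetric water content
    theta as a function of suction s (kPa), with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * s) ** n) ** (-m)   # effective saturation
    return theta_r + Se * (theta_s - theta_r)

def mualem_k(s, Ksat, alpha, n):
    """Mualem-van Genuchten hydraulic conductivity function K(s) (m/s)."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * s) ** n) ** (-m)
    return Ksat * np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Illustrative parameters for a high-retention clay (placeholders).
theta = vg_wrc(s=100.0, theta_r=0.10, theta_s=0.45, alpha=0.005, n=1.2)
K = mualem_k(s=100.0, Ksat=1e-9, alpha=0.005, n=1.2)
```

With these placeholder parameters, K at 100 kPa suction falls well below the saturated value, reproducing the strong conductivity decay typical of a high-retention clay.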
For the permeability of the saturated clay, a value representative of the "field clay permeability", about Ksat = 1 × 10−9 m/s, has been adopted, following what was already done with reference to the Fontana Monte slope by Lollino et al. [30] and Cotecchia et al. [11]. Both the high retention capacity and the very low permeability of the clay implemented in the model are to be considered conservative in the assessment of the efficiency of the drainage system [10].
Figure 11. Water retention curves of the Montemesola clay and of the trench filling material (from [11]), compared with a silty sand [36].

Initial hydrostatic conditions with the water table at 3 m depth below ground level, representative of winter conditions, have been set in the analyses [11]. At the lateral boundaries of the model, a constant hydraulic head (hydrostatic condition) has been assigned, hw = hw0, which represents the most conservative boundary condition for the drainage efficiency; the lower boundary has been set as impervious. At the ground level, the boundary condition has been set as a specific flow rate q(t), variable over time according to the rainfall regime of the Fontana Monte slope.
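The conversion of mean monthly rainfalls into a monthly-constant surface flux q(t) can be sketched as follows; the rainfall values are hypothetical, not the Volturino record:

```python
# Hypothetical mean monthly rainfalls (mm/month), January to December.
rain_mm = [60, 55, 50, 45, 30, 20, 15, 25, 40, 55, 65, 70]

SECONDS_PER_DAY = 86400.0
days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

# Constant flux q (m/s) applied over each month, equal to the mean
# monthly rainfall spread uniformly over the month's duration.
q = [r / 1000.0 / (d * SECONDS_PER_DAY) for r, d in zip(rain_mm, days)]
```

Each value of `q` would be assigned as the ground-surface flux boundary condition for the corresponding month of the transient analysis.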
Such flow rate has been set constant over each month of the year and equal to the mean value of the monthly rainfall, which has been derived from the database of the monthly rainfalls recorded at the meteorological station of Volturino in the period 1972 to 2009 (Figure 12). Both evapotranspiration and surface runoff have not been accounted for in the seepage analyses [19], so that an overestimated water supply has been assumed. Figure 4 shows the results of the analyses carried out for trench systems of different n, of depth H0 = 12 m and spacing S of either 13 m or 22 m. The figure reports the pressure heads, hw, predicted after 5 years of consolidation along a horizontal plane located at 45 m depth. The different curves in the figure refer to systems of different S/H0, but for constant H0 = 12 m.
Figure 4 reports also the initial hw0 at 45 m depth, assumed to be constant as for a hydrostatic initial condition in the section. The "necklace" shape of the hw-x curves corresponds to the group effect cited in the introduction and suggests that, at depth, the drainage trench system generates a piezometric head drop that varies with x and is maximum below the center of the trench system, at x = 0 (point A of Figures 9 and 10). Furthermore, the results show that the piezometric head at large depth is controlled not only by the S/H0 ratio, but also by the global width ΣS of the trench system, and consequently by the number of trenches, n. The hw drop, highest below the center of the trench system, suggests that, with landslides of bowl-shaped slip surface, centering the trench system with respect to the longitudinal section of the landslide body optimizes the stabilizing effect of the system, as shown in Figures 9 and 10. In this way, the deepest portion of the sliding surface benefits from the maximum hw drop determined by the drainage system. The hw drop along the lateral portions of the slip surface is higher than in the central portion because these portions are shallower.
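The "necklace"-shaped hw(x) profiles can be turned into an average efficiency. The sketch below assumes the common definition E = (hw0 − hw)/hw0 averaged over a horizontal segment below the system center (the averaging in the paper is over a width S); the profile values are invented for illustration.

```python
def efficiency(hw, hw0):
    """Pointwise hydraulic efficiency, assuming the common definition
    E = (hw0 - hw) / hw0, with hw0 the initial hydrostatic pressure head."""
    return (hw0 - hw) / hw0

def mean_efficiency(hw_profile, hw0):
    """Average efficiency over a horizontal segment (e.g., a width S
    centred below the trench system, as in the averaging of the paper)."""
    E = [efficiency(h, hw0) for h in hw_profile]
    return sum(E) / len(E)

# Hypothetical 'necklace'-shaped pressure-head profile at large depth:
# the head drop is maximum below the centre of the system (x = 0).
hw0 = 42.0                                          # m, initial head
hw  = [41.2, 40.8, 40.1, 39.8, 40.1, 40.8, 41.2]    # m, samples along x
print(f"E_bar = {mean_efficiency(hw, hw0):.3f}")
```

The pointwise efficiency peaks at the center of the profile, while the average over the segment comes out in the few-percent range, consistent with the deep-seated Ē values discussed later.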
The hw(x, z) determined by the trench system is the input of the stability analysis, resulting in the stability factor F:

F = τf/τm, with τf = c′ + (σ − u)·tan ϕ′

where τf is the shear strength available on the slip surface; τm is the mobilized shear strength; c′ and ϕ′ are, respectively, the cohesion intercept and the friction angle; σ is the total normal stress; and u = γw·hw is the pore water pressure.

Comparison of the New Modeling Results with Background Modeling

First, the hydraulic efficiency resulting from the numerical modeling of the seepage in the presence of the drainage trenches described before has been compared with the trench system efficiency predicted in previous studies. In particular, the new predictions have been compared with those resulting from the modeling which assumes the soil to be fully saturated also above the water table and the number of trenches, n, to be infinite, according to the scheme from Stanic [13], shown in Figure 3. In this way, the effects on Ē of implementing more realistic seepage conditions in the modeling, such as a finite number of trenches, which provide a group effect, and partially saturated conditions, which control the water mass balance in the top layers, can be verified. To this aim, the Ē values achieved through the new modeling (Figure 10) have been compared with those reported by Desideri et al. [16] for steady-state conditions, in Figure 13, whose notation is the same as that shown in Figure 3a. The new seepage calculations implemented the same boundary conditions at the ground surface used by Desideri et al.
[16], i.e., the presence of a permanent film of water at u = 0, and an initial water table at the ground level, i.e., hw0 = D (Figure 10). The results plotted in Figure 13 refer to different depths D: H0, 1.5H0, and 2H0 for both models. In particular, it must be highlighted that in the figure Ē is the average over a distance equal to S (Figure 10); for the new modeling, Ē has been averaged over a horizontal segment S, of depth D, below the center of the trench system. The comparison seems to suggest that accounting for more realistic conditions in the modeling results in a higher Ē value for D/H0 = 1, whereas this is not the case for Ē at depths larger than H0. Furthermore, the results make evident that Ē does not depend only on S/H0; rather, the dependency of Ē on the trench system geometry, S, H0, n, is more complex.
Figure 13. Comparison between data from Desideri et al. [16] and new numerical results.
The new results have also been compared with background predictions obtained for either an impervious top boundary (Figure 14a) or a permanent film of water at the ground surface (Figure 14b). In both cases, the efficiency values are plotted with respect to a normalized time T, which varies linearly with the saturated coefficient of permeability, the soil elastic stiffness moduli, and time [17].
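A dimensionless time of this kind can be sketched as follows. The specific form T = Ksat·E′·t/(γw·H0²) is an assumption here, a consolidation-type factor consistent with the stated linear dependence on permeability, stiffness moduli and time; the definition used in the background charts may differ by a constant.

```python
def normalized_time(K_sat, E_oed, t_seconds, H0, gamma_w=9.81):
    """Dimensionless time factor, ASSUMING the consolidation-type form
    T = K_sat * E' * t / (gamma_w * H0^2).  Units: K_sat [m/s],
    E_oed [kPa], t [s], H0 [m], gamma_w [kN/m^3] -> T dimensionless."""
    return K_sat * E_oed * t_seconds / (gamma_w * H0 ** 2)

# Hypothetical values: the clay permeability of the model, an assumed
# stiffness modulus of 10 MPa, trench depth 12 m, elapsed time 5 years.
K_sat = 1e-9                   # m/s
E_oed = 10e3                   # kPa
t = 5 * 365.25 * 24 * 3600.0   # s
print(normalized_time(K_sat, E_oed, t, H0=12.0))
```

Because T scales linearly with Ksat and with the stiffness modulus, a stiffer or more permeable soil reaches a given point of the efficiency-versus-T charts proportionally sooner.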
Figure 14a,b reports also the results of the new modeling (Figure 10) for two drainage trench systems, both of S/H0 = 2, still assuming the initial water table at the ground level. The comparison shows that, for an impervious top boundary (Figure 14a), the new model predicts Ē values smaller than those previously expected, but this is not the case when water is available at the ground surface (Figure 14b), which is a more cautious design condition.

Hydraulic Efficiency at Depth for Different Trench Systems

Seepage analyses were carried out for different trench systems, varying n, S, and H0, in order to highlight the influence of each of these parameters on Ē(t). In detail, the parameters have been set to vary as follows: the trench spacing, S, from 12 m to 22 m; the trench depth, H0, from 4 m to 22 m; the corresponding S/H0 ratio, from 0.75 to 3; the number of trenches, n, from 3 to 7. Table 1 reports the combinations of parameter values input in the different analyses referred to in the following discussion. Figure 15 shows the values of the average Ē(t), after 5 years of consolidation, calculated at 45 m depth, below the center of the trench system (point A in Figure 10), for n equal either to 3 or to 5.
In the figure, the continuous lines refer to constant S and variable H0, whereas the dashed lines refer to constant H0 and variable S; the black lines refer to n = 3, whereas the gray lines refer to n = 5. The variability of Ē shown in Figure 15 for n = 3 is qualitatively representative of the Ē variability found for larger n values, e.g., for n = 5. In particular, for the constant-S curves, Ē seems to increase exponentially with increasing H0. Conversely, the constant-H0 curves show that the maximum Ē corresponds to an optimum S value, and not to the minimum S value. According to the numerical results, then, the trench depth, H0, has a much stronger impact on Ē than the trench spacing, S. For n = 3 and S/H0 values below 1.5, reducing S/H0 by 0.5 provides a much higher increase in Ē if this is achieved by increasing H0 than if it is reached by reducing S. Moreover, for S/H0 below 1.2, even a 1 m increase of H0 increases Ē significantly; e.g., for S/H0 = 0.8, increasing H0 from 14 m to 16 m increases Ē from 4% to 6%.
For any S/H0, the Ē achieved using n = 5 is higher than that for n = 3. Therefore, given the group effect, the number of trenches impacts the achievable hw drop at very large depth; conversely, n is not a parameter controlling the Ē achievable at shallow depths through shallow trench systems, as shown in Figures 3, 13 and 14. It turns out that an increase in the number of trenches emphasizes the beneficial effect on Ē of an increase of H0 more than that of a reduction in S. All the results in Figures 15 and 16 confirm that Ē, at a given depth, is not a function of S/H0 alone.
However, Ē does not always increase as n increases, since Ē decreases when n increases beyond a threshold value. Figure 16 reports the change of Ē as a function of n, for S/H0 equal to 1. An increment of about 0.5% is observed when n increases from 3 to 5, whereas no significant Ē variation is observed from n = 5 to n = 7.
Figure 16. Average efficiency Ē against n, at 45 m depth b.g.l. (point A in Figure 10) and S/H0 = 1 (t = 5 years).
Figure 17 shows the whole set of analysis results in terms of Ē-H0. All the Ē-H0 curves (each relating to a set n-S of the trench system) fall close to a power function (dotted line in Figure 17), representing the average effect of the drainage trench systems tested through the numerical testing program. Such power function allows estimation, in first approximation, of the H0 value required to reach the desired value of Ē in the clay slope of reference, by 5 years of transient seepage.
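A power function of the Figure 17 type, Ē ≈ a·H0^b, can be fitted by least squares in log-log space and then inverted for design. Only the pair (H0 = 14 m, Ē = 4%) to (16 m, 6%) is quoted in the text; the other data points below are invented for the sketch.

```python
import math

def fit_power_law(H0, Ebar):
    """Least-squares fit of Ebar = a * H0**b in log-log space."""
    x = [math.log(h) for h in H0]
    y = [math.log(e) for e in Ebar]
    k = len(x)
    xm, ym = sum(x) / k, sum(y) / k
    b = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) \
        / sum((xi - xm) ** 2 for xi in x)
    a = math.exp(ym - b * xm)
    return a, b

# Hypothetical (H0 [m], Ebar [-]) pairs; only (14, 0.04) and (16, 0.06)
# are quoted in the text, the rest is invented for this illustration.
H0   = [10,    12,    14,   16,   18,    22]
Ebar = [0.015, 0.025, 0.04, 0.06, 0.085, 0.15]
a, b = fit_power_law(H0, Ebar)
print(f"Ebar ~ {a:.2e} * H0^{b:.2f}")

# Invert for design: H0 required to reach a target efficiency.
target = 0.05
print((target / a) ** (1 / b))
```

Inverting the fitted function gives, in first approximation, the trench depth needed for a target Ē, which is exactly the design use of the dotted line in Figure 17.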
The variation of Ē with depth has been investigated by comparing the Ē at 45 m depth, resulting from all the analyses discussed so far, with the Ē at 25 m depth from the same analyses (Table 1). In this way, the hydraulic efficiency pursued through the trench system when a shallower landslide body has to be stabilized is examined. The comparison of the drainage efficiency for the two slip surfaces of different depth (45 m b.g.l., gray lines; 25 m b.g.l., black lines, Figure 18) highlights that the efficiency can increase tenfold when passing from a landslide of maximum depth 45 m to one of maximum depth 25 m. The effect of the soil hydraulic properties on the efficiency of the drainage system has also been investigated. In particular, the saturated coefficient of permeability of the slope soil has been increased from Ksat = 1 × 10−9 m/s to Ksat = 3 × 10−9 m/s. For a drainage trench system characterized by n = 5, S = 16 m, and H0 = 14 m, such change of Ksat produces an increase of Ē from 2.51% to 7.61%.
Effect of the Drainage Trench System on the Landslide Stability Factor

The stabilizing effects of the different drainage trench systems are discussed in the following by implementing the results of the seepage analyses, presented above, in 2D limit equilibrium analyses (LE, [32]), according to the procedure discussed in Section 3 (Figure 9).
Such LE analyses have been performed making reference to the Fontana Monte landslide body [30,37] as the prototype landslide to be stabilized, adopting the Morgenstern & Price method [38] (code Slope/w [32]). Both the map and one of the sections of the landslide are shown in Figure 19. F has been assumed to be 1 before the installation of the drainage trench system, and the mobilized strength parameters c′m-ϕ′m have been derived accordingly. c′m and ϕ′m have then been used in the LE calculations for Sections 1-1′, 2-2′, 3-3′ in Figure 19, to derive F after 5 years since the trench system installation. Figure 19b shows the topography for Section 1-1′, along with the slip surface of the landslide body and the piezometric level along the slip surface before the installation of the drainage system. The F post-installation has been calculated for several of the trench systems in Table 1.
As said before, the slip surface of the Fontana Monte landslide crosses the Toppo Capuana clays, which include a clay fraction ranging between 50% and 70% and are of high plasticity (30% < PI < 80%) and medium-to-high activity (0.5 < A < 1). Due to the fissuring and the high plasticity index of the clay, the values of the peak strength parameters are relatively medium to low, about c′P = 20 kPa and ϕ′P = 20°.
According to the LE back-analyses conducted with reference to Section 1-1′, implementing the piezometric levels shown in Figure 9b [37], the parameter values c′m = 9 kPa and ϕ′m = 18.7°, which approximately correspond to the post-peak values reached in the shear tests on the clay samples, were found to provide F = 1. Therefore, these values have been used in all post-intervention LE analyses. To start with, a stability factor increment, ΔF, equal to 14% is reached for Section 1-1′ after installation of a drainage trench system characterized by n = 3, S = 18 m and H0 = 22 m (S/H0 = 0.82), whose plane of symmetry coincides with the section. Given n = 3, Figure 20 outlines the variability of ΔF with S/H0 for Section 1-1′; in the plot, the contours of constant S (dashed lines) and constant H0 (solid lines) are indicated. ΔF is found to increase mostly with increasing H0, in accordance with the increase in Ē discussed before. Furthermore, Figure 21 shows that the increment in stability factor increases with the number of trenches, due to the group effect, although the increment obtained passing from n = 5 to n = 7 is significantly lower than that calculated for the increment of n from 3 to 5. Given the bowl shape of the Fontana Monte slip surface, the stability factor increments ΔF in Figures 20 and 21 for Section 1-1′ are the minimum among the ΔF achieved along the other sections, since in the latter the slip surface is shallower and, therefore, the increase in shear strength achieved through drainage is higher. In order to assess the variability of ΔF among the 2D LE analyses carried out for Sections 1-1′, 2-2′, and 3-3′, in Figure 22 the ΔF calculated for each 2D analysis has been renamed 2D-ΔF and has been plotted versus the distance of the longitudinal section from the central section of the landslide (which coincides with Section 1-1′). The 2D-ΔF values in the figure refer to the case n = 5, H0 = 16 m and S = 14 m (called the "standard system").
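The LE calculation can be sketched as a slice-wise application of the stability factor expression given earlier, F = Στf·l / Στm·l with τf = c′ + (σ − u)·tanϕ′ and u = γw·hw. The mobilized parameters c′m = 9 kPa and ϕ′m = 18.7° are those back-analysed in the text; the slice data and the 5% head drop are hypothetical, for illustration only.

```python
import math

GAMMA_W = 9.81   # kN/m^3, unit weight of water

def stability_factor(slices, c, phi_deg):
    """F = sum(tau_f * l) / sum(tau_m * l) over the slip surface,
    with tau_f = c' + (sigma - u) * tan(phi') and u = gamma_w * h_w.
    Each slice: (sigma_total [kPa], h_w [m], tau_m [kPa], base length [m])."""
    tan_phi = math.tan(math.radians(phi_deg))
    resisting = sum((c + (s - GAMMA_W * hw) * tan_phi) * l
                    for s, hw, tau_m, l in slices)
    mobilized = sum(tau_m * l for s, hw, tau_m, l in slices)
    return resisting / mobilized

# Mobilized (post-peak) parameters back-analysed in the text.
c_m, phi_m = 9.0, 18.7   # kPa, degrees

# Hypothetical slices: (total normal stress, pressure head, mobilized
# shear stress, slice base length) -- invented for illustration only.
slices = [(400.0, 20.0, 75.0, 10.0),
          (700.0, 35.0, 125.0, 10.0),
          (400.0, 20.0, 75.0, 10.0)]

F0 = stability_factor(slices, c_m, phi_m)
# Drainage lowers h_w on the slip surface -> u drops -> F increases.
drained = [(s, hw * 0.95, tm, l) for s, hw, tm, l in slices]
F1 = stability_factor(drained, c_m, phi_m)
print(F0, F1, (F1 - F0) / F0)
```

Even a modest head drop on the slip surface raises the effective normal stress and hence τf on every slice, which is the mechanism behind the ΔF values reported for the trench systems.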
In such analyses, the efficiency Ē has been calculated in the different portions of the bowl-shaped slip surface according to the calculation strategy discussed in Section 3 (Figure 9). The figure shows the increase in 2D-ΔF with the increasing distance of the calculation section from the central section of the landslide. On the whole, the results suggest some criteria for the optimization of the design of the drainage trench system. In particular, the installation of the system may be more sustainable, and as efficient, if it includes a smaller trench spacing, S, and a larger trench depth, H0, in the area where the slip surface is deeper, i.e., in the middle of the landslide body. Conversely, in the lateral portions of the landslide, a larger spacing with shallower trenches can be used, as schematized in Figure 23. For example, a drainage trench system characterized by five trenches, with a spacing S equal to 12 m and a depth H0 of 18 m in the middle, and a spacing S equal to 16 m with a depth H0 of 12 m at the boundaries, as shown in Figure 23, has been implemented in an additional seepage analysis.
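A rough measure of the construction effort of the two layouts is the excavated cross-sectional area per metre of out-of-plane width. The sketch below assumes a unit trench width and a 3 + 2 split between deep central and shallow boundary trenches; both are assumptions not stated in the text.

```python
def excavation_area(depths_m, trench_width_m=1.0):
    """Total excavated cross-sectional area of a trench system
    (sum of depth * width per trench), per metre out-of-plane."""
    return sum(d * trench_width_m for d in depths_m)

# 'Standard' system of the text: n = 5, uniform H0 = 16 m.
standard = [16.0] * 5
# Variable system of Figure 23: deeper trenches (18 m) in the middle,
# shallower ones (12 m) at the boundaries (3 + 2 split assumed here).
variable = [12.0, 18.0, 18.0, 18.0, 12.0]

print(excavation_area(standard), excavation_area(variable))
```

Under these assumptions the variable layout excavates slightly less soil than the uniform one while, as discussed next, delivering a comparable stability factor increment.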
The corresponding 2D-ΔF values are reported in Figure 22, for comparison with the previous results. The comparison shows that the average stability factor increment obtained through this kind of drainage trench system, ~13%, is comparable to that reached with a standard system of constant S and H0, but the system in Figure 23 is more sustainable, both in terms of costs and construction effort.

Concluding Remarks

The results of the study show the extent to which a properly designed drainage trench system may provide reasonable mitigation of the activity of deep landslide bodies, even in slopes formed of clays. Based on the analysis results, criteria for the optimization of the drainage trench system design have been proposed for the stabilization of deep landslide bodies. The new design procedure implements a more realistic account of the field seepage conditions.
The comparison of the new analysis results with the Ē predictions reported in previous studies shows that, also at depths larger than the trench depth (e.g., up to double H0), results from models accounting for partially saturated conditions, unsteady boundary conditions at the ground level, and a finite number of trenches differ from the results of models assuming full saturation and an infinite number of trenches. Furthermore, the new results give evidence of the uncoupled influence of H0, n and S on the Ē values. Therefore, the present paper represents a step forward in the assessment of the influence of the geometric parameters of drainage trench systems and of the field hydraulic conditions (partially saturated conditions, initial water table location) on the increase in safety factor provided by the mitigation measure. At large depths, the assessment of the efficacy of the intervention must account for the "group effect", which influences the pore pressure regime. The case study used for the validation of the design strategy has been the Fontana Monte landslide, whose geomorphology (bowl-shaped slip surface) and geotechnical parameters had been assessed in previous studies. The analysis results show that the geometric parameter H0 has a greater influence than n and S on the pore water pressure reduction at the maximum depth of the slip surface. Moreover, accounting for the variation in Ē along the transversal sections of the bowl-shaped slip surface allows for the recognition of the significant 3D-ΔF that the trench system provides, which is far higher than the 2D-ΔF calculated for the deepest longitudinal section of the slip surface. Based on the results, an optimization of the drainage system geometry has been proposed.
Detailing the ultrastructure's increase of prion protein in pancreatic adenocarcinoma

BACKGROUND
Recent evidence has shown a relationship between prion protein (PrPc) expression and pancreatic ductal adenocarcinoma (PDAC). Indeed, PrPc could be one of the markers explaining the aggressiveness of this tumor. However, studies investigating the specific compartmentalization of increased PrPc expression within PDAC cells are lacking, as is a correlation between ultrastructural evidence, ultrastructural morphometry of the PrPc protein, and clinical data. These data, as well as the quantitative stoichiometry of this protein detected by immuno-gold, provide a significant advancement in understanding the biology of the disease and the outcome of surgical resection.

AIM
To analyze the quantitative stoichiometry and compartmentalization of PrPc in PDAC cells and to correlate its presence with prognostic data.

METHODS
Between June 2018 and December 2020, samples from the pancreatic tissues of 45 patients treated with pancreatic resection for a preoperative suspicion of PDAC at our Institution were collected. When the frozen section excluded a PDAC diagnosis, or the nodules were too small for adequate sampling, patients were excluded from the present study. Western blotting was used to detect, quantify and compare the expression of PrPc in PDAC and control tissues, such as those of non-affected neighboring pancreatic tissue of the same patient. To quantify the increase of PrPc and to detect the subcellular compartmentalization of PrPc within PDAC cells, immuno-gold stoichiometry within specific cell compartments was analyzed with electron microscopy. Finally, quantitative PrPc expression was analyzed according to prognostic data, such as cancer stage, recurrence of the disease at 12 mo after surgery, and recurrence during adjuvant chemotherapy.
RESULTS
The amount of PrPc within specimens from 38 out of 45 patients was determined by semi-quantitative analysis using Western blotting, which indicates that PrPc increases almost three-fold in tumor pancreatic tissue compared with healthy pancreatic regions [242.41 ± 28.36 optical density (OD) vs 95 ± 17.40 OD, P < 0.0001]. Quantitative morphometry carried out using immuno-gold detection at transmission electron microscopy confirms an increased PrPc expression in PDAC ductal cells of all patients and allows the detection of a specific compartmentalization of PrPc within tumor cells. In particular, the number of immuno-gold particles of PrPc was significantly higher in PDAC cells with respect to controls when considering the whole cell (19.8 ± 0.79 particles vs 9.44 ± 0.45, P < 0.0001). Remarkably, considering PDAC cells, the increase of PrPc was higher in the nucleus than in the cytosol of tumor cells, which indicates a shift in PrPc compartmentalization within tumor cells. In fact, the increase of immuno-gold within the nuclear compartment largely exceeds the increase of PrPc detected in the cytosol (nucleus: 12.88 ± 0.59 particles vs 5.12 ± 0.32, P < 0.0001; cytosol: 7.74 ± 0.44 particles vs 4.3 ± 0.24, P < 0.0001). In order to analyze the prognostic impact of PrPc, we found a correlation between PrPc expression and cancer stage according to pathology results, with a significantly higher expression of PrPc for advanced stages. Moreover, 24 patients with a mean follow-up of 16.8 mo were considered. Immuno-blot analysis revealed a significantly higher expression of PrPc in patients with disease recurrence at 12 mo after radical surgery (360.71 ± 69.01 OD vs 170.23 ± 23.06 OD, P = 0.023), also in the subgroup of patients treated with adjuvant chemotherapy (368.36 ± 79.26 OD in the recurrence group vs 162.86 ± 24.16 OD, P = 0.028), which indicates a correlation with higher chemo-resistance.
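The tumor-versus-control OD comparison above can be reproduced with a two-sample Welch t statistic, which is appropriate when the two groups have unequal variances. The readings below are hypothetical, chosen only so that the group means echo the reported 242.4 OD vs 95 OD; they are not patient data.

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and degrees of freedom for
    unequal variances (tumor vs control optical-density data)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom.
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical optical-density readings; means match the reported
# 242.4 OD (tumor) and 95 OD (control), values are NOT patient data.
tumor   = [210.0, 255.0, 238.0, 270.0, 239.0]
control = [88.0, 101.0, 92.0, 99.0, 95.0]
t, df = welch_t(tumor, control)
print(f"t = {t:.2f}, df = {df:.1f}")
```

A large t with few degrees of freedom is consistent with the highly significant P values reported; in practice the same test is available as `scipy.stats.ttest_ind(..., equal_var=False)`.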
CONCLUSION
Expression of PrPc is significantly higher in PDAC cells compared with controls, with the protein mainly located in the nucleus. Preliminary clinical data confirm the correlation with a poorer prognosis.

INTRODUCTION
Pancreatic cancer is currently the fourth most frequent cause of cancer death, with an increasing prevalence, mostly in Western countries, where it is estimated to become the second most prevalent cause of death from cancer by 2030 [1]. The overall median survival of patients with pancreatic ductal adenocarcinoma (PDAC) is 6 mo and the 5-year survival rate is less than 10% [2,3]. Surgical resection is still the only approach with a curative intent, but it is feasible for less than 20% of patients at diagnosis, while 80% of cases are considered too advanced for surgery based on regional infiltration or distant metastasis [4]. Even in surgically eligible cases, the 5-year survival rate is extremely low, since patients experience early local or distant relapse. The aggressiveness of the disease is mainly related to the extensive local infiltration and to the early lymphatic and hematogenous spread, which rely on the high propensity of these cells to produce neuro-invasion. In this scenario, an in-depth knowledge of the biology of the disease is fundamental. The comprehension of the mechanisms involved in tumorigenesis is needed in order to discover early diagnostic tools and novel therapeutic strategies to improve patients' prognosis. Compared with other tumors, the biology of PDAC is scarcely investigated, and it remains largely unknown. Recent evidence demonstrates that cellular Prion Protein (PrPc) is overexpressed both in vitro and in vivo in PDAC cells and that it interacts with several pathways, enhancing cellular growth, tumoral proliferation and invasion [5][6][7].
Still, the occurrence of increased PrPc has so far been described using quite general approaches, in the absence of subcellular localization of the protein and without a quantitative, stoichiometric count of protein particles within cancer cells. Thus, we felt it mandatory to provide a quantitative measurement of the increase in PrPc by using ultrastructural stoichiometry, which is more reliable in protein analytical detection compared with immunohistochemistry. Again, understanding the cell compartmentalization of PrPc, and whether this is shifted within PDAC cells, is key to establishing its potential role in the biology of the disease. This is expected to improve the knowledge about the intrinsic role of PrPc in PDAC pathogenesis. So far, studies focusing on these aspects based on an in vivo approach are lacking. Likewise, it is fundamental to establish whether a higher expression of PrPc in PDAC cells is associated with specific clinical phenotypes of PDAC. Therefore, the aim of this study is to quantify the increase in PrPc and analyze the subcellular localization of PrPc in PDAC cells from surgically resected patients. These findings are correlated with clinical data, in order to dissect a potential role of PrPc as a biological marker to predict disease severity. November 14, 2021 Volume 27 Issue 42

Patients and specimens
Samples from tumors of patients surgically treated with pancreatic resections at our Institution were collected between January 2018 and December 2020. Written informed consent was obtained from patients to use their surgical specimens and clinical pathological data for research purposes. All patients had a preoperative suspicion of PDAC. Preoperative evaluation included medical history, physical, laboratory and radiological examinations, computed tomography (CT) and magnetic resonance imaging, often with magnetic resonance cholangiopancreatography.
In addition, abdominal ultrasound with and without contrast, endoscopic ultrasonography (EUS), and fine-needle aspiration (FNA) during EUS were also performed in selected patients. Preoperative data included age and gender. All the specimens were frozen intraoperatively and further sliced and scored for histology. Tissues from pancreatic nodules other than adenocarcinomas were excluded from the present study. Similarly, we could not proceed when tumor specimens were too small. When PDAC diagnosis was confirmed, the pathologist took specimens from the pancreatic tumor and from normal pancreatic tissue. From each specimen (either from controls or tumor tissues), one part was fixed and kept in glutaraldehyde and paraformaldehyde for electron microscopy analysis, while the other one was rapidly frozen and kept at -80°C for storage until western blotting analyses (SDS-PAGE immunoblotting) were carried out. Patients were staged according to the T and N definitions proposed for the AJCC 8th edition [8]. Proposed T-stage definitions are the following: T1 ≤ 2 cm maximal diameter, T2 > 2 and ≤ 4 cm maximal diameter, T3 > 4 cm maximal diameter, T4 = locally unresectable. Extra-pancreatic extension was not included in T-stage definitions. The N-staging included the following: N0 = node negative, N1 = 1-3 nodes positive for metastatic disease, N2 ≥ 4 nodes positive for metastatic disease. Pathology, post-operative disease outcome and oncologic follow-up were prospectively retrieved and organized in a specific database. For the clinical analysis, patients with at least 12 mo from surgical resection were considered. The degree of PrPc expression in PDAC was reported and compared also on the basis of cancer stage according to the AJCC 8th edition.

Western blotting
Samples (25 μg) were separated on a 4%-20% sodium dodecyl sulfate-polyacrylamide gel. Following electrophoresis, proteins were transferred to a nitrocellulose membrane (Bio-Rad; Milano, Italy).
The membrane was then immersed in a blocking solution containing PBS with 0.05% Tween-20 (PBS-T) and 5% non-fat dried milk (Sigma) for 1 h at room temperature. Then the membrane was incubated overnight at 4°C with the primary antibody anti-PrPc (1:2000, Abcam, Cambridge, United Kingdom) diluted in PBS-T containing 2% BSA (Sigma). The blots were washed three times with PBS-T and incubated for 1 h with a goat anti-rabbit horseradish peroxidase-labelled secondary antibody (1:2000; KPL, Maryland, United States) diluted in PBS-T containing 2% non-fat dried milk (Sigma). The bands were visualized with enhanced chemiluminescence reagents (Immuno-Star HRP Substrate; Bio-Rad Laboratories) and image analysis was carried out with the ChemiDoc System (Bio-Rad Laboratories). β-Actin, a so-called "house-keeping protein", was used as an internal standard for semi-quantitative protein measurement. Densitometric analysis was performed with ImageJ software and the unit of measure was the optical density (OD). Western blots of PDAC tissues were compared with those of control tissues.

Semi-thin sections
When preparing embedded pancreas tissue blocks for electron microscopy analysis, we first carried out semi-thin sections in order to better focus on those areas of the tissue where ductal and parenchymal regions could be identified. Each semi-thin section, with a thickness of 3-6 μm, was cut with an ultramicrotome. After cutting, slices were picked up with an iron loop 1 cm long and 2 mm thick. Using the loop, each slice was moved into a drop of distilled water lying on a glass slide, which was then placed on a hot plate at approximately 60°C to be dried. Then, 1 or 2 drops of a toluidine blue staining solution were added onto the semi-thin slice. When the edge of the staining drop changed color from blue to metallic gold, the slide was quickly removed from the hot plate to be rinsed with distilled water.
Finally, these slides were mounted using DPX mounting medium and analyzed with a Nikon Eclipse 80i light microscope connected to the NIS Elements software for image analysis (Nikon, Tokyo, Japan).

Electron microscopy
For electron microscopy, small fragments of normal and tumoral pancreatic tissue were fixed in 0.1% glutaraldehyde and 2% paraformaldehyde in phosphate buffer, pH 7.4, for 90 min, a fixing solution that minimally masks the antigen epitope while fairly preserving tissue architecture. After washing for 10 min in buffer, samples were post-fixed in 1% OsO4 buffered solution for 1 h at 4°C. Samples were then dehydrated in a series of increasing ethanol concentrations (50%, 70%, 90%, 95%, 100%) followed by propylene oxide for 20 min. Afterward, samples were embedded in a mixture of Epon-Araldite and propylene oxide (ratio of 1:1, overnight at room temperature) and finally embedded in pure Epon-Araldite resin for 72 h at 60°C. Ultra-thin sections were stained with uranyl acetate and lead citrate and examined with a Jeol JEM 100SX transmission electron microscope (TEM) (Jeol, Tokyo, Japan) at an acceleration voltage of 80 kV.

Post-embedding immunocytochemistry
The post-embedding procedure was carried out on ultrathin sections collected on nickel grids. Grids were washed in PBS and incubated in a blocking solution containing 10% goat serum and 0.2% saponin for 20 min at room temperature; they were then incubated with a primary antibody solution containing rabbit anti-PrPc (Abcam, diluted 1:50), with 0.2% saponin and 1% goat serum, in a humidified chamber overnight at 4°C. After washing in PBS, grids were incubated with secondary anti-rabbit antibodies conjugated with gold particles (20 nm mean diameter, BB International, Crumlin, United Kingdom) diluted 1:40 in PBS containing 0.2% saponin and 1% goat antiserum for 1 h at room temperature.
Slices used as methodological controls were incubated with the secondary antibody only. After washing in PBS, grids were incubated on a droplet of 1% glutaraldehyde for 3 min; an additional extensive washing of the grids with distilled water was carried out to remove residual salt traces. Sections were stained with uranyl acetate and lead citrate and examined at TEM. From each experimental group (Controls and PDAC), 20 grids were observed, each containing a mean of 5 cells; in particular, we selected the regions in which the cellular ducts were present. In order to measure the distribution of the immuno-gold particles, we first counted the total number of gold particles within each cell, then their numbers in the nucleus and in the cytosol. TEM analysis was performed at a magnification of 6000-8000x, which allowed the concomitant visualization of immuno-gold particles and all cell organelles, using higher magnification when necessary to better visualize the immuno-gold particles and lower magnification when an ensemble view of the whole ultrastructure was required for the analysis. The expression of PrPc was revealed by counting the immuno-gold particles in whole cells, nucleus and cytosol in both the control and PDAC groups.

Statistical analysis
Continuous variables with normal distribution are expressed as mean ± SD and were compared using Student's t test. A P value equal to or lower than 0.05 was considered statistically significant (H0 hypothesis rejected). The software used for the statistical analysis was SPSS (Statistical Production and Service Solution for Windows, SPSS Inc., Chicago, IL, United States), version 23.

Patients
Surgical specimens of 45 patients were collected. Seven of them were ruled out because the small size of the tumor prevented adequate sampling or because the diagnosis of PDAC was excluded at frozen section.
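As a minimal illustration of the group comparison described in the Statistical analysis paragraph above (two-sample Student's t test on optical density values), the following Python sketch uses synthetic, hypothetical OD arrays — not the study's raw data — whose means and spreads roughly mimic the reported Western blot summary statistics:

```python
# Illustrative sketch only: the OD arrays below are synthetic placeholders
# generated to mimic the reported group statistics (242.41 ± 28.36 OD in PDAC
# vs 95 ± 17.40 OD in controls); they are NOT the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
od_pdac = rng.normal(loc=242.4, scale=28.4, size=38)  # tumor-tissue OD readings
od_ctrl = rng.normal(loc=95.0, scale=17.4, size=38)   # healthy-tissue OD readings

# Two-sample Student's t test (equal variances assumed, as in the classic test)
t_stat, p_value = stats.ttest_ind(od_pdac, od_ctrl)
print(f"PDAC mean OD: {od_pdac.mean():.1f}, control mean OD: {od_ctrl.mean():.1f}")
print(f"t = {t_stat:.2f}, P = {p_value:.3g}")
```

With real per-sample densitometry values, the same `ttest_ind` call would reproduce the kind of comparison reported in the Results (e.g., P < 0.0001 for PDAC vs control OD).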
Of the samples from the 38 PDAC patients included in the analysis, 19 (50%) were from males and 19 (50%) from females. The mean age was 72.7 ± 7.9 years (range 52-87). Histological examination confirmed the presence of PDAC in all these 38 cases. The grading of the pancreatic tumor was "moderately differentiated" in 35/38 cases (92.1%) and "poorly differentiated" in 3/38 (7.9%). The mean tumor size was 3.2 ± 1.1 cm (range 1.5-6.5 cm). The mean number of harvested lymph nodes was 34.2 ± 15 (range 14-79). Metastatic lymph nodes occurred in 30/38 cases (78.9%), with a mean of 4.5 ± 4.9 metastatic lymph nodes (range 1-23). The presence of angio-invasion was reported in 5/38 cases (13%), while the presence of peri-neural infiltration was reported in 33/38 cases (86.8%). Three cancer stage groups were identified according to pTNM: stage I (n = 8, 21.1%), stage II (n = 14, 36.8%) and stage III (n = 16, 42.1%). These data are shown in Table 1.

Expression of PrPc in pancreatic tissue at western blotting
PrPc was markedly expressed in tumor pancreatic tissues, while the expression in noncancer tissues was scarce, with a significant difference (242.41 ± 28.36 OD vs 95 ± 17.40 OD, P < 0.0001) (Figure 1).

Semi-thin sections
When observing representative semi-thin sections at different magnifications (Figure 2), the difference between control and PDAC pancreas was strikingly evident, mostly at the level of the ductal tissue. In detail, the control pancreas at low magnification shows a quite homogeneous toluidine blue staining, where ductal areas are simply evident as empty roundish areas surrounded by cells staining as much as those in the neighboring parenchyma (Figure 2A). This was further evidenced at higher magnification, showing a pale toluidine staining of ductal cells (Figure 2B). Conversely, in the pancreas affected by PDAC, low magnification showed a highly non-homogeneous tissue where ductal regions were markedly stained.
Also, neighboring areas possess a scattered toluidine staining (Figure 2C). At high magnification, the ductal cells from PDAC are overgrown and tend to occlude the ductal lumen with multiple cell layers of abnormal shape and abundant cell protrusions (Figure 2D). This ductal tissue was the topographical reference to proceed with electron microscopy analysis.

Electron microscopy
The ultrastructure of healthy pancreatic cells shows normal architecture with well-preserved cell compartments and well-defined membranes. The zymogen granules maintain their integrity, which suggests fair biochemical activity (Figure 3A). Conversely, PDAC cells (Figure 3B) show severe cell pathology, which is concomitant with a marked derangement of the vacuolar compartment and damaged organelles.

Clinical data
When correlating the amount of PrPc expression with specific cancer stages within the PDAC group, a significantly higher expression of PrPc was detected for advanced stages. When considering clinical data retrieved from our follow-up, patients surgically treated in the last 12 mo were excluded from the correlation with disease prognosis. Overall, 27 patients were considered, of whom 3 were excluded for missing information. For the remaining 24 patients, the mean follow-up was 16.8 mo (range 5.6-34.4 mo). Median overall survival and disease-free survival were 15.9 mo and 11.2 mo, respectively. The 12-mo recurrence rate was 54.1% (n = 13). Comparing patients with relapse at 1 year with those without evidence of the disease, the difference in PrPc expression at immuno-blotting was statistically significant between the two groups (360.71 ± 69.01 OD vs 170.23 ± 23.06 OD, P = 0.023) (Figure 6A). Moreover, 21 of the 24 patients received adjuvant chemotherapy (CT). Of these, 10/21 (47.6%) were without evidence of disease relapse at 12 mo, while 11/21 (52.4%) experienced a relapse of the disease.
When detailing our analysis of PrPc expression in relation to disease recurrence, a significantly higher PrPc expression was detected in patients who experienced a relapse despite the administration of adjuvant CT, compared with those receiving CT and remaining without evidence of disease at follow-up (368.36 ± 79.26 OD vs 162.86 ± 24.16 OD, respectively, P = 0.028) (Figure 6B).

DISCUSSION
Cellular prion protein is a cell surface glycoprotein, first discovered as the normal isoform of the scrapie prion protein (PrPSc), the infectious agent of Transmissible Spongiform Encephalopathies. Although the normal isoform is a highly conserved glycoprotein present in all vertebrates, which indicates some intrinsic and fundamental roles in cell homeostasis, studies about the physiology of PrPc have long been overlooked. Only recently some authors focused on the role of PrPc and many paths have been opened [11]. PrPc was proposed to protect neurons against cell death and oxidative stress. Finally, PrPc directly induces proliferation and confers resistance to apoptosis in cancer stem cells [18], by dysregulating their interactions with the surrounding environment and thus causing cancer stem cell proliferation [52]. As with proliferation, the contribution of PrPc to cancer stem cell self-renewal may be envisioned as a diversion of its physiological role into normal stem cell maintenance. Collectively, the involvement of PrPc in various aspects of cancer progression may be viewed as directly related to its physiological role in normal cells. Prion-mediated changes may represent initiating events that promote the emergence of the hallmarks of cancer, including self-sufficiency in growth signals, insensitivity to antigrowth signals, tissue invasion and metastasis, limitless replicative potential and inhibition of apoptosis [53]. Moreover, over-expression of PrPc has been found to be related to a higher chemo-resistance [54].
This is likely to have clinical implications: from a therapeutic perspective, reducing PrPc expression may be beneficial, as was documented for glioblastoma [55] or colon cancer [56]. Besides, alternative opportunities may ensue from a better knowledge of the signals upregulating PrPc expression in cancer cells. In particular, looking at PDAC, a deeper comprehension of its role may add a significant piece of evidence to the puzzle of this highly aggressive pathology, with a potentially relevant clinical implication. The present study indicates a higher expression of PrPc in PDAC tissues in vivo, and provides the first evidence of the cellular localization of this protein. A higher amount of PrPc specifically within PDAC cells is now confirmed in a wider pool of patients compared with our recent in vivo study [7]. Moreover, the present study provides a stoichiometric measurement of the protein in situ within PDAC cells and indicates a shift in its sub-cellular placement in PDAC. This latter novelty was provided by ultrastructural morphometry and immuno-gold staining under transmission electron microscopy. For instance, such an approach allowed us to detect a misplacement of PrPc towards the nuclear compartment, which is in line with its strong effects on cancer-related gene expression. In particular, in PDAC cells the number of PrPc immuno-gold particles was two-fold higher compared with controls, although the increase is more relevant in the cell nucleus, where it rises up to three-fold the amount measured in controls. When comparing PrPc compartmentalization in PDAC cells and normal ductal pancreatic cells, the nuclear concentration of PrPc in PDAC cells is 1.65-fold higher compared with the cytosol, while in normal cells there is no significant difference between nucleus and cytosol. On the one hand, this may be due to a marked ongoing over-expression of the PrPc gene (PRNP) in PDAC cells.
In fact, the first hint of a link between PrPc and pancreatic cancer dates back to the early 2000s, when PRNP was identified as one of the 30 genes most expressed in pancreatic cancer cell lines when compared with normal cells [57]. On the other hand, such a preferential nuclear compartmentalization may disclose a specific role of PrPc in the biology of PDAC. So far, the evidence from PDAC cell cultures shows that PrPc acts as a cell surface glycoprotein to activate specific intracellular pathways and signaling that lead to a proliferative effect [5]. However, the detection of a peculiar and prominent concentration in the nucleus suggests an involvement of PrPc in directly regulating gene expression, acting at the nucleo-junctional interplay in order to modulate the transcriptional activity of different pathways involved in carcinogenesis. In fact, PrPc could have a role in signaling complexes that contribute to the regulation of proliferation and cell-to-cell adhesion. Recently, an association has been found in enterocytes between nuclear PrPc and the Wnt and Hippo pathways, which are modulated by cell contacts and are de-regulated at high frequency in many human cancers. In this way, PrPc should be considered as an actor in oncogenic processes through its role in the dynamics of cell-to-cell junctions, because its nuclear localization could modulate the transcriptional activity of Wnt and Hippo effectors, some of the pathways clearly involved in carcinogenesis [58]. The relevance of these findings in the clinical setting is supported by the evidence of a more aggressive behavior of PDAC depending on the amount of PrPc expression. As already demonstrated in our previous study [7], the expression of PrPc correlates with predicted patients' prognosis based on cancer stage according to pathology results. These data are encouraging, since they are confirmed also in a wider group of patients.
Moreover, by further collecting early clinical data, we were able to validate this hypothesis. In fact, in the subgroup of patients with a relapse of the disease after a surgical R0 resection, the expression of PrPc was significantly higher compared with those patients without evidence of relapse. Moreover, these data indicate a relationship between PrPc and chemo-resistance, since relapse during adjuvant CT was associated with a significantly higher expression of PrPc at western blot. This is the first study that investigates the sub-cellular nuclear compartmentalization of increased PrPc expression with electron microscopy in vivo in patients with PDAC treated with surgery. Our findings could open new research avenues: in fact, since PrPc is markedly expressed in the nuclear compartment, studies should investigate whether this protein is involved in the specific activation of some as yet unknown nuclear pathways that can lead to a higher cancer aggressiveness and de-differentiation. Similarly, the correlation of PrPc expression with a chemo-resistant phenotype could be used not only for prognostic purposes. For instance, patients may be stratified for prognosis according to PrPc expression in the resected specimen. This may also involve the development of novel therapeutic strategies. In fact, specific therapeutic agents designed to target PrPc metabolism should be able to reduce pathological cell growth and to revert chemo-resistance.

CONCLUSION
The study confirms our previous data, highlighting in vivo, in patients' PDAC tissue, the role of PrPc in PDAC aggressiveness. The evidence of a peculiar compartmentalization of this protein in the nuclei of PDAC cells is in line with in vitro data from the literature showing over-expression of the PRNP gene in PDAC cells, and suggests the presence of some, still unknown, molecular pathways triggered by PrPc in the nuclear compartment.
This extends the influence of PrPc beyond its role as a cell-membrane glycoprotein. Finally, PrPc expression seems to be associated with a greater risk of relapse after radical surgery and with shifting the cancer towards a phenotype with a higher chemo-resistance. These data provide a step forward in the comprehension of PDAC biology, confirming that PrPc is likely to represent a marker of disease severity and a determinant of PDAC biology. Further studies are needed to validate these results and to investigate the molecular mechanism of PrPc in PDAC pathogenesis and its potential clinical application.

Research background
Pancreatic ductal adenocarcinoma (PDAC) is one of the most lethal human cancers, but its molecular basis is still poorly understood. Among the several new targets for the comprehension of its biology, cellular Prion protein (PrPc) deserves particular mention, since it seems to be involved also in tumorigenesis.

Research motivation
Recent evidence has shown a relationship between PrPc expression and PDAC, with a possible role of this protein in the molecular basis of PDAC aggressiveness itself.

Research objectives
The present study aimed to further analyze the occurrence of PrPc within PDAC tissues by investigating the specific compartmentalization of PrPc within PDAC cells, which is a fundamental aspect in order to provide a significant advancement in understanding the biology of the disease. Moreover, we aimed to correlate the presence of PrPc with clinical data in order to find an association with patients' prognosis.

Research methods
Samples from pancreatic tissues of 45 patients treated with pancreatic resection for PDAC at a single institution were collected. Immuno-gold stoichiometry within specific cell compartments was analyzed with electron microscopy in order to elucidate the subcellular compartmentalization of PrPc within PDAC cells.
Western blotting was used to detect, quantify and compare the expression of PrPc in PDAC and control tissues, namely non-affected neighboring pancreatic tissue of the same patient. Data from the western blot analysis were also used to perform a correlation with prognostic data from patients' follow-up.

Research results
Immuno-electron microscopy highlighted an increased PrPc expression in PDAC ductal cells of all patients and allowed detection of a peculiar compartmentalization of PrPc within tumor cells, with a specific increase of PrPc in the nucleus. Furthermore, semi-quantitative analysis using Western blotting showed that PrPc increased almost three-fold in tumor pancreatic tissue compared with healthy pancreatic regions, with a significantly higher expression of PrPc in patients with disease recurrence at 12 mo after radical surgery, also in the subgroup of patients treated with adjuvant CT, thus revealing a possible higher chemo-resistance.

Research conclusions
Our study provides evidence for a correlation between PrPc expression in PDAC and a worse biological behavior, with a higher recurrence rate and chemo-resistance. Moreover, it provides, for the first time, evidence of a peculiar subcellular compartmentalization of PrPc itself within PDAC cells.

Research perspectives
PrPc could be a molecular marker associated with PDAC aggressiveness and a poor prognosis. Its nuclear compartmentalization suggests the activation of specific, but still unknown, molecular pathways involved in the biology of the disease, and further studies in this sense are necessary, since specific therapeutic agents designed to target PrPc metabolism should be able to reduce pathological cell growth and to revert chemo-resistance.
Awareness of L2 American English Word Stress: Implications for Teaching Speakers of Indo-Aryan Languages

This study aims to investigate word stress placement in English and Sindhi words by learners from Indo-Aryan language and American English backgrounds. Correct placement of word stress is key for L2 English intelligibility, and native language background is known to affect English language learners' word stress perception and production. The study explores English language learners' intuition through behavioral data from native speakers of Sindhi and American native speakers to compare their awareness of word stress in L1 and L2. It further investigates learners' stress patterns by measuring their reports of word stress location in their Sindhi and in their L2 English. Twenty native speakers (10 from Sindh, Pakistan, and 10 from the state of Illinois, United States) were recruited in their respective countries. Results of three experiments show that Sindhi native speakers have less awareness of stress location in their native language than native English controls, and this effect carries into their L2 English. Teachers of Sindhi-speaking students should be prepared to provide explicit training on word stress.
Introduction
In English, word stress is contrastive, meaning that two words may differ by stress location alone, e.g., the verb re'cord vs the noun 'record. Moreover, pronunciation of English word-level stress is highly salient because reduction and co-articulation systematically distinguish stressed from unstressed syllables. In other words, English word stress modifies the meaning of English words, whereas Sindhi word stress does not change the meaning of Sindhi words, though lexical stress is used for emphasis on words. The study investigated the intuitions of both groups of native speakers, Sindhi and American, as to where and how they assign primary stress at the word level in their L1 and L2. American native speakers were tested only in their L1, English, whereas Sindhi native speakers were tested in both L1 and L2.

Word Stress in English
The placement of word stress is of particular importance for English language learners (ELLs) because research suggests that prosodic features such as word stress affect the intelligibility of L2 English speakers (Munro & Derwing, 1999), and native listeners 'recall […] significantly more content and evaluate […] the speaker significantly more favorably' when primary stress is correctly placed vs.
incorrectly placed or missing (Hahn, 2004). Similarly, prosodic accuracy contributes to the overall impression of fluency as measured by intelligibility ratings (Derwing & Rossiter, 2003). Not only is word stress important for overall intelligibility, it is especially important for comprehending English for Academic Purposes (EAP). Longer words of Latin origin are much more common in EAP than in everyday English. There are 39 different patterns of syllable stress in words on the widely used Academic Word List created by Coxhead (2000), according to Murphy and Kandil (2004). Some of these patterns are rare, but mastery of the 14 most common of them is required for the pronunciation of 90% of the words. This task is difficult because the placement of stress is not entirely predictable in English, and therefore is difficult to teach and learn (Hammond, 1999).

The difficulty that L2 learners have in accurately producing and perceiving English stress may lie in interference effects from their L1. Prior studies have investigated transfer of word stress in fixed-stress languages, or languages which are claimed to have no stress, and found robust evidence that stress patterns of a learner's L1 can interfere with their ability to accurately perceive and produce stress patterns in the target L2 (Peperkamp & Dupoux, 2002; Archibald, 1997).
Furthermore, evidence suggests that insensitivity to novel stress patterns is not necessarily due to a failure in auditory processing. Speakers of Polish (a fixed-stress language) could not reliably report differing stress patterns, yet measurements of their brain activity showed evidence of a neural response to the difference between correctly stressed and incorrectly stressed syllables (Domahs, Knaus, Orzechowska, & Wiese, 2012). The transfer of stress, or of the absence of stress, from the learners' native language has been shown to result in both pronunciation mistakes and decreased intelligibility (Bian, 2013). Despite the importance of stress placement for intelligibility, and the effect of L1 stress on the acquisition of novel L2 stress patterns, the question of L1 interference in L2 stress acquisition remains understudied for languages spoken outside of East Asia and Western Europe.

This paper focuses on L2 English stress acquisition by L1 speakers of Sindhi, an Indo-Aryan language. Indo-Aryan languages are an important case for L2 English stress learning because their word stress patterns are different from English, and because, as official languages of India and Pakistan, they have many adult ESL learners, many of whom learn English for Academic Purposes (EAP).
Word Stress in Sindhi

Specifying the difference between Sindhi and English word stress patterns is difficult because there is little agreement on the phonology of Sindhi word stress. Initial analyses indicate that Indo-Aryan languages have no stress. However, the data used to draw this conclusion come from only two languages, Hindi and Urdu, and do not include data from Sindhi (Krishnamurti et al., 1986). Jatoi (1996) analyzes Sindhi and agrees with earlier work that Sindhi has no word stress, while Nihalani (1995), on the other hand, argues that word stress does exist and is fixed on the first syllable of a word. Measurements of acoustic factors clarify the matter, because the available evidence shows that stressed syllables are not marked acoustically in the same way that English syllables are (Abbasi & Hussain, 2015).

Data on Sindhi word stress collected by the first author (Abbasi & Hussain, 2015; Abbasi, 2017; Abbasi, Channa, Kakepoto, Ali, & Mehmood, 2017) suggest that Sindhi does have word stress, and that rather than being fixed it is weakly quantity sensitive. In a quantity-sensitive language, stress falls on so-called 'heavy' syllables, which contain a long vowel and/or a coda consonant, rather than on 'lighter' syllables with short vowels and/or no coda consonants. If Sindhi is indeed quantity sensitive, then Sindhi stress is similar to English stress, though the measured acoustic manifestation of stress is much more robust in English. The first author tested Sindhi speakers' perceptual judgments of stress location, and reports results from logistic mixed effects regression models showing that syllable weight (light vs.
heavy) is a small but significant predictor of stress perception in Sindhi. Abbasi (2017) reports acoustic measurements for stress in disyllables, and shows that the acoustic difference between stressed and unstressed syllables in Sindhi is much smaller than it is in English. Whether Sindhi has no stress or has a system of quantity-sensitive stress, there is no question that word stress in Sindhi is phonologically distinct from that of English in the location of stress within the word, and phonetically distinct from English in the acoustic correlates of stress.

Due to the differing typology of stress in English and in Sindhi, we are led to wonder about Sindhi speakers' awareness of word stress in English. The motivation to study stress transfer in Sindhi ELLs is threefold: (1) the phonological status of Sindhi stress is contested, so native judgments of stress will help to confirm the status of stress in Sindhi; (2) stress transfer has been measured from East Asian languages with no stress and European languages with fixed stress, but not from Indo-Aryan languages such as Sindhi with variably placed stress that differs from English; (3) word stress is important for English pronunciation, and there are many ELLs with a Sindhi language background. Therefore, information about Sindhi stress transfer has the potential to inform pronunciation pedagogy.

Research Questions and Predictions

In this study, two research questions are asked as follows: 1) Are there differences among Sindhi and English speakers in the awareness of stress in their native
Molecular Basis of the Mechanical Hierarchy in Myomesin Dimers for Sarcomere Integrity

Myomesin is one of the most important structural molecules constructing the M-band in the force-generating unit of striated muscle, and a critical structural maintainer of the sarcomere. Using molecular dynamics simulations, we here dissect the mechanical properties of the structurally known building blocks of myomesin, namely α-helices, immunoglobulin (Ig) domains, and the dimer interface at myomesin's 13th Ig domain, covering the mechanically important C-terminal part of the molecule. We find the interdomain α-helices to be stabilized by the hydrophobic interface formed between the N-terminal half of these helices and adjacent Ig domains, and, interestingly, to show a rapid unfolding and refolding equilibrium especially under low axial forces up to ∼15 pN. These results support and yield atomic details for the notion of recent atomic-force microscopy experiments, namely, that the unique helices inserted between Ig domains in myomesin function as elastomers and force buffers. Our results also explain how the C-terminal dimer of two myomesin molecules mechanically outperforms the helices and Ig domains in myomesin and elsewhere, explaining former experimental findings. This study provides a fresh view onto how myomesin integrates elastic helices, rigid immunoglobulin domains, and an extraordinarily resistant dimer into a molecular structure, to feature a mechanical hierarchy that represents a firm and yet extensible molecular anchor to guard the stability of the sarcomere.

INTRODUCTION

The M-band is located in the middle of the sarcomere, the muscle's force-generating unit. It features dark lines in microscopic images formed from fibrils interconnecting the tails of myosin thick fibrils as well as the titin C-terminal portions (1). As an integrating molecular network, the M-band is believed to act as a structural safeguard of the sarcomere (2).
Although three-dimensional structures of the M-band have been constructed by single-particle experiments (3), the detailed molecular structure and its way of balancing forces during a force-generating cycle of the sarcomere are still largely unknown. The interactions between the M-band and other muscle molecules are key to the integration of large filaments, most importantly myosin and titin, in the sarcomere. Three genetically related molecules have been identified to be responsible for the M-band lines, namely myomesin, M-protein, and myomesin-3 (4). As the most important molecule, being present in all types of striated muscles (5,6), myomesin is a promising candidate for deciphering the mechanical characteristics of the M-band. Myomesin consists of 13 domains (Fig. 1). It is expressed at a fixed ratio with myosin in different types of muscles, with its special N-terminal domain, my1, connecting to one myosin tail (7). Similar to other muscle proteins, myomesin possesses 12 other domains, each either an immunoglobulin (Ig) domain or a fibronectin type-III (FNIII) domain. Its fourth to sixth domains, my4-my6, interact with titin to firmly accommodate the titin C-terminus in the M-band (8,9), as shown in Fig. 1. One isoform of myomesin, termed "EH-myomesin", has an unstructured peptide segment (EH segment) between the sixth and the seventh domains, my6 and my7, which is expressed only in early development of the heart (6). The EH segment has a function similar to PEVK in titin in that it provides a large extensibility to prevent damage upon stretching (10-12). The normal form of myomesin lacks this critically important elastic EH segment, which raises the question how other segments in myomesin equip this constantly stressed molecule with elasticity. Two myomesin molecules dimerize at their 13th domains, my13.
The dimerized myomesins thereby expand from the M4 to the M4′ line in the M-band (1), and thus establish a regular molecular organization in the sarcomere by connecting two antiparallel myosin molecules and a titin molecule with each other. In this assembly, myomesin acts as a force-transmitting bridge to balance mechanical force between molecules during the sarcomeric force-generating cycles (2). Silencing the function of myomesin resulted in sarcomere damage and muscle weakness (2,13,14). These results have called for an explanation of the molecular basis of myomesin function, in particular as myomesin shares structural similarities with other M-band proteins (4). Recently, crystal structures of the myomesin C-terminal portion had been resolved (15,16), comprising myomesin's 9th (my9) to 13th (my13) Ig domains, the latter of which forms a homodimer. Unexpectedly, long α-helices were found to connect each pair of adjacent Ig domains in this portion (Fig. 1). This structure of alternating Ig domains and α-helices is likely to provide a hierarchy of mechanical responses in the force-generating sarcomere. Investigations of the my12-my13 homodimer by atomic-force microscopy (AFM) experiments combined with molecular-dynamics (MD) simulations showed that the domain-connecting α-helix, α₁₂, functions as a strain absorber for the molecule (17). Significant mechanical forces of ~30 pN were required for helix unfolding, and elongations of up to 150% of the original helical length could be reached (17). Myomesin Ig domain unfolding followed complete helix extension at much higher forces (>100 pN). Alpha-helices are among the most ubiquitous secondary structures in proteins. Single-molecule stretching experiments and simulations of individual α-helices or α-helical domains such as calmodulin or spectrin had previously revealed the unfolding and refolding dynamics of helices under force (18-21).
Other theoretical and experimental studies had also been performed to examine α-helix mechanical properties and even structural transitions (18,19,22-26). A common finding was that α-helical proteins were comparably soft, and unfold at lower forces than the mostly stiff and force-resistant β-sheet domains. All the myomesin α-helices expressed in the C-terminal portion of the molecule (which orients along the longitudinal axis of the sarcomere and is thus mechanically the most important (1,15)) are critical points for understanding the molecular behavior. Given that their interfacing Ig domains participate in their folding-unfolding transitions (17), inspection of the mechanics of all these helices and their molecular neighbors can give an overview of the biological functions of the M-band molecules. How the mechanical properties of the myomesin domains are coupled, and how the hierarchy of mechanical unfolding and refolding events contributes to the integrity of the molecular architecture of the M-band, are the subjects of this study. Whereas former experimental studies concentrated on the C-terminal ends or individual Ig domains (10,17,27), we here expand our focus to the whole C-terminal portion of myomesin, aiming at understanding its subexperimental-resolution molecular mechanics. In MD simulations, we found helices to exhibit fast nanosecond-scale unfolding and refolding dynamics under constant force, with the adjacent Ig domain stabilizing the interfacial part of the α-helix. Unexpectedly, small forces of up to ~5 pN stabilized the α-helix secondary structure, which, in the absence of Ig domains, tended to partially unfold. The Ig domains' mechanical stability largely exceeded that of the α-helices, and increased toward the dimerized C-terminal my13 domain.
METHODS

To explore the conformational dynamics of myomesin under stretching forces, we performed MD simulations of different fragments of the C-terminal portion, which included the helices and Ig domains from my9 to my13. The simulated structures were the following:

1. Individual helices, namely the helix between my9 and my10 (α₉), the helix between my10 and my11 (α₁₀), and the helix between my12 and my13 (α₁₂);
2. Combined Ig and helical constructs, namely my9-α₉-my10-α₁₀-my11 (also denoted my9-my11) and dimeric my12-α₁₂-my13 (denoted (my12-my13)₂); and
3. Individual Ig domains, from my9 to my13, as well as the my13 dimer.

Structural equilibration

The x-ray structures of two myomesin C-terminal segments, my9-my11 (PDB:2Y23 (16)) and (my12-my13)₂ (PDB:2R15 (15)) (insets in Fig. 1), were subjected to MD simulations for structural equilibration. The MD and force-probe MD simulations of (my12-my13)₂ (see below) have already been described in Berkemeier et al. (17), where the focus had been on the force-extension behavior of this individual structural unit. We used the WHAT IF (28) package to determine the protonation states of all histidine residues. On solvating the molecular structure with TIP4P water (29) in a simulation box, we ensured the distance between the protein and the box edge to be 1.5 times the nonbonded-interaction cut-off distance of 1.0 nm. We used an ionic concentration of 0.1 mM to mimic the physiological environment. We chose the GROMACS 4.5.x package (30) for all the subsequent MD simulations, and the OPLS-AA force field (31) for the protein. The simulation systems for my9-my11 and (my12-my13)₂ comprised ~540,000 and ~630,000 atoms, respectively. In all simulations, we removed artificial boundary effects by employing periodic boundary conditions. We used the particle-mesh Ewald method (32) to account for long-range electrostatics. To use a simulation time step of 2 fs, we used LINCS (33) to constrain all bond vibrations.
We simulated an NpT ensemble in all simulations, using a temperature of T = 300 K and a pressure of p = 1 bar. The temperature coupling method was Nosé-Hoover (34,35), with a coupling time constant τT = 0.4 ps; the pressure coupling method was Parrinello-Rahman (36), with a coupling time constant of τP = 4 ps. After a simulation time of 50 ns, we cut the final equilibrated structures into different parts, namely the individual helices, the Ig domains my9-my13, and the dimer of two my13 domains, and further solvated them in appropriately sized simulation boxes with the same solvent conditions as described above. We equilibrated the individual helices for 400 ns to ~2 µs and monitored their secondary structures. We also equilibrated all five individual Ig domains and the my13 dimer for another 10 ns before subjecting them to force-probe MD (FPMD) and force-clamp MD (FCMD) simulations.

Helices under force

All three myomesin helices studied here have a similarly high helical propensity, as depicted in Fig. S1 in the Supporting Material. Because the force-extension behavior of α₁₂ has already been reported in Berkemeier et al. (17), we chose it as a representative helix to investigate its response to a constant pulling force in our FCMD simulations. We used different constant forces ranging from 5 to 100 pN applied to the two ends of α₁₂. Our FCMD simulation lengths were determined by the responses of the helix under force. For example, higher forces lead to helix unfolding on the nanosecond timescale, which resulted in the termination of our simulations after complete helix unfolding. Meanwhile, reversible unfolding and refolding events of helical turns were observed in the lower force range, which were monitored over sampling times of up to 2 µs. We assessed helical structures using the software DSSP (37). Apart from the application of a constant pulling force, the other simulation parameters in FCMD were those described above for the equilibrium simulations.
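For orientation, the equilibration settings listed above map onto a GROMACS 4.5-era .mdp fragment roughly like the sketch below. The option names are standard GROMACS keywords, but this is a reconstruction from the text, not the authors' input file; any value not stated above is an assumption.

```ini
; Sketch only -- reconstructed from the parameters stated in the text.
integrator            = md
dt                    = 0.002             ; 2 fs time step
constraints           = all-bonds         ; all bond vibrations constrained
constraint_algorithm  = lincs
coulombtype           = PME               ; particle-mesh Ewald electrostatics
rlist                 = 1.0               ; nm, nonbonded cut-off from the text
rcoulomb              = 1.0
rvdw                  = 1.0
tcoupl                = nose-hoover       ; T = 300 K
tau_t                 = 0.4
ref_t                 = 300
pcoupl                = parrinello-rahman ; p = 1 bar
tau_p                 = 4.0
ref_p                 = 1.0
pbc                   = xyz               ; periodic boundary conditions
```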
Ig domains under force

All individual Ig domains and the (my13)₂ dimer were subjected to forced unfolding or detaching for the first time in FPMD simulations, mimicking the AFM experiments (38). In these simulations, we attached two virtual springs to the two terminal residues of the domain, or to the two N-terminal residues in the my13 dimer. The pulling force was generated by moving these two springs with constant velocity away from the center of the protein. We used a pulling velocity of 0.5 nm·ns⁻¹ and a spring constant of 500 kJ·mol⁻¹·nm⁻² in all FPMD simulations. To accommodate the extending protein during unfolding, we increased the simulation box dimension along the pulling direction to ~25.0 nm, resulting in system sizes of ~160,000 atoms. Other simulation parameters were the same as those listed above for the equilibration simulations. We should note that full forced unfolding of either of the two molecular segments, my9-my11 or (my12-my13)₂, would require simulation box dimensions of ~80 nm along the axis of force application, which is computationally very demanding. We also used FCMD simulations to probe the stability of myomesin Ig domains and compared them to those in titin, such as I27 (39). Constant forces in FCMD simulations effectively decrease the energy barrier between the folded and unfolded states of these Ig domains, and exponentially correlate with the reciprocal of the time needed for energy-barrier crossing (40,41). We chose the my12 domain and the my13 dimer for these simulations because of their high rupture forces among the Ig domains (see below). We also simulated the titin Ig domain I27 for comparison. These FCMD simulations were continued only until the first unfolding event, allowing the structures to be accommodated in smaller simulation systems.
After resolvation, energy re-minimization, and solvent equilibration, we finally applied constant forces ranging from 250 to 1328 pN to the termini of the domains (or the two N-termini of the my13 dimer) in opposite directions. We used the same simulation parameters as described above if not otherwise specified. Under mild applied constant forces, the distance between the two pulled groups shows a plateau before an abrupt increase caused by Ig domain unfolding or dimer detaching in our simulations. We defined the dwell time t as the time between the initiation of force application and the abrupt increase of the end-to-end distance due to domain unfolding or dimer dissociation. We obtained t for each of the three proteins at 3-4 different constant forces (between 200 and 800 pN) (Fig. 5 B).

Myomesin helices in equilibrium

Former studies have demonstrated that the helix α₁₂ connecting adjacent Ig domains in myomesin serves as a strain absorber in the molecule by providing force-resisting viscosity as well as extensibility (17). However, the atomic details and sources of the elasticity of α₁₂ and other myomesin helices under force, especially at the subexperimental resolution scale, have remained unclear. Here we assessed the molecular basis of this elasticity by MD simulations. The three structurally known helices in myomesin show lengths ranging between 19 and 25 residues. Although they share hardly any sequence similarity, they were predicted to have a relatively high helical propensity in common (see Fig. S1) (42,43). As shown in Fig. 1, they are solvent-exposed for roughly half of the helix length, while the remaining N-terminal half packs itself against the adjacent Ig domain by hydrophobic interactions. We first asked how conformationally flexible the helices were in the context of the myomesin molecule and with external mechanical stress being absent.
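The dwell-time extraction described in the Methods above amounts to detecting the first sustained departure of the end-to-end distance from its folded-state plateau. A minimal sketch with a synthetic trace (the helper name, threshold, and all numbers are illustrative, not the authors' analysis code):

```python
# Hypothetical helper: dwell time t = time from force application until the
# end-to-end distance first exceeds the folded plateau by a jump threshold.
def dwell_time(times, distances, plateau, jump=1.0):
    """Return the first time (ns) at which distance > plateau + jump (nm)."""
    for t, d in zip(times, distances):
        if d > plateau + jump:
            return t
    return None  # no unfolding/dissociation observed in this trace

# Synthetic trace: ~4 nm plateau until 12 ns, then steady extension.
times = [0.1 * i for i in range(300)]                      # ns
dists = [4.0 if t < 12.0 else 4.0 + 0.8 * (t - 12.0) for t in times]
t_dwell = dwell_time(times, dists, plateau=4.0, jump=1.0)
```

In practice the plateau value and jump threshold would be chosen per construct, since an unfolding Ig domain and a dissociating dimer produce jumps of different magnitude.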
To restrict the simulation systems to a reasonable size, two segments of the structurally known myomesin molecule were subjected to equilibrium simulations. One contained three Ig domains, my9-my11, and two helices, α₉ and α₁₀; the other contained the myomesin dimer consisting of two my12, two my13, and two α₁₂ helices, as shown in Fig. 1. Both segments were flexible in simulations. In the 50-ns equilibration, both structures showed a high root mean-square deviation of up to 1.2 nm. The root mean-square deviation of individual Ig domains in both segments was only ~0.2 nm, as expected for a structurally well-defined Ig domain. The high structural deviation was caused by the bending of the helices, as depicted in Fig. 2 A for the example of the my12-my13 dimer. This hinge motion was reversible on the nanosecond timescale. Thus, helices in myomesin can act as flexible linkers by performing hinge motions at the C-terminal solvent-exposed helical section. In contrast, the interactions between the helices and the adjacent Ig domains were firmly maintained throughout the simulations. A large hydrophobic surface area was buried between the helix and the Ig domain. The tight packing between the two was established by large side chains, such as phenylalanine and leucine. We also monitored the tilting angles between adjacent Ig domains during the simulations to quantify the hinge motion of each helix. To do so, we measured the direction vector of each Ig domain, which was defined from the first residue to the last one in that domain (16). The tilting angles between vectors across adjacent Ig domains are shown in Fig. 2, B-D. Their average was in good agreement with the tilting angles found in the crystal structures (16). However, they showed strong deviations from the mean, with standard deviations between 12° and 17°.
This suggests that while the crystal packing prevents such large-scale domain motions, myomesin in solution can in principle show fluctuations of Ig domains relative to each other by helix hinge motions. The pronounced ability of the myomesin helices to form hinges between two neighboring Ig domains might be of physiological importance. In this way, myomesin could buffer shearing forces as they might occur within the sarcomere during the force-generating cycle. Helical hinging allows myomesin to bridge shifted myosin molecules, and as such might be one crucial ingredient for M-band structural integrity.

[FIGURE 1: Simplified molecular connections in the M-band of the sarcomere in striated muscle. (Eclipse spheres) Myomesin domains. The two all-atom molecular structures used in this study, my9-my11 and dimeric my12-my13, are shown (insets), with the same coloring as in the scheme to highlight their position within myomesin.]

Biophysical Journal 107(4) 965-973

We next tested whether the hydrophobic interfaces affect the equilibrium stability of the helices, by subjecting the individual helices, namely α₉, α₁₀, and α₁₂, to equilibrium MD simulations for 500 ns to 2 µs in the absence of Ig domains. We then compared the helicity of these supposedly helical fragments with and without hydrophobic protection from the Ig domain, as shown in Fig. 3. The three helices, when embedded into the myomesin molecule, showed helicities of 80-90%, slightly below the helicity of virtually 100% observed in the crystal structures. The hinge motion described above led to a minor unfolding of the helix with respect to the experimental structure. When removing the protection of the N-terminal helix portion from solvent by the Ig domain, the helicity significantly decreased to ~22% on average (see Fig. S2). Thus, the Ig domains strongly stabilize the secondary structure of myomesin helices, which otherwise quickly and largely unfold.
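The helicity figures just quoted (80-90% with the Ig interface, ~22% without) are per-frame fractions of residues that DSSP assigns the α-helix code 'H'. A toy sketch with hypothetical 20-residue frames (not the authors' script):

```python
# Per-frame helicity: fraction of residues DSSP labels 'H' (alpha-helix).
def helicity(dssp_codes):
    return dssp_codes.count('H') / len(dssp_codes)

# Hypothetical frames: the N-terminal half stays helical when packed
# against the Ig domain; mostly coil ('-') when solvent-exposed.
frame_protected = 'HHHHHHHHHHHHHHHHHH--'   # 18 of 20 residues helical
frame_exposed   = 'HHHH----------------'   #  4 of 20 residues helical
```

Averaging this fraction over all frames of a trajectory gives the percentages reported in the text.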
Our result of a high flexibility and instability of the solvent-exposed helix portions is in line with the finding by Berkemeier et al. (17). Because the other myomesin helices show hydrophobic packing to Ig domains similar to α₁₂, our data suggest the Ig domains to be the general factor stabilizing myomesin helices.

Helix elasticity

Although helicity overall decreased when removing the hydrophobic protection from the myomesin Ig domains, all helices occasionally showed refolding events of helical turns in solvent, interestingly hinting at a nanosecond-scale reversibility of unfolding. This effect is relevant to the mechanical function of myomesin, yet is beyond the capacities of AFM experiments, which showed a lack of hysteresis of force-extension profiles on the millisecond timescale (17). We used DSSP (37) to monitor the secondary structure of the helices in the simulations, as shown in Fig. 4 A for α₁₂. Blue areas indicate α-helical secondary structure, and white areas any other structure. Although most of the helicity was lost on average, numerous unfolding and refolding events of individual helix turns were observed. Apparently, the preference for α-helical secondary structure and the destabilization of this helix balanced each other such that a dynamic equilibrium of low helical content was established, resulting from the solvent-exposed hydrophobic patches, which within a full myomesin molecule are protected by the Ig domains. This feature of fast helix refolding is likely to be a key component of myomesin elasticity, which is involved in fast force-generating cycles during muscle operation. In contrast to this scenario of individual helices in water, under physiological conditions myomesin helices are inserted between two adjacent Ig domains, and are thus embedded into the tightly packed M-band of the sarcomere. Consequently, the two ends of the helices are positionally restrained and subjected to stretching.
Interestingly, as confirmed by previous AFM experiments, these helices were capable of resisting axial pulling force not only in the first-round extension but also after recoiling, which suggested helical structure to reform on millisecond timescales (17). To fully understand how these helices unfold and refold under force, we here applied a range of constant pulling forces to the termini of the α₁₂ helix in FCMD simulations. We first compared the α-helical conformation of α₁₂ in equilibrium and under a stretching force of 5 pN during five 400-ns MD simulations for each case. As shown in Fig. 4 A, α₁₂ was helical to an only minor extent in equilibrium when the interface to the Ig domain was absent, as discussed above. We determined an average helical content of 23.0% (Fig. 4 C). Counterintuitively, the helical content significantly increased when a pulling force of 5 pN was applied (Fig. 4 B), on average to 31.3% (p-value < 2.2e−16, Wilcoxon test). Thus, more residues preferred an α-helical conformation when α₁₂ was held at a low force of 5 pN as compared to the absence of force. Apparently, a stretching force enhances helicity in α₁₂, and supposedly in other myomesin helices too, given their only minor helical propensity (see Fig. S1). The application of a 5-pN force leads to an only minor extension of the end-to-end distance of the helix, from 1.45 to 1.87 nm (Fig. 4 C (inset, bottom) and see Table S1 in the Supporting Material). This results in an estimate of the work added to the helix by pulling of 1.3 kJ/mol. We interpret this as an ability of α₁₂ to store mechanical energy before finally fully elongating at higher forces. To further analyze the numerous events of α-helical turn refolding, we monitored the end-to-end distances of α₁₂ as well as its helicity in equilibrium and under forces. As shown in Fig. S3 and Fig. 4 C (inset, top), the end-to-end distance of the helix is extended on average by the 5-pN force from 1.45 nm in equilibrium to 1.87 nm.
The corresponding work done by this pulling force over this distance difference is 1.26 kJ/mol. This work is not enough to fully extend the peptide, but serves as a counterplay to bending and coiling entropy and prevents the peptide from fully collapsing, which also increases the probability of forming hydrogen bonds in the backbone and thus helical turns. If we define peptide state 1 as <50% of the α₁₂ residues in helical conformation, with state 2 being not less than 50%, we observed that the helix dwelled more in state 2 under 5 pN force than in equilibrium. There are 1387 transitions from state 1 to state 2 under this force, compared to 1135 without force, in the second half of our five independent simulations. The longest dwell time in state 2 is also longer under force, with 13.75 ns compared to 6.45 ns in equilibrium. By cumulatively collecting all the transitions from state 1 to state 2 (see Fig. S4), we estimated a characteristic transition time to the helix-dominated state of 151 ps at 5 pN instead of 168 ps at 0 pN. This estimate was confirmed by the reverse transitions, with 217 ps at 5 pN compared to 193 ps at 0 pN (see Fig. S4). In summary, a 5-pN pulling force enhances spontaneous turn reforming and helix stability. We also analyzed α₁₂ under forces of up to 100 pN in FCMD simulations to quantify its mechanical stability. At each force, we monitored the secondary structures and end-to-end distances of the peptide (see Fig. S5), together with a wormlike chain (WLC) model using the same contour and persistence length employed previously for myomesin (17). The accumulated simulation time for this analysis was 7.8 µs, to allow for extensive sampling of conformational space at each force. Overall, the helix, when not being protected by an Ig domain, largely behaved like a WLC in the probed range of forces, agreeing with the overall very minor helical content. For forces of 35 pN and higher, the α-helical content vanished within our nanosecond timescale.
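The 1.26 kJ/mol estimate above is simply force times average extension, converted from per-molecule to molar units via the Avogadro constant; a quick sketch of the arithmetic:

```python
# Work of a constant 5-pN force over the 0.42-nm average extension
# (1.87 nm - 1.45 nm), converted from J/molecule to kJ/mol.
N_A = 6.02214076e23            # Avogadro constant, 1/mol
force_N = 5e-12                # 5 pN in newtons
dx_m = (1.87 - 1.45) * 1e-9    # 0.42 nm in meters
work_J = force_N * dx_m                  # J per molecule
work_kJ_mol = work_J * N_A / 1000.0      # ~1.26 kJ/mol, as in the text
```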
In this comparably high-force region, the end-to-end distance closely followed that of an analogous, i.e., unstructured, WLC. Only at forces of 10 pN and lower was a significant fraction of helicity observed, as described above, giving rise to average end-to-end distances of α₁₂ that were lower than the WLC predictions. We note that two outliers were observed for intermediate forces, namely 20 and 30 pN (see Fig. S5 C). In these two simulations, α₁₂ was trapped in a coiled state during our submicrosecond timescale, causing a divergence from the WLC curve, which we believe to be an overestimation of the coiling propensity at these forces due to limited sampling. Taken together, our force-extension data agreed with the previous AFM study in that myomesin helices largely followed the behavior of a WLC. We note that experiments had probed the force-extension behavior of myomesin helices between unfolded Ig domains only at forces higher than ~15 pN (see Fig. S1 in Berkemeier et al. (17)). We found that helical structure was dominant at, and thus can resist, forces of 5 and 10 pN, which were below the experimental force range. Here they cause a divergence from a purely WLC behavior. Although myomesin helices with adjacent Ig domains were primarily helical and showed a distinct plateau in the force-extension curve upon unfolding, the force plateau vanished in the AFM experiments if Ig domains had been unfolded before the helix extension during stretch-relax cycles (17). Our MD simulations of individual helices indicated a lower helicity of these peptides when adjacent Ig domains were absent, suggesting the α-helical conformation, and therefore an intact neighboring Ig domain for stabilization, to be required for the force resistance observed experimentally. Hence, it is the presence of an interface to an Ig domain that equips myomesin helices with a significantly higher resistance against unfolding up to ~30 pN.
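The WLC comparison above can be sketched with the standard Marko-Siggia interpolation formula for a worm-like chain. The contour and persistence lengths below are illustrative assumptions (the paper's actual values are not given in this excerpt):

```python
KT = 4.14  # pN*nm, thermal energy at T = 300 K

def wlc_force(x, L=9.0, P=0.4):
    """Marko-Siggia interpolation: force (pN) to hold a chain of contour
    length L (nm) and persistence length P (nm) at extension x (nm)."""
    r = x / L
    return (KT / P) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

def wlc_extension(force, L=9.0, P=0.4):
    """Invert wlc_force by bisection (force is monotonic in x on [0, L))."""
    lo, hi = 0.0, L * (1.0 - 1e-9)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if wlc_force(mid, L, P) < force:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At tens of pN the predicted extension approaches the contour length, matching the unstructured-chain behavior seen at 35 pN and above; at a few pN the model already predicts substantial extension, which is where residual helicity makes the real peptide shorter than the WLC prediction.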
In all, lower force enhances spontaneous turn formation, probably by restraining the conformational space of the helix ends, whereas higher force unfolds helical turns by breaking down hydrogen bonds (Fig. 4 C (inset, top) and see Fig. S5). Previous AFM experiments had demonstrated that stochastic refolding of hydrogen bonds in a stretched single α-helix could cause an increase in molecular stiffness (18,19). The fast refolding of helical turns rendered α₁₂, and probably the other myomesin helices, highly elastic. This elasticity can give rise to a restoring force within the sarcomere, and therefore can support the reestablishment of relative structural arrangements and the survival of the sarcomere. In this scenario, fast helix refolding is vital to the relaxation period of the force-generating cycle of the sarcomere. It is worth noting that the fast refolding of helices observed here is different from helical nucleation from fully random coils, which is a much slower process on microsecond or even millisecond timescales (44,45). The helical turn refolding in this study is also faster than formerly reported (45), a difference possibly due to the additional pulling forces or due to the limited experimental time resolution to detect the potential subnanosecond helicity fluctuations observed here. Myomesin α₁₂ is considered to provide the extending elements that give elasticity and absorb external stress (17). Our results strongly indicate the same characteristics for the other helices studied here. Each single helix can extend to >150% of its initial distance under force, which is critical to the adaptivity of myomesin in the M-band of the sarcomere. Except for one of its isoforms, EH-myomesin, myomesin molecules do not contain an elastic module equivalent to the PEVK segment in titin. Helices in myomesin combine a surprisingly high force resistance against unfolding with a nanosecond-scale refolding propensity, in contrast to disordered protein sequences.
As such, they compensate for the lack of disordered modules typically employed for facile and reversible extension at low forces. Considering that unfolding of Ig domains is a rare physiological event in muscle (as tensile forces are believed to be below critical domain-unfolding forces (46)), and that the recovery of Ig structures would require seconds, by integrating helices into its molecular structure myomesin is able to elongate significantly without taking the risk of unfolding its Ig domains.

Mechanical robustness of myomesin Ig domains
The highly elastic helices are inserted in-between Ig domains. This protein domain is also the major component of other muscle proteins such as titin and tenascin, and is known to be mechanically very robust (47). All five C-terminal Ig domains in myomesin are structurally known and are highly homologous both in sequence and in structure. As it has been pointed out that these five Ig domains, connected by α-helices, build up the part of myomesin most important for its mechanical function (1), we next asked how their mechanical stability differs among each other and from titin Ig domains and the related fibronectin domains. To this end, all five known myomesin Ig domains, my9-my13, and the my13 dimer were independently subjected to a pulling force in FPMD simulations, as shown in Fig. 5 A (top). Average rupture forces of the individual myomesin Ig domains ranged from 440 to 720 pN (Fig. 5 A). We note that the rupture forces obtained in our simulations cannot be directly compared to the much lower forces probed in AFM experiments (17), due to the orders-of-magnitude higher loading rates used here. However, relative mechanical stabilities are likely to be preserved. We next probed the mechanical stability of the myomesin dimer interface formed by my13. Force was applied to the N-termini of the my13 homodimer, with the same loading rate used for the unfolding simulations.
We obtained a detachment force of 818 ± 51 pN in FPMD simulations, which was significantly higher than the forces needed to unfold the Ig domains of myomesin (Fig. 5 A). The mechanical superiority of the my13 dimer was further confirmed by FCMD simulations, in which different constant forces were used to hold my12 and the my13 dimer (Fig. 5 B). Again, the my13 dimer dissociated after longer dwell times at a given force, in comparison to the unfolding times of my12. These dwell times showed a highly similar logarithmic dependency on force (linear fit in Fig. 5 B), so that the same relative stability can be expected in the more relevant low-force regime (40). The predicted transition state distances, 0.51 nm for my12 and 0.36 nm for the my13 dimer, are in line with our simulation results (see Fig. S6). This hierarchy in mechanical stability had been partially observed in the AFM experiments, where my11 and my12 unfolding preceded dimer disintegration (17). This domain interface even outperformed the robustness of titin I27, one of the most mechanically stable protein domains known, in our simulations at constant forces (Fig. 5 B, black). Also in FPMD simulations, the detachment forces of the my13 dimer were found to be higher than those of Ig domains in titin, such as I1 and I27, when unfolded at comparable loading rates in MD simulations (48). We ascribe the surprisingly high detachment force of the myomesin dimer to the interdomain β-sheet formed across the two my13 domains, and also to the assisting interdomain polar contacts (see Fig. S7 A). The interface is established by an extended intermolecular β-sheet formed by the N-terminal β-strand of each my13 (compare Fig. 1), with a strand direction parallel to the direction of force application. This connection is further enhanced by interdomain salt bridges.
Biophysical Journal 107(4) 965-973
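The logarithmic dependence of dwell time on force is what a Bell-type model, tau(F) = tau0 * exp(-F*dx/kBT), predicts. The sketch below is illustrative only: the prefactor tau0 and the force values are hypothetical, and only the transition-state distances 0.51 nm (my12) and 0.36 nm (my13 dimer) come from the text. It shows how dx follows from the slope of ln(tau) versus F.

```python
import math

KBT = 4.11  # thermal energy at ~298 K, in pN*nm

def bell_lifetime(force_pn, tau0, delta_x_nm):
    """Bell-model dwell time (same units as tau0): tau0 * exp(-F*dx/kBT)."""
    return tau0 * math.exp(-force_pn * delta_x_nm / KBT)

def transition_distance(f1, tau1, f2, tau2):
    """Transition-state distance (nm) from two (force, dwell-time) points:
    dx = kBT * ln(tau1/tau2) / (f2 - f1)."""
    return KBT * math.log(tau1 / tau2) / (f2 - f1)
```

Two dwell times measured at different constant forces suffice: feeding model-generated points back through transition_distance recovers the assumed dx, which is, in essence, how values such as 0.51 nm and 0.36 nm can be extracted from constant-force unfolding data.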
Detachment required, in total, 16 cross-domain hydrogen bonds and salt bridges, plus additional hydrophobic interactions, to rupture virtually at once (see Fig. S7 B), explaining the outstanding resilience of the my13 dimer interface. Interestingly, the my9-my13 fragment showed a weak tendency toward higher mechanical stabilities at the C-terminus of myomesin, as reflected by the increasing rupture forces from left to right in Fig. 5 A, with the only exception being the my11-my12 pair. This tendency can be expected to prevail throughout myomesin to some extent, as the N-terminal β-sandwich domains are the mechanically less robust FNIII domains (27,49,50). Based on these considerations, we hypothesize that after the elongation of helices, at extremely high forces, myomesin would further elongate by FNIII and Ig domain unfolding, starting at the N-terminal side and proceeding, on average, to the C-terminal side. In this scenario, the late onset of unfolding events at the C-terminal side of myomesin might represent an additional mechanism protecting the my13 dimeric interface. As a consequence of the my13 dimer's outstanding mechanical stability, the linkage between two myomesin molecules stays intact while the domain-connecting helices unwind and, eventually, FNIII/Ig domains unfold upon passive overstretching of the muscle. This mechanical hierarchy enables myomesin to establish a firm yet highly stretchable bridge between two antiparallel myosin fibrils. The importance of the myomesin dimer linkage has been further verified by experiments with myomesin mutants showing decreased dimer stability, which is linked to diseases such as hypertrophic cardiomyopathy (13).

CONCLUSIONS
In this study, we used MD simulations to dissect the mechanics of myomesin, aiming to understand its function in the M-band of the sarcomere. Myomesin is an important structural molecule for establishing the intricate connective network anchoring muscle fibrils.
It uses flexible and elastic α-helices as elastomers to provide molecular extensibility and elasticity. By ultrafast nanosecond-scale refolding, these helices shorten virtually instantaneously in a relaxed sarcomere, and thereby are able to quickly restore the network structure, a key factor in maintaining M-band stability. Interestingly, small forces of ~5 pN stabilize the helical structure, suggesting a toughening of the helices by small mechanical loads. In line with this finding, previous studies suggested that axial stretching forces at the termini of a helix reduce peptide terminal vibrations and strand bending, accompanied by a decrease in peptide entropy and an enhancement of helicity (51-53). Because isolated myomesin helices showed significant helicity only at forces smaller than 15 pN, it is the interface to adjacent Ig domains that significantly fosters mechanical resistance of the helices up to 30 pN. Hence, the hydrophobic interface between the Ig and helical building blocks in myomesin is a major molecular prerequisite of the force-buffering function of myomesin. Myomesin is the only muscle molecule featuring interdomain helices as elastic components. Other elastomeric proteins contain disordered regions, such as the PEVK domain in titin and twitchin (54,55). Myomesin instead accommodates an intrinsically disordered region, the EH segment, but only in its embryonic isoform. As force-buffering regions, helices differ from disordered regions, according to our simulation results, in two aspects:
1. The ~30 pN force plateau at low to intermediate extensions provides a comparably high and constant absorption of mechanical work during loading.
2. Myomesin helices ensure a small yet finite resistance to bending, as compared to disordered random coils such as PEVK.
Apparently, these mechanical properties are beneficial for the cross-linking and force-buffering function of myomesin dimers in the M-band.
Finally, myomesin's dimer interface is tougher than its Ig domains, which ensures that the link between antiparallel myosin fibrils remains intact during the force-generating cycles of the sarcomere. An analogous, yet structurally different, case of interdomain association with extraordinary mechanical stability is the sarcomeric titin Z1Z2-telethonin complex (56-58). In both cases, interdomain β-sheets oriented parallel to the direction of the tensile forces acting on the molecule are the basis of the high resistance against rupture (56). Myomesin also uses side-chain salt bridges to further reinforce the interdomain β-sheet. A tendency for rupture forces to increase along the myomesin molecule, from the fibronectin domains at the N-terminal portion to the mechanically more robust my11 and my12 Ig domains, might further aid in protecting the my13 dimer from dissociation. Myomesin dimers actually experience forces in the sarcomere close to those required for possible domain unfolding and dissociation, as was recently demonstrated for titin (59). Taken together, the molecular composition of myomesin, with soft α-helices embedded into a tough dimer, creates a mechanically adaptive molecule for elasticity in muscle through a mechanical hierarchy of its components. We thank our collaboration partners, especially M. Rief and M. Wilmanns, for discussions and for providing us with experimental results and myomesin molecular structures. We acknowledge funding from the Klaus Tschira Foundation.
Effect of Macro-, Micro- and Nano-Calcium Carbonate on Properties of Cementitious Composites—A Review

Calcium carbonate is widely used in cementitious composites at different scales and can affect the properties of cementitious composites through physical effects (such as the filler effect, dilution effect and nucleation effect) and chemical effects. The effects of macro- (>1 mm), micro- (1 μm-1 mm) and nano-sized (<1 μm) calcium carbonate on the hydration process, workability, mechanical properties and durability are reviewed. Macro-calcium carbonate mainly acts as an inert filler and can be involved in building the skeletons of hardened cementitious composites to provide part of the strength. Micro-calcium carbonate not only fills the voids between cement grains, but also accelerates the hydration process and affects the workability, mechanical properties and durability through the dilution, nucleation and even chemical effects. Nano-calcium carbonate also has both physical and chemical effects on the properties of cementitious composites, and these effects are even more pronounced than those of micro-calcium carbonate. However, agglomeration of nano-calcium carbonate reduces its enhancement effects remarkably.

Introduction
Concrete is a kind of multi-component, multi-scale composite. Because of its relatively low price, diverse sources and good durability, concrete is widely used in many kinds of buildings and structures. However, with the massive use of concrete, environmental pollution and resource consumption inevitably follow. Cement is a necessary raw material for concrete production; on the other hand, cement manufacture is also one of the most energy-intensive of the mineral process industries [1]. According to data from the U.S. Geological Survey in 2017, global cement production reached approximately 4.2 billion tons [2] and is expected to increase year by year.
Moreover, major growth is foreseen in some developing countries such as China and India [3]. Thus, resource and energy consumption will become an even more serious problem as cement production increases. At the same time, 0.87 tons of carbon dioxide is generated per ton of cement produced [2], so the large amount of carbon dioxide emissions is another severe problem that needs to be solved. Aggregates are also an important constituent of concrete, and aggregate consumption is detrimental to the environment as well.
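A back-of-the-envelope check of the two figures quoted above (4.2 Gt of cement per year and 0.87 t of CO2 per ton of cement) illustrates the scale of the problem:

```python
cement_gt_per_year = 4.2   # global cement production, Gt/yr (USGS 2017, as cited)
co2_per_ton_cement = 0.87  # t CO2 per t cement (as cited)

co2_gt_per_year = cement_gt_per_year * co2_per_ton_cement
print(f"Cement-related CO2 emissions: about {co2_gt_per_year:.2f} Gt/yr")
```

That is roughly 3.7 Gt of CO2 per year from cement production alone, which is why partial replacement of clinker is so attractive.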
Using SCMs and mineral admixtures is not only an effective way to reduce carbon dioxide emissions and resource consumption, but also an economic and environmentally friendly way to produce cement or concrete, because most SCMs and mineral admixtures are industrial waste products [4]. Among these SCMs and mineral admixtures, limestone is widely used as a kind of filler material [5-9], aggregate [10-13], micro-fiber [14-20] and early strength agent [21] because of its various scales (macro-, micro- and nano-scale) [2,22], morphologies (bulk, granular and fibrous shapes) and crystal systems (calcite, aragonite, vaterite and amorphous calcium carbonate) [2]. Limestone can be formed of various minerals such as calcite, aragonite, vaterite and amorphous calcium carbonate [2]. Among these, calcite is the most common and stable.
So most natural limestone is formed of calcite [23]. It has been confirmed that incorporation of calcium carbonate is not detrimental to mechanical properties and even has a positive synergic effect on the early-age strength, hydration process, durability and microstructure of cementitious composites [2,22,24-26]. Hence, much research has been conducted to clarify the mechanisms by which calcium carbonate affects cement paste, mortar and concrete [7,9,27]. In 1938, Bessey et al. [2] first found the formation of calcium carboaluminate in the hydration process of cement when calcium carbonate was incorporated, which was called the chemical effect of calcium carbonate, and similar results have been found in later studies [2,22,28-33]. Subsequently, a large number of studies were conducted on the role of calcium carbonate in cement paste, mortar and concrete. It is now widely accepted that the density of the matrix can be increased when calcium carbonate is incorporated, because of its filler effect, and that the hydration process can be accelerated because of its nucleation effect [2,22,34]. When the particle size of calcium carbonate is comparable to that of cement grains, the dilution effect also influences the workability and hydration process of cement [2,35,36]. However, these effects are not independent and often act in a coupled manner on the mechanical properties, hydration process, workability and durability of cementitious composites, depending on the particle size, content and morphology of the calcium carbonate. Figure 1 shows the number of publications about using limestone in concrete from 2000 to 2017. The application of limestone is still a hotspot attracting many researchers, especially in recent years.
Based on extensive references and research on the effects of calcium carbonate on the properties of cementitious composites, many standards have been set to guide the use of calcium carbonate (limestone) in interground and blended cement production, and these standards are sorted chronologically in Table 1. It can be found that calcium carbonate is widely used in many countries, acting as aggregates, fillers or admixtures, and its content varies from country to country because of its various applications and particle sizes. Because many studies have been conducted on the effects of calcium carbonate on the properties of cementitious composites, from fresh mixtures to hardened products, this review focuses on the particle size of calcium carbonate and the influence of macro-, micro- and nano-calcium carbonate on the hydration process, mechanical properties, workability and durability of cementitious composites. Moreover, through summaries of previous references, some constructive suggestions and expectations are proposed as well.

Macro-Calcium Carbonate
Macro-calcium carbonate refers to calcium carbonate with particle sizes at the millimetric (>1 mm) level, such as coarse limestone aggregates [10,11] and coarse limestone sand [13]. At this scale, the chemical and nucleation effects of calcium carbonate are not significant, and thereby the influence of macro-calcium carbonate on the hydration process is negligible. However, the water absorption, particle size and constitution of macro-calcium carbonate aggregates (coarse limestone aggregates) strongly influence the workability, mechanical properties and durability of cementitious composites.

Workability
The coarse and fine aggregates generally occupy 70-80% of the concrete volume, and the water absorption of coarse aggregates significantly influences the fresh properties of cementitious composites [11].
It has been found through investigation that the slump loss of fresh concrete is most significant in the first 15 min, and dry coarse limestone aggregates cause a higher slump loss than wet ones because fresh concrete containing dry coarse limestone aggregates has a lower effective water to cement ratio [11]. Moreover, the workability of fresh concrete also depends on the surface filling and particle size of the coarse limestone aggregates. When the fineness modulus of the aggregates decreases, the coarse limestone aggregate ratio decreases, and thus more water is required to achieve the desired workability [37].

Mechanical Properties
The mechanical properties of concrete depend on the water absorption, particle size and constitution of the coarse limestone aggregates. Incorporation of wet coarse limestone aggregates produces concrete with a higher compressive strength than incorporation of dry ones [11]. In addition, utilization of smaller particle size aggregates may produce a higher compressive strength, as shown in Table 2 [37]. When the coarse limestone aggregate dimension is 0-5 mm, the compressive strength of hardened concrete reaches up to 42.12 MPa (w/c = 0.33-0.36, 28 d). At the same time, when part of the mountain sand is replaced by limestone aggregates with a grain size of less than 5 mm, the drying shrinkage of hardened concrete also decreases [38]. The constitution of coarse limestone aggregates influences the strengths and elastic modulus of concrete, especially for high strength concrete (HSC). Due to its low water to cement ratio, the strengths of HSC are determined by the strengths of the aggregates rather than by the bond strength between cement paste and coarse aggregates [39,40]. Therefore, it is the aggregate mineralogy and strength that control the ultimate strength of HSC.
Comparing different constitutions of coarse limestone aggregates, such as calcareous limestone aggregate (85% calcite), dolomitic limestone aggregate (80% dolomite) and quartzitic-gravel aggregate containing schist, dolomitic limestone concrete has the highest compressive strength [40]. Beshr and Almusallam [39,40] obtained similar results when comparing four kinds of coarse aggregates (calcareous limestone, dolomitic limestone, quartzitic limestone and steel slag). In addition, they also found that the steel slag concrete had the highest split tensile strength and elastic modulus, followed by the concrete specimens prepared with the quartzitic, dolomitic and calcareous limestone aggregates, because of the soft nature of calcareous limestone aggregates. These results have also been confirmed by the loss on abrasion of the different coarse aggregates, as shown in Table 3 [39]. However, with the incorporation of some SCMs such as silica fume, the split tensile strength may increase because of the reaction between calcium hydroxide (Ca(OH)2) and silica fume. Thus, concrete prepared with mineral aggregates, such as dolomitic and calcareous limestone aggregates, shows a significant improvement in split tensile strength, especially for the 90 d strength [40].

Table 3. Loss on abrasion in the coarse aggregates [39] (columns: Type of Aggregate; Loss on Abrasion (%)).

Durability
Incorporation of limestone aggregates in concrete can affect its durability [25], especially the acid resistance and fire resistance.

Acid Attack
Concrete used for sewer structures is often attacked by sulfuric acid converted from hydrogen sulfide by bacterial action [13]. To reduce the damage to concrete in acidic conditions, there are two effective ways. First, incorporation of SCMs such as fly ash and silica fume in concrete is effective in reducing acid attack because of the decreased presence of Ca(OH)2, which reacts with acid [41].
Second, the use of a sacrificial medium can reduce the acid concentration near the concrete surface and decrease the rate of deterioration in concrete subjected to acid attack. Calcareous limestone aggregates can act as a sacrificial medium to neutralize the acidic environment and reduce the pH value [13]. In addition, calcareous limestone aggregate concrete has excellent resistance to sulfuric acid attack when SCMs are incorporated in it.

High Temperature Exposure
The compressive strength of concrete after exposure to high temperatures significantly depends on the type of aggregate. The compressive strengths of limestone and siliceous aggregate concrete after exposure to high temperatures have been compared [42,43]. Limestone aggregate concrete has a higher thermal stability than siliceous aggregate concrete, because quartz in siliceous aggregates undergoes a polymorphic change at 570 °C with a volume expansion, whereas the decomposition of calcium carbonate occurs at 800-900 °C [42,44]. However, due to the effects of internal autoclaving, secondary hydration of unhydrated clinkers and SCMs, and the pozzolanic effect, the post-fire strength of concrete may show an increasing trend below 300 °C [43-45], especially for concrete made with siliceous aggregates [43]. Note that when the temperature exceeds 800 °C, concrete deteriorates irreversibly regardless of whether it is prepared with limestone or siliceous aggregates. In conclusion, macro-calcium carbonate such as coarse limestone aggregate plays an important role in controlling the workability, mechanical properties and durability of cementitious composites. Incorporation of macro-calcium carbonate in cementitious composites can improve both ambient and post-fire strengths. Moreover, macro-calcium carbonate can be regarded as an inert filler.
Micro-Calcium Carbonate
Micro-calcium carbonate (1 µm-1 mm), such as limestone powder and limestone dust, is widely used in cement manufacture as a blended or interground material. Though micro-calcium carbonate has no pozzolanic activity and cannot react with alkaline substances such as Ca(OH)2 and calcium oxide (CaO), its incorporation in cement has both physical and chemical effects on the hydration process, the workability of the fresh mixture and the mechanical properties of the hardened products. Thus, it is imprecise to regard micro-calcium carbonate as an inert filler, especially when it has a smaller particle size than the cement grains or is incorporated in ternary or quaternary blends containing SCMs such as fly ash and metakaolin; in these situations micro-calcium carbonate may participate in the cement hydration process and affect the hydration kinetics and microstructure [46-48]. Finally, the mechanical properties and durability are also influenced. Therefore, the effects of micro-calcium carbonate on the hydration process, workability, mechanical properties and durability are reviewed in the following sections.

Hydration Process
As a micro-calcium carbonate, limestone powder has a higher specific surface area and surface energy than macro-calcium carbonate. So the effects of limestone powder on the accumulative hydration heat, the release rate of hydration heat and the hydration products of cementitious composites are different from those of macro-calcium carbonate and are mainly affected by the particle size, content and crystal structure of the micro-calcium carbonate. Table 4 shows the main action mechanisms of limestone powder on the hydration process of cement paste. According to this table, the main action mechanisms of limestone powder on the hydration process are discussed below in terms of particle size, content and crystal structure.
The particle size of micro-calcium carbonate affects its physical effects (the filler effect, dilution effect and nucleation effect) and chemical effects. When a coarser (comparable to or coarser than cement grains) limestone powder is used in cementitious composites, the main action effect of the limestone is the filler effect. Because of its smaller surface energy and lower dissolution in an alkaline environment, the limestone powder hardly participates in the hydration process of cement and may only fill the voids between aggregates such as sand and coarse aggregates. However, when a finer (finer than cement grains) limestone powder is incorporated in cementitious composites, the accumulative hydration heat, the release rate of hydration heat and the hydration products are all greatly different. As for the chemical effect of micro-calcium carbonate, the results may be interesting. Vance et al. [47] investigated the effect of limestone powder particle size on cement hydration using three limestone powders of different fineness. The finer limestone powders (median particle size = 0.7 or 3 µm) significantly accelerate the hydration of calcium silicate and increase the hydration peak (see Figure 2 [47]), because finer limestone powder has a larger specific surface area and surface energy and provides additional nucleation sites for the formation and development of calcium silicate hydrate (C-S-H) [5,9,26,35,47,50,51], which is known as the nucleation effect. Moreover, the second hydration peak is generally attributed to the hydration of calcium aluminate and shows a significant improvement when 0.7 µm limestone powder is incorporated, which indicates the formation of new hydration products such as hemicarboaluminate and monocarboaluminate. The formation of carboaluminates has also been confirmed by many other researchers [5,26,30,51].
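The fineness argument can be made quantitative with a simple geometric estimate: for monodisperse spheres, the specific surface area is SSA = 6/(ρd). The sketch below assumes a calcite density of about 2710 kg/m³ (a handbook value, not taken from the review), and real, rough-surfaced powders will measure higher:

```python
def specific_surface_area(diameter_um, density_kg_m3=2710.0):
    """Geometric specific surface area (m^2/kg) of monodisperse spheres,
    SSA = 6 / (rho * d); a lower bound for real, rough-surfaced powders."""
    d_m = diameter_um * 1e-6  # convert micrometers to meters
    return 6.0 / (density_kg_m3 * d_m)

# The 0.7 um powder offers ~20x the nucleation surface of the 15 um powder
for d in (0.7, 3.0, 15.0):
    print(f"{d:5.1f} um -> {specific_surface_area(d):7.1f} m^2/kg")
```

Even this idealized estimate explains why the 0.7 µm powder provides far more nucleation sites per unit mass than the coarser grades.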
However, hemicarboaluminate is not thermostable; it mainly exists in the early hydration process (before 7 d) and then slowly converts to monocarboaluminate. The formation of carboaluminate depends on many factors, such as the kinetics of hemi- and monocarboaluminate formation, and the dissolution of calcium carbonate is lower at high pH, which causes the actual amount of calcium carbonate participating in the formation of carboaluminate to be far less than the content of limestone powder [51]. Therefore, the intensity of the carboaluminate peaks in the X-ray diffraction (XRD) pattern is low and difficult to detect compared with other hydrates, as shown in Figure 3 [51]. For the second hydration peak, Bentz et al. obtained a similar result through an investigation of the hydration of cement prepared with limestone powders of different fineness [50]. Another explanation for the increased second hydration peak of cement containing fine limestone powder (nano-limestone powder and 4.4 µm limestone powder in reference [50]) may be that limestone powder provides an additional source of calcium ions to the pore solution, even though calcium carbonate has a relatively low dissolution at elevated pH [50]. When a coarser limestone powder (15 µm in reference [47]; 20 µm in reference [35]; 15.7 µm in reference [26]) is used in cementitious composites, the dilution effect is also significant. Though the heat release rate of coarse limestone powder-cement is still higher than that of pure cement, the total hydration heat is comparable to or even lower than that of pure cement, as shown in Figure 2 [47].

Content
The content of limestone powder can also affect the main action mechanism of limestone powder on cement hydration. In general, the nucleation effect increases with increasing limestone powder content, because more nucleation sites can be provided for the formation of C-S-H; the accumulative hydration heat and the heat release rate will also increase. The effect of limestone powder content on its chemical effect may be complicated for the following two reasons: (1) the formation of hemi- and monocarboaluminate mainly depends on the kinetics rather than on the amount of calcium carbonate present; and (2) the dissolution of calcium carbonate is small and the content of aluminate in cement is low as well [51]. However, some quantitative relationships for the chemical effects of limestone powder can still be calculated according to the chemical reaction equations, and the results are shown in Figure 4 [24]. Regions I, II and III are delineated by dotted lines, which means the hydrates in these areas are metastable phases. According to the boundaries of the three areas, the content of carboaluminates is a function of the contents of sulfate, carbonate and aluminate. There is no calcite in regions I to IV, which means the calcite totally participates in the reaction of calcium carbonate and tricalcium aluminate (C3A). But in regions V and VI, the calcite just acts as an inert filler to fill the voids between cement grains. Conversely, the dilution effect is significantly enhanced with increasing limestone powder content, especially for ultra-fine limestone powder [2], because more free water can be substituted by the ultra-fine limestone powder in the voids, so the effective water to cement ratio increases.
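The dilution effect amounts to simple mass bookkeeping: if a mass fraction r of the cement is replaced by limestone powder at fixed water content, and the limestone is treated as fully non-reactive (an idealization), the effective water-to-cement ratio rises from w/c to w/(c(1-r)). A minimal sketch:

```python
def effective_wc(water, cement, replacement_fraction):
    """Effective water-to-cement ratio when a mass fraction of the cement
    is replaced by limestone filler, with the water content held fixed.
    Assumes the filler binds no water chemically (an idealization)."""
    reactive_cement = cement * (1.0 - replacement_fraction)
    return water / reactive_cement

# e.g., replacing 15% of the cement at w/c = 0.40 raises the effective ratio
print(f"effective w/c: {effective_wc(0.40, 1.0, 0.15):.3f}")
```

This is why higher limestone contents, especially of ultra-fine powder, push the paste toward the behavior of a higher water-to-cement ratio mix.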
Figure 2. Influence of particle size on the heat release rate [47].

Crystal Structure
Limestone powders with different crystal structures may have different influences on cement hydration. The influences of aragonite (Sturcal F) and calcite (heat-treated Sturcal F) on cement hydration have been investigated [26]. Calcite can significantly accelerate the hydration process, but aragonite may not. As shown in Figure 5 [52], the planar surface configuration of calcite consists of Ca and O atoms, which is similar to the CaO layer in C-S-H gel, whereas only Ca atoms are detected in the surface layer of aragonite. Calcite therefore promotes the hydration process.
However, because of the similar dissolution processes of calcite and aragonite in an ambient environment, their chemical effects may not be distinguishable [26].
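The chemical effect discussed in the preceding paragraphs can be written compactly in conventional cement chemistry notation (C = CaO, A = Al₂O₃, C̄ = CO₂, H = H₂O, CH = Ca(OH)₂); the stoichiometries below are those commonly quoted in the literature, not taken from the cited references themselves:

```latex
\begin{align}
\mathrm{C_3A} + \tfrac{1}{2}\,\mathrm{C\bar{C}} + \tfrac{1}{2}\,\mathrm{CH} + 11.5\,\mathrm{H}
  &\rightarrow \mathrm{C_3A}\cdot\tfrac{1}{2}\mathrm{C\bar{C}}\cdot\tfrac{1}{2}\mathrm{CH}\cdot\mathrm{H_{11.5}}
  && \text{(hemicarboaluminate)}\\
\mathrm{C_3A} + \mathrm{C\bar{C}} + 11\,\mathrm{H}
  &\rightarrow \mathrm{C_3A}\cdot\mathrm{C\bar{C}}\cdot\mathrm{H_{11}}
  && \text{(monocarboaluminate)}
\end{align}
```

The first product is metastable and converts to the second as more carbonate becomes available, consistent with the early-age behaviour described above.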
Workability
The workability of a fresh mixture containing limestone powder mainly depends on its particle size, content and surface morphology. The viscosity (tested by V-funnel time or rheometer) increases with decreasing particle size of limestone powder, especially when the particle size is comparable to or smaller than that of the cement grains, because of the filler effect and the higher specific area of limestone powder [53]. Therefore, self-compacting concrete (SCC) prepared with fine limestone powder (median particle size < 20 µm [53-55]) may have good segregation resistance and workability. The viscosity is also influenced by the replacement content of limestone powder, but the relationship between replacement content and the variation of viscosity is not linear [53]. The effects of limestone powder on yield stress (tested by spread flow or rheometer) may be complicated. Coarse limestone powder (Blaine fineness = 4430 cm2/g) could reduce the spread flow values (increasing the yield stress), but fine limestone powder (Blaine fineness = 5380 cm2/g) increased the spread flow values (decreasing the yield stress) [53]. Bentz et al. [52] used a finer limestone powder than that in reference [53] and found that fine limestone powder could increase the flowability and decrease the yield stress. Cao et al.
also investigated the effect of the morphology of calcium carbonate on the viscosity and yield stress of cement mortar containing aragonite calcium carbonate whisker (CW) with a needle-like shape (aspect ratio = 10-60) [56]. Both viscosity and yield stress increase with an increased substitution amount of cement because of the whisker's higher specific area. The purity of limestone powder [57] also affects the workability of the fresh mixture, in addition to the above factors.

Limestone Powder
The mechanical properties of cementitious composites containing limestone powder depend on particle size, content and morphology. With decreasing particle size, the early-age compressive strength (before 7 d) is found to increase at a constant content of limestone powder [1,47,54,55]. But at long-term ages, incorporation of finer limestone powder may decrease the compressive strength [1], because the dilution effect of finer limestone powder may be more effective than its filler effect or nucleation effect at the end stage of the hydration process. With increased content, compressive strength and flexural strength decrease [1,47,54,55,58]. On one hand, a high replacement content reduces the amount of cement, which is not good for strength development because limestone powder has no cementitious ability. On the other hand, the dilution effect is more effective with increasing substitution content, and causes a higher effective water-to-cement ratio and lower strength. However, the flexural deflection of polyvinyl alcohol fiber reinforced engineered cementitious composites (PVA-ECC) may be improved after the addition of limestone powder, because of the uniform dispersion of PVA fiber caused by the diluting effect of limestone powder [58].

Calcium Carbonate Whisker
Calcium carbonate whisker (CW) was first used in the paper industry to enhance the toughness of paper.
It is different from limestone powder, which has a bulk shape [35] or granular shape (see Figure 6); instead, it is needle-like [56] (see Figure 7). Because of its shape, CW not only fills the voids to make the matrix dense [14] but also plays a role in resisting the development of micro-cracks, especially when incorporated with steel fiber [16,20,59], PVA fiber [15] and carbon fiber [60]. Moreover, a positive synergic effect can be demonstrated when a hybrid fiber system is incorporated [61,62].

Acid Attack
In some special environments, concrete may be attacked by acid. Because of the reaction between the Ca(OH)2 produced by the hydration process and acid ions, a high weight loss may occur and therefore cause a deterioration of strength. Incorporation of limestone powder in cementitious composites can reduce the weight loss [55,63].
Moreover, with an increase of substitution content and a decrease of particle size, cement mortar or concrete containing limestone powder exhibits better resistance to acid attack [55]. This is because less Ca(OH)2 is produced when the replacement content of cement is higher. In addition, a finer limestone powder may have a more effective filler effect and make a denser matrix [55]. Thereby, the incorporation of more and finer limestone powder in cementitious composites may give better resistance to acid attack, to some extent.

High Temperature Exposure
Incorporation of limestone powder in cementitious composites may not be good for their ability to resist high temperature exposure. With increasing temperature and/or content of limestone powder, the compressive strength and ultrasonic pulse velocity (UPV) decrease and the weight loss increases [64,65], especially after the decomposition of calcium carbonate at around 800-900 °C [42,44]. However, it is noticeable that the limestone powder used in these references [64,65] is coarser than the cement grains and causes a more effective dilution effect, especially when more cement is replaced. Therefore, the residual strength is lower compared to that without limestone powder before 600 °C. In addition, there are not enough studies about the effects of limestone powder on the properties of high temperature-damaged cementitious composites, especially when the fineness and crystal structure of limestone powder are taken into consideration, so more studies are needed in this area. In conclusion, micro-calcium carbonate can affect the hydration process of cement through its dilution effect, nucleation effect and chemical effect. These effects are significantly influenced by the particle size, content and crystal structure. The workability of the fresh mixture is also influenced by the particle size and content through the filler effect.
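The decomposition of calcium carbonate at around 800-900 °C mentioned above implies a fixed theoretical mass loss for pure calcite, which follows directly from the molar masses in CaCO₃ → CaO + CO₂; a quick check:

```python
# Standard molar masses in g/mol.
M_CACO3 = 100.09
M_CAO = 56.08
M_CO2 = 44.01

# Mass balance of CaCO3 -> CaO + CO2.
assert abs(M_CACO3 - (M_CAO + M_CO2)) < 0.01

# Theoretical mass loss of pure calcite on full decomposition.
mass_loss = M_CO2 / M_CACO3
print(f"theoretical mass loss: {mass_loss:.1%}")  # 44.0%
```

This roughly 44% mass loss of the carbonate fraction is why weight loss and strength deterioration become so pronounced once the decomposition temperature is reached.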
Subsequently, mechanical properties and durability are also influenced because of the effect of micro-calcium carbonate on the hydration process and workability. Moreover, the main difference between macro- and micro-calcium carbonate is that micro-calcium carbonate has a chemical effect on cementitious composites in addition to the physical effect (filler effect), especially when finer micro-calcium carbonate (limestone powder) is incorporated in cementitious composites.

Nano-Calcium Carbonate
Nanoparticles are commonly defined as materials with a particle size of less than 100 nm [66,67], and can make revolutionary changes in bulk material properties [68]. Incorporation of nanoparticles in cementitious composites can significantly improve their mechanical properties and durability [67,69-72]. Among these nanoparticles, nano-calcium carbonate is one of the most widely used in the construction sector. In order to distinguish micro-calcium carbonate from nano-calcium carbonate, the particle size of nano-calcium carbonate is here taken as less than 1 µm, rather than being more strictly defined as less than 100 nm. Compared with micro-calcium carbonate, nano-calcium carbonate has a finer particle size and larger specific area, and thereby a more significant effect on the hydration process, workability, mechanical properties and durability of cementitious composites can be observed, even with only a small amount.

Hydration Process
The effect of nano-calcium carbonate on the hydration process of cement depends on its content, particle size and crystal structure [22,73-79]. Sato et al. [73,79] studied the influence of content and particle size of nano-calcium carbonate on cement hydration. Nano-calcium carbonate (50-120 nm) is very effective in accelerating cement hydration, especially during the induction period of tricalcium silicate (C3S), because of its nucleation effect [79].
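The recurring specific-surface-area argument can be made concrete with the idealized estimate for monosized spheres, SSA = 6 / (ρ·d). This is only a lower-bound sketch (real powders are angular and polydisperse), and the density and diameters below are illustrative values, not data from the cited studies:

```python
RHO_CALCITE = 2710.0  # kg/m^3, typical density of calcite

def ssa_spheres(diameter_m: float, density: float = RHO_CALCITE) -> float:
    """Specific surface area (m^2/kg) of monosized spheres of the given
    diameter; an idealized lower-bound estimate for a real powder."""
    return 6.0 / (density * diameter_m)

for label, d in (("20 um limestone powder", 20e-6),
                 ("50 nm nano-CaCO3", 50e-9)):
    print(f"{label}: {ssa_spheres(d):,.0f} m^2/kg")
```

Going from a 20 µm powder to a 50 nm powder raises the estimated specific surface by a factor of 400, which is why small nano-calcium carbonate dosages already change hydration kinetics and water demand noticeably.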
Moreover, with the increase of calcium carbonate content, the acceleration effect of nano-calcium carbonate becomes more and more pronounced, and the hydration peak of tricalcium aluminate (C3A) and tetracalcium aluminoferrite (C4AF) also becomes more and more remarkable. Similar results are shown in Figure 8 [77]. Both the dormant period and the appearance of the second hydration peak (associated with the hydration of C3A and C4AF) are shortened. The reason is that calcium ions can be absorbed onto the surface of nano-calcium carbonate when C3S dissolves in water, because of the high surface energy of nano-calcium carbonate; this reduces the concentration of calcium ions around the C3S, which is favorable for accelerating the reaction of C3S. In addition, dissolved carbonate ions from nano-calcium carbonate can react with C3A to form hemi- and monocarboaluminates [76,77]. However, nano-calcium carbonate can also react with C3S to form C-S-H gel and Ca(OH)2, and this may also be a reason for the earlier and higher hydration heat. The dilution effect of nano-calcium carbonate can also be seen in Figure 8, because the mixture containing 4.8% (by weight) nano-calcium carbonate (15-105 nm, 97.8% calcite) has a higher and earlier hydration heat [77], which means nano-calcium carbonate is more effective at performing a dilution effect compared with micro-calcium carbonate (micro-calcium carbonate performs a dilution effect when its content is 10% (by weight), as discussed for Figure 2 [47]).
Crystal structure of nano-calcium carbonate can also influence the cement hydration process. The influences of calcite nano-calcium carbonate (NC) and aragonite nano-calcium carbonate (AC) on the properties of PVA-ECC have been investigated [78]. From the thermogravimetric analysis (TGA/DTA) in Figure 9, ECC containing AC has a Ca(OH)2 content similar to that of the control group at 90 d, because the surface structure of aragonite calcium carbonate is less favorable for the formation of C-S-H [78], which means AC is less effective at accelerating the hydration process than NC. These results are similar to those for micro-calcium carbonate [26]. However, the Ca(OH)2 content decreases with increasing age because of the formation of carboaluminates, carbonation and the pozzolanic effect (the latter only for nano-silicon oxide in this reference).

Workability
The workability of cementitious composites with incorporated nano-calcium carbonate depends on content and particle size. In general, with an increase in particle size or content, the yield stress (spread flow) and viscosity (V-funnel time) increase [27,80,81]. However, when the particle size and content of nano-calcium carbonate are taken into consideration at the same time, their combined effect on the workability may be different from the effect of each one. It is generally recognized that the water demand of cementitious composites includes two aspects: (1) filling water in the voids between the cement grains; and (2) absorbing water on the surface of cement particles [77].
In addition, the action mechanism may also include two aspects: (1) the dilution effect, which means water in the voids can be substituted by nano-calcium carbonate particles; and (2) the filler effect, which means finer nano-calcium carbonate particles can fill the space between cement particles and, at the same time, more free water can be absorbed onto the surface of nano-calcium carbonate because of its larger specific area and higher surface energy. Both action mechanisms can be influenced by the particle size and content of nano-calcium carbonate, and thereby the flowability may perform differently with increasing content [22,75,82].

Mechanical Properties
The mechanical properties of cementitious composites containing nano-calcium carbonate mainly depend on its content. Flexural strength initially increased up to a nano-calcium carbonate (15-40 nm) addition rate of 2% (by weight) and then decreased [78].
With the incorporation of nano-calcium carbonate in PVA-ECC, the mid-span deflection significantly improves, especially at early age (before 1 d). However, comparing the effects of calcite and aragonite nano-calcium carbonate on flexural and compressive properties, calcite is more effective because it is more favorable for accelerating the formation of C-S-H [78]. For compressive strength, with increasing nano-calcium carbonate content, compressive strength initially increases and then decreases [27,77,81,83]. The reasons are that, on one hand, nano-calcium carbonate can accelerate the hydration process and react with C3S and C3A to form C-S-H and carboaluminates, and this effect becomes more effective with increasing content, to some extent. However, when a large amount of cement is replaced by nano-calcium carbonate, the dilution effect is more effective, just as for micro-calcium carbonate. Moreover, agglomeration of nano-calcium carbonate will seriously reduce the development of compressive strength, which is different from micro-calcium carbonate [1,54,55,58]. In addition, the denser matrix caused by the addition of nano-calcium carbonate may not provide available space for the formation of hydration products [77]. In general, incorporation of nano-calcium carbonate in cementitious composites can improve early-age strength, and incorporation of SCMs may be helpful for long-term strength [84], so the hybrid use of nano-calcium carbonate and SCMs may have a synergic effect on both early-age and long-term strength.

Durability
Very few studies about the effects of acid attack on the properties of cementitious composites containing nano-calcium carbonate can be found. However, according to other durability tests, such as water sorptivity and chloride permeability [80,85], incorporation of nano-calcium carbonate in cementitious composites can make the matrix dense and reduce the pores.
Thus, the impermeability and the acid attack resistance may be good, especially when the partial replacement of cement is greater. For high temperature exposure, incorporation of nano-calcium carbonate in cementitious composites can improve the peak compressive stress, ultimate compressive strain, compressive toughness and flexural properties, whether in an ambient environment or during/after high temperature exposure [68,86,87]. However, a rapid decrease in strength after 800 °C is unavoidable because of the decomposition of calcium carbonate. In conclusion, just like micro-calcium carbonate, nano-calcium carbonate can also affect the hydration process, workability, mechanical properties and durability through the filler effect, dilution effect, nucleation effect and chemical effect. All these effects are influenced by the content, particle size and crystal structure of the nano-calcium carbonate. However, the effects of nano-calcium carbonate are stronger than those of micro-calcium carbonate, and unavoidably the agglomeration of nano-calcium carbonate is also more remarkable than that of micro-calcium carbonate because of its higher surface energy and larger specific area.

Summary
The effects of macro-, micro- and nano-calcium carbonate on the hydration process, workability, mechanical properties and durability of cementitious composites have been reviewed. Based on the discussion above, conclusions can be drawn as follows: (1) Macro-calcium carbonate mainly acts as an inert filler in cementitious composites. The influence of macro-calcium carbonate on the cement hydration process is insignificant. The workability of the fresh mixture depends on the water absorption and particle size of macro-calcium carbonate, and concrete prepared with dry macro-calcium carbonate has a higher slump loss.
The mechanical properties of concrete depend on the water absorption, particle size and constitution of macro-calcium carbonate, and concrete prepared with wet, fine macro-calcium carbonate may have a higher strength. Compared with other mineral aggregates, macro-calcium carbonate (coarse limestone) aggregates are less favorable for the improvement of strength, but incorporation of SCMs in concrete can offset this problem. Macro-calcium carbonate is not a good material for resisting acid attack because of its soft nature, but it is good for resisting high temperature exposure compared with siliceous aggregate, because of its higher thermostability. (2) Micro-calcium carbonate has both a physical effect (filler effect, dilution effect and nucleation effect) and a chemical effect on cementitious composites. The cement hydration process depends on the particle size, content and crystal structure of micro-calcium carbonate. In general, the finer the micro-calcium carbonate particles and the higher the content, the more significant the acceleration of the hydration process will be. Moreover, calcite is more favorable for accelerating the hydration process than aragonite, because of their different crystal surface structures. The workability of a fresh mixture containing micro-calcium carbonate powder mainly depends on its particle size, content and surface morphology. A finer powder and higher content may cause a higher yield stress and viscosity, but at the same time the dilution effect is also more effective. Therefore, the workability may not have a clear, linear relationship with particle size or content when all of these factors work together. In contrast, the influence of CW on workability is clearer: both viscosity and yield stress increase with an increased substitution amount of cement. The mechanical properties of cementitious composites containing micro-calcium carbonate depend on its particle size, content and surface morphology.
In general, the improvement from micro-calcium carbonate is effective for early-age strength because of its acceleration of cement hydration. Incorporation of micro-calcium carbonate in cementitious composites can make the matrix denser, and thereby the acid attack resistance is better compared with that of composites without micro-calcium carbonate. Incorporation of micro-calcium carbonate in cementitious composites may not be good for the ability to resist high temperature exposure. (3) Nano-calcium carbonate can have physical and chemical effects on cementitious composites, and these effects are stronger than those of micro-calcium carbonate. However, the agglomeration of nano-calcium carbonate is also more pronounced. The effect of nano-calcium carbonate on the hydration process of cement depends on its content, particle size and crystal structure. The hydration process can be accelerated through the nucleation effect and chemical effect, but the dilution effect decreases the total hydration heat. The workability of cementitious composites with incorporated nano-calcium carbonate depends on the content and particle size, and the yield stress and viscosity will perform differently because of the combined effect of particle size and content. Incorporation of nano-calcium carbonate in cementitious composites can improve early-age strength when a proper amount is used, and hybrid use of nano-calcium carbonate and SCMs has a synergic effect on both early-age and long-term strengths. The resistance of cementitious composites containing nano-calcium carbonate to acid attack is not clear, but nano-calcium carbonate can make the matrix denser. Incorporation of nano-calcium carbonate in cementitious composites is favorable for its high temperature behavior.

Expectation
There have been many studies about the effects of macro-, micro- and nano-calcium carbonate on the properties of cementitious composites.
Many effective and significant results and mechanisms have been produced and proposed. But further studies are still needed on: (1) The high temperature properties of cementitious composites containing calcium carbonate particles. On one hand, the activity and chemical constitution of calcium carbonate may be different during/after high temperatures. On the other hand, whether the incorporation of calcium carbonate is favorable for the high temperature properties of cementitious composites needs more study, especially for aragonite, because a crystal transition happens at around 450 °C and the influence of this crystal transition on the properties of cementitious composites is still not clear. (2) Hybrid use of multi-scale calcium carbonate. Macro- and micro-calcium carbonate are more widely used than nano-calcium carbonate because nano-calcium carbonate has a relatively high price and is difficult to disperse uniformly. Hybrid use of multi-scale calcium carbonate may be a useful way to solve these problems, and thereby more research needs to be conducted to investigate the feasibility and effectiveness of this method.

Conflicts of Interest: We declare that we have no financial and personal relationships with other people or organizations that could inappropriately influence our work. There is no professional or other personal interest of any nature or kind in any product, service and company that could be construed as influencing the position presented in this review.
Rigorous theoretical constraint on constant negative EoS parameter ω and its effect for the late Universe

In this paper, we consider the Universe at the late stage of its evolution and deep inside the cell of uniformity. At these scales, the Universe is filled with inhomogeneously distributed discrete structures (galaxies, groups and clusters of galaxies). Supposing that the Universe contains also the cosmological constant and a perfect fluid with a negative constant equation of state (EoS) parameter ω (e.g., quintessence, phantom or frustrated network of topological defects), we investigate scalar perturbations of the Friedmann-Robertson-Walker metrics due to inhomogeneities. Our analysis shows that, to be compatible with the theory of scalar perturbations, this perfect fluid, first, should be clustered and, second, should have the EoS parameter ω = -1/3. In particular, this value corresponds to the frustrated network of cosmic strings.
Therefore, the frustrated network of domain walls with ω = -2/3 is ruled out. A perfect fluid with ω = -1/3 neither accelerates nor decelerates the Universe. We also obtain the equation for the nonrelativistic gravitational potential created by a system of inhomogeneities. Due to the perfect fluid with ω = -1/3, physically reasonable solutions take place for flat, open and closed Universes. This perfect fluid is concentrated around the inhomogeneities and results in screening of the gravitational potential.

Introduction
The accelerated expansion of the Universe at late stages of its evolution, found little more than a decade ago [1,2], is one of the most intriguing puzzles of modern physics and cosmology. Recognition of this fact was the awarding of the Nobel Prize in 2011 to Saul Perlmutter, Adam Riess and Brian Schmidt. After their discovery, there were numerous attempts to explain the nature of such acceleration. Unfortunately, there is no satisfactory explanation so far (see, e.g., the state of the art in [3]). According to the recent observations [4-6], the ΛCDM model is the preferable one.
Here, the accelerated expansion is due to the cosmological constant. However, there are a number of problems associated with the cosmological constant. Perhaps the main one is the absence of an adjustment mechanism that could compensate the originally huge vacuum energy down to the cosmologically acceptable value and solve the coincidence problem of the close magnitudes of the non-compensated remnants of the vacuum energy and the energy density of the Universe at the present time [7]. To resolve this problem, it was proposed to introduce scalar fields as a matter source. Such scalar fields can be equivalently considered in the form of perfect fluids. Among these perfect fluids, a barotropic fluid is one of the most popular objects of study. This fluid is characterized by a pressure that is a function of the energy density only, p = p(ρ), and the linear equation of state (EoS) p = ωρ is the most popular case. Such barotropic perfect fluids with EoS parameters ω < −1/3 can cause the accelerated expansion of the Universe. They are called quintessence [8-10] and phantom [11,12] for −1 < ω < 0 and ω < −1, respectively. Usually, they have a time-varying EoS parameter ω. However, there is also a possibility to construct models with constant ω (for the corresponding experimental restrictions see, particularly, the Planck 2013 results [6]). This imposes severe restrictions on the form of the scalar field potential [13,14]. In this case, a scalar field is equivalent to a perfect fluid with ω = const. A large class of models is expected to be well described (at least as far as the CMB anisotropy is concerned) by an effective constant EoS parameter [15]. It is also well known that frustrated networks of topological defects (cosmic strings and domain walls) have the form of perfect fluids with a constant parameter ω [14,16-18]: ω = −1/3 and ω = −2/3 for cosmic strings and domain walls, respectively.
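The statement that fluids with ω < −1/3 can drive acceleration follows from the standard FRW acceleration equation; a short sketch (standard textbook relations, not quoted from this paper):

```latex
% Acceleration equation for a perfect fluid with EoS p = \omega\varepsilon:
\frac{\ddot{a}}{a} \;=\; -\,\frac{4\pi G_N}{3c^2}\,\bigl(\varepsilon + 3p\bigr)
                   \;=\; -\,\frac{4\pi G_N}{3c^2}\,\bigl(1 + 3\omega\bigr)\,\varepsilon .
% For \varepsilon > 0: \ddot{a} > 0 requires 1 + 3\omega < 0, i.e. \omega < -1/3,
% while \omega = -1/3 gives \ddot{a} = 0 for this component:
% it neither accelerates nor decelerates the expansion.
```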
It is of interest to investigate the viability of the models with constant ω and to answer the question whether these models are an alternative to the cosmological constant. In our paper, we consider the compatibility of these models with the scalar perturbations of the Friedmann-Robertson-Walker (FRW) metrics. In the hydrodynamical approach, such an investigation was performed in a number of papers (see, e.g., [19] for ω = const and [20,21] for ω ≠ const). We consider the Universe at late stages of its evolution, when galaxies and clusters of galaxies have already formed. At scales much larger than the characteristic distance between these inhomogeneities, approximately 190 Mpc and larger [22], the Universe is well described by the homogeneous and isotropic FRW metrics. At these scales, the matter fields (e.g., cold dark matter) are well described by the hydrodynamical approach. However, at smaller scales the Universe is highly inhomogeneous, and here the mechanical approach looks more adequate [22,23]. It is worth noting that similar ideas concerning discrete cosmology have been discussed in the recent papers [24,25]. Obviously, at early stages of the Universe evolution (i.e. before the formation of inhomogeneities, when the density contrast is much less than unity), the hydrodynamical approach works very well at small scales. It is clear that cosmological models should be tested at all stages of the Universe evolution. It is not sufficient to show their compatibility with observations only at early stages, i.e. in the hydrodynamical approach, as in previous papers. These models should also be in agreement with the mechanical approach. This is the aim and the novelty of our study. To start with, in the present paper we consider the simplest model, in which the perfect fluid has a constant EoS parameter. This article belongs to a series of studies in which we intend to test different cosmological models for their compatibility with the mechanical approach.
Recently, such an investigation was performed for nonlinear f(R) models [26] as well as for models with quark-gluon nuggets [27]. In a following paper we will consider the case of time-dependent EoS parameters. In the mechanical approach, galaxies, dwarf galaxies and clusters of galaxies (all of them mostly composed of dark matter) can be considered as separate compact objects. Moreover, at distances much greater than their characteristic sizes, they can be well described as point-like matter sources. This is a generalization of the well-known astrophysical approach [28] (see §106) to the case of a dynamical cosmological background. Usually, the gravitational fields of these inhomogeneities are weak and their peculiar velocities are much less than the speed of light. Therefore, we can construct a theory of perturbations in which the considered point-like inhomogeneities disturb the FRW metrics. Such a theory was proposed in the paper [23]. Then we applied this mechanical approach in [29] to describe the mutual motion of galaxies, in particular the Milky Way and Andromeda galaxies. For such investigations, the form of the gravitational potential plays an important role. Hence, one of the main tasks of the present paper is to study the possibility of obtaining a reasonable form of the gravitational potentials in models with an additional perfect fluid with constant negative ω. Then, if such potentials exist, we can study the relative motion of galaxies in the field of these potentials and compare it with the corresponding motion in the ΛCDM model [29]. Because the considered perfect fluids have ω = const, their perturbations are purely adiabatic (see, e.g., [30]), i.e. dissipative processes are absent. We then demonstrate that, first, these fluids must be clustered (i.e. inhomogeneous) and, second, ω = −1/3 is the only parameter which is compatible with the theory of scalar perturbations. It is well known that such a perfect fluid neither accelerates nor decelerates the Universe.
A frustrated network of cosmic strings is a possible candidate for such a perfect fluid. It is worth noting that this conclusion is valid for perfect fluids with a constant EoS parameter. The conclusion for imperfect fluids (e.g., for scalar fields with arbitrary potentials) can be quite different. We also obtain formulas for the nonrelativistic gravitational potential created by a system of inhomogeneities (galaxies, groups, and clusters of galaxies). We show that due to the perfect fluid with ω = −1/3, physically reasonable expressions exist for flat, open, and closed Universes. If such a perfect fluid is absent, the hyperbolic space is preferred [23]. Hence, even if this perfect fluid does not accelerate the Universe, it can play an important role. It is worth noting also that, according to the paper [18], a small contribution from the string network can explain a possible small departure from ΛCDM evolution. The paper is structured as follows. In Sect. 2, we consider scalar perturbations in the Friedmann Universe filled with the cosmological constant, pressureless dustlike matter (baryonic and dark matter) and a perfect fluid with a negative constant EoS parameter. Here, we get the equation for the nonrelativistic gravitational potential. In Sect. 3, we find solutions of this equation for an arbitrary system of inhomogeneities for flat, open, and closed Universes. These solutions have the Newtonian limit in the vicinity of the inhomogeneities and are finite at any point outside the inhomogeneities. The main results are summarized in the concluding Sect. 4.

Homogeneous background

To start with, we consider a homogeneous isotropic Universe described by the FRW metrics, Eq. (2.1), where K = −1, 0, +1 for open, flat, and closed Universes, respectively.
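The FRW line element referred to above can be sketched in conformal time as follows (a standard form consistent with the text; the paper's own Eq. (2.1) may differ in notation):

```latex
ds^2 \;=\; a^2(\eta)\Bigl[\,d\eta^2 \;-\; \gamma_{\alpha\beta}\,dx^{\alpha}dx^{\beta}\Bigr],
\qquad
\gamma_{\alpha\beta}\,dx^{\alpha}dx^{\beta}
  \;=\; d\chi^2 + \Sigma^2(\chi)\bigl(d\theta^2 + \sin^2\!\theta\,d\phi^2\bigr),
```

where Σ(χ) = sin χ, χ, sinh χ for K = +1, 0, −1, respectively.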
As matter sources, we consider the cosmological constant,¹ pressureless dustlike matter (in accordance with the current observations [4,5], we assume that dark matter (DM) gives the main contribution to this matter) and an additional perfect fluid with the EoS p = ωε, where ω < 0. In the present paper, ω = const. As we already wrote in the Introduction, such perfect fluids can be modeled by scalar fields with the corresponding form of the potentials [13,14] as well as by a frustrated network of topological defects [14,16-18]. We exclude the values ω = 0, −1 because they are equivalent to DM and the cosmological constant, respectively. Scalar fields with −1 < ω < 0 and ω < −1 are usually called quintessence and phantom, respectively. Below, the overline denotes homogeneous perfect fluids. It can easily be seen from the conservation equation that in the case of the homogeneous perfect fluid ε̄ = ε̄₀(a₀/a)^{3(1+ω)}, where a₀ is the scale factor at the present time and ε̄₀ is the current value of the energy density ε̄. Because we consider the late stages of the Universe evolution, we neglect the contribution of radiation. It is worth noting that radiation can also be included into consideration [22], and a simple analysis demonstrates that this does not affect the results of the paper. Therefore, the Friedmann equations take the form of Eqs. (2.4) and (2.5).

¹ Perfect fluids (e.g., quintessence and phantom) with a negative EoS parameter ω < −1/3 were introduced to explain the late-time acceleration of the Universe. They are an alternative to the cosmological constant. However, in our model we keep both the perfect fluid and the cosmological constant, because we investigate the full range of negative parameters ω < 0. Moreover, we shall show that the only possible value of ω for the considered perfect fluid is −1/3. Then the inclusion of Λ becomes justified.
Additionally, a small contribution from these fluids (e.g., a frustrated network of cosmic strings with ω = −1/3) can explain a possible small departure from ΛCDM evolution [18].

In these equations, H ≡ a′/a ≡ (da/dη)/a and κ ≡ 8πG_N/c⁴ (c is the speed of light and G_N is the Newton gravitational constant). Here, T̄_ik is the energy-momentum tensor of the average pressureless dustlike matter. For such matter, the energy density T̄^0_0 = ρ̄c²/a³ is the only nonzero component, and ρ̄ = const is the comoving average rest mass density [23]. It is worth noting that in the case K = 0 the comoving coordinates x^α may have a dimension of length, but then the conformal factor a is dimensionless, and vice versa. However, in the cases K = ±1 the dimension of a is fixed: here, a has a dimension of length and the x^α are dimensionless. For consistency, we shall follow this definition for K = 0 as well. For such a choice of the dimension of a, the rest mass density has a dimension of mass. Conformal time η and synchronous time t are connected as c dt = a dη. Therefore, Eqs. (2.4) and (2.5), respectively, take the form of Eqs. (2.6) and (2.7), where a₀ and H₀ are the values of the conformal factor a and the Hubble "constant" H ≡ ȧ/a ≡ (da/dt)/a at the present time t = t₀, and we have introduced the density parameters (in particular, Ω_pf for the considered perfect fluid). It is of interest to get an experimental restriction on Ω_pf. This requires a separate investigation, which is out of the scope of our paper. We can easily see from Eq. (2.7) that perfect fluids with ω < −1/3 can provide the accelerated expansion of the Universe.

Scalar perturbations

As we have written in the Introduction, the inhomogeneities in the Universe result in scalar perturbations of the metrics (2.1). In the conformal Newtonian gauge, such perturbed metrics contains the scalar perturbations Φ, Ψ ≪ 1 [31,32]. Following the standard argumentation, we can put Φ = Ψ.
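In the conformal Newtonian gauge with Φ = Ψ, the scalar-perturbed line element takes the standard form (a sketch consistent with [31,32]; not copied from the paper):

```latex
ds^2 \;=\; a^2(\eta)\Bigl[(1 + 2\Phi)\,d\eta^2
        \;-\; (1 - 2\Phi)\,\gamma_{\alpha\beta}\,dx^{\alpha}dx^{\beta}\Bigr],
\qquad |\Phi| \ll 1 .
```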
We consider the Universe at the late stage of its evolution, when the peculiar velocities of inhomogeneities (both for dustlike matter and the considered perfect fluid) are much less than the speed of light (condition (2.11)). We should stress that the smallness of the nonrelativistic gravitational potential and of the peculiar velocities v^α are two independent conditions (e.g., for very light relativistic masses the gravitational potential can still remain small). Under these conditions, the gravitational potential satisfies the system of equations (2.12)-(2.14) (see [23] for details²), where △ is the Laplace operator and γ is the determinant of γ_αβ. Following the reasoning of [23], we took into account that the peculiar velocities of inhomogeneities are nonrelativistic, and under the corresponding condition (2.11) the contribution of δT^0_β is negligible compared to that of δT^0_0 both for dustlike matter and the considered perfect fluid. Really, according to [23], the true rest mass density ρ of usual matter, presented by a sum of delta-functions (see Eq. (3.4) below), is comparable with the difference δρ = ρ − ρ̄ obtained after subtracting the average value ρ̄. Consequently, the corresponding contribution satisfies a strong inequality of the form |δT^0_β| ≪ |δT^0_0|. Exactly the same strong inequality holds true also for the additional perfect fluid, under the quite natural assumption that only its fraction of the order δε/ε̄ takes part in considerable motion due to interaction between inhomogeneities. In other words, account of δT^0_β is beyond the accuracy of the model.

² It is well known that in the hydrodynamic approach, the linear formalism is not applicable to study the formation of galaxies and clusters of galaxies. However, first, we consider the late stage of the Universe evolution, when these inhomogeneities were mainly formed. Second, in our mechanical approach, we can use the linear approximation due to the smallness of the gravitational fields and peculiar velocities. Here, the structure of galaxies can evolve on account of mechanical mergers of inhomogeneities.
This approach is completely consistent with [28], where it is shown that the nonrelativistic gravitational potential is defined by the positions of the inhomogeneities and not by their velocities [see Eq. (106.11) in this book]. In the case of an arbitrary number of dimensions, a similar result was obtained in [33]. On the other hand, the motion of nonrelativistic inhomogeneities is defined by the gravitational potential (see, e.g., [29]). The perturbed matter remains nonrelativistic (pressureless), which results in the condition δT^α_β = 0. For the considered perfect fluid we have δT^α_β = −δp δ^α_β, and δε is the fluctuation of the energy density of this perfect fluid. In Eq. (2.12), δT^0_0 is related to the fluctuation of the energy density of dustlike matter and has the form (2.16) [23], where δρ is the difference between the real and average rest mass densities. We write the gravitational potential in the form Φ = ϕ(r)/(c²a) (Eq. (2.18)), where ϕ(r) is a function of all spatial coordinates and we have introduced c² in the denominator for convenience. Below, we shall see that ϕ(r) ∼ 1/r in the vicinity of an inhomogeneity, and then the nonrelativistic gravitational potential Φ(η, r) ∼ 1/(ar) = 1/R, where R = ar is the physical distance. Hence, Φ has the correct Newtonian limit near the inhomogeneities. Substituting the expression (2.18) into Eqs. (2.12) and (2.14), we get a system of equations for ϕ and δε. From the Friedmann equations (2.4) and (2.5) we obtain a background relation, which allows us to reduce this system to Eq. (2.22). It should be noted that we consider perfect fluids without thermal coupling to any other type of matter. This means, in particular, that the evolution of the homogeneous background as well as of the scalar perturbations occurs adiabatically or, in other words, without a change of entropy. Therefore, in the case of a constant EoS parameter, we preserve the same linear EoS δp = ωδε, with the same constant parameter ω for the scalar perturbations δp and δε of the pressure and energy density, respectively, as for their background values p̄ and ε̄ (see, e.g., Eqs. (1) and (2) in [30]).
Obviously, imperfect fluids, such as scalar fields with arbitrary potentials (which result in a time-dependent parameter ω), require a different approach [34-37]. Taking into account the expression (2.18), we see that in the right hand side of Eq. (2.16) the second term is proportional to 1/a⁴ and should be dropped, because we consider nonrelativistic matter.³ This is the accuracy of our approach: for terms of the form 1/aⁿ, we drop those with n ≥ 4 and keep those with n < 4. Obviously, 4 + 3ω < 4 for ω < 0. Hence, we can draw an important conclusion regarding purely homogeneous, non-clustered quintessence/phantom fluids with δp, δε = 0. For these fluids, we arrive at a contradiction, because in Eq. (2.22) the right hand side is equal to zero while the left hand side is nonzero. Therefore, such fluids are forbidden.⁴ The considered perfect fluid (quintessence, phantom or a frustrated network of topological defects) should be capable of clustering. In the papers [10,38], it was also pointed out that the quintessence has to be inhomogeneous. For the inhomogeneous perfect fluid we obtain from Eq. (2.22) a relation in which all terms except the last one do not depend on time.⁵ Therefore, ω = −1/3 is the only possibility to avoid this problem. Hence, we arrive at the following important conclusion.

³ Radiation can easily be included in our scheme [22]. A simple analysis shows that this does not change any of the following results.

⁴ It can easily be realized that the homogeneous solution δε = 0, ϕ = 0 is forbidden because it contradicts Eq. (2.19). The point is that the standard matter density perturbations δT^0_0 defined in Eq. (2.16) are supposed to be nonzero. In other words, we consider the Universe filled with inhomogeneously distributed galaxies, groups, and clusters of galaxies.
The presence of these inhomogeneities results in nonzero perturbations of the 00 component of the corresponding energy-momentum tensor [23].

⁵ We would like to recall that the quantities ϕ and δρ are comoving ones [23]. Therefore, within the adopted accuracy, when both the nonrelativistic and weak-field limits are applied, they do not depend explicitly on time [22].

At the late stage of the Universe evolution, the considered perfect fluids are compatible with the scalar perturbations only if, first, they are inhomogeneous and, second, they have the EoS parameter ω = −1/3. For example, a frustrated network of cosmic strings can be a candidate for this fluid. On the other hand, frustrated domain walls are ruled out because they have ω = −2/3. Equation (2.7) clearly demonstrates that the perfect fluid with ω = −1/3 neither accelerates nor decelerates the Universe. It is worth noting that in our model neither the nonrelativistic gravitational potential Φ ∼ 1/a nor the perfect fluid density contrast δε/ε̄ ∼ 1/a diverges with time (with the scale factor a), in spite of the negative sign of the ratio δp/δε, which is often treated as the speed of sound squared. In the papers [39,40] it was shown that such components could be stable if sufficiently rigid. Really, as we shall show below, our perfect fluid is not a pure fluid. Its fluctuations are concentrated around the matter/dark matter inhomogeneities (see, e.g., Eq. (3.8)). Obviously, the speed of sound in this case is close to zero. As noted in the paper [41], for the "solid" dark energy, a zero speed of sound is preferable. On the other hand, due to the concentration of fluctuations around the matter/dark matter inhomogeneities, they have velocities of the order of the velocities of matter/dark matter. That is, the condition (2.11) is valid for the perfect fluid in spite of the averaged relativistic EoS p̄ = ωε̄.
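The time-independence argument that singles out ω = −1/3 can be made explicit through the background scaling of the fluid (a sketch; the precise combination of terms appears in the paper's Eq. (2.22)):

```latex
\bar{\varepsilon} \;=\; \bar{\varepsilon}_0\left(\frac{a_0}{a}\right)^{3(1+\omega)}
\quad\Longrightarrow\quad
\bar{\varepsilon}\,a^{2} \;\propto\; a^{-(1+3\omega)} .
% The comoving quantities \varphi and \delta\rho do not depend explicitly on
% time, so a constraint mixing them with \bar{\varepsilon}\,a^2 can hold at all
% times only if a^{-(1+3\omega)} = const, i.e. \omega = -1/3. Note that exactly
% the combination \bar{\varepsilon}_0 a_0^2 enters \lambda^2 in Sect. 3.
```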
For ω = −1/3, the equation for the gravitational potential and the fluctuation of the energy density of the perfect fluid read as Eqs. (2.25) and (2.26), respectively. Equation (2.25) reduces to the corresponding equation of [23] in the absence of the perfect fluid (i.e. for ε̄₀ = 0). Moreover, for K = 0 and ε̄₀ = 0, this equation coincides (up to an evident redefinition) with Eq. (7.14) in the well-known book [42] and Eq. (2) for GADGET-2 [43]. In the next section, we shall investigate Eq. (2.25) depending on the curvature parameter K. We shall show that reasonable expressions for the conformal gravitational potential ϕ exist for any sign of K. This takes place due to the presence of the perfect fluid with ω = −1/3. If this fluid is absent, the hyperbolic model K = −1 is preferred [23]. Therefore, the positive role of such a perfect fluid is that its presence gives the possibility to consider models for any K.

Gravitational potentials

It is convenient to rewrite Eq. (2.25) in the form of Eq. (3.1), where the truncated gravitational potential is defined by Eq. (3.2) and the parameter λ by Eq. (3.3). As we have already mentioned in the Introduction, on scales smaller than the cell of uniformity size and at late stages of evolution, the Universe is filled with inhomogeneously distributed discrete structures (galaxies, groups, and clusters of galaxies), with dark matter concentrated around these structures. Then the rest mass density ρ reads as a sum of delta-functions, Eq. (3.4) [23], where m_i is the mass of the ith inhomogeneity. Therefore, Eq. (3.1) obeys the very important superposition principle. It is sufficient to solve this equation for one gravitating mass m_i and obtain its gravitational potential φ_i. The gravitational potential of the whole system of inhomogeneities is equal to the sum of the potentials φ_i. It is worth recalling that the operator △ is defined by Eq. (2.15). As boundary conditions, we demand that, first, the gravitational potential of a gravitating mass should have the Newtonian limit near this inhomogeneity, φ_i ∼ 1/r, and, second, this potential should converge at any point of the Universe (except the position of the gravitating mass).
It seems reasonable to assume also that the total gravitational potential averaged over the whole Universe is equal to zero, Eq. (3.5) (see, e.g., [23]), where V is the volume of the Universe. This demand results in another physically reasonable condition: δε̄ = 0 (see Eq. (2.26)).

Flat space: K = 0

In the case ε̄₀ > 0 → λ² = (8πG_N/c⁴) ε̄₀ a₀² > 0, the solution of (3.1) for a separate mass m_i satisfying the boundary conditions mentioned above reads φ_i = −(G_N m_i/r) exp(−λr) (Eq. (3.6)). It can easily be seen that this truncated potential has the Newtonian limit for r → 0. This expression shows that the perfect fluid results in screening of the Newtonian potential. A similar effect takes place for the Coulomb potential in plasma. In our case, the screening originates from the specific nature of the perfect fluid. It is worth mentioning that an exponential screening of the gravitational potential was introduced "by hand" in a number of models to solve the famous Seeliger paradox (see, e.g., the review [44]). In our model, we resolve this paradox in a natural way due to the presence of the specific perfect fluid. For a many-particle system, the total gravitational potential takes the form of a superposition of such screened potentials, Eq. (3.7). Substituting (3.7) into (2.26), we get the expression (3.8) for the fluctuations of the perfect fluid energy density. Therefore, we arrive at the physically reasonable conclusion that these fluctuations are concentrated around the matter/dark matter inhomogeneities, with the corresponding profile given by Eq. (3.8). The averaged value of the ith component of the truncated potential over some finite volume V₀ is given by Eq. (3.9). Then, letting the volume go to infinity (r₀ → +∞ ⇒ V₀ → +∞) and taking all gravitating masses into account, we obtain Eq. (3.10), where ρ̄ = lim_{V₀→+∞} Σ_i m_i/V₀. Therefore, the averaged gravitational potential (3.5) is equal to zero: ϕ̄ = 0. Consequently, δε̄ = 0. The case ε̄₀ < 0 ⇒ λ² ≡ −μ² < 0 is not of interest. Here, we get the expression φ_i = −(G_N m_i/r) cos(μr), which does not have a clear physical sense.
Additionally, this expression does not allow the procedure of averaging.

Spherical space: K = +1

Let us consider, first, the case λ² = (8πG_N/c⁴) ε̄₀ a₀² − 3 ≡ −μ² < 0. This case is of interest because it allows us to perform the transition to small values of the energy density of the perfect fluid: ε̄₀ → 0. Here, the solution of (3.1) for a separate mass m_i is given by Eq. (3.11). For √(μ² + 1) ≠ 2, 3, . . . (we recall that μ² ≠ 0), this formula is finite at any point χ ∈ (0, π] and has the Newtonian limit for χ → 0. In the case of the absence of the perfect fluid, ε̄₀ = 0 → √(μ² + 1) = 2, and this expression is divergent at χ = π. We demonstrated this fact in our paper [23]. Therefore, the considered perfect fluid gives the possibility to avoid this problem for the models with K = +1. It can easily be verified that for the total system of gravitating masses, the averaged value of the total truncated potential has the form of (3.10), which results in ϕ̄ = 0 ⇒ δε̄ = 0. In the case λ² > 0, the formulas can easily be found from (3.11) with the help of the analytical continuation μ → iμ. In other words, it is sufficient to replace μ² by −λ² in Eq. (3.11). The obtained expression is finite for all χ ∈ (0, π], and the averaged gravitational potential is equal to zero: ϕ̄ = 0 ⇒ δε̄ = 0.

Hyperbolic space: K = −1

In this case λ² = (8πG_N/c⁴) ε̄₀ a₀² + 3, and the solution of (3.1) for a separate mass is given by Eq. (3.12). If the perfect fluid is absent (ε̄₀ = 0), then we reproduce the formula obtained in [23]. On the other hand, the expression (3.12) shows that for ε̄₀ > 0 → λ² + 1 > 4, the perfect fluid enhances the screening of the gravitating mass. For a many-particle system, the total gravitational potential takes the form of Eq. (3.13), where l_i denotes the geodesic distance between the ith mass m_i and the point of observation. Similarly, using Eq. (3.11), we can write the expression for the total potential in the case of the spherical space.
Taking into account that the averaged total truncated potential again has the form of (3.10), the procedure of averaging leads to the physically reasonable result: ϕ̄ = 0 ⇒ δε̄ = 0. Concerning the case λ² < 0, the truncated gravitational potential is finite in the limit χ → +∞. However, the procedure of averaging does not exist here. Therefore, this case is not of interest for us. To conclude this section, we briefly discuss the case λ² = 0. For K = 0, −1, the superposition principle is now absent. To make the gravitational potential finite at any point, including spatial infinity, we need to cut it off smoothly at some distance from each gravitating mass. If K = 0, then the perfect fluid is absent, and this case was described in detail in [23]. It was shown that the averaged gravitational potential is not equal to zero, which is a disadvantage of such models. In the case K = +1, the superposition principle can be introduced due to the finiteness of the total volume of the Universe. Here, the comoving averaged rest mass density can be split as ρ̄ = Σ_i m_i/(2π²) ≡ Σ_i ρ̄_i. Then Eq. (2.25) can be solved separately for each combination (m_i, ρ̄_i). As a result, the gravitational potential of the ith mass is given by Eq. (3.14). This potential is convergent at any point χ ≠ 0, including χ = π. It is not difficult to see that ϕ̄_i = 0. Therefore, the total averaged gravitational potential is also equal to zero: ϕ̄ = 0.

Conclusion

In our paper, we have considered perfect fluids with a constant negative EoS parameter ω. We have investigated the role of these fluids in the Universe at late stages of its evolution. Such perfect fluids can be simulated by scalar fields with the corresponding form of the potentials [13,14] as well as by a frustrated network of topological defects [14,16-18].
Scalar fields with −1 < ω < 0 and ω < −1 are usually called quintessence and phantom, respectively, and they can be an alternative to the cosmological constant, explaining the late-time acceleration of the Universe. This takes place if their EoS parameter ω < −1/3. On the other hand, a small contribution from these fluids (e.g., a frustrated network of cosmic strings with ω = −1/3) can explain a possible small departure from ΛCDM evolution [18]. To check the compatibility of these fluids with observations, we considered the present Universe at scales much less than the cell of homogeneity size, which is approximately 190 Mpc [22]. At such distances, our Universe is highly inhomogeneous, and the averaged Friedmann approach does not work here. We need to take into account the inhomogeneities in the form of galaxies, groups and clusters of galaxies. It is natural to assume also that the perfect fluid fluctuates around its average value. Therefore, these fluctuations as well as the inhomogeneities perturb the FRW metrics. To consider these perturbations inside the cell of uniformity, we need to use the mechanical approach established in our papers [22,23]. This is the novelty of our present work, because in previous studies the scalar perturbations were considered in the hydrodynamical approach, which works well for the early Universe. It is obvious that cosmological models should be consistent with the observations at all stages of the evolution of the Universe (both early and late). Taking into account that the perturbations of the considered perfect fluids are purely adiabatic (i.e. dissipative processes are absent), we have shown that such perfect fluids are compatible with the theory of scalar perturbations if they satisfy two conditions. First, these fluids must be clustered (i.e. inhomogeneous). Second, the EoS parameter ω should be −1/3. Therefore, this perfect fluid neither accelerates nor decelerates the Universe.
A frustrated network of cosmic strings can be a candidate for this fluid. On the other hand, frustrated domain walls are ruled out because they have ω = −2/3. Therefore, in the case of negative constant ω, only models with ω = −1 (a pure cosmological constant) and ω = −1/3 are compatible with the mechanical approach, which is the most appropriate one to describe the late Universe inside the cell of uniformity. Substituting ω = −1/3 into the background Eq. (2.6), we can see that such a perfect fluid behaves here as curvature. Hence, we can combine both terms to get a total "curvature" density parameter Ω_K,tot ≡ Ω_K + Ω_pf. It is tempting to use the experimental restrictions on the curvature density parameter (see, e.g., Sects. 4.3 and 6.2.3 in [4] and [6], respectively), applying them to Ω_K,tot, and then to get limitations on λ from Eq. (3.3). Exactly this parameter λ determines the characteristic scales of the Yukawa-type screening in Eqs. (3.6) and (3.12). However, we cannot do this, because the experimental restrictions have a topological origin (i.e. they are due to the different form of the function in (2.2)) and not due to the fact that the curvature term in the Friedmann equations behaves as 1/a². In other words, the topological restrictions follow from the different definitions of the distances in the case of different topologies. Then we have obtained the equation for the nonrelativistic gravitational potential. We need to know the form of the gravitational potential to describe the dynamics of inhomogeneities. For example, all numerical simulations use expressions for the gravitational potentials of the inhomogeneities. Obviously, the dynamical behavior of these inhomogeneities is determined by two competing mechanisms: on the one hand, the gravitational interaction between the inhomogeneities, and, on the other hand, the cosmological accelerated expansion.
Therefore, one of the main tasks of the present paper was to study the possibility of obtaining a reasonable form of the gravitational potential in the considered model. We have shown that due to the perfect fluid with ω = −1/3, physically reasonable solutions of the equation for the gravitational potential exist for flat, open, and closed Universes. The presence of this perfect fluid helps to resolve the Seeliger paradox [44] for any sign of the spatial curvature parameter K. If the perfect fluid is absent, the hyperbolic space is preferred [23]. Hence, such a perfect fluid can play an important role. This perfect fluid is concentrated around the inhomogeneities and results in screening of the gravitational potential. It should be noted that the obtained gravitational potentials have an important property: the total gravitational potentials averaged over the whole Universe are equal to zero, ϕ̄ = 0. Because the perfect fluid energy density fluctuation is proportional to the total gravitational potential, δε ∼ ϕ, the averaged energy density fluctuation is also equal to zero: δε̄ = 0. Therefore, we arrive at the natural condition that the total perfect fluid energy density ε = ε̄ + δε is, after the averaging procedure, equal to ε̄. It must be emphasized that the case of imperfect fluids with a varying parameter ω (e.g., scalar fields with arbitrary potentials) requires a separate consideration, which may lead to quite different conclusions. We intend to investigate this case in our forthcoming paper.
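The Yukawa-type screening discussed in Sect. 3 can be illustrated numerically: away from the source, a potential of the form φ(r) = −G_N m e^{−λr}/r satisfies the screened equation Δφ = λ²φ. A minimal sketch, with hypothetical values of G, m and λ in arbitrary units (not parameters from this paper):

```python
import math

# Screened ("Yukawa-type") gravitational potential phi(r) = -G*m*exp(-lam*r)/r.
# Away from the point source it should obey the screened equation
#   Laplacian(phi) = lam^2 * phi,
# where the radial Laplacian is (1/r) * d^2(r*phi)/dr^2.
# G, m and lam are hypothetical illustration values (arbitrary units).
G, m, lam = 1.0, 1.0, 0.7

def u(r):
    """u(r) = r * phi(r) = -G*m*exp(-lam*r)."""
    return -G * m * math.exp(-lam * r)

def laplacian_phi(r, h=1e-4):
    """Radial Laplacian of phi via a central difference for u''(r)."""
    return (u(r + h) - 2.0 * u(r) + u(r - h)) / h**2 / r

r = 2.0
phi = u(r) / r
residual = laplacian_phi(r) - lam**2 * phi
print(abs(residual))  # ~ 0, up to discretization and round-off error
```

Since u(r) = rφ(r) satisfies u'' = λ²u exactly, the finite-difference residual vanishes up to O(h²), which is the content of the screening solution (3.6) away from the mass.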
Schwann cell-derived extracellular vesicles promote memory impairment associated with chronic neuropathic pain Background The pathogenesis of memory impairment, a common complication of chronic neuropathic pain (CNP), has not been fully elucidated. Schwann cell (SC)-derived extracellular vesicles (EVs) contribute to remote organ injury. Here, we showed that SC-EVs may mediate pathological communication between SCs and hippocampal neurons in the context of CNP. Methods We used an adeno-associated virus harboring the SC-specific promoter Mpz and expressing the CD63-GFP gene to track SC-EV transport. MicroRNA (miRNA) expression profiles of EVs and gain-of-function and loss-of-function regulatory experiments revealed that miR-142-5p was the main cargo of SC-EVs. Next, luciferase reporter gene and phenotyping experiments confirmed the direct targets of miR-142-5p. Results The contents and granule sizes of plasma EVs were significantly greater in rats with chronic sciatic nerve constriction injury (CCI) than in sham rats. Administration of the EV biogenesis inhibitor GW4869 ameliorated memory impairment in CCI rats and reversed CCI-associated dendritic spine damage. Notably, during CCI stress, SC-EVs could be transferred into the brain through the circulation and accumulate in the hippocampal CA1-CA3 regions. miR-142-5p was the main cargo wrapped in SC-EVs and mediated the development of CCI-associated memory impairment. Furthermore, α-actinin-4 (ACTN4), ELAV-like protein 4 (ELAVL4) and ubiquitin-specific peptidase 9 X-linked (USP9X) were demonstrated to be important downstream target genes for miR-142-5p-mediated regulation of dendritic spine damage in hippocampal neurons from CCI rats. Conclusion Together, these findings suggest that SC-EVs and/or their cargo miR-142-5p may be potential therapeutic targets for memory impairment associated with CNP. Supplementary Information The online version contains supplementary material available at 10.1186/s12974-024-03081-z.
Introduction Chronic neuropathic pain (CNP) is a common sequela of peripheral nerve injury and abnormal nervous system function [1]. With a prevalence ranging from 6.9 to 10% of the general population, CNP is a major contributor to the global disease burden [1,2]. Substantial evidence from clinical and experimental research suggests that chronic pain often coexists with memory impairment [3,4]. Peripheral nerve injury has been linked to deficits in short-term hippocampal working and recognition memory and to long-term potentiation deficits associated with spatial learning and memory disorders [5]. Persistent pain accelerates memory loss, affects daily activities, and decreases patient quality of life; for example, the relative risk of inability to manage finances independently is 11.8% greater [6]. Moreover, CNP has also been reported to induce genetic and structural changes in brain regions, including hippocampal volume reduction, abnormal hippocampal gliosis, and altered microtubule stability [3,7-9]. Dendritic spines are tiny postsynaptic protrusions on the dendritic surface of neurons where synapses between neurons are formed. Changes in the number and morphology of spines are strongly correlated with learning and memory functions. Stress-induced dendritic spine remodeling in hippocampal neurons is somewhat reversible; however, prolonged and sustained stress may result in irreversible abnormal dendritic spine remodeling and impairment of hippocampal function, resulting in cognitive impairment. Recent studies have reported that a reduced number and density of dendritic spines in hippocampal neurons may be involved in the memory impairment associated with CNP [4]. However, current pain therapies are frequently ineffective for CNP. Moreover, these therapies, whether or not they relieve pain, typically do not specifically target the rehabilitation of pain-related memory impairment [5]. Therefore, new strategies targeting the prevention or treatment of CNP-related memory
impairment are urgently needed. Schwann cells (SCs) are the most common glial cells in the peripheral nervous system and innervate peripheral nerves by interacting with axons and blood vessels [10]. SCs serve as a double-edged sword: they engage in axon creation and repair [11] but also perceive harmful stimuli and relay injurious information to nerves, triggering mechanical abnormalities in mouse models of neuropathic and cancer pain [10,12]. Whether SCs participate in the onset of CNP-associated memory impairment is unknown. Extracellular vesicles (EVs) are nanoscale membrane vesicles that are actively released by cells. They serve as important transduction mediators and mediate signal exchange between different cells and tissues [13]. Exosomes and microvesicles, which are derived from separate biogenesis routes and are referred to collectively as small EVs (sEVs), are the two basic forms of EVs [14]. EVs are key mediators of protein and RNA transfer between glial cells and neurons and are involved in spatial transmission in the nervous system [15]. SC-derived EVs (SC-EVs) containing mRNA, miRNA, and protein cargoes are transferred into damaged axons and are crucial for axonal elongation and remyelination [11]. Little is known about whether SC-EVs can mediate communication between the sciatic nerve and the central nervous system, particularly in the exacerbation of brain pathology and memory impairment under CNP conditions.
In this study, we report that concomitant memory impairment following chronic constriction injury (CCI) resulting from unilateral sciatic nerve injury was associated with reductions in dendritic spine density, dendritic complexity, and the expression of synapse-associated proteins in the hippocampus. Specifically, SC-EVs from injured sciatic nerves accumulated primarily in hippocampal CA1-CA3 cells through the bloodstream circulation. miR-142-5p was identified as the key molecule responsible for pathological SC-EV-mediated memory impairment, and α-actinin-4 (ACTN4), ELAV-like protein 4 (ELAVL4, also known as HuD) and ubiquitin-specific peptidase 9 X-linked (USP9X) were identified as novel target molecules of miR-142-5p. ACTN4 belongs to a class of actin-binding proteins whose primary function is to cross-link actin filaments into bundles; cross-linking of actin filaments provides rigidity and stability for the filaments, and ACTN4 governs dendritic spine dynamics [16]. ELAVL4 is a member of the human antigen/ELAV-like family of RNA-binding proteins (RBPs) that are mostly expressed in neurons but also, at lower levels, in the pancreas and testis [17]. ELAVL4 is essential for proper neural development, nerve regeneration, and synaptic plasticity, and its dysregulation is involved in several pathologies [18,19]. USP9X is a neurodevelopmental-disorder-associated deubiquitinase involved in dendritic spine development [20]. Reducing EV secretion, interfering with miR-142-5p expression or overexpressing ACTN4, ELAVL4 and USP9X ameliorated memory impairment and rescued CNP-associated dendritic spine changes. Together, these findings provide reliable evidence that SC-EVs mediate pathologic communication between dysfunctional sciatic nerves and the brain. Animals A total of roughly 230 male SD rats (5-8 weeks, 180-220 g) were purchased from Chengdu Dashuo Laboratory Animal Co. Rats were housed under specific-pathogen-free (SPF) conditions with free access to food and water and a 12-hour light/dark cycle.
Cell culture HT-22 and HEK293T cells (Huiying Biological Technology Co., Ltd., Shanghai, China) were maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 4.5 g/L glucose, 10% fetal bovine serum and 1% penicillin-streptomycin at 5% CO2. Isolated embryonic hippocampal neurons were cultured as previously described [21]. Briefly, pregnant rats at gestational days 15 and 17 were anesthetized, and the hippocampi of the unborn rats were isolated under a dissection microscope at 4 °C. The hippocampal tissue was sliced into tiny pieces using scissors and digested for 8-12 min with 0.25% trypsin. After trituration with the plating medium, the neurons were filtered through a 40 μm cell sieve and plated onto a cover slip or a 6-well cell culture plate coated with poly-D-lysine. The neurons were incubated in a humidified incubator at 37 °C with 5% CO2 for 10 min before the medium was changed to neurobasal medium supplemented with 2% B27. CNP model establishment and treatment Rat models of CNP induced by peripheral nerve injury were established as described previously [22,23]. The details of the CNP rat model are provided in the Supplementary Methods section. We successfully constructed rat CNP models using CCI, partial sciatic nerve ligation (PSNL) and spared nerve injury (SNI), respectively (Supplementary Fig. 1 and Supplementary Fig. 2). To inhibit exosome synthesis, the neutral sphingomyelinase-2 (nSMase2) inhibitor GW4869 (2 mg/kg) was intraperitoneally administered three times per week for three weeks in vivo [24]. Behavioral tests Behavioral tests (including the PWT test, PWL test, acetone test, Y maze test and object recognition tests) were carried out by two trained operators who were blinded to the group to which the animals belonged. Rats were habituated to the testing environment for 2 h before testing. Specific information can be found in the Supplementary Methods section.
AAV vector construction, production, titration, and injection AAV vector construction, production and titration were performed as described in the Supplementary Methods section. In brief, rats were anesthetized with 2-3% isoflurane, and the sciatic nerve and dorsal root ganglion (DRG) were exposed as described in previous studies [25,26]. A volume of 4 µL of the viral vectors was manually and gently injected into the sciatic nerve/DRG through a 33-gauge needle (Hamilton syringe). The needle was kept at the injection location for 1 min before being carefully withdrawn. Golgi and DIL/DIO staining Golgi staining was performed utilizing the FD Rapid Golgi Stain™ Kit according to the manufacturer's instructions (FD NeuroTechnologies, USA). Briefly, rat brains were washed in distilled water and impregnated with an impregnation solution for 14 days. The brains were sectioned at 150 μm thickness with a vibratome (Leica, Wetzlar, Germany), mounted on slides, and stained with chemicals from the kit. A Zeiss LSM 710 Duo microscope with 20X lenses was used to acquire Z-stack images of the neurons. Dendritic spines were imaged with 100X objectives. Sholl analysis was performed with ImageJ to assess the total dendritic length, number of branch points, and spine density. With ImageJ software, each spine head was manually specified in the channel of the cell fill to pinpoint its location. Dendritic spines were classified as previously reported [27]. Specifically, the criteria for spine classification were as follows: spines with lengths greater than 3 μm and less than 10 μm were classified as "filopodia"; spines with lengths longer than their widths were classified as "long, thin"; spines with lengths shorter than their widths were classified as "stubby"; and spines with head widths greater than the neck width were classified as "mushroom."
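The size-based rules above amount to a simple decision procedure. A minimal sketch follows (the function name and per-spine measurement fields are illustrative, not part of the FD kit or ImageJ output; thresholds are in micrometers as quoted, and the rules are applied in an order that resolves overlapping criteria, with filopodia and mushroom checked first):

```python
def classify_spine(length_um, width_um, head_width_um, neck_width_um):
    """Classify a dendritic spine by the size criteria quoted in the text.

    All inputs are in micrometers. The rule order (filopodia, then
    mushroom, then long/thin vs. stubby) is an assumption made here to
    disambiguate spines that satisfy more than one criterion.
    """
    if 3.0 < length_um < 10.0:
        return "filopodia"          # length between 3 and 10 um
    if head_width_um > neck_width_um:
        return "mushroom"           # head wider than neck
    if length_um > width_um:
        return "long, thin"         # longer than it is wide
    if length_um < width_um:
        return "stubby"             # wider than it is long
    return "unclassified"
```

Counting "mushroom" plus "stubby" spines then gives the mature-spine density used in the Sholl-based quantification.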
Filopodia and long, thin spines are immature forms, whereas stubby and mushroom spines are mature. For DIL/DIO staining, cells were fixed with 2% paraformaldehyde for 15 min at room temperature. Then, 5 µM DIL (Beyotime Biotechnology, Cat No. C1036) or DIO (Beyotime Biotechnology, Cat No. C1038) was added to the cells, which were incubated for 15 min at RT. Before imaging, the cells were washed twice with PBS. Images were acquired using an inverted fluorescence microscope (Olympus, Japan) with a 100X objective. Transfection of primary neurons with AAV, miRNA agomir or antagomir For AAV transduction in primary neurons, the viral particles were added to neurobasal medium at a multiplicity of infection of 1 × 10^6 on the second day of plating. The neurons were analyzed on day 14 in vitro (DIV 14). The observed transduction efficiency was > 90%. For miRNA transfection in primary neurons, the neurons were transfected on DIV 12. According to the manufacturer's instructions, Lipo3000 (L3000008, Invitrogen) was used to transfect 100 nM miRNA agomir or antagomir into each well of cells using Opti-MEM™ I Reduced Serum Medium (31985062, Thermo Fisher). The cells were examined 48 h after miRNA agomir or antagomir transfection. The miRNA agomir, antagomir, negative control (NC) agomir, and NC antagomir were purchased from RiboBio (Guangzhou, China). miRNA agomir and antagomir injection Rats were anesthetized with 2-3% isoflurane. For bilateral hippocampal stereotactic injection, the miR-142-5p agomir was injected into the hippocampus of rats using the following coordinates: anteroposterior, ± 4.3 mm; medial and lateral, ± 2.8 mm; and dorsoventral, -2 mm. After 48 h, brain tissues were harvested for further experiments. For sciatic nerve injection, the miR-142-5p antagomir (2 nmol/rat) was injected into the sciatic nerve once a week for 3 weeks after CCI induction.
Separation of extracellular vesicles from tissue EVs were isolated from tissue using a protocol developed previously by Vella et al., with minor modifications [28]. The dissociation mixture was formulated utilizing the Miltenyi Human Tumor Dissociation Kit (Miltenyi Biotec, cat. no. 130-095-929). The details are listed in the Supplementary Methods section. Cellular uptake of labeled exosomes from rat plasma Exosomes from the plasma of normal and model rats were labeled with 1,1′-dioctadecyl-3,3,3′,3′-tetramethylindocarbocyanine perchlorate (DiI) fluorescent dye (Beyotime Biotechnology, C1036). Briefly, the EV solution was incubated with DiI (1 µmol) for 10 min in the dark at room temperature (RT), followed by a single wash with PBS and ultracentrifugation (100,000 × g) for 1 h at 4 °C. HT22 cells were incubated with DiI-labeled exosomes for 3 h (cell-to-EV ratio of 1:300). The cells were fixed with 4% paraformaldehyde, permeabilized with 0.1% Triton X-100 in PBS for 10 min and then blocked with 5% bovine serum albumin at room temperature for 1 h. Subsequently, the cells were stained with FITC-phalloidin (Beyotime Biotechnology, C1003) for 30 min at RT in the dark. Nuclei were stained with DAPI. A fluorescence microscope (Olympus, IX83, Japan) was used for observation and imaging. NTA, TEM, microbead-assisted flow cytometry, and other experiments NTA, TEM, SEM, microbead-assisted flow cytometry, immunofluorescence, western blotting, RT-qPCR, library preparation, sequencing, and quantification and differential expression analysis of miRNAs were conducted as described in the Supplementary Information: Supplementary Methods.
Statistical analysis GraphPad Prism 9.0 was used for statistical analysis. For comparisons between two groups, unpaired t tests or two-way analysis of variance (ANOVA) with Šídák's multiple comparisons test was used. Statistical differences among three groups were determined using two-way ANOVA with Tukey's multiple comparisons test. All the data are presented as the means ± SDs. A p value < 0.05 was considered to indicate statistical significance. Plasma_EV_CCI exacerbates memory impairment and dendritic spine damage We first collected EVs from the plasma of rats in the sham and CCI groups by differential ultracentrifugation and characterized them by transmission electron microscopy (TEM). The particle size of the extracted EVs was mainly distributed between 50 and 300 nm, with typical vesicular structures (Supplementary Fig. 3A). The expression of the marker proteins CD9, TSG101 and HSP70 in these EVs was detected (Supplementary Fig. 3B). Moreover, nanoparticle tracking analysis (NTA) revealed that the particle size and concentration of plasma EVs were significantly greater in the CCI group than in the sham group 14 days after surgery (Fig. 1A and Supplementary Fig. 3C). To investigate the influence of plasma EVs on hippocampal dendritic spines, we performed DIL staining following a 24-hour treatment of primary hippocampal neurons with plasma_EV_sham or plasma_EV_CCI. Plasma_EV_CCI reduced the number of dendritic spines in primary hippocampal neurons (Fig. 1B). In addition, the expression levels of the neuronal synapse-associated proteins PSD95 and SYN were decreased after treatment of primary hippocampal neurons with plasma_EV_CCI (Fig. 1C). To further determine the role of plasma EVs in CCI-associated memory impairment, we subcutaneously injected the classical EV inhibitor GW4869 into rats and assessed their memory behaviors after CCI (Fig. 1D). Unexpectedly, GW4869 significantly increased the PWT and PWL at 1, 3, 7, 14 and 21 days after CCI (Fig.
1D). Furthermore, GW4869 substantially enhanced spontaneous alternation behavior in CCI rats in the Y maze test at 7, 14, and 21 days after CCI (Fig. 1E) and improved the memory recognition indices of CCI rats 21 days after CCI in the ORM, OLM, and TOM tests (Fig. 1F). Moreover, the PWT, PWL, and cold pain score/duration in the CCI rats were also improved after GW4869 intervention (Supplementary Fig. 3D). Furthermore, we examined the changes in hippocampal neuronal dendritic spines and found that hippocampal (CA1 and CA2) neuronal dendritic spine density, dendritic length, total/mature dendritic spine density and spine head size were significantly lower in both CCI rats and CCI + Vehicle rats; the EV inhibitor GW4869 significantly reversed these pathological changes in CCI rats (Fig. 1G-I). Fig. 1 Plasma_EV_CCI damages neuronal dendritic spines, and the EV inhibitor GW4869 alleviates hyperalgesia and memory impairment in CCI rats. Plasma EVs were extracted by differential high-speed centrifugation of the rats' plasma. (A) NTA analysis showed that the concentration and diameter of plasma EVs were greater in CCI rats than in sham rats. n = 9 rats; unpaired Student's t test. (B, C) DIL staining and immunofluorescence staining showed that plasma_EV_CCI significantly reduced the dendritic spine density and the expression levels of the PSD95 and SYN proteins in primary hippocampal neurons. n = 4-6 images from three wells per group; unpaired Student's t test. (D) Flowchart for establishing the CCI model, GW4869 intervention, and evaluation of nociceptive hypersensitivity and memory behavior. The paw withdrawal threshold (PWT) and paw withdrawal latency (PWL) were determined using von Frey filaments and thermal pain stimulators, respectively. GW4869 treatment substantially raised the PWT and PWL at 1, 3, 7, 14 and 21 days after CCI. n = 12 rats; two-way ANOVA followed by Tukey's multiple comparisons test. (E) Compared with CCI or vehicle treatment, GW4869 treatment enhanced the spontaneous alternation behavior of CCI rats. n = 10 rats; two-way ANOVA followed by Tukey's multiple comparisons test. (F) Compared with CCI or vehicle treatment, GW4869 treatment increased the recognition indices of CCI rats in the ORM, OLM and TOM tests. n = 8-10 rats; two-way ANOVA followed by Tukey's multiple comparisons test. (G, H) After Golgi staining, Sholl analysis and ImageJ analysis showed that GW4869 rescued the dendritic complexity, dendritic length, density, and spine head size in hippocampal neurons from CCI rats compared with neurons from the hippocampus of CCI or CCI + Vehicle rats. n = 5-6 stained sections from 3 animals per group; ordinary one-way ANOVA followed by Tukey's multiple comparisons test for dendritic length; mixed-effects analysis followed by Tukey's multiple comparisons test for dendritic density and spine head size; two-way ANOVA followed by Tukey's multiple comparisons test for dendritic complexity. (I) GW4869 improved PSD95, SYN and BDNF protein levels in the hippocampus of CCI rats; ordinary one-way ANOVA followed by Tukey's multiple comparisons test. The data are shown as the means ± SDs. *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001. Schwann cell-derived EVs migrate through the blood circulation and accumulate in the hippocampus To verify the role of SC-EVs in the central nervous system (CNS) under CNP stress, we designed and constructed an AAV2/8 viral vector containing the Schwann cell-specific promoter Mpz (AAV-Mpz-CD63-GFP) and an AAV9 viral vector containing the neuron-specific promoter hSyn (AAV-hSyn-CD63-GFP) (Fig. 2A). To verify the specificity of viral vector transduction, we injected the AAV viruses described above into the sciatic nerves and DRGs of the rats and collected the sciatic nerve and DRG tissues 14 days later (Fig.
2A). After staining the sciatic nerve and DRG with DAPI, GFP, and S100β/MAP2, strong GFP fluorescence signals (colocalized with S100β/MAP2) were observed in the sciatic nerve and DRG (Supplementary Fig. 3E, F), indicating that the cell-type-specific viral vectors had high transduction efficiency and cell specificity. We subsequently investigated whether SC-EVs released under CNP stress could enter the central nervous system and impact memory function. AAV-Mpz-CD63-GFP, AAV-hSyn-CD63-GFP, or the control was administered by injection into the sciatic nerve of rats to introduce CD63-GFP into SC-EVs. The experimental model was then established after 14 days. Brain tissues were harvested after 14 days of modeling and subsequently subjected to co-staining with DAPI, GFP, and CD63. Our results showed that after AAV-Mpz-CD63-GFP transduction, CCI rats exhibited significant CD63-GFP signals, primarily in the hippocampal CA1/2/3 area (Fig. 2B, C), with a minor amount of signal in the hippocampal DG area (Supplementary Fig. 4A). In contrast, no CD63-GFP signals were present in the brains of the rats injected with the control virus, indicating that EVs released from SCs could reach the CNS and become enriched in the hippocampus. There was little CD63-GFP signal in the cortex and prefrontal cortex of the rats in either group (Supplementary Fig. 5A). In addition, we observed that hippocampal neurons exhibited a significant increase in CD63-GFP signals, while astrocytes and microglia exhibited a conspicuous lack of CD63-GFP signals (Supplementary Fig. 5B-D). We next asked whether this phenomenon was reproduced in other CNP models. The results showed that the CD63-GFP signals in the hippocampal CA1/2/3 region were significantly increased in the PSNL and SNI rat models transduced with AAV-Mpz-CD63-GFP (Fig. 2B, C).
Furthermore, we injected AAV-hSyn-CD63-GFP into the rat DRG and evaluated the enrichment of EVs in the hippocampus and prefrontal cortex on day 14 following CCI; we found almost no CD63-GFP signals in the hippocampus, cortex, or prefrontal cortex in either group (Supplementary Fig. 6A-F). Therefore, our subsequent experiments focused on SC-EVs. We further investigated the time course of SC-EV accumulation in the hippocampus after modeling. After successful transduction of rats with AAV-Mpz-CD63-GFP, the model was established, and whole brains were collected at 3, 7, and 21 days after modeling. At these time points, some CD63-GFP signals were observed in the hippocampal CA1/2/3 region of the CCI group, but only a tiny quantity of signal was observed in the hippocampus of the sham group (Supplementary Fig. 7A-C). Furthermore, our results showed a significant increase in sodium fluorescein leakage within the hippocampal region in rats exposed to CCI compared with sham rats (Supplementary Fig. 8A). This observation suggests increased permeability and damage of the blood-brain barrier in CCI rats. Interestingly, no significant difference was observed in the cortical region (Supplementary Fig. 8A). To further understand the source of SC-EV accumulation in the hippocampus, we investigated possible pathways involving axoplasmic transport and the circulation. First, we collected sciatic nerve, DRG, and spinal cord tissues from rats after AAV-Mpz-CD63-GFP transduction and evaluated the CD63-GFP signals using laser confocal microscopy. We found that CD63-GFP signals were significantly upregulated in the sciatic nerve (Supplementary Fig. 4B). Nevertheless, few CD63-GFP signals were observed in the DRG and spinal cord across all the experimental groups (Supplementary Fig.
4C, D), indicating that SC-EVs may not reach the CNS via the spinal cord pathway. EVs are rarely examined using flow cytometry because their nanoscale size is below the detection limit of conventional flow cytometry. We therefore used microbead-assisted flow cytometry to detect GFP-labeled EVs in plasma according to previous reports [24] (Fig. 2D). Both transmission electron microscopy (TEM) and scanning electron microscopy (SEM) indicated that the collected EVs were densely coated on the microbeads, indicating EV enrichment on the beads (Supplementary Fig. 8B). Following AAV-Mpz-CD63-GFP transduction, we isolated EVs from the plasma of the rats (at 7 and 14 days postmodeling) (Fig. 2D). GFP-labeled EVs were significantly more abundant in the plasma of CCI rats (7 days after modeling) than in that of sham rats (Fig. 2E, F). On day 14 of modeling, there was a trend toward a difference between the two groups, but it did not reach significance (Fig. 2F). Furthermore, after treating primary hippocampal neurons with plasma_EV_sham or plasma_EV_CCI for 24 h, we found that the neurons were predominantly enriched in plasma_EV_CCI, as indicated by immunofluorescence staining (Fig. 2G). (E, F) Microbead-assisted flow cytometry revealed that the GFP/CD63-labeled ratio in the plasma EVs of CCI rats was greater than that in sham rats on day 7 after modeling. There was no change in the GFP/CD63 ratio between the two groups on day 14 postmodeling. n = 3 rats; unpaired Student's t test. (G) Images of primary rat hippocampal neurons co-incubated with DIL-labeled sham or CCI rat plasma EVs revealed increased plasma_EV_CCI uptake within the neurons. n = 3-4 images from three wells per group; unpaired Student's t test. The data are shown as the means ± SDs. ns, no significant difference; *P < 0.05; **P < 0.01.
Identification of miR-142-5p as the main cargo of SC-EVs_CCI enriched in the hippocampus Although EVs contain a range of signaling molecules involved in cell-cell communication, there is growing evidence that miRNAs are essential molecules in the regulation of recipient cell activity by EVs [29]. We therefore hypothesized that miRNAs transported by SC-EVs are engaged in dendritic spine remodeling in CCI-associated memory impairment. Five sequential procedures were conducted. Initially, hippocampal tissues were harvested from the rats in the sham or CCI group at 14 days and centrifuged to extract hippocampal EVs. The whole hippocampi from three rats per group were pooled, and EVs were isolated. EV-contained small RNAs were isolated and subjected to deep RNA-seq analysis (each sample included genetic information from 3 rats). Global miRNA profiling revealed significant upregulation of 4 miRNAs and downregulation of 4 miRNAs in hippocampal EVs (filtering criteria: p < 0.05, |log2(FC)| ≥ 0.58) (Fig. 3A). Subsequently, to predict which EV-contained miRNAs potentially promote dendritic spine remodeling, the miRNAs significantly increased in hippocampal EVs were analyzed against three miRNA lists: Schwann cell-enriched miRNAs [30], the Tissue Atlas database (https://ccbweb.cs.uni-saarland.de/tissueatlas/hsa_vs_rno), and the sham/CCI rat hippocampus differential miRNAs from our RNA-seq. Venn overlap analysis revealed 7 common miRNAs, including 4 significantly upregulated miRNAs (Fig. 3B).
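The triage just described, differential-expression filtering followed by overlap across candidate lists, can be sketched with plain set operations. The miRNA names below are those from the text, but the expression table and the contents of the two external lists are made-up illustrations; the actual analysis applied these thresholds to RNA-seq counts with proper differential-expression statistics:

```python
import math

# Hypothetical differential-expression table for hippocampal-EV miRNAs:
# miRNA -> (fold change CCI/sham, p value). Values are illustrative only.
de_table = {
    "miR-142-5p": (1.9, 0.01),
    "miR-25-5p":  (1.6, 0.03),
    "miR-505-3p": (1.7, 0.02),
    "miR-873-5p": (1.8, 0.04),
    "miR-9999":   (1.1, 0.40),   # fails both filters
}

# Filtering criteria quoted in the text: p < 0.05 and |log2(FC)| >= 0.58.
significant = {
    name for name, (fc, p) in de_table.items()
    if p < 0.05 and abs(math.log2(fc)) >= 0.58
}

# Overlap with the two external candidate lists (contents illustrative).
schwann_enriched = {"miR-142-5p", "miR-25-5p", "miR-505-3p", "miR-873-5p"}
tissue_atlas     = {"miR-142-5p", "miR-25-5p", "miR-505-3p", "miR-873-5p"}

candidates = significant & schwann_enriched & tissue_atlas
```

Note that |log2(FC)| ≥ 0.58 corresponds to roughly a 1.5-fold change in either direction, which is why the weakly changed entry is excluded.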
miR-142-5p mediates memory impairment and hippocampal neuronal dendritic spine damage via SC-EV transport Therefore, we performed gain-of-function and loss-of-function experiments to obtain direct evidence of miR-142-5p-induced memory impairment and dendritic spine damage originating in Schwann cells of the sciatic nerve. We designed and constructed AAVs containing the Schwann cell-specific promoter Mpz, AAV-miR-142-5p sponge and AAV-miR-142-5p, to block or upregulate miR-142-5p expression in Schwann cells by sciatic nerve injection 14 days after modeling, after which cognitive behavioral tests were performed (Fig. 4A). The results showed that the AAV-miR-142-5p sponge improved the spontaneous alternation of CCI rats at 7, 14 and 21 days in the Y maze and improved the cognitive recognition indices of CCI rats at 21 days in the novel object recognition, location recognition, and temporal sequencing experiments; in contrast, AAV-miR-142-5p aggravated the memory impairment of the CCI rats (Fig. 4B, C). In addition, we assessed pain and memory behaviors after sciatic nerve injection of the miR-142-5p antagomir (Supplementary Fig. 8C). Sciatic nerve injection of the miR-142-5p antagomir improved spontaneous alternation behavior in CCI rats in the Y maze on days 7, 14, and 21 postmodeling; improved memory recognition indices in the ORM, OLM, and TOM experiments on day 21 postmodeling; and improved the PWT, PWL, and cold pain scores/durations in CCI rats (Supplementary Fig. 8D, E). Furthermore, we found that the AAV-miR-142-5p sponge improved hippocampal (CA1 and CA2) dendritic length, dendritic spine density, the number of mature dendritic spines, and spine head size, as determined by Golgi staining, as well as the expression levels of the synapse-associated proteins PSD95, SYN, and BDNF in CCI rats (Fig.
4D, E, G). In contrast, AAV-miR-142-5p aggravated these pathological changes in CCI rats. Additionally, our study revealed that administration of the miR-142-5p antagomir effectively counteracted the reduction in dendritic spine density observed in primary hippocampal neurons following exposure to plasma_EV_CCI, as shown by DIL staining (Fig. 4F). These findings point to the involvement of sciatic nerve-derived miR-142-5p in the onset of CCI-mediated memory impairment. (C) RT-qPCR results showed that the relative expression of mature miR-142-5p, mature miR-25-5p, mature miR-505-3p, and mature miR-873-5p was markedly upregulated in the hippocampus of CCI rats compared with that in sham rats. n = 4-6 rats; unpaired Student's t test. (D) RT-qPCR results showing that the relative expression of pri-miR-25-5p, pri-miR-505-3p and pri-miR-873-5p was downregulated in the hippocampus of CCI rats, whereas that of pri-miR-142-5p was upregulated compared with that in sham rats. n = 4-5 rats; unpaired Student's t test. (E) RT-qPCR results showing that the relative expression of mature miR-142-5p and miR-25-5p was increased in the sciatic nerves of CCI rats, whereas the expression of mature miR-505-3p was decreased and that of mature miR-873-5p did not differ from that in sham rats. n = 4-6 rats; unpaired Student's t test. The data are shown as the means ± SDs. ns, no significant difference; *P < 0.05; **P < 0.01. Fig.
4 Identification of miR-142-5p as a molecule responsible for SC-EV-induced memory impairment and dendritic spine damage. We designed and constructed an AAV carrying a Schwann cell-specific promoter that modulates (upregulates or sequesters) miR-142-5p expression in Schwann cells after sciatic nerve injection. (A) Flowchart for establishing the CCI model, injecting AAV, and evaluating memory-related behaviors. (B) The AAV-miR-142-5p sponge enhanced the spontaneous alternation behavior of CCI rats compared with that of rats in the CCI + AAV-NC group, whereas AAV-miR-142-5p aggravated this behavior. n = 9-10 rats; two-way ANOVA followed by Tukey's multiple comparisons test. (C) The AAV-miR-142-5p sponge increased the recognition index of CCI rats in the ORM, OLM and TOM tests compared with that of rats in the CCI + AAV-NC group, whereas AAV-miR-142-5p worsened this index. n = 8-10 rats; ordinary one-way ANOVA followed by Tukey's multiple comparisons test for OLM and TOM; mixed-effects analysis followed by Holm-Šídák's multiple comparisons test for ORM. (D, E) After Golgi staining, Sholl analysis and ImageJ analysis showed that, compared with those in neurons in the rat hippocampus of the CCI + AAV-NC group, the AAV-miR-142-5p sponge rescued the dendritic complexity, dendritic length, density, and spine head size in hippocampal neurons of CCI rats, whereas AAV-miR-142-5p exacerbated these parameters. n = 5-6 stained sections from 3 animals per group; ordinary one-way ANOVA followed by Tukey's multiple comparisons test for spine head size and dendritic length; two-way ANOVA followed by Tukey's multiple comparisons test for dendritic complexity; mixed-effects analysis followed by Tukey's multiple comparisons test for dendritic density. (F) DIL staining showed that the miR-142-5p antagomir effectively counteracted the reduction in dendritic spine density observed in primary hippocampal neurons following exposure to plasma_EV_CCI. n =
6 images from three wells per group.ordinary one-way followed by Tukey's multiple comparisons test was used.(G) The AAV-miRNA-142-5p sponge increased PSD95, SYN and BDNF protein levels in the hippocampus of CCI rats, but AAV-miR-142-5p decreased PSD95, SYN and BDNF protein levels.Ordinary one-way followed by Tukey's multiple comparisons test was used.The data are shown as the means ± SDs. *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001 Identification of ACTN4, ELAVL4 and USP9X as important targets of miR-142-5p Bioinformatics analyses and experimental validation were used to identify miR-142-5p target genes that are objectively responsible for dendritic spine remodeling.We analyzed the potential target genes of miR-142-5p by prediction via the TargetScan, miRDB and miRGate databases, and 137 target genes were identified after intersection analysis via Venn analysis (Fig. 5A).The 137 target genes were subjected to GO enrichment analysis using DAVID tools, and the enrichment pathway GO:0005856 (cytoskeleton) was significantly different and associated with dendritic spines.Consequently, 10 noteworthy putative target genes of miR-142-5p were identified from the aforementioned pathway: SGCE, KITLG, USP9X, ELAVL4, SPIRE1, DMD, ACTN4, SLAIN1, RHOC, and PTPN4.Next, we examined the expression of these 10 genes.First, we stereotactically injected the miR-142-5p agonist (agomir) and NC control into the bilateral hippocampal region of normal rats and found that the mRNA levels of five target genes (ELAVL4, PTPN4, USP9X, ACTN4 and SLAIN1) were significantly reduced (Fig. 5B).Immediately after that, we examined the five differentially expressed genes in the hippocampal tissues of rats in both groups and found that the mRNA levels of ELAVL4, USP9X, and ACTN4 were significantly decreased in the hippocampal tissues of the CCI rats after 14 days of modeling than in sham rats, whereas the mRNA levels of PTPN4 and SLAIN1 were not significantly different (Fig. 5C, D). 
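The database-intersection step described above (three prediction tools, then a Venn intersection) amounts to simple set operations; a minimal sketch, in which the gene lists are illustrative placeholders rather than the actual TargetScan/miRDB/miRGate outputs:

```python
# Hypothetical prediction lists; in the study each database returns
# many candidate targets and the three-way intersection yields 137 genes.
targetscan = {"ACTN4", "ELAVL4", "USP9X", "SGCE", "RHOC"}
mirdb = {"ACTN4", "ELAVL4", "USP9X", "KITLG", "DMD"}
mirgate = {"ACTN4", "ELAVL4", "USP9X", "SPIRE1"}

# Genes predicted by all three databases (the core of the Venn diagram).
shared = targetscan & mirdb & mirgate
print(sorted(shared))  # ['ACTN4', 'ELAVL4', 'USP9X']
```

The same intersection could then be fed to a GO enrichment tool such as DAVID, as done in the study.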
In addition, we examined the protein levels of the above three important target genes in the hippocampus and found that the protein levels of ACTN4, ELAVL4 and USP9X were significantly lower in the hippocampi of rats injected bilaterally with the miR-142-5p agomir by hippocampal stereotaxis (24 h after the injection) than in the NC control group (Fig. 5F); likewise, compared with those in the sham group, the protein levels of ACTN4, ELAVL4 and USP9X were significantly lower in the hippocampus of CCI rats (Fig. 5G). Moreover, we examined the changes in the protein expression of these target genes in the hippocampus after the injection of AAV to interfere with miR-142-5p expression in Schwann cells and found that the protein levels of ACTN4, ELAVL4 and USP9X were significantly lower in the CCI + NC sponge group than in the Sham + NC sponge group. The miR-142-5p sponge significantly increased the protein expression of the target genes in the hippocampus of CCI rats, whereas AAV-miR-142-5p significantly reduced it (Fig. 5H). These results suggest that ACTN4, ELAVL4 and USP9X are important target genes of miR-142-5p, which is predominantly carried by SC-EVs.

The overexpression of ACTN4, ELAVL4 or USP9X reverses miR-142-5p-mediated dendritic spine damage

We predicted the binding sites of miR-142-5p in ACTN4, ELAVL4 and USP9X through a database. miR-142-5p has three binding sites in ELAVL4, two of which are in close proximity to each other; thus, two mutant plasmids, MUT1 and MUT2, were designed (the latter containing the two binding sites in close proximity). miR-142-5p has only one potential binding site in each of ACTN4 and USP9X (Fig. 6A and Supplementary Fig. 8F). To directly demonstrate that ACTN4, ELAVL4 and USP9X are direct downstream targets of miR-142-5p, we performed luciferase assays in HEK293T cells transfected with plasmids containing the predicted miR-142-5p binding site in the 3′ untranslated region (UTR). The miR-142-5p agomir inhibited the luciferase activity of reporters carrying the 3′UTRs of ACTN4, ELAVL4 and USP9X, and these inhibitory effects were rescued by the corresponding binding-site mutation (MUT1 of ELAVL4) (Fig. 6B). These results suggest that miR-142-5p can inhibit the expression of ACTN4, ELAVL4 and USP9X through direct binding.

Then, we further investigated the effects of ACTN4, ELAVL4 and USP9X on dendritic spines. We designed and constructed AAVs overexpressing ACTN4, ELAVL4 and USP9X, respectively, transfected primary hippocampal neurons for 14 days, and then coadministered plasma EVs or the miR-142-5p agomir. We found that overexpression of ACTN4, ELAVL4 or USP9X reversed the effects of plasma_EV_CCI or the miR-142-5p agomir on the dendritic spines of primary hippocampal neurons, reversing both the reduction in spine density (Fig. 6C, D) and the associated reduction in the levels of the synaptic proteins PSD95 and SYN (Fig. 6E-H).

Discussion

In the present study, we identified previously unknown mechanisms of sciatic nerve-brain interorgan communication in which Schwann cell-derived EVs and their cargo miRNAs can be transferred to the brain, particularly the hippocampus, inducing dendritic spine remodeling and memory impairment in a CNP model. Inhibition of EV secretion and intervention targeting miR-142-5p ameliorated CNP-associated memory impairment and dendritic spine remodeling.
Exosomes (30-150 nm) and microvesicles are the most extensively studied extracellular vesicles [29]. Emerging data reveal that several forms of EVs, such as exosomes and microvesicles, can transfer functional proteins and RNA to surrounding or distant cells [34]. In addition, exosomes contain mRNAs and miRNAs that, when transported to destination cells, remain functional and change cellular behavior [34,35]. EVs play a role in the etiology of various neurological disorders, including Alzheimer's disease and Parkinson's disease. However, studies have demonstrated that exosomes also regulate nociception and other sensory processing pathways [36,37]. Recent research has shown that exosomes isolated from the rat nucleus accumbens and medial prefrontal cortex contribute to allodynia and hyperalgesia following nerve damage [38]. Exosomes released by immune cells or stimulated SCs may be ingested by peripheral sensory neurons, triggering a chain reaction that results in neuronal sensitization. As a double-edged sword, SCs, critical components of the peripheral nervous system, are continually exposed to physiological and mechanical stressors during movement due to dynamic stretching and compression forces [10,11]. SC-EVs transmit cargo that promotes peripheral nerve regeneration in vitro [11], but they also play a role in nociceptive hypersensitivity [39]. Here, by generating three classic models of CNP, namely CCI, SNI, and PSNL (14 days after modeling), and overexpressing CD63-GFP under a specific SC promoter using AAVs, we discovered for the first time that SC-EVs were significantly enriched in the hippocampus of CNP rats compared with sham rats. Additionally, CD63-GFP signals were significantly enriched in the hippocampus (CA1-CA3) at 3, 7, 14, and 21 days after CCI modeling. As expected, treatment with the classical EV inhibitor GW4869 reversed the nociceptive hypersensitivity and memory impairment behaviors associated with CNP.

Fig. 5 ACTN4, ELAVL4 and USP9X are direct targets of miR-142-5p and contribute to miR-142-5p-induced memory impairment and dendritic spine damage. (A) Target genes of miR-142-5p were predicted using TargetScan, miRDB and miRGate, and the predicted data were subsequently subjected to Venn analysis. (B) Intersecting genes were subjected to GO pathway enrichment analysis, and 10 genes in the dendritic spine-associated actin pathway were selected for validation. The miR-142-5p agomir and NC were injected into the bilateral hippocampus of normal rats via stereotaxic localization. After 48 h, RT-qPCR showed that the mRNA levels of ELAVL4, PTPN4, USP9X, ACTN4, and SLAIN1 in the hippocampal tissues of the rats in the miR-142-5p agomir intervention group were significantly lower, whereas the other genes were not significantly different. n = 3-6 rats; unpaired Student's t test was used. (C, D) RT-qPCR revealed that ACTN4, ELAVL4 and USP9X mRNA levels were significantly lower in the hippocampal tissues of CCI rats than in those of sham rats. n = 5-8 rats; unpaired Student's t test was used. (E) RT-qPCR revealed that the miR-142-5p antagomir significantly reversed the inhibitory effect of plasma_EV_CCI on the mRNA levels of ACTN4, ELAVL4 and USP9X in primary hippocampal neurons. Ordinary one-way ANOVA followed by Tukey's multiple comparisons test was used. (F) Western blot analysis showed that the protein levels of ACTN4, ELAVL4 and USP9X were significantly lower in the hippocampal tissues of rats stereotaxically injected with the miR-142-5p agomir than in those of the NC group. n = 3 rats; unpaired Student's t test was used. (G) Western blot analysis revealed that ACTN4, ELAVL4 and USP9X protein levels were significantly lower in the hippocampal tissues of CCI rats than in those of sham rats. n = 3-4 rats; unpaired Student's t test was used. (H) Western blot analysis showed that the AAV-miR-142-5p sponge significantly rescued the decreased ACTN4, ELAVL4 and USP9X protein levels in the hippocampal tissues of CCI rats compared with the CCI + AAV-NC group, whereas AAV-miR-142-5p exacerbated the decrease in protein expression. n = 3-4 rats; ordinary one-way ANOVA followed by Tukey's multiple comparisons test was used. The data are shown as the means ± SDs. *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001

Fig. 6 ACTN4, ELAVL4 and USP9X overexpression reversed miR-142-5p- and plasma_EV_CCI-induced dendritic spine damage. (A) The potential binding sites of miR-142-5p in ACTN4, ELAVL4 and USP9X. (B) The direct effects of miR-142-5p on ACTN4, ELAVL4 and USP9X were identified by reporter gene analysis. miR-142-5p has three potential binding sites in ELAVL4; therefore, two mutant plasmids were designed separately for ELAVL4 (MUT1 and MUT2), one of which (MUT2) included the two relatively close binding sites, as presented in Supplementary Fig. 8F. After transfecting HEK293T cells with miR-142-5p and reporters carrying the 3′UTRs (including mutated binding sites) of ACTN4, ELAVL4 and USP9X, dual-luciferase reporter activities were detected. Compared with the WT + agomir group, the mutated reporter plasmids of ACTN4, ELAVL4 (MUT1) and USP9X reversed the agomir-induced decrease in Rluc/luc (Renilla luciferase) activity. Ordinary one-way ANOVA followed by Dunnett's multiple comparisons test was used for ACTN4 and USP9X; ordinary one-way ANOVA followed by Tukey's multiple comparisons test was used for ELAVL4. (C-H) After transfecting primary hippocampal neurons with AAVs overexpressing ACTN4, ELAVL4 or USP9X, plasma EVs or the miR-142-5p agomir were coadministered, and DIL staining and immunofluorescence staining were performed. Overexpression of these genes significantly ameliorated the impairments in dendritic spine density and in PSD95 and SYN protein expression induced by plasma_EV_CCI or the miR-142-5p agomir. n = 3-4 images from three wells of each group; ordinary one-way ANOVA followed by Dunnett's multiple comparisons test was used. The data are shown as the means ± SDs. *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001

Moreover, although EVs contain proteins and lipids, they are highly enriched in noncoding RNAs. The miRNA content of EVs varies considerably according to the cell type of origin and does not simply mirror the miRNA profile of donor cells [35]; specific miRNAs are preferentially enriched in EVs. The release of proteins and RNA in exosomes has been postulated to constitute a fundamental mode of communication in the nervous system, augmenting the established processes of anterograde and retrograde transmission across synapses [40,41]. Disease states may alter exosome composition, and new research has revealed that SCI models of neuropathic pain affect the proteomic profile of sEVs in the mouse circulation [42]. An emerging finding showed that following peripheral nerve injury, sensory neurons transfer EV-encapsulated miR-23a to M1 macrophages, activating them and exacerbating neuropathic pain [37]. In this study, we found that CNP stress altered the composition of hippocampal EVs. By combining bioinformatic and in vivo approaches, we found that miR-142-5p is a common molecule that is significantly increased in the hippocampal tissues of CNP-associated memory impairment models. Furthermore, we demonstrated that the levels of both precursor and mature miR-142-5p are dramatically increased in the sciatic nerve, whereas only the level of mature miR-142-5p was significantly increased in the hippocampal tissues of CNP model rats. Notably, an AAV carrying a specific Schwann cell promoter exacerbated or ameliorated CCI-associated memory impairment and hippocampal neuronal dendritic spine damage by upregulating or sequestering miR-142-5p expression in Schwann cells of the sciatic nerve. In addition, intrasciatic nerve injection of the miR-142-5p antagomir also significantly attenuated CNP-associated memory impairment. miR-142-5p is implicated in neurodegenerative disorders such as Alzheimer's disease, isoflurane-induced neurological impairment, and posttraumatic stress disorder, and inhibiting miR-142-5p improves memory impairment in these models [31,32,43]. An RNA sequencing profile of the sciatic nerve in the CCI model revealed that miR-142-5p levels were considerably greater in the CCI group than in the control group [44]. To our knowledge, our study is the first to report a critical role of Schwann cell-derived miR-142-5p in the development of memory impairment associated with CNP. These results provide an experimental basis for the therapeutic application of anti-miR-142-5p in CNP-associated memory disorders.

A new study revealed that ELAVL4 influences multiple biological pathways linked to Alzheimer's disease, including those involved in synaptic function and the expression of genes downstream of APP and tau signaling [45]. Hu proteins participate in numerous aspects of posttranscriptional gene regulation by directly binding mRNAs, including mRNA polyadenylation, alternative splicing, trafficking, turnover, and translation [46]. ELAVL4 interacts with numerous unstable mRNAs at the molecular level, and as a consequence of this contact, the target transcript is stabilized [47]. ELAVL4 contains three RNA recognition motifs, the first two of which are necessary for binding to GAP-43 mRNA, one of ELAVL4's best-studied targets [47]. In addition to GAP-43, other mRNAs, including BDNF, AChE, and tau, have been demonstrated to interact with ELAVL4 both in vitro and in vivo [48,49].
Ankyrin-G contains multiple ankyrin repeat domains, and its isoforms are abundantly expressed in the brain and play important roles in a variety of neurobiological processes, including synaptogenesis, synaptic plasticity, action potential generation and transmission, and ion channel regulation, with the 190 kDa isoform being enriched in dendrites and postsynaptic densities and regulating dendritic spine structure. USP9X can reduce the level of polyubiquitination of ankyrin-G and stabilize it to maintain dendritic spine development [20]. ACTN4 supports the transition from thin to mushroom spines and is required for metabotropic glutamate receptor-induced dynamic remodeling of dendritic protrusions [16]. We demonstrated for the first time that ACTN4, ELAVL4 and USP9X are direct target genes of miR-142-5p and that the mRNA and protein levels of ACTN4, ELAVL4 and USP9X are decreased in the hippocampus of CCI rats. Additionally, the expression of ACTN4, ELAVL4, and USP9X in the hippocampus was significantly downregulated by hippocampal stereotactic injection of a miR-142-5p agomir and by specific AAV-mediated upregulation of miR-142-5p in Schwann cells; conversely, the expression of these proteins was restored by specific AAV-miR-142-5p sponge-mediated sequestration of miR-142-5p in Schwann cells. In addition, in in vitro experiments, the miR-142-5p antagomir ameliorated the inhibitory effect of plasma EVs from CCI rats on the expression of these proteins in primary hippocampal neurons. Additionally, dual-luciferase reporter gene analysis revealed that miR-142-5p binds the 3′UTRs of ACTN4, ELAVL4 and USP9X. Overexpression of these genes ameliorated the dendritic spine damage and the inhibition of synaptic protein expression induced in primary rat hippocampal neurons by plasma_EV_CCI or miR-142-5p. Taken together, these results support the novel hypothesis that ACTN4, ELAVL4, and USP9X are important downstream molecules through which SC-EVs, acting as major carriers of miR-142-5p, mediate memory impairment and dendritic spine damage in hippocampal neurons in the context of CNP.

Additionally, our study has several implications and limitations. We report a significant increase in both plasma extracellular vesicle concentration and particle size in CCI rats (modeled for 14 days) compared with the sham group, which may suggest an effect of chronic neuropathic pain stress on the release of extracellular vesicles, as well as the potential for the number of plasma extracellular vesicles to be used as a biomarker of CNP and of cognitive impairment associated with CNP. Existing studies have linked chronic inflammatory diseases to elevated EV concentrations and altered EV composition [50]. Some researchers have found that EVs in plasma from a tibia fracture model (which closely mimics complex regional pain syndrome), although comparable in concentration to controls, are significantly larger in particle size than in control mice [51]. Studies have found that circulating EV counts are significantly increased in patients with myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS, a debilitating disease with multiple symptoms, including pain, depression, and neurocognitive deterioration) and that circulating EV counts correlate significantly with serum C-reactive protein levels; these studies have reported that circulating EV counts and EV-specific proteins can be used as novel biomarkers for the diagnosis of ME/CFS [52]. However, other studies report no difference in the number of purified sEVs in the serum of mice four weeks after SNI, although there are differences in size [42]. In our study, the clinical value of extracellular vesicle contents was emphasized, and the phenomenon of changing extracellular vesicle concentrations may have been overlooked. Therefore, in future clinical studies, it is recommended to accurately quantify plasma extracellular vesicle levels and to assess their correlation with clinically relevant features of acute and chronic pain (e.g., inflammation, level of oxidative stress, or degree of pain and memory impairment) at different time points. Besides, whether and how pathological changes in plasma EV content, quantity, and size, which are altered in the context of CNP, contribute to CNP-associated memory impairment has not been addressed in recent studies. The pathological mechanisms underlying memory impairment associated with CNP are complex, and the molecular properties of EVs produced by other cell types in CNP model tissues, influenced by other pathological stresses affecting other cytopathologies in CNP, need to be clarified to fully understand the molecular processes involved in memory impairment caused by chronic peripheral nerve injury. Furthermore, we observed the same significantly increased GFP-CD63 signaling in the hippocampal regions (CA1 to CA3) of both SNI and PSNL rats compared with sham rats, suggesting SC-EVs as a potential target in peripheral nerve injury. Future studies need to further explore the role of SC-EVs in peripheral nerve injury. Lastly, neurological injury may lead to a maladaptive inflammatory response, with SCs and resident immune cells (e.g., mast cells and macrophages) being the first to respond, which ultimately also contributes to the development of persistent pain and the
development of other complications such as memory impairment [53]. Several relevant studies have confirmed the link between peripheral inflammation and memory impairment [3]. In this study, we focused on the role of SC-EVs and their contents in transmission and communication with the dendritic spines of CNS neurons. As EVs can also directly carry various cytokines, future studies need to further investigate the mechanisms linking SC-EVs with neuroinflammation and cognitive impairment.

In summary, our study identified SC-EVs as novel mediators aggravating chronic peripheral nerve injury-associated dendritic spine remodeling and memory impairment in the CNP model, as schematically illustrated in Fig. 7. These findings suggest that inhibiting aberrant SC-EV production, interfering with SC-derived miR-142-5p expression, and modulating hippocampal ACTN4, ELAVL4, and USP9X expression may be practical therapeutic approaches for preventing and treating CNP-associated memory impairment.

Fig. 2 Tracing of specifically labeled Schwann cell-derived EVs in the hippocampus and plasma. (A) AAV design, construction, and flowchart for AAV injection. (B, C) Compared with that in the sham group, GFP-CD63 signal intensity (indicating Schwann cells secreting GFP-CD63-labeled EVs) was significantly clustered in the hippocampal CA1, CA2 and CA3 regions of CNP model rats with memory impairment, including the CCI, PSNL and SNI models. n = 3-4 rats; ordinary one-way ANOVA followed by Dunnett's multiple comparisons test was used. (D) Protocol for microbead-assisted flow cytometry. (E, F) Microbead-assisted flow cytometry revealed that the GFP/CD63-labeled ratio in the plasma EVs of CCI rats was greater than that in sham rats on day 7 after modeling; there was no change in the GFP/CD63 ratio between the two groups on day 14 after modeling. n = 3 rats; unpaired Student's t test was used. (G) Images of primary rat hippocampal neurons co-incubated with DIL-labeled sham or CCI rat plasma EVs revealed increased plasma_EV_CCI within the neurons. n = 3-4 images from three wells of each group; unpaired Student's t test was used. The data are shown as the means ± SDs. ns, no significant difference; *P < 0.05; **P < 0.01

Fig. 3 miR-142-5p was increased in the hippocampus of rats in the CCI-associated memory impairment model. (A) Differentially expressed miRNAs were selected from sequencing data of hippocampal EV miRNAs. (B) Venn analysis of the seven candidate miRNAs satisfying the three conditions. (C) RT-qPCR showed that the relative expression of mature miR-142-5p, miR-25-5p, miR-505-3p, and miR-873-5p was obviously upregulated in the hippocampus of CCI rats compared with that in sham rats. n = 4-6 rats; unpaired Student's t test was used. (D) RT-qPCR results showing that the relative expression of pri-miR-25-5p, pri-miR-505-3p and pri-miR-873-5p was downregulated in the hippocampus of CCI rats, whereas that of pri-miR-142-5p was upregulated compared with that in sham rats. n = 4-5 rats; unpaired Student's t test was used. (E) RT-qPCR results showing that the relative expression of mature miR-142-5p and miR-25-5p was increased in the sciatic nerves of CCI rats, whereas that of mature miR-505-3p was decreased; moreover, there was no difference in the expression of mature miR-873-5p compared with that in sham rats. n = 4-6 rats; unpaired Student's t test was used. The data are shown as the means ± SDs. ns, no significant difference; *P < 0.05; **P < 0.01
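Relative expression values of the kind reported in panels (C-E) are conventionally derived from qPCR Ct values by the 2^(-ddCt) method; a minimal sketch with made-up Ct numbers and an assumed unchanged reference gene (the study's actual normalization scheme is not specified here):

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-ddCt) method.

    ddCt = (Ct_target - Ct_ref)_sample - (Ct_target - Ct_ref)_control
    """
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_sample - d_ct_control)

# Illustrative Ct values only (not data from the study): the CCI sample
# amplifies the miRNA two cycles earlier than sham with an unchanged
# reference gene, i.e. roughly 4-fold upregulation.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0
```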
Curvature of Multiply Warped Products

In this paper, we study Ricci-flat and Einstein Lorentzian multiply warped products. We also consider the case of constant scalar curvature for this class of warped products. Finally, after we introduce a new class of space-times called generalized Kasner space-times, we apply our results to this kind of space-time as well as to other relativistic space-times, i.e., Reissner-Nordström and Kasner space-times, and Bañados-Teitelboim-Zanelli and de Sitter black hole solutions.

We recall the definition of a warped product of two pseudo-Riemannian manifolds (B, g_B) and (F, g_F) with a smooth function b: B → (0, ∞) (see also [13,65]). Suppose that (B, g_B) and (F, g_F) are pseudo-Riemannian manifolds and also suppose that b: B → (0, ∞) is a smooth function. Then the (singly) warped product B ×_b F is the product manifold B × F equipped with the metric tensor g = g_B ⊕ b²g_F defined by

g = π*(g_B) ⊕ (b ∘ π)² σ*(g_F),

where π: B × F → B and σ: B × F → F are the usual projection maps and * denotes the pull-back operator on tensors. Here, (B, g_B) is called the base manifold, (F, g_F) is called the fiber manifold and b is called the warping function. Generalized Robertson-Walker space-time models (see [2,11,36,68,70,71]) and standard static space-time models (see [3,4,5,55,56]), two well-known solutions to Einstein's field equations, can be expressed as Lorentzian warped products. Clearly, the former is a natural generalization of Robertson-Walker space-time and the latter is a generalization of the Einstein static universe. One way to generalize warped products is to consider the case of multiple fibers to obtain more general space-time models (see the examples given in Section 2); in this case the corresponding product is the so-called multiply warped product. In [75], covariant derivative formulas for multiply warped products are given and the geodesic equation for these spaces is also considered.
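As a standard illustration of this definition (not a result specific to this paper), a Robertson-Walker space-time is the singly warped product I ×_f S of the base (I, −dt²) and a Riemannian 3-manifold (S, g_S) of constant curvature:

```latex
g \;=\; \pi^{*}\!\left(-\,\mathrm{d}t^{2}\right)\;\oplus\;(f\circ\pi)^{2}\,\sigma^{*}(g_{S}),
\qquad\text{i.e.}\qquad
\mathrm{d}s^{2} \;=\; -\,\mathrm{d}t^{2}+f(t)^{2}\,g_{S},
```

with warping function f: I → (0, ∞); allowing several fibers instead of one leads to the multiply warped products studied below.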
The causal structure, Cauchy surfaces and global hyperbolicity of multiply warped Lorentzian products are also studied. Moreover, necessary and sufficient conditions are obtained for null, time-like and space-like geodesic completeness of Lorentzian multiply warped products, and also for geodesic completeness of Riemannian multiply warped products. In [22,23], the author studies manifolds with C⁰-metrics and properties of Lorentzian multiply warped products, and then he shows a representation of the interior Schwarzschild space-time as a multiply warped product space-time with certain warping functions. He also gives the Ricci curvature in terms of b_1, b_2 for a multiply warped product of the form M = (0, 2m) ×_{b_1} R¹ ×_{b_2} S². In [45], physical properties of (2+1) charged Bañados-Teitelboim-Zanelli (BTZ) black holes and (2+1) charged de Sitter (dS) black holes are studied by expressing these metrics as multiply warped product space-times; more explicitly, Ricci and Einstein tensors are obtained inside the event horizons (see also [9]). In [69], the existence, multiplicity and causal character of geodesics joining two points of a wide class of non-static Lorentz manifolds, such as intermediate Reissner-Nordström or inner Schwarzschild and generalized Robertson-Walker space-times, are studied. In [37], geodesic connectedness and also causal geodesic connectedness of multi-warped space-times are studied by using the method of Brouwer's topological degree for the solution of functional equations. There are also different types of warped products, such as a kind of warped product with two warping functions acting symmetrically on the fiber and base manifolds, called a doubly warped product (see [74]), or another kind of warped product, called a twisted product, in which the warping function is defined on the product of the base and fiber manifolds (see [35]). Moreover, Easley studied local existence of warped product structures and also defined and considered another form of a warped product in his thesis (see [31]).

In this paper, we answer some questions about the existence of nontrivial warping functions for which the multiply warped product is Einstein or has constant scalar curvature. This problem was considered especially for Einstein Riemannian warped products with compact base, and some partial answers were provided (see [41,52,53,54]). In [53], it is proved that an Einstein Riemannian warped product with a non-positive scalar curvature and compact base is just a trivial Riemannian product. Constant scalar curvature of warped products was studied in [25,27,32,33] when the base is compact, and of generalized Robertson-Walker space-times in [32]. Furthermore, partial results for warped products with non-compact base were obtained in [7] and [21]. The physical motivation for the existence of a positive scalar curvature comes from the positive mass problem. More explicitly, in general relativity the positive mass problem is closely related to the existence of a positive scalar curvature (see [78]). As a more general related reference, one can consult [51] for a survey on the scalar curvature of Riemannian manifolds. The problem of existence of a warping function which makes the warped product Einstein was already studied for special cases such as generalized Robertson-Walker space-times, and a table giving the different cases of Einstein generalized Robertson-Walker space-times when the Ricci tensor of the fiber is Einstein appears in [2] (see also references therein). The Einstein Ricci tensor and constant scalar curvature of standard static space-times with perfect fluid were already considered in [55,63]. Moreover, in [56], the conformal tensor on standard static space-times with perfect fluid is studied, and it is shown that a standard static space-time with perfect fluid is conformally flat if and only if its fiber is Einstein and hence of constant curvature. In [28], this problem is considered for arbitrary standard static space-times; more explicitly, an essential investigation of conditions on the fiber and warping function of a standard static space-time (not necessarily with perfect fluid) is carried out so that there exists no nontrivial function on the fiber guaranteeing that the standard static space-time is Einstein. Duggal studied the scalar curvature of 4-dimensional triple Lorentzian products of the form L × B ×_f F and obtained explicit solutions for the warping function f to have a constant scalar curvature for this class of products (see [30]). Moreover, in the present paper, we introduce an original form to generalize Kasner space-times and then we obtain necessary and sufficient conditions, as well as explicit solutions for some special cases, for a generalized Kasner space-time to be Einstein or to have constant scalar curvature. Besides the form mentioned here, there are also other generalizations in the literature (see [46,58]). In [46], an extension of Kasner space-times is introduced with a view to generalizing the 5-dimensional Randall-Sundrum model to higher dimensions, and in [58], another multi-dimensional generalization of the Kasner metric is described and essential solutions are also obtained for this class of extension. One can also consider [26,39,47,48,61,66,76] for recent applications of Kasner metrics and their generalizations.

We organize the paper as follows. In Section 2, we give several basic geometric facts related to the concept of curvature (see [73,75]). Moreover, we recall two well-known examples of relativistic space-times which can be considered as multiply generalized Robertson-Walker space-times.
In Section 3, we obtain two results in which, under several assumptions on the fibers and warping functions, multiply generalized Robertson-Walker space-times are Einstein or have constant scalar curvature. In Section 4, after we introduce generalized Kasner space-times, we state conditions for this class of space-times to be Einstein or to have constant scalar curvature. In Section 5, we give an explicit classification of 4-dimensional multiply generalized Robertson-Walker space-times and 4-dimensional generalized Kasner space-times which are Einstein. In the last section, we focus on BTZ (2+1)-Black Hole solutions and classify (BTZ) black hole solutions given in Section 2 by using a more formal approach (see [8,9,45,62]) and then we also prove necessary and sufficient conditions for the lapse function of a BTZ (2+1) Black Hole solution to have a constant scalar curvature or to be Einstein. Our main results are obtained in Sections 3,4 and 5, especially see Theorem 3.3, Propositions 4.3 and 4.11 as well as Tables 1,2 and 3. Preliminaries Throughout this work any manifold M is assumed to be connected, Hausdorff, paracompact and smooth. Moreover, I denotes for an open interval in R of the form I = (t 1 , t 2 ) where −∞ ≤ t 1 < t 2 ≤ ∞ and we will furnish I with a negative metric −dt 2 . A pseudo-Riemannian manifold (M, g) is a smooth manifold with a metric tensor g and a Lorentzian manifold (M, g) is a pseudo-Riemannian manifold with signature (−, +, +, · · · , +). Moreover, we use the definition and the sign convention for the curvature as in [13]. For an arbitrary n-dimensional pseudo-Riemannian manifold (M, g) and a smooth function f : M → R, we have that H f and ∆(f ) denote the Hessian (0,2) tensor and the Laplace-Beltrami operator of f, respectively ( [65]). 
Here, we use the sign convention for the Laplacian in [65], i.e., ∆ = tr g (H) (see page 86 of [65]), where H is the Hessian form (see page 86 of [65]) and tr g denotes the trace, or equivalently, ∆ = div(grad), where div is the divergence and grad is the gradient (see page 85 of [65]). Furthermore, we will frequently use the notation ‖grad f ‖ 2 = g(grad f, grad f ). When there is any possibility of misunderstanding, we will explicitly state the manifold or the metric for which the operator is considered. We begin our discussion by giving the formal definition of a multiply warped product (see [75]).

Definition 2.1. Let (B, g B ) and (F i , g F i ) be pseudo-Riemannian manifolds and also let b i : B → (0, ∞) be smooth functions for any i ∈ {1, 2, · · · , m}. The multiply warped product is the product manifold M = B × F 1 × F 2 × · · · × F m furnished with the metric g = g B ⊕ b 2 1 g F 1 ⊕ b 2 2 g F 2 ⊕ · · · ⊕ b 2 m g Fm . Each function b i is called a warping function and also each manifold (F i , g F i ) is called a fiber manifold for any i ∈ {1, 2, · · · , m}. The manifold (B, g B ) is the base manifold of the multiply warped product.
• If m = 1, then we obtain a singly warped product.
• If all b i ≡ 1, then we have a (trivial) product manifold.
• If (B, g B ) and (F i , g F i ) are all Riemannian manifolds for any i ∈ {1, 2, · · · , m}, then (M, g) is also a Riemannian manifold.
• The multiply warped product (M, g) is a Lorentzian multiply warped product if (F i , g F i ) are all Riemannian for any i ∈ {1, 2, · · · , m} and either (B, g B ) is Lorentzian or else (B, g B ) is a one-dimensional manifold with a negative definite metric −dt 2 .
• If B is an open connected interval I of the form I = (t 1 , t 2 ) equipped with the negative definite metric −dt 2 and (F i , g F i ) is Riemannian for any i ∈ {1, 2, · · · , m}, then the Lorentzian multiply warped product (M, g) is called a multiply generalized Robertson-Walker space-time or a multi-warped space-time.
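In display form, the metric of Definition 2.1 and its multiply generalized Robertson-Walker specialization read as follows (a restatement of the definitions above, with no new assumptions):

```latex
% Multiply warped product metric on M = B \times F_1 \times \cdots \times F_m
g \;=\; g_B \,\oplus\, b_1^{2}\, g_{F_1} \,\oplus\, \cdots \,\oplus\, b_m^{2}\, g_{F_m},
\qquad b_i \colon B \to (0,\infty).

% Multiply generalized Robertson-Walker space-time: B = I = (t_1,t_2) with -dt^2
g \;=\; -\,dt^{2} \,\oplus\, b_1^{2}(t)\, g_{F_1} \,\oplus\, \cdots \,\oplus\, b_m^{2}(t)\, g_{F_m}.
```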
In particular, a multiply generalized Robertson-Walker space-time is called a generalized Reissner-Nordström space-time when m = 2. We will state the covariant derivative formulas for multiply warped products (see [22,73,75]).

Proposition 2.2. Let M = B × b 1 F 1 × · · · × bm F m be a pseudo-Riemannian multiply warped product with metric g = g B ⊕ b 2 1 g F 1 ⊕ · · · ⊕ b 2 m g Fm , and also let X, Y ∈ L(B) and V ∈ L(F i ), W ∈ L(F j ). Then the covariant derivatives decompose accordingly.

One can compute the gradient and the Laplace-Beltrami operator on M in terms of the gradient and the Laplace-Beltrami operator on B and F i , respectively. From now on, we assume that ∆ = ∆ M and grad = grad M to simplify the notation.

Proposition 2.3. Let M = B × b 1 F 1 × · · · × bm F m be a pseudo-Riemannian multiply warped product with metric g = g B ⊕ b 2 1 g F 1 ⊕ · · · ⊕ b 2 m g Fm and let φ : B → R and ψ i : F i → R be smooth functions for any i ∈ {1, · · · , m}. Then the analogous decompositions hold.

Now, we will state the Riemannian curvature and Ricci curvature formulas from [73].

Proposition 2.4. Let M = B × b 1 F 1 × · · · × bm F m be a pseudo-Riemannian multiply warped product with metric g = g B ⊕ b 2 1 g F 1 ⊕ · · · ⊕ b 2 m g Fm , and also let X, Y, Z ∈ L(B) and V ∈ L(F i ), W ∈ L(F j ), U ∈ L(F k ).

Proposition 2.5. Let M = B × b 1 F 1 × · · · × bm F m be a pseudo-Riemannian multiply warped product with metric g = g B ⊕ b 2 1 g F 1 ⊕ · · · ⊕ b 2 m g Fm , also let X, Y, Z ∈ L(B) and V ∈ L(F i ) and W ∈ L(F j ).

Now, we will compute the scalar curvature of a multiply warped product. In order to do that, one can use an orthonormal frame on M constructed as follows: let {∂/∂x 1 , · · · , ∂/∂x r } and {∂/∂y 1 i , · · · , ∂/∂y s i i } be orthonormal frames on open sets U ⊆ B and V i ⊆ F i , respectively, for any i ∈ {1, · · · , m}. Then τ admits the following expressions. The next formula can be directly obtained from the previous result by noting that, on a multiply generalized Robertson-Walker space-time, the gradient of each warping function b i is proportional to ∂/∂t; we denote the usual derivative on the real interval I by the prime notation (i.e., ′) from now on.
We now give some physical examples of relativistic space-times and state some of their geometric properties to stress the physical motivation and importance of Lorentzian multiply warped products. The first example is the Schwarzschild black hole solution (its interior region) and the second one is the Kasner space-time. Our last two examples are closely related to each other; more explicitly, the third example is the Bañados-Teitelboim-Zanelli (BTZ) black hole solution and the final example is the de Sitter (dS) black hole solution.

• Schwarzschild Space-time. We will briefly discuss the interior Schwarzschild solution and show how the interior solution can be written as a multiply warped product. The line element of the Schwarzschild black hole space-time model for the region r < 2m is given (see [44]) by ds 2 = −(2m/r − 1) −1 dr 2 + (2m/r − 1) dt 2 + r 2 dΩ 2 , where dΩ 2 = dθ 2 + sin 2 θdϕ 2 on S 2 . In [22], it is shown that this space-time model can be expressed as a multiply generalized Robertson-Walker space-time. Moreover, we also need to impose that the above multiply generalized Robertson-Walker space-time model for the Schwarzschild black hole is Ricci-flat, due to the fact that the Schwarzschild black hole is Ricci-flat (see also the review of Miguel Sánchez in AMS for [22]).

• Kasner Space-time. We consider the Kasner space-time as a Lorentzian multiply warped product (see [64]). A Lorentzian multiply warped product (M, g) of the form M = (0, ∞) × t p 1 R × t p 2 R × t p 3 R, with metric g = −dt 2 ⊕ t 2p 1 dx 2 ⊕ t 2p 2 dy 2 ⊕ t 2p 3 dz 2 , is called a Kasner space-time (see [49]). It is known by [43] that −1/3 ≤ p 1 , p 2 , p 3 < 1. It is also known that, excluding the case of two p i 's zero, one p i is negative and the other two are positive. Thus we may assume that −1/3 ≤ p 1 < 0 < p 2 ≤ p 3 < 1 by excluding the case of two p i 's zero and one p i equal to 1. Furthermore, the only solution in which p 2 = p 3 is given by p 1 = −1/3 and p 2 = p 3 = 2/3. Note also that since −1/3 ≤ p 1 , p 2 , p 3 < 1, we have to assume B to be (0, ∞).
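The constraints on the p i quoted above descend from the two classical Kasner conditions p 1 + p 2 + p 3 = 1 and p 1 2 + p 2 2 + p 3 2 = 1. As a quick sanity check, the snippet below verifies both conditions exactly (in rational arithmetic) for a one-parameter family p i (u); this parametrization is a textbook one and is our assumption for illustration, not taken from the paper.

```python
from fractions import Fraction

def kasner_exponents(u):
    """Standard one-parameter family of Kasner exponents (assumed
    parametrization for illustration; not from the paper itself)."""
    d = 1 + u + u * u
    return (-u / d, (1 + u) / d, u * (1 + u) / d)

# Exact rational checks of the two Kasner conditions for several u values.
for u in (Fraction(1, 3), Fraction(1), Fraction(7, 2)):
    p1, p2, p3 = kasner_exponents(u)
    assert p1 + p2 + p3 == 1            # first Kasner condition
    assert p1**2 + p2**2 + p3**2 == 1   # second Kasner condition

# u = 1 recovers the unique solution with p2 = p3 mentioned in the text.
assert kasner_exponents(Fraction(1)) == (Fraction(-1, 3), Fraction(2, 3), Fraction(2, 3))
```

In particular, u = 1 yields p = (−1/3, 2/3, 2/3), the only Kasner solution with p 2 = p 3 mentioned above.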
Clearly, the Kasner space-time is globally hyperbolic (see [75]). By making use of the results in [75], it can be easily seen that the Kasner space-time is future-directed time-like and future-directed null geodesically complete, but it is past-directed time-like and past-directed null geodesically incomplete. Moreover, it is also space-like geodesically incomplete. Notice that the Kasner space-time is Einstein with λ = 0 (i.e., Ricci-flat) (see [49] and page 135 of [59]) and hence has constant scalar curvature equal to zero. This fact can be proved as a particular consequence of our results in the next section, namely by using Theorem 3.3.

• Static Bañados-Teitelboim-Zanelli (BTZ) Space-time. In [45], the authors classify (BTZ) black hole solutions in three different classes: static, rotating and charged. Here, we will only give a brief description of a static BTZ space-time in terms of Lorentzian multiply warped products, i.e., multiply generalized Robertson-Walker space-times (see also [8,9,62]). The line element of a static BTZ black hole solution can be expressed as ds 2 = −N 2 dt 2 + N −2 dr 2 + r 2 dθ 2 , where N 2 is the square lapse function. The line element of the static BTZ black hole space-time model for the region r < r H can be obtained by an appropriate change of coordinates. In this case, the space-time model can be expressed as a multiply generalized Robertson-Walker space-time. Here, note that the constant scalar curvature τ of the multiply generalized Robertson-Walker space-time introduced above is τ = −6/l 2 (see [45], or apply Corollary 2.7).

We now state a couple of results which will frequently be applied along this article. The first one is an easy computation which we will show explicitly below. Let (M, g) be an n-dimensional pseudo-Riemannian manifold. For any t ∈ R and v ∈ C ∞ >0 (M ), ∆(v t ) = t v t−1 ∆(v) + t(t − 1) v t−2 ‖grad v‖ 2 . (2.2)

The second one is a lemma (for a proof and some extensions as well as other useful applications, see Section 2 of [29]).

Lemma 2.8. Let (M, g) be an n-dimensional pseudo-Riemannian manifold.
Let L g be a differential operator on C ∞ >0 (M ) defined in terms of parameters r i , a i ∈ R and the associated quantities ζ and η. (ii) If ζ ≠ 0 and η ≠ 0, then for α = ζ/η and β = ζ 2 /η we have the corresponding normal form.

3. Special Multiply Warped Products

3.1. Einstein Ricci Tensor. In this section, we state some conditions guaranteeing that a multiply generalized Robertson-Walker space-time is Ricci-flat or Einstein. Now, we recall some elementary facts about Einstein manifolds, starting from the definition. Recall that an n-dimensional pseudo-Riemannian manifold (M, g) is said to be Einstein if there exists a smooth real-valued function λ on M such that Ric = λg, and λ is called the Ricci curvature of (M, g) (see also page 7 of [6]).

Remark 3.1. Concerning this notion, it should be pointed out: (1) If (M, g) is Einstein and n ≥ 3, then λ is constant and λ = τ /n, where τ is the constant scalar curvature of (M, g).

By using Proposition 2.5, we easily obtain the Ricci curvature of Lorentzian multiply warped products (M, g) of the above form: writing an arbitrary vector field in terms of ∂/∂t and the v i , noting that g(∂/∂t, v i ) = 0 and using Proposition 2.5, we obtain the result. The following result can be easily proved by substituting v j = 0 for any j ∈ {1, · · · , m} − {i} and v i ≠ 0 in Proposition 3.2, along with the method of separation of variables: (M, g) is Einstein with Ricci curvature λ if and only if the corresponding conditions are satisfied for any i ∈ {1, · · · , m}. Condition (3) can be expressed in different forms and here we want to present some of them, by applying Equation 2.2.

3.2. Constant Scalar Curvature. It is possible to obtain equivalent expressions for the scalar curvature in Corollary 2.7; the following just follows from Equation 2.2. Since 2s i ≠ 0 and s i + s 2 i = s i (s i + 1) ≠ 0, by Lemma 2.8 there results the corresponding reduction. Note that when m = 1 this relation is exactly that obtained in [27] and [29] when the base has dimension 1.
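The identity λ = τ /n in Remark 3.1 above is obtained by a one-line trace computation; we record it here for completeness (for n ≥ 3 the constancy of λ then follows from the contracted second Bianchi identity):

```latex
% Tracing the Einstein condition with respect to g on an n-manifold:
\operatorname{Ric} = \lambda\, g
\;\Longrightarrow\;
\tau \;=\; \operatorname{tr}_g(\operatorname{Ric})
      \;=\; \lambda\, \operatorname{tr}_g(g)
      \;=\; n\,\lambda
\;\Longrightarrow\;
\lambda = \frac{\tau}{n}.
```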
The following result just follows from the method of separation of variables and the fact that each τ F i : F i → R is a function defined on F i , for any i ∈ {1, · · · , m}.

Proposition 3.5. Let M = I × b 1 F 1 × · · · × bm F m be a multiply generalized Robertson-Walker space-time with the metric g = −dt 2 ⊕ b 2 1 g F 1 ⊕ · · · ⊕ b 2 m g Fm . If the space-time (M, g) has constant scalar curvature τ, then each fiber (F i , g F i ) has constant scalar curvature τ F i , for any i ∈ {1, · · · , m}.

As one can notice from the above formula, it is extremely hard to determine general solutions for warping functions which produce an Einstein, or constant scalar curvature, multiply generalized Robertson-Walker space-time. Note that nonlinear second order differential equations need to be solved according to Theorem 3.3. Further note that there is only one differential equation and m different warping functions in Corollary 2.7. Therefore, instead of giving a general answer to the existence of warping functions yielding an Einstein, or constant scalar curvature, space-time, we simplify this problem and consider some specific cases in Sections 4 and 5.

4. Generalized Kasner Space-times

From now on, for an arbitrary generalized Kasner space-time of the form in Definition 4.1, we introduce the parameters ζ = s 1 p 1 + · · · + s m p m and η = s 1 p 2 1 + · · · + s m p 2 m , where s i = dim F i . By applying Theorem 3.3, we can easily state the following result; later we will examine the solvability of the differential equations therein.

Proposition 4.3. Let M = I × ϕ p 1 F 1 × · · · × ϕ pm F m be a generalized Kasner space-time with the metric g = −dt 2 ⊕ ϕ 2p 1 g F 1 ⊕ · · · ⊕ ϕ 2pm g Fm . Then the space-time (M, g) is Einstein with Ricci curvature λ if and only if (1) each fiber (F i , g F i ) is Einstein with Ricci curvature λ F i for any i ∈ {1, · · · , m}, and the corresponding conditions on ϕ hold.

Proof.
(of Proposition 4.3 and Remark 4.4.) In order to prove (3), note the relevant equation; hence, by Equation (2.2), and from here and by the definition of ζ, the claim follows. If furthermore ζ ≠ 0, applying again Equation 2.2 gives (E gK ). Hence, if ζ ≠ 0 and as a consequence η ≠ 0, applying Lemma 2.8 (b) gives (E gK − i). Note that, from now on and also including the previous result, when we apply Lemma 2.8, we denote the usual derivative in equations by means of the prime notation.

Remark 4.5. Note that the conditions ζ ≠ 0 and η ≠ 0 agree with the conditions usually imposed in the classical Kasner space-times, namely p 1 + p 2 + p 3 = 1 and p 2 1 + p 2 2 + p 2 3 = 1 (see [49]). It is easy to show that the unique possibility to construct an Einstein classical Kasner manifold or a constant scalar curvature classical Kasner manifold with p 1 + p 2 + p 3 = 0 is p 1 = p 2 = p 3 = 0, so that we have just a usual product. Indeed, considering ϕ(t) = t, it is possible to apply Proposition 4.3 and later Proposition 4.11, respectively. Thus, since for all i, ζ − p i = 0 and η − p i ζ = 0, applying Lemma 2.8 the result just follows.

Remark 4.10. Note that Corollary 4.9 contains the classical Kasner metrics except the case in which at least one p i = 1 (really at most one could be 1 because η = p 2 1 + p 2 2 + p 2 3 = 1).

The following just follows from Corollary 2.7 and again we discuss the existence of a solution for the differential equation below.

Proposition 4.11. Let M = I × ϕ p 1 F 1 × · · · × ϕ pm F m be a generalized Kasner space-time with the metric g = −dt 2 ⊕ ϕ 2p 1 g F 1 ⊕ · · · ⊕ ϕ 2pm g Fm . Then the space-time (M, g) has constant scalar curvature τ if and only if (1) each fiber (F i , g F i ) has constant scalar curvature τ F i for any i ∈ {1, · · · , m}, and a corresponding condition on ϕ holds.

Proof. (of Proposition 4.11 and Remark 4.12.) For each i ∈ {1, · · · , m}, let γ i = p i s i + 1/2 and ψ i = ϕ γ i ; then by (sc gRW − iii) and Equation 2.2 there results the reduced equation. Hence, if ζ ≠ 0, applying Lemma 2.8 the conclusion follows.

Remark 4.14.
If ζ = 0 and there is only one fiber, i.e., in a standard warped product, the equation in the previous corollary corresponds to those obtained in [27,29].

Example 4.15. Let us assume that ζ = 0 and each F i is scalar flat, namely τ F i = 0. Hence, the equation in the previous corollary reduces accordingly, and thus all the solutions have the form determined by constants A and B such that u > 0. If ζ ≠ 0, by Proposition 4.11, we look for positive solutions of the corresponding equation. Since η > 0, the latter is equivalent to a solvable equation whose solutions are given in terms of a positive constant C. Note that this example includes the situation of the classical Kasner space-times in the framework of scalar curvature. Compare with the results about Einstein classical Kasner metrics in Remark 4.5 and Example 4.7.

5. 4-Dimensional Space-time Models

We first give a classification of 4-dimensional warped product space-time models and then consider their Ricci tensors and scalar curvatures. Note that Type (I) contains the Robertson-Walker space-time. The Schwarzschild black hole solution can be considered as an example of Type (II). Type (III) includes the Kasner space-time.

Type (I). Let M = I × b F be a Type (I) warped product space-time with metric g = −dt 2 ⊕ b 2 g F . Then the scalar curvature τ of (M, g) is given accordingly. The problem of constant scalar curvature for this type of warped products, known as generalized Robertson-Walker space-times, is studied in [32]; indeed, explicit solutions for the warping function are obtained so that the space-time has constant scalar curvature. If v is a vector field on F and x = ∂/∂t + v, then the Ricci tensor can be written out explicitly. In [2], explicit solutions are also obtained for the warping function making the space-time Einstein when the fiber is also Einstein. If v i is a vector field on F i , for any i ∈ {1, 2}, and x = ∂/∂t + v 1 + v 2 , the analogous expressions hold. Note that Ric F 1 ≡ 0, since dim(F 1 ) = 1.
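For Type (I), the scalar curvature referred to above has a well known closed form; the expression below is the standard one for a generalized Robertson-Walker space-time I × b F with s = dim F, written in the usual sign convention (it is our reconstruction, since the paper's displayed formula is not reproduced in this extraction, and signs should be checked against the convention of [13]):

```latex
% Scalar curvature of M = I \times_b F,\quad g = -dt^2 \oplus b^2 g_F,\quad s = \dim F
\tau \;=\; \frac{\tau_F}{b^{2}} \;+\; 2s\,\frac{b''}{b} \;+\; s(s-1)\,\frac{(b')^{2}}{b^{2}}.
```

As a consistency check, for s = 3 and a scalar flat fiber (τ F = 0) this reduces to τ = 6( b''/b + (b'/b) 2 ), the familiar scalar curvature of a spatially flat Robertson-Walker space-time.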
• Classification of Einstein Type (II) generalized Kasner space-times: Let M = I × ϕ p 1 F 1 × ϕ p 2 F 2 be an Einstein Type (II) generalized Kasner space-time. Then the parameters introduced before Proposition 4.3 are given by ζ = p 1 + 2p 2 and η = p 2 1 + 2p 2 2 . The resulting equations imply in particular that λ F 2 is constant. Consider the system (ϕ σ ; ν), where ν and σ are real parameters. All its solutions ϕ σ have the form determined by constants A and B such that ϕ > 0. Furthermore, consider the (ϕ σ ; ν) modified system; note that ν must be > 0. It is easy to verify that all its solutions are given in terms of a positive constant A. Consider now two cases, namely:
ζ = 0: First of all, note that p 2 = − 1 2 p 1 and η = 3 2 p 2 1 .
η = 0: Thus, p i = 0 for all i and 0 = λ = λ F 2 . Thus the corresponding metric is −dt 2 + g F 1 + g F 2 .
λ F 2 ≠ 0: then p 1 = p 2 and the system can be reduced to one which is equivalent to the solvable system (ϕ ζ ; 3λ; ∗). Note that λ must be > 0.
λ F 2 = 0: then ϕ is constant and this gives a contradiction.
The table that follows (Table 1) specifies the only possible Einstein generalized Kasner space-times of Type (II) with the corresponding parameters. The last column indicates the function ϕ or the system which it satisfies.
τ F 2 = 0: Note that a particular subcase is η/ζ 2 = 1/3. In fact, in this case, p 1 = p 2 = ζ/3 (see Remark 4.2) and the latter equation reduces to a non-homogeneous linear ordinary differential equation. Synthetically, remembering that in each case the corresponding metric may be written as −dt 2 + ϕ 2p 1 g F 1 + ϕ 2p 2 g F 2 , we find that the only possibilities to have constant scalar curvature in a generalized Kasner space-time of Type (II) are generated by the values of ζ and η in Table 2, where the conditions on τ must be imposed by the existence of positive solutions of the ordinary differential equations of the last column, on the corresponding interval I.
• Classification of Einstein Type (III) generalized Kasner space-times: In this case one is led to a system which is equivalent to the solvable system (ϕ ζ ; 3λ; ∗).
Note that λ must be > 0. The table that follows (Table 3) specifies the only possible Einstein generalized Kasner space-times of Type (III) with the corresponding parameters. As for the table of Type (II), the last column indicates the function ϕ or the system which it satisfies. This example may be easily generalized to the situation in which all the F i 's are Ricci flat.

• Classification of Type (III) generalized Kasner space-times with constant scalar curvature: Let M = I × ϕ p 1 F 1 × ϕ p 2 F 2 × ϕ p 3 F 3 be a Type (III) generalized Kasner manifold with constant scalar curvature. Then the parameters introduced before Proposition 4.11 satisfy ζ = p 1 + p 2 + p 3 and η = p 2 1 + p 2 2 + p 2 3 . Thus, this case is already included in the analysis of Example 4.15.

We will close this section with an example and the following comment, which gives some preliminary ideas about our future plans on this topic (see also the last section for details).

Example 5.2. Let M = I × ϕ p 1 S 3 × ϕ p 2 S 2 be a generalized Kasner manifold with constant scalar curvature. Then the parameters introduced before Proposition 4.11 are given by ζ = 3p 1 + 2p 2 and η = 3p 2 1 + 2p 2 2 . Consider now p 1 = 1 and p 2 = −1; then ζ = 1 and η = 5. Hence, applying Corollary 4.13, the latter conditions give rise, for u = ϕ 3 , to the problem (5.4), where τ S 3 , τ S 2 > 0 are the constant scalar curvatures of the corresponding spheres. Note that the equation in (5.4) always has the constant solution zero and there exists τ 1 > 0 such that for τ = τ 1 there is only one constant solution of (5.4), and for any τ > τ 1 there are two constant solutions of (5.4), so that there exists a range of τ 's, (τ 1 , +∞), where the problem (5.4) has multiplicity of solutions, while there are no constant solutions when τ < τ 1 . On the other hand, as in Example 5.2, considering S 3 instead of S 2 with the same values of p 1 and p 2 , i.e., M = I × ϕ p 1 S 3 × ϕ p 2 S 3 , there results ζ = 3p 1 + 3p 2 = 0 and η = 3p 2 1 + 3p 2 2 = 6.
Hence, applying Proposition 4.11, the latter conditions give rise to the problem (5.5). The equation in (5.5) does not have the constant solution zero. Furthermore, there is no constant solution of (5.5) if τ < 2τ S 3 , there is only one constant solution of (5.5) if τ = 2τ S 3 and two constant solutions of (5.5) if τ > 2τ S 3 . The cases considered above are just some examples of the different types of differential equations involved in the problem of constant scalar curvature when the dimensions, curvatures and parameters take different values. In a future article, we will deal with the problem of constant scalar curvature of a pseudo-Riemannian generalized Kasner manifold with a base of dimension greater than or equal to 1. This problem leads to nonlinear partial differential equations with concave-convex nonlinearities like in (5.4), among others. Nonlinear elliptic problems with such nonlinearities have been extensively studied in bounded domains of R n after the central article of Ambrosetti, Brezis and Cerami [1], in which the authors studied the problem of multiplicity of solutions under Dirichlet conditions. The problem of constant scalar curvature in generalized Kasner manifolds with base of dimension greater than or equal to 1 is one of the first examples where those nonlinearities appear naturally. Another related case is that of base conformal warped products, studied in [29].

6. BTZ (2+1) Black Hole Solutions

Now we consider BTZ (2+1) black hole solutions and give another characterization of the (BTZ) black hole solutions mentioned in Section 2 (for further details see [8,9,45,62]) in order to apply the results obtained in this paper. All the cases considered in [45] can be obtained by applying the formal approach that follows.
By considering the corresponding square lapse function N 2 , the related 3-dimensional (2+1) space-time model can be expressed as a (2+1) multiply generalized Robertson-Walker space-time with metric (6.1), where F −1 denotes the inverse function of F (assuming that it exists) and a is an appropriate constant that is most of the time related to the event horizon. By recalling the defining relations, we obtain the following properties by applying the chain rule; here, note that all the functions depend on the variable t and the derivatives are taken with respect to the corresponding arguments. On the other hand, by Corollary 2.7 applied to the metric (6.1) with s 1 = s 2 = 1, the scalar curvature of the corresponding space-time is given by (6.6). Note that the latter is an expression of the scalar curvature as an operator acting on the square lapse function. Remember that b 2 = F −1 . Concerning the Ricci tensor, applying Proposition 3.2 and Theorem 3.3 and considering again s 1 = s 2 = 1, Theorem 3.3 says that the metric (6.1) is Einstein with λ if and only if the system (6.7) holds. On the other hand, by making use of (6.5), the system (6.7) is equivalent to (6.8) (all the functions are evaluated at r = b 2 ), or moreover to (6.9). Thus, we have (6.10): (N 2 ) ′′ = λ.

Proposition 6.1. Suppose that we have a (2+1)-Lorentzian multiply warped product with the metric given by (6.1), where b 1 and b 2 satisfy both (6.2) and (6.3). The space-time is Einstein with Ricci curvature λ if and only if the square lapse function N 2 satisfies (6.11), with c 1 = 0 and a suitable constant c 2 .

Notice that the static (BTZ) and the static (dS) black hole solutions considered in [45] satisfy Proposition 6.1; thus they are Einstein multiply warped product space-times.

Remark 6.2. Note that if N 2 satisfies (6.11) with c 1 = 0, then an application of (6.6) gives the constancy of the scalar curvature, τ = 3λ, as desired. Note that this result agrees with the ones obtained in [45].
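The value τ = −6/l 2 quoted in Section 2 for the static BTZ solution can be verified symbolically from first principles. The sketch below assumes the standard static BTZ line element ds 2 = −N 2 dt 2 + N −2 dr 2 + r 2 dθ 2 with N 2 = −M + r 2 /l 2 (this explicit form and the sign conventions — mostly-plus signature and the curvature convention giving positive scalar curvature on spheres — are our assumptions, not reproduced from the paper's elided equations) and computes Christoffel symbols, the Ricci tensor and its trace with sympy.

```python
import sympy as sp

t, r, th = sp.symbols('t r theta')
M, l = sp.symbols('M l', positive=True)
x = [t, r, th]
N2 = -M + r**2 / l**2                  # static BTZ squared lapse (assumed form)
g = sp.diag(-N2, 1 / N2, r**2)         # ds^2 = -N^2 dt^2 + N^-2 dr^2 + r^2 dtheta^2
ginv = g.inv()
n = 3

# Christoffel symbols Gamma^a_{bc} of the Levi-Civita connection of g
Gam = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c])
                                       + sp.diff(g[d, c], x[b])
                                       - sp.diff(g[b, c], x[d]))
                         for d in range(n)) / 2)
         for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor: R_{bd} = d_a Gam^a_{db} - d_d Gam^a_{ab}
#                        + Gam^a_{ae} Gam^e_{db} - Gam^a_{de} Gam^e_{ab}
def ricci(b, d):
    expr = sum(sp.diff(Gam[a][d][b], x[a]) - sp.diff(Gam[a][a][b], x[d])
               for a in range(n))
    expr += sum(Gam[a][a][e] * Gam[e][d][b] - Gam[a][d][e] * Gam[e][a][b]
                for a in range(n) for e in range(n))
    return sp.simplify(expr)

# Scalar curvature tau = g^{bd} R_{bd}; expected to be the constant -6/l^2
tau = sp.simplify(sum(ginv[b, d] * ricci(b, d)
                      for b in range(n) for d in range(n)))
print(tau)
```

In this convention the computation confirms that the static BTZ space-time has constant scalar curvature −6/l 2 , consistent with the Einstein condition Ric = λg with τ = 3λ.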
Furthermore, the following just follows from the solution of the second order linear ordinary differential equation arising from the expression (6.6). Note that Proposition 6.3 agrees with Remark 6.2.

7. Conclusions

Now, we would like to summarize the content of the paper and to make some concluding remarks. In brief, we studied expressions that relate the Ricci (respectively scalar) curvature of a multiply warped product to the Ricci (respectively scalar) curvatures of its base and fibers as well as the warping functions. By using the expressions obtained in the paper, we proved necessary and sufficient conditions for a multiply generalized Robertson-Walker space-time to be Einstein or to have constant scalar curvature. Furthermore, we introduced and considered a kind of generalization of Kasner space-times, which is closely related to recent applications in cosmology where metrics of the form (7.1) are frequently considered (see [42,72]; for other recent topics concerning Kasner type metrics see for instance [26,39,47,48,61,66,76,77]). If each warping function e 2α i is expressed as a power ϕ 2p i i for suitable p i 's, then (7.1) takes the form (7.2). Our generalization of Kasner space-times corresponds exactly to the case in which the ϕ i 's are independent of i. More explicitly, α i = p i α in Equation (7.2), with α = α(t) a sufficiently regular fixed function. Note that a classical Kasner space-time corresponds to the case of α ≡ 1 (see [58] also). By applying Lemma 2.8, we obtained useful expressions for the Ricci tensor and the scalar curvature of generalized Robertson-Walker and generalized Kasner space-times. These expressions allowed us to classify the possible Einstein (respectively, constant scalar curvature) generalized Kasner space-times of dimension 4. We also obtained some partial results for greater dimensions.
Finally, in order to study curvature properties of the multiply warped product space-times associated to the BTZ (2+1)-dimensional black hole solutions, we applied the previously obtained curvature formulas. As a consequence, we characterized the Einstein (respectively, constant scalar curvature) BTZ space-times in terms of the square lapse function. In forthcoming papers we plan to focus on a specific generalization of the structures studied here, which is particularly useful in different fields such as relativity, extra-dimension theories (Kaluza-Klein, Randall-Sundrum), string and supergravity theories, and the spectrum of Laplace-Beltrami operators on p-forms, among others. Roughly speaking, we will consider a mixed structure between a multiply warped product and a conformal change in the base. Naturally, our main interest is the study of curvature properties. As we made progress on this subject, we realized that these curvature related properties are interesting and worth studying not only from the physical point of view (see, for instance, the several recent works of Gauntlett, Maldacena, Argurio, Schmidt, among many others), but also for the nonlinear partial differential equations involved. Indeed, the curvature related questions give rise to problems of existence, uniqueness, bifurcation, the study of critical points, etc. (see Example 5.2 above and the different works of Aubin, Hebey, Yau, Ambrosetti, Choquet-Bruhat among others).
Extracellular Matrix Collagen I Differentially Regulates the Metabolic Plasticity of Pancreatic Ductal Adenocarcinoma Parenchymal Cell and Cancer Stem Cell

Simple Summary: Pancreatic ductal adenocarcinoma (PDAC) has an extremely poor prognosis, largely due to the intense fibrotic desmoplastic reaction, characterized by high levels of extracellular matrix (ECM) collagen I that constitutes a niche for the cancer stem cells (CSCs). The role of the ECM composition in determining metabolic plasticity is still unknown. While all the bioenergetic modulators (BMs) decreased cell viability and increased cell death in all extracellular matrix types, a distinct, collagen I-dependent profile was observed in CSCs: as ECM collagen I content increased, the CSCs switched from glucose to mostly glutamine metabolism. Furthermore, all BMs synergistically potentiated the cytotoxicity of paclitaxel albumin nanoparticles (NAB-PTX) in both cell lines.

Abstract: Pancreatic ductal adenocarcinoma (PDAC) has a 5-year survival rate of less than 10 percent, largely due to the intense fibrotic desmoplastic reaction, characterized by high levels of extracellular matrix (ECM) collagen I that constitutes a niche for a subset of cancer cells, the cancer stem cells (CSCs). Cancer cells undergo a complex metabolic adaptation characterized by changes in metabolic pathways and biosynthetic processes. The use of the 3D organotypic model in this study allowed us to manipulate the ECM constituents and mimic the progression of PDAC from an early tumor to an ever more advanced tumor stage. To understand the role of desmoplasia in the metabolism of the PDAC parenchymal (CPC) and CSC populations, we studied their basic metabolic parameters in organotypic cultures of increasing collagen content to mimic in vivo conditions.
We further measured the ability of the bioenergetic modulators (BMs), 2-deoxyglucose, dichloroacetate and phenformin, to modify their metabolic dependence and the therapeutic activity of paclitaxel albumin nanoparticles (NAB-PTX). While all the BMs decreased cell viability and increased cell death in all ECM types, a distinct, collagen I-dependent profile was observed in CSCs. As ECM collagen I content increased (e.g., more aggressive conditions), the CSCs switched from glucose to mostly glutamine metabolism. All three BMs synergistically potentiated the cytotoxicity of NAB-PTX in both cell lines, which, in CSCs, was collagen I-dependent and the strongest when treated with phenformin + NAB-PTX. Metabolic disruption in PDAC can be useful both as monotherapy or combined with conventional drugs to more efficiently block tumor growth. Introduction Pancreatic ductal adenocarcinoma (PDAC) is the fourth cause of death from cancer in Western countries [1]. In most cases, it is diagnosed at an advanced stage and only a low percentage of patients are able to undergo surgical resection. PDAC presents a 5-year survival rate of less than 10%, due to the development of early metastases [2]. PDAC is characterized by an extensive desmoplasia of the extracellular matrix (ECM), characterized by the accumulation of collagen I [3][4][5][6][7][8]. The fibrotic and dense stroma is responsible for poor tumor vascularization, leading to hypoxic regions [9,10] associated with metastatic potential, as well as resistance to chemo-and radiotherapy [6,8,11,12]. The first-line chemotherapeutic drug is gemcitabine, combined either with paclitaxel associated with albumin (NAB-PTX) or with FOLFIRINOX (a combination of 5-fluorouracil, leucovorin, oxaliplatin and irinotecan) [13,14]. However, both regimens have low efficacy rates [15]. 
Another feature responsible for treatment failure and tumor relapse is the presence of a subset of cells with stem cell characteristics [16,17], the cancer stem cells (CSCs). CSCs have been identified in different types of cancers, such as leukemia [18], glioma [19], colon [20], lung [21], breast [22], prostate cancers [23] and also in PDAC [24][25][26]. They present similar features to normal stem cells, such as self-renewal capacity and the competence to differentiate into multiple cancer cell types, a functional system of DNA repair, metabolic adaptations and other stemness abilities [27]. Thus, the development of new therapies that preferentially target CSCs could result in important improvements in current antitumor strategies in PDAC patients [26]. One of the features that can be used as a promising target is the reprogrammed metabolism [28]. The non-stem parenchymal pancreatic cancer cells (CPCs) show an increased dependence on glycolysis coupled with a high production of lactate, known as the Warburg effect [29]. This metabolic switch induces the upregulation of glycolytic enzymes, glucose transporters (GLUTs) and lactate transporters (MCTs) and there are reports of the overexpression of these transporters in PDAC [30,31]. While non-tumoral stem cells rely more on glycolysis than differentiated cells that prefer OXPHOS [32], there are a few studies published about the metabolic features of CSCs with conflicting/controversial results and explanations. Even if glucose is considered to be the main metabolite used by CSCs, they can use it to fuel different metabolic pathways [16] and there are studies showing CSCs to be particularly reliant on amino acid metabolism for oxidative phosphorylation, especially in leukemia stem cells, which are completely dependent on metabolizing amino acids for energy [33,34]. 
While PDAC is characterized by high levels of glutamine uptake [35][36][37][38][39], neither the tumor cell type involved nor the role of the desmoplastic reaction in driving this glutamine uptake has been described. Many studies have revealed that CSCs identified in some types of cancer, such as PDAC [40], ovarian cancer [41] and gliomas [42], rely mainly on oxidative metabolism, with a higher mitochondrial mass content and higher oxygen consumption levels [43][44][45][46]. In contrast, other studies reported a more glycolytic profile when these cells are maintained in hypoxic regions, regulated by HIF-1α and HIF-2α and, consequently, by an acidic tumor microenvironment, responsible for stemness features and resistance to treatment [47]. Some examples are osteosarcoma [48], breast, lung [49] and colon cancers [50]. A likely explanation for these conflicting findings on the metabolic features of CSCs is their metabolic plasticity [51], which permits CSCs to adapt to the availability of nutrients and to external tumor environmental conditions [52]. This adaptive metabolic plasticity might allow CSCs to survive in the variable, hostile environments found during tumor progression, and there is no doubt that altered metabolic profiles represent an emergent hallmark of CSCs that can be used to develop new therapeutic approaches. This would be an efficient strategy to eradicate the tumor cell population most responsible for tumor relapse and chemoresistance, thus helping to improve the survival rate of cancer patients [28]. Some studies have shown that CSCs are sensitive to OXPHOS inhibition by biguanide compounds, such as metformin, which inhibits ATP formation via the blockage of complex I in mitochondria [53,54]. Indeed, it has been shown that metformin enhanced the capacity of gemcitabine to inhibit the proliferation and invasion of PDAC cells by inhibiting the proliferation of CSC cell populations [55]. 
However, recent studies using metformin produced somewhat disappointing results [56,57], which has led to a new clinical trial using a more potent/efficacious biguanide compound, phenformin, that showed promising results, although not in PDAC [58]. Glycolysis inhibitors, such as 2-deoxyglucose or 3-bromopyruvate, have also been tested in in vitro models of PDAC and both were able to decrease cell viability and increase cell death [45,[59][60][61]. The limited success of the existing therapeutic modalities in pancreatic cancer has underlined the potential future importance of using metabolic inhibitors either singly or in combination with existing therapeutic regimens [45]. Unfortunately, knowledge in this field is scarce and more effort is needed to understand the benefit of targeting the altered metabolism in PDAC treatment, including as co-adjuvant therapy. Moreover, the existing studies of in vitro anti-metabolic treatments used 2D rather than three-dimensional (3D) culture systems and did not take into account the influence of the extracellular desmoplastic environment predominant in PDAC [62]; therefore, the efficacy of the therapeutic strategies was not correctly evaluated. Along these lines, recent studies have demonstrated the importance of the extracellular matrix (ECM) in regulating the behavior of tumor parenchymal cells and CSCs [63][64][65]. Indeed, recent studies in PDAC cells have demonstrated that the stromal ECM composition "per se" provides important cues that guide growth kinetics, morphology, invasion, chemosensitivity and secretome profiles in both PDAC parenchymal cells (CPCs) [66][67][68] and CSCs [67,68]. The use of the 3D organotypic model in this study allowed us to manipulate the ECM constituents and mimic the progression of PDAC from an early tumor to an ever more advanced tumor stage [62]. 
We therefore used this 3D organotypic culture platform to determine the role of ECM composition on metabolic plasticity in PDAC CPCs and CSCs and their response to various metabolic pathway inhibitors (energy disruptor molecules or bioenergetic modulators; called here, BMs) [45] in the absence and presence of nab-paclitaxel (NAB-PTX) [46]. Cell Lines and Culture Two PDAC cell lines were utilized, the parenchymal, parental PANC-1 cell line (hereafter called CPCs) and the CSC cell line derived from the parenchymal line. The PANC-1 line was obtained from the American Type Culture Collection. The CSC line was selected as described [67,68] using a selective medium, DMEM-F12 (US Biological Sciences, Swampscott, MA, USA) without serum but containing (per L) 10 mL B27 (Gibco, Life Technologies, New York, NY, USA), 1 µg/mL fungizone (Gibco, Life Technologies), 5 µg/mL heparin (Sigma Aldrich, St. Louis, MO, USA) and the growth factors EGF and FGF (20 ng/mL, PeproTech, Rocky Hill, CT, USA). After selection, the CSCs were maintained in this medium for all experiments. The CPC cell line (PANC-1) was maintained in RPMI (Gibco, Life Technologies) supplemented with 10% fetal bovine serum (FBS, Gibco, Life Technologies), 50 µg/mL gentamycin (Gibco, Life Technologies) and 1% penicillin-streptomycin solution (Gibco, Life Technologies). Both cell lines were maintained at 37 °C and 5% CO2. Drugs and Their Application The bioenergetic modulators (BMs) 2-DG (5 mM), DCA (20 mM) and phenformin (0.01 mM) (Sigma Aldrich) were dissolved in PBS at stock concentrations of 1 M, 10 M and 100 mM, respectively. NAB-PTX was obtained from the University Hospital, Bari, Italy [69,70] at a final stock concentration of 5.8 mM, from which working solutions were prepared. 
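The working solutions above are prepared by simple dilution of the stocks; as an illustration (not part of the protocol), the C1·V1 = C2·V2 arithmetic for the three BM stocks can be sketched as follows, with the final volume of 1 mL being an invented example:

```python
def stock_volume_ul(stock_mM, final_mM, final_vol_ul):
    """Volume of stock (uL) to dilute to final_vol_ul at final_mM (C1*V1 = C2*V2)."""
    return final_mM * final_vol_ul / stock_mM

# Stock concentrations from the protocol, expressed in mM:
# 2-DG 1 M, DCA 10 M, phenformin 100 mM
stocks = {"2-DG": 1000.0, "DCA": 10000.0, "phenformin": 100.0}
# Working concentrations used in the experiments (mM)
working = {"2-DG": 5.0, "DCA": 20.0, "phenformin": 0.01}

for drug in stocks:
    v = stock_volume_ul(stocks[drug], working[drug], final_vol_ul=1000.0)
    print(f"{drug}: {v:.2f} uL of stock per 1 mL of medium")
```

For phenformin the required stock volume becomes very small (0.1 µL per mL here), which is why intermediate dilutions are commonly used in practice.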
Cells were plated into 96-well substrate-coated plates at a density of 1.5 × 10³ cells/well; after substrate polymerization and 24 h after cell plating, cells were treated with the chemotherapeutic drug NAB-PTX, the BMs (2-DG: 5 mM, DCA: 20 mM and phenformin: 0.01 mM) or a combination of both (each BM + NAB-PTX 10 nM). Three-Dimensional Organotypic Growth Three different mixtures constituted by Matrigel Basement Membrane Matrix™ (Corning) and collagen I (bovine, Gibco, Life Technologies) were prepared as per [67,68]. The stock concentration of Matrigel was 7 mg/mL dissolved in serum-free media, whereas the stock concentration of collagen I was 3 mg/mL dissolved in distilled water, PBS 10X (Sigma Aldrich) and 0.015 N NaOH. The mixtures were then prepared as per [67,68]: 90% Matrigel-10% collagen I: Matrigel and collagen I were prepared at final concentrations of 6.3 mg/mL and 0.3 mg/mL, respectively, as described above, and then mixed. 70% Matrigel-30% collagen I: Matrigel and collagen I were prepared at final concentrations of 4.9 mg/mL and 0.9 mg/mL, respectively, as described above, and then mixed. 10% Matrigel-90% collagen I: Matrigel and collagen I were prepared at final concentrations of 0.7 mg/mL and 2.7 mg/mL, respectively, as described above, and then mixed. In 96-well plates, 100 µL of each ECM mixture was added to each well and, after 1 h of polymerization at 37 °C, all the wells were washed with culture medium without serum to eliminate the metabolites generated during the polymerization process. Cell Viability Assay Cells were plated into 96-well substrate-coated plates at a density of 1.5 × 10³ cells/well, treated with the chemotherapeutic drugs as already described in Section 2.2 and cell viability was analyzed after 7 days in culture. 
The effect of compounds on cell proliferation was determined by the resazurin assay (alamarBlue®) (OD 590 nm), as described previously [62,67,68]. IC50 values were estimated from 3 independent experiments, each one in triplicate, using GraphPad Prism 7 software. Cell Death Assay At the end of 7 days of exposure to BMs and NAB-PTX in 96-well substrate-coated plates at a density of 1.5 × 10³ cells/well, cells were incubated overnight with ethidium homodimer (Sigma-Aldrich) at 10 mg/mL to evaluate the fluorescence of dead cells (O.D. 650 nm). Images were acquired with a fluorescence microscope (Olympus). The pixel density was quantified with the ImageJ 1.46r software [68]. Extracellular Glucose and Lactate and Cell ATP Content Assays Cells were plated in 24-well plates in 2D conditions or in ECM-coated plates at a density of 9 × 10⁵ cells/well. For basal analysis of metabolic parameters, 2D cells were used as control. For the analysis of metabolic parameters in cells exposed to BMs, untreated cells were used as control. Cells were treated with the above BM concentrations and the cell culture medium was collected after the respective incubation time for glucose and lactate quantification. Glucose and lactate were quantified using commercial kits (Sigma-Aldrich), according to the manufacturer's protocols. Results are expressed as ∆mM metabolite/proliferative cells. Simultaneously, after ECM removal [67], the cells were used to quantify intracellular ATP. ATP was measured using a commercial kit according to the manufacturer's instructions (Abcam). The ATP concentration in each sample was determined through the calibration curve constructed for each assay. The ATP content was expressed as total ATP normalized to the protein concentration determined previously. The results presented correspond to the average of at least three independent experiments. 
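As a minimal sketch of the normalization described above (∆mM metabolite per proliferating cells) and of the lactate-to-glucose ratio reported later in Table 1 — all numbers below are invented placeholders, not measured values:

```python
def metabolite_flux(medium_start_mM, medium_end_mM, viable_cells):
    """Delta mM of metabolite per 1e5 viable cells (positive = released, negative = consumed)."""
    return (medium_end_mM - medium_start_mM) / (viable_cells / 1e5)

# Hypothetical example: glucose drops from 11 to 8 mM, lactate rises from 0 to 5 mM,
# with 3e5 viable cells at the end of the incubation
glucose_flux = metabolite_flux(11.0, 8.0, 3e5)   # consumed -> negative
lactate_flux = metabolite_flux(0.0, 5.0, 3e5)    # released -> positive

# A fully Warburg-like phenotype approaches 2 mol lactate released per mol glucose consumed
lactate_to_glucose = lactate_flux / -glucose_flux
print(f"glucose: {glucose_flux:.2f}, lactate: {lactate_flux:.2f}, ratio: {lactate_to_glucose:.2f}")
```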
Quantification of Amino Acids in the Growth Medium by HPLC The medium removed from the untreated control samples or from samples treated with the above concentrations of BMs, as well as standards, was first deproteinized by treatment with 10 volumes of methanol. The samples were incubated on ice for 30 min and centrifuged at 12,000 rpm for 20 min at 4 °C, and the supernatant was removed and stored at −20 °C until use. Prior to injection, the analytes were derivatized with OPA-MPA (ortho-phthalaldehyde, Sigma-Aldrich) and 2-mercaptoethanol (MPA) as reducing agent. Derivatization was carried out by mixing 10 µL of each sample with 48 µL of the derivatization reagent (0.2 M borate buffer, 60 mM MPA, pH 9) in a total volume of 190 µL. The separation of the derivatized analytes was obtained by HPLC (Waters Alliance Separations Module 2695) with a guard cartridge and a Kinetex 5 µm C18 150 × 4.6 mm column (Phenomenex Inc., Torrance, CA, USA). Separation was carried out at a flow rate of 1.3 mL/min using linear gradient elution with a mobile phase consisting of 0.05 M acetate/methanol (95/5) as the polar phase (eluent A, pH 7.2) and 0.1 M sodium acetate/methanol/acetonitrile (46/44/10) as the nonpolar phase (eluent B, pH 7.2). The following gradient was applied: start 0% B, 3 min 2% B, 7 min 15% B, 17 min 50% B, 22 min 100% B (flow increased to 1.8 mL/min and held for 5 min), 30 min 0% B (flow rate decreased to 1.3 mL/min) and held for 5 min for equilibration. For detection, a Waters 2475 fluorescence detector (337 nm excitation, 453 nm emission) was employed. Bioenergetic Modulator (BM) Effect on NAB-PTX Cytotoxicity A total of 1.5 × 10³ cells/well were seeded into 96-well ECM-coated plates and treated with NAB-PTX (10 nM) alone or combined with the above concentrations of BMs for 7 days. Untreated cells were used as control. The effect of NAB-PTX alone and of NAB-PTX + BMs on cell proliferation and cell death was evaluated as above. 
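The gradient program is easier to audit as a table of (time, %B) breakpoints; a small sketch, assuming linear ramps between the steps listed above and ignoring the flow-rate changes:

```python
# (time_min, percent_B) breakpoints transcribed from the gradient program;
# the 5 min hold at 100% B is modeled as a flat segment from 22 to 27 min
GRADIENT = [(0, 0), (3, 2), (7, 15), (17, 50), (22, 100), (27, 100), (30, 0), (35, 0)]

def percent_b(t_min):
    """Linear interpolation of %B between consecutive gradient breakpoints."""
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t_min <= t1:
            return b0 + (b1 - b0) * (t_min - t0) / (t1 - t0)
    raise ValueError("time outside gradient program")

print(percent_b(12))  # midway along the 15% -> 50% ramp (7 -> 17 min)
```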
The results presented correspond to the average of at least three independent experiments. The combined effect of the drugs was determined using the CalcuSyn software version 2.0 (Biosoft, Cambridge, UK). Synergy or antagonism was quantified by the combination index (CI), where CI = 1 indicates an additive effect, CI < 1 indicates synergy and CI > 1 indicates antagonism. Statistical Analysis The GraphPad Prism 7 software was used, with either the Student's t-test or one-way ANOVA followed by the Dunnett test, considering values of p ≤ 0.05 to be significant. ECM Composition Is Involved in the Regulation of Metabolic Plasticity, Especially in CSCs To analyze the role of the ECM composition on PDAC cells' metabolic profile, intracellular ATP content and extracellular lactate and glucose levels were measured for both CPCs and CSCs grown in 2D or in 3D on three different ECM mixes. In Figure 1A, it can be seen that both cell lines present distinct metabolic profiles that were differently dependent on ECM composition. After 7 days of growth in the different ECMs, the growth medium was collected and basic metabolic parameters such as glucose consumption, lactate release and ATP production were measured. Results are presented as mean ± SEM in triplicate of at least three independent experiments. Significantly different between groups: * p < 0.05; ** p < 0.01; *** p < 0.001 compared to the 2D condition of each cell line. # p < 0.05; ### p < 0.001 and ns p > 0.05 comparing the 3D ECMs of each cell line. (B) Expression of the major transporters responsible for glucose consumption and lactate release. Upper panels display representative Western blots for the indicated proteins and lower panels (C) show a quantitative analysis of their relative expression as standardized to CPC cells on 2D. Significantly different between groups: * p < 0.05; ** p < 0.01; *** p < 0.001 compared to 2D CPCs. 
### p < 0.001 compared to 2D CSCs; + p < 0.05; ++ p < 0.01; +++ p < 0.001 comparing CPCs to CSCs on the same ECM. CPCs In 2D culture, the CPCs presented a canonical Warburg glycolytic phenotype, while in 3D growth, both glucose consumption and lactate production were reduced compared to 2D and displayed similar levels in all three ECM compositions. However, as seen in Table 1, the ratio of lactate release to glucose consumption was essentially unchanged from 2D to all 3D growth conditions, suggesting that the canonical Warburg glycolytic phenotype was maintained in all 3D ECM compositions. The cellular ATP content decreased significantly compared to 2D cultured cells only in the 10%M-90%C ECM. This, together with the lower consumption of glucose and production of lactate, suggests that these cells can utilize both OXPHOS and aerobic glycolysis in these conditions. These trends were confirmed via Western blots (Figure 1B,C), where GLUT1, GLUT3 and MCT1 expression were almost identical in all 3Ds. Interestingly, MCT4 expression in the CPCs decreased as ECM collagen I concentration increased, suggesting the possible participation of the CPCs in a reverse Warburg/lactate shuttle phenomenon (e.g., metabolic symbiosis), in which the CPCs utilize lactate produced by other cell types via MCT1 while reducing their expression of the efficient lactate exporter, MCT4 [71][72][73][74][75][76]. The ratio of mM of lactate released to mM of glucose consumed. CPCs: Cancer parenchymal cells; CSCs: Cancer stem cells. * p < 0.05, ** p < 0.01 compared to 2D counterparts. CSCs The CSCs displayed a more complex pattern and the ECM collagen I content had a greater influence on their metabolic behavior. 
When grown in 90%M-10%C, their rates of both glucose consumption and lactate production were greatly reduced compared to 2D. In contrast, as the percentage of ECM collagen I increased, both glucose consumption and lactate release increased stepwise such that, in 10%M-90%C, both rates were higher than in 2D. However, as seen in Table 1, the ratio of lactate release to glucose consumption decreased significantly as collagen I increased, suggesting that ECM collagen I induced a complex metabolic rewiring in the CSCs. Cellular ATP content was already increased in 90%M-10%C compared to 2D and further increased stepwise as collagen I content increased. Western blot analysis (Figure 1B,C) showed that in CSCs, GLUT1 and GLUT3 expression were essentially identical in 2D and in all 3Ds. Interestingly, while MCT1 expression decreased from 2D to 3D and then remained essentially stable, MCT4 expression increased as ECM collagen I concentration increased. This was opposite to what occurred for MCT4 in the CPCs and suggests that the high lactate production by the CSCs in collagen I-rich ECMs could be supplying lactate to the CPCs for their metabolic use, determining a reverse Warburg effect (lactate shuttle) as hypothesized above [71][72][73][74][75][76]. High ECM Collagen I Percentage Increases Glutamine Consumption and Glutaminolysis in CSCs The large increase in ATP production in the CSCs could indicate an increasing dependence on glutamine uptake and its use in glutamine-driven oxidative phosphorylation (glutaminolysis), as recently reported in pluripotent stem cells [77,78], leukemia CSCs [33,34] and in PDAC cells, where the primary amino acid taken up was glutamine [35,36,59,79,80], including PDAC-initiating cells [25]. However, while PDAC is characterized by high levels of glutamine uptake, the role of the increasing collagen I during the desmoplastic reaction in driving this glutamine uptake/usage has not been described. 
Therefore, the medium concentrations of several amino acids were analyzed in both 2D and 3D growth conditions using HPLC analysis and, by far, the largest fluxes were observed for the non-essential amino acids glutamine and glutamate (Figure 2). Glutamine consumption was similar for both CPCs and CSCs in 2D; and while collagen I content did not influence glutamine consumption in the CPCs, in the CSCs, glutamine consumption increased stepwise with increasing ECM collagen I content. Further, while glutamate release increased in 3D compared to 2D for both cell types, the ECM collagen I content had no influence on glutamate release in either cell type. The ratio of mM of glutamate released to mM of glutamine consumed. * p < 0.05, ** p < 0.01 compared to 2D growth. The ratio of mM of lactate released to mM of glutamine consumed. Figure 2. Amino acid quantification by HPLC in PDAC cell lines. 
The growth medium was collected after 7 days of growth in different substrates and the amino acids quantified by HPLC. Results are presented as mean ± SEM in triplicate of at least three independent experiments. Significance between groups: * p < 0.05 compared to the 2D condition of each cell line; # p < 0.05 comparing the 3D ECMs; + p < 0.05 comparing CPCs to CSCs on the same ECM. An analysis of the ratio of both glutamate release to glutamine uptake (Table 2) and of lactate release to glutamine uptake (Table 3) revealed that in 3D, as ECM collagen I content increased, these ratios remained stable in the CPCs while decreasing in the CSCs. Indeed, in these conditions, the ratio of glutamate release to glutamine uptake was greatly reduced only in the CSCs, suggesting a major shift in these cells in the utilization of glutamine. This also demonstrates that it is the CSCs that display metabolic amino acid plasticity, as only they shifted their metabolic dependences toward anaplerotic OXPHOS glutamine metabolism (glutaminolysis) as the collagen I content of the ECM increased. Altogether, these data suggest that while the CPCs remained essentially reliant on glycolysis in all growth conditions, in the CSCs, ECM collagen I induced a more complex metabolic pattern in which they were able to switch their metabolic dependence from glucose toward glutamine. This is different from leukemia stem cells, which are completely dependent on metabolizing amino acids for energy [59,78]. Treatment with Bioenergetic Modulators (BMs) Affects Cell Growth and Survival of PDAC Cells The growth and levels of death of the two PDAC cell lines (CPCs and CSCs) were evaluated after metabolic remodeling through treatment with three different bioenergetic modulators (BMs): the glycolysis inhibitor 2-DG (5 mM), the PDH kinase inhibitor DCA (20 mM) and the mitochondrial complex I inhibitor, phenformin (0.01 mM). 
In 2D, CPC growth was inhibited primarily by the glycolytic inhibitors (54 ± 6%, 37 ± 5% and 9 ± 4% inhibition for 2-DG, DCA and phenformin, respectively), while the opposite was true for the CSCs (4 ± 3%, 2 ± 5% and 41 ± 2% inhibition for 2-DG, DCA and phenformin, respectively). These data are shown in a graphical format in Supplemental Figure S1 and support the basic pattern observed for the two cell lines reported in Figure 1. In 3D, we measured the effect of the BM compounds in the two extreme ECM compositions (90%M-10%C vs. 10%M-90%C). As can be seen in Figure 3, all the BM compounds reduced cell proliferation in both cell lines, demonstrating that 3D growth increases their plasticity for dependence on both glycolytic and OXPHOS pathways. However, in the CPCs (Figure 3A, left panels), the glycolytic agents still induced a much higher inhibition of cell growth compared to phenformin, and these effects were independent of ECM composition. In the CSCs (Figure 3A, right panels), ECM collagen I content differentially modified the effect triggered by the BMs, in that growth was similarly inhibited by all three compounds in 90%M-10%C, while in 10%M-90%C, the inhibitory effect of phenformin was 2-fold higher than with the anti-glycolytic compounds. Cell death in the CPCs (Figure 3B, left panel) was much more strongly induced by the glycolytic inhibitors than by phenformin on both ECMs, while in the CSCs (Figure 3B, right panel), phenformin had a stronger cytotoxic effect, especially in 90%M-10%C, where neither glycolytic inhibitor had any effect, but also in 10%M-90%C, where the effects of the glycolytic inhibitors were minimal. Therefore, we hypothesize that both lactic fermentation (glycolysis) and OXPHOS are important for CSC proliferation and mortality when growing in 90%M-10%C, while OXPHOS dominates these phenotypes in 10%M-90%C. 
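The growth-inhibition percentages quoted above follow from normalizing the treated viability signal to untreated controls; a minimal sketch with invented OD values (blank correction is an assumption for illustration, not stated in the methods):

```python
def percent_inhibition(od_treated, od_control, od_blank=0.0):
    """Growth inhibition (%) from viability ODs: 100 * (1 - treated/control), blank-corrected."""
    return 100.0 * (1.0 - (od_treated - od_blank) / (od_control - od_blank))

def mean(values):
    return sum(values) / len(values)

# Hypothetical triplicate resazurin ODs for a BM-treated well vs untreated control
control_ods = [1.20, 1.10, 1.15]
treated_ods = [0.55, 0.50, 0.54]
inhib = percent_inhibition(mean(treated_ods), mean(control_ods), od_blank=0.05)
print(f"{inhib:.1f}% inhibition")
```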
The dependence of CSCs on respiratory metabolism (OXPHOS) is supported by the enhanced glutamine consumption observed in Figure 2 and Tables 2 and 3. The Disruption of Energetic Pathways Synergistically Potentiates NAB-PTX Action We lastly investigated the influence of these BMs on the efficacy of the currently utilized conventional PDAC antitumor drug, NAB-paclitaxel (NAB-PTX), with the objective to increase its efficacy and overcome treatment resistance. To evaluate the combined effect of NAB-PTX with BMs, both CPCs and CSCs were treated with 10 nM NAB-PTX alone or simultaneously with each BM for 7 days, with untreated cells as control. As seen in Figure 4A, NAB-PTX treatment alone inhibited cell growth similarly in both cell lines and on both ECMs. The combined treatment of NAB-PTX with the BMs effectively enhanced the inhibition of cell growth by NAB-PTX in most cases. Again, in the CPCs, the glycolytic inhibitors more strongly potentiated the inhibitory effect of NAB-PTX on growth than phenformin, while in the CSCs, all three BMs potentiated the inhibitory effect of NAB-PTX about equally in both ECMs, although DCA had a somewhat lower effect than the other BMs. (B) Cell death was quantified using the ethidium homodimer assay where the integrated density of dead or dying cells (stained red) was measured by the ImageJ software. Untreated cells were used as control. Results are presented as the mean ± SEM of triplicates from at least three independent experiments. 
Significance between groups: * p < 0.05; ** p < 0.01; *** p < 0.001 compared to untreated cells; ns: not significant. Regarding cell death (Figure 4B), treatment with NAB-PTX alone did not induce a significant cytotoxic effect in the CPCs while producing a low but significant increase in death on both ECMs in the CSCs. In the CPCs, all the BMs, but especially the glycolytic inhibitors, significantly increased NAB-PTX-induced cell death and, again, this effect was independent of the ECM composition. In the CSCs, the OXPHOS inhibitor phenformin combined with NAB-PTX induced the highest rate of cytotoxicity, with a slightly higher effect in 10%M-90%C. These results support the previous data (Figures 1 and 3) that the CPCs are more dependent on the glycolytic pathway, since the glycolytic inhibitors induced higher levels of death in these cells when combined with NAB-PTX. In contrast, the CSCs were more sensitive to the OXPHOS inhibitor, phenformin, supporting the hypothesis that they have a larger dependence on OXPHOS and mitochondrial pathways. 
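The combination indices reported in the next section were obtained with CalcuSyn; the underlying Chou–Talalay calculation (CI = d1/Dx1 + d2/Dx2, with each Dx from the median-effect equation) can be sketched as follows, using invented median-effect parameters (Dm, m) rather than values fitted from this study:

```python
def dose_for_effect(fa, dm, m):
    """Median-effect equation (Chou-Talalay): dose producing fraction affected fa."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(d1, d2, fa, dm1, m1, dm2, m2):
    """CI = d1/Dx1 + d2/Dx2; CI < 1 synergy, CI = 1 additive, CI > 1 antagonism."""
    dx1 = dose_for_effect(fa, dm1, m1)
    dx2 = dose_for_effect(fa, dm2, m2)
    return d1 / dx1 + d2 / dx2

# Hypothetical: phenformin (d1, mM) + NAB-PTX (d2, nM) reach fa = 0.5 in combination,
# with invented single-drug parameters Dm1 = 0.01 mM, Dm2 = 10 nM, slopes m = 1
ci = combination_index(d1=0.004, d2=4.0, fa=0.5,
                       dm1=0.01, m1=1.0,
                       dm2=10.0, m2=1.0)
print(f"CI = {ci:.2f}")
```

With these placeholder parameters CI = 0.8, i.e., synergy by the CI < 1 criterion defined in the methods.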
The CalcuSyn program (Biosoft, Cambridge, UK) was used to analyze whether these drugs had an additive or synergistic effect in combination with NAB-PTX by calculating the combination index. As can be seen in Table 4, all BMs produced a synergistic effect on both growth and cytotoxicity in both cell lines when in combination with NAB-PTX. However, in the CSCs, the treatment with the highest level of synergism was phenformin plus NAB-PTX. Therefore, the combination of BM compounds with NAB-PTX can overcome the relative resistance to standard NAB-PTX treatment in more aggressive conditions, such as the presence of a desmoplastic environment and in the CSC population. ) and NAB-PTX (10 nM), during the respective incubation time. Results represent the mean ± SEM of triplicates from at least three independent experiments. Significance between groups: * p < 0.05; ** p < 0.01; *** p < 0.001 compared to untreated cells. # p < 0.05; ## p < 0.01; ### p < 0.001 compared to cells treated only with NAB-PTX 10 nM; ns: not significant. Discussion The ECM plays many functions, including acting as a physical scaffold, facilitating interactions between the different cell types, providing survival and differentiation signals and resistance to anticancer drugs [81]. 
Further, the ECM composition is now known to be able to override cancer cell intrinsic signaling to regulate tumor progression independently of the clonal heterogeneity of the tumor and is one of the major drivers of heterogeneity, metastatic capacity, plasticity and therapy resistance risk/prognosis in cancer cells [82,83], including PDAC [67,68,84,85]. Indeed, the desmoplastic reaction, characterized by an intense deposition of collagen and one of the main features of PDAC, is related to a more invasive and aggressive phenotype and to resistance to therapy [6,8,86]. The low survival rates and the dismal results of the available conventional treatments in PDAC can also be related to the interaction of this desmoplastic extracellular environment with the CSC subset of tumor cells [87]. Understanding how the ECM component of the tumor microenvironment influences cancer cell behavior will lead to a better comprehension of PDAC biology and to the identification of new targets that may improve the treatment of pancreatic cancer patients. CSCs represent a small group of cells that are considered to be the main cause of metastasis and treatment resistance, due to their DNA repair systems, relatively quiescent behavior and ability to form new tumors. Therefore, the targeted eradication of these cells would be an important advance in PDAC therapy. The altered metabolism that distinguishes cancer cells from their normal counterparts is a novel paradigm in cancer treatment, and new and effective compounds targeting the metabolic alterations in cancer cells have been developed and evaluated in clinical trials [88,89]. Like the parenchymal cancer cell population (CPCs), CSCs also have a reprogrammed metabolism [90]; however, much less is known concerning CSC metabolism and its regulation. 
The role of the interaction of the cancer cell with the extracellular matrix in determining their metabolic phenotype, and the cross-talk between the two, is still very poorly understood [91]. Data concerning CSC metabolism are controversial, with some reports indicating that CSCs use the same energetic pathway(s) as the non-stem CPCs, e.g., lactic fermentation [32], while others have demonstrated that OXPHOS is the main source of energy for CSCs [28,43]. It has been demonstrated that CSCs show a high metabolic plasticity and can readily adapt their metabolism according to the microenvironment conditions, tissue of origin and state of differentiation [32,91]. Indeed, CSCs are mainly oxidative in the quiescent state, while in the proliferative state they have a higher capacity to change their metabolic requirements, such that they show a higher metastatic ability, resistance to therapy and a combined phenotype that relies upon glycolysis and OXPHOS simultaneously [28,43]. Therefore, the main objective of the present study was to understand how ECM composition influences the metabolic behavior of PDAC parenchymal cells and CSCs. To this end, we measured glucose and amino acid consumption, lactate secretion and the expression of their principal transporters, together with ATP production, in 2D growth and in 3D growth on different ECM substrates. This was performed both in control conditions and in the presence of the following bioenergetic modulators (BMs) that target different metabolic pathways: (i) the anti-glycolytic agent 2-deoxyglucose (2-DG), an inhibitor of hexokinase (HK); (ii) dichloroacetate (DCA), an inhibitor of PDH kinase that activates pyruvate oxidation, decreasing the Warburg effect; and (iii) phenformin, an inhibitor of the OXPHOS mitochondrial complex I. Lastly, to study the influence of the ECM and the role of metabolism in the response to therapy, we determined the effect of the combination of each of these BMs with nab-paclitaxel (NAB-PTX) on the two ECMs. 
We observed that the desmoplastic microenvironment can be decisive in the selection of the metabolic course: increasing levels of collagen I in the ECM drove a higher metabolic plasticity in the CSCs, while the CPC metabolic phenotype was more independent of ECM composition, although some subtle differences were also observed. When grown in 2D, the CPCs exhibited Warburg metabolism, with consumption of glucose and production of lactate and ATP; indeed, in 2D, the effect of the BMs on growth and cell death supported this conclusion. However, in 3D, and especially in the 90%M-10%C condition, a distinct pattern was found in which both the lactic fermentation and OXPHOS pathways can be used. Indeed, as seen in Figure 3, in CPCs both anti-glycolytic agents, 2-DG and DCA, strongly decreased proliferation by inducing cell death independently of ECM composition, while phenformin produced a much lower inhibition of cell growth and stimulation of cell death, although these effects were higher than in 2D. Therefore, we conclude that, in 3D culture, while glycolysis is the main energetic pathway used by the CPCs, the mitochondria also seem to play a minor role, as indicated by the effect of the OXPHOS inhibitor phenformin. On the contrary, the CSCs presented a very distinct metabolic pattern that was strongly influenced by ECM composition. As reported in the literature, the CSCs displayed a strong ability to modify their metabolic characteristics according to the environmental conditions [51][52][53][54]. As the ECM collagen I content increased, the CSCs showed a higher consumption of glucose, release of lactate and production of ATP, but with a decreasing ratio of lactate release to glucose consumption, suggesting a rewired metabolism. Indeed, on collagen I-enriched ECM, the CSCs consumed progressively more glutamine, with, however, strong reductions in the ratios of both glutamate and lactate released to glutamine consumed.
Together with the stepwise increase in ATP, these data strongly point to an increase in glutaminolysis as collagen I increases in the ECM, with a shift in its metabolic pathway end products. In this respect, while all the BMs were able to reduce CSC proliferation and increase cell death, phenformin induced the strongest inhibition of proliferation and increase in cell death, especially in the ECM composition with the highest level of glutamine uptake. The inhibition of metabolic dependencies using metabolic inhibitors has been tested in different types of cancers and their derived CSC populations. For example, breast cancer CSCs were shown to be very sensitive to 2-DG and, furthermore, 2-DG showed a synergistic effect with doxorubicin, commonly used in breast cancer therapy, which further reduced the stemness of the cell population [92]. However, other studies have shown the importance of mitochondrial respiration in the CSCs of many types of cancers, and a high dependence on this mitochondrial activity even in cells with mutations that impaired the TCA cycle. Indeed, OXPHOS plays an important role in the maintenance of CSC stemness [43,93,94], and the anti-diabetic drug and OXPHOS inhibitor metformin leads to a reduction in the stem cell pool and in in vivo tumor growth in pancreatic/breast [54] and prostate CSCs [95]. Indeed, mitochondria remain important organelles for the use of other precursors, for example, glutamine via glutaminolysis [96]. Other recent reports have shown the importance of metabolic sources other than glucose in maintaining the tumorigenicity of CSCs, as they are able to use the amino acids glutamine and alanine to support their mitochondrial energy production [33,34,97]. For this reason, the use of compounds that block these metabolic pathways can be of great importance in antitumor therapy.
Indeed, our results show that in PDAC it is the CSC population that has the higher consumption of glutamine, and that this increase in consumption is driven by collagen I levels in the ECM. In this respect, Li and co-authors showed that decreasing the glutamine concentration in the microenvironment, through the inhibition of enzymes responsible for glutamine metabolism, reduces stemness and sensitizes the CSCs to radiotherapy in vitro and in vivo [96]. We verified in this study that CSCs grown in 10%M-90%C have an increased glucose consumption and production of lactate and ATP, but also a very large increase in glutamine consumption, indicating a sustained mitochondrial activity. Consequently, targeting not only glycolysis but also mitochondrial functionality will expand the search for novel anticancer drugs against tumorigenesis and chemoresistance, and a combination of mitochondria-targeted agents with conventional chemotherapeutic drugs may be required to achieve the maximum efficacy in disrupting the CSC population and improving the current therapies [45,46]. In this respect, our results clearly showed that the combination of the conventional drug NAB-PTX with BMs increased the effect of the standard therapy in all conditions. It is important to note that in both the CPCs and CSCs, the anti-glycolytic compounds together with NAB-PTX induced a stronger effect on cell death compared with NAB-PTX treatment alone, independently of ECM composition, and this effect is synergistic (Table 4). Importantly, in the CSCs, the treatment with the highest level of synergism was phenformin plus NAB-PTX. Altogether, these data further support our hypothesis that ECM composition directly influences basic metabolism differently in the CPCs and CSCs, with collagen I in the ECM driving a higher metabolic plasticity in the CSCs.
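Synergism such as that reported in Table 4 can be quantified in several ways; since the scoring method is not detailed in this excerpt, the sketch below uses a Bliss-independence excess score purely for illustration, with hypothetical effect fractions that are not the paper's data.

```python
def bliss_excess(fa: float, fb: float, fab: float) -> float:
    """Excess over Bliss independence for two single-agent
    fractional effects (fa, fb) and the observed combination
    effect (fab). Positive values indicate synergy."""
    expected = fa + fb - fa * fb
    return fab - expected

# Hypothetical fractions of cell death (illustrative only):
single_bm, single_nabptx, combo = 0.30, 0.40, 0.75
excess = bliss_excess(single_bm, single_nabptx, combo)
# expected = 0.30 + 0.40 - 0.12 = 0.58, so the combination
# exceeds independence and would be scored as synergistic
```

Other common choices (e.g., the Chou-Talalay combination index) use dose-response fits rather than single effect fractions; the paper's actual method may differ.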
Further, we find that the BMs, both alone and especially in combination with current treatment procedures, can be an emergent strategy in PDAC treatment and in the abolishment of the CSC population.

Conclusions

We conclude that ECM composition directly influences basic metabolism differently in the CPCs and CSCs, with collagen I in the ECM driving a higher metabolic plasticity in the CSCs. Interestingly, a possible hypothesis is that the high lactate production by the CSCs in collagen I-rich ECMs could be supplying lactate to the CPCs for their metabolic use, which is supported by the levels of MCT1 and MCT4 present in CSCs and CPCs in collagen I-rich ECMs. Indeed, MCT1 is a monocarboxylate transporter that mediates the uptake of lactate into the cell, whereas MCT4, due to its high Km, is mainly involved in lactate efflux. Thus, as the CSCs presented higher levels of MCT4, they likely export higher levels of lactate that can be used as a substrate by the CPCs, which present higher levels of the uptake transporter MCT1. Indeed, the results clearly showed that the combination of the conventional drug NAB-PTX with BMs increased the effect of the standard therapy in all conditions: in both the CPCs and CSCs, the anti-glycolytic compounds together with NAB-PTX induced a stronger effect on cell death compared with NAB-PTX treatment alone, independent of ECM composition, and this effect is synergistic (Table 4). Importantly, in the CSCs, phenformin plus NAB-PTX was the treatment with the highest level of synergism. Hence, the BMs, both alone and especially in combination with current treatment procedures, can be an emergent strategy in PDAC treatment and in the abolishment of the CSC population.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cancers15153868/s1, Figure S1: Representative Western blot of Gluts and MCTs in CPCs and CSCs growing as 2D or 3D organotypic cultures on different ECMs; File S1: Original blots. Funding: This work has been funded by National funds, through the Foundation for Science and Technology (FCT)-project UIDB/50026/2020 and UIDP/50026/2020; and by the project NORTE-01-0145-FEDER-000055, supported by Norte Portugal Regional Operational Program (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, through the European Regional Development
ChemSAR: an online pipelining platform for molecular SAR modeling

Background In recent years, predictive models based on machine learning techniques have proven to be feasible and effective in drug discovery. However, to develop such a model, researchers usually have to combine multiple tools and undergo several different steps (e.g., the RDKit or ChemoPy package for molecular descriptor calculation, ChemAxon Standardizer for structure preprocessing, the scikit-learn package for model building, and the ggplot2 package for statistical analysis and visualization). In addition, it may require strong programming skills to accomplish these jobs, which poses severe challenges for users without advanced training in computer programming. Therefore, an online pipelining platform that integrates a number of selected tools is a valuable and efficient solution that can meet the needs of related researchers. Results This work presents a web-based pipelining platform, called ChemSAR, for generating SAR classification models of small molecules. The capabilities of ChemSAR include the validation and standardization of chemical structure representations, the computation of 783 1D/2D molecular descriptors and ten types of widely used fingerprints for small molecules, filtering methods for feature selection, the generation of predictive models via a step-by-step job submission process, model interpretation in terms of feature importance and tree visualization, as well as a helpful report generation system. The results can be visualized as high-quality plots and downloaded as local files. Conclusion ChemSAR provides an integrated web-based platform for generating SAR classification models that will benefit cheminformatics and other biomedical users. It is freely available at: http://chemsar.scbdd.com.
Electronic supplementary material The online version of this article (doi:10.1186/s13321-017-0215-1) contains supplementary material, which is available to authorized users.

Open Access. *Correspondence: oriental-cds@163.com. 1 Xiangya School of Pharmaceutical Sciences, Central South University, No. 172, Tongzipo Road, Yuelu District, Changsha, People's Republic of China. Full list of author information is available at the end of the article.

Background

The increasing availability of data on the characteristics and functions of biomolecules and small chemical compounds enables researchers to better understand various chemical, physical and biological processes or activities with the use of machine learning methods [1][2][3][4][5][6][7]. Particularly in the drug discovery field, machine learning methods are frequently applied to build in silico predictive models in studies of structure-activity relationships (SAR) and structure-property relationships (SPR) to assess or predict various drug activities [8,9] and ADME/T properties [10][11][12][13][14][15][16]. Nowadays, with the development of various public data sources (e.g., ChEMBL [17], PubChem [18], and DrugBank [19]), more and more scientific studies are utilizing predictive SAR/SPR models to perform virtual screening [20], study drug side effects [21][22][23][24], predict drug-drug interactions [25] or drug-target interactions [26][27][28], and investigate drug repositioning [29,30]. Undoubtedly, robust and predictive SAR models built upon machine learning techniques provide a powerful and effective way for pharmaceutical scientists to tackle the aforementioned problems; however, there still exist two major barriers to overcome. First, owing to the fusion of different scientific disciplines, a higher level of background knowledge and professional skills is required to solve many existing biological problems. For example, to reliably predict drug ADME/T properties, a researcher must be familiar with both pharmacokinetics and a modern programming language [14]; however, in many cases, researchers from the pharmaceutical or biomedical fields may lack formal training in computing skills. It may thus become necessary to save these investigators from tedious programming or deployment work such that they can focus on solving scientific problems. Second, even if a researcher has acquired the related background knowledge and computing skills, it is very time-consuming to build a predictive model, as a number of steps are needed, including molecule representation, feature filtering, selection of a suitable machine learning method, prediction of new molecules, and relevant statistical analysis. In particular, the researcher needs to select and combine different tools to accomplish these steps; for example, using RDKit [31] to calculate molecular descriptors, using libSVM [32] or scikit-learn [33] to establish a model, and using ggplot2 [34] to plot or visualize the results. However, the selection and integration of such tools involve considerable programming effort. Toward this goal, a few previous studies made attempts to integrate one or two of the steps into a single package. For instance, the VCCLAB group [51,54] developed E-DRAGON, an online platform for the DRAGON software, to calculate various molecular descriptors, and also provided several online machine learning tools like PLSR and ASNN for model building. However, these tools are independent of each other and cannot be further integrated to accomplish the entire modeling process. The OCHEM platform [55] was developed to provide a practical online chemical database.
Also, this platform provides a modelling environment that enables users to standardize molecules, calculate molecular descriptors and build QSAR models. However, the OCHEM database is not specialized for SAR modelling and thus still lacks some essential functionalities, like feature selection and advanced statistical analysis. Its modeling function is based solely on small molecules and thus cannot be used to analyze other independent biomedical datasets. More recently, Murrell et al. [56] developed a relatively well-integrated R package, named camb, for QSAR modeling; however, it does require users to have sufficient programming skills in R. In 2010, Chembench [57,58] was developed and made progress in simplifying the use of QSAR modelling for analyzing experimental chemical structure-activity data. Other software applications that should be mentioned include eTOXlab [59], AZOrange [60], QSARINS [61], OECD QSAR Toolbox [62], Build-QSAR [63], Molecular Operating Environment (MOE) [64] and Discovery Studio (DS) [65]; however, these packages are either commercial or difficult for users to deploy by themselves. In view of these limitations, we implemented a web-based platform, called ChemSAR, as an online pipelining for SAR model development. ChemSAR integrates a set of carefully selected tools, provides a user-friendly web interface and allows users to complete the entire workflow via a step-by-step submission process without any programming effort. Currently, ChemSAR is mainly designed for molecular SAR analysis and is capable of accomplishing seven modeling steps: (1) structure preprocessing, (2) molecular descriptor calculation, (3) data preprocessing, (4) feature selection, (5) model building and prediction, (6) model interpretation, and (7) statistical analysis.
These seven steps, together with several ancillary tools, are implemented in six modules: (1) User space, (2) Structure preprocessing, (3) Data preprocessing, (4) Modeling process, (5) Model interpretation, and (6) Tools. The six modules form an integrated pipeline for modeling, but each of them can also be used as a standalone tool. The whole workflow is shown in Fig. 1.

Implementation

The whole project runs on an elastic compute service (ECS) server of Aliyun. The number of CPU cores and the memory are automatically allocated to the running instances on demand, which ensures elastically stretchable computing capability. Python/Django and MySQL are used for server-side programming, and HTML, CSS and JavaScript are employed for the front-end web interfaces. The realization of each functionality goes through three main components (MCV, short for Model-Control-View). To illustrate the implementation of the ChemSAR architecture, we consider the functionality of "Feature Calculation" as an example; the corresponding diagram is shown in Fig. 2 (see also Additional file 1). This module consists of four *.py files and a template folder: models.py acts as "M" to access the database; views.py acts as "C" to realize the functionality; functions.py acts as a library to store the key calculation procedures, which can be called from views.py; forms.py stores input forms that can be used in templates; and the HTML pages in the template folder act as "V" to visualize the results. First, users go to the index page of "Feature Calculation" and submit the request. Then, the function fingerprint_list (from views.py) executes as the back-end calculation program.
In this function, (1) the input data from the user is handled; (2) the input data is saved into the database; (3) the key calculation function is called and executed; (4) the calculation result is stored in the database and provided as a file for download; and (5) all the related variables are rendered into content for the view. Among them, UserData and FeatureData are called from models.py to store the user's data and result data; handle_uploaded_file and calcChemopyFingerprints_list are imported from functions.py to store the uploaded file and calculate the specified fingerprints. Finally, the calculated fingerprint values are rendered into static content displayed in fingerprint_result3.html. As an easy-to-use web service, ChemSAR supports commonly used file formats for data exchange between the server side and the client side. Specifically, simplified molecular input line entry specification (SMILES) and Structure Data Format (SDF) are the acceptable molecular file formats (users can convert their files into these two formats using OpenBabel [37] or ChemCONV [49]). The modeling results are presented as HTML web pages, but users can download the results in SDF, CSV, PNG or PDF format (see Table 1 for details).

User interface

To accomplish the complex modelling steps in a web-based tool, a user-friendly interface is essential. In ChemSAR, database and session technologies are utilized to develop a complete job submission and user space system. AJAX is used in those processes that usually take a long time to finish, which makes it possible to check the status of different jobs at a convenient time. Besides, a logging system is provided to make sure that every step or wrong operation triggers user-friendly tips or messages. The user interface consists of three main parts: the "Model" section, the "Manual" section and the "Help" section. The "Model" section is the main entrance for structure preprocessing, data preprocessing, the modelling process and tools.
The "Manual" section describes the theories and requirements for each module. The "Help" section gives detailed explanations of each module and a standard example of each step of building an SAR classification model.

Fig. 2: The process of calculating molecular fingerprints, an example to explain the development methods of ChemSAR.

User space

In general, to build a model, storage space is needed for the user's data and computing results. In this project, the "User space" module is developed to enable users to view, download and reuse all related files or models at any time.

Structure preprocessing

It is very difficult for SAR practitioners to collect and integrate chemical structures from multiple sources, due to the use of different structural representations, diverse file formats, distinct drawing conventions and the existence of labeling mistakes in such sources [66]. Therefore, preprocessing and standardizing these structures are very important tasks that ensure the correctness of the graphical representation and the consistency of molecular property calculation. Hence, we developed the "Structure preprocessing" module for molecule structure standardization based on the RDKit package. This module consists of three sub-modules, called Validation of molecules, Standardization of molecules, and Custom preprocessing, respectively. The "Validation of molecules" sub-module can check and visualize molecular structures. A warning message is triggered if any anomaly is detected (e.g., a molecule has zero atoms, has multiple fragments, is not an overall neutral system, or contains isotopes), and both the molecular structure and the validation result are displayed in an interactive table.
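ChemSAR performs these validation checks with RDKit; the stdlib-only toy below merely illustrates the four kinds of anomalies listed above, using a hypothetical dictionary-based molecule record rather than a real chemical parser.

```python
def validate(mol: dict) -> list:
    """Run the anomaly checks described above on a toy molecule
    record: {'atoms': [{'element', 'charge', 'isotope'}, ...],
    'fragments': int}. Returns a list of warnings (empty = passes)."""
    warnings = []
    atoms = mol.get("atoms", [])
    if not atoms:
        warnings.append("molecule has zero atoms")
    if mol.get("fragments", 1) > 1:
        warnings.append("molecule has multiple fragments")
    if sum(a.get("charge", 0) for a in atoms) != 0:
        warnings.append("molecule is not an overall neutral system")
    if any(a.get("isotope") for a in atoms):
        warnings.append("molecule contains isotopes")
    return warnings

# A hypothetical salt drawn as two charged fragments: net neutral,
# but flagged for having multiple fragments.
rec = {"fragments": 2,
       "atoms": [{"element": "Na", "charge": +1},
                 {"element": "O", "charge": -1}]}
```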
The "Standardization of molecules" sub-module consists of the following steps: removing any hydrogens from the molecule, sanitizing the molecule, breaking certain covalent bonds between metals and organic atoms, correcting functional groups and recombining charges, reionizing the molecule such that the strongest acids ionize first, and discarding tautomeric information while retaining a canonical tautomer. Different from the one-click process implemented in "Standardization of molecules", the "Custom preprocessing" sub-module provides flexible options for users to construct a customized standardization process according to their own preferred operations and execution orders.

Data preprocessing

Feature selection is one of the focuses of many SAR-based studies, for which datasets with tens (or hundreds) of thousands of variables need to be analyzed and interpreted. The variables from the descriptor calculation step usually need to be selected for the following reasons: removing unneeded, irrelevant or redundant features; simplifying models for ease of interpretation; and shortening training time [67]. This module is built upon the scikit-learn package and consists of six sub-modules, including imputation of missing values (imputer), removal of low-variance features (rm_var), removal of highly correlated features (rm_corr), univariate feature selection (select_univariate), tree-based feature selection (select_tree_based) and recursive feature elimination (select_RFE). The imputer module can impute missing values (e.g., nan) in the data. The rm_var and rm_corr modules remove features by a predefined threshold of variance or correlation coefficient without incurring a significant loss of information. The select_univariate module works by selecting the k best features based on univariate statistical tests (e.g., Chi-square or F tests).
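The rm_var and rm_corr steps are built on scikit-learn in ChemSAR itself; the stdlib-only sketch below illustrates the underlying idea of threshold-based removal (the feature names and thresholds are illustrative).

```python
def rm_var(columns: dict, threshold: float) -> dict:
    """Drop features whose (population) variance is <= threshold,
    mirroring the rm_var step described above."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return {name: xs for name, xs in columns.items() if var(xs) > threshold}

def rm_corr(columns: dict, threshold: float) -> dict:
    """For each pair of features with |Pearson r| >= threshold, keep
    the earlier one and drop the later, as in the rm_corr step.
    Assumes constant features were already removed by rm_var."""
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)
    kept = {}
    for name, xs in columns.items():
        if all(abs(pearson(xs, ys)) < threshold for ys in kept.values()):
            kept[name] = xs
    return kept

# Illustrative descriptor table: "a" is constant, "c" duplicates "b".
cols = {"a": [1, 1, 1, 1], "b": [1, 2, 3, 4], "c": [2, 4, 6, 8]}
selected = rm_corr(rm_var(cols, 0.05), 0.95)
```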
The select_tree_based module discards trivial features according to importances computed using an estimator of randomized decision trees. The select_RFE module performs recursive feature elimination in a cross-validation loop to find the optimal number of features. Here, an estimator of support vector classification with a linear kernel is invoked to compute a cross-validated score for each recursive calculation. After the calculation, a figure is displayed on the result page to show the relationship between the number of features and the cross-validation scores. A table that contains the optimal number of features, the feature ranking and the cross-validation scores is also presented there.

Modeling process

The core steps of building an SAR model are implemented in the "Modeling process" module. It contains four sub-modules: feature calculation, model selection, model building, and prediction.

Feature calculation

In this project, we developed the feature calculation sub-module as an online tool [36], which allows users to calculate 783 molecular descriptors from 12 feature groups (see Table 2). These features cover a relatively broad range of molecular properties and were carefully selected based on our experience. In recent years, molecular fingerprints have been widely used in the drug discovery area, especially for similarity search, virtual screening and QSAR/SAR analysis, due to their computational efficiency when handling and comparing chemical structures. In this sub-module, ten types of molecular fingerprint algorithms are implemented (see Table 2). These molecular fingerprints have been shown to perform well in characterizing molecular structures.

Model selection

The model selection sub-module is developed to select a proper learning algorithm and computing parameter set for the user's dataset by comparing/validating models and tuning parameters. Five learning algorithms [68][69][70][71][72] from the scikit-learn package are implemented.
Model interpretation and application domain

It is very necessary to have a reasonable interpretation of machine learning models and to define their application domain [66,73,74]. Here, we developed two related sub-modules to help researchers interpret their models. The feature importance module enables researchers to interpret models in terms of feature importance; forests of trees are used to evaluate the importance of features. By using this module, researchers can obtain a figure displaying the feature importances of the forest, along with their inter-tree variability. The other module is tree visualization, which enables one to observe how the features classify the samples step by step in the decision tree model. With it, the model is displayed as a clear tree along with class names and explicit variables. Moreover, we define an S index in the prediction module to help users estimate which predictions can be considered reliable. It only works for chemical datasets. The S index represents the mean similarity between each molecule from the external samples and all molecules from the training set, using the Tanimoto similarity metric based on MACCS fingerprints. The higher the S index for a new molecule, the closer the molecule is to the main body of the training set, and thus the more reliable the prediction for this molecule obtained by the constructed predictive model.

Report system

One of the striking features of ChemSAR is that it provides a complete report generation system. It retrieves the results of each calculation step and re-arranges them into an organized HTML page and a PDF file for the user. After finishing the whole modelling pipeline, the user can go to the "My Report" module to obtain the report. On the index page of this module, all job IDs that the user has created are listed. A "Get a PDF" button allows the user to generate a PDF file for off-line usage.
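The S index described above reduces to a mean pairwise Tanimoto similarity. A minimal sketch, with fingerprints abstracted as sets of "on" bit positions instead of actual MACCS keys:

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto coefficient between two fingerprints given as sets
    of 'on' bit positions: |A & B| / |A | B|."""
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter)

def s_index(query: set, training: list) -> float:
    """Mean Tanimoto similarity of one external molecule to every
    training-set molecule; higher = closer to the training domain."""
    return sum(tanimoto(query, fp) for fp in training) / len(training)

# Toy fingerprints (bit positions are invented for illustration):
training_fps = [{1, 2, 3}, {2, 3, 4}]
score = s_index({2, 3}, training_fps)
```

In ChemSAR the fingerprints are 166-bit MACCS keys computed from the structures; the domain logic is the same.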
A "Query" button is available to query information about models created in other jobs. This is very helpful when a user attempts to construct multiple models using different machine learning methods and computing parameters, or when the user wants to build more than one model from the same client at the same time.

Useful tools

In addition to the main modules mentioned above, ChemSAR offers three useful and convenient auxiliary tools. The first tool, statistical analysis, can be used to analyze model performance. This tool is separate from the prediction module because the test set may have no real response values. The "attach to current job or not" option allows the user to predict different test sets and get a complete report each time. After the calculation, commonly used statistical indicators related to classification are displayed, including the number of positive samples, the number of negative samples, AUC score, accuracy, MCC, F1 score, sensitivity, specificity and the ROC curve. The second tool, random training set split, can be used to split the training set and test set by picking a subset of molecules randomly. The third tool, diverse training set split, can be used to split the training set and test set by picking a subset of diverse molecules [75]. First, the similarity of ECFP4 fingerprints based on the Dice similarity metric [31] is employed to calculate distances between molecular objects, and then the MinMax algorithm is applied to select a subset of diverse molecules based on these distances. This is usually a good strategy to avoid an unsuitable training/test data split.

Results and discussion

The most important strategy of the pharmaceutical industry to overcome its productivity crisis in drug discovery is to focus on the molecular properties of absorption, distribution, metabolism and excretion (ADME). Nowadays, machine learning based approaches have become a very popular choice to predict the ADME properties of drug molecules.
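Most of the classification indicators listed for the statistical analysis tool can be derived from the binary confusion matrix; a stdlib-only sketch follows (AUC and the ROC curve need per-sample scores and are omitted here).

```python
import math

def classification_report(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute common binary-classification indicators from the
    confusion-matrix counts: true/false positives and negatives."""
    n_pos, n_neg = tp + fn, tn + fp
    acc = (tp + tn) / (n_pos + n_neg)
    sens = tp / n_pos                 # sensitivity / recall
    spec = tn / n_neg                 # specificity
    prec = tp / (tp + fp)             # precision
    f1 = 2 * prec * sens / (prec + sens)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"accuracy": acc, "sensitivity": sens, "specificity": spec,
            "F1": f1, "MCC": mcc}

# Illustrative counts (not from the paper):
report = classification_report(tp=40, tn=40, fp=10, fn=10)
```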
Here, in order to demonstrate the practicability and reliability of ChemSAR, we studied Caco-2 cell permeability using the dataset from our previous publication [12]. All the compounds were divided into two classes according to the Caco-2 permeability cutoff value [12]. Thus, we obtained a dataset of 1561 molecules containing 528 positive samples and 1033 negative samples. A detailed workflow of building the permeability models is shown in Fig. 3. First, through the structure preprocessing step (Standardization of molecules) using the default parameters, 1561 molecules were retained. The random training set split tool (test size: 0.2) was then used to split the training set and test set. After this, a training set of 1249 samples (423 positive and 826 negative) and a test set of 312 samples (105 positive and 207 negative) were obtained. In the feature calculation step, 203 descriptors were calculated, including 30 constitution descriptors, 44 connectivity indices, 7 kappa indices, 32 Moran autocorrelation descriptors, 5 molecular properties, 25 charge descriptors and 60 MOE-type descriptors. Four filtering steps in data preprocessing were then performed: (1) missing values were imputed with the default parameters; (2) descriptors with zero or near-zero variance were removed with a cut-off value of 0.05; (3) one of each pair of highly correlated descriptors was randomly removed with a cut-off value of 0.95; and (4) tree-based feature selection was performed (n_estimators: 500, max_features: auto, threshold: mean). After these steps, 43 features were selected to build the model. To test every module of ChemSAR and to build a model with high prediction performance, we employed five methods (RF, SVM, k-NN, NB, DT) to build classification models.
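The random training set split step can be sketched as follows; the platform's exact shuffling and rounding are not specified here, so this version simply takes int(n * test_size) molecules for the test set, which reproduces the 1249/312 counts above.

```python
import random

def random_split(items: list, test_size: float, seed: int = 0):
    """Random train/test split (sketch of ChemSAR's 'random training
    set split' tool; the platform's exact rounding may differ).
    The test set holds int(len(items) * test_size) items."""
    rng = random.Random(seed)        # seeded for reproducibility
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_test = int(len(items) * test_size)
    return shuffled[n_test:], shuffled[:n_test]

# 1561 molecules with test_size 0.2 -> 1249 train / 312 test,
# matching the counts reported in the case study:
train, test = random_split(list(range(1561)), 0.2)
```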
In model selection, the parameters for each learning method were optimized using a grid search strategy (best parameter sets, RF: {'cv': 5, 'max_features': 9, 'n_estimators': 500}; SVM: {'kernel': 'rbf', 'C': 2, 'gamma': 0.125}; k-NN: {'n_neighbors': 5}; NB: {'classifier': 'GaussianNB'}). Then, a robust model was established with 5-fold cross-validation. Each modelling process was repeated 10 times and the statistical results are reported as "mean ± variance". Additionally, to test the fingerprints module and to make a further comparison, we also calculated five kinds of fingerprints and built the corresponding models. The model performance is displayed in Additional file 2: Fig. S1 and Additional file 2: Table S1.

Table 4: The current tools that can be used for SAR modelling. I and II represent classification algorithms and regression algorithms, respectively; "restricted" means that some modules of the tool are limited to the public or need the permission of the developers; "low" coupling means that the main modules of the tool can be called in the modelling pipeline and can also be used as independent tools, while "high" coupling means that they must work together to build a model.

In this way, ChemSAR can build models with several algorithms for one dataset and then make a comprehensive comparison and further analysis to identify the best prediction model for the current problem. To further evaluate the prediction ability of our models, we compared our prediction results with published models from recent papers. The latest report was in 2013 [76]: the authors built a model using the DT method with 1289 compounds, which could accurately predict 78.4/76.1/79.1% of H/M/L compounds on the training set and 78.6/71.1/77.6% on the test set. In 2011 [77], Pham et al. built a model using the linear discriminant analysis (LDA) method with 674 molecules and reported MCC = 0.62 and accuracy = 81.56% (training set) and 83.94% (test set).
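The grid-search-with-cross-validation strategy can be sketched without scikit-learn; the toy below tunes k for a 1-D k-NN classifier by 5-fold cross-validated accuracy (the dataset and parameter grid are invented for illustration).

```python
import random
from collections import Counter

def knn_predict(train, query, k):
    """1-D k-NN: majority vote over the k nearest (value, label) pairs."""
    nearest = sorted(train, key=lambda p: abs(p[0] - query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def grid_search_k(data, ks, folds=5, seed=0):
    """Pick the k with the best mean accuracy over `folds`-fold CV,
    mimicking the grid search strategy described above."""
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    scores = {}
    for k in ks:
        accs = []
        for f in range(folds):
            valid = data[f::folds]                              # held-out fold
            train = [p for i, p in enumerate(data) if i % folds != f]
            hits = sum(knn_predict(train, x, k) == y for x, y in valid)
            accs.append(hits / len(valid))
        scores[k] = sum(accs) / folds
    return max(scores, key=scores.get), scores

# Invented toy data: two well-separated 1-D classes.
data = [(float(i), 0) for i in range(10)] + [(20.0 + i, 1) for i in range(10)]
best_k, cv_scores = grid_search_k(data, ks=[1, 3, 5])
```

Real grid search (e.g., scikit-learn's GridSearchCV, as used by ChemSAR) works over arbitrary estimators and multi-dimensional parameter grids, but the loop structure is the same.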
Compared with the two models above, our model has comparable or better performance. Clearly, ChemSAR is able to obtain reliable and robust classification models for the evaluation of Caco-2 cell permeability. Comparison with other related tools For the purpose of further comparison, we studied related publications as extensively as we could and searched Google to collect related tools that possess SAR modelling functionality. A comparison based on application scenarios and functionality was then performed, as summarized in Table 4. In this table, we compared and marked several aspects of each tool, including "type", "structure preprocessing", "data preprocessing", "molecular representation", "feature selection", "model selection", "algorithm type", "charged or free" and "coupling". The results suggest that ChemSAR is strongly recommended for the multiple advantages shown in the table. Note that we cannot absolutely guarantee the accuracy of the description of each tool, because we obtained the available information mainly from the corresponding publications or documentation, and some tools are not accessible or are commercial. Also, the features of each tool evaluated here come from the tool's main framework, not from plugins provided by its user community. Conclusion In this study, we developed the ChemSAR platform as an online pipeline for building SAR classification models. It is freely accessible to the public and platform-independent, so users can access this platform via almost all types of operating systems (Linux, Microsoft Windows, Mac OS, Android) and clients (PC clients, mobile clients). The main advantages of the proposed platform are summarized as follows: (1) ChemSAR implements a complete online model-building process, which enables biomedical investigators to construct predictive models easily without suffering from tedious programming and deployment work.
(2) ChemSAR provides a comprehensive modelling pipeline by integrating six model generation steps into a unified workflow. (3) The modular design of the framework enables the six sub-modules to run independently to accomplish specific functionalities. (4) The job submission strategy allows users to query the calculation results at their convenience, and provides an essential basis for the report system to generate a clear modelling report. (5) The modular design also allows researchers to deal not only with the analysis of small molecules but also with other modelling problems in the biomedical field; for example, building a classification model based on the biochemical indicators of patients helps to study disease classification or staging. In addition, we conducted a case study to illustrate the use of this platform in practice, and several models were obtained to evaluate Caco-2 cell permeability. A major goal of cheminformatics development is to apply its techniques to the study of practical problems. The trend for future development of SAR models is towards making models publicly accessible online, interactive, and usable [78]. ChemSAR has, to some extent, made a step in this direction. It is expected that ChemSAR can be applied to a wide variety of studies wherever there is significant demand for SAR models. In the future, we will continue to implement more classification algorithms and add options for more flexible parameter control. We will also add regression algorithms if needed.
2018-04-03T00:00:40.074Z
2017-05-04T00:00:00.000
{ "year": 2017, "sha1": "899993a90c133f51692c844e7419656e46197450", "oa_license": "CCBY", "oa_url": "https://jcheminf.biomedcentral.com/track/pdf/10.1186/s13321-017-0215-1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "899993a90c133f51692c844e7419656e46197450", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
125976892
pes2o/s2orc
v3-fos-license
Experimental investigation of solid rocket motors for small sounding rockets Experimentation and research on solid rocket motors are important subjects for aerospace engineering students. However, many institutes in Thailand rarely include experiments on solid rocket motors in research projects of aerospace engineering students, mainly because of the complexity of mixing the explosive propellants. This paper focuses on the design and construction of a solid rocket motor with total impulse in the class I-J range that can be utilised as a small sounding rocket by researchers in the near future. Initially, test stands intended for measuring the pressure in the combustion chamber and the thrust of the solid rocket motor were designed and constructed. The basic design of the propellant configuration was evaluated. Several formulas and ratios of solid propellants were compared for achieving the maximum thrust. The convenience of manufacturing and casting of the fabricated solid rocket motors was a critical consideration. Motor structural analysis, such as the combustion chamber wall thickness, was also discussed. Several types of nozzles were compared and evaluated to ensure the maximum thrust of the solid rocket motors during the experiments. The theory of heat transfer analysis in the combustion chamber was discussed and compared with the experimental data. Introduction A solid rocket motor is a rocket engine that uses the combustion process to break the chemical energy bonds of the fuel and oxidiser to produce thrust for the rocket. The fuel and oxidiser are mixed together to form a well-combined mixture, while a binder binds the fuel and oxidiser. Many types of fuels and oxidisers can be used in a solid rocket motor [1].
As the solid rocket motor contains explosive material, there may be concerns regarding safety regulations and laws depending on the geographic location, which require several documents to be presented for the purchase, handling, and storage of such materials [2]. However, the study of solid rocket motors has become a fundamental subject in the field of rocket propulsion for aerospace engineering students. For that reason, many aerospace engineering institutes around the world are interested in the theoretical and experimental study of sounding rockets, as they help students to directly experience, develop, and apply their knowledge of rocket propulsion. In 2006, T. John et al. demonstrated a launch vehicle for a small satellite using a sounding rocket [3]. In mid-2013, S. Singh designed and constructed a solid rocket motor that researchers could use as the main propulsion of a sounding rocket [4]. A. Okninski et al. developed the Polish small sounding rocket as a reusable CanSat launcher in 2015 [5]. In addition, sounding rockets for picosatellite launchers and rocket competitions have been studied and reported [6,7]. This paper focuses on the development of the I-J class of solid rocket motors that use commercially available materials and chemicals for fuel and oxidiser as an alternative to avoid high cost, complex machining, and procurement challenges. It is a preliminary step for further development of solid rocket motors by Thai research students, which can enable them to design and construct rockets that reach an altitude of 1-5 km in the near future. Rocket Motor Principle A basic rocket motor comprises three main parts: head cap, combustion chamber, and nozzle. The principle of rocket motor operation is shown in figure 1. Initially, the ignition fuse is lit, causing the propellant inside the combustion chamber to burn at a rapid rate. This combustion produces hot gas, which increases the pressure in the combustion chamber.
The hot gas flows through the exit at the nozzle. Usually, the nozzle contains converging and diverging parts. The converging part has an exit area smaller than its entrance, which increases the velocity of the exhaust gas. The connecting part between the converging and diverging parts is called the throat. At this point, the velocity of the hot gas reaches the speed of sound, Mach (M) = 1. Beyond the throat, the exit area increases further to bring the hot gases to supersonic speed, as explained by the 'de Laval nozzle' [1,2]. However, the pressure decreases as the gas flows through the nozzle. The thrust is generated because of the increase in velocity of the hot gas and the pressure difference between the ambient and exit pressure at the nozzle. Thrust is given by the equation

F = ṁ v_e + (p_e − p_a) A_e

where F is the thrust (N), ṁ is the mass flow rate (kg/s), v_e is the exit velocity of the hot gas (m/s), p_e and p_a are the exit pressure at the nozzle and the ambient pressure, respectively, and A_e is the exit area of the nozzle. Rocket propellant Rocket propellant is a well-combined mixture of fuel and oxidiser. The oxidiser provides oxygen for the fuel to burn. In some cases, a binder can be used to increase the flexibility of the mixture in the casting process. The ratio of fuel and oxidiser can be altered, or the combustion chamber parameters can be changed, to obtain different burn rates. The fuel and oxidiser are commercial grade and available from local suppliers. In this study, the appropriate ratios of fuel and oxidiser were obtained as follows. Traditionally, black powder is prepared by dry mixing 15% charcoal powder, 10% sulphur powder, and 75% potassium nitrate by weight, and 1 g dextrin is added per 100 g of the mixture. The ingredients were mixed together for nearly 2 min in a plastic container to prevent electrostatic discharge. Black powder can be used as a fuse ignitor, as it burns rapidly. The maximum combustion temperature is approximately 1400 ℃ [8]. White mixed powder.
In this study, the white powder consisted of 65% potassium nitrate and 35% sugar powder by weight, mixed well together. The final mixed white powder can absorb moisture from the air, so it must be kept sealed in an airtight container if meant for later use. The combustion temperature of the white mixed powder was approximately 1200-1300 ℃. Rocket Candy or R-Candy. The propellant used in our study contained a combination of sugar syrup, potassium nitrate, and sugar powder in the ratio of 65%, 18%, and 17% by weight, respectively. In this research, the rocket candy was prepared using an electric pot, as can be seen in figure 2. Initially, the sugar syrup was heated to boiling temperature. Then, sugar powder and potassium nitrate were mixed in and the syrup was heated until it started boiling again, after which the electric pot was turned off and the hot mixture was poured into the moulds. The combustion temperature of R-Candy was approximately 1347 ℃ [9]. Material and propellant selection The propellant selection criteria were low cost, safe handling, and commercial availability. Based on these criteria, the white mixed powder was selected as the first propellant to test in the rocket motor class I, and the second propellant, prepared with R-Candy, was used for the rocket motor class J. The segments used were PVC 5 and PVC 13.5 pipes, which contained the white mixed powder and R-Candy, respectively. The end-burning configuration was prepared with white mixed powder, and the tubular grain configuration with R-Candy. The end-burning charge was quite easy to prepare in the PVC pipe; however, the tubular grain configuration was quite difficult to cast and mould, as there was a risk of the R-Candy solidifying if the process took longer than 10 min. The wooden base was 1 in thick, and the rod used for casting and moulding was ¾ in in diameter; the moulds and the PVC can be seen in figure 3.
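The weight ratios quoted above translate into batch quantities with simple arithmetic. A minimal helper, using the 65/35 white mixed powder as the example; the 100 g batch size is an assumption for illustration.

```python
# Hypothetical helper: turn weight fractions into ingredient masses for a batch.
def batch_masses(total_g, ratios):
    """ratios: dict of ingredient -> weight fraction (fractions must sum to 1)."""
    assert abs(sum(ratios.values()) - 1.0) < 1e-9
    return {name: total_g * frac for name, frac in ratios.items()}

# 100 g batch of the 65/35 white mixed powder
white = batch_masses(100.0, {"potassium nitrate": 0.65, "sugar powder": 0.35})
print(white)
```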
Rocket Motor Design The rocket motor consists of a head cap, combustion chamber, nozzle, igniter charge, and propellant grain segment, as can be seen in figure 4. Propellant grain segment using white powder Four types of rocket motor nozzles were used in this study. The first was a straight connector of length ¾ in between the internal thread and outer thread. The second was a reduction connector from ¾ to ½ in, which acted as a divergent nozzle. The third was a straight internal-thread connector with a washer acting as a throat inside the connector. The last was a straight internal-thread connector without a washer. The completely assembled rocket motors are shown in figure 5. The length of the ¾-in PVC 5 tube was 5 in and it was filled with white mixed powder propellant, with total propellant weights of 82.8, 85.5, 76.0, and 77.1 g, respectively. Then, the ¾-in head cap was connected to the PVC 5 tube and nozzle. The fuse ignitor was connected to the electrical wire and taped to the nozzle to prevent it from falling off. Propellant grain segment using Rocket Candy For this study, R-Candy with a tubular grain segment was selected, with PVC 13.5 as the propellant grain segment. The molten R-Candy was cast into the prepared segments. The total lengths of the propellant grain segments were 2.5 in, 5 in, 10 in, and 15 in. Propellant grain segments of length 2.5 in were chosen as the casting unit, as the propellant could be conveniently cast and moulded at this length. Each 2.5 in segment was then joined with electrical tape until the required lengths of 5 in, 10 in, and 15 in were obtained. Next, the grain segments were slid into the PVC 13.5 pipe. 8th TSME-International Conference on Mechanical Engineering (TSME-ICoME 2017), IOP Publishing, IOP Conf.
Series: Materials Science and Engineering 297 (2018) 012009, doi:10.1088/1757-899X/297/1/012009. The ignitor was placed at the top of the propellant segment so that it could burn rapidly, and then the head cap, outer shell, and nozzle were assembled together. Rocket motor numbers one to five were prepared with 2.5 in, 5 in, 10 in, and two 15 in propellant segments, respectively, as shown in figure 6. Nozzle The nozzle is the part that produces thrust in a rocket motor; the exhaust gases flow with increasing velocity as they pass through the throat and exit to the atmosphere. In the case of the R-Candy propellant, the nozzle was cast from concrete and a washer was placed at the throat to withstand the high temperature and pressure, as can be seen in figure 7. The mixing ratio of concrete to water was 1:0.35 by weight [10]. The ratios of the divergent part of the nozzle can be seen in figure 8. The throat diameter was 1.5 cm and the exit nozzle diameter was 9.8 cm. For simplicity, the converging part of the nozzle was not included in the measurements. Ignition fuse and ignition system Electrical ignitor switches are required for safety purposes. There are two types of switches, pre-fire and fire switches, as can be seen in figure 9. The wire from the switch was connected to the rocket motor and the other side of the switch was connected to a 12 V, 8 Ah battery. Heat transfer analysis in combustion chamber In this study, only conduction and convection were considered inside the combustion chamber; radiation was neglected for simplicity. One-dimensional heat conduction is the transfer of heat through a material (solid, liquid, or gas) and can be expressed as

q″ = k (T_h − T_c) / L

where q″ is the heat flux, k is the thermal conductivity of the material, L is the thickness of the material, and T_h and T_c are the hot-wall and cold-wall temperatures.
One-dimensional heat convection is the transfer of heat from a surface (liquid or solid) to a fluid in motion and is described by

q″ = h (T_g − T_w)

where h is the convection coefficient, and T_g and T_w are the combustion gas temperature and the wall temperature, respectively. The heat flows during conduction and convection are shown in figure 10 and figure 11, respectively. For safety reasons, the combustion chamber temperature was assumed to be approximately 2000-2500 ℃. Figure 10. 1D heat conduction. Figure 11. 1D heat convection. Static rocket motor test stand The static rocket motor test stands were made of square steel sections connected together with screws. The width, length, and height of the stand, as shown in figure 12, were 50 cm, 50 cm, and 150 cm, respectively. The test stand could be transported in a sedan and used in either the vertical or horizontal position. Its total weight was around 30 kg. The rocket motor was tightly secured to the test stand, and the thrust force was recorded using a digital scale and a camera recorder, as can be seen in figure 13. Safety regulations Safety was our first priority in this lab and we followed the safety regulations for amateur rockets [11]. In addition, a screen was used to cover the structure to prevent debris from scattering, except on one side, which was kept open to release the exhaust gases. The surrounding area was cleared and prepared before every test. Testing and Results Rocket motor numbers one to four, which used white mixed powder, provided a total impulse of approximately 100 N·s, similar to that of a class G rocket. The end-burning time was determined as approximately 30 s using the camera recorder. The initial propellant weight was approximately 78-82 g and the average thrust was around 3.2-3.6 N. The total impulse was calculated as approximately 98-110 N·s and the maximum thrust measured by the digital scale was approximately 83-103 N.
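The conduction and convection relations can be evaluated numerically. In this sketch only the PVC conductivity (0.19 W/(m K), quoted later in the text) and the assumed combustion gas temperature come from the study; the wall thickness, wall temperatures, and convection coefficient are assumed values for illustration.

```python
# 1-D conduction through the PVC wall: q'' = k*(T_h - T_c)/L
k   = 0.19      # W/(m K), PVC thermal conductivity (quoted in the text)
L   = 0.003     # m, assumed wall thickness (3 mm)
T_h = 160.0     # degC, inner wall near the PVC melting point
T_c = 30.0      # degC, assumed outer wall temperature

q_cond = k * (T_h - T_c) / L
print(f"conduction flux  = {q_cond:.0f} W/m^2")

# Convection from the combustion gas to the wall: q'' = h*(T_g - T_w)
h   = 100.0     # W/(m^2 K), assumed convection coefficient
T_g = 2000.0    # degC, assumed combustion gas temperature
q_conv = h * (T_g - T_h)
print(f"convection flux = {q_conv:.0f} W/m^2")
```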
However, the maximum thrust of rocket motor number one could not be measured, as the combustion chamber melted and developed a hole of approximately 2 mm diameter because its burning time was longer than that of the others. Based on the results, rocket motor number two provided the maximum thrust using the divergent nozzle. The heat flux was approximately 0.35-0.44 MW, assuming that the thermal conductivity of PVC is 0.19 W/(m K) and its melting point is approximately 160 ℃ [12-13]. The results for the white mixed powder rocket motors can be seen in table 1, assuming that the specific impulse (Isp) of the white mixed powder and R-Candy [1][2] is approximately 130 s. The total impulse I can be calculated as

I = Isp × Wp    (3)

where I is the total impulse, Wp is the propellant weight, and Isp is the specific impulse. The R-Candy rocket motor numbers one to four were categorised as rocket classes G, H, and I because their total impulses fell within the ranges 80-160, 160-320, and 320-640 N·s [14]. The digital scale readings were not very accurate because the thrust varied rapidly during the test, as shown in table 2. Rocket numbers 4 and 5 misfired and exploded; possible reasons include the casting process of the R-Candy, the alignment of the propellant segments, and the expansion area ratio of the nozzle. The test stand was placed in the horizontal position, as can be seen in figure 14. It can be observed in tables 1 and 2 that the total impulse increases with the mass of the propellant, as indicated by equation (3). A pressure gauge was connected, but its wide range of 0-10000 psi meant it could not detect and record data for the white mixed powder and R-Candy motors. A digital thrust measurement via data acquisition was developed to record the thrust and pressure simultaneously; however, problems with the signal from the data acquisition unit to the computer resulted in unclear data.
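Equation (3) can be checked against the quoted numbers. Here Wp is taken as the propellant weight in newtons (Wp = m_p g0), with an assumed 80 g charge and the quoted Isp of about 130 s; the burn time is the observed ~30 s.

```python
g0  = 9.80665          # standard gravity, m/s^2
m_p = 0.080            # propellant mass, kg (~80 g, within the quoted 78-82 g)
Isp = 130.0            # specific impulse, s (quoted value)

I = Isp * m_p * g0     # equation (3): total impulse, N*s
t_burn = 30.0          # observed end-burning time, s
F_avg = I / t_burn     # average thrust over the burn

print(f"total impulse  = {I:.1f} N*s")   # ~102 N*s, inside the quoted 98-110 N*s
print(f"average thrust = {F_avg:.2f} N") # ~3.4 N, inside the quoted 3.2-3.6 N
```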
Conclusions and Recommendations It was observed in this research that the white mixed powder could be used for small class G rockets; however, combustion chamber walls made of PVC 5 might not be able to sustain the high temperature and combustion durations longer than approximately 32 s for this class of rockets. The divergent nozzle is the most suitable, as it provides the maximum thrust. For rockets of higher classes, the walls need to be thicker, such as PVC 13.5, to prevent melting; in addition, an outer pipe should be used to cover the propellant segment in the interest of safety. The classification of R-Candy propellant rocket motors from classes G to I was successfully performed and recorded; however, the class J rocket motors misfired and exploded. In conclusion, the rocket motors were successfully tested and classified, and this study of the rocket motor can be used for determining the flight of rockets in the near future.
2019-04-22T13:10:51.416Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "b6266c7ee6178676bb1e5dd7f0a48cc2a7ea76ce", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1757-899X/297/1/012009/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "8ec1922a71a7c44fca859735e460d7f6a52bf853", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
135115897
pes2o/s2orc
v3-fos-license
Contribution of BeiDou satellite system for long baseline GNSS measurement in Indonesia The demand for more precise positioning methods using GNSS (Global Navigation Satellite System) in Indonesia continues to rise. The accuracy of GNSS positioning depends on the length of the baseline and the distribution of the observed satellites. The BeiDou Navigation Satellite System (BDS) is a positioning system owned by China that operates in the Asia-Pacific region, including Indonesia. This research aims to find out the contribution of BDS to increasing the accuracy of long baseline static positioning in Indonesia. The contributions are assessed by comparing the accuracy of measurements using only GPS (Global Positioning System) with measurements using the combination of GPS and BDS. The data used are 5 days of GPS and BDS measurement data for a baseline 120 km in length. The software used is the open-source RTKLIB and the commercial software Compass Solution. This research explains in detail the contribution of BDS to positional accuracy in long baseline static GNSS measurement. Introduction The Global Navigation Satellite System (GNSS) has been widely used for positioning applications, such as infrastructure, deformation monitoring and atmospheric monitoring. Currently, GNSS consists of more than one satellite constellation: there are three global satellite constellations (GPS, GLONASS and Galileo) and one rapidly developing regional satellite constellation, the BeiDou Navigation Satellite System (BDS). By combining the GPS and BDS constellations, the number of observed satellites increases, together with the accuracy and precision of the estimated position, for short baseline GNSS data processing [1,2]. In general, with the addition of frequencies and GNSS constellation satellites, it is possible to improve reliable ambiguity resolution, reduce various error sources and improve positioning accuracy and precision [3,4,5,6,7].
In Indonesia, GNSS is mainly used to establish the control point network. The control point network is divided into 5 classes, Orde-0 to Orde-4. Orde-0 and Orde-1 are maintained by Badan Informasi Geospasial (BIG), while Orde-2 to Orde-4 are maintained by Badan Pertanahan Nasional (BPN). These classes are mainly divided by baseline length: the baseline lengths for Orde-0, 1, 2, 3 and 4 are 200 to 1000 km, 100 to 200 km, 10 to 15 km, 1 to 2 km and up to 500 m, respectively [8]. These control point networks are represented by benchmarks or monuments whose coordinates serve as reference points. Control point networks should be distributed evenly; however, in Indonesia they are not, especially outside Java Island (figure 1). Therefore, the baselines to the nearest reference points can be very long. Establishing control points by the traverse method is time-consuming, so GNSS methods can overcome that problem. GNSS data processing for longer baselines requires a specific processing strategy, which can be carried out using scientific GNSS processing software. Orbital, ionosphere and troposphere errors can be resolved by scientific GNSS processing software; meanwhile, GNSS data processing for longer baselines using commercial GNSS processing software should be done carefully. In long baseline GNSS data processing, the un-differenced biases in orbital, ionosphere and troposphere errors are the main factors behind poor position solutions [10]. This research aims to investigate the contribution of BDS to GNSS long baseline data processing in Indonesia. Data and Method An experimental observation was carried out in Bandung and Jakarta (figure 3) with a baseline length of up to 120 km. ITB1, located in Bandung, was used as the reference point, while A001, located in Jakarta, was used as the rover.
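The Orde classes quoted above can be captured in a small hypothetical helper that maps a baseline length to its class. The boundaries follow the text; lengths falling between the stated ranges are left unclassified.

```python
# Hypothetical helper: map a baseline length (km) to the Indonesian control
# point class (Orde-0 ... Orde-4) using the ranges quoted in the text.
def orde_class(baseline_km):
    if 200 <= baseline_km <= 1000:
        return "Orde-0"
    if 100 <= baseline_km < 200:
        return "Orde-1"
    if 10 <= baseline_km <= 15:
        return "Orde-2"
    if 1 <= baseline_km <= 2:
        return "Orde-3"
    if 0 < baseline_km <= 0.5:
        return "Orde-4"
    return None  # length falls between the stated ranges

print(orde_class(120))  # the 120 km baseline in this study -> 'Orde-1'
```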
Both ITB1 and A001 are Continuously Operating Reference Stations (CORS) that can observe BDS data. ITB1 used a ComNav M300 GNSS receiver while A001 used a CHC N72 (figure 2). Geometry of the Satellites Better satellite geometry leads to better position accuracy and precision. Satellite geometry is assessed by satellite visibility and the distribution of the satellites. The quality of the satellite geometry is quantified by the value of the Dilution of Precision (DOP). DOP is divided into several terms, namely vertical DOP (VDOP), horizontal DOP (HDOP), position DOP (PDOP) and time DOP (TDOP); these terms can be combined in the geometric DOP (GDOP). In general, GDOP is defined based on the user-equivalent range error (UERE), which is the standard deviation of the satellite's pseudorange error at the determined position. GDOP can be defined as follows [11]:

GDOP = sqrt(σx² + σy² + σz² + σctb²) / σUERE

where σx, σy, σz are the standard deviations of the three-dimensional position and σctb is the standard deviation of the clock timing expressed as a distance at the specified location (u). Better satellite geometry is represented by a low DOP value, which leads to better accuracy and precision. Good and poor satellite-receiver geometry are illustrated in figure 4. GNSS Positioning Algorithm The GNSS positioning algorithm for relative positioning used the double difference (DD) method. The DD uses carrier phase ranges and is constructed by differencing two single difference (SD) GNSS observations.
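The DOP terms can be computed directly from the satellite geometry. A sketch in which the design matrix A has one row [ex, ey, ez, 1] per satellite (unit line-of-sight components plus the clock column) and the cofactor matrix is Q = (AᵀA)⁻¹, so that GDOP = sqrt(trace(Q)) in units of the UERE. The two example geometries are synthetic.

```python
import numpy as np

def dops(unit_vectors):
    """DOP values from per-satellite unit line-of-sight vectors."""
    A = np.hstack([np.asarray(unit_vectors), np.ones((len(unit_vectors), 1))])
    Q = np.linalg.inv(A.T @ A)                     # cofactor matrix
    gdop = np.sqrt(np.trace(Q))
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    hdop = np.sqrt(Q[0, 0] + Q[1, 1])
    vdop = np.sqrt(Q[2, 2])
    tdop = np.sqrt(Q[3, 3])
    return gdop, pdop, hdop, vdop, tdop

# four well-spread satellites (one at zenith, three spaced 120 deg at low elevation)
spread = [(0, 0, 1), (0.9, 0, 0.44), (-0.45, 0.78, 0.44), (-0.45, -0.78, 0.44)]
# four satellites clustered near the zenith -> poor geometry
clustered = [(0.30, 0, 0.95), (0, 0.30, 0.95), (-0.30, 0, 0.95), (0, 0, 1.0)]

print("spread    GDOP = %.2f" % dops(spread)[0])
print("clustered GDOP = %.2f" % dops(clustered)[0])  # much larger -> worse geometry
```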
The basic GNSS observation equations are [12]:

Pi = ρ + dρ + c(dt − dT) + dtrop + dion + MPi + ϑPi
Li = ρ + dρ + c(dt − dT) + dtrop − dion + λi Ni + MLi + ϑLi

where Pi and Li are the pseudorange and carrier phase range on the selected frequency (i = 1, 2), ρ is the geometrical range from receiver to satellite, dρ is the orbital error, dtrop and dion are the troposphere and ionosphere biases, c is the speed of light (299,792,458 m/s), dt and dT are the time errors of the receiver and satellite, MPi and MLi are the multipath errors of the pseudorange and carrier phase range, λi and Ni are the wavelength and ambiguity number, and ϑPi and ϑLi are the noise errors. The SD can be described as follows:

Δ(·)AB^j = (·)A^j − (·)B^j

where Δ is the difference between receivers A and B and the superscript j denotes the observed satellite. The satellite clock error is eliminated by taking the single difference between receivers that observe the same satellite, while atmospheric biases such as the tropospheric and ionospheric delays may be eliminated depending on the length of the baseline. Multipath is considered a noise error that cannot be eliminated. The remaining receiver clock error is eliminated by subtracting two SD observations. Mathematically, a DD is defined as follows:

∇Δ(·)AB^jk = Δ(·)AB^j − Δ(·)AB^k

where the two satellites are denoted by the superscripts j and k. The atmospheric biases are negligible in this equation for short baselines; however, for a longer baseline the atmospheric biases should be handled carefully. This research used RTKLib and Tersus Geomatic Office (TGO) to process the kinematic and static positions, respectively. RTKLib [13] is an open-source program package for satellite-based positioning and consists of a portable program library supporting GPS and other constellations. TGO is an integrated GNSS processing software used to manage baseline processing and network adjustment. The processing strategy was designed as follows:
- The data were processed with a cutoff angle of 15°.
- The orbit used was the broadcast ephemeris.
- The troposphere model was Saastamoinen.
- The ionosphere model was taken from the broadcast ephemeris.
Figure 5 shows the number of observed satellites along with the DOP values for GPS, BDS and combined GPS-BDS, respectively. At least 10 satellites of BDS could be observed throughout, while at least 7 satellites of GPS could be observed; the numbers of GPS and BDS observed satellites varied from 7 to 11 and 10 to 14 satellites, respectively. Although BDS gave a larger number of observed satellites than GPS, the DOP value for BDS was worse than that of GPS: the geometry of the BDS satellites encloses a smaller volume than that of the GPS satellites. This is due to the 5 geostationary satellites of BDS, which cluster the BDS geometry along the horizontal axis (figure 6). The combination of GPS and BDS significantly increases the number of observed satellites, up to 25. This leads to a better GDOP compared with GPS-only observation; the GDOP improved from 1.694 to 1.195. Figure 7 shows the daily solutions of the 5 days of GNSS observations. The precision of the daily solution is improved by using combined GPS-BDS observations, reaching cm level compared with GPS-only observations. Kinematic solutions were also analysed in this research; figure 8 shows the kinematic epoch-wise solution from each system. Unlike the daily solution, the combined solution gives a poor result at around 2017-6-1 21:00 UTC. This is likely due to propagated error from the GPS observations; hence, further investigation is required. In general, the BDS-only solution gives better precision than the GPS-only solution. Result and Discussion The need for high positioning accuracy and precision in Indonesia cannot be neglected. In line with the rapid growth of positioning systems, BDS will significantly help to achieve high-accuracy and high-precision positioning.
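The single- and double-difference combinations in the equations above can be sketched numerically; a toy one-epoch example with synthetic ranges and clock errors, showing that the satellite clock term cancels in the single difference and the receiver clock term cancels in the double difference. All numbers are made up for illustration.

```python
# SD between receivers A and B for one satellite; DD between two SDs.
def single_diff(obs_a, obs_b, sat):
    return obs_a[sat] - obs_b[sat]

def double_diff(obs_a, obs_b, sat_j, sat_k):
    return single_diff(obs_a, obs_b, sat_j) - single_diff(obs_a, obs_b, sat_k)

# synthetic geometric ranges (m) for two receivers and two satellites
geom = {("A", "G01"): 20_100_000.0, ("A", "G07"): 22_300_000.0,
        ("B", "G01"): 20_100_050.0, ("B", "G07"): 22_300_030.0}
clk_rcv = {"A": 12.0, "B": -7.5}      # receiver clock errors, m
clk_sat = {"G01": 3.2, "G07": -1.1}   # satellite clock errors, m

# observations = geometric range + receiver clock - satellite clock
obs = {r: {s: geom[(r, s)] + clk_rcv[r] - clk_sat[s] for s in ("G01", "G07")}
       for r in ("A", "B")}

dd = double_diff(obs["A"], obs["B"], "G01", "G07")
geom_dd = (geom[("A", "G01")] - geom[("B", "G01")]) \
        - (geom[("A", "G07")] - geom[("B", "G07")])
print(dd, geom_dd)  # equal: both clock terms cancel in the double difference
```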
The accuracy and precision of the combined GPS-BDS daily solution are within 5 cm for the horizontal position and 15 cm for the vertical position; however, further research investigating the utilization of BDS in Indonesia is still needed.
2019-04-27T13:09:48.687Z
2018-05-01T00:00:00.000
{ "year": 2018, "sha1": "0f1d52ac46a0c294be9ad0a4cbd4fb829e08c451", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/149/1/012070", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "f592074d3e3205b796d6454c882ad13685106e49", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
270513742
pes2o/s2orc
v3-fos-license
Patient activation in adults with visual impairment: a study of related factors This study aims to analyze variables related to patient activation in 78 individuals with visual impairment. The Patient Activation Measure (PAM) scores of participants showed no differences between males and females. Individuals living in urban areas and participants with higher income and education levels had higher PAM scores, but the differences between the groups were statistically insignificant (p > 0.05). The PAM scores of the visually impaired individuals corresponded to the "taking action" level of activation (66.51 ± 18.14; PAM level 3). There was a moderately significant relationship between PAM scores and the visually impaired individuals' self-management, self-efficacy, healthy life awareness, social relations, and environment (p < 0.001). We found that the variables included in the regression model (marital status, self-management, self-efficacy, healthy life awareness, social relations, and environment) explained 72.2% of the variance in PAM score. Individuals with visual impairment can be given training on self-management, self-efficacy, healthy life awareness, and quality of life associated with social relations and environment to develop positive health behaviors.
Introduction Visual impairment is a chronic condition whose prevalence increases with age [1], and it is considered a public health problem that impacts communities both socially and financially. According to World Health Organization (WHO) reports, the global prevalence of visual impairment is approximately 285 million people, of whom approximately 39 million are blind. Patient activation describes a patient who participates in their self-care [12][13][14]: having the knowledge, skill and confidence to self-manage disease symptoms and health problems, participate in activities that maintain or improve functioning, and be an active participant in one's own health care [11,13]. Therefore, patients' involvement in decision-making about their own health plays a leading role in successful self-management and health promotion [12]. The effect of vision loss on activation varies with many factors. Age of onset of vision loss, duration of life with vision loss, and the patient's perception of the impact of visual impairment, rather than clinical measures, may be more important factors in understanding the significance of vision loss for activation [15,16]. Accepting the disease requires acceptance of low vision and confidence in living with limitations and adapting to them. It has been suggested that the adaptive and maladaptive cognitions demonstrated in chronic disease states may be important for fully understanding individual differences in adaptation to chronic diseases. Acceptance refers to accepting the need to adapt to a chronic illness while perceiving the ability to tolerate the unpredictable, uncontrollable nature of the illness and cope with its negative consequences [17]. A body of evidence demonstrates that greater acceptance of visual impairment is associated with improved psychological adaptation [18][19][20]. Moreover, the patient's interpretation, perception and evaluation of their illness as an individual, as well as their emotional and behavioural reactions,
are the factors that determine the patient's way of coping, the development of psychosocial difficulties and psychiatric disorders, and the quality of life [21]. Owsley et al. have shown that, even though most of the comments of individuals with age-related macular degeneration (AMD) were negative, the number of these negative comments was not related to disease severity [22]. These data indicate that how individuals with vision loss react to the medical condition is more significant than the degree of vision loss. Vision loss is characterized by its progressive nature and negative impact on daily life. As a result, patients may encounter persistent psychological stress stemming from anxiety or fear, in addition to secondary repercussions including social isolation and depression [23]. Prolonged mental stress can exacerbate vision loss, even though it is an obvious consequence of the condition. In this regard, it is possible to assert that stress is both a catalyst and a consequence of visual impairment. This psychosomatic perspective holds significant value in advocating for clinical practices that involve coping mechanisms, self-management, awareness, and access to social and environmental support for those who are experiencing vision loss [24,25].
Prevention of complications and a favourable prognosis are possible with effective management of chronic diseases. According to the active patient concept as defined by Hibbard et al., the individual believes that they have an important role in self-management, cooperates with supportive people, maintains their health, and knows how to manage their condition, protect their functions, and prevent regression in health status [11]. In addition, the individual has the ability to behaviourally maintain their current state, cooperate with the health team, maintain and protect health functions, and access high-quality care appropriately [12,13]. When patients are actively engaged in their self-care, care experiences and outcomes improve [26][27][28].

Evidence on patient activation is critically important, especially in individuals with chronic diseases, because these individuals need to follow complex treatment regimens, monitor their condition, make lifestyle changes, and be decision-makers in their care. Such evidence is a prerequisite for the extensive adoption and implementation of strategies to support greater activation in patients. Although many studies have investigated patient activation in many different chronic diseases [29][30][31], the ability of individuals with vision loss to participate actively in the management of their health care has not been extensively studied, and the evidence on this subject is insufficient [16,32]. Given the scarcity of studies on this subject, we aimed to examine the variables related to patient activation in adults with visual impairment. We suggest that the results of the study will guide the diagnosis and rehabilitation process of the disease in individuals with visual impairment.
Methods

The study was performed according to the guidelines of the Declaration of Helsinki. The study was approved by the University of Health Sciences Gülhane Scientific Research Ethics Committee (2021 − 400/25.11.2021). Individuals who applied to the Ulucanlar Eye Training and Research Hospital Ophthalmology Unit and met the inclusion criteria were referred to the study. The interviews were held between January and April 2022. All participants who met the inclusion criteria and gave informed consent completed the face-to-face interview. The scale items were read aloud, and the participants were asked to choose the option that they found most suitable. The answers given by the participants were recorded by the researchers.

Participants

The study group consisted of individuals over the age of 18. Having a visual acuity worse than 20/40 and having a diagnosis causing visual impairment were the inclusion criteria. Individuals with any psychological disorders or communication problems were excluded from the study.

Instruments

A socio-demographic information form, the Patient Activation Measure (PAM), the Self-Control and Self-Management Scale (SCMS), the General Self-Efficacy Scale (GSES), the Healthy Life Awareness Scale (HLAS) and the World Health Organization Quality of Life Assessment (WHOQOL-BREF) were applied to the individuals participating in the study.

The sociodemographic form included questions such as the participants' age, gender, marital status, education level, place of residence, income level, and age at the onset of vision loss. In addition, the participants' vision loss rates and general disability rates included in the physician committee report issued by the Ministry of Health were also recorded.

Patient activation measure (PAM)

PAM was developed by Hibbard et al. in 2004 [13] in patients with chronic disease in order to detect and evaluate the patient activation level, and in 2005 [11], Hibbard et al.
studied a short version of the scale in a patient group with a chronic disease. PAM is a valid, highly reliable, one-dimensional, Guttman-type scale. The original scale has 22 items, but in this study, we used the Turkish version, which consists of 13 items. The scale scoring system is as follows: Strongly Agree = 4 points, Agree = 3 points, Disagree = 2 points, Strongly Disagree = 1 point, and I Don't Know/Can't Evaluate = 0 points. The activation scores obtained from the measurement tool range from 0 to 100 points. The results were interpreted as: Level 1 = lowest activation (belief in the importance of taking an active role): ≤ 47.0; Level 2 (knowledge and confidence to take action): 47.1-55.1; Level 3 (taking action): 55.2-72.4; Level 4 = highest activation (keeping routines even under stress): ≥ 72.5. The Cronbach's alpha internal consistency coefficient of the original scale is 0.91, while it is 0.81 for the Turkish version [13,33].

Self-control and self-management scale (SCMS)

The Self-Control and Self-Management Scale was developed by Mezo in 2008 [34], and the Turkish testing of the scale's validity and reliability was performed by Ercoşkun in 2016 [35]. The SCMS is an adult self-assessment tool that was developed to measure the general characteristics of self-control and self-management skills. It has a cognitive and behavioural structure and was successfully applied and evaluated during the scale development stage [34,35]. The SCMS is a process-oriented scale that independently evaluates each of the three components of the self-management structure [36,37]. It consists of three sub-dimensions: Self-Reinforcing (SR), Self-Evaluating (SE) and Self-Monitoring (SM). The total score ranges from 0 to 80. A high score indicates high self-management and self-control [35].
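The level cut-offs above can be expressed as a small helper function. This is a sketch only; the decimal boundaries follow the standard PAM-13 scoring convention that the text paraphrases, and the function name is hypothetical:

```python
def pam_level(score: float) -> int:
    """Map a 0-100 PAM activation score to its activation level (1-4).

    Cut-offs follow the conventional PAM-13 thresholds paraphrased in the
    text: <= 47.0 -> level 1, 47.1-55.1 -> level 2, 55.2-72.4 -> level 3,
    >= 72.5 -> level 4.
    """
    if not 0 <= score <= 100:
        raise ValueError("PAM scores range from 0 to 100")
    if score <= 47.0:
        return 1
    if score <= 55.1:
        return 2
    if score <= 72.4:
        return 3
    return 4
```

For example, the sample mean reported later in the paper (66.51) falls in level 3, "taking action", consistent with the abstract.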
General self-efficacy scale (GSES)

The General Self-Efficacy Scale (Turkish form) is a valid and reliable tool that measures the general self-efficacy of people aged 18 and older who are at least primary school graduates. In the study, the Likert-type scale (consisting of five different responses ranging from 'Not at All' to 'Exactly True') was used for the question 'How well does this describe you?' The score of each question is between 1 and 5. Items 2, 4, 5, 6, 7, 10, 11, 12, 14, 16 and 17 of the scale are reverse scored. The total score of the scale ranges between 17 and 85. A higher score indicates greater self-efficacy [38].

The healthy life awareness scale (HLAS)

The HLAS is a 15-item, five-point, Likert-type scale. The lowest score is 15, and the highest score is 75. A high score on the scale indicates a high level of healthy living awareness. Cronbach's alpha value was 0.813, the test-retest reliability coefficient was determined to be 0.849, and the scale was proven to be highly reliable [39].

World Health Organization Quality of Life Assessment (WHOQOL-BREF)

The health-related quality of life scale is a scale developed by the WHO [40] that measures a person's well-being and allows cross-cultural comparisons. Eser et al. [41] tested its Turkish validity and reliability. The scale measures physical, spiritual, social and environmental well-being and consists of 26 questions. Each domain expresses the quality of life in its own area, independently of the others. As the score increases, the quality of life increases. Cronbach's alpha internal consistency coefficients of the scale were 0.76 in the physical health dimension, 0.67 in the psychological health dimension, 0.56 in the social relations dimension and 0.74 in the environmental dimension. Test-retest reliability varies between 0.51 and 0.81 [42]. The social relations and environment sub-dimensions were used in our research.
Sample size

At the beginning of the study, the number of participants was determined using the G*Power (version 3.1.9.4) package programme. The sample size was calculated according to multivariate linear regression analysis with six variables predicting the PAM score, which was the primary variable. With a Cohen's f² effect size index of 0.21, a type I error rate of 0.05 and a power of 0.80, the minimum sample size was determined as 76 individuals with vision loss.

Statistical methods

For descriptive statistics, mean ± standard deviation was used for continuous data, and frequency and percentage were used for categorical data. Conformity of continuous data to the normal distribution was checked by the Kolmogorov-Smirnov test and graphical analysis (box plot, Q-Q plot). The difference between two groups was evaluated using the t-test for independent groups with normal distribution, and with the Mann-Whitney U test for those without normal distribution. The distribution of scale scores in groups of three or more was evaluated with one-way ANOVA for those with normal distribution, and with the Kruskal-Wallis test for those without normal distribution.
Multiple linear regression analysis was performed with the Enter method to obtain the estimation model. Among the linear regression assumptions, conformity to the normal distribution was examined by the Kolmogorov-Smirnov test, and the linearity of relationships was examined by scatter plots. The adequacy of the model was evaluated by multicollinearity Variance Inflation Factor (VIF) analysis. Autocorrelation between errors was analysed by the Durbin-Watson (D-W) test, influential observations were analysed by the Covariance Ratio, and distant observations were analysed by Cook's distance. Homoscedasticity of variance, normal distribution of errors, and extremely distant and outlier observations were examined with residual plots [43]. In case of multicollinearity among the independent variables included in the model, Ridge Regression (RR), one of the biased regression techniques, was used instead of ordinary least squares (OLS) regression.

Ridge regression analysis was performed using the NCSS (version 21.0.3) package program. The IBM SPSS 21 (IBM SPSS Inc., Chicago, IL) program was used for all other analyses. The statistical significance level was taken as 0.05 [44].
Results

There was no statistically significant difference between the mean PAM scores of female and male participants (U = 641.50, p = 0.475). The distribution of PAM scores by education level was evaluated using the Kruskal-Wallis test. The mean score of those with undergraduate and graduate education (68.82 ± 16.90) was higher than that of those with primary and high school education (61.42 ± 21.76 and 66.83 ± 16.10, respectively). However, the difference was not significant (χ2 = 2.045, p = 0.360). The distribution of scale scores by income level was evaluated using the Kruskal-Wallis test. The mean score (71.49 ± 14.95) of those with an income above and twice the minimum wage (n = 26) was higher than that of the other income groups, but the difference was not statistically significant (χ2 = 3.989, p = 0.136). Mean PAM scores of individuals living in urban and smaller residential areas were compared using the Student's t-test. The mean score of those living in urban areas (67.81 ± 17.63) was higher than that of those living in smaller residential areas (61.06 ± 19.82), but the difference was not statistically significant (t = 1.302, p = 0.197) (Table 1).

There was a moderate and statistically significant positive correlation between the PAM score (66.51 ± 18.14) and SCMS_SM (24.29 ± 5.96), total SCMS (61.75 ± 12.73), total HLAS (58.72 ± 9.57) and total GSES scores (p = 0.000). There was a weak and statistically significant positive correlation between the PAM score and the remaining scale scores, while the correlations between the PAM score and the rate of vision loss, disability rate, age, and age of onset of vision loss were very weak and statistically insignificant (Table 1). The linearity of the relationship between PAM and the other variables was examined by scatter plot. We found that the relationships between SCMS_SM, SCMS_SE, SCMS_SR and SCMS_Total and PAM were far from linear. Therefore, we applied a square transformation to these variables before including them in the multiple linear regression analysis (based on the ordinary least squares (OLS) method) [45].
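The nonparametric comparisons reported above (Mann-Whitney U for two groups, Kruskal-Wallis for three or more) follow a standard SciPy pattern. The scores below are synthetic, hypothetical stand-ins for the study data, generated only to show the calls:

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

rng = np.random.default_rng(3)
# Hypothetical PAM scores for two gender groups and three education groups
pam_female = rng.normal(65, 18, size=40)
pam_male = rng.normal(67, 18, size=38)
edu_groups = [rng.normal(61, 21, 30), rng.normal(67, 16, 28), rng.normal(69, 17, 20)]

u_stat, u_p = mannwhitneyu(pam_female, pam_male)   # two independent groups
h_stat, h_p = kruskal(*edu_groups)                 # three or more groups

print(round(u_stat, 1), round(u_p, 3), round(h_stat, 3), round(h_p, 3))
```

The same pattern, with a prior normality check (e.g. `scipy.stats.kstest`), reproduces the decision rule described in the statistical methods section.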
Table 2 shows that, based on residual plots, the errors were normally distributed for the model obtained by ordinary least squares (OLS) regression analysis. We established a model that predicts PAM scores with the Enter method. There was no heteroscedasticity problem and no correlation between errors (autocorrelation) (D-W = 1.79). The outliers were analysed with standardised residuals, influential observations with the Covariance Ratio, and distant observations with Cook's distance values. Two extreme observations (observations 25 and 64) were excluded from the dataset [46]. Table 2 also shows the VIF and Condition Index (CI) values, which are two important indicators of multicollinearity. We found that the VIF values of the SCMS_total, SCMS_SM, SCMS_SR and SCMS_SE variables were 385.026, 58.257, 80.378 and 64.709, respectively; these values, being greater than 10, indicate multicollinearity problems. In order to determine the presence and degree of multicollinearity, Vinod and Ullah proposed the condition index (CI) based on the largest and smallest eigenvalues [47]. If the CI is smaller than 10, there is no multicollinearity issue; if the CI is between 10 and 30, there is a multicollinearity issue; and if the CI is greater than 30, there is a severe multicollinearity issue [45]. The CI value was 2707.33, indicating severe multicollinearity.
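The condition index used above can be computed directly from the eigenvalues of the column-scaled cross-product matrix. A minimal sketch with deliberately collinear, hypothetical predictors (one column is nearly the sum of the other two) shows the CI crossing the severe-multicollinearity threshold:

```python
import numpy as np

def condition_index(X):
    """CI = sqrt(largest / smallest eigenvalue) of the scaled X'X matrix."""
    Xs = X / np.linalg.norm(X, axis=0)           # unit-length columns
    eig = np.linalg.eigvalsh(Xs.T @ Xs)
    return float(np.sqrt(eig.max() / eig.min()))

rng = np.random.default_rng(1)
a = rng.normal(size=100)
b = rng.normal(size=100)
total = a + b + rng.normal(scale=1e-3, size=100)  # near-exact linear combination
X = np.column_stack([a, b, total])

print(round(condition_index(X), 1))  # far above 30: severe multicollinearity
```

Dropping the redundant `total` column brings the CI back below 10, mirroring the rule of thumb cited in the text.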
Multicollinearity causes the standard errors of the regression coefficients to be high and the t-statistic values to be small, and therefore leads to the wrong conclusion that the contribution of the variables to the regression model is insignificant. It causes the multiple linear regression obtained with the OLS method to produce unreliable results, and statistical inferences based on it become less reliable. Therefore, we used Ridge Regression (RR) analysis, developed by Hoerl and Kennard [48], which is a more effective method in this setting. Ridge estimators were calculated to eliminate the multicollinearity problem and obtain estimators with small variance [49].

The stability of the estimations made with the RR method depends on the determination of the optimum value for Ƙ. In order to determine the optimum value of the ridge parameter Ƙ, we used the Ridge Trace method proposed by Hoerl and Kennard [48] and chose the smallest possible Ƙ value as the optimum in the region where the regression coefficients become stable. According to the ridge trace plot and the Variance Inflation Factor plot obtained as a result of the ridge regression analysis, we found that the regression coefficients became stable after a very small bias constant (Ƙ = 0.02). The value Ƙ = 0.02 also corresponds to the situation where the VIF values, as suggested by Marquardt and Snee [50], are between 1 and 10. Table 3 presents the ridge regression coefficients and standard errors, the least squares coefficients and standard errors, and R² and standard errors for the bias constant Ƙ = 0.02. The model established by the ridge regression was statistically significant (F(9,66) = 19.031, p < 0.001). The established model explains 72.2% of the variation in the PAM score.
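The ridge estimator behind Table 3 can be sketched in NumPy on standardized data. The synthetic, nearly collinear predictors below are hypothetical stand-ins for the SCMS variables; the loop imitates a crude ridge trace, showing how the coefficients shrink and stabilize as the bias constant Ƙ grows:

```python
import numpy as np

def ridge_coefs(X, y, k):
    """Standardized ridge estimator: (R + kI)^-1 r_xy, with R the correlation matrix."""
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    n, p = Xs.shape
    R = Xs.T @ Xs / n          # correlation matrix of the predictors
    r_xy = Xs.T @ ys / n       # correlations of predictors with the outcome
    return np.linalg.solve(R + k * np.eye(p), r_xy)

rng = np.random.default_rng(2)
n = 76
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # nearly collinear pair
y = 2 * x1 + x2 + rng.normal(size=n)
X = np.column_stack([x1, x2])

# A crude ridge trace: watch the coefficients settle as k increases
for k in (0.0, 0.01, 0.02, 0.1):
    print(k, np.round(ridge_coefs(X, y, k), 3))
```

At k = 0 this reduces to OLS, where the near-collinearity inflates the coefficients; even a small bias constant, like the Ƙ = 0.02 chosen in the paper, pulls them toward stable values at the cost of a little bias.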
Based on the results in Table 3, marital status, self-management, self-efficacy, healthy life awareness, and the social relations and environment domains of quality of life were significant predictors of the PAM score.

Discussion

The results of this study provide information about the factors that may affect patient activation in visually impaired individuals. Our study shows that marital status, self-management, self-efficacy, healthy life awareness, and quality of life related to social relationships and the environment have a significant effect on patient activation. Individuals with high levels of self-management, self-efficacy and healthy life awareness, positive social relations and a supportive environment can avoid unhelpful automatic thoughts and habits. They exhibit conscious behaviours in terms of maintaining and improving health, and they can achieve better results by accessing health services [51]. The PAM scores of participants according to sociodemographic variables did not show any differences between visually impaired men and women. Studies on differences in patient activation between men and women with chronic diseases show conflicting results [16,[52][53][54][55]. While some studies show higher levels of patient activation in men [45,52], others do not show any difference between men and women in terms of patient activation [16,53,55]. In our study, we observed that as the education level of the participants increased, PAM scores increased. The literature shows that individuals with chronic diseases and a better education have higher levels of activation [56,57]. In our study, the PAM scores of the participants with higher income levels were also higher. In addition, individuals living in urban areas had higher PAM scores than those living in smaller residential areas, but there was no statistically significant difference between the groups. Studies have shown that patient activation is only moderately related to socioeconomic status, and that education and income account for less than 5-6% of the variation in patient activation [55,58]. Given the considerable potential for promoting activation and enhancing health outcomes among patients from low socioeconomic status,
activation-promoting strategies may prove to be especially efficacious [58]. Patient activation is also affected by complex factors, such as quality of life, well-being and self-efficacy [59,60]. However, not all of these studies have examined some important factors that could influence the association between sociodemographic variables and patient activation. Therefore, these factors may confound the impact of sociodemographic variables on patient activation. In our study, the PAM scores of the visually impaired individuals showed better activation values (66.51 ± 18.14), compared to the values (58.5 ± 15.0) found in the research conducted by Morse and Seiple [16]. When an individual has basic knowledge of their own condition and treatment and has some experience and success in changing behaviour, they begin to take action, but there may be a lack of confidence and skills to support the new behaviours [11]. When the action phase is supported by brief interventions, individuals' current activation levels can change positively. Encouraging patients to ask more questions can make a difference in their information-seeking behaviour [12]. Physical symptoms, environmental stimuli and the media are among the factors that influence individuals' protective health behaviours when taking action [51]. Using these factors correctly may contribute to the development of the activation levels of visually impaired individuals.
Age of onset of vision loss, the time lived with vision loss, and the patient's perception of the impact of vision loss (rather than clinical measures) may be key factors in understanding the importance of vision loss on activation [16]. Although the majority of comments on the impact of vision loss in patients with AMD were unfavourable, these negative remarks were not correlated with the severity of the disease [22]. In addition, Morse and Seiple have shown that there was no relationship between visual acuity and patient activation in visually impaired individuals [16]. In the current study, the rate of vision loss, total disability rate, and the age of onset of vision loss were recorded for the visually impaired individuals, and, similar to Morse and Seiple's study [16], there was no relationship between these variables and PAM scores. Our research studied the correlation between marital status, a social variable, and PAM. The findings revealed a positive association between being married and high PAM levels. Being married can be thought of as a form of social support in a chronic condition [61,62]. From this perspective, it may have turned into a positive life situation in terms of patient activation.

In our study, there was a moderately significant relationship between the self-management, self-efficacy, healthy life awareness, and quality of life related to social relationships and the environment of visually impaired individuals and their PAM scores. Van Do et al.
have reported that low patient activation levels were associated with low self-efficacy, poor knowledge of heart failure, and low engagement in heart failure self-management behaviours after being discharged from the hospital [63]. In Social Cognitive Theory, self-management is an important factor in self-efficacy, skills and behaviour change [64]. Self-management is defined as the patient's knowledge, skills, abilities and willingness to manage their own health and care [65]. Increased self-efficacy can help patients gain more control over their health outcomes and alleviate some of their concerns about vision loss [66]. One way to approach self-management is to activate the patient to participate in their own care. Patient activation is one of the important steps in addressing self-management and self-efficacy needs in the best possible way for individuals with chronic conditions [67]. Patient activation affects activities of daily living and self-management, and patient activation awareness, knowledge and skills can help improve health care outcomes [60,63]. In our study, we found that the variables included in the regression model explained 72.2% of the PAM score. According to our model, activation can be explained by self-management, self-efficacy and healthy life awareness. Previous research has shown that greater activation is associated with more knowledge of one's condition [28]. We believe that knowing the level of activation in visually impaired individuals will shape self-management programmes in the management of chronic health conditions, and this can improve health outcomes by ensuring patient-specific planning of the programmes.
A comprehensive understanding of the social context surrounding patient activation in chronic diseases has significant ramifications for the development of interventions targeted at enhancing the self-management behaviour associated with patient activation, as well as for the overall health and well-being of individuals with chronic diseases. Social support is crucial for preserving optimal bodily and mental health [68]. Overall, research suggests that having strong, supportive social networks can help people become more resilient to stress, guard against the emergence of trauma-related psychopathology, lessen the functional effects of trauma-related disorders like post-traumatic stress disorder (PTSD), and lower their risk of illness and death [19,69,70]. Our research found a correlation between the patient activation levels of individuals with vision loss and social relationships as a sub-dimension of quality of life. Additionally, social relations were shown to be significant in the developed regression model. According to these findings, we believe that family members and friends can assist in the self-management of vision loss by offering intermittent guidance, tangible support that aids in self-management, emotional support, and hands-on assistance. Data suggest that support tailored to a specific condition improves health outcomes compared to more generic support. Thus, it may be postulated that when it comes to managing chronic diseases, assistance that is tailored to the individual condition or treatment regimen may have a more pronounced effect on self-management behaviour than more generic forms of support [69]. Coping with visual impairment and striving to preserve autonomy in everyday tasks can be an extremely difficult experience. The capacity to cultivate novel personal resources to offset the impairment resulting from visual loss is predominantly contingent upon the efficacy of psychological adaptation. Psychological adaptation to
vision loss refers to the cognitive and behavioural process by which an individual effectively adjusts to the challenges and constraints brought about by the loss of vision [19]. Individuals are highly susceptible to emotional distress and social isolation throughout the adaptation process; consequently, they may develop psychological issues including depression, anxiety, and sleep disorders. Frequently, these patients' psychological issues serve as an additional burden of disability, impeding their ability to be active and reintegrate into society [71,72]. The perception of social support is prominent among the determinants correlated with enhanced adaptation [19]. Interventions designed to increase the support of the patient's friends and family, in addition to the establishment of community peer support groups, can also strengthen social support. Social support from family and acquaintances has a substantial effect on psychological well-being and adaptation to vision loss [24]. Social support can indirectly influence self-management by enhancing self-efficacy. Additional consequences may potentially arise through alternative psychological mechanisms. Being part of a supportive social network can have positive impacts on motivation, coping mechanisms, and psychological well-being. Highly motivated individuals, who have strong morale or experience less depression, may participate in situations linked to their illness [73][74][75] and exhibit more activation.
As noted in Social Cognitive Theory, the engagement of individuals with chronic illnesses in the self-management process occurs within a context that includes formal healthcare providers, informal social network members, and the physical environment (e.g., housing, air quality). All of these contextual factors have the potential to significantly influence self-management behaviour, either directly or indirectly through self-efficacy [74]. The Personalized Patient Activation and Empowerment Model (P-PAE) is a comprehensive concept that encompasses several aspects such as social and physical settings, patients, healthcare professionals, communities, and the broader healthcare delivery system. The model prioritizes patients as the focal point of the system and employs patient-centred outcome research theory to elucidate how individualized patient activation and empowerment may be achieved [76]. Furthermore, the environment encompasses a wide range of elements that might either impede or facilitate patients' engagement, performance, and entitlement to preserve their dignity [77]. For example, the International Classification of Functioning, Disability, and Health (ICF) defines environmental factors as "those that constitute the physical, social, and behavioural environment in which people live and lead their lives."
The impact of the environment on an individual's life and ability to function depends on the degree of support or demand (e.g., accessibility, usability) that the physical environment presents [78]. For example, person-environment adaptation theories explain that the adequacy of the fit between a person's functional abilities and their environment can affect the person's level of independence, participation, and overall health and well-being [77]. In our research, it was seen that the patient activation levels of clients with vision loss were related to the environment sub-dimension of quality of life and that the environment was an effective factor in the established regression model. The ICF, which is also accepted by the World Health Organization (WHO), states that environmental factors are very important for patient health outcomes. Therefore, health professionals should integrate environmental factors into their assessments and goal-setting to encourage patient participation [78]. Moreover, the current trend towards short-term hospital stays and ongoing rehabilitation and care at home for people with complex health problems requires greater involvement of the environment in health-related communication throughout the care process. Small changes in the physical environment can have large effects on behaviour and can be used in environmental, self-management, and chronic disease research [79].
Patient activation can be considered the operationalisation of the concepts of patient empowerment and patient self-efficacy, which have been the focus of chronic condition management in recent years. It also provides additional benefits in terms of more effective self-care and tailored services, as well as greater efficiency [80]. Globally, there is a growing awareness that patients need to become more active and effective managers of their health and health care as part of strategies to improve health care quality [58]. The increasing prevalence and duration of visual impairment cause an increased burden of self-management and the need for more support for visually impaired individuals and their families [81]. For this reason, activities related to patient activation are extremely important for visually impaired individuals. It is critical for visually impaired individuals to have access to education and supportive interventions in order to increase their skills and confidence in managing their own health problems, to gain awareness of healthy living, self-management and self-efficacy, and to enable their families to acquire health-promoting behaviours [82]. Programmes planned to develop patient activation in visually impaired individuals should also address medical management (disease information, drug management, etc.), role management (management of daily family and work-related functions) and emotion management (stress management, problem solving and adaptation skills, etc.)
[81,83,84]. In addition, it is very important to encourage social support, such as peer support, inter-patient support and coaching support, that contributes to improved self-management behaviour [19,85]. Finally, these programmes to improve patient activation should focus on the priorities of the visually impaired client; that is, they should be delivered individually and prioritise concepts like self-management, self-efficacy, healthy living awareness, and supportive social relations and environment, in light of the findings of our study.

Strengths and limitations

Our study is the first in which variables including self-management, self-efficacy, healthy life awareness, and quality of life related to social relationships and the environment in visually impaired individuals are comprehensively examined together. The study was conducted in a secondary health care centre where outpatient and inpatient diagnosis, treatment and rehabilitation services are provided. Individuals with vision loss from many provinces of Turkey apply to this centre, which increases the generalisability of our study results. However, participants recruited from a health service may not fully reflect the characteristics of the visually impaired population, which also includes individuals who do not receive health care, so the sample may be biased in terms of activation level in favour of those who seek health services. The use of self-reported questionnaires for assessments may result in misclassification due to socially desirable responses. Although our study shows the relationship between the investigated variables and activation, it has a cross-sectional design that does not allow the evaluation of temporal effects or potential causality. In order to better understand the effects of the factors affecting patient activation over time, longitudinal studies are needed. Even though PAM is extensively used for chronic health conditions, it is limited to the perceived self-assessment of the
patient's ability to manage self-care, rather than a direct measurement of self-management behaviour itself [86]. In addition, patient activation may require situation-specific knowledge and skills related to vision loss. Even though PAM was used in a study by Morse and Seiple [16] to measure activation in individuals with visual impairment (item reliability was 0.88, and person reliability was 0.86), whether it is an appropriate measure for visually impaired individuals remains open for further research.

Conclusion

This study showed that, according to PAM, visually impaired individuals are activated at the level of taking action, and that marital status, self-management, self-efficacy, health awareness and quality of life related to social relationships and the environment greatly affect patient activation. These results reflect the importance of addressing self-management, self-efficacy, health awareness and quality of life related to social relationships and the environment in achieving better health outcomes in visually impaired individuals. However, although including patient activation in health care services for visually impaired individuals has the potential to improve health behaviours and thereby quality of life with the existing chronic condition, additional evidence is needed to better understand the role of patient activation in this population.
Table 1 Descriptive statistics of PAM total scores, socio-demographic and other characteristics

A very weak correlation was found between the total PAM score and the rate of vision loss (86.44 ± 17.16; r = 0.092, p = 0.42), the rate of disability (80.52 ± 24.60; r = 0.076, p = 0.511) and age (44.87 ± 16.33; r = 0.044, p = 0.702), and the correlation between the total PAM score and the age of onset of vision loss (11.27 ± 18.41) was also statistically insignificant (r = -0.086, p = 0.456). A Point Biserial correlation analysis between the PAM score and marital status found a strong (0.716) and statistically significant relationship (p < 0.001). Since no relationship was found between the participants' age, vision loss rate, disability rate or age at onset of vision loss and PAM, only marital status among the sociodemographic variables was included in the regression model.

Table 2 Ordinary least squares (OLS) regression analysis for total PAM score. SCMS: self-control and self-management scale; SE: self-evaluating; SM: self-monitoring; SR: self-reinforcing; GSES: general self-efficacy scale; HLAS: healthy life awareness scale; WHOQOL-BREF: World Health Organization Quality of Life Assessment

In Table 3, the OLS regression model of the factors that can affect the PAM total score for the value of Ƙ = 0.02 was established as:
A Novel Deep Learning Method to Predict Lung Cancer Long-Term Survival With Biological Knowledge Incorporated Gene Expression Images and Clinical Data

Lung cancer is the leading cause of cancer deaths. Therefore, predicting the survival status of lung cancer patients is of great value. However, the existing methods mainly depend on statistical machine learning (ML) algorithms and are not appropriate for high-dimensional genomics data; deep learning (DL), with its strong capability for learning from high-dimensional data, can be used to predict lung cancer survival using genomics data. The Cancer Genome Atlas (TCGA) is a rich database that contains many kinds of genomics data for 33 cancer types. With this enormous amount of data, researchers can analyze key factors related to cancer therapy. This paper proposes a novel method to predict lung cancer long-term survival using gene expression data from TCGA. Firstly, we select the genes most relevant to the target problem with a supervised feature selection method, the mutual information selector. Secondly, we propose a method to convert gene expression data into two kinds of images with KEGG BRITE and KEGG Pathway data incorporated, so that a convolutional neural network (CNN) model can be used to learn high-level features. Afterwards, we design a CNN-based DL model and add two kinds of clinical data to improve the performance, finally obtaining a multimodal DL model. The generalization experiments indicate that our method performs much better than the ML models and unimodal DL models. Furthermore, we conduct survival analysis and observe that our model better divides the samples into high-risk and low-risk groups.

INTRODUCTION

As lung cancer is still a major contributor to cancer deaths, predicting lung cancer survival plays an important role in lung cancer precision medicine.
Precision medicine is a novel kind of therapy that sprang up with the development of high-throughput sequencing technology and computer-aided treatment. It describes diseases in more detail through genomics and other technologies, so that clinicians can identify more precisely targeted subgroups for therapies (Ashley, 2016); survival prediction is one of the key components of precision medicine. Recent years have witnessed the burgeoning of sequencing data generation in the context of next-generation sequencing technology. RNA-Seq (Wang et al., 2009) was developed for profiling the transcriptome using deep-sequencing technologies, which can describe transcripts far more precisely, and a large amount of gene expression data has been generated since its development. As a result of this explosively increasing gene expression data, cancer analysis and prediction using gene expression data, such as cancer survival prediction and cancer subtype prediction, have become hot spots in biomedical research. Many machine-learning-based analysis methods have been proposed, such as survival trees (Gordon and Olshen, 1985), Bayesian methods (Fard et al., 2016), and artificial neural networks (ANNs) (Faraggi and Simon, 1995), so that pathological cancer analysis can be done at the molecular level and in a big-data setting. Given that patients with the same disease may still respond differently to a specific therapy (Sharma and Rani, 2021), analyzing and dividing patients with the same disease according to their molecular-level features has the potential to improve diagnosis accuracy. In this paper, what we do can also be seen as dividing samples into different groups by predicted survival status according to their gene expression data. There are many classical machine learning (ML) methods that have been widely used for cancer prediction and analysis.
For example, the Cox proportional hazards model relates the survival distribution to covariates under a proportional hazards assumption in a linear-like manner (Fox and Weisberg, 2002). The support vector machine (SVM) is a supervised ML algorithm that can be nicely summed up by (1) the separating hyperplane, (2) the maximum margin, (3) the soft margin, and (4) the kernel function (Noble, 2006). SVM has been used extensively by bioinformatics practitioners due to its powerful classification capability, for example in gene selection for cancer classification (Guyon et al., 2002) and cancer survival prediction. Besides regression problems such as survival regression analysis and classification problems such as cancer classification noted above, unsupervised learning problems for complex objects with heterogeneous features are also ubiquitous and important in real-world applications (Ma and Zhang, 2019). For instance, some researchers leveraged clustering, an unsupervised ML algorithm, to predict survival and surgical outcomes from gene expression data and obtained reliable results (Wang et al., 2017). Although ML algorithms are endowed with a natural ability to learn patterns automatically from data, they have some shortcomings. One of the greatest Achilles' heels of classic ML methods is their strong dependence on how the data are represented: the classification performance of an ML model is closely related to the quality and relevance of the features. Deep learning (DL), as part of the ML family, emerged to address this issue by automatically learning feature representations during training, thereby forming an end-to-end learning pipeline (Eraslan et al., 2019). Its unique compatibility with GPUs greatly facilitated the development of DL, because GPUs offer much higher computing performance than CPUs at similar prices.
For the past few years, many bioinformaticians have explored the combination of bioinformatics and DL. For instance, DeepBind, proposed in 2015, leveraged the convolutional neural network (CNN) to predict the sequence specificities of DNA- and RNA-binding proteins from sequencing data, and the results showed that it outperformed other state-of-the-art methods (Alipanahi et al., 2015). Since then, the use of DL methods in bioinformatics has increased rapidly. Many novel DL models have been applied in bioinformatics research with great performance, such as the CNN noted above, LSTM (Lamurias et al., 2019), the deep autoencoder (Chicco et al., 2014), and the knowledge graph (Sousa et al., 2020). Survival prediction builds an association between covariates and the time of an event. The covariates can be clinical information (for example, sex, cancer type, tumor stage, and age), genomics data, or medical images; the time of event can be the time to death (overall survival, OS), the progression-free survival time (PFS), the disease-free survival (DFS), or the disease-specific survival (DSS). The canonical survival prediction methods are mainly statistical ML algorithms such as the Cox proportional hazards regression noted above, the Kaplan-Meier estimator (Bland and Altman, 1998), and random survival forests (Ishwaran et al., 2008). Survival prediction plays an important role in bioinformatics research, and some researchers have tried to leverage the strong learning ability of DL for predicting survival patterns, such as DeepSurv (Katzman et al., 2018) and Cox-nnet (Ching et al., 2018).
While DL methods have been widely used in recent years, they sometimes struggle with cancer survival prediction from genomics data due to the curse of dimensionality (Altman and Krzywinski, 2018): in cancer survival analysis and prediction problems, we usually have a small number of samples, namely the patients, yet each sample has fairly high-dimensional features (for example, genes). Furthermore, gene expression data are heterogeneous and noisy, and many genes may be irrelevant to the target problem. All of these factors can disorient DL algorithms and make them prone to overfitting. To address this "high dimensionality, few samples" issue in cancer survival prediction, we design a DL method for cancer survival prediction. Firstly, we propose a method to convert patients' gene expression data into two kinds of gene expression images, the first with KEGG BRITE (Kanehisa and Goto, 2000) gene functional information incorporated and the second with KEGG Pathway information incorporated, to overcome the curse of dimensionality. Then we propose a multimodal DL model with the two kinds of gene expression images and clinical data as inputs, to perform lung cancer long-term (60-month OS) survival prediction. Experiments on lung cancer data showed that our method achieved much better AUC results (average AUC up to 71.48% on the TCGA (Chang et al., 2013) lung cancer data set and 72.51% on the GEO (Barrett et al., 2012) data set GSE37745, over 50 repeated experiments) than unimodal DL models and ML models. Survival analysis was conducted to further prove the prediction capability of our model.

DL Applications in Survival Prediction

The canonical statistical ML algorithms usually use the clinical information mentioned above as covariates to make predictions.
To get the most from high-throughput genomics data and medical image data, many deep-learning-based methods have been proposed for survival prediction. We review the literature on DL applications in survival prediction below; the more refined branch of this work, using CNNs with gene expression data, is reviewed in the next subsection. Travers et al. proposed Cox-nnet (Ching et al., 2018), an ANN that takes high-throughput omics data as input; the hidden node features learned by the neural network layers serve as dimension-reduced omics features, and a Cox regression layer is added to perform the final prognosis prediction. Compared with Cox regression, Cox-nnet could reveal more relevant biological information. Katzman et al. proposed DeepSurv (Katzman et al., 2018) for survival analysis; its architecture consists of several neural network layers and a linear output layer, with clinical data as input. DeepSurv predicts the hazard ratio at a specific time, making it a DL survival prediction model subject to the Cox proportional hazards assumption; results showed that it outperformed the Cox regression model. Arya and Saha (2021) proposed a multimodal DL method for breast cancer survival prediction using genomics data, histopathology images, and clinical data. Their model was a gated attentive DL model with a stacked random forest classifier, which yielded a significant enhancement in sensitivity scores for breast cancer survival prediction. Panagiotis et al. proposed to mine the MGMT methylation status from MR images; they used a pretrained ResNet-50, a 50-layer residual network, for transfer learning, and it outperformed ResNet-18 and ResNet-34 (Korfiatis et al., 2017). Sairam et al.
proposed pan-renal cell carcinoma classification and survival prediction from histopathology images using CNNs and achieved good classification accuracy (Tabibu et al., 2019).

Using CNN With Gene Expression Data

CNN (Lawrence et al., 1997) is a kind of DL algorithm. In particular, CNNs using 2-D convolution kernels can be seen as tailor-made models for learning image representations; they can perform multiple computer vision tasks, such as image classification, face recognition, video recognition, image segmentation, and medical image processing. A canonical CNN usually has an input layer for loading the images, followed by hidden layers for image representation learning and, at the end, an output layer for making predictions. The hidden layers are mainly composed of (1) convolution layers, which convolve the input, (2) pooling layers, which reduce the dimensions of the data delivered by the convolution layers, and (3) fully connected layers, which learn the representations used for the final prediction. In the past decade, CNNs have made remarkable achievements, and a cornucopia of great CNN-based models have been proposed, such as LeNet (LeCun et al., 1989), AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan and Zisserman, 2014), GoogleNet (Szegedy et al., 2015), and ResNet (He et al., 2016). Training a CNN model with gene expression data may seem unworkable at first glance, because unlike the pixels in image data, which are ordered, gene expression data are much noisier and unordered. To tackle this, some researchers have committed to rearranging gene expression data and using them for CNN-based prediction. Lyu et al.
proposed the first model to convert gene expression data into images and perform cancer-type classification with a CNN (Lyu and Haque, 2018); they rearranged the normalized RNA-Seq counts into a matrix according to the genes' relative positions on their chromosomes, and their model achieved an accuracy of up to 0.9559. Ma et al. proposed OmicsMapNet, which transforms gene expression data into images by constructing a treemap graph from the genes' functional annotations in the KEGG BRITE dataset, with a CNN used for prediction (Ma and Zhang, 2018). Guillermo et al. also proposed a method to rearrange gene expression data into images via a treemap and the KEGG BRITE dataset (López-García et al., 2020), but with a distinction from OmicsMapNet: the area of each functional branch in the treemap is determined by the gene expression levels in that branch, which makes the image more representative of gene expression values. They used a CNN to predict 230-day lung cancer progression-free survival (LUAD and LUSC), with transfer learning added to increase performance; results showed that their method outperformed the ML algorithms and the multilayer perceptron (MLP). Sharma et al. (2019) proposed DeepInsight, a novel method in which the feature vector, such as gene expression values, is first embedded into two dimensions by methods such as kPCA and t-SNE, and the resulting scatter plot is then cropped to the smallest rectangle containing all the data points to obtain the final image; their method performed well on classification tasks using CNNs. Bazgir et al. (2020) proposed a method to transform features into images based on their neighborhood dependencies, with a CNN used for drug resistance prediction. Oh et al. (2021) proposed PathCNN, which uses multi-omics data and pathway data to predict 2-year OS for glioblastoma (GBM); they first convert the multi-omics data into images with 146 pathways.
Then they leveraged a CNN for 2-year OS prediction and obtained an average AUC of up to 75.5% for GBM.

MATERIALS AND METHODS

In this section, we first describe the data sets we chose and the process of feature selection; we then introduce our proposed method to convert the selected genes into gene expression images with KEGG BRITE and KEGG Pathway data incorporated, respectively (Figures 1A and 1C); finally, we present our multimodal DL model for 60-month lung cancer OS prediction (Figure 1B). An overview of the workflow is shown in Figure 1. The implementation of our method is available at https://github.com/PPDPQ/Lung-cancer-long-term-survivalprediction.

Data Descriptions

In this paper, we used the TCGA Pan-Cancer dataset (Chang et al., 2013; Tomczak et al., 2015) downloaded from the UCSC Xena data browser. From this data set, 1,122 lung cancer (LUAD and LUSC) samples were selected; their gene expression data and clinical data were separated from the Pan-Cancer dataset, and 471 samples that had all the data we need were retained for our research. To check the generalization performance of our model, we used a data set from the GEO database (Barrett et al., 2012), and we used tidyverse (Wickham et al., 2019) to get the KEGG Pathway data and to map pathways to genes. The general statistics for the included data sets are shown in Table 1.

Feature Selection

After separating the lung cancer data from the Pan-Cancer dataset, we performed feature selection on the lung cancer gene expression data based on mutual information (MI). There are 60,498 gene expression values (log2(TPM + 0.001)-transformed values) for each TCGA lung cancer sample (ENSEMBL (Zerbino et al., 2018) provides different IDs for a gene that maps to different chromosomes) and 20,356 genes' expression values for each sample in GSE37745.
First of all, we kept the genes that appear in both the TCGA samples and the GSE37745 samples, obtaining 18,975 genes. Then we performed feature selection on the TCGA samples. We first removed the genes with variance below an assigned threshold, set to 10 in this research, leaving 3,053 genes for further selection. We then split the data into a train set (80% of the samples) and a test set (20% of the samples) and calculated the MI scores between genes and the labels on the train set; the labels were in keeping with our target problem, namely whether the sample survived after 60 months. The MI between two variables X and Y can be calculated as follows:

I(X; Y) = Σ_x Σ_y p(x, y) log [ p(x, y) / ( p(x) p(y) ) ]

where p(x, y) is the joint probability density of variables X and Y, and p(x) and p(y) are the marginal densities. We can observe that X and Y are completely unrelated when p(x, y) is equal to p(x)p(y), in which case the MI score is zero. Here X is the gene expression values, and Y is the target, 0 or 1, indicating whether the sample survived after 60 months. We then chose the top K genes according to their MI scores; we tested the prediction performance for different values of K and finally selected K = 1,000 for further data conversion. In fact, 1,000 is roughly the same magnitude as the number of lung cancer samples, which means the model will not be prone to overfitting in terms of feature dimensionality.

Converting Gene Expression Data Into Images

With the 1,000 selected genes, we propose a multi-index-sorting-based method to convert gene expression data into images, with biological knowledge incorporated.

Gene Expression Image Using KEGG BRITE

The overview of the process to convert gene expression data into images using KEGG BRITE data is shown in Figure 1A. Firstly, we mapped the KEGG BRITE IDs to KEGG gene IDs, the KEGG gene IDs to HUGO gene names, and finally the HUGO gene names to ENSEMBL gene IDs.
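The variance filtering and MI-based top-K selection described in this Feature Selection subsection can be sketched with scikit-learn. The expression matrix below is random toy data (not TCGA values); the variance threshold of 10 and the 80/20 split follow the text, while K is reduced to fit the toy matrix.

```python
# Sketch of the feature selection pipeline: variance filter, then
# mutual-information top-K gene selection on the training split only.
import numpy as np
from sklearn.feature_selection import VarianceThreshold, SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=4.0, size=(200, 500))  # toy samples x genes matrix
y = rng.integers(0, 2, size=200)                     # 1 = died within 60 months

# Step 1: drop low-variance genes (threshold = 10 in the paper).
vt = VarianceThreshold(threshold=10)
X_var = vt.fit_transform(X)

# Step 2: score the remaining genes by MI with the label on the train set.
X_tr, X_te, y_tr, y_te = train_test_split(X_var, y, test_size=0.2, random_state=126)
K = 50  # the paper uses K = 1,000; smaller here for the toy matrix
selector = SelectKBest(score_func=mutual_info_classif, k=K)
selector.fit(X_tr, y_tr)

# Step 3: keep only the top-K genes in both splits.
X_tr_sel = selector.transform(X_tr)
X_te_sel = selector.transform(X_te)
print(X_tr_sel.shape, X_te_sel.shape)
```

Fitting the selector only on the training split, as the paper does, avoids leaking test-set label information into the gene ranking.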
After the above work was done, we had successfully bridged the gap between the gene expression data and the KEGG BRITE data, obtaining hierarchical data with genes and proteins as the root and gene expression values as the leaves. We used these hierarchical data for multi-index sorting: within each subclass at the leaf level, the genes were arranged according to their average expression level across all the lung cancer samples. The rearranged genes were filled into a square matrix, and Min-Max scaling was used to transform the gene expression values into the range 0 to 1 for feeding into the convolution layer. The Min-Max process is defined by

X_scaled = (X - X_min) / (X_max - X_min)

where X denotes the expression values of a gene over all samples, and X_min and X_max denote the minimum and maximum expression values of this gene, respectively.

Gene Expression Image Using the KEGG Pathway

The overview of the process to convert gene expression data into images using the KEGG Pathway data is shown in Figure 1C. We implemented this process in R: first, we used KEGGREST to obtain the human KEGG pathways and their Entrez gene IDs, and then we mapped the Entrez IDs to HUGO gene names and ENSEMBL gene IDs using the R package org.Hs.eg.db. With the generated mappings between genes and pathways, the same multi-index sorting, genes-to-image rearrangement, and Min-Max normalization as above were carried out.

Multimodal DL Model

To make good use of the generated gene expression images to predict lung cancer long-term survival, we propose a multimodal DL model that exploits the multimodal data to achieve a good result.

Model Construction

The model contains four input layers: two take the gene expression images, namely the KEGG BRITE image and the KEGG Pathway image, and the other two take clinical data, the age at initial pathological diagnosis and the AJCC pathological tumor stage.
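The per-gene Min-Max rescaling and genes-to-image fill described in the image construction steps above can be sketched as follows. This is a simplified stand-in: the KEGG-based multi-index sort is replaced by a plain sort on mean expression, and zero-padding the tail of the square matrix is an assumption not spelled out in the paper.

```python
# Minimal sketch of the Min-Max scaling and square-matrix fill used to
# turn a samples-x-genes matrix into per-sample gene expression images.
import numpy as np

def genes_to_image(X):
    """X: samples x genes expression matrix -> samples x side x side images."""
    # Per-gene Min-Max: (X - X_min) / (X_max - X_min), as in the paper.
    X_min, X_max = X.min(axis=0), X.max(axis=0)
    X_scaled = (X - X_min) / np.where(X_max > X_min, X_max - X_min, 1.0)

    # Stand-in for the KEGG multi-index sort: order genes by mean expression.
    order = np.argsort(X_scaled.mean(axis=0))[::-1]
    X_sorted = X_scaled[:, order]

    # Fill into the smallest square matrix, zero-padding the tail (assumed).
    n_samples, n_genes = X_sorted.shape
    side = int(np.ceil(np.sqrt(n_genes)))
    padded = np.zeros((n_samples, side * side))
    padded[:, :n_genes] = X_sorted
    return padded.reshape(n_samples, side, side)

rng = np.random.default_rng(0)
imgs = genes_to_image(rng.normal(size=(10, 1000)))
print(imgs.shape)  # 1,000 genes fit into 32 x 32 images
```

With the paper's 1,000 selected genes, this yields one 32 x 32 single-channel image per sample, suitable for a 2-D convolution layer.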
Because the AJCC pathological tumor stage is non-numeric, we encoded the stages by adding five per stage from Stage I to Stage IV to leverage the data. For the two gene expression images fed into the model, two convolution modules with similar structures were constructed to learn representations of the two images; the detailed structure of the convolution module is shown in Figure 1D. Each convolution module contains two Conv blocks, each consisting of (1) a convolution layer for sparsely learning representations from the input features, (2) a max-pooling layer for representation dimensionality reduction, and (3) a batch normalization layer for preventing overfitting. After the two stacked Conv blocks, a fully connected layer was added to integrate the learned representations of all the filters. The generated representations of the two images were then concatenated and flattened, and the two clinical inputs were concatenated in. A set of fully connected layers was then added to learn the integrated representations of these four kinds of features, and a sigmoid layer was used for the final prediction. Thus, our lung cancer long-term prediction task can be seen as a classification task in which the model uses four kinds of input data to predict whether a sample survived after 60 months. The four inputs are as follows:

Gene-expression-image-BRITE: The gene expression image constructed from gene expression data and the KEGG BRITE hierarchical gene function data.

Gene-expression-image-Pathway: The gene expression image constructed from gene expression data and the KEGG Pathway data.

Age-at-initial-pathological-diagnosis: The sample's age when diagnosed with lung cancer; one of the two kinds of clinical data.

AJCC-pathological-tumor-stage: A stage value given by the AJCC staging system (Edge and Compton, 2010), which describes the amount and spread of cancer in a patient's body.
This is the other of the two kinds of clinical data. We encoded the stages by adding five per stage from Stage I to Stage IV, which means we encoded Stage I as 5 and Stage IB as 10, and the other stages were encoded by that analogy.

Model Hyperparameter Searching With Bayesian Optimization and Grid Search

To obtain the best model for the proposed architecture, we leveraged Bayesian optimization to search for the best hyperparameters. Bayesian optimization (Snoek et al., 2012) uses Bayes' theorem to guide the search for the minimum or maximum of an objective function; this paper used it to search for the set of hyperparameters with the maximum AUC score. Regarding train, test, and validation sets, we used only one train-test split for hyperparameter searching and then used another 50 different train-test splits for computing the generalized performance scores. To avoid data leakage, in each of the 50 experiments we created a model with only the hyperparameters fixed; all trainable parameters were re-initialized and trained on that experiment's own train set. In other words, for each model we set the hyperparameters only once, using one train-test split, and then used that set of hyperparameters for the other 50 train-test splits. We used this strategy to display the generalization power of our model. All the DL-based models obtained their hyperparameters from 100 Bayesian optimization search trials, and all the ML models obtained theirs from Grid Search. The hyperparameters we searched are listed in Table 2. All the DL models in the paper have the same depth and similar structure; the only difference is their number of inputs. For the ML models, we leveraged Grid Search, which takes all the hyperparameter combinations in the search space into consideration.
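The AJCC stage encoding described above (adding five per stage, with Stage I as 5 and Stage IB as 10) can be sketched as a simple lookup. The exact ordering of the substages beyond the two examples given is not spelled out in the paper, so the STAGE_ORDER list below is an assumption for illustration.

```python
# Hedged sketch of the "add five per stage" AJCC stage encoding.
# The substage ordering below is assumed, matching the two stated examples.
STAGE_ORDER = [
    "Stage I", "Stage IB",
    "Stage II", "Stage IIA", "Stage IIB",
    "Stage III", "Stage IIIA", "Stage IIIB",
    "Stage IV",
]

def encode_stage(stage: str) -> int:
    """Map an AJCC pathological tumor stage string to 5, 10, 15, ..."""
    return 5 * (STAGE_ORDER.index(stage) + 1)

print(encode_stage("Stage I"), encode_stage("Stage IB"))  # 5 10
```

This turns the categorical stage into an ordinal numeric feature that preserves the progression from early to late stages, which is the stated motivation for the encoding.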
The search spaces and search results for all the DL and ML models are provided as a table in the Supplementary Material.

EXPERIMENTS AND RESULTS

In this section, we present a number of experiments to show the performance of our multimodal DL model. Firstly, we tested the effectiveness of the two proposed methods that convert gene expression data into gene expression images on lung cancer long-term survival prediction. Secondly, we showed that feeding the two kinds of images into one DL model simultaneously improves prediction performance. Thirdly, we tested the effectiveness of each of the two kinds of clinical data. Finally, we compared our model with five ML models to show our model's remarkable performance, and we conducted independent validation on the GSE37745 data set. The results are shown in Table 3.

Experiments Settings

In this subsection, we introduce the experiments implemented in this paper.

Lung Cancer Long-Term Survival Prediction Experiments on the TCGA Lung Cancer Dataset

To prove the prediction power of the DL model, we used six DL models with similar structures but different inputs. These six DL models were used to evaluate the effectiveness of the two kinds of gene expression images and the two kinds of clinical data, and five ML models were used to show that the DL models are better.
The models used in this paper are introduced as follows:

DL-Four-Inputs: A DL model with two kinds of gene expression images and two kinds of clinical data as inputs; this model was used to show the best performance of our method.

DL-Three-Inputs-Age: A DL model with two kinds of gene expression images and the age at initial pathological diagnosis as inputs; this model was used to show the effectiveness of the clinical variable age.

DL-Three-Inputs-Stage: A DL model with two kinds of gene expression images and the AJCC pathological tumor stage as inputs; this model was used to show the effectiveness of the clinical variable tumor stage.

DL-Two-Inputs: A DL model with two kinds of gene expression images as inputs; this model aimed to indicate that using both kinds of gene expression images simultaneously helps the DL model achieve better results.

DL-One-Input-BRITE: A DL model with only the KEGG BRITE gene expression image as input; this model was used to show that the KEGG BRITE gene expression image with the DL model beats all the ML models, validating the effectiveness of our DL algorithm and this image formation method.

DL-One-Input-Pathway: A DL model with only the KEGG Pathway gene expression image as input; this model was used to show that the KEGG Pathway gene expression image with the DL model beats all the ML models, validating the effectiveness of our DL algorithm and this image formation method.

KNN: An ML model using the K-nearest-neighbor algorithm (Laaksonen and Oja, 1996).

SVM: An ML model using the support vector machine algorithm (Noble, 2006).

Random-Forest: An ML model using the random forest algorithm (Biau and Scornet, 2016).

Logistic-Regression: An ML model using the logistic regression algorithm (Wright, 1995).

MLP: An ML model using the multilayer perceptron, a kind of feedforward ANN (Pal and Mitra, 1992).
Survival Analysis on the TCGA Lung Cancer Data Set

To perceive more directly the prediction performance of our best DL model without clinical data, namely the two-input DL model, we conducted Kaplan-Meier survival analysis on the two-input model and the five ML models. Firstly, for all the models, we fixed the data shuffling random state to the same value (the random seed was set to 126 in this paper) to ensure that all the models made predictions on the same test data set. Then we let the trained models make predictions on the test set. Finally, for each model we separated the samples in the test set into two groups: a high-risk group with samples whose predicted values are larger than the optimal threshold selected with Youden's J statistic, and a low-risk group with samples whose predicted values are smaller than that threshold. We compared the analysis results using the log-rank test (Bland and Altman, 2004); the analysis of the six models can be seen in Figure 6. We also implemented a Cox-PH analysis (Fox and Weisberg, 2002). To remove the influence of other factors such as age, we selected only the DL model without any clinical input, namely the two-input DL model, so that the only remaining factor was the 1,000 genes we selected. We then created a binary variable: if the sample was predicted dead, the variable's value was 1; otherwise it was 0. Finally, we conducted a univariate Cox-PH analysis using this binary variable. The hazard ratio of each model was then calculated; the results are shown in Table 4.

TABLE 3 | Results of the five average metric scores from 50 different train-test-split experiments (mean ± SD) on the TCGA lung cancer data set. The accuracy, precision, recall, and f1-score were calculated with the optimal threshold selected using Youden's J statistic. The bold values are the highest among all the models.
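The Youden's J threshold selection used above to split test samples into high-risk and low-risk groups can be sketched with scikit-learn's ROC utilities. The predicted scores below are toy values; in the paper they come from the trained two-input DL model.

```python
# Sketch of Youden's J optimal-threshold selection: J = TPR - FPR,
# and the chosen threshold maximises J over the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

y_true  = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1])
y_score = np.array([0.1, 0.2, 0.3, 0.35, 0.4, 0.6, 0.7, 0.45, 0.8, 0.9])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                    # Youden's J statistic at each threshold
best = thresholds[np.argmax(j)]  # threshold maximising J

high_risk = y_score >= best      # samples assigned to the high-risk group
print(best, int(high_risk.sum()))
```

Samples with predicted values above the selected threshold form the high-risk group and the rest the low-risk group, which the paper then compares with the log-rank test.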
Generalization Performance Validation on the Independent Data Set
It is important to show the generalization ability of the model, so we conducted an independent test on data from a different platform. We chose a data set from the GEO database with accession number GSE37745; in total, 195 samples were included in our test experiments. The gene expression data in the TCGA database are obtained by RNA-Seq, while the gene expression data in the GEO database are obtained through ChIP-Seq (Park, 2009). The different sequencing technologies make the gene expression data in these two databases different; hence, if our proposed method is successful on the GEO database, we can argue that our method generalizes. We implemented all the experiments in the same way as on the TCGA lung cancer data set, and the results can be seen in Table 5.
Sample Selection and Split
For lung cancer long-term survival prediction, we chose the samples according to the OS time and OS event in their clinical data: if a sample had an OS time longer than 60 months, we labeled it 0; if a sample had an OS time shorter than 60 months and an OS event equal to 1, we labeled it 1; samples meeting neither condition were removed. We then also removed the samples that lacked the two kinds of clinical data. The samples removed for meeting neither labeling condition had no event occurring but an OS time of less than 60 months, so we could not label them and therefore could not use them for training. Finally, we obtained 471 samples from the TCGA lung cancer data set and 195 samples from the GEO data set with accession number GSE37745. In the TCGA lung cancer data set, 26% of the samples survived after 60 months and 74% did not. In the GEO GSE37745 data set, 42% of the samples survived after 60 months and 58% did not.
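The labeling rule above can be sketched directly. This is a minimal illustration; the example (time, event) pairs are invented, and the exact handling of an OS time equal to exactly 60 months is not specified in the text.

```python
def label_sample(os_time_months, os_event):
    """Return 1 (died within 60 months), 0 (survived past 60 months),
    or None (censored before 60 months -- cannot be labeled)."""
    if os_time_months > 60:
        return 0
    if os_time_months < 60 and os_event == 1:
        return 1
    return None  # censored early: excluded from training

# Hypothetical (OS time in months, OS event) pairs.
samples = [(72, 0), (30, 1), (40, 0), (61, 1)]
labels = [label_sample(t, e) for t, e in samples]
print(labels)  # -> [0, 1, None, 0]
```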
Then, we split the samples into 50 different train sets and corresponding test sets, in which 80% of the samples were chosen for training and 20% for testing. To obtain generalized results, we made 50 different train-test splits of the samples by changing the shuffling random state (i.e., the random seed) of the data before applying the split. With the 50 different splits, each model was trained 50 times, 50 scores per metric were obtained, and the average scores were used as the generalized results.
Evaluation Metrics
Since lung cancer long-term survival prediction can be viewed as a binary classification problem, we chose the area under the ROC curve (AUC) to evaluate the classification performance of the models. AUC represents the probability that a randomly chosen positive sample receives a higher predicted value than a randomly chosen negative sample. Moreover, AUC integrates over a whole series of classification thresholds, whereas accuracy and f1-score are computed at a single threshold, so AUC can better display the classification performance of a binary classification model. Besides AUC, we also computed the accuracy, precision, recall, and f1-score of each model using a curated optimal threshold (the optimal threshold selection method will be introduced in the next subsection); their values are calculated as follows:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-score = 2 × Precision × Recall / (Precision + Recall)
where TP, FP, TN, and FN are illustrated in Table 6. The following are explanations of the other four metrics:
Accuracy: the number of correctly classified samples over the total number of samples; in this paper, the number of correctly predicted long-term survival samples plus the correctly predicted dead samples over the total samples.
Precision: the number of correctly predicted dead samples (TP) over all predicted dead samples (TP + FP).
Recall: the number of correctly predicted dead samples (TP) over all truly dead samples (TP + FN).
F1-Score: a metric that takes into account both precision and recall.
Optimal Threshold Selection Based on Youden's J Statistic
Because of the imbalance of our data (74% positive vs. 26% negative for the TCGA cohort and 58% positive vs. 42% negative for the GSE37745 cohort), the metric scores calculated with the default threshold often fail to represent a model's classification performance, so selecting an optimal threshold is necessary. Youden's J statistic (Ruopp et al., 2008) was used in our experiments to tune the classification threshold. Youden's J statistic is calculated from sensitivity and specificity as follows:
J = sensitivity + specificity − 1 = TPR − FPR
where TPR = TP / (TP + FN) and FPR = FP / (FP + TN); the series of (TPR, FPR) tuples with their corresponding thresholds can be obtained from the ROC curve. We chose the threshold with the largest value of Youden's J statistic for calculating the final classification metric scores.
Results Analysis
In this subsection, we analyze the results from the 50 experiments per model. For a better learning effect on an imbalanced classification task, all the DL and ML models used SMOTE (Chawla et al., 2002) to oversample the minority class, except for the KNN model (an error occurred when using SMOTE with it, so we used random oversampling instead). Then we performed a Kaplan-Meier survival analysis (Goel et al., 2010) on our best DL model and the five ML models to make the classification performance of our model more intuitive.
Model Validity Analysis
We first tested the validity of the two kinds of gene expression images. We used two CNN models, each with the same architecture as the four-input model, to test the prediction performance of the two kinds of gene expression images.
To evaluate the effectiveness of the gene expression images fairly, the five ML models used as input the same selected 1,000 gene expression values that we used for generating the images. The average AUCs were 63.58% for the model with KEGG BRITE images and 64.69% for the model with KEGG Pathway images. Both AUCs were far better than those of the five ML models, showing that it was meaningful to convert gene expression data into images. Then we tested the performance when the two kinds of images were input into one model simultaneously and obtained an AUC of 65.15%, better than either model using only one gene expression image as input. This result suggested that we could add more inputs to improve the performance. Next, we tested the effectiveness of adding clinical data to the DL model. We proposed two models with three inputs: one used the two kinds of images and the age at initial pathological diagnosis as inputs, and the other used the two kinds of images and the numerical AJCC pathological tumor stage as inputs. Their AUCs were 65.68% and 70.69%, respectively; both outperformed the model with only the two kinds of expression images as inputs, so we could conclude that the two kinds of clinical data were both helpful in improving prediction performance. Naturally, we obtained the best AUC (71.48%) when we fed all four kinds of data into one model, a remarkable result given that the samples were imbalanced. The four-input model also achieved the best scores in accuracy, precision, recall, and f1-score calculated from the threshold with the largest value of Youden's J statistic. In Figure 2, a radar plot shows the combination of the five evaluation metrics for the six DL-based models; it is readily observable that our best DL model, namely, the four-input model, achieved the best all-around performance among all the DL models.
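The Youden's-J threshold selection used for these scores can be sketched with a small, self-contained ROC sweep. The predicted probabilities and labels below are invented for illustration; in practice the candidate thresholds come from the model's ROC curve.

```python
import numpy as np

def youden_threshold(y_true, y_score):
    """Pick the threshold maximizing J = TPR - FPR over all candidate cutoffs."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    best_j, best_t = -1.0, 0.5
    for t in np.unique(y_score):
        pred = (y_score >= t).astype(int)
        tp = np.sum((pred == 1) & (y_true == 1))
        fn = np.sum((pred == 0) & (y_true == 1))
        fp = np.sum((pred == 1) & (y_true == 0))
        tn = np.sum((pred == 0) & (y_true == 0))
        tpr = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t, best_j

# Hypothetical predicted death probabilities and true labels (1 = died < 60 months).
scores = [0.1, 0.3, 0.35, 0.6, 0.65, 0.9]
labels = [0,   0,   1,    1,   0,    1]
t, j = youden_threshold(labels, scores)
print(t, j)
```

Accuracy, precision, recall, and f1-score are then computed from the confusion matrix at the chosen threshold `t`.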
In Figure 3, another radar plot shows the combined performance on the five metrics for the two-input DL model and the five ML models. We drew this radar plot to compare the DL and ML models when no clinical data are included, and our two-input DL model performed better than all the ML models while not using any clinical data as input. In Figure 4, a box plot shows the distribution of AUCs from the 50 experiments; we can observe that the four-input model was more robust, as it achieved the best median, first-quartile, and third-quartile values among all the models. Among the ML models, random forest performed the best. We also conducted statistics on the 50 optimal thresholds for each model; a box plot showing the distribution of the thresholds is presented in Figure 5. In this box plot, we can find that all the DL models have threshold distributions mainly between 0.4 and 0.6, with median values close to 0.5. Given that the TCGA lung cancer data set is very imbalanced, such threshold distributions indicate that the DL models overcame the problem of overfitting. As for the ML models, their first-quartile values are closer to 1, which means that the ML models suffered severe overfitting.
Results of Survival Analysis on the TCGA Lung Cancer Data Set
Figure 6 shows that the two-input model could divide the samples better than the five ML models, and the two-input model obtained the smallest p-value among the models. As for the Cox-PH univariate analysis, in Table 4 we can observe that the DL model and the SVM model both obtained a hazard ratio of 4.00, which means that the DL model and the SVM model can separate the samples into two more distinct risk groups.
However, in Figure 6 we can see that the classification threshold of the SVM was as high as 0.9951 while the DL model's threshold was 0.5159, which means that the DL model was far from overfitting whereas the SVM was overfitting severely. All of this indicates that our DL model can better produce two risk groups with more significant separation. As can be seen from the results in Table 5, surprisingly, almost all the metric scores were higher than those on the TCGA lung cancer data set, even though the total number of samples was much smaller than that of the TCGA cohort. For example, the four-input DL model achieved an AUC of 72.51%, larger than that on TCGA, which was 71.48%. The gap between the DL models and ML models was even more evident: the smallest AUC among the DL models was 67.37%, much larger than the best value among the ML models (55.76%, with KNN). The conclusions drawn on the TCGA lung cancer data set still hold on this independent data set. For instance, the four-input DL model was still the best among all the models, and the two-input DL model was still the best model without clinical data. All of the above shows that our proposed method has the potential to generalize.
DISCUSSION
In this paper, we introduced a method to predict lung cancer long-term OS using gene expression data and clinical data. Due to the extremely high feature dimensionality of gene expression data, it is difficult to use them directly in a DL or ML model for prediction. So we first used a supervised MI-based feature selection method to select the genes most relevant to the prediction target. Then we proposed a novel data transformation method to convert gene expression data into images with KEGG BRITE and KEGG Pathway data incorporated. Using the gene expression images, we could take advantage of the CNN model to extract high-level representations from the gene expression data.
The experiment results illustrated the effectiveness of using the CNN-based DL model with gene expression images to predict lung cancer long-term survival. When we combined the two kinds of gene expression
FIGURE 6 | The Kaplan-Meier curves of the predicted high-risk and low-risk samples for our best DL model (without clinical data) and the five ML models on the TCGA lung cancer data set. The p-values were computed using the log-rank test.
Artificial Intelligence and Occupational Health and Safety, Benefits and Drawbacks
This paper discusses the impact of artificial intelligence (AI) on occupational health and safety. Although the integration of AI into the field of occupational health and safety is still in its early stages, it has numerous applications in the workplace. Some of these applications offer numerous benefits for the health and safety of workers, such as continuous monitoring of workers' health and safety and of the workplace environment through wearable devices and sensors. However, AI might also have negative impacts in the workplace, such as ethical worries and data privacy concerns. To maximize the benefits and minimize the drawbacks of AI in the workplace, certain measures should be applied, such as training for both employers and employees and setting policies and guidelines regulating the integration of AI in the workplace.
In 1955, John McCarthy was the first to coin the term 'Artificial Intelligence' (AI) [1]. AI refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. It involves the development of algorithms and computational models that enable machines to perform tasks traditionally requiring human intelligence. These tasks include problem-solving, speech recognition, decision-making, visual perception, language translation, and more [2].
AI can be divided into two primary categories: Internet of Things (IoT) AI, optimized for specific tasks, which performs well in voice assistants, recommendation algorithms, and image recognition systems [1,2], and generative AI, i.e., systems that associate words, learn, and solve complicated issues but, despite their name, are not as intelligent as human beings [2,3]. AI comprises several subfields, such as robotics, computer vision, natural language processing, machine learning, and expert systems. AI mostly relies on machine learning, which uses algorithms to allow computers to learn from experience, providing "intelligent" outcomes without explicit programming [4].
On the other hand, occupational health and safety (OHS) is defined as a multidisciplinary field concerned with safeguarding and promoting the well-being of individuals in the workplace. It encompasses a systematic approach to identifying, assessing, and mitigating risks and hazards that may arise from work-related activities [5]. The primary goals of OHS are to prevent injuries, illnesses, and fatalities among workers and to create and maintain a work environment fostering workers' physical, mental, and social health [6].
Currently, AI enables real-time monitoring of workplace hazards, identifying and addressing risks proactively, and enhancing preventive measures through AI-powered predictive analytics that forecast health trends [4,7,8]. The incorporation of AI not only improves safety protocols but also advances a comprehensive approach to employee well-being, marking a paradigm shift in the field of OHS with increased efficiency and precision [9,10]. On the other hand, innovative uses of AI in the workplace present significant challenges for OHS professionals, who need a deeper grasp of AI approaches and their possible consequences for work and workers when AI-enabled applications are implemented in the workplace [2,3,11,12]. As AI technologies are used in the workplace, it is imperative to maximize their potential benefits for OHS while minimizing any potential drawbacks.
Worker's Health Monitoring Through Wearable Devices, Sensors, and IoT Devices
Wearable devices and sensors in the workplace are pivotal in enhancing workers' well-being, safety, and overall productivity [13]. These devices are commonly used to monitor various health metrics, including vital signs, steps taken, and sleep patterns, to identify fatigue or stress levels, and to promptly notify workers and supervisors in case of emergencies or potential health risks [13-15]. IoT refers to the network of interconnected physical devices, objects, and systems that communicate and share data through the Internet [16]. In a workplace context, IoT involves embedding various sensors and other smart devices into the infrastructure to collect and exchange data [17]. Numerous studies have indicated that companies can utilize data from wearable devices, sensors, and IoT, enhanced by AI, to identify potential health risks such as elevated stress levels or irregular sleep patterns [14,15,18-20]. Moreover, the data collected by wearable devices and IoT can be processed by AI to inform the implementation of targeted wellness programs, including personalized fitness plans and stress management workshops, to support overall employee well-being [15,21]. In hazardous work environments like construction, mining, and manufacturing, specialized wearable devices such as smart helmets equipped with sensors can detect harmful gases, monitor environmental conditions, and assess head injuries [22]. These wearables, integrated with AI, trigger automatic alerts or emergency responses in case of accidents, ensuring timely assistance and preventing severe consequences [23]. Hence, the integration of wearables, sensors, and artificial intelligence empowers both employers and employees to prioritize health and safety, resulting in increased productivity, reduced absenteeism, and enhanced job satisfaction [3,7]. As these technologies advance, we can anticipate even more sophisticated applications that will reshape the landscape of workplace health monitoring in the future.
Sensor technology extends beyond wearables in workplace health monitoring, with environmental sensors throughout workspaces detecting factors like temperature, humidity, noise levels, and air quality [24,25]. When coupled with AI-driven systems, these sensors evaluate overall workplace health and safety, identifying potential hazards and proactively improving conditions [22,26].
Smart Building Systems for Energy Efficiency and Employee Comfort
AI can optimize smart building systems to enhance energy efficiency while maintaining optimal conditions for employee comfort [27]. This includes intelligent climate control, lighting, and resource management in the workplace [26,28].
Hazard Identification and Risk Assessment
Hazard detection programs help protect against various risks, such as unsafe working conditions, workers without protective clothing, misuse of tools and equipment, trip and fall hazards, unattended vehicles, equipment out of place, and other compliance issues [29-31]. Industries can employ AI systems to examine images and videos from workplaces, uncovering potential hazards that may elude human observation [29,32]. For example, the UK's Health and Safety Executive developed an artificial intelligence program called Estimation and Assessment of Substance Exposure (EASE) to assess occupational exposure to certain substances in the workplace [32]. Additionally, AI can play a role in forecasting machinery breakdowns. Through the analysis of sensor data on machines, AI can identify abnormal patterns that signal a potential fault [1]. This proactive detection enables companies to perform maintenance before a machine malfunctions, averting potential accidents. Moreover, AI programs can identify, assess, and mitigate risks by analyzing data and identifying patterns and anomalies [16,32]. However, few studies have been conducted to demonstrate the positive and negative aspects of integrating AI into the risk assessment process and health surveillance in workplaces. This might be because the integration of AI in industry is still in its early stages, and the main current focus is on its impact on immediate concerns such as safety and regulatory compliance [4,10,29].
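As a toy illustration of the kind of fault detection described above, abnormal sensor readings can be flagged when they deviate strongly from a recent rolling window. The sensor trace and the z-score rule are invented for this sketch; real predictive-maintenance systems use far richer models.

```python
import statistics

def flag_anomalies(readings, window=5, z_limit=3.0):
    """Flag sensor readings that deviate strongly from the recent rolling window."""
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.fmean(recent)
        sd = statistics.stdev(recent)
        if sd > 0 and abs(readings[i] - mean) / sd > z_limit:
            alerts.append(i)  # candidate machine fault: schedule a maintenance check
    return alerts

# Hypothetical vibration-sensor trace with one abnormal spike.
trace = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 0.95, 5.0, 1.0]
print(flag_anomalies(trace))
```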
AI-Integrated Smart Personal Protective Equipment
Personal protective equipment (PPE), such as respirators, safety shoes, ear muffs, and safety goggles, has always played a crucial role in safeguarding workers from various hazards in the workplace [33]. When a task poses inherent risks that cannot be sufficiently controlled through collective technical or organizational measures, the use of PPE becomes essential to enable workers to perform their tasks with reduced injury risks [5]. The reliability and effectiveness of PPE are paramount, in line with the established principle of the hierarchy of prevention. Smart PPE refers to PPE that combines traditional PPE (such as a firefighter's protective suit) with electronics, such as sensors, detectors, data transfer modules, batteries, cables, and other elements [22,34]. By combining AI technologies with smart PPE, the equipment actively monitors and adapts to changing environmental conditions, detecting hazards, assessing air quality, and providing real-time alerts [22,34,35]. This innovation enhances communication and fosters a proactive approach to occupational safety, ensuring a safer work environment across diverse industries.
Workplace Violence Monitoring
Workplace violence is a pervasive global issue that poses a risk to workers' mental health. More than one in five people in employment (almost 23%) have experienced violence and harassment at work, whether physical, psychological, or sexual [36]. AI can play an important role in preventing workplace violence. Natural language processing (NLP) is a technique from computer science that helps analyze large bodies of text. Using NLP, AI can scan emails and files for inappropriate language, alerting managers when such phrases are detected [37,38]. With voice recognition, AI can recognize spoken phrases in meetings, generating detailed reports to address instances of harassment [36,39,40].
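A deliberately simplistic sketch of the text-scanning idea follows. The phrase list and messages are invented, and production systems would use trained NLP models rather than plain keyword matching.

```python
FLAGGED_PHRASES = {"threat", "worthless", "shut up"}  # hypothetical watch list

def scan_message(text):
    """Return the flagged phrases found in a message, for manager review."""
    lowered = text.lower()
    return sorted(p for p in FLAGGED_PHRASES if p in lowered)

messages = [
    "Please send the Q3 report by Friday.",
    "You are worthless, shut up.",
]
for m in messages:
    hits = scan_message(m)
    if hits:
        print("ALERT:", hits)
```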
AI in Drug and Alcohol Screening Programs
About 60% of people with substance use disorders (SUDs) are currently employed [41]. Hence, workers' alcohol and drug use can harmfully impact both the workers and the workplace, resulting in absenteeism, high turnover, decreased productivity, and other safety problems [42]. AI can contribute to more efficient and accurate drug and alcohol screening processes in the workplace [43]. Automated systems can analyze biological samples, ensuring compliance with safety regulations and promoting a substance-free work environment [43,44].
Workforce Mental Health Monitoring
AI-driven tools are increasingly employed for monitoring and addressing mental health issues in the workplace, which can be done using remote health monitoring systems that track vital signs and health metrics and provide real-time information to healthcare professionals for early detection of health issues among workers [4,45]. In addition, NLP can play a role in analyzing workers' communication for signs of stress, enabling timely interventions and support [38]. This enables organizations to implement preventive measures to support workers' mental health and well-being. In their literature review, Moshawrab et al. (2022) discussed the importance of using AI-integrated smart wearable devices to screen and identify occupational physical fatigue among workers [13]. They reported that AI-integrated smart wearables have established their usefulness in identifying and screening fatigue at work, which can limit the harmful effects of fatigue on workers [13].
Musculoskeletal System and Ergonomics
Work-related musculoskeletal disorders (WMSDs) are considered an important cause of occupational injury in the workplace, leading to increased rates of absence from work [46,47]. Ergonomics, on the other hand, can be defined as adjusting work environments, tools, and worker postures to prevent WMSDs induced by ergonomic risk factors such as awkward posture, repetitive movements, and excessive force at work [48,49]. Ergonomists usually assess each worker's ergonomic risk factors using techniques such as postural analysis, anthropometric measures, motion and time studies, biomechanical models, force evaluation, and energy expenditure assessments [48,50]. Recently, several studies have shown the possibility of improving ergonomic analysis through the combined use of artificial intelligence and wearable sensors [26,51-53]. AI-assisted health programs can analyze ergonomic factors and individual anthropometric data to predict and prevent musculoskeletal disorders in the workplace [51]. AI-driven wearable devices can continuously analyze workers' motions and body postures [52] to recognize movements that may pose a risk of injury. Alerts are then issued to workers to mitigate the potential for long-term health problems [53].
Automating Dangerous Tasks Using AI
Automated Bots
Bots, short for robots, are automated software programs designed to perform specific tasks. The most important bots used in industry are collaborative robots (cobots) and chatbots. Collaborative robots, often referred to as cobots, are designed to work in close proximity to humans, fostering a collaborative and cooperative environment [54,55]. Unlike traditional industrial robots that operate in isolation or behind safety barriers, cobots are engineered to share the workspace with human operators [3]. This collaboration aims to enhance productivity and safety in sectors such as manufacturing and logistics [3]. Chatbots are bots designed to engage in conversation with users, and they are commonly used in customer service, providing quick and automated responses to queries [55]. Automation through AI and machine learning (ML) enhances the efficiency of robots, particularly in handling hazardous tasks, including safety inspection of hazardous environments, maintenance, and handling of dangerous materials [55,56].
AI-Enhanced Occupational Health Compliance Safety Audits
By using IoT sensors, AI can track and audit every individual worker on multiple levels, ensuring that workplaces adhere to safety standards, minimize legal risks, and promote a culture of compliance [16]. This includes monitoring worker locations, tracking vital signs, alerting workers to environmental hazards, providing accurate information to remote workers, reducing the risk of physical injuries, and enhancing staff training [7,16,24].
Decision Support Systems (DSS)
Decision support systems (DSS) are computer-based tools or systems that support decision-making activities within an organization [57]. They provide interactive access to databases and help users analyze complex data, generate reports, and make decisions based on the insights gained [58]. AI-powered DSS can assist managers and executives in making informed decisions by analyzing complex data sets, identifying patterns, and providing insights and recommendations [3,7]. These systems leverage techniques like data mining, machine learning, and NLP to aid decision-making across various industries [57,59].
Drawbacks and Ethical Issues of AI in Occupational Health and Safety
Despite AI's immense potential to enhance workplace safety, its implementation brings challenges. High-quality data is essential for AI to make accurate risk assessments and produce effective recommendations. If the data used are incomplete, outdated, or inaccurate, this can significantly impact the performance of the AI system, which could result in erroneous predictions and potentially lead to safety hazards [12]. Similar to humans, AI is susceptible to amplifying bias if it is trained on biased data, so it is imperative to ensure that AI systems are trained on balanced and representative data to mitigate such biases [60].
AI-Related Ethical Issues at the Workplace
Artificial intelligence can potentially revolutionize health and safety practices, but it introduces ethical considerations that must be addressed. Critical ethical issues include ensuring privacy and data security, given that AI systems rely on extensive datasets containing personal information, such as that collected by wearable devices and sensors [12,60,61]. It is therefore essential to guarantee the ethical and secure collection, utilization, and storage of these data. Additionally, concerns arise regarding biases and discrimination inherent in AI systems, stemming from the data on which they are trained and leading to potentially unfair or discriminatory decision-making [4,12,62]. Furthermore, the automation capabilities of AI raise apprehensions about job displacement, prompting considerations about the need for safety professionals to acquire new skills in response to evolving tasks [12,63,64].
AI Impacts on Workers' Mental Health
Integrating AI in health and safety could negatively impact workers' mental health, including anxiety and stress related to job automation or the potential for AI errors to lead to accidents [4,11,65,66]. Workers may feel a loss of control in an environment monitored by AI systems, experience isolation and disconnection from human colleagues when interacting more with AI, and perceive a diminishing sense of meaning and purpose when their tasks are automated by AI [4,66,67]. Recognizing and addressing these emotional impacts is essential to creating a positive and supportive work environment while implementing AI technologies. Considering the role of occupational physicians excluded from algorithm definitions, and the potential organizational and evaluation implications arising from such exclusion, is of utmost importance. This brings attention to the critical intersection between healthcare professionals, technology, and regulatory frameworks, emphasizing the significance of including occupational doctors in discussions around AI implementation and compliance with existing laws and regulations.
Conclusion
In conclusion, integrating AI in occupational health and safety offers benefits such as enhanced safety and productivity through predictive maintenance and real-time risk assessment. Drawbacks include ethical concerns, data privacy considerations, and the need for regulatory compliance. Work organizations must balance innovation with respect for workers' rights, investing in workforce education, building AI expertise, and collaborating with solution providers to ensure a safe workplace that seamlessly integrates AI and human ingenuity.
Declaration on the use of AI: ChatGPT 3.5 was used for English language editing.
Funding: This research received no external funding.
Advantages of Pure Platelet-Rich Plasma Compared with Leukocyte- and Platelet-Rich Plasma in Treating Rabbit Knee Osteoarthritis
Background: Concentrated leukocytes in leukocyte- and platelet-rich plasma (L-PRP) may deliver increased levels of pro-inflammatory cytokines that activate the NF-κB signaling pathway, countering the beneficial effects of growth factors on osteoarthritic cartilage. However, to date no relevant studies have substantiated this in vivo.
Material/Methods: Autologous L-PRP and pure platelet-rich plasma (P-PRP) were prepared, measured for componential composition, and injected intra-articularly at 4, 5, and 6 weeks after anterior cruciate ligament transection. Caffeic acid phenethyl ester (CAPE) was injected intraperitoneally to inhibit NF-κB activation. All rabbits were sacrificed at 8 weeks postoperatively. Enzyme-linked immunosorbent assays were performed to determine interleukin 1β (IL-1β) and prostaglandin E2 (PGE2) concentrations in the synovial fluid, Indian ink staining was performed for gross morphological assessment, and hematoxylin and eosin staining and toluidine blue staining were performed for histological assessment.
Results: Compared with L-PRP, P-PRP injections achieved better outcomes regarding the prevention of cartilage destruction, preservation of cartilaginous matrix, and reduction of IL-1β and PGE2 concentrations. CAPE injections reversed the increase in IL-1β and PGE2 concentrations in the synovial fluid after L-PRP injections and improved the outcome of L-PRP injections to a level similar to that of P-PRP injections, while they had no influence on the therapeutic efficacy of P-PRP injections.
Conclusions: Concentrated leukocytes in L-PRP may release increased levels of pro-inflammatory cytokines that activate the NF-κB signaling pathway and counter the beneficial effects of growth factors on osteoarthritic cartilage, finally resulting in an inferior efficacy of L-PRP relative to P-PRP for the treatment of osteoarthritis.
Background Osteoarthritis is a degenerative joint disorder characterized by articular cartilage destruction that leads to pain and loss of function, primarily in the knees and hips [1]. In clinical practice, challenges are still frequently encountered in the treatment of osteoarthritis. Although total hip arthroplasty and total knee arthroplasty have been well accepted as the gold standards, and have achieved favorable clinical outcomes in the aged population, the long-term outcomes of these surgical therapies in young adults are controversial due to the increased risk of revision that results from younger age [2]. Numerous clinical attempts have been made to alleviate major complaints such as pain, swelling, and muscle tightness. However, these therapies are barely effective at preventing osteoarthritis progression and promoting articular cartilage regeneration, possibly because of the minimal blood supply, limited extracellular matrix formation, and low cell density of this tissue [3]. To solve these problems, biological agents have been introduced as promising alternatives for the treatment of osteoarthritis; antagonists of interleukin-1β (IL-1β) and tumor necrosis factor-α (TNF-α) have been used to inhibit the effects of these pro-inflammatory cytokines on cartilage destruction, and growth factors have been added to improve cartilage regeneration [4]. Platelet-rich plasma (PRP) is an autologous blood product that contains concentrated platelets. After activation, the α-granules of concentrated platelets in PRP release growth factors at concentrations significantly higher than baseline blood levels, including platelet-derived growth factor (PDGF), transforming growth factor-β (TGF-β), insulin-like growth factor (IGF), fibroblast growth factor (FGF), epidermal growth factor (EGF), and many others [5][6][7]. 
Many of these growth factors can stimulate chondrocyte and chondrogenic mesenchymal stem cell (MSC) proliferation, enhance chondrocyte and MSC survival, promote chondrocyte cartilaginous matrix secretion, induce MSC chondrogenic differentiation, and diminish the catabolic effects of pro-inflammatory cytokines [8][9][10]. Consequently, PRP has gained growing popularity in the treatment of osteoarthritis in the last decade [11][12][13]. Despite the increasing use of PRP, there is no standardized protocol for PRP preparation in clinical practice, and different protocols may result in PRP formulations that differ in componential composition, in particular, leukocyte concentration [14]. It has been shown that the significantly concentrated leukocytes in leukocyte- and platelet-rich plasma (L-PRP), compared with pure platelet-rich plasma (P-PRP), may release significantly higher levels of pro-inflammatory cytokines, such as IL-1β and TNF-α [15]. IL-1β and TNF-α have been described to have crucial roles in the physiopathology of osteoarthritis via inducing the nuclear translocation of NF-κB p65 to activate expression of a wide range of catabolic genes, including inducible nitric oxide synthase, cyclooxygenase-2, and matrix metalloproteinases, to disturb anabolism and enhance catabolism of chondrocytes [16][17][18]. Recently, the in vitro study by Cavallo et al. showed that L-PRP and P-PRP had significantly different leukocyte and pro-inflammatory cytokine concentrations, and induced distinct effects on human articular chondrocytes in terms of the production of destructive proteases and extracellular matrix synthesis [19]. Hence, high levels of IL-1β and TNF-α in L-PRP may activate the NF-κB signaling pathway to induce harmful effects on cartilage, to counter or overwhelm the beneficial effects of growth factors, and finally, make L-PRP unsuitable for the treatment of osteoarthritis. However, no relevant studies have substantiated this in vivo. 
The objective of this study was to evaluate the efficacies of L-PRP and P-PRP for the treatment of osteoarthritis, and the in vivo effects of L-PRP and P-PRP on the NF-κB signaling pathway in a rabbit osteoarthritis model, in order to develop an alternative method for the treatment of osteoarthritis. Animal surgery The study protocol was approved by the Animal Care and Use Committee of Shanghai Jiao Tong University Affiliated Sixth People's Hospital. Fifty mature New Zealand white rabbits (weighing 2.5-3.0 kg) were used in this study. The osteoarthritis model in rabbits was created by anterior cruciate ligament transection as described previously [20]. In brief, after achieving anesthetization with an intravenous injection of 60 mg/kg of ketamine hydrochloride and 6 mg/kg of xylazine, a 2 cm lateral para-patellar skin incision was made. Then, the patella was dislocated medially to expose the knee joint, and the anterior cruciate ligament was transected under direct vision with a #15 blade. The joint was then repositioned, irrigated with sterile saline, and closed with 4-0 nylon. After surgery, all rabbits were housed in separate cages and had ad libitum access to food and water. All animals were sacrificed at 8 weeks postoperatively. Treatments of rabbit osteoarthritis As anterior cruciate ligament transection has been reported to lead to cartilage degeneration in rabbit knees similar to human knee osteoarthritis by 4 weeks postoperatively [20], rabbits were randomly divided into five groups of 5 male and 5 female rabbits each at 4 weeks postoperatively. The control group received three weekly intra-articular injections of 300 μL saline, initiated 4 weeks postoperatively, for each knee joint. At the same time points, the L-PRP and P-PRP groups received three weekly intra-articular injections of 300 μL autologous L-PRP or P-PRP for each knee joint. 
A course of three weekly intra-articular injections of saline, L-PRP, or P-PRP was chosen to match the protocol frequently used in clinical practice [21][22][23][24]. Besides the L-PRP or P-PRP intra-articular injections, the L-PRP+caffeic acid phenethyl ester (CAPE) and P-PRP+CAPE groups received 21 daily intraperitoneal injections of 1 mL of 10 μmol/kg/day CAPE (Sigma-Aldrich, St. Louis, MO, USA), initiated 4 weeks postoperatively, to inhibit the activation of the NF-κB signaling pathway [25]. All rabbits were sacrificed at 8 weeks postoperatively. The study design is summarized in Figure 1. Preparation of L-PRP and P-PRP Whole blood used for L-PRP or P-PRP preparation was collected from rabbits of the L-PRP group and L-PRP+CAPE group, or the P-PRP group and P-PRP+CAPE group, through the central auricular artery into acid-citrate dextrose solution A (ACD-A) anticoagulant at a ratio of 9:1 (v/v). L-PRP was prepared with a buffy coat-based double-spin method, as described elsewhere [26]. In brief, 10 mL of whole blood was spun at 250× g for 10 minutes in a 15-mL centrifuge tube. After the first spin, the blood was separated into three components: erythrocytes at the bottom, buffy coat in the middle, and platelet-containing plasma at the top. Then, the top and middle layers were transferred to a new centrifuge tube and spun again at 1,000× g for 10 minutes. After the second spin, the supernatant platelet-poor plasma was discarded, and the precipitated platelets were resuspended in the remaining 1 mL of plasma to obtain L-PRP. P-PRP was prepared with a plasma-based double-spin method. In brief, a spin at 160× g for 10 minutes was used to separate 15 mL of whole blood into three components, as above. Then, the platelet-containing plasma was transferred to a new tube and spun again at 1,000× g for 10 minutes. 
After discarding the supernatant platelet-poor plasma, the remaining plasma and precipitated platelets were blended evenly to obtain 1 mL of P-PRP: 0.6 mL of each PRP sample was used for intra-articular injections, 0.1 mL for whole blood analysis to determine leukocyte and platelet concentrations, and 0.3 mL for enzyme-linked immunosorbent assays (ELISA) to determine cytokine concentrations. Quantification of components of L-PRP and P-PRP Leukocyte and platelet concentrations in L-PRP and P-PRP were measured by whole blood analysis with an automatic hematology analyzer (XS-800i, Sysmex, Kobe, Japan) in the clinical laboratory of the hospital. Concentrations of PDGF-AB, TGF-β1, IL-1β, and TNF-α in L-PRP and P-PRP were determined by ELISA according to the protocols described previously [19]. In brief, L-PRP and P-PRP were incubated with 10% CaCl2 (final concentration 22.8 mM) at 37°C. Then, the supernatants were collected and assayed for growth factor and pro-inflammatory cytokine concentrations using commercial kits (Xitang, Shanghai, China) according to the manufacturer's instructions. Quantification of IL-1β and prostaglandin E2 concentrations in the synovial fluid After rabbits were euthanized at 8 weeks postoperatively, the synovial fluid in the knee joints was collected and measured for concentrations of IL-1β and prostaglandin E2 (PGE2) by ELISA with commercial kits (Xitang, Shanghai, China) according to the manufacturer's instructions. Gross morphological assessment After the rabbits were euthanized, femoral condyles were harvested and stained with Indian ink for 30 minutes. Then, gross morphological assessment was performed on both the medial and lateral sides, as described previously, according to the criteria shown in Table 1 [27]. Histological assessment Femoral condyles were fixed with 4% paraformaldehyde for 72 hours, decalcified with 10% EDTA for 1 month, dehydrated with graded ethanol solutions, embedded in paraffin, and sectioned at 5 μm. 
Then, sections were stained with hematoxylin and eosin (HE) for general histological assessment, or with toluidine blue for assessment of cartilaginous matrix distribution. Statistical analysis Data were analyzed using the Statistical Package for Social Sciences version 22.0 (SPSS, Chicago, IL, USA) and presented as mean ± standard deviation (SD) or median and range as appropriate. One-way analysis of variance with the Bonferroni post hoc test, or the Wilcoxon rank sum test, was performed to analyze differences between groups as appropriate. Pearson correlation analysis was conducted to analyze the linear correlations between cytokine concentrations and the platelet and leukocyte concentrations of the PRP formulations. A p value less than 0.05 was considered statistically significant. Components of L-PRP and P-PRP Components of L-PRP and P-PRP used in different groups at different time points are shown in Table 2. L-PRP used in the L-PRP group and L-PRP+CAPE group at 4, 5, and 6 weeks postoperatively had similar concentrations of leukocytes, platelets, growth factors, and pro-inflammatory cytokines compared with each other (p>0.05). Also, P-PRP used in the P-PRP group and P-PRP+CAPE group at 4, 5, and 6 weeks postoperatively was similar in leukocyte, platelet, growth factor, and pro-inflammatory cytokine concentrations. (Table 2 abbreviations: L-PRP, leukocyte- and platelet-rich plasma; P-PRP, pure platelet-rich plasma; CAPE, caffeic acid phenethyl ester; IL-1β, interleukin-1β; TNF-α, tumor necrosis factor-α; PDGF-AB, platelet-derived growth factor AB; TGF-β1, transforming growth factor-β1.) However, the leukocyte concentrations in L-PRP were significantly higher than in P-PRP used at the same time point (p<0.001, Figure 2A). In accordance with the leukocyte concentrations, the IL-1β and TNF-α concentrations in L-PRP were significantly higher than in P-PRP used at the same time point (p<0.001, Figure 3A), whereas the platelet concentrations in L-PRP were similar to P-PRP used at the same time point (p>0.05, Figure 2B). 
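The statistical workflow described above (one-way ANOVA with Bonferroni post hoc comparisons, plus Pearson correlation between platelet and growth factor concentrations) can be sketched in Python with SciPy. The numbers below are synthetic placeholders for illustration only, not the study's measurements, and all variable names are our own (the paper itself used SPSS).

```python
from itertools import combinations
from scipy import stats

# Synthetic IL-1beta concentrations (pg/mL) for three groups -- placeholder
# values invented for illustration, not the study's data.
groups = {
    "control": [48.2, 51.0, 46.5, 50.3, 49.1],
    "L-PRP":   [62.4, 65.1, 60.8, 63.9, 61.7],
    "P-PRP":   [30.5, 28.9, 32.1, 29.7, 31.4],
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3g}")

# Bonferroni post hoc: pairwise t-tests with alpha divided by the
# number of pairwise comparisons.
pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p={p:.3g}, significant={p < alpha_corrected}")

# Pearson correlation, e.g. platelet count vs. TGF-beta1 concentration
# (again, synthetic values).
platelets = [900, 950, 1010, 880, 975]   # x10^3/uL
tgf_b1 = [152, 160, 171, 149, 166]       # ng/mL
r, p_corr = stats.pearsonr(platelets, tgf_b1)
print(f"Pearson r={r:.3f}, p={p_corr:.3g}")
```

Dividing α by the number of comparisons is the simplest form of the Bonferroni correction; it matches what SPSS reports as Bonferroni-adjusted pairwise significance.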
A similar trend was observed in the results of growth factor concentrations, which demonstrated that L-PRP and P-PRP used at the same time point had similar PDGF-AB and TGF-β1 concentrations (p>0.05, Figure 3B). These findings indicated that the L-PRP and P-PRP used in this study were constant in componential composition in terms of concentrations of leukocytes, platelets, pro-inflammatory cytokines, and growth factors. The significantly higher leukocyte concentration in L-PRP resulted in the significantly higher pro-inflammatory cytokine concentrations in L-PRP compared with P-PRP used at the same time point, and the similar platelet concentration between L-PRP and P-PRP used at the same time point resulted in the similar growth factor concentrations between them. Synovial fluid IL-1β and PGE2 concentrations in the P-PRP, L-PRP+CAPE, and P-PRP+CAPE groups were similar compared with each other (p>0.05, Figure 5A, 5B). Gross morphological assessment Gross morphological assessment was performed on both the medial and lateral sides, as described previously, according to the criteria shown in Table 1 (a total of 20 joints, 40 scores, in each group). As shown in Figure 5, the medians of gross morphological grading of the P-PRP group (median 1; range 1 to 4b), L-PRP+CAPE group (median 1; range 1 to 4b), and P-PRP+CAPE group (median 1; range 1 to 4b) were similar (p>0.05), but significantly better than the L-PRP group (median 2.5; range 1 to 4c, p<0.05), which, in turn, was significantly better than the control group (median 4a/4b; range 2 to 4c, p<0.05). Histological assessment HE staining was performed for general histological assessment and toluidine blue staining was performed for assessment of cartilaginous matrix distribution. As shown in Figure 7, full-thickness cartilage defects and total loss of toluidine blue staining were observed in the control group (Figure 7A, 7B). 
In the L-PRP group, there was a marked reduction in the severity of cartilage loss compared with that of the control group, whereas the loss of toluidine blue staining was still extensively severe (Figure 7C, 7D). In the P-PRP group, less severe changes regarding loss of cartilage and toluidine blue staining were exhibited (Figure 7E, 7F). Also, samples from the L-PRP+CAPE and P-PRP+CAPE groups demonstrated a clear reduction in the severity of degenerative changes in cartilage compared with the control and L-PRP groups (Figure 7G-7J). (Figure 5 legend: The mean IL-1β and PGE2 concentrations in the synovial fluid collected from rabbits of the L-PRP group were significantly higher than the control group, which, in turn, was significantly higher than the P-PRP group, L-PRP+CAPE group, and P-PRP+CAPE group, which were similar compared with each other. L-PRP, leukocyte- and platelet-rich plasma; P-PRP, pure platelet-rich plasma; CAPE, caffeic acid phenethyl ester. Boxes and error bars represent mean ± standard deviation (n=20); * p<0.05 compared with the control group; # p<0.05 compared with the L-PRP group.) Besides that, the height of cartilage in samples from the P-PRP, L-PRP+CAPE, and P-PRP+CAPE groups also seemed to be higher than in the control and L-PRP groups. However, the therapeutic effects of P-PRP, L-PRP+CAPE, and P-PRP+CAPE on osteoarthritis regarding the prevention of cartilage destruction and preservation of toluidine blue staining seemed to be equal. Discussion The comparison of platelet and growth factor levels between the PRP formulations used in the study is important, because the growth factors released from platelet α-granules are believed to be the rationale behind PRP therapy, and small variations in their concentrations may result in distinct results [28][29][30][31]. Many growth factors have been detected at elevated concentrations in PRP, including PDGF-AB, TGF-β1, IGF, FGF, and EGF. 
PDGF-AB and TGF-β1 have been described previously to improve cell proliferation and cartilaginous matrix secretion in vitro [32,33], and increase cartilage regeneration in vivo [34]. Besides that, TGF-β1 may be capable of modulating the deleterious effects of IL-1β on cartilage by decreasing IL-1β receptor transcription and binding ability, while promoting IL-1 receptor antagonist synthesis [35]. Although IGF, FGF, and EGF also have beneficial effects on cartilage regeneration, they were shown to have much more variable concentrations in PRP. Moreover, exercise and nutritional status, which are hard to control, may affect IGF concentration in whole blood, and therefore, in PRP formulations [36]. Therefore, concentrations of IGF, FGF, and EGF in PRP formulations were not quantified in this study, and PDGF-AB and TGF-β1 concentrations, as well as platelet concentration, were measured to characterize the L-PRP and P-PRP used in this study. Our findings showed that the L-PRP and P-PRP used in this study were similar in platelet and growth factor concentrations. Additionally, we found that there were significantly positive correlations between platelet concentration and growth factor concentrations, and the similar platelet concentration in L-PRP and P-PRP might result in the similar growth factor concentrations between them, which were in accordance with a previous study [15]. These findings imply that the L-PRP and P-PRP used in this study are similar in the levels of anabolic molecules, and therefore, should have similar effects on the promotion of cartilage anabolism in osteoarthritis. However, our in vivo results indicated that L-PRP was not as effective as P-PRP in the treatment of osteoarthritis in rabbits, possibly because L-PRP not only concentrated platelets and growth factors that have beneficial effects on cartilage regeneration, but also concentrated leukocytes and pro-inflammatory cytokines compared with P-PRP. 
The inclusion of leukocytes in PRP formulations is debatable because of concerns about the potential effects of leukocytes on tissue healing. Several leukocyte subsets, such as M2 macrophages, may have an anti-inflammatory function, aid in the removal of debris from damaged tissue to initiate tissue repair, and suppress fibrosis [37]. Therefore, some authors have advocated that increased macrophage infiltration and an increased M1/M2 macrophage ratio might lead to increased collagen deposition and reduced fibrosis in skeletal muscle repair [38]. However, M2 macrophages are not present in whole blood or PRP, because they differentiate from monocytes that have migrated into injured tissues [39,40]. Besides that, other leukocyte subsets, such as neutrophils, monocytes, and lymphocytes, are essential elements of the immune system, and may release excessive amounts of catabolic molecules that induce harmful effects on tissue healing [41]. In a study by McCarrel et al. [42], tendon explants treated with PRP formulations with higher leukocyte concentrations demonstrated increased pro-inflammatory cytokine synthesis and decreased extracellular matrix synthesis, and increasing the platelet concentration was not able to counter the catabolic effects of leukocytes. These findings suggest that a reduction in leukocyte concentration may be more important than the platelet to leukocyte ratio in enhancing tissue repair. Our findings support a previous study demonstrating significantly lower levels of pro-inflammatory cytokines in P-PRP compared with L-PRP, and significantly positive correlations between leukocyte concentration and pro-inflammatory cytokine concentrations in PRP formulations [15]. The deleterious effects of pro-inflammatory cytokines on cartilage in the physiopathology of osteoarthritis have been elucidated over the past few decades. 
IL-1β, which can be produced locally by both synovial cells and articular chondrocytes, has been detected at high levels in the synovial fluid of osteoarthritis patients [43] and shown to stimulate the expression of catabolic molecules with subsequent degradation of the cartilaginous matrix [19]. In addition to its catabolism-promoting effect, IL-1β markedly inhibits the synthesis of extracellular matrix components [44]. Furthermore, an excess of IL-1β may downregulate the expression of the type II receptor and the phosphorylation of Smad3 and MAPKs to overwhelm the favorable effects of growth factors on the promotion of cartilage anabolism and the modulation of the effects of catabolic molecules [45]. Besides IL-1β, other pro-inflammatory cytokines, such as TNF-α, IL-6, and IL-17, also contribute to the physiopathology of osteoarthritis [46], and may act in synergy to induce more severe articular cartilage destruction in vivo than when acting independently [47]. Also, studies have demonstrated that antagonists of these pro-inflammatory cytokines are effective with respect to relieving osteoarthritis symptoms, providing substantial proof of the harmful effects of IL-1β and TNF-α on cartilage [48,49]. These findings imply that the concentrated leukocytes in L-PRP may release increased levels of pro-inflammatory cytokines to counter the beneficial effects of growth factors and result in the inferior effects of L-PRP compared with P-PRP, as observed in our study. Considerable evidence has suggested that the NF-κB signaling pathway is intimately involved in the disturbed metabolism and enhanced catabolism of osteoarthritic cartilage induced by IL-1β and TNF-α, and inhibition of NF-κB or cyclooxygenase-2, a downstream inflammation-related gene of the NF-κB signaling pathway, may be a target for novel drugs for the treatment of osteoarthritis [18,50]. 
However, the effects of PRP formulations with increased or decreased levels of IL-1β and TNF-α on the NF-κB signaling pathway have never been evaluated. Some studies on other platelet products, such as platelet lysate and PRP clot releasate, indicated that molecules released from platelets might inhibit the activation of the NF-κB signaling pathway [51,52]. However, the absence of viable leukocytes and platelets, as well as of pro-inflammatory cytokines, in these products gives them distinct characteristics compared with the PRP formulations used in clinical practice, especially those with increased levels of pro-inflammatory cytokines. IL-1β and TNF-α activate the NF-κB pathway via the canonical pathway, which involves the nuclear translocation of NF-κB p65 and the subsequent upregulated production of downstream catabolic molecules, including IL-1β and PGE2 [53,54]. Therefore, the concentrations of IL-1β and PGE2 in the synovial fluid were determined in our study. Our results demonstrated that intra-articular injections of L-PRP increased IL-1β and PGE2 concentrations in the synovial fluid, and these increases were reversed by intraperitoneal injections of CAPE, an inhibitor of NF-κB activation. Interestingly, IL-1β and PGE2 concentrations were decreased after P-PRP injections, but were not decreased further after P-PRP+CAPE injections. These findings suggest that P-PRP may resemble the previously mentioned platelet products in its capacity to inhibit NF-κB activation in osteoarthritis, while L-PRP may resemble IL-1β and TNF-α in its capacity to activate NF-κB. The contrary effects of L-PRP and P-PRP on the NF-κB signaling pathway may play a mechanistic role in their distinct efficacies for the treatment of osteoarthritis. 
Our results demonstrated that the combined use of L-PRP and CAPE yielded better outcomes in the treatment of osteoarthritis in rabbits than using L-PRP alone, while neither the combined use of L-PRP and CAPE nor the combined use of P-PRP and CAPE achieved any better results than using P-PRP alone. These findings reveal that inhibiting NF-κB activation enhances the efficacy of L-PRP for the treatment of osteoarthritis to a level similar to P-PRP. Hence, the capacity for inhibiting NF-κB activation may play an equally important role as the capacity for promoting cartilage regeneration in the beneficial outcomes of PRP formulations for the treatment of osteoarthritis. Also, concentrated leukocytes in L-PRP may release increased levels of pro-inflammatory cytokines to activate the NF-κB signaling pathway, to counter or overwhelm the beneficial effects of growth factors on cartilage metabolism, and finally, result in an inferior capacity of L-PRP compared to P-PRP for the treatment of osteoarthritis. In summary, L-PRP and P-PRP, with similar platelet and growth factor concentrations but different leukocyte and pro-inflammatory cytokine concentrations, induced distinct in vivo effects on the NF-κB signaling pathway and osteoarthritis, with P-PRP showing better efficacy for the treatment of osteoarthritis in rabbits. Further studies are needed to substantiate these findings in larger animals or human volunteers, to inform the development of an alternative method for the treatment of osteoarthritis in clinical practice. Conclusions Increased levels of pro-inflammatory cytokines released from concentrated leukocytes in L-PRP may activate the NF-κB signaling pathway to counter the beneficial effects of growth factors on osteoarthritic cartilage, and finally, result in a lower efficacy of L-PRP compared with P-PRP for the treatment of rabbit knee osteoarthritis. Therefore, P-PRP may be more suitable for the treatment of osteoarthritis. 
Conflicts of interest There are no conflicts of interest for any of the authors.
2016-05-12T22:15:10.714Z
2016-04-17T00:00:00.000
{ "year": 2016, "sha1": "32ba109b6b8049cf47ef8859dfe2c0be2e7dae1a", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc4837928?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "32ba109b6b8049cf47ef8859dfe2c0be2e7dae1a", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
221191739
Editorial: Integrative and Translational Uses of Herbarium Collections Across Time, Space, and Species This Research Topic is dedicated to the legacy of Vicki Funk (1947-2019), who was a Senior Research Botanist and Curator at the Smithsonian's National Museum of Natural History for 38 years, and a strong advocate for collections-based research. In 2004, Vicki compiled an inspirational list of uses of herbaria entitled '100 Uses for an Herbarium (Well at Least 72)' (Funk, 2004), a study motivated by her appreciation of the potential for herbaria to make a unique contribution to a remarkable and growing array of research questions, at a time when the long-term survival of many such collections was under threat. The intervening decade and a half has seen that list of uses increase with new and unexpected techniques for the analysis of herbarium specimens and of the data derived from them. Herbarium collections provide a unique record of our global biodiversity and natural history amassed over centuries. According to Index Herbariorum (Thiers, 2020), a global database of herbaria, as of December 2019 there were 3324 active herbaria in the world that collectively are estimated to hold nearly 400 million specimens. They offer a verifiable source of specimens for a plethora of research questions, from taxonomy to evolution and change through time. They are being used to help tackle global societal challenges, and their value is increasingly being realized and explored (Funk, 2002; Funk, 2004; Bebber et al., 2010; Funk, 2018; James et al., 2018; Meineke et al., 2018). Such impact will deepen as herbarium collections become more accessible in both digital and physical forms, and as the stories of the specimens and collectors are brought to light. 
Indeed, current global efforts to digitize herbarium collections are continuously increasing the number of available images and data through portals such as JSTOR Global Plants (www.plants.jstor.org), the Global Biodiversity Information Facility (www.GBIF.org), and iDigBio (Integrated Digitized Biocollections; www.idigbio.org) (Soltis, 2017). As scientists based at three of the world's oldest and largest herbaria, the Royal Botanic Gardens, Kew (K; founded 1852; 8,125,000 specimens), The Natural History Museum, London (BM; founded 1753; 5,200,000 specimens), and The Natural History Museum of Denmark (C; founded 1759; 2,900,000 specimens), and one smaller and younger herbarium at the National Tropical Botanical Garden, USA (PTBG; founded 1971; 88,870 specimens), we encourage the continued use and exploration of the unique heritage and resources held in the world's herbaria. This Research Topic aims to synthesize and inspire the frontier of integrative and translational research using herbarium collections, to highlight their unharvested potential for addressing outstanding research questions and societal challenges. The articles published in this Research Topic provide a selection of examples and new approaches illustrating trends and opportunities in this expanding field. Though an inherently biased sample of biodiversity through time, shaped by historical and contemporary collecting practices and by colonialism and trade, herbarium specimens provide a verifiable record that a species occurred at a particular place at a particular time, and they represent several hundred years of collection history across the globe (López and Sassone; Romeiras et al.). Their imperfections extend to problems with erroneous identifications and with biased digitization efforts (e.g. 
Smith and Blagoderov, 2012), but careful assessment of data quality significantly improves their value (Goodwin et al., 2015; Maldonado et al., 2015), as do efforts to evaluate collection history and association with additional data sources (Allasi Canales et al., 2020; López and Sassone; Romeiras et al.; Stefanaki et al., 2019). These challenges notwithstanding, herbaria provide a unique record of changes in distributions, of extinction, and of habitats that have now disappeared. Specimens in herbaria provide readily available information and research possibilities extending far beyond the use of contemporary samples (Greve et al., 2016; Silva et al., 2017; Cardoso et al., 2018) and, indeed, far beyond the scope of what their original collectors may ever have envisaged (Heberling and Isaac, 2017; Heberling et al., 2019). Herbaria present a time window into the past, allowing exploration of changes in the composition of floras (Calinger, 2015), in the distribution of invasive weeds or threatened species (Stadler et al., 2002; Rivers et al., 2011; Hardion et al., 2014; Martin et al., 2016), in flowering times (Davis et al., 2015; Willis et al., 2017), in leaf-out times (Everill et al., 2014), or in stomatal densities through time (Large et al., 2017) in response to environmental change, and they can be used to model predictions of future trends (James et al., 2018). Increasingly, they are being used to investigate not just the plant itself but other, associated organisms, in studies of trophic interactions, to infer relationships with associated insects, including pollinators and herbivores, and with microorganisms, including those causing diseases (Martin et al., 2013; Yoshida et al., 2014; Vega et al.). 
Destructive sampling of herbarium specimens must always be done with due care and for good scientific reason, but small amounts of material can now be used for a diversity of studies, including exploring the viability of seeds from old herbarium specimens for conservation purposes (Godefroid et al., 2011; Porteous et al.; Wolkis and Deans, 2019). As the technical difficulties of extracting high-quality DNA from historical materials are being overcome (Wales et al., 2014; Bieker and Martin, 2018) and new high-throughput methods are being developed, including a customized Angiosperms353 probe set (Brewer et al.; Johnson et al., 2019), herbarium samples are increasingly being used in genomic studies, often termed Museomics, at all scales, from populations to phylogenies (Kuzmina et al., 2017; Bieker and Martin, 2018; Malakasi et al.), as well as in studies of genome duplications (Viruel et al.), domestication history (Kistler et al., 2018; Ramos-Madrigal et al., 2019), and plant pathogens (Yoshida et al., 2014). Herbarium materials can be a resource of chemical data for chemotaxonomy (Cook et al., 2009; Jafari Foutami et al., 2018; Allasi Canales et al., 2020), chemical ecology (Zangerl and Berenbaum, 2005), environmental bio-indicators (Foan et al., 2010; Monforte et al., 2015; Martinez-Swatson et al.), as well as drug discovery and authentication (Saslis-Lagoudakis et al., 2015; Rønsted et al., 2017). CONCLUDING REMARKS Herbaria are a unique resource for understanding global biodiversity and for addressing societal challenges. Their traditional use for taxonomy, for documenting and describing biodiversity, is as important today as it has ever been, but that function is being supplemented by an increasing array of questions that herbaria are being used to address. 
We hope the articles in this Research Topic will inspire new integrative and translational uses of herbarium collections, as well as highlight the need for continuous preservation, curation, and expansion of the collections, accompanied by detailed collecting data.

AUTHOR CONTRIBUTIONS

NR drafted the editorial with contributions from OG and MC. The authors all contributed to the Research Topic assembly and editing.
Vulnerability and Cost Analysis of Heterogeneous Smart Contract Programs in Blockchain Systems

The first generation blockchain, designed for the Bitcoin cryptocurrency, maintains a record of transactions across several computers linked in a peer-to-peer network [1]. The second generation blockchain maintains a record of digital contracts as well as cryptocurrency transactions in a peer-to-peer network [2]. These digital contracts are implemented in program code and referred to as smart contracts. Because of the great commercial potential of smart contracts, many blockchain systems with their own designs for smart contracts have been launched: Ethereum, EOS, Hyperledger Fabric, Tron, Qtum, Cosmos, Cardano, Klaytn, ICON, etc. However, exploiting the reentrancy vulnerability of smart contracts, the DAO attack stole about 3.6 million Ethers (valued at about US$55,000,000 at that time) in June 2016 [3]. Similar attacks, such as those on the Parity MultiSig Wallet and SmartMesh, occurred in 2017. Consequently, vulnerability analysis of smart contracts has only recently begun. While existing studies have focused on the smart contracts of the Ethereum blockchain platform, we examine vulnerability analysis for other blockchain platforms such as EOS, NEO, and ICON. Another main issue of smart contracts is their transaction costs. The blockchain system consumes computing resources for transactions of smart contracts, so it charges some cost in cryptocurrency for those transactions. If a smart contract incurs expensive transaction costs due to a bad design, its cumulative cost loss becomes more severe as the number of called transactions grows. Only one recent study [4] dealt with the cost-optimized design of smart contracts, but it considered only the Ethereum blockchain platform. In this article, we examine the cost-optimized design of other blockchain platforms.
The rest of this article is organized as follows: in Section 2, we introduce and discuss the vulnerability issues of smart contracts on various blockchain platforms; in Section 3, we introduce and discuss their cost optimization issues; in Section 4, we summarize the current issues and address other issues to be considered for smart contracts.
Vulnerability Analysis of Heterogeneous Smart Contracts

The vulnerability of smart contracts on the Ethereum blockchain platform recently came to attention due to the critical monetary losses caused by attacks. To prevent vulnerabilities in smart contracts, many developers try to find vulnerable coding patterns, but they focus on smart contract code written in the Solidity programming language [3,5]. We introduce vulnerable coding patterns of smart contracts written in other programming languages such as Python, C++, and C# [6][7][8]. The following are vulnerability issues of smart contract code written in various programming languages.
Vulnerabilities of smart contracts written in Solidity: reentrancy, front-running, timestamp dependence, integer overflow and underflow, DoS with unexpected revert, DoS with block gas limit, insufficient gas griefing, forcibly sending ether to a contract, deprecated/historical attacks, function default visibility, outdated compiler version, floating pragma, unchecked call return value, improper access control, uninitialized storage pointer, assert violation, use of deprecated Solidity functions, delegatecall to untrusted callee, authorization through tx.origin, signature malleability, incorrect constructor name, shadowing state variables, weak sources of randomness from chain attributes, and write to arbitrary storage location.

Vulnerabilities of smart contracts written in Python: timeout, unfinishing loop, package import, system call, outbound network call, internal API, randomness, fixed SCORE information, IRC2 token standard compliance, IRC2 token parameter name, event log on token transfer, event log without token transfer, ICX transfer event log, super class, keyword arguments, big number operation, instance variable, and state DB operation.

Vulnerabilities of smart contracts written in C++: numerical overflow, authorization check, apply check, transfer error prompt, random number practice, and rollback attack.

Vulnerabilities of smart contracts written in C#: NEP-5 tokens, DoS vulnerability, and storage injection.

Smart contracts written in the Solidity programming language are used on the Ethereum, Klaytn, Tron, and Qtum blockchain platforms. Smart contracts written in Python are used on the ICON, EOS, and NEO blockchain platforms. Smart contracts written in C++ are used on the EOS blockchain platform, and those written in C# on the NEO blockchain platform. Note that these are the smart contract vulnerabilities found to date.
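Several of the patterns above, most notably reentrancy, are language-independent. The following Python sketch (a toy simulation, not tied to any real blockchain API; all names are hypothetical) shows why performing an external call before updating contract state is dangerous:

```python
class VulnerableBank:
    """Toy 'contract' that pays out via a callback BEFORE updating its ledger."""

    def __init__(self):
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw(self, who, callback):
        amount = self.balances.get(who, 0)
        if amount > 0:
            callback(amount)          # external call first -- the reentrancy bug
            self.balances[who] = 0    # state update second -- too late


bank = VulnerableBank()
bank.deposit("attacker", 10)

drained = []

def on_receive(amount):
    """Malicious callback: re-enters withdraw while the balance is still nonzero."""
    drained.append(amount)
    if len(drained) < 3:              # stop the simulation after three rounds
        bank.withdraw("attacker", on_receive)

bank.withdraw("attacker", on_receive)
print(sum(drained))  # 30 units drained from a single 10-unit deposit
```

Moving the state update before the external call (the checks-effects-interactions pattern) makes the re-entered withdraw see a zero balance and stops the drain.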
Other, as yet unknown vulnerabilities of smart contracts may exist, and many white-hat hackers try to find them. There are also smart contracts written in other programming environments, such as Ethermint and Plutus. To remove the vulnerabilities of smart contracts, commercial services, called audit services, have emerged. These services check whether a given smart contract includes vulnerable code, so the provider of a smart contract should go through an audit service before deploying it to a commercial blockchain system.

Cost Analysis of Heterogeneous Smart Contracts

Blockchain systems are managed in a distributed manner by multiple separate computers. The incentive for a computer to join the blockchain system is to earn cryptocurrency. The more computing resources a transaction consumes, the more cryptocurrency is charged. Thus, minimizing computing resources leads to cost optimization of transactions in the blockchain system. However, which computing resources matter depends on the blockchain platform. For example, on the Ethereum, Tron, Klaytn, Qtum, and NEO platforms, the number of storage operations dominates the cost of a transaction. In contrast, on the EOS platform, the footprint size and the execution speed dominate the cost of a transaction. The cost of the same transaction may also vary depending on the type of blockchain platform. Figure 1 shows a transaction code example in a smart contract written in Solidity, where the first element in an array with 100 elements is deleted (the value of "pos" is set to 99). In this code, the remaining ninety-nine elements are each shifted forward one position. On the other hand, Figure 2 shows a variant of this transaction code, where the first element is deleted by moving the 100th element into the first position.
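The two deletion strategies of Figures 1 and 2 can be sketched in plain Python (illustrative only; the function names mirror the figures, and the counters stand in for on-chain storage writes):

```python
def delete_shift(arr, pos):
    """Delete arr[0] by shifting all later elements forward: 'pos' storage writes."""
    writes = 0
    for i in range(pos):
        arr[i] = arr[i + 1]
        writes += 1
    arr.pop()
    return writes

def delete_move(arr, pos):
    """Delete arr[0] by moving the last element into slot 0: a single storage write."""
    arr[0] = arr[pos]
    arr.pop()
    return 1

a, b = list(range(100)), list(range(100))
print(delete_shift(a, 99))  # 99 writes; element order preserved
print(delete_move(b, 99))   # 1 write; element order NOT preserved

# Relative saving on Ethereum, from the measured transaction costs quoted in the text
saving = (0.00113400 - 0.00027552) / 0.00113400
print(round(saving * 100))  # 76
```

The move variant trades element ordering for a 99x reduction in writes, which is why it is much cheaper on platforms where storage writes dominate gas costs.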
While ninety-nine storage writes are performed in Figure 1, only one storage write is performed in Figure 2. On the Ethereum blockchain platform, the cost of the "delete_shift(99)" function is 0.00113400 Ether and the cost of the "delete_move(99)" function is 0.00027552 Ether. From this comparison, we find that the optimized code for first-element deletion in Figure 2 saves about 76% of the cost of the non-optimized code in Figure 1. The costs of these transactions may vary when they are run on other blockchain platforms. On the Klaytn blockchain platform, the cost of the "delete_shift(99)" function is 0.00620111 Klay and the cost of the "delete_move(99)" function is 0.00033175 Klay; here, the optimized code in Figure 2 saves about 95% of the cost of the non-optimized code in Figure 1. Figures 3 and 4 show similar transactions written in Python, where the first element is deleted by ninety-nine shift operations or one move operation, respectively. On the ICON blockchain platform, the cost of the "delete_shift" function in Figure 3 is 0.002751 ICX and the cost of the "delete_move" function in Figure 4 is 0.001355 ICX; here, the optimized code in Figure 4 saves about 50% of the cost of the non-optimized code in Figure 3. Through these experiments, we confirm that the cost of the same transaction varies depending on the blockchain platform. Moreover, a code design with the minimum number of storage writes is very important for the cost optimization of transactions in a smart contract. In contrast, on the EOS blockchain platform, a C++ code design with fast execution and a small footprint [9] will be efficient for the cost optimization of a smart contract.

Conclusions and Discussion

In this article, we summarize the current issues under study with regard to smart contracts in blockchain systems. We also point out other issues, not yet considered but essential, for the successful commercial adoption of smart contract technology.
Due to the short history of smart contract technology, its software vulnerability issues are still not solved. Although many vulnerable coding patterns have been verified, more vulnerable coding patterns possibly remain unverified. The vulnerable coding patterns are also affected by the programming language of smart contracts. Because current studies concentrate on the vulnerability of smart contracts written in Solidity, more vulnerability research is needed for other programming languages. To minimize the damage caused by attacks exploiting unknown vulnerabilities, it is worth monitoring the real-time traffic of transactions and detecting abnormal traffic patterns as soon as possible. Besides the vulnerability issue, another important issue is the optimization of transaction costs. We show that efficient code design of smart contracts can significantly reduce the cost of transactions. The efficient code design of smart contracts depends on the blockchain platform. While the minimization of storage writes is critical in most blockchain systems, the footprint size and fast execution are critical in some blockchain systems.
Clinical utility of a pediatric hand exoskeleton: identifying users, practicability, and acceptance, and recommendations for design improvement

Background: Children and adolescents with upper limb impairments can experience limited bimanual performance, reducing daily-life independence. We have developed a fully wearable pediatric hand exoskeleton (PEXO) to train or compensate for impaired hand function. In this study, we investigated its appropriateness, practicability, and acceptability.

Methods: Children and adolescents aged 6–18 years with functional limitations in at least one hand due to a neurological cause were selected for this cross-sectional evaluation. We characterized participants by various clinical tests and quantified bimanual performance with the Assisting Hand Assessment (AHA). We identified children whose AHA scaled score increased by ≥ 7 points when using the hand exoskeleton and determined clinical predictors to investigate appropriateness. The time needed to don each component and the number of technical issues were recorded to evaluate practicability. For acceptability, the experiences of the patients and the therapist with PEXO were evaluated. We further noted any adverse events.

Results: Eleven children (median age 11.4 years) agreed to participate, but data was available for only nine participants. The median AHA scaled score was higher with PEXO (68; IQR: 59.5–83) than without (55; IQR: 37.5–80.5; p = 0.035). The Box and Block Test, the Selective Control of the Upper Extremity Scale, and finger extensor muscle strength could differentiate well between those participants who improved in AHA scaled scores by ≥ 7 points and those who did not (sensitivity and specificity varied between 0.75 and 1.00). The median times needed to don the back module, the glove, and the hand module were 62, 150, and 160 s, respectively, but all participants needed assistance.
The most critical failures concerned the robustness of the transmission system, the electronics, and the attachment system. Acceptance was generally high, particularly in participants who improved bimanual performance with PEXO. Five participants experienced some pressure points. No adverse events occurred.

Conclusions: PEXO is a safe exoskeleton that can improve bimanual hand performance in young patients with minimal hand function. PEXO receives high acceptance. We formulated recommendations to improve technical issues and the donning procedure before such exoskeletons can be used under daily-life conditions for therapy or as an assistive device.

Trial registration: Not applicable.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12984-022-00994-9.

Robot-assisted training can complement such interventions by providing repetitive, goal-directed, yet engaging movements [1]. These three forms of therapy have been investigated relatively frequently in children with cerebral palsy (CP) [2]. A recent systematic review concluded that the two forms of conventional therapy effectively improve motor function in children with CP [3]. Evidence for the effectiveness of robot-assisted therapy is emerging [3]. In patients with sensorimotor impairments of the hand due to a neurological lesion, fully wearable robotic hand exoskeletons bear the potential to support task-oriented training in the clinic or at home (i.e., as a therapy robot), or to compensate for the loss of function and assist daily-life activities (i.e., as assistive technology) [4]. Soft hand exoskeletons are rapidly emerging due to their inherent safety, less complex design, and increased potential for portability and efficacy [4]. In 2018, the authors of two reviews identified 44 [4] and 45 [5] unique devices, and this number is increasing, as shown by more recent publications (e.g., [6][7][8]).
However, only a few publications have focused on developing such technologies for children, which entails specific challenges such as accounting for children's growth in the sizing of dedicated devices or making the device intuitive and easy to use [9]. For example, one group developed an exoskeleton for the thumb that can actuate the carpometacarpal and metacarpophalangeal joints through ranges of motion required for activities of daily living [10]. Another group developed a finger exoskeleton that assists finger flexion and extension but did not include the thumb [11]. However, various functional tasks require the inclusion of both finger and thumb movements, highlighting the need for devices that assist full hand grasping. No results from studies applying such technology in children have been published, despite these studies being crucial for design adoption by the users [9]. To answer these needs, we developed the pediatric hand exoskeleton (PEXO) [12]. In a previous study, we presented the requirements and design modifications for adapting an adult hand exoskeleton [13] to the unique needs of children with neuromotor impairments, and we made a preliminary validation with a 6-year-old child with stroke [12]. The current paper builds on this work and more extensively evaluates the clinical utility of the current prototype of PEXO in a larger group of patients. In line with Smart [14], we understand clinical utility as a multi-dimensional model that outlines four aspects in practitioners' and patients' judgments: appropriateness, practicability, acceptability, and accessibility. 
In this study, we evaluated three of these aspects: (i) appropriateness, i.e., actual effectiveness but also relevance, including how meaningful the intervention could be in the broader context of clinical decision-making; (ii) practicability, concerning the functionality and suitability of robotic devices for clinical applications; and (iii) acceptability by patients and therapists, to determine whether there are concerns that might affect treatment or practice. We did not investigate accessibility, i.e., costs and cost-effectiveness or availability of the technology, as PEXO is still a research prototype. The specific research questions were: (i) Appropriateness: can we identify children with upper limb impairments who can improve hand capacity and particularly bimanual performance when using PEXO? Based on our clinical experience, we hypothesized that children with little hand function but good proximal arm muscles can benefit from PEXO. Furthermore, we investigated whether children with upper limb impairments can familiarize themselves with the use of PEXO within a reasonable time. (ii) Practicability: can patients put PEXO on independently, and how long does it take to don PEXO? We also wanted to identify any technical issues during training sessions to further improve the design of PEXO, and we reported safety issues. (iii) Finally, we investigated the acceptability of PEXO prototypes by asking participants and the supervising therapist various questions concerning the advantages and disadvantages of PEXO. By combining the insights regarding these dimensions of clinical utility, we formulate recommendations to improve PEXO and pediatric hand exoskeletons in general and pave the way for their successful clinical application.

Participants

We included children and adolescents with brain or peripheral nerve damage resulting in functional limitations of at least one hand.
Both in- and outpatients, between 6 and 18 years of age, were recruited at the Swiss Children's Rehab clinic. Participants had to be able to sit for an hour and understand the tasks of the study protocol. Children or adolescents who were not able to actively flex either their shoulder or their elbow against gravity (manual muscle testing (MMT) score < 3) [15,16] were excluded from the study. All participants and their legal guardians provided verbal consent to participate in the study; parents and adolescents aged 14 years and older provided written consent. Age, sex, most affected hand, dominant hand, and handedness were recorded to characterize the participants. We also noted whether the children had received Botox injections in the upper limbs during the six months before the study. Participants were characterized according to six standard clinical assessments comprising the Manual Ability Classification System (MACS), MMT, Selective Control of the Upper Extremity Scale (SCUES), Hypertonia Assessment Tool (HAT), modified Ashworth Scale (MAS), and Functional Independence Measure for Children (WeeFIM). Trained therapists performed the clinical assessments, except for the WeeFIM, which was scored by trained and certified nurses in the center. The MACS reliably classifies whether and how children handle objects in everyday life. Classifications vary between level I, where children handle objects effortlessly and successfully, and level V, where children do not handle objects at all [17]. MMT was applied to rate upper extremity muscle strength from 0, i.e., no contraction visible, to 5, i.e., movement over the full range of motion against gravity and severe resistance [15,16]. If the patient can perform the movement against gravity covering the whole range of motion, the MMT is 3. The test protocol included standardized starting positions, a demonstration of the test by the therapist, and the active execution of the movement by the participant.
Shoulder and elbow flexion as well as wrist and finger extension were tested. The SCUES measures upper limb selective voluntary motor control [18], which is defined as the ability 'to selectively activate muscles independently of each other in the context of the requirement for voluntary movement or posture' [19]. Shoulder abduction and adduction, elbow flexion and extension, pro- and supination, wrist flexion and extension, and finger flexion and extension are tested. Each movement is scored on an ordinal scale from 0, indicating no selective motor control, to 3, reflecting normal selective motor control. The HAT differentiates between the hypertonia categories spasticity, dystonia, and rigidity (or mixed) and consists of seven items [20]. Items 1, 2, and 6 indicate dystonia, 3 and 4 spasticity, and 5 and 7 rigidity. Each limb is scored separately. Spasticity severity was measured with the MAS, which scores the speed-dependent resistance of moving a joint [16]. In this study, the therapist assessed the MAS of the wrist and finger joints by first moving the joint covering the full passive range of motion at a slow pace, followed by a faster movement. The ordinal scale varies from 0 (i.e., no resistance during passive movement) to 4 (i.e., the affected section is rigid in flexion or extension). The WeeFIM is a valid and reliable instrument assessing the degree of independence on a seven-level scale [21]. The functional assessment includes 18 items covering self-care, mobility, and cognition. The participants were characterized with the WeeFIM total and particularly the self-care score, as the latter contains items reflecting upper limb use in daily activities.

Assistive hand exoskeleton PEXO

The detailed design of the pediatric assistive hand exoskeleton PEXO was presented previously [12], and an overview of PEXO components is shown in Fig. 1A.
In short, PEXO assists full-hand grasping in children with neuromotor impairment by actively supporting flexion and extension of the four fingers (index, middle, ring, and little finger) combined and the thumb separately, using a soft three-layered spring blade mechanism [22]. The thumb of the exoskeleton can be moved to opposition using a passive slider, allowing the users to perform different grasp types relevant for activities of daily living (e.g., power grasp, precision pinch, and lateral grasp). The hand exoskeleton provides sufficient force to grasp objects weighing up to 0.5 kg and closes and opens within 1 s. PEXO consists of a hand module (i.e., the actual exoskeleton) and a back module. The sleek hand module (weight < 105 g, maximum 1.5 cm added height on the back of the hand) is donned on a user's hand using a Velcro glove to fixate the exoskeleton on the fingers. Two straps around the wrist and one strap around the palm securely fix the exoskeleton. The back module (weight 492 g) contains the electronics, motors, and battery to power the hand module via a cable-based transmission system [23]. This design reduces the weight carried on the hand. The entire hand exoskeleton system is fully wearable since the back module can be worn as a backpack or attached to a wheelchair, allowing the user to move around freely (see also Fig. 1B, C). While the hand module of PEXO was explicitly optimized for the application in children in terms of size, weight, design, and functionality [12], the back module remained unchanged from the prior developed RELab tenoexo for adults with neurological hand impairment after stroke or spinal cord injury [13]. This commonality, combined with the possibility of detaching the transmission system from the hand module, allowed for using hand modules of different sizes with only a single back module. 
Hand modules were prepared in three different sizes for the left and right hand, covering the hand sizes of children aged 6 years, 7 to 8 years, and 9 to 12 years, based on anthropometric data. The hand module of the adult RELab tenoexo was used by adolescents between 13 and 18 years of age. Large-diameter pushbuttons were used in this study to trigger the opening and closing movement of PEXO. An additional control unit allowed therapists or other caregivers to adjust the supporting force exerted by the hand exoskeleton.

Measurement procedures and assessments

The measurements took about two hours and were paused for a break to avoid fatigue of the participants. An experienced occupational therapist (JL) and a research engineer (JD) conducted the measurements. The order of the tests and the instructions were standardized. First, the patient descriptors (MMT, HAT, MAS, and SCUES) were assessed (without PEXO). To determine the most appropriate PEXO size, participants had to place their hands on wooden stencils, which were created in accordance with the available hand module sizes, based on age-appropriate standard anthropometry data of children.

[Fig. 1 caption, continued: Large pushbuttons or a control unit can be used to trigger the opening and closing of PEXO. B ID05, female, 7.5 years old, 6 months after being diagnosed with rhabdomyolysis, performing the Shape Completion task with the Smart Pegboard from Neofect. C ID03, male, 15.7 years, 2 months after stroke, opening a bottle, as part of the Assisting Hand Assessment, while PEXO assisted in holding the bottle. We received permission from the children and parents to present these pictures.]

Subsequently, the participants were asked to put on the back module, the glove, and the PEXO hand module as independently as possible. We recorded the time needed and whether the children needed assistance to put the separate components on. Next, the participants chose a location on the table or the body (e.g., see Fig.
1C) that was easy for them to reach, where the pushbutton to close and open PEXO was placed. Then, while wearing PEXO, the participants performed a standardized assessment with the Smart Pegboard (Neofect, Munich, Germany). This instrumented pegboard is usually used therapeutically to practice reaching, grasping, and transporting movements and fine motor skills. The pegboard includes animated games on an electronic perforated plate with light signals (Fig. 1B). Patients have to insert pegs, which can be of different dimensions, in the holes that are illuminated. For this study, we used pegs (dimensions: length 4 cm, diameter 5 mm) with a knob (diameter 10 mm) on top, allowing a lateral or tip pinch. The time needed by the participant to insert a maximum of eleven pegs was measured, with the maximum test duration being set to 120 s. The number of pegs positioned in the appropriate hole [x/11] was recorded if the participants could not insert all pegs within 120 s. Afterward, the participants had the opportunity to practice the use of PEXO on the pegboard. The practice time was recorded. Then, the pegboard assessment was repeated, once with and once without PEXO. The grip strength and lateral pinch strength were measured with and without PEXO using the Jamar dynamometer and the finger closure gauge [15,16,24]. Reference data for typically developing children and adolescents are available for comparison [15,16,24]. We then investigated whether participants could perform various hand movements (i.e., lateral pinch, tip pinch, and fist closure). When assessing the ability to perform various hand movements, the child manually repositioned the PEXO thumb if possible. For those children who were unable to do so, the therapists assisted the child. Furthermore, they performed two functional assessments, the Assisting Hand Assessment (AHA) and the Box and Block Test (BBT), with and without PEXO. 
The kids-AHA is a test procedure for children between 18 months and 12 years of age and assesses how effectively a child uses its impaired upper extremity (assisting hand) in bimanual tasks. For the analysis, the participant is videotaped in a play situation. Afterward, a trained and certified occupational therapist assesses the spontaneous use of the assisting hand for 20 items. Each item is scored on a scale from 1 to 4 (1-does not do, 2-ineffective, 3-somewhat effective, and 4-effective). Rated items are, for example, whether participants initiate the use of the assisting hand themselves, open a bottle (Fig. 1C), stabilize objects, or whether they reach for objects with the assisting hand [25]. The AHA provides raw scores but also scaled scores (0-100), which are derived from Rasch-analysis and are interval-scaled. The BBT is a capacity test measuring unimanual gross dexterity of the arm and hand. Within 60 seconds, the participants need to move as many blocks as possible from one compartment of the box to the other. Age-appropriate norm values exist for children and adolescents [26,27]. In line with the International Classification of Functioning, Disability, and Health-Children and Youth Version (ICF-CY), strength as assessed by the Jamar dynamometer is a body function, while the pegboard and the BBT are capacity measures (activity domain), and the kids-AHA is a performance measure (activity domain) [28,29]. After completing the tests, the therapist supported the participants in doffing PEXO. The participants were asked to rate six statements concerning the training with PEXO on a Likert Scale from 1 (not at all) to 5 (very much). The statements are listed in Table 3 (P1 to P6). Additionally, the participants were questioned regarding pressure points while wearing PEXO, potentially leading to discomfort.
If the participants experienced discomfort, these areas were located and the participants were asked to rate the intensity of the caused pain on a Visual Analogue Scale from 0 (no pain) to 10 (worst imaginable pain) [30]. Finally, the participants were asked to give feedback on what they liked and disliked about the therapy with PEXO. The therapist filled in a custom-made questionnaire consisting of five statements (T1 to T5 in Table 3) and answered three open questions: "If the child was not able to perform a goal-oriented training, please specify why this was not possible", "What was your general impression of training with PEXO for this specific child?", and "Were there any technical problems? If yes, please describe them in detail and indicate their numbers." We rated the technical errors by their number of occurrences and severity, comparable to a retrospective failure mode, effect, and criticality analysis (FMECA) [31,32]. The following severity levels were defined:

3. Issue allowing successful task completion, but leading to a major delay (> 1 min) and/or requiring support from a caregiver.

4. Critical issue requiring intervention by the study coordinator to avoid potential harm to the participants and/or preventing task completion due to total failure of the device requiring technical maintenance.

Finally, we noted any PEXO-related adverse events.

Outcomes and statistical analyses

Appropriateness
We quantified bimanual hand performance with the AHA scaled score and hand capacity with the BBT. Due to the small sample size, the nonparametric Wilcoxon signed-rank test was performed to determine differences between the conditions with versus without PEXO. The Z-statistic value of the Wilcoxon signed-rank test and the p-value were reported.
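The paired with-versus-without-PEXO comparison described above can be sketched as follows. This is a minimal illustration with invented scores (not the study's data), using the normal approximation of the Wilcoxon signed-rank test to obtain a Z statistic; tie and continuity corrections are omitted for brevity, and in practice a library routine such as scipy.stats.wilcoxon would be used.

```python
# Minimal sketch of the Wilcoxon signed-rank Z statistic for paired samples.
# The AHA-style scores below are hypothetical, for illustration only.
import math

def wilcoxon_z(x, y):
    """Approximate Z statistic for paired samples x and y (normal approximation)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    n = len(diffs)
    # Rank the absolute differences, averaging ranks over ties.
    abs_sorted = sorted(abs(d) for d in diffs)
    rank_of = {}
    for value in set(abs_sorted):
        positions = [i + 1 for i, v in enumerate(abs_sorted) if v == value]
        rank_of[value] = sum(positions) / len(positions)
    w_plus = sum(rank_of[abs(d)] for d in diffs if d > 0)  # sum of positive ranks
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mean) / sd

aha_with = [48, 36, 60, 60, 41, 52, 37, 63, 55, 41]     # hypothetical scaled scores
aha_without = [40, 35, 52, 61, 30, 44, 38, 55, 47, 33]
print(round(wilcoxon_z(aha_with, aha_without), 2))  # positive Z: higher scores with PEXO
```

A positive Z here indicates that the paired differences tend to favor the with-PEXO condition.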
As a first step in identifying children who could improve bimanual performance and unimanual hand capacity when using PEXO, the non-parametric Spearman's correlations (ρ) were calculated between the differences in AHA scaled scores (i.e., with PEXO minus without PEXO) and various patient characteristics and functional measures. We interpreted the magnitude of the correlation coefficients as follows: 0-0.25 (no or little relationship), 0.25-0.50 (fair degree), 0.50-0.75 (moderate to good relationship), 0.75-1.00 (very good to excellent). In addition to the correlation analyses, we calculated a dichotomous variable indicating an improvement in bimanual performance yes/no. Based on the standard error of measurement calculated for the intra-rater reliability (raw score: 1.2 points), we estimated the smallest detectable change (2.77 × 1.2 = 3.3) and made a conservative estimation of the smallest detectable change for the scaled scores (i.e., 7 points) using transformation curves published by the authors of the AHA [33], i.e., we interpreted an improvement of 7 points or more when wearing PEXO as a conservative estimate of improved bimanual performance. To identify characteristics and functional measures that differed between the children who could improve bimanual hand performance when wearing PEXO or not, chi-square tests were used to determine differences in dichotomous measures and Wilcoxon signed-rank test to determine differences in ordinal or interval-scaled measures. Furthermore, Receiver Operating Characteristics (ROC) analyses were performed to determine the level of sensitivity and specificity with which the ordinal and interval-scaled measures could distinguish between participants who performed better when wearing PEXO (≥ 7 points improvement in scaled AHA scores) and those who did not. To investigate familiarization with using PEXO, data from the Smart Pegboard was used (number of correct placements from 11 pegs and time needed to accomplish the task). 
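The smallest-detectable-change reasoning above (2.77 × SEM, then a conservative mapping to 7 scaled-score points) can be made concrete in a short sketch; the function name and example scores are illustrative, not taken from the study.

```python
# Sketch of the smallest detectable change (SDC) logic described above.
# SDC = 1.96 * sqrt(2) * SEM (approx. 2.77 * SEM); with SEM = 1.2 raw-score
# points this gives about 3.3 points, conservatively mapped to 7 scaled points.
import math

SEM_RAW = 1.2  # intra-rater standard error of measurement (raw score)
sdc_raw = 1.96 * math.sqrt(2) * SEM_RAW
print(round(sdc_raw, 1))  # → 3.3 raw-score points

SDC_SCALED = 7  # conservative estimate on the 0-100 scaled score

def improved(aha_with: float, aha_without: float) -> bool:
    """Dichotomous 'improved bimanual performance' per the >= 7-point rule."""
    return (aha_with - aha_without) >= SDC_SCALED

print(improved(55, 46))  # → True: +9 points exceeds the 7-point threshold
```

The dichotomous variable produced by `improved` is what the chi-square and ROC analyses below operate on.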
Differences in the pegboard scores were compared between the pre- and post-practice time points. The post-practice conditions with versus without PEXO were further compared. While α was generally set at 0.05 for all comparisons, we set it at 0.025 for these multiple comparisons.

Practicability
We recorded the time needed for donning the components of PEXO and whether this could be done independently by the participant. For the time needed to don the back module, the glove, and the hand module, the median and interquartile range (IQR) were calculated and the minimum and maximum values were reported. Furthermore, the number and nature of technical and safety issues were described.

Acceptability
The descriptive values of the Likert scores that the participants and the therapist provided to the various questions were reported.

Results
The characteristics of the participants can be found in Table 1. The median age was 11.4 years (IQR 9.4-16.1 years). None of the patients had received Botox during the preceding months. ID4 was excluded from the data analysis as the measurement protocol could not be completed due to a malfunction of the PEXO electronics, preventing the actuation of PEXO. In ID2, a cable of the transmission system ruptured during the measurements before the AHA and BBT could be completed with PEXO (i.e., there are no data available for AHA, BBT, lateral pinch strength, and grip strength with PEXO). The BBT did not differ significantly between the conditions (without PEXO: median 3 blocks (IQR: 0-33.3); with PEXO: 5 blocks (IQR: 1.5-6.5); Z = −0.93; p = 0.35; Fig. 2B). Participants with high BBT scores performed worse with PEXO than without. Three participants (ID1, ID7, and ID10), who could not transport a single block without PEXO, could move 5, 3, and 5 blocks with PEXO, respectively. Patients ID1, 6, 7, 9, and 10 improved in AHA scaled scores by 7 points or more (i.e., dichotomous improvement: yes).
The results from the ROC analyses show that most measures (MACS, MAS of the wrist and fingers, MMT values for the shoulder and elbow flexors and wrist extensors, and Jamar and lateral pinch dynamometry measurements) could not distinguish significantly between those participants who improved in bimanual performance when wearing PEXO and those who did not. The only measures which could do so were the BBT, the SCUES, and the MMT of the finger extensors (Fig. 2E-G). Five out of ten participants (i.e., no data from ID4) could perform a lateral pinch without PEXO, while all participants could perform it with PEXO. Five participants could perform a tip pinch when not wearing PEXO, while seven participants could perform a tip pinch when wearing PEXO (p = 0.48). Eight out of ten participants could close the fist without PEXO, but no participant could close the fist when wearing PEXO. For the lateral pinch and fist closure scores, chi-square tests could not be calculated. Chi-square tests showed a tendency that patients not able to perform a lateral or tip pinch without PEXO improved bimanual performance (as assessed by the AHA scaled scores) when wearing PEXO (for both: Chi-square = 2.723, p = 0.099). However, such a tendency was not observed for those who could not initially make a fist (Chi-square = 2.057, p = 0.15). When investigating familiarization with the use of PEXO, participants practiced the pegboard task while wearing PEXO for a median duration of 5 min.

Practicability
The median time needed to don the back module (n = 11) amounted to 62 s (IQR: 40-120 s) and varied between 20 and 150 s. No child was able to put the back module on without assistance. The median time required to don the Velcro glove was 150 s (IQR: 30-240 s) and varied between participants. The technical issues are summarized in Table 2 and elaborated on in Additional file 1: Technical issues and proposed solutions.
The most critical failures were identified to be the robustness and reliability of the transmission system and electronics (issue IDs 2.2 and 2.3, with a severity level of 4) and the attachment system (issue IDs 1.1 and 1.2, with severity levels of 3 and 2, respectively, at frequent occurrence).

Acceptability
The responses of the participants and the occupational therapist are shown in Table 3. Responses varied considerably between participants and questions. Interestingly, for questions P1 to P5, we found good to very good correlations between the subjective impressions of the participants and the difference in AHA scaled scores (with minus without PEXO conditions). There was only a fair relationship for question P6. Concerning the open questions (Table 4), participants appreciated that they could do more with the hand when wearing PEXO (ID1), that opening and closing the hand by pressing a button was possible (ID3), that one could play with it (ID5), that it was fun (ID8), and that the hand felt alive again and its function improved (ID10). One participant (ID1) disliked that she needed patience while donning PEXO and mentioned that it should be softer. Many participants commented that wearing the glove resulted in a warm and sweaty hand (ID2, 5, 7, and 8). Furthermore, participants mentioned that individual movements of the fingers were not possible (ID3) and that the back module became heavy over time and pressed on the shoulder (ID5). Five patients reported the sensation of pressure on the skin. VAS scores indicating the level of pain were very low (0-2) for four patients. ID5 reported pain at the dorsum of the hand (VAS 7.5) and at the little finger and thumb (VAS 5). No other PEXO-related adverse events were noted. We asked the participants whether they could think of activities that they could perform with PEXO.
They mentioned activities such as closing (the zipper of) a jacket (ID1 and 9); holding, opening, or drinking from a bottle (ID1, 3, 7); opening crayons or lip-gloss or pushing a shopping cart (ID1); tying shoes (ID3, 9, and 10); brushing teeth (ID3, 8, 9); holding a knife, cutting food, or eating with cutlery (ID7, 8, 9, 10); holding a book or a mobile phone, opening a door, or playing video games (ID8); or holding a sharpener (ID10). The responses of the therapist varied widely between participants (Table 3). The responses to statement T3 showed that the therapist's opinion of which patients could perform a goal-oriented training with PEXO correlated well with the differences in AHA scaled scores, which were calculated a posteriori, i.e., these scores were not available at the time the therapist responded. The correlations with the other questions were negative and of moderate size only. The therapist further responded in the open questions that PEXO would be better adjustable for several children if the wrist of PEXO could be flexible. For children with wrist and finger contractures, a flexible wrist joint would make it easier to don PEXO. Generally, a flexible wrist joint could make grasping movements more physiological.

Discussion
While others have developed soft hand exoskeletons to support thumb [10] or finger [11] motion in children, this study presents the first evaluation of the appropriateness, practicability, and acceptability of a whole-hand wearable exoskeleton in a pediatric user group. We consider such technology essential, particularly as children are a very vulnerable group. This technology, when used at a critical phase of a child's development, could improve both the quality of life and the long-term health prospects [9]. Furthermore, adult patients with stroke participating in a similar trial frequently reported that the exoskeleton should become available in smaller sizes to fit small hands [34].
This shows the need for reducing the size of such technologies to include more patients. Concerning the appropriateness results, we identified those children with upper limb impairments who could improve bimanual performance while using PEXO, i.e., those with low BBT, SCUES, and finger extensor values. Results from the Smart Pegboard showed that the participants needed only a short practice period to improve their handling of PEXO. Concerning the practicability, it was noted that most children needed help with donning. Furthermore, we identified and rated the technical issues and collected feedback from participants and the therapist on the design and robustness of PEXO. Interestingly, the participants' subjective acceptance of PEXO and the therapist's impression of whether a particular child could perform a goal-oriented therapy with PEXO correlated well with the objective improvement in bimanual performance caused by PEXO. By combining the insights regarding these dimensions of clinical utility, we formulated recommendations to improve PEXO and paved the way for its successful clinical application.

Identifying participants who could benefit from PEXO
This study showed that patients with poor upper limb function could benefit from a wearable soft exoskeleton. This is in line with results from adults with spinal cord injury, where participants with lower baseline motor function received significant benefits from soft robotic glove assistance [35], and participants with loss of hand function improved while wearing a soft robotic glove [36]. However, an overall classification instrument like the MACS did not assess impairments in sufficient detail to select patients who might benefit from using PEXO. The most promising classifiers seem to be the BBT, a simple, practical, and reliable measure of gross manual dexterity; the SCUES, a measure quantifying selective voluntary motor control; and the manual muscle testing of the finger extensors.
Indeed, lack of finger extension was one of the inclusion criteria in the study of Yurkewich and colleagues [34], and these adult patients with stroke showed considerable improvements in hand use when wearing the Hand Extension Robot Orthosis (HERO) Grip Glove compared to no exoskeleton. While the results of our ROC analyses propose specific cut-off values to identify patients who might be suitable for training with PEXO, we emphasize that these numbers should be interpreted cautiously due to the small number of participants that were involved. At the current stage, we recommend that more extensive trials are needed to investigate whether these assessments prove valuable in selecting appropriate patients for training with wearable exoskeletons like PEXO. In patients with better hand and arm function, PEXO seemed to slow down the movements (e.g., Fig. 2B) due to the required coordination between positioning the grasping hand and the hand triggering PEXO by pressing the button, and also the time PEXO needs to close. Similar results were observed in studies with adult users. Correia and colleagues reported that the button control was an effective intention detection method [35]. They found that higher-functioning adult patients with spinal cord injury were also slowed down in tasks that they could perform without wearing the glove, whereas lower-functioning individuals were challenged by engaging both limbs simultaneously to hold an object with one hand and press a button with the other [35]. Yurkewich et al. [34] reported that hand exoskeletons controlled by a button or an automatic mode based on inertial measurement unit data decreased BBT scores in adults with higher baseline scores. The number of blocks transferred within one minute was in a comparable range to the results obtained in our study with button control (mean 2.9 blocks in [34]). Participants in [34] preferred the automatic mode, performing slightly better than button control (3.3 blocks). 
Particularly in children and adolescents, a technology that might slow them down is unlikely to receive high acceptance. Researchers developing assistive technologies for children should consider putting additional effort into designing control modalities that are robust, intuitive, and easy for children to use. We plan for our subsequent evaluations to investigate the use of several control systems: (i) a myoelectric control system (e.g., [37,38]), (ii) a sensor glove that embeds the user input directly into the movement through contact detection with the object to grasp [39], and (iii) voice control (e.g., [40]). Such control systems might be more intuitive, speed up the control of the hand, and be beneficial, particularly for bimanual tasks. The range of motion supported by PEXO is not sufficient to fully close the hand to a fist. This could partly explain why we found no differences during dynamometer testing, where almost full closure is needed to exert pressure on the dynamometer. As a result, PEXO could not sufficiently assist the subjects in the grip dynamometer task. The selection of patients could also explain this finding: several participants could close a fist without wearing PEXO, but the number of those who could not do so was small, which might have affected the statistical power. Nevertheless, it seems to be a limitation of the current prototype, and increasing the range of motion and force production would be desirable. A full hand closure is not critical for most daily-life relevant tasks, as patients would use PEXO to hold objects that require neither a complete closure of the hand nor maximal grip strength. Overall, our data show that in children and adolescents, bimanual hand use increased when wearing PEXO. Interestingly, even though dynamometer assessments are highly reliable, they did not seem to be good estimators for identifying patients who could improve bimanual performance with PEXO, unlike, for example, the MMT of the finger extensors.
This difference can have several causes. For example, it is known that patients with neurological lesions frequently have more difficulties in opening the hand (i.e., extending the fingers) than in closing it. Furthermore, active finger extension has previously been identified as a predictor of functional improvement in adult patients with stroke [41][42][43].

Familiarization with the use of PEXO
Immediate improvements in grasping function have been observed in several studies: in an adult individual using the RELab tenoexo after suffering a spinal cord injury [13], in adults with spinal cord injury wearing a textile-based soft robotic glove [35,36], and in adults after stroke wearing the HERO Grip Glove [34]. While we also noted immediate improvements in certain tests, the performance of children with some hand function worsened in tests involving a timed component when using PEXO. For example, the Smart Pegboard task showed that the number of correctly inserted pegs improved after the practice period, while the time remained the same. We assume that patients could achieve increased precision with their hand without necessarily getting faster. Certain participants got faster in subsequent trials but were still unable to complete the task of inserting all 11 pegs in under 120 s. Indeed, several relevant aspects of the task might require some time to get used to, for example, opening and closing PEXO using the pushbutton or the correct positioning of the lower arm and wrist so that, by closing PEXO, the pegs can be grasped. Overall, the participants required a median practice time of around 5 min to improve their performance with the hand exoskeleton. This duration seems acceptable and lies in the range of training times reported in adults after stroke [34,44]. However, the practice time with the Smart Pegboard varied largely between participants (from 1.5 to 15 min).
While we had initially planned 20 min to practice hand opening and closing with PEXO, we noticed that some children learned this task within minutes. To avoid losing motivation and compliance due to too many practice sessions, the therapist decided to continue with the protocol on an individual basis, i.e., when the child could perform basic grasping tasks with PEXO. The participants had very diverse motor and cognitive impairments and were at different stages of motor development, which influenced the time needed to learn new tasks differently in each child. We expect that a more intuitive control system will speed up the performance of tasks and will contribute to an even higher acceptance level in participants.

Practicability and suggestions for improvement
The donning process is critical to ensure unrestricted use of assistive devices. In this study, participants could not don all of the components independently, which is in line with other studies performed in adult patients with spinal cord injury (e.g., [35]) or stroke (e.g., [34]). In the latter study, participants reported the lowest satisfaction scores for ease of donning. While help with donning is readily available during one-to-one therapy sessions, the individual using PEXO depends on assistance from another person in daily-life situations. The participants of this cross-sectional study were exposed to the technology for the first time, and we expect that participants can improve their donning capabilities with practice. Further, increasing the adaptability of the assistive device to individual needs will not only benefit the donning process but also improve the functionality and wearing comfort. Some participants experienced pressure points, and the thumb position did not fit perfectly. Modular and adaptable designs for children's hand exoskeletons are needed to account for the fact that young patients will grow, and the devices need to be adjusted to the changing anthropometrics over time.
PEXO currently allows no movement in the wrist joint. The therapist commented that a flexible wrist could simplify donning and doffing and make grasp movements more physiological. Indeed, Valevicius and colleagues showed that in healthy adults performing a cup transfer task that included reach, grasp, transport, and release phases, wrist flexion/extension varied significantly [45]. For example, when the participants grasped the cup from the top at the rim and moved the cup, the wrist showed a mean peak flexion angle of 45° combined with a peak ulnar deviation angle of 28°. However, when moving the cup while holding it from the side, the wrist showed a mean peak extension angle of 33° and a radial deviation of 9°. This example underlines that wrist movements play an important role in specific tasks, and restricting movement of that joint will affect the kinematics of the other upper limb joints (e.g., leading to compensatory movements). However, as children with CP show different trunk and upper limb kinematics during reach-to-grasp movements compared to typically developing children (i.e., increased trunk movements, reduced shoulder elevation, elbow extension, and supination, and increased wrist flexion) [46], further research is needed to verify whether adding wrist flexion will make multijoint movements more physiological in the target group. Based on the technical issues that occurred during the sessions and the comments from the participants and the therapist, we identified critical components that need to be improved. This includes the transmission system and attachment of the hand exoskeleton with the glove. Accordingly, we formulated recommendations on how these could be resolved (Additional file 1: Technical issues and proposed solutions). We will investigate the clinical utility of several of these improvements in future studies. 
Acceptance by participants and therapist
It is interesting to see that the participants' subjective impression of the acceptability of the technology correlates well with their improvements in bimanual performance. Statements such as 'With PEXO, I have better control over my hand activities' or 'I would like to continue training with PEXO' are meaningful indicators of the potential of such technologies. Also, the therapist's expert opinion on whether 'The child could carry out a goal-oriented training with PEXO' could be validated by objective data. The most commonly mentioned 'dislike' was that the glove resulted in a warm, sweaty hand. Similar concerns were highlighted in studies with adults employing a full-hand soft robotic glove [44]. If participants are expected to wear such technologies for extended periods of time, designers need to consider these comments early during the development stage. Otherwise, the comfort of wearing PEXO seemed sufficient based on the participants' feedback in the custom-made questionnaire. While the young patients reported several daily-life activities that they could perform with PEXO, realistically, not all of these activities can be performed with the current version of PEXO (e.g., tasks that require high dexterity, such as tying shoes or playing video games). Some participants reported pressure from the glove and exoskeleton at specific locations. Pressure might result from incorrect sizing of PEXO for children, highlighting the need for tailored exoskeletons fitting the specific hand size for optimal adoption of such technologies. However, no skin lesions were observed, and the pain levels reported for these locations were generally low (≤ 2/10). Only one participant reported higher VAS values at locations where pressure was perceived (7.5 and 5 out of 10). However, at the same time, this patient rated the comfort of wearing PEXO as good (4/5).
This highlights the challenges when evaluating such technologies in children and adolescents.

Methodological considerations
The therapist in this study was involved in the testing and, therefore, not blinded to the clinical assessments and the participants' responses. One could argue that this could have influenced some correlations, particularly those involving the AHA scaled scores. However, the AHA scoring and analysis were performed after the test session from video recordings. Furthermore, the occupational therapist was initially unaware of the details of the planned data analyses. Therefore, the lack of blinding is unlikely to have introduced a large bias. Despite our sample being already quite heterogeneous, all participants had low spasticity scores for the wrist or fingers. Therefore, we cannot conclude from this study whether PEXO could provide sufficient force to overcome higher spasticity levels. Besides, as the proximal muscles were relatively strong, we do not know whether children or adolescents with weaker proximal muscles could benefit from PEXO to improve bimanual performance. The kids-AHA has been developed and validated for children between 18 months and 12 years with unilateral CP or a plexus brachialis lesion. However, it was applied to all children and adolescents in this study to improve comparability between the tasks and scorings. We recruited several children who wore PEXO sizes 2 or 3, one adolescent who wore the adult tenoexo, and one participant who wore the smallest version. Recruiting children who could wear the smallest version was more complicated because we only had a right-hand version. Nevertheless, we did not find any indication that the uneven representation of PEXO sizes might have affected our findings.

Conclusion
PEXO is a safe, wearable soft exoskeleton developed for children and adolescents with minimal hand function.
PEXO can improve bimanual hand performance and enable the lateral and tip pinch in participants who otherwise could not perform these tasks independently. We identified several factors (dexterity, arm and hand selective motor control, and finger extensor strength) that, in the long-term, might prove meaningful in identifying patients who might be suitable for training with or using a wearable exoskeleton for the hand. PEXO was well-accepted by patients, who can increase their bimanual performance when using it. The subjective opinion of the patient and an experienced therapist after a 2-h training seem additional good indicators of whether PEXO can improve functional abilities. The current PEXO prototype has several design shortcomings that need to be addressed before being tested under less standardized, more realistic daily-life conditions as an assistive device. Still, this study proved the feasibility of PEXO regarding its clinical application and highlighted its potential benefit for children with severe upper limb impairments.
The added burden of depression in patients with osteoarthritis in Japan

Objectives: In Japan, osteoarthritis (OA) is a leading source of pain and disability; depressive disorders may limit patients' ability to cope with OA. This study examined the incremental effect of depression on the relationship between OA and health-related outcomes. Methods: Data from the 2014 Japan National Health and Wellness Survey (N=30,000) were collected on demographics, OA characteristics, and health characteristics of patients with OA. Depression symptoms were measured, and outcomes included health-related quality of life (HRQoL), work productivity and activity impairment, and health care resource utilization. Generalized linear regression models controlling for confounders were used to predict health-related outcomes. Results: Of 565 respondents with OA, 63 (11%) had symptoms of moderate or severe depression. In adjusted models, HRQoL remained lower among respondents with than without depression (p<0.001). Higher levels of presenteeism (mean±SE: 50%±9% vs 23%±2%) and activity impairment (mean±SE: 57%±7% vs 30%±1%) were observed for patients with than without depression (p<0.001); however, there were no differences for absenteeism (p=0.534). Patients with depression (vs no depression) reported more health care provider visits, emergency room visits, and hospitalizations (for all, p<0.001). Conclusion: Depression heightens the health-related burden of OA. Greater attention to depression among patients with OA is warranted.

Introduction
Osteoarthritis (OA) is the most prevalent form of arthritis in Japan and worldwide. [1][2] In Japan, knee-related OA is found in 43% of men and 62% of women 40 years and older, while shoulder OA affects 17% of patients. 3,4 OA comprises an estimated 75% of arthritis cases; it is one of the 10 leading sources of disability globally and one of the main sources of disability in older age. OA affects over half of the population aged 65 years and older.
5 OA is caused by mechanical stress and inflammation on the joints resulting in a breakdown of joint cartilage and the underlying bone. Symptoms often include joint pain, swelling, and stiffness, which can further lead to physical limitations and disability, sleep disturbance, and fatigue. OA is not only associated with chronic pain and disability but it is also related to poor mental health, with depressive disorders and anxiety being common among patients with OA. 11,14 Specifically, a systematic review showed that 20% of patients with OA experience depression or anxiety. [11][12][13] Because the weight-bearing joints (eg, knee, hip, or ankle) tend to be most affected, patients with OA may experience restrictions in their ability to perform daily activities, which is associated with depression. [6][7][8] The relationship between musculoskeletal pain and mental health has also been reported in recent studies from Japan in which more severe chronic lower back pain was associated with lower quality of life and greater health care resource use. [9][10][11][12][13] Overall, the connection between OA and mental health has important implications, as depression has been linked to greater pain sensitivity and poorer coping mechanisms, which can subsequently interfere with the ability to successfully manage OA symptoms. 5,15,16 OA has been identified as a leading source of pain and disability in Japan. 13 The association between depression and OA and the negative effect of depression on a patient's ability to successfully cope with OA have been well established. However, there is a scarcity of literature on patients with OA in Japan who also experience depression. We hypothesize that in Japan, the incremental effects of depression on patients with OA are represented in the increased health-related burden. 
This objective was accomplished by documenting differences in patient characteristics, health-related quality of life (HRQoL), and healthcare resource use between patients with OA stratified by the presence of depressive symptoms.

Methods

Sample

For this retrospective observational study, data were collected from the 2014 Japan National Health and Wellness Survey (Kantar Health, New York, USA) as reported previously. 9,17 NHWS respondents were recruited through voluntary survey panels, with sampling stratified by gender and age to reflect the demographic distribution of the Japanese general adult population, as reported in the US Census International Database. 18 Potential respondents for the NHWS were identified through the Lightspeed Research general panel. A convenience sample was used with an attempt to approximate the age and sex distribution of the adult population in Japan. All respondents who completed the online informed consent form were eligible to complete the survey. No ethical review was undertaken specifically for the analysis of the anonymous data presented in this report. All data were self-reported, and missing data were reported as "declined to answer". Previous studies have shown that the NHWS Japan data are similar in demographic composition to the Japan adult population and that disease-related characteristics for OA and for other conditions were similarly distributed. [19][20][21] Of the total NHWS sample (N=30,000), 565 respondents reported receiving an OA diagnosis from a physician and were included in the current study.

Measures

Depression symptoms

Depression symptoms were measured using the Patient Health Questionnaire (PHQ-9), 22 a validated scale for classifying the severity of depressive symptoms over the last 2 weeks. According to the PHQ-9, depression symptom severity is categorized as scores of 0-4=none, 5-9=mild, 10-14=moderate, 15-19=moderately severe, and 20-27=severe.
The scale evaluates the frequency of anhedonia, depressed mood, sleep disturbance, lack of energy, appetite disturbance, negative self-feelings, difficulty concentrating, psychomotor retardation or agitation, and thoughts of self-harm. A single-item measure of the interference of these symptoms was also included. For this study, respondents who scored ≥10 (the cutoff associated with symptoms of moderate depression) were considered to have exhibited depressive symptoms, regardless of whether they indicated a diagnosis of depression, and respondents scoring <10 (associated with minimal or mild depression symptoms) were considered not to have symptoms of depression; this value has shown good sensitivity and specificity for major depression in previous research. 22

Health-related characteristics

Health-related characteristics that were assessed included body mass index (calculated from height and weight), cigarette smoking (never or former vs current), alcohol use (any vs no alcohol), and vigorous exercise in the past month (yes vs no). Charlson Comorbidity Index (CCI) scores were also included. The CCI is a summary weighted index of the presence of the following conditions: HIV/AIDS, metastatic tumor, moderate or severe liver disease, lymphoma, leukemia, any tumor, moderate/severe renal disease, hemiplegia, diabetes, mild liver disease, ulcer disease, connective tissue disease, chronic pulmonary disease, dementia, cerebrovascular disease, peripheral vascular disease, myocardial infarction, congestive heart failure, and diabetes with end organ damage. 23 The greater the total index score, the greater the comorbid burden on the patient. CCI was used in this study as a categorical variable (0, 1, 2+).
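As a concrete illustration, the PHQ-9 severity bands, the ≥10 symptom cutoff, and the collapsed CCI categories described above can be written as small helper functions (a minimal sketch; the function names are ours, not the study's):

```python
def phq9_severity(score):
    """Map a PHQ-9 total (0-27) to the severity bands listed above."""
    if not 0 <= score <= 27:
        raise ValueError("PHQ-9 total must lie in 0-27")
    if score <= 4:
        return "none"
    if score <= 9:
        return "mild"
    if score <= 14:
        return "moderate"
    if score <= 19:
        return "moderately severe"
    return "severe"


def has_depression_symptoms(score):
    """Study cutoff: >= 10 (moderate or worse) counts as depressive symptoms."""
    return score >= 10


def cci_category(cci):
    """Collapse a Charlson Comorbidity Index score into 0, 1, or 2+."""
    return "2+" if cci >= 2 else str(cci)
```

Note that under this cutoff a score of 9 ("mild") falls in the no-symptoms group while 10 ("moderate") falls in the symptoms group, exactly as the study's dichotomy requires.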
OA-related characteristics

Measures of OA-related characteristics included self-reported length of diagnosis (in years), number of joints affected, severity of arthritis (mild, moderate, or severe), frequency of problems with arthritis (daily, 4-6 times a week, 2-3 times a week, once a week, 2-3 times a month, or once a month or less often), and use of prescription medication for arthritis (yes vs no). 17

Health-related quality of life

HRQoL was assessed with the revised validated Medical Outcomes Study 36-Item Short Form Health Survey version 2 (SF-36v2). The instrument contains 36 questions and measures eight health concepts (physical functioning, role physical, bodily pain, general health, vitality, social functioning, role emotional, and mental health), according to Japanese-based population norms (mean=50, SD=10). Mental component summary (MCS) and physical component summary (PCS) scores with US-based norms (mean=50, SD=10) were also used. In addition, the SF-36v2 instrument was used to generate a single index of health state utilities, namely the Short-Form 6-Dimension (SF-6D). 24 Scoring of the SF-6D takes into consideration six dimensions of the SF-36: physical functioning, role participation, social functioning, bodily pain, mental health, and vitality. The SF-6D index has interval scoring properties and yields summary scores from 0.0 (worst health state) to 1.0 (best health state), with an empirical floor of 0.3. Higher scores on these measures indicate better HRQoL.
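The norm-based scoring mentioned above (mean=50, SD=10 in the reference population) is a linear T-score transform. A generic sketch is below; the population mean and SD are assumed inputs here, not the published SF-36v2 norms:

```python
def norm_based_score(raw, pop_mean, pop_sd):
    """Linear T-score transform: the reference population maps to mean 50,
    SD 10.  `pop_mean` and `pop_sd` are reference-population parameters
    supplied by the caller (the actual SF-36v2 norms are published
    separately and are not reproduced here)."""
    if pop_sd <= 0:
        raise ValueError("population SD must be positive")
    return 50.0 + 10.0 * (raw - pop_mean) / pop_sd
```

For example, a raw scale score one population SD above the reference mean maps to a T-score of 60, which is why group differences on these scales can be read directly in SD-of-the-population units.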
Work productivity and activity impairment

Work productivity and activity impairment was measured using the validated Work Productivity and Activity Impairment-General Health questionnaire, 25 which measures four domains: 1) absenteeism (the percentage of work time missed within the past week due to one's health), 2) presenteeism (the percentage of impairment at work within the past week due to one's health), 3) overall work productivity loss (an overall impairment estimate that assesses a combination of absenteeism and presenteeism), and 4) activity impairment (the percentage of impairment in day-to-day activities within the past week due to one's health). Higher percentage scores indicate greater impairment. Only employed participants provided data for absenteeism, presenteeism, and overall work productivity impairment, whereas the full sample of patients with OA completed the activity impairment measure.

Health care resource use

Health care resource use was measured using the self-reported number of total physician visits, the number of emergency room (ER) visits, and the number of times respondents were hospitalized in the past six months.

Analyses

Respondents with OA and moderate to severe depression symptoms (PHQ-9 score ≥10) were compared to respondents with physician-diagnosed OA and mild or no depression symptoms using chi-square tests for categorical variables and independent-samples t-tests for continuous variables. Multivariable models analyzed health outcomes as a function of depression symptoms (reference=none/mild depression vs depression=moderate/severe). These models were used to demonstrate the burden of depression symptoms on patients with physician-diagnosed OA, controlling for socio-demographic or health-related variables that varied by group at p<0.05 in the bivariate analyses: age, marital status, employment status, and smoking status.
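The four WPAI domains described above are conventionally scored from hours of work missed, hours actually worked, and two 0-10 impairment ratings. A sketch using the conventional WPAI-GH formulas (assumed here; the text does not spell them out) is:

```python
def wpai_scores(hours_missed, hours_worked, work_impairment_0_10,
                activity_impairment_0_10):
    """Score the four WPAI-GH domains as percentages, using the conventional
    WPAI formulas (an assumption of this sketch):
      absenteeism  = missed / (missed + worked)
      presenteeism = work impairment rating / 10
      overall      = absenteeism + (1 - absenteeism) * presenteeism
      activity     = activity impairment rating / 10
    """
    total = hours_missed + hours_worked
    absenteeism = hours_missed / total if total > 0 else 0.0
    presenteeism = work_impairment_0_10 / 10.0
    overall = absenteeism + (1.0 - absenteeism) * presenteeism
    activity = activity_impairment_0_10 / 10.0
    return {"absenteeism": round(100 * absenteeism, 1),
            "presenteeism": round(100 * presenteeism, 1),
            "overall": round(100 * overall, 1),
            "activity": round(100 * activity, 1)}
```

The overall score combines the two work domains multiplicatively, so time missed is not double-counted as impaired time at work; this is why overall work productivity loss can exceed presenteeism alone but never exceed 100%.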
Regression models were chosen according to the distribution of the outcome variable, as tested using the one-sample Kolmogorov-Smirnov test for normality and, for non-linear outcomes, using the likelihood ratio test in the regression model to assess over-dispersion (Poisson vs negative binomial). Thus, generalized linear regression models were used for measures with normal (HRQoL and work productivity outcomes) or negative binomial (health care resource utilization) distributions. Linear model assumptions were tested for linearity of covariates and disproportionate influential observations using Cook's distance and studentized residuals. All analyses used two-sided tests with α=0.05 and were conducted with IBM Statistical Package for the Social Sciences (SPSS) Statistics, version 22 or later.

Results

Our final sample comprised a total of N=565 respondents with OA, and 63 (11%) of these individuals had moderate or severe depression symptoms, as measured by the PHQ-9 (score ≥10). These individuals were considerably younger than those without depression symptoms, more likely to be employed, less likely to be married or living with a partner, had higher CCI scores, and were more often current smokers, although gender and OA-related characteristics did not differ significantly between patients with and without depression symptoms (Table 1). Depression symptoms were associated with substantial decrements in HRQoL (Table 2), particularly on the MCS (nearly 20 points) and SF-6D (over 0.15 points). Substantial and significant decrements on all other SF-36v2 metrics, including physically oriented metrics, such as PCS, physical functioning, and bodily pain, were also observed. Among employed respondents, moderate to severe depression symptoms were associated with markedly higher work productivity impairment, except for absenteeism (Table 3).
For patients with moderate to severe depression symptoms, both presenteeism and overall work impairment were approximately double the levels reported by their counterparts with no or mild depression symptoms. Activity impairment was also nearly double for those with moderate to severe depression symptoms, relative to the group with no or mild depression symptoms. Moderate to severe depression symptoms were associated with a greater number of physician visits, ER visits, and hospitalizations for patients with OA, compared with no or mild depression symptoms (Table 3). After adjusting for potential confounders, HRQoL remained significantly lower among respondents with moderate to severe depression symptoms, compared with those with no or mild depression symptoms (Table 4). Additionally, impairments to work productivity (presenteeism and overall work impairment) and daily activities, as well as health care resource utilization (physician visits, ER visits, and hospitalizations), were significantly higher among respondents with moderate to severe (vs mild or no) depression symptoms. For absenteeism, no statistically significant differences were observed (Table 4).

Discussion

Approximately 11% of respondents with OA in our study had symptoms of moderate to severe depression, according to the PHQ-9. Our results showed that depression symptoms are more severe among patients with OA who smoke and are not married or living with a partner. The frequency of moderate to severe depression among employed patients with OA was greater than among the unemployed. The reason could be that the age of patients with symptoms of moderate to severe depression was, on average, lower than that of patients with no or mild depression symptoms, and patients under age 60 had a higher frequency of employment. Gender was not differentially associated with depression severity in OA, which may be due to cultural factors and may also warrant further investigation in a future study.
The prevalence of moderate to severe depression symptoms reported by the current study is lower than the estimates reported in US-based studies investigating OA and depression. 16,26 This 11% prevalence of depression may reflect a slight skew toward a younger, healthier population in the NHWS in general. 27 Cultural factors may play a role, as depression is less prevalent in Japan than in Western countries. 28 Results were consistent with other research that identified characteristics, such as smoking, 29,30 living alone, 16 and widowed or divorced status, 29 as being associated with depression. However, it is unclear whether these characteristics are causal factors for depression in the adult population with OA, and further research is needed to clarify this issue. Respondents with OA who had symptoms of moderate to severe depression had worse mental and physical HRQoL, including more interference from bodily pain, than those with no or mild depression symptoms, even after adjusting for potential confounders. Studies show the interconnectedness between physical pain and mental health. 5,16,[31][32][33][34] While depression and anxiety are associated with reduced activity leading to chronic pain, functional disability has been reported to be a risk factor for depression, both pathways impacting prognosis and rehabilitation. [31][32][33][34] The literature has produced conflicting results as to whether the relationship between depression and pain or disability varies by gender. 9,15,31 Specifically, some studies have reported significant gender differences in depression among those with OA, with women reporting higher rates of depression than men. 15 In contrast, our finding of no significant gender difference in the prevalence of depression severity among participants with OA aligns with other studies.
15 Greater impairments to work productivity and daily activities among those with than without depression symptoms were reported by a previous study of patients with chronic lower back pain in Japan, 9 which is similar to the results of the current study. Moreover, consistent with the present study, a prior retrospective observational study of 167,068 US patients with arthritis demonstrated that those with (vs without) comorbid depression had greater disability and limitations to work and social activities, as well as poorer general health and HRQoL. 35 Therefore, the collective evidence suggests that depression symptoms may substantially augment the burden of OA. In the present findings, absenteeism and depression symptoms were not significantly associated with each other, which was aligned with prior research showing Japanese workers tend to have fewer sick leave claims than workers in other countries. 36 Prior research has also reported that health care expenditures were 39% higher among adults with OA who also reported depression, when compared with individuals with OA who did not have depression; 26 this is in accordance with the patterns of health care resource use found in the present study. One possible explanation is that patients with both chronic physical and psychological conditions may use the healthcare system more often than individuals who solely have physical impairments. 26 For example, patients with OA who also have symptoms of moderate to severe depression may visit both psychiatric and orthopedic health care providers; alternatively, these patients may perceive their health to be worse, relative to the health perceptions of patients without depression. 26 Overall, this study contributes to the literature by identifying key differences between respondents with OA who have symptoms of moderate to severe depression and counterparts with no or mild depression symptoms. 
Notably, this is one of the first studies to explore these relationships among patients with OA in Japan. The observed association between moderate to severe depression symptoms and health-related burden among patients with OA suggests that physicians should screen for and address symptoms of depression when treating OA, particularly for individuals who may be at a greater risk of developing depression. In the workplace, for example, those with presenteeism may be at greater risk of depression, although this may not be the case among those with absenteeism. Mitigating the impact of depression symptoms is essential, as prior research indicates that depression can interfere with the effective management of OA by reducing patients' adherence to their medication regimen. 37 Critical aspects of integrated programs that address both depression and OA include screening for depression and pain at the first visit and follow-up, supporting patient self-efficacy through education and behavioral therapy, and adjusting treatment intensity, based on a patient's progress. The results of the current study can help to inform clinicians about the importance of identifying and treating patients with OA who are also likely to concurrently have depression. The cross-sectional study design prevents us from detecting causal or longitudinal relationships between variables. The measures in the study were self-reported, and respondent recall bias may have introduced measurement error into the study findings.
We also cannot exclude the possibility that unmeasured variables, such as clinical measures of severity, could at least partially explain the results. The data were collected using an Internet survey of respondents who opted to participate. Therefore, selection bias may have affected the representativeness of the study population and prevalence of OA. Specifically, it is possible that younger adults are more likely to participate in Internet surveys like the NHWS, which could account for the unexpectedly low prevalence of OA in the overall NHWS sample. In general, OA knee prevalence, for example, is highest among older adults ages 70 years and older, and this age group represents approximately 14% of the study sample. 3,4,38 Moreover, given the relatively young age of patients with physician-diagnosed OA in this study, the findings may underestimate both the burden of OA and the incremental impact of depression on this burden.

Conclusion

In this study, over 10% of patients with physician-diagnosed OA reported symptoms of moderate to severe depression. There were significant differences in HRQoL, work productivity impairment, and health care resource utilization between those with and without moderate to severe depression, even after adjusting for potential confounders. Hence, results suggest that depression may incrementally increase the health-related burden of OA. To help mitigate this burden, physicians should address symptoms of depression when treating patients with OA.
Predictive whisker kinematics reveal context-dependent sensorimotor strategies

Animals actively move their sensory organs in order to acquire sensory information. Some rodents, such as mice and rats, employ cyclic scanning motions of their facial whiskers to explore their proximal surroundings, a behavior known as whisking. Here, we investigated the contingency of whisking kinematics on the animal's behavioral context that arises from both internal processes (attention and expectations) and external constraints (available sensory and motor degrees of freedom). We recorded rat whisking at high temporal resolution in 2 experimental contexts (freely moving or head-fixed) and 2 spatial sensory configurations (a single row or 3 caudal whiskers on each side of the snout). We found that rapid sensorimotor twitches, called pumps, occurring during free-air whisking carry information about the rat's upcoming exploratory direction, as demonstrated by the ability of these pumps to predict consequent head and body locomotion. Specifically, pump behavior during both voluntary motionlessness and imposed head fixation exposed a backward redistribution of sensorimotor exploratory resources. Further, head-fixed rats employed a wide range of whisking profiles to compensate for the loss of head- and body-motor degrees of freedom. Finally, changing the number of intact vibrissae available to a rat resulted in an alteration of whisking strategy consistent with the rat actively reallocating its remaining resources. In sum, this work shows that rats adapt their active exploratory behavior in a homeostatic attempt to preserve sensorimotor coverage under changing environmental conditions and changing sensory capacities, including those imposed by various laboratory conditions.

Introduction

Perception is a process in which the sensory organ is actively employed in order to acquire sensory data from the external environment [1][2][3][4][5][6]. In his classic study, Alfred L.
Yarbus [1] demonstrated active sensing in human visual perception; Yarbus showed that different behavioral contexts, determined by giving subjects perceptual instructions, entail different spatial sampling strategies. A similar approach was employed to study sensorimotor exploration in other visual animals [7,8], yet little is known about the effects of context on spatial sampling in other modalities. Many mammals use the long hairs (vibrissae or whiskers) on either side of their snout to navigate the environment and to collect information about their proximal surroundings [9]. In some rodents, movements of the whisker array are used to actively acquire tactile information about both the position and nature of nearby objects [10]. These movements are, in turn, affected by the acquired sensory information, as well as by other "top-down" modulatory processes [11][12][13]. In other words, vibrissal perception is not solely active but is also reactive, giving rise to closed-loop dynamics of the perceiving organism and its environment [14][15][16]. Several studies have described basic components of vibrissal active behavior, both those observed in synchronous exploratory whisking in air [2,[17][18][19][20][21] and those related to interactions with external objects [22][23][24][25]. Only a handful of studies, however, have analyzed how vibrissal behavior is affected by behavioral context. Arkley and colleagues [26], e.g., showed dramatic effects of training and environmental familiarity on the whisking strategy employed by rats. Furthermore, they showed that whisking strategy reflects the animals' expectation of future object encounters. In another study by the same group, Grant and colleagues [27] tracked the developmental emergence of previously described behaviors in rat pups' first postnatal weeks. 
Finally, rats change their whisking strategy in response to external perturbations, keeping some behavioral variables controlled while modulating others in order to maintain perceptual performance [28]. The effects of behavioral context on free exploratory whisking, however, remain poorly described. In laboratory experiments, highly dominant contextual factors emerge from the experimental methodology. Experimental biologists are often forced to impose methodological constraints on their study subjects to ensure precise and stable observations that are amenable to analysis. One of the most common practices in neurophysiological and neuroimaging studies is "motion restraint," examples of which are (i) head-fixing, in which head movements are eliminated by the physical anchoring of the head [29], and (ii) body restraint, in which head movements are permitted while the body is restrained [21]. Such procedures entail drastic reduction of the motor degrees of freedom available to the animal, as well as the introduction of psychological stress [29]. An additional practice that is prevalent in the study of vibrissal perception is the reduction of the number of vibrissae available to the rodent, either by trimming or by plucking them from a full pad of 33 macrovibrissae that are arranged in 5 rows to (i) a single row [30], (ii) a few whiskers [23], or (iii) none at all [9]; in many cases, this is done to facilitate precise measurement of whisker position and shape using overhead videography. This procedure directly and selectively reduces the rodents' sensory degrees of freedom, and was shown to entail compensatory behavioral adjustments in the context of object interrogation [31]. Despite the ubiquity of such manipulations and their possible implications on the sensorimotor system, no attempt has been made so far to quantify the adaptations they might entail during exploratory behavior. 
Quantification of such compensatory adaptation is essential for discriminating between open- and closed-loop models of perception. When considering an individual perceptual epoch, open-loop models assume that sensation is "presented" to the brain, which extracts the information it needs using computational tools. Closed-loop perception assumes that the generation of sensations is actively controlled as the sensory information accumulates [14,[32][33][34]. The 2 schemes yield contradictory predictions with regard to the so-called "controlled variables" [35]: closed-loop perception predicts that there will be motor-sensory variables that are maintained invariant despite external or embodied constraints, while open-loop perception predicts that such variables should not exist [14,35]. Our experiments and analyses are aimed at directly testing these predictions. Here, we compare different aspects of vibrissal behavior measured in 3 contexts (Fig 1A): (i) head-fixed rats with a trimmed whisker pad (in this case, only 3 caudal macrovibrissae, C1, C2, and D1, were untrimmed on either side; reuse of data published in [23]), (ii) freely moving rats with the same 3-whisker configuration as in (i), and (iii) freely moving rats with an entire single row (row C) of macrovibrissae on either side (termed here "free single-row," for the sake of brevity; behavioral apparatus is illustrated in S1 Fig). We focused only on segments in which the animals performed exploratory rhythmic movements in free air without encountering any object with the whiskers. It should be noted that whisker contacts with the floor could not be ruled out in the freely moving rats because of video limitations (resolution and focus). However, based on a previous study [13], we can estimate the probability of such floor contacts to be only 2.5% per cycle for any whisker, and therefore, they are not expected to significantly alter our findings.
Moreover, head-fixed rats were positioned such that no floor contacts were possible. We begin by describing the different parameters of whisking behavior used in our analyses, and then focus our analysis on a kinematic feature called the "free-air pump" [20,36]. We then show that head-fixing exerts a dramatic shift in the whisking profile, suggesting an adaptive redistribution of sensorimotor exploratory resources. Finally, we demonstrate that whisker trimming likewise entails an alteration in whisking strategy.

Characterizing whisking behavior

All analyses were performed on free-air exploratory whisking, in the absence of interactions with external objects. It has been shown that this mode of whisking is characterized by extremely high correlations between the angle of motion of the different whiskers [19,23], i.e., the entire whisker pad moves in unison. Since all whiskers move together, analyses of free-air whisking can be performed on one representative whisker; whisker C2 is often chosen because it is centrally located within the whisker pad [37]. In the analysis of head motion, we used whisker C2 on both sides of the face; in all other analyses, only the left side was used. Our tracking tool extracted from the videos the whisker's "base angle," i.e., the angle of the whisker at the point it enters the skin relative to the line connecting the ipsilateral eye and the tip of the nose. To quantify the governing variables of exploratory whisking, we employed "rhythmic decomposition" on each of the tracked segments [21]; this algorithm models whisking dynamics as rapid oscillations (whisking cycles), modulated by a multiplicative "amplitude process" and an additive "offset" process (see Fig 1B and Methods).
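A toy version of such a decomposition can be sketched as follows. This is illustrative only: a window-based moving average stands in for the slow additive offset, and a windowed envelope of the residual stands in for the multiplicative amplitude; it is not the actual algorithm of ref. [21]:

```python
def rhythmic_decomposition(angle, fs, f_lo=4.0):
    """Sketch of the 'rhythmic decomposition' idea: treat the whisker base
    angle as fast oscillations riding on a slow additive offset and scaled
    by a slow multiplicative amplitude.
      offset[i]    = centered moving average over ~one cycle at f_lo Hz
      amplitude[i] = max |angle - offset| over the same window (envelope)
    `angle` is a list of base-angle samples, `fs` the sampling rate (Hz)."""
    win = max(1, int(fs / f_lo))            # ~ one slow cycle, in samples
    n = len(angle)
    offset, amplitude = [], []
    for i in range(n):                      # slow additive offset
        lo, hi = max(0, i - win // 2), min(n, i + win // 2 + 1)
        offset.append(sum(angle[lo:hi]) / (hi - lo))
    residual = [a - o for a, o in zip(angle, offset)]
    for i in range(n):                      # slow multiplicative amplitude
        lo, hi = max(0, i - win // 2), min(n, i + win // 2 + 1)
        amplitude.append(max(abs(r) for r in residual[lo:hi]))
    return offset, amplitude
```

On a synthetic 8 Hz whisking trace oscillating ±10 deg around 60 deg, the sketch recovers an offset near 60 deg and an amplitude near 10 deg away from the segment edges, which is the sense in which the decomposition separates "where the whiskers are centered" from "how vigorously they move."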
The whisking amplitude was bimodally distributed (Fig 1C and 1D), consisting of either high-amplitude "bouts" (nonshaded in Fig 1B, light-colored lines in Fig 1C and 1D) or low-amplitude "pauses" (shaded and hatched in Fig 1B, dark-colored lines in Fig 1C and 1D); it has been suggested that these 2 modes reflect 2 attentional states: an active exploratory state during bouts and a passive, "receptive" state during pauses [16,38]. By fitting a Gaussian Mixture Model (GMM), we found the maximum-likelihood threshold between these modes (2.6 deg for head-fixed, 2.1 deg for freely moving; see arrows in Fig 1C and 1D); we therefore set a conservative threshold of 2.5 deg for all data sets. Overall, head-fixed rats whisked less frequently than freely moving rats, with head-fixed rats pausing for 27% of the tracked time, while freely moving rats paused 7.8% of the tracked time (inset, Fig 1B). This reflects the well-known reluctance of head-restrained rats to whisk; often, some sensory stimulation (e.g., olfactory) is required to encourage head-fixed animals to whisk. The duration of whisking bouts varied greatly (Fig 1E, Cumulative Distribution Function [CDF]); in the head-fixed data set, the median bout duration was 0.366 s (2 whisking cycles), while the mean was 0.92 s (5.86 cycles) (N = 363). This large difference between median and mean reflects the "heavy tail" of the bout distribution. Indeed, the bout probability density is well fitted with a power-law function (inset of Fig 1E). The freely moving data sets contained much shorter segments since object contacts had to be edited out (head-fixed segments were 7 ± 3.1 s long, while freely moving segments were 2.6 ± 1.7 s long). This, together with the scarcity of pauses, led to there not being enough freely moving complete bouts (in which both the beginning and the end of the bout are recorded) to allow for a statistical analysis (42 freely moving bouts versus 363 head-fixed bouts).
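The GMM thresholding step described above (the point where the two fitted, weighted component densities cross) can be illustrated with a small self-contained EM fit. This is our own pure-Python sketch; the study presumably used standard GMM tooling:

```python
import math


def _normpdf(v, mu, var):
    return math.exp(-(v - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)


def gmm2_threshold(x, iters=100):
    """Fit a two-component 1-D Gaussian mixture to `x` by EM, then scan
    between the two fitted means for the point where the weighted component
    densities are equal -- the maximum-likelihood threshold separating the
    low-amplitude 'pause' mode from the high-amplitude 'bout' mode."""
    mu = [min(x), max(x)]                       # crude but effective init
    var = [((max(x) - min(x)) / 4) ** 2 + 1e-9] * 2
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of component 0 for each sample
        r0 = []
        for v in x:
            p0 = w[0] * _normpdf(v, mu[0], var[0])
            p1 = w[1] * _normpdf(v, mu[1], var[1])
            r0.append(p0 / (p0 + p1))
        # M-step: re-estimate weights, means, and variances
        n0 = sum(r0)
        n1 = len(x) - n0
        w = [n0 / len(x), n1 / len(x)]
        mu = [sum(r * v for r, v in zip(r0, x)) / n0,
              sum((1 - r) * v for r, v in zip(r0, x)) / n1]
        var = [sum(r * (v - mu[0]) ** 2 for r, v in zip(r0, x)) / n0 + 1e-9,
               sum((1 - r) * (v - mu[1]) ** 2 for r, v in zip(r0, x)) / n1 + 1e-9]
    lo, hi = sorted(mu)
    grid = [lo + (hi - lo) * i / 1000 for i in range(1001)]
    return min(grid, key=lambda t: abs(w[0] * _normpdf(t, mu[0], var[0])
                                       - w[1] * _normpdf(t, mu[1], var[1])))
```

On the study's data the analogous crossing came out near 2.1-2.6 deg, motivating the fixed 2.5 deg bout/pause threshold; the sketch returns the crossing for any bimodal 1-D sample.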
Most of the analyses described in this paper focused on features related to the active bout epochs only; e.g., the whisking pumps (see below) are only defined in the context of active protractions or retractions. Each whisking cycle is composed of 2 stages: protraction, in which the whiskers move rostrally (i.e., when the whiskers' angular velocity is positive; unshaded in Fig 1F), and retraction, in which they move caudally (i.e., when angular velocity is negative; shaded in Fig 1F); henceforth, we will refer to protractions and retractions as the 2 "phases" of the whisking cycle. Usually, both protractions and retractions display a smooth, ballistic-like kinematic profile with a single velocity peak; however, it was noted in several previous studies that some protraction/retraction phases exhibit a multipeaked velocity profile, indicating that the whisker motion consists of 2 or more consecutive "thrusts." This feature, termed a whisking "pump" (red markers in Fig 1F [20,36]), was suggested to serve as a fast "error-correction" mechanism for the whisking kinematics [20]; another hypothesis is that these pumps cause the whiskers to linger and "resample" a spatial locus of interest, thus increasing the sensory information throughput from such loci. Consistent with this hypothesis, such pumps often occur immediately (latency < 18 ms) after the whisker contacts an external object. These "Touch Induced Pumps" or TIPs [23] were shown to be related to object-oriented spatial attention [13]. We detected the occurrence of pumps in our "free-air" data, in which no objects were contacted, by first segmenting the whisking signal into phases and then finding the phases in which the velocity profile had more than one peak (see Methods). In the next section, we compare various properties of these "free-air pumps" and TIPs.

Free-air pumps are temporally clustered and entail phase prolongation and offset shifting

TIPs cluster in time [23].
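The pump-detection procedure described above (segment the trace by the sign of angular velocity, then count local peaks of the speed profile within each phase) can be sketched as follows. This is a minimal illustration of the idea; real data would additionally need smoothing and velocity thresholds:

```python
def detect_pumps(angle):
    """Segment a whisking angle trace into protraction (velocity > 0) and
    retraction (velocity < 0) phases, then flag phases whose speed profile
    has more than one interior local peak ('pumps').  `angle` is a list of
    uniformly sampled base-angle values."""
    vel = [b - a for a, b in zip(angle, angle[1:])]   # frame-to-frame velocity
    phases, start = [], 0
    for i in range(1, len(vel)):
        if (vel[i] > 0) != (vel[start] > 0):          # sign flip: new phase
            phases.append((start, i))
            start = i
    phases.append((start, len(vel)))
    out = []
    for s, e in phases:
        speed = [abs(v) for v in vel[s:e]]
        peaks = sum(1 for j in range(1, len(speed) - 1)
                    if speed[j] > speed[j - 1] and speed[j] >= speed[j + 1])
        out.append({"kind": "protraction" if vel[s] > 0 else "retraction",
                    "start": s, "end": e, "pump": peaks > 1})
    return out
```

A phase with a single, ballistic velocity peak is classified as pump-free, while a phase whose speed dips and rises again (two thrusts) is flagged as containing a pump.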
We tested the tendency of free-air pumps to cluster by measuring the cross-correlation of pump instances between cycles. The probability of free-air pump occurrence was significantly correlated across multiple cycles (Fig 2A1,2). Importantly, these correlations were specific to pumps of the same whisking phase: a cycle in which a pump occurred in the protraction phase was likely to be preceded and followed by other cycles with protraction pumps (p < 0.05 up to lags of 7 cycles, random permutations; Fig 2A1), and the same was observed for retraction pumps (p < 0.05 for up to and beyond 10 cycles, random permutations; Fig 2A2).

Fig 1. (A) Three data sets were used in this study: HF rats with only whiskers D1, C1, and C2 (left); freely moving rats with only whiskers D1, C1, and C2 (free trimmed, center); and freely moving rats with 7 whiskers of row C (C1-C7) (free single-row, right); only motion of C2 (in red) was used in all analyses; only the left side whisker was used in all analyses except in Figs 3 and 4. (B) Rhythmic decomposition of whisking. Dynamics of the whisker's base angle (black line) were analyzed as fast oscillations modulated by multiplicative amplitude (top, purple) and additive offset (black). Hatched: nonwhisking epochs (amplitude < 2.5 deg). Inset: HF animals spend less time whisking. (C,D) Whisking amplitude histogram (circles; note the logarithmic scale) for the HF rats (C) and freely moving (D) trimmed rats. Histograms were fitted using a GMM (gray line). Distributions exhibited 2 modes: low-amplitude pauses (dark-colored lines) and high-amplitude bouts (light-colored lines). Arrows: maximum-likelihood thresholds (2.6 deg for HF, 2.1 deg for freely moving). (E) CDF of bout duration in cycles (bottom abscissa, orange) and in seconds (top abscissa, green) in HF rats. Examples of bouts 0.44 s, 1.04 s, and 7.3 s long are shown. Inset: probability density histogram of bout length is heavy-tailed; solid black line: power-law fit, dotted line: best exponential fit, shown for comparison. (F) Example of a whisking trajectory. Bottom: base angle; top: angular velocity. White background: protraction phases; gray background: retraction phases. Red markers and vertical dashed lines: free-air pumps, phases in which velocity profile is double-peaked; filled circle: protraction pump, empty circles: retraction pumps. The data and analysis code for this figure can be found here: https://github.com/avner-wallach/Rat-Behavior.git. CDF, Cumulative Distribution Function; GMM, Gaussian Mixture Model; HF, head-fixed; PDF, Probability Density Function.

Did cycles containing free-air pumps differ from those lacking them? In the context of object encounters, protractions containing TIPs are longer and shifted forward when compared with protractions with no pumps [13,23]. This was also true for free-air pumps: first, phases with such pumps were much longer than those lacking pumps (31% longer in protraction and 71% longer in retraction; Fig 2C1,2; as previously shown in [20]). Second, the whisking offset (i.e., the midpoint angle of the motion) was shifted in the direction of the whisking phase that included the pumps, i.e., more protracted in protractions that included pumps and more retracted in retractions that included pumps (+4.2% and −7.6% change in protraction/retraction, respectively; Fig 2D1,2). We note, however, that the statistical significance of this last finding is difficult to assess because of temporal correlations in the offset and pump-rate signals, which render neighboring samples statistically dependent. To conclude, we have shown some basic properties of free-air pumps: First, free-air pumps of each whisking phase (protraction/retraction) exhibited strong temporal correlations and therefore occurred in temporal "clusters," rather than appearing randomly and uniformly in time.
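The permutation control used for these lag correlations can be sketched in Python (a re-implementation sketch, not the authors' published MATLAB code; the binary per-cycle pump indicator and the one-sided test are assumptions):

```python
import numpy as np

def pump_lag_correlation(pumps, lag):
    """Correlation between pump occurrence in cycle t and cycle t + lag.

    pumps: 1-D binary array with one entry per whisking cycle
    (1 = the cycle's phase contained a pump, 0 = it did not).
    """
    x, y = pumps[:-lag], pumps[lag:]
    return np.corrcoef(x, y)[0, 1]

def permutation_pvalue(pumps, lag, n_perm=5000, rng=None):
    """One-sided permutation p-value: is the observed lag correlation
    larger than expected for a randomly shuffled pump sequence?"""
    rng = np.random.default_rng(rng)
    observed = pump_lag_correlation(pumps, lag)
    null = np.array([pump_lag_correlation(rng.permutation(pumps), lag)
                     for _ in range(n_perm)])
    # add-one correction keeps the estimate away from exactly zero
    return (np.sum(null >= observed) + 1) / (n_perm + 1)
```

Applied to a sequence in which pumps arrive in runs rather than uniformly, the lag-1 correlation is high and the permutation p-value falls below the 0.05 criterion used in Fig 2A.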
Second, phases containing free-air pumps were distinct from those that lacked pumps in both their duration and offset. Importantly, these properties are similar to those previously described for TIPs [23], which were later shown to be related to object-oriented spatial attention [13].

Direction-specific correlations between free-air pumps and head motion

We decomposed head motion into 3 components: "turn" (rotation around the midpoint between the eyes), "thrust" (longitudinal translation or forward/backward motion), and "slip" (transverse translation or side motion). While turning was uncorrelated with thrust (Pearson coefficient R = −0.035), it was highly correlated with slip (Pearson coefficient R = 0.863, see S2 Fig); these correlations reflect the fact that the axis of head rotation (the neck) is caudal to the eyes (which were the feature tracked in our analysis). Therefore, we limit our analysis of head motion to the turn and thrust variables. The likelihood of protraction pumps was correlated with contralateral head turns: turns to the left increased the pump frequency of the right whisker pad while decreasing that of the left whisker pad and vice versa (Fig 3A; plots are normalized to overall pump rates; statistical significance measured using random permutations, N = 5,000). We note that when the head is turning in a certain direction, the contralateral side of the snout moves forward and the ipsilateral side moves backwards, and therefore, Fig 3A demonstrates that protraction pumps on either side of the face occur more frequently when that side moves forward. Correspondingly, protraction pumps were also more frequent during forward thrust at moderate speeds (i.e., walking, but not running) when both sides of the face move forward (Fig 3B). In contrast, retraction pumps occurred mostly when the rat kept its head motionless (close to zero velocity of turn and thrust) and were significantly inhibited during rapid motion in any direction.
Therefore, head motion was accompanied by direction-specific modulations in the occurrence of free-air pumps; protraction pumps occurred mostly during forward motion (either of the entire head or of the side containing the pumping whisker), whereas retraction pumps were frequent when the head stayed nearly motionless.

Free-air pumps are predictive of changes in head motion

We next checked whether free-air pumps systematically precede changes in the animal's head motion. We performed event-triggered analysis to explore the reciprocal relations between head and pump dynamics. First, we identified motion-change events, in which the rat changed the direction of head turning or thrust. We then measured the dynamics of free-air pump rate, relative to the onset of the change (the onset of acceleration in the opposite direction). Onset of forward motion was preceded by a substantial increase in protraction pumps, peaking around 100 ms prior to the change (p < 2 × 10−4, random permutations; black arrow, Fig 4A). No such change occurred at the onset of backward motion (p = 0.22, random permutations; Fig 4B). Note that in both cases, retraction pumps were inhibited before and after the change occurred. Similarly, ipsilateral head turns (towards the pumping-whisker side) were preceded by an increase in protraction pump rate (p < 2 × 10−4, random permutations; black arrow, Fig 4C), while no such increase is seen prior to contralateral turns (p = 0.26, random permutations; Fig 4D). The predictive power of free-air pumps regarding future spatial targets of the rat is confirmed by measuring the dynamics of head turning probability relative to the time a pump occurred.
Consistent with the previous analysis, the average protraction pump was followed by an increased probability of turning towards the pumping-whisker side (p = 0.0038, random permutations, left ordinate in Fig 4E), which is also evident in the dynamics of angular acceleration (p < 0.05, random permutations, right ordinate in Fig 4E). Retraction pumps, however, were followed by a significant drop in this probability (p = 0.02, random permutations, Fig 4F). These results are not sensitive to the choice of motion-change detection threshold (see S3 Fig). We conclude that free-air pumps predict the rats' motion targets: initiation of a forward motion or a head turn was preceded by an increase in bilateral or ipsilateral protraction pumps, respectively, and a drop in retraction pumps.

Head-fixed effects on whisking resemble those of voluntary motionlessness

Are voluntary and imposed motionlessness similar in their effects on whisking kinematics? To answer this question, we compared the frequency of protraction and retraction pumps in head-fixed rats with those observed in freely moving rats at different motion speed ranges (Fig 5A). When the entire data set of freely moving episodes is analyzed (maximal head velocity = "1"), head-fixed rats exhibited 22% fewer protraction pumps than freely moving rats (p = 10−4, bootstrap, N = 4,056 and 412 cycles for head-fixed and free trimmed, respectively). In contrast, retraction pumps were dramatically more prevalent in head-fixed rats (208%, p < 10−4, bootstrap, N = 4,046 and 549 cycles for head-fixed and free trimmed, respectively).
However, as we limit our analysis of pump frequency in freely moving rats to phases in which head velocity did not exceed a certain bound, the frequency of protraction pumps decreases and that of retraction pumps increases to the point of becoming statistically similar to one another (p = 0.14, bootstrap, N = 75 protractions and 80 retractions), as well as to the frequencies measured in head-fixed rats (p = 0.21 and 0.81 for protractions and retractions, respectively, bootstrap; Fig 5A). In other words, the overall ratio of protraction/retraction pumps drops as velocity decreases, approaching that of head-fixed rats at near motionlessness (Fig 5B). A similar trend can be seen in the distribution of protraction/retraction durations (Fig 5C).

Fig 5. Free-air pumps in HF rats resemble those in voluntary motionlessness. Abscissa in all panels: maximal head velocity for samples analyzed from the freely moving data set (i.e., only cycles in which head velocity did not exceed this value were taken); 1 = entire data set. (A) Probability of protraction (filled circles) and retraction (empty circles) pumps for HF (blue) and freely moving (red) rats. (B) Ratio between protraction and retraction pump probabilities. (C) Distributions of protraction (median: filled circles; interquartile range: light shading) and retraction (median: empty circles; interquartile range: dark shading) durations for HF (blue) and freely moving (red) rats. (D) Protraction-retraction duration ratio for HF (blue) and freely moving (red) rats (median: asterisks; interquartile range: shading). ***p < 0.005. The data and analysis code for this figure can be found here: https://github.com/avner-wallach/Rat-Behavior.git. HF, head-fixed; n.s., not significant. https://doi.org/10.1371/journal.pbio.3000571.g005
Protractions were significantly longer than retractions for the entire freely moving data set (means 74 ms and 52 ms, respectively), consistent with previous observations [17,20,39-43]. However, while both protractions and retractions increased in duration as the maximal speed decreased, the ratio of protraction duration to retraction duration decreased, approaching that of head-fixed rats at near motionlessness (Fig 5D). We conclude that freely moving rats, which display a strong emphasis on the protraction phase during motion, approach the near-parity in protraction/retraction that is typical of head-fixed rats as their velocity decreases towards motionlessness.

Whisking envelope spatial distribution is dispersed because of head-fixing

Our findings so far suggest that limiting the rat's degrees of motor freedom (by head-fixing) entails a more even distribution of the animal's sensorimotor resources between protraction and retraction; this was reflected in features such as pump occurrence and phase duration. Can we see a similar effect in the "whisking envelope," the combination of amplitude and offset (see Fig 1B) that dictates the range of angles covered by whisking at each cycle? It is important to note that while amplitude and offset may be, in principle, uncorrelated, they are necessarily statistically dependent because the maximal possible amplitude is always dictated by the distance from the current offset to the maximal whisker protraction and retraction angles. The theoretical domain of all possible amplitude-offset combinations, therefore, is bound by an isosceles triangle (Fig 6). Utilization of this domain was indeed affected by head-fixing.
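Given the definitions used here (offset = midpoint angle, amplitude = half the peak-to-trough distance), the triangular bound can be made explicit with a small helper; the 11-133 deg bounds on the whisker C2 angle are taken from the Fig 6 caption, and the function itself is only an illustration:

```python
def max_amplitude(offset, theta_min=11.0, theta_max=133.0):
    """Largest whisking amplitude possible around a given offset (deg).

    A whisk of amplitude A around offset O spans [O - A, O + A], so A is
    bounded by the distance from the offset to the nearer angular bound.
    Plotting this bound against offset traces the isosceles triangle
    delimiting the amplitude-offset domain.
    """
    return min(offset - theta_min, theta_max - offset)
```

The bound peaks at the median offset (72 deg for these limits), where max_amplitude returns 61 deg, the apex value quoted in the Fig 6 caption.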
Comparing rats with the same whisker arrays (head-fixed and freely moving trimmed rats) revealed that while freely moving trimmed rats had a restricted focal region in which they preferentially whisked (i.e., the whisks were narrowly distributed around a preferred amplitude of 11.8 deg and a preferred offset of 74.5 deg; Fig 6A), head-fixed rats exhibited a highly dispersed distribution that covered a large portion of the triangular domain (Fig 6B). This dispersion may reflect compensation for the lost motor degrees of freedom of the head and body. Indeed, whisking envelope distributions became dispersed when changes in head angle were considered in freely moving rats (relative to the mean head angle in each tracked segment, Fig 6C). To quantify the envelope dispersion in each scenario, we measured the distributions' information entropy; to evaluate statistical significance, the bootstrap method was used to generate random subsets of equal size from each data set (Fig 6D; see Methods). The entropy was significantly smaller in freely moving trimmed rats than in head-fixed ones when only whisker angle was used (p < 10−4, bootstrap), reflecting the increased dispersion of the latter. However, no significant difference was measured when the head's degree of freedom was taken into account (p = 0.13, bootstrap). We conclude that head-fixed rats' whisking patterns were much more diverse than those of freely moving rats. This suggests that the rats compensated for the loss of motor degrees of freedom due to head-fixing by employing a wider range of whisking configurations.

Whisker trimming entails shifts in whisking strategy

We next examined the impact of whisker trimming on the distributions of individual whisking variables (offset and amplitude, diluted to avoid temporal correlations; see Methods).
As described above (see Fig 1A), freely moving rats had either a configuration of 3 caudal whiskers or a full row of 7 whiskers (i.e., including both caudal and rostral whiskers). Comparison of the whisking amplitude and offset distributions of these 2 data sets of freely moving rats revealed that the preferred whisking strategy was noticeably different. The trimming-induced effect consisted of 2 adjustments: first, a significant 33.8% decrease in the variance of offsets (p < 10−4, bootstrap, N = 292 and 280 for single-row and trimmed, respectively; Fig 7A) while maintaining the mean offset almost unchanged (2% decrease, p = 0.11, bootstrap); and second, a significant 27.4% increase in mean amplitude (p = 10−4, bootstrap, N = 367 and 268 for single-row and trimmed, respectively; Fig 7B) while not significantly changing its variance (p = 0.91, bootstrap). Therefore, the trimmed rats tended to employ large amplitude whisks around a relatively constant offset angle, while those having a full row of whiskers used smaller amplitudes while shifting the offset over time.

Fig 6. [...] while HF rats (B). While freely moving rats exhibit a preferred subspace of amplitude-offset combinations, HF rats cover much of the available envelope space (range of offsets is determined by the absolute bounds on whisker C2 angle in all data sets, 11-133 deg; maximal possible amplitude is offset dependent and peaks at the median offset at 61 deg). (C) Probability distribution of whisking envelope, taking into account head rotations, for free rats. (D) Whisking envelope information entropies for random subsets taken from each data set (box-and-whisker plots: horizontal line, median; box, IQR). Mean entropy is significantly smaller for freely moving rats (Free W, left) than for HF rats (center), indicating that the whisking envelope distribution is much more dispersed during head-fixing; however, there is no significant difference when head rotations are included in the analysis (Free W + H, right). ***p < 0.005. Inset: entropies calculated for entire data sets (free whiskers only 6.24 bits, HF 7.325 bits, free whiskers + head 6.97 bits). The data and analysis code for this figure can be found here: https://github.com/avner-wallach/Rat-Behavior.git. HF, head-fixed; IQR, Interquartile Range; n.s., not significant; PDF, Probability Density Function. https://doi.org/10.1371/journal.pbio.3000571.g006

Head-fixing entails a backward shift in offset during bouts, but not during pauses

Lastly, we compared the offset distributions in head-fixed and freely moving rats during both active whisking bouts and passive, low-amplitude pauses (see Fig 1C and 1D). When only bouts were analyzed, head-fixing caused an increase in offset variance (23.5% increase, p < 10−4, bootstrap, N = 1,010 and 494 for head-fixed and free, respectively; Fig 8A), consistent with our findings above (see Fig 6). Additionally, head-fixed rats whisked around a slightly retracted offset angle (6.3% reduction, p < 10−4, bootstrap). This further suggests that head-fixed animals explored more retracted positions than free animals, in line with the pump results described above. During pauses, however, the head-fixed and free offset distributions were not significantly different in either mean or variance (p = 0.4 and p = 0.96, respectively; bootstrap, N = 556 and 125 for head-fixed and free, respectively; Fig 8B). A likely explanation for this is revealed by comparing the distributions of head-motion variables (Fig 8C: absolute translation velocity; Fig 8D: absolute angular velocity) during bouts and pauses; both variables show that when rats stopped whisking, their head was nearly motionless (p < 10−4, bootstrap, N = 125 and 494 for pauses and bouts, respectively).
Therefore, the similarity in pause offset distribution between head-fixed and freely moving rats provides further evidence that head fixation and voluntary motionlessness entail similar behavioral patterns.

Discussion

In this paper, we have shown that whisking kinematics predict consequent head and body locomotion and, consistently, that these kinematics depend on the behavioral context. The spatial and dynamical characteristics of rats' exploratory whisking were affected by the rats' ability to move and the number of whiskers they had available. It is commonly assumed that overt spatial attention is associated with preparing to move the body or the sensors towards a selected location or object [13,14,44-47] and, therefore, we suggest that the alterations in the spatial exploration described here reflect alterations in overt spatial attention. Our findings, therefore, suggest that both head-fixed rats and free but motionless ones dedicate more attention to whisker retraction than do rats in motion. This interpretation suggests that the whisking pump, a subtle alteration in the whisking dynamics, is a useful indicator of perceptual attention. It was previously shown that, when induced by encountering an object, such pumps are robustly associated with object-oriented attention [13]. While free-air pumps were shown to have different temporal dynamics than those of TIPs [13], suggesting distinct underlying sensorimotor pathways, we demonstrated here that the 2 types of pumps share several key temporal and spatial features: temporal clustering, phase prolongation, spatial offset shifting, and bidirectional relationship with head motion. Critically, we show that free-air pumps predict the rats' future orienting behavior (compare Fig 1C in [13] and Fig 4E here). The differences in the temporal scales of sensorimotor kinematics [13] can probably be accounted for by differences in the underlying sensorimotor pathways.
While TIPs seem to be implemented via brainstem loops [13,23,48], free-air pumps may involve higher-order sensorimotor loops. The predictive relations between free-air pumps and the rat's locomotion may reflect the rat's expectation of future encounters. Thus, when the rat moves forward at moderate velocity while exploring the environment, it might expect novel encounters to occur mostly during protraction [26] and therefore increases the pump rate in that direction in order to improve sensory acquisition (Fig 3), either by extending protraction duration (Fig 2) or by the pump causing the whisker to briefly "revisit" places of interest. Conversely, when the rat is motionless, either by choice or because of imposed head-fixing, encounters may occur in either direction of whisker motion, and therefore, the rat shifts some of its spatial attention towards retraction (Figs 3 and 5). Importantly, this shift in spatial attention was also evident in the overall distribution of the whisking offset in head-fixed rats during active bouts, but not during pauses (Fig 8), which occur when the head is motionless and were hypothesized to reflect a passive receptive mode of attention [38]. These findings also offer an interesting way to reconcile an apparent contradiction in previously reported data. Our previous study in anesthetized animals on the sensory representation of whisker motion at the primary afferent and brainstem levels found cells responsive throughout the whisking cycle, with most cells responding to the protraction phase [49]. In awake head-fixed animals, however, brainstem, thalamic and cortical sensory cells show an overrepresentation of the retraction phase [50,51]. So far, no ethological or physiological explanation has been given for this finding. A recent study [52] found correlations between the preferred phase of vibrissal afferents and the activation of different facial muscle groups controlling whisking motion [53].
Whisker motion is evoked in anesthetized rats by stimulating the buccal motor branch of the facial nerve, which mostly innervates the intrinsic whisker pad muscles responsible for protraction [54,55] (the extrinsic nasolabialis muscle group involved in active retraction is innervated by the zygomatic branch). Thus, this method generates active protractions and passive retractions [56], and therefore, more cells responding to protraction were sampled. However, we can infer from the results reported here that awake head-fixed animals shift their attention backwards and may therefore strongly activate the muscle groups involved in controlled retraction [22], and the afferents correlated with that motion. It is also possible that increased attention to retraction involves activation of internal feedback loops within the brain [25,55], enhancing the activity related to this phase. In other words, the predominance of retraction-related cells reported in awake head-fixed animals may reflect the behavioral context imposed by the experimental set-up, rather than the actual distribution of sensitivity in the vibrissal system. The conclusion arising from this possibility is far-reaching: behavioral context may bias physiological findings down to the cellular level, either via alteration of the sensorimotor interactions with the world at the periphery or via internal feedback loops in the central nervous system. Our freely moving rats displayed a preferred whisking pattern, based on small-amplitude whisks around a varying offset. This pattern appears to allow a combination of local active sensation with global (row-wide) passive reception, resembling in part the fovea-periphery division of labor in vision. In contrast, trimmed rats applied large-amplitude whisks around a fixed offset, probably in order to achieve similar spatial coverage with the few whiskers they had left.
Overall, while freely moving rats used combinations of head and body movements to shift their attentional foci, head-fixed rats had to apply a wide range of whisking patterns, varying both amplitudes and offsets, possibly in an attempt to cover as many attentional foci as they could. Animals use their available resources in an adaptive manner. We show here that rats adapt their perceptual behavior to their locomotion dynamics, both when voluntarily selected and when externally imposed, and suggest that they do so in a way that attempts to optimize coverage of the relevant space and future encounters with objects. Taken together, these observations are consistent with the closed-loop view of perception, in which essential perceptual variables, such as space coverage, are actively maintained when facing external or embodied constraints.

Whisking in freely moving rats

The experimental protocol is described in detail in [13]. Briefly, the whisking patterns of Wistar strain male albino rats aged 3-6 months were measured (N = 3 for free-air trimmed, N = 4 for free-air single row). On the day prior to behavioral recording, trimmed whiskers were clipped close to the skin (approximately 1 mm) under Dormitor anesthesia (0.05 ml/100 g, SC). All experimental protocols were approved by the Institutional Animal Care and Use Committee of the Weizmann Institute of Science. Behavioral experiments were performed in a darkened, quiet room. The behavioral apparatus (see S1 Fig) consisted of a holding cage (25 cm width, 35 cm length, 29.5 cm height) with a small door (6.9 cm height, 6 cm width), through which the rats could emerge into the experimental area (18 cm × 20 cm) [57]. Both the holding cage and the experimental area were fixed approximately 15 cm above the surface of a table. The experimental area consisted of a back-lit Perspex plate with 1-2 objects (Perspex cubes and cylinders) placed on it.
The experimental area was filmed from above by a high-speed, high-resolution camera (1,280 × 1,024 pix, 500 fps, CL60062; Optronis, Kehl, Germany). An in-house program (E. Segre, Weizmann Institute) triggered the high-speed camera whenever the rat emerged from the holding cage into the experimental area. Video recording stopped when the rat returned to the holding cage. An experimental session consisted of recording a rat's whisking behavior whenever the rat was in the experimental area, over a period of 30-120 min. Preceding a session, the animal was placed in the holding cage for a 15-min acclimation period. During the acclimation period, the door of the holding cage was blocked. The experimental session began with unblocking the door to allow the animal to leave the holding cage and explore the experimental area at will. Each trial started when the rat moved from the holding cage to the experimental area and ended when the rat went back into the holding cage. The length of the experimental session varied depending on the animal's behavior and the amount of recorded video. Whisker movements were tracked offline using the MATLAB (The MathWorks, Natick, MA, USA)-based WhiskerTracker image processing software (available at https://github.com/pmknutsen/whiskertracker/). Base angles of all existing macrovibrissae were tracked, as well as the location of both eyes and the tip of the nose. Only tracked segments that did not include object contacts were used for analysis in the current work.

Whisking in head-fixed rats

The experimental protocol is described in detail in [23]. Briefly, 7 male albino rats were head-fixed using screws glued to the skull under anesthesia. After full recovery, rats were gradually adapted to head fixation for 4-5 days.
All except for 3 whiskers (C1, C2, and D1) on either side were clipped close (approximately 1 mm) to the skin during brief (5-10 min) isoflurane anesthesia and were retrimmed 2 to 3 times a week at least 2 hours before an experiment. A single experiment lasted up to 30 min but terminated earlier if rats showed signs of distress. Each rat had 1 or 2 experimental sessions a day, 2 or 3 times a week. All experiments were performed in a dark, sound-isolated chamber. Head orientation was estimated by imaging the corneal reflections of 2 infrared (880 nm) light-emitting diode (LED) spotlights. An imaginary line between the nose and eye on each side of the rat served as a reference line for the whisking angle. Bright-field imaging of the whiskers was accomplished by projecting infrared light (880 nm) with an array of 12 × 12 LEDs from below the animal. Video acquisition was triggered manually, and high-speed video was buffered and streamed to disks at either 500 or 1,000 frames/s. A total of 255 trials with an average duration of 8.3 ± 3.2 s (mean ± standard deviation, SD) were acquired. The 106 "free-air trials" and 149 "contact trials" were intermixed. Only the free-air trials are used for analysis in the current work. Whisker movements were tracked offline using the MATLAB (The MathWorks, Natick, MA, USA)-based WhiskerTracker image processing software (available at https://github.com/pmknutsen/whiskertracker/).

Analysis

Rhythmic decomposition and phase segmentation. All analyses were performed on the base angle of whisker C2 on the left side of the snout. Analyses of head turns also used whisker C2 on the right side of the snout, and the results from both sides were pooled after mirroring. The signal was first filtered using a low-pass filter (sixth-order Butterworth, passband cutoff 20 Hz) to remove noise; this filtering was chosen to remove high-frequency noise due to tracking errors while preserving the pumping information.
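This denoising step can be reproduced with SciPy as follows (a sketch, not the authors' MATLAB code; the use of zero-phase filtering via filtfilt is an assumption, chosen because it does not shift peak and trough times):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def denoise_angle(angle, fs=500.0, cutoff=20.0, order=6):
    """Low-pass filter a whisker base-angle trace.

    Defaults follow the Methods: sixth-order Butterworth with a 20 Hz
    passband cutoff; fs is the video frame rate (500 or 1,000 frames/s
    in these data sets).
    """
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, angle)  # zero-phase: no temporal shift
```

With these settings, an 8 Hz whisking oscillation passes essentially untouched while 80 Hz tracking jitter is removed.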
Whisking phase φ was extracted using the Hilbert transform from a high-pass-filtered version (cutoff 4 Hz). Peaks and troughs were located within ±π/8 of φ = 0 and π, respectively, and the signal was segmented into individual protraction and retraction phases. The offset of each motion was defined as the middle point between trough and peak; amplitude was half the distance between the 2 points.

Bouts/pauses discrimination. Histograms of the whisking amplitudes were computed using 200 logarithmic bins in the range 0.03-100 deg. These histograms were fitted using a GMM with 8 components. The maximum-likelihood threshold for discriminating between the 2 modes of the distributions was found by classifying the amplitude bins using the GMM and finding the first bin classified to one of the high-amplitude (mean > 3.16 deg) components.

Pump detection and quantification. The angular trajectory of each phase was differentiated to produce the angular velocity. The retraction velocity profile was negated. A pump was identified whenever the resulting velocity profile had more than one peak. The shapes of individual protraction/retraction phases were previously analyzed in great detail in [20]; there, 3 categories of motion were described: "single pumps," in which the protraction/retraction velocity profile contains a single peak (these are the "default," unmodulated whisking cycles); "delayed pumps," in which there are 2 velocity peaks, but there is no reversal in the direction of motion; and "double pumps," in which motion in the opposite direction (e.g., backward during protraction) is detected. We quantify the pump strength σpump using the formula σpump = 1 − vtrough/vmax, where vmax is the maximal velocity during the protraction/retraction and vtrough is the velocity at the lowest trough. Note that by this definition, σpump = 0 for a "single pump" (i.e., no pump), 0 < σpump < 1 for a "delayed pump," and σpump > 1 for a "double pump."
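A minimal Python sketch of this classification (a re-implementation, not the authors' code; the interior-local-maximum peak test is an assumed detail, and the σpump expression is written to satisfy the three regimes described in the text, i.e., 0 for a single peak, a value in (0, 1) without a direction reversal, and a value above 1 with one):

```python
import numpy as np

def pump_strength(velocity):
    """Pump strength sigma_pump for one protraction/retraction phase.

    velocity: angular-velocity samples for the phase (retraction
    profiles are negated first, so motion in the phase's direction is
    positive).  Returns 0 for a single-peaked profile (no pump), a value
    in (0, 1) for a "delayed pump" (two peaks, no direction reversal),
    and a value > 1 for a "double pump" (direction reversal).
    """
    v = np.asarray(velocity, dtype=float)
    # interior local maxima of the velocity profile
    peaks = np.flatnonzero((v[1:-1] > v[:-2]) & (v[1:-1] >= v[2:])) + 1
    if len(peaks) < 2:
        return 0.0  # single-peaked: no pump
    v_trough = v[peaks[0]:peaks[-1] + 1].min()  # lowest inter-peak trough
    return 1.0 - v_trough / v.max()  # sigma_pump = 1 - v_trough / v_max
```

Profiles with a negative inter-peak trough (a true reversal of motion) yield values above 1, reproducing the "double pump" regime.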
We did not observe any discontinuity in the distributions around σpump = 1, and therefore, delayed and double pump profiles were lumped together; we refer to these 2 profiles simply as pumps.

Head motion. Total head translational velocity in freely moving experiments was defined as the velocity of the middle point in between the 2 eyes (see S2A Fig). Head direction was defined as the direction of the line connecting this point and the tip of the nose. The length of this line was defined as the head size, used to normalize translational velocity (to units of heads/s). The direction of translational velocity was then subtracted from the head direction to obtain the translational direction in head-centered coordinates. The projection of this vector on the line pointing towards the nose was defined as thrust (longitudinal translational velocity), while the orthogonal projection was defined as slip (transverse translational velocity).

Statistics. Random permutations were used to evaluate the significance of correlations (Figs 2A, 3, and 4) and run length distributions (Fig 2B); the bootstrap method was used in all comparisons between sets (Figs 5, 6D and 7). Unless stated otherwise, 5,000 permutations/draws were used in each comparison. When comparing offset and amplitude distributions in different contexts (Figs 1B and 7), the samples of each set are not statistically independent because of temporal correlations [21,23]. Therefore, for each segment analyzed, we computed the autocorrelation functions of these variables and measured the lag (number of cycles) at which their significance dropped below the p = 0.05 level (using random permutations as control). The amplitude/offset sequence was then diluted by this lag to obtain statistically independent samples.

Whisking envelope entropy. To estimate the dispersion of the whisking envelope (Fig 6D), we measured the bivariate probability distribution p(θamp, θoff) by binning each variable into 25 bins.
We then measured the information entropy using the formula H = −Σ_{i=1..25} Σ_{j=1..25} p(θamp = x_i, θoff = y_j) log2 p(θamp = x_i, θoff = y_j). To enable statistical comparison between contexts, random sets of identical size from all data sets were required. Seventy-five different collections of tracked segments were randomly generated from each data set so that the total duration of each collection was approximately 30% of the total length of the smallest data set. Entropy was then calculated for each of the resulting subsets to produce a distribution of entropies for each data set (box plots, Fig 6D). Entropy was also calculated for each of the data sets in its entirety (inset in Fig 6D).
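The entropy estimate can be sketched in Python (an illustration, not the authors' MATLAB code; entropy is computed in bits, matching the values quoted for the Fig 6D inset):

```python
import numpy as np

def envelope_entropy(amplitude, offset, bins=25):
    """Information entropy (bits) of the joint amplitude-offset
    distribution, estimated by binning each variable into `bins` bins
    (25 in the Methods)."""
    counts, _, _ = np.histogram2d(amplitude, offset, bins=bins)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]  # convention: 0 * log 0 = 0
    return -np.sum(p * np.log2(p))
```

A distribution concentrated in a single bin yields 0 bits, and one split evenly over 2 bins yields 1 bit; more dispersed envelope distributions (as in the head-fixed data) yield correspondingly higher values.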
Impact of Acute High Glucose on Mitochondrial Function in a Model of Endothelial Cells: Role of PDGF-C
An increase in plasma glucose promotes endothelial dysfunction mainly through increasing mitochondrial ROS production. High glucose-induced ROS has been implicated in the fragmentation of the mitochondrial network, mainly through an unbalanced expression of mitochondrial fusion and fission proteins. Mitochondrial dynamics alterations affect cellular bioenergetics. Here, we assessed the effect of PDGF-C on mitochondrial dynamics and on glycolytic and mitochondrial metabolism in a model of endothelial dysfunction induced by high glucose. High glucose induced a fragmented mitochondrial phenotype associated with reduced expression of the OPA1 protein, high DRP1pSer616 levels and reduced basal respiration, maximal respiration, spare respiratory capacity, non-mitochondrial oxygen consumption and ATP production, relative to normal glucose. Under these conditions, PDGF-C significantly increased the expression of the OPA1 fusion protein, diminished DRP1pSer616 levels and restored the mitochondrial network. Regarding mitochondrial function, PDGF-C increased the non-mitochondrial oxygen consumption diminished by high glucose conditions. These results suggest that PDGF-C modulates the damage induced by HG on the mitochondrial network and morphology of human aortic endothelial cells; additionally, it compensates for the alteration in the energetic phenotype induced by HG.
Introduction
Metabolic diseases, including diabetes, are considered the main risk factor for the development of cardiovascular diseases (CVD) [1,2]. The origin of CVD has been related to early loss of vascular endothelial function, which decreases the production or bioavailability of vasodilator molecules such as nitric oxide (NO) and predisposes to a blood vessel contraction phenotype [3].
Transient and sustained glucose levels greater than 5.5 mmol/L, the average fasting glucose level [4], induce endothelial dysfunction [5][6][7]. Consequently, identification of the initial steps that lead to endothelial dysfunction under high glucose conditions is crucial for early intervention in diabetes. Reactive oxygen species (ROS) production, especially of the superoxide radical (O2−) by the mitochondria, is one of the intracellular mechanisms that reduce the bioavailability of NO. Owing to the high reactivity of O2− and NO, the production of peroxynitrite (ONOO−) is promoted, altering the structure of nucleic acids, proteins, and lipids and leading to cell death [8]. Although the mitochondrial content of endothelial cells (ECs) is low compared to other cell types with higher energy demands [9], their mitochondria are considered signalling organelles that act as local microenvironmental sensors [9][10][11]. Their optimal function is driven by a balance between fission and fusion processes [3,[12][13][14][15]. However, in diabetic patients and under hyperglycemic conditions, a decreased expression of OPA1 and MFN1/2 proteins (related to the fusion mechanism) and an increased expression of DRP1 and FIS1 proteins (related to the fission mechanism) are observed. This imbalance, coupled with an impaired autophagy mechanism, leads to a disrupted mitochondrial network and the accumulation of small, damaged, and inefficient organelles that contribute to increased ROS production and the loss of EC function or to cell death [12][13][14]16]. In this context, looking for new therapies that mitigate the mitochondrial damage induced by high glucose is critical for the reduction of CVD risk in diabetic patients. Recently, we reported the role of PDGF-C in the modulation of mitochondrial oxidative stress induced by high d-glucose in human aortic endothelial cells (HAECs). PDGF-C is a growth factor that exerts its effects by binding to the PDGFRα and PDGFRαβ tyrosine kinase receptors.
PDGF-C reduced the increase in mitochondrial superoxide production, and this was associated with the up-regulation of SOD2 expression and activity and the modulation of Keap1 gene expression [5]. Here, we report the effect of this growth factor on the modulation of the fragmented mitochondrial morphology and the mitochondrial functional changes induced by high glucose in endothelial cells.
PDGF-C Restores the Integrity of the Mitochondrial Network of HAECs under High d-Glucose Treatment
To better understand the effect of PDGF-C on the mitochondrial damage induced by 35 mmol/L of d-glucose for 7 h (herein referred to as HG) in HAECs, mitochondrial network integrity was evaluated by confocal microscopy. As shown in Figure 1A, mitochondria of cells cultured in 5 mmol/L of d-glucose (herein referred to as NG) exhibited a continuous and elongated network with peripheral localization (upper left and right). In contrast, HG induced a shorter and fragmented mitochondrial morphology (lower left), which changed to dense and hyperfused aggregates with nuclear localization when cells were treated with 50 ng/mL of PDGF-C for 1 h (lower right). The reductions of 64% in the count of branches (*** p < 0.001) (Figure 1A), 63% in the count of junctions (*** p < 0.001) (Figure 1B), and 71% in the mitochondrial area (**** p < 0.0001) (Figure 1C) under HG for 7 h compared to NG support these morphological observations. Treatment with 50 ng/mL PDGF-C for 1 h significantly increased the number of branches (# p < 0.05) and junctions (# p < 0.05) and showed a tendency to increase the total mitochondrial area of cells treated with HG. These results suggest that PDGF-C modulates the damage induced by HG on the mitochondrial network and morphology of HAECs.
Figure 1. HAECs were seeded in 35 mm glass-bottom culture dishes and exposed to 5 or 35 mmol/L d-glucose for 7 h, treated or not with 50 ng/mL PDGF-C for the last hour of the glucose exposure.
Live cells were stained with Mitotracker green (mitochondria visualization) and Hoechst (nucleus visualization) before the confocal images were acquired for analysis of mitochondrial network integrity. Representative images for cells in NG (upper left), NG+PDGF-C (upper right), HG (lower left) and HG+PDGF-C (lower right). Analysis of mitochondrial network integrity represented by the count of (A) branches (n NG: 9, n NG+PDGF: 6, n HG: 12, n HG+PDGF: 6), (B) count of junctions (n NG: 9, n NG+PDGF: 6, n HG: 12, n HG+PDGF: 6), and (C) mitochondrial area (n NG: 5, n NG+PDGF: 3, n HG: 5, n HG+PDGF: 3). Data represent the mean ± SEM of three independent experiments. *** p < 0.001 and **** p < 0.0001 regarding NG, # p < 0.05 regarding HG, ns (non-significant).
Mitochondrial Dynamic-Related Proteins Expression
To reinforce these results, the expression of fission and fusion proteins was measured by western blot under the same conditions mentioned above. Relative to NG conditions, and as described in Figure 2A,B (Supplementary Figure S1A,B), HG did not significantly change MFN1 and MFN2 expression. Similarly, PDGF-C did not affect the expression of these fusion proteins under any of the evaluated glucose conditions and times. In contrast, HAECs under HG for 6 and 7 h showed diminished OPA1 protein expression (Figure 2C, Supplementary Figure S1C) relative to NG (* p < 0.05); PDGF-C treatment restored the expression of this fusion-related protein to the basal level (# p = 0.0486).
Figure 2. Effect of PDGF-C on fusion-related proteins in HAECs exposed to high glucose. Cells were seeded in 6-well plates and exposed to HG (35 mmol/L d-glucose) for 6 and 7 h, treated or not with 50 ng/mL PDGF-C for 1 h, and (A) MFN1, (B) MFN2, and (C) OPA1 expression was evaluated by western blot. Images correspond to representative blots for each protein. Densitometry analysis corresponds to the band of each protein normalized to the band of β-actin. Data represent the mean ± SEM of at least three independent experiments (* p < 0.05, # p < 0.05 regarding HG).
On the other hand, results for mitochondrial fission-related proteins showed that HG did not change the expression of either FIS1 or DRP1, alone or in combination with PDGF-C (Figure 3A,B, respectively; Supplementary Figure S2A,B). Additionally, the phosphorylation of DRP1 at Ser616, which is known to promote fission of the mitochondrial network [17,18], was also evaluated by western blot. As shown in Figure 3C (Supplementary Figure S2C), HG for 6 h (** p < 0.01) and 7 h (*** p < 0.001) increased the phosphorylation of the Ser616 residue in DRP1, and PDGF-C treatment diminished it to the basal level (### p < 0.001).
Figure 3. Effect of PDGF-C on fission-related proteins in HAECs exposed to high glucose. Cells were seeded in 6-well plates and exposed to 35 mmol/L d-glucose for 6 and 7 h without and with 50 ng/mL PDGF-C for 1 h, and (A) FIS1 and (B) DRP1 expression and the ratio between (C) DRP1pSer616/DRP1 were evaluated by western blot. Images correspond to representative blots for each protein. Densitometry analysis corresponds to the band of each protein normalized to the band of β-actin, and to DRP1 for the phosphorylated residue. Data represent the mean ± SEM of at least three independent experiments (** p < 0.01, *** p < 0.001, ### p < 0.05 regarding HG).
These results suggest that PDGF-C modulates the mitochondrial network and morphology by regulating fission through the phosphorylation and dephosphorylation of DRP1 and by intensifying the fusion process through upregulation of OPA1 expression in HAECs under HG conditions.
Bioenergetic Analysis
To assess the implications of acutely elevated glucose concentrations for the mitochondrial function of HAECs, and the role of PDGF-C under these conditions, cells were treated as mentioned above. Oxygen consumption rates (OCRs) were measured with the Agilent Seahorse XFe24 Analyzer Mito Stress Test (Seahorse Bioscience, Agilent, Santa Clara, CA, USA), according to the manufacturer's protocol [16,19]. Live-cell bioenergetics was conducted to determine basal mitochondrial functions, including oxygen consumption rate (OCR), extracellular acidification rate (ECAR), ATP production, proton leak, maximal respiration, spare respiratory capacity, mitochondrial stress, and non-mitochondrial respiration.
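These parameters are derived arithmetically from the OCR trace around each injection. Below is a minimal sketch following the conventional Mito Stress Test calculations; the function name and the OCR values are hypothetical, and this is not the authors' analysis code:

```python
def mito_stress_parameters(basal, post_oligo, post_fccp, post_rot_aa):
    """Derive standard Mito Stress Test parameters from OCR readings
    (pmol O2/min) recorded before and after each injection."""
    non_mito = min(post_rot_aa)              # non-mitochondrial respiration
    basal_resp = basal[-1] - non_mito        # basal respiration
    maximal = max(post_fccp) - non_mito      # maximal respiration
    return {
        "non_mitochondrial": non_mito,
        "basal_respiration": basal_resp,
        "atp_linked": basal[-1] - min(post_oligo),   # ATP-linked OCR
        "proton_leak": min(post_oligo) - non_mito,
        "maximal_respiration": maximal,
        "spare_capacity": maximal - basal_resp,
    }

# Hypothetical OCR trace (pmol O2/min), three readings per phase:
params = mito_stress_parameters(
    basal=[180, 175, 178],
    post_oligo=[80, 75, 74],
    post_fccp=[260, 255, 250],
    post_rot_aa=[30, 28, 27],
)
```

In such a trace, a drop in basal, FCCP-stimulated, and oligomycin-insensitive OCR, as reported here for HG-treated cells, lowers basal respiration, maximal respiration, and spare capacity in parallel.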
Basal OCR and OCR in response to injection of oligomycin (ATP synthase inhibitor), FCCP (mitochondrial uncoupler), and rotenone/antimycin (Complex I and III inhibitors, respectively; Figure 4, central upper panel) were assayed. Evaluation of the six mitochondrial parameters showed that HG significantly reduced basal respiration (** p < 0.01; Figure 4A), maximal respiration (** p < 0.01; Figure 4C), spare respiratory capacity (** p < 0.01; Figure 4D), non-mitochondrial oxygen consumption (* p < 0.05; Figure 4E) and ATP production (* p < 0.05; Figure 4F), relative to NG conditions. PDGF-C significantly increased the non-mitochondrial oxygen consumption diminished by HG conditions (# p < 0.05; Figure 4E).
In the same experiments, the parameters baseline phenotype, stressed phenotype (after oligomycin injection), and metabolic potential were evaluated to assess the cell energy metabolism phenotype (Figure 5, central upper panel) of HAECs under NG and HG conditions, and the effect of PDGF-C on the changes induced by HG. As shown in Figure 5, HG significantly reduced the baseline OCR (* p < 0.05; Figure 5A), the baseline OCR/ECAR ratio (** p < 0.01; Figure 5C), the stressed OCR (** p < 0.01; Figure 5D) and the stressed OCR/ECAR ratio (**** p < 0.0001). PDGF-C increased the stressed OCR (* p < 0.05; Figure 5D) and the stressed ECAR (* p < 0.05; Figure 5E) and slightly reduced the metabolic potential (% baseline OCR; non-significant; Figure 5G). Interestingly, PDGF-C significantly reduced the metabolic potential (% baseline OCR; # p < 0.05; Figure 5G) and increased the metabolic potential (% baseline ECAR; # p < 0.05; Figure 5H) under basal d-glucose conditions (5 mmol/L), suggesting that PDGF-C potentiates glycolytic metabolism even under normal glucose conditions.
Discussion
Although mitochondrial content in endothelial cells is low because of their low energy demand [9] and their mainly glycolytic ATP production [7,[20][21][22][23], they have a long and extensive mitochondrial network that undergoes balanced cycles of fission and fusion and exerts essential functions related to environmental sensing and signaling [9][10][11] and to maintaining the balance among calcium concentrations, ROS production and nitric oxide [23].
Mitochondrial network fragmentation has been previously reported in endothelial cells and in in vivo models of high glucose environments and diabetes [3,[12][13][14][15]; this condition has been associated with the development of vascular dysfunction [3,15]. The influence of increased ROS production on the induction of mitochondrial fission is clearly established [24,25], and our results support these findings. In a previous study published by our group, we found augmented mitochondrial ROS in HAECs treated with HG for 6 to 9 h, which was related to the diminished expression of the antioxidant enzyme SOD2 and the activity of the Nrf2/Keap1 pathway [5]. Here, in the same endothelial model, we report the induction of mitochondrial network fragmentation by HG conditions, reflected as short and discontinuous mitochondria localized at the cellular periphery, reductions in the count of branches, junctions, and total area, diminished expression of the fusion protein OPA1, and augmented levels of DRP1pSer616, relative to NG conditions (Figures 1 and 2). Concerning the PDGF-C effect, there are no reports of its involvement in mitochondrial dynamics in association with any pathology, so this is the first report showing PDGF-C as a mitochondrial morphology modulator in endothelial cells subjected to metabolic stress conditions. The mechanism could be associated with the induction of SOD2 expression and the consequent reduction in mitochondrial ROS production [5], which could regulate the mitochondrial fission machinery and maintain mitochondrial integrity and functionality [26]. Although no changes in DRP1 expression were observed, it is known that its pro-fission role depends on its translocation from the cytoplasm to the mitochondrial outer membrane [24]. This process is controlled by the phosphorylation of the Ser616 and Ser637 residues [24,27].
In our model, HG conditions induced the phosphorylation of DRP1 at Ser616, which promotes the transit of DRP1 to mitochondria and leads to their fragmentation [17]; interestingly, this effect was reversed by PDGF-C treatment, possibly through the parallel activation of phosphatases whose target is DRP1, although the underlying mechanisms remain unclear. In this context, the increased expression of OPA1 and the modulation of DRP1 phosphorylation at the Ser616 residue induced by PDGF-C probably promote the fusion of dysfunctional mitochondria, leading to a redistribution of damaged components, including mitochondrial DNA, uncoupling proteins, and antioxidant enzymes [13,14]. Additionally, the phosphorylation of DRP1 at the Ser637 residue is known to reverse the effects of Ser616 phosphorylation [24]. Although the phosphorylation state of this residue was not evaluated in our study, it is known that PDGF-C drives different signalling pathways, including PI3K/Akt, MAPK, and PLCγ [28], leading to the activation of kinases such as AMPK, MAPK, and cyclin-dependent kinase 1/cyclin B1, which are involved in the phosphorylation of this residue [24]. Changes in mitochondrial morphology induced by environmental conditions, such as increased extracellular glucose levels, can alter the typical mitochondrial bioenergetic profile [29]. As shown in our results, HG-induced alterations in mitochondrial function are evidenced by diminished basal respiration, maximal respiration, reserve capacity, non-mitochondrial OCR, and ATP-linked OCR (Figure 4).
The reductions in maximal respiration, reserve capacity, and ATP-linked OCR have been related to diminished mitochondrial mass, mitochondrial dysfunction, low ATP demand and severe electron transport chain damage, respectively [16,19], which is consistent with the diminished total mitochondrial area and the mitochondrial network fragmentation (fewer branches and junctions relative to NG conditions) observed by confocal microscopy (Figure 1), and with the reduction of mitochondrial fusion evidenced by the diminished expression of OPA1 (Figure 2). Although non-mitochondrial OCR has been related to the increased production of extramitochondrial (cytosolic) ROS [16,19], in our model we did not find evidence suggesting high production of cytosolic ROS in ECs exposed to HG [5]. Our results are supported by different studies indicating that DRP1-induced mitochondrial fission is associated with a diminished OXPHOS capacity and increased activity of glycolytic metabolism [24,30]. Regarding these affected parameters, PDGF-C recovered the mitochondrial morphology, possibly through increasing the expression of the mitochondrial fusion protein OPA1; however, PDGF-C only exerted a restorative role on the non-mitochondrial OCR parameter (Figure 4E), which could be related to the induction of the initial response of endothelial cells to metabolic stress. Even though our results indicate diminished OXPHOS activity in HG cells (Figures 4 and 5), a proteomic analysis [31] showed that energy production in diabetic primary rat cardiac microvascular endothelial cells (RCMVECs) is shifted from glycolysis to OXPHOS after high glucose stress (25 mM for 2 weeks).
However, we demonstrated that acute HG stress (7 h) in non-diabetic human aortic endothelial cells decreases OXPHOS metabolism (Figure 4), as assessed by oxygen consumption; similarly, Haspula et al., 2019 [31] reported diminished oxidative phosphorylation and increased glycolysis-related protein expression in non-diabetic RCMVECs after HG exposure, relative to cells in NG conditions. These results suggest differential metabolic responses to HG exposure dependent on cell origin and phenotype (i.e., microvascular vs. macrovascular, diabetic vs. non-diabetic). Typically, when extracellular glucose increases, endothelial cells enhance glucose uptake, mainly through GLUT1 transporters, and its metabolism through glycolysis and the glycolytic side branches [32], while the OXPHOS capacity diminishes, as reported by [23] in the EA.hy926 cell line and confirmed by our results (Figure 4). Similarly, in a high glucose HUVECs model, Zeng et al., 2019 [33] demonstrated an unbalanced process of mitochondrial dynamics promoting fission through the decreased expression of MFN1 and increased expression of FIS1, which was associated with decreased expression of complex I (NADH:ubiquinone oxidoreductase core subunit 1) and complex II (succinate dehydrogenase) of the electron transport chain, leading to a deficient aerobic metabolism. Our results suggest that PDGF-C modulates the damage induced by HG on the mitochondrial network and morphology; additionally, it compensates for the alteration in the energetic phenotype induced by HG. Nevertheless, our work proposes an initial approach to show the changes that acute HG induces in a macrovascular endothelial cell model and the role that PDGF-C can exert on these changes. It constitutes a guide for future experiments, including the assessment of endothelial function parameters (i.e., angiogenic capacity, nitric oxide production) and the evaluation of the behavior of each mitochondrial complex under the established conditions.
Cell Culture and Treatments
All experiments were established according to the conditions selected before and reported in [5]. Briefly, human aortic endothelial cells (HAECs) from passage 4 to passage 7 were grown under standard culture conditions in EGM-2 BulletKit medium containing 5.5 mmol/L glucose (the average normal human fasting blood glucose) [4]. Confluent cells were seeded in multiwell plates, and after 24 h, cells were deprived in EBM-2 medium containing 5.5 mmol/L glucose and 0.2% fetal bovine serum. After 12 h, cells were treated with 29.5 mmol/L d-glucose to reach a final concentration of 35 mmol/L (HG) for 6-9 h; these times were selected according to a previous study in which we identified increased production of mitochondrial ROS after 6-9 h of HG [5]. Treatments with 50 ng/mL of hrPDGF-C were made for 1 h after 6 h of 35 mmol/L d-glucose stress induction, considering the short half-life of PDGF in HUVECs, which has been reported to be between 50 min and 3 h [34]. All comparisons were made against cells treated with 5.5 mmol/L glucose.
Mitochondrial Network Analysis
HAECs were seeded at 3 × 10⁴ cells/well in 35 mm glass-bottom culture dishes (MatTek) coated with 0.2% gelatin and cultured until 60% confluence. Once deprived for 12 h, cells were treated with 35 mmol/L d-glucose for 6 h and 1 additional hour with 50 ng/mL of PDGF-C to evaluate mitochondrial network integrity. Briefly, live cells were washed once with PBS and stained with 100 nmol/L MitoTracker Green FM and 5 µg/mL Hoechst to visualize mitochondria and nuclei, respectively [35]. 2D and 3D cell images were acquired with an Olympus FV1000 confocal microscope, using a 60× oil immersion objective and excitation/emission ranges of 400/545 for MitoTracker Green FM and 361/497 for Hoechst.
Images were pre-processed according to the protocol suggested by Chaudhry et al., 2020 [36], and data on the count and length of branches, the count of junctions, and the total mitochondrial area were obtained according to the protocol suggested by Valente et al., 2017 [37], using the Fiji plugin for ImageJ.
Mitochondrial Dynamics-Related Proteins Expression
Expression of the mitochondrial fusion proteins OPA1, MFN1 and MFN2 and the fission proteins DRP1 and FIS1 was measured by western blot. HAECs were seeded in 6-well plates, treated as above, and lysed on ice in RIPA buffer supplemented with a protease and phosphatase inhibitor cocktail. Total protein was quantified by the bicinchoninic acid assay. The obtained protein was electrophoresed and transferred to PVDF membranes. Membranes were incubated overnight at 4 °C with the antibodies and dilutions mentioned in Table 1. The next day, membranes were washed and incubated with anti-rabbit IgG HRP-linked or anti-mouse IgG HRP-linked (Table 1) antibodies at room temperature for 1 h. Protein bands were detected with SuperSignal™ West Pico PLUS chemiluminescent substrate and captured with the iBright 1500 imaging system from ThermoFisher Scientific (Chelmsford, MA, USA). The obtained bands were analyzed by densitometry with ImageJ software [38].
Bioenergetics Analysis
Oxygen consumption rate (OCR) and extracellular acidification rate (ECAR) were measured in a Seahorse XFe24 analyzer (Seahorse Biosciences, MA, USA) through the mito stress and energy phenotype tests, respectively. Cells were plated in a Seahorse microplate at a density of 7 × 10⁴ cells/well and treated as mentioned before. After completing the abovementioned treatments, cells were equilibrated in DMEM without sodium bicarbonate, containing 5 mmol/L or 35 mmol/L (according to cell treatments) of d-glucose, 2 mmol/L of glutamine and 1 mmol/L of sodium pyruvate.
Basal OCR and OCR in response to sequential injection of 1.5 µmol/L oligomycin (ATP synthase inhibitor, mitochondrial complex V), 1 µmol/L FCCP (mitochondrial uncoupler), and 0.5 µmol/L rotenone/antimycin (mitochondrial complex I and III inhibitors, respectively) were registered. Parameters such as proton leak, maximal respiration, spare respiratory capacity, non-mitochondrial oxygen consumption, and ATP-linked OCR were analyzed through the mito stress test to reflect mitochondrial function. Basal ECAR and stressed ECAR in response to oligomycin injection were registered, and parameters such as baseline and stressed OCR, baseline and stressed ECAR, the baseline OCR/ECAR ratio, and the metabolic potential were analyzed through the energy phenotype test. OCR values (pmol O2/min) and ECAR values (mpH/min) were normalized to the total protein concentration per well (µg/µL) measured by a Bradford assay.
Statistics
All experiments were done at least in triplicate, and data are expressed as mean ± SEM. An unpaired t-test was used for comparisons between two groups. A p-value < 0.05 was considered statistically significant. GraphPad software (San Diego, CA, USA) was used for all analyses.
Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: Data is contained within the article. Additional information is available upon request. Conflicts of Interest: The authors declare no conflict of interest.
Osteoid osteoma of the femoral head treated by radiofrequency ablation: a case report

Introduction

We present a case report highlighting the unusual location and atypical imaging characteristics of an osteoid osteoma in the juxta-articular region of the femoral head, and its treatment with radiofrequency ablation. This treatment option is low in both risk and morbidity and is therefore the best option for lesions that are difficult to access surgically because of the risks involved.

Case presentation

A 40-year-old Indian man from West Bengal presented to our facility with a history of progressively severe left hip pain of insidious onset, requiring analgesics. Imaging with plain radiographs, computed tomography and magnetic resonance imaging confirmed findings of osteoid osteoma in a subarticular location in the femoral head, although imaging features were atypical due to the intra-articular subchondral location.

Conclusion

Radiofrequency ablation is a newer treatment modality for osteoid osteoma that, being minimally invasive, offers comparable results to surgery with significantly lower morbidity. To the best of our knowledge, treatment of osteoid osteoma in the foveal region of the femoral head with radiofrequency ablation has not been reported to date. We wish to highlight the successful outcome in our index case using this technique.

Introduction

Osteoid osteomas represent 12% of benign bone tumors and were first described by Jaffe in 1935 [1]. They are twice as common in males, with 90% occurring between 5 and 30 years of age [2]. In over 50% of cases they are centered on the cortex of the diaphysis of the femur or tibia [1]. Within the femur, lesions are usually found proximally, most commonly within the neck and intertrochanteric region [1]. Location of osteoid osteomas in cancellous bone is rare, and rarer still in intra-capsular locations [2].
However, the exact incidence of juxta-articular osteoid osteomas in the femoral head is not known. In most cases, affected individuals complain of severe pain related to the lesion, which is worse at night and relieved by ingestion of non-steroidal anti-inflammatory agents [3]. Plain radiographs demonstrate the nidus in 85% of cases. A total of 20% of cases may be intra-medullary and have less reactive sclerosis [4]. When intra-capsular in location, an osteoid osteoma may present with clinical features that mimic inflammatory synovitis and with atypical radiological findings such as lack of both sclerosis and periosteal reaction [5]. Magnetic resonance imaging (MRI) is less sensitive than computed tomography (CT); it allows detection of marrow edema and associated soft tissue edema, but a nidus is identified in only 65% of cases with MRI. CT scanning improves detection of the nidus to more than 85% [6]. Surgery remains the standard treatment in cases where the histology of the lesion is in doubt, neurovascular structures are within 1.5 cm, or in cases with repeated failure of any other minimally invasive ablative technique or percutaneous resection [7]. Successful surgical therapy occurs in 88% to 100% of cases. Primary radiofrequency ablation in a case series of over 200 patients has had a success rate of 76% to 100% [6]. In another series, the primary and secondary success rates of this technique were 87% and 83%, respectively. Surgical resection and open curettage show comparable success rates, but are associated with higher complication rates [8].

Case presentation

A 40-year-old Indian man from West Bengal presented to our facility with progressive left hip pain of insidious onset for a duration of five years. The pain had worsened in the six months prior to presentation, and was continuous, dull and aching in nature and relieved with analgesics. His clinical examination was unremarkable except for mild tenderness over the left hip anterior joint line.
All hip movements were normal and pain-free. Plain radiographs of the pelvis revealed a 15 × 11 mm, well-defined lytic lesion with a thin sclerotic rim located in the subarticular portion of the left femoral head. Figure 1 shows a plain radiograph in anteroposterior view showing a well-defined lytic lesion with a thin sclerotic rim located in the subarticular portion of the left femoral head (white arrow). On MRI, the lesion was hypointense on T1-weighted imaging and hyperintense with a hypointense rim on T2-weighted imaging. Figure 2 shows a T1-weighted axial MRI showing a corresponding hypointense lesion (white arrow). Figure 3 shows a T2-weighted coronal image showing a hyperintense focus with a hypointense rim (black arrows). Figure 4 shows T2 fat-suppressed images in coronal sections showing a hyperintense focus with a hypointense rim (black arrows). CT sections confirmed the above findings and revealed a distinct nidus measuring 11 × 10 mm. Figure 5 shows an axial CT section confirming a clearly defined lucent nidus with a surrounding sclerotic rim (white arrow). A radionuclide bone scan (Figure 6) revealed a focal hot spot at this site (black arrow). Despite the uncharacteristic location, a diagnosis of osteoid osteoma was made based on the imaging features. After informed consent was obtained, it was decided to perform radiofrequency ablation. Under general anesthesia, the nidus was localized with 3 mm CT sections and osseous access was established with a 4.5 mm drill. Figure 7 shows an axial CT section with the radiofrequency ablation (RFA) needle placed within the drilled tract. After localization, the RFA needle (Starburst SDE, RITA Medical Solutions, Mountain View, CA, USA) was introduced through the drilled canal and its tip placed in the nidus. Monopolar RFA was performed at 90°C for a period of 5 minutes at 60 W. Figure 8 shows residual air pockets post radiofrequency ablation.
The procedure was deemed successful as our patient was pain-free within 24 hours of the procedure and remained so at follow-up. Figure 9 shows a plain radiograph in anteroposterior view (white arrow) at review 4 months post-procedure. Figure 10 shows plain radiograph frog-leg lateral views (black arrow) showing resolution of the lesion.

Conclusion

RFA is an excellent alternative to surgical excision in the foveal region as it avoids the complications associated with surgical exposure of the femoral head, including injury to the capsular vessels and post-operative capsular laxity. It also avoids weakening of the femoral neck by large-diameter drilling for surgical access, and chondral or osteochondral damage from resection of the subchondral lesion. Furthermore, in this location there exists a potential risk of avascular necrosis owing to the close proximity of the foveal artery in the ligamentum teres. The foveal artery is a branch of the posterior division of the obturator artery, which becomes important for avoiding avascular necrosis of the head of the femur when the blood supply from the medial and lateral circumflex arteries is disrupted. In summary, the unusual finding in this index case is the relative absence of bone thickening, which could be due to the intra-capsular location. RFA is a better option than surgery in this location as it avoids injury to the articular margin, prevents capsular injury and reduces the risk of weakening the femoral neck. Injury to the foveal artery, with the potential risk of avascular necrosis, must be kept in mind when the lesion is close to the fovea of the femoral head.

Consent

Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
Conventional herniorrhaphy followed by laparoscopic appendectomy for a variant of Amyand's hernia: a case report

Background

Amyand's hernia (AH) is an appendix (with or without acute inflammation) trapped within an inguinal hernia. Most AH with acute appendicitis have a preexisting appendix within the hernia sac. We herein report a variant of AH that has never been described before. An inflamed appendix that was managed conservatively was found to have migrated and become trapped in the sac of a previously unrecognized right inguinal hernia 6 weeks after the index admission, resulting in a secondary Amyand's hernia.

Case presentation

A 25-year-old healthy Taiwanese woman had persistent right lower abdominal pain for 1 week and was diagnosed with perforated appendicitis with a localized abscess by abdominal computed tomography (CT). No inguinal hernia was noted at that time. Although the inflamed appendix along with the abscess was deeply surrounded by bowel loops, so that percutaneous drainage was not feasible, the condition was treated successfully with antibiotics. However, she was rehospitalized 6 weeks later for a painful right inguinal bulging mass of one week's duration. Abdominal CT revealed an inflamed appendix with abscess formation in an indirect inguinal hernia, raising the question of an Amyand's hernia with perforated appendicitis. Via a typical inguinal herniorrhaphy incision, surgical exploration confirmed the diagnosis, and it was managed by opening the hernial sac to drain the abscess and reducing the appendix into the peritoneal cavity, followed by conventional tissue-based herniorrhaphy and a laparoscopic appendectomy. She was then discharged uneventfully and remained well for 11 months.

Conclusions

Unlike the traditional definition of Amyand's hernia, where the appendix is initially in the hernia sac, the current case demonstrates that Amyand's hernia can be a type of delayed presentation following initial medical treatment of acute appendicitis.
However, it can still be managed successfully by a conventional tissue-based herniorrhaphy followed by laparoscopic appendectomy.

Background

Amyand's hernia (AH) is defined as an inguinal hernia containing an inflamed or noninflamed appendix within the hernia sac [1]. AH can be seen in patients of all ages and accounts for much less than 1% of all inguinal hernias [2,3]. The symptoms and signs of AH with concomitant appendicitis generally present as nausea, vomiting and a nonreducible inguinal bulging mass with local tenderness and swelling. Typical signs of acute appendicitis, such as tenderness over McBurney's point, psoas sign, and Rovsing sign, are absent in these patients due to the unusual position of the inflamed appendix [4,5]. Although the exact mechanism of AH with concomitant appendicitis is not well clarified, several common hypotheses have been reported in the literature, including adhesion between the appendix and the inguinal sac followed by venostasis and hypoperfusion of the appendix due to contraction of the abdominal wall muscles [6,7], and incarceration of the appendix leading to inflammation and swelling, which turns AH into a nonreducible hernia [8]. All the hypotheses of AH with simultaneous appendicitis have one thing in common: the preexistence of an inflamed appendix within the inguinal sac. Herein, we report a case of incarcerated AH caused by migration of a ruptured appendix 6 weeks after conservative treatment.
Case presentation

A 25-year-old healthy Taiwanese woman without any underlying medical disease or history of inguinal hernia had experienced persistent right lower abdominal pain for 1 week. The pain was dull, progressive, and not related to food intake. Associated symptoms included anorexia and nausea. She was brought to the emergency department (ED) because of progressive symptoms, including positive McBurney's point tenderness, mild muscle guarding, mild leukocytosis with a left shift, and elevated CRP levels. Abdominal computed tomography (CT) showed an engorged appendix with wall thickening and an appendicolith, along with a small amount of abscess (Fig. 1). No inguinal hernia was noted on CT, and the appendix was located in the paracecal position. Percutaneous drainage was not feasible because the abscess was surrounded by intestinal loops and its volume was small. Therefore, she was treated with empiric antibiotics for a week and was discharged uneventfully. Throughout the course of hospitalization, she did not complain of a bulging inguinal mass or inguinal pain. An interval appendectomy was scheduled at about three months after discharge. However, she came to the ED a month later due to a persistent painful right inguinal bulging mass for one week. It was firm, tender and nonreducible. No recent history of coughing or constipation was mentioned. Pelvic CT was then arranged and revealed an inflamed appendix with abscess formation in an indirect inguinal hernia, raising the question of an Amyand's hernia with a perforated appendicitis (Fig.
2), including the preperitoneal region and right inguinal canal. It was most likely due to a perforated appendix incarcerated in the hernia sac. After thorough irrigation and debridement of the infected right inguinal region, right inguinal herniorrhaphy with McVay repair was performed by opening the sac, reducing the inflamed appendix into the abdominal cavity, carefully avoiding contamination of the surgical field at the inguinal region, and conducting high ligation of the sac. Use of a mesh-based repair was contraindicated because of the associated abscess. The reasons not to perform appendectomy in situ were that the cecal-appendiceal junction was still inside the peritoneal cavity and the appendiceal stump could not be safely secured if appendectomy were performed through the narrow opening of the sac. Furthermore, the peritoneal cavity had to be irrigated and cleared anyway. Therefore, closing the inguinal wound followed by laparoscopic appendectomy seemed to be the best choice under those circumstances. A laparoscopic appendectomy as well as irrigation of the intra-abdominal abscess was then performed successfully (Fig. 3), with one Jackson-Pratt drain left at the cul-de-sac. The drain was successfully removed 5 days after the operation. The postoperative course was smooth without complications and the patient was discharged 5 days after the operation.

Discussion and conclusions

AH is an uncommon but complicated type of inguinal hernia, arising in much less than 1% of all inguinal hernia cases [2]. It is an appendix inside the hernia sac (usually not inflamed). However, the trapped appendix in the hernia sac can become inflamed, and the incidence of appendicitis within the inguinal hernia sac is reported to range from 0.07% to 0.13% of all inguinal hernias [9-11].
We herein reported an atypical type of AH in which the inflamed appendix was initially located in the abdominal cavity and there was no history or physical finding of an inguinal bulge or an inguinal hernia. However, following conservative antibiotic treatment of the perforated appendicitis, the appendix migrated and became trapped in a previously undiagnosed inguinal hernia. The likely pathophysiology is that the local inflammation and surrounding intraperitoneal abscess led to subsequent adhesions between the inflamed appendix and the peritoneum. The peritoneum then became part of the hernia sac with the appendix trapped in it. When the appendicitis deteriorated, the incarcerated AH became symptomatic. A noninflamed AH can be treated with an inguinal incision followed by inguinal herniorrhaphy, and the appendix is reduced into the abdominal cavity with or without subsequent appendectomy [3,12]. It is believed that such an approach may keep the herniorrhaphy a clean surgery rather than a clean-contaminated surgery [2]. Although our case is a secondary Amyand's hernia that was noted following initially conservative treatment of a perforated appendicitis, we took a similar approach to avoid extensive contamination of the operative field for herniorrhaphy. For our case, an additional advantage of performing appendectomy after, rather than simultaneously with, herniorrhaphy is that it is easier to clear the intra-abdominal abscess via this approach. Furthermore, we chose laparoscopic rather than open appendectomy for this patient because laparoscopic appendectomy is no longer a contraindication for perforated appendicitis in the modern era [13].
In conclusion, our patient had a variant of AH that has never been reported before. The likely pathophysiology of this secondary AH was inflammation and adhesion of the appendix to the part of the peritoneum that subsequently became part of the indirect inguinal hernia sac. Herniorrhaphy followed by laparoscopic appendectomy provided a good outcome for this patient.

Fig. 1 Initial contrast-enhanced abdominal CT revealed an engorged appendix with wall thickening, appendicolith and surrounding tumor formation, compatible with ruptured appendicitis with local abscess formation. (Arrow: Appendicolith with local abscess formation)

Fig. 2 The follow-up contrast-enhanced abdominal CT revealed interval progression of the right lower abdominal abscess with transcompartment involvement, including the preperitoneal region and right inguinal canal, highly suspicious of incarcerated AH secondary to ruptured appendicitis. (Arrow: Transcompartment inflammation into right inguinal canal)
SOUTH AFRICA'S QUIXOTIC HERO AND HIS NOBLE QUEST – CONSTITUTIONAL COURT JUSTICE ALBIE SACHS AND THE DREAM OF A RAINBOW NATION

Albie Sachs has always been clear-sighted in his vision of a rainbow nation at the southern tip of Africa, characterised by tolerance and mutual respect among and between its citizens. Using the well-known story of Don Quixote de la Mancha as a metaphor, this article sets out to chart the "quest" undertaken by Albie Sachs in pursuit of his noble dream. It traces a number of important personal and political transitions that he has made along the way, from his initial emphasis on solidarity and revolutionary struggle, to his later focus on issues of diversity and tolerance. The article touches briefly on aspects of Albie Sachs's inspiring dignity jurisprudence, which it applauds, but then poses the question as to whether or not his views represent real hope to a country which, a decade and a half after the end of apartheid, remains fractured and traumatised.

INTRODUCTION

In a book published in 2004, approximately one decade after the demise of apartheid, former South African Constitutional Court Justice Albie Sachs tells of visiting Belgium for the purpose of raising funds for the South African Constitutional Court's Architectural Artworks. This particular plea for funds was rejected, and he describes his reaction as follows: "It is so tiring to be a supplicant, despite the fact that everybody 'really loves' the cause. Oh well, even if like my hero Don Quixote I find myself knocked off my horse and having to get up out of the dust once again, I have at least done something for South Africa, strengthening the idea that we are a cultivated people serious about democracy, human rights and culture." Following the above, the article focuses on a specific issue which has been of central concern to Sachs throughout his career, and which is of central concern to South Africans in their efforts to establish a "rainbow nation".
This is the issue of tolerance and diversity within a fractured and traumatised country; the issue of what precisely it means to be a "South African" within a newly democratic South Africa. In a paper written in 1990, as the country began to emerge from apartheid, Sachs asked his comrades in the ANC the following question: "Can we say that we have begun to grasp the full dimensions of the new country and new people that is struggling to give birth to itself, or are we still trapped in the multiple ghettoes of the apartheid imagination?" 4 Over the years, as South Africa emerged from apartheid and established itself as a democratic country, Sachs has himself attempted to answer this question. This article focuses particularly on aspects of his sensitive and inspiring dignity jurisprudence. The purpose is not, however, simply to trumpet the virtues of this jurisprudence. The article seeks to address the more critical question as to whether or not this jurisprudence is unduly idealistic. It highlights certain lines of tension which cast a shadow over the ability of Sachs, and his rainbow vision, to slay the dragon of intolerance in South Africa.

ALBIE SACHS AS QUIXOTIC HERO

The bare bones of Sachs's remarkable career are well known. He was born in Johannesburg in 1935 and practised as an advocate at the Cape Bar between 1957 and 1966. He was mainly involved in civil rights work, defending those charged under racist apartheid laws and strict security legislation. He was harassed by the security police and was placed under banning orders restricting his movement. He was detained without trial for two extended periods, during which time he was subjected to solitary confinement. As a result of this treatment, in 1966 he opted for political exile in England. He obtained a PhD from the University of Sussex and taught law at the University of Southampton. In 1977 he returned to Africa and became a Professor of Law at the Eduardo Mondlane University in Maputo, Mozambique.
In 1988 he was almost killed by a car bomb planted by agents of the apartheid regime, losing his right arm and the sight in one eye. In 1989 he taught at Columbia University in New York and became the founding director of the South African Constitution Studies Centre, based at the Institute of Commonwealth Studies, University of London. In 1992 the Centre moved to the University of the Western Cape, South Africa. As a member of the national executive of the ANC, as well as a member of the ANC's Constitutional Committee, Sachs helped to negotiate South Africa's new constitution. In 1994 he was appointed by Nelson Mandela as a Justice of South Africa's Constitutional Court. He is a prolific author, and one of South Africa's best-loved jurists. In accordance with the provisions of the Constitution, he stepped down as a Constitutional Court judge in October 2009. 4 Sachs "Preparing Ourselves for Freedom" in De Kok and Press (eds) Spring is Rebellious -Arguments About Cultural Freedom by Albie Sachs and Respondents (1990) 19. What, however, of the man behind the stark details set out above? A romantic and idealistic thread is apparent throughout both the judgments and other writings of Sachs. Indeed, it is precisely the emotional thrust behind his ideas, his sensitivity to the pain of others, his willingness to dream, and his courage not to allow the dream to die, which has endeared him to lawyers, academics and ordinary South Africans. A good place to start is with his attitude to love. In a paper written to his ANC comrades in 1990, all of whom were still involved in the uncompromising life and death struggle against apartheid, Sachs laments the lack of works, within the revolutionary struggle-art of the time, dealing with the subject matter of love: "And what about love? We have published so many anthologies and journals and occasional poems and stories, and the number that deal with love do not make the fingers of a hand. 
Can it be that once we join the ANC we do not make love any more, that when the comrades go to bed they discuss the role of the white working class? Surely even those comrades whose tasks deny them the opportunity and direct possibilities of love, remember past love and dream of love to come. What are we fighting for, if not the right to express our humanity in all its forms, including our sense of fun and capacity for love and tenderness and our appreciation of the beauty of the world?" 5 In similar vein, the following words written as he was recovering from the terrible injuries suffered in the notorious apartheid car bomb attack against him in 1988 are, perhaps, representative of his view on women: "Men talk to you and smile and use words like brave and strong and the struggle, but after shaking my back-to-front hand and sensing relief that the introduction has been so easy, they sit back in their chairs and engage me with words. Women come forward, they hold your arm and nuzzle against your cheek, enabling you to crook your arm around their heads and stroke their hair and feel the tenderness in yourself coming out while you are receiving physical love and comfort from them. The doctors have done their bit, now what I need is endless stroking and warmth." 6

5 Sachs in De Kok and Press (eds) (1990) 20-21. It is interesting to note that Sachs's paper caused much controversy within the ANC at the time. The Transvaal Interim Cultural Desk of the ANC responded as follows: "We challenge cultural workers to root themselves in the democratic movement so that their creative responses to life will be informed by an understanding and experience of the struggle. We thereby reiterate the view of the Gaberone and CASA (Culture in Another South Africa) conferences on culture that one is first part of the struggle and then a cultural worker." (Transvaal Interim Cultural Desk "The Cultural Boycott and Albie Sachs' Paper" in De Kok and Press (eds) (1990) 107-108.) From this clearly neo-Marxist viewpoint of the relationship between an artist and his or her art, the artist's duty to the "struggle" comes before whatever duty he or she may have to "art". All art is regarded as political, and the artist is not seen as a neutral observer or commentator, but as a political actor engaged in struggle, occupying a particular class position and situated at a particular historical conjuncture. Although this approach may have been necessary and effective at the height of the struggle against apartheid, the question raised by Sachs in his paper was essentially whether or not such an approach would still be desirable in post-apartheid South Africa.

Perhaps because of the romantic and sensitive side to his nature, Sachs is known for his love of, and involvement with, the world of architecture and art. He was actively involved "in the development of the Constitutional Court building and its art collection on the site of the Old Fort Prison in Johannesburg". 7 He was a driving force behind the design of the building and its surrounds, which were deliberately conceptualised as the opposite of a "grand dominant monument". His explanation for this reveals something of the essence of the man and his values: "Grand dominant monuments are only needed to represent victories of war, exclusivity in the face of threat to an unpopular social system, economic or elite social power, or the unattainable – places of God or the gods. The Constitution, and therefore its houses and precinct, have nothing in common with any of these situations.
The Constitution represents the opposite; an alternative means should be found to achieve symbolic importance for the building among the citizens of South Africa. We have chosen to seek the power of a pre-eminent building without the monumentality." 8 A central concern of Sachs and those responsible for designing the building was that all South Africans, including previously marginalised groups, should feel a sense of ownership and belonging when entering the building. One of the ways in which this was achieved was to call on South Africans from all walks of life to produce works of art which were then incorporated into the very fabric of the building. In the words, once again, of Sachs: "Like the Constitution, the Court belongs to and serves the whole nation. We want the eyes, hands and hearts of all our artists famous and unknown, to be involved. We do not want to acquire loose art and place it in the building but rather ensure that the art is integrated into the very fabric of the building. We want this to be a national project." 9 It is, perhaps, not too far-fetched to state that the South African Constitutional Court building and its art represent not only the spirit of a young democratic South Africa, but also that of Sachs himself. To conclude this section linking Sachs to quixotic notions of romance and idealism, brief mention may be made of his approach to spirituality. Although he does not appear to subscribe to one or other formal religion, it is clear that he is a deeply spiritual person, and in one of his recent novels he describes himself as "the non-believing Jew, spiritual and dreamy by nature, who believes in belief but has no faith in any supernatural being directing the world". 
10 This deeply spiritual approach to life is reflected in his judgments, and one would be hard pressed to find a more sensitive and insightful description of the importance of spirituality than the following: "The right to believe or not to believe, and to act or not to act according to his or her beliefs or non-beliefs, is one of the key ingredients of any person's dignity. Yet freedom of religion goes beyond protecting the inviolability of the individual conscience. For many believers, their relationship with God or creation is central to all their activities. It concerns their capacity to relate in an intensely meaningful fashion to their sense of themselves, their community and their universe. For millions in all walks of life, religion provides support and nurture and a framework for individual and social stability and growth. Religious belief has the capacity to awake concepts of self-worth and human dignity which form the cornerstone of human rights. It affects the believer's view of society and founds the distinction between right and wrong. It expresses itself in the affirmation and continuity of powerful traditions that frequently have an ancient character transcending historical epochs and national boundaries." 11

7 See the South African Constitutional Court website: http://www.constitutionalcourt.org.za/site/judges/justicealbiesachs/index1.html (accessed 2009-09-14).

Many other examples could be given, but it is submitted that the above adequately demonstrates the romantic, idealistic and, perhaps, quixotic nature of the hero of this article. Attention must now be paid to his specific quest to bring about a rainbow nation at the southern tip of Africa.

TOLERANCE AND THE DIGNITY OF DIFFERENCE – ALBIE SACHS AND THE QUEST FOR A RAINBOW NATION

In 1990, as South Africa was about to enter the post-apartheid period, Sachs raised a crucial question.
South Africans knew what it was to struggle against a brutal regime, but did they know what it meant to be South African? Speaking at an art exhibition in Sweden, he stated as follows: "You Swedes know who you are. Perhaps your artists have to explore underneath all your certainties, dig away at false consciousness. We South Africans fight against real consciousness, apartheid consciousness, we know what we struggle against. It is there for all the world to see. But we don't know who we ourselves are. What does it mean to be a South African? The artists, more than anyone, can help us discover ourselves. Culture in the broad sense is our vision of ourselves and of our world. This is a huge task facing our writers and dancers and musicians and painters and film-makers. It is something that goes well beyond mobilising people for this or that activity, important though mobilisation might be." 12 Even though it is now almost twenty years since Sachs called attention to the issue of what it means to be South African, it is submitted that this vital question has not been properly answered. Furthermore, as the rest of this article will attempt to show, it is a question that needs urgently to be answered if South Africa is finally to overcome the legacy of apartheid. Sachs has done more than his fair share in pointing the way to a possible answer, but the question which remains is whether or not his has been a hopelessly idealistic quest, doomed to failure in the face of harsh realities. During the course of his quest in search of the rainbow nation, and after the fashion of any true quixotic hero, it is submitted that Sachs has marked his journey by making a number of important personal and political transitions along the way. It is important to trace certain of these transitions. 
He starts out as a dedicated revolutionary, committed to class struggle and fired up by neo-Marxist theory, but ends as a reconciler, who seems much more concerned with the protection of individual autonomy in the liberal sense championed by John Stuart Mill, than with the violent overthrow of the bourgeoisie as predicted by Marx. He starts as a dedicated cadre of the ANC operating at the very highest levels of that organisation, but then is required to relinquish his party membership when he dons the robes of a Justice of the newly formed South African Constitutional Court. It may be somewhat simplistic to describe his journey as being one from Marx to Mill, but something of this simple distinction is reflected in the following passage written by Sachs himself: "I was engulfed for some years of my life by the philosophy of historical materialism, which, I told myself at the time, was not orthodoxy but science … Later I came to be a member of the biggest party in the world, the party of ex-communists, which is full of bewildered people from every continent. People who had asked the hardest questions of the age and shown the greatest courage in fighting for a world without exploitation … My concern today with avoiding the imposition of orthodoxies of behaviour or belief by the state influences the way I interpret our Constitution. I have gone further than any of my colleagues in emphasizing that the Constitution calls for the widest recognition of openness, difference and pluralism … It is easy to tolerate beliefs and practices that are familiar and enjoy strong political support. The true test of tolerance comes when the practices exist on the margins of society and appear bizarre, even threatening to the mainstream." 13 It seems clear, therefore, that whereas his journey may have started with the emphasis on solidarity and struggle, it ends with a focus on diversity and tolerance - in effect, with his vision of a "rainbow nation".
Another way, perhaps, in which to describe this transition, is that it was a move from a position characteristic of a Critical Legal Studies approach to law, which is distrustful of Western liberal rights discourse, to one of a deep commitment to Western style constitutional democracy, with its liberal notions emphasising the importance of individual autonomy and human rights. Sachs reflects this transition as follows: "At an earlier stage of my life … I was hostile to a Bill of Rights because it took out of the political arena issues that were really political in character. I believed it was far better to allow such matters to be resolved through struggle and democratic processes than to convert them into juridical questions to be settled by elite and usually conservative lawyers … Today I see withdrawing certain questions from the political arena as being the principal virtue of a Bill of Rights. Disputes over relatively small issues can induce intense alarm if seen as the thin end of the wedge for something more significant … Intense mobilization and counter-mobilization can take place, and the country can be torn apart over a matter that has more symbolic than real importance. It is precisely these tinder-dry, inflammable issues … that a Bill of Rights can embrace and respond to. It converts potentially destructive clashes into contained legal disputes, to be decided by a dedicated body of jurists in a rational manner, according to agreed-upon processes and internationally accepted principles." 14
13 Sachs (2004) 67-68.
14 Sachs (2004) 39.
Having discussed certain of the personal and political transitions made by Sachs during the course of his journey, it is important to point out that his ultimate goal, the focus of his quest, has remained remarkably consistent over the years. Even before the end of apartheid, Sachs was clear-sighted in his vision of a rainbow nation, characterised by tolerance and mutual respect among and between its citizens.
In 1990, as apartheid was coming to an end, he enunciated his vision as follows: "In rejecting apartheid, we do not envisage a return to a modified form of the British Imperialist notion, we do not plan to build a non-racial yuppie-dom which people may enter only by shedding and suppressing the cultural heritage of their specific community. We will have Zulu South Africans, and Afrikaner South Africans and Indian South Africans and Jewish South Africans and Venda South Africans and Cape Moslem South Africans (I do not refer to the question of terminology -basically people will determine this for themselves). Each cultural tributary contributes towards and increases the majesty of the river of South African-ness. While each one of us has a particularly intimate relationship with one or other cultural matrix, this does not mean that we are locked into a series of cultural 'own affairs' ghettoes. On the contrary, the grandchildren of white immigrants can join in the toyi toyi -even if slightly out of step -or recite the poems of Wally Serote, just as the grandchildren of Dinizulu can read with pride the writings of Olive Schreiner. The dance, the cuisine, the poetry, the dress, the songs and riddles and folk-tales, belong to each group, but also belong to all of us … Each culture has its strengths, but there is no culture that is worth more than any other. We cannot say that because there are more Xhosa speakers than Tsonga, their culture is better, or because those who hold power today are Afrikaans-speakers, Afrikaans is better or worse than any other language." 15 The above vision permeates many of the writings and judgments of Sachs. In his book The Soft Vengeance of a Freedom Fighter he tells of being visited in hospital by Jacob Zuma, one of his comrades in the leadership of the ANC at the time, and the current president of South Africa. 
At that time, Sachs was recovering from the terrible injuries he suffered in a car bomb blast orchestrated by agents of the apartheid regime, and speaks in moving and personal terms of the visit as follows: "Zuma's African-ness, his Zulu appreciation of conversation and humour is mingling with my Jewish joke, enriching it, prolonging and intensifying the pleasure. We are comrades and we are close, yet we do not have to become like each other, erase our personal tastes and ways of seeing and doing things, but rather contribute our different cultural inputs so as to give more texture to the whole. This is how one day we will rebuild South Africa, not by pushing a steamroller over the national cultures, but by bringing them together, seeing them as the many roots of a single tree, some more substantial than others, but all contributing to the tree's strength and beauty." 16
15 Sachs in De Kok and Press (eds) (1990) 25. Elsewhere in the same work he makes a similar point when he states as follows: "We believe in a single South Africa with a single set of governmental institutions, and we work towards a common loyalty and patriotism. Yet this is not to call for a homogenised South Africa made up of identikit citizens. South Africa is now said to be a bilingual country: we envisage it as a multi-lingual country. It will be multi-faith and multicultural as well. The objective is not to create a model culture into which everyone has to assimilate, but to acknowledge and take pride in the cultural variety of our people". Sachs in De Kok and Press (eds) (1990) 24.
It is this dream of a rainbow nation which, years later, Sachs brings to his judgments as a Justice of the Constitutional Court. It is submitted that one of the most moving articulations of his vision is to be found in the well known Constitutional Court judgment of Fourie. 17 This was the case which resulted in the legalisation of same-sex marriages in South Africa, making it only the fifth country in the world, and the first in Africa, to take this step. Sachs wrote the judgment which, apart from a single dissenting judgment on a point not related to the substance of his argument, was unanimously adopted by the Court. There is one paragraph in the judgment which deserves to be quoted at length, since it may be said to represent the noble dream of Sachs: "A democratic, universalistic, caring and aspirationally egalitarian society embraces everyone and accepts people for who they are. To penalise people for being who and what they are is profoundly disrespectful of the human personality and violatory of equality. Equality means equal concern and respect across difference. It does not presuppose the elimination or suppression of difference. Respect for human rights requires the affirmation of self, not the denial of self. Equality therefore does not imply a levelling or homogenisation of behaviour or extolling one form as supreme, and another as inferior, but an acknowledgement and acceptance of difference. At the very least, it affirms that difference should not be the basis for exclusion, marginalisation and stigma. At best, it celebrates the vitality that difference brings to any society. The issue goes well beyond assumptions of heterosexual exclusivity, a source of contention in the present case. The acknowledgement and acceptance of difference is particularly important in our country where for centuries group membership based on supposed biological characteristics such as skin colour has been the express basis of advantage and disadvantage. South Africans come in all shapes and sizes. The development of an active rather than a purely formal sense of enjoying a common citizenship depends on recognising and accepting people with all their differences, as they are. The Constitution thus acknowledges the variability of human beings (genetic and socio-cultural), affirms the right to be different, and celebrates the diversity of the nation.
Accordingly, what is at stake is not simply a question of removing an injustice experienced by a particular section of the community. At issue is a need to affirm the very character of our society as one based on tolerance and mutual respect. The test of tolerance is not how one finds space for people with whom, and practices with which, one feels comfortable, but how one accommodates the expression of what is discomfiting." 18 It is submitted that the above sets out the ultimate goal of Sachs's lifelong quest. 19 In the final sections of this article, the extent to which this goal may be said to be realizable will be interrogated, as well as the extent to which Sachs's views may be said to be akin to tilting at windmills. This section will be concluded by quoting again from the writings of Sachs, this time his thoughts on the place of the white racial group in a democratic South Africa. Written in 1990, his words are full of optimism, but he seems acutely aware that beneath the surface of his rainbow vision is a struggle in black and white: "Whites are not in the struggle to help the blacks win their rights, they (we) are fighting for their own rights, the rights to be free citizens of a free country, and to enjoy and take pride in the culture of the whole country. They are neither liberators of others, nor can their goal be to end up as a despised and despising protected minority. They seek to be ordinary citizens of an ordinary country, proud to be part of South Africa, proud to be part of Africa, proud to be part of the world. Only in certain monastic orders is self-flagellation the means to achieve liberation. For the rest of humankind, there is no successful struggle without a sense of pride and self-affirmation." 20
Fifteen years into democracy, there are worrying signs that many South Africans of all colours of the rainbow both feel despised by, and in their turn despise others of, this or that particular group of their fellow citizens. It may be argued, perhaps, that the fault lines of race, gender, political affiliation, economic class and sexual orientation, to name but a few, are posing an ever-increasing threat to South Africa's rainbow.

SLAYING THE DRAGONS OF RACISM, SEXISM, POVERTY AND XENOPHOBIA - A HOPELESS QUEST?

The ideal world of tolerance and respect for the "dignity of difference" posited by Sachs in his writings and judgments, may be counterpoised with the real world which is South Africa today. 21 This is a world in which scores of "foreigners" were brutally murdered in a series of xenophobic attacks perpetrated throughout the country during 2008. It is a world in which the levels of inequality between rich and poor are amongst the highest in the world. 22 It is a world in which, in scenes reminiscent of the struggle against apartheid, the poor are increasingly resorting to violent protest against "poor service delivery". 23 It is a world in which the levels of violence perpetrated against women by men are truly appalling. 24 Finally, it may be argued, it is a world in which issues of race and racism cast a deep shadow over almost every public debate, subjecting South Africans to a constant barrage of anger and recrimination.
18 Minister of Home Affairs v Fourie supra par 60. Further on in his judgment, in terms which may be said to be distinctly reminiscent of the thinking of John Stuart Mill and Ronald Dworkin, Sachs J drives home the basic point made in the section quoted, when he states as follows: "The hallmark of an open and democratic society is its capacity to accommodate and manage difference of intensely-held world views and lifestyles in a reasonable and fair manner. The objective of the Constitution is to allow different concepts about the nature of human existence to inhabit the same public realm, and to do so in a manner that is not mutually destructive and that at the same time enables government to function in a way that shows equal concern and respect for all." See Minister of Home Affairs v Fourie supra par 95.
19 Note that it is not only in the judgment quoted that Sachs emphasises the need to be tolerant of difference, particularly in the South African context. It is a central theme in many of his judgments. Eg, in the case of Prince v President of the Law Society, Cape of Good Hope 2002 2 SA 794 (CC) par 170, he states as follows: "Given our dictatorial past in which those in power sought incessantly to command the behaviour, beliefs and taste of all in society, it is no accident that the right to be different has emerged as one of the most treasured aspects of our new constitutional order." In the case of Christian Education South Africa v Minister of Education supra par 23, he states "that if society is to be open and democratic in the fullest sense it needs to be tolerant and accepting of cultural pluralism". He also speaks of the constitutional value of "acknowledging diversity and pluralism in our society" and goes on to mention the right of people "to be who they are without being forced to subordinate themselves to the cultural and religious norms of others", as well as "the importance of individuals and communities being able to enjoy what has been called the 'right to be different'". (Christian Education South Africa v Minister of Education supra par 24.) In the case of S v Lawrence, S v Nagel, S v Solberg, he points out that: "What comes through as an innocuous part of daily living to one person who happens to inhabit a particular intellectual and spiritual universe, might be communicated as oppressive and exclusionary to another who lives in a different realm of belief. What may be so trifling in the eyes of members of the majority or dominant section of the population as to be invisible, may assume quite large proportions and be eminently real, hurtful and oppressive to those upon whom it impacts. This will especially be the case when what is apparently harmless is experienced by members of the affected group as symptomatic of a wide and pervasive pattern of marginalisation and disadvantage" (S v Lawrence, S v Nagel, S v Solberg 1997 4 SA 1176 par 161).
20 Sachs in De Kok and Press (eds) (1990) 27. At around the same time, in commenting on the fact that, in his opinion, the ANC had been successful in resolving cultural tensions between its members, Sachs stated as follows: "This must be one of the greatest cultural achievements of the ANC, that it has made South Africans of the most diverse origins feel comfortable in its ranks. To say this is not to deny that cultural tensions and dilemmas automatically cease once one joins the organisation: on the contrary, we bring in with us all our complexes and ways of seeing the world, our jealousies and preconceptions. What matters, however, is that we have created a context of struggle, of goals and comradeship within which these tensions can be dealt with" (Sachs in De Kok and Press (eds) (1990) 22-23). This may have applied to those united in struggle, but what is left moot is the manner in which cultural tensions between those inside and those outside the ANC and its traditions were to be resolved during the post-apartheid period, once the ANC switched from being a liberation movement involved in a revolutionary struggle, to a political party governing the country as a whole.
21 The term "dignity of difference" has been borrowed from Jonathan Sacks The Dignity of Difference: How to Avoid the Clash of Civilizations (2003).
22 According to a recent report in a respected daily newspaper: "South Africa has overtaken Brazil as the country with the widest gap between rich and poor, according to figures put together by a leading South African academic. Haroon Bhorat, an economics professor at UCT, told a briefing at Parliament on Friday that South Africa was now 'the most unequal society in the world' with a significant increase in income inequality … Bhorat said South Africa's Gini coefficient index - which shows the level of income inequality - stood at 0.679" (28 September 2009 The Mercury 13).
23 As long ago as 2004, activists and scholars such as Ashwin Desai and Richard Pithouse were tracing growing levels of resistance within poor communities in South Africa in opposition to perceptions of increasing impoverishment and marginalisation. They submitted that "the rebellions that are breaking out around the country with increasing frequency are almost always fuelled by the exclusion of poor communities from services that they already have and not the failure of the government to 'deliver' fast enough" and argued that "the rich are getting richer and the poor are getting poorer, amidst a raging orgy of dispossession and enrichment by primitive accumulation" (Desai and Pithouse "Sanction All Revolts: A Reply to Rebecca Pointer" 2004 39 Journal of Asian and African Studies 295 297).
24 Eg, according to "AfroAIDSinfo", a web-based information portal developed by the South African Medical Research Council: "Sexual violence against women and girls is a problem of epidemic proportions in South Africa, with child rape as one of its particularly disturbing features." According to the same source:
It is on this last-mentioned issue that the focus will now fall, since it provides a stark contrast to Sachs's vision of a rainbow nation. The extent to which racial polarisation and animosity still exist in South Africa, a decade and a half after the end of apartheid, is well illustrated by a series of debates which erupted on the issue of an alleged attempt in May 2008 by Judge John Hlophe, the Judge President of the Cape High Court, to influence certain Justices of the Constitutional Court improperly in a number of matters which involved Mr Jacob Zuma. At that time Mr Jacob Zuma was in the running for the position of President of the Republic of South Africa, and he was subsequently elected to that position. The Justices of the Constitutional Court laid a complaint against Judge Hlophe with the Judicial Services Commission and he laid a counter-complaint against them for allegedly violating his rights by the premature publication of the complaint against him. In August 2009, after a period of protracted legal wrangling, the Commission stated that it did not intend to continue with the investigation of the complaint against Judge Hlophe. A retired white judge of the Constitutional Court, Judge Johann Kriegler, on behalf of an organisation by the name of Freedom Under Law (FUL), then announced that this organisation would seek judicial review of the decision taken by the Judicial Services Commission. This decision by Freedom Under Law (as with many of the decisions taken during this drawn out saga) unleashed a storm of protest, much of it with a strong racial undertone. For example, in an article in the Business Day on 9 September 2009, Sipho Seepe, a higher education and strategy consultant, stated as follows: "From time to time some among us remind us of our place in the sun. They crack a whip to ensure that we are in line. They see it as their God-given duty to determine not only what we should talk about but also how we should conduct ourselves … This group would want us to believe that we are hung up and obsessed with the issue of racism.
Since racism is an abstract subject to them, their reaction is to denigrate, to caricature as lunatics those who dare raise the subject. Labels such as 'racial nationalists' and 'black chauvinists' are hurled at those who decry racism in our national life. The idea is to intimidate them into silence. And like slaves in the plantation we fall into line … The tendency to impugn the integrity of those who hold views different from our own is a variation of this master-slave mentality. The masters posit themselves as custodians of wisdom. Judge Johann Kriegler joins a number of those who seem to suffer from this arrogance." 25 Racial polarization was equally apparent in debates within the mainly white Afrikaans press. For example, in an article on 11 September 2009 in the Afrikaans newspaper Beeld, struggle theologian Nico Smith placed the worrying racial dimension of the debates surrounding the judiciary in South Africa into sharp focus: "These days almost no conversation between white South Africans takes place without a discussion of the dreadful situation in which the country finds itself. And always it is the black government and all black people that are blamed … White South Africans will have to consider their presence in Africa in light of what happened to the white French colonists in Algeria … If they carry on with their opposition to black South Africans and the black government, the day might arrive when the black population simply says: 'Enough is enough. We are tired of constantly being criticized and humiliated by white people'… Should this lead to mass violence against whites, they would be like rats in a trap. They have no motherland to send ships to rescue them. 
They would have to endure a massacre … We have got no choice but to become loyal South Africans and to forget about our minority rights, our language and culture, our demands for an end to affirmative action and the return of the death penalty … If we go on as we are, ten to one we shall move closer to the abyss into which we could plunge … It is too much to hope that we shall be pulled back from the edge of the abyss for a second time. White South Africans must take action before they are pushed over the edge of the abyss in the same manner as the white colonists in Algeria …" 26 The newspaper's editor, Tim du Plessis, responded inter alia as follows to the views of Smith quoted above: "[T]he great settlement of '94 and the Constitution of '96 do not provide for a regime in which white people travel second class, look at the ground and remain silent. Nothing is said about 'revenge' for apartheid in the sense of punishment. This is also not what the ANC proclaims in its writings or in its speeches, although its body language and sub-text do indicate fairly strongly that white people should rather be thankful and remain quiet … The remonstrations of [Nico] Smith in particular will inflame emotions. Perhaps it is a good thing that we speak straightforwardly about these issues for once." Reflecting the worrisome deterioration of race relations within South Africa during this period, an issue of the respected South African weekly newspaper the Mail & Guardian was devoted almost entirely to "The Race Issue". 28 One of the contributors to this issue, Andile Mngxitama, delivered, inter alia, the following sobering indictment of race relations in the post-apartheid period: "A black grammar of suffering ends dialogue and demands justice. Such grammar locates the creation of blackness at the vortex of three dispossessions: land, labour and the African sense of being. These dispossessions created white wealth and black poverty. 
Post-1994 sustains this anomaly … Racism is going nowhere as long as the structures of white supremacy remain intact, covered in the language of democracy and nonracialism … The end of racism depends on what blacks do and has little to do with whites. Right now blacks lack a grammar of black suffering and that's the problem." 29 An article by Njabulo S Ndebele in the same issue of the Mail & Guardian, struck a more conciliatory tone. Entitled "Of Pretence and Protest", the subtitle of the article provides one with a feel for the essence of the debate, indicating clearly the writer's belief that South Africa today is far from a rainbow nation characterised by tolerance and respect for difference, as envisaged in the writings and judgments of Sachs. 30 The subtitle reads: "The collective anguish of a nation trying to find the way past race and into leadership." Ndebele begins with the following quote from a previous Mail & Guardian article by David Smith, in which Smith speaks of the pretences of both white and black South Africans: "The whites are pretending it didn't happen; the blacks are pretending to forgive." 31 Ndebele views these pretences as coping mechanisms adopted by both white and black South Africans to get by in present day South Africa, and calls for leadership which will "place the shared anguish of coping through pretence within the realm of responsibility" and "use it as a basis for a sensitive attempt to create ever-expanding circles of social solidarity across the great barriers of race, ethnicity, gender and class without fudging their impact." 32 Ndebele further calls for a commitment "to finding an appropriate political instrument that will set a foundation of trust for South Africans to recover their shared idealism" and points out that "[t]his demands that we reconnect with the founding compromises of the negotiated settlement that led to 1994." 
33 He concludes his article by calling on South Africans to "recommit to diversity in solidarity, collaboration, trust, accountability and civility, all of which have a binding effect that should allow us to be aware of barriers that could be permissive or inhibitive, but to learn to think and feel beyond them and across". It is interesting to note that Ndebele in particular points to the need for some sort of reconciliation to take the nation beyond its present state of guilt, fear, anger and recrimination. He does not spell out in sufficient detail, however, precisely how South Africans are to find their way past their present collective anguish, concealed behind the pretence of normality. In the concluding section of this article, it is suggested that the approach adopted by Sachs towards issues of dignity, identity and belonging in post-apartheid South Africa may, if properly understood, provide the sort of foundation which will allow South Africans to begin to move beyond the present impasse. In other words, it is submitted that the question which was posed at the outset of this article, may be answered by suggesting that Sachs is more than a Don Quixote tilting at windmills, and that his ideas are in fact a real source of hope. That said, it seems clear that the quest for a rainbow nation is far from over, and will have to be taken forward by equally courageous thinkers and jurists in the future.

CONCLUSION - THE QUEST CONTINUES

Sachs seems to embody, in both his life and in his life's work, a reconciliation between worlds which, more often than not, seemed set to collide rather than synthesise. He has moved, successfully it may be argued, between noble dream and harsh reality; between committed struggle hero and impartial judge; between an absolute commitment to the solidarity of struggle and an equally absolute commitment to individual autonomy and the dignity of difference; between Critical Legal Studies and Liberal Democracy; between Marx and Mill.
His dignity jurisprudence, in particular, has made a real difference. South Africans may not yet be ready to embrace his vision, and it may only be a starting point, but he has set the tone. Finally, it is submitted that the work of Sachs points to a potential synthesis between two great philosophical strands of thought, the one originating in Africa and the other in Western Europe, each having its own profound implications for the dignity and wellbeing of individuals and communities. The one strand is the African concept of ubuntu, linking the individual to the community by focusing on deep bonds of common humanity. 35 The other strand is the Western liberal conception of the autonomous individual, which holds sacred each particular hope and dream, refusing to allow the particular to be sacrificed completely to the general. 36
35 It would seem that the concept of ubuntu is, at present, contested terrain. During recent years it has been subjected to a variety of formulations. There are some who maintain that attempts to define the concept too closely are, in fact, harmful to the concept itself. At the risk of greatly oversimplifying a complex issue, it may be worth noting the following extract from the well known Constitutional Court case of S v Makwanyane, which resulted in the outlawing of the death penalty in South Africa, and in which Mokgoro J stated as follows: "Generally, ubuntu translates as humaneness. In its most fundamental sense it translates as personhood and morality. Metaphorically, it expresses itself in umuntu ngumuntu ngabantu [a person is a person because of other people], describing the significance of group solidarity on survival issues so central to the survival of communities. While it envelops the key values of group solidarity, compassion, respect, human dignity, conformity to basic norms and collective unity, in its fundamental sense it denotes humanity and morality" (see S v Makwanyane 1995 3 SA 391 (CC) 308).
Reconciliation between these two strands of thought may not be easy. Africans have good reason to be suspicious of Western ideas. In the same way that colonialism raped the African continent and sought to rob Africans of their identity, there is deep concern among certain African intellectuals that the concept of ubuntu will be appropriated, co-opted and eventually strangled by the totalising discourse of Western philosophy. Viewed from this perspective, it is not difficult to understand the ideological gulf which could develop between an evolving African philosophy based upon the concept of ubuntu on the one hand, and the notion of constitutional democracy, viewed in the context of its Western liberal roots, on the other. It is at this point, perhaps, that the life and work of Sachs may serve to put us on the path to true reconciliation and harmony.
36 The philosopher Grayling, for example, argues against the notion that the interests of an individual may legitimately be sacrificed for the good of the community, without the consent of the individual concerned. Citing Enlightenment thinkers as diverse as David Hume, Adam Smith, Edmund Burke and William Hazlitt, he points out that "they all refused to allow the particular to be so subordinated to the general that it vanished from view" and explains further as follows: "By this I mean that they saw, in true Enlightenment spirit, that the good life is an individual thing, even though it is lived in community; for the community cannot itself be good - cannot constitute the Good Society - unless each of its members is living a life he or she finds satisfying and flourishing both from the individual's own point of view and from the point of view of relationships with others. The general good is thus a function of many individual good lives; and these latter cannot be sacrificed without reducing the good of all.
And arguably, if the interests, the hopes -and certainly if the life -of a single individual is sacrificed unfairly or against the individual's will, the general good is diminished thereby; not only by the loss of what is sacrificed, but by the fact that it happened at all" (Grayling The Heart of Things -Applying Philosophy to the 21 st Century (2006) 159.) Furthermore, John Stuart Mill, in his well-known Essay on Liberty of 1859, provides the following classic formulation of the general principle: "[T]he sole end for which mankind are warranted, individually or collectively in interfering with the liberty of action of any of their number, is self-protection … [T]he only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant" (Mill On Liberty (1859) Chapter 1 par 9).
Contribution of radiant component to thermal conductivity of the medium

Radiation-conductive heat transfer in a semi-transparent medium has been studied. Two methods for measuring the components of thermal conductivity are described. The thermal resistance of a remote reflecting surface, the thickness of the radiation-conductive relaxation layer, the penetration depth of thermal radiation, and the radiant and conductive components of the thermal conductivity of foamed polyethylene were measured using an aluminum-foil radiation screen. The dependences of the total thermal conductivity of plain and aluminum-metalized polyester bulk non-woven canvases on the thickness to which they are compressed were measured. The radiant and conductive components of the thermal conductivity of the non-woven material are calculated. It is shown how a metal nanocoating of the fibers reduces the radiant thermal conductivity of the material.

Introduction

Undeservedly little attention is given to the role of the medium in radiation heat transfer compared with the role of the surface. Often radiation heat transfer in the medium is not separated from conductive heat transfer at all. In the fundamental domestic work on thermophysics [1], a description of the phenomenon of thermal radiation diffusion and a determination of the radiant thermal conductivity of the medium are found only in the last chapter, and even there the conditions for the applicability of such an approximation are not considered. The radiation permeability of the medium is unjustifiably neglected. There is a widespread but erroneous opinion that the best thermal insulator is a vacuum, whereas a vacuum should rather be regarded as a medium with an infinitely large radiant thermal conductivity. In the world scientific literature there are a number of analytical works on radiation-conductive heat transfer [2,3], but simple and clear solutions have not been obtained there either.
Currently, most of the work on this topic is related to attempts to solve the emerging problems numerically [4]. But in this case the laws of propagation of diffuse radiation are not applied. This is surprising, because there is a whole class of technically important materials for which this approximation works perfectly. This work is devoted to practical measurements of the heat transfer components in fibrous and foamed lightweight insulation materials.

Materials and methods

In one of our recent papers [5] it was shown that far from the borders of the medium, at a large optical thickness, when the distance x to the border is much greater than the penetration depth of thermal radiation a,

x >> a, (1)

the process of heat propagation is, in accordance with Einstein's diffusion law, linear when the temperature gradient is small enough:

Φ_rad = −L∇T. (2)

The proportionality coefficient L [1-3] is called the radiant thermal conductivity of the medium,

L = (16/3)σT³a. (3)

The main parameter of the medium is the penetration depth of thermal radiation a. The studied materials have a random inhomogeneous structure that not only absorbs but also effectively scatters radiation. Because the size of the non-homogeneities is much larger than the radiation wavelength, the penetration depth is approximately the same for all wavelengths. Under ambient conditions the temperature of the sample T is usually significantly higher than the temperature difference applied to it. Because the typical penetration depth for the materials under study is a ~ 1 mm, condition (1) is usually fulfilled exactly. It should be noted that the penetration depth of radiation plays the same role for radiation heat transfer as the free path length of molecules does for conductive heat transfer: these two types of transfer are described by similar equations. In both cases a random inhomogeneous medium can be characterized by an average value of thermal conductivity when the sample size is much larger than the size of the non-homogeneities.
In such a medium, far from the borders, the Fourier equation is valid in a generalized form for radiation-conductive heat transfer,

Φ = −λ∇T. (4)

With a stationary parallel heat flow in the medium far from the borders, a temperature field with a uniform temperature gradient ∇T is established. The total thermal conductivity of the medium λ consists of the radiant L and conductive D components,

λ = L + D. (5)

In [5] the problem of stationary radiation-conductive heat transfer in a gray medium near a flat opaque surface was solved analytically. For a constant perpendicular surface heat flux density Φ, the dependence of the medium temperature on the distance x to the opaque border was obtained; it is characterized by b, the thickness of the radiation-conductive relaxation layer, and τ, the near-surface temperature jump.

For the measurements, a layer of 0.75 mm thick foamed polyethylene was taken, from which samples were cut according to the dimensions (85×85 mm²) of the working part of the unit [6]. The weight of each sample is 177 mg and the density is 32.6 kg/m³; the share of the volume occupied by polyethylene is 3.3%. The unit used for the measurements is assembled in such a way that convective heat transfer is almost completely excluded: the horizontal heater is placed directly above the cooler, and a stack of samples, controlled in thickness, is placed in the gap between them. To direct all the heat generated by the electric current in the heater through the samples to the cooler, a heat shield is placed behind the heater. During a measurement we set and stabilize the screen temperature and find the voltage supplied to the heater at which the temperature of the heater does not change and is equal to the screen temperature. In this case the heat flow from the heater to the screen is small for two reasons: the good thermal insulation between them and their small temperature difference. The heat flow from the heater to the cooler is almost constant.
The flow density is almost the same throughout the entire working volume of the unit. The duration of each measurement is determined mainly by the relaxation time of the sample temperature; a separate measurement usually takes from 10 minutes to half an hour. With sufficient measurement duration and care, an accuracy better than 1% can be achieved.

Results

Figure 1 shows two dependences of the thermal resistance R on the thickness of the medium d (the number of layers of foamed polyethylene); they differ in that in the second case an aluminum foil screen is placed in the middle of the stack. From the first dependence the total thermal conductivity of foamed polyethylene was calculated, λ = 0.0483 W/(m·K). The data of the second dependence were converted to the theoretically predicted form and approximated (figure 1(a)); the obtained values allowed us to calculate the radiant thermal conductivity L = 0.0101 W/(m·K) and the conductive thermal conductivity D = 0.0382 W/(m·K) for foamed polyethylene, as well as γ = 1.124 and the penetration depth of thermal radiation a = 1.09 mm. The self-consistency of our measurements was verified by formula (3) at the average temperature T = 312 K, which gives L′ = 0.0100 W/(m·K). The relatively low (~4%) accuracy of measuring the thermal conductivity components is due to the small depth of radiation penetration, only 1.5 times the thickness of the material layer. However, we were able to observe the thermal resistance of the remote surface and the thickness of the radiation-conductive relaxation layer directly, and to estimate the contribution of each thermal conductivity component. The theoretical predictions and the results of the practical measurements showed good agreement. The conductive thermal conductivity of bulk polyethylene, 0.4 W/(m·K), is many times greater than the thermal conductivity of air, which at the measurement temperature is D_A = 0.026 W/(m·K).
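The self-consistency check quoted above can be reproduced directly. A minimal sketch, using the values stated in the text (T = 312 K, a = 1.09 mm, λ = 0.0483 W/(m·K)) and taking the diffusion-approximation form L = (16/3)σT³a for formula (3); the Stefan-Boltzmann constant is supplied here, not taken from the paper:

```python
# Sketch: evaluate the radiant component via L = (16/3) * sigma * T**3 * a
# and split off the conductive part using lambda = L + D (formula (5)).
SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W/(m^2 K^4)
T = 312.0              # average sample temperature, K
a = 1.09e-3            # penetration depth of thermal radiation, m
lam_total = 0.0483     # measured total thermal conductivity, W/(m K)

L = 16.0 / 3.0 * SIGMA * T**3 * a   # radiant component
D = lam_total - L                   # conductive component

print(round(L, 4), round(D, 4))    # close to the reported 0.0100 and 0.0382
```

The agreement between L computed this way and the independently fitted value of 0.0101 W/(m·K) is exactly the self-consistency described in the text.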
Polyethylene occupies about 1/30 of the volume, so we can assume that the contribution to the conductive thermal conductivity of the material due to the movement of heat through the polyethylene is roughly 30 times less than the thermal conductivity of bulk polyethylene. Accordingly, the conductive component of the thermal conductivity of the material, D = 0.038 W/(m·K), is composed of one part due to the movement of heat through air, D_A = 0.026 W/(m·K), and another due to the movement through the polyethylene, D_S = 0.012 W/(m·K):

D = D_A + D_S. (16)

In the second part of the work we studied how the contribution of each of the thermal conductivity components changes depending on the density of the medium. For the measurements we took a bulk non-woven canvas with a surface density of 70 g/m², produced under the brand "hollowfiber" (its close analog, produced under the brand "tinsulate", is better known worldwide). The canvas consists of hollow thermally molded polyester (polyethylene terephthalate) fibers with a thickness near 30 microns; this is the thinnest version of the currently produced canvases. For the second series of measurements, aluminum with a front-layer thickness near 100 nm was applied to the canvas on both sides by vacuum thermal evaporation. Our estimates have shown that the depth of penetration of thermal radiation into such a canvas is slightly more than half of its thickness, and the sprayed metal should penetrate the canvas to about the same depth. Therefore, we can consider the aluminum metallization to be applied fairly evenly throughout the thickness of the material. For both material types, the dependence of the total thermal conductivity λ on the thickness d to which the material is compressed was measured (figure 2). The samples are stacked in two layers with a total initial thickness of 20 mm and a weight of 998 mg. The portion of the volume occupied by polyethylene terephthalate is then equal to 0.5%.
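The sample mass and the 0.5% solid-volume fraction quoted above follow from the surface density and thickness. A sketch of that bookkeeping, assuming the same 85×85 mm² working area as for the polyethylene samples and a typical bulk PET density of 1380 kg/m³ (both assumptions, not stated in this paragraph):

```python
# Sketch: check the quoted mass and solid-volume fraction of the
# "hollowfiber" stack from the surface density given in the text.
area = 0.085 * 0.085        # m^2, assumed sample area
surface_density = 0.070     # kg/m^2 per layer (70 g/m^2)
layers = 2
thickness = 0.020           # m, total initial stack thickness
rho_pet = 1380.0            # kg/m^3, assumed bulk PET density

mass = area * surface_density * layers                # kg, vs 998 mg quoted
bulk_density = surface_density * layers / thickness   # kg/m^3 of the stack
phi = bulk_density / rho_pet                          # solid volume fraction

print(mass * 1e3, round(phi * 100, 2))  # grams, and percent (~0.5%)
```

Both numbers land within a few percent of the values quoted in the text, which supports the assumed geometry.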
As the material expands, the part of the conductive component of its thermal conductivity due to the movement of heat through air stays almost constant, while the part due to the movement of heat through the solid decreases inversely proportionally to the thickness, D_S ∝ 1/d. Under the same conditions, the transparency of the material and the radiant component of its thermal conductivity increase. The radiant component grows, firstly, in direct proportion to the thickness due to the decrease of the optical density of the medium and, secondly, through an additional contribution due to the change of direction of the fibers. It should be noted that at this thickness the portion of volume occupied by solid substance in the "hollowfiber" is the same 3.3% as in foamed polyethylene, and it is not difficult to notice that the contribution of each component to the thermal conductivity of both materials is also approximately the same. The fact that two essentially different measurement methods produce similar results is significant confirmation of the correctness of our ideas. A metal layer of negligible thickness does not increase the conductive thermal conductivity of the material. However, despite the fact that the average distance between fibers (250 microns at d = 10 mm) is quite large compared to the wavelength of the radiation, the radiant thermal conductivity decreased significantly. At a thickness of 7.5 mm it is 0.024 and 0.021 W/(m·K) for the simple and metalized "hollowfiber" respectively, and at 20 mm it is 0.045 and 0.040 W/(m·K). As the material thickness increases, the effect of metallization increases in absolute value, but the relative difference in radiant thermal conductivity decreases: it is 15, 12 and 11% at d = 3, 7.5 and 20 mm.

Conclusion

The behavior of the metalized material under compression is similar to the behavior of radiation transmission by a metal lattice when the wavelength, initially much longer, approaches the size of the cell.
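As a closing cross-check of the metallization figures quoted above (only the d = 7.5 and 20 mm value pairs are listed in the text; the d = 3 mm pair is not reproduced here), the relative reduction of the radiant component can be recomputed directly:

```python
# Sketch: relative reduction of the radiant thermal conductivity due to
# aluminum metallization, from the values quoted for two thicknesses.
L_plain = {7.5: 0.024, 20.0: 0.045}   # W/(m K), plain "hollowfiber"
L_metal = {7.5: 0.021, 20.0: 0.040}   # W/(m K), metalized canvas

reduction = {d: (L_plain[d] - L_metal[d]) / L_plain[d] * 100.0
             for d in L_plain}
for d, r in sorted(reduction.items()):
    print(f"d = {d} mm: {r:.1f}% reduction")
```

The computed reductions reproduce the quoted ~12% and ~11% at 7.5 and 20 mm, and show the trend of a shrinking relative effect at larger thickness.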
When the distance between the fibers approaches the wavelength of the radiation, the radiant thermal conductivity of the metalized material should decrease sharply. Metalized bulk non-woven canvas is a material of the future. At the same distance between the fibers and, respectively, at approximately the same radiant thermal conductivity, the density of the material is proportional to the square of the fiber thickness. Metallization of materials with very thin fibers is thus a way to obtain ultra-light insulation materials. In mass production such materials are still too expensive, but in high-tech products, in cryogenic or space technology, it is time to use them. Thus, two methods showing qualitative and quantitative agreement are presented, and practical measurements of the thermal conductivity components of foamed and fibrous insulation materials are carried out. It is found how each of these components depends on the density of the medium. The contribution of the radiation component to heat transfer, and ways to reduce it, are described. A method for constructing ultra-light insulation materials is shown. Measurements of radiation-conductive heat transfer are unique in their information content and are extremely promising for the study and understanding of the properties of materials with a complex structure that is close to chaotic.
Microarray analysis identifies candidate genes for key roles in coral development Background Anthozoan cnidarians are amongst the simplest animals at the tissue level of organization, but are surprisingly complex and vertebrate-like in terms of gene repertoire. As major components of tropical reef ecosystems, the stony corals are anthozoans of particular ecological significance. To better understand the molecular bases of both cnidarian development in general and coral-specific processes such as skeletogenesis and symbiont acquisition, microarray analysis was carried out through the period of early development – when skeletogenesis is initiated, and symbionts are first acquired. Results Of 5081 unique peptide coding genes, 1084 were differentially expressed (P ≤ 0.05) in comparisons between four different stages of coral development, spanning key developmental transitions. Genes of likely relevance to the processes of settlement, metamorphosis, calcification and interaction with symbionts were characterised further and their spatial expression patterns investigated using whole-mount in situ hybridization. Conclusion This study is the first large-scale investigation of developmental gene expression for any cnidarian, and has provided candidate genes for key roles in many aspects of coral biology, including calcification, metamorphosis and symbiont uptake. One surprising finding is that some of these genes have clear counterparts in higher animals but are not present in the closely-related sea anemone Nematostella. Secondly, coral-specific processes (i.e. traits which distinguish corals from their close relatives) may be analogous to similar processes in distantly related organisms. This first large-scale application of microarray analysis demonstrates the potential of this approach for investigating many aspects of coral biology, including the effects of stress and disease. 
Background

Cnidarians are the simplest animals at the tissue level of organization, and are of particular importance in terms of understanding the evolution of metazoan genomes and developmental mechanisms. Members of the basal cnidarian Class Anthozoa, which includes the sea anemone Nematostella and the coral Acropora, have proved to be surprisingly complex and vertebrate-like in terms of gene repertoire [1][2][3], and are therefore of particular interest. Scleractinian corals are also of fundamental ecological significance in tropical and sub-tropical shallow marine environments as the most important components of coral reefs. Surprisingly, both the general molecular principles of cnidarian development and many aspects of the functional biology of corals are only poorly understood. Whole genome sequences are now available for both the textbook cnidarian Hydra magnipapillata and the sea anemone Nematostella vectensis. However, corals are distinguished from Nematostella and other cnidarians by the presence of an extensive skeleton composed of calcium carbonate in the form of aragonite. The ability to carry out calcification on a reef-building scale is enabled by the obligate symbiosis between scleractinians and photosynthetic dinoflagellates in the genus Symbiodinium. Expressed Sequence Tag (EST) projects carried out on Acropora millepora and Nematostella vectensis have provided insights into the evolution of animal genomes [2,3]. The latter publication, based on ca 5800 unigenes from the coral Acropora and 10,500 unigenes from the sea anemone Nematostella, revealed the surprisingly rich genetic repertoire of these morphologically simple animals. The genomes of anthozoan cnidarians encode not only homologs of numerous genes known from higher animals (including many that had been assumed to be 'vertebrate-specific'), but also a significant number of genes not known from any other animals ('non-metazoan' genes; [3]).
This picture of genetic complexity has been augmented by the recently completed whole genome sequence (WGS) of Nematostella vectensis [1], for which approximately 165,000 ESTs are now available. Similar resources exist for Hydra magnipapillata [4,5], although the much larger genome size of this organism has consequences for the completeness of the assembly. Both of these other cnidarians not only lack a calcified skeleton, but also do not enter symbioses. Entry into a symbiosis can have profound effects on gene expression patterns, with changes to immune function and to many metabolic functions including CO2 cycling, nutrient cycling, metabolite transfer and reactive oxygen quenching [6,7]. The phylogenetic position of Nematostella makes this a particularly useful comparator because both Nematostella and Acropora are classified into the anthozoan subclass Hexacorallia (Zoantharia). Information and resources relevant to microarray studies on corals have recently been summarised [8]. Few precedents exist for the approach used here; the most directly relevant previous study is an array experiment comparing symbiotic and aposymbiotic sea anemones [9]. To gain insights into the molecular bases of coral development, including nematocyst formation, metamorphosis, and the processes of symbiont uptake and calcification, developmental microarray experiments were carried out using 12,000-spot cDNA arrays representing 5081 Acropora millepora unigenes which, based on the EST sequence, are predicted to give rise to a bona fide protein. Four stages of coral development were compared, spanning the major transitions of gastrulation and metamorphosis (Figure 1).
These comparisons, which constitute the most comprehensive analysis of the development of any cnidarian to date, provide insights into the overall dynamics of the transcriptome during development as well as candidate genes for roles in metamorphosis, calcification and symbiont uptake. Spatial expression patterns were determined for many of the candidate genes identified in the array experiments. Comparisons with Nematostella, Hydra and other animals imply that nominally coral-specific processes are executed by both conserved and novel (taxon-specific) genes, and suggest some intriguing parallels with other systems.

Figure 1 Scanning electron micrographs of developmental stages in the Acropora millepora lifecycle. At spawning, egg-sperm bundles are released by the colony and float to the surface, where they break up into individual eggs and sperm. Upon release and fertilization of the egg, cell division first produces a spherical bundle of cells which then flattens to form a cellular bilayer called the prawnchip (PC). Following gastrulation the spherical gastrula elongates to a pear shape as cilia develop. Further elongation produces a motile presettlement planula larva (PL), possessing a highly differentiated endoderm and ectoderm and an oral pore. Upon receipt of an appropriate cue, the larva settles and metamorphoses, forming the primary polyp (PO). Following calcification, symbiont uptake, and growth and branching, the adult colony is formed (A). The stages labelled with yellow letters represent those from which RNA was extracted, labelled and hybridized to the slides. Stages circled in red are those from which ESTs were spotted onto the slides.
The identification and composition of synexpression clusters

Of the 5081 unigenes giving rise to predicted peptides that are represented on the arrays, a total of 1084 unigenes (2462 spots) were found to be up- or down-regulated (P ≤ 0.05) between any two consecutive stages. The microarray results were validated by virtual northern blots. The results for eight arbitrarily chosen clones are shown in Additional File 1; in each case the observed expression pattern corresponds with the microarray results. Cluster analysis identified six major synexpression clusters (Figure 2A) which map onto the major stages of coral development (Figure 2B). Three of these clusters (CII, CIII and CIV) are of most interest from the perspective of coral-specific biology. Candidates for roles in nematocyst development, receipt of settlement cues and the implementation of metamorphosis may be represented in cluster II (genes up-regulated in planula) or cluster III (genes up-regulated in planula and primary polyp). Similarly, genes involved in the early stages of calcification are predicted to occur in cluster IV (genes up-regulated in primary polyp) and cluster III (genes up-regulated in planula and primary polyp). These same two clusters (CIII and CIV) may also provide candidates for roles in the establishment of symbiosis. Two other synexpression clusters (CI and CVI) are of more general developmental interest. The largest, cluster I (genes down-regulated after embryogenesis), consists of 567 unigenes whose transcript levels decreased after gastrulation and remained low (Figure 2A). Cluster V (genes up-regulated in adult) consists of only 43 unigenes. The small size of this cluster may be due to the absence of adult material amongst the cDNAs spotted on the array, and therefore presumably reflects only a small proportion of the total number of genes that are up-regulated in adult coral. Functional breakdown data for the genes in these clusters are summarised in Table 1.
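The statistical pipeline behind the P ≤ 0.05 calls is not spelled out here. Purely as an illustration of how a gene can be flagged as differentially expressed between two consecutive stages, the following is a minimal exact permutation test on hypothetical per-stage replicate intensities (the gene values and replicate counts are invented, not the paper's data):

```python
from itertools import combinations

def perm_test(x, y):
    """Exact two-sample permutation test on the difference of means.
    Returns the two-sided p-value over all relabellings of the pooled data."""
    pooled = x + y
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    hits = total = 0
    for idx in combinations(range(len(pooled)), len(x)):
        gx = [pooled[i] for i in idx]
        gy = [pooled[i] for i in range(len(pooled)) if i not in idx]
        diff = abs(sum(gx) / len(gx) - sum(gy) / len(gy))
        hits += diff >= observed - 1e-12   # count relabellings at least as extreme
        total += 1
    return hits / total

# Hypothetical log2 spot intensities for one gene, four replicates per stage
prawnchip = [8.1, 8.3, 8.2, 8.4]
planula = [10.9, 11.2, 11.0, 11.1]
p = perm_test(prawnchip, planula)
print(p, p <= 0.05)   # this gene would be called differentially expressed
```

With these cleanly separated toy values, only the original labelling and its mirror reach the observed difference, giving p = 2/70 ≈ 0.029, below the 0.05 threshold.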
Overall, approximately 15% of the differentially expressed genes are coral-specific (no match to database sequences at E < 1 × 10⁻⁵), but the relative proportion of these nominally taxon-specific genes varies widely between the synexpression clusters. Clusters II (genes up-regulated in planula) and IV (genes up-regulated in primary polyp) contained the highest proportions (23.5% and 26%; 26 and 20 unigenes respectively) of unique genes, but these accounted for only 12% of cluster I (Figure 2C). Conversely, cluster VI contained the highest proportion (59%) of 'core' genes, which are defined as genes represented in animals and other kingdoms (Figure 2C). The proportion of Acropora unigenes matching only to other cnidarians was relatively constant across clusters, cluster VI (7%; 14 unigenes) being somewhat below the 9-11.5% range of the other clusters (data not shown). Approximately 10% of cluster I (genes down-regulated after embryogenesis) consists of genes in functional category AIII, genes involved in cell replication [10], probably reflecting the extent to which cell proliferation dominates early embryogenesis. 29.1% of cluster VI (genes up-regulated after embryogenesis) were classified into functional category AV (protein synthesis cofactors, tRNA synthetases and ribosomal proteins), whereas all other clusters contained very few genes in this category. 27.2% of cluster III (genes up-regulated in planula and primary polyp) were classified into AVI (intermediary synthesis and catabolism enzymes); this is significantly more than in any other cluster. Planula larvae are primarily dependent upon stored lipid, whereas the energy requirements of adult corals are often largely met by photosynthetic products exported from their dinoflagellate symbionts. These physiological changes are reflected by shifts in the coral transcriptome. For example, lipases are highly represented amongst the planula ESTs, but strongly down-regulated thereafter.
Also of note are dramatic differences in representation of genes in category BII (intracellular signalling) between cluster I (10.5%) and cluster II (0.9%), and of genes in category BIII (extracellular matrix and cell adhesion) between cluster I (0.9%) and cluster II (14.4%). These shifts, and the sharp spike in expression of ECM and cell adhesion genes, are associated with the transition from an undifferentiated proliferative stage and the emergence of differentiated cell types.

Figure 2 Summary of microarray results. (A) Graphical representation of the six expression clusters: yellow corresponds to upregulation and blue to downregulation. Each row corresponds to an EST and each column to a developmental stage as labelled in Figure 1. Clusters I-VI consist of genes with their highest expression in the prawnchip, presettlement, presettlement and postsettlement, post-settlement, adult, and post-gastrulation stages, as diagrammed in (B). Presettlement orientation is oral to the left; postsettlement orientation is oral pointing out of the plane of the page. (C) Pie charts classifying the genes in each cluster into unique genes (blue: unique to Acropora), core genes (purple: matching a database entry in non-Metazoa, Radiata and Bilateria) and other (light yellow: any combination of any two of non-Metazoa, Radiata or Bilateria). Note that whilst 1084 unigenes were differentially expressed, the total number of unigenes in clusters is 1161; this is because 70 unigenes fall into two or more clusters, possibly due to the existence of splice variants for some unigenes. The number of unigenes in each cluster is given in brackets.

Lectins related to sea cucumber CEL-III are strongly expressed during metamorphosis in Acropora

Whilst our understanding of metamorphoses in marine invertebrates is very incomplete, in several cases key molecules implicated in the underlying processes have been identified, and these include lectins [11,12]. Studies of coral settlement and metamorphosis have indicated that the inductive morphogenetic cue is exogenous/environmental and, whilst the exact structure of the metamorphosis-inducing morphogen remains elusive, lipopolysaccharides are prime candidates [13], suggesting that cell surface recognition by coral larvae may be mediated by lectins. Lectins are therefore of particular interest as candidates for roles in settlement and metamorphosis as well as in other developmental processes including the uptake of Symbiodinium (see below). Indeed, a mannose-binding lectin has recently been described from A. millepora which binds both bacteria and Symbiodinium and may therefore have roles in both immunity and symbiosis [14]. A search for genes encoding lectin domains in clusters II, III and IV identified six unigenes, two of which, A036-E7 and A049-E7, have significant overall similarity to a haemolytic lectin from sea cucumber. They lack clear Nematostella (or Hydra) counterparts, but a homologous gene is present in the Caribbean coral, Acropora palmata [15]. The two A. millepora proteins are 82.1% identical to one another (Figure 3A), and 50.4% and 48% identical to Cucumaria echinata CEL-III [16] respectively. These were amongst the most highly represented of the differentially expressed unigenes (A036-E7 was represented by 13 ESTs and A049-E7 by 4) and, based on their expression patterns, they are candidates for roles in metamorphosis. In situ hybridization (Figure 3B, C) revealed that both A036-E7 and A049-E7 are expressed in a subpopulation of ectodermal cells in the oral half of the larva (Figure 3B1-2; C1-2). In the post-settlement primary polyp they are exclusively expressed orally on the side that is exposed to the environment, the other, non-expressing side being against the substratum (Figure 3B3-4; C3-4). C.
echinata CEL-III functions as an oligomer, apparently causing osmotic rupture of cell membranes after attachment to membrane-bound sugars [16,17], and their high sequence similarity suggests similar roles for the two Acropora proteins in cell recognition and lysis for tissue remodelling during metamorphosis. Alternatively, expression on the exposed surface of the polyp is also consistent with a role in self-defence, and could indicate a function in lysis of invading microorganisms by a similar mechanism, as suggested by Kouzuma et al [17].

Other lectins in nematocyst differentiation

Three of the four remaining lectin-domain-containing proteins (A044-C2, A032-H1, and A043-H7) share an unusual structure, as each is predicted by InterProScan [18] to contain an N-terminal signal peptide (for transport to the ER and secretion or organelle targeting), a central collagen domain, and a C-terminal galactose-binding lectin domain (Figure 4). Blast searching showed that all three were most similar to Nematostella proteins, and structural comparisons indicate that these Nematostella and Acropora proteins, although resembling the mini-collagens known from Hydra [19], are thus far known only from anthozoan cnidarians. Canonical mini-collagens [19,20] are components of the walls of cnidarian nematocysts, and are defined by the presence of approximately fourteen Gly-X-Y repeats flanked by proline-rich and Cys-repeat regions. The Acropora molecules described here, together with Nematostella mini-collagen-like proteins, are distinct in also containing lectin domains; there are no Hydra proteins which contain both of these domains.
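Percent-identity figures like the 82.1% quoted for A036-E7 and A049-E7 come from counting matching columns in a pairwise alignment. A minimal sketch of that calculation (the aligned fragments below are invented for illustration, not the real coral sequences):

```python
# Sketch: percent identity between two pre-aligned amino acid sequences.
# Columns where both sequences have a gap are ignored; a gap never matches.
def percent_identity(a, b):
    assert len(a) == len(b), "sequences must be aligned to equal length"
    cols = [(x, y) for x, y in zip(a, b) if not (x == "-" and y == "-")]
    matches = sum(1 for x, y in cols if x == y and x != "-")
    return 100.0 * matches / len(cols)

seq1 = "MKTAYIAK-QRQISFVK"   # hypothetical aligned fragments
seq2 = "MKTAYLAKGQRQLSFVK"
print(round(percent_identity(seq1, seq2), 1))   # → 82.4
```

Real pipelines differ mainly in how they treat gaps and in the substitution matrix used to score "similar" (as opposed to identical) residues, which is how the paper's 90.6% similarity figure arises.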
Both A044-C2 and A032-H1 have uninterrupted mini-collagen repeats, and for these, whole-mount in situ hybridization revealed a common expression pattern: transcripts first appear in scattered ectodermal cells, which are more abundant toward the oral end of the planula, and then become limited to the oral side of the post-settlement polyp (Figure 4A, B). Nematocysts are first apparent in the early planula larva (Additional File 2), and sections of embedded whole-mount in situ preparations reveal expression in presumed cnidoblasts (Additional File 3), but also in other cells without the characteristic cnidoblast morphology. Whether these cells are developmental stages of cnidoblasts, or an entirely different class of cell, remains to be established. However, in the third of these related proteins, A043-H7, the mini-collagen repeat is interrupted, and a completely different expression pattern is observed (see below). Whereas the five proteins discussed above all contain galactose-binding lectin domains, the last of these six differentially expressed proteins (A043-D8) contains a C-type lectin domain. Moreover, whilst a signal peptide is present, A043-D8 does not contain a mini-collagen domain. As in the case of A044-C2 and A032-H1, expression of A043-D8 appears in scattered ectodermal cells as the planula is developing (Figure 4C), although the distribution of these cells appears to differ somewhat from those shown in Figures 4A and 4B. Histological sections fail to reveal any evidence of expression in obvious cnidoblasts.

A potential mediator of symbiont uptake

Acropora species acquire symbionts directly from the environment and, although uptake in the wild has only been observed a few days after settlement [21], larvae of Acropora [22,23] and a number of other coral species [24,25] are competent to take up symbionts. However, the exact time and mode of uptake remain to be established.
Lectin/polysaccharide signalling is used in many systems as a mechanism for symbiotic recognition [26], and has been implicated in the establishment of symbiosis in various marine invertebrates (e.g. [27]). In the octocoral Sinularia lochmodes a lectin is involved in the conversion of Symbiodinium from a motile to the non-motile form required for symbiosis [28,29]. Also, masking cell surface glycoproteins with lectins decreases the rate of Symbiodinium infection of the sea anemone Aiptasia pulchella [30] and enzymatic digestion of cell surface glycans prevents Symbiodinium recognition and the establishment of symbiosis in the coral Fungia scutaria [31]. Although Smith [32] has argued otherwise, these more recent experiments point to a possible role for lectins in symbiont recognition/uptake in corals.

[Figure legend] Sequence comparison and whole mount in situ hybridization of lectin coding genes A036-E7 and A049-E7. Alignment of A036-E7 and A049-E7 amino acid sequences with C. echinata CEL-III reveals that they are 82.1% identical (90.6% similar) to one another and 50.4% (65.1%) and 48% (64%) to CEL-III respectively. Black boxes represent identities and grey shaded boxes similarities. Localisation of A036-E7 (B) and A049-E7 (C) transcripts (dark purple) in presettlement planula larvae (1), metamorphosing larvae (2), and postsettlement polyps viewed from the oral side (3), and in cross section with the mouth pointing upward (4). Expression in the oral ectoderm is consistent with a role in metamorphosis or defence against pathogenic microorganisms.

[Figure legend] Whole mount in situ hybridization of lectin coding genes A044-C2, A032-H1, A043-D8 and A043-H7.

The one differentially regulated coral protein containing a lectin domain and with an expression pattern consistent with a role in symbiont uptake is A043-H7, introduced in the previous section as a mini-collagen-like protein.
Unlike those in the proteins with similar domain architecture (A044-C2 and A032-H1), the mini-collagen domain of A043-H7 is interrupted (which may have structural consequences), and the gene's expression pattern is completely different. The expression pattern of A043-H7 immediately prior to settlement (Figure 4D) is consistent with a role in symbiont uptake since, in contrast to many other cnidarians, the endoderm of the Acropora planula is tightly packed with yolk cells and frequently is hollow only immediately adjacent to the oral pore. As the endoderm is the most common route of cnidarian infection (see Discussion), the endodermal region immediately adjacent to the oral pore (i.e. the zone of A043-H7 expression) is a probable site of symbiont infection in the case of Acropora larvae. Confocal microscopy was recently used to demonstrate the binding of an A. millepora mannose-binding lectin, which was not among our ESTs, to Symbiodinium, but its localization within the coral remains unknown [14].

Conserved and novel genes with roles in calcification

The molecular basis of calcification in corals is not well understood; the process involves the deposition of calcium carbonate in an area defined by an organic matrix [33] and is initiated immediately after settlement and prior to metamorphosis [34]. Initially a flattened plate is laid down, upon which are deposited radiating vertical walls corresponding to the septa which give the polyp its six-fold symmetry. Initial calcification can, and in the case of Acropora millepora does, happen in the absence of Symbiodinium, but the massive calcification of larger colonies is dependent on the photosynthetic symbiont through interacting cycles of respiration, photosynthesis and calcification. Although many animal phyla include calcifying representatives, few components of the calcification machinery appear to be conserved between different lineages.
For example, in the scleractinian Galaxea fascicularis, one of the most prevalent protein components of the calcifying organic matrix is galaxin [35], which appears to be unique to corals. One exception to this heterogeneity is the alpha type carbonic anhydrase family, which has been implicated in CaCO3 deposition from sponges to vertebrates [36]. Most animals have multiple carbonic anhydrases; distinct subfamilies are recognised [37,38], each of which is widely distributed phylogenetically, but in addition some calcifying animals have atypical carbonic anhydrases that may represent lineage-specific adaptations to facilitate CaCO3 deposition. For example, nacrein - a soluble organic matrix protein in the nacreous layer of pearl oysters - contains a carbonic anhydrase domain that is split by a Gly-X-Asn repeat domain [39] which may have a regulatory role [40]. In a directly relevant example, Tambutte et al. [38] have recently demonstrated that active carbonic anhydrase is present in the organic matrix of Tubastrea aurea and plays a direct role in the calcification process. In another recent paper, Moya et al. [41] have cloned, sequenced and immunolocalized a previously undescribed CA from the coral Stylophora pistillata. It is localized in the calicoblast ectoderm, from which it is secreted, and has a CA catalytic function. In terms of understanding the bases of skeleton deposition, carbonic anhydrases are therefore of particular interest. Two carbonic anhydrase genes, C007-E7 and A030-E11 (cluster III), are up-regulated in the planula larva and postsettlement stages, and in situ hybridization shows that the expression of each gene is spatially restricted at those stages of development. C007-E7 is expressed most strongly in a restricted area at the aboral end of the metamorphosing larva and primary polyp (Figure 5A1, A2).
The expression of this gene in a disc at the aboral end is consistent with a role in calcification, as this is the site where the process is initiated [34,[42][43][44]. In the slightly older polyp the expression in the aboral disc decreases to a circumferential ring (Figure 5A3), and still later (Figure 5A4), this ring is maintained, and expression commences in the tentacles. This expression pattern in the basal plate is consistent with involvement of carbonic anhydrase C007-E7 in the onset of calcification, but indicates that this carbonic anhydrase is not involved in the phase of calcification during which the adult structures are formed. The second carbonic anhydrase, A030-E11, was expressed in the oral half of the metamorphosing larva (Figure 5B1) and the entire ectoderm of the primary polyp, except the aboral disc (Figure 5B2) and the oral pore (data not shown). In older polyps this carbonic anhydrase is expressed in the septa, where calcification is occurring to form adult structures (Figure 5B4). Expression analysis reveals that some "unique" coral genes have spatial expression patterns strikingly like that of carbonic anhydrase C007-E7, i.e. consistent with roles in the initiation of calcification. Figures 5C1 and 5D1 show genes with expression at the aboral end of the metamorphosing larva and in the basal plate of the metamorphosing larva, respectively. However, differences are apparent slightly later - C012-D9 expression becomes restricted to an aboral ring, and then appears to be switched off (Figure 5C3, C4). Whilst B036-D5 expression also appears to be down-regulated in the basal plate, transcripts can be visualised in the mesenteries (Figure 5D4) at a stage when C012-D9 transcripts are undetectable. Neither of these genes encodes known domains or could be functionally classified (using BlastP, Phi-Blast and InterPro Scan).
[Figure legend] Whole mount in situ hybridization of two carbonic anhydrases and two genes of unknown function which may be involved in calcification.

However, their expression patterns are consistent with roles in early calcification.

A synexpression cluster of coral-specific genes

As indicated above, the proportion of unique genes was highest in synexpression clusters II ('planula') and IV ('primary polyp'). To investigate their possible roles, in situ expression patterns were determined for many of these coral-specific genes. Many gave specific expression patterns, some of which are consistent with roles in processes such as calcification, as previously discussed. In other cases, although groups of "unknown" genes appear to be expressed in the same cells, it is more difficult to interpret the likely biological significance of the patterns. One example of this phenomenon is provided by three 'planula' cluster unigenes (A044-A9, C008-B2 and C014-E10) with no clear hits to genes in other organisms; the corresponding proteins are each predicted to contain a signal peptide, and C014-E10 contains a SEA domain (an extracellular domain involved in carbohydrate binding). In situ analysis showed that in the planula, the three transcripts are co-localised in a subpopulation of ectodermal cells that is concentrated orally. The post-settlement expression patterns of these three genes were also very similar, transcripts in each case being localised in scattered ectodermal cells of the polyp (Figure 6A-C). The apparent co-localisation and co-expression of these unrelated but unique unigenes suggests that they may function in a common process or signalling pathway. The size of the synexpression group to which these three genes belong is unknown, but such gene clusters are of great interest, since they may represent coral-specific pathways or functions.
Unfortunately, such genes also present great analytical difficulties, since their lack of clear homologs limits the inference of function from structure, and the molecular tools required to test function are not yet available in corals, although progress is being made in that direction with other cnidarians [45][46][47].

Validation of the approach and methodology

Virtual northern blots for eight genes were consistent with the microarray results, confirming their accuracy. In addition, and consistent with the microarray results being accurate, several mini-collagen-like proteins were up-regulated in the planula. Mini-collagens have thus far only been described from nematocysts, cnidarian-specific structures which first appear at the planula stage in A. millepora (Additional Files 2, 3).

Taxonomic and functional breakdown of the genes

The composition of the EST set used in these microarray experiments has previously been considered specifically with respect to the complement of developmental signalling pathway components [2,3], but this paper is the first to examine broad-scale changes in gene expression during development for any cnidarian. The use of different criteria and thresholds, and the ever-changing baseline provided by the databases, complicates making direct comparisons with other developmental studies. For example, although a recent paper on developmental gene expression in the ascidian Molgula [48] addressed many of the same questions, it focussed specifically on highly expressed genes (i.e. only those accounting for more than 0.2% of the total number of ESTs), so it is not possible to interpret apparent differences, such as in the percentage of unique genes. In terms of developmental changes, it is particularly noteworthy that the percentage of "core" genes (59%; i.e. those genes shared with members of other kingdoms as well as other animals) is highest in cluster VI and that the percentage of unique genes (12%) is lowest in cluster I.
Presumably these figures reflect shifts from common cellular pathways during very early development to greater cellular and molecular diversification later. As in many other animals, the early development of Acropora appears to involve many stored maternal mRNAs. The composition of the maternal mRNA pool is complex, consisting principally of low abundance transcripts including those involved with cell division, RNA metabolism, and regulation of gene transcription (L McFarlane, unpublished). Among genes of particular interest, H2A.Z and H1, histones with roles in priming chromatin for developmental gene expression [49] in a variety of other systems, are highly represented in the prawn chip ESTs and strongly down-regulated thereafter, as are cyclins A and B3. In Drosophila and Xenopus, maternal cyclin transcript levels are initially very high and then decrease dramatically after the onset of gastrulation [50][51][52]. Acropora may therefore follow this pattern of abundant maternal cyclin transcripts that drive very rapid cell proliferation early in embryogenesis, followed by lower transcript levels with the onset of slower developmentally regulated cell cycles. Cell cycle transcripts such as cyclins A and B were also abundant among the cleaving embryo ESTs of Molgula tectiformis [48] and in pre-gastrulation stages of Xenopus [53] and Drosophila [54].

Lectin domain proteins are potentially involved in diverse processes

There are a number of precedents for the involvement of lectin-containing proteins in metamorphosis. Lectins are differentially expressed at metamorphosis in two ascidians, Herdmania curvata [55] and Boltenia villosa [11,12]. In Boltenia, four lectins and two key lectin pathway genes are up-regulated in the larva or the newly settled adult [11]. The lectin-induced complement pathway, which is initiated by a mannose-binding lectin, is important in Boltenia for the recognition of those bacteria which induce metamorphosis and tissue remodeling [12].
It is possible that the lectins up-regulated at metamorphosis in Acropora have an analogous role in activating tissue remodelling. Consistent with this idea, a possible complement effector, the perforin domain protein apextrin, is expressed in a strikingly similar pattern to those of the CEL-III lectins during metamorphosis in Acropora [56].

[Figure 6 legend] Whole mount in situ hybridization of three genes of unknown function. In addition to the temporal synexpression established by microarray, these three genes share common expression patterns and thus form a temporo-spatial synexpression group. Localisation of (A) A044-A9, (B) C008-B2 and (C) C014-E10 transcripts (dark purple) in (1) prawnchip, (2) presettlement larva, and (3) postsettlement polyp. Orientation in presettlement and postsettlement larvae is oral upward. Lack of expression in the prawnchip is followed by expression in a subset of ectodermal cells concentrated at the oral end of the presettlement larvae and postsettlement polyps. Their synexpression, both temporal and spatial, suggests that they may be a novel group of genes interacting with one another.

Lectin domain-containing proteins also potentially function in the recognition of symbionts by corals. Lectin/polysaccharide signalling is used in many systems as a mechanism for symbiont recognition, the most widely known example being the recognition of sugars on the surface of nitrogen-fixing bacteria by the lectins of their host legume during the establishment of their symbiosis. Symbiodinium in scleractinian corals reside in the endoderm, and two mechanisms of entry have been described in those corals that acquire them from the environment. The first is directly into the endoderm via the oral pore after it is formed 3-5 days post fertilization in association with feeding, as was demonstrated in the coral Fungia scutaria [25] and the anemone Anthopleura elegantissima [57].
The second, also demonstrated in Fungia [24], is that they can enter via the epithelium pre- or post-gastrulation. Those which have entered by the ectoderm are then shunted to the endoderm where they are retained [24]. Elegant studies in the latter half of the last century described the cell biology of symbiont uptake and retention, for example [58], and it has recently been established that members of the Rab family of proteins are involved in determining whether symbionts are digested or retained [59][60][61]. Symbiodinium are not transmitted through the eggs of A. millepora, and while planulae can be infected [23] this may only occur after the oral pore has opened shortly before settlement ([22] and AH Baird, pers. comm.), although the timing and mode of symbiont uptake remain to be firmly established. The limited available field observations indicate that infection normally does not occur until a few days after settlement in A. millepora [21]. These observations point to the endoderm as the likeliest point of Symbiodinium uptake, but do not rule out a possible role for the ectoderm. There is clear evidence from a number of cnidarian species of selective maintenance of the most "appropriate" clade of symbiont, while conclusions on specificity of uptake and its possible mechanisms are equivocal, perhaps due to interspecific variability. Nevertheless, there is evidence that lectins function in symbiont recognition, as previously summarised, and these molecules therefore remain obvious candidates for roles in symbiont uptake and maintenance by Acropora.

Genes involved in calcification

Two alpha type carbonic anhydrases are expressed in patterns that are consistent with roles in calcification. However, these genes are not restricted to heavily calcifying cnidarians, as both have probable orthologs in sea anemones and other cnidarians.
This is perhaps not surprising, as carbonic anhydrases are involved in pH and CO2/bicarbonate homeostasis in all organisms, and the ability to deposit some form of calcified exoskeleton is taxonomically widespread among cnidarians. For example, polyps of the hydrozoan Hydractinia symbiolongicarpus secrete a mat of calcium carbonate, in the form of aragonite, on their substrate [62]. Two membrane-associated carbonic anhydrases have been described from planulae of the coral Fungia scutaria, but they are short and missing amino acids thought to be necessary for CA activity, although the authors hypothesize that they could play a role in the onset of calcification at the time of settlement [63]. The first Acropora carbonic anhydrase, C007-E7, matches most strongly to vertebrate IV/XV-type carbonic anhydrases, and consistent with this, is predicted to be GPI-anchored. C007-E7 has likely orthologs in both Nematostella and Hydra. The second carbonic anhydrase, A030-E11, is a I/II-type carbonic anhydrase and is likely to be the Acropora ortholog of a protein identified in the sea anemone Anthopleura elegantissima (29.8% identity and 43.1% similarity) as a "symbiosis gene" - it is strongly up-regulated when this facultatively symbiotic anemone takes up endosymbionts [64]. However, clear counterparts of this soluble cytosolic-type carbonic anhydrase are present in both Nematostella and Hydra magnipapillata, neither of which harbours symbionts. Whereas the two carbonic anhydrase genes are not restricted to calcifying cnidarians, a number of other coral genes with similar expression patterns have no apparent sea anemone or Hydra homologs. One possible scenario is that many of the genes involved in calcium processing will have a widespread distribution while some of those involved in secreting the organic matrix may be more specific, as in the case of galaxin.
It will be particularly interesting to see whether different gene repertoires play a significant part in determining the dramatic differences in colony morphology that are characteristic of the various corals, or whether this is due mainly to deploying the same genes in different ways.

"Coral-specific" processes as variations on known themes

One conclusion that follows from the work presented above is that many of the molecules involved in "coral-specific" processes such as metamorphosis and calcification are not coral-specific - genes whose expression patterns imply key roles in implementing metamorphosis, such as the lectins A036-E7 and A049-E7 and apextrin [56], have homologs in other animals even though they are not present in Nematostella. Both of the carbonic anhydrases implicated in calcification also have clear counterparts in non-calcifying cnidarians. A second conclusion is that processes central to coral biology, such as symbiont recognition, may have analogous biochemical bases in phylogenetically distant systems. Lectins function in symbiont recognition in the legume-Rhizobium system; this analogy may be useful in understanding how specificity might be achieved in the coral/dinoflagellate symbiosis and in exploring the roles of the candidate molecules identified here. As in ascidians, metamorphosis in Acropora involves activation of an innate immune response, as both lectins and the perforin domain protein apextrin are strongly and specifically expressed at this time. Inevitably, other genes implicated in coral-specific processes appear at this stage to be taxon-restricted, but it is unclear to what extent this simply reflects the limited number and range of animals for which whole genome data are yet available. Genes that are today considered "coral-specific" may actually be more widely distributed; the number of genes considered vertebrate-specific shrinks with the publication of each additional animal whole genome sequence.
Moreover, genes with no clear homologs may simply be old genes that have evolved beyond recognition. One promising approach arises from the prediction that genes involved in "coral-specific" processes such as symbiont recognition are under positive selection. With the imminent availability of large EST datasets for several corals, a combination of in silico and in situ approaches should identify these genes and build on the pioneering study reported here.

Microarray description

The microarrays used in this experiment consisted of 13,392 spots derived from 12,240 cDNA clones (1,152 clones are represented more than once) and 432 spots representing positive and negative controls. The cDNA clones spotted onto the array were randomly selected from cDNA libraries that had been constructed in Lambda ZAP (Stratagene), and include 3456 clones from the prawnchip developmental stage, 4608 clones from the planula larva stage [65], and 4128 clones from the primary polyp. All of the material used for making the libraries came from Nelly Bay, Magnetic Island, Queensland, Australia (19°08'S 146°50'E). All cDNAs spotted onto the slides were derived from cDNA libraries of the appropriate developmental stages. They were isolated by TempliPhi (GE Life Sciences) on excised clones, except for 2,000 postsettlement polyp clones which were PCR amplified directly from individual phage suspensions and 3,012 planula larva cDNAs which were isolated previously [2].

Generation

Microarrays were generated by spotting the amplified cDNA onto GAPSII slides using a Biorad Chipwriter Pro, and then fixed by UV light exposure (150 mJ) followed by baking at 80°C for 3 hours. All cDNA clones represented on the arrays were sequenced from the 5' direction using standard Sanger (ABI Big Dye) sequencing technology.

EST analyses

After data filtering, ESTs were clustered using CAP3 [66]. The coding potential of the resulting unigenes was analysed using ESTScan [67].
5081 were predicted to give rise to bona fide proteins, using the criterion of a coding potential of 25 or greater. The EST contigs which had predicted peptides were used to search the Uniprot database using BlastX [68] with a threshold of e = 1 × 10^-5 in order to functionally classify the predicted proteins according to the scheme in [10].

Experimental design

To assay for changes in gene expression during Acropora development, mRNA was isolated from four different developmental stages: the pre-gastrula "prawn chip" stage (8 hpf), the planula larva stage (83 hpf), the post-settlement primary polyp (130 hpf) and the adult colony. The rationale for selecting these stages is that they span key developmental events including the establishment of tissue layers and body axes at gastrulation, the transduction of settlement cues, settlement and metamorphosis, and the initiation of calcification and uptake of symbionts. Prawn chips, planula larvae and primary polyps were the offspring of colonies collected from Nelly Bay, Magnetic Island (19°08'S 146°50'E). Adult tissue was obtained from a colony in the same bay. Pools of approximately 1000 embryos were made to create each biological replicate [69]. Total RNA was extracted from these for each of our stage-specific 'targets'. Tissue from a single colony was used in the case of adult RNA extraction. The entire experiment was replicated on different days using separate collections of material, thus giving two biological replicates. Within each biological replicate, each developmental stage was compared with every other twice, once in each dye orientation. Thus, there are two biological and two technical replicates for each comparison (Figure 7). Since there are six possible comparisons with this design, the entire experiment used 24 slides - 12 for each biological replicate. cDNA for probing arrays was produced from unamplified total RNA which was extracted using TRI Reagent (Ambion) according to the manufacturer's instructions.
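The slide arithmetic of this all-pairs, dye-swap design can be checked with a short sketch (the stage names are from the text; the enumeration itself is illustrative, not part of the original pipeline):

```python
from itertools import combinations

stages = ["prawn chip", "planula", "primary polyp", "adult"]

# Every developmental stage is compared directly with every other stage.
pairs = list(combinations(stages, 2))
assert len(pairs) == 6  # six possible pairwise comparisons

# Within each biological replicate, each pair is hybridized twice,
# once in each dye orientation (Cy3-Cy5 and Cy5-Cy3).
slides_per_replicate = len(pairs) * 2
biological_replicates = 2
total_slides = slides_per_replicate * biological_replicates

print(slides_per_replicate, total_slides)  # 12 24
```

This reproduces the figure quoted in the text: 12 slides per biological replicate and 24 slides overall.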
The quality was assessed using denaturing gel electrophoresis using standard methods [70]. For each hybridized sample, total RNA (80 µg) was reverse transcribed, labelled and hybridised using standard protocols [71].

Data analysis and verification

Slides were scanned using a GenePix 4200A scanner, and data extracted using Spot [72]. All further analyses were carried out using the limma package [73] for the R system [74]. Print-tip loess normalisation [75] was performed on each slide. Quantile normalisation was applied to mean log-intensities in order to make the distributions essentially the same across arrays. The methodology used for statistical analysis is described in Smyth [76]. The prior probability of differential expression, for each pair of comparisons between stages, was taken as 0.1. The Benjamini and Hochberg method [77] was used to adjust the sequence-wise p-values, so that a choice of sequences for which the adjusted p-value is at most 0.05 identifies a set of differentially expressed genes in which 5% may be falsely identified as differentially expressed (see Additional File 4 for more detail). Array data have been deposited in the Gene Expression Omnibus (GEO) database (accession number GSE11251). Results were also verified using M vs A plots, where M = the log ratio of the spot fluorescence intensity values and A = the log of the average spot fluorescence intensity. An example is given in Additional File 5. Spots for which no fluorescence was expected, including salmon sperm DNA, empty vector and primers, plotted near the origin of the MA plot, as expected. Negative controls for differential expression (i.e. spots expected to show hybridization but no differential expression) had an M value of or near to zero, but ranged in fluorescence intensity, also in accordance with expectations.
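The M/A quantities and the Benjamini-Hochberg step-up adjustment described here can be sketched in Python (the actual analysis used limma in R; the intensity values below are synthetic and A is taken as the log of the geometric mean of the two channels, as is conventional for MA plots):

```python
import numpy as np

# Two-channel spot intensities (synthetic values for illustration).
red = np.array([1200.0, 400.0, 900.0, 150.0])
green = np.array([300.0, 410.0, 880.0, 600.0])

# MA plot coordinates: M is the log ratio of the channel intensities,
# A is the log of their (geometric) average intensity.
M = np.log2(red / green)
A = 0.5 * np.log2(red * green)

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure).
    Selecting genes with adjusted p <= 0.05 controls the FDR at 5%."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)                        # ascending p-values
    ranked = p[order] * n / np.arange(1, n + 1)  # p_(i) * n / i
    # enforce monotonicity, working back from the largest p-value
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    adjusted = np.clip(adjusted, 0.0, 1.0)
    out = np.empty(n)
    out[order] = adjusted                        # restore input order
    return out

pv = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
adj = bh_adjust(pv)
print(np.round(adj, 4))
```

With these toy p-values, only the first two genes survive the adjusted p <= 0.05 cut-off, illustrating how the step-up procedure is more conservative than the raw p < 0.05 threshold.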
Differentially expressed positive controls (i.e., spots expected to show both hybridization and differential expression between presettlement and postsettlement on the basis of virtual northern results) were positioned on either side of an M value of zero with a range of fluorescence intensities. Cluster analysis was used to search for clusters of expression profiles in the data. K-means clustering was used to split the genes into 6 groups of differential expression profiles. Clustering was carried out using Cluster 3.0 [78] and the results viewed with Java TreeView [79]. Unigenes with protein coding potential > 25 and p-value < 0.05 in the test for differential expression between temporally sequential developmental stages were retained for cluster analysis. Results for the microarray experiments were verified using "virtual northern blots", which were made using the Clontech SMART cDNA Synthesis Kit, according to the manufacturer's instructions, using RNA from the same stages used in the microarray experiment. DNA used to probe the blots was generated by PCR (see section 2.5.4 PCR and spotting of cDNAs), purified using the Qiagen PCR Purification kit according to the manufacturer's instructions, and radiolabelled with 32P-dATP using the Prime-A-Gene Labeling System (Promega) according to the manufacturer's instructions. Hybridization was conducted according to standard protocols [70] and visualized by exposure to a Phosphorimager (Molecular Dynamics) cassette overnight. Digital images were viewed with Quantity One software.

Low-throughput sequencing

In order to obtain the entire open reading frame, some unigenes selected for in situ hybridization required further sequencing. This was done either as described for EST sequencing or using 300 ng of plasmid as template. Raw data were viewed and edited with Chromas Lite and sequences were aligned with LaserGene (DNASTAR).
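The clustering step can be illustrated with a minimal k-means implementation (the authors used Cluster 3.0; this toy version runs on synthetic expression profiles and only sketches the algorithm of alternating assignment and centroid-update steps):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(profiles, k, iters=50):
    """Minimal k-means: partition expression profiles into k clusters."""
    # initialise centroids from k distinct randomly chosen profiles
    centroids = profiles[rng.choice(len(profiles), k, replace=False)]
    for _ in range(iters):
        # assignment step: each profile joins its nearest centroid
        d = np.linalg.norm(profiles[:, None, :] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # update step: recompute each centroid as its cluster mean
        for j in range(k):
            if (labels == j).any():
                centroids[j] = profiles[labels == j].mean(axis=0)
    return labels, centroids

# toy log-ratio profiles across the four developmental stages
# (synthetic data, for illustration only)
profiles = rng.normal(size=(300, 4))
labels, cents = kmeans(profiles, k=6)
print(sorted(set(labels)))  # at most 6 cluster ids
```

In the study, each row would instead be a differentially expressed unigene's expression profile across the developmental stages, with k = 6 yielding the six synexpression clusters discussed in the Results.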
cDNA sequences for genes characterized by in situ hybridization have been deposited in GenBank under accession numbers EU863776-EU863788.

[Figure 7 legend] Microarray experimental design. Each developmental stage used in this experiment was directly compared to all others. Each arrow represents four hybridizations; two in one dye orientation (Cy3-Cy5) and two in the other (Cy5-Cy3), hence 24 slides were used in total. Further details are given in Materials and Methods.

In situ hybridization

Templates for riboprobe production were generated by PCR. Riboprobe synthesis and in situ hybridization were performed as reported by [80]. In order to view further histological detail, embryos stained in whole mount were embedded in LR White Resin, sectioned at various thicknesses and counterstained with Safranin O.
Music Becomes Emotions: The Musical Score in Two Productions of A Streetcar Named Desire

From today's perspective, Alex North's score for the 1951 film A Streetcar Named Desire, which was considered remarkable even at the time, can claim legendary status. The titles of the 16-track score suggest that the music focuses on the characters, the setting, main motifs, crucial events and states of mind. The film soundtrack could thus be denoted as integral to and harmonized with the dramatic action. This is not the case in the 2008 staging at the Slovene National Theatre in Maribor, where the music composed and selected by Hrvoje Crnić Boxer seems to focus on the protagonist only. The performance revolves around Blanche and could be interpreted as a psychoanalytic study of the play through her subconscious. Analysing the musical layers of these two considerably different productions of Williams' play opens new interpretative aspects of this complex theatre and film classic from the Deep South literary tradition.

1 Introduction

Alex North's score for the legendary 1951 film A Streetcar Named Desire (hereafter Streetcar) is by common acknowledgement "the first functional, dramatic jazz score for a film. Up until then, jazz had been generally used only as source music" (Lochner 2006, np). Even though the film was a success, it earned North only an Academy Award nomination, while the Oscar went to Franz Waxman and A Place in the Sun (Academy of Motion Picture Arts and Sciences 2016).
As with several other full music scores by Alex North, this music today is celebrated by critics and audience; unbelievably, North had 14 more nominations but never made it to the top until 1986, when he was voted an honorary Academy Award for his life's work, as the first composer to receive it. The Streetcar musical score can, nevertheless, be considered a classic, in which it is possible to identify the major traditional roles of music in film. It thus can be seen as the antithesis of the musical strategies deployed in the Slovene National Theatre (SNG) production.

The 2008 SNG Maribor production directed by Damir Zlatar Frey was not a traditional staging of this classic play; it was rather a "different world, inhabited by personal theatre mythology" (Delbianco 2008, 11). The main element of Dora Delbianco's scene design, occupying most of the stage, was a huge pile of white river sand, about 2 meters high, stretching from stage right and gradually descending towards stage left and the front of the proscenium. This basic component of the scene "underlines the psychological states of the characters" and has the ability to transform itself from "quicksand that mercilessly gobbles up its victims" into "time sliding away as in an inexorable hour glass" (Delbianco 2008, 12). Particularly in such quantity, sand can convey numerous symbolic connotations, and the theatre ensemble made considerable use of this quality. Delbianco also reports that the decision to place the whole setting of the play inside Blanche's psyche was a conscious choice, based on an agreement between Frey and herself as dramaturge. Obviously, this important fact had crucial implications for the choice of music and sound. Consequently, this permits the interpretation of the performance through a psychoanalytical lens, as will be done in the following sections. The article considers how the manipulation of music in the film version of Streetcar influences the viewer's emotions, and then parallels the
characteristics of the film music score to the music used in the Maribor stage production, which is intrinsically different from the former.

1 Streetcar received four Oscars: Best Actress (Leigh), Best Supporting Actor and Actress (Malden and Hunter) and Art Direction (Day and Hopkins). Brando, like North, was only nominated, losing to Bogart in The African Queen (Academy of Motion Picture Arts and Sciences 2016).

2 Diegetic and Non-Diegetic Music

A crucial issue for the analysis and interpretation of both productions is whether the music heard by the audience is internal to the plot or whether it serves as a musical underscore beyond the internal dramatic structure. The distinction is most commonly addressed with the terms diegetic and non-diegetic. Cohen (2001), for example, understands the former as relating to music that occurs within the narrative of the film, where the viewer can see the source on the screen/stage. It can be a radio, a musical instrument, a juke box or an orchestra in a concert hall. It is necessary that a character in the film can see the source of the music and, even more crucially, can hear it. In contrast, non-diegetic music is not part of the narrative, and its source is neither visible on screen nor seen or heard by the characters. Only the viewer can hear the music and associate the action or the characters with it. Chion (1994, 71-85) defines the same phenomena with the expression acousmatic when referring to a sound that the audience can hear but for which they are unable to identify a source. This sound can appear in a film, but it is off-screen (invisible). He defines and illustrates the term acousmêtre in his The Voice in Cinema (Chion 1999, 17-29), where he reflects upon the power of invisible sound. The opposite of acousmatic sound is visualized sound, where the source is visible. A sound can be visualized and later acousmatized or vice versa, which is a frequent mystery film technique, where the tension is raised by keeping the source of the
sound a secret.

Film music emphasizes the dramatic line. Music and moving images have to be brought together into harmony within a dramatic context. One of the important features of music in films is the musical accent. Composers need to be careful about when they introduce music to a scene. If it is brought in suddenly, it can be too obvious and can draw attention to itself and away from the action. It is best when the entry point for music has a dramatic function. The musical entrance can also be connected with meaning or with a change in the dramatic line. It can be triggered by what a character says or does, or only by the expression on his face (Burt 1994, 79-82). In the film version of Streetcar, there is a rich combination of both diegetic and non-diegetic sounds; there are even cases when one dissolves into the other, for example, when Blanche reveals to Stella that Belle Reve has been lost and the tension of the conversation increases. This is supported by the musical score of rather dissonant trombones in the Belle Reve Reflections theme, which - when Blanche runs into the courtyard - transforms into a train whistle. In the Maribor production, the music is almost wholly non-diegetic, to support the psychoanalytical interpretations intended by Delbianco and Frey.
Functions of Film Music

In his critical study of film music, Prendergast (1992) provides a broad variety of functions that the musical score can have in a film. These were adapted from a 1949 article by Aaron Copland in The New York Times. Among the several that he mentions, a few are particularly relevant to this study: film music can "create a more convincing atmosphere of time and place", "provide the underpinning for the theatrical build-up of a scene and then round it off with a sense of finality", "serve as a kind of neutral background filler", and "underline or create psychological refinements - the unspoken thoughts of a character or the unseen implications of a situation" (see Prendergast 1992, 213-22). These will be illustrated with examples from Streetcar in the following paragraphs.

When music has the role of creating a persuasive atmosphere of time and place in a film, Prendergast (1992, 213) speaks of musical colour. This is immediate and flexible, since a composer can bring it in and out with relative ease. Moreover, the mood that is created can be easily understood by an average audience without particular musical knowledge. Musical colour can be achieved with a musical instrument specific to a certain time or area, or through the type of music and its style. The music in this role can be diegetic or non-diegetic, although the other functions selected above seem better suited to non-diegetic music, since in these cases the music is not part of the plot but is intended to transfer certain additional information to the viewer. A good example of creating an "atmosphere of time and place" with music in Streetcar is the soundtrack piece at the beginning of the film, when Blanche arrives in Elysian Fields in New Orleans. As she walks down a busy street, the viewer hears what seems to be authentic music from jazz clubs mixed with talking, laughter and the sound of glass breaking. Additionally, there are street sounds like cars braking, horns blowing and shouting. After she
finds the right address and Eunice directs her to the bowling club, the setting instantly changes, most notably with the help of auditory imagery - a whistle and the sound of a ball hitting bowling pins; these sounds take effect even before the viewer can absorb the place visually. In both cases, the music and the sounds are diegetic, which means Blanche hears them too.

Another of Copland's functions, "the theatrical build-up of a scene" to a climax, can be identified in the scene where Stanley, Stella and Blanche are celebrating Blanche's birthday; Mitch has failed to show up, since Stanley informed him of his findings regarding Blanche. The background symphonic music is quiet and slow while the tension in the dialogue is rising, since both women jokingly comment on Stanley's table manners. His silence is vexing - up to the moment when Stella tells him to help her clear the table. The music ascends to a climax and stops abruptly when Stanley crashes his plate to the floor. This function thus creates an intense dramatic atmosphere; however, its effect is notable because the scene itself is dramatically effective. Prendergast also adds that it would be unreasonable to expect the music to make a weak scene stronger, or turn a badly written script into a good film just because of a strong musical score.

In the initial part of this same scene, the music can be classified as "neutral background filler", i.e.
the music used to fill the empty spots between utterances in a dialogue or running beneath the conversation. In this case the music's function is merely to be present, and it is usually referred to as the underscore. Film composers traditionally use less complex musical lines, since it makes no sense for such music to interfere with the dramatic action of the plot. This example is one of many similar cases of this function of film music in Streetcar.

Among Copland's functions of film music, the most relevant to the topic of this study is the music that can "underline or create psychological refinements - the unspoken thoughts of a character" (see Prendergast 1992, 216). This aspect is relevant for both Kazan's film and the Maribor theatre production. In the latter, a considerable share of the music has this function, while in the film this is, for example, the case when Blanche exits from the bathroom, shortly after Stanley has fetched her trunk from the train station and ransacked several drawers. She decides to try to handle the situation by playing a cool sister-in-law. The music is light and playful, reflecting her feelings, or at least the image of her feelings as she chooses to project them. When Stanley steps out of "her" part of the apartment and she draws the curtain, which also strikes the viewer as a salient visual metaphor for removing the foreign body from her immediate personal zone, she notices the disorder and realises that the intrusion has already taken place. The music suddenly moves away from harmony and sinks into a more dissonant and disharmonic tune. In fear of revealing her true self, she recovers quickly, resuming the flirtatious tone, and the music reverts to the previous joyful atmosphere. One of the most valuable contributions of this musical score is the representation of such psychological and emotional points relevant to the scenes. The music in a film can appear simultaneously with the speech and thus constitutes a third dimension of the filmed play,
an addition to the images and words.

Music as a Source of Emotions in Film

Music in films is often used to involve the viewer emotionally. Identifying and rationally understanding the feelings of a character is frequently possible from the visual features of the film, while the empathetic quality, i.e. simulating the situation in which the viewer could feel these emotions, is usually provided by the musical score, or rather the combination of the two. This idea is supported by many film music composers and theoreticians. In an interview with Meryl Ayres, the composer Wes Hughes suggests that "the music's main job is to flesh out the emotional and dramatic nuance in a film's narrative" (2015, n.p.). Michel Chion (1994, 4) speaks about the added value with which sound enriches a given image, while George Burt (1994, 9-10) calls this quality the associative power of music, claiming that film music cannot represent something by itself - neither very concrete images nor abstract issues like, for example, a political system. It is, however, in the music's very nature to stimulate associations. When music co-appears with the image, it is practically impossible for a viewer not to perceive them as a unified entity, and this joint perception need not always be conscious; on the contrary, the viewer is frequently unaware of the influence of music on perception. Cohen (2001, 249) agrees that film music is one of the strongest sources of emotion in film, even though it was composed with the understanding that it would not draw conscious attention. Gianetti (1999, 207) goes even a step further to claim that, since visual imagery dominates when we are watching a film, music automatically works on a subconscious level. If a viewer fails to remember the music of a certain scene but is able to recall the emotions felt while viewing it, this could be in line with Gianetti's claim. Chion (1994, 4) identifies two ways for film music to create a specific emotion regarding the image on the screen:
empathetic and anempathetic music. In the former case, music immediately expresses the feelings that the characters and the viewers feel and absorb, while in the latter, music is indifferent to the mood of the character or the development of the film and pretends not to notice it.

According to Gianetti (1999), the viewers' responses to music are influenced by pitch, volume and tempo. He suggests that high-pitched sounds usually generate feelings of tension and suspense, particularly just before the action reaches the climax, often even throughout its duration, while it is the opposite with low-pitched sounds. These can often be used to emphasize the seriousness of a scene, or they can suggest emotions like anxiety, fear, disappointment, regret or grief. The implications of volume and tempo are similar: loud sounds are forceful and threatening, accelerating music enhances tension, while quiet, slow tones slow down the action. They are weak and intimate and often suggest that the visible event is transferring onto the emotional and spiritual level of the character. This situation is frequent in the film version of Streetcar, since Blanche's actions are usually in direct opposition to her thoughts and emotions. The music sometimes supports the former, sometimes the latter, while the viewer witnesses the switching between them. The lazy blues sound in the underscore often represents an antithesis to Blanche's mind, which is "swimming", or "swirling", and her head "buzzing".
Gianetti (1999, 207) also speaks of a musical motif that accompanies specific characters, actions, situations or states of mind. It appears simultaneously with the corresponding visual material, while it can also be used as foreshadowing or as an alert or warning. Cohen (2001, 258) proposes a similar idea and calls it the technique of leitmotif, where a particular musical theme is repeatedly coupled with a character or event, so that it becomes an integral part of the film through association and thus enables the symbolization of past events. In Alex North's soundtrack, two such notable motifs reappear almost constantly: the Varsuviana and Belle Reve Reflections. The former is the polka tune closely associated with Blanche's memory of her young husband's suicide. They were dancing to this music when, after a brief fight, he ran out of the ballroom and shot himself. Whenever this event is brought up in conversation in Blanche's presence, the musical motif appears in the background and continues until the shot, when the music stops. This motif first appears at the initial meeting of Blanche and Stanley, when he asks her if she was married. It recurs in the scene when Stanley demands to see the documents regarding the loss of Belle Reve. When he picks up a bunch of love letters from Blanche's trunk and she tosses them on the floor in an attempt to recover them, the viewer learns that these are Allen's, and the Varsuviana sounds again. The next occurrences are when a young boy comes to the door to collect for The Evening Star newspaper and when Blanche is telling the story to Mitch on one of their dates, while also vividly and emotionally re-living it. Finally, the melody recurs towards the end of the play, when Mitch comes to break up with Blanche; this time it comes with some variation, i.e. the music does not stop after the shot but a little later, which could be symbolic of Blanche having lost another man in her life and a potential husband.
"Belle Reve Reflections" is a more dissonant motif than the Varsuviana, and it reminds Blanche of the loss of the family estate. The first time we hear this tune, it appears simultaneously with its verbalized gist: to Blanche's accusation of her sister, "I knew you'd take this attitude", and to Stella's question "About what?", Blanche replies: "The Loss". This theme by North, although called somewhat euphemistically "Belle Reve Reflections", represents not the estate itself but its loss, which symbolically recurs later in the film, and it accompanies each of Blanche's losses. It appears again when Stanley leaks the information about Stella's pregnancy to Blanche, which is in a way the loss of her sister as a confidante (which she probably never really had after she came to New Orleans) and a half-safe haven that she tried to build in Stella and Stanley's home. The arrival of the baby rounds out the family and ties it together, making Blanche the odd one out. Finally, when Mitch comes to clear up the situation about her past and eventually breaks up with her, this theme again announces a loss, that of a potential husband and thus the last chance for future happiness.
Certain other motifs or themes in the film contribute considerably to characterization. Even though the track titles suggest that the focus lies primarily in situational issues (e.g., "Stan meets Blanche", "Blanche and Mitch" and "Stan and Stella"), the music still unmistakably contains elements of characterization. "Stan meets Blanche" is a slow-moving, flirtatious jazz tune in a major key with a brass orchestra base and an outstanding, high-pitched trombone melody. It greatly supports the characterization of Blanche, who is teasing Stanley. "Blanche and Mitch", on the other hand, is a rich, pleasant romantic tune with a rather melancholy lyrical character given to it by a minor key. The melody yields several promising harmonious waves, but it never opens into a broad major-key theme. In context, this musical representation of a beautiful love story perfectly fits the relationship between Mitch and Blanche, thus providing a flash-forward of what will later be revealed to the viewer. The only character accorded an individual theme, which is also suggested by its title, is Blanche (the track "Blanchie").

and post-WW2 periods. In 1950, Bryllion Fagin reported that three major plays by Tennessee Williams - Streetcar being one of them - "are full of typical situations in which psychopathological characters are involved" (1950, 304). The Maribor production could also be seen in the context of Felman's understanding of the relation between literature and psychoanalysis, in which the latter is usually in a superior position: "[w]hile literature is considered as a body of language - to be interpreted - psychoanalysis is considered as a body of knowledge, whose competence is called upon to interpret" (Felman 1977, 5). Not only with reference to Streetcar, however; Frederic Crews' claim from 1975 that "[p]sychoanalysis is the only psychology to have seriously altered our way of reading literature" is to a considerable degree still relevant today (see Stone 1976, 309). Apart from reflecting
Blanche's state of mind, i.e. her conscious thoughts as well as her subconscious ones, music also serves to set a boundary between the scenes; however, even in the latter function it is not completely detached from its psychological role.

One of the immediate observations regarding the musical aspect of this production is that practically all the stage music is non-diegetic, which means it is not part of the dramatic action. The only exception is the piece played on poker night, after Blanche and Stella return from their night out. Blanche turns on the radio, and both women face a rude objection by Stanley. The tune chosen for this scene is the single most pleasant song in the whole play: a soft male voice singing in a Slavic language (not Slovene) is accompanied by a melodious romantic blues with no electronic effects. The contrast with the half-drunk Stanley's shouting that she switch it off is thus even more striking, possibly representing his brutal intrusion into Blanche's vulnerable intimate world. The same tune recurs towards the end of the play, when she flirts with the young newspaper boy and kisses him. In both cases, this is Blanche's first contact with somebody new, a potential gentleman caller, representing for her the safe haven she has sought in her desperate history of failed romantic involvements. The only two moments in the play that light a spark of hope for Blanche are thus thematically connected by the same musical theme, which is pleasant and unmistakably positive. While the audience sees the first promise of happiness at the beginning of the play as believable and possible, the second one is obviously a brief spark that makes the night seem darker when it is gone. The first time this music is played, it can be heard in the background throughout Mitch and Blanche's first meeting, stopping abruptly when Stanley bursts onto the stage and throws the radio over the heap of sand. This could be seen as a sound-based flash-forward: symbolically, Blanche's romantic
dream with Mitch is destroyed by Stanley.

The rest of the music in Frey's production is electronic. It is instrumental or a combination of instrumental and vocal. The instruments can generally be recognized (piano/harpsichord, guitar, drums or voice), but in most cases the effect of an artificial electronic echo is strongly felt. The music thus acquires a certain quality of mystery; it sounds less realistic as well as less diegetic, since it is unnatural. The performance features seven (in most cases recurring) themes that follow Gianetti's previously explained claim that the viewers' responses to music are influenced by pitch, volume and tempo. The effects of music in this production mostly comply with this claim.

The director Frey decided to start with the last scene of Williams' play. He first shows Blanche, who is led away by the doctor, and repeats this scene, with variations, at the end. This approximately 12-minute opening is accompanied by a piano theme, extremely slow-moving with a strong electronic echo. The simple sequences use three high-pitched tones that, in combination with unison or second accompaniment, mostly give the impression of a minor-key flavour with occasional tone combinations in a major key. Blanche seems to be past her most turbulent moments; her mental instability has drifted into long periods of passivity, and the music reflects this state of her mind. The theme is repeated in the middle of Blanche's date with Mitch as she is telling him about Allen; in the background we see a naked actor, representing Blanche's young husband, with a male lover (the scene and the part of the plot that has been omitted from the film version).3 It is possible to understand this musical recurrence as the fundamental event that initiated Blanche's mental decay.

A guitar theme, also considerably echoed, appears a few times as a division between the scenes, e.g.
the scene when Blanche reproaches Stella for leaving all the responsibility on Blanche's shoulders, and the scene of her first meeting with Stanley. The same theme reappears when Stanley gives Blanche a bus ticket back to Laurel, when Mitch accuses her of lying and when he tells her she is not pure enough for his mother. However, in the latter two cases, the theme includes a voice - a curt female voice singing "Flores para los muertos" (Flowers for the dead), the famous utterance originally shouted by the Mexican street seller. This is a strong textual reference, offering a large variety of associations and symbolic interpretations in any production. When synchronized with the musical theme, its sinister tone is considerably intensified.

Powerful and distressing music appears when Stanley mentions Blanche's husband in front of her for the first time. It is a loud and fast electronic harpsichord theme that is soon joined by a rapid, almost howling falsetto. This sound implies what must be going on in Blanche's mind, the representation of the turbulent labyrinth of her thoughts. The theme reappears after a stylized violent conversation among the members of the poker gang - we only see their heads in spotlights on a completely blackened background. This fits into the concept of the play as a representation of Blanche's mind, which reacts strongly to any form of behavioural intensity.
Two more musical divisions between the scenes are introduced with a slightly lighter but still loud and determined harpsichord theme in the rhythm of a waltz. With this, Crnić lightens the atmosphere - possibly also in Blanche's subconscious - to prepare the ground for the diegetic romantic song of her first meeting with Mitch. This scene ends violently, with the half-drunk Stanley destroying the radio and Stella retreating to the upstairs flat with Eunice. Stanley's famous shout for his wife ("Stellaaaa!") is followed by what is potentially the strongest and most disturbing musical theme, composed of sinister, electronically deformed low male voices - this is also the most frequently recurring theme, as will be seen later - in the middle of which Stella runs down the sandy slope back to her husband. The stage darkens while the music remains, slowly fading as Blanche attacks Stella the next morning with the reproach for returning to Stanley. After Blanche's monologue about Stanley ("He acts like an animal, has an animal's habits![…]"), during which he returns and overhears at least part of it, the same theme re-emerges. During this interlude, a dozen white birch trees are lowered onto the sandy slope, and the image of Allen reappears among them. Blanche runs to him - again allowing the interpretation that this is a desperate internal cry for help - but he detaches himself from her and leaves. The stunning visual image of a forest of birches is rich with connotations: apart from being a visualisation of the main character's name (Blanche Dubois in French means "white from the woods"), the bare branches suggest autumn and decay, and together with the image of a forest itself, which in literature often has a secretive and macabre undertone, also carry psychoanalytic associations with the hidden parts of one's mind. This perplexing visual scene is intensified with music and is just the beginning of Blanche's sad fall. From that point on, every blow she receives in the developing plot
is accompanied by this musical theme: Stanley asking her about Shaw, Stella asking her to stop drinking, Blanche's distressing suspicion that Stella knows something (about her previous life) and is, therefore, acting strangely. Significantly, this theme also recurs after Mitch kisses Blanche and the relationship seems to be going the right way, but the director can obviously not allow the audience to be pleasantly deceived and share one of the infrequent moments of Blanche's "magic". The concluding visual image, which comes after a strong, loud drum interlude accompanying the rape, is that of a sand heap covered in poppies and Blanche wandering among them. Two headlights turn on behind her, representing a silent reminder of a streetcar. Pezdir (2008, 17) sees these as implying a deadly encounter with the unmanageable and unexpected mechanism of A Streetcar Named Desire.

3 The censorship issue in connection with the 1951 film is dealt with in detail by A. Davison (2009, 64).

Conclusion

With its charisma as well as its time-tested power and influence, Kazan's 1951 film version of Streetcar acquired an entry ticket to the prestigious club of films that represent milestones not only in the world of film but also in the general cultural universe. The soundtrack by Alex North is what today can be declared a classic music score, so its benchmark status in this analysis is plausible: to provide a criterion against which this considerably less traditional Maribor production could be measured or commented on. Despite a strong musical part, the prevailing images in the Maribor production are visual: a massive brown wooden wall that is lifted and lowered at the beginning and at the end, a red cloth covering the stage that disappears in front of the viewer's eyes, sliding into the orchestra pit, white sand, white birches, red poppies, and intense colours for the clothes of the characters, particularly Blanche; all this is difficult to overcome. Music thus seems like a side
effect, merely one layer of the play, overshadowed by the visual. Even though this statement might seem to denigrate the score, it complies with the thesis of this article: music from beneath the surface that gives the impression of being all-encompassing is more than appropriate for the psychoanalytical interpretation of the production. Frey's choice not to stage a classic performance of this challenging and eternally relevant text was probably sound, since it is an almost futile task to try to re-create earlier stagings, including the 1951 film classic with Vivien Leigh and Marlon Brando.
Synergistic effect of pyrvinium pamoate and posaconazole against Cryptococcus neoformans in vitro and in vivo

Background: Cryptococcosis is a global invasive mycosis with high rates of morbidity and mortality, especially in AIDS patients. Its treatment remains challenging because of the limited range of antifungals and their unavoidable toxicity, so more effort needs to be focused on the development of novel effective drugs. Previous studies have indicated that pyrvinium pamoate (PP) has fungistatic effects both alone and in combination. In this study, the effects of PP alone and in combination with azoles [fluconazole (FLU), itraconazole (ITR), voriconazole (VOR), posaconazole (POS)] or amphotericin B (AmB) were evaluated against Cryptococcus neoformans both in vitro and in vivo.

Methods: A total of 20 C. neoformans strains collected from cases of cryptococcal pneumonia and cryptococcal meningitis were studied. The effects of PP alone and of the PP-azole and PP-AmB combinations against C. neoformans were evaluated via the microdilution chequerboard technique, adapted from the broth microdilution method according to CLSI M27-A4. The in vivo antifungal activity of PP alone and in combination with azoles and AmB against C. neoformans infection was evaluated by Galleria mellonella survival assay.

Results: The in vitro results revealed that PP alone was ineffective against C. neoformans (MIC >16 μg/ml). Nevertheless, synergy of PP with ITR, VOR, POS, FLU or AmB was observed in 13 (65.0%, FICI 0.188-0.365), 3 (15.0%, FICI 0.245-0.301), 19 (95.0%, FICI 0.188-0.375), 7 (35.0%, FICI 0.188-0.375) and 12 (60.0%, FICI 0.281-0.375) strains of C. neoformans, respectively. There was no antagonism. The survival rate of larvae treated with PP alone (3.33%) showed almost no antifungal effect, but larval survival rates improved when PP was combined with AmB (35% vs. 23.33%), FLU (40% vs. 25%), ITR (48.33% vs. 33.33%), VOR (48.33% vs. 53.33%) and POS (56.67% vs.
36.67%) in comparison with AmB or the azoles alone, and statistical significance was observed when PP was combined with POS versus POS alone (P = 0.04).

Conclusions: In summary, these preliminary results indicate the potential of PP to reduce the MICs of azoles and AmB against C. neoformans; the combination of PP with AmB, FLU, ITR, VOR or POS improved the survival rates of C. neoformans-infected larvae compared with each drug alone. The in vitro and in vivo data show that PP can enhance the activity of POS against C. neoformans. This study contributes data on PP in combination with classical drugs of choice for cryptococcosis treatment.

Introduction

Cryptococcus neoformans is an important pathogenic fungal species that causes cryptococcosis, one of the main opportunistic mycoses, which comprises multiple invasive fungal diseases of the human body including cryptococcal meningitis, cryptococcemia and cutaneous disseminated cryptococcosis (Rajasingham et al., 2017; Centers for Disease Control and Prevention, 2022). In recent years, however, the incidence of cryptococcosis has risen to more than 223,000 cases and 181,100 deaths worldwide per year, mainly because of the growing immunodeficient population (Mirza et al., 2003; Rajasingham et al., 2017). This has been accompanied by a high mortality rate, especially among HIV/AIDS patients: data show that 1-year mortality rates for HIV-infected cryptococcal meningitis patients range from 50% to 100% in low-resource settings, compared with 10% to 30% in resource-rich countries (Loyse et al., 2013; Williamson et al., 2016; Rajasingham et al., 2017). The reasons for this difference in mortality include the severity of the infection, the status of host immunity and the availability of antifungals (Bermas and Geddes-McAlister, 2020).
Antifungal therapy for cryptococcosis is currently largely limited to fluconazole (FLU) and amphotericin B (AmB) in combination with FLU or 5-fluorocytosine (5FC) (Perfect et al., 2010; Grossman and Casadevall, 2017). However, the adverse effects of antifungal drugs and the emergence of fungal resistance are two problems that demand attention in the treatment of cryptococcosis (Bermas and Geddes-McAlister, 2020). Adverse events, cost and licensing restrictions limit the widespread use of AmB (Longley et al., 2013). Although the combination of AmB and 5FC has become a first-line induction treatment for cryptococcal meningitis, with effective and synergistic interactions, 5FC is unavailable in most of Asia and Africa because of a lack of manufacturers, its cost and bone marrow toxicity, as well as four-times-daily dosing, which results in poorer compliance (Loyse et al., 2013; Bermas and Geddes-McAlister, 2020). One study indicated that up to 10.6% of Cryptococcus isolates are resistant to FLU, and relapse isolates show higher rates, up to 24% (Bongomin et al., 2018). Cryptococcus isolates exhibit resistance to azole antifungals, and an increase in FLU resistance in particular is prevalent across the globe (Pfaller et al., 2009; Bermas and Geddes-McAlister, 2020); the genus is also intrinsically resistant to echinocandins, which may be related to the calcineurin pathway, to cytoplasmic calcium homeostasis regulated by CRM1 and CDC50, and to the protection afforded by melanins (Arastehfar et al., 2020; van Duin et al., 2002; Cao et al., 2019). Thus, the search for new antifungal targets or combinations that improve efficacy and reduce toxicity is a promising option, and scientists have attempted to repurpose established, FDA-approved drugs to help devise appropriate treatment options.
Pyrvinium pamoate (PP) is a classical anthelmintic drug, approved by the FDA for the treatment of pinworm in humans, with the first approval for treatment of enterobiasis dating back to 1959 (Beck et al., 1959; Fozard, 1978). Previous studies have shown that PP is fungistatic; the proposed mechanisms are that PP acts against aneuploidy-related azole resistance in Candida albicans and Aspergillus fumigatus, and that it interferes with fungal biological processes in Exophiala dermatitidis (Chen et al., 2015; Gao et al., 2018; Sun et al., 2020a; Sun et al., 2020b). Given the importance of aneuploidy for azole sensitivity in C. neoformans, we speculated that PP might also exert some antifungal effect and interact positively with conventional antifungals against C. neoformans (Kwon-Chung and Chang, 2012; Yang et al., 2021). In the present study, the antifungal efficacy of PP alone and in combination with triazoles and AmB against C. neoformans was investigated both in vitro and in vivo.

Materials and methods
Fungal strains, antifungals, and chemical agents
All 20 C. neoformans isolates were obtained from patients with clinically confirmed cryptococcosis. Z1-Z3, 08061 and 05338 were obtained from patients with cryptococcal pneumonia; G5-G10, 05781, 07406, 07109 and 07394 from patients with cryptococcal meningitis; and 07764, 07789 and 08026 from the blood of patients with cryptococcal pneumonia. All isolates were characterized microscopically and molecularly according to URA5 RFLP analysis (Meyer et al., 2003). Candida parapsilosis ATCC 22019 and Candida krusei ATCC 6258 were used as control strains for susceptibility testing. Isolates were cultured on potato dextrose agar (Haibo Biotechnology Co., Ltd.) at 35°C for 3 days before susceptibility testing.
In vitro effect of pyrvinium pamoate alone and combined with azoles or amphotericin B against Cryptococcus neoformans
The effects of PP alone and of the PP-azole and PP-AmB combinations against C. neoformans were evaluated via the microdilution chequerboard technique, adapted from the broth microdilution method described in CLSI M27-A4 (Drogari-Apiranthitou et al., 2012; Clinical and Laboratory Standards Institute, 2017). Inoculum concentrations were adjusted to 1 × 10^6 to 5 × 10^6 CFU/ml, ensuring a final concentration in the 96-well plates of 0.5 × 10^3 to 2.5 × 10^3 CFU/ml. The final concentrations of PP ranged from 0.25 to 16 µg/ml across rows; the azole and AmB concentrations ranged from 0.03 to 16 µg/ml across columns. After 72 h of incubation at 35°C, minimum inhibitory concentrations (MICs) were determined: for PP and the azoles, the MIC was read as the lowest concentration producing 50% growth inhibition compared with the growth in the control wells, while AmB MICs were read at 100% inhibition. A fractional inhibitory concentration index (FICI) of ≤0.5 indicates synergy, a FICI of >4 indicates antagonism, and a FICI of >0.5 and ≤4 indicates no interaction; the FICI was calculated as FICI = (Ac/Aa) + (Bc/Ba), where Ac and Bc are the MICs of the antifungals in combination and Aa and Ba are the MICs of antifungals A and B alone, respectively (Odds, 2003). All tests were performed in triplicate.

In vivo efficacy of pyrvinium pamoate alone and in combination with azoles or amphotericin B in Galleria mellonella
The in vivo antifungal activity of PP alone and in combination with azoles and AmB against C. neoformans infection was evaluated by G. mellonella survival assay as described previously (Maurer et al., 2015). Twenty sixth-instar larvae per group (∼300 mg; Sichuan, China) were maintained in the dark at room temperature before the experiments. Fungal cultures of isolate G7 were grown on PDA at 37°C for 48 h.
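The FICI computation and interpretation described above can be sketched in a few lines. The MIC values in the example are hypothetical, for illustration only, and are not taken from the paper's data.

```python
def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """Fractional inhibitory concentration index (Odds, 2003):
    FICI = (Ac/Aa) + (Bc/Ba)."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(value):
    """Classify a FICI value using the thresholds given in the text."""
    if value <= 0.5:
        return "synergy"
    if value > 4:
        return "antagonism"
    return "no interaction"

# Hypothetical MICs (µg/ml) for one isolate: PP alone 16, POS alone 0.5,
# and 2 + 0.06 in combination -> FICI = 2/16 + 0.06/0.5 = 0.245 (synergy).
value = fici(16, 0.5, 2, 0.06)
print(round(value, 3), interpret(value))
```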
Yeasts were adjusted to 1 × 10^6 CFU/ml in sterile saline. To evaluate the in vivo effects of PP alone and combined with azoles or AmB, the following intervention groups were included: PP, ITR, VOR, POS, FLU, PP with ITR, PP with VOR, PP with POS, PP with FLU, AmB, and PP with AmB, together with three control groups (an untreated control group, a saline control group and a yeast control group). The concentration of each drug was 200 mg/l. A total of 10 µl of yeast suspension or saline was injected into the larvae via the last right proleg using a Hamilton syringe (25 gauge, 50 µl). Larvae were infected with the fungal suspension 2 h before 5 µl of therapeutic agent (1 µg/worm) was introduced. All groups of larvae were incubated at 37°C in the dark. For survival studies, the death of larvae was monitored by visual inspection of their colour (brown to dark brown) every 24 h for 6 days. The experiment was repeated in triplicate using larvae from different batches.

Statistical analysis
Data are presented as mean ± SEM. GraphPad Prism 7 was used for graphs and statistical analyses. Survival curves were analysed by the Kaplan-Meier method. Differences were considered significant when P < 0.05.

In vivo efficacy of pyrvinium pamoate alone and in combination with azoles or AmB against Cryptococcus neoformans
The survival rate of larvae treated with PP alone (3.33%) showed almost no antifungal effect; the survival curve is consistent with that of the yeast control group (Figure 1). The survival rates for the single-drug AmB (23.33%), FLU (25%), ITR (33.33%), POS (36.67%) and VOR (48.33%) groups were all below 50%, but when PP was combined with these drugs the rates increased to 35% (PP+AmB), 40% (PP+FLU), 48.33% (PP+ITR), 56.67% (PP+POS) and 53.33% (PP+VOR). In particular, the survival rate for the PP+POS combination was significantly higher than for POS treatment alone (P = 0.04).
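The Kaplan-Meier estimate behind the survival curves can be sketched as follows. This is a minimal version without censoring, since every larva is inspected for the full 6 days; the daily death counts in the example are hypothetical, not the study's data.

```python
def kaplan_meier(deaths_per_day, n_start):
    """Kaplan-Meier survival estimate for daily death counts with no
    censoring: S(t) = prod over days of (1 - d_i / n_i), where d_i is
    the number of deaths on day i and n_i the number still at risk.
    Returns the survival probability after each day."""
    at_risk, surv, curve = n_start, 1.0, []
    for d in deaths_per_day:
        surv *= 1 - d / at_risk
        at_risk -= d
        curve.append(surv)
    return curve

# Hypothetical counts for a group of 20 larvae over 6 days:
curve = kaplan_meier([3, 4, 2, 1, 0, 0], 20)
print([round(s, 3) for s in curve])  # final value = fraction surviving
```

With no censoring, the final Kaplan-Meier value reduces to the simple proportion of survivors (here 10/20 = 0.5), which is why the survival "rates" reported in the text can be read directly off the curves.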
Also, the survival rates of all the combination groups were significantly higher than with PP treatment alone (P < 0.0001).

Discussion
Fungal infections are emerging as critically important threats to global health. The requirement for prolonged treatment regimens and the limited selection of clinically effective, nontoxic antifungal therapeutic options have plagued scientists for decades, and in recent years the evolution of drug-resistant strains and the emergence of new pathogens present a further challenge (Bongomin et al., 2017; Perfect, 2017). For Cryptococcus infections these problems are even more acute, especially in patients suffering from HIV/AIDS. With regional differences, the treatment of cryptococcosis is limited by access to first-line antifungal drugs and by increasing rates of antifungal resistance, which raise morbidity and mortality in immunocompromised individuals (Rajasingham et al., 2017). Exploring new treatment options against Cryptococcus infections that are more effective and less toxic, and that may even slow the evolution of resistant strains, would therefore improve the morbidity and mortality of cryptococcosis. In the present study, the in vitro antifungal tests showed that the MICs of PP against C. neoformans were relatively high; considered together with the survival rates of larvae treated with PP in vivo, PP alone exerted no inhibitory effect against C. neoformans, unlike its antifungal effect in C. albicans, A. fumigatus and E. dermatitidis (Chen et al., 2015; Gao et al., 2018; Sun et al., 2020a; Sun et al., 2020b). However, when PP was combined with the commonly used antifungal drugs in vitro, synergy was detected for all combinations, especially with POS, for which synergism reached 95.0%, with just one isolate showing no interaction. For FLU, one of the most widely used drugs against cryptococcosis in many regions, 35.0% of isolates showed synergy.
The first-line antifungal AmB, often combined with FLU or 5FC in clinical settings, also showed synergy with PP in most isolates (60.0%). The in vivo survival assays confirmed the in vitro data, as PP combined with azoles or AmB improved larval survival; in particular, when PP was combined with POS the larval survival rate increased significantly (P = 0.04). In recent years, PP has been found to be a potent inhibitor of tumour cells, including those of the pancreas, colon, breast and brain, myeloma and other haematological malignancies. Its inhibition of tumour cells is not only selective, sparing the normal, healthy cells of the body, but also operates through different mechanisms among tumour types (Esumi et al., 2004; Harada et al., 2012; Momtazi-Borojeni et al., 2018); inhibition of the Wnt/β-catenin pathway and of mitochondrial activity are the most closely studied of these. For PP's fungistatic activity, however, the underlying mechanisms remain unclear. In the present study, PP alone exhibited no inhibition of C. neoformans in vitro or in vivo, but the combinations showed synergy; we speculate that aneuploidy and its related evolutionary trap (ET) participate in this process. Aneuploidy is a genomic state arising from the gain or loss of chromosomes, and the eradication of aneuploids via dual-stress application constitutes an ET (Torres et al., 2008). Previous studies demonstrated that PP could strongly inhibit the growth of the aneuploidy-based azole-resistant C. albicans strain i(5L) (Selmecki et al., 2006; Chen et al., 2015), and aneuploidy formation appears to be associated with FLU resistance in C. neoformans. Earlier studies also suggest that the endoplasmic reticulum (ER) plays an important role in aneuploidy formation under azole stress, in that ER-associated genes in C. neoformans are amplified under azole stress, although the mechanism by which the ER influences aneuploidy formation remains unknown (Kwon-Chung and Chang, 2012).
ER integrity is essential for fungal cells because ergosterol is produced in the ER and then delivered to the plasma membrane; PP, meanwhile, has been shown to suppress the unfolded protein response (UPR) through glucose starvation, exerting anti-tumour activity and acting synergistically when combined with other drugs (Yu et al., 2008; Ishii et al., 2012; Krishnan and Askew, 2014). In fungi, the UPR is an adaptive signalling pathway whose activation helps eukaryotic cells adapt to ER stress, protecting the fungus from adverse conditions encountered in the host environment, including antifungal drugs, and increasing fungal survival (Sullivan et al., 2006). So, as in tumour cells, PP may disturb the UPR and thereby interfere with the fungal ER stress response; combined with the stress from antifungals, pathogenicity and the drug response would be weakened or even abolished.

Figure 1. Galleria mellonella survival curves following infection with Cryptococcus neoformans. Untreated group, uninfected larvae; Saline group, larvae injected with saline; Yeast group, larvae infected with C. neoformans without any treatment; POS, C. neoformans-infected larvae treated with posaconazole (POS) alone; POS+PP, larvae treated with POS combined with pyrvinium pamoate (PP); ITR, larvae treated with itraconazole alone; ITR+PP, larvae treated with ITR combined with PP; VOR, larvae treated with voriconazole alone; VOR+PP, larvae treated with VOR combined with PP; FLU, larvae treated with fluconazole alone; FLU+PP, larvae treated with FLU combined with PP; AmB, larvae treated with amphotericin B alone; AmB+PP, larvae treated with AmB combined with PP; PP, larvae treated with PP alone (*p < 0.05, ****p < 0.0001).
However, further investigations are needed to elucidate the underlying mechanism. In summary, we show here that PP, an anthelmintic agent that is well tolerated and crosses the blood-brain barrier, enhances the in vitro and in vivo activity of POS against C. neoformans. This study provides an example of a drug-repurposing strategy whereby PP could be used in the treatment of C. neoformans infection in the future.

Data availability statement
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding authors.

Author contributions
YL and JX conceived and designed the study. YL performed all the experiments. YL and SL analysed the data and wrote the manuscript. MC and HF provided general guidance and revised the manuscript. All authors contributed to the article and approved the submitted version.

Funding
This work was supported by the Zhejiang Provincial Natural Science Foundation [LY20H110001 to YL] and the National Natural Science Foundation of China [81701982 to YL].
Inland vessels emission inventory: distribution and emission characteristics in the Red River, Hanoi, Vietnam
Purpose – Shipping is a major source of air pollution, causing severe impacts on the environment and human health, contributing greatly to greenhouse gases and influencing climate change. This research was conducted to provide better insight into the emission inventory of the Red River in Hanoi (Vietnam), which is heavily occupied as the primary route for inner-city waterway traffic.
Design/methodology/approach – The total emissions of seven different pollutants (PM10, PM2.5, SOx, CO, CO2, NOx and HC) were estimated using the SPD-GIZ emission calculation model.
Findings – The results show that CO2 makes the most significant contribution to the gas volume emitted: 103.21 tons/day. Remarkably, bulk carriers are the largest-emitting vehicle type, accounting for more than 97% of total emissions, owing to their superior numbers and large capacity.
Social implications – The results support a roadmap for Vietnam's efforts to fulfil its commitment to achieve its net-zero climate target by 2050, as pledged at COP26.
Originality/value – In this research, the number and types of vessels travelling on the Red River within Hanoi territory and other activity data are reported. The tally data are used to estimate emissions of seven different pollutants (PM10, PM2.5, SOx, CO, CO2, NOx and HC) using a method combining top-down and bottom-up approaches.
Introduction
A significant share of global cargo is transported by waterways, making shipping a very important factor in economic growth worldwide (Ivče et al., 2019; Thach, 2014). Vessels sailing on inland rivers, lakes, canals and reservoirs include the categories listed for ocean-going ships, but in most coastal countries a higher proportion of smaller boats and recreational watercraft prevails (Ivče et al., 2019). Today, waterway transportation is among the most economical modes and among the lowest in pollutant emissions per ton-km of goods transported (Thach, 2014). On average, measured in ton-km, shipping by waterways can save 3.5 to 4.0 times as much fuel as road transportation and up to 1.5 times as much as rail transportation (Blancas and El-Hifnawi, 2014). In Vietnam, estimates by Blancas and El-Hifnawi (2014) for the World Bank showed that by 2030 inland waterway transport will be second only to road transportation, carrying 35% of the total volume of goods transported, corresponding to 395 million tons/year. Nevertheless, inland waterway transport vehicles in Vietnam are mainly small-capacity vessels, many of which are self-constructed and unregistered, making them difficult to manage (Blancas and El-Hifnawi, 2014; JICA, 2010). Furthermore, cheap, poor-quality fossil fuels (diesel, coal, etc.) are often used to operate vessels (MONRE, 2016). The combustion of these fuels produces various air pollutants (i.e. black carbon (BC), particulate matter PM10 and PM2.5, SO2), greenhouse gases (including CO2, N2O, CH4) and other gases. To appreciate the contribution of these pollutants to total local emissions, it is necessary to conduct an emissions inventory (EI) (Khue et al., 2019; Le et al., 2020).
Consequently, an EI is an essential task for managing air quality (MONRE, 2016). Vietnam currently has no national EI programme (Le et al., 2020; MONRE, 2021); however, EIs for one or a few specific fields have been conducted as scientific research in several cities, such as Hanoi, Ho Chi Minh City, Bac Ninh and Can Tho. Notably, EIs for inland vessels are limited to research conducted in the Mekong River Delta area, including Ho Chi Minh City (Bang et al., 2019; Khue et al., 2019) and Can Tho (Bang et al., 2018); none has been conducted in the Red River Delta (RRD) area. Furthermore, most vessel EI research has been conducted only in ports with ocean-going vessels (OGVs), and calculations of emissions from smaller vessels operating on the river are still missing, as in the research in Can Tho (Bang et al., 2018). In this research, the number and types of vessels travelling on the Red River within Hanoi territory and other activity data are reported. The tally data are used to estimate emissions of seven different pollutants (PM10, PM2.5, SOx, CO, CO2, NOx and HC) using a method combining top-down and bottom-up approaches.

Study area and framework
Hanoi currently has seven long interprovincial rivers flowing through its territory (the Red, Duong, Da, Nhue, Cau, Day and Ca Lo rivers). Inside the city there are three short rivers (the To Lich, Kim Nguu and Tich rivers) and two small, narrow rivers (the Set and Lu rivers) (Bao et al., 2019). However, only the Red River and the Duong River are main corridors of the North Vietnam river system with vessel activity (Blancas and El-Hifnawi, 2014). Significantly, the Red River is the largest and most important of the rivers flowing through Hanoi, as it is the water supply source for all the remaining rivers. Therefore, we carried out an EI on the Red River only.
The Red River (RRI, Sông Hồng in Vietnamese) runs through three countries (Vietnam, Laos and China), with a total watershed of 156,451 km^2, flowing 1,200 km south-eastward (Trinh et al., 2017). A 130 km long part of the river, out of the total 510 km in Vietnam, runs from northwest to southeast through the Hanoi capital area (Figure 1) and was selected as the study area; hereinafter it is denoted RRH. The RRH is well known as an arterial waterway of the RRD in Vietnam. Downstream, the RRH widens and has many tributaries convenient for trade, attracting an overwhelming number of waterborne means of transport. The investigation involved tallying the number of waterway vehicles and surveying to determine the total emissions of the RRH. The technical route for estimating air pollutant emissions from the inland vessels of the RRH is shown in Figure 2.

2.2 Data collection
2.2.1 Investigation data. To investigate the EI for inland vessel transportation on the RRH, the method used both activity data and emission factors (EFs). The study area (Figure 1) was separated into two sections for accounting. The first section (S1), 71.35 km long, runs from where the RRI enters Hanoi (Phong Van commune, Ba Vi district) to where the Duong River branches from the RRI (Ngoc Thuy commune, Gia Lam district). At the end of S1, many vessels enter the RRI from the Duong River and vice versa; the marked change in the number of vessels on the RRH there affects the total emissions of the study area. The second section (S2), 57.11 km long, connects with S1 and follows the river downstream to the end of the Hanoi area (Quang Lang commune, Phu Xuyen district).
The subjects selected for the study include the two main types of vessels on the RRH: bulk carriers and ferries. Ferries are divided into two classes: class I comprises large vessels that can carry cars, while class II comprises smaller vessels that can only transport people and small motorized vehicles, such as motorbikes and bicycles, across the river. The larger the vessel, the larger its capacity, and the more severely it affects emissions. Notably, ferries only move horizontally between the two banks to serve people's travel needs, whereas bulk carriers move along the stream and mainly carry local produce, coal and building materials.
The daily number of bulk carriers was counted simultaneously at two sites, Lien Mac port and Khuyen Luong port, representing S1 and S2, respectively (Figure 1). These are two major ports of Hanoi, so they attract a noticeable number of vessels; in particular, they have convenient locations and clear views, suitable for the research teams conducting the tallying process. The hydro-meteorological characteristics of North Vietnam influence the RRH hydrological regime: usually, the flood season (rainy season) lasts from May to October and the dry season from November to May of the following year (Blancas and El-Hifnawi, 2014). The dry season in the study area therefore spans six months, equivalent to about 180 days per year. During the dry season, low water levels and river speeds in the RRH make navigation difficult (Blancas and El-Hifnawi, 2014). However, the tallying was conducted on 3 April 2021, because rain in Hanoi in the preceding days had raised the river water level, creating favourable conditions for waterway transportation. The study results can thereby represent emissions from vessels on the RRH on days with high traffic.
The project selected a tallying method for the EI for the following reason: vessels on the RRH originate from many localities, and many have no registration numbers. As a result, data on inland waterway vehicle numbers from national registry centres would be incomplete. Direct tallying of the vehicles appearing in each section therefore allows the exact number of vehicles to be collected, matching reality. The number of vehicles counted during the study period is presented in Table 1.
2.2.2 Vessel parameters. Engine power data recorded on the registration certificates of inland waterway vessels are inaccurate, because vessels in the study area are usually tuned to improve productivity. Hence, to collect data that are accurate and consistent with the current state of the study area, data on engine power, maximum speed and actual speed were collected by questionnaire and later averaged. The average values of engine power, engine speed and actual speed are shown in Table 2.
2.2.3 Emission factor selection parameters. Data on revolutions per minute (RPM) and fuel type were collected in the survey to select the EFs. The RPM value, available for approximately 68% of the main engines, was used to determine whether an engine is high-speed diesel (RPM > 1,000), medium-speed (300 < RPM ≤ 1,000) or slow-speed (RPM ≤ 300) (ENTEC, 2010). Based on the survey results, most vessels operating on the RRH have an RPM of more than 1,000 and use cheap diesel fuel. Thus, most engines in the study framework are high-speed diesel engines operating on marine gas oil 0.5%S, commonly sold at gas stations for trucks and diesel-powered equipment.
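The ENTEC (2010) speed bands quoted above map directly onto a small classifier; the function name is illustrative.

```python
def engine_class(rpm):
    """Classify a diesel engine by rated speed, using the bands
    quoted from ENTEC (2010): >1,000 high-speed, 300-1,000 medium-speed,
    <=300 slow-speed."""
    if rpm > 1000:
        return "high-speed diesel"
    if rpm > 300:
        return "medium-speed diesel"
    return "slow-speed diesel"

# Most main engines surveyed on the RRH fall in the first band:
print(engine_class(1500))  # high-speed diesel
```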
2.2.4 Ferries activity data. To gather information on operating frequency, we surveyed the ticket gatekeepers of each ferry pier. The number of ferry rides during the study period was then calculated from the survey data and considered similar across piers. In this study, ferry operation is analysed in two modes, cruising and hotelling, where hotelling emissions capture emissions from vessels waiting at the piers before departure (Winijkul, 2020). Whenever ferries enter hotelling mode they are stationary, but the engine keeps running, which affects emissions. The activity data of the ferries are shown in Table 3.
2.2.5 River flow rate. In this study, an EI of waterway transport vehicles was conducted for the dry season only, so the river flow rate was averaged at 1.99 km/h (VMHA, 2021). The river flow rate was not used directly to estimate emissions in the Sustainable Port Development (SPD) model created by the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ, 2015) (hereinafter, the SPD-GIZ model). However, the study conducted an EI for vessels travelling with and against the river currents; thus, the river flow rate was used to calculate the vessel speed as affected by the currents.
2.2.6 Vessel speed. When vessels move against significant river currents, the vessel speed should be calculated as follows: for vessels travelling with the current, the vessel speed is the actual speed plus the river speed; for vessels travelling against the current, it is the actual speed minus the river speed (US EPA, 2009). The vessel speed values for each vessel type calculated in this way are shown in Table 4.
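The with/against-current speed adjustment can be sketched as follows. The 1.99 km/h river speed is the dry-season average quoted above; the 10 km/h actual (through-water) speed is an assumed illustrative value, not the averages from Table 2.

```python
def vessel_speed(actual_speed, river_speed, with_current):
    """Effective speed over ground (US EPA, 2009): add the river speed
    when travelling with the current, subtract it when travelling against."""
    if with_current:
        return actual_speed + river_speed
    return actual_speed - river_speed

# Illustrative bulk carrier at 10 km/h through the water:
print(round(vessel_speed(10.0, 1.99, True), 2))   # downstream: 11.99
print(round(vessel_speed(10.0, 1.99, False), 2))  # upstream: 8.01
```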
2.2.7 Bulk carrier cruising time. For vessels moving along the stream, this research conducted a cruising-mode EI only. Because numerous vessels come from different locations, and most operate spontaneously, it was not possible to know the full schedule of every vessel. The cruising time for each vessel type was calculated by dividing the length of each section by the vessel speed. A vessel found in any section of the study area during the tallying process is assumed to have travelled that whole section, with the cruising times shown in Table 4. This is slightly problematic in that errors could arise for vessels that do not travel the entire stretch as assumed. However, field investigation showed that vessels tend to pass through the study area without stopping at any point inside it. This finding reduces the errors, and the research results are reasonably robust.

Emission inventory
Emissions for the inland vessels of the RRH were estimated using the SPD-GIZ model, a product of technical cooperation between the ASEAN region and Germany. It applies the vessel EI methodologies suggested by the US EPA (US EPA, 2009). The model has proved well suited to Vietnamese waterway EI conditions, having been successfully applied to calculate emissions for the port system in Ho Chi Minh City (Bang et al., 2019; Khue et al., 2018). In essence, the SPD-GIZ model is a well-designed Excel spreadsheet with pre-programmed commands and functions, making it a complete emission calculation programme. The only work needed is to enter the vessel information and activity data collected during the tallying process into the model. Based on the input data, engine EFs are looked up automatically, and emissions for cruising and hotelling conditions are calculated with modified equations such as Equations (1) and (2).
(1) where E is emissions (tons); P is maximum continuous rating power (kW); LF is load factor (per cent of vessel's total power); A is activity (h); EF is emission factor (g/kWh); AS is actual speed (knots) and MS is maximum speed (knots).The EFs used in this study include both EFs under cruising and EFs under hotelling conditions referenced from the US EPA's protocol (US EPA, 2009). Estimation of total air pollutants emission Based on activity data collected, the SPD-GIZ model illustrated the total emissions of seven different pollutants.Thus, the daily emissions of PM 10 , PM 2.5 , SO x , CO, CO 2 , NO x and HC are 0.052, 0.048, 0.32, 0.23, 103.21, 2.05 and 0.11 tons/day, respectively.It could be seen that CO 2 has the most significant contribution to the gas volume emitted, 103.21 tons/day, due to CO 2 having the most prominent EF out of all seven air pollutants.In contrast, hydrocarbons (HC) have a low emission volume, only about 0.11 tons/day.According to the data in Figure 3 Vessels' emission inventory in the Red River 3.2 Emission in two river sections (S1 and S2) After the fieldwork and observing the vessels' navigation habits, the study area was divided into two sections: The location where Duong River is branching from the RRI was chosen to separate the two sections.As can be seen in the bar chart from Figure 4, it is important to note that the total emissions of bulk carriers are twice as in the S1 compared to the S2.In contrast, emissions from ferries in the S2 will be more outstanding than in the S1.The S2 has significantly few major road bridges over the river, making ferries the primary means of transport for people to move across the river.Therefore, the S2 has more ferry piers than the S1 and mostly only class I ferries can be seen here.The emissions from class II ferries are equal since both sections have the same number of class II ferries piers.However, the emissions from both classes of passenger ferries are minimal and insignificant compared 
to those from bulk carriers.So, although the emissions from the ferries in the S2 are more significant compared to S1, data in the pie chart from Figure 4 suggest that total emissions in the S1 are much more abundant than those in the S2.All pollutants emission in the S1 accounts for more than 67%.In addition, this disparity is due to the sudden change in vehicles where the Duong River branches from the RRI.Many vessels travel through the S1 but do not move through the S2 and turn into the Duong River.Thus, the vessels in the S2 are significantly reduced compared to the S1, leading to notable emissions in the S1. Emissions by vessel types The subjects of this study include bulk carrier, class I ferry and class II ferry.Each has different technical specifications, activity data and the number of vessels operating during the study period, causing extreme discrepancies in emissions between vessel types.Total emissions as categorized by vessel type are shown in Figure 5.The results show that bulk carriers are the largest emission vehicle (accounting for more than 97% of total emissions) due to their superior number and large capacity.Meanwhile, class II and class I ferries were only responsible for nearly 0.5 and 2.5% respectively.The total emissions of the study area were mainly contributed by vessels travelling along the river with about 97%.Otherwise, passenger ferries have negligible emissions: only nearly 3% of total emissions. 
Emission on vessels affected by river currents Vessels in the study area were evaluated according to two types of movement: moving along the stream and moving across the stream, in which, vessels that move along the stream are affected by the river currents.It is clear from Figure 6 that vessels travelling with the currents are the primary source of emissions among vessels that move along the stream, accounting for 60.21%.Meanwhile, emissions from vessels travelling against the currents account for 39.79%.Our results are in significantly good agreement with (JICA, 2010): Waterway transport from the west of Hanoi mainly carries goods downstream and returns without cargo (empty) (JICA, 2010).The tally data also shows that the number of vessels travelling Vessels' emission inventory in the Red River with the current is much higher than that travelling against the current, causing significant emissions differences. Emission on two operation modes In our study, ferries moving across the river were inventoried in two operation modes: cruising and hotelling.Our results differ from those in the previous study by Winijkul (2020) as our study shows that cross-river ferries PM 2.5 emissions in the hotelling mode were higher than those in cruising mode.For emissions shared between cruising and hotelling conditions (Figure 7), the emissions from hotelling dominated as their figures are almost double.In contrast, the percentage of HC emissions in cruising is higher (60.46 vs 39.54%).While the number of CO emission coming from these two types are almost the same at approximately 50%.It is notable that both hotelling and cruising (release the same amount of NO x , CO 2 , SO x ) about 70 and 30% respectively.This difference refers to the significant distinction between the actual cruising speed and maximum speed in both classes of ferries.Due to the short travel distance per ride (about 600 meters), ferries only operate at a very low actual speed (7 km/h) compared to the maximum 
speed (18 km/h). Hence, the ferries' main-engine load factor in cruising mode is tiny, which reduces emissions. Conclusions Air pollutant EIs for inland vessels on the RRH were estimated using the SPD-GIZ emission calculation model. The results show that the largest amount of gas emitted from this activity is CO2 (103.21 tons/day). The rest comprises PM10, PM2.5, SOx, CO, NOx and HC, with emissions of 0.052, 0.048, 0.32, 0.23, 2.05 and 0.11 tons/day, respectively. It is also remarkable that the bulk carrier is the largest-emitting vessel type, accounting for more than 97% of total emissions. In the future, comprehensive studies and annual EIs for waterway vehicles are needed to develop a more comprehensive national clean air programme.
Figure 1. Overview of the Red River runs in Hanoi capital. This map was generated using ArcGIS version 10.2 (https://desktop.arcgis.com/en/arcmap/)
Figure 2. Technical route to obtain the inland vessel EI of the Red River in Hanoi capital; the EI was separated into three phases: study area selection, data collection and emission estimation.
Figure 3. Ratio of pollutant emissions (%). As expected, and in good agreement with Reşitoğlu et al. (2015), NOx has the highest proportion of vessel pollutant emissions, with a rate of 74.21%. Nevertheless, in contradiction to Reşitoğlu et al. (2015), SOx has the second-highest proportion of pollutant emissions, owing to the fact that sulphur dioxide (SO2) can be released during the combustion of high-sulphur-content fuels (Reşitoğlu et al., 2015).
Figure 4. Total emissions in the two river sections.
Figure 5. Total emissions of four vessel types.
Figure 7. Total emissions between cruising and hotelling conditions.
Table 3. Activity data of ferry piers. The study applied the vessel EI methodologies suggested by the US EPA.
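Vessel emission inventories of this kind commonly estimate the main-engine load factor with the propeller (cube) law; whether the SPD-GIZ model uses exactly this form is an assumption here, not stated in the text. Applying the cube law to the ferry speeds quoted above shows why the cruising-mode load factor is tiny:

```python
def propeller_law_load_factor(actual_speed, max_speed):
    """Main-engine load factor under the cube ('propeller') law,
    a common approximation in vessel emission inventories."""
    return (actual_speed / max_speed) ** 3

# Ferry speeds quoted in the text: ~7 km/h actual vs 18 km/h maximum.
load_factor = propeller_law_load_factor(7.0, 18.0)
```

At 7 of 18 km/h the cube law gives a load factor below 6%, consistent with the observation that cruising emissions for ferries are small.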
2022-02-27T16:19:43.689Z
2022-02-28T00:00:00.000
{ "year": 2022, "sha1": "a3a2c25f1591e7b6771e8e7956846f5b2884d768", "oa_license": "CCBY", "oa_url": "https://www.emerald.com/insight/content/doi/10.1108/FEBE-11-2021-0052/full/pdf?title=inland-vessels-emission-inventory-distribution-and-emission-characteristics-in-the-red-river-hanoi-vietnam", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "14f8b0c52c046f014e24906ee68088586ba87d2b", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
247010620
pes2o/s2orc
v3-fos-license
The Effectiveness of Low Dead Space Syringes for Reducing the Risk of Hepatitis C Virus Acquisition Among People Who Inject Drugs: Findings From a National Survey in England, Wales, and Northern Ireland Abstract Syringes with attached needles (termed fixed low dead space syringes [LDSS]) retain less blood following injection than syringes with detachable needles, but evidence that they reduce blood-borne virus transmission among people who inject drugs (PWID) is lacking. Utilizing the UK Unlinked Anonymous Monitoring cross-sectional bio-behavioral surveys among PWID for 2016/18/19 (n = 1429), we showed that always using fixed LDSS was associated with 76% lower likelihood (adjusted odds ratio = 0.24; 95% confidence interval [CI]: 0.08–0.67) of recent hepatitis C virus infection (RNA-positive and antibody-negative) among antibody-negative PWID compared to using any syringes with detachable needles. Hepatitis C virus (HCV) is a blood-borne virus that heavily affects people who inject drugs (PWID) [1]. The primary interventions for preventing HCV transmission among PWID are needle and syringe programs (NSP) and opioid substitution therapy (OST) [2]. PWID use either syringes with fixed or detachable needles. Syringes with fixed needles are traditionally termed low dead space syringes (fixed LDSS) because their design minimizes the amount of dead or residual space between the syringe hub and needle when the plunger is fully depressed [3,4]. Conversely, traditional syringes with detachable needles have greater dead space and are termed high dead space syringes (HDSS). Recent modifications to these syringes have reduced their dead space, and such syringes are denoted detachable LDSS. Laboratory studies suggest that fixed LDSS transfer less virus than detachable LDSS and HDSS when re-used, while detachable LDSS transfer less virus than HDSS [4,5].
Epidemiological studies suggest lower human immunodeficiency virus (HIV) and HCV prevalence among PWID that use fixed LDSS compared to those that use HDSS [6][7][8]. No studies have evaluated whether use of LDSS is associated with reduced incident infection. The World Health Organization (WHO) [9] recommends that NSPs provide and encourage the use of LDSS by PWID. However, fixed LDSS only come in a limited range of volumes and needle gauges, with studies showing that PWID prefer greater variety to meet their differing needs [6,10,11]. Some PWID also prefer detachable needles so that a needle can be swapped during an injecting episode if it becomes blunt [10]. This preference for syringes with detachable needles led to the development of detachable LDSS, with numerous settings expanding their distribution [12] to minimize the risks associated with using syringes with detachable needles. Our recent UK cost-effectiveness analysis suggested that this strategy could be cost-saving [13]. This analysis tests the hypothesis that using syringes with less dead space could reduce the risk of HCV acquisition. Data This analysis focusses on the association between usage of fixed LDSS and the risk of recent HCV infection. We utilized the Unlinked Anonymous Monitoring (UAM) Survey, an annual cross-sectional bio-behavioral survey of people who have ever injected psychoactive drugs recruited from specialist harm reduction services across England, Wales, and Northern Ireland; the UAM Survey has been described elsewhere [14]. Those who participated in the survey completed a questionnaire about their drug use behaviors and demographics and provided a dried blood spot (DBS) sample that was tested for HCV antibodies (anti-HCV). From 2016, DBS samples that tested negative for anti-HCV were also tested for HCV RNA [14], indicating a recent primary HCV infection. Further details are in Supplementary Materials.
Participants were included in this analysis if they reported injecting in the past month, tested antibody-negative, and had an RNA test result. For each participant, we calculated the self-reported percentage of syringes used in the past month that had either detachable or attached/fixed needles, excluding participants who received no needles (details in Supplementary Materials). A binary variable was created for PWID that received 100% fixed LDSS (full use of syringes with fixed needles) or < 100% fixed LDSS (any use of syringes with detachable needles). We used multiple imputation by chained equations to account for missing data in covariates or the fixed LDSS variable, using 25 imputed data sets. Statistical Methods We used logistic regression to estimate the unadjusted and adjusted association of 100% fixed LDSS use with recent primary HCV infection compared to < 100% fixed LDSS use. Variables assessed for inclusion in the adjusted model were pre-selected based on our previous analysis of associations of LDSS use with HCV prevalence (see Supplementary Materials) [6]. Ethics The UAM Survey has longstanding multisite ethics approval from London Research Ethics Committee (98/2/051) and the UK Health Security Agency (UKHSA: previously Public Health England). Demographics and Injecting Characteristics We included 1031 participants with information on type of syringe used in the past month and 434 with imputed values for the fixed LDSS variable, giving 1465 participants in total. Of these, 63.8% always used fixed LDSS, 25.5% always used syringes with detachable needles, and 10.7% used both. Among 1465 PWID analyzed (Table 1), 92.4% had injected heroin in the past month, and 46.9% had injected crack. The mean age was 37.3 years, 26.2% were female, and the mean duration of injecting was 13.0 years. There were 33 (2.3%) recent primary HCV infections (antibody-negative participants testing RNA-positive) in the sample.
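Chained-equations imputation, as used here, iteratively regresses each incomplete variable on the others and fills the gaps with predictions. The sketch below is a heavily simplified single-data-set illustration in NumPy; full MICE, as in the paper, draws 25 imputed data sets with proper stochastic draws. All data and variable relationships are synthetic:

```python
import numpy as np

def mice_lite(X, n_iter=10):
    """Very simplified chained-equations imputation: each column with
    missing values is iteratively regressed (OLS) on the other columns
    and its missing entries replaced by the fitted predictions."""
    X = X.astype(float)
    miss = np.isnan(X)
    # Start by filling missing entries with column means.
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])
            beta, *_ = np.linalg.lstsq(A[~miss[:, j]], X[~miss[:, j], j],
                                       rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ beta
    return X

rng = np.random.default_rng(0)
data = rng.normal(size=(50, 3))
data[:, 2] = 2 * data[:, 0] + 0.1 * rng.normal(size=50)  # col 2 depends on col 0
data[::5, 2] = np.nan                                    # knock out every 5th value
completed = mice_lite(data)
```

Because the regressors here are fully observed, the imputations recover the underlying linear relationship closely; real survey data (as in this study) are far messier, which is why proper multiple imputation with several data sets is preferred.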
Characteristics of the PWID always using fixed LDSS and those using any syringes with detachable needles were similar, except that fewer in the 100% fixed LDSS group had injected into the groin (13.5% vs 55.7%). LDSS Use and Risk of HCV Acquisition Over the whole sample (for the first imputed data set), there were fewer recent HCV infections among individuals always using fixed LDSS (1.3%; 95% confidence interval [CI]: 0.7-2.3%) than among individuals using any syringes with detachable needles (3.8%; 95% CI: 2.5-5.8%). These percentages were similar in a complete-case analysis. Compared to any use of syringes with detachable needles, exclusive use of fixed LDSS was associated with lower odds of having recent HCV infection (Table 2; adjusted odds ratio [aOR] 0.24; 95% CI: 0.08-0.67; P = .007). The only other variable associated with recent infection was injecting crack in the past month (aOR 3.09; 95% CI: 1.24-7.69). The association between LDSS use and recent HCV infection was slightly attenuated if imputation was not used: aOR 0.31 (95% CI: 0.12-0.81; P = .016). Although the odds ratios (ORs) for other variables remained consistent between the univariable and multivariable analyses, the OR for injecting in the groin went from 1.16 (95% CI: 0.54-2.47) to 0.59 (95% CI: 0.24-1.47). DISCUSSION Our analysis shows for the first time that exclusive use of low dead space syringes with attached needles (fixed LDSS) could be associated with reduced risk of HCV acquisition among PWID compared to using syringes with detachable needles. Comparison With Other Studies Our study is consistent with and builds on previous laboratory studies [4,5,13] by producing the first empirical estimate for the effectiveness of using fixed LDSS to reduce the risk of HCV acquisition. Consistent with this study, previous studies have found that injecting crack or other stimulants is associated with heightened HCV incidence [14,15].
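The odds-ratio arithmetic behind these figures is simple: the abstract's "76% lower likelihood" is just 1 minus the adjusted OR of 0.24, and an unadjusted OR can be formed from a 2x2 table. The cell counts below are illustrative reconstructions from the reported proportions (roughly 1.3% of ~935 fixed-LDSS users vs 3.8% of ~530 detachable-needle users), not the paper's exact table:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio: odds (a events vs b non-events) in the exposed group
    divided by odds (c vs d) in the unexposed group."""
    return (a / b) / (c / d)

# Hypothetical counts reconstructed from the reported proportions.
unadjusted_or = odds_ratio(12, 935 - 12, 20, 530 - 20)
percent_reduction = 100 * (1 - 0.24)  # from the reported adjusted OR of 0.24
```

The reconstructed unadjusted OR lands near the paper's complete-case estimate of 0.31, and the percent reduction works out to 76%, as stated in the abstract.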
Recent systematic reviews have found that currently being on OST or high-coverage NSP can reduce HCV acquisition risk [2], although incarceration [16] or homelessness [17] can increase HCV acquisition risk. Our study findings broadly agree with these systematic reviews, although our results lack power. The only exception is high-coverage NSP, where our study suggests no association with reduced HCV risk [14]. Strengths and Limitations Our analysis's main strength was that we could assess whether the use of syringes with fixed or detachable needles was associated with recent incident HCV infection; however, there were limitations. First, we used a marker of recent infection instead of the gold standard for incidence studies, longitudinal follow-up to identify new infections. The short window period associated with this marker means only 33 incident infections were identified. This dependence on so few incident infections emphasizes the importance of replicating the study in other settings. Additionally, using a marker of recent infection means there could be some misclassification of recent infections, although previous studies suggest this should be small [18]. Our analysis depended on self-reported data for all behavioral and intervention-related factors, which may bias some variables, such as sharing of injecting equipment, due to the stigma associated with this behavior. This bias could mean that the association of injecting equipment sharing with incident HCV infection may be masked in this dataset. It is unlikely to explain the lower risk of HCV infection associated with using fixed LDSS, because fixed LDSS use is associated with greater equipment sharing (19.9% vs 16.1%). Our analysis was also limited by using a variable that could only distinguish between syringes with fixed or detachable needles. This meant we could only assess whether using syringes with attached needles (fixed LDSS) was associated with reduced infection risk.
This is still crucial information because it suggests that syringe dead space is an important determinant of infectivity. Many survey participants also did not complete all the questions needed to create the LDSS variable. This meant that we relied on imputed data in our main analysis; however, associations were similar when imputed data were not used. Due to the observational nature of our study, we cannot rule out confounding factors that may be associated with both the risk of HCV acquisition and use of LDSS. Controlling for a wide range of potential confounders minimizes this risk. Our sample was mostly heroin injectors who had been injecting for over a decade, which may limit its generalizability to younger injecting cohorts or those predominantly using stimulants. Our analysis did not consider whether use of LDSS reduces the risk of HIV acquisition; data are needed on this. IMPLICATIONS AND CONCLUSIONS That the use of fixed LDSS is associated with a large reduction in an individual's risk of HCV acquisition suggests that a syringe's dead space is an important determinant of its infectivity. We encourage further studies to collect data on LDSS exposure to corroborate our findings, ideally with longitudinal follow-up. Nonetheless, given this evidence and our cost-effectiveness data [13], programs should encourage PWID to use fixed LDSS to minimize their risk of acquiring HIV and HCV infection, and provide syringes with detachable needles that minimize the dead space associated with that type of syringe. These findings have global implications because they suggest NSPs should focus on how they minimize the dead space of the syringes that they distribute, while still meeting the varying syringe needs of PWID [6,10,11]. Although there are now many different syringe options that attempt to minimize the dead space of syringes with detachable needles (detachable LDSS), studies suggest that some have greater dead space than others [4].
It is therefore important that different types of detachable LDSS are evaluated using standard methods to determine their dead space and to assess their acceptability for PWID [10]. This needs to feed into international guidance on the best syringes for NSPs to use for improving their effectiveness, something that is important for achieving HCV and HIV elimination among PWID. These changes need to occur in parallel to increases in NSP coverage, which is currently low globally [19]. Supplementary Data Supplementary materials are available at Clinical Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author. Notes Author contributions. P. V. and A. T. had the original concept for the study and developed the analysis plan. A. T. performed the analyses with input from P. V. and S. C.; P. V. wrote the first draft of the article with input from A. T. Also, S. C., E. E., S. I., M. D., and C. E. collected, provided, and verified the data. A. T., M. D., M. H., J. K., C. T., S. C., E. E., C. E., and P. V. contributed to data interpretation, writing the report, and approved the final version.
2022-02-22T06:23:05.781Z
2022-02-20T00:00:00.000
{ "year": 2022, "sha1": "9cfdaf9ff0054e04737ac524c1dd78fa8fb59bc0", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/cid/advance-article-pdf/doi/10.1093/cid/ciac140/43516438/ciac140.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "a31a0d6bb262661be60612bba6862b37f594c866", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
220970244
pes2o/s2orc
v3-fos-license
Cobbler’s Awl Causing a Rare Pediatric Paraspinal Injury Managed Using 3D CT Pediatric spinal injuries are very uncommon, accounting for a small percentage of all spinal injuries. Domestic accidents such as falling and bumping are frequent events during childhood. In this case report, we present a rare penetrating trauma by a cobbler’s awl at the paraspinal level. The patient was referred to the ED after a needle became impaled in his back due to an accident that occurred at home. The patient’s neurologic assessment was normal. A radiologic study of the patient showed a cobbler’s awl penetrating the paravertebral muscle at the level of the fourth lumbar vertebra. The needle was removed promptly via an emergency surgical procedure. No postprocedural complications occurred. Introduction Pediatric spinal injuries are very rare conditions and account for 1%-10% of all spinal injuries [1]. Children often get involved in falls and trips and thereon suffer injuries of varying severity, some requiring surgical intervention. Amongst these, penetrating injuries involving the spinal and paraspinal area secondary to sharp devices are rare. The case of a young boy is presented here, in whom a cobbler's awl pierced the paraspinal region during a domestic household accident. Our case is unique in the unusual location and the instrument causing this mode of injury, which to our knowledge has not been reported to date. Case Presentation A seven-year-old male child of a local cobbler presented to the ED with a history of accidental impalement with a cobbler's awl. He was transported in a lateral position with the awl in situ. The child was conscious and oriented, with vital parameters recorded at a pulse rate of 110 per minute and blood pressure of 100/60 mmHg. On examination, the awl was seen impaled to the left of the spine at the level of the fourth lumbar vertebra (L4), 3 cm above the left sacral bone (Figure 1). FIGURE 1: Patient lying with cobbler's awl impaled in the back.
A neurologic examination of his limbs showed a complete range of movement with normal strength, and normal anal tone and contraction. His abdomen was soft and he voluntarily passed clear urine. He was put on systemic antibiotics and, once stable, was taken up for a CT scan with 3D reconstruction to visualize the injury and plan the appropriate surgical intervention. The scan revealed that the awl had missed the spine and was lodged in the soft tissue around the lumbar vertebra at the L4 level (Figure 2). The lateral view demonstrated that the path of the awl was between the L3 and L4 vertebrae, and that the direction and depth of penetration were deflected off the body of the fourth lumbar vertebra. There was no obvious injury noted to the surrounding structures (Figure 3). FIGURE 3: 3D reconstruction lateral view showing entire track of cobbler's awl. He was posted for surgery for removal of the awl under general anesthesia, and the awl was removed cautiously, after which there was no active bleeding or discharge. The wound was washed copiously with warm saline. The postoperative period was uneventful; after a period of three days, a neurologic examination ensured he had no neurologic deterioration and he was discharged. Discussion Spine injuries are rare in children. Osenbach and Menezes studied childhood spinal trauma in 179 children and found that cervical trauma (63%) was the most frequently encountered condition, while thoracic (13%), thoracolumbar (11%), and lumbar (14%) trauma were rare [2]. The etiology of pediatric injuries differed from that of adult injuries in that falls were the most common causative factor (56%), followed by vehicular accidents (23%) [3]. We wish to report our case to highlight three aspects which, to our knowledge, are unusual. Firstly, the uniqueness of the implement of injury. Trauma to the spine from penetrating foreign objects is rare.
The instruments of such injuries include knives, wooden materials, glass, pencils, firearms, and screwdrivers; a cobbler's awl, as described herein, has not been reported before. Secondly, to highlight the behavior, clinical approach, and sequelae of such trauma in a pediatric group. The spinal column of children is different from that of the adult. As children grow, the ossification centers enlarge, leading to a reversal of the cartilage/bone ratio [4]. This is also thought to be the reason why neurologic recovery in children with spinal cord injuries is better than that in the adult population, as adequately demonstrated in our patient. Penetrating spinal trauma in children also differs in therapeutic approach from that applied to adults. A child may not be able to express pain and sensitivity, and therefore a complete systemic examination must be conducted and local dermal lesions and injuries detected. In the pediatric age group, it may be difficult to determine the site of entrance of a foreign object, as this relies heavily on the history, which in this group may not be very accurate. Hence, our third point emphasizes the role of appropriate diagnostics in managing such situations effectively. The CT scan can illuminate the trail of the injury and help in the safe removal of the foreign body during surgery. The 3D reconstruction of plain CT images gave us an added advantage in assessing the depth and path of injury caused by the awl and ruled out the possibility of other organ injuries or complications. A broken piece of foreign body may present later with neurologic symptoms; hence, it is imperative not to miss such injuries before surgery [5,6]. Metal artifacts may obscure some images; however, bone-density images show the relationship between the metallic object, the spinal cord, and bony fragments [7].
Preoperative magnetic resonance (MR) is not recommended in such cases due to the risk of movement of the foreign body induced by the strong magnetic field, which in turn may worsen the neurologic deficit [8]. Infections originating from the normal dermal flora may complicate such injuries; hence, prophylactic antibiotic therapy must be started against these bacteria [9]. In our case, ceftriaxone was used as the prophylactic antibiotic and no infections developed. Anatomically, direct central backstabbings rarely produce injuries to the spinal cord and central retroperitoneal structures due to the protection provided by the layers of muscle and the spinal column, with the spinous and transverse processes deflecting blades laterally [10]. No immediate or delayed complications developed in our patient. Conclusions Trauma due to penetrating foreign objects in the pediatric spinal region is important because of the location, and therefore early surgical intervention should be considered. The use of 3D reconstruction CT images to assess the path of penetrating foreign objects, the penetrated tissue, the site and the damage caused helps to precisely dictate the nature of the surgery and its outcome. Managing pediatric spinal injuries is a challenge, as the clinical approach and the recovery process are different; using 3D CT reconstruction helped in our case, and we wish to highlight these aspects with our case report. Additional Information Disclosures Human subjects: Consent was obtained by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work.
Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2020-07-02T10:31:31.854Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "d1098e1a222a21365fab2123412c884e5115c80e", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/35088-cobblers-awl-causing-a-rare-pediatric-paraspinal-injury-managed-using-3d-ct.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d19efaca13d2e6019131bbd5199c3ef2b483f31e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250685981
pes2o/s2orc
v3-fos-license
Trapped Bose-Fermi mixture in an optical lattice We study distinctive features of the local Mott insulator state of trapped Bose-Fermi mixture systems on an optical lattice by using Monte Carlo simulations of the one-dimensional Bose-Fermi-Hubbard model. It was found that each species of the bosons and the fermions in a Mott insulator state had a finite stiffness, but the density correlation function exhibited a quasi-long-range order. This strongly suggests that in actual experiments the bosons and fermions at a filling commensurate with the lattice sites would form a Mott insulator state under strong interatomic interactions and have an order in their positional configuration. Introduction Technologies for confining ultra-cold atoms on optical lattices have opened up new possibilities for quantum phases. Bosonic atoms on a lattice, for example, can become a Mott insulator when the interatomic repulsive interactions and the atomic density per site meet a certain condition [1,2,3]. The confinement potential creates a non-uniform density profile, resulting in the coexistence of superfluid and Mott insulator phases. The boson density has a peak at the deepest position of the confinement potential and decreases away from the peak position. It may reach an integer value at some points, where the commensurability with the lattice sites is locally satisfied and a local Mott insulator appears under sufficiently large repulsive interactions. The density profile could thus have several plateau areas, called Mott plateaus, at which a local Mott insulator is formed. The situation is more complicated with trapped Bose-Fermi mixture systems on an optical lattice [4]. The density profile of trapped bosons and fermions on an optical lattice could also have Mott plateaus, as one sees in the boson systems.
However, in trapped Bose-Fermi mixture systems in a confinement potential, only the total number of bosons and fermions at each site is fixed to an integer value in the Mott plateau; the number of atoms of each species is not fixed. In other words, in the local Mott state of mixture systems the total number of atoms does not fluctuate at each site, but the number of fermions or that of bosons at each site can fluctuate, each compensating the other's fluctuation so as to fix the total local density. From this viewpoint, the bosons and fermions seem to be able to move in the local Mott state, unlike in bosonic or fermionic Mott insulators. This picture leads to a fundamentally new issue of Mott states. In this study we conducted quantum Monte Carlo simulations of the one-dimensional Bose-Fermi-Hubbard model to address this issue and to find distinctive features of the local Mott state of Bose-Fermi mixture systems. Since the Mott plateau in the density profile of the mixture systems realized in numerical simulations is too small to examine in detail, we removed the confinement potential and instead tuned the number of atoms to realize commensurability. The paper is organized as follows. Section 2 describes the model that we used in the simulations and Sec. 3 presents the results. Conclusions are given in Sec. 4. Model We performed world-line quantum Monte Carlo (QMC) simulations of bosons and fermions on a one-dimensional periodic lattice [5,6,7]. We employed the one-dimensional Bose-Fermi-Hubbard Hamiltonian, given by $H = -t_b \sum_i (b_i^\dagger b_{i+1} + \mathrm{h.c.}) - t_f \sum_i (f_i^\dagger f_{i+1} + \mathrm{h.c.}) + \frac{U_{bb}}{2} \sum_i n_{bi}(n_{bi}-1) + U_{bf} \sum_i n_{bi} n_{fi}$, where $U_{bb}$ denotes the boson-boson interaction and $U_{bf}$ the boson-fermion interaction. We set $t_b = t_f = 1$ as the energy unit. Assuming that all the interatomic interactions were repulsive, we measured the charge stiffness and the B-F correlation function, which we define below, to clarify the Mott-state properties of the mixture with significantly strong interactions.
To observe the stiffness of the system, we measured the following current-current correlation functions in the zero-frequency limit, which correspond to the averages of the squared winding numbers [6,7]: $J_b = \lim_{\omega \to 0} \langle j_b(\omega) j_b(-\omega) \rangle$ and $J_f = \lim_{\omega \to 0} \langle j_f(\omega) j_f(-\omega) \rangle$. Here $j_b(\omega)$ denotes the Fourier transform of the current operator $j_b(\tau)$ of the bosons, with $\tau$ being the imaginary time in the path-integral formalism, and $j_f(\omega)$ that of the fermions. For the observation of the particle configuration in the local Mott state, we measured the B-F correlation function, defined by $C(l) = (-1)^l \langle S_i S_{i+l} \rangle$, where $S_i = n_{bi} - n_{fi}$. Since in the limit $U_{bb}, U_{bf} \to \infty$ our Hamiltonian is equivalent to an antiferromagnetic Heisenberg chain with the constraint that the number of up-spins (say, bosons in our model) and that of down-spins (fermions) are fixed, we can expect a staggered spin-density-wave-like correlation for C. It was already shown that two-component systems undergo a mixing-demixing transition [8,9,10] at a certain value of $U_{bf}$, and that the transition point shifts as $U_{bb}$ increases [11]. We therefore need to choose the interaction parameters appropriately to prevent demixing of the bosons and fermions. Stiffness We first present the results of the stiffness measurements in the QMC simulations of 30 bosons and 30 fermions on 60 sites at a temperature $T = 0.04$ with the Trotter decomposition number $N_\tau = 200$. Figure 1 shows the boson stiffness $J_b$, the fermion stiffness $J_f$ and the total stiffness $J_{tot}$ as functions of the boson-fermion interaction $U_{bf}$, with the boson-boson interaction $U_{bb}$ fixed to 10. We see from the figure that, at around $U_{bf} = 2$, $J_{tot}$ becomes zero and the mixture forms a Mott insulator. (We also checked the state by measuring the compressibility of the mixture system and verified that the system was incompressible in the Mott insulator state.) On the other hand, $J_b$ and $J_f$ remain finite in the Mott insulator state. Therefore the fermions and bosons apparently move around, keeping the total local density fixed to an integer value.
$J_b$ and $J_f$ have the same value in the Mott state because, when the total number of atoms on each site is fixed to 1, the bosons and the fermions move only by exchanging their neighboring positions. Namely, when we have a boson current, we always have a fermion current in the opposite direction. We show in Fig. 2 the current-current correlation function of the bosons for different system sizes, with the total density of the bosons and fermions kept unchanged. We see almost no recognizable size-dependence of the correlation function, which indicates that the finite stiffness $J_b$ is not induced by a finite-size effect. So, apparently, we have a strange Mott insulator of the boson-fermion mixture, inside which each species of atoms moves by switching positions. As shown below, however, this could be a special case realized in one dimension and would not be expected in three-dimensional systems. B-F correlation function Next we present the results for the B-F correlation function C(l) in Figs. 3 and 4. As mentioned above, our Hamiltonian is equivalent to the one-dimensional antiferromagnetic Heisenberg model when $U_{bb}$ and $U_{bf}$ are both infinite. We know that the B-F correlation (the SDW correlation in the language of spins) of the antiferromagnetic Heisenberg model exhibits a power-law decay $l^{-1}$, since we can only have a quasi-long-range order in one dimension. In Fig. 3, C(l) demonstrates a power-law decay $l^{-\alpha}$ for 30 bosons and 30 fermions on 60 sites with $U_{bf}$ fixed to 3.0. The power $\alpha$ depends on the interaction strength and becomes closer to 1 as the interactions get stronger. Figure 4 shows C(l) for 40 bosons and 20 fermions. In this case, the factor $(-1)^l$ in the definition of C(l) was modified appropriately according to the number ratio of the bosons and the fermions. (To be more specific, the factor was changed to $(-1)^{l'}$, where $l' = 1$ when $l$ is a multiple of 3 and $l' = 0$ otherwise.) We see the power-law behavior of the correlation function also in Fig. 4.
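The exponent alpha in a power-law decay C(l) ~ l^(-alpha) is typically extracted by a linear fit on a log-log scale. A minimal sketch follows, where a synthetic data set stands in for the measured QMC correlation function:

```python
import numpy as np

def fit_power_law_exponent(l, C):
    """Fit C(l) ~ l**(-alpha) via linear regression of log C on log l;
    the (negated) slope is the power-law exponent alpha."""
    slope, _intercept = np.polyfit(np.log(l), np.log(C), 1)
    return -slope

# Synthetic correlation data close to the l**(-1) Heisenberg-limit decay.
l = np.arange(1, 21, dtype=float)
C = l ** -1.0 * (1.0 + 0.01 * np.sin(l))
alpha = fit_power_law_exponent(l, C)
```

For real QMC data one would restrict the fit to distances small compared to half the system size, where periodic-boundary effects are weak; that refinement is omitted in this sketch.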
Conclusions We performed QMC simulations of one-dimensional Bose-Fermi mixture systems on a periodic optical lattice with strong repulsive interactions and found that the strong interactions could drive the systems into a Mott insulator state. The calculation of the B-F correlation function indicated the presence of an antiferromagnetic quasi-long-range order in which the bosons and the fermions align alternately. We could therefore expect a clear long-range order in a three-dimensional mixture. The stiffness calculation showed that the bosons and fermions moved inside the Mott insulator, unlike in a pure-boson or pure-fermion Mott insulator. However, we believe that this is characteristic of one-dimensional systems and would not be observed in three-dimensional ones. In one dimension, we have only a quasi-long-range order in the atom configuration inside the Mott insulator, so the bosons and fermions are located alternately only approximately, which does not contradict the finite stiffness of each species of atoms. In three dimensions, however, this quasi order would become a true long-range order and the bosons and fermions would align exactly alternately (if the number of the bosons and that of the fermions are the same), which prohibits the motion of the bosons and fermions. In actual experimental situations, therefore, the bosons and fermions of the mixture systems would have a rigid order in their positional configuration in the Mott insulator state.
2022-06-27T23:45:18.007Z
2009-01-01T00:00:00.000
{ "year": 2009, "sha1": "30bb7dd947ee29df3209d5cea16789f54d088075", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/150/3/032050", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "30bb7dd947ee29df3209d5cea16789f54d088075", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
221219506
pes2o/s2orc
v3-fos-license
Immune profiling of plasma-derived extracellular vesicles identifies Parkinson disease Objective To develop a diagnostic model based on plasma-derived extracellular vesicle (EV) subpopulations in Parkinson disease (PD) and atypical parkinsonism (AP), we applied an innovative flow cytometric multiplex bead-based platform. Methods Plasma-derived EVs were isolated from patients with PD, matched healthy controls, and patients with multiple system atrophy (MSA) or AP with tauopathies (AP-Tau). The expression levels of 37 EV surface markers were measured by flow cytometry and correlated with clinical scales. A diagnostic model based on EV surface marker expression was built via supervised machine learning algorithms and validated in an external cohort. Results Distinctive pools of EV surface markers related to inflammatory and immune cells stratified patients according to the clinical diagnosis. PD and MSA displayed a greater pool of overexpressed immune markers, suggesting a different immune dysregulation in PD and MSA vs AP-Tau. The receiver operating characteristic curve analysis of a compound EV marker showed optimal diagnostic performance for PD (area under the curve [AUC] 0.908; sensitivity 96.3%, specificity 78.9%) and MSA (AUC 0.974; sensitivity 100%, specificity 94.7%) and good accuracy for AP-Tau (AUC 0.718; sensitivity 77.8%, specificity 89.5%). A diagnostic model based on EV marker expression correctly classified 88.9% of patients, with reliable diagnostic performance after internal and external validation. Conclusions Immune profiling of plasmatic EVs represents a crucial step toward the identification of biomarkers of disease for PD and AP. To date, an effective causal treatment for Parkinson disease (PD) is missing, and the diagnosis still relies exclusively on motor symptoms that appear too late for a disease-modifying intervention. 1 Hence, there is an urgent need for biomarkers that can stratify patients with PD for clinical trials. 
Furthermore, the differential diagnosis between PD and atypical parkinsonisms (APs) such as multiple system atrophy (MSA) is challenging. 2 According to the misfolded protein aggregates present in the brain, PD and MSA are collectively termed alpha-synucleinopathies and are distinct from AP with tauopathies (AP-Tau). Extracellular vesicles (EVs) are a heterogeneous population of secreted membrane particles involved in physiologic cell-to-cell communication and the transmission of biological signals. EVs are subdivided, based on physical characteristics such as size, into small (30-150 nm) and large (150-500 nm) vesicles; members of the tetraspanin protein family (CD9, CD63, and CD81) are considered specific markers of EVs. 3 CNS neurons release EVs 4 able to cross the blood-brain barrier and reach the peripheral blood. 5 EVs express surface antigens, which affect cellular uptake and allow them to be traced back to their cell of origin. 6 So far, most studies on EVs in neurodegenerative diseases have focused on their possible role in the transmission of pathologic misfolded proteins and fewer on their functions in cell-to-cell signaling. Indeed, the immune system is involved in PD, as demonstrated by neuroinflammatory changes in brain histopathology as well as by elevated immune markers in peripheral blood, suggesting that the immune system may play a primary pathogenic role in PD. 7,8 Therefore, we hypothesized that circulating EVs carry important information on the brain inflammatory immune response and that their characterization can be exploited for diagnostic purposes. 
Study design This was a cross-sectional, case-control study aiming (1) to characterize distinctive EV subpopulations in the plasma of patients with PD, MSA, or AP-Tau and of healthy controls (HCs) by immunophenotyping 37 different membrane proteins using an innovative flow cytometry multiplex bead-based platform 9,10 ; (2) to correlate the differential expression of EV surface antigens with clinical severity scales; and (3) to build diagnostic models based on distinctive EV surface proteins through supervised machine learning algorithms. Finally, because EVs are taken up by surrounding and distant cells, we performed a functional evaluation of their protein interactors with the purpose of highlighting protein targets, biological pathways, and molecular functions potentially affected in PD, MSA, and AP-Tau. Subjects Twenty-seven patients with idiopathic PD, 8 with probable MSA, 9 with probable AP-Tau, and 19 age-matched HCs for the PD group were consecutively enrolled from July 2015 to January 2019. These subjects served as the training cohort for the diagnostic model. Patients were recruited from the movement disorders outpatient clinic at the Neurocenter of Southern Switzerland in Lugano; HCs were recruited among patients' partners. The inclusion criteria for PD were (1) a definite clinical diagnosis according to the UK Parkinson's Disease Society Brain Bank criteria for diagnosis 1 and (2) no family history and no major cognitive impairment or major dysautonomic symptoms in the history. The inclusion criteria for AP were based on published diagnostic criteria for MSA, 11 progressive supranuclear palsy (PSP), 12 and corticobasal degeneration (CBD). 13 A separate cohort of 40 subjects (20 HC, 10 PD, 5 MSA, and 5 AP-Tau) served as the validation cohort for the diagnostic model (see the paragraph "Diagnostic modeling and validation" below). 
Standard protocol approvals, registrations, and patient consents Subjects were consecutively included in the NSIPD001 study, according to the study protocol approved by the Cantonal Ethics Committee. All enrolled subjects gave written informed consent in accordance with the Declaration of Helsinki. Blood collection and plasma preparation Ten milliliters of blood were collected into anticoagulant ethylenediamine tetraacetic acid (EDTA) tubes in the morning after 4-hour fasting, and the following protocol was performed to obtain plasma enriched in EVs 15 : fresh whole blood was centrifuged for 15 minutes at 1,600g at 10°C to eliminate cellular components. To further deplete platelets and cellular debris, the supernatant was centrifuged for 15 minutes at 3,000g at 4°C; then, 2 consecutive centrifugations were performed at 10,000g for 15 minutes and at 20,000g for 30 minutes at 4°C, allowing the elimination of apoptotic bodies and larger EVs (figure 1A). The obtained plasma was aliquoted and stored at −80°C. The storage period varied among samples according to the consecutive enrollment of subjects in the study between July 2015 and January 2019. Nanoparticle tracking analysis Nanoparticle concentration and diameter were measured with a NanoSight LM10 (Malvern Instruments, Malvern, UK) equipped with a 405-nm laser and nanoparticle tracking analysis (NTA) 2.3 software. One microliter of plasma was diluted 1:1,000 in particle-free phosphate-buffered saline. Three consecutive videos of 60 seconds each were acquired. Minimum expected particle size, minimum track length, and blur setting were set to automatic, and the detection threshold was set to 4 to reveal all particles, as previously described. 16 The particle concentration and the particle size distribution were determined for each sample by averaging the results from the analysis of the 3 independent videos. 
MACSPlex exosome assay and flow cytometry analysis The screening approach (MACSPlex Human Exosome Kit; Miltenyi, Bergisch Gladbach, Germany) was previously described. 9,10 Briefly, it is based on 4.8-μm diameter polystyrene beads labeled with different amounts of 2 dyes (phycoerythrin and fluorescein isothiocyanate) to generate 39 different bead subsets discriminable by flow cytometry. Each bead subset is conjugated with a different capture antibody that recognizes EVs carrying the respective antigen (37 EV surface epitopes plus 2 isotype controls). The list of 37 antigens is reported in table e-1 (links.lww.com/NXI/A293). After overnight incubation of beads with sample, EVs bound to beads are detected by allophycocyanin-conjugated anti-CD9, anti-CD63, and anti-CD81 antibodies (figure 1A). Plasma samples (60 μL) diluted 1:2 in buffer solution were analyzed with the MACSQuant Analyzer-10 flow cytometer (Miltenyi). Triggers for the side scatter and the forward scatter were selected to confine the measurement to the multiplex beads. A blank control, composed only of MACSPlex Buffer and incubated with beads and detection antibodies, was used to measure the background signal. Each EV marker's median fluorescence intensity (MFI) was normalized to the mean MFI of the specific EV markers (CD9, CD63, and CD81), obtaining the normalized MFI (nMFI). All analyses were based on nMFI values. Samples were analyzed blinded to the clinical diagnosis. To test the reliability/specificity of the MACSPlex Human Exosome Kit for EVs, we compared the procedure described above with and without EV enrichment by ultracentrifugation and found no differences between procedures (figure e-1, links.lww.com/NXI/A293). Therefore, plasma samples were processed directly, without EV enrichment by ultracentrifugation. 
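The normalization step described above (each marker's MFI divided by the mean MFI of the tetraspanins CD9, CD63, and CD81) reduces to a few lines of Python; the marker values below are hypothetical, for illustration only:

```python
def normalize_mfi(mfi_by_marker):
    """Divide each marker's MFI by the mean MFI of the pan-EV tetraspanins."""
    tetraspanins = ("CD9", "CD63", "CD81")
    ref = sum(mfi_by_marker[t] for t in tetraspanins) / len(tetraspanins)
    return {marker: mfi / ref for marker, mfi in mfi_by_marker.items()}

# Hypothetical MFI values (not study data)
sample = {"CD9": 120.0, "CD63": 90.0, "CD81": 90.0, "CD25": 50.0}
print(normalize_mfi(sample)["CD25"])  # → 0.5 (50 / mean of 120, 90, 90 = 100)
```

This ratio scaling makes marker levels comparable across samples that differ in total EV concentration.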
Technical consistency and reproducibility of the assay were confirmed by repeatedly analyzing the same sample and by assessing plasma from the same subject at different time points (figure e-2, links.lww.com/NXI/A293). Network analysis of EV surface markers' protein interactors Protein interactors of differentially expressed EV surface markers were retrieved with the Cytoscape PESCA plugin, 17 and a global Homo sapiens protein-protein interaction (PPI) network of 1,588 nodes and 36,984 edges was reconstructed. For each quantitative comparison (PD vs HC, MSA vs HC, AP-Tau vs HC), a specific PPI subnetwork was built considering the first neighbors of each EV surface protein. Each subnetwork was analyzed at a topological level with the Cytoscape Centiscape plugin 18 ; to select putative hubs and bottlenecks, we took into account the network size, and only nodes with Betweenness, Bridging, and Centroid values all above the averages calculated on the corresponding whole network were retained, as previously reported. 19,20 At the same time, nodes belonging to each subnetwork were evaluated at a functional level with DAVID, 21 and the most enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways and molecular functions were extracted (H sapiens set as background, count > 5, and p < 0.001 corrected by the Bonferroni test). Statistical analysis Statistical analyses were performed with IBM SPSS Statistics 22.0, PYTHON 2.7, and GraphPad PRISM 7.0a. Variable distribution was assessed by the Kolmogorov-Smirnov test. Normally distributed variables (age) were expressed as mean ± SD and analyzed by the 1-way analysis of variance test with the post hoc Bonferroni test for multiple comparisons. Non-normally distributed variables (disease duration, H&Y, MDS-UPDRS, BDI-II, MMSE, MoCA, olfactory test, RBD, LEDD, NTA, and MACSPlex analysis) were expressed as medians and interquartile ranges and analyzed using the Kruskal-Wallis test. 
Categorical variables (sex) were expressed as absolute numbers and percentages (%) and analyzed by the χ2 or Fisher exact test. Univariate logistic regression analysis was performed to assess the ORs. Receiver operating characteristic (ROC) curve analysis was used to evaluate the area under the curve (AUC) and to compare the diagnostic performances of selected variables. The Youden index (J = Sensitivity + Specificity − 1) was calculated to determine the cutoff with the greatest accuracy. Correlations were evaluated by the Pearson R test and regression curve analysis; correlations were considered strong for R between |1.0| and |0.5|, moderate between |0.5| and |0.3|, and weak between |0.3| and |0.1|. A p value less than 0.05 was considered significant. Diagnostic modeling and validation Supervised machine learning algorithms are exploited in clinical practice to formulate predictions of selected outcomes based on a given set of labeled, paired input-output training data. 22,23 Linear discriminant analysis was used to build the 3D canonical plot (figure 2B); canonical components 1, 2, and 3 were calculated from weighted linear combinations of variables to maximize the separation between the 4 groups (HC, PD, MSA, and AP-Tau); in the plot, each patient is represented by a point, the center of each sphere indicates the mean of (canonical 1; canonical 2; canonical 3) for each diagnosis, and the spheres include patients whose linear combination coefficients fall within the mean ± SD (canonical 1 ± SD; canonical 2 ± SD; canonical 3 ± SD). A diagnostic model was built through a random forest (RF) classification algorithm on the training cohort (n = 63); the algorithm created 20 different classification trees with a maximum of 8 splits per tree. The diagnosis derives from the outcome of each classification tree of the RF: for example, if at least 11 of the 20 trees predict PD, the patient is classified as PD. 
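The two decision rules described above — the Youden-index cutoff (J = Sensitivity + Specificity − 1) and the majority vote over the 20 classification trees — can be sketched in Python; the numeric values are invented illustrations, not study data:

```python
from collections import Counter

def youden_cutoff(thresholds, sensitivity, specificity):
    """Return the threshold maximizing J = sensitivity + specificity - 1."""
    best = max(range(len(thresholds)),
               key=lambda i: sensitivity[i] + specificity[i] - 1)
    return thresholds[best]

def forest_vote(tree_predictions):
    """Final diagnosis = the label predicted by the most trees."""
    label, votes = Counter(tree_predictions).most_common(1)[0]
    return label, votes

# 11 of 20 hypothetical trees predict PD, mirroring the rule in the text
preds = ["PD"] * 11 + ["HC"] * 5 + ["MSA"] * 4
print(forest_vote(preds))  # → ('PD', 11)
```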
The model was validated by a leave-one-out algorithm (internal validation) and in a different cohort (n = 40) (external validation). The leave-one-out validation was used to exclude overfitting bias and to evaluate the generalizability of the model; briefly, the algorithm is trained on n − 1 patients (where "n" is the total number of patients), and the remaining patient is used to test the model. The test patient is then changed, and the training subgroup is updated accordingly. The process is repeated a total of n times, with the test patient rotating at each round and the remaining subgroup used for model training. The external validation was performed with the same RF model trained on the training cohort. Data availability The raw data that support the findings of this article are available on request from the corresponding author. Results Demographic and clinical characteristics of study groups Demographic data and clinical assessments for each group are summarized in table 1. The MSA group included 4 MSA-C and 4 MSA-P; the AP-Tau group included 6 patients with probable PSP and 3 with possible CBD (table e-2, links.lww.com/NXI/A293). Subjects with AP-Tau were significantly older than HC. Sex ratio and disease duration did not differ across groups. It is known that AP is characterized by a more aggressive disease course than PD; indeed, MSA and AP-Tau had greater disease severity as measured by the H&Y and the MDS-UPDRS; in addition, they displayed greater cognitive impairment as measured by the MMSE and MoCA. Finally, subjects with AP-Tau were more depressed than those with PD as measured by the BDI-II. LEDD did not differ between patient groups. The PD group shows an increased number of EVs NTA showed that the PD group had the highest number of nanoparticles/mL, significantly greater than HC and AP-Tau (p = 0.001) but not than MSA, whereas no differences in diameter were found between groups (figure 1B, table e-3, links.lww.com/NXI/A293). 
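The leave-one-out rotation described in the validation paragraph above can be written as a generic loop; the toy nearest-neighbour "model" below stands in for the actual RF classifier purely for illustration:

```python
def leave_one_out(samples, labels, train, predict):
    """Train on n-1 samples, test on the held-out one; rotate through all n."""
    correct = 0
    for i in range(len(samples)):
        model = train(samples[:i] + samples[i + 1:], labels[:i] + labels[i + 1:])
        correct += predict(model, samples[i]) == labels[i]
    return correct / len(samples)

# Toy 1-nearest-neighbour "model" on 1-D points (illustration only)
def train(xs, ys):
    return list(zip(xs, ys))

def predict(model, x):
    return min(model, key=lambda pair: abs(pair[0] - x))[1]

print(leave_one_out([0.0, 0.1, 1.0, 1.1], ["a", "a", "b", "b"], train, predict))  # → 1.0
```

Because every patient is held out exactly once, the resulting accuracy estimates generalization without requiring a second cohort.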
Because NTA is not specific for EVs, we used the MFI of CD9/CD63/CD81 (specific markers of EVs) measured by flow cytometry as a measure of EV concentration. Mean MFI of CD9/CD63/CD81 was significantly higher in PD compared with HC (p = 0.023) and AP-Tau (p = 0.037), but not compared with MSA (figure 1C). Importantly, mean MFI for CD9/CD63/CD81 correlated with the nanoparticle concentration obtained by NTA (figure 1D). EVs were further characterized according to current standard guidelines. 3 After EV immunocapture by MACSPlex kit capture beads, we performed a Western blot analysis showing the presence of EV-specific luminal markers (TSG101, Alix), the EV-specific tetraspanin CD81, and the absence of contaminants (APOA1 and GPR94) (figure 1E). These results confirm the presence of EVs and the absence of relevant contamination in the samples analyzed by flow cytometry. The HC group displayed relatively low expression of EV markers, in analogy to the AP-Tau group, whereas PD and MSA were characterized by higher levels of expression. Furthermore, a linear discriminant analysis model based on the differential expression of all EV markers allowed the separation of subjects according to their diagnosis, as shown in the canonical plot (figure 2B). Protein network hubs and functional pathway analysis of EV surface antigens The most relevant interactors of differentially expressed EV markers were selected by PPI network topological analysis in terms of hubs. Hubs refer to proteins with the greatest number of connections within the cell or occupying crucial network positions, suggesting a critical role in the control of information flow over the network. 20 The most represented KEGG categories (figure 3D, table e-6) included immune system, signal transduction, endocrine system, and signaling molecules and interaction. Except for the endocrine system, these categories were more enriched in PD and MSA, suggesting a potentially stronger activation of the immune response in these groups. 
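The hub-selection rule from the Methods (retain only nodes whose Betweenness, Bridging, and Centroid values all exceed the corresponding network-wide averages) can be sketched as follows; the node names and centrality values are invented for illustration and are not from the study:

```python
def select_hubs(centralities):
    """Keep nodes whose betweenness, bridging, and centroid values all
    exceed the network-wide averages of those measures."""
    keys = ("betweenness", "bridging", "centroid")
    avg = {k: sum(c[k] for c in centralities.values()) / len(centralities)
           for k in keys}
    return [node for node, c in centralities.items()
            if all(c[k] > avg[k] for k in keys)]

# Hypothetical toy subnetwork (values invented for this sketch)
net = {
    "SP1":   {"betweenness": 9.0, "bridging": 4.0, "centroid": 3.0},
    "LYN":   {"betweenness": 1.0, "bridging": 0.5, "centroid": 0.2},
    "STAT3": {"betweenness": 2.0, "bridging": 3.5, "centroid": 0.4},
    "NCK2":  {"betweenness": 0.5, "bridging": 0.3, "centroid": 0.1},
}
print(select_hubs(net))  # → ['SP1']
```

Requiring all three centralities to be above average filters out nodes that are central by only one measure, which is the conservative choice the Methods describe.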
Of note, the FoxO signaling pathway was more enriched in AP-Tau. EV surface antigens correlate with cognitive impairment and disease severity in PD and MSA In PD there was a negative correlation between CD25 and the MMSE and MoCA scores and a negative correlation between CD146 and the MMSE score, whereas CD62P correlated directly with the BDI-II (figure 4, A-D). No significant correlations were found between EV antigen expression and LEDD in the PD and AP groups. The model discriminated patients of the 4 different groups (HC, PD, MSA, and AP-Tau) with high accuracy (88.9%) (figure 6A): all subjects with PD were correctly diagnosed, and 1 MSA and 1 HC were misdiagnosed as HC and PD, respectively, whereas among the 9 patients with AP-Tau, 2 were predicted as HC and 3 as PD (figure 6B). Subsequently, pairwise comparisons were performed (figure 6, C-H). The RF model was validated by the leave-one-out algorithm, which confirmed the generalizability of the model and excluded overfitting bias (accuracy of internal validation 63.8%, with a 72.2%-91.5% range for pairwise comparisons). Finally, we validated our model in an external cohort of 40 subjects: the overall accuracy was 77.5%, resulting in the correct diagnosis of 31 of 40 subjects (figure 6, I and J). The accuracy after external validation was consistent with that resulting from the internal validation, supporting the reliability of the diagnostic model. Demographic data of the external cohort were similar to those of the training cohort and are shown in table e-9 (links.lww.com/NXI/A293). Discussion The major finding of this study consists in the setup of a diagnostic model for the stratification of patients with PD and AP, based on the immunologic profiling of plasmatic EV subpopulations obtained from minimally invasive peripheral blood sampling. We systematically evaluated the diagnostic performance of differentially expressed EV antigens, and a diagnostic model was built using supervised machine learning algorithms. 
The model showed overall reliable accuracy, correctly predicting patient diagnosis, with the best performance for the diagnosis of PD (97.8%) and MSA (100%) vs HC. These results were supported by the ROC curve analysis of the compound marker, originating from the linear combination of all the differentially expressed EV markers, which showed very high sensitivity and specificity for PD and MSA (AUC 0.908 and 0.974, respectively). Previous works have explored the utility of EVs as biomarkers for PD by quantifying brain-derived exosomes (AUC 0.75-0.82) 25 or by measuring specific target proteins such as alpha-synuclein (αSyn) or DJ-1 in plasma neuronal-derived exosomes (AUC 0.654, 0.724). 26 The combination of multiple markers improved the diagnostic accuracy of neuronal-derived exosomes, as shown by a recent work on the quantification of both αSyn and clusterin, which differentiated PD from other proteinopathies and from MSA with high accuracy (AUC 0.98 and 0.94, respectively). 27 Our analysis of multiple immune surface markers of circulating EVs in PD and AP shows a high diagnostic performance, likely due to the advantage of simultaneously profiling several EV subpopulations. First of all, we demonstrated that plasma EV concentration was higher in patients with PD. Previous reports have shown that the total number and size of EVs were not increased in the serum of PD, 28 whereas a more recent study demonstrated an increased number of plasmatic brain-derived EVs in PD. 25 Methodological factors, such as the isolation/extraction and quantification of EVs, likely explain these differences. However, at the molecular level, it is recognized that the endosome/lysosome pathway is a common defective pathway in sporadic and genetic PD, 29 and EVs are generated in the endosomal compartments called multivesicular bodies and secreted by their fusion with the plasma membrane. 
The process of EV secretion may be enhanced when fusion of multivesicular bodies with lysosomes is inhibited, as expected in PD, 30 so that an increased production of EVs in PD is likely. It is difficult to track the origin of EVs because the majority of markers are shared by several cell types and virtually any cell can release EVs into the blood. In blood, a large number of EVs normally arises from platelets and erythrocytes; however, leukocytes, endothelial cells, monocytes, neutrophils, and lymphocytes may also release EVs. 31 The flow cytometry analysis demonstrated that 16 and 12 EV markers related to immune cells were upregulated in PD and MSA, respectively, compared with only 4 in AP-Tau relative to the healthy condition. In particular, PD and MSA shared 11 EV surface markers. Considering the functions and roles of the EV surface markers analyzed in this study, this result favors the hypothesis of a major, or at least different, immune dysregulation in PD and MSA vs AP-Tau. Despite sharing several overlapping clinical features, synucleinopathies and tauopathies are distinguished by distinctive neuropathologic hallmarks: deposits of aggregated αSyn (Lewy bodies) in neurons and in glial cells in the former group and neurofibrillary tangles of Tau in the latter, as shown by immunohistological studies. 32 Although inflammatory features have been described in patients with both synucleinopathies and tauopathies, by PET studies, 33 from Toll-like receptor 4. 37,38 Moreover, a recent multicenter study has shown higher levels of CSF inflammatory biomarkers in PD with dementia and in MSA compared with controls, but not in AP-Tau vs controls, and those markers correlated with motor and cognitive impairment. 
39 Likewise, our analysis showed a moderate correlation between CD25, CD146, and cognitive impairment in PD, suggesting a link between inflammation and greater cognitive decline: CD25 is a costimulatory molecule supporting immune cell activation, 40 and CD146 acts as an essential regulator of pericyte-endothelial cell communication in the blood-brain barrier and has been identified as a potential key therapeutic target for cerebrovascular disorders. 41 In MSA, the concentration of EVs measured by NTA and flow cytometry correlated with disease duration and cognitive impairment. These findings favor the hypothesis of a perpetuation of toxic effects by circulating EVs due to chronic immune activation, even if a compensatory/neuroprotective role of EVs in response to the progressive neurodegeneration cannot be excluded. Among the EV markers differentially expressed in PD, CD146 and MCSP are of interest because they have been associated with melanoma and used for the detection of circulating tumor cells. 42 Consistently, a link between PD and melanoma has been supported by many epidemiologic studies showing that patients with PD have a higher incidence of this tumor, even if the underlying pathogenic mechanisms are unknown. 43 The network analysis of potential interactors of EV surface markers demonstrated that the functional pathways and network hubs in PD and MSA were coincident and different from those of AP-Tau. Of interest, among the hubs shared by PD and MSA, we found SP1, a transcription factor playing a key role in regulating neuroinflammation in MS. 44 The most represented KEGG pathways in the alpha-synucleinopathies were immune system, signal transduction, signaling molecules, and folding, sorting and degradation, whereas the FoxO signaling pathway and some pathways of the endocrine system were more enriched in AP-Tau, in line with the relation that many authors have found between endocrine signaling, tauopathies, and FoxO. 
45,46 However, this exploratory network analysis should be interpreted with caution because AP-Tau had fewer differentially expressed EV markers; consequently, the smaller network was a limiting factor in recovering potential pathways and functions in tauopathies. Nonetheless, it was encouraging to find some of the identified hubs already described in the literature: the cytoplasmic protein NCK2 was recently described as a PD-associated gene. 47 Tyrosine-protein kinase Lyn (LYN), a specific hub of MSA, has been related to enhanced microglial migration induced by αSyn. 48 Of note, signal transducer and activator of transcription 3 (STAT3), a specific hub of AP-Tau, has been found to be a direct target of C3 and C3a receptor signaling that functionally mediates Tau pathogenesis. 49 However, these network analyses are hypothetical, and further validation studies are required to assess their possible roles in causing PD and AP. Limitations of this study are the relatively low number of subjects, especially in the AP groups, and the inclusion only of patients with a long duration of disease: larger studies and the inclusion of different cohorts of patients, especially at early stages of disease, are strongly recommended. Moreover, a customized panel of EV surface proteins including CNS and microglia markers would probably improve the diagnostic model. Finally, this is an antemortem study, and it lacks the diagnostic confirmation of postmortem brain histopathologic analysis. In conclusion, we systematically characterized circulating EVs in the plasma of patients with PD or AP. Several EV surface antigens were differentially expressed and correlated with disease severity and cognitive impairment, suggesting EVs as potential biomarkers of disease, including in clinical trials for disease-modifying drugs. We propose a diagnostic model built through supervised machine learning algorithms, based on an EV-specific signature, which was able to discriminate patients with PD and MSA with high accuracy. 
Finally, we provided internal and external validations of our model, confirming its reliable diagnostic performance. This is a highly relevant result with a potential impact on clinical practice, allowing patients with PD and MSA to be identified with a noninvasive, low-cost blood test. Furthermore, circulating EV surface protein analysis can shed light on the differential inflammation/immunity pathways involved in protein aggregation-related neurodegenerative diseases, to be confirmed by functional analyses in experimental models of disease.
2020-08-20T10:06:56.549Z
2020-08-12T00:00:00.000
{ "year": 2020, "sha1": "3fd0e672c6198b328dd7938ecd3e04332627a218", "oa_license": "CCBYNCND", "oa_url": "https://nn.neurology.org/content/nnn/7/6/e866.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9e2b00df85a860a94c281dde98c89c6320548083", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
210936245
pes2o/s2orc
v3-fos-license
Endometrial Polyp Removed by a Manual Hysteroscopic Tissue Removal Device We report one of the first cases in which an endometrial polyp was removed using a manual hysteroscopic tissue removal (HTR) device. The case showed its feasibility, with a potential reduction in the required setup time and tubing compared to a routine HTR device. This technique is ideal for the removal of endometrial polyps, particularly in the outpatient setting. In recent years, polypectomy using a hysteroscopic tissue removal (HTR) system has gained popularity. However, all such techniques are associated with the use of an electric motor-driven mechanical morcellation device with a machine-driven fluid pump. In this case report, we report one of the first polypectomies performed using a manually driven HTR device instead of one driven by an electric motor, with distension fluid introduced purely by gravitational pressure. Case report A 56-year-old premenopausal female presented with a history of on-and-off intermenstrual vaginal bleeding for over a year. Transvaginal ultrasound showed a thickened endometrium measuring 1.3 cm with an appearance suggestive of an endometrial polyp. A flexible hysteroscopy performed without anaesthesia confirmed an endometrial polyp measuring 1 cm and arising from the left lower wall of the uterus. The patient then underwent hysteroscopic resection of the polyp using a manually driven HTR device (MyoSure ® Manual Tissue Removal Suite-Hologic ® USA) under general anaesthesia. A 500 ml bag of normal saline was used as the distending medium and was driven purely by gravitational force, as in routine diagnostic hysteroscopy. Diagnostic hysteroscopy confirmed the presence of the polyp [Figure 1], and with the use of the device, the polyp was removed completely. The procedure (including anaesthetization and setting up of the patient and device equipment) took a total of 15 minutes. The morcellation of the polyp itself took approximately 3 minutes [Video 1]. 
Time of morcellation of the polyp was 2 minutes, with no residual polyp remaining. There was a fluid deficit of 100 ml of saline. The patient went home on the same day. Follow-up at 3 months showed no further intermenstrual bleeding. Pathology from the procedure confirmed a benign endometrial polyp with surrounding inactive endometrium. Discussion Blind removal of an endometrial polyp with polyp forceps, without the use of a hysteroscope under direct vision, yields complete polyp removal in only 41% of cases. Malignant cells at the base of the polyp can be missed, while the recurrence rate can be as high as 15%. [2] Resections under direct vision, such as with cold scissors, diathermy resectoscopes, or HTRs, are safe, simple, and superior to blind techniques. When compared to resection using the resectoscope, HTRs have been shown to reduce the mean operative time [3,4] and are associated with fewer complication risks. [5,6] They are also simpler and easier to use than fine hysteroscopic scissors. When performing any hysteroscopic surgery, accurate fluid input and output measurement is vital to prevent excessive fluid absorption. This is usually achieved with a motorized fluid pump, which requires a separate machine and setup. According to the American Association of Gynecologic Laparoscopists (AAGL), it is recommended that the absolute fluid deficit remain below 2,500 ml when normal saline is used as the distending medium. [7] However, removing endometrial polyps using the manual HTR device generally uses a very limited amount of fluid throughout the entire procedure. 
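The fluid bookkeeping discussed above reduces to simple arithmetic (deficit = fluid instilled minus fluid recovered, checked against the 2,500 ml AAGL limit quoted in the text); a minimal sketch:

```python
AAGL_MAX_DEFICIT_ML = 2500  # recommended limit for normal saline (per the text)

def fluid_deficit(input_ml, output_ml):
    """Deficit = fluid instilled minus fluid recovered."""
    return input_ml - output_ml

def within_aagl_limit(input_ml, output_ml):
    return fluid_deficit(input_ml, output_ml) < AAGL_MAX_DEFICIT_ML

# Worst case for this report: the entire 500 ml bag absorbed, none recovered
print(fluid_deficit(500, 0), within_aagl_limit(500, 0))  # → 500 True
```

Even in that worst case the deficit stays well under the limit, which is the point made in the Discussion.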
In the majority of cases, a maximum fluid input of 500 ml would be enough to complete the procedure. Hypothetically, even if all of the fluid input were absorbed, the total fluid deficit would remain far below the AAGL's recommended value. Equipment setup, with the electric pump in particular, may take a considerable amount of time, and extra tubing may also be required. With manual HTRs, a motorized fluid pump is not required; fluid delivery by gravitational flow is all that is needed. This reduces both operative time and cost and is ideal for removing polyps, especially in the outpatient setting. The only disadvantage is that the hysteroscope is 6 mm in diameter and hence may be uncomfortable for the patient when dilatation is required, especially if no anaesthesia is used. The other concern was whether the operator would develop fatigue because the morcellation device is manually activated. During this case, the operator activated the device manually fewer than 20 times. No fatigue developed, but it is important to accurately assess the pathology preoperatively: a manual morcellation device should not be used for fibroids or very large polyps. This case demonstrated the feasibility of removing an endometrial polyp using a manual HTR. It has also shown that a machine-driven fluid pump and morcellator are not required for simple cases such as endometrial polyps. Although this case was performed under general anaesthesia, the ideal setting for this device would be the outpatient or office setting, with no anaesthesia or under MAC. Ethical statement Ethical approval for this study was exempted by the Kowloon Central Research Ethics Committee. Declaration of patient consent The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient has given her consent for her images and other clinical information to be reported in the journal. 
The patient understands that her name and initials will not be published and due efforts will be made to conceal identity, but anonymity cannot be guaranteed. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
RNase L restricts the mobility of engineered retrotransposons in cultured human cells Retrotransposons are mobile genetic elements, and their mobility can lead to genomic instability. Retrotransposon insertions are associated with a diverse range of sporadic diseases, including cancer. Thus, it is not a surprise that multiple host defense mechanisms suppress retrotransposition. The 2′,5′-oligoadenylate (2-5A) synthetase (OAS)-RNase L system is a mechanism for restricting viral infections during the interferon antiviral response. Here, we investigated a potential role for the OAS-RNase L system in the restriction of retrotransposons. Expression of wild type (WT) and a constitutively active form of RNase L (NΔ385), but not a catalytically inactive RNase L mutant (R667A), impaired the mobility of engineered human LINE-1 (L1) and mouse intracisternal A-type particle retrotransposons in cultured human cells. Furthermore, WT RNase L, but not an inactive RNase L mutant (R667A), reduced L1 RNA levels and subsequent expression of the L1-encoded proteins (ORF1p and ORF2p). Consistently, confocal immunofluorescent microscopy demonstrated that WT RNase L, but not RNase L R667A, prevented formation of L1 cytoplasmic foci. Finally, siRNA-mediated depletion of endogenous RNase L in a human ovarian cancer cell line (Hey1b) increased the levels of L1 retrotransposition by ∼2-fold. Together, these data suggest that RNase L might function as a suppressor of structurally distinct retrotransposons. INTRODUCTION Transposable elements comprise at least 45 and 37.5% of the human and mouse genomes, respectively (1,2). They are classified by whether they replicate via a DNA (transposons) or an RNA intermediate (retrotransposons) [reviewed in (3)]. DNA transposons originally were discovered in maize as mutable loci capable of mobilizing to new genomic locations (3,4). 
DNA transposons comprise ~3% of the human genome (1) and were active during primate evolution until ~37 million years ago (5). However, with the exception of certain bat species (6), DNA transposons appear to be inactive in most mammalian genomes (1). Unlike the 'cut-and-paste' mobility mechanism used by DNA transposons, retrotransposons mobilize via a 'copy-and-paste' mechanism that uses an RNA intermediate [reviewed in (7)]. There are two major groups of retrotransposons that are distinguishable by the presence or absence of long terminal repeats (LTRs). LTR-retrotransposons include human endogenous retroviruses (HERVs) as well as murine intracisternal A-particle (IAP) and MusD sequences [reviewed in (8)(9)(10)]. Endogenous LTR-retrotransposons are structurally similar to retroviruses, but generally lack or contain a defective envelope (env) gene, which relegates them to intracellular replication [reviewed in (11)]. While HERVs appear to be inactive in the human genome, it is estimated that ~300 copies of IAP and 10 copies of MusD remain functional in the mouse genome (12)(13)(14). L1 retrotransposition can lead to local genomic rearrangements (e.g. deletions and inversions) at their integration sites [reviewed in (10)]. Moreover, L1 retrotransposition events may influence the expression of genes near the integration sites [reviewed in (10)]. Thus far, 96 L1-mediated retrotransposition events have been reported to be responsible for a wide range of single-gene diseases in humans [reviewed in (18)]. In addition, ORF2p may generate double-strand breaks in genomic DNA, which have the potential to be mutagenic (42,43). The host cell has evolved various strategies to regulate retrotransposon activity at both the transcriptional and posttranscriptional levels [reviewed in (7)].
For example, retrotransposon-derived Piwi-interacting RNAs, in conjunction with Piwi proteins, can degrade L1 and other transposable element transcripts in the germ line of mice and flies, and they are thought to be involved in the epigenetic silencing of retrotransposons via promoter methylation in murine embryonic male germ cells [reviewed in (44,45)]. It also is proposed that hybridization of the L1 sense and antisense transcripts may serve as double-strand RNA triggers for Dicer-dependent RNA interference-mediated regulation of L1 retrotransposition (46), although this supposition requires further study. Besides these small RNA-based inhibition pathways, L1 retrotransposition can be inhibited by several proteins, including the apolipoprotein B mRNA editing enzyme 3 (APOBEC3) family of cytidine deaminases [reviewed in (47)], Trex1 (48) and MOV10 (49)(50)(51). Recent evidence also suggests the ataxia telangiectasia mutated protein may limit the length and/or number of engineered L1 retrotransposition events in cultured cells (52). In addition, heterogeneous nuclear ribonucleoprotein L (hnRNPL) binds L1 RNA and interferes with L1 retrotransposition (53,54). HnRNPL and several other cellular inhibitors of L1 retrotransposition were also identified in the L1 ORF1 protein interactome (54). In contrast, the poly(A) binding protein C1 was recently shown to promote L1 retrotransposition (55). The interferon (IFN) regulated 2′,5′-oligoadenylate (2-5A) synthetase (OAS)-RNase L system inhibits viral replication, but it is unclear whether it restricts retrotransposon activity [reviewed in (56)]. The OAS genes encode IFN-inducible enzymes that are expressed at basal levels in many mammalian cell types (57). Viral dsRNAs activate OAS-1, -2 and -3, which use ATP to generate 2-5A molecules with the following structures: [pₓ5′A(2′p5′A)ₙ; x = 1–3; n ≥ 2] (58).
The 2-5A then binds to the ankyrin repeat domain of latent RNase L, causing it to form an enzymatically active dimer (59). Active RNase L cleaves single-strand regions of viral and cellular RNA, suppressing viral protein synthesis, replication and spread [reviewed in (56)]. Moreover, cleavage products generated by RNase L, mostly short duplex RNAs with 3′-phosphoryl groups, can bind and activate the RIG-I and MDA5 helicases (60). Interaction of these helicases with the mitochondrial adapter MAVS then results in a signaling cascade, allowing type I IFN production (60). The prolonged activation of RNase L results in cell death through apoptosis, leading to the elimination of virus-infected cells (61)(62)(63). The antiviral activity of the OAS-RNase L pathway combats the infectivity of numerous RNA and DNA viruses [reviewed in (56)]. Here we demonstrate that wild type (WT) RNase L and a constitutively active (NΔ385) RNase L mutant potently restrict both L1 and IAP retrotransposition in cultured human cells. In contrast, RNase L (R667A), catalytically inactive due to a mutation in the active site, does not restrict L1 or IAP retrotransposition. Consistent with the above observations, siRNA-mediated knockdown of endogenous RNase L leads to a ~2-fold increase in L1 retrotransposition. Finally, the expression of active forms of RNase L, but not the R667A RNase L mutant, leads to the degradation of L1 mRNA, which, in turn, leads to a decrease in the expression of L1 ORF1p and ORF2p. Thus, in addition to its role in restricting the infectivity of several viruses, RNase L may act to restrict the retrotransposition of certain endogenous retrotransposons.

Plasmid constructs

Schematic maps of L1 and IAP plasmids used in this study are shown (Figure 1A and B and Supplementary Figures S4A and S5A). Brief descriptions of each plasmid used in this study and the original references describing the plasmid construction are provided below.

[Figure 1. An overview of the L1 and IAP retrotransposition assays. (A) Schematics of L1 and IAP constructs: The L1 and IAP constructs contain a NEO-based (mneoI) or EGFP-based (mEGFPI) retrotransposition indicator cassette near their 3′ ends. The indicator cassettes are in an anti-sense (backward) orientation relative to the transcriptional orientation of the L1 or IAP elements. The indicator cassettes also contain an intron that is in the same transcriptional orientation as the retroelement. SD and SA indicate the splice donor and splice acceptor sites of the intron, respectively. Pr′ indicates the promoter driving the expression of the retrotransposition indicator cassette. Closed lollipops indicate the polyadenylation signal on the indicator cassette. A CMV promoter enhances the expression of the pJM101/L1.3, pAD2TE1, pES2TE1 and pAD3TE1 L1 vectors. An SV40 polyadenylation signal is present at the 3′ end of each L1 expression cassette. Notably, the mneoI-based L1 vectors are expressed from a pCEP4 vector that contains a HYG and an EBNA-1 gene. (Nucleic Acids Research, 2014, Vol. 42, No. 6, 3805; legend continued below.)]

pLRE3-mEGFPI: is a pCEP4-based plasmid that contains an active human L1 (LRE3) equipped with an mEGFPI retrotransposition indicator cassette (65). The pCEP4 backbone was modified to contain a puromycin resistance (PURO) gene in place of the HYG. The CMV promoter also was deleted from the vector; thus, L1 expression is only driven by its native 5′-UTR (65).

pAD2TE1: is a pCEP4-based plasmid similar to pJM101/L1.3. It was modified to contain a T7 gene10 epitope-tag on the carboxyl-terminus of ORF1p and a TAP epitope-tag on the carboxyl-terminus of ORF2p. Its 3′-UTR contains the mneoI retrotransposition indicator cassette (66).

pES2TE1: is identical to pAD2TE1, but was modified to replace the TAP tag on the carboxyl-terminus of ORF2p with a FLAG-HA tag (66).
pAD3TE1: is identical to pAD2TE1, but was modified to contain 24 copies of the MS2 stem-loop RNA binding repeats upstream of the mneoI indicator cassette (66).

pDJ33/440N1neoTNF: is a gift from Thierry Heidmann (Institut Gustave Roussy, Paris, France). It contains a mouse IAP tagged with a neomycin resistance gene (NEO) retrotransposition indicator cassette similar to the one present in pJM101/L1.3 (13).

The human HA epitope-tagged APOBEC3A (A3A) expression plasmid was obtained from Dr Bryan Cullen at Duke University (68). The A3A cDNA was subcloned into pFLAG-CMV-2 (Sigma-Aldrich) to ensure that it was expressed from the same context as the RNase L constructs used in this study. The human cDNAs for RNase L (69), A3A and RIG-I (a gift from Michael Gale, Seattle, WA, USA) were cloned into pFLAG-CMV-2 (Sigma-Aldrich). They all contain a FLAG tag at their amino terminus. Plasmid pIREShyg (Clontech) contains a hygromycin B phosphotransferase gene under control of a CMV promoter and downstream of an internal ribosome entry site from encephalomyocarditis virus. The catalytically inactive (R667A) RNase L mutant (70) was generated by site-directed mutagenesis and verified by DNA sequence analysis. The constitutively active (NΔ385) RNase L mutant was described in the same study (70). Myc-tagged WT and mutant RNase L cDNAs were cloned into a modified pcDNA 3.0 (Gibco/Life Technologies/Invitrogen) vector that lacks a NEO using standard molecular cloning protocols. Briefly, the plasmids were double digested with BstBI and SfoI, followed by 3′-end filling with the Klenow fragment of DNA polymerase, and blunt-end cloned into the modified pcDNA 3.0 vector using T4 DNA ligase (New England BioLabs).
Cells and culture media

HeLa-M cells, which are deficient for RNase L (71), and Hey1b cells (a human ovarian cancer cell line that was a gift from Alexander Marks, University of Toronto, Toronto, Canada) (72) were maintained in Dulbecco's modified Eagle's medium (DMEM) and RPMI medium, respectively. The complete medium was supplemented with 10% fetal bovine serum (FBS), 50 U/ml of penicillin, 50 µg/ml of streptomycin and 2 mM L-glutamine (Gibco/Life Technologies).

[Figure 1, continued: The mEGFPI-based L1 vectors are expressed from a pCEP4 vector that was modified to contain a PURO gene; it also contains the EBNA-1 gene. Flag symbols indicate the names of epitope-tags present in some L1 vectors. The SP and ASP labels indicate the sense and anti-sense promoters located in the L1 5′-UTR. The MS2 24x designation indicates the 24 copies of the MS2-GFP RNA binding motif in the pAD3TE1 construct. The PCR primers for pAD2TE1 are labeled F1, R1, F2 and R2 (see 'Materials and Methods' section for details). In the IAP vector [pDJ33/440N1neoTNF (13)], Pr indicates the viral LTR promoter. The IAP GAG and POL genes also are indicated. (B) Rationale of the assay: Transcription from a promoter driving L1 or IAP expression allows splicing of the intron from either the mneoI- or EGFP-based indicator cassettes. Retrotransposition of the resultant RNA leads to activation of the reporter gene, conferring either G418-resistance or EGFP-positivity to host cells. TSD indicates a target site duplication flanking the retrotransposed L1. (C) Experimental protocols to detect L1 retrotransposition: Cells were co-transfected with an engineered L1 or IAP retroelement and either an empty vector (pFLAG-CMV-2) or an amino-terminal FLAG-tagged RNase L expression plasmid. For the mneoI-based assays, the transfected cells were subjected to G418 selection 2 days after transfection. The numbers of G418-resistant foci serve as a readout of retrotransposition efficiency. For the mEGFPI-based assays, FACS analysis was used to measure the percentage of EGFP-positive cells 4 days after transfection (see 'Materials and Methods' section for further details about each assay).]

Retrotransposition assays

Retrotransposition assays were performed as described previously with minor modifications (65,73). Briefly, for G418-resistance-based retrotransposition assays, HeLa-M cells (~8 × 10⁴ per well) were seeded into two sets of six-well plates. The next day, the cells were co-transfected with 0.5 µg of the indicated L1 or IAP expression plasmid and 0.5 µg of a corresponding expression plasmid for RNase L, A3A, RIG-I or an empty vector (pFLAG-CMV-2) using 3 µl of the Fugene6 transfection reagent (Roche) per well. Forty-eight hours after transfection, the cells were collected from one set of plates and were analyzed for protein expression in western blot experiments. Cells from the other set of plates were trypsinized and resuspended in complete DMEM medium supplemented with G418 (500 µg/ml) (Gibco/Life Technologies). Cells from each well were split into three 10-cm tissue culture dishes, generating triplicate cultures. After 10 days of G418 selection, the remaining cells were treated with 10% neutral buffered formalin for 5 min to fix them to tissue culture plates and then were stained with 0.05% crystal violet for 30 min to facilitate their visualization. The dishes were washed in phosphate buffered saline (PBS), scanned and foci numbers were determined using Integrated Colony Enumerator software (National Institute of Standards and Technology) (74). Notably, toxicity control reactions were performed in a similar manner (in triplicate). Briefly, HeLa-M cells were co-transfected with 0.5 µg of the pcDNA 3.0 NEO expression vector and 0.5 µg of the RNase L expression plasmids using 3 µl of the Fugene6 transfection reagent per well (Roche).
After G418 selection (500 µg/ml) for 10 days, the remaining cells were fixed, stained and counted. For enhanced green fluorescent protein (EGFP)-based retrotransposition assays, HeLa-M cells were transfected with 0.5 µg of an active (pLRE3-mEGFPI) or inactive (pJM111-LRE3-mEGFPI) L1 expression plasmid and 0.5 µg of a corresponding RNase L expression plasmid, using 3 µl of Fugene6 transfection reagent (Roche) per well. The transfected cells then were subjected to puromycin selection (1 µg/ml) (Gibco/Life Technologies) to enrich for cells containing the L1 expression plasmids. After 4 days, the cells from each well were detached with a non-enzymatic cell dissociation solution (Cellgro), washed with PBS containing 1% FBS and analyzed on a FACScan (Becton-Dickinson) without fixation. For each sample, 2 × 10⁵ cells were analyzed. Data were analyzed with FlowJo software (TreeStar Inc.). In experiments to study the effect of endogenous RNase L on L1 retrotransposition, Hey1b cells (4 × 10⁵ cells/well) were plated in a six-well tissue culture dish. The next day, the cells were transfected with 50 nM of a control siRNA pool (sc-37007, Santa Cruz Biotechnology) or an siRNA pool against RNase L (sc-45965, Santa Cruz Biotechnology) using the DharmaFECT 1 transfection reagent (Thermo Scientific). Twenty-four hours later, the cells in each well were transfected with pLRE3-mEGFPI or pJM111-LRE3-mEGFPI (1 µg), using 3 µl of the Fugene6 transfection reagent (Roche). After another 12 h, cells were trypsinized and counted as noted above. An aliquot of the cells (one-tenth) was used to monitor the endogenous RNase L protein level. The remaining cells were replated (one well was split into three wells to generate triplicate technical replicates) and were subjected to 4 days of puromycin selection (1 µg/ml, Gibco/Life Technologies).
After 4 days of puromycin selection (5 and 6 days after transfection with the L1 construct and siRNA, respectively), the percentage of GFP-positive cells was determined by flow cytometry as described above.

Preparation of L1 RNPs, total cell lysates and western blot assays

L1 RNPs were isolated as described previously (38) with some modifications. Briefly, HeLa-M cells were plated into two identical sets of six-well plates. The next day, the cells in each well were co-transfected with 0.5 µg of an engineered L1 expression construct (pAD2TE1, pDK500 or pAD500) and 0.5 µg of a corresponding RNase L plasmid (FLAG-WT RNase L, FLAG-RNase L R667A or FLAG-RNase L NΔ385) using 3 µl of the Fugene6 transfection reagent (Roche). The cells in one set of plates were harvested 48 h after transfection, and RNase L expression was monitored using an RNase L monoclonal antibody by western blot. Cells from the other set of plates were replated into a 10-cm dish and were subjected to selection in DMEM medium supplemented with hygromycin (200 µg/ml) (Gibco/Life Technologies) for four additional days to detect L1 protein expression 6 days after transfection. The remaining cells then were harvested and were resuspended in 1 ml of lysis buffer (20 mM HEPES, pH 7.5; 1.5 mM KCl; 2.5 mM MgCl2; 0.5% NP-40) containing complete mini EDTA-free protease inhibitor cocktail (Roche) per 0.5 ml of packed cell volume. After incubation on ice for 10 min, cell lysates were centrifuged at 3000 × g for 10 min at 4°C to remove cell debris. Protein concentrations were determined with Bradford assays (Biorad). One-fiftieth of the supernatants (~50 µg of total protein) was used for protein analysis (total cell lysates in Figure 8 and Supplementary Figures S4 and S5). Aliquots of total cell lysate (~150 µg) were ultracentrifuged at 160,000 × g for 90 min to concentrate the L1 RNP fraction.
After ultracentrifugation, the supernatants were removed, the pellets were resuspended in 50 µl of 1× sodium dodecyl sulphate-polyacrylamide gel electrophoresis sample buffer (Novagen) and 20 µl were used for western blot analysis (RNP fractions in Figure 8 and Supplementary Figures S4 and S5). For control experiments, an EGFP-encoding plasmid, pEGFP-C1 (Clontech), was co-transfected with a corresponding RNase L expression plasmid (WT, R667A and NΔ385) into HeLa-M cells. Total cell lysates were prepared 48 h after transfection (as described above) and were analyzed in western blots. In general, western blots were developed using the ECL substrate (GE Healthcare) and exposed to autoradiography film (Denville Scientific). The Western Bright ECL HRP Substrate (Advansta) was used to detect ORF1p expressed from the pAD2TE1 and pDK500 expression constructs, as well as ORF2p from the pAD500 expression construct. The SuperSignal West Pico Chemiluminescent Substrate (Pierce) was used to detect ORF2p expressed from the pAD2TE1 construct. The following antibodies were used in western blotting experiments: mouse anti-T7 (1:5000 dilution, Novagen).

Quantitative real-time polymerase chain reaction to detect L1 RNA

HeLa-M cells were co-transfected with 0.5 µg of pAD2TE1 and 0.5 µg of one of the following vectors: pFLAG-CMV-2 empty vector, FLAG-tagged WT RNase L or FLAG-tagged RNase L R667A. Forty-eight hours later, total RNA was prepared with Trizol (Gibco/Life Technologies) according to the manufacturer's protocol. After contaminating DNA was removed using a Turbo DNA-free kit (Ambion), cDNA was synthesized using the High Capacity cDNA Reverse Transcription (RT) kit (Gibco/Life Technologies). The resultant cDNA was amplified using Sybr Green PCR master mix (Gibco/Life Technologies) on a StepOnePlus system according to the manufacturer's protocol. Primers were designed to amplify a 91-bp fragment specific to pAD2TE1 mRNA.
The product spanned the junction of the L1 ORF2 gene and the coding sequences of the engineered TAP epitope-tag in pAD2TE1 (Figure 1A). All the primers were designed with Primer Express 3.0 software (Applied Biosystems) and data were analyzed with the 2^−ΔΔCT method (75).

Immunofluorescence

Immunofluorescence experiments to detect the co-expression of the L1 and RNase L proteins were performed as described previously with minor modifications (66). Briefly, HeLa-M cells (8 × 10⁴) were plated onto sterile glass cover slips in each well of six-well tissue culture plates. The next day, adherent cells were co-transfected with 1 µg of pES2TE1 and 1 µg of one of the following constructs: an empty vector (pcDNA 3.0) (Gibco/Life Technologies/Invitrogen), a Myc-tagged WT RNase L or a Myc-tagged catalytically inactive RNase L mutant (R667A), using 6 µl of the Fugene6 transfection reagent. Forty-eight hours after transfection, the cells were fixed with freshly prepared 4% paraformaldehyde (Electron Microscopy Sciences) in 1× PBS for 10 min at room temperature. The fixed cells were washed three times with PBS and permeabilized by treatment with ice-cold anhydrous methanol for 1 min at −20°C. After another 1× PBS wash, cells were blocked by incubation with 3% goat serum and 0.1% Triton X-100 in 1× PBS for 1 h at room temperature. The permeabilized cells then were incubated with primary antibodies overnight at 4°C in a humidified chamber. The cells were washed three times with 1× PBS and incubated with secondary antibodies for 1 h at 37°C in the dark. After four 1× PBS washes, the coverslips were mounted with Vectashield mounting media with DAPI (Vector Labs) and sealed with nail polish to prevent drying. L1 RNA was detected with the MS2-GFP labeling technique (66,76). HeLa-M cells were co-transfected with pAD3TE1 and pMS2-GFP as noted above.
A nuclear-localization signal restricts the GFP-MS2 chimera to the nucleus, where it associates with L1 RNA through MS2-binding sites present in the pAD3TE1 3′-UTR (66). Cytoplasmic GFP signals (white arrows, Figure 7) indicate the location of engineered L1 RNA after nuclear export.

RNase L suppresses the retrotransposition of engineered L1 and IAP elements

We used previously established cell culture assays to determine whether the expression of an RNase L cDNA affects the retrotransposition of engineered human L1 and mouse IAP retrotransposons (Figure 1) (13,41,73). Briefly, HeLa-M cells, which are deficient in endogenous RNase L expression (71) (Figure 3C, Lane 1), were co-transfected with either an engineered retrotransposon [human L1 pJM101/L1.3 (20)] or mouse IAP [pDJ33/440N1neoTNF (13)] and either an empty vector (pFLAG-CMV-2) or a plasmid that expresses an amino-terminal FLAG-tagged version of RNase L [WT RNase L, a catalytically inactive RNase L mutant (R667A), or a constitutively active (NΔ385) RNase L mutant (70)]. While the HeLa-M cells used in this study are deficient in RNase L, it should be noted that other types of HeLa cells (including ATCC CCL-2 and S3) express normal levels of endogenous RNase L (77,78). Both the L1 and IAP constructs contain a retrotransposition indicator cassette in their 3′ ends (Figure 1). The indicator cassette consists of either an antisense copy of a neomycin phosphotransferase gene (mneoI) or an enhanced green fluorescent protein-coding gene (mEGFPI) equipped with a heterologous promoter (Pr′) and polyadenylation signal (lollipop symbol, Figure 1A) (41,65,79). Notably, both the mneoI and mEGFPI indicator cassettes are disrupted by an intron [IVS2 of the γ-globin gene in the L1 constructs (41) and intron 2 of the murine tumor necrosis factor beta (TNF-β) gene in the IAP construct (80)] that is in the same transcriptional orientation as the L1 or IAP retrotransposon.
This arrangement ensures that the reporter gene only will become activated and expressed if the retrotransposon RNA is reverse transcribed and integrated into genomic DNA (Figure 1B). The resultant numbers of G418-resistant foci or EGFP-positive cells serve as a readout of L1 or IAP retrotransposition efficiency (Figure 1C) (41,65). Retrotransposition assays revealed that expression of FLAG-tagged WT or constitutively active RNase L mutant (NΔ385) proteins (70) led to a reduction in L1 retrotransposition efficiency in the mneoI-based reporter assays (~72 and ~97%, respectively) (Figure 2A and B). In contrast, the FLAG-tagged catalytically inactive RNase L mutant (R667A) did not significantly inhibit L1 retrotransposition (Figure 2A and B). As a positive control, we demonstrated that the expression of a FLAG-tagged A3A protein reduced L1 retrotransposition by ~50%. The lower level of A3A expression (Figure 2C) may lead to its reduced suppression of L1 retrotransposition when compared with previous studies (68,81). As a negative control, we demonstrated that the expression of FLAG-tagged RIG-I protein (82), a pathogen recognition receptor for viral RNA, had no significant effect on L1 retrotransposition. Western blot analyses confirmed that the FLAG-tagged RNase L, A3A and RIG-I proteins were expressed in HeLa-M cells (Figure 2C). Moreover, the expression of the RNase L proteins did not significantly impact cell viability (Figure 3). To corroborate the above findings, we next tested whether the expression of FLAG-tagged WT RNase L, constitutively active RNase L mutant (NΔ385) and catalytically inactive RNase L mutant (R667A) proteins could inhibit the mobility of an engineered human L1 that contains the mEGFPI-based retrotransposition cassette located in the 3′-UTR (Figure 1A; pLRE3-mEGFPI) (19,65).
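The mneoI-based readout above reduces to simple arithmetic: mean G418-resistant foci per triplicate dish, corrected by the pcDNA 3.0 NEO toxicity-control dishes, then expressed as percent inhibition relative to the empty-vector condition. A minimal sketch of that calculation, using hypothetical counts rather than the published data:

```python
def mean(xs):
    return sum(xs) / len(xs)

def normalized_efficiency(l1_foci, toxicity_foci):
    """Mean G418-resistant foci per dish, corrected for transfection
    efficiency/toxicity using the matched NEO control dishes."""
    return mean(l1_foci) / mean(toxicity_foci)

def percent_inhibition(test, control):
    """Percent reduction in retrotransposition relative to the empty-vector condition."""
    return 100.0 * (1.0 - test / control)

# Hypothetical triplicate counts, not the published data:
empty_vec = normalized_efficiency([200, 210, 190], [400, 410, 390])  # 0.5
wt_rnasel = normalized_efficiency([56, 58, 54], [400, 400, 400])     # 0.14
print(round(percent_inhibition(wt_rnasel, empty_vec)))  # → 72
```

With these invented counts the sketch reproduces a ~72% inhibition, the magnitude reported for WT RNase L; the actual per-dish counts underlying Figure 2 are not given in the text.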
Once again, we found that both the WT and NΔ385 RNase L proteins inhibited L1 retrotransposition and that the catalytically inactive R667A RNase L mutant did not (Figure 4A and Supplementary Figure S1). The control plasmid (pJM111-LRE3-mEGFPI), which carries two missense mutations in ORF1p rendering it inactive, showed only background EGFP expression. Western blot analyses confirmed that the epitope-tagged RNase L and A3A proteins were expressed (Figure 4B). Notably, the decrease in retrotransposition efficiency (~50%) caused by RNase L expression was less pronounced in the EGFP-based retrotransposition assays when compared with the mneoI-based retrotransposition assay. These differences may be due to the shorter duration of the mEGFPI-based retrotransposition assays (6 days) when compared with the mneoI-based (12-day) assays. To test whether endogenous RNase L restricts L1 retrotransposition, we used siRNA-based experiments to deplete RNase L in a human ovarian cancer cell line, Hey1b (Supplementary Figure S2). Hey1b cells express relatively high levels of RNase L, typical of some human cancer cell lines; in contrast, RNase L is barely detectable in HeLa-M cells (71). Western blot analyses, using a monoclonal antibody that detects endogenous RNase L, revealed that cells transfected with an siRNA pool that targets RNase L exhibited an ~90% reduction in endogenous RNase L protein levels when compared with cells transfected with a control siRNA pool (Figure 5A). Reduction of the endogenous RNase L protein level was evident 24 h after transfection and was maintained for ~96 h (data not shown). Retrotransposition assays using pLRE3-mEGFPI (19,65) revealed that ~0.4% of cells treated with control siRNA became EGFP-positive after 4 days of puromycin drug selection (Figure 5B and C and Supplementary Figure S2), which enriched for cells containing the L1 expression plasmid.
In contrast, ~0.75% of cells treated with siRNA against RNase L were EGFP-positive (Figure 5B and C). As expected, the retrotransposition-defective L1 (pJM111-LRE3-mEGFPI) only exhibited background EGFP expression levels regardless of RNase L depletion (Figure 5B). We next determined if RNase L was able to repress the retrotransposition of an engineered mouse IAP element (Figure 1A, construct pDJ33/440N1neoTNF). Consistent with our L1 findings, expression of the WT and constitutively active RNase L proteins severely reduced IAP retrotransposition efficiency by ~90% (Figure 6). The catalytically inactive RNase L (R667A) mutant did not significantly affect IAP retrotransposition. Again, controls indicated that the expression of A3A reduced IAP retrotransposition, whereas RIG-I expression did not significantly affect retrotransposition (Figure 6B). Together, the above data strongly suggest that RNase L is a potent inhibitor of engineered L1 and IAP retrotransposition and that this inhibition requires RNase L nuclease activity.

RNase L reduces levels of L1 RNA

Since RNase L is a ribonuclease, we next asked if its expression affected steady-state L1 mRNA levels in transfected cells. To accomplish this goal, we designed primers that would amplify a 91-bp fragment that spans the junction of the ORF2/TAP-tag coding region in a transfected L1 expression construct (pAD2TE1) (66). Notably, these primers should specifically amplify L1 mRNA derived from pAD2TE1 and should not amplify endogenous L1 mRNAs, which lack the TAP epitope-tag at the end of the ORF2 coding sequence. Primers capable of amplifying mRNA from the hygromycin phosphotransferase gene (HYG) present on the pCEP4 L1 expression plasmid backbone served as an internal/normalization control. [The polymerase chain reaction (PCR) strategy is illustrated in Figure 1A; see PCR primers beneath the map of pAD2TE1.]
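The qRT-PCR readout just described is analyzed with the 2^−ΔΔCT method: the pAD2TE1-specific Ct is first normalized to the HYG internal control, then expressed relative to the empty-vector condition. A minimal sketch of that calculation, with hypothetical Ct values rather than the published data:

```python
def fold_change_ddct(ct_l1_treated, ct_hyg_treated, ct_l1_control, ct_hyg_control):
    """Relative L1 mRNA level by the 2^-ddCt method (Livak & Schmittgen),
    normalized to the HYG internal control and expressed relative to the
    empty-vector control sample."""
    d_ct_treated = ct_l1_treated - ct_hyg_treated   # normalize within treated sample
    d_ct_control = ct_l1_control - ct_hyg_control   # normalize within control sample
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values, not the published data:
fc = fold_change_ddct(25.3, 21.0, 23.0, 21.0)  # ddCt = 2.3
print(round(fc, 2))  # → 0.2
```

A shift of roughly 2.3 cycles in the normalized L1 signal corresponds to a fold change of ~0.2, i.e. the ~80% reduction in L1 mRNA reported below for WT RNase L; the Ct values here are invented for illustration.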
Quantitative reverse transcriptase PCR (RT-PCR) analysis revealed that the expression of WT RNase L reduced L1 mRNA levels by ~80% (Figure 7A). However, expression of the catalytically inactive R667A RNase L mutant failed to significantly reduce L1 mRNA levels (Figure 7A). In a separate control experiment, we demonstrated that co-transfection of WT RNase L did not affect HYG mRNA levels expressed from pIREShyg (Clontech) (Supplementary Figure S3). These data suggest that RNase L preferentially targets L1 mRNA for degradation, and that its nuclease activity is required for the decrease in L1 mRNA levels. To validate our quantitative RT-PCR (qRT-PCR) findings, we analyzed the effect of RNase L on the accumulation of L1 mRNA in the cytoplasm of cells. To visualize the L1 mRNA, we took advantage of a previously described construct, pAD3TE1, which encodes an L1 element that contains 24 copies of the MS2 RNA binding element in its 3′-UTR (Figure 1A) (66). The co-expression of pAD3TE1 and a plasmid encoding a nuclear-localized MS2-GFP protein would allow the MS2-GFP protein to bind the MS2 RNA sequences in pAD3TE1 mRNA, allowing the indirect visualization of the L1 mRNA via immunofluorescent confocal microscopy (66). We observed punctate L1 cytoplasmic foci in cells co-transfected with pAD3TE1, the MS2-GFP protein expression construct and either the empty vector pcDNA 3.0 or the inactive RNase L R667A mutant (Figure 7B). In contrast, we did not observe punctate L1 cytoplasmic foci in cells co-transfected with pAD3TE1, the MS2-GFP protein expression construct and the WT RNase L expression construct (~200 cells were examined per slide and representative images were captured) (Figure 7B). As an additional control for this experiment, we determined that EBNA-1, which is present on the backbone of the pAD3TE1 expression vector, was still expressed in the presence of WT RNase L.
Together, these data indicate a ribonuclease-dependent effect of RNase L on L1 RNA levels and likely explain, in part, how RNase L adversely affects L1 retrotransposition.

Expression of RNase L leads to a decrease in L1 protein expression

During viral infections, the OAS-RNase L system degrades certain viral and cellular mRNAs, thereby preventing protein synthesis [reviewed in (56)]. Thus, we performed western blot analyses to determine if the observed reduction in L1 mRNA correlated with a reduction in the accumulation of the L1-encoded proteins. To accomplish this goal, we co-transfected HeLa-M cells with pAD2TE1 (66) and either an empty vector (pFLAG-CMV-2) or a FLAG-tagged RNase L expression construct (WT, the constitutively active NΔ385 RNase L mutant or the catalytically inactive R667A RNase L mutant). The transfected cells then were subjected to selection in hygromycin B-containing medium. The total cell lysates and L1 RNP preparations then were subjected to western blot analyses (see 'Materials and Methods' section). An anti-T7 antibody detected the ~40 kDa ORF1p in both total cell lysates and RNP fractions derived from cells co-transfected with pAD2TE1 and either an empty vector (pFLAG-CMV-2) or the catalytically inactive RNase L (R667A) mutant (Figure 8A). Similarly, an anti-TAP antibody detected the ~170 kDa ORF2p in both total cell lysates and RNP fractions. In contrast, ORF1p and ORF2p were markedly reduced in cells transfected with pAD2TE1 and either the WT RNase L or constitutively active NΔ385 RNase L mutant. Notably, RNase L did not affect the level of endogenous ribosomal S6 protein in total cell lysates and RNP fractions. We also observed an RNase L-dependent reduction of ORF1p in cells transfected with pDK500 (a T7-gene 10 epitope-tagged ORF1p expression construct), as well as a reduction in ORF2p in cells transfected with pAD500 (a TAP epitope-tagged ORF2p expression construct) (Supplementary Figures S4 and S5, respectively).
Control assays again demonstrated that the WT and mutant FLAG-tagged RNase L proteins are expressed at similar levels in total cell lysates 2 days after transfection (Figure 8B). Importantly, we did not observe an RNase L-dependent reduction in HeLa-M cells co-transfected with pEGFP-C1, suggesting that RNase L may preferentially target L1 RNA, thereby adversely affecting the expression of the L1 proteins (Figure 8B).

[Figure legend fragment: Figure 2B. Data are shown as the mean ± standard deviation (SD) from a single experiment with three technical replicates. The experiment was conducted three times (biological replicates) with similar results. No statistically significant difference was found with one-way ANOVA and post hoc tests. (C) Protein expression analyses: The WT RNase L, catalytically inactive RNase L mutant (R667A) and constitutively active (NΔ385) RNase L mutant were detected from total cell lysates in western blots with anti-RNase L antibody 2 days after transfection. β-Actin served as loading and transfer control. Size standards are indicated in kDa at the left of the gel.]

Expression of RNase L prevents the formation of L1 cytoplasmic foci

Previous immunofluorescence studies demonstrated that the L1-encoded proteins and L1 RNA begin to appear as discrete cytoplasmic foci that often associate with stress granules and/or processing bodies (P-bodies) at 48 h post-transfection (66,85). Because the expression of RNase L adversely affected L1 mRNA levels and, in turn, ORF1p and ORF2p protein accumulation, we next examined whether RNase L expression adversely affects L1 cytoplasmic foci accumulation.
To accomplish this goal, we co-transfected HeLa-M cells with an L1 expression vector (pES2TE1) (66) including an HA-tagged ORF2p, and an empty vector (pcDNA 3.0), a plasmid expressing the amino-terminal Myc-tagged WT RNase L, or a catalytically inactive R667A RNase L mutant expression construct.

[Figure legend fragment: HeLa-M cells were co-transfected with an expression construct containing an active human L1 (pLRE3-mEGFPI) and an empty vector (pFLAG-CMV-2), a plasmid encoding an amino-terminal FLAG-tagged RNase L expression plasmid or an amino-terminal HA-tagged A3A expression plasmid. Experiments with a retrotransposition-defective L1 pJM111-LRE3-mEGFPI served as a negative control. The cells were subjected to puromycin selection for 4 days after transfection. Fluorescence Activated Cell Sorting (FACS) was then used to screen for EGFP-positive cells. The X-axis indicates the construct name. The Y-axis indicates the percentage of EGFP-positive cells. For each sample, 2 × 10^5 cells were analyzed and the percentage of EGFP-positive cells was calculated using the FlowJo software package. Data were analyzed with one-way ANOVA with post hoc tests and are shown as mean ± SD from a single experiment with three technical replicates. *P < 0.01 (Dunnett's Multiple Comparison Test). The experiment was conducted four times (biological replicates) with similar results. (B) Protein expression analyses: The WT RNase L, catalytically inactive RNase L mutant (R667A), and constitutively active (NΔ385) RNase L mutants were detected in total cell lysates by western blot with anti-RNase L antibody 2 days after transfection. The A3A protein was detected using an anti-HA antibody. β-Actin served as loading and transfer control. Size standards are indicated in kDa at the left of the gel.]
Consistent with the findings reported above, immunofluorescent confocal microscopy revealed the presence of ORF2p cytoplasmic foci in the presence of the empty vector and the catalytically inactive R667A RNase L mutant expression construct (Figure 9). In contrast, L1 ORF2p foci were not observed upon co-expression of WT RNase L. Additional control experiments revealed expression of the EBNA-1 protein from the pCEP4 plasmid backbone in either the presence or absence of WT RNase L (Figure 9, green signal), confirming that the plasmids were successfully transfected into cells. Moreover, we confirmed that both the WT and R667A RNase L mutant were expressed in HeLa-M cells, and exhibited a diffuse cytoplasmic localization (Figure 9, magenta signal). Although RNase L previously was reported to associate with stress granules after viral infection (86), we did not detect co-localization of RNase L and ORF2p, possibly because RNase L degraded L1 RNA, thereby inhibiting L1 protein expression and RNP formation. Together, the above data are not inconsistent with the conclusion that RNase L may preferentially target L1 transcripts for degradation.

Mechanism of L1 restriction by RNase L

Our findings strongly suggest that the relatively general antiviral pathway mediated by RNase L may also restrict certain non-LTR and LTR retrotransposons. The available data indicate that transient expression of RNase L inhibits both L1 and IAP retrotransposition (Figures 2, 4 and 6). Conversely, the siRNA-mediated knockdown of endogenous RNase L increased L1 retrotransposition by ~87% (Figure 5). Regarding our IAP results, while there are limitations in drawing conclusions from cross-species transfection experiments, our findings suggest a relatively general role for RNase L in restricting retrotransposons that have different integration mechanisms.
IAP elements in mice are the result of transmission of the viral progenitor IAPE (87), and therefore, inhibition of IAP transposition by RNase L is not inconsistent with its antiviral function. L1 and IAP retrotransposition are not suppressed in the presence of an RNase L containing a single amino acid substitution (R667A), which abrogates its enzymatic activity. These data indicate that a principal mechanism by which RNase L inhibits L1 and IAP retrotransposition likely involves the post-transcriptional cleavage of retrotransposon RNA. In addition, it is unknown if R667A also affects RNA binding, but if it does, this could explain why the catalytically inactive mutant cannot be co-localized with L1 RNA. The expression of WT RNase L, as well as a constitutively active RNase L mutant (NΔ385), led to a reduction in L1 RNA levels; this reduction, in turn, led to a decrease in both ORF1p and ORF2p expression (Figures 7, 8 and 9). In contrast, RNase L expression had little effect on HYG mRNA, which also is expressed from the L1 expression construct (pCEP4) backbone (Figure 7). These findings and a lack of ribosomal RNA cleavage products (data not shown) suggest that RNase L preferentially targets L1 RNA for degradation. The specificity of RNase L has been previously studied (88,89). RNase L cleaves single-strand RNA, predominantly after UpAp and UpUp dinucleotide sequences (88,89). However, the structural context of the RNA substrate greatly influences the choice of cleavage sites (90). Certain viral and cellular single-strand RNAs are subject to degradation. For example, ribosomal RNA present in intact ribosomes can be cleaved by RNase L, producing a characteristic pattern of discrete products in some IFN-treated and virus-infected cells (91,92). The molecular mechanism by which RNase L targets L1 RNA requires further study. We hypothesize that double-stranded structures (e.g.
stem loops) within L1 RNA may activate OAS to produce microdomains of 2-5A (an activator of RNase L) from ATP. Alternatively, RNAs produced from the L1 sense and antisense promoters (SP and ASP, respectively) (46) may hybridize to form a region of dsRNA that activates OAS (46,93). This proposed localized activation of OAS and RNase L may then result in the targeted degradation of L1 RNA. The above model has precedent.

[Figure legend: Figure 8. Expression of RNase L reduces L1 protein expression. (A) L1 protein expression: HeLa-M cells were co-transfected with pAD2TE1 and either an empty vector (pFLAG-CMV-2) or a plasmid that encodes an amino-terminal FLAG-tagged RNase L expression plasmid. Two days after transfection, cells were selected with hygromycin-containing medium for an additional 4 days, when total cell lysates and L1 RNPs were prepared. Western blotting, using anti-T7 and anti-TAP antibodies, was used to detect ORF1p and ORF2p, respectively. Shown are two exposures of the ORF2p anti-TAP western blot. Endogenous ribosomal S6 protein was used as the loading/transfer control. β-Actin detection discriminated the total cell lysate (left side of panel) from the L1 RNP fractions (right side of panel). The experiments were repeated twice (biological replicates) with similar results. Shown are data from one representative experiment. (B) RNase L does not inhibit exogenous EGFP protein expression: HeLa-M cells were co-transfected with pEGFP-C1 and either an empty vector (pFLAG-CMV-2) or a plasmid that encodes an amino-terminal FLAG-tagged RNase L expression plasmid. Total cell lysates were harvested and the expression of RNase L and GFP was detected in western blot experiments using anti-RNase L and anti-GFP antibodies at 48 h after transfection. GAPDH served as a loading and transfer control.]
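The UpA/UpU dinucleotide preference of RNase L noted above can be illustrated as a naive sequence scan for candidate cleavage positions. This toy function ignores RNA structure, which in reality strongly influences site choice; it only shows the sequence rule.

```python
def candidate_cleavage_sites(rna: str):
    """Return 0-based positions immediately after UpA/UpU dinucleotides,
    the sites RNase L preferentially cleaves in single-stranded RNA.
    (RNA structural context also shapes real cleavage-site choice.)"""
    rna = rna.upper().replace("T", "U")  # accept DNA-style input
    return [i + 2 for i in range(len(rna) - 1)
            if rna[i] == "U" and rna[i + 1] in "AU"]

print(candidate_cleavage_sites("GGUAACUUCGUA"))  # -> [4, 8, 12]
```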
For example, it previously was hypothesized that partially double-stranded RNAs, such as replicative intermediates from certain picornaviruses including encephalomyocarditis virus, could lead to localized OAS-RNase L activation (94,95). RNAs linked to double-strand RNA were preferentially degraded by RNase L when compared with single-stranded RNAs lacking a double-strand segment (94). By analogy, there is evidence for another localized activation model involving production of a different small-molecule second messenger, cyclic adenosine monophosphate (cAMP). The distribution in rat neonatal cardiac myocytes of enzymes that synthesize and degrade cAMP produces microdomains of cAMP that specifically activate a subset of localized protein kinase A molecules (96). Thus, in principle, 2-5A microdomains would form near the sites of OAS complexed with its RNA activators. Within these 2-5A microdomains, RNase L would become active, causing cleavage of the RNA stimulators of OAS, in this case L1 RNA. This model could be relevant to both retrotransposition and viral infections and warrants further investigation. Finally, it is noteworthy that RNase L may not be the only host protein that regulates L1 retrotransposition by nucleic acid degradation. For example, it is hypothesized that the 3′-5′ exonuclease Trex1 inhibits L1 retrotransposition by degrading L1 cDNA intermediates (48). Notably, we did not observe co-localization of RNase L with L1 cytoplasmic foci (Figures 7 and 8). There are a number of possible explanations for this result. First, the degradation of L1 RNA by WT or constitutively active RNase L would be predicted to inhibit the formation of L1 RNPs, thereby hampering the visualization of co-localized foci. Second, the inability to observe co-localization of L1 cytoplasmic foci with the catalytically inactive R667A RNase L mutant could reflect the transient nature of the association of RNase L with its RNA substrate (97).
Similarly, the absence of RNase L in the interactome from isolated L1 ORF1 protein and its RNPs is not inconsistent with a transient interaction between L1 RNA and RNase L (54). Lastly, it remains possible that the ectopic expression of RNase L leads to an artifactual degradation of L1 RNA. However, this scenario is unlikely because RNase L did not inhibit HYG mRNA production or EGFP protein production, and siRNA-mediated depletion of RNase L from Hey1b cells led to a modest increase in L1 retrotransposition (Figures 5, 7A and 8B).

[Figure legend: Figure 9. Expression of RNase L blocks L1 RNP formation. HeLa-M cells were co-transfected with pES2TE1 and either an empty vector (pcDNA 3.0) or a plasmid that encodes an amino-terminal Myc-tagged RNase L expression plasmid. Immunofluorescent confocal microscopy was used to examine L1 ORF2p accumulation in cytoplasmic foci by exploiting the FLAG-HA epitope-tag in pES2TE1 48 h after transfection. The top labels indicate the antibodies used to detect the indicated proteins: anti-HA-ORF2p, red; anti-EBNA-1, green; anti-Myc RNase L, magenta. The labels on the left side of the figure indicate the empty vector or RNase L constructs that were co-transfected into cells. The rightmost column indicates the merged overlay staining. L1 ORF2p formed discrete cytoplasmic punctate localization in co-transfection experiments performed with the empty vector and the RNase L catalytically inactive mutant (R667A), but not with WT RNase L. For each condition, either two or three slides were examined per experiment. About 200 cells were examined per slide and representative images were captured. The experiment was conducted three times (biological replicates) with similar results.]

Implications for RNase L in the control of retrotransposons

RNase L was previously suggested to be involved in prostate carcinogenesis after being mapped to the hereditary prostate cancer 1 locus (98).
Mutations in RNase L discovered from linkage analysis include two protein-inactivating mutations (Δ157-164X and E265X), a mutation that abrogates translation (M1I) and a missense variant 1385G→A (R462Q) that reduces RNase L activity by 3-fold (99). The connection between RNase L and prostate cancer was further expanded to other types of cancer. Genetic variations in RNASEL have been identified in cancers of the head and neck, uterus, cervix and breast (100). They are also associated with disease aggressiveness and metastasis in familial pancreatic cancer (101) and with age of onset of hereditary nonpolyposis colon cancer (102). Studies from our group and others found that loss-of-function mutations in RNASEL potentially contributed to cancer development by dysregulating apoptosis of cancer cells (71,103-105). However, inconsistent findings on the same RNASEL mutations among studies, some that show an association with cancer and others that do not, suggest that RNase L might act as a modifier of disease progression with possible interactions between environmental factors and genetics [reviewed in (99,106)]. In this regard, it would be interesting to see if a loss of RNase L activity correlates with an increase in L1 retrotransposition activity in certain tumors. Recent studies have shown somatic L1 retrotransposition activity in a subset of colorectal, liver and lung tumors (107-110). In conclusion, we have identified a potential restriction mechanism for retrotransposition involving the antiviral protein RNase L. These findings emphasize the complex and dynamic interplay between retrotransposons and the cell. Our data provide evidence that RNase L inhibits L1 RNA accumulation and the subsequent formation of L1 RNPs, thereby impairing the completion of the L1 retrotransposition cycle. By inhibiting L1 retrotransposition in somatic cells, RNase L might contribute to the maintenance of genomic stability.
Comparing the Expressions of Vitamin D Receptor, Cell Proliferation, and Apoptosis in Gastric Mucosa With Gastritis, Intestinal Metaplasia, or Adenocarcinoma Change

Background: This study aimed to compare the expression of vitamin D receptor (VDR), cell proliferation, and apoptosis in the gastric mucosa of patients with gastritis, intestinal metaplasia (IM), and adenocarcinoma using artificial intelligence. Material and Methods: This study retrospectively enrolled patients at the Keelung Chang Gung Memorial Hospital from November 2016 to June 2017, who were diagnosed with gastric adenocarcinoma. The inclusion criteria were patients' pathologic reports that revealed all compartments of Helicobacter pylori infection, gastritis, IM, and adenocarcinoma simultaneously in the same gastric sample. Tissue slides after immunohistochemical (IHC) staining were transformed into digital images using a scanner and counted using computer software (QuPath and ImageJ). IHC staining included PA1-711 antibody for VDR, Ki67 antigen for proliferation, and M30 antibody CK18 for apoptosis. Results: Twenty-nine patients were included in the IHC staining quantitative analysis. The mean age was 69.1 ± 11.3 y/o. Most (25/29, 86.2%) patients had poorly differentiated adenocarcinoma. The mean expression of Ki67 and CK18 increased progressively from gastritis and IM to adenocarcinoma, with statistical significance (P < 0.05). VDR expression did not correlate with Ki67 or CK18 expression. Survival time was only correlated with tumor stage (correlation coefficient = −0.423, P value < 0.05), but was not correlated with the expression of VDR, Ki67, and CK18. Conclusion: Ki67 expression and CK18 expression progressively increased in the areas of gastritis, IM, and adenocarcinoma. No correlation between VDR expression and Ki67 or CK18 expression was found in this study.

INTRODUCTION

Gastritis and intestinal metaplasia (IM) are common findings in patients with Helicobacter pylori (H.
pylori) infection (1,2). The prevalence of IM in patients with H. pylori infection is 30-40% at the age of 50 years old (1,2). Gastric epithelial hyperproliferation has been observed in patients with gastritis and IM caused by H. pylori infection (3-5). Previous studies have also revealed that IM is associated with an increased risk of gastric cancer (6-9). Detection of IM in gastric adenocarcinoma samples is a common histological finding (9). Apoptotic cells are rare in the glandular neck region (the generative cell zone) of normal gastric mucosa. With progression of atrophic gastritis, the generative cell zone shifts downward and a relatively large number of apoptotic cells occur (10). H. pylori infection induces apoptosis in gastric epithelial cells (10). The apoptotic effect of H. pylori could result from molecules produced by H. pylori or from the host immune/inflammatory response (10). Molecules such as cytotoxin (VacA), lipopolysaccharide, or nitric oxide may directly induce apoptosis (10,11). Many cytokines produced by type 1 T helper cells, such as TNF-α and IFN-γ, markedly potentiate apoptosis. The balance between cell proliferation and apoptosis is important for carcinogenesis in precancerous lesions, such as IM and H. pylori infection (10,11). Gut epithelial vitamin D receptor (VDR) signaling appears to play an essential role in controlling mucosal inflammation and thus could be a useful therapeutic target in the management of some gastrointestinal diseases (12). Although 1,25-dihydroxyvitamin D [1,25(OH)2D3] is not produced by the stomach, it affects the immune regulatory responses via the VDR of the stomach (13-15). Vitamin D deficiency has been associated with risk of several cancers, including gastric cancer (16). VDR belongs to a superfamily of steroid hormone receptors that act as transcription factors for target genes.
Moreover, 1,25(OH)2D3 was reported to be associated with the inhibition of cell cycle progression, induction of cell apoptosis, and differentiation of various types of cancer cells. Hence, 1,25(OH)2D3 has been reported to inhibit proliferation and exert anti-tumor effects via VDR (15,16). VDR was also reported to play an important role in gastric mucosa homeostasis and host protection against H. pylori infection (13). VDR can regulate cathelicidin antimicrobial protein (CAMP) and has an antimicrobial activity against H. pylori (13,17). Past studies found an important role of the VDR/CAMP pathway in innate immunity and an anti-inflammatory mechanism of vitamin D. VDR mRNA expression levels were significantly up-regulated in H. pylori-infected patients and positively correlated with chronic inflammation scores. There was a significant positive correlation between VDR and CAMP mRNA expression in H. pylori-positive gastric mucosa (13). An animal model study using wild-type and VDR-knockdown mice demonstrated that VitD3 inhibits H. pylori infection by enhancing the expression of VDR and CAMP. VDR-knockdown mice were more susceptible to H. pylori infection. In cultured mouse primary gastric epithelial cells, the VitD3/VDR complex binds to the CAMP promoter region to increase its expression (17). 1,25(OH)2D3 binding to VDR can transcriptionally activate the expression of a number of target genes, finally executing the antitumor functions. Many genes have been identified as its direct targets, such as p21 (18,19) and c-Myc (20), which are involved in different signaling pathways during tumorigenesis. A previous study has shown that vitamin D suppresses proliferation and stimulates cell cycle arrest in gastric cancer cells but not in immortalized normal gastric cells (12). Vitamin D increased p21 expression and decreased cyclin-dependent kinase 2 (CDK2) expression via the VDR route (12). The hypothesis of this study was that patients infected with H.
pylori might have gastritis, gastric IM, and gastric adenocarcinoma. VDR expression in the stomach may differ in the areas of gastritis, IM, and gastric adenocarcinoma. When compared with the area of gastritis, the pathologic presentation of cell proliferation and apoptosis may be different in the areas of IM and adenocarcinoma. A progressive increase or decrease in cell proliferation or apoptosis may be detected in the areas of gastritis, IM, and gastric cancer, respectively. The current study aimed to evaluate VDR expression, cell proliferation, and apoptosis in gastric adenocarcinoma samples using digital quantitative immunohistochemistry (IHC) staining analyses.

Subjects and Tissue Samples

This study retrospectively enrolled patients who were diagnosed with gastric adenocarcinoma at the Keelung Chang Gung Memorial Hospital (KCGMH) from November 2016 to June 2017. Gastric samples that were confirmed as malignant after pathologic examination were stored in the tissue bank of KCGMH. For a patient with both endoscopic biopsy and surgical resection specimens, samples from surgical resection were used for IHC staining expression analyses in this study. For patients without an operation for gastric malignancy, endoscopic biopsy samples were used for IHC analyses. The inclusion criteria were patients' pathologic reports that revealed all compartments of H. pylori infection, gastritis, IM, and adenocarcinoma simultaneously in the same gastric sample. The exclusion criteria were incomplete records of demographic data, tumor stage, and clinical course. Patient demographics, tumor location in the stomach, tumor stage, and survival time were recorded. No chemotherapy, radiotherapy, or other therapies were performed in these patients before endoscopic biopsy or surgical resection. Two pathologists (Dr. Chang LC and Dr. Cheng TC) provided all the equipment and histological examinations. Every specimen was reviewed by an experienced pathologist (Dr.
Chang LC) under microscopic examination to localize the areas of gastritis, IM, and adenocarcinoma. Commercial kits for IHC staining and publicly available software applications for digital image creation and quantitative analysis were used. This study was approved by the Ethics Committee of the Chang Gung Memorial Hospital (IRB No 103-7463A3, 105-4426C).

Histology and IHC Stain for H. pylori Detection and IM

Histology (hematoxylin and eosin) and IHC H. pylori antibody staining (polyclone, Zytomed Systems GmbH, Berlin, Germany) were performed to confirm H. pylori infection. Histological sections of all specimens were routinely examined to determine gastritis, IM, and malignancy. Because the gastric mucosa adjacent to malignancy is always infiltrated by inflammatory cells, a diagnosis of gastritis was made for this non-tumor, non-IM mucosa by pathologists. IM was detected based on the morphological features in the stomach observed by H&E and Alcian blue staining (6,21).

Digital Quantitative Analyses for IHC Images

Tissue sections were cut from the tissue blocks at 4 µm and stained with H&E and IHC. The slides were scanned at 400× magnification using a Hamamatsu Nanozoomer S360 scanner and NDP image (Hamamatsu, Japan). Digital data analysis was performed using computer software to prevent manual or inter-observer bias in IHC score counting. The software packages ImmunoRatio (an ImageJ plugin) (22) and QuPath (23) were applied for digital slide bioimage analyses (24-26). ImmunoRatio calculates the percentage of positively stained nuclear area (labeling index) by using a color deconvolution algorithm for separating the staining components (diaminobenzidine and hematoxylin) and adaptive thresholding for nuclear area segmentation (25). Every digital image was quantitatively analyzed in three adjacent areas: gastritis, IM, and adenocarcinoma.
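The labeling index described above is ultimately a ratio of DAB-positive to total stained nuclear area. A deliberately simplified pure-Python stand-in follows; it uses a crude RGB heuristic rather than ImmunoRatio's actual color-deconvolution algorithm, and the pixel values are made up for illustration.

```python
def labeling_index(pixels):
    """Simplified DAB labeling index: percentage of stained-nucleus pixels
    classified as DAB-positive (brownish) rather than hematoxylin (bluish).
    `pixels` is a list of (r, g, b) tuples sampled from nuclear regions.
    This heuristic only illustrates the ratio being computed; ImmunoRatio
    performs true color deconvolution and nuclear-area segmentation."""
    dab = hema = 0
    for r, g, b in pixels:
        if r > b:        # brownish pixel -> DAB (positive)
            dab += 1
        elif b > r:      # bluish pixel -> hematoxylin (negative)
            hema += 1
        # pixels with r == b are treated as unclassified background
    total = dab + hema
    return 100.0 * dab / total if total else 0.0

# Tiny synthetic sample: 3 DAB-like pixels, 1 hematoxylin-like pixel.
print(labeling_index([(150, 100, 60)] * 3 + [(60, 70, 150)]))  # -> 75.0
```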
PA1-711 Antibody for VDR

After appropriate blocking and management of gastric tissue, VDR staining was performed using an anti-VDR polyclonal antibody (PA1-711, Thermo Scientific, Fremont, CA, USA).

M30 CytoDEATH Antibody for Detecting Apoptosis

Mouse monoclonal antibody (Clone M30, mouse IgG2b) was used to detect apoptosis in epithelial cells (caspase cleavage product of cytokeratin 18, CK18). Apoptosis was detected by applying the M30 antibody to fixed samples, and then secondary detection systems were used for IHC staining. The M30 CytoDEATH antibody (Roche Diagnostics GmbH, Mannheim, Germany) binds to a caspase-cleaved, formalin-resistant epitope of the CK18 cytoskeletal protein (27).

Gastric Cancer Stage

Gastric cancer staging was performed according to the American Joint Committee on Cancer (AJCC, 7th edition) (28).

Statistical Analysis

Continuous data are expressed as mean ± standard deviation (SD). A two-sample t-test was used to compare the mean values. Categorical data were analyzed using chi-square and Fisher exact tests. One-way analysis of variance (ANOVA) was used to compare the mean values of multiple samples. The Scheffe method was applied for post-hoc analysis. All statistical tests were two-tailed. Differences were considered statistically significant at p < 0.05. Statistical analyses were performed using the Statistical Package for the Social Sciences (version 18.0) for Windows (PASW, Chicago, IL, USA).

RESULTS

Initially, 69 gastric tissue samples were collected from the tissue bank of KCGMH. Among these 69 samples, 32 originated from the same patients (from endoscopic biopsy and surgical resection). Five samples were inadequate for IHC staining examination. We were unable to trace the origins of three samples. Finally, 29 patients (24 surgical resections and 5 endoscopic biopsies) were included in the IHC staining analysis (Figure 1, Study flow).
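The one-way ANOVA used in the statistical analysis above reduces to a ratio of between-group to within-group mean squares. A self-contained sketch follows, using hypothetical expression percentages for the three areas (not the study's data); the Scheffe post-hoc comparisons would then be computed from the same sums of squares.

```python
def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA comparing k group means
    (e.g. Ki67 expression in gastritis, IM and adenocarcinoma areas)."""
    k = len(groups)                         # number of groups
    n = sum(len(g) for g in groups)         # total observations
    grand = sum(sum(g) for g in groups) / n
    # between-group sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares (df = n - k)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical expression percentages per area:
F = one_way_anova_F([30, 35, 32],   # gastritis
                    [45, 50, 48],   # IM
                    [70, 75, 72])   # adenocarcinoma
print(round(F, 1))  # -> 192.9 (large F: group means clearly differ)
```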
IHC stains of PA1-711 antibody (VDR), Ki67 antigen (cell proliferation), and M30 antibody CK18 (cell apoptosis) were performed for all samples from these 29 patients. The demographic and clinical characteristics of the patients are listed in Table 1. Digital images of H. pylori infection are shown in Figure 2 (H&E and antibody stain). Figures 3 and 4 show digital images of H&E and CK18 stains in different sites (total, gastritis, IM, and adenocarcinoma). Digital analyses of IHC expression using ImageJ and ImmunoRatio software are presented in Figure 5 and Table 2. The mean expression of Ki67 and CK18 increased significantly (P < 0.05) from gastritis and IM to adenocarcinoma. However, mean values of VDR expression showed no statistically significant difference among gastritis, IM, and adenocarcinoma by ANOVA (P = 0.404). Table 3 shows the Scheffe post-hoc analysis. The main difference in the expression of Ki67 and CK18 was between gastritis and cancer. The expression of Ki67 and CK18 was similar between the IM (premalignancy) and adenocarcinoma (malignancy) sites (Figure 7). When a correlation analysis between VDR and Ki67 or CK18 was performed, VDR expression was not correlated with Ki67 or CK18 expression in the gastritis, IM, and adenocarcinoma sites (Table 4). Survival time was only correlated with tumor stage (correlation coefficient = −0.423, P-value < 0.05), but was not correlated with the expression of VDR, Ki67, and CK18.

DISCUSSION

VDR can regulate cathelicidin antimicrobial protein (CAMP) and has an antimicrobial activity against H. pylori (13,17). Past studies found an important role of the VDR/CAMP pathway in innate immunity and an anti-inflammatory mechanism of vitamin D (13,17). VDR mRNA expression levels were significantly up-regulated in H. pylori-infected patients and positively correlated with chronic inflammation scores. There was a significant positive correlation between VDR and CAMP mRNA expression in H. pylori-positive gastric mucosa (13).
In the current study, quantitative digital analyses were performed to prevent intra- or inter-observer bias. All three parts (gastritis, IM, and cancer) were adjacent and came from the same patient. Because the major type of gastric adenocarcinoma was poorly differentiated in this study, similar VDR expression in the mucosa adjacent to the tumor may be found in patients with poorly differentiated gastric adenocarcinoma. The relationship between serum vitamin D deficiency and gastric cancer development remains under debate (13-15, 33, 34). Because 1,25(OH)2D3 is not produced by the stomach, 1,25(OH)2D3 affects the immune regulatory responses via the VDR of the stomach (12-14). However, no serum vitamin D level was recorded in this study because checking the vitamin D level is not part of the routine guidelines for gastric adenocarcinoma treatment. To understand the relationship between serum vitamin D level and gastric VDR expression, it is necessary to include more patients with both serum vitamin D levels and VDR expression in the gastric mucosa assessed by endoscopic biopsy. It is common to observe increased proliferation and decreased apoptosis in malignant cells (10). Our hypothesis was an increased Ki67 (cell proliferation) and a decreased CK18 (apoptosis) in cancer, but the results of our study revealed that both Ki67 and CK18 expression increased in IM and malignant tissues. Apoptosis is defined by characteristic changes in nuclear morphology, including chromatin condensation and fragmentation, overall cell shrinkage, and blebbing of the plasma membrane. CK18 expression was high (mean 77.6%) in gastric adenocarcinoma in the current study. The expression of CK18 was also high in the non-tumor gastritis mucosa in the current study (mean 66.9%). Chronic inflammation and IM are associated with increased apoptosis, which primarily occurs at the mucosal surface and not in the deeper layers (27). In the current study, the mucosa of gastritis was adjacent to the IM and the malignant areas.
Chronic inflammation may induce high CK18 expression (35). Previous studies on apoptosis in gastric adenocarcinoma mostly used cell lines, and the counting scores for apoptosis were variable (10, 35). CK18 expression in the gastritis mucosa adjacent to IM and gastric adenocarcinoma (especially the poorly differentiated type) may be high in nature. An imbalance between cell proliferation and apoptosis may be a reason for, and a mechanism of, carcinogenesis (10, 17). Wagner et al. (29) found decreased apoptotic activity following increased proliferation in chronic H. pylori infection; they assumed that increased proliferation might play a role in carcinogenesis. In normal gastric mucosa, apoptotic cells are rare and lie in the generative cell zone near the glandular neck region. During the sequential change from atrophic gastritis to IM and dysplasia, the number of apoptotic cells increases (10). Numerous molecules produced by H. pylori, including cytotoxin (VacA), lipopolysaccharide, monochloramine, and nitric oxide, may directly induce apoptosis. Moreover, H. pylori-stimulated host inflammatory/immune responses lead to the release of large amounts of inflammatory mediators.

This study applied artificial intelligence (AI) methods, including a high-resolution scanner (Hamamatsu Nanozoomer S360) for digital image creation, and QuPath and ImageJ/ImmunoRatio software for IHC scoring. This software can be downloaded freely from the respective websites. The strength of AI use is that it is rapid and avoids intra- or inter-observer bias, and AI-based IHC score counting is easier than traditional manual counting. The samples in this study were paraffin-embedded tissue sections from a tissue bank. Paraffin-embedded tissues are frequently used for pathological examinations, including IHC analyses. Patients with H. pylori infection may develop gastritis, IM, and gastric adenocarcinoma, and these conditions can all be detected in a single slide from surgical or biopsy specimens.
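The final output of ImmunoRatio-style digital scoring is essentially a labeling index: the fraction of stained area over total area. The sketch below illustrates only that last ratio step on made-up intensity values; the real ImageJ/ImmunoRatio pipeline first performs color deconvolution of DAB and hematoxylin channels, which is not reproduced here:

```python
# Toy IHC scoring: percentage of pixels above a staining-intensity
# threshold. Real ImmunoRatio analysis works on deconvolved DAB-stained
# images; this only illustrates the final labeling-index ratio.

def labeling_index(pixels, threshold):
    """Percentage of pixels whose stain intensity exceeds the threshold."""
    positive = sum(1 for p in pixels if p > threshold)
    return 100.0 * positive / len(pixels)

# Hypothetical intensity values (0 = unstained, 255 = strongly stained).
tile = [10, 30, 200, 220, 15, 180, 250, 40, 190, 60]
score = labeling_index(tile, threshold=128)  # 5 of 10 pixels positive
```

A score computed this way per scanned field is what allows the per-site means (e.g., 77.6% in adenocarcinoma vs. 66.9% in gastritis) to be compared statistically.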
IHC examination can evaluate the expression of VDR, Ki67, and CK18 at precise locations in one slide simultaneously. However, there are some limitations to using AI for quantitative IHC tests such as ours. First, precisely targeting the areas for scanning and scoring still depends on the pathologists. Second, reports on the correlation between quantitative IHC analyses and gene expression in H. pylori-related gastric adenocarcinoma are currently rare. Third, the application of ImageJ/ImmunoRatio software to other IHC examinations may require further study. Hence, further studies are needed to validate and clarify the correlation between digital quantitative IHC tests and gene expression examinations. In conclusion, cell proliferation by Ki67 expression and apoptosis by CK18 expression progressively increased across the areas of gastritis, IM, and adenocarcinoma. Ki67 expression positively correlated with CK18 expression in gastritis. No association between VDR expression and Ki67 or CK18 expression was found in this study. Survival time was correlated only with tumor stage and not with the expression of VDR, Ki67, and CK18.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

ETHICS STATEMENT

The Institutional Review Board of the Chang-Gung Memorial Hospital approved this research (IRB No. 103-7463A3, 105-4426C). All participants agreed to the study conditions and provided informed consent before enrollment in this study.

AUTHOR CONTRIBUTIONS

L-WC, L-CC, and C-CH contributed to conception and design of the study. L-WC organized the database, wrote the first draft of the manuscript, and obtained the grant. L-WC, C-CH, and C-CL performed the statistical analysis. L-WC and L-CC wrote sections of the manuscript. L-CC and T-CC provided all the equipment and histological examinations.
All authors contributed to manuscript revision, read, and approved the submitted version. This study was supported by the Chang Gung Medical Foundation and Keelung Chang Gung Memorial Hospital Tissue Bank (CRRPG 2H0052).
Cognition application in preschool teaching and learning through the communication component in the National Standard Preschool Curriculum

Article history: Received 22 August 2016, received in revised form 17 October 2016, accepted 28 October 2016.

This study aims to identify teachers' shortcomings in teaching preparation and in applying cognition in teaching and learning through the communication component of the National Standard Preschool Curriculum. A teacher's Daily Teaching Plans (DTP) for one week of the communication component were analyzed using the hermeneutics method. The results showed two main shortcomings in the DTP: the repetition of the same activities and the use of uninteresting teaching aids. The findings also showed that teachers are still lacking in implementing cognition application in teaching and learning of the communication component. Guidance from lecturers and mentors is therefore needed to address this issue. Teachers also need to attend special training courses to gain the knowledge and experience required to improve the weaknesses identified.

Introduction

Literacy and language skills are vital and necessary for every child, because language and literacy are the means of communication used by humans throughout life. From infancy until the age of eight years, children experience a critical period of change and development in the physical and mental skills they will use for the rest of their lives (Noor Aini, 2014). The Ministry of Education (2010) emphasizes the language skills of listening, speaking, reading, and writing through the communication component in the National Preschool Standard Curriculum. Activities based on communication and language need to be applied in preschool, and students' language development is assisted by the social interaction of their environment.
Watching appropriate movies or listening to music helps develop the cognitive level of preschool students through the process of assimilation (Nachiappan, 2015). Meanwhile, reading is an important component of students' cognitive development, as reading affects the processes of thought, emotion, imagination, and personality of the child (Husin and Isa, 2012). Therefore, preschool teachers, including practicum teachers under training in preschool, need to emphasize the teaching of language and literacy. Teaching strategies are important: to ensure that learning objectives are achieved, a teacher must know how to manage, administer, and control the students in the classroom so that the content of the lesson can be understood by all students. Teachers no longer act merely as a source of knowledge, but as facilitators, change agents, and a source of inspiration to students (Razak and Nordin, 2013). To make teaching and learning effective, interesting, and fun, a teacher must be creative, innovative, and highly knowledgeable.

Literature review

Language and human thought are closely interlinked, because language is not formed in surface structure alone: a word or phrase can carry multiple meanings, and speech must be accompanied by thought. This clearly shows the relationship between language and cognitive processes. According to Goldstein (2011), cognition refers to two functions: first, what the mind does, and second, how the mind processes mental activities. The development of cognition therefore refers to changes in processes and mental skills according to physiological maturity and childhood experiences, changes closely related to the interaction between genetics and the environment. In education, the level of cognitive ability depends on the intrinsic and extrinsic motivation of students (Nachiappan, 2015).
In the area of early childhood education, previous studies showed that the approach used by teachers in the classroom greatly influences the development of language and literacy among students, and indirectly the cognitive development of preschool students. According to Alvestad and Sheridan (2015), education can become worthless if the teacher has no awareness of pedagogy when planning structured or unstructured learning; classroom teaching is strongly influenced by teachers' pedagogical practices. Chen and McNamee (2011) stated that a positive approach to learning is a major contributor to students' performance, but its effectiveness is not the same in all activities; rather, it depends on the characteristics of the activity in which the child is involved. Various activities can be used by teachers to apply cognition in the communication component. Aliza and Zamri (2015) stated that play in the curriculum is needed in preschool teaching and learning, and this is recognized by preschool teachers. Preschool teachers recognize the importance and effectiveness of the play approach, but they do not practice it in teaching, giving the reason that they have no guidance on how to implement learning activities through play when teaching language and literacy in the classroom. Puteh and Ali (2012) also stressed the play approach in teaching language in preschool because it affects the cognitive, social, emotional, and physical development of a child. During play, students interact with their peers and adults through questioning, and through play students learn to communicate, interact, and adapt to the environment. In fact, previous studies showed that it is difficult for preschool teachers to change their pattern of teaching, despite the effectiveness of play in teaching activities (Aliza and Zamri, 2015; Puteh and Ali, 2012; Einarsdottir, 2014).
In addition, language development can be promoted through activities such as drama, poetry, social play, watching videos, or listening to music (Nachiappan, 2015; Holmes et al., 2015; Holmes and Romeo, 2013; Meacham et al., 2013). Holmes and Romeo (2013) found that gender and the administration of the preschool are factors that influence students' language abilities and social games; according to the same study, students who regularly participate in activities such as drama performed well on vocabulary tests. However, preschool teachers still find it difficult to plan their teaching because they lack knowledge about teaching content and are less experienced in implementing a variety of approaches in the classroom (Jamian et al., 2015; Ehrlin and Wallerstedt, 2014). According to Jamian et al. (2015), practicum teachers still lack practical experience in real situations, which makes them face difficulties in implementing planned activities, while other findings showed that teachers are too comfortable with their teaching methods despite lacking knowledge about the teaching content itself (Ehrlin and Wallerstedt, 2014). Therefore, this study aims to examine teaching preparation through the Daily Teaching Plan and the extent to which teachers apply cognition through the communication component in their preschool teaching.

Research objectives

i) To identify the teachers' shortcomings in DTP preparation for teaching and learning of the communication component in preschool.
ii) To identify the effectiveness of teaching aids in teaching and learning of the communication component in preschool.
iii) To suggest ways to improve the application of cognition in teaching and learning of the communication component in preschool.

Methodology

This qualitative research is based on the Daily Teaching Plans of practicum teachers in early childhood education. Therefore, the analysis of the practicum teachers' Daily Teaching Plans cannot be generalized to all preschools.
The researchers analyzed the Daily Teaching Plan documents of one teacher, for one week, using the hermeneutics method, analyzing the data both explicitly and implicitly. According to Nachiappan (2015), a text is anything produced by humans, in written or oral form, used to convey intended meaning, i.e., feelings, thoughts, and people's behavior. A text contains implicit and explicit content to be interpreted; it reflects the cultural and social characteristics, the feelings and thoughts of the present and the past, and the knowledge of the author, which emerges in the form of content units called "episodes". Attributes such as bias or prejudice should be avoided during the process of interpreting texts. When researchers begin to interpret a text, they experience "ontoenigma", meaning ambiguity or lack of clarity about the content of the text. Next, as researchers try to find an explanation for the meaning of the text, they begin to discover its external and internal structures. The combination of these two structures helps researchers move from "ontoenigma" to "ontopretation", i.e., a process of deep understanding of the text (Loganathan, 1992). This in-depth analysis of the unconscious elements implied in the text allows researchers to understand its meaning. This understanding forms meta-texts, which give meaning to the original text; meta-texts are the interpretation of the findings in the text. A text embodies the individual style of language use and the information processing experienced by the individual when writing, and the interpretation of the text is analyzed through the stylistic features that give the writing its strength.
Results

The findings from the interpretation of the DTP documents, examining the application of cognition in the teaching and learning (T&L) process, showed that teachers often use question and answer about the previous lesson as a teaching method. Teachers repeat what they have taught before to help the students master reading the phrases correctly. The students listen well to the instructions given by the teachers and repeat the phrases after the teacher reads them. Teachers give out exercises to the students to strengthen their understanding of what they have learnt, and the students work well on the exercises given. The teachers check the exercises to see how far the students have understood the teaching and what to do in the exercise. This helps to increase the students' understanding of phrases, and the students also feel happy and motivated when teachers compliment them for doing a good job.

(BM 3.5) Reading words. (BM 3.5.1) Combine two syllables into words with guidance. The teacher questions students on the last lesson and associates it with the new lesson. 1. The teacher distributes the exercise to the students and instructs them to tell what is available in the classroom. 2. The teacher directs students to spell all of the pictures contained in the exercises and sets activities to guess the word (example: B _ _ a). 3. The teacher praises the students' answers and instructs them to color the picture in the exercise.

With a sense of enthusiasm, the teacher and students together start the teaching and learning session. They begin by answering questions about the lessons learned in the last teaching session, and the students work diligently to remember them. Students take the exercises distributed by the teacher and look at them eagerly.
Students observe the given exercise attentively and try to name the pictures in it. They feel more confident spelling the words in the exercises distributed by the teacher, spelling the words carefully one by one. Students receive positive reinforcement from the teachers and feel happy with the praise given.

(WM 3.5) Read the word. (WM 3.5.1) Divide two syllables into words with guidance. 1. The teacher asks students about past lessons and repeats what was taught, teaching again how to read the phrases. 2. The teacher distributes workbooks to the students and monitors them as they work. 3. The teacher checks the students' work and praises their responses.

Students feel curious about the activities to be done in today's session. Before continuing the lesson, the teacher tells the students to recall past lessons, and the students vigorously recount what they have learned. Students try to read back the phrases in today's lesson content, pronouncing the words carefully and repeating until they can say the phrases very well. With confidence, students practice in the exercise books distributed by the teacher, working diligently and thoroughly, and try to complete the exercises before submitting them to the teacher. Students receive praise from the teacher and look forward to continuing after this session. The teacher tells the students to submit the given exercises as part of revision, so that the teacher can find the students' errors and the students can correct the mistakes they have made.

(WM 3.5) Read the word. (WM 3.5.1) Divide two syllables into words with guidance. 1. The teacher asks about past lessons and associates them with new lessons. 2. The teacher distributes exercises and asks the students what is on the worksheet. 3.
The teachers refer to the exercises and instruct students to rearrange letters to form words that correspond to the pictures. 4. The teacher directs students to color the exercises upon completing the word arrangement, and praises the students' coloring results.

Students listen more attentively when the teacher explains the learning content for the session. The teacher reviews the previous lessons, and students cooperate with one another to pronounce the words in the exercise. The teacher and students hold a question-and-answer session about the pictures in the given exercise, and the students try to answer the teacher's questions well. Students try to complete the exercises by arranging the letters found in them, working actively and enthusiastically to finish before submitting their work to be checked. Students feel proud of the compliments given by the teacher and are willing to continue learning in the next slot.

Through this method, students are able to process existing information and add new knowledge; they are even able to recall previous learning if they have forgotten it. However, the teaching is also seen as less attractive when teachers repeat the same activities. Teachers also often use the same teaching materials, namely workbooks and exercises. This clearly shows that the practicum teacher is not creative and less informative, and that the teaching is more teacher-centered, which is considered a weakness in the application of cognition in teaching. In fact, this situation bores the students and makes it difficult for them to concentrate in class.

Discussion

The interpretation of the teachers' Daily Teaching Plans (DTP) for the communication component shows some shortcomings in the teachers' preparation. The factors identified were repetitive activities and unattractive teaching aids.
The findings also showed that teachers often ask students about the previous lesson and frequently use exercises and workbooks, a situation repeated throughout the whole week of classroom learning. The DTP also showed that the practicum teachers did not specify the approach or the types of teaching aids used. The DTP prepared by teachers is very important because it is the framework that guides the teacher through the activities, from the initial teaching process to the end. This situation is therefore troubling, because the effectiveness of the teaching is influenced by the teachers' preparation.

Conclusion

The practicum teachers have several problems in the preparation of the DTP. They frequently use the question-and-answer technique and teaching materials such as workbooks and exercises, and the teaching aids used are not creative enough and less informative. The authorities should therefore find ways to overcome these problems.
River dataset as a potential fluvial transportation network for healthcare access in the Amazon region

Remote areas, such as the Amazon Forest, face unique geographical challenges for transportation-based access to health services. As transportation to healthcare in most of the Amazon Forest is only possible by river routes, any travel time or travel distance estimation is limited by the lack of data sources containing rivers as potential transportation routes. Therefore, we developed an approach to convert the geographical representation of roads and rivers in the Amazon into a combined, interoperable, and reusable dataset. To build the dataset, we processed and combined data from three data sources: OpenStreetMap, HydroSHEDS, and GloRiC. The resulting dataset supports distance metrics using the combination of streets and rivers as a transportation route network for the Amazon Forest. The created dataset follows the guidelines and attributes defined by OpenStreetMap to leverage its reusability and interoperability possibilities. This new data source can be used by policymakers, health authorities, and researchers to perform time-to-care analysis in the International Amazon region.

The Amazon region, in particular, stands out for the absence of roads, lack of regular road-based transport, and significant geographic barriers 10. The movement of people and goods in transit between the countryside and cities is highly dependent on fluvial transportation [11-13]. In this large low-resource region, it is crucial to include rivers when developing a geographical representation of transportation route networks to better model the real-world movement of people. The current inability to use rivers as a platform for distance analysis in the Amazon hinders appropriate data generation for research, effective disease intervention implementation, and policy-making.
Any distance analysis performed in the Amazon region that does not take the rivers into consideration as transportation pathways is doomed to be inaccurate. Despite its importance for characterizing the Amazon Forest, to our knowledge there is no study dedicated to developing transportation route network data sources that include rivers as the means of people's transportation. The river routing literature provides methods and approaches to define hydrological connectivity, represented by the rivers' connectivity network, flow direction, and surrounding terrain 14,15. Without good methods to create a river connectivity network, the geographical representation of rivers as a means of transportation would not be possible. Our proposed database uses elements of river connectivity, but our goal was to take the geographical elements that describe river connectivity and convert them into a database structure linkable to existing road-transportation networks, creating vehicle-accessible paths that combine roads and rivers in the Amazon Forest. Therefore, our dataset represents how rivers can be used to create routes reflecting the travel time or distance linking two points within a transportation network 16. Our objective was to develop a database that adapts the existing digital representation of the Amazon Forest rivers into a river-based transportation route network. To achieve this objective, we combined the rivers' spatial representation with existing parameters of road-based transportation networks to allow the creation of multi-modal transportation pathways. In this database development study, we leverage previous work dedicated to studying river routing from the hydrology perspective.
We combined the OSM road-based transportation network with previously existing river representation data that are freely available: the Hydrological Data and Maps Based on Shuttle Elevation Derivatives at Multiple Scales (HydroSHEDS) and the Global Rivers Classification (GloRiC) 14,17,18.

Methods

Data sources. In this database development study, we combined three different data sources to develop our river- and road-transportation route network dataset for the Amazon Forest. Our first data source was the OSM database, which served as the primary data model for our development framework and as our transportation routing notation. OSM is the largest available source of representations of road-based transport networks; by applying the same notation to our database, we expand the possible uses of our dataset, since the regular transportation route approaches widely applied to OSM can be replicated for this new dataset while prioritizing interoperability. Our second data source was the HydroSHEDS database, which served as the basis for our rivers' geographical representation. While countless hydrographic maps exist for well-known river basins and individual nations, there is a lack of seamless, high-quality data on large scales such as continents or the entire globe 17. In response to these limitations, a team of scientists developed HydroSHEDS, a database and mapping representation of the world's rivers that provides the research community with reliable information about river locations on the Earth's surface and how water drains the landscape 19. To create a river-based transportation route network dataset for the Amazon Forest rivers as the basis for travel time and distance analysis, we adapted the Hydrological Data and Maps base of HydroSHEDS 14,17,19-21. The processing steps for generating HydroSHEDS are detailed in the dataset's technical documentation 22.
We used HydroSHEDS because it provided the best information across the geographical space of the Amazon Forest. Although HydroSHEDS has known limitations (discussed on the HydroSHEDS v.2 development website), such as low-quality resolution in some areas of the globe, it proved to be the best solution for our problem 23. We tested the National Waters Agency (ANA) database from Brazil; however, it is composed of multiscale mapping data that provide heterogeneous resolutions of the hydrology in the area 24,25. These resolution changes impacted the density of river streams in the analytical space, affecting the standardization of the displacement estimation and resulting in bias in the distance or travel time to access. Thus, we opted for the HydroSHEDS data source because its river distribution is homogeneous across the entire region analyzed. In addition, the HydroSHEDS dataset has several characteristics, not present in the ANA database, that are similar to road-based transportation networks, facilitating interoperability. The possibility of having lines connected by nodes to other lines creates the network structure needed to perform transportation route estimation. A transportation route estimation can be understood as an effort to estimate the best route, in terms of distance or time, between two or more points distributed in space. A transportation route network dataset comprises the geographical representation of interconnected line segments (river segments linked through river connectivity). Beyond the spatial representation, each line in the dataset has secondary attributes such as length, travel time, maximum supported width, direction, and maximum speed. Based on the network notion of HydroSHEDS, we were able to develop an approach to convert the geographical representation of the rivers into a transportation route network dataset. In addition to the HydroSHEDS database, we incorporated variables from the GloRiC database, our third data source.
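Route estimation over such a network of interconnected segments is, in graph terms, a shortest-path problem with segment travel time as the edge weight. A minimal sketch of the idea (node names and travel times are hypothetical, not HydroSHEDS data):

```python
import heapq

# Minimal Dijkstra over a river-segment network: nodes are junctions,
# edge weights are travel times (hours) over each segment.
# Node names and times below are illustrative only.

def shortest_travel_time(graph, start, goal):
    """Return the minimum total travel time from start to goal."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")

river_net = {
    "village": [("junction_a", 2.5)],
    "junction_a": [("junction_b", 1.0), ("clinic", 4.0)],
    "junction_b": [("clinic", 1.5)],
}
t = shortest_travel_time(river_net, "village", "clinic")
```

In practice, routing engines built on OSM data apply the same principle with per-segment costs derived from the dataset's speed and length attributes.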
GloRiC provides river types and sub-classifications for all river reaches contained in the HydroRIVERS database. From GloRiC, we included river characteristics such as flow regime and river stream speed. We chose GloRiC due to its compatibility with the geographical representation provided by the HydroSHEDS dataset.

Creating the dataset. We prioritized the interoperability of our database with the community already using OSM to perform routing analysis, facilitating the incorporation of our new dataset into existing data processing pipelines. We used the variables existing in the GeoFabrik routable shapefiles from OSM; using GeoFabrik allows for the additional quality checking performed over the raw version of the OSM files. We conducted a cross-reference and compatibility assessment of the HydroSHEDS hydrographic dataset against the OSM standard. Despite a spatial representation similar to a road-based transportation network, the HydroSHEDS hydrographic dataset bears no similarity in its columns, which carry the river code and hydrological, physio-climatic, and geomorphic data. Thus, to convert the hydrologic information from HydroSHEDS into a transportation route network dataset, we had to match the OSM data notation with the available information on river characteristics in the Amazon Forest region. The HydroSHEDS raw dataset provides the length of each river segment (in kilometers) but does not provide key features needed to calculate travel time to the closest healthcare facility.
For each river segment, these features include: (a) the coordinates of the starting and ending points; (b) the estimated navigation speed for a standard boat/ship; (c) the segment's connectivity to the river network; (d) the estimated travel time through the river segment; (e) the river's water stream flow speed; (f) seasonal variation in the flow regime; and (g) the river's water discharge volume. To address this issue, we created additional fields in the database to characterize the Amazon Forest rivers using information available through a literature review (Supplementary Table 1). These fields can be added or modified to characterize other rivers, if applicable to other areas. The Amazon Forest-specific parameters we included to translate HydroSHEDS into a transportation route network dataset were the average navigation speed for a standard vehicle (a boat), the variations of river stream speed for given river characteristics, the river size, and the flow regime variability 26,27. Using these parameters, we were able to calculate the average travel time needed to navigate a given river segment. Following the OSM notation, some required variables do not contribute when considering rivers as a transportation route network: variables such as bridge or tunnel, number of lanes, road sizes, and presence of electricity support have no meaningful parameters for rivers. We kept these variables in the data model to preserve OSM compatibility, but flagged them as 'do not apply'. All the processing to adapt the HydroSHEDS rivers database to an OSM routable format was done using the R statistical language, for reproducibility and reusability. Since the spatial representation of a river's course changes through time, we opted to build reproducible code, allowing users to update our database to identify the best available river-based pathways.
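The field-mapping step described above amounts to a record transformation: fill OSM-style routing columns with river-derived values, and flag road-only attributes as 'do not apply'. A hedged sketch of that transformation (the actual pipeline is in R, and the field and column names here are illustrative, not the published schema):

```python
# Translate a HydroSHEDS-style river segment into an OSM-style routable
# record. Field names are illustrative; the published dataset follows
# the GeoFabrik/OSM shapefile conventions.

ROAD_ONLY_FIELDS = ["bridge", "tunnel", "lanes"]

def to_osm_record(segment, boat_speed_kmh=18.52):
    record = {
        "osm_id": segment["reach_id"],
        "fclass": "waterway",
        "maxspeed": boat_speed_kmh,
        "km": segment["length_km"],
        # Travel time in hours for an average boat over this segment.
        "cost": segment["length_km"] / boat_speed_kmh,
    }
    # Keep OSM compatibility: road-only columns exist but do not apply.
    for field in ROAD_ONLY_FIELDS:
        record[field] = "do not apply"
    return record

# Hypothetical segment: reach ID and length are made-up values.
rec = to_osm_record({"reach_id": 60443230, "length_km": 9.26})
```

Keeping the road-only columns, even as placeholders, is what lets existing OSM routing tools consume the river records without schema changes.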
Similarly, the steps performed in this manuscript can be adapted to new geospatial databases representing the rivers of other locations. Transportation route database. The river-based transportation route database we developed is stored and shared using the ESRI shapefile format, a widely adopted standard for sharing geographical information. The majority of GIS software, including open-source solutions, is capable of reading and editing shapefiles. To create a routable dataset from the Amazon River geographic representation, we performed the following steps: Definition of the origin-destination matrix for each river course. The first step defined the direction each river was flowing. This information was important for establishing the connection patterns of the line segments representing the rivers network. Without a proper connection standard, the data lines could not be used to adequately solve route-creation problems. Rivers connect to each other differently than a network of streets: each river connects to its tributaries at specific points. The raw dataset from the HydroSHEDS repository had a node indicating the code of how the rivers connect to each other. Despite having the code of the river segment connection, HydroSHEDS has no information regarding the geographical coordinates of each node connection. Thus, we generated the latitude and longitude for each connection junction to be able to create the connectivity network of the river segments. This information was extracted and combined into a matrix to serve as a reference to create the rivers network. This analysis was done using R, and the code used to process this step is available in a repository 26 . Implementation of transportation attributes for each river course. The second step defined the creation of the transportation attributes for each river. The transportation attributes are used to estimate travel time over a segment.
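The junction-coordinate step above can be sketched in a few lines. This is a Python illustration rather than the authors' R code; the `next_down` column name is an assumption modeled on HydroSHEDS' downstream-neighbour convention, and the coordinates are invented toy values.

```python
# Each segment knows the id of its downstream neighbour but HydroSHEDS
# carries no junction coordinates, so we take the segment's end point as
# the shared junction and build the connectivity matrix from it.
# Column name "next_down" and all coordinates are illustrative assumptions.

segments = {
    10: {"next_down": 30, "start": (-3.10, -60.02), "end": (-3.12, -60.00)},
    20: {"next_down": 30, "start": (-3.15, -60.05), "end": (-3.12, -60.00)},
    30: {"next_down": 0,  "start": (-3.12, -60.00), "end": (-3.20, -59.95)},
}

def build_connectivity(segs):
    """Return junction coordinates and an undirected adjacency list."""
    junctions, adjacency = {}, {}
    for seg_id, seg in segs.items():
        down = seg["next_down"]
        if down and down in segs:
            junctions[(seg_id, down)] = seg["end"]  # shared node coordinate
            adjacency.setdefault(seg_id, set()).add(down)
            adjacency.setdefault(down, set()).add(seg_id)
    return junctions, adjacency

junctions, adjacency = build_connectivity(segments)
```

In this toy network, segments 10 and 20 are tributaries meeting segment 30 at a common junction, which is exactly the tributary-confluence pattern the text describes.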
These attributes include: the maximum speed allowed over a river segment; the maximum width, weight, and height allowed for a boat/ship using a specific river segment; the description of whether a river segment is accessible on foot; the estimated speed (in kilometers per hour) for a normal motorized boat/ship on this river segment; the latitude and longitude of the start and end points of the river segment; and the other parameters defined in Supplementary Table 1. No specific restrictions were applied to the size of the boat/ship supported by the river course, as the Amazon Forest rivers in some parts can span up to 70 km in breadth. To the best of our knowledge, the only available resource to estimate travel time and maximum speed was the work performed by Moura, which indicated the average speed of a boat in the Amazon Forest 11 . Thus, we used the speed parameters presented in Moura's study to define the average boat speed as 18.52 km per hour. According to the study performed in the Amazon region, 40% of the boats navigate at a speed of 18.52 km/h, but some boats (e.g., small boats with powerful engines) can reach up to 37 km/h 11 . In our database, we created variables to represent the average speed and the maximum speed allowed. Still in the second step, we added three additional variables that can be used to characterize the rivers from the GloRiC database: river stream speed, river flow regime variability, and river discharge level. The first variable is a factor that should be applied to the boat speed, as the effective speed changes depending on whether navigation is upstream or downstream. The riverboat velocity is the physical resultant of the boat speed plus or minus the river stream speed, depending on the stream navigation direction. The river flow regime can be used to identify rivers with large variability during the dry season that could affect navigability.
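The travel-time attribute follows directly from these parameters. A minimal sketch: the speed constants come from the text (18.52 km/h average, 37 km/h maximum), while the exact handling of the stream-speed factor and the non-navigable case are assumptions, not the published R implementation.

```python
# Travel time over a segment: effective speed is the boat speed plus the
# stream speed downstream, minus it upstream, capped at the maximum speed.
# Speeds are from the text; the clamping behaviour is an assumption.

AVG_BOAT_SPEED = 18.52  # km/h (Moura's study)
MAX_BOAT_SPEED = 37.0   # km/h (fast small boats)

def travel_time_hours(length_km, stream_speed_kmh, downstream=True):
    """Return hours needed to traverse a segment of the given length."""
    effective = AVG_BOAT_SPEED + (stream_speed_kmh if downstream
                                  else -stream_speed_kmh)
    effective = min(effective, MAX_BOAT_SPEED)
    if effective <= 0:
        return float("inf")  # current too strong: not navigable upstream
    return length_km / effective

down = travel_time_hours(10.0, 1.48, downstream=True)   # with the current
up = travel_time_hours(10.0, 1.48, downstream=False)    # against the current
```

The asymmetry between `down` and `up` is exactly the "physical resultant" described above: the same 10 km segment takes longer against the stream.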
Most parts of the Amazon rivers present a medium variability in terms of water level during the dry season. A total of 91% of the rivers in the dataset were categorized as having low (5%) or medium (86%) variability between seasons. The river discharge level variable can be used to filter rivers in terms of size, reflecting the rivers with larger chances of being navigable. Labeling of parameters following the OSM standard. The other variables, comprising the type of surface and the transportation class, were set to follow the codebook associated with the OSM standard, as defined in the third analytical step 14 . We opted to use this labeling strategy to maximize the potential uses of the data generated. The resulting dataset from the three analytical steps is tabular data containing all the attributes of a routable dataset. To combine these attributes with the geographical representation of the rivers, we performed a linkage between the tabular data and the shapefile with the geographical representation of the Amazon Forest rivers. Figure 1 represents the distribution of OSM pathways versus rivers in the Amazon Forest region. The potential increase in the use of rivers to map distances between points can be observed in the extension of coverage of water routes in comparison to roads and highways. The shapefile containing the river-based transportation route network database is stored in a repository 28 . The data source contains 897,846 river segments and 997,548 streets, roads, and highways covering the international Amazon Forest region. The river courses in Brazil, Bolivia, Peru, Ecuador, Colombia, Venezuela, Guyana, and French Guiana are covered in the new dataset, as represented in Fig. 2.
Data Records
Here we present the metadata of our database, stored at a public repository 28 . The main file is a shapefile including metadata regarding river characteristics in the International Amazon Region.
The dataset is described in Supplementary Table 1 and includes information about the size and speed of the boat, river nodes with starting and ending points, water speed, and length.
Technical Validation
Due to the use of the new data from the Shuttle Radar Topography Mission (SRTM), the HydroSHEDS database represents the best hydrological data available 17 . No other dataset uses a Digital Elevation Model with a higher resolution and precision than the data provided by the SRTM. The NASA SRTM mission overcomes challenges in terms of data generation, surpassing limitations related to spatial resolution, data structure, multiscale approach, representation of hydrological connectivity through river network routing, and integrated data and modeling framework 14 .
Usage Notes
To demonstrate the potential uses associated with the new dataset, we analyzed the catchment area of a healthcare facility. A catchment area is defined as the geographic coverage, using our rivers- and road-based transportation route network, within a set travel-time distance. This type of analysis creates a catchment area by navigating the transportation network originating at the point of interest, a healthcare facility, and uses this covered area as a reference to perform calculations of the population reached, availability of health services per population, rates of diseases by location, and other types of supply-demand analysis. One application of catchment areas is to assess the population that would fall within reach of a specific health facility. Populations facing geographic barriers to access tend to present worse health indicators in comparison to groups not facing access challenges. For example, the lack of information on where the eligible population is located represents a major issue in reaching the target outcomes of health campaigns.
Creating indices of access to health services often depends on data representing access routes. Inaccuracies in the access routes can lead to erroneous results in the identification of underserved areas 6,29 . The resulting misplacement of healthcare infrastructure can result in crises, such as the oxygen scarcity in Manaus at the beginning of 2021, which led to several deaths from complications of COVID-19 30 . Thus, the correct measurement of distances, travel time to reach care, and available pathways between the population and a healthcare facility is essential for assuring adequate care, better health care policies, and adequate interventions aiming to improve the organization of the health system network. Three comparative approaches were used. In the first, we created catchment areas around community health centers in Amazonas state, Brazil, using the road-based transportation route network dataset usually provided by GeoFabrik-OSM. The second analysis was done using only the new rivers-based transportation route network database. The third analysis combined both datasets. All analyses followed the same methodology to create catchment areas using routable pathways, as defined by Rocha et al. 2021 29 . The resulting outcome of this analysis is a geographical polygon representing the maximum threshold of distance or travel time to reach a facility considering the transportation network available. Thus, the polygon represents the coverage area of a health facility up to the limit of distance or time defined by the end-user evaluating access. To perform the comparative analysis, we used 622 community health centers (CHC) located in Amazonas state, Brazil as a reference for health facilities. Each CHC is responsible for offering primary health care to the surrounding population. In total, we created 517 catchment areas using only roads, ferry lines, and highways as paths of access to the CHCs (Fig. 3A).
The catchment areas created using the new routable rivers dataset comprised 455 catchment areas (Fig. 3B). The combination of both datasets created 558 catchment areas. A total of 41 CHCs that were not reached using regular roads, ferry lines, and highways can now be included in access analyses to help health authorities better formulate policies and decisions. The use of the new dataset increases the coverage associated with health facilities by 7.5%. It is worth highlighting that the newly covered areas were in remote regions not reachable without the proposed dataset. Figure 3C represents an example of areas not covered by streets, but now covered by rivers as a pathway to healthcare access. Without the use of the rivers, 41 CHCs in the state would have inaccurate information on the time to reach the facility and its surrounding population. Our results represent the analysis of only a part of the Brazilian Amazon region. We believe that the dataset presented can help several disciplines to assess time and distances more precisely in the entire Amazon region. The use of straight-line distance does not represent the actual challenges in terms of transportation in the Amazon region. A routable dataset using the rivers as pathways is an essential tool in policy discussions regarding access to services in the Amazon region. Code availability. The code used to convert the HydroSHEDS river database to a routable dataset is freely available 28 . The code was written in the R programming language, version 3.6.2. Beyond R, there is no need for any special software or program to replicate our results.
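The catchment-area methodology described in the Usage Notes amounts to a truncated shortest-path expansion over the routable network: nodes reachable from the facility within the travel-time threshold form the coverage area. A minimal Python sketch, with an invented toy graph (edge weights are travel times in hours, not real Amazon data, and the graph shape is an assumption):

```python
import heapq

# Toy routable network: node -> [(neighbour, travel_time_hours), ...].
# "CHC" stands in for a community health center; values are illustrative.
network = {
    "CHC": [("A", 0.5), ("B", 2.0)],
    "A":   [("CHC", 0.5), ("B", 1.0), ("C", 3.0)],
    "B":   [("CHC", 2.0), ("A", 1.0)],
    "C":   [("A", 3.0)],
}

def catchment(graph, origin, max_hours):
    """Dijkstra expansion truncated at max_hours; returns reachable nodes
    with their best travel times from the origin."""
    best = {origin: 0.0}
    heap = [(0.0, origin)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best.get(node, float("inf")):
            continue  # stale heap entry
        for neighbour, weight in graph[node]:
            new_cost = cost + weight
            if new_cost <= max_hours and new_cost < best.get(neighbour, float("inf")):
                best[neighbour] = new_cost
                heapq.heappush(heap, (new_cost, neighbour))
    return best

reached = catchment(network, "CHC", max_hours=2.0)
```

Note that node "B" enters the catchment via the shorter detour through "A" (1.5 h) rather than the direct 2.0 h edge, which is why a routable network, not straight-line distance, is needed for this kind of coverage estimate.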
Building a comprehensive online course series to enhance infodemic management globally Abstract Background Infodemic management has become key to supporting public health prevention and response to epidemics. 91% of WHO Member States declared having capabilities to track and address infodemics and health misinformation (WHO pulse survey). However, the scientific field of infodemiology is quickly evolving, and the practice of infodemic management consequently requires the health workforce to have a multidisciplinary and cross-functional skill set and to update it regularly. Objectives Since 2020, WHO and its partners have developed an innovative global blended training program and created a network of over 1300 infodemic managers from 142 countries. To keep pace with the development of the science, WHO has created an additional series of online courses allowing frontline workers easy, self-paced, and free access to updated and evidence-based infodemic management frameworks and best practices. Results In mid-2023, a comprehensive "infodemic management" channel, including eight interconnected courses, will be launched on the award-winning platform OpenWHO.org. The first course, "Infodemic management 101", has already attracted more than 21000 enrolments in English. The seven additional two-hour courses will allow health workers to learn how to generate an infodemic insights report, address health misinformation, be efficient in the field, or co-construct efficient interventions with communities. The course series will be available in English, French, and Spanish. Conclusions This new e-learning program will enhance practice in infodemic management in all countries, even those with low bandwidth. Health workers can access the courses on OpenWHO from anywhere, at any time, and at no cost. Key messages • Infodemic management requires the health workforce to have an updated multidisciplinary and cross-functional skill set.
• The new comprehensive WHO course series on infodemic management, available in three languages, will help to build skills and competencies for frontline health workers. Background: When teaching infodemic management (IM) to health professionals and emergency response staff, teaching only technical skills is not sufficient because infodemics affect health workers both professionally and personally. Infodemic response also requires lateral thinking to problem-solve and thus requires experiential learning to teach IM competencies. WHO has developed, delivered, and continuously improved a simulation-based training approach to teach IM to health workers and public health practitioners. Objectives: To deliver an effective immersive IM simulation, the following components must be considered: designing the simulation world; designing the tasks in the simulation world; introducing humor and causing an infodemic experience; teaching cultural humility and ensuring psychological safety; providing support to trainees; considerations for virtual vs in-person delivery; preparing and assembling the delivery team; delivering the performance and simulation experience.
Results: WHO has designed, delivered, and evaluated immersive teaching simulations in in-person, online, and hybrid formats, in English and French, with over 1400 trainees in 4 online and 4 offline trainings. Building and delivering an immersive world requires a team of people with different skills but a common vision and training philosophy. The facilitator team must build rapport between trainees quickly, offer multiple ways for trainees to ask for help, and encourage experience sharing and further diffusion of materials, concepts, and training approaches within the trainee community. Conclusions: WHO immersive teaching simulations for infodemic management competence building have used human-centered design and flipped-classroom approaches in the design and delivery of the learning and teaching programmes. Simulations have been adapted to other languages and topics, and to regional cultural contexts.
16th European Public Health Conference 2023
Background: Violence against healthcare workers is a global health problem threatening healthcare workforce retention and health system resilience in a fragile post-COVID 'normalisation' period. There is an urgent need for action to make violence against healthcare workers a greater priority. Our novel contribution to the debate is a comparative health system and policy approach, aiming to explore major trends and identify policy gaps. Methods: We chose a most-different-systems comparative approach concerning the epidemiological, political, and geographic contexts. Brazil (under the Bolsonaro government) and the United Kingdom (under the Johnson government) serve as examples of countries that were strongly hit by the pandemic in epidemiological terms while also displaying policy failures. New Zealand and Germany represent the opposite. A rapid assessment was undertaken based on secondary sources and country expertise. Results: We found similar problems across countries. A global crisis makes healthcare workers vulnerable to violence. Furthermore, insufficient data and monitoring hamper effective prevention, and lack of attention may threaten women, the nursing profession, and migrant and minority groups the most. There were also relevant differences. No clear health system pattern can be identified. At the same time, professional associations and partly the media are strong policy actors against violence.
Conclusions: All countries in our sample failed to respond effectively to growing violence and to improve the prevention and protection of healthcare workers. Much more involvement from political leadership is needed; attention to the political dimension and to all forms of violence is essential. Violence against HCWs is and will remain a problem long after the pandemic subsides. If political action is not taken, healthcare workers will have an additional reason to leave their profession and workplace, thus reinforcing the healthcare workforce crisis. Key messages: Getting prevention of violence against healthcare workers and effective protection right enhances the retention of the existing workforce and will attract new generations of healthcare workers. Governments must prioritise developing feasible and effective policy responses to tackle the risk factors that healthcare workers face at the workplace and on social media.
To achieve both objectives we used parallel complementary methods. We began by reviewing the literature for research prioritisation and curriculum review. Both processes were undertaken as team endeavours and are in the process of being written up as scoping reviews. As such, we are also contributing to the knowledge base in the area of PHPHSR doctoral training, while also updating our own programme. Key findings included the need for continuous stakeholder engagement in setting research priorities, to ensure the focus of training is orientated towards emerging needs. In terms of curriculum, the need for competency-focused curriculums emerged clearly. For example, leadership skills are increasingly required. While traditionally seen as incidental, these now need to be integrated within the curriculum. To achieve this we are consulting with stakeholders using a range of participatory methods such as World Cafe
Rate of forgetting is independent from initial degree of learning across different age groups It is well established that the more we learn, the more we remember. It is also known that our ability to acquire new information changes with age. An important remaining issue for debate is whether the rate of forgetting depends on initial degree of learning. In two experiments, following the procedure used by Slamecka and McElree (Exp 3), we investigated the relationship between initial degree of learning and rate of forgetting in both younger and older adults. A set of 36 (Exp 1) and a set of 30 (Exp 2) sentences was presented four times. Forgetting was measured via cued recall at three retention intervals (30 s, 1 hr, and 24 hr). A different third of the original sentences was tested at each delay. The results of both experiments showed that initial acquisition is influenced by age. However, the rate of forgetting proved to be independent from initial degree of learning. The conclusion is that rates of forgetting are independent from initial degree of learning. It is well established that the more we learn, the more we remember (Bahrick et al., 1975; Carpenter et al., 2008). What is less clear is whether the rate of forgetting changes as a function of the degree of initial acquisition. This has both theoretical and practical implications. The lack of influence of the degree of learning on the rate of forgetting is theoretically relevant as it poses a challenge to the manner in which forgetting rates are usually analysed. Many researchers have tried to fit the rates of forgetting to one function (e.g., logarithmic, linear, etc.; for recent discussions see Radvansky et al., 2022; Wixted, 2021). However, forgetting functions that start at different levels of performance and yet are parallel cannot be accounted for by a single function.
Finding parallel forgetting rates which start at different levels of performance challenges the idea that all forgetting can be explained by fitting a single function. From a practical viewpoint, the relationship between initial degree of learning and forgetting rates is relevant for studies that perform cross-group comparisons to explore the differences in forgetting rates between groups with different encoding capacity, such as clinical populations relative to healthy controls or older versus younger healthy participants. Frequently, such studies assume that the rate of forgetting depends on initial degree of learning (Kopelman, 1985; Mary et al., 2013; Walsh et al., 2014). Under this assumption, initial performance is matched through procedures that might add confounding variables. The few studies that explored the relationship between initial degree of learning and rate of forgetting achieved different levels of initial acquisition between groups by varying the number of exposures to the to-be-remembered material (e.g., Rivera-Lares et al., 2022; Slamecka & McElree, 1983). A recent study by Rivera-Lares et al. (2022) found that forgetting curves were independent of initial degree of learning. The aim of this study was to expand on these previous findings and investigate whether parallel forgetting curves starting at different levels result only from specific experimental manipulations such as varying the number of learning trials, or whether they occur as well when the initial degree of acquisition varies as a result of natural group differences in encoding ability. One variable that naturally results in different degrees of learning is age, which is associated with a decline in the ability to acquire new information (Craik & Rose, 2012; Kausler, 1994). Thus, we compared the rates of forgetting of younger and older adults.
One of the first studies to explore the relationship between initial degree of learning and forgetting rates was carried out by Slamecka and McElree (1983). By varying the number of exposures to lists of verbal material, they manipulated the initial degree of learning of groups of young adults. They tested memory at three retention intervals of 30 s, 1 day, and 5 days after acquisition by means of free recall, associative matching, cued recall, and semantic recognition. Higher performance at initial degree of learning was found after more repetitions of the to-be-remembered material, and performance decreased with each retention interval. Importantly, the rate of decrease across retention intervals did not vary as a function of the initial performance. The authors concluded that the rate of forgetting is independent from initial degree of learning. Later, Kauffman and Carlsen (1989) found a similar pattern comparing the forgetting curves of participants who achieved different levels of initial acquisition of musical excerpts based on their prior musical knowledge. Participants with more musical knowledge achieved higher initial scores than their less-experienced counterparts. However, all groups showed forgetting at a similar rate. Further evidence of the independence between forgetting rates and initial level of retention comes from a recent study by Rivera-Lares et al. (2022). In four experiments, the authors explored whether the level of initial acquisition influenced the forgetting rates using the Slamecka and McElree method. Participants were exposed to different numbers of repetitions, using two different modalities of presentation, and in two different languages. Participants were tested in person, and remotely by email and telephone, at intervals from 30 s to 1 week by means of cued recall.
In all four experiments, consistent with Slamecka and McElree's (1983) findings, the forgetting curves were parallel for groups with different levels of initial performance. In contrast, a study by Yang et al. (2016) concluded that higher degrees of initial learning are associated with slower forgetting. Some methodological differences rendered their study difficult to compare with Rivera-Lares et al. (2022) and Slamecka and McElree (1983). Yang and colleagues are not alone in concluding that forgetting rates depend on initial degree of learning. Following the publication of Slamecka and McElree's (1983) study, a heated debate ensued regarding their conclusions. Loftus (1985) posited that Slamecka and McElree's method was not appropriate to measure forgetting rates, since the psychological mechanisms that underlie the performance measures (e.g., number of correct responses) could decrease proportionally with time, producing scaling problems. He suggested a different method of comparing forgetting curves that is immune to scaling problems if the psychological mechanism that underlies the observable measure of forgetting follows an exponential function. His method consisted of measuring the time that a given memory requires to drop to a certain level of performance. Instead of comparing two memories at the same time, as Slamecka and McElree did, Loftus compared the amount of time it took for two participants or groups to reach a given score. However, this method, referred to by Loftus as the "horizontal comparison," as opposed to Slamecka and McElree's "vertical comparison," presents a problem.
As noted by Wixted (1990), most forgetting curves reported in the long-term forgetting literature follow a negatively accelerated function that consists of initial rapid forgetting followed by slower, steadier forgetting at longer intervals, a pattern also reported by Ebbinghaus (1885/1964) and many others (e.g., Bahrick & Phelps, 1987; Murre & Dros, 2015; Roe et al., 2021). According to Wixted (2021), this pattern could be consistent with a consolidation process that underlies forgetting, rendering newer memories more fragile than older memories. This process, consistent with Jost's (1897) second law of forgetting, implies that memories of different ages have different strengths. The Loftus method requires the comparison of memories of different ages, and therefore of different strengths, and this is confounded with initial levels of performance following learning. For this reason, in this study we use the Slamecka and McElree method to compare forgetting rates, which has also been used by Giambra and Arenberg (1993), Tombaugh and Hubley (2001), and Yang et al. (2016). The objective of this study was to investigate whether the rates of forgetting are independent from initial degree of learning also when the latter is a result of naturally occurring differences in encoding, and not only of experimental manipulations. For this reason, our approach was to fix the number and length of exposures and compare groups that usually perform at different levels following initial encoding. For this purpose, we compared groups of different ages (Kausler, 1994; Trahan & Larrabee, 1992), not to explore effects of ageing as such, but to take advantage of an expected difference between groups in initial levels of memory performance following encoding. In a review of multiple studies, Salthouse (1991) compared the rate of forgetting for older and younger adults and found different patterns of forgetting between the groups in half of the studies.
Similarly, Kausler (1994) found no consistency in the pattern of forgetting rates based on material or type of test. Rybarczyk et al. (1987) tested participants at 10 min, 2 hr, and 48 hr, and Harwood and Naylor (1969) tested participants at 4 weeks. Both experiments were carried out using line drawings as the material to be remembered. However, Rybarczyk et al. found similar forgetting rates, whereas Harwood and Naylor found that older adults forgot faster. Stamate et al. (2022) found that when memory was not refreshed by intervening tests (at 1 day and 1 week), older adults forgot faster than younger adults over the course of 1 month. Whenever a difference in the rate of forgetting was found between age groups, older adults seemed to forget faster relative to the younger adults. Studies comparing forgetting rates in older and younger adults have typically matched initial levels of memory performance across groups by exposing older adults to more repetitions of the material, or by making their study trials longer. However, using such procedures involves the comparison of memories of different ages, because a longer time had elapsed between the start of encoding and the memory test for the older participants, who required more encoding time or trials, than for the younger group. One experiment that did not equate initial degree of learning across age groups was carried out by Giambra and Arenberg (1993, Experiment 1). This experiment, based on Slamecka and McElree's (1983) Experiment 3, compared the forgetting rates of younger and older adults across four retention intervals: 30 s, 3 hr, 6 hr, and 24 hr. A different subset of sentences was tested at each retention interval.
Initial degree of learning was significantly higher for younger adults, but this difference had no effect on the rate of forgetting, suggesting that forgetting rate is independent of initial degree of learning when there is no confound with the time elapsed since the start of encoding. The two experiments reported here have a few differences compared to Giambra and Arenberg's (1993) Experiment 1. Since Wheeler (2000) found age-related differences in memory performance as early as 1 hr, we used a 1-hr interval instead of the 3- and 6-hr retention intervals used by Giambra and Arenberg, and compared this with the 30-s and 24-hr delays used by Slamecka and McElree (1983). Furthermore, both Slamecka and McElree and Giambra and Arenberg analysed their data by means of analyses of variance (ANOVAs), which require that the observations be independent from each other. Their repeated-measures design violates this assumption, since the same participants were tested at each retention interval. Their dependent variable was treated as a continuous variable. However, at the item level, their dependent variable is a binomial outcome (1 correct, 0 incorrect). Generalised mixed-effects models are recommended when dealing with binomial data (Bye & Riley, 1989), as they can account for the multi-level structure of the data (Quené & van den Bergh, 2004). We followed this recommendation for the analysis reported here. Before carrying out the two experiments reported in this study, we conducted a pilot study to evaluate the memory performance of older adults after a delay of 30 s following encoding. We did not have access to the materials (simple sentences) used by Slamecka and McElree (1983), and so generated our own from the description given in their paper. Each sentence comprised a unique combination of subject, verb, and object.
The results of this pilot indicated that, following the same procedure as Slamecka and McElree, even younger adults performed poorly after learning 48 sentences. Therefore, we decreased the number of sentences to 36 (Experiment 1) and 30 (Experiment 2) and tested a different subset of 12 (Exp. 1) or 10 (Exp. 2) at each retention interval to avoid the impact of the testing effect (Rickard & Pan, 2018), which is known to enhance memory performance when the same material is tested after different retention intervals (Roediger & Butler, 2011). Sampled testing can influence retrieval in subsequent tests, producing either retrieval-induced facilitation (Baddeley et al., 2019) or retrieval-induced forgetting (Anderson et al., 1994). Retrieval-induced facilitation occurs when the material has a high degree of integration, such as sentences within a coherent narrative. The sentences in this study cannot be integrated in this way, thereby minimising retrieval-induced facilitation (Baddeley et al., 2019; Chan et al., 2006). In contrast, retrieval-induced forgetting occurs when different items are associated with one common cue. In this study, there was no overlap in the wording across sentences, and the subject served as a unique cue for the verb and its respective noun in each sentence, thereby minimising retrieval-induced forgetting.
Experiment 1
In this experiment, we compared the forgetting rates of older and younger adults using a list of 36 sentences at retention intervals of 30 s, 1 hr, and 24 hr. Memory performance was assessed via cued recall, using a different subset of the studied material at each retention interval.
Method
Participants. A total of 90 healthy participants were recruited into two age groups: 60 younger adults (M age = 21.09, SD = 2.44, range: 18-30, 23 men) and 30 older adults (M age = 65.52, SD = 4.6, range: 60-75, 9 men).
Two participants from the younger group and one from the older adult group were excluded due to a lack of engagement with the task, which was evident in the other activities they were engaging in while being tested. All participants provided their written, informed consent before participation and upon completion received a small honorarium for their time. All were native English speakers with normal or corrected-to-normal vision. All participants scored 26/30 or over on the Montreal Cognitive Assessment (MoCA; Nasreddine et al., 2005). They were not on medications that may affect memory functions and did not report a history of head injuries, or of medical (e.g., heart attack), neurological (e.g., epilepsy), or psychiatric (e.g., depression) conditions. All participants had completed at least 11 years of education. This study was approved by the relevant Research Ethics Committee.
Materials. A list of 36 sentences was created for this experiment, each with the form subject-verb-object. Each subject, verb, and object was used only once. The sentences were constructed using objects that were plausible but, to minimise guessing, were not commonly or uniquely associated with the verb. For example, "The musician played a harp" or "The hunter followed the hare." To minimise the effects of repeated retrieval, the 36 sentences were independent from one another. The complete set of sentences is given in the Supplementary Material. Memory performance was tested by means of cued recall in written form. It has been demonstrated that repeated retrieval of encoded material slows forgetting (for a review, see Roediger & Butler, 2011). To reduce these practice effects, a different subset of sentences was tested at each delay (Baddeley et al., 2019). The subsets were created by evenly splitting the 36 sentences into three groups to create three different response sheets, using only the subjects of the sentences as cues.
For example, "The musician" and "The hunter" from the example sentences above. Each subject was followed by a line in which the participants were asked to complete the sentence by writing down the corresponding verb and direct object. For the examples above, the correct responses would be "played a harp" and "followed the hare," respectively. The order of the subjects in each response sheet was fixed.
Design. The dependent variable was the binomial outcome correct (1) or incorrect (0) response to each sentence. The independent variables were age group (younger and older) and retention interval (30 s, 1 hr, 24 hr), which was measured within subjects.
Procedure. Participants were tested individually in a quiet room. Each participant sat comfortably in front of the computer. During the encoding phase, each participant was asked to read the list of sentences that would appear on the screen, and to try to memorise them for later testing. Participants were informed that each list consisted of 36 sentences, which would appear in random order four times. The encoding phase consisted of the presentation of the 36 sentences one by one on a computer screen, written in black letters on a white background. The list of 36 sentences was presented four times, and each time the sentences were presented in a randomised order. Each sentence was on screen for 5 s, with a 2-s gap between sentences. Between each list of 36 sentences, the screen remained blank for 15 s. Two seconds after the last sentence of the last study trial was presented, the instructions for a distractor task were shown on screen, asking participants to perform subtractions by sevens from a three-digit number. After 30 s, the screen showed the word "stop," indicating the end of the encoding phase. The aim of this distractor task was to prevent rehearsal of the sentences, removing the support of short-term memory from retrieval.
This task was not scored and was practised once before the encoding phase started, using a different three-digit number for each phase. The testing phase started immediately after the participant finished the distractor task. Each participant was presented with the first response sheet and was asked to try to retrieve the sentences for at least 5 min, and to leave the response field empty if they could not remember the response. At the end of this first session, the participants were reminded that they would have to return for the second and third tests after 1 hr and after 24 hr. The three response sheets were counterbalanced across all conditions.
Planned analysis. Each response sheet was scored per sentence with either 1 (correct) or 0 (incorrect). Since all participants and all items were tested at the three retention intervals, the data in this study have a multilevel structure. Generalised linear mixed-effects models are best suited to handle binomial outcomes, data that violate the assumption of independence required for more traditional methods such as ANOVA (Jaeger, 2008), and hierarchical data such as the ones in this study. Moreover, mixed-effects models avoid losing information, since the data do not need to be averaged as in ANOVA (Bliese et al., 2018). Mixed-effects models include random effects of participants and items, so the model accounts for the variance in the data due to differences in the memory capacity of the participants and the different levels of difficulty of the items. As a consequence, these models allow for a better understanding of forgetting over time compared with traditional analyses such as ANOVA.
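A minimal sketch of a model of this kind (our notation; an illustration of the general structure described here, not the authors' exact specification) is:

```latex
% y_{pi}: correct (1) or incorrect (0) recall of item i by participant p
\begin{aligned}
y_{pi} &\sim \mathrm{Bernoulli}(\theta_{pi}) \\
\operatorname{logit}(\theta_{pi}) &= \beta_0
    + \beta_1\,\mathrm{agegroup}_{p}
    + \beta_2\,\mathrm{delay}_{pi}
    + u_{p} + w_{i} \\
u_{p} &\sim \mathcal{N}(0,\sigma_{u}^{2}), \qquad
w_{i} \sim \mathcal{N}(0,\sigma_{w}^{2})
\end{aligned}
```

Here u_p and w_i are random intercepts over participants and items; random slopes for retention interval (over participants and items) and for age group (over items) extend the specification in the same way.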
A Bayesian generalised linear mixed-effects model was fitted using the Stan modelling language (Carpenter et al., 2017) and the R package "brms" (Bürkner, 2017, 2018), using the default priors, since the information we had from previous studies was not applicable to the data from older adults. Parameter uncertainty is described by the 95% credible interval (CI) of the posterior distribution in addition to the mean parameter value. "Substantial" in the context of Bayesian inference means that 0 is not within the boundaries of the 95% CI. We used a Bernoulli data distribution. The dependent variable was the binary outcome correct (1) or incorrect (0) response per sentence per participant. Correct responses were defined as the recall of the verb and the direct object that corresponded to the subject presented as cue in the response sheet. A random intercept was modelled over items and participants, as well as a random effect of the retention interval over both items and participants, and a random effect of the age group over items. Age group was a between-subjects factor; hence it was included only as a fixed effect over participants in the model.
Results
Figure 1 depicts the forgetting rates for each age group across the three retention intervals.
Age effect of initial degree of learning. There was substantial evidence of an age effect, with older adults presenting a lower probability of correctly retrieving a sentence compared with younger adults.
Errors. The incorrect responses were classified as omissions, and as intrusions of studied and non-studied verbs and objects. However, the number of errors of each type was too small to allow for any meaningful statistical comparisons, and therefore, they were not analysed further.
Summary and comment
As expected, we found a substantial difference in initial degree of learning between age groups. With the same number of exposures to the sentences, older adults recalled fewer sentences at 30 s relative to their younger counterparts.
Performance declined with each retention interval. Most of the forgetting occurred during the 1-hr interval, consistent with the classic forgetting curve first described by Ebbinghaus (1885/1964). Importantly, both groups forgot at the same rate despite their initial differences, indicating independence of forgetting rates from differences at initial acquisition. The focus of this study was not to explore age-related differences in encoding or forgetting. Rather, the main objective was to investigate whether the pattern of parallel rates of forgetting after different degrees of initial retention found in previous studies (Slamecka & McElree, 1983; Rivera-Lares et al., 2022) replicates when the difference in initial degree of learning is not the result of laboratory manipulations during encoding, but results from natural differences in encoding capacity due to age. The results of this experiment replicated the pattern found by Slamecka and McElree (1983) and by Rivera-Lares et al. (2022), but are inconsistent with Yang et al. (2016). To ensure that our results from Experiment 1 were robust and replicable, a second experiment was carried out with a reduced number of sentences to maximise initial levels of memory performance in both older and younger participants, while avoiding ceiling effects. In Experiment 2, we used the same material as in Experiment 1, but excluded the six sentences with the lowest scores at the 30-s retention interval. Again, our focus was on the impact of differential initial memory performance on forgetting rate, taking advantage of an expected difference in initial performance between the two age groups.
Experiment 2
In this experiment, we compared the forgetting rates of older and younger adults using a list of 30 sentences at retention intervals of 30 s, 1 hr, and 24 hr. Memory performance was assessed via cued recall, using a different subset of the studied material at each retention interval.
Method
Participants.
Following the same constraints set out in Experiment 1, a further 60 healthy participants were recruited into two age groups. Three younger adults and one older adult were excluded from the final analysis as they failed to engage with the task. The final analyses included the performance of 27 younger adults (M age = 22.89, SD = 3.46, range: 18-30, 6 men) and 29 older adults (M age = 69.8, SD = 8.13, range: 60-89, 10 men). None of the participants had taken part in Experiment 1. The criteria for participant inclusion and ethics approval were the same as for Experiment 1.
Materials. The materials were 30 of the 36 sentences used in Experiment 1. Each response sheet was scored per sentence with either 1 (correct) or 0 (incorrect). With 10 sentences tested at each retention interval, the score range was 0 to 10 at each assessment.
Design. The dependent and independent variables were the same as in Experiment 1.
Procedure. The procedure was identical to that of Experiment 1, except that the response sheets were created with subsets of 10 items each (i.e., a third of the material per response sheet).
Planned analysis. The data from this experiment were analysed in the same manner as in Experiment 1.
Results
A depiction of the forgetting rates of each group across the three retention intervals is displayed in Figure 2.
Age effect of initial degree of learning. There was substantial evidence of an age effect across retention intervals, beginning at 30 s.
Errors. The incorrect responses were classified as omissions, and as intrusions of studied and non-studied verbs and objects. As in Experiment 1, no further analyses were carried out on these data, since the number of errors per category was too small to allow any meaningful comparisons.
Summary and comment
In Experiment 1, older adults recalled substantially less than younger adults, and the performance of both groups decreased with each delay at the same rate.
Out of 12 sentences, younger adults retained a mean of 10 sentences at 30 s, and older adults slightly above eight sentences. In Experiment 2, with a smaller number of sentences, the difference at 30 s was similar. Younger adults performed better at the initial interval than older adults. Performance decreased rapidly within the first hour, and more slowly over the second, longer delay, in similar fashion to the Ebbinghaus (1885/1964) forgetting curve. Both groups forgot at a similar rate, showing independence from the different degrees of learning.
General discussion
The objective of this study was to investigate whether forgetting rates are independent of the initial degree of learning when the difference at initial recall is produced by natural variations in encoding ability, such as those produced by ageing. Instead of manipulating the encoding process to create differences in initial acquisition, we tested older and younger adults, since ageing is known to be associated with a decline in the ability to acquire new information (Kausler, 1994). In two experiments, both age groups were presented with four repetitions of a list of 36 sentences in Experiment 1 and 30 sentences in Experiment 2. Participants were tested at intervals of 30 s, 1 hr, and 24 hr by means of written cued recall. As in previous studies with manipulations during the presentation of the material (e.g., Feng et al., 2019; Rivera-Lares et al., 2022; Sinyashina, 2019), a substantial difference between groups was found at the initial test. The correct recall of the sentences decreased with each delay, showing forgetting. Forgetting was faster during the first, shorter interval and levelled out during the second, longer interval. This pattern matches the classic Ebbinghaus forgetting curve (Ebbinghaus, 1885/1964), a pattern that has consistently been found in studies with different designs, types of tests, interval lengths, and materials (for a review, see Rubin & Wenzel, 1996).
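The shape of such a negatively accelerated curve can be illustrated with a toy computation; the numbers below are hypothetical and are not the study's data:

```python
# Hypothetical illustration (our numbers, not the study's data): a
# negatively accelerated forgetting curve of the Ebbinghaus type,
# modelled here as a power function y(t) = a * (t + 1) ** (-b),
# evaluated at the three delays used in the experiments.
delays_hours = [30 / 3600, 1.0, 24.0]    # 30 s, 1 hr, 24 hr
a, b = 10.0, 0.25                        # hypothetical scale and decay

scores = [a * (t + 1) ** (-b) for t in delays_hours]
print([round(s, 2) for s in scores])     # → [9.98, 8.41, 4.47]

# Loss per hour is far greater over the first, short interval than
# over the much longer second interval: rapid early forgetting that
# then levels out.
rate_1 = (scores[0] - scores[1]) / (delays_hours[1] - delays_hours[0])
rate_2 = (scores[1] - scores[2]) / (delays_hours[2] - delays_hours[1])
print(round(rate_1, 2), round(rate_2, 2))
```

On these hypothetical values the per-hour loss over the first interval is roughly nine times that over the second, even though more items in total are lost over the much longer 1-hr to 24-hr interval.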
Critically, the rates at which information was forgotten were the same for both age groups, in both experiments. Currently, there is no consensus regarding whether the rate of forgetting is independent of initial degree of learning, and the question of whether learning and forgetting are two sides of the same process is still without a definitive answer. However, the evidence seems to be mounting in favour of the independence of forgetting from initial degree of learning, as the present results are consistent with Slamecka and McElree (1983), Giambra and Arenberg (1993), and Rivera-Lares et al. (2022). Our results are, however, inconsistent with Yang et al. (2016) and with Loftus (1985). The inconsistency with Yang et al. could stem from the different methods used during encoding. Their material consisted of words and word pairs, and during encoding the participants were asked to perform a concreteness judgement task and to form a sentence with the word pairs, whereas we simply asked participants to read and remember a list of unconnected sentences. Our results are also inconsistent with Loftus (1985), most likely due to differences in the method of measuring forgetting. Loftus stated that the observable measures of forgetting, such as the number of items correctly recalled, must be related to an unobservable psychological process that will not necessarily have a linear relationship with the observable measure. If the decline of the unobservable measure of forgetting were, for example, exponential, as is radioactive decay, two forgetting rates should only be compared when both have reached the same level of the observable variable, such as the same number of correct responses. This method of horizontal comparison is immune to scaling issues, because transformations of the dependent variable would adjust the forgetting slopes in the vertical direction (i.e., y-axis), leaving intact the differences in the horizontal direction (i.e., x-axis).
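To make Loftus's argument concrete, suppose (our notation, a sketch under his assumptions) that the observable score is a strictly increasing transformation f of an unobservable strength s, so that y_g(t) = f(s_g(t)) for group g. Then:

```latex
y_1(t_1) = y_2(t_2)
\iff f\big(s_1(t_1)\big) = f\big(s_2(t_2)\big)
\iff s_1(t_1) = s_2(t_2)
\qquad \text{(since a strictly increasing } f \text{ is invertible)}
```

Times at which two groups reach the same observable level therefore identify equal underlying strengths, so horizontal (x-axis) comparisons survive any such rescaling of the y-axis, whereas vertical differences do not.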
As noted in the introduction, the Loftus method also confounds the age of the memory with the rate of forgetting. A further problem we see with the Loftus method is that, although there must be underlying psychological mechanisms of forgetting, to date there is no strong evidence to suggest that forgetting follows an exponential function. The pattern that has emerged with most consistency in the forgetting literature is a negatively accelerated curve, which describes rapid forgetting at the initial intervals followed by slower forgetting. This pattern is consistent with Jost's (1897) second law of forgetting, which suggests that older memories are less susceptible to being forgotten than newer ones. It follows that the Loftus method confounds the comparison of memories of different ages, and therefore of different strength, with retention interval and initial levels of performance. Our results raise an interesting problem. In the quest to find the shape of forgetting, most researchers have concentrated their efforts on fitting a single function, be this exponential, logarithmic, power, or linear (e.g., Fisher & Radvansky, 2019; Loftus, 1985; Rubin & Wenzel, 1996; Wixted & Ebbesen, 1991), which implicitly assumes that there is a unitary source of trace strength. One exception is the model proposed by Bogartz (1990), which assumes that there could be more than one source of the rate of forgetting. Of all the single-function proposals, the one that has been reported most frequently is the negatively accelerated curve from Ebbinghaus, with which our data are consistent. The problem that our data present is that it is unclear how forgetting data that start from different levels can result in parallel forgetting slopes that are negatively accelerated. We agree with Rubin and Wenzel (1996) that psychology could advance as a science if research establishes robust regularities to describe phenomena. Together, the data from Slamecka and McElree (1983), Rivera-Lares et al.
(2022) and the data reported in this article seem to indicate that there is a function of forgetting that has not been described to date. Although the intention of this article is not to explain this phenomenon, we offer a proposal for future research: since it is unclear how a single function could explain our data, a solution to be explored would be that there are two or more contributions to the initial recall and the subsequent course of the forgetting slopes. One source of forgetting could be represented by a gradual erosion of traces over time following a linear function, while the other would assume that different kinds of information have different rates of forgetting (Radvansky et al., 2022) due to differences in their resistance to such erosion. This would perhaps depend on the nature of the remembered material, for example, with detail encoded less robustly than gist (e.g., Sacripante et al., 2022), producing a non-linear function. Although we used groups of different ages to investigate forgetting curves, the goal of this study was to determine the relationship between initial level of acquisition and forgetting rates, and not to examine age-related differences in acquisition or forgetting. Therefore, this discussion will not focus on previous findings related to age differences in forgetting, especially because most of the relevant studies match initial acquisition between age groups, hindering conclusions about the influence of different initial degrees of learning on the rates of forgetting. However, it could be argued that the mechanisms that are affected during learning are the same as, or related to, the mechanisms of forgetting. A study that explored individual differences and rate of forgetting was carried out by Zerr et al. (2018). The authors reported differences in forgetting rates after their participants had reached criterion through the drop-out method.
The forgetting rates, however, did not vary with initial degree of learning, which was identical for all participants, but they did vary with the learning rate of each participant. Faster learners retained more information for longer relative to slower learners. Usually, older adults have slower rates of learning for verbal material (Kausler, 1994). In our experiments, after four repetitions of the sentences, there was an initial difference in performance between age groups, which indicates a slower rate of learning in older adults. However, older adults forgot at the same rate as younger adults, regardless of their initial deficit. This study and the results from Zerr et al. are difficult to compare, since their focus and paradigms are different. Due to the nature of our study, an effect of initial degree of learning was essential. In their study, however, initial degree of learning was matched, and the forgetting curves of each participant were compared with each participant's rate of learning. In our study, individual differences were controlled for in the statistical analyses, since the statistical models we employed accounted for individual differences in both initial degree of learning and the forgetting slope. Another important difference between the study by Zerr and colleagues and our design is that, to reach criterion and to decide which words needed to be repeated, tests were interleaved between the repetitions of the material. Our study, in contrast, had only four study trials followed by the tests. One concern that arises when testing the same participants over multiple delays is the possibility of retrieval-induced forgetting, which occurs when various targets compete during retrieval for their association with a common cue (Anderson et al., 1994). Our material minimised the possibility of retrieval-induced forgetting, since the subject used as cue for each sentence was unique to each verb and each object.
If retrieval-induced forgetting had been present in our results, we would expect to have seen intrusions of studied items more frequently. However, intrusions of non-studied items and omissions were the most common errors. Evidence seems to be accumulating in favour of an independence between initial degree of learning and rates of forgetting. The results of both experiments reported here, together with the data obtained by Slamecka and McElree (1983) and Rivera-Lares et al. (2022), show clearly that forgetting rates remain stable regardless of different initial degrees of learning, even when the difference in acquisition is not the result of manipulations during encoding but a result of natural changes in encoding ability, such as those that occur during healthy ageing. It is important to note that the evidence we report in this study is limited to the type of material we used, and to the materials used by other studies (e.g., Cohen-Dallal et al., 2018; Kauffman & Carlsen, 1989) that have also found parallel forgetting curves following different initial levels of performance, and applies only when forgetting is measured as the number of items forgotten over time. Different results could be obtained if forgetting rates are evaluated using a different measure of forgetting, such as proportion of losses (e.g., Loftus, 1985), or if differences in the rate of forgetting are assessed using a different approach, such as curve-fitting (e.g., Carpenter et al., 2008). Further research is needed to investigate whether the pattern found in the two experiments reported here can be replicated across different materials, and to explore possible accounts for the negatively accelerated forgetting function observed here and in a range of previous studies. Moreover, the pattern of forgetting could be different at longer intervals, which is worth exploring in future research, provided that floor effects can be avoided.
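The dependence of the conclusion on the measure of forgetting can be illustrated with a toy computation; the numbers below are hypothetical and are not the study's data:

```python
# Illustrative toy numbers (hypothetical, not the study's data): two
# groups that lose the SAME absolute number of items per interval show
# parallel forgetting curves in item counts, yet DIFFERENT rates when
# forgetting is measured as the proportion of the initial score lost.
initial = {"younger": 10.0, "older": 8.0}   # items recalled at 30 s
absolute_loss = [3.0, 1.0]                  # items lost by 1 hr, by 24 hr

results = {}
for group, start in initial.items():
    scores = [start]
    for loss in absolute_loss:
        scores.append(scores[-1] - loss)
    proportion_lost = (scores[0] - scores[-1]) / scores[0]
    results[group] = (scores, proportion_lost)

print(results["younger"])   # → ([10.0, 7.0, 6.0], 0.4)
print(results["older"])     # → ([8.0, 5.0, 4.0], 0.5)
```

Measured in items, the two curves are exactly parallel; measured as a proportion of the initial score, the group that starts lower appears to forget faster.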
Interactive effects of genotype x environment on the live weight of GIFT Nile tilapias
In this paper, the existence of a genotype x environment interaction for the average daily weight in GIFT Nile tilapia (Oreochromis niloticus) in different regions of the state of Paraná (Brazil) was analyzed. The heritability estimates were high in the uni-characteristic analyses: 0.71, 0.72 and 0.67 for Palotina (PL), Floriano (FL) and Diamante do Norte (DN), respectively. Genetic correlations estimated in the bivariate analyses were weak, with values of 0.12 for PL-FL, 0.06 for PL-DN and 0.23 for FL-DN. The Spearman correlation values were low, which indicated a change in the ranking of selected animals across the environments in the study. There was heterogeneity in the phenotypic variance among the three regions and heterogeneity in the residual variance between PL and DN. The direct genetic gain was greatest for DN (198.24 g/generation), followed by FL (98.73 g/generation) and PL (98.73 g/generation). The indirect genetic gains ranged from 0.02 to 0.37 g/generation. Evidence of the genotype x environment interaction was verified, indicated by heterogeneity of the phenotypic variances among the three regions, weak genetic correlations and modified rankings in the different environments.
INTRODUCTION
World fishery production is projected to reach 191 million tons by 2024, with aquaculture, one of the fastest-growing productive sectors, as the main contributor: it may reach a production of 96 million tons, surpassing capture fisheries in 2023 (OECD/FAO 2015). Nile tilapia, Oreochromis niloticus, is one of the most promising fish species for fish farming because it presents characteristics of zootechnical interest: it has tasty meat and great acceptance by the consumer market, and its body growth
is characterized by increased weight, length, height and circumference as a function of age (Rodrigues Filho et al. 2011). According to data from the Brazilian Institute of Geography and Statistics (IBGE 2014), Brazil reached a production of more than 367 thousand tons of fish in 2014 and is one of the seven largest producers of tilapia in the world, this species being one of the three most cultivated on the planet (ABPA 2014). Despite this commercial production potential, until a few years ago there was no appropriate fish selection program. This may have led to intense inbreeding nationally, because of the use of small parental populations, which decreased growth rates and product quality standards. In 2005, however, a partnership between the Universidade Estadual de Maringá and the WorldFish Center in Malaysia introduced approximately 600 breeder fish from a program based on 20 years of selection (Lupchinski et al. 2008). Promising results were obtained in some of the fish breeding programs, with the growth rate showing gains of up to 15% in every generation (Ponzoni et al. 2005). When choosing the species, it is very important to consider background knowledge on fish production and reproduction, as well as proper raising conditions, environmental conditions and the market. However, achieving high gains requires establishing a base population for a genetic improvement program in aquaculture that possesses great genetic variability, which can be obtained through the use of various subpopulations (Hilsdorf et al. 2014).
The appropriate selection of the best animals for the population requires accurate estimates of the genetic parameters and components of covariance. Based on Resende (2007), estimates of the variance components are essential for achieving three goals: (i) the genetic control of the traits, to design efficient breeding strategies; (ii) prediction of the genetic values of the candidates for the selection program; and (iii) definition of the sample size, the methods for estimating the genetic parameters and the selective accuracy. In Brazil, the large territory and the various fish raising systems have made it necessary to evaluate the genotype x environment interaction, because the phenotype is a consequence of the genotype under the influence of the environment; for this reason, it is important to know whether this interaction is significant when several environments are being tested (Ponzoni et al. 2008). Similar experiments were performed in Malaysia, where the responses of the GIFT tilapia were evaluated in ground ponds and net-cages (Khaw et al. 2009a), and in the Philippines under seven environmental conditions with different agro-climatic conditions and raising systems (Eknath et al. 2007). The evaluation of this interaction is also important because it can produce genetic, phenotypic and environmental variations that affect the estimates of these parameters depending on the environment, since genotypes outstanding in one region may not achieve the same ranking in another region. The genotype is assessed using several techniques and can interact with environmental factors that affect the responses and influence the animal phenotype (Baye et al. 2011). If this interaction is not considered, there will be modifications in the ranking of animals selected under different environments (Cerón-Muñoz et al.
2004), resulting in inefficient and biased selection in which the intended genetic gain is not achieved. The current experiment was aimed at evaluating the effect of the genotype x environment interaction on the live weight of GIFT Nile tilapia in three regions of the Paraná State (Brazil) using Bayesian inference.
MATERIALS AND METHODS
Records from 1,132 fish (males and females) were collected from the data bank of the fifth generation (G5) of the tilapia breeding program at the Universidade Estadual de Maringá (UEM), Paraná State, Brazil. First, the experiment started in the aquaculture system of the UEM, where fish from the fourth generation (G4) were mated to produce the G5 generation, which supplied the siblings and half-siblings raised in the following regions of the Paraná State: Palotina, Floriano and Diamante do Norte counties. The mating proportion was two females to one male, allotted in individual ground hapas measuring 1 m³ under plastic film protection.
Every week, males and females were monitored to detect the ideal mating evidence: in males, an expanded urogenital opening; in females, a reddish urogenital opening with a swollen and soft ventral side, indicating the presence of eggs. Thus, males and females with the best characteristics were mated. Aggressive behaviour, hatching or any other improper fish behaviour in the hapas was always monitored to avoid progenitor losses and death during the reproduction season, which lasted approximately four months (from November 2011 to March 2012). Thereafter, the larvae stayed with the females during the entire reproductive season, and this period was treated as the common environment of larvae culture (c). At the end of the reproduction season, all of the larvae were counted and kept apart. Shortly thereafter, groups (families) of siblings were divided and transferred into two hapas for raising the fingerlings, which were randomly distributed to avoid bias within the pond. However, this procedure produced a common effect, referred to as the "common effect of fingerlings" (w), which results from keeping individuals of the same family from the reproductive season in the same hapas until transfer to the evaluation plots. When approximately 50 individuals of the same family had reached at least ten grams of live weight, they were tagged with passive integrated transponder (pit) tags in the visceral cavity. Seven days later, they were sent to the evaluation regions while maintaining the genetic connection among the three environmental conditions: the siblings were distributed randomly so that representatives of each family were present in all evaluated environments.
One group of fish stayed in ground ponds with hapas measuring 200 m³ at the Floriano Fisheries Experimental Station (UEM), where the average annual air temperature was approximately 21.9 °C. The second group was sent to Diamante do Norte County, where the average annual temperature was approximately 24 °C and the fish were grown in cage-nets measuring 600 m³ set up in the watercourse of the Corvo River. Finally, the third group was sent to the city of Palotina, where the average annual air temperature was approximately 20.8 °C, to grow under ground-pond conditions.
A summary of the data set is reported in Table I. The quality of these data was monitored with the SAS® program (SAS Institute 2000), and outliers (less than 0.1%) were excluded, leaving the 1,132 fish. The live weight (g) and age (d) used in the current experiment were taken from the fifth and last records of these animals. The covariance components and genetic parameters for live weight (g) were estimated using Bayesian inference with the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Models) developed by Van Tassell and Van Vleck (1995). The animal model included the fixed effect of sex and the linear and quadratic effects of the covariate age (d). Furthermore, additive genetic effects (a), common environmental effects of larvae culture (c), common environmental effects of fingerlings (w) and the residual effect were evaluated using the animal model for the uni-characteristic analysis as follows:

y = Xβ + Z₁a + Z₂c + Z₃w + e

in which y is the observation vector; X, Z₁, Z₂ and Z₃ are incidence matrices relating the records to, respectively, the fixed effects, the direct additive genetic effects, the common environmental effects of larvae culture and the common environmental effects of fingerlings; β is the vector of fixed effects (sex, raising place and age); and a, c, w and e are, respectively, the vectors of the additive genetic effects, the common environmental effects of larvae culture, the common environmental effects of fingerlings and the residuals.
The model for the bi-characteristic analysis was the two-trait extension of the model above, in which y₁ and y₂ are the observation vectors of live weight in each pair of environments. The vectors a, c, w and e were assumed to follow a joint multivariate normal distribution with

Var(a) = A ⊗ G, Var(c) = Iₗ ⊗ C*, Var(w) = Iₘ ⊗ W*, Var(e) = Iₙ ⊗ R*,

where A is the parentage (relationship) matrix; ⊗ is the Kronecker product; G is the additive genetic covariance matrix; Iₗ is an identity matrix of order equal to the number of sibling groups, and C* is the covariance matrix of the common environmental effect of larvae culture (c); Iₘ is an identity matrix of order equal to the number of hapas in the fingerling structure used every year, and W* is the covariance matrix of the common environmental effect of fingerlings (w); and Iₙ is an identity matrix of order equal to the number of records, with R* the residual covariance matrix.
R is the covariance matrix of the residual effect. Uni- and bi-characteristic analyses were performed, combining the live weights from the different regions as distinct characteristics. Based on the diagnostic test described by Heidelberg and Welch (1983), all of the generated chains achieved convergence. The selection intensity was the same for males and females, with approximately equal genetic gain in every generation of fish selection. The direct genetic gain was based on

GS = (a_m + a_f) / 2,

where a_m is the genetic average of males and a_f is the genetic average of females. The indirect gain was based on

GS_2(1) = GS_1 × σ_a1a2 / σ²_a1,

where σ_a1a2 is the genetic covariance of the characteristics in environments 1 and 2 and σ_a1 is the additive genetic standard deviation of the characteristic in environment 1. The gain percentage (%) was calculated to identify the rate at which the indirect selection participated in the direct selection:

% = 100 × GS_2(1) / GS_1.

The Spearman correlation was used to verify the animal ranking based on the genetic values predicted in the bi-characteristic analyses, thus monitoring the different ranks of the animals in every region (Palotina, Floriano and Diamante do Norte), and the Pearson correlation evaluated the level of association among the environments. These correlations were calculated from the predicted genetic values in the bi-characteristic analyses.

RESULTS AND DISCUSSION

Estimates with their credibility intervals (ICr) and high-density regions (HPD) for the variance components of live weight (g) in the three sites are shown in Table II. Heritability estimates (h²) and the genetic participation in the phenotypic expression of the common environments of larvae culture (c²) and fingerlings (w²) are shown in Table III, both from the uni-characteristic analyses.
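As a rough illustration of the gain and rank-correlation computations described in the methods above, the following sketch uses made-up numbers (not the paper's data); the exact form of the indirect-gain formula is an assumption reconstructed from the definitions in the text:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Direct gain: mean of the genetic averages of the selected males and
# females (illustrative values only).
a_m, a_f = 300.0, 260.0
direct_gain = (a_m + a_f) / 2.0                  # 280.0 g/generation

# Indirect (correlated) gain when selecting in environment 1 and farming
# in environment 2: genetic covariance over the genetic variance of the
# selection environment (assumed form of the formula).
cov_a12 = 12.5            # genetic covariance between environments 1 and 2
sd_a1 = 35.0 ** 0.5       # additive genetic standard deviation in env. 1
indirect_gain = direct_gain * cov_a12 / sd_a1 ** 2
participation = 100.0 * indirect_gain / direct_gain   # ≈ 35.7%

# Rank stability of predicted genetic values between two environments:
# Spearman checks the ranking, Pearson the linear association.
ebv_env1 = np.array([1.2, 3.4, 2.1, 5.0, 4.2])
ebv_env2 = np.array([2.0, 1.1, 4.5, 3.3, 5.1])
rho, _ = spearmanr(ebv_env1, ebv_env2)   # rho = 0.3: the ranking changes
r, _ = pearsonr(ebv_env1, ebv_env2)
```

A low Spearman coefficient, as in this toy example, is exactly the kind of re-ranking between environments that the paper reads as evidence of interaction.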
The bi-characteristic analyses in Table IV provided "a posteriori" means for the additive genetic variances and the genetic, residual and phenotypic correlations, with the credibility intervals and high-density regions (all values were positive) for all three regions. The results in Table V indicate the values of heritability, credibility intervals and high-density regions.

Table IV footnote: genetic variance: σa²; genetic covariance: cov_a; genetic correlation: r_a; residual correlation: r_y; credibility intervals: ICr; high-density regions: HPD.

The highest genetic variation of 37,629 was found in Diamante do Norte (DN), and the lowest of 6,244 was found in Palotina (PL) (Table II). Knowing that the same genetic representation exists in both environments, the greater genetic variation in the DN environment can be understood as the environment that most favoured the expression of the genetic potential of the animals. Rutten et al. (2005) reported additive genetic variances from 1,481 to 2,778 for GIFT tilapia mated with other lines. Charo-Karisa et al. (2007) reported low estimates for the additive genetic variation, with a weight of 782.8 g at the harvesting period. These results show the great genetic progress because of outstanding animals, and the genetic variation suggests continual weight gain (Ponzoni et al. 2005).

The differences in the "a posteriori" means for all of the parameters (Table II) were significant (p < 0.05) in all of the regions based on the analyses of Bayesian contrasts. These results show that the genetic expression among the several environments is different, and they indicate a genotype x environment interaction, given that the evaluated environments presented variations in the production system (ground ponds and cage-net tanks) and also in management, with each system keeping its usually adopted routines (feed, number and times
of treatment, care and water quality, with tank fertilisation when necessary). In all of the environments, the participation of the common environment of larvae culture σc², 280 (PL), 634 (FL) and 1,519 (DN), and of fingerlings σw², 241 (PL), 585 (FL) and 1,438 (DN), was similar and relatively lower than reported by Santos et al. (2011), who found σc² = 1,147.64 in cage-nets, where the common larvae environment was a motherhood effect. The residual variation was 1,892 (PL) and 3,942 (FL), also lower than the value reported by Santos et al. (2011), σe² = 5,965.53, in contrast to DN, where the variation (14,968) was higher than reported in the literature. The phenotypic variance among the regions was 8,657 (PL), 18,841 (FL) and 55,556 (DN), behaving similarly to the genetic variance (relatively low in PL).

The "a posteriori" distribution of all parameters was symmetric based on the closeness of the credibility interval and the high-density region. Because either the mean or the median could be used to represent the distribution, the current option was the mean "a posteriori".

Based on the credibility intervals (Table II), we found residual heterogeneity of variance in the Palotina x Diamante environment and phenotypic heterogeneity of variance in the Palotina x Floriano environment. Heterogeneity occurs when the credibility interval of a parameter in one region is not contained in the interval of the other region. In Table II, the Palotina interval (ICr = 624 - 3,717) is not contained in the ICr of Diamante do Norte (ICr = 4,420), and phenotypic heterogeneity of variance also exists in Palotina (ICr = 6,720) x Diamante do Norte (ICr = 42,090 - 70,650) and in Floriano (ICr = 14,040) x Diamante do Norte (ICr = 42,090 - 70,650) because of the genotype x environment interaction.

These heterogeneities can occur because differences in local management, stress, farming system (ground pond or cage-nets) and conditions, water and feed quality, daily serving, weather and
sanitary conditions can affect the animal responses in every experimental condition. Variance differences among subclasses (regions) can reduce the accuracy of predicting breeding values, with an inadequate selection of fish in different environments, and can consequently reduce the genetic progress (Weigel and Gianola 1993). If residual heterogeneity is ignored, the results can be dominated by data from the animals raised in the more variable environments.

The heritability values for all of the regions (Table III), 0.71 in Palotina, 0.72 in Floriano and 0.67 in Diamante do Norte, were higher than those reported by authors working with previous generations of this same lineage, such as Oliveira (2011), whose estimate of h² for live weight was 0.15 using Bayesian inference, and Santos et al. (2011), with a heritability of 0.39 for live weight at the harvesting time using frequentist inference. The increase in the heritability estimates can be seen as a response to the selection that has occurred over the years in this tilapia strain in Paraná (fifth generation - G5), which favours increased genetic and yield performance with each generation. Ponzoni et al. (2005) found h² = 0.34 for live weight, similar to Nguyen et al. (2007), who reported an average of 0.35. The closest result to the current report was described by Charo-Karisa et al. (2006, 2007), who found a heritability of 0.60 for live weight at harvesting time.

With high estimates of heritability, the emphasis of the selection must be centred at the individual level, which means that the best individuals are chosen as reproducers. However, with average to low estimates of h², the best choice is based on families. Individual selection exhibits fast responses in genetic gain per generation, but the variability of a small group is reduced in less time than with selection within families.
Work with previous generations shows that the participation of the common environments of larvae production and of fingerlings in the genetic variation was close to zero, as reported by Oliveira (2011), and lower than the values from 0.20 to 0.05 reported by Santos (2009) for σc² (common maternal environment = common larvae production environment). The explanation for such high heritability values is that all of the fish participated in the genetic selection process up to the current fifth generation and that the selection criterion, daily weight gain, is highly correlated with live weight (0.99) (Porto et al. 2015).

The close estimates of the credibility intervals (ICr) and HPD confirm the symmetrical "a posteriori" distributions (Table III). The small credibility intervals for all of the parameters indicate high accuracy in the estimates. The bi-characteristic estimates are similar to the uni-characteristic estimates (Tables II and IV), thus strengthening all of the results of genotype x environment interaction (variance heterogeneity) and indicating that the uni-characteristic analyses are sufficient to explain the current genetic parameters.

The genetic and phenotypic correlations were very low (Table IV), which indicates the interference of the region in the genetic and phenotypic variation of the live weight of all the animals. Several experiments with tilapias were performed in Asian countries. In Malaysia, the genetic correlation for live weight with the GIFT variety was 0.70 (Khaw et al. 2009b) in two environments (ground pond and cage-net), and it ranged from 0.36 to 0.99 in the Philippines, also with the GIFT variety, in seven environments (Eknath et al.
2007). Therefore, similar environments such as ground ponds exhibit high correlations, from 0.76 to 0.99, and cage-nets of 0.99, in contrast to the results obtained from distinct environments, where the correlations tend to be lower, from 0.36 to 0.86. Similar responses were found in Brazil (Santos 2009), where a genetic correlation of 0.89 was reported for similar environments (cage-nets), showing no genotype x environment interaction, in contrast with distinct environments, where the estimates from 0.58 to 0.65 were responses to this interaction. In Vietnam, the genetic correlation of weight at harvesting time for animals farmed in brackish and fresh water was 0.45 (Luan et al. 2008).

A genetic correlation higher than 0.8 can rule out the genotype x environment interaction (Robertson 1959), but with estimates lower than 0.7-0.8, the full genetic gain can only be achieved when the animals are selected and farmed in the same environment (Mulder et al. 2006), because a correlation lower than 0.7 indicates the presence of interaction, as we found in the current experiment.
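The decision rule discussed in this paragraph can be stated as a small helper; the thresholds are those quoted in the text (Robertson 1959; Mulder et al. 2006), while the labels are illustrative:

```python
def classify_gxe(r_a: float) -> str:
    """Rough classification of the genetic correlation of the same trait
    measured in two environments (labels are illustrative)."""
    if r_a >= 0.8:
        return "no relevant interaction"   # Robertson (1959) threshold
    if r_a >= 0.7:
        return "borderline"                # full genetic gain uncertain
    return "interaction present"           # select within each environment

print(classify_gxe(0.45))  # → interaction present
```

Applied to the correlations in Table IV, every pairwise estimate in this experiment falls into the lowest band.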
The estimates from the uni-characteristic analyses were close to those from the bi-characteristic analyses (Tables II and V), thus sustaining the results of high heritability. The credibility intervals for the bi-characteristic analyses were narrower, showing high precision in the results.

The Spearman correlation from the uni-characteristic analysis was low (Table VI), where the highest was for Diamante do Norte x Floriano at 0.30 and the lowest was for Floriano x Diamante at 0.08. These values are lower than those reported by Santos (2009), who found correlations from 0.75 to 0.84 (ground pond and cage-net) and 0.96 (cage-net). The low values in the current experiment indicate interaction in the animal ranking. The same was observed for the genetic association from the Pearson correlation, lower than 0.22 from Palotina x Diamante down to 0.01 in Floriano x Diamante. These results indicate that animals selected in one region will not occupy the same rank in another location, which is strongly indicative of the interaction. Depending on the environment, the animal ranking was modified, and no individual exhibited similar records in all of the regions (Table VII). This fact also sustains the interaction for daily mean weight because of the numerous environmental factors, such as weather conditions, management and farming systems.

Additional evidence of interaction was the direct gains, with values higher than the indirect gains (Table VIII). This is primary information for fish breeding because, when farming occurs under conditions similar to those of selection, the fish can reach their full genetic potential (Hulata 2001, Reis Neto et al. 2014) compared with farming in distinct environments. This result can guide future decisions about selection. Matching selection cores and farming conditions in Brazil could intensify the results of breeding programs. Such decisions, however, require high initial investments that may hamper such work, but the incipient results (Table VIII) from the indirect selection show genetic gains, because there were no situations with negative values, which would indicate losses. In a situation where direct selection for the local weather and management is not possible, the indirect selection with lower genetic gains can still increase productivity (Hulata 2001, Reis Neto et al. 2014).

The highest genetic gain was observed in Diamante do Norte (281.35 g/generation) (Table VIII) because previous generations were selected in net-cages, and the cumulative genetic gain can benefit the current generation with positive effects. The values from Floriano (198.24 g/generation) and Palotina (98.74 g/generation) are distinct because 100 g is a significant difference for two similar ground pond environments. Therefore, this difference may be the result of different management approaches, such as feed quality, water quality, or pond fertilisation, which could affect the quantity of phytoplankton, because tilapias are omnivorous filter fish that can use this resource when it is available. The differences between Palotina and Floriano were highlighted when the animals were selected in Diamante do Norte and evaluated in Palotina. The lower genetic gain of 7.77 g/generation was incipient compared with the animals selected in Floriano and evaluated in Diamante do Norte, whose gains were 74.66 g/generation (Table VIII).

The low participation of the indirect genetic gains in the direct ones is evidence of the interaction. The lowest participation was in Diamante do Norte with Palotina, where the gain was 0.0276 g/generation, and the highest was 0.3766 g/generation in Floriano with Diamante do Norte (Table IX).

CONCLUSIONS

The evidence of the genotype x environment interaction was verified by the results of the uni- and bi-characteristic analyses, which indicated the phenotypic heterogeneity of the variances among the three regions, weak genetic correlations, modified rankings in the different environments, higher direct genetic gains compared with indirect gains, and the low participation (%) of the indirect gains in the direct gains. Such results can guide further fish breeding programs.

TABLE V Estimates using the bi-characteristic analyses. h²: heritability; c²: common environment larvae culture participation; w²: common environment of fingerlings.

TABLE VII Fish ranking within the families based on high genetic values for live weight (g) from 1-10, intermediate genetic values from 11-20 and lower genetic values from 21-30 in the uni-characteristic analyses.
Use of surgical glue versus suture to repair perineal tears: a randomised controlled trial

Background Surgical glue has been used in several body tissues, including perineal repair, and can benefit women. Objectives To evaluate the effectiveness of n-butyl-2-cyanoacrylate surgical glue compared to the polyglactin 910 suture in repairing first- and second-degree perineal tears and episiotomy in vaginal births. Design A parallel randomised controlled open trial. Setting Birth centre in Itapecerica da Serra, São Paulo, Brazil. Participants and methods The participants were 140 postpartum women allocated into four groups: two experimental groups repaired with surgical glue (n = 35 women with a first-degree tear; n = 35 women with a second-degree tear or episiotomy); two control groups sutured with thread (n = 35 women with a first-degree tear; n = 35 women with a second-degree tear or episiotomy). The outcomes were perineal pain and the healing process. Data collection was conducted in six stages: (1) up to 2 h after perineal repair; (2) from 12 to 24 h postpartum; (3) from 36 to 48 h; (4) from 10 to 20 days; (5) from 50 to 70 days; and (6) from 6 to 8 months. ANOVA, Student's t, Monte Carlo, chi-square and Wald tests were used for the statistical analysis. Results One hundred forty women participated in the first three stages, 110 in stage 4, 122 in stage 5, and 54 in stage 6. The women treated with surgical glue had less perineal pain (p ≤ 0.001). There was no difference in the healing process, but the CG obtained a better result in the coaptation item (p ≤ 0.001). Conclusions Perineal repair with surgical glue causes low pain intensity and results in a healing process similar to that of suture threads.
Trial registration Brazilian Registry of Clinical Trials (UTN code: U1111-1184-2507; RBR-2q5wy8o); date of registration 01/25/2018; www.ensaiosclinicos.gov.br/rg/RBR-2q5wy8/

Introduction Perineal trauma in vaginal birth can negatively influence women's physical, physiological, psychological and social well-being, with short- and long-term consequences [1,2]. Nearly 70.3% of women present some perineal trauma at delivery, 18.2% present first-degree tears and 40.6% second-degree tears [3]. Nulliparous women present approximately 2.5 times more chances of suffering some perineal trauma at delivery than multiparous women [4].

Page 2 of 10 Caroci-Becker et al. BMC Pregnancy and Childbirth (2023) 23:246

The literature indicates that perineal pain related to perineal traumas is present in many primiparous women during the first year after birth, being reported by one out of ten mothers [5]. The incidence of complications in the healing process resulting from perineal traumas varies between 0.1% and 23.6% due to infection and from 0.2% to 24.6% due to dehiscence [6]. Currently, the fast-absorbing polyglycolic suture thread (Vicryl ® rapid) with the continuous technique is the primary choice for perineal repair, as it presents better results in pain and perineal healing [7]. However, adhesive glue shows excellent potential for changing the perineal repair technique, as it presents results similar or superior to those of the Vicryl ® rapid suture thread [8][9][10]. One of the first studies that compared the use of fast-absorbing polyglycolic suture with octyl-2-cyanoacrylate surgical glue in the perineal repair of first-degree tears was conducted with 102 women (divided into two groups: 28 sutured women and 74 with glue repair), monitored for six weeks.
It concluded that the use of glue presented cosmetic and functional results similar to those of suturing with thread and also several advantages, such as a reduction in perineal repair time and perineal pain intensity, exemption from the need for local anaesthesia, and more satisfaction among women [11]. A literature search showed the use of surgical glue in the perineal repair of first-degree tears and of the perineal skin in second-degree tears. Still, there remained a lack of knowledge in obstetrics about the effectiveness of repairing all tissue layers in second-degree tears and episiotomy [12]. In addition, it is essential to compare several types of surgical glues with other existing methods for perineal repair concerning perineal pain intensity, the long-term perineal healing process, the procedure duration, and the postpartum infection rates. The study aimed to evaluate the effectiveness of surgical glue compared with standard suture thread in repairing first- and second-degree perineal tears and episiotomy in vaginal births, concerning perineal pain and the healing process.

Design A parallel randomised controlled open trial.

Setting The study was conducted at the birth centre of a municipal emergency and maternity hospital in the metropolitan region of São Paulo (Brazil), which assists women with low-risk full-term pregnancies.

Participants and sample size The population consisted of women with first- or second-degree spontaneous perineal tears or episiotomy. After delivery, this population was allocated into two experimental groups (EG) and two control groups (CG). The EG consisted of EG1: women who underwent repair of first-degree tears with glue, and EG2: women who underwent repair of second-degree tears or episiotomy with glue. The CG were as follows: CG1: women who underwent repair of first-degree tears with polyglactin 910 thread; and CG2: women who underwent repair of second-degree tears or episiotomy with polyglactin 910 thread.
The Bioestat ® 5.3 software was used to estimate the sample size. The sample was constituted to detect a significant minimum difference of 2 points in the pain score between both perineal repair methods. A residual standard deviation of 3 points, a 5% alpha error and 80% test power were considered a priori. This resulted in a minimum sample of 35 parturient women in each group. Thus, the sample consisted of 140 women: 70 allocated to the EGs (EG1: n = 35; EG2: n = 35) and another 70 to the CGs (CG1: n = 35; CG2: n = 35).

Inclusion criteria The eligibility criteria were as follows: no previous vaginal birth; having up to 6 cm of cervical dilation at the time the woman was invited to participate in the research; not using steroid substances; not presenting leukorrhea or any signs of infection at the repair site; no difficulty understanding the Portuguese language or in communication; and accepting to be subjected to perineal repair with surgical glue or suture thread. The women included in the study underwent vaginal birth with first- or second-degree spontaneous perineal tears or episiotomy.

Randomisation The sequence for inclusion of the parturients in each group was randomised through an electronically produced table of random numbers using the Statistical Package for the Social Sciences (SPSS) statistical program. Opaque envelopes were employed, which were only opened at the moment of perineal repair and contained the allocation to the glue or thread repair groups. One of the researchers was in charge of opening the envelopes.

Interventions and materials The interventions used surgical glue or suture thread to repair first- and second-degree perineal tears or episiotomy. N-butyl-2-cyanoacrylate (Glubran-2 ® ) is a synthetic surgical glue for use on internal and external tissue, registered at the National Health Surveillance Agency (Agência Nacional de Vigilância Sanitária, ANVISA) under No. 80159010003.
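Returning to the sample-size calculation above: the reported figure can be checked against the usual normal-approximation formula for comparing two means — a sketch, not the authors' Bioestat ® procedure:

```python
from scipy.stats import norm

# Inputs as reported: minimum detectable difference of 2 points on the
# pain score, residual SD of 3 points, 5% two-sided alpha, 80% power.
delta, sd, alpha, power = 2.0, 3.0, 0.05, 0.80
d = delta / sd                                # standardised effect size
z_alpha = norm.ppf(1 - alpha / 2)             # ≈ 1.96
z_beta = norm.ppf(power)                      # ≈ 0.84
n_per_group = 2 * (z_alpha + z_beta) ** 2 / d ** 2
print(round(n_per_group))  # → 35, matching the 35 women per group reported
```

The agreement suggests the reported 35 per group comes from this standard two-sample formulation.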
In contact with living tissue or a humid environment, the glue polymerises quickly, creating both an antiseptic barrier and a thin elastic film with high tensile strength, which ensures solid tissue adhesion that is not damaged by blood or organic fluids. Proper glue application leads to solidification that starts in 1-2 s, finishing its reaction after nearly 60-90 s. In typical surgical procedures, the glue film is removed via hydrolytic degradation. The polyglactin 910 thread consists of synthetic, absorbable polyglycolic acid, which is fully absorbed in approximately 35 days via hydrolysis. The thread used for this study was a Vicryl rapid ® 2.0 fast-absorption thread, applied with a continuous suture technique for perineal repair. The procedure described by Caroci-Becker et al. (2021) [13] was used to apply the Glubran-2 ® glue. It is worth noting that the woman was subjected to a new repair process with the same material in case of failure of the perineal repair with surgical glue. The new repair procedure was performed with suture thread only when repair with surgical glue was impossible, due to bleeding, for instance. For the suture with the Vicryl rapid ® thread, local anaesthesia was applied with lidocaine 2% without vasoconstrictor. The perineal repair procedure was performed with thread using the non-anchored continuous technique in all the tissue layers.

Outcomes Pain occurrence and intensity were the primary outcomes evaluated, whereas the secondary outcome was perineal healing. The perineal repair time was also evaluated.

Training of the team and pilot study In order to improve the technique of applying the Glubran-2 ® glue, a training session was conducted with the researchers before data collection, in which the surgical glue was applied to beef tongue and other pieces of beef.
After training the researchers, a case-series study was conducted [13] to implement the necessary adjustments before developing the current study.

Data collection and measurements The data were collected from March 2017 to September 2018 in six stages: stage 1: during labour and up to 2 h after the perineal repair procedure; stage 2: from 12 to 24 h postpartum; stage 3: from 36 to 48 h; stage 4: from 10 to 20 days; stage 5: from 50 to 70 days; and stage 6: from 6 to 8 months. A form for the interview and data recording was developed specifically for this research, containing the following baseline characteristics: maternal age, ethnicity, schooling level, occupation, marital status, nutritional status, parity, gestational age, body mass index (BMI), newborn weight, and the outcome variables. A pre-test was conducted to evaluate the form and the procedures that would be used during data collection. As a first step, the researchers presented the study to the professionals working in the service so that they could accept, collaborate and integrate themselves into the research. During recruitment, the researchers visited the study locus daily to locate the women who met the study's eligibility and inclusion criteria. The eligible women were invited to participate in the study when hospitalised. To avoid bias in the data, the classification of the perineal trauma and the evaluation of the need for the repair procedure were carried out by the nurse-midwives of the birth centre, who were not part of the research team. Nevertheless, the nurse-midwives of the research team were in charge of the perineal repair procedure. A digital stopwatch was used in both groups to measure the perineal repair time. The professionals were asked to prescribe analgesics or anti-inflammatory medications if the puerperal women complained about pain so that perineal pain intensity could be better assessed.
The participating women were instructed to request pain medications anytime they needed them. A medical evaluation was requested in case of complications related to the perineal repair procedure in the women from any research group. In order to assess perineal pain intensity, the women were handed the Visual Numeric Scale (VNS) to visualise and indicate the number corresponding to their pain intensity. The VNS consists of a horizontal line with values expressed in centimetres from 0 to 10, where zero is the total absence of pain and ten represents the worst pain possible [14]. This evaluation was performed 2 h postpartum, to ensure a proper pain assessment between the groups by avoiding the anaesthesia bias in the suture group, and at all the other study stages. The perineum healing process was evaluated using the REEDA scale in stages 1 and 4 of the study. The scale is indicated to evaluate the tissue recovery process after perineal trauma through five healing items: redness, oedema, ecchymosis, discharge, and approximation (coaptation of the wound edges) [15]. Each item evaluated was assigned a score from 0 to 3, where the maximum score (15) corresponds to the worst possible perineum healing result [16]. A Peri-Rule ® ruler was used to measure hyperemia, oedema, ecchymosis and coaptation of the edges [17]. This ruler was wrapped in polyvinyl chloride (PVC) film and reused after cleaning with soap and water, followed by antisepsis with 70% alcohol. In addition to the items on the REEDA scale, the researchers evaluated any other tissue damage or morbidity related to the perineal repairs, such as hematoma, itching, wound infection, or allergic reaction.
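The REEDA scoring described above reduces to summing the five item scores; a minimal sketch (the function name is illustrative, not from the scale's authors):

```python
def reeda_score(redness: int, oedema: int, ecchymosis: int,
                discharge: int, approximation: int) -> int:
    """Total REEDA score: five items, each rated 0-3, summed so that 0 is
    ideal healing and 15 is the worst possible result."""
    items = (redness, oedema, ecchymosis, discharge, approximation)
    if any(not 0 <= item <= 3 for item in items):
        raise ValueError("each REEDA item must be scored from 0 to 3")
    return sum(items)

print(reeda_score(1, 0, 0, 1, 2))  # → 4 (low score: healing close to ideal)
```

Because the total is a simple sum, a difference in any single item, such as the approximation item favouring the control group here, shifts the overall score directly.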
Given the nature of the interventions and outcomes, there was no possibility of blinding, as both the women and the researchers were aware of the type of perineal repair performed and because, in the evaluation of the healing process, it is possible to see whether glue or suture thread was used.

Statistical analysis The data were double-typed into Epi-Info 6, and the database was validated and imported into Excel. The mean and standard deviation (SD) were calculated for the descriptive analysis of the continuous quantitative variables. The Student's t-test was used to determine whether there was a statistical difference between the means of the two groups, and analysis of variance (ANOVA) with the coefficient of determination was used for the multiple comparisons of means. Absolute and relative frequencies were calculated for the categorical variables. The test used in the inferential analysis was Pearson's chi-square, and the approximate chi-square test with Monte Carlo simulation was used in cross-tabulation. In the longitudinal analysis, the generalised linear model (GLM) was employed, with Wald's chi-square test and analysis of the interactions of the effects (group and time, or group and tear degree) based on linearly independent pairwise comparisons between estimated marginal means. The significance level adopted was p ≤ 0.05. The analyses were performed in the following statistical packages: SAS System for Windows V8, SPSS for Windows (version 12.0) and Minitab Statistical Software - Release 13.1.

Results A total of 254 women met the eligibility criteria. Among these, 114 were excluded for the following reasons: not meeting the inclusion criteria (n = 76; caesarean section indicated during labour = 55; intact perineum = 21); refusing to participate (n = 7); and other reasons (n = 31; first-degree tear when the number of EG participants was already complete = 12; included in the pilot study = 19).
Consequently, 140 women were included and randomised into the allocations: EG1 (n = 35), EG2 (n = 35), CG1 (n = 35), and CG2 (n = 35), according to the type of trauma and the repair procedure performed (Fig. 1). All the women enrolled in this study were nulliparous. There was no significant difference between the EGs and CGs (EG1, EG2, CG1, and CG2) concerning the sociodemographic and clinical characteristics (Table 1). Perineal pain intensity was evaluated for both types of perineal repair, from stage 1 to stage 6, verifying that perineal pain intensity was lower in the EGs (p ≤ 0.001), with a decrease in pain over time (p ≤ 0.001) (Fig. 2). The healing process according to groups is shown in Fig. 3. The separate analysis of the REEDA scale items in the EGs and CGs presented a variation in the scores of the "edge approximation" item according to the group, the study stage and the tear degree. Approximation was better among the women with first-degree tears who had perineal repair with suture (for the group and tear degree effects: p ≤ 0.001). Over time, approximation was also better in the CG women (for the group and time effects: p ≤ 0.001). It is worth noting that the lower the REEDA score, the better the healing process. In the EG, a new repair procedure with surgical glue was necessary for six women (8.6%; EG1 = 2; EG2 = 4) between 12 and 48 h postpartum. It is worth mentioning that these women continued in the study. No need for a new repair procedure was verified in any of the CG women. No other tissue damage or morbidity related to perineal repairs, such as hematoma, itching, wound infection, or allergic reaction, was observed in the studied groups. The perineal repair time was lower in the EG compared to the CG, with a mean of 12.1 (SD = 12.4) minutes vs 18.2 (SD = 10.1) minutes. It is worth noting that the repair time was not recorded for 22 (31.4%) women from the EG and 9 (12.9%) from the CG (Table 2).
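The participant flow reported in this section can be verified with a short arithmetic check (numbers taken directly from the text):

```python
# Participant-flow check using the counts reported in the text.
assessed = 254
excluded = 76 + 7 + 31          # not meeting criteria + refused + other reasons
randomised = assessed - excluded
groups = {"EG1": 35, "EG2": 35, "CG1": 35, "CG2": 35}

assert excluded == 114
assert randomised == 140
assert sum(groups.values()) == randomised
print(randomised)  # -> 140
```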
Discussion The principal findings of this study were that the use of surgical glue for the perineal repair of first- and second-degree tears and episiotomy in all tissue planes (skin, mucosa, and muscle) proved to be as effective as the standard suture method. It showed less pain, a shorter procedure time, and a similar healing process. The strengths of the present study were the design of a clinical, controlled, and randomised trial, in which the researchers rigorously followed all the eligibility and inclusion criteria to minimise selection biases. Also, the surgical glue was suitable for deep tissue layers, such as muscle, which allowed it to be used in second-degree tears and episiotomy. In addition, the follow-up over a more extended period (up to 8 months) allowed the evaluation of the healing process until its complete resolution. Another strength was the development of the surgical glue application technique and the training of the team that participated in the study, which will allow the future sharing of this method. There was good acceptance among the women invited to participate in the research, which can be considered a strength of the study. This finding surprised the researchers, as it was believed that, as this was a new procedure, most women would not accept participating in the research out of fear, but this was not the case. On the contrary, some women allocated to the control group requested that the glue be used. However, the importance of randomisation of the types of perineal repair was explained, and changing the allocated method was not allowed. The weaknesses found in the current study were the extended data collection period, due to the small number of deliveries per day at the research site, and the high number of exclusions related to the indication of caesarean section or an intact perineum. Due to the rapid polymerisation of the surgical glue, there was also difficulty in using it in the presence of heavy bleeding.
In some cases, it was necessary to use more than one surgical glue ampoule (0.5 ml) to repair the tear, increasing the cost of the procedure. Another problem observed in the EG was the need for a new repair with surgical glue between 24 and 48 h after the initial procedure, which did not occur in the CG. On the other hand, it was also observed that one glue unit could be used for more than one woman, depending on the degree and extent of the perineal tear. A significant limitation was the price of the products. The surgical glue had an excessive cost (R$ 350.00) compared to the suture thread (R$ 22.00) in this study. However, it is worth mentioning that some materials and medications were not needed when performing the repair with surgical glue, such as anaesthetics for the procedure and an analgesic schedule for perineal pain after delivery, and that the health professional spent less time performing it. Although there was the option of using more economical surgical glues for skin and mucosa repair, only the glue chosen in the research is registered at ANVISA with approval to be used in the innermost layer (muscle) of perineal trauma. The favourable results with surgical glue for repairing first- and second-degree tears concerning perineal pain agree with the results of other studies. A study in women with second-degree tears compared three skin closure methods (glue, suture, and non-suture) and showed that the lowest perineal pain intensity was with surgical glue. Assessed with a 100 mm visual analogue scale, the mean pain in the second postpartum week was 3.0 with glue, 5.0 with suture and 7.0 with no suture (p = 0.02). This difference was no longer observed three months after delivery (p = 0.31) [18]. Other studies also confirm the positive findings of using glue [11,19,20]. A study with a sample of 135 women was done aiming to compare the use of Histoacryl ® glue with the Monosyb ® suture thread to repair first-degree tears.
It showed that women repaired with surgical glue had lower perineal pain intensity in all situations evaluated (at rest, when sitting, walking and urinating) than those with sutures in the first week after birth. Nevertheless, no difference in perineal pain was found at 30 days postpartum [8]. As for the healing process, evaluated by the REEDA scale, the groups were similar regarding hyperemia, oedema, ecchymosis and discharge. The difference in edge coaptation was due to a lower score in the CG than in the EG, and it occurred mainly among women with second-degree tears up to 10 days after delivery. Other clinical trials that compared the use of glue to suture for perineal skin repair showed no significant difference in any of the items of the REEDA scale [20,11,19,18,9]. Nonetheless, it is essential to point out that the coaptation of deeper tissue layers was not evaluated in these studies, which had a different design from ours. The need to perform a new perineal repair procedure was also observed in a study conducted with 61 women in which surgical glue was used to close the cutaneous episiotomy [21]. The percentage of 3.3% (2 women) who had superficial wound dehiscence in the first 48 h after birth was lower than the 8.6% observed in the current study, likely because that study addressed only the repair of the cutaneous layer. Only in the current study was the repair with surgical glue performed in all affected tissue layers (skin, mucosa and muscles), except for the anal sphincter muscles, as women with third- or fourth-degree tears were not included. As for the perineal repair time, in the EG it was 6.1 min shorter than in the CG, corroborating the results of other studies [22,11,20,9,10]. It is worth emphasising that those studies used surgical glue on the mucosa or perineal skin, whereas this study evaluated the repair time of all tissue planes.
Reducing the duration of the perineal repair procedure is essential, as it can decrease infections due to the lower exposure of tissues to microorganisms in the environment and reduce discomfort for women [23]. The results of this study related to less pain for women and a shorter procedure time are auspicious reasons for clinicians and policymakers to change the practice of perineal repair. Nevertheless, the excessive cost of surgical glue compared to suture thread can be an important limiting factor for its use in delivery care practice, especially in health systems that face challenges due to the increased costs of materials and equipment, as well as in developing countries with few available resources. Therefore, future cost-analysis research is suggested, comparing all materials, the procedures involved, and the time spent by the professional in performing the two types of perineal repair. It is also suggested that further studies be conducted with the several types of glue available and different application methods to find the materials and techniques that contribute the best cost-benefit to women. In addition, another vital factor to be analysed is women's satisfaction with both types of perineal repair.
An Analysis of Severe Coal Mine Trauma Curing According to 144 Examples Introduction: to improve the level of treatment of patients with severe coal mine trauma and to identify the patterns and deficiencies of severe coal mine trauma treatment. Method: the data of 144 patients with severe coal mine trauma from 5 hospitals in Liupanshui City, from March 2014 to March 2017, were summarized and analyzed. Result: all 144 severe coal mine trauma patients were male, with an average age of 40.92 years. The most ICU admissions occurred from April to June (about 29.17%) and the fewest from July to September (about 17.26%). Patients were admitted to the ICU on average 0.85 days after the severe coal mine trauma occurred, and the average ICU treatment time was 15.15 days. Most patients came from the emergency department (94.44%); the fewest were transferred after treatment in another hospital. The average SOFA score was 4.1 and the average APACHE II score was 16.2. The type of work in which trauma occurred most often was coal mining (74.31%); other jobs in which trauma also occurred were driving (7.64%), ventilation (8.33%), transportation (6.94%) and electromechanics (1.39%). The main cause of trauma was roof failure (41.67%); other causes were harvester extrusion (36.11%), falling (7.64%), electromechanical explosion (7.64%) and high gas (5.56%). The main injured organs were the craniocerebrum, lungs, abdomen, limbs, centrum (spine) and maxillofacial region. The implementation rate of tertiary rescue was 79.86%. Finally, 135 patients survived after treatment and 9 died. Conclusion: the treatment of severe coal mine trauma has its own patterns and characteristics; attaching high importance to coal mine safety, tertiary rescue and trauma treatment is beneficial to improving the success rate of treating severe coal mine trauma. cases of electromechanics, 10 cases of transportation, 11 cases of ventilation, 2 cases of other works.
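As a small arithmetic check (my own, not part of the paper), the work-type percentages reported in the abstract can be converted back into approximate case counts out of the 144 patients:

```python
# Converting the reported work-type percentages of the 144 patients
# back into approximate (rounded) case counts.
TOTAL = 144
work_type_pct = {
    "coal mining": 74.31,
    "driving": 7.64,
    "ventilation": 8.33,
    "transportation": 6.94,
    "electromechanics": 1.39,
}
counts = {job: round(TOTAL * pct / 100) for job, pct in work_type_pct.items()}
print(counts)                # e.g. coal mining -> 107 of 144 cases
print(sum(counts.values()))  # 142; the remaining 2 cases are other work types
```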
Reasons of trauma: 60 cases of roof failure, 52 cases of harvester extrusion, 11 cases of falling, 11 cases of electromechanical explosion, 8 cases of high gas, 2 cases of other situations. Main injured organs: 58 cases of craniocerebral injury, 29 cases of lung injury, 10 cases of abdominal injury, 23 cases of limb injury, 22 cases of centrum (spinal) injury, 2 cases of maxillofacial injury. 115 patients received tertiary rescue. Method: 5 state-owned hospitals in the coal-producing area of Liupanshui City provided the original data, and all patients were mainly treated in the ICU. The APACHE II and SOFA scores were adopted as the trauma assessment standards. The tertiary rescue standard adopted the general hospital, mine hospital and wellhead/downhole rescue system. A retrospective analysis was performed on the patients' age, trauma rescue time, treatment source, type of work, main cause of trauma, main injured organ, implementation rate of tertiary rescue and final outcome. Statistics: SPSS 17.0 software was used for statistical analysis; Student's t-test was applied to the measurement data, and p values below 0.05 were considered significant. Findings 135 patients survived (including 106 who were cured through tertiary rescue) and 9 died. The comparison of treatment between the survivors and the deaths from severe coal mine trauma is shown in the following chart. Discussion Governments at every level in Liupanshui City pay high attention to coal production safety. During the twelfth Five-Year period, the whole city invested in building a "safe cloud" data monitoring system and improved the wellhead and downhole telecommunication and monitoring systems, whose core is a safety production scheduling command center including positioning, video monitoring, and real-time gas and carbon monoxide monitoring; these systems can monitor all 209 mining production sites in the whole city online.
The Liupanshui District water mine and the Panjiang mine rescue teams both provide resources to the country-level rescue team, meaning they support the team in rescuing coal mine patients and implementing the tertiary rescue standard at the first opportunity. Therefore, the rescue rate can reach 79.86%. Since 2008, the ICU has been an independent secondary discipline, which makes it convenient for Liupanshui City's severe coal mine trauma patients to receive intensive rescue in every coal mine hospital, and the success rate of this treatment can reach 93.75%. During the twelfth Five-Year period, the death rate per billion of GDP in the whole city was 4%, lower than the national and provincial levels [1]. In the past 3 years, few disasters, deaths and gas explosions have occurred in Liupanshui City's coal mines; most traumas are caused by roof failure, harvester extrusion and falling, which clearly differs from the national investigation findings on gas explosions and high gas [2]. Liupanshui City is an important base for the national "West-to-East Power Transmission" project, and most of its output is thermal power. During each year's flood-season peak of the Three Gorges' hydroelectric generation, the demand for thermal power is lower, the mining tasks are slower and the workload is smaller; that is why trauma occurs least often from July to September. However, middle-aged miners, as special workers whose knowledge and social adaptation abilities are not very strong, have to continue mining work; that is why the average age of severe coal mine trauma patients is 40.92 years and why these people are at high risk. Craniocerebral injury occurs most often in severe coal mine trauma, and our conclusion is the same as our peers' [2]: because of the special coal mining environment, craniocerebral injury happens easily.
At that moment, the impact force is very high and the wounded area is large; diffuse axonal injury, generalized cerebral contusion and delayed intracranial hematoma are characteristics of severe coal mine trauma. When the brainstem is injured, the brainstem reflexes keep decreasing and the GCS becomes lower, predicting that patients may die in a short time [3]. For severe coal mine trauma patients, cerebral compression should be relieved as soon as possible, which means early craniotomy and evacuation of the intracranial hematoma, all within the "golden hour"; during treatment, vital signs, pupillary changes and CT review should be the focus. Due to the limitations of the coal mine trauma setting and coal mine workers' knowledge, the initial transport of injured people to hospital is often not performed properly; after arrival at hospital, craniocerebral CT is valued, but neck examination and cervical spine imaging are often ignored, which leads to missed findings on review. According to the statistics, the rate of missed diagnosis associated with craniocerebral injury ranges from 2.8% to 6.9% [1,2]; therefore, when rescuing patients with craniocerebral injury, the cervical vertebrae should be protected from the first moment to avoid missed diagnosis. The respiratory system is the second most commonly damaged. Miners' lung injury is usually extensive contusion of both lungs; when gas levels are high or gas explodes, the lungs suffer inhalation injury. Sharp blows to the lung can lead to alveolar rupture, pulmonary hemorrhage, pulmonary edema, emphysema and pleural effusion [1]; some serious contusions can cause death through hypoxemia and ARDS [4]. In the early period, if patients can antagonize excessive inflammation in the alveoli, receive restricted fluid volume resuscitation [4], keep airways open and maintain the integrity of the chest wall, they will recover faster. Serious flail chest should be fixed by early operation.
After lung injury, pulmonary contusion exudation peaks between 48 and 72 hours, so attention must be paid to CT review of the lungs and to clinical signs. In the early period of pulmonary atelectasis, medical measures need to be applied in the clinic. At present, blood observation can simultaneously track changes in COHb, allowing us to judge when and how to use hyperbaric oxygen (HBO) to treat patients who have experienced high gas or gas explosion [2]. Restricted by the present situation, our research data cannot explain all the causes of death in coal mine accidents, so the findings are influenced accordingly. However, Liupanshui City's severe coal mine trauma has its own patterns and characteristics; the government and enterprises will continue to attach high importance to coal mine safety and tertiary rescue, to build ICU medicine as a comprehensive platform for medical rescue, and to improve the success rate of treating severe coal mine trauma.
Zoom in on the levels of employee engagement, perception, satisfaction; employee roles influenced – health care sample study Purpose – The paper considers quality as coming from quality employees who take discretionary efforts, have the right perception towards quality and derive satisfaction from their contribution. It explores the relationship of engagement, perception and satisfaction, maps their levels and identifies managerial implications for improving them. Design/methodology/approach – William Kahn's employee engagement dimensions, Parasuraman and Zeithaml's quality dimensions and Harter et al.'s satisfaction dimensions were applied, and the variables were framed in the health-care context, tested and applied. Survey data were collected from randomly selected medical and non-medical employees of health-care organizations in the south Indian state of Tamil Nadu, using a structured questionnaire. Findings – Age, experience and the roles of the respondents at work have a significant association with the levels. The study explores a significant positive relationship between perception, engagement and satisfaction. On average, 28% of employees have a high level of engagement, 18% a high level of perception and 22% a high level of satisfaction; the rest fall under the moderate and low levels. The roles of the respondents significantly predict the levels. Originality/value – The study focuses on the engagement, perception and satisfaction of employees, not of patients. It registered the responses of trained physicians, nurses and administrative staff. It illustrates the strategic importance of human resources in improving the levels concerning quality measures. Introduction Employee engagement is a psychological condition expressed when employees are closely associated with work and the organization physically, cognitively and emotionally. William Kahn (1990) defines it as employees harnessing their selves into the roles they play to achieve the goals.
They harness their selves into role performance, which underlines what researchers view as effort (Hackman and Oldham, 1980), involvement (Lawler and Hall, 1970), mindfulness and intrinsic motivation. They become physically involved in tasks, cognitively vigilant and empathically connected to others in their role performances and work environment (Olugbade and Karatepe, 2019). Kahn (1990) defines disengagement, conversely, as a simultaneous withdrawal and defense of a person's preferred self in behaviors that promote a lack of connectedness; physical, cognitive and emotional absence; and passive, incomplete role performances. This kind of unemployment of the self underlies task behaviors researchers view as being burned out (Maslach, 1982) or detached and effortless (Hackman and Oldham, 1980; Agung Nugroho Adi, 2015). It means uncoupling the self from the role; people's behaviors display an evacuation or suppression of their expressive and energetic selves in discharging role obligations. The self-determination theory (Deci and Ryan, 1985b; Ryan and Deci, 2000b) provides discrepant viewpoints recognizing relatedness and autonomy, which form the background of employees' engagement in work. Relatedness refers to feeling connected to others, caring for and being cared for by those others, and having a sense of belongingness both with other individuals and with one's community. Autonomy concerns acting from self-interest and integrated values: engaged employees harness their selves into their role performance from their own interest and values. They feel autonomous and experience their behavior as an expression of the self, feeling both initiative and value with regard to it (de Charms, 1968; Deci and Ryan, 1985b; Ryan and Connell, 1989). The cognitive evaluation theory supports engaged employees who have intrinsically motivated behaviors and inherent satisfactions in their association with work and organization values (Ryan and Deci, 2000).
The organismic integration theory assumes that engaged, self-deterministic people are naturally inclined to integrate their ongoing experiences in the work environment. They assume they have the necessary nutriments to do so, and they internalize activities that were initially externally regulated and integrate them with their sense of self (Schafer, 1968). The JD-R model (Bakker et al., 2003b, 2003c; Demerouti et al., 2001a, 2007) assumes every occupation has risk factors in carrying out job roles. These risk factors demand sustained physical, psychological, social and organizational effort and cost efficiency. This has to be met by planning and allocating job resources: physical (the physical environment in the workplace; tangible and intangible facilities; employees' physical availability); psychological (employees' cognitive association with work: capability, competence, skills, intelligence; emotional association with work: belongingness, owning, empathy); social (relationships with colleagues and superiors; leadership quality); and organizational (a work culture fostering a supportive work environment; learning; development-oriented appraisal; participation; recognition and rewards). The conservation of resources (COR) theory (Hobfoll, 2001) states that the prime human motivation is directed toward the maintenance and accumulation of resources to meet the demands of every job role. Employee engagement Perception, according to Robbins (2001), is a process by which individuals organize and interpret their sensory impressions to give meaning to their environment. In a workplace, employees perceive and interpret from the work environment, which fosters methods, ways, routines, values, beliefs and practices aimed at achieving the goals.
The interpretation of employees is heavily influenced by personal attitude, motives, interest, past experience and expectations; by situation factors: time, work setting and social setting (Christina Catenacci, 2017); and by target factors: novelty, motion, proximity, background, sounds and size. Perception theorists say the perception and interpretation of a perceiver are influenced by external stimuli (environment, situations, surrounding elements) and internal stimuli (knowledge, hypotheses and expectations). Gibson's perception theory (1950, 1966, 1979; cited in Eysenck and Keane, 2008) of bottom-up processing discusses how a perceiver interprets information provided by the external environment, and Gregory's perception theory (1972, 1980; cited in Eysenck and Keane, 2008) of top-down processing discusses how a perceiver interprets by means of internal stimuli, constructing knowledge, experience and expectations. In health care, employees perceive and interpret information from the dynamic situations prevailing in the work environment. That environment fosters quality aspects, methods, standards, routines, values and practices to meet the needs of patients. On the other hand, they perceive and interpret the quality aspects from their own past experiences and expectations as well. These will greatly influence their behavior and the extent to which they harness themselves into their roles; a nurse plays the roles of health advocate, care taker and communicator; a physician plays the roles of care giver, administrator, manager and coordinator; and so forth. Eysenck and Keane (2008) argue that, for association with work, job demands require employees to carry out the roles, and resources are required to complete the tasks in functional areas. Employee engagement in health care The health-care industry is highly competitive and dynamic, forcing organizations to continue focusing on the ways to become and to remain the quality service provider of choice.
The success of any industry, including the health-care industry, greatly depends on the employees who conduct the day-to-day activities that keep the organization running (Roth and Leimbach, 2011). Given the current state of the health-care industry, which includes fast-rising health-care costs and uncertainties relating to health-care reform, employee engagement is now more critical than ever. Employees caring for others (doctors, nurses) are frequently exposed to direct contact with people and face crowded surroundings, a stressful environment, service delays due to workload and limited communication. These factors result in a sense of disengagement among employees (Saleem et al., 2020). Health-care organizations are now being required to essentially do more with less; hence, they require and need a more productive workforce (Roth and Leimbach, 2011). It is no longer sufficient for employees to just come to work; employees must now be engaged in the task at hand (Roth and Leimbach, 2011). In addition to the change in the health-care climate, research conducted by the Hay Group demonstrated that the levels of employee engagement are intrinsically linked to elements such as employee satisfaction, patient satisfaction, workplace safety, patient safety and employee retention. Engagement is a strong predictor of patient safety and satisfaction. Therefore, for health-care organizations to improve quality service, they must begin by investing in improving employee engagement. Prins et al. (2010) gathered data from a sample of 2,115 Dutch resident physicians and found that physicians who were more engaged were significantly less likely to make mistakes. A study of 8,597 hospital nurses by Laschinger and Leiter (2006) found that higher work engagement was linked to safer patient outcomes. The perception employees have of their work is reflected in their association with work, co-workers and patients.
It is believed that actionable quality outcomes come from quality engaged employees. The engagement of health-care employees is significant, as they are directly engaged in rendering service to improve the mental and physical health of patients. It is participation, but not only participation. It is speaking with peace, walking with humility, working with love for patients. When action is accompanied by a depth of quality, this is a sign of engagement (Brahma Kumaris, 2000). It is when, instead of staying within, one chooses to venture through the door, taking something of value to all within reach of the engaged. Before employees harness themselves into the roles they play, they perceive information. They interpret it using their internal stimuli: the sensory image, knowledge, expectations and experience. They then relate themselves to the roles and gain experience. They feel autonomous (self-motivated, self-guided and creative in their actions (Bakker and Albrecht, 2018)) in using available resources, which makes them feel satisfied with what they contributed and gives a sense of owning and belongingness. Engagement of health-care professionals With the above-discussed view, and understanding the importance of employee engagement and the relationship of perception and satisfaction with engagement, the study framed objectives to explore the significant relationships and to report the various levels of employee engagement, quality service perception and employee satisfaction through a sample study at south Indian health-care organizations. Research questions RQ1. Do health-care employees (physician, nurse and administrative staff) have different levels of engagement, quality service perception and satisfaction? RQ2. Do the roles health-care employees play at work influence the levels of engagement, perception and satisfaction? Objectives Investigating the relationship between employee quality service perception, engagement and satisfaction.
Reporting the association of the demographic profile of health-care respondents with their levels of engagement, perception and satisfaction, and reporting the different levels as high, medium and low. Analyzing the relationship of the roles health-care respondents play at work with the levels of engagement, perception and satisfaction. Exploring implications for improving the levels of engagement, perception and satisfaction through understanding the insights of health-care role design and analysis. Methodology The study was conducted among 425 randomly selected employees: trained medical employees (physicians and nurses) and non-medical employees (administrative staff, namely executives, assistants and ward secretaries). They were randomly selected from 20 health-care organizations functioning in Tamil Nadu state, South India. These National Accreditation Board for Hospitals & Healthcare Providers (NABH) accredited entry-level organizations are listed in the Prime Minister Health Insurance Scheme and in the Tamil Nadu Chief Minister's Health Insurance Scheme. These are "A1"-graded organizations functioning with many specialties and care units. These organizations have an average bed capacity of about 700 and an overall workforce strength of about 2,000. The Tamil Nadu health-care sector is one of the leading sectors, contributing 15-20% of gross domestic product (GDP) to the Indian economy. About 60% of the available total bed capacity is held by private health-care organizations, which mostly serve Tier I and II cities and towns. The remaining 40% is held by public organizations, which mostly serve villages and suburban areas. Among all the other states in India, Tamil Nadu has proven credentials over the past decades in attracting both domestic and international tourists for medical tourism.
Data collection procedure: using a structured questionnaire for data collection, the permission of various people was sought, such as the nodal officer, matron and nursing superintendents working in public health care, and senior human resource (HR) managers and training and development (T&D) managers working in private health care. The questionnaire variables were then restructured in accordance with their suggestions; finally, the questionnaire was validated. Firstly, the questionnaire was distributed to 600 sample respondents and 559 responses were received, from which 425 (76%) properly filled questionnaire responses were taken into account for analysis. The remaining 134 responses (24%) were omitted. The responses and the data were analyzed using SPSS version 20. The Employee Quality Service Perception Engagement Satisfaction scale (EQSPES) is a 40-item scale using a Likert five-point scale to explore and measure the relationship between the variables employee engagement (ENG), employee quality service perception (QSP) and employee satisfaction (SAT) and their levels. For the scale, William Kahn's (1990) employee engagement components (physical, cognitive and emotional association with work), Parasuraman et al.'s (1985, 1988, 1994) SERVQUAL components (tangible, assurance, reliable, responsiveness and empathy) and Harter et al.'s (2002) satisfaction components (sense of belongingness and contribution) were taken into account. The variables of the components were framed by the researcher based on the existing literature. The 12 items for engagement, 22 items for QSP from the employee's perspective and six items for satisfaction were framed and tested for internal consistency reliability, composite reliability, content validity and construct validity, and reported in the scale, which is annexed in the appendices. Exploratory factor analysis: it yielded three factors explaining a total of 53.375% of variance for the entire set of 40 variables.
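The questionnaire figures reported above can be cross-checked with a short sketch (my own arithmetic, using only the numbers stated in the text):

```python
# Cross-check of the reported questionnaire figures:
# 600 distributed, 559 received, 425 usable, 134 omitted, 40 scale items.
distributed, received, usable = 600, 559, 425
items = {"engagement": 12, "quality_service_perception": 22, "satisfaction": 6}

omitted = received - usable
print(omitted)                         # 134 omitted responses
print(round(usable / received * 100))  # ~76% usable-response rate
print(sum(items.values()))             # 40-item EQSPES scale
```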
The first factor, employee ENG, explaining 26.095% of the variance, was framed due to high loadings of the items physical, cognitive and emotional association with work. The second factor, employee QSP, explaining 14.656% of the variance, was framed due to high loadings of the items being tangible, being assured, being responsive, being reliable and being empathized to patients. The third factor, employee SAT, explaining 12.624% of the variance, was framed by the items sense of contribution and belongingness. KMO and Bartlett's test of sphericity: the value of 0.643 indicates the set of variables is at least adequately related for factor analysis to identify three clear patterns of response among respondents: employee ENG, employee QSP and employee SAT. These three factors are independent of one another. Overall, these analyses indicated that three distinct factors underlay health-care employees' responses to the items, and that these factors were moderately internally consistent. Variables/items. Employee engagement: this is expressed when employees are closely associated with work physically, cognitively and emotionally. Kevin Kruse (2012) elaborates it as an emotional commitment to an organization. Jung and Yoon (2019) argue that it enhances commitment, job performance and organizational performance. Physical association with work (M = 3.84, SD = 0.418, λ = 0.867, communality = 0.771): employees are physically engaged in work when the expectations of work are made known and the materials and equipment needed for work are provided. Cognitive association with work (M = 3.96, SD = 0.347, λ = 0.791, communality = 0.643): when employees are in roles that match their inner potential, they are encouraged cognitively; integrating employees' talents, skills and knowledge, and sharing feedback specific to individuals, supports this (Liloia and Nicole, 2014).
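The adequacy check reported above (Bartlett's test of sphericity alongside the KMO measure) can be illustrated with a minimal sketch. The function below computes Bartlett's chi-square statistic for a correlation matrix; this is an illustrative sketch, not the SPSS procedure the study actually used, and the toy input matrix is hypothetical.

```python
import numpy as np
from math import log

def bartlett_sphericity(R, n):
    """Bartlett's test of sphericity for a p x p correlation matrix R
    estimated from n observations. Returns (chi-square statistic, df).
    A large statistic (small p-value) indicates the variables are
    correlated enough to justify factor analysis."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6.0) * log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df

# Toy check: an identity correlation matrix (no correlation at all)
# gives a statistic of 0, so factor analysis would not be justified.
stat, df = bartlett_sphericity(np.eye(3), n=425)
```

In practice the statistic would be computed on the 40-variable correlation matrix from the 425 responses and compared against the chi-square distribution with the returned degrees of freedom.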
Employee QSP: this concerns how employees receive and interpret a work environment that fosters quality policy and standards, and the expectations and needs of customers. The performance of employees is influenced by their perception of quality service. For this study, universally applied quality service components were used to measure employee QSP. Being tangible (M = 3.67, SD = 0.572, λ = 0.760, communality = 0.673): the physical availability of facilities and materials, and the presence of a person who renders service to patients, which can be sensed by patients. Being assured (M = 4.07, SD = 0.362, λ = 0.571, communality = 0.366): the courtesy, ability and knowledge of employees and their capacity to put these into words. Being reliable (M = 3.89, SD = 0.320, λ = 0.752, communality = 0.615): the ability of employees to render service in a safe, efficient manner with consistent performance. Pena et al. (2013) say that a service renderer must comply with what was promised. Being responsive (M = 3.94, SD = 0.422, λ = 0.636, communality = 0.468): Harrison (2005/2013) describes this as the ability of employees to provide voluntary service attentively and promptly, with precision and speed of response to patients. Being empathized (M = 3.78, SD = 0.576, λ = 0.551, communality = 0.281): the care of the organization for the patients, assisting them individually, taking effort to understand the needs of patients and giving personal attention (Parasuraman et al., 1988, pp. 35-43; De Jager and Du Plooy, 2007). Employee satisfaction: this is a sense realized when employees' contribution is recognized, when organizations treat them well and when the voice of employees is listened to (Harter and Schmidt, 2002). Sense of contribution (M = 3.49, SD = 0.508, λ = 0.654, communality = 0.498): this is realized when employees feel that they are making significant contributions to their workplaces (Custom Insight, 2014).
Sense of belonging: employees at every level and in every function like to feel that they belong. Baldoni (2010) says that employees who love what they do and the organizations to which they belong demonstrate their support for their organization in words, but most especially in actions. Pearson correlation analysis, regression analysis and χ² tests were carried out to identify significant relationships between the variables. MANOVA was carried out to explore the significant mean variance between the groups (administrative staff, physicians and nurses) across the outcome variables. Results and discussion Firstly, the level of perception is significant in producing the expected behavior and performance of employees. According to Gregory's and Gibson's perception theories, employees receive information externally available in the work environment, then interpret it internally and express it in the form of performance. The health-care work environment is dynamic and complex, as it involves a variety of stakeholders, notably diverse patients and their expectations. In a work environment fostering quality standards and ensuring good service delivery practices, employees perceive the quality service expectations and needs of the stakeholders. These can be met using the personal past experiences employees have gained in the field; they then interpret and internalize them and finally fit into their roles. Organismic integration theory represents internalization as a natural process in which people work to actively transform external regulation into self-regulation (Schafer, 1968), thus becoming more associated with work. A nurse plays the roles of health advocate, communicator and so forth; a physician plays the roles of collaborator, manager and health advocate; meanwhile, administrative staff play roles supportive to medication and treatment, and engage themselves in work.
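As an illustration of the Pearson correlation analysis mentioned above, the coefficient can be computed as follows; this is a minimal pure-Python sketch with toy scores, not the study's data or its SPSS output.

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient for two
    equal-length sequences of numeric scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Toy example: two perfectly linearly related score lists correlate at r = 1.
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```

The same coefficient, computed between respondents' engagement, perception and satisfaction scores, is what the correlation analysis in the study reports.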
Hence, perception of work, environment, organization and quality practice plays a significant role in determining the level of engagement. On the other hand, the level of engagement gives greater work exposure and experience to employees. Secondly, the level of engagement that employees have in their work results in a sense of mental satisfaction with the service they render. Self-determination theory, and in particular relatedness (de Charms, 1968; Deci and Ryan, 1985b; Ryan and Connell, 1989), supports the idea that employees naturally derive satisfaction from being engaged in their contribution. The extent to which employees become closely associated with work and take discretionary effort to meet the expectations of stakeholders (patients) gives them a greater feeling of satisfaction than any monetary benefit (Ryan and Deci, 2000). Thirdly, satisfaction with the work they engage in and the service they render naturally makes them more loyal to the work and makes them contribute more and more. That ultimately results in an increased level of engagement, which contributes to greater value creation and competitive advantage for the particular health-care organization. Statistical analyses Age of sample respondents was significantly associated with the level of employee engagement. Also, the study reports the mapped levels of perception, engagement and satisfaction among the sample respondents. Health-care employees are naturally inclined toward work engagement, having the zeal to perceive quality experience to meet the expectations of stakeholders and gaining a sense of satisfaction from their contribution. Based on the mean scores of the values assigned to the variables of engagement, perception and satisfaction, the levels were categorized as high, medium and low. 22, 25.9 and 29.9% of the respondents are highly associated with work, and 48.2, 43.5 and 40% of the respondents are moderately associated with work physically, cognitively and emotionally, respectively.
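The high/medium/low categorization based on mean scores described above can be sketched as a simple binning rule. The cut-off values below are illustrative assumptions, since the paper does not state its exact thresholds.

```python
def level_of(score, low_cut=2.5, high_cut=3.5):
    """Bin a five-point Likert mean score into low / medium / high.
    The cut-offs here are illustrative, not the study's own thresholds."""
    if score >= high_cut:
        return "high"
    if score >= low_cut:
        return "medium"
    return "low"

# Example: a mean score of 3.84 (physical association with work) would
# fall in the "high" band under these illustrative cut-offs.
band = level_of(3.84)
```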
On average, 27.5% of the respondents are highly engaged in work. 23.3, 14.6, 17.7, 8.2 and 26.8% of the respondents have a high level of perception, and 48.9, 52.9, 54.1, 63.8 and 56.7% of the respondents have a moderate level of perception, over being tangible, being assured to patients, being responsive, being reliable and being empathized to patients, respectively. On average, 18% of the respondents have a high level of perception of the quality health-care service they render in the sample organizations; 37.6% of the respondents are highly satisfied and 31.5% are moderately satisfied in engaged work. On average, 22.1% of the respondents have a high level of satisfaction with what they contribute through engaged work in the sample organizations. Moreover, the respondents, medical and non-medical employees, play different roles and carry out different functions with different capabilities. Physicians and nurses play roles such as care giver, health advocate, communicator, trainer and collaborator. Administrative staff (ward assistants, secretaries and executives) perform many roles supportive to medication and treatment. The health-care environment encompasses different role players meeting the expectations of different stakeholders and facing diverse patient cases. The roles the sample respondents play in work significantly predict the level of engagement. Finally, the nature of work, the way of doing work, routine practices, and roles and responsibilities are the factors that differentiate the employees as administrative staff, physicians and nurses. This naturally makes them significantly different from each other in terms of engagement, perception and satisfaction, and the study confirms these differences in levels between them.
Multivariate analysis results in a significant difference between administrative staff, physicians and nurses. Implications Considering the results, the association of age and experience of respondents with the levels of engagement, perception and satisfaction, and the influence of roles and responsibilities on the levels, it is reasonable to conclude that age, experience and roles bring changes in the levels, directly or indirectly. When the age and experience of respondents increase, the level of perception of the work environment changes considerably, which in turn directly impacts the levels of engagement and satisfaction. When respondents play well-defined roles in a highly complex work environment like health care, the nature of the roles brings a significant change in the level of perception, which in turn enhances the discretionary efforts of employees to the expected level. The study reports that on average 28% of sample respondents have a high level of engagement, 18% a high level of perception and 22% a high level of satisfaction; the rest of the respondents fall under moderate and low levels of engagement, perception and satisfaction. This contradicts our assumption that a large share of the sample respondents would map to the high-level group. This unexpected and undesirable state of the levels may have resulted from various understandable factors, such as health-care employees playing multiple roles covering multiple functions, with those functions encompassing many responsibilities. For example, a nurse playing the role of a communicator communicates details about the patient's disease to physicians in the form of case sheets. For this, he/she prepares case sheets for the diagnosis and consultation of patients about their health improvement; this is then communicated to the patients or their relatives using prescriptions. Finally, the patient's full diagnosis case reports are prepared in digital format and submitted to the management.
A nurse also plays the roles of health advocate, mediator, care taker, diagnosis analyst and so forth. A physician plays the roles of consultant, health advocate, care giver, administrator, trainer, manager, counselor, diagnosis analyst, coordinator of medication and treatment activities, organizer and so forth. Administrative staff play the role of assistants to physicians and nurses; a clerk assists in patient care, playing an administrative execution role supporting medication and treatment in outpatient and inpatient wards. Apart from these, various other factors affect the levels of engagement, perception and satisfaction. They include an inflexible work schedule, employees working in shifts without replacement, a shortage of experienced nurses in inpatient wards and care units, presenteeism and fatigue. Moreover, the absence of work sharing (especially while taking on many responsibilities), overtime work and consulting many patients with different cases at the same time were other important factors. Further, lack of timely support from colleagues and superiors, the leadership style of superiors and improper supply of medication and treatment equipment are other notable factors. This state of the levels can be improved to a desirable level by giving HR strategic importance to certain work practices in the health-care work environment. This can be done by defining roles and responsibilities and charting out daily routines for each role. The key functional areas and key performance areas (KPA) for each role played by physicians, nurses and administrative staff can be assigned. Critical attributes, including personal qualities and characteristics such as sociability, integrity, dependability and empathy, can be identified; this is valuable for physicians and nurses in carrying out KPA efficiently.
Methods such as conducting role analysis (that is, collecting the expectations of role incumbents) and self-assessment practice in the form of appraisal can be very effective. The concept of a development-oriented approach to appraisal (Udai and Rao, 2003) states that performance-based reward and recognition, performance standard fixation, an individualistic performance approach and group performance rewards can be used as tools of motivation. Employees should be equipped to have good control over their roles and carry out their functions efficiently; this can be achieved by clearly explaining to them the expected quality standards, practices, expectations and outcomes. Providing various training programs, such as emotion-handling sessions, attitude-building sessions and tactics for effectively handling patients, would help them update themselves. The work environment should allow employees to know and realize their potential so that their level of engagement remains good. Providing employees the job resources to do key functions, work autonomy in roles, task identity, feedback from superiors, support from colleagues and a flexible work schedule are other motivating factors, as is creating a work culture that boosts personal resources, the characteristics of the individual employee such as optimism, resilience, self-efficacy and self-esteem. Although health-care employees are naturally capable and competent and able to carry out medication and treatment functions efficiently, a number of work environment factors greatly affect the perception of engaged employees. These include a dynamic work environment with great mental pressure on employees to handle diverse patient cases and satisfy different stakeholder expectations without well-established facilities. The work environment in health care, according to Hinks et al. (2003), must aim at well-established and updated facilities.
Infrastructure facilities such as estate and property, indoor air, structure and fabric, water supply and rest rooms should be well designed. Hospital labs, diagnostic centers, ambulance services, intensive care units, electricity and telecommunication systems and computer-aided systems should be intact. Integrated communication systems, room adjacency planning, patient pathways, process mapping and the positioning of clinical services should be carefully worked out. Quality service aspects such as demand and capacity planning, patient accommodation planning, and inpatient and outpatient output specifications should be well planned. Moreover, support services such as catering, cleaning, waste management, security and laundry should be well taken care of. Beyond a supportive work environment and well-planned facilities for service delivery, the practice of moral and ethical medical treatment of patients and organizational justice greatly affect employees' morale, perception and attitude toward their association with work and the organization. Private health-care organizations in Tamil Nadu have started to recognize health-care workplace dynamism, role conflicts, ambiguity and role erosion among employees due to the following reasons: lack of role clarity, functional deficiency and competency requirements, both technical and managerial. While recruiting employees for specific roles, priority should have been given to both behavioral and conceptual potential. A well-designed development-oriented appraisal system should have been in place, along with a proper direct feedback mechanism from patients and their relatives. The importance of planning facilities and infrastructure to create value among both national and international medical tourists should have been considered; this would have paved the way for greater competitive advantage as well.
As far as public health-care organizations in Tamil Nadu are concerned, they still have a long way to go on various attributes. These include workforce planning, capacity planning and an employee-centric workplace fostering the health and safety of both patients and employees, as well as a proper development-oriented approach to employee retention, performance-based rewards and recognition, equipping the workplace with updated facilities and creating a work culture emphasizing quality improvement. Conclusion Considering health-care quality service from the employees' perspective will be one of the parameters for drawing strategies for competitive advantage and creating value among stakeholders, notably patients. Mapping and reporting the levels of employee engagement, employee perception of quality service and employee satisfaction in engaged work are key quality measures. They indicate which health-care organizations set the benchmark, the extent to which organizations make employees associated with work, its environment and the quality concern, and the extent to which they use the discretionary efforts of employees in every role; by recognizing these efforts, organizations give employees a sense of contribution and belongingness. Directions for further research For each role of health-care employees, the levels of engagement and employee perception of quality service can be studied and mapped for different health-care functional areas. Competency requirements and critical attribute requirements for efficient role handling can be studied and explored.
MicroRNA-365 Inhibits Cell Growth and Promotes Apoptosis in Melanoma by Targeting BCL2 and Cyclin D1 (CCND1) Background MicroRNA-365 (miR-365) is involved in the development of a variety of cancers. However, it remains largely unknown if and how miR-365 plays a role in melanoma development. Material/Methods In this study, we overexpressed miR-365 in melanoma cell lines A375 and A2058 via transfection of miR-365 mimic oligos. We then investigated alterations in a series of cancer-related phenotypes, including cell viability, cell cycle, apoptosis, colony formation, and migration and invasion capacities. We also validated cyclin D1 (CCND1) and BCL2 apoptosis regulator (BCL2) as direct target genes of miR-365 by luciferase reporter assay and investigated their roles in miR-365-caused phenotypic changes. To get a more general view of miR-365's biological functions, candidate target genes of miR-365 were retrieved by searching online databases and were analyzed by Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses for potential biological functions. We then analyzed The Cancer Genome Atlas (TCGA) Skin Cutaneous Melanoma (SKCM) dataset for correlation between miR-365 level and clinicopathological features of patients, and for survival of patients with high and low miR-365 levels. Results We found that miR-365 was downregulated in melanoma cells. Overexpression of miR-365 remarkably suppressed cell proliferation, induced cell cycle arrest and apoptosis, and compromised the migration and invasion capacities of the A375 and A2058 cell lines. We also found that the phenotypic alterations by miR-365 were partially due to downregulation of the CCND1 and BCL2 oncogenes. The bioinformatics analysis revealed that predicted targets of miR-365 are widely involved in transcriptional regulation and cancer-related signaling pathways.
However, analysis of the SKCM dataset failed to find differences in miR-365 level among melanoma patients at different clinicopathologic stages. Kaplan-Meier analysis also failed to discover significant differences in overall survival and disease-free survival between patients with high and low miR-365 levels. Conclusions Our findings suggest that miR-365 might be an important novel regulator of melanoma formation and development; however, its in vivo roles in melanoma development need further investigation. Background Melanoma, a skin malignancy notorious for its aggressiveness and high metastatic potential, has shown a rapid increase in incidence rate and accounts for over 65% of skin cancer-related deaths [1,2]. Most localized melanomas are curable with surgical resection, while the prognosis of patients with distant metastases is usually poor, with a 10-year survival rate of only 16%, mainly due to innate or acquired resistance to therapeutic regimens [3]. This situation highlights the significance of further understanding the underlying molecular mechanisms that contribute to melanoma development and metastasis. MicroRNAs (miRNAs) are small non-coding RNAs that negatively regulate gene expression post-transcriptionally by pairing to the 3'-untranslated region (3'-UTR) of target mRNAs [4,5]. MiRNAs are involved in regulating a series of cellular functions, including development, differentiation, apoptosis, and proliferation. Deregulated miRNA expression has been shown to contribute to the development of cancers, including breast cancer [6], digestive tract cancers [7], and melanoma [5]. MiR-365 is located on chromosome 16p13.12, a region that has been implicated in multiple oncogenic processes. The expression pattern and biological roles of miR-365 are cancer type-dependent.
MiR-365 is highly expressed in cutaneous squamous cell carcinoma [8,9] and breast cancer [10], while it is downregulated in colon cancer [11], lung cancer [12], and melanoma [13]. MiR-365 may display either a pro-proliferative or a pro-apoptotic role in a specific cancer type. The roles of miR-365 in melanoma development are very poorly understood. Bai et al. have shown that miR-365 was downregulated in melanoma tissues and that ectopic expression of miR-365 suppressed cell cycle progression and promoted apoptosis by targeting NRP1, an essential regulator of cell migration and invasion, suggesting that miR-365 exerts a tumor-suppressive effect in melanoma [13]. Cyclin D1 (CCND1) is a member of the cyclins, which are essential in regulating the cell cycle [14]. CCND1 is a well-established human oncogene [14] that is commonly overexpressed in different types of cancers, such as breast cancer [15], lung cancer [16], and melanoma [17]. CCND1 overexpression can result in a number of potentially oncogenic effects and has been associated with poor patient outcome [18]. BCL2 apoptosis regulator (BCL2) belongs to the BCL2 family of proteins, which are important regulators of apoptosis [19]. Anti-apoptotic BCL2 family members, including BCL2, BCLXL, MCL1, and BCLW, inhibit apoptosis by sequestering the activators from interacting with BAX and BAK [20]. Overexpression of anti-apoptotic BCL2 has been observed in many types of cancers, such as follicular lymphoma [21], breast cancer [22], prostate cancer [23], and melanoma [24]. Upregulated expression of BCL2 protein promotes tumorigenesis and tumor progression and is associated with poor patient prognosis [25]. CCND1 and BCL2 have been reported as target genes of miR-365 in colon cancer [11]. Thus, in this study we investigated the functional relationship between miR-365 and these 2 genes.
In this study, to further explore the roles of miR-365 in melanoma development and reveal the underlying molecular mechanisms, we investigated the effects of miR-365 overexpression on cell cycle, apoptosis, and cell migration and invasion in 2 melanoma cell lines, A375 and A2058. We also investigated the roles of CCND1 and BCL2 in the cellular effects of miR-365. To obtain a comprehensive understanding of the potential biological functions of miR-365, Gene Ontology (GO) analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis of predicted targets of miR-365 were carried out. In addition, analysis of The Cancer Genome Atlas (TCGA) datasets for melanoma patients was performed to investigate the association between miR-365 level and the clinicopathologic features and outcomes of melanoma patients. Material and Methods Cell culture The NHEM (Normal Human Epidermal Melanocytes) cell line was obtained from Miao Tong Biological Technology (Shanghai, China) and cultured in M2 medium. The human melanoma cell lines A375, A2058, SK-MEL-2, and SK-MEL-28 were obtained from the China Center for Type Culture Collection (Wuhan, China). These cell lines were cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum. All cell lines were incubated at 37°C with 5% CO2 in a humidified atmosphere. Transfection of miR-365 mimics To transiently overexpress miR-365, A375 and A2058 cells were transfected with miR-365 mimic oligos (Life Technologies, USA) at a final concentration of 100 nM by using Lipofectamine 2000 (Thermo Fisher Scientific, USA) according to the manufacturer's instructions. The control cells were transfected with a nontargeting control oligo (NC oligo for short, Life Technologies, USA) at the same concentration. Quantitative real-time PCR Total RNA, including miRNA, was extracted from cells using the miRNeasy mini kit (Qiagen, USA) according to the manufacturer's instructions.
For measuring BCL2 and CCND1 mRNA levels, 1.5 μg of total RNA was reverse transcribed into cDNA using the Omniscript RT kit (Qiagen, USA) according to the manufacturer's instructions. qRT-PCR was then performed to detect the levels of BCL2 and CCND1 as described in the literature [26]. Briefly, 2 μL of cDNA (diluted 1: 10) was used as template for qRT-PCR using the iQ SYBR Green Supermix (Bio-Rad, USA) on a CFX Real-Time PCR Detection System (Bio-Rad, USA). The amplification protocol was as follows: 95°C for 3 min, followed by 35 cycles of 95°C for 15 sec and 60°C for 30 sec. GAPDH was used as an internal control. Primer sequences are as follows: GAPDH forward: 5'-AAGCCTGCCGGTGACTAAC-3' and reverse: 5'-GGCGCCCAATACGACCAAA-3'; BCL-2 forward: 5'-ATGTGTGTGGAGAGCGTCAA-3' and reverse: 5'-GGGCCGTACAGTTCCACAAA-3'; CCND1 forward: 5'-CAGATCATCCGCAAACACGC-3' and reverse: 5'-AAGTTGTTGGGGCTCCTCAG-3'. All reactions were performed in triplicate. Fold changes in mRNA expression levels were calculated using the 2^(-ΔΔCT) method [27]. For measuring miR-365 levels, a miScript II RT kit (Qiagen, USA) was used to convert miRNAs to cDNA and to add a universal tag to each cDNA product. The level of miR-365 was detected by performing a qRT-PCR assay using the miScript SYBR Green PCR kit (Qiagen, USA) according to the following protocol: 95°C for 15 min, followed by 40 cycles of 94°C for 15 sec, 55°C for 30 sec, and 70°C for 30 sec [28]. The qRT-PCR primer for miR-365 was 5'-GCTAATGCCCCTAAAAATCC-3'. The U6 primer was 5'-GAACGATACAGAGAAGATTAGCA-3'. All reactions were performed in triplicate. The expression level of miR-365 was normalized to that of U6 snRNA. Fold changes in miR-365 levels were calculated using the 2^(-ΔΔCT) method [27]. MTT assay Cell viability was measured by using a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay (Sigma, USA) as described in a published study [29]. Briefly, cells were seeded in 96-well plates at 5000 cells/well in triplicate.
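The relative-quantification step described above, the 2^(-ΔΔCT) method of [27], can be sketched as follows; the Ct values in the example are toy numbers, not measured data.

```python
def fold_change(ct_target_sample, ct_ref_sample,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method:
    dCt = Ct(target) - Ct(reference gene, e.g. GAPDH or U6),
    ddCt = dCt(sample) - dCt(control),
    fold change = 2^(-ddCt)."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Toy numbers: the target crosses threshold two cycles earlier (relative
# to the reference) in the sample than in the control, i.e. 4-fold up.
fc = fold_change(24.0, 18.0, 26.0, 18.0)
```

A fold change above 1 indicates upregulation in the sample relative to the control; below 1, downregulation.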
To measure cell viability, we added 10 µL of MTT (5 mg/mL) to each well and incubated the plates for 4 hours at 37°C. Then, 100 μL of solubilization buffer was added to each well without removing the medium. The plates were incubated overnight at 37°C and the absorbance at 595 nm was measured using a spectrophotometer. Results shown are the mean ± standard error (SEM) of 3 independent experiments. Cell cycle analysis Cells were harvested by trypsinization and fixed in 70% ethanol at -20°C at least overnight. The cells were then treated with 50 μL of 100 μg/mL boiled RNase A (Sigma, USA) for 30 min, followed by staining with 200 µL of propidium iodide (from a 50 µg/mL stock solution). Cell cycle data were acquired using a flow cytometer (LSR II, BD Biosciences, USA) and analyzed using the ModFit LT software (Verity Software House, USA). All experiments were performed 3 times independently. Apoptosis by flow cytometry Cell apoptosis was detected using Annexin V-FITC apoptosis detection reagent (BD, USA) according to the manufacturer's instructions. Briefly, cells were washed twice with cell staining buffer and resuspended in binding buffer at 1×10^5 cells/100 μL. To 100 μL of cell suspension, 5 μL of Annexin V-FITC was added and cells were incubated at room temperature for 15 min, followed by the addition of another 400 μL of binding buffer. Before measuring samples, 8 μL of Hoechst 33258 was added to each sample. Cell apoptosis was then detected using a flow cytometer (LSR II, BD Biosciences, USA) and data were analyzed using the FlowJo software (FlowJo LLC, USA). Western blotting Western blot analysis for BCL2 and CCND1 levels was performed as described by Li et al. [30], with minor modifications. Briefly, cells were lysed, and the protein concentration of total cell lysates was measured using the BCA assay (Beyotime, China).
About 25 μg of whole cell lysate was subjected to electrophoresis and then transferred to an Immobilon-P membrane (Merck Millipore, USA). After blocking with 5% non-fat milk for 1 hour at room temperature, the membranes were incubated with rabbit anti-human BCL2 or rabbit anti-human CCND1 (both 1: 1000, Cell Signaling, USA) and mouse anti-human β-actin (1: 2000, Abcam, USA) overnight at 4°C. After incubation with the corresponding secondary antibodies for 1 hour, the membrane was washed 3 times with TBST and then incubated with BeyoECL Plus chemiluminescence solution (Beyotime, China). The membrane was then imaged using a ChemiDoc XRS imaging system and analyzed using the QuantityOne software (Bio-Rad, USA). Colony formation assay For the colony formation assay, cells were seeded at 300 cells/well on 6-well plates in triplicate and maintained in complete culture medium for 12 days until obvious colonies were formed. Colonies were then fixed with 70% ethanol for 5 min and stained with Coomassie blue dye for 5 min. A cluster of more than 50 cells was considered a colony and counted. Transwell migration and invasion assay Cells were transfected with miR-365 mimics or control oligos. Cells were then resuspended in 200 µL of serum-free medium and seeded in the upper part of Transwell inserts (8 μm pore membranes) in a 24-well plate, with DMEM medium supplemented with 20% FBS placed in the lower compartment. For the Matrigel invasion assay, 50 µL of Matrigel at 300 µg/mL was added to a 24-well Transwell insert and solidified in a 37°C incubator for 1 hour to form a thin gel layer before cells were loaded. After incubation for 24 hours, cells that passed through the membrane were fixed on the membrane using ethanol and then stained with crystal violet. The average number of cells that passed through the membrane in 15 representative fields (5 replicates per individual experiment) was counted under a phase contrast microscope.
Luciferase reporter assay
The regions of the 3'UTRs of the human CCND1 and BCL2 mRNAs containing the miR-365 target sites were cloned in between the XhoI (5') and NotI (3') sites in a psi-CHECK2 vector (Promega, USA). A375 cells were co-transfected with 1 μg of p3'UTR-CCND1 or p3'UTR-BCL2 or psi-CHECK2 vector and 100 nM miR-365 mimic oligos using Lipofectamine 2000 (Invitrogen, USA). Cells were harvested 24 hours after transfection and assayed using a Dual Luciferase Reporter Assay (Promega, USA) according to the manufacturer's instructions. The assay was performed in triplicate. The expression level of the CCND1 and BCL2 3'UTR reporters was defined as Renilla luciferase activity normalized to firefly luciferase activity for each well.

Bioinformatics analysis
The target genes of miR-365 were predicted using the miRanda, DIANA-microT and TargetScan databases, and the predicted targets that showed up in at least 2 databases were selected for further analysis. To further investigate the functional roles of miR-365, target genes were subjected to Gene Ontology (GO) analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis using DAVID Bioinformatics Resources [31].

Analysis of the TCGA melanoma dataset
The Skin Cutaneous Melanoma (SKCM) dataset for 470 melanoma samples was obtained from The Cancer Genome Atlas (TCGA). Level 3 miRNA-Seq data generated from the miSeq Illumina platform and clinicopathological features of patients were retrieved from the TCGA Data Portal (released on 04/08/16). All samples had clinical and follow-up information. The dataset contained 448 patients with miRNA-Seq data; therefore, only patients with miRNA data were used in this analysis. As the data were obtained from TCGA, further approval by an ethics committee was not required. This study meets the publication guidelines provided by TCGA. Expression of miR-365 in patients at different disease stages was compared using one-way ANOVA.
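The stage-wise comparison above relies on the one-way ANOVA F statistic. As a minimal sketch (the function name and example values are illustrative, not taken from the study), it can be computed from per-stage expression values as follows:

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of groups of measurements
    (e.g. miR-365 expression values grouped by disease stage)."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total sample count
    grand = mean(v for g in groups for v in g)
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical expression values for three stages
print(one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [5.0, 6.0, 7.0]]))  # ≈ 13.0
```

A large F indicates that between-stage variation dominates within-stage variation; in practice the statistic is referred to the F distribution with (k-1, n-k) degrees of freedom to obtain a P value, as Prism does internally.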
Overall survival (OS) and disease-free survival (DFS) were assessed using the Kaplan-Meier method, and curves were compared by the univariate (log-rank) test.

Statistical analysis
Statistical analysis was performed using GraphPad Prism 6 (GraphPad Software Inc., USA). The difference between 2 groups was analyzed by Student's t-test; the means of more than 2 groups were compared using one-way ANOVA followed by the Tukey-Kramer method. Each experiment was repeated at least 3 times, and data were presented as mean ± SEM. A P value ≤0.05 was considered to be significant.

MiR-365 inhibited cell proliferation and induced apoptosis in melanoma
Initially, we found that miR-365 was downregulated in 4 melanoma cell lines, A375, A2058, SK-MEL-2, and SK-MEL-28, compared with a melanocyte cell line, NHEM (Figure 1A). We then investigated the biological functions of miR-365 in melanoma cell lines. MiR-365 was overexpressed in A375 and A2058 melanoma cell lines by transient transfection of a miR-365 mimic oligo, and the overexpression of miR-365 was validated by qRT-PCR (Figure 1B). The results of MTT assays revealed that ectopic expression of miR-365 significantly reduced cell viability in both A375 and A2058 cells (Figure 1C). We then explored the potential mechanisms for the decreased cell viability in miR-365-overexpressing cells. The colony formation assay indicated that melanoma cells overexpressing miR-365 had a lower proliferative capacity than the control cells (Figure 1D). The cell cycle analysis showed a significant increase in the population at the G0/G1 phase and a decrease in the population at S phase (Figure 1E), suggesting that a G1/S blockage was caused by miR-365 overexpression. In addition, we also found that miR-365 led to a substantial increase in cell apoptosis (Figure 1F).
Together, these results indicated that miR-365 played a tumor suppressive role in melanoma, likely through inhibiting cell proliferation, blocking cell cycle progression and enhancing cell apoptosis.

miR-365 suppressed melanoma migration and invasion
Bai et al. reported that miR-365 levels were inversely correlated with melanoma metastasis [13]. We therefore investigated the effects of miR-365 overexpression on the migration and invasion abilities of A375 and A2058 melanoma cells. The Transwell migration assay and Matrigel invasion assay indicated that overexpression of miR-365 significantly suppressed melanoma cell migration (Figure 2A) and invasion (Figure 2B) capacities, respectively.

miR-365 targeted BCL2 and CCND1
Both BCL2 and CCND1 were reported as direct targets of miR-365 in colon cancer [11]. However, these regulatory associations have not been established in melanoma, and the roles of BCL2 and CCND1 in miR-365-mediated phenotypes in melanoma have not been studied. Thus, we examined the effects of miR-365 overexpression on the endogenous mRNA and protein levels of BCL2 and CCND1 in A375 and A2058 melanoma cells. As expected, the mRNA (Figure 3A) and protein (Figure 3B) levels of both genes were reduced upon miR-365 overexpression. To confirm that BCL2 and CCND1 are direct targets of miR-365 in melanoma, we constructed luciferase reporter plasmids containing parts of the BCL2 and CCND1 3'UTRs with either wild-type or deleted miR-365 target sites (Figure 3C). As shown by the results of luciferase reporter assays, co-transfection of miR-365 suppressed the luciferase activity of the reporters containing wild-type miR-365 target sites of BCL2 and CCND1 but not that of the reporters containing deleted miR-365 target sites in A375 cells (Figure 3D), indicating that both BCL2 and CCND1 were directly regulated by miR-365 in melanoma cells.
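The reporter readout described in Methods is the per-well Renilla/firefly ratio, expressed relative to the empty psi-CHECK2 control. A minimal sketch of that normalization follows; the readings and helper names are illustrative, not the study's data:

```python
from statistics import mean

def normalized_activity(renilla, firefly):
    """Mean per-well Renilla/firefly luciferase ratio across replicate wells."""
    return mean(r / f for r, f in zip(renilla, firefly))

def relative_activity(reporter, control):
    """Reporter activity relative to the empty-vector control.

    Each argument is a pair (renilla readings, firefly readings).
    """
    return normalized_activity(*reporter) / normalized_activity(*control)

# Illustrative triplicate luminescence readings (arbitrary units)
control = ([900.0, 950.0, 1000.0], [1000.0, 1000.0, 1000.0])
wt_3utr = ([300.0, 310.0, 290.0], [1000.0, 1000.0, 1000.0])
print(relative_activity(wt_3utr, control))  # < 1 indicates repression of the 3'UTR reporter
```

Dividing Renilla by firefly in each well controls for transfection efficiency, so only sequence-specific repression of the 3'UTR reporter remains in the ratio.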
The effects of miR-365 were partially mediated through targeting CCND1 and BCL2
To investigate the role of CCND1 in miR-365-induced cell cycle arrest, we co-transfected A375 and A2058 cells with miR-365 mimic oligos and a construct encoding CCND1 or an empty vector. We found that restoration of CCND1 levels eliminated the effects of miR-365 overexpression on cell cycle arrest in both cell lines (Figure 4A). To investigate the role of BCL2 in miR-365-induced cell apoptosis, we co-transfected A375 and A2058 cells with miR-365 mimic oligos and a construct encoding BCL2 or an empty vector. The results revealed that co-transfection of miR-365 mimics with the BCL2-expressing plasmid led to a significant reduction in the apoptotic rate in both cell lines (Figure 4B). These data suggested that the anti-tumor effects of miR-365 were at least partially mediated by inhibition of its target genes, BCL2 and CCND1.

GO analysis and KEGG pathway enrichment analysis
To gain further insight into the cellular functions of miR-365, the predicted target genes of miR-365 were retrieved using the miRanda, DIANA-microT and TargetScan databases (Figure 5) and subjected to GO analysis and KEGG pathway enrichment analysis. The GO function enrichment analysis explored the functional roles of miR-365 target genes in terms of biological processes, cellular components and molecular functions. The top enriched biological processes were associated with "regulation of gene transcription" (Table 1); the top enriched cellular components included "nucleus", "nucleoplasm", and "neuron projection" (Table 2); the top enriched molecular functions were involved in "sequence-specific DNA binding", "transcriptional activator activity" and "protein binding" (Table 3). KEGG pathway enrichment analysis was also performed to identify significant pathways enriched in miR-365 target genes.
KEGG analysis indicated that these genes were mainly involved in cancer-related pathways, the Rap1 signaling pathway, and the phosphatidylinositol signaling system (Table 4).

miR-365 was not correlated with clinicopathological stage and survival of melanoma patients
We then analyzed the SKCM dataset in TCGA for correlations between miR-365 level and the clinicopathological stage and survival of melanoma patients. The results revealed that the miR-365 level was not obviously different among patients at different T stages (P>0.05, Figure 6A), N stages (P>0.05, Figure 6B), M stages (P>0.05, Figure 6C) or overall clinicopathological stages (P>0.05, Figure 6D). Furthermore, Kaplan-Meier analysis failed to discover significant differences in OS and DFS between patients with high and low miR-365 levels (P>0.05, Figure 6E).

Discussion
Investigation into the molecular mechanisms responsible for melanoma carcinogenesis and progression, especially the exploration of dysregulated miRNAs in melanoma, represents a popular and promising field of study and is essential for developing novel therapeutics. MiR-365 is located in the chromosome region 16p13.12. The expression pattern of miR-365 is cancer type-dependent. MiR-365 was highly expressed in cutaneous squamous cell carcinoma [8,9], breast cancer [32], and invasive pancreatic ductal adenocarcinoma [10]. In contrast, miR-365 was downregulated in colon cancer [11] and lung cancer [12]. The function of miR-365 is complex; it may act as a tumor suppressor or an oncogene depending on the cancer type.
For example, miR-365 was reported to suppress cell cycle progression and promote apoptosis of colon cancer cells, probably by targeting Cyclin D1 and Bcl-2 [11,33]; miR-365 expression levels were reduced in lung cancer tissues, and ectopic miR-365 expression could inhibit cell proliferation of lung cancer cell lines by targeting thyroid transcription factor 1 (TTF-1), which is an essential factor in lung development and a prognostic marker for non-small cell lung cancer. However, miR-365 has rarely been studied in melanoma. In the only published study of miR-365 in melanoma, Bai et al. found that the expression of miR-365 was significantly downregulated compared with that in matched normal tissue [13] and that overexpression of miR-365 inhibited growth, invasion and metastasis of malignant melanoma through targeting neuropilin 1 (NRP1). Therefore, we carried out the present study to further explore the roles of miR-365 in melanoma. In this study, we initially found that miR-365 expression was lower in melanoma cells compared with non-transformed melanocytes (Figure 1A). The level of miR-365 was not related to BRAF mutation status, as it was significantly decreased both in the BRAF-mutant cell lines A375, A2058, and SK-MEL-28 and in the BRAF wild-type cell line SK-MEL-2. Next, we found that overexpression of miR-365 led to a significant decrease in cell viability (Figure 1C). Investigation into the underlying mechanism showed that ectopic expression of miR-365 remarkably suppressed clonogenic capacity and induced cell cycle arrest and apoptosis in melanoma cell lines (Figure 1D-1F). In addition, miR-365 significantly compromised the migration and invasion capacities of melanoma cells in vitro (Figure 2A, 2B). These results were consistent with the study by Bai et al. in melanoma and suggested that miR-365 functioned as a tumor suppressor in melanoma. Regarding the target genes of miR-365, a variety of genes have been reported, including CCND1, BCL2, NRP1, and TTF-1.
Among these genes, CCND1 is considered an important proliferation-promoting molecule, and BCL2 is a fundamental anti-apoptotic gene with a recognized role in cancer development. However, this regulatory association has not been reported in melanoma. We found that overexpression of miR-365 downregulated the mRNA and protein levels of these 2 genes in the A375 and A2058 cell lines (Figure 3A, 3B). To confirm that CCND1 and BCL2 are direct targets of miR-365, we performed a luciferase reporter assay, showing that CCND1 and BCL2 were directly targeted by miR-365 in melanoma cells (Figure 3D). Finally, to explore the roles of CCND1 and BCL2 in miR-365-mediated cell cycle arrest and apoptosis, we co-transfected melanoma cells with miR-365 mimic oligos and a construct encoding either CCND1 or BCL2. The results revealed that restoration of CCND1 or BCL2 attenuated the effects of miR-365 overexpression on cell cycle and apoptosis, respectively (Figure 4A, 4B). Our result was consistent with the study by Nie et al., which showed that BCL2 and CCND1 are direct targets of miR-365 in colon cancer and that knockdown of the endogenous CCND1 or BCL2 was able to mimic the effect of miR-365 [11]. Together, these results suggested that miR-365 is probably a novel tumor suppressor in melanoma acting through targeting CCND1 and BCL2, as well as other potential targets. We should note that the regulatory network of miRNAs is very complicated. One gene is regulated by multiple miRNAs, and a specific miRNA can regulate multiple genes. Therefore, target genes other than CCND1 and BCL2 may also participate in the functions of miR-365. This notion highlights the need for further studies to reveal the entire "targetome" of miR-365 in the development of melanoma.
To gain further insight into the cellular functions of miR-365, we then performed GO functional enrichment analysis and KEGG pathway enrichment analysis for the predicted target genes of miR-365 retrieved from the miRanda, DIANA-microT and TargetScan databases. GO function enrichment analysis revealed that the most prominent functions of miR-365 target genes were involved in the regulation of gene transcription. KEGG pathway enrichment analysis indicated that miR-365 target genes were mainly involved in cancer-related pathways. Thus, analysis of the "targetome" of miR-365 further revealed the association of miR-365 with cancer, and dysregulated gene transcription may contribute to the function of miR-365. We then analyzed the TCGA SKCM dataset for correlations between miR-365 levels and the clinicopathological stage or prognosis of melanoma patients. The results revealed that miR-365 levels were not obviously different among patients with different T stages, N stages, M stages, or overall clinicopathological stages (all P>0.05, Figure 6A-6D). Furthermore, Kaplan-Meier analysis failed to discover a significant difference in OS and DFS between patients with high and low miR-365 levels (both P>0.05, Figure 6E). The prognosis of patients can be influenced by many factors, such as disease stage and treatment. MiR-365 is just one of many such factors; therefore, its effect on patient outcome may be masked or counteracted by the effects of other factors, even though miR-365 showed obvious biological effects in melanoma cell lines. The discrepancy between the in vitro and in vivo data again highlights the complexity of the miRNA regulatory network, and further studies are warranted to completely understand the roles of miR-365 in melanoma development.
Conclusions
In sum, our data suggested that miR-365 was downregulated in melanoma cells and played a tumor suppressive role in melanoma development through regulating multiple target genes that are essential to key cellular functions, such as cell proliferation, apoptosis, migration, and invasion. However, the TCGA data failed to reveal an association of miR-365 with melanoma progression and patient outcome. Further studies in animal models and human samples are required for a full understanding of the roles of miR-365 in melanoma development.
Optimization of simulated cranial, thorax, and abdominal examination in paediatric digital radiography

This study aimed to obtain the optimum parameter combinations for simulated cranial, thorax, and abdominal examinations using computed radiography (CR) and direct digital radiography (DDR) systems. Optimization was performed using an in-house phantom with contrast objects on a Siemens Luminos Agile Max DDR and a Siemens Axiom Luminos TF CR. Paediatric patients were separated into four age groups: 0-1 year (group A), 1-5 years (group B), 5-10 years (group C), and 10-15 years (group D). Slab phantoms with different total thicknesses were used to simulate patients belonging to each age group for the different anatomical regions (cranial, thorax, and abdomen). Optimization was performed in three steps: first kVp, followed by mAs, and then additional filtration. All steps of the optimization were based on FOM (figure of merit) values, calculated as the ratio of the squared SDNR (signal difference to noise ratio) to the entrance surface dose, with the highest FOM representing the optimum condition. The results of this optimization were evaluated based on the highest FOM generated from each exposure. For both CR and DDR, the optimum parameters (i.e. those with the highest FOM) differed for each age group and anatomical region. Even though the X-ray units differed, the optimized parameters for the CR device were broadly similar.

Introduction
Radiological imaging has undergone rapid development in medical diagnosis and treatment. The existence of digital radiography, such as computed radiography (CR) and direct digital radiography (DDR), has reduced the use of screen-film in conventional radiography. By using an imaging plate in CR and a flat-panel detector (FPD) in DDR, the imaging chain becomes shorter, and images can be processed right after exposure. Diagnostic radiology examination provides information about the diagnosis of a disorder or disease in a patient.
This examination presents its own challenges, especially in paediatric patients. On the one hand, paediatric anatomy displays only slight contrast because of incomplete bone development. On the other hand, paediatric patients are more radiosensitive than adults because their cells divide faster [1][2][3]. Therefore, the dose delivered to paediatric patients must be kept to a minimum without reducing image quality. In this regard, it is necessary to optimize digital X-ray imaging systems using CR and DDR for paediatric patients. This is performed to ensure that the radiation produces images of adequate quality to establish the diagnosis and that the radiation dose received is as low as possible, in accordance with the ALARA (as low as reasonably achievable) principle. Optimization can also minimize the number of imaging repetition requests, thereby reducing the radiation dose and risk in paediatric patients [4]. Optimization of imaging techniques can be performed using the figure of merit (FOM) as a parameter, which evaluates the image quality and the dose together in a single objective measure.

Devices and setup configuration
This study was performed on a Siemens OPTITOP 150/40/80 HC model in Harapan Kita Maternal and Children's Hospital, Jakarta. The facility had a Siemens Luminos Agile Max unit with the Max wi-D image receptor, a caesium iodide scintillator detector (DDR device), and a Siemens Axiom Luminos TF unit with Kodak DirectView Classic CR (CR device). To represent paediatric patients, slabs of a polymethyl methacrylate (PMMA) and cork combination were used as an anatomical representation, with thickness groups selected from a previous study by Setiadi et al. (2017) and presented in Table 1. Thicknesses were varied according to age. The patient-simulating phantom slabs of varied thickness were positioned on top of the in-house phantom sitting on the image receptor.
In-house phantom
The in-house phantom (Figure 1) was designed to quantify the quality of conventional and digital planar radiographic images. The phantom is made from PMMA with a mass density of 1.18 g/cm³ and a size of 250 mm × 250 mm × 10 mm, equipped with four test modules: (1) collimation, (2) contrast linearity, (3) contrast consistency, and (4) modulation transfer function (MTF). Test modules (3) and (4) were selected for this optimization. The modules contain cylindrical objects of varied size and contrast. A 1 mm thick copper plate angulated by 3 degrees was available for MTF measurement.

Dose measurement
To obtain dose information, a Radcal® (Radcal Corporation, California, USA) 10x6-6 ionization chamber, weighing 0.05 kg with an active volume of 6 cm³, was utilized. The entrance surface air kerma (ESAK) was measured at the top of the phantom surface.

Image quality assessment
Raw images in DICOM format were acquired and assessed using ImageJ. For quantitative contrast measurement, selected ROIs were compared to calculate the signal difference to noise ratio (SDNR) using equation 1 [5][6][7]:

SDNR = |NL − NO| / √((SDL² + SDO²)/2), (1)

where NL is the background pixel value, i.e. the mean pixel value of an ROI of a certain area outside the object's image; NO is the mean pixel value of the object, i.e. the average pixel value of an ROI on the object; SDL is the standard deviation of the background ROI; and SDO is the standard deviation of a certain area in the object image. Objects in module (3) were used to quantify the variation of the image contrast (SDNR) with object size (using the coefficient of variation, CV). Calculation of the MTF was performed using the slanted-edge method on the angulated copper plate.

Three-step optimization
The in-house phantom with the thickness groups representing the anatomy of paediatric patients was imaged using a range of varied parameters, i.e.
kVp, mAs, and added filtration, one parameter in each step. For every exposure, the ESAK was measured and the resulting image was quantitatively assessed. As the primary parameter, the figure of merit (FOM) is defined as SDNR²/ESAK [5]; a greater FOM indicates a more desirable technical parameter combination for the corresponding thickness and anatomy. The first step was to find the desirable peak tube voltage (kVp), followed by finding the tube loading (mAs) with the greatest FOM in the second step, and finally choosing the desirable added filtration. In each step, all other parameters were kept constant (in particular, the value with the greatest FOM from the previous step was carried over). The MTF and CV were taken into account when two or more variations yielded statistically similar FOM values.

Results
From the measurements that were made, FOM values were obtained for each variation of kVp, mAs, and additional filters at all thicknesses in each anatomy. Different results were obtained for the two types of digital system and are presented separately.

CR device
The optimization of the CR device was performed on the Siemens Axiom Luminos TF unit with Kodak DirectView Classic CR. The results of the CR optimization, along with the ESD and SDNR values, are shown in Table 2; the corresponding results for the DDR device are shown in Table 3.

This FOM parameter is actually very useful in clinical use since, in reality, a radiodiagnostic examination is concerned not only with image quality but also with the dose received by the patient. However, the use of FOM parameters should be combined with other parameters. In this study, we used the MTF and CV as comparative parameters for ambiguous FOM values. However, not all data could be processed into MTF, especially data from phantom examinations of large thicknesses. This is because the phantoms were too thick while the angulated copper plate in the in-house phantom was too thin, so it could not be properly detected by the software. The use of CV as a comparison parameter also ultimately did not show satisfactory results.
Many CV values were inconsistent as well, which made it difficult to use them to decide between parameter combinations with similar FOM values. Because of this, the MTF and CV were not used as comparative parameters in this study. The results of this study are therefore all based on FOM values and are generalized in Table 4. With phantoms of greater thickness, a higher parameter combination is needed to produce a balanced dose and image quality. The kVp and mAs used for each phantom thickness are matched to the age of the paediatric patients. Older patients certainly need higher parameter combinations so that the exposure given is sufficient to produce an image of adequate quality. A higher parameter combination, however, will certainly increase the dose received by the patient. Therefore, radiology departments should have reference parameter combinations that can be used in clinical cases. However, these optimization results cannot yet be used as reference parameter combinations. Further research is required, especially with combinations of other parameters, to produce reference parameter combinations. The results also indicate that the CR and DDR devices require slightly different parameter combinations to achieve desirable image quality at a reasonably low dose.

Conclusions
From this study, it can be concluded that the optimum conditions for simulated cranial, thorax, and abdominal examinations using the Siemens Axiom Luminos TF with Kodak DirectView Classic CR and the Siemens Luminos Agile Max DDR vary with paediatric age. In general, slightly different parameter combinations were produced for groups B, C, and D. The use of additional filters is highly recommended for obtaining the highest FOM. The in-house phantom can be used for determining optimized image acquisition parameters.
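The quantitative chain used in this optimization (SDNR from ROI statistics, FOM = SDNR²/ESAK, then selecting the parameter set with the highest FOM) can be sketched as follows. The numbers are illustrative, and the pooled-SD denominator is an assumption about the exact SDNR formula:

```python
import math

def sdnr(bg_mean, obj_mean, bg_sd, obj_sd):
    """Signal difference to noise ratio between background and object ROIs
    (pooled-standard-deviation form)."""
    return abs(bg_mean - obj_mean) / math.sqrt((bg_sd ** 2 + obj_sd ** 2) / 2)

def fom(sdnr_value, esak):
    """Figure of merit: squared SDNR per unit entrance surface air kerma."""
    return sdnr_value ** 2 / esak

def best_setting(exposures):
    """Return the exposure record with the highest FOM."""
    return max(exposures, key=lambda e: fom(e["sdnr"], e["esak"]))

# Illustrative exposures: higher kVp lowers dose (ESAK) but also lowers contrast
exposures = [
    {"kVp": 60, "mAs": 2.0, "sdnr": 8.0, "esak": 120.0},
    {"kVp": 70, "mAs": 1.6, "sdnr": 7.0, "esak": 80.0},
    {"kVp": 81, "mAs": 1.2, "sdnr": 5.5, "esak": 55.0},
]
print(best_setting(exposures)["kVp"])  # → 70 (FOM 0.6125 beats 0.533 and 0.550)
```

Because the FOM trades contrast against dose, the middle setting wins here even though it has neither the best SDNR nor the lowest ESAK, which is exactly the balance the three-step optimization seeks.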
REVIEW OF CULTURE ISSUES AND CHALLENGES AFFECTING NURSING PRACTICE IN SAUDI ARABIA

Cultural diversity is a prominent issue in nursing practice in Saudi Arabia, which hinders the development of the profession. The presence of an expatriate nursing workforce in the region leads to issues of linguistic and cultural differences between nurses and patients. The literature reveals that conflicts between expatriate nurses and patients are caused primarily by a lack of effective communication and interaction, which, in turn, originates from differences in cultural standards. Such conflict may compromise patient care and increase the probability of medical errors. The concerned authorities must consider these issues when recruiting foreign-educated nurses. Educational and training programs must also include guidelines for cultural sensitivity so that nurses can competently deal with culturally delicate situations. Nurses must proficiently handle various aspects of cultural diversity among patients because such competencies are needed to provide effective care.

INTRODUCTION
Several challenges in modern healthcare originate from social and technological issues and highly complex bioethical problems. Effectively addressing these challenges necessitates training that equips nurses with the proficiency essential to meet patient needs and enhance the quality of care (Almutairi & Mousa, 2014). The realization of healthcare goals, however, is complicated by workplace diversity, which is a constant in every professional field. Nurses and patients always exhibit differences in behaviour and perception (Campinha-Bacote, 2011; Darnell & Hickson, 2015; Tucker, Roncoroni & Sanchez, 2015). The same situation is true for Saudi Arabia, which is confronted with several nursing-related obstacles, particularly the accumulation of a foreigner-dominated workforce (Al-Homayan et al., 2013; AlYami & Watson, 2014).
Department of Nursing, College of Applied Medical Sciences, Majmaah University, Saudi Arabia. *Corresponding author's email: jalotaibi@mu.edu.sa

Culture plays a major role in the nursing profession. In most cases, the lack of cultural sensitivity is a critical cause of misunderstanding between local patients and expatriate nurses, especially those from non-Arab countries (Felemban, O'Connor & McKenna, 2014). Nurses need to understand the cultural setting of a given region because such comprehension helps them develop strong and effective relationships with their patients. In the process, nurses can offer optimum healthcare and reduce the potential risks associated with patient-nurse conflicts (De Beer & Chipps, 2014; Tucker et al., 2015). This article discusses the history of nursing education in Saudi Arabia and evaluates the cultural facets of Saudi society and their effects on nursing practice.

Understanding the Structure and Role of the Saudi Society
Saudi society is characterized by a reverence for the extended family, with each member exhibiting a sense of responsibility and affinity for parents and other relatives. The family is assumed to represent the individuality of a person. Saudis visit family members, celebrate their achievements, provide them with support, and show them compassion and respect (Alsulaimani, 2014). The family structure can be advantageous to nurses because they can easily interact and communicate with a patient's family members and involve them in the patient's care. Parents and grandparents occupy a position of considerable honour and respect, which gives them substantial authority over their offspring's healthcare; this authority also affects expectations regarding nursing services (AlKhathami et al., 2010). Patients expect to be treated under nurses' care as they are treated at home: with respect, understanding, and compassion.
Moreover, family members expect nurses to dress properly, assume a reserved disposition when interacting with patients, and avoid gestures or acts that may damage a patient's self-respect (Al Mutair et al., 2013). In Gulf countries, a person's honour is considered parallel to that of his or her entire family. Actions such as cruelty, sexual immorality and inappropriate behaviour, and mistreatment of the elderly or the weak tarnish the honour of an individual. In family conversations, deaths and ailments are two of the most frequently discussed topics (Al-Shahri, 2002). When a family member is diagnosed with a disease, relatives are expected to draw closer to that member. They must accompany patients when they visit the hospital for medical examinations or health-related interviews. In most cases, the family members accompanying the patient are the ones who communicate with the physician or other healthcare staff and respond on behalf of the patient. Such conduct is often frowned upon and prohibited by nurses on duty (AlKhathami et al., 2010). Good care is measured by allowing elderly patients to stay close to their relatives in all phases of their illnesses. Family members demand that patients be provided with the best possible healthcare, thereby demonstrating their concern for the elderly. As with other family members, the elderly are accompanied by relatives during important stages of their treatment, such as follow-up consultations (Al Mutair et al., 2013). Saudis attach considerable significance to family bonds and appreciate their families' efforts to satisfy their affiliation-related needs. When suffering from a disease or illness, Saudis tend to rely on these bonds to help facilitate healing. Individuals who are not visited as often by their families tend to feel lonely and rejected. In comparison with family and friends, nurses are considered outsiders by patients.
Researchers have posited that the relationship between patients and nurses can improve when the nurse can interact with the patient in the same manner as families do. This level of involvement is regarded as more effective and compassionate than a purely professional approach (Felemban et al., 2014). Saudis tend to develop trust in nurses who care for their relatives compassionately and endeavour to know their families on a personal level. Nurses, for their part, are predisposed to sharing patient information with family members because such behaviour earns their trust. Nurses must satisfactorily answer all the questions of family members and refrain from withholding essential health-related information (Alshaikh et al., 2015).

Understanding Gender Segregation
Gender segregation is a common social norm practised by various government departments in Gulf countries. Males and females are prohibited from intermingling in open or public spaces, such as major hospitals and clinics. Specific zones are delineated for families, females, and males. In most social settings, career women are forbidden from working alongside or freely interacting with their male colleagues unless such interaction is crucial to a particular situation (Lamadah & Sayed, 2014). For those who choose to carve out a career as a nurse, gender segregation remains a principal challenge encountered by teams who implement nursing assignments. They are required to assemble groups that comprise people of a single gender. Gender segregation can be maintained in professions such as education, but the same cannot be said of the nursing profession because nurses typically work alongside patients, physicians, and individuals of the opposite gender. Consequently, most nurses opt to offer services to patients of their own gender (Al-Fozan, 2013). In the case of emergency nurses, however, caring for patients of the opposite gender is allowed.
The societal norm of maintaining separation between men and women discourages most Saudi females from pursuing nursing as a profession. Regardless of the prestige associated with nursing in the healthcare field and the promising career development that this discipline offers, many Saudi families are reluctant to allow their female members to pursue this vocation because it cannot guarantee gender segregation in the workplace (Al-Homayan et al., 2013). Female nurses may fall into disgrace and risk the honour that society confers on their families (Al-Mahmoud, Mullen & Spurgeon, 2012).

History of Nursing Education in Saudi Arabia

In 1958, the first Saudi nursing program was initiated when 15 Saudi males enrolled in a one-year nursing program. Over time, similar programs were offered to females, beginning in Riyadh and Jeddah. Admission to nursing programs initially depended on whether candidates completed fifth- or sixth-grade education. In 1981, this requirement was raised to mandatory ninth-grade education for eligibility to enrol in an expanded three-year nursing education program (Aldossary, While & Barriball, 2008; AlYami & Watson, 2014). In the 1970s, the Bachelor of Science in Nursing (BSN) program was launched, whereas master's-level programs were launched in 1987. In the beginning, all BSN programs were offered solely to females.

CULTURE ISSUES AND CHALLENGES AFFECTING NURSING PRACTICE | THE MALAYSIAN JOURNAL OF NURSING | VOL. 11 (4) April 2020 | 87

The year 2005 saw the launch of the first male BSN program, which reportedly had more than 300 male students registered in a five-year academic program in Riyadh (Ministry of Health, 2012). Given that the Saudi healthcare system is built in accordance with Western design, it has become necessary to westernize healthcare facilities and related educational institutions. The problems encountered in the nursing profession in the Arab region reflect the cultural bias that is reinforced by the curricula and courses.
The curricula indicate the significance of cultural diversity, but in practice, the Western culture is generally given priority (Almutairi & McCarthy, 2015). Generally, models of nursing education are considerably affected by these cultural variabilities. In the United States, for instance, baccalaureate nursing graduates are required to develop the knowledge and proficiency necessary to effectively serve a diverse population. This requirement compels them to understand the various dimensions of culture, religion, race, and gender, as well as the effect of these factors on the delivery of healthcare services (Darnell & Hickson, 2015; Reyes, Hadley & Davenport, 2013). The nursing staff must be educated on these issues because they may encounter stress from facing circumstances where culture shock occurs.

The Nursing Practice

The shortage of nurses has been a constant problem throughout the world, driving many skilled nurses to practice abroad, where they are offered more desirable working conditions and incomes. Moving from one place to another also affords nurses multicultural experiences (Lamadah & Sayed, 2014; Rooyen, Telford & Strümpher, 2010). In Saudi Arabia, the possibility of acquiring such experiences is significantly high, given that most of the nurses who serve in the country are expatriates with different cultural backgrounds. Nurses who practice in this region are confronted with a range of issues pertinent to local customs, healthcare practices, language, and communication (Almutairi & McCarthy, 2015; Hussein, 2014). Nursing is not considered one of the most desirable professions in Saudi society. Negative perceptions about the career, gender-based limitations, and an immense increase in the population have heightened the demand for expatriate nurses (Al-Fozan, 2013). In 2012, the percentage of expatriate nurses working in the country was estimated to be 63% of the total nursing population.
They hail mainly from India, Malaysia, South Africa, Canada, the Philippines, and New Zealand, as well as from Middle Eastern countries (Ministry of Health, 2012). The differences in professional, social, and cultural backgrounds are manifested in all levels of interaction: among expatriate nurses, between expatriate nurses and Saudi nurses, and between expatriate nurses and Saudi patients. Studies have confirmed that expatriate nurses face the major issue of having to satisfy the cultural needs of their patients (Al Momani & Al Korashy, 2012; Almutairi & McCarthy, 2015; Al Neami, Dimabayao & Caculitan, 2014). A suggested approach to achieving this goal is to consult professional negotiators to work with practising nurses and help them overcome the issues associated with providing healthcare services to local patients. Negotiators or translators are not required to have nursing experience or training, but they must have considerable experience living within Saudi society. They serve as agents of culture, developing connections among different subcultures. They translate and interpret cultural symbols and language styles that characterize communication, values, and lifestyles; assistance from these agents significantly facilitates the delivery of healthcare services (Almutairi & McCarthy, 2015). Time, context, and environment are the other facets of multicultural settings that are considered when examining the interaction between the Saudi public and expatriate nurses. Intense and strongly bonded relationships make the Saudi culture unique. In this cultural setting, events are evaluated with respect to the context that surrounds a relevant situation (Aldossary, 2013). As perceived by nurses, Saudis are invested in learning about a person and are always eager to form new relationships; however, this evaluation disregards the entire context and is based only on a current situation (Rooyen et al., 2010).
Although Saudis are normally enthusiastic about conversing and interacting, they are strongly averse to engaging in such behaviour during crises, severe illnesses, disasters, and looming death. They tend to remain in denial during a crisis, which contrasts with Westerners' predilection for showing interest in every matter that is relevant to a situation (Al-Shahri, 2002). This denial is a principal obstacle to the effectiveness of healthcare staff, especially nurses. Being Muslims, Saudis have strong faith in divine aid, even in the most severe situations. They firmly believe that hope and faith help patients fight illness (Al Mutair et al., 2013). Accordingly, Saudis regard the act of informing patients about their diseases as unkind. In an effort to refrain from hurting their loved ones, patients' relatives filter information about the patient's condition or completely withhold it from the afflicted family member. This act is justified as a means of saving loved ones from potential emotional harm. Saudis assume that being aware of all the details of an illness causes a patient to lose hope (Alshaikh et al., 2015). In Saudi Arabia, nurses and other healthcare staff usually communicate such information indirectly (e.g., using non-verbal methods), tactfully avoiding reporting serious findings to patients and their families (Al Mutair et al., 2013).

Cultural Diversity

The rapid population growth in Saudi Arabia has increased the need for healthcare facilities and nursing staff. Owing to the ongoing social, technological, governmental, and economic changes in the country, the nursing profession has undergone various changes in the past years, thus affecting the provision of healthcare services in the region (Almalki, Fitzgerald & Clark, 2011). Cohesion between different groups advances the development of an effective organizational structure that facilitates healthcare provision. In the Saudi nursing sector, this structure is highly complex because it depends heavily on a foreign workforce, a reliance that engenders conflicts in the healthcare field (Almutairi & McCarthy, 2015; Zakari, Al Khamis & Hamadi, 2010). The facilitation of effective healthcare services is often hindered by cultural diversity because it creates problems for nurses, who are required to communicate and interact with patients of different linguistic and cultural backgrounds. Healthcare staff should thoroughly understand cultural diversity to competently operate under different cultural contexts (AlKhathami et al., 2010).

Cultural Sensitivity

According to Campinha-Bacote (2011), cultural care is manifested by addressing the variabilities and commonalities of ethical standards, principles, and ways of life. Cultural sensitivity entails consideration for dissimilarities in culture, race, gender, and social class when dealing with a range of circumstances. Hussein (2014) indicated that the risk of culture shock arises in situations where nurses and patients have different cultural backgrounds. Such settings may create an environment in which nursing errors are more common. Although Arabs and Westerners share several cultural values (e.g., attaching significance to bonds with family and children, aiming for a peaceful life), the variances in their traditions and histories overshadow these commonalities and tend to create conflicts, particularly in the healthcare field. Felemban et al. (2014) described examples of how cultural sensitivity affects the caregiving process. Limited physical contact with patients may increase the risk of mistakes that practising nurses make during healthcare delivery due to cultural and linguistic barriers. Such situations considerably affect the quality of care being offered. The care planning process also differs across cultures. A patient may not comprehend matters such as the discontinuation of treatment, early discharge from the hospital, or refusal of hospital admission. In Saudi Arabia, ailments, apart from being medical conditions, are considered a "test" from Allah that a patient must endure with his or her faith intact for the duration of an illness (Felemban et al., 2014). By contrast, Western communities may deal with such issues through discussions and the consideration of alternatives. Healthcare delivery in Saudi Arabia may be affected by the lack of cultural sensitivity of healthcare staff, the social status of patients, the lack of affordable healthcare, and the unsuitability of services for different groups. As identified in the context of numerous countries, different cultural groups, to a certain extent, underutilize healthcare services (Ingram, 2012).

Discussion of Potential Conflicts

In the nursing profession, conflict may lead to wasted time and energy, distress, and confusion. Cultural insensitivity may generate conflict in different ways. Al-Fozan (2013) identified the factors that obstruct culturally competent care in Saudi Arabia as the inability to understand other cultures, the inability to communicate effectively, the diverse linguistic and cultural backgrounds of nurses, and the inability of healthcare organizations to address culturally diverse patients. In addition, no industry standards are in place for the regulation of the nursing profession. Hence, organizations are compelled to develop their own strategies for monitoring the nursing practice in general and the roles of nurses in particular; this individual development of strategies enables hospital authorities to address relevant issues (Aldossary, 2013; AlYami & Watson, 2014; Zakari, Al Khamis & Hamadi, 2010).

Recommendations for Nursing Practice

The current era is characterized by highly diverse workplaces, especially in the healthcare field. Patients may differ considerably from nurses in terms of cultural identity, religion, values, and beliefs, thus giving rise to diversity-induced complexity. Nurses are expected to be knowledgeable about the cultural backgrounds of their patients to ensure optimum care that corresponds with the cultural requirements. Such proficiency is called cultural competence, that is, the combination of approaches, strategies, and attitudes that enables nurses and other healthcare staff to work efficiently within transcultural settings (Bauce, Fitzpatrick & McCarthy, 2014; Harnegie, 2017). In this context, cultural competence entails respect for cultural principles and the appraisal of cross-cultural relations. It ascribes importance to awareness about cultural variation dynamics, the growth of cultural information, and the modelling of services aimed at fulfilling special cultural needs (Campinha-Bacote, 2011). Nurses are duty-bound to grasp the cultural differences among their patients, but every patient must be treated with a similar level of care and compassion (Noble & Rom, 2014). Several Saudi studies have shown that expatriate nurses are inadequately aware of the cultural factors that significantly affect nursing practice in the country (Al Momani & Al Korashy, 2012; Almutairi & McCarthy, 2015; Al Neami et al., 2014). Such factors must be brought to the attention of expatriate nurses applying for work in Saudi Arabia during their recruitment and orientation. This approach would significantly improve the standard of healthcare offered to patients. Nurses would thereby be substantially more attentive to matters of cultural importance and distinctiveness (Rooyen, Telford & Strümpher, 2010). Another imperative is for them to understand the diseases specific to Saudis and their culturally dictated care practices. Modern approaches to cultural sensitivity require that nurses evaluate their own cultural influences, beliefs about healthcare, biases, and heritage (Al Momani & Al Korashy, 2012; Almutairi & McCarthy, 2015). Nurses may use existing literature or participate in training sessions and discussions to strengthen their knowledge about cultural sensitivity in patient care. They should extensively develop the skills required to recognize and grasp cultural differences among patients and engage in cross-cultural interactions. Another equally valuable recommendation is for nurses to exercise self-motivation in familiarizing themselves with the cultural attributes of Saudi Arabia and its citizens.

CONCLUSION

Given the distinct cultural setting of Saudi Arabia, the cultural issues faced by the country's nursing profession necessitate an effective resolution. This article discussed the history of nursing education in the country and evaluated the cultural facets of Saudi society as well as their effects on nursing practice. Nurses must endeavour to comprehend issues of cultural diversity among their patients to deal with them effectively. This skill will enhance their efforts to provide effective care for their patients.
Understanding evolutionary and ecological dynamics using a continuum limit

Abstract

Continuum limits in the form of stochastic differential equations are typically used in theoretical population genetics to account for genetic drift or, more generally, the inherent randomness of the model. In evolutionary game theory and theoretical ecology, however, this method is used less frequently to study demographic stochasticity. Here, we review the use of continuum limits in ecology and evolution. Starting with an individual-based model, we derive a large population size limit, a (stochastic) differential equation which is called the continuum limit. By example of the Wright–Fisher diffusion, we outline how to compute the stationary distribution, the fixation probability of a certain type, and the mean extinction time using the continuum limit. In the context of the logistic growth equation, we approximate the quasi-stationary distribution in a finite population.

Introduction

The huge computational power available today allows more and more theoreticians to develop individual-based models of high complexity in order to explore dynamical behavior in ecology and evolution. In this manuscript, we aim to make a link between these individual-based descriptions and continuous models like (stochastic) differential equations that remain amenable to analysis. We review these techniques and apply them to some frequently used models in ecology and evolution.
In ecology, probably the most common description of population dynamics is the logistic growth equation (Verhulst, 1838). Its attractiveness draws from its simplicity. It has a globally attractive fixed point (when started from any non-zero population size), the carrying capacity of the population. However, this simplicity comes at the cost that biological observations such as population size fluctuations or even extinction events are not captured by this deterministic model. To account for these stochastic effects one needs to change to a stochastic differential equation, which can be derived from the individual-based reactions (also called first principles) (Champagnat et al., 2006). We will outline this procedure along similar lines as reviewed in Black and McKane (2012) and additionally provide some insights on the population distribution in the probabilistic framework.

In evolutionary game theory, the Moran process has become a popular model for stochastic dynamics in finite populations (Nowak et al., 2004). It is a model describing the dynamics of different alleles in a population of fixed size and overlapping generations. As this is a birth-death process, quantities such as fixation probabilities, fixation times, and the stationary distribution can be calculated based on recursions (Goel and Richter-Dyn, 1974; Karlin and Taylor, 1975; Traulsen and Hauert, 2009; Allen, 2011). A continuum approximation for quantities that are known exactly thus makes limited sense. Another important process in population genetics is the Wright-Fisher process, a model for allele evolution in a population of fixed size and non-overlapping generations (Wright, 1931). It is more popular in population genetics but is also used in evolutionary game theory (e.g. Imhof and Nowak, 2006; Traulsen et al., 2006; Taylor and Maciejewski, 2012; Wakano and Lehmann, 2014). However, the Wright-Fisher process is mathematically much more challenging to analyze exactly. Therefore, often continuum approximations resulting in
stochastic differential equations are used to compute typical quantities of interest such as the probability of fixation of a certain genotype or the mean time until this fixation event occurs (Crow and Kimura, 1970; Ewens, 2004).

Even though similar in the questions they try to answer, evolutionary game theory and population genetics are developing in parallel, sometimes with little interaction between them. As this partly arises from the different methods applied, here we aim to provide an introduction to the continuum limit for those less comfortable with these methods and hesitant to go into the extensive, more mathematical, literature.

Since our goal is to illustrate how to apply a continuum limit to individual-based descriptions of an ecological or an evolutionary process, the calculations and derivations below may remain a bit vague where more mathematical theory and knowledge is necessary. For a mathematically rigorous presentation of this topic we refer to the excellent lecture notes by Etheridge (2012) or the book by Ewens (2004). A more application-oriented treatment of stochastic processes in biology can be found in the books by Gardiner (2004), van Kampen (2007), Otto and Day (2007), and Allen (2011).

Evolutionary and ecological proto-type processes

2.1. Wright-Fisher and Moran process

We start by introducing the two most popular processes to model (stochastic) evolutionary dynamics, the Wright-Fisher and the Moran process. While in the Wright-Fisher process generations are non-overlapping and time is measured in discrete steps, the generations in the Moran model are overlapping and can be measured in discrete or continuous time. Both processes describe the stochastic variation of allele frequencies due to finite population size effects, also called genetic drift.
Wright-Fisher model

One of the oldest population genetics models is the finite size Wright-Fisher process (Fisher, 1930; Wright, 1931). Given a population of constant size, it describes the change in frequencies of alleles in non-overlapping generations over time, measured in (discrete) generations.

Classically, one considers a population of N individuals where each individual is of type A or B. The population is considered to be in its ecological equilibrium and its population size N is therefore constant over time. One possible interpretation is that every generation each individual chooses, independently of all other individuals, an ancestor from the previous generation and inherits its type. Under selection, the likelihood of drawing type A individuals increases (or decreases), which introduces a sampling bias. The probability for an offspring to have a parent of type A, conditional on k individuals being of type A in the parental generation, is then given by

$$p_k = \frac{(1+s)k}{(1+s)k + (N-k)}, \qquad (2.1)$$

where $s \in \mathbb{R}_{\geq 0}$ is the selective advantage of type A. The number of type A individuals in the next generation is then given by a binomial distribution with sample size N and success probability $p_k$. Denoting the number of type A individuals in generation n by $X_n$, we have

$$X_{n+1} \mid (X_n = k) \sim \mathrm{Bin}(N, p_k). \qquad (2.2)$$

Unfortunately the Wright-Fisher model, even though very illustrative, is difficult to study analytically. Through the developments in stochastic modeling in the last century, a lot of this new theory could be adopted to overcome this problem (e.g. Kimura, 1983; Ewens, 2004).
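The sampling scheme above is straightforward to simulate. The sketch below is an illustrative helper (not code from this review): it computes the biased parent-choice probability and then draws successive generations by binomial sampling; `numpy` is assumed to be available.

```python
import numpy as np

def wf_sampling_prob(k, N, s):
    """Probability that an offspring draws a type-A parent, given k
    type-A individuals among N and selective advantage s of type A."""
    return (1 + s) * k / ((1 + s) * k + (N - k))

def wf_trajectory(N, k0, s, generations, rng):
    """Number of type-A individuals over successive (non-overlapping)
    generations: each generation is one binomial draw."""
    traj = [k0]
    k = k0
    for _ in range(generations):
        k = rng.binomial(N, wf_sampling_prob(k, N, s))
        traj.append(k)
    return traj

rng = np.random.default_rng(seed=1)
traj = wf_trajectory(N=100, k0=50, s=0.05, generations=500, rng=rng)
```

For s = 0 the sampling is unbiased (p_k = k/N) and the allele frequency performs pure genetic drift until it is absorbed at 0 or N.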
Moran model

Another way to resolve the difficulties associated with the Wright-Fisher process is offered by the Moran process (Moran, 1958). As already mentioned, the setup is the same as for the Wright-Fisher process (constant population size N with two types or, in population genetics, alleles A and B) with one exception: time is not measured in generations, but each change in the population configuration affects only one individual, the one that dies and gets replaced by an offspring of another randomly selected individual. This results in overlapping generations and allows for time being measured either in a discrete or a continuous way.

Discrete time

The Moran process in discrete time can be formulated as follows. Every time step, one individual is randomly chosen to reproduce and the offspring replaces a randomly chosen individual among the remaining N − 1 individuals (sometimes the replacement mechanism is not restricted to the remaining individuals but also includes the parent). Therefore, in a population with k type-A individuals, the probability that one of these replaces a type B individual is given by

$$T_k^{+} = p_k \, \frac{N-k}{N-1}, \qquad (2.3)$$

with $p_k$ as defined in Eq. (2.1). Analogously, the probability for the number of type A individuals to decrease from k to k − 1 is given by

$$T_k^{-} = (1 - p_k) \, \frac{k}{N-1}. \qquad (2.4)$$

We have implemented selection on the reproduction step; however, the Moran model also allows for selection on death. In a non-spatial setting, as considered here, this leads to the same transition probabilities. However, the Moran model can also be considered on a graph, which aims to model spatial structure. Here, the order and also the precise implementation of selection matters and can potentially give rise to different evolutionary dynamics (Lieberman et al., 2005; Kaveh et al., 2015). We note further that without selection (s = 0) we have $p_k = k/N$, i.e. the increase and decrease probabilities are equal for any choice of k. Usually this kind of dynamics is called neutral.
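The neutrality statement at the end of this paragraph is easy to verify numerically. A small sketch (illustrative helper names, not from the text) tabulates the two transition probabilities and checks that they coincide for every k when s = 0:

```python
def p_k(k, N, s):
    # biased probability of choosing a type-A parent
    return (1 + s) * k / ((1 + s) * k + (N - k))

def t_up(k, N, s):
    # a type-A offspring replaces one of the N - k type-B individuals
    return p_k(k, N, s) * (N - k) / (N - 1)

def t_down(k, N, s):
    # a type-B offspring replaces one of the k type-A individuals
    return (1 - p_k(k, N, s)) * k / (N - 1)

N = 50
# Without selection the increase and decrease probabilities coincide
# for every k, so the dynamics are neutral.
neutral = all(abs(t_up(k, N, 0.0) - t_down(k, N, 0.0)) < 1e-12
              for k in range(N + 1))
```

With s > 0 the upward probability dominates in every interior state, while the boundary states k = 0 and k = N remain absorbing.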
Continuous time

The same dynamics (albeit on a different time-scale) can be obtained by assuming that each pair of individuals is associated to a random exponentially distributed time (also described as exponential clocks) and the next pair to update their types is determined by the smallest random time (or the clock that rings first). At these times, one of the two individuals is chosen to reproduce, the offspring replacing the other individual of the pair. There is no standard choice when it comes to choosing the rate of these exponential times. However, in the neutral model (s = 0) the rate $\binom{N}{2}$ is mathematically convenient since then the time-scale of the Moran dynamics corresponds to the time-scale of Kingman's coalescent (Kingman, 1982) (see Wakeley (2008) for an introduction to coalescent theory).

Both formulations of the Moran process are Markov chains, either in discrete or continuous time, with the special property of only having jumps of ±1. These processes are called birth-death processes. The theory of these is well developed, see for example the books of Karlin and Taylor (1975), Gardiner (2004), or Allen (2011), so that the dynamics of Moran processes are often amenable to analysis (typically by solutions of recursion equations).

Conclusion 1 The difference between the Wright-Fisher model and the Moran model is the progression of populations in time. In the Wright-Fisher process, generations are non-overlapping, i.e. all individuals update their type at the same time. Therefore the distribution of types in the offspring generation is binomial. On the other hand, generations are overlapping in the Moran model and the type dynamics follow a birth-death process since only one individual is updated at a time, resulting in less complicated transition probabilities.
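In simulations, the continuous-time formulation is usually implemented with exponential waiting times (the Gillespie algorithm). A minimal neutral sketch, assuming for concreteness that every individual reproduces at rate 1 and its offspring replaces a uniformly chosen other individual (an illustrative implementation, not from the text):

```python
import random

def moran_continuous_neutral(N, k0, rng):
    """Neutral continuous-time Moran model; returns the absorbed state
    (0 or N) and the time at which fixation/extinction happened."""
    k, t = k0, 0.0
    while 0 < k < N:
        t += rng.expovariate(N)  # total reproduction rate is N
        parent_a = rng.random() < k / N
        # the replaced individual is uniform among the other N - 1
        others_a = k - 1 if parent_a else k
        victim_a = rng.random() < others_a / (N - 1)
        if parent_a and not victim_a:
            k += 1
        elif victim_a and not parent_a:
            k -= 1
    return k, t

rng = random.Random(7)
k_final, t_final = moran_continuous_neutral(N=20, k0=10, rng=rng)
```

Because the chain has only ±1 jumps and absorbing boundaries, every run eventually ends with fixation of one of the two types.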
Logistic growth

In ecology one is typically interested in population sizes rather than allele frequencies. The simplest population growth model is that of exponential growth. However, a population that grows exponentially forever contradicts the physical boundaries of our planet. Obviously, a population's growth will be limited at some point, for example due to spatial constraints or resource depletion. This form of density regulation is enough to stabilize a population around its carrying capacity, the smallest positive population size at which in the deterministic process the growth rate equals zero.

Here, we give the mechanistic basis that could potentially describe such a process. We denote a single individual of the population by Y. The birth- and death-reactions can then be written as

$$Y \xrightarrow{\beta} Y + Y, \qquad Y \xrightarrow{\delta} \emptyset. \qquad (2.5)$$

The parameters β and δ correspond to the rates at which the two reactions happen, i.e. each reaction corresponds to an exponential clock with rate either β or δ. For β > δ, the population tends to grow to infinity, whereas for β < δ, it goes extinct. Population regulation is achieved through a non-linear term that is typically interpreted as an interaction between two individuals, e.g. competition for space. The corresponding reaction is given by

$$Y + Y \xrightarrow{\gamma/K} Y. \qquad (2.6)$$

The parameter γ is referred to as the intra-specific competition coefficient and K is a measure of the number of individuals at carrying capacity. For example, K = 10,000 would result in a carrying capacity of this order of magnitude. The division by K in the competition rate accounts for the probability of interaction of two individuals in a well-mixed population where space is measured by the parameter K, so that Y/K becomes a density (or rate of encountering an individual when randomly moving around). For a more detailed derivation of these types of interaction rates we refer to Anderson and Kurtz (2015). The logistic process is, like the Moran process, a birth-death process. Therefore, it is amenable to the same type of finite
population analysis. We will see in the next section that in the infinite population size limit (we let K tend to infinity) the mechanistic description above results in the logistic equation of the form

$$\dot{y} = r\,y\left(1 - \frac{y}{c}\right), \qquad (2.7)$$

where r = β − δ is the per-capita growth-rate, c = (β − δ)/γ is the rescaled carrying capacity, and y = Y/K is the density of the population.

Infinite population size limit

The microscopic descriptions above are enough to implement a stochastic simulation algorithm. However, the theoretical analysis of finite size populations can be challenging. A common technique is therefore to consider a continuum approximation, i.e. studying the limiting model for N, K → ∞ (and usually s → 0 in evolutionary processes). Typically, the diffusion approximation is used to derive (stochastic) differential equations of the form

$$\mathrm{d}x_t = \mu(x_t)\,\mathrm{d}t + \sigma(x_t)\,\mathrm{d}W_t, \qquad (3.1)$$

where $(W_t)_{t\geq 0}$ is a standard Brownian motion (see Appendix A.1 for more details on the Brownian motion). This equation describes the population dynamics, i.e. the macroscopic evolution of a certain model. A solution of a stochastic differential equation of this particular form is also called a diffusion. We note that Eq. (3.1) is the compact writing of the following integral equation

$$x_t = x_0 + \int_0^t \mu(x_s)\,\mathrm{d}s + \int_0^t \sigma(x_s)\,\mathrm{d}W_s. \qquad (3.2)$$

We call $\mu(x_t)$ the infinitesimal mean, i.e. the expected change of the stochastic process $(x_t)_{t\geq 0}$ in a very short time interval, and $\sigma^2(x_t)$ the infinitesimal variance, i.e. the corresponding expected variation of the diffusion in very small time steps (see also Appendix A.2). In the case where σ is zero, the limiting process is deterministic and Eq. (3.1) reduces to an ordinary differential equation. We now present how to derive Eq. (3.1). Dependent on the field, this type of approximation is known as Gaussian or diffusion approximation (Norman, 1975; Ewens, 2004; Etheridge, 2012) or as Kramers-Moyal expansion (van Kampen, 2007). The general idea is to derive the dynamics of the probability density in the microscopic system and to perform a Taylor expansion, i.e.
linearize this equation with respect to the parameter 1/N, the frequency change of the population dynamics in the finite population size description of the Moran model.

The change in very small (infinitesimal) time of any continuous-time Markov process $(x_t)_{t\geq 0}$ can be described by the infinitesimal generator, denoted $\mathcal{G}$. For a process that is homogeneous in time, i.e. where the transition rates are constant in time and only depend on the state of the stochastic process, the infinitesimal generator is independent of time t. Intuitively, one can think of it as the derivative of the expectation (of an arbitrary function) of a stochastic process. Formally, it is defined by

$$\mathcal{G}f(x) = \lim_{\Delta t \to 0} \frac{\mathbb{E}[f(x_{\Delta t}) \mid x_0 = x] - f(x)}{\Delta t}, \qquad (3.3)$$

where $\mathbb{E}[f(x_{\Delta t}) \mid x_0 = x]$ denotes the conditional expectation of the stochastic process $f(x_t)$ at time Δt given the initial value $x_0 = x$, with f an arbitrary function so that the limit is well-defined. For example, applying $\mathcal{G}$ to f(x) = x describes the dynamics of the mean of $x_t$.

The infinitesimal generator is useful in our context since it can be related to a diffusion process. To be more precise, the infinitesimal generator associated to the stochastic differential equation

$$\mathrm{d}x_t = \mu(x_t)\,\mathrm{d}t + \sigma(x_t)\,\mathrm{d}W_t \qquad (3.4)$$

is given by

$$\mathcal{G}f(x) = \mu(x)\,f'(x) + \frac{\sigma^2(x)}{2}\,f''(x). \qquad (3.5)$$

More details on this relationship are provided in Appendix A.2. Our goal is to find the limit of the finite-population size generator of the form given in Eq.
(3.5) that directly translates to a diffusion. As an example we consider the continuous-time Moran process with transition rates $T_k^{+}$ and $T_k^{-}$ for 0 ≤ k ≤ N, where each individual has a type-independent birth-rate of 1 (an exponential clock with rate 1) at which it replaces another individual with an offspring. Setting x = X/N, the frequency of individuals of type A, we find the infinitesimal generator for the model with fixed population size N, $\mathcal{G}_N$, to be of the form

$$\mathcal{G}_N f(x) = \lim_{\Delta t \to 0} \frac{1}{\Delta t}\Bigg( \underbrace{T_{xN}^{+}\,\Delta t}_{\text{probability for an update until time }\Delta t} \Big[f\big(x+\tfrac{1}{N}\big) - f(x)\Big] + \underbrace{T_{xN}^{-}\,\Delta t}_{\text{probability for an update until time }\Delta t} \Big[f\big(x-\tfrac{1}{N}\big) - f(x)\Big] + O(\Delta t^2) \Bigg). \qquad (3.6)$$

We have used the Landau notation $O(\Delta t^2)$ to summarize processes that scale with order $\Delta t^2$ or higher. Doing a Taylor expansion for large N and neglecting the terms of order higher than $1/N^2$ we obtain

$$\mathcal{G}_N f(x) \approx \frac{T_{xN}^{+} - T_{xN}^{-}}{N}\,f'(x) + \frac{T_{xN}^{+} + T_{xN}^{-}}{2N^2}\,f''(x). \qquad (3.7)$$

Lastly, we rescale time by the factor 1/N, i.e. we change the process onto the time scale τ = t/N, which speeds up the dynamics by a factor of N. Recall that $\mathcal{G}_N$ is the limit of the differential quotient for Δt → 0 and as such incorporates the time-change. Now the infinitesimal generator of the rescaled process reads

$$\mathcal{G}f(x) = \big(T_{xN}^{+} - T_{xN}^{-}\big)\,f'(x) + \frac{T_{xN}^{+} + T_{xN}^{-}}{2N}\,f''(x). \qquad (3.8)$$

Translating this equation to a stochastic differential equation we can identify the single components as

$$\mu(x) = T_{xN}^{+} - T_{xN}^{-}, \qquad \sigma^2(x) = \frac{T_{xN}^{+} + T_{xN}^{-}}{N}. \qquad (3.9)$$

Note that we have made no assumption on the dependence of the transition probabilities on the frequency x, such that this approach is applicable for constant selection, linear frequency dependence arising in two-player games (Traulsen et al., 2005) or multiplayer games with polynomial frequency dependence (Gokhale and Traulsen, 2010; Peña et al., 2014).

Conclusion 2 For time-continuous finite population size models with jumps of ±1, i.e. a birth-death process, the limiting diffusion process can be computed by Eq. (3.9).
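Conclusion 2 can be made concrete with a small helper that maps the transition rates of a finite birth-death model onto drift and diffusion coefficients (here assuming the identification μ = T⁺ − T⁻ and σ² = (T⁺ + T⁻)/N on the rescaled time scale), after which the resulting diffusion can be integrated with the Euler–Maruyama scheme. All function names and parameter values below are our own illustrative choices, not from the text:

```python
import math
import random

def continuum_coefficients(t_plus, t_minus, N):
    """Drift mu(x) = T+ - T- and diffusion sigma^2(x) = (T+ + T-)/N of the
    frequency process derived from the birth/death rates of a finite model."""
    def mu(x):
        k = round(x * N)
        return t_plus(k) - t_minus(k)
    def sigma2(x):
        k = round(x * N)
        return (t_plus(k) + t_minus(k)) / N
    return mu, sigma2

def euler_maruyama(mu, sigma2, x0, dt, n_steps, rng):
    """Integrate dx = mu(x) dt + sqrt(sigma2(x)) dW with fixed step dt,
    clipping the frequency to [0, 1]."""
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += mu(x) * dt + math.sqrt(max(sigma2(x), 0.0)) * dw
        x = min(max(x, 0.0), 1.0)
    return x

# Moran-type example: type A reproduces at rate 1 + s, type B at rate 1,
# the offspring replacing a uniformly chosen individual (no mutation).
N, alpha = 10_000, 2.0
s = alpha / N  # weak selection
t_plus = lambda k: (1 + s) * k * (N - k) / N
t_minus = lambda k: k * (N - k) / N
mu, sigma2 = continuum_coefficients(t_plus, t_minus, N)
x_end = euler_maruyama(mu, sigma2, x0=0.5, dt=0.01, n_steps=1000,
                       rng=random.Random(3))
```

At x = 1/2 the drift is close to αx(1 − x) = 0.5 and the diffusion coefficient close to 2x(1 − x) = 0.5, consistent with the weak-selection limit derived in the example below.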
Example: Moran process with selection and mutation

As an example we explicitly derive the stochastic differential equation corresponding to the Moran model with selection and mutation. Here, we decouple the reproduction and mutation processes, but similar derivations can be made if we assume a coupling of mutations to reproduction events. The selection coefficient is denoted by s and the mutation rates from type A to B and from type B to A are given by u_{A→B} and u_{B→A}, respectively. This time we consider transition rates instead of transition probabilities. This means that the transitions now also define the speed at which these updates happen. The transition rates can be written as

T_k+ = (1 + s) k (N − k)/N + u_{B→A} (N − k)   (3.10)

and

T_k− = (N − k) k/N + u_{A→B} k.   (3.11)

Instead of individuals giving birth to an offspring independent of their type, these rates translate to each individual having its own exponential clocks: one clock with rate (1 + s) for type A individuals and rate 1 for type B individuals corresponding to reproduction, and one clock with rates u_{A→B} and u_{B→A} for mutation, respectively. Thus, in total there are k individuals that share the same rates, which gives the first term for T_k+ (reproduction) and the second term for T_k− (mutation). Note that this is just a simple example. In general, all different types of updates could be envisioned, see for example Czuppon and Rogers (2019) for a sexually reproducing population under self-incompatibility.

Inserting the just defined transition rates into Eq. (3.9) yields

µ(x) = N (s x(1 − x) + u_{B→A}(1 − x) − u_{A→B} x),
σ²(x) = (2 + s) x(1 − x) + u_{B→A}(1 − x) + u_{A→B} x.   (3.12)

Dependent on the choice of selection and mutation rates, these equations result in different limits. Typically one is interested in non-trivial limits for these equations, i.e.
a limit so that not both components equal zero. Often this can be achieved by rescaling time (typically by the inverse of the population size) and/or defining the strength of selection and mutation in terms of the population size N. As an example, we will focus on two specific limits: (i) strong selection and strong mutation and (ii) weak selection and weak mutation.

Strong selection and mutation

At first, we consider large selection and mutation effects. We assume that s and u_i do not depend on N but are constant. In order to obtain a limit equation for the frequency of individuals of type A, x = X/N, we rescale time by N, which transforms Eq. (3.12) to

µ(x) = s x(1 − x) + u_{B→A}(1 − x) − u_{A→B} x,
σ²(x) = ((2 + s) x(1 − x) + u_{B→A}(1 − x) + u_{A→B} x)/N.   (3.13)

The first equation is independent of N and the vanishing variance in the second equation implies that the limit process is deterministic. We obtain the ordinary differential equation

dx/dt = s x(1 − x) + u_{B→A}(1 − x) − u_{A→B} x,   (3.14)

which describes the mean dynamics of the allele frequency in the population.

Weak selection and mutation

In contrast to the previous scenario, we now assume that both selection and mutation are weak. More precisely, we choose s and u_i to scale inversely with N and define the constants α = sN and ν_i = u_i N. Inserting these into Eq. (3.12), here without rescaling time (i.e. without speeding up or slowing down the original process in order to obtain a reasonable continuous-time limit), yields

µ(x) = α x(1 − x) + ν_{B→A}(1 − x) − ν_{A→B} x,
σ²(x) = (2 + α/N) x(1 − x) + (ν_{B→A}(1 − x) + ν_{A→B} x)/N,   (3.15)

which gives the diffusion limit (Eq. (3.1))

dx_t = (α x_t(1 − x_t) + ν_{B→A}(1 − x_t) − ν_{A→B} x_t) dt + sqrt(2 x_t(1 − x_t)) dW_t.   (3.16)

This stochastic differential equation in the context of population genetics (and typically without the factor two in the stochastic component) is also called the Wright-Fisher diffusion (with selection and mutation). This difference in the variance term shows that the Moran model has twice as much variance as the Wright-Fisher process when compared in the diffusion limit (for a derivation of the diffusion limit when starting from the Wright-Fisher process see Appendix B.1). More details are provided in Appendix B.2 where the Moran and the Wright-Fisher process are compared in more detail. In short, the difference arises
through the different sampling schemes in the two models, binomial versus birth-death updating. We also note that for this choice of selection and mutation strength (both are assumed to be weak and to scale with 1/N), coupling mutation to reproduction, and therefore changing the rates in Eqs. (3.10) and (3.11), would result in the same limit equation.

In the following we will call the process solving Eq. (3.16) without the factor 2 in the variance (σ²(x) = x(1 − x)) a Wright-Fisher diffusion with selection and mutation.

Conclusion 3: Different assumptions on the model dynamics, here selection and mutation, can lead to different continuum limits on the population level. In order to identify parameter combinations that result in reasonable diffusion approximations it is key to study Eq. (3.12) in detail.

Conclusion 4: The Moran process, by nature, has the same mean behavior as the Wright-Fisher model. However, its variance in the diffusion limit is twice the variance of the corresponding Wright-Fisher diffusion derived from the classical Wright-Fisher model as defined in Section 2.1.

Example: Logistic growth

We apply the same technique to the logistic growth model. From the rates given in Eqs. (2.5) and (2.6) we first derive the transition rates T_j+ and T_j−. They are given by

T_j+ = β j and T_j− = δ j + γ j²/K.   (3.17)

We apply the same technique as before which resulted in Eq. (3.9), with one difference though: for the logistic process, as we have introduced it here, we do not need to rescale time by 1/2. The only rescaling we do is going from the number of individuals j to the corresponding density y = j/K. Then the calculation in Eq. (3.6) transforms to

G_K f(y) = T_yK+ [f(y + 1/K) − f(y)] + T_yK− [f(y − 1/K) − f(y)].   (3.18)

Applying the limits in Eq.
(3.9) with K as the parameter going to infinity we find

µ(y) = (β − δ − γ y) y,
σ²(y) = lim_{K→∞} (β + δ + γ y) y / K = 0.   (3.19)

This means that in the limit of infinite population sizes the variance vanishes and we end up with a deterministic process, the logistic growth equation:

dy/dt = (β − δ − γ y) y.   (3.20)

Indeed, as we see in Figure 1, the population size measure K does not need to be very large for the individual based model to approach the deterministic limit (K ≈ 1000 is enough in this example).

Finite population size approximation

We have seen that if we let the population size tend to infinity, we are able to recover a (stochastic) differential equation describing the studied evolutionary or ecological process. A natural question that arises is how these results relate to finite population size models. One way of approaching this question is to study the variation around the limiting process in more detail. Without going too much into the formal details, we can, as a first approximation, identify this deviation as the infinitesimal variance (σ² in Eq. (3.9)) without taking the limit of N or K to infinity. A typical equation in this scenario would look as follows:

dx_t = µ(x_t) dt + sqrt(σ²(x_t)/N) dW_t,

where N is the parameter (typically the population size or a measure of population size) that in the infinite population size approximation would tend to infinity. We see that the variation around the deterministic part of the process scales with N^{−1/2}, reminiscent of the central limit theorem. It is worth pointing out that all the subsequent quantities and computations can also be done with this finite population size approximation. This type of approximation is particularly useful when we want to infer results where selection and/or mutation do not belong to a regime that is reasonably covered by the infinite population size limit. In these cases the limiting process often becomes deterministic and the description of the stochastic variation is lost (cf. the examples in the previous section).
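The convergence shown in Figure 1 can be reproduced in a few lines. The sketch below (our own, not the authors' code) runs a standard Gillespie simulation of the birth-death process, assuming the rates T_j+ = βj and T_j− = δj + γj²/K reconstructed above, and compares the final density with the deterministic equilibrium y* = (β − δ)/γ of Eq. (3.20); the parameter values mirror Figure 1.

```python
import random

# Gillespie simulation of the logistic birth-death process, assuming the
# rates T_j+ = beta*j and T_j- = delta*j + gamma*j*j/K from Eq. (3.17).
# Parameters follow Figure 1: beta = 2, delta = 1, gamma = 1.

random.seed(1)
beta, delta, gamma, K = 2.0, 1.0, 1.0, 1000
j = K // 2               # initial number of individuals
t, t_end = 0.0, 30.0
while t < t_end and j > 0:
    birth = beta * j
    death = delta * j + gamma * j * j / K
    total = birth + death
    t += random.expovariate(total)        # exponential waiting time
    if random.random() < birth / total:   # choose the next event
        j += 1
    else:
        j -= 1

y = j / K                                 # final density
y_star = (beta - delta) / gamma           # deterministic equilibrium of Eq. (3.20)
print(y, y_star)
```

For K = 1000 the trajectory settles close to y* = 1, in line with the claim that K ≈ 1000 already makes the individual based model nearly indistinguishable from the deterministic curve.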
Stationary distributions

For the Moran model we have derived two different limits dependent on the strength of selection and mutation. If both selection and mutation are strong, we arrived at a deterministic limit equation. For weak selection and weak mutation we obtained a stochastic differential equation. One qualitative difference between these two limits is that trajectories of the deterministic equation will always converge to a fixed point, while the stochastic differential equation fluctuates indefinitely for positive mutation rates. The deterministic fixed point is given by setting the right-hand side of Eq. (3.14) equal to zero. In our simple example, a single fixed point x* lies within the interval between 0 and 1 and is stable. Therefore all trajectories will approach this value, e.g. see Figure 2(a).

On the other hand, Eq. (3.16) is a stochastic equation. Hence, even if the trajectories approach or even hit the deterministic fixed point, they will not stay there due to the randomness introduced by the Brownian motion, cf. Figure 2(b). Still, we can make predictions about the time a trajectory spends in certain allele configurations. These are summarized in the stationary distribution, a quantity that is the stochastic equivalent of a deterministic fixed point. More precisely, if the initial state of the population is described by the stationary distribution, then all future time points have the same distribution. For birth-death processes, the stationary distribution can be calculated based on detailed balance, i.e.
the incoming and outgoing rates need to be equal for every possible state of the process (Gardiner, 2004; Claussen and Traulsen, 2005; Antal et al., 2009). In general, the stationary distribution ψ can be defined as the solution of

P(x_t ∈ A | x_0 ∼ ψ) = ψ(A) for all times t ≥ 0 and all measurable sets A,   (5.1)

where x_0 ∼ ψ denotes that x_0 is distributed according to ψ. The above equation can be expressed in terms of the infinitesimal generator:

∫ G f(x) ψ(x) dx = 0 for all functions f in the domain of G.   (5.2)

This equation allows us to solve for the stationary distribution numerically, and in some cases even an analytic solution is possible. In Box 1 we introduce the scale function S(x) and the speed measure M(x), which are related to a particular stochastic diffusion. These functions are useful to compute not only the stationary distribution but, as we will see later, other quantities like the hitting probability or the mean time to fixation.

Box 1: Scale function and speed measure

The scale function is defined by

S(x) = ∫^x exp( − ∫^y 2µ(z)/σ²(z) dz ) dy,   (B1)

where the lower boundaries of the integrals can be chosen arbitrarily. The name of this function derives from the fact that for a one-dimensional diffusion x_t satisfying

dx_t = µ(x_t) dt + σ(x_t) dW_t,   (B2)

the scaled process S(x_t) becomes a (time-changed) Brownian motion on the interval [S(0), S(1)], i.e. there is no deterministic contribution in the scaled process. The time change is given by the speed measure M, which defines how much faster (or slower) the original process is evolving compared to a standard Brownian motion. It is given by

m(x) = 1/(σ²(x) S'(x)),   (B3)

the density of the speed measure.

The scale function and the speed measure describe diffusion processes analytically and can be used to compute the stationary distribution, fixation probabilities and mean extinction times, as we will see in the following sections. A more detailed assessment of the scale function and the speed measure can be found in Etheridge (2012).

For example, in the case of a one-dimensional diffusion, the stationary distribution, i.e. the solution of Eq.
(5.2), can be expressed by the scale function S(x) (Eq. (B1)) and the speed measure density m(x) (Eq. (B3)). Using these quantities, the solution of Eq. (5.2) can be compactly written as

ψ(x) = m(x) / ∫₀¹ m(y) dy.   (5.3)

Thus, all that is needed for the computation of the stationary distribution is the speed measure density m. For a detailed derivation see also Etheridge (2012, Chapter 3.6).

Stationary distribution of the Wright-Fisher diffusion

Let us consider the Wright-Fisher diffusion with selection and mutation, i.e.

dx_t = (α x_t(1 − x_t) + ν_{B→A}(1 − x_t) − ν_{A→B} x_t) dt + sqrt(x_t(1 − x_t)) dW_t.   (5.4)

The infinitesimal generator is given by (see also Eq. (3.5))

G f(x) = (α x(1 − x) + ν_{B→A}(1 − x) − ν_{A→B} x) f'(x) + (x(1 − x)/2) f''(x).   (5.5)

The derivative of the scale function (Eq. (B1) in Box 1) reads

S'(x) = C e^{−2αx} x^{−2ν_{B→A}} (1 − x)^{−2ν_{A→B}},   (5.6)

where C is a constant dependent on the lower bound of the integral. The speed density (Eq. (B3) in Box 1) takes the form

m(x) = (1/C) e^{2αx} x^{2ν_{B→A} − 1} (1 − x)^{2ν_{A→B} − 1}.   (5.7)

Using a symbolic programming language, e.g. Mathematica, we find that integrating m(x) over the whole frequency space, the interval [0, 1], we obtain

∫₀¹ m(x) dx = (1/C) (Γ(2ν_{B→A}) Γ(2ν_{A→B}) / Γ(2ν_{B→A} + 2ν_{A→B})) ₁F₁(2ν_{B→A}, 2ν_{B→A} + 2ν_{A→B}, 2α),   (5.8)

where Γ(x) is the Gamma function and ₁F₁(a, b, z) is the generalized hypergeometric function. Then the stationary distribution (Eq. (5.3)) can be written as

ψ(x) = e^{2αx} x^{2ν_{B→A} − 1} (1 − x)^{2ν_{A→B} − 1} Γ(2ν_{B→A} + 2ν_{A→B}) / ( Γ(2ν_{B→A}) Γ(2ν_{A→B}) ₁F₁(2ν_{B→A}, 2ν_{B→A} + 2ν_{A→B}, 2α) ).   (5.9)

Note that for α = 0 the generalized hypergeometric function reduces to unity and the stationary distribution can be expressed in terms of the Gamma function alone.

As an example, some stationary distributions are plotted in Figure 3. For symmetric mutation rates ν_{A→B} = ν_{B→A}, the stationary distribution is either centered around 0.5 or peaks at the boundaries of the frequency space. Intuitively this can be explained as follows: if mutation rates are too small, genetic drift drives the allele population to fixation (either boundary is equally likely due to the symmetric choice of mutation rates). If mutation rates are increased, coexistence of the two alleles becomes possible due to the drift-mutation balance. For non-equal mutation rates, the stationary distribution is skewed towards the allele that is favored by the mutation mechanism. Similarly, for selection the distribution is skewed towards the favored allele.
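The normalising constant in Eq. (5.8) is easy to check numerically. The sketch below (ours, not from the manuscript) integrates the speed density m(x) of Eq. (5.7) on a grid for the illustrative choice α = 0 and ν_{B→A} = ν_{A→B} = 1 (setting C = 1), where the hypergeometric factor reduces to unity, and compares the result with the Gamma-function expression.

```python
import math

# Numerical check of the normalising constant of the speed density,
# Eqs. (5.7)-(5.8), assuming alpha = 0 and nu_BA = nu_AB = 1 (C = 1).
# For alpha = 0 the hypergeometric factor is 1, so the integral of m(x)
# equals Gamma(2 nu_BA) Gamma(2 nu_AB) / Gamma(2 nu_BA + 2 nu_AB).

alpha, nu_BA, nu_AB = 0.0, 1.0, 1.0

def m(x):
    """Speed measure density of the Wright-Fisher diffusion, Eq. (5.7)."""
    return math.exp(2 * alpha * x) * x**(2 * nu_BA - 1) * (1 - x)**(2 * nu_AB - 1)

n = 20_000
h = 1.0 / n
Z = sum(m(i * h) for i in range(1, n)) * h   # trapezoidal rule on (0, 1)

a, b = 2 * nu_BA, 2 * nu_AB
Z_exact = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
print(Z, Z_exact)   # both close to 1/6
```

For parameter values with a non-zero α, `scipy.special.hyp1f1` could be substituted for the closed form, but the grid integration above works for any parameter combination.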
Conclusion 6: The stationary distribution of a one-dimensional diffusion can be expressed in terms of the density of the speed measure m(x) by Eq. (5.3). The density of the speed measure can be computed from the scale function corresponding to the stochastic diffusion process, Eqs. (B1) and (B3) in Box 1.

Quasi-stationary distribution of the logistic process

Next, we consider the finite population size approximation of the logistic growth model, i.e.

dy_t = (β − δ − γ y_t) y_t dt + sqrt((β + δ + γ y_t) y_t / K) dW_t.   (5.10)

Figure 3: Stationary distribution of the neutral Wright-Fisher diffusion. We plot the density of the stationary distribution, ψ in Eq. (5.9), for different sets of mutation rates (and selection). For small symmetric mutation rates, ν_{A→B} = ν_{B→A} < 1/2, most of the density is at the boundaries of the frequency space (dashed line). For large symmetric mutation rates, ν_{A→B} = ν_{B→A} > 1/2, most of the density is centered around 1/2 (solid line). Asymmetric mutation rates result in skewed stationary distributions, e.g. the dotted curve is skewed towards x = 1, since the mutation rate towards the considered allele is larger than the mutation rate away from it, ν_{B→A} > ν_{A→B}. A similar pattern emerges when selection is included, which generates a bias towards the favored allele (dash-dotted line).

This example has a (unique) absorbing state, y = 0. It is accessible from all positive population densities and therefore the population will go extinct almost surely. The only stationary distribution is the point-measure on 0, i.e. ψ(y) = 1_{y=0}.

In contrast, the positive deterministic population equilibrium, y* = (β − δ)/γ, is a stable fixed point of the deterministic system. Considering large values of the deterministic equilibrium (K ≫ 1), we expect the finite population size process from Eq.
(5.10) to remain close to this value for long times. In fact, the expected extinction time of the logistic growth process when started in the positive population equilibrium is of order exp(K) (Champagnat, 2006). This suggests that the process will be in a quasi-stationary state, i.e. before its extinction, the population can be described by the stationary distribution of the corresponding logistic process conditioned on non-extinction.

Formally, the quasi-stationary distribution can be computed by conditioning the original process on its survival. This means that the transition rates change and the novel process can be analyzed by the techniques described above. However, this method goes beyond the scope of this manuscript. For a theoretical treatment of this topic in the context of the logistic equation see for example Cattiaux et al. (2009); Assaf et al. (2010); Méléard and Villemonais (2012). For a general review on methods related to quasi-stationary distributions see Ovaskainen and Meerson (2010).

Another way to approximate the quasi-stationary distribution when extinction is very unlikely is provided by the central limit theorem (sometimes also called the linear noise approximation). Here, the distribution of the process is derived from its local dynamics around the deterministic fixed point y* (Ethier and Kurtz, 1986; Gardiner, 2004). The underlying assumption is that the population stays close to its positive steady state and just slightly fluctuates around this value. This is only a valid assumption when the probability of extinction within the studied time-frame is essentially zero. These small fluctuations can then be described by a Gaussian distribution. In terms of a formula this translates to the following:

y_t^K ≈ y* + U_t / sqrt(K),   (5.11)

where U is a Gaussian random variable. Writing µ(y) = (β − δ − γy) y and σ²(y) = (β + δ + γy) y, the dynamics of U can be rewritten as

dU_t = µ'(y*) U_t dt + σ(y*) dW_t.   (5.12)

The process U is therefore an Ornstein-Uhlenbeck process (see also Appendix A.3).
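The stationary variance of the Ornstein-Uhlenbeck process in Eq. (5.12) can be checked by direct simulation. The following sketch (ours; the parameter values β = 2, δ = 1, γ = 1 mirror Figure 1) integrates the process with the Euler-Maruyama scheme and compares the time-averaged squared fluctuation with −σ²(y*)/(2µ'(y*)).

```python
import math, random

# Euler-Maruyama integration of the Ornstein-Uhlenbeck process of
# Eq. (5.12): dU = mu'(y*) U dt + sigma(y*) dW.  The parameter values
# are our own illustrative choice (beta = 2, delta = 1, gamma = 1).

random.seed(2)
beta, delta, gamma = 2.0, 1.0, 1.0
y_star = (beta - delta) / gamma
dmu = beta - delta - 2 * gamma * y_star                      # mu'(y*) = -1
sigma = math.sqrt((beta + delta + gamma * y_star) * y_star)  # sigma(y*) = 2

dt, n_steps, burn_in = 0.01, 400_000, 50_000
U, acc, count = 0.0, 0.0, 0
for i in range(n_steps):
    U += dmu * U * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    if i >= burn_in:
        acc += U * U
        count += 1

var_sim = acc / count
var_theory = -sigma**2 / (2 * dmu)   # stationary variance, cf. Eq. (5.13)
print(var_sim, var_theory)           # simulated value fluctuates around 2
```

Dividing this variance by K, as in Eq. (5.11), recovers the width of the quasi-stationary distribution around y*.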
Evaluating the process U at y_t^K = y* we thus obtain a description of the variance in the deterministic fixed point. By the properties of the Ornstein-Uhlenbeck process (cf. Eq. (A.9) in Appendix A.3) we find its stationary distribution as

ψ_U ∼ N( 0, − σ²(y*) / (2 µ'(y*)) ).   (5.13)

This distribution describes the fluctuations of the process y_t^K around the deterministic steady state y*. Therefore, when plugging the distribution ψ_U into the original process from Eq. (5.11), we find the quasi-stationary distribution of y_t^K around the deterministic equilibrium y*, which is given by

ψ(y) ∼ N( y*, − σ²(y*) / (2 µ'(y*) K) ).   (5.14)

We can see that for increasing population sizes K, the variance decreases and vanishes in the limit K → ∞. In Figure 4 we have plotted this quasi-stationary distribution for different parameter sets of the system. Two general rules arise: Firstly, the larger the overall population size (large K), the smaller the variance, due to the scaling of the variance by 1/K (Eq. (5.14)). This also becomes clear when considering the jump sizes, 1/K, of the corresponding birth-death process when viewed in the density-space; these jumps become smaller the larger K is, and this directly translates to the distance the process gets pushed away from y*. Secondly, the larger the birth-, death- and competition rates, β, δ and γ, the broader the distribution. This is explained by the variance σ² being the sum of these three values. Increasing these values also increases the effect of the noise around the positive population equilibrium y*, as can be seen in Eq. (5.14).

Figure 4: Linear noise approximation around the positive steady state of the logistic equation. We plot the density of the stationary distribution, ψ from Eq. (5.14), for different sets of parameters. The width of the stationary distribution decreases with increasing population size, as can be seen by comparing the curves for K = 100, K = 1000 and K = 2000. Decreasing the competition parameter (and therefore increasing the per-capita death rate) results in a broader distribution (red line) when compared to the benchmark scenario (orange curve). The analogous conclusion, that lowering the per-capita birth- and death-rate results in a less broad distribution, is visualized by the purple line. This shows that the per-capita birth- and death-rates have a stronger impact on the width of the stationary distribution than the competition parameter.

Conclusion 7: If the studied process allows for a deterministically stable steady state but is almost surely going extinct for finite population sizes, a quasi-stationary distribution can be computed to describe the behavior of the process conditioned on survival. If the extinction probability is very low, an approximation of this distribution is given by the linear noise approximation, where the variance around the deterministic steady state is modeled by an Ornstein-Uhlenbeck process given by Eq. (5.12).
Fixation and first hitting probabilities

We have seen that stochastic descriptions of processes can lead to outcomes that are different from their deterministic counterparts. Here, we want to investigate one of these phenomena, namely the probability for a certain type to become fixed in a population. One-dimensional stochastic differential equations allow for an explicit computation of these fixation probabilities. As before we denote by x_t the frequency of type A individuals at time t ≥ 0 in the population. If this process is described by a one-dimensional diffusion, the fixation probability can be computed by

P_{x_0}(x_∞ = 1) = (S(x_0) − S(0)) / (S(1) − S(0)),   (6.1)

where S(x) is the scale function from Eq. (B1) corresponding to the process (x_t)_{t≥0}, and x_0 denotes the initial frequency of type A individuals. For the derivation of this formula, we refer to Otto and Day (2007, Chapter 15.3.3), Etheridge (2012, Lemma 3.14) or Kallenberg (2002, Theorem 23.10).

As an example, let us consider the Wright-Fisher diffusion with selection, which is given in Eq. (5.4) without mutations (ν_{A→B} = ν_{B→A} = 0). We have µ(x) = αx(1 − x) and σ²(x) = x(1 − x), such that the scale function simplifies to

S(x) = (1 − e^{−2αx}) / (2α).   (6.2)

Recalling the definition of α = sN for a finite population of size N and plugging this into Eq. (6.1) yields

P_{x_0}(x_∞ = 1) = (1 − e^{−2Nsx_0}) / (1 − e^{−2Ns}),   (6.3)

which for x_0 = 1/N becomes P_{1/N}(x_∞ = 1) ≈ 2s, the result of Haldane for the fixation of a single mutant copy in a population of size N (Haldane, 1927).

This method is also applicable to more complicated stochastic differential equations where the sign of the deterministic dynamics µ(x) depends on the population configuration. Most classically, these frequency-dependent problems were studied in deterministic evolutionary game theory introduced by Maynard Smith and Price (Maynard Smith and Price, 1973) (see also Hofbauer and Sigmund (1998) for an introduction to evolutionary game dynamics).
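Haldane's approximation can be verified in a couple of lines. The sketch below (ours, not from the manuscript) evaluates the fixation probability built from the scale function S(x) = (1 − e^{−2αx})/(2α), i.e. Eq. (6.3), and compares it with 2s for a single mutant copy, x_0 = 1/N.

```python
import math

# Fixation probability of the Wright-Fisher diffusion with selection
# (no mutation) from the scale function, Eqs. (6.1)-(6.3):
# P = (1 - exp(-2 alpha x0)) / (1 - exp(-2 alpha)) with alpha = s N.

def p_fix(x0, alpha):
    """Eq. (6.3): probability that type A, starting at frequency x0, fixes."""
    return (1 - math.exp(-2 * alpha * x0)) / (1 - math.exp(-2 * alpha))

N, s = 1000, 0.01
p = p_fix(1 / N, s * N)
print(p, 2 * s)   # ~0.0198 versus Haldane's 0.02
```

The agreement holds for weakly beneficial mutants (s small, Ns not too small); for larger s the full expression in Eq. (6.3) should be used instead of 2s.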
Our aim here is to derive the fixation probability of a process under frequency-dependent selection using the scale function. We consider a diffusion with linear frequency dependence (see Traulsen et al. (2006) for a physical formulation of this and Pfaffelhuber and Wakolbinger (2018) for a more general mathematical analysis). We denote the strength of selection by α and let u, v be arbitrary real numbers. We write αx(1 − x)(ux + v) for the linear frequency-dependent dynamics of selection. Then, the allele frequency evolves according to the following equation:

dx_t = α x_t(1 − x_t)(u x_t + v) dt + sqrt(x_t(1 − x_t)) dW_t.   (6.4)

For α ≪ 1, we can linearize the exponential and write the scale function as

S(x) ≈ x − α(u x³/3 + v x²).   (6.5)

Plugging this into Eq. (6.1), we obtain

P_{x_0}(x_∞ = 1) ≈ x_0 + α ( x_0 (u/3 + v) − u x_0³/3 − v x_0² ).   (6.6)

In the context of evolutionary game theory, this result is a re-derivation of the 1/3-law (Nowak et al., 2004) (generalized by Lessard and Ladret (2007)). It states that an allele starting with one individual is more likely to become fixed in the population than under neutral dynamics if the deterministic fixed point is smaller than 1/3. This can be seen by plugging in u = a − b − c + d and v = b − d, where a, b, c, d represent the payoffs of an evolutionary game.

Conclusion 8: The scale function can be used to analytically derive (or to compute numerically) the fixation probability of any one-dimensional diffusion representing trait frequencies by solving Eq. (6.1).

Mean time to fixation

A related quantity that can be derived from a one-dimensional diffusion is the expected time to fixation (or extinction, from the other species' point of view), i.e.
the time the two types coexist in the population. Again, the calculation relies on a special function, this time Green's function G(x, y), which can be interpreted as the average time that a diffusion started in x spends in the interval [y, y + dy) before reaching one of the boundaries (Etheridge, 2012, Chapter 3.5). It is therefore sometimes also called the sojourn time density (Ewens, 2004). Green's function is defined as

G(x, y) = 2 m(y) (S(1) − S(x)) (S(y) − S(0)) / (S(1) − S(0)) for y ≤ x,
G(x, y) = 2 m(y) (S(x) − S(0)) (S(1) − S(y)) / (S(1) − S(0)) for y > x,   (7.1)

where S(x) is the previously defined scale function and m(x) denotes the speed measure density (see Box 1). The expected time to fixation for a process started at frequency x, denoted E_x[τ], is then given by (see also Kallenberg (2002, Lemma 23.10) or Ewens (2004, Section 4.4))

E_x[τ] = ∫₀¹ G(x, y) dy.   (7.2)

This corresponds to the summation of the sojourn times in the discrete case, see e.g. Ohtsuki et al. (2007) for an application in finite populations. It is possible to simplify this formula considerably when applying it to concrete examples. For a particularly simple stochastic process, the neutral Wright-Fisher diffusion

dx_t = sqrt(x_t(1 − x_t)) dW_t,   (7.3)

the scale function and speed measure density are given by

S(x) = x and m(x) = 1/(x(1 − x)).   (7.4)

The expected time to fixation of one of the two alleles can then be expressed as

E_x[τ] = −2 ( x ln(x) + (1 − x) ln(1 − x) ).   (7.5)

As a more complex example, let us again consider the frequency-dependent selection process described by the stochastic differential equation

dx_t = α x_t(1 − x_t)(u x_t + v) dt + sqrt(x_t(1 − x_t)) dW_t.   (7.6)

Similar to the computation of the fixation probability, we will consider the case of small initial frequencies and weak selection, i.e. α, x ≪ 1. More precisely, we neglect terms of order α² and αx². We recall the approximation of the scale function in this case that we derived in Eq. (6.5),

S(x) ≈ x − α(u x³/3 + v x²).   (7.7)

Employing these approximations, the first integral in Eq. (7.2) yields, to leading order, 2x.   (7.8)

Approximating the second integral in a similar way we find

≈ 2x ln(x^{−1}) + 2xαv.   (7.9)

Taking these two expressions together, we re-derive the result already known in the literature (Altrock and Traulsen, 2009; Pfaffelhuber and Wakolbinger, 2018), i.e.

E_x[τ] ≈ 2x ( 1 + ln(x^{−1}) + αv ),   (7.10)

for α and x sufficiently small.
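Eq. (7.5) is easy to verify numerically from Eq. (7.2). In the neutral case Green's function simplifies to 2(1 − x)/(1 − y) for y < x and 2x/y for y ≥ x; the sketch below (ours, not from the manuscript) integrates it on a grid and compares the result with the closed-form expression.

```python
import math

# Mean time to fixation of the neutral Wright-Fisher diffusion via
# Green's function, Eqs. (7.1)-(7.5).  With S(y) = y and m(y) = 1/(y(1-y)),
# Green's function reduces to 2(1-x)/(1-y) for y < x and 2x/y for y >= x.

def mean_fixation_time(x, n=20_000):
    """Riemann-sum approximation of the integral in Eq. (7.2)."""
    h = 1.0 / n
    total = 0.0
    for i in range(1, n):
        y = i * h
        g = 2 * (1 - x) / (1 - y) if y < x else 2 * x / y
        total += g * h
    return total

x = 0.5
t_num = mean_fixation_time(x)
t_exact = -2 * (x * math.log(x) + (1 - x) * math.log(1 - x))  # Eq. (7.5)
print(t_num, t_exact)   # both close to 2 ln 2 ~ 1.386
```

Time here is measured in units of N generations, the diffusion time scale of the Wright-Fisher model.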
Conclusion 9: Expected unconditional fixation times, i.e. the expected time of coexistence of two alleles in a population described by a stochastic diffusion process, can be calculated by integrating over Green's function (the mean occupation time of a certain frequency until extinction), as shown in Eq. (7.2).

Summary and Outlook

We have outlined how to derive a stochastic differential equation from an individual based description of two classical models in evolutionary theory and theoretical ecology, the Wright-Fisher diffusion and the logistic growth equation. The resulting stochastic differential equation in one dimension describes the evolution of the allele frequency or population density under study, respectively. Using probabilistic properties of this equation, i.e. transforming it to a rescaled Brownian motion (Box 1), it is possible to analytically derive the (quasi-)stationary distribution, the fixation probability and the mean time to fixation. By way of example we derived these quantities for the Wright-Fisher diffusion without and with (frequency-dependent) selection.

The diffusion process emerges as the infinite population size limit. However, as we have seen in Section 4, one can also derive a finite population size approximation of the dynamics, i.e. we do not take the limit N, K → ∞. As mentioned, the fixation probability and the mean time to fixation can be derived analogously to the previous sections and thus carry over to these finite population equations. Applications of these finite population size approximations are abundant and cover diverse topics (e.g. Traulsen et al., 2005; Reichenbach et al., 2007; Assaf and Mobilia, 2011; Houchmandzadeh, 2015; Constable et al., 2016; Débarre and Otto, 2016; Kang and Park, 2017; Koopmann et al., 2017; Serrao and Täuber, 2017; Czuppon and Gokhale, 2018; Czuppon and Traulsen, 2018; Parsons et al., 2018; McLeod and Day, 2019; Schenk et al., 2019).
Apart from the fixation probability and the mean time to fixation, the (quasi-)stationary distribution is a commonly used measure to describe stochastic systems. Its calculation in an infinite population described by stochastic differential equations is straightforward. In finite population approximations, though, populations can go extinct due to the inherent stochasticity of the microscopic reactions. Here, the quasi-stationary distribution describes the stationary distribution conditioned on the survival of the population. For negligible extinction probabilities, i.e. very large survival probabilities of the population, the functional central limit theorem (or linear noise approximation) can be used to approximate this quasi-stationary distribution. In the theoretical biology literature, this method is frequently used in models of gene regulatory networks (see Anderson and Kurtz, 2015, for a mathematical introduction) and, interestingly, less so in the context of ecology or evolution (e.g. Boettiger et al. (2010); Kopp et al. (2018); Wienand et al. (2018); Czuppon and Constable (2019); see Assaf and Meerson (2017) for a review of the physics literature related to this topic). Another interesting extension of this approach appears when dealing with a model where processes evolve on different time scales. Using the central limit theorem it is possible to capture the variance (noise) coming from different processes evolving on different time scales (see Kang et al. (2014) for the formal derivation and Czuppon and Pfaffelhuber (2018) for an application in the context of gene regulatory pathways). Ultimately, this can help to disentangle the single contributions of the different processes to the quasi-stationary distribution.
Lastly, we did not cover multi-dimensional stochastic differential equations in this methods review. These high-dimensional processes offer a way to explore multi-trait or multi-type evolutionary dynamics, or evolution in spatially structured populations. We decided not to cover this topic because the methods to analyze these differential equations are not yet well developed or go beyond the (mathematical) scope of this manuscript. Still, under quite restrictive and simplifying assumptions, analytical results can be derived by using similar ideas as those presented here (e.g. Constable and McKane, 2014; Lombardo et al., 2015; Manceau et al., 2016; Simons et al., 2018; Czuppon and Rogers, 2019; Czuppon and Constable, 2019). Even though these multi-dimensional processes might not be tractable analytically in general, they are amenable to numerical analysis. As such they provide a tool to explore the potential qualitative outcomes of an ecological, evolutionary or eco-evolutionary model without the need to run computationally heavy individual-based simulations.

We hope that with our basic comparisons between different approaches used in different subfields of theoretical and mathematical biology, we help newcomers in the field to get more familiar with these methods.

A formal treatment of the relation between the infinitesimal generator and the stochastic differential equation can be found in Ethier and Kurtz (1986, Chapter 5.3). Briefly, one needs to apply Itô's formula (Kallenberg, 2002, Theorem 17.18) to the process f(x_t) to see that the process (x_t)_{t≥0} that solves the stochastic differential equation indeed has an expected change in infinitesimal time steps described by the infinitesimal generator in Eq. (A.3).

Neutral diffusion

We need to compute the expectations in Eqs. (A.4) and (A.5) using the probability distribution given in Eq.
(2.2) (with s = 0). Setting the time between two generations to 1/N, such that for large N time becomes approximately continuous, we find for the expected change in the number of type A alleles

E[∆X | X = k] = N (k/N) − k = 0.

The infinitesimal variance computes to

Var[∆X | X = k] = N (k/N)(1 − k/N).

Dividing by ∆t = N^{−1} and replacing k/N by x, we find the neutral Wright-Fisher diffusion for the allele frequency dynamics

dx_t = sqrt(x_t(1 − x_t)) dW_t.

Diffusion with selection and mutation

Next, we include selection and mutation. We say that type A alleles are beneficial (deleterious) if s > 0 (s < 0). Given that there are k type A individuals in the population, the probability for an offspring to choose a type A individual as a parent is given by

p_k = (1 + s)k / ((1 + s)k + N − k).

We can also add mutations to the Wright-Fisher model, i.e. type A individuals can mutate to type B and vice versa. We set u_{A→B} as the probability to mutate from type A to B and u_{B→A} as the mutation probability from B to A. Then the probability for an individual to be of type A, given k type A individuals in the parental generation, reads

(1 − u_{A→B}) (1 + s)k / ((1 + s)k + N − k) + u_{B→A} (N − k) / ((1 + s)k + N − k).

In this model mutation is intimately connected with the reproduction mechanism. For the Moran model, compare Section 3, these processes do not necessarily need to be coupled (even though this would, biologically speaking, make the most sense). Following the same methodology as for the neutral Wright-Fisher model, we can derive a diffusion process by computing the infinitesimal mean and variance for the allele frequency.

B.2. Comparing the variances of the Wright-Fisher and the Moran process

As we have seen in Eq. (3.1) in the main text, the variance in the diffusion limit is given by σ²(x) = 2x(1 − x). The underlying model was assumed to be the Moran model. If we do the same derivation of the diffusion limit using the discrete Wright-Fisher model (binomial updating), as done in the previous section, we obtain the variance σ²(x) = x(1 − x). Here, we hope to give an intuition and an analytical explanation for this difference.
For simplicity, we will only consider the neutral dynamics of the corresponding processes, i.e. we set the selection and mutation rates to zero, s = u_{A→B} = u_{B→A} = 0. In the Wright-Fisher process, during one time-step all N individuals are updated (simultaneously). If we switch to the scale on which the diffusion process is evolving, we need to rescale time by 1/N (see Eq. (B.9)). Under this scaling we obtain σ²(x) = x(1 − x).

For the Moran model, we have seen that we can define the process either in discrete or in continuous time. To obtain the diffusion limit from the discrete time process, we apply the same reasoning as in the previous section, i.e. we compute the infinitesimal variance. In this case we get

E[(∆x)² | x] = (1/N²) (p_xN+ + p_xN−) = 2x(1 − x)/N²,

where p_k± = k(N − k)/N² are the transition probabilities of the neutral discrete-time Moran model. For this term not to vanish in the infinite population size limit (N → ∞) we need to set ∆t = N^{−2}. This corresponds to N jumps in a time interval 1/N, which matches the number of individuals updated in the Wright-Fisher process during the same time. In addition, a factor 2 appears, which is due to the different update mechanisms: while for a binomial distribution the variance takes a multiplicative form (x(1 − x)), for a birth-death process the birth- and the death-rate are summed up, resulting in the factor 2.

For the continuous-time Moran process we refer to Eqs. (3.7) and (3.9) in the main text. In order to make sure that the variance does not vanish in Eq. (3.9), we need to rescale the time of the original process by N², resulting in the same reasoning as for the discrete-time Moran derivation.
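The factor 2 between the two sampling schemes can also be seen in a direct Monte-Carlo experiment (our own sketch, not from the manuscript): one Wright-Fisher generation is a single binomial draw, while one Moran "generation" is taken to be N single birth-death updates with step probabilities p_k± = k(N − k)/N².

```python
import random

# Monte-Carlo comparison of one "generation" under the two sampling
# schemes (neutral case): one binomial Wright-Fisher update versus N
# single birth-death Moran updates with p_k(+/-) = k(N - k)/N**2.
# Expected per-generation variances: x(1-x)/N versus 2 x(1-x)/N.

random.seed(3)
N, x0, reps = 100, 0.5, 10_000

def wf_generation(x):
    k = sum(1 for _ in range(N) if random.random() < x)  # Binomial(N, x)
    return k / N

def moran_generation(x):
    k = int(x * N)
    for _ in range(N):                  # N birth-death events
        p = k * (N - k) / N**2          # prob. of a +1 step (same for -1)
        r = random.random()
        if r < p:
            k += 1
        elif r < 2 * p:
            k -= 1
    return k / N

var_wf = sum((wf_generation(x0) - x0) ** 2 for _ in range(reps)) / reps
var_moran = sum((moran_generation(x0) - x0) ** 2 for _ in range(reps)) / reps
print(N * var_wf, N * var_moran)   # ~0.25 versus ~0.5 for x0 = 0.5
```

The simulated Moran variance is close to twice the Wright-Fisher variance, as stated in Conclusion 4.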
To conclude, we have seen that the Moran diffusion limit evolves twice as fast as the Wright-Fisher diffusion limit. This explains the larger variance, since a "faster" Brownian motion also spreads further than a "standard" Brownian motion. In terms of the original discrete processes, one can summarize that, for the same number of updated individuals, many small individual jumps, as in the Moran process, accumulate more variance than a single large-effect update, as in the Wright-Fisher process where the whole population is updated at once. The sampling scheme therefore determines the speed (and the variance) of the process even in the limit of large population sizes.

This procedure for deriving Eq. (3.1) from the time-continuous Moran model can be applied to any time-continuous individual based model. The derivation of Eq. (3.1) from a discrete-time individual based model, like the Wright-Fisher process, is outlined in Appendix B.1. There, the limit of the difference of the transition rates gives the expectation of the change in frequency, and σ²(x) = lim_{N→∞} (T⁺_{xN} + T⁻_{xN})/N the variance of the change in frequency.

Figure 1: Individual based simulations of the logistic growth model. (a) For low population sizes, the individual based simulation (solid lines) fluctuates strongly around the deterministic evolution of the population (dashed lines) given by equation (3.20). (b) Increasing the stationary population size, the stochastic fluctuations around the deterministic prediction decrease. Further increasing the population size measure K would result in less and less variance, until eventually the individual based simulation is indistinguishable from the deterministic curve. The parameter values are chosen as follows: β = 2, δ = 1, γ = 1. The initial population sizes are given as stated in subfigure (b).

Figure 2: Wright-Fisher dynamics with selection and mutation. (a) The deterministic system given by Eq.
(3.14) converges to the fixed point (dashed line) and remains there. (b) The stochastic process given by Eq. (3.16) fluctuates strongly in frequency space and spends most of the time close to the monotypic states x = 0 and x = 1.

This form is obtained by plugging in u = a − b − c + d and v = b − d, where a, b, c, d represent the payoffs of an evolutionary game.

Figure 5: Random walk and Brownian motion. (a) The random walk is defined on the discrete state space Z and changes at discrete times in N. (b) The standard Brownian motion, starting in 0, takes values in R and is defined for positive times t in R≥0. We simulate it by the Euler-Maruyama method (also called the stochastic Euler method) with dt = 0.001.
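The Euler-Maruyama simulation of standard Brownian motion mentioned in the Figure 5 caption can be sketched as follows (function name and defaults are ours; the scheme reduces to W_{t+dt} = W_t + √dt · ξ with ξ ~ N(0, 1)):

```python
import math, random

def euler_maruyama_bm(t_end=1.0, dt=0.001, seed=0):
    """Simulate standard Brownian motion on [0, t_end] with the
    Euler-Maruyama scheme: W_{t+dt} = W_t + sqrt(dt) * N(0, 1)."""
    rng = random.Random(seed)
    w, path = 0.0, [0.0]
    for _ in range(round(t_end / dt)):
        w += math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(w)
    return path
```

By construction, the endpoint W_1 of each path is exactly N(0, 1)-distributed, which gives a simple sanity check across many seeds.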
2020-02-07T18:41:26.695Z
2020-02-06T00:00:00.000
{ "year": 2021, "sha1": "ddbc2ffde167f487830f9d3b014dcf8b42c23f8d", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.7205", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "ddbc2ffde167f487830f9d3b014dcf8b42c23f8d", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Mathematics" ], "extfieldsofstudy": [ "Medicine", "Computer Science", "Biology" ] }
252545370
pes2o/s2orc
v3-fos-license
EgoSpeed-Net: Forecasting Speed-Control in Driver Behavior from Egocentric Video Data

Speed-control forecasting, a challenging problem in driver behavior analysis, aims to predict the future actions of a driver in controlling vehicle speed, such as braking or acceleration. In this paper, we address this challenge using egocentric video data alone, in contrast to the majority of works in the literature that use third-person view data, extra vehicle sensor data such as GPS, or both. To this end, we propose a novel graph convolutional network (GCN) based architecture, namely, EgoSpeed-Net. We are motivated by the fact that the position changes of objects over time can provide very useful clues for forecasting future speed changes. We first model the spatial relations among the objects from each class, frame by frame, using fully-connected graphs, on top of which GCNs are applied for feature extraction. Then we utilize a long short-term memory network to fuse such features per class over time into a vector, concatenate such vectors, and forecast a speed-control action using a multilayer perceptron classifier. We conduct extensive experiments on the Honda Research Institute Driving Dataset and demonstrate the superior performance of EgoSpeed-Net.

INTRODUCTION

Understanding and predicting driver behavior is an important problem for road safety, transportation, and autonomous driving. The National Highway Traffic Safety Administration claims that 94%-96% of auto accidents are due to different types of human errors, and the majority of them involve driver negligence or carelessness [3]. Shinar et al. [37] also suggest that drivers are less aware of their skills with more experience, as most of the actions taken by the driver during driving are unconscious. Building accurate prediction models helps us better understand how and why drivers make their decisions. Such knowledge can also be incorporated into the design of autonomous vehicles and driving simulators.

* Xun Zhou is the corresponding author.

Figure 1: Illustration of a typical challenge in forecasting speed-control from egocentric video data, where the same acceleration action is taken in highway and urban scenarios, respectively, leading to contradictory visual observations of the same vehicles.

In this paper we aim to address a challenging problem in driver behavior analysis, namely, speed-control forecasting, using only egocentric (i.e. first-person view) videos collected along the trip. Instead of simply predicting the speed or trajectory of a vehicle, we focus on the prediction of acceleration and braking behaviors of drivers, as they are the most common and essential actions. In addition, a large number of road fatalities are related to speed-control issues [4]. In particular, we formulate the problem as multi-class prediction: given a sequence of egocentric video frames, we predict the speed-control action (i.e. slight acceleration, full acceleration, slight braking, full braking) in the next (few) frames. Note that we use solely egocentric video data as the input because it best represents the driver's first-person visual input during driving. We use the driver's braking and accelerating information from pedal sensor data to define the ground-truth behaviors. We do not use the vehicle's acceleration or GPS information in the input because they are not part of the road environment and may not be available to the driver in real-time.
They are therefore unlikely to have a major and immediate impact on drivers' speed-control behaviors.

Challenges. To address the problem of speed-control forecasting using egocentric video data, following human driving experience, we are motivated by the fact that the position changes of objects over time can provide very useful clues for forecasting future speed changes. However, in practice we face the following challenges:

• The same speed-control action in different scenarios may lead to different visual observations that could significantly confuse the prediction model. As shown in Figure 1a, even though the labeled car is moving away from the driver's field of view, his/her speed is still increasing (we know this from the ground-truth data). Similarly, we know that in Figure 1b the driver is still speeding up while the labeled car is approaching. Such visual contradictions caused by the same speed-control actions widely exist in the real world as well as in our evaluation data.

• Drivers are heavily influenced by their surroundings, such as traffic conditions, road conditions, nearby vehicles, etc. It is challenging to identify consistent relationships, spatial and temporal, among such objects/stuff in egocentric videos that are useful for our prediction.

• The scenes in egocentric video data are more complicated than those from static cameras because of the rapid, dynamic changes in objects and background, especially in driving scenarios.

Approaches. Prior works on driver/human behavior prediction include mobility-based methods and vision-based methods. The first group predicts behaviors (e.g. movement patterns) using GPS trajectories of vehicles or spatial data [30,54]. These techniques do not consider visual input. Vision-based methods typically use fixed-camera video data to recognize actions and behaviors, or to predict the moving paths of pedestrians or drivers [24,36].
Some of these works use R-CNN-based methods to extract objects from raw pixels. Others model the relationships between objects in the scene as graphs [12,16]. A few recent works in computer vision consider egocentric video data as the input. However, they either predict the behaviors of objects in the scene (not the observer), or use other auxiliary data to help the learning (e.g. spatial trajectories, vehicle sensor readings) [32,33]. In contrast, our problem differs from these prior works: we aim to predict the future speed-control behavior of a driver, rather than of objects in the scene. Note that the camera collecting our data is itself moving, while the majority of methods in the literature use videos from fixed cameras, even for first-person views. To address the limitations of prior works, we propose EgoSpeed-Net to learn the spatiotemporal relations among the objects in the videos.

Contributions. Our contributions are listed as follows:

• To the best of our knowledge, we are the first to address the problem of speed-control forecasting for driver behavior analysis purely using egocentric video data.

• We propose a novel deep learning solution, namely, EgoSpeed-Net, a seamless integration of graph representations of videos, GCNs [20], LSTMs, and an MLP that can be trained end-to-end effectively and efficiently.

• We demonstrate the superior performance of our EgoSpeed-Net on the Honda Research Institute Driving Dataset (HDD) [34], compared with state-of-the-art methods.

The rest of this paper is organized as follows. Section 2 discusses related work. In Section 3, we define the driver behavior prediction problem and elaborate our proposed EgoSpeed-Net framework. Section 4 presents comprehensive evaluation results. Finally, we conclude the paper in Section 5.

RELATED WORK

Mobility-based behavior analysis. Existing works mainly focus on pedestrian trajectory prediction [28,49] and cyclist behavioral studies [10,17]. For example, Mangalam et al.
[28] proposed PEC-Net to forecast distant pedestrian trajectories conditioned on their destinations. Huang et al. [17] represented bicycle movements by taking other road users into consideration using a social force model. Only a few existing works [5,27] pay attention to driver trajectory prediction. Liu et al. [27] estimated the lane-change behavior of drivers to predict trajectories. Given a partial GPS trajectory, without any visual input such as images or videos, these methods aim to predict either the destination or the next location along the trajectory. Recent work has considered map information and social context. Zaech et al. [55] predicted action sequences of vehicles in urban environments with the help of HD maps. Nevertheless, we only consider egocentric video as input.

Vision-based behavior analysis. In recent years, deep networks have been widely applied to video analysis.

• Non-egocentric video data: Traditionally, most works in driver behavior analysis focus on the detection of driver drowsiness and distraction, such as drowsy driving, tailgating, lane-weaving behavior, etc. Many driver inattention monitoring systems [43,52] have been designed by taking advantage of eye-state analysis [9,19], facial expression analysis [1,2], driver physiological analysis [6,38], etc. Recent studies took the scene context into consideration to predict pedestrian trajectories [21,25,35] and analyze vehicle movements [8,15]. Liang et al. [24] predicted multiple future trajectories using multi-scale location decoders, while Introvert [36] proposed a conditional 3D visual attention mechanism to infer human-dependent interactions in the scene context. Some of these driver behavior analyses require facial information of the drivers themselves, while others focus on video data recorded from fixed cameras.

• Egocentric video data: Recently, some works have studied trajectory prediction using egocentric videos [40,50,51]. Park et al.
[40] associated a trajectory with the EgoRetinal map to predict a set of plausible trajectories of the camera wearer, while Yagi et al. [50] inferred future locations of people in the video from their past poses, locations and scales, as well as ego-motion. Qiu et al. [33] proposed an LSTM-based encoder-decoder framework for trajectory prediction of humans captured in the scene, while Qiu et al. [32] integrated a cascaded cross-attention mechanism in a Transformer-based encoder-decoder framework to forecast the trajectory of the camera wearer. Meanwhile, some works focus on ego action anticipation [11,13]. Fernando et al. [11] anticipated human actions by correlating the past with the future using Jaccard similarity measures, while Tran et al. [42] distilled information from the recognition model to supervise the training of the anticipation model using unlabeled data. However, most driver behaviors depend on the movements of objects the driver has seen. At the current stage, we assume all driver behaviors are based only on past observations, without any future anticipation. In contrast to the works above, we aim at predicting the behaviors of a driver (i.e. the observer), whose movement patterns are likely to be different from those of the pedestrians/vehicles in the scene or of the camera wearer. Note that some works utilize multiple modalities; however, GPS signals are not always accessible or reliable. Our approach only uses the egocentric data as input, without any other data such as driver trajectories or vehicle sensor readings.

Graph neural networks. For video understanding, early works [39,44] considered the spatial and temporal relationships separately and then fused the extracted features. Recently, more works pay attention to visual relationships among object instances in space and time [12,16].
Inspired by object graphs [18,53] and GCNs, Wang and Gupta [45] proposed to represent videos as space-time region graphs, which model spatial-temporal relations and capture the interactions between humans and nearby objects. Xu et al. [48] adopted GCNs to localize temporal actions with multi-level semantic context in video understanding. Li et al. [23] designed MUSLE to produce 3D bounding boxes in each video clip and represent videos as discriminative sub-graphs for the action recognition problem. Our approach is also based on GCNs, but we differ from these works in: 1) We aim to predict the behavior of the observer rather than of actors in the scene; 2) The camera collecting our data is always moving, while the majority of the existing methods use videos from fixed cameras; 3) Our approach is designed with multi-view object relation graphs, which can reveal different underlying patterns of moving objects from multiple views of the scene.

OUR APPROACH: EGOSPEED-NET

Given a sequence of frames from an egocentric video, our goal is to forecast the speed-control action of the driver in the next (few) frames*. We denote a video clip at the current time t as I_t = {I_i}_{i ∈ [t−T+1, t]}, where each I_i is the image of frame i. We aim at learning a prediction function f : I_t → y_{t+1} to forecast the speed-control action in the next (t + 1)-th frame, y_{t+1}, based on the current data I_t, where y is a multi-valued categorical variable (e.g., different levels of acceleration and braking).

* By default, for simplicity of explanation, we consider the next frame throughout the paper without explicitly mentioning it.

Overview of EgoSpeed-Net

The framework of our proposed model is visualized in Figure 2. Given a sequence of frames from the egocentric video data, we first extract feature vectors of objects in the scene, adopting Mask R-CNN [14] to detect objects in each frame.
Then, we use the bounding box coordinates to represent the objects and select the top N objects in each frame according to their detection confidence scores and traffic-related categories (e.g. car, pedestrian, traffic light, stop sign, etc.). Specifically, we select the top N_car, N_pedestrian, and N_traffic identified objects for the vehicle category (including cars, buses and trucks), the pedestrian category, and the traffic category (including traffic lights and stop signs)†, respectively. Therefore, the feature dimension is T × N × 4 after object identification, where N = N_car + N_pedestrian + N_traffic. Formally, the object features are defined as X_i = {X_i^c | c ∈ {car, pedestrian, traffic}} ∈ R^{N × 4} and X ∈ R^{T × N × 4}. In detail, for the identified objects belonging to category c in frame i, we aggregate the corresponding feature matrix X_i^c ∈ R^{N_c × 4}, whose rows are the bounding box coordinates of the top identified objects of category c in frame i, with N = Σ_c N_c and c ∈ {car, pedestrian, traffic}.

Afterwards, based on the bounding box coordinates of the objects, we build object relation graphs, where each node denotes an object. Each edge in the graphs connects two objects from the same category in the same frame. Besides, we group these graphs by category to provide multiple views of the scene.

† Empirically we observe that these six semantic object categories are sufficient for analyzing the driving scenarios in our data.

Figure 3: Illustration of building multi-view object relation graphs (car, pedestrian, and traffic views).

Next, we perform K-hop localized spectral graph convolution on each object relation graph to capture the local spatial relations among objects in each frame. After graph convolution, we max pool over each object relation graph to aggregate the localized spatial features from the nearby objects into a d-dimensional vector. For each view, we thus extract a T × d matrix to represent the feature vectors of the objects.
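The per-frame feature construction above can be sketched as follows; the constants and helper names are ours, and `detections` stands for the per-category output of an off-the-shelf detector such as Mask R-CNN:

```python
CATEGORIES = {"car": 20, "pedestrian": 10, "traffic": 10}  # top-N per category

def frame_features(detections):
    """Build the per-frame object feature matrix described above.

    `detections` maps a category name to a list of
    (confidence_score, (x1, y1, x2, y2)) tuples. Boxes are kept in
    descending confidence order; missing slots are zero-padded."""
    rows = []
    for cat, top_n in CATEGORIES.items():
        boxes = sorted(detections.get(cat, []), reverse=True)[:top_n]
        rows += [list(b) for _, b in boxes]
        rows += [[0.0, 0.0, 0.0, 0.0]] * (top_n - len(boxes))  # zero padding
    return rows  # shape: (N_car + N_pedestrian + N_traffic) x 4
```

Stacking this matrix over the T frames of a clip yields the T × N × 4 tensor X described in the text.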
Finally, we apply a standard multi-layer Long Short-Term Memory (LSTM) network to learn the temporal dependencies between correlated objects. After the graph convolution and the temporal layer, the representations of the object relation graphs are fused together to generate spatiotemporal features of objects across space and time. Then, a classifier for predicting the speed-control behavior is applied to these spatiotemporal features. We adopt a Multi-Layer Perceptron (MLP) followed by a softmax function for the driver behavior classification.

Modeling Spatial Relations among Objects

As mentioned before, the object relation graph is the core of our EgoSpeed-Net. Because the objects identified in each frame are unstructured data and their movements are dynamic, it is hard to capture these patterns with fixed matrices. Meanwhile, recent Graph Convolutional Network (GCN) [20] based models can successfully learn rich relational information from non-structural data and perform relational reasoning on graphs [7,23,46]. Motivated by these characteristics, we first model the spatial relations between nearby objects by organizing them as graphs and extracting latent representations through graph convolution. Then, we adopt an LSTM-based method to model the temporal evolution of such object relations. We therefore model the spatiotemporal relations among objects across space and time.

Graph definition. For each frame, the nodes of graph G^c correspond to the set of identified objects of category c, whose size N_c is the number of top identified objects in that category. (For convenience, we omit the frame index here.) We construct the adjacency matrix A^c ∈ R^{N_c × N_c} to represent the pairwise relations among objects of category c, where the relation value of each edge measures the interaction between two objects. Currently, we assume each G^c is a complete graph with all edge weights equal to 1. The importance of edge values and edge types will be explored in our future work.

Multi-view object relation graphs.
A single object relation graph refers to one category c in a specific frame i. To capture the dynamic patterns of objects in each category, we build a sequence of graphs G^c = {G_i^c | i = 1, ..., T} on the object feature matrices {X_i^c | i = 1, ..., T} of the objects belonging to category c across frames, where T is the number of frames. As a result, there are 3 graphs in each frame and 3 × T graphs in total for each sample. Figure 3 illustrates this process. Building such multi-view graphs allows us to capture the local spatial relations of objects from the same category in each frame and then to jointly consider the relative changes across time from multiple views. Therefore, our approach can learn the spatiotemporal relations between objects more thoroughly.

Graph convolution. Different from convolution operations on images, graph convolutions compute the response of a node based on its neighbors defined by the graph relations. To save computational cost and extract localized features, we apply the K-hop localized spectral graph convolution [20]. Specifically, if K = 1, we only use the features of the nearest object and the object itself to update the representation of each object. However, the nearest object alone is insufficient to represent the localized features. As mentioned in Section 1, the speed-control behavior is affected not only by the nearest object, but also by the relative positions of nearby objects. By considering the higher-order neighborhood of each object, we can extract richer information from its neighboring objects to reveal its moving pattern. Formally, given the feature matrix X^c ∈ R^{N_c × 4} and the adjacency matrix‡ A = 1 ∈ R^{N_c × N_c}, we compute the normalized Laplacian matrix L = I − D^{−1/2} A D^{−1/2}, where D is the diagonal degree matrix of A. (For convenience, we omit the category and frame indices in this part.)
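The normalized Laplacian L = I − D^{−1/2} A D^{−1/2} can be computed with a short pure-Python sketch (names are ours; matrices are lists of lists, and isolated nodes get a zero inverse-square-root degree):

```python
import math

def normalized_laplacian(adj):
    """L = I - D^{-1/2} A D^{-1/2} for a weighted adjacency matrix `adj`."""
    n = len(adj)
    deg = [sum(row) for row in adj]                       # diagonal of D
    inv_sqrt = [1.0 / math.sqrt(d) if d > 0 else 0.0 for d in deg]
    return [[(1.0 if i == j else 0.0) - inv_sqrt[i] * adj[i][j] * inv_sqrt[j]
             for j in range(n)] for i in range(n)]
```

For the complete graphs used here (all edge weights 1, no self-loops), the diagonal of L is 1 and every row sums to zero, as expected for a graph Laplacian.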
The Laplacian matrix L is symmetric positive semi-definite, so it can be diagonalized via the eigen-decomposition L = U Λ Uᵀ, where Λ is a diagonal matrix containing the eigenvalues, U consists of the eigenvectors, and Uᵀ is the transpose of U. We can approximate the spectral convolution with a K-th order Chebyshev polynomial expansion,

g_θ ⋆ x ≈ Σ_{k=0}^{K} θ′_k T_k(L̃) x,

where L̃ is the rescaled Laplacian and T_k denotes the k-th Chebyshev polynomial. Therefore, it is much more efficient, requiring only the learnable parameters θ′ ∈ R^{K+1} and no eigen-decomposition of L. Besides, it also updates the features of each object node by aggregating K-hop neighborhood information.

Modeling Temporal Dependencies

Temporal context information of each frame is crucial for capturing the dynamic dependencies of objects across consecutive scenes. Particularly, for long-range temporal dependencies, it is necessary to learn the semantic meaning of the relative relations from the movements of objects in the scenes. In order to model the dynamics of both shape and location changes, we first max pool the output features of the graph convolution over nodes to aggregate the local spatial relation information, and then apply a standard multi-layer LSTM to extract the temporal features of objects from the same category. Because the movements of objects from different categories differ significantly, we pay attention to the different underlying patterns of moving objects in the scene. For example, objects from the traffic category are always static, while pedestrians and cars move according to their own intentions. To implement this idea, we propose parallel LSTMs with joint training. Specifically, we jointly train three sub-LSTMs on the training set. Each sub-LSTM models the moving pattern of objects from one category. We parallelize the three sub-LSTMs and merge them with a concatenation layer to jointly infer the speed-control behavior.
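The K-hop Chebyshev filter described above can be sketched for a single feature channel using the recurrence T_{k+1}(L̃) = 2 L̃ T_k(L̃) − T_{k−1}(L̃); the function name is ours, and `lam_max` (the largest eigenvalue used for rescaling) is an assumed input:

```python
def cheb_conv(lap, x, theta, lam_max=2.0):
    """K-hop Chebyshev approximation of a spectral graph convolution:
    y = sum_k theta[k] * T_k(L~) x, with L~ = 2 L / lam_max - I."""
    n = len(lap)
    lt = [[2.0 * lap[i][j] / lam_max - (1.0 if i == j else 0.0)
           for j in range(n)] for i in range(n)]
    def matvec(m, v):
        return [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
    t_prev, t_cur = x[:], matvec(lt, x)          # T_0(L~) x and T_1(L~) x
    out = [theta[0] * t_prev[i] for i in range(n)]
    for k in range(1, len(theta)):
        out = [out[i] + theta[k] * t_cur[i] for i in range(n)]
        nxt = matvec(lt, t_cur)
        # Chebyshev recurrence: T_{k+1} = 2 L~ T_k - T_{k-1}
        t_prev, t_cur = t_cur, [2.0 * nxt[i] - t_prev[i] for i in range(n)]
    return out
```

Each additional Chebyshev order mixes in one further hop of neighborhood information, which is what makes the filter K-localized without any eigen-decomposition.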
Finally, we can predict the speed-control behavior of a driver as follows:

H^c = MaxPool(GCN(A^c, X^c)),
Z^c = LSTM_c(H^c),
ŷ = δ(W [Z^car; Z^pedestrian; Z^traffic] + b),

where H^c ∈ R^{T × d} is the aggregated representation of the object relation graphs of category c, c ∈ {car, pedestrian, traffic}, Z^c ∈ R^m is the spatiotemporal feature extracted from the object relation graphs of category c, and the concatenation [Z^car; Z^pedestrian; Z^traffic] ∈ R^{3 × m} is the final feature. W and b are the weight matrix and bias vector, respectively, and δ(.) denotes an activation function, for which we adopt Softmax here. Overall, EgoSpeed-Net can be trained in an end-to-end manner. We adopt the Cross-Entropy loss for this multi-class prediction problem,

ℒ = −(1/M) Σ_{i=1}^{M} Σ_j y_{i,j} log ŷ_{i,j},

where y_{i,j} is the ground-truth behavior label, ŷ_{i,j} is the softmax probability of the j-th class for sample i, and M is the total number of egocentric video clip samples in a training batch.

EXPERIMENTS

In this section, we first introduce the dataset and the implementation details of our proposed EgoSpeed-Net. Then, we compare its performance with the state-of-the-art methods. We also conduct extensive experiments to validate the effectiveness of the proposed components of our model. Finally, we demonstrate a case study.

Dataset. We adopt the Honda Research Institute Driving Dataset (HDD) [34]. The dataset includes 104 hours of real human driving in the San Francisco Bay Area, collected using an instrumented vehicle equipped with different sensors. The current data collection spans February 2017 to October 2017. The video is converted to a resolution of 1280 × 720 at 30 fps. We downsample the egocentric video data (i.e. only 3 frames per second are kept) and exclude the initial stop part (the driver in the first frame of a valid clip should be moving). Besides, we also exclude vague video clips in which it is difficult to identify objects in the scene, including heavy rain, overexposed lighting, dark night conditions, etc. At the current stage, we aim to forecast driver behavior in both highway and urban scenarios without turning (i.e.
steering wheel angle is within the range [-30, 30] degrees). Based on our observations, drivers mostly keep driving forward except when changing routes or overtaking. In reality, if a driver wants to change direction, he/she has to observe the surroundings through the rear-view mirror for safety. Therefore, such decisions are self-determined and hard to predict without information from the rear mirror. We leave this for future work. After processing the data as described above, we generated 58721 video clips in total (29275 from highway and 29446 from urban) and use 70% (41105 samples) for training. As shown in Figure 4, we use the brake pedal pressure and the percentage of accelerator pedal angle to derive the driver speed-control behaviors, and the steering wheel angle to select samples without turning. These sensor data are only used to generate ground-truth labels and are not used as input to our model. In this paper, we define two levels of braking and acceleration behavior based on the collected brake pedal pressure and percentage of accelerator pedal angle, respectively. We name the four actions Full Braking, Slight Braking, Slight Acceleration, and Full Acceleration. According to the histogram of brake pedal pressure in Figure 4a, the braking behavior of drivers shows different patterns in different environments. Therefore, we set 958 kPa and 1461 kPa (the median values of the respective histograms) as the thresholds between the two braking levels for highway and urban scenarios, respectively. In other words, given a pedal pressure of 1000 kPa, we assume the driver is braking fully in a highway scene, but braking slightly in an urban scene. Similarly, according to the percentage of accelerator pedal angle in Figure 4b, we set 22% and 19% (the median values of the respective histograms) as the thresholds between the two acceleration levels for highway and urban scenarios, respectively.
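The labeling rule above can be encoded as a small sketch using the thresholds stated in the text (958/1461 kPa for braking, 22%/19% for acceleration); whether the thresholds are inclusive, and the exact precedence of braking over acceleration, are our assumptions:

```python
# Thresholds from the text: brake-pressure medians of 958 kPa (highway)
# and 1461 kPa (urban); accelerator-pedal medians of 22% and 19%.
BRAKE_THRESHOLD_KPA = {"highway": 958, "urban": 1461}
ACCEL_THRESHOLD_PCT = {"highway": 22, "urban": 19}

def speed_control_label(scene, brake_kpa=0.0, accel_pct=0.0):
    """Map pedal sensor readings to one of the four behavior classes."""
    if brake_kpa > 0:
        level = "Full" if brake_kpa >= BRAKE_THRESHOLD_KPA[scene] else "Slight"
        return f"{level} Braking"
    level = "Full" if accel_pct >= ACCEL_THRESHOLD_PCT[scene] else "Slight"
    return f"{level} Acceleration"
```

This reproduces the paper's example: 1000 kPa of brake pressure counts as full braking on the highway but only slight braking in an urban scene.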
As a result, we have 19727, 18546, 9607, and 10841 samples for full braking, slight braking, slight acceleration, and full acceleration, respectively. In the training phase, we oversample the slight acceleration and full acceleration behaviors to overcome the imbalanced-data problem.

Implementation. We set up the experiments on a High Performance Computing cluster, using a computing node with 256 GB RAM, a 2.6 GHz 16-core CPU, and Nvidia Tesla P100 accelerator cards. The primary development package is PyTorch 0.4.0 on Python 3.6. Given a video clip with history length T frames, we first apply Mask R-CNN [26] with a ResNet-50-FPN backbone, pretrained on the COCO dataset, to select the top N (= N_car + N_pedestrian + N_traffic) objects in each frame according to their confidence scores in the related categories. For scenes with few identified objects, we fill the object features with 0 if not enough objects are found in the scene. The histograms of identified objects from the different categories are shown in Figure 5. To include most objects in the scene, we set N_car = 20 and N_pedestrian = N_traffic = 10. Therefore, we generate feature matrices of dimension N × 4, where N = 40 and 4 stands for the bounding box coordinates of each object in the frame. Then, we build object relation graphs and perform a two-layer K-hop spectral graph convolution on them with 16 and 32 hidden nodes, respectively. After max pooling, the spatial feature vector has dimension d = 32. We apply a two-layer LSTM with 64 hidden units, and the dimension of the output spatiotemporal feature is m = 64. Finally, the behavior classifier consists of 2 hidden layers with 64 and 32 hidden nodes, followed by an output layer with a softmax activation function. In all experiments, we adopt an early stopping criterion, set the batch size to 512, and select the Adam algorithm with default settings as our optimizer.
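The cross-entropy objective adopted earlier for this four-class problem can be sketched in plain Python (helper names are ours):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(batch_logits, labels):
    """Mean cross-entropy over a batch; `labels` holds the index of the
    true class for each sample, matching the loss defined above."""
    total = 0.0
    for logits, y in zip(batch_logits, labels):
        total -= math.log(softmax(logits)[y])
    return total / len(labels)
```

A uniform prediction over two classes gives a loss of log 2, while a confident correct prediction drives the loss toward zero.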
In our experimental settings, if the validation loss decreases by less than 10⁻⁶ over 50 epochs, training is terminated. In the end, we save the best weights instead of the latest weights.

State-of-the-art Comparison

To validate the effectiveness of our proposed method on the driver speed-control behavior prediction problem, we compare EgoSpeed-Net with the state-of-the-art methods on the Honda dataset [34]. For all experiments, our problem settings include history length T ∈ {2, 5, 10, 15}, future time F ∈ {1, 5, 10}, and neighborhood size K ∈ {1, 3, 5}. The default setting is T = 10, F = 1, and K = 1. Here, future time F indicates that instead of predicting the speed-control behavior at the next frame t + 1, we predict the behavior exactly at frame t + F. Table 1 summarizes the comprehensive experimental comparison of EgoSpeed-Net with the state of the art. The baseline models are as follows.

• PointNet [31] is a well-known deep network architecture for point clouds (i.e. unordered point sets), and we treat the object features as a set of point clouds. Note that we conduct the necessary normalization and augmentation on our data, including random 4D rotation and added random noise. Because we only select the top N = 40 objects in each frame, we set the hidden dimensions of the multi-layer perceptron layers to (16, 32, 128, 64, 32) instead of (64, 128, 1024, 512, 256) as in the original experimental settings.

• Temporal Recurrent Network (TRN) [47] is proposed to model greater temporal context for the online action recognition problem. For a fair comparison, we exclude the parts involving sensor data and extract the same visual feature from the Conv2d_7b_1×1 layer of InceptionResnet-V2 [41] pretrained on ImageNet [22], as done in TRN.

• Spatial-Temporal Interaction Network (STIN) [29] is proposed to model the interaction dynamics of objects composing an action for compositional action recognition.
To be fair, we use the bounding box coordinates of the top N identified objects in the scene as input and follow the same hyperparameter settings as in STIN.

Metrics. Recall and accuracy are used to evaluate our model:

Recall_c = n_c / m_c,  Accuracy = (Σ_c n_c) / (Σ_c m_c),

where n_c and m_c are the numbers of correctly predicted samples and of all samples for each action class c, respectively. As shown in Table 1, our proposed method, EgoSpeed-Net, consistently outperforms all the existing methods by a good margin. In particular, the Slight Acceleration behavior is the most difficult one to predict for all methods; EgoSpeed-Net is the only one whose accuracy exceeds 60%, while the others are all around 50% or less. For both the Slight Braking and Full Braking behaviors, EgoSpeed-Net achieves 90% accuracy. This outstanding performance shows the effectiveness of our approach, EgoSpeed-Net, in simultaneously and sufficiently capturing the local spatial relationships and temporal dependencies among identified objects across space and time.

Ablation Study

In this subsection, we investigate the impact of each design choice in EgoSpeed-Net. Comprehensive results are summarized in Table 1.

Object relation graphs. We first evaluate the benefit of building object relation graphs over using the original object features. As an alternative, we substitute the object relation graph with a PointNet [31] module as described in Subsection 4.1. For each frame, we feed the top N object features into a shared Multi-Layer Perceptron (MLP) and then aggregate the point features by max pooling over each image. Finally, we concatenate the informative points of the point cloud across frames and feed them into the MLP followed by a softmax function. To be fair, we compare its performance with the basic version of EgoSpeed-Net, namely, "Base" in Table 1. The Base model only includes the car-graph from the multi-view design and feeds the spatial features directly to the classifier to predict the speed-control behavior of a driver, skipping the temporal layer.
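The per-class recall and overall accuracy defined above can be sketched as follows (the function name is ours):

```python
def per_class_recall_and_accuracy(y_true, y_pred):
    """Recall_c = n_c / m_c per class c; accuracy = (sum n_c) / (sum m_c)."""
    correct, total = {}, {}
    for t, p in zip(y_true, y_pred):
        total[t] = total.get(t, 0) + 1
        correct[t] = correct.get(t, 0) + (1 if t == p else 0)
    recall = {c: correct[c] / total[c] for c in total}
    accuracy = sum(correct.values()) / sum(total.values())
    return recall, accuracy
```

Computing recall per class is what surfaces weaknesses on minority behaviors such as Slight Acceleration, which an overall accuracy number alone would hide.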
As shown in Table 1, "Base" outperforms PointNet for all speed-control behaviors. This demonstrates that building such a graph structure allows us to model the interactions between objects in the same frame more completely and efficiently, while treating these object features as a set of point clouds may miss some spatial relations. Multi-view graphs. We also perform an ablation study on the multi-view design to validate its effectiveness. The results are listed in Table 1 with the label "Base+Multi". They suggest that providing multiple views of different categories models more diverse relations and collects more sufficient information for a better representation. The accuracy from multiple views is increased by 15% relative to the Base model. In addition, we also consider building a single object relation graph using all objects from all categories, namely Base+Single. Given that we observe results similar to the Base model, we conclude that simply involving more objects in the car-graph does not provide additional information. Therefore, it is necessary to build the object relation graphs from multiple views. Temporal module. Furthermore, we investigate how the temporal module influences the overall performance. First, we evaluate the impact of introducing a standard two-layer LSTM into the Base model. As shown in Table 1, we observe that adding such a temporal module leads to a significant and consistent improvement (overall accuracy increases by up to 40% of the original value) compared with the Base model and is able to further boost overall accuracy from 77.75% to 82% when combined with the aforementioned multi-view design. Then, we compare the performance of including an extra temporal module in other methods. For example, we add the same LSTM layers before the speed-control behavior classifier in PointNet. We report the accuracy measures of these two methods with the labels "Base+T" and "PointNet [31]+T" in Table 1.
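A minimal numpy sketch of the kind of temporal module discussed above: a single LSTM layer run over a sequence of per-frame spatial feature vectors, returning a final hidden state that summarizes the history window. The paper stacks two such layers; the weight shapes and names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(xs, Wx, Wh, b):
    """Run one LSTM layer over a sequence of per-frame feature vectors.
    xs: (T, D) sequence; Wx: (D, 4H); Wh: (H, 4H); b: (4H,).
    Returns the final hidden state (H,) summarizing the history window."""
    H = Wh.shape[0]
    h = np.zeros(H)
    c = np.zeros(H)
    for x in xs:
        z = x @ Wx + h @ Wh + b      # all four gate pre-activations at once
        i, f, g, o = np.split(z, 4)  # input, forget, cell, output gates
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # update the cell state
        h = o * np.tanh(c)           # new hidden state
    return h
```

In the EgoSpeed-Net setting, `xs` would hold the graph-derived spatial feature of each frame in the history window, and the final `h` would feed the speed-control behavior classifier.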
We see that the temporal module is helpful in improving the performance of both methods. Moreover, our Base model can take full advantage of the extracted temporal features, with a higher improvement than PointNet. Note that combining the multi-view design and the temporal module yields our proposed approach, EgoSpeed-Net, which demonstrates that our framework can effectively and sufficiently model the spatiotemporal relations between objects across space and time. Efficiency analysis. We evaluate the efficiency of each component in EgoSpeed-Net. Figure 6 shows the training loss curves of the different variations, as well as the inference time on the total test dataset. Because of the early stopping criterion, Base and Base+Multi stop before overfitting. These curves clearly show that EgoSpeed-Net learns the driver behavior efficiently in the first few epochs, as its loss drops significantly. Comparing the inference time of Base+Multi and Base+T, we see that the multi-view component is time-consuming. In particular, the Base+T model is the most efficient one, at the expense of a 5% decrease in accuracy compared with EgoSpeed-Net. However, if we average the inference time over the entire test set, the increased time of EgoSpeed-Net is still acceptable. Exploration of different settings. With all the design choices set, we also evaluate their performance under different settings. The results are summarized in Table 2. Here, we list four sets of experiments. Set 2 has the basic settings with history length h = 15, neighborhood size n = 5, and future time f = 1. The variation of history length, neighborhood size, and future time is shown in Set 1 (h = 2), Set 3 (n = 1), and Set 4 (f = 10), respectively. Comparing Set 1 and Set 2, we observe that the multi-view design makes the largest contribution to the improvement when the history length h = 2.
As the history length increases, the temporal module becomes the main contributor with a 56% accuracy improvement, while the multi-view design only boosts accuracy by 27%. This observation shows that the longer the history, the more effective the temporal module. Comparing Set 2 and Set 3, when more objects are included in the neighborhood, the improvement for the three behaviors other than slight acceleration is limited. We see that the information from the nearest object is sufficient to capture the underlying spatial relations between objects in the scene for the full braking, slight braking, and full acceleration behaviors. However, slight acceleration is the most difficult behavior to predict and requires more information from nearby objects in the scene. Comparing Set 2 and Set 4, we see that our approach can predict the speed-control behavior more accurately at a future frame. Given a video clip, a driver needs time to respond to the environment. This reaction time differs from person to person; therefore, it is hard to tell which predicted frame is the most accurate. Our model yields its best performance on the slight acceleration behavior with a significant improvement of 11%. Exploration of different Top K objects. We also evaluate EgoSpeed-Net with different Top K objects. According to the histogram of identified objects in Figure 5, we select the Top K with a set of smaller numbers (K_car = 10 and K_pedestrian = K_traffic = 5) and a set of larger numbers (K_car = 30 and K_pedestrian = K_traffic = 20). Results are shown in Figure 7. According to the histogram, Top 10-5-5 selects only part of the objects in the scene, while Top 20-10-10 includes most objects. We see that including as many objects as possible complements the information of the scene and improves the prediction accuracy of the speed-control behaviors. However, if we select all objects in the scene with the Top 30-20-20 settings, the performance decreases.
For example, the number of identified objects from car-related categories in most scenes is less than 20, referring to Figure 5. Therefore, raising the number to a larger value (e.g., 30) may not make much difference and may cause the matrix to be very sparse. Similarly, for the pedestrian- and traffic-related categories, selecting too many objects may lead to a sparse matrix. As a result, we select the best settings with K_car = 20 and K_pedestrian = K_traffic = 10 in our experiments. Case Study As shown in Tables 1 and 2, slight acceleration is the most difficult behavior to predict, with the lowest accuracy among the speed-control behaviors. Figure 8a showcases the speed-control behaviors of a driver during a few consecutive frames (245-259). For convenience, we sample one frame per second to represent the scene observed along the route of the driver. After the pedestrian has crossed the road, the driver has already started to accelerate, even though there is a car moving horizontally from right to left in the scene. In detail, the driver starts accelerating at frame 252, while still at a certain distance from the crosswalk. After slightly accelerating for three frames, the driver approaches the crosswalk and starts fully accelerating. Comparing frame 249 and frame 252, except for the moving car marked in the red rectangle, there are no changes in the other objects in the scene. Based on the relative change in the location of this moving car, the driver decides to accelerate. Compared with the other methods, our model can predict this successfully. This example demonstrates that our approach can capture such small relative changes of objects over a few consecutive frames and reveal the underlying pattern of the slight acceleration behavior. Another example in Figure 8b shows the slight braking behavior from frame 7127 to 7139. When a car appears from a cross street, the driver decides to brake slightly. At frame 7133, our model predicts it as full braking, which is acceptable.
We see that it is hard to distinguish between slight braking and full braking in the absence of speed information. In the future, we will extract more information from the scene to complement the video understanding. CONCLUSION This paper investigated the problem of predicting the speed-control behavior of drivers. Given a segment of egocentric video recorded from a continuous trip, we aim to learn a model that predicts the speed-control behavior of a driver based on the visual contents from his/her point of view in the past few seconds. This problem is important for understanding the behaviors of drivers in road safety research. Prior work did not address this problem, as it either used static camera data or only predicted the behaviors of targeted objects in the scene rather than the observer. The few works that estimated egocentric trajectories required extra data (e.g., trajectories or sensor data). In this paper, we proposed EgoSpeed-Net, a GCN-based framework to address the problem. EgoSpeed-Net uses multi-view object graphs and a parallel LSTM design to model the diverse relations of objects in the scene across space and time. Experimental results on the HDD showed that our proposed solution outperforms the state-of-the-art methods.
2022-09-28T06:44:51.114Z
2022-09-27T00:00:00.000
{ "year": 2022, "sha1": "969109db38188a5e8c7f9a72a23f1df14f48a9f9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "969109db38188a5e8c7f9a72a23f1df14f48a9f9", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
234504157
pes2o/s2orc
v3-fos-license
Simulation of a Neutron Source at the KFSH&RC CS-30 Cyclotron The aim of this work is to optimize the parameters of the CS-30 cyclotron neutron source at King Faisal Specialist Hospital & Research Center (KFSH&RC). The CS-30 cyclotron is a positive-ion machine capable of accelerating protons with internal and extracted beam currents up to 100 μA and 60 μA, respectively. The Geant4 simulation toolkit, based on Monte Carlo methods, was used to study and compare the energy spectra and the angular distributions of the neutrons resulting from a 26.5 MeV proton beam on a 0.5 cm thick Beryllium-9 target with a 0.15 cm Copper-63 backstop. Introduction BNCT (Boron Neutron Capture Therapy) is a cancer treatment technique based on the reaction of 10B with thermal or epithermal neutrons [1], which produces an alpha particle (4He) and a 7Li nucleus [2,3]. The thermal neutron beam treats cancer when concentrated on affected tissue without destroying neighboring normal tissue [4]. Neutrons to be used with the BNCT cancer treatment technology are produced at the King Faisal Specialist Hospital and Research Center (KFSH&RC) using the CS-30 cyclotron. In this work, the production of neutrons for BNCT is simulated, studied and compared using a Geant4 (version 4.10.04) simulation code and ROOT. CS-30 Cyclotron The CS-30 cyclotron system at King Faisal Specialist Hospital & Research Center is designed for positive-ion acceleration. It has been devoted to medical applications and research purposes since it was established in 1982 [5]. A unique feature of the cyclotron is its capability to accelerate different particles in order to produce different beam currents [6]. Geant4 Simulation Geant4 is a Monte Carlo simulation toolkit used to simulate the passage, interaction and transport of particles through matter [7].
The physics list FTFP_BERT was used, which includes three models: the Fritiof (FTF) model, the Precompound (P) model and the Bertini cascade (BERT) model [8]. A Geant4 code was developed to simulate the CS-30 cyclotron proton beamline and target as shown in figure 1. Comparing Targets For the first part of the validation process, a comparison of simulated neutron production in Beryllium-9 and Lithium-7 [9,10] targets of similar dimensions bombarded by 26.5 MeV proton beams was made. The resulting energy spectra obtained from these simulations are shown in figure 2. In general, the 9Be(p,n) reaction gives a higher neutron yield than 7Li(p,n). The neutron energy spectrum of 7Li(p,n) is softer and easier to moderate; neutrons from 7Li(p,n) slow down to thermal energy levels more readily than those from 9Be(p,n). Comparing Beams In the second part of the validation, different proton beam energies as well as a deuteron beam were simulated and compared. Figure 11 clearly shows that the resulting neutrons are mostly fast neutrons that cover a wide spectrum of energies from 0 MeV up to just below the incident proton energies. The polar angle distribution contains a large number of neutrons in the proton beam direction. The number of neutrons decreases with broadening scattering angles and with increasing proton energy, as shown in figures 12-15. Another method of producing neutrons with a Beryllium target is the 9Be(d,n) reaction, where the Q-value is large and positive [11], so the spectra extend to cover energy ranges higher than the incident deuteron beam energy. The resulting neutron yields are mostly fast neutrons, as shown in figure 16. The relationships between the energy distribution and the polar angular distribution of the neutrons are shown in figure 17. The results indicate that most of the yield at different energies is concentrated in the deuteron beam direction, which has a polar angle of 0°.
However, the neutron yield decreases with increasing energy and is equally distributed over all azimuthal angles between −90° and 90°, except for the deuteron beam direction (0°), which has the highest concentration, as shown in figure 18. Conclusions The neutron yield and the characteristics of the neutron spectra depend on the target element and the energy of the incident protons. The results indicate that the 9Be(p,n) reaction is suitable for producing fast neutrons with energies between 1 and 22.6 MeV, emitted at angles between −90° and 90° relative to the incident protons, mostly at zero degrees, which means it would be preferable to place the neutron detector facing the beam direction. Using Lithium-7 as an alternative target is beneficial for moderating the fast neutrons into thermal neutrons. The possibility of producing high-energy neutrons was investigated by increasing the energy of the proton beam and by using a deuteron beam.
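As a rough cross-check of the spectral endpoints discussed above, the maximum neutron energy can be estimated from the beam energy plus the reaction Q-value (recoil corrections neglected). The Q-values in the comment are standard tabulated values, not taken from this paper; the simulated endpoint of 22.6 MeV sits below the kinematic limit because of energy loss in the thick target.

```python
def max_neutron_energy(beam_energy_mev, q_value_mev):
    """Approximate kinematic endpoint of the neutron spectrum:
    E_n,max ≈ E_beam + Q (recoil of the residual nucleus neglected)."""
    return beam_energy_mev + q_value_mev

# Tabulated Q-values: 9Be(p,n)9B: -1.85 MeV; 9Be(d,n)10B: +4.36 MeV.
# A positive Q for 9Be(d,n) is why its spectrum extends above the beam energy.
```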
2020-12-24T09:05:46.577Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "ffd10e8645bb7badc3132766da817bdbfccb629e", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1643/1/012199", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "2baf261edb31285fecdc543ac5b7d34cf427367b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
231856413
pes2o/s2orc
v3-fos-license
Protocol for the determination of intracellular phase separation thresholds Summary To date, phase separation studies have largely been limited to in vitro assays using non-native conditions and aggregation-prone recombinant proteins that are often difficult to purify. This protocol describes the determination of relative protein concentration thresholds for phase separation through fluorescent imaging of GFP-tagged proteins in cells. The commercial availability of various plasmids and antibodies, as well as advances in gene editing, allows this procedure to be modified for the study of various phase-separating proteins in their relevant contexts. For complete details on the use and execution of this protocol, please refer to Lee et al. (2020). BEFORE YOU BEGIN This protocol was employed in a recent publication to establish the relative threshold concentration of G3BP1 protein that dictates stress granule assembly (Lee et al., 2020). To examine this in cells, we transiently transfected cells lacking the stress granule nucleators G3BP1 and G3BP2 (collectively referred to as G3BP) with GFP-tagged G3BP1 and measured stress granule initiation time as a function of GFP-G3BP1 intensity on a cell-by-cell basis.
We then measured the amount of endogenous G3BP1 in wild-type or mutant cells relative to the determined threshold concentration. By doing so, we demonstrated that moderate translational suppression of G3BP could significantly reduce stress granule formation. Others have also utilized this protocol to examine the effects that G3BP1 dimerization or binding partners have on the concentration threshold of G3BP1 necessary to initiate stress granule assembly in cells (Yang et al., 2020). MATERIALS AND EQUIPMENT Experiments were performed using a Bruker Opterra II swept-field confocal microscope. Cells were maintained at 37°C and supplied with 5% CO2 using a Bold Line Cage Incubator (Okolabs) and an objective heater (Bioptechs). Images were acquired using a 60× Plan Apo 1.4 NA oil objective with Perfect Focus (Nikon) engaged for the duration of the capture. Timing: 3 days This section describes how to seed and transfect cells with the GFP-tagged gene of interest. CRITICAL: The assembly of stress granules and other phase-separated membrane-less organelles is highly dependent on the concentration of key nucleating factors. Therefore, investigators should utilize cells lacking their gene of interest and optimize the amount of plasmid DNA used during transfection to minimize spontaneous stress granule formation while also providing a wide dynamic range for analysis. 1. The day before transfection, seed cells such that they will be 40% confluent on the day of transfection. a. Investigators should determine the optimal confluency at the time of transfection. i. For G3BP knockout U2OS cells, seed 20,000 cells per well in a 4-well chamber slide 16-24 h before transfection. 2. 16-24 h after seeding, dilute 200 ng of plasmid DNA containing GFP-G3BP1 or your GFP-tagged gene of interest in Buffer EC from the Effectene Transfection Reagent kit for a total volume of 60 μL. 3.
Add 1.6 μL of Enhancer from the Effectene Transfection Reagent kit, vortex for 1 s, and briefly spin down the mixture. 4. Incubate at 20°C-25°C for 2-5 min. 5. Add 5 μL of Effectene Transfection Reagent, vortex for 10 s, and briefly spin down the mixture. 6. Incubate at 20°C-25°C for 5-10 min. 7. Meanwhile, remove cells from the incubator, gently aspirate the growth medium, and add 350 μL of fresh growth medium. 8. Add 350 μL of growth medium to the transfection complex, mix well by pipetting, and add the transfection complex dropwise onto cells. 9. Gently swirl the slide to distribute the transfection complex and return cells to the incubator. 10. 24 h later, remove cells from the incubator, gently aspirate the growth medium and add 400 μL of fresh growth medium. 11. Return cells to the incubator for an additional 24 h. Note: Refer to the manufacturer's instructions for troubleshooting. Alternative transfection methods can be used. Timing: 1 day This section describes how cells should be visualized to measure relative GFP intensity and stress granule initiation time for intracellular threshold determination. a. The use of an objective heater (Bioptechs) can help to maintain the temperature at 37°C. 13. Set the 488 nm laser to 80% power and 100 ms frame exposure time using a 35 μm slit. a. An average power of 0.37 mW and irradiance at the sample of 2 W/cm2 were used; however, investigators should optimize the imaging parameters. 14. Identify cells for imaging. a. Transient transfection of your gene of interest should result in cell-to-cell variation in expression levels. Select cells with varying intensities of GFP (Figure 1A). i. Cells with GFP-G3BP1 intensities ranging from 700 to 5000 RFU were selected. b. Avoid cells with spontaneous stress granules. Cells should not exhibit stress granules prior to treatment with sodium arsenite. Troubleshooting 1 c. Within Prairie View, the selected fields can be stored as stage locations in the ''XY-Stage'' tab. 15.
Take a multipoint time-lapse, imaging each field every 40 s for two iterations. a. Within Prairie View, a time-lapse ''T-Series'' can be set up to ''Run at all XYZ stage locations.'' b. These initial images will be used to quantify GFP intensity for each cell prior to stress granule induction and to ensure proper stage movement and focus between selected stage locations. 16. Treat cells with sodium arsenite (t = 0) for a final concentration of 500 μM. a. Dilute the 50 mM stock solution of sodium arsenite to 1 mM with growth medium. b. Add 400 μL of the 1 mM sodium arsenite solution to cells (cultured in 400 μL of growth medium) to allow for sufficient mixing without disturbing the slide. 17. Immediately begin a multipoint time-lapse and collect images for 2 h (Figure 1B). a. Investigators should optimize the duration of the experiment for the specific cell line used. Pause point: The next steps can be performed at any time. Quantification of GFP intensity and stress granule initiation time Timing: 1 day This section describes how to measure GFP intensity and stress granule initiation time on a cell-by-cell basis. 18. Import image files for analysis into ImageJ. 19. Create an XY table in GraphPad Prism with one row for each cell. GFP intensity will be entered as the X value and stress granule initiation time will be entered as the Y value. 20. Select a region of interest from the background and use the Analyze > Measure command to measure the mean intensity for background correction. 21. Quantify the GFP intensity of each cell from the frame before sodium arsenite addition. a. Within ImageJ, use the ''Freehand Selections'' tool to outline a cell (Figure 1C). b. Use the Analyze > Measure command to determine the total cell integrated density, area, and mean intensities. c.
To measure the GFP intensity of the cytoplasm only, use the ''Freehand Selections'' tool to outline the nucleus and use the Analyze > Measure command to determine the integrated density and area of the nucleus. These values can be subtracted from the total cell measurements to determine the average intensity from the cytoplasm. i. ((Total cell integrated density − Nuclear integrated density)/(Total cell area − Nuclear area)) − Mean background intensity = Corrected mean cytoplasmic intensity d. Investigators should determine the appropriate region of interest selection method. 22. To measure the stress granule initiation time of each cell, identify the time after sodium arsenite treatment when two stress granules form (Figure 1B). 23. The associated graph should have GFP intensity on the X-axis and stress granule initiation time on the Y-axis with each cell plotted as a single point. Using this graph, investigators can determine the threshold for intracellular phase separation (Figure 2A). Pause point: The next steps can be performed at any time. Determination of endogenous protein levels Timing: 4-5 days This section describes how to seed and immunostain cells in order to measure endogenous protein levels relative to the determined threshold concentration. Note: In our case, we sought to quantify G3BP1 protein levels in wild-type cells or in cells lacking a translational repressor of G3BP relative to the identified threshold. To do so, we transfected G3BP knockout cells with GFP-G3BP1 as described in steps 1-9 and seeded either wild-type or mutant cells. (Figure 2A legend: Cells were treated with 500 μM sodium arsenite to induce stress granule assembly, and the time at which stress granule assembly initiated was quantified and correlated to GFP-G3BP1 protein levels (n = 3, 168 total cells analyzed). GFP-G3BP1 expression beyond a threshold (dotted red line) results in enhanced stress granule formation.)
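The background-correction formula in step 21c can be expressed as a small helper; the function and argument names below are illustrative, not part of the protocol.

```python
def corrected_cytoplasmic_intensity(cell_intden, cell_area,
                                    nuc_intden, nuc_area,
                                    mean_background):
    """Mean cytoplasmic intensity after subtracting the nuclear region
    and the mean background, as in step 21c:
    ((cell IntDen - nuclear IntDen) / (cell area - nuclear area)) - background."""
    cyto_mean = (cell_intden - nuc_intden) / (cell_area - nuc_area)
    return cyto_mean - mean_background
```

The integrated density and area values are exactly the quantities ImageJ's Analyze > Measure command reports for the whole-cell and nuclear selections.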
Then, we used live-cell imaging to measure GFP intensity and subsequently fixed and immunostained cells to measure G3BP1 immunofluorescent intensity in wild-type, mutant, or transfected G3BP knockout cells. Note: As noted previously, this portion of the experiment can be performed at any time. 24. Use a gridded chamber slide or simply mark X- and Y-axes on the bottom of the slide with an oil-resistant marker. a. U2OS cells are highly adherent; however, cell types that do not readily adhere to tissue culture plates may benefit from coating plates with a binding agent (e.g., poly-lysine). CRITICAL: It is essential that investigators are able to relocate cells because transfected cells will be visualized by live-cell imaging to measure GFP intensity and subsequently immunostained to measure immunofluorescent intensity at single-cell resolution. a. For 12-18 h incubations, place the slide in a container with a wet paper towel as a humidifier. If necessary, increase the volume of diluted primary antibody to prevent cells from drying out. 39. Wash cells 3× with 500 μL of Wash Solution for 5 min. 40. Meanwhile, dilute secondary antibody 1:500 in Blocking Solution and spin down at 21,000 × g for 5 min at 4°C. 41. Add 500 μL of diluted secondary antibody to cells, cover with foil to prevent photobleaching, and incubate for 30 min at 20°C-25°C. 42. Wash cells 3× with 500 μL of Wash Solution for 5 min. 43. Add 500 μL of PBS to cells. 44. Return the slide to the microscope in the same orientation as it was previously imaged. 45. Set the 488 and 561 nm lasers to 80% power and 100 ms exposure time using a 35 μm slit. a. Investigators should optimize the imaging parameters. 46. Relocate cells for immunofluorescent imaging (Figure 1D). a. Each cell that was previously imaged for GFP should now be imaged for antibody staining. 47. Identify and image cells that lack GFP but exhibit signal from antibody staining (Figure 1E). a.
These are the wild-type (or mutant) cells that were seeded after transfection. Pause point: The next steps describe image analysis and can be performed at any time. 48. Import image files for analysis into ImageJ. 49. Create an XY table in GraphPad Prism with one row for each transfected cell. GFP intensity from live-cell imaging will be entered as the X value and immunofluorescent intensity from antibody staining will be entered as the Y value. 50. Select a region of interest from the background and use the Analyze > Measure command to measure the mean intensity for background correction. 51. Quantify the GFP intensity of each cell from live-cell imaging. a. Within ImageJ, use the ''Freehand Selections'' tool to outline a cell (Figure 1C). b. Use the Analyze > Measure command to determine the total cell integrated density, area, and mean intensities. c. To measure the GFP intensity of the cytoplasm only, use the ''Freehand Selections'' tool to outline the nucleus and use the Analyze > Measure command to determine the integrated density and area of the nucleus. These values can be subtracted from the total cell measurements to determine the average intensity from the cytoplasm. i. ((Total cell integrated density − Nuclear integrated density)/(Total cell area − Nuclear area)) − Mean background intensity = Corrected mean cytoplasmic intensity d. Investigators should determine the appropriate region of interest selection method. 52. Quantify the immunofluorescent intensity of each cell from antibody staining in the same way. 53. The associated graph should have GFP intensity on the X-axis and immunofluorescent intensity on the Y-axis with each cell plotted as a single point. 54. Perform a simple linear regression analysis within GraphPad Prism to model the relationship between GFP intensity and immunofluorescent intensity. Troubleshooting 4 55. Quantify the immunofluorescent intensity of each wild-type (or mutant) cell from antibody staining. a.
While the transfected cells should exhibit both GFP and immunofluorescent intensities, wild-type (or mutant) cells should not have any GFP signal. 56. Use the calculated linear regression line to determine the GFP equivalent of the average measured immunofluorescent intensity from wild-type (or mutant) cells. a. These results can be plotted with the determined threshold concentration included as a reference point (Figure 2C). EXPECTED OUTCOMES Here, we present a protocol that takes advantage of the natural variation in expression that arises from the transient transfection of cells (Figure 1A). Live-cell imaging of transfected cells allows for quantification and correlation of the GFP-tagged protein of interest to stress granule initiation time (Figure 2A). Consistent with in vitro phase separation assays, we found that the relationship between intracellular protein concentration and condensate formation was switch-like, such that phase separation is dictated by a critical threshold concentration (Figure 2A, dotted red line). Moreover, linear regression analysis modeling the relationship between GFP intensity and immunofluorescent intensity will reveal how the determined threshold relates to endogenous protein levels (Figures 2B and 2C). LIMITATIONS While this protocol was derived based on the fundamental principles of phase separation, the specific steps presented here have been optimized for studies of G3BP1, and it is possible that other proteins of interest may require additional optimization and modifications. As noted previously, some limitations of this study include the need for cells lacking the gene of interest as well as the availability of specific antibodies. In addition, this protocol relies on the use of a GFP tag, which could alter protein dynamics. Therefore, care should be taken to demonstrate that the addition of a GFP tag does not significantly affect protein function.
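The regression mapping used in steps 54-56 (fit immunofluorescent intensity against GFP intensity on transfected cells, then invert the fit to express a non-transfected cell's staining intensity as a GFP equivalent) can be sketched as below. The protocol performs this in GraphPad Prism; this function is only an illustrative numpy equivalent.

```python
import numpy as np

def gfp_equivalent(gfp, immuno, immuno_query):
    """Fit immuno = a * gfp + b on transfected cells (both signals measured),
    then invert the fit so an immunofluorescent intensity measured in a
    non-transfected cell can be expressed on the GFP intensity scale."""
    a, b = np.polyfit(gfp, immuno, 1)  # simple least-squares line
    return (immuno_query - b) / a
```

As noted in Troubleshooting 4, this inversion is only meaningful when the fit itself is good (high R2 over a linear dynamic range of expression).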
As with any experimental technique, it is important to validate any findings made with this protocol with additional independent methods. For investigators seeking to quantify absolute intracellular protein concentrations, we suggest methods such as mass spectrometry or fluorescence correlation spectroscopy (Beck et al., 2011; Politi et al., 2018; Unwin, 2010). TROUBLESHOOTING Problem 1 Spontaneous stress granule formation. Potential solution Check that the microscope is properly equilibrated to 37°C and 5% CO2. Avoid exposing cells to extended periods outside the incubator or microscope cage. Reduce the amount of DNA used during the initial transfection or refer to the manufacturer's instructions. Problem 2 The slide shifts out of the desired stage location. Potential solution Proper sample stabilization is critical for obtaining high-quality images. Use tight-fitting sample holders. Remove the slide cover before imaging to minimize handling during sodium arsenite treatment. Problem 3 Too few cells for imaging. Potential solution Confirm cell confluency prior to imaging; cells may need to be seeded at a higher confluency prior to transfection. Cell types that do not readily adhere to tissue culture plates may benefit from coating plates with a binding agent (e.g., poly-lysine). During immunostaining, add solutions and aspirate gently to prevent cells from lifting off. In addition, ensure that cells remain hydrated, particularly if incubating cells with primary antibody for 12-18 h. Problem 4 Low R2 value. Potential solution The R2 value quantifies the strength of a linear relationship. A low R2 value indicates a poor linear relationship between the GFP and the immunofluorescent intensities and suggests that the linear regression line should not be used to determine endogenous protein levels. Optimize the protocol such that GFP is expressed at varying levels and detected within a linear dynamic range.
RESOURCE AVAILABILITY Lead contact Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact, P. Ryan Potts (rpotts01@amgen.com). Materials availability All materials generated in this study are available through request but may require a completed Materials Transfer Agreement. Data and code availability This study did not generate any unique datasets or code.
2021-02-10T05:22:38.473Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "aed9abe2df172c0cb7705c9afbbc4d7bd88b9895", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.xpro.2021.100308", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aed9abe2df172c0cb7705c9afbbc4d7bd88b9895", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
249436965
pes2o/s2orc
v3-fos-license
Quercetin-Coating Promotes Osteogenic Differentiation, Osseointegration and Anti-Inflammatory Properties of a Nano-Topographically Modified 3D-Printed Ti6Al4V Implant Osseointegration capability and anti-inflammatory properties are of equal significance for the bio-inert titanium implant surface. Quercetin has proven capable of activating anti-inflammatory responses through macrophage modulation and of promoting osteogenic differentiation. Herein, we fabricated a quercetin coating on a nano-topographically modified 3D-printed Ti6Al4V implant surface. Subsequently, the cellular responses in vitro and the anti-inflammatory and osseointegration performance in vivo were evaluated. In vitro studies indicated that the quercetin coating can enhance the adhesion and osteogenic differentiation of rBMSCs, while modulating the polarization of macrophages from the M1 to the M2 phase and improving anti-inflammatory and vascular gene expression. Moreover, quercetin-loaded implants reduced the level of peri-implant inflammation and promoted new bone formation and rapid osseointegration in vivo. A quercetin coating might provide a feasible and favorable scheme for endowing the 3D-printed titanium alloy implant surface with enhanced rapid osseointegration and anti-inflammatory properties. INTRODUCTION Restoring large bone defects caused by tumor, trauma and osteoporosis is undoubtedly a great challenge, especially in load-bearing areas such as the jaws and limbs (Hassan et al., 2019). The clinical use of autogenous bone grafts, the current gold-standard treatment, is limited by the lack of donor-site availability. Three-dimensional (3D)-printed bone substitutes have been applied to produce almost all kinds of biomaterials in clinical practice (Bose et al., 2018), exhibiting multiple advantages such as design flexibility and higher efficiency. Titanium and its alloys are widely used in the clinic because of their superior mechanical properties and biocompatibility.
Moreover, the workability of the metal enables personalized and precise restorations through 3D printing. Yet the biological inertia of titanium alloys leads to unsatisfactory long-term implant survival, as bio-inert Ti implants may induce a soft-tissue foreign body response that results in fibrous tissue formation (Goodman et al., 2013), infection and bone resorption in the implanted area (Civantos et al., 2017), thus hindering their potential clinical application. The surface properties of implant materials, such as surface morphology and chemical composition, can directly impact cell adhesion, proliferation and differentiation, and ultimately affect the quality of osseointegration between the implant and the host bone (Chen et al., 2016). Current major strategies of titanium surface modification include physical, chemical and biochemical modification. With the advance of biochemical surface modification, bioactive agents such as proteins, peptides, growth factors and drugs have been tentatively applied to implant surfaces, endowing the materials with multiple functions such as osteoinduction, osteoconduction and anti-inflammation. The host's inflammatory response to implantation is inevitable and is an essential process of tissue regeneration. Implant osseointegration originates from the inflammation-driven processes on and near the implant surface (Zhao et al., 2012). Before osteogenesis and angiogenesis, the initial inflammatory response of immune cells [macrophages (MΦ)/monocytes] to the surface of the material determines the fate of the implant. Macrophages are plastic and dynamic, and can polarize to a classically activated inflammatory phenotype (M1) or an alternatively activated phenotype (M2) when stimulated by different signals (Kang et al., 2017).
The characteristic M1 pro-inflammatory profile exerts strong cytotoxic activity through the production of reactive nitrogen species (via inducible nitric oxide synthase, iNOS), in addition to a Th1 pro-inflammatory response [interleukin-1β (IL-1β), IL-6] (Van den Bossche et al., 2017). Macrophages with this phenotype are beneficial for pathogen/tumour elimination but detrimental to the wound healing process (Zhang and Mosser, 2008). On the other hand, the M2 anti-inflammatory profile, with the mannose receptor (CD206) as a typical surface marker, contributes to inflammation resolution and wound healing by producing anti-inflammatory cytokines such as IL-10 and angiogenesis mediators such as transforming growth factor-β (TGF-β) and vascular endothelial growth factor (VEGF) (Funes et al., 2018). M1 and M2 macrophages can transform into each other under external stimulation, and the transformation from M1 to M2 marks the turning point from the inflammation stage to the repair stage (Landén et al., 2016). This functional plasticity of macrophages is the premise on which the implant surface can play an immunomodulatory role. The physical and chemical properties of the implant surface can affect the polarization of macrophages, and in turn the direction, degree and scope of the inflammatory process. Therefore, the design of implant materials should actively regulate the inflammatory reaction rather than avoid it, so as to steer it in a direction conducive to tissue regeneration. Recent reports have shown that nano-structured surfaces can regulate the function of inflammation-related cells, especially macrophages, by modulating the polarization between the M1 and M2 phenotypes and the secretion of cytokines (Vishwakarma et al., 2016).
In addition to changes in implant surface structure, the introduction of various bioactive molecules (e.g., functional elements, growth/differentiation cytokines and small-molecule drugs) loaded on biomaterial surfaces can harness macrophage polarization to generate an osteogenic immune microenvironment, so as to regulate the direction, scope and degree of inflammation, which is ultimately beneficial to bone tissue regeneration (Chen et al., 2017; Dong et al., 2017). Surface modification with the dual functions of enhancing osteogenesis and regulating macrophage polarization may therefore be a promising solution to the bio-inertia of titanium alloy implants. Quercetin is a small polyphenolic flavonoid monomer compound that widely exists in natural plants. It has many pharmacological effects, including anti-inflammatory, anti-oxidant, anti-tumor, hypoglycemic and hypolipidemic activities (Russo et al., 2012). Recent reports illustrated that introducing quercetin onto nano-octahedral ceria could modulate the phenotypic switch of macrophages in periodontal disease, not only by inhibiting M1 polarization but also by promoting M2 polarization (Wang Y et al., 2021). Meanwhile, numerous reports have confirmed its impact on osteogenesis. Quercetin stimulated the ALP activity of mesenchymal stem cells (MSCs) in a dose-dependent manner and up-regulated the expression of the osteogenic marker proteins BGP and COL-1, in addition to stimulating the MAPK/ERK signaling pathway (Li et al., 2015). Quercetin could also promote OVX rBMSC proliferation, osteogenic differentiation and angiogenic factor expression while rebuilding the balance of the RANKL/OPG system in a dose-dependent manner (Zhou et al., 2017). To sum up, quercetin might constitute an appropriate candidate drug to load onto titanium alloy implants, regulating macrophage polarization and enhancing osteogenesis at the same time.
However, given the lack of active functional groups on the titanium surface, achieving effective loading of quercetin is an urgent problem to be resolved. Our previous research successfully fabricated a hierarchical micro/nano-topography on the Ti6Al4V implant surface through the combination of 3D printing, alkali-heat treatment and subsequent hydrothermal treatment (Wang H et al., 2021). The graded micro/nano-topography, with the anatase phase of titanium dioxide (TiO2) deposited on the surface, possessed a high specific surface area that increases the adsorption of specific proteins, leading to better biocompatibility. Since quercetin strongly chelates metal cations (Sun et al., 2008), the drug has been observed to adsorb onto TiO2 in monomeric form by bidentate chelation of the Ti atom in TiO2 through the two dissociated hydroxyl groups of the catechol B-ring (Zdyb and Krawczyk, 2016). A 3D-printed Ti6Al4V implant with micro/nano-topography could also provide more binding sites for quercetin, improving the drug-loading efficiency. Therefore, a micro/nano hybrid 3D-printed titanium surface may be an ideal delivery carrier for quercetin. In this study, we constructed a nano-topographic surface on micro-scaled 3D-printed Ti6Al4V implants on the basis of our prior research, and then introduced a quercetin coating on the implant surface. The regulation of the biological behavior of macrophages and rBMSCs by quercetin loading was evaluated, along with observation of the anti-inflammatory and osseointegration performance in animal models. Materials Preparation In this study, 3D-printed Ti6Al4V (Ti) samples were prepared in two shapes: square disks (10 mm in side length, 2 mm in thickness) and rod-like implants (2 mm in diameter, 3.5 mm in length); both were fabricated from 20-50 μm Ti6Al4V alloy powders as described in a previous study (Zhang et al., 2019).
The samples were thoroughly ultrasonically cleaned in acetone, ethanol and distilled water to remove adhered particles, then placed in a polytetrafluoroethylene-lined metal reaction kettle with NaOH solution (5 mol/L) at 80°C for 6 h and next in deionized water at 200°C for 4 h to obtain the nanostructured topography, namely the nano-3D group (Wang et al., 2018). Nano-3D samples were cleaned with deionized water, steam autoclaved and dried before the in vitro or in vivo studies, as described previously. Half of the nano-3D disks/implants were soaked in quercetin solution for drug loading. Each nano-3D sample was immersed in quercetin ethanol solution (0.05 mg/ml, 10 ml) in a 15 ml centrifuge tube. The samples were ultrasonically treated for 5 min and then placed at 4°C for 1 h. After gentle rinsing with deionized water to remove non-adsorbed quercetin, the nano-3D + quercetin samples were dried at room temperature and sterilized under UV light before use. Surface Characterization of Materials The surface topography of both groups was observed via scanning electron microscopy (GeminiSEM 300, ZEISS, Germany). A Raman spectroscope (RW 2000, Renishaw, England) was utilized to verify whether quercetin was coated on the sample surface. The release of quercetin from the samples was determined using a UV-vis spectrophotometer (IMPLEN, Germany). Quercetin-loaded nano-3D disks were soaked in phosphate-buffered saline (PBS, Hyclone, United States) and shaken at 100 rpm at 37°C. The PBS was collected at 1, 4, 8, 12, 18, 24 and 36 h and at 2, 3, 4, 5 and 6 days, and the concentration of quercetin released was measured at a wavelength of 254 nm. The data were presented as the percentage of cumulative release in total: cumulative release (%) = 100 × Mt/M, where Mt is the amount of quercetin released up to time t and M is the total amount of quercetin. The surface wettability was measured with an Optical Contact Angle and Interface Tension Meter SL200KS (SOLON TECK, China).
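The cumulative release formula above (cumulative release (%) = 100 × Mt/M) can be sketched as a small computation. The per-interval amounts and the total loaded mass below are hypothetical, not measured values from this study.

```python
def cumulative_release_percent(released_per_interval_mg, total_loaded_mg):
    """Cumulative release (%) = 100 * M_t / M, where M_t is the running
    total released up to each sampling time and M is the total loaded
    amount. All masses here are hypothetical, not measured values."""
    running = 0.0
    percents = []
    for m in released_per_interval_mg:
        running += m
        percents.append(100.0 * running / total_loaded_mg)
    return percents

# Two sampling intervals out of a 0.5 mg hypothetical total load
pcts = cumulative_release_percent([0.188, 0.10], 0.5)
```

Plotting such a running percentage against the sampling times reproduces the kind of cumulative release curve shown in Figure 1F.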
Cell Culturing We employed macrophages (RAW 264.7, from the Shanghai cell bank of the Chinese Academy of Sciences) and rat BMSCs (rBMSCs) in the present study. The latter were isolated from two-week-old male Sprague Dawley rats (Shanghai Bikai Animal Laboratory, China). Rats were sacrificed by cervical dislocation after general anesthesia, and the peripheral soft tissue of the dissected femora and tibiae was removed. The bone was resected at both sides of the metaphysis, and the bone marrow contents were flushed into a 10 cm cell culture dish containing alpha-modified Eagle's medium (α-MEM, Hyclone, United States) supplemented with 10% fetal bovine serum (FBS, Hyclone, United States) and 1% penicillin/streptomycin solution (Gibco, United States). The rBMSCs were incubated at 37°C in 5% CO2, the medium was changed every 2 days, and third-passage rBMSCs were used in the following experiments. Cell Adhesion and Proliferation To assess the morphology of cells adhered on the samples, cells (RAW 264.7: 1 × 10^5/ml, rBMSCs: 1 × 10^4/ml) were seeded onto each sample in a 24-well plate and cultured for 24 h. Samples were then rinsed with PBS and fixed in 2.5% glutaraldehyde at 4°C overnight. After sequential dehydration in a graded ethanol series, samples were freeze-dried and sputter-coated with gold before SEM scanning (S-4800, Hitachi, Japan). ELISA kits (China) were used to examine the concentrations of IL-1β, VEGF-α and TGF-β in the supernatants according to the manufacturer's instructions. The gene expression level was determined by quantitative real-time polymerase chain reaction (qRT-PCR) assay so as to evaluate the expression of related genes. RAW 264.7 cells were seeded at a density of 4 × 10^6/well on the sample surfaces in 6-well plates. Total RNA was extracted and separated with RNAfast200 RNA Isolation Kits (Feijie, China) after 3 days, then reverse transcribed into cDNA using a Prime-Script RT reagent kit (Takara, Japan).
The cDNA samples were diluted 1:10 in RNase-free water and stored at −20°C until the PCR reaction was performed. Primers used in the present study (F = forward; R = reverse) were synthesized commercially (Sangon, China) and are set out in Table 1. The real-time PCR procedure was performed with SYBR green PCR reaction mix (Takara, Japan) in a Light Cycler® 96 Real-Time PCR System (Roche, Switzerland). Quercetin-Coated Surface Promoting rBMSCs Osteogenic Differentiation Alkaline phosphatase (ALP) activity and staining assays of rBMSCs cultured on the disks were performed at 4 and 7 days. Cells were seeded at a density of 4 × 10^4/well, and at each time point the samples were rinsed with PBS. For ALP staining, cells were fixed with 4% paraformaldehyde (PFA) and stained with an ALP Color Development Staining Kit (Beyotime, China). After 12 h, the stained materials were observed through an optical microscope (Olympus, Japan). For the ALP activity assay, cells were lysed in 0.1% Triton X-100 buffer (Beyotime, China). After centrifugation, the supernatant was used to detect ALP activity with an ALP Assay Kit (Jiancheng, China), and the total protein concentration was determined with a BCA Protein Assay Kit (Beyotime, China) according to the manufacturer's instructions. Finally, the ALP activity was calculated and normalized to the total protein level (U/g protein). In addition, the qRT-PCR assay was employed to investigate the expression of related genes at 4 and 7 days so as to evaluate the effect of the quercetin coating on osteogenic differentiation; the primers for this part are described in Table 2. Surgical Procedures SD rats weighing approximately 250 g were used in the present study. General anesthesia was induced by intraperitoneal injection of pentobarbital sodium (30 mg/kg, Beyotime, China), and the surgical area was then shaved and washed with povidone-iodine. A 1 cm-long incision was made through the skin, muscle and periosteum at the lateral side of the femoral condyles to expose the implantation position.
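Relative gene expression from qRT-PCR data such as that described above is commonly quantified with the 2^-ΔΔCt (Livak) method. The study does not state its exact quantification procedure, so the sketch below is an assumption, with illustrative Ct values.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ΔΔCt (Livak) relative quantification, assuming ~100% primer
    efficiency: ΔCt = Ct(target) - Ct(reference housekeeping gene),
    ΔΔCt = ΔCt(treated sample) - ΔCt(control sample)."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Illustrative Ct values: the target amplifies 2 cycles earlier in the
# treated sample than in the control -> 4-fold up-regulation
fold_change = relative_expression(22.0, 16.0, 24.0, 16.0)
```

A fold change above 1 would correspond to up-regulation relative to the control group, as reported for the M2-related genes in this study.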
Subsequently, a hole of 2 mm in diameter and 3 mm in depth was prepared with a bur, with constant irrigation of sterile saline during drilling. Nano-3D and nano-3D + quercetin implants were inserted into the left or right femoral condyle randomly, and the tissues were finally sutured in layers. Animals were euthanized after 2 or 4 weeks of healing (n = 6 for each time point), and the femoral condyles were then resected and fixed in 4% PFA for further analysis. Micro-Computed Tomography (Micro-CT) Assay To evaluate the peri-implant new bone volume in vivo, the harvested samples were scanned with a micro-CT scanning system (Quantum GX, United States). The scanning parameters were set at 90 kV, 88 μA and 14 min, and the images were then 3D-reconstructed at a voxel size of 25 μm to calculate the ratio of bone volume to total volume (BV/TV) with the attached analysis software. Histological and Histomorphometric Analysis Hematoxylin and eosin (H&E) and immunofluorescence (IF) staining were performed on the peri-implant tissues harvested at 2 weeks. After decalcification for 1 month, the implants could be gently withdrawn from the samples with tweezers, and the remaining tissues were embedded in paraffin and sectioned into 5 μm slices. The sections were stained with H&E to assess the inflammatory level, and IF staining was employed to evaluate the macrophage phenotypes infiltrating the peri-implant tissues. The staining procedures were conducted according to the manufacturer's instructions with the antibodies (Affinity, United States). The samples harvested at 4 weeks were dehydrated sequentially in a graded ethanol series from 70% to 100% and embedded in methyl methacrylate (MMA) for undecalcified sectioning. The polymerized samples were longitudinally sectioned and polished with a diamond circular saw microtome and micro-grinding system (Exakt 300, Germany).
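The BV/TV metric from the micro-CT analysis above reduces to the fraction of voxels segmented as bone within the region of interest. Below is a minimal sketch with a toy binary volume; the segmentation itself is performed by the scanner's analysis software and is not reproduced here.

```python
import numpy as np

def bv_tv(bone_mask):
    """BV/TV: fraction of voxels labelled as bone within the region of
    interest, given a boolean segmentation of the micro-CT volume."""
    bone_mask = np.asarray(bone_mask, dtype=bool)
    return bone_mask.sum() / bone_mask.size

# Toy 2x2x2 volume with 2 of 8 voxels labelled as bone
vol = np.zeros((2, 2, 2), dtype=bool)
vol[0, 0, 0] = True
vol[1, 1, 1] = True
```

With a real scan, the mask would come from thresholding the reconstructed 25 μm voxel grid within the peri-implant region of interest.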
The sections were stained with a Van Gieson's (VG) staining kit (Yuanye, China) and visualized under a light microscope (Olympus, Japan) for histological observation. For histomorphometric measurements, pictures captured by the digital camera attached to the microscope were analyzed, and the bone-to-implant contact (BIC) percentage was calculated via ImageJ. Statistical Analysis All quantitative data are presented as mean ± standard deviation (SD) and were statistically analyzed by t-test with GraphPad Prism 8.0 software. Differences were considered significant when the p value was less than 0.05. Surface Characterization As presented in Figures 1A-D, there was no significant difference in surface morphology between nano-3D and nano-3D + quercetin. This may be because quercetin is a small-molecule substance that cannot be resolved by SEM. The Raman spectrum (Figure 1E) showed a typical quercetin peak on the surface of nano-3D + quercetin at about 1606 cm−1, indicating that quercetin had been successfully loaded onto the surface of the titanium scaffolds. The cumulative quercetin release results (Figure 1F) showed a significant release of 37.70 ± 0.39% in the first hour. Afterwards, the quercetin release displayed a steep, nearly linear increase to 74.75 ± 2.78% at 36 h, then rose gently to 86.01 ± 3.91% at 6 days. These results implied that the nano-3D titanium alloy disks, as a quercetin delivery carrier, could provide effective and stable drug release, and that the release amount quickly reached a high level within the first 2 days. FIGURE 3 | ELISA results for IL-1β, VEGF-α and TGF-β at 24 h (A-C) and relative gene expression of RAW 264.7 (D-I) on nano-3D and nano-3D + quercetin samples at 3 days (*p < 0.05). Frontiers in Bioengineering and Biotechnology | www.frontiersin.org | June 2022 | Volume 10 | Article 933135
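The two-group comparisons described under Statistical Analysis (mean ± SD, t-test, p < 0.05) can be sketched with a pooled two-sample t statistic. This is an illustrative stand-in for GraphPad Prism's test, not its implementation.

```python
from statistics import mean, variance

def two_sample_t(a, b):
    """Pooled (equal-variance) two-sample t statistic for comparing two
    groups reported as mean ± SD, as in the paper's group comparisons."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (pooled_var * (1.0 / na + 1.0 / nb)) ** 0.5
```

The resulting statistic would then be compared against the t distribution with na + nb − 2 degrees of freedom to obtain the p value; Prism performs that step internally.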
Water contact angle analysis showed that the surface water contact angle of the nano-3D + quercetin sample was lower than that of the nano-3D group, but the difference was not statistically significant (Figure 1G). This may indicate that quercetin loading has little effect on the surface hydrophilicity of nano-modified 3D-printed implants. Cell Adhesion and Proliferation In the SEM results of 24 h adhesion of RAW 264.7 (Figures 2A,B), the macrophages in both groups exhibited an almost spherical shape, yet macrophages on the nano-3D + quercetin samples had more pseudopods, indicating more complete adhesion and spreading. As polarized M1 macrophages generally display a round shape without spreading, while M2 macrophages exhibit a spindle-shaped, better-spread morphology, these results might suggest that macrophages on the quercetin-loaded samples were more likely to polarize into the M2 phenotype on the first day. Similar trends were detected for the rBMSCs (Figures 2D,E), in that the cells on the surface of the nano-3D + quercetin group were better adhered and spread, with more plate-like and filiform pseudopods. Good adhesion plays an important role in cell proliferation and differentiation; however, CCK-8 analysis (Figures 2C,F) showed no significant difference between the two groups, for either rBMSCs or macrophages, indicating good cytocompatibility of the quercetin loading and that quercetin at this concentration does not promote cell proliferation. Quercetin-Coated Surface Modulating RAW 264.7 Polarization To further verify the representative cytokines secreted by macrophages of the M1/M2 phenotypes, ELISA was used to determine the concentrations of IL-1β, VEGF-α and TGF-β. The results are shown in Figures 3A-C. The expression level of IL-1β, the typical inflammatory cytokine mainly secreted by M1 macrophages, was significantly lower on the quercetin-coated samples (Figure 3A).
In contrast, macrophages on nano-3D + quercetin secreted greater amounts of VEGF-α, which is largely produced by M2 macrophages (Figure 3B), while no significant difference was found in TGF-β production between the two groups (Figure 3C). Highly consistent with the ELISA results, as presented in Figures 3D-I, the nano-3D + quercetin group was more conducive to the expression of anti-inflammatory (M2) phenotype-related genes, such as VEGF-α, TGF-β, IL-10 and CD206, while reducing the expression of activated inflammatory (M1) phenotype-related genes, such as iNOS and IL-1β, indicating that the quercetin coating could steer the polarization of macrophages toward the M2 phenotype. M2-phenotype macrophages secrete a variety of immunoregulatory factors and chemokines that recruit fibroblasts, bone marrow mesenchymal cells, endothelial cells and other repair cells to the wound, thereby maintaining tissue homeostasis and moving the inflammatory response into the tissue regeneration stage as soon as possible. FIGURE 5 | H&E staining of the decalcified peri-implant tissues of nano-3D and nano-3D + quercetin at 2 weeks; black arrows indicate inflammatory cells such as macrophages, neutrophils, monocytes and lymphocytes; immunofluorescence staining results of the decalcified samples: red (IL-1β) and blue (DAPI). Quercetin-Coated Surface Promoting rBMSCs Osteogenic Differentiation At 4 days, there was no significant difference in ALP activity between the two groups; at 7 days, ALP expression on the surface of the nano-3D + quercetin group was significantly higher than that of the nano-3D group (Figure 4A), and the ALP staining results (Figure 4B) confirmed this trend.
As demonstrated in Figures 4C-F, compared with the nano-3D group, the expression of osteogenesis-related genes such as OCN, ALP and OPN on the surface of the nano-3D + quercetin group increased significantly at 4 days, although no significant difference was found in COL-I expression between the two groups. At 7 days, the expression of COL-I, ALP and OPN on the nano-3D + quercetin surface increased significantly compared with the other group, while no significant difference was found in OCN expression at this time point. The quantitative ALP measurements and qRT-PCR exhibited similar trends, in that ALP and osteogenesis-related gene expression were higher in the nano-3D + quercetin group, implying that the quercetin coating was more favorable for the osteogenic differentiation of rBMSCs. Capability of Anti-Inflammation of Quercetin-Coated Nano-3D Implants The H&E staining images in Figure 5 demonstrate the pathological changes in the peri-implant tissues. Compared with the quercetin-loaded group, the nano-3D sections showed a more severe inflammatory state, with abundant infiltration of inflammatory cells such as monocytes, neutrophils and macrophages. To further investigate the therapeutic effects of quercetin against inflammation, the levels of inflammation-associated cytokines in the peri-implant tissues were observed by IF. Cells positive for the representative pro-inflammatory M1 biomarker IL-1β were notably detected in the nano-3D group and significantly decreased in the quercetin-loaded samples, demonstrating the anti-inflammatory effects of quercetin at the inflammatory site.
Capability of Osteogenesis and Osseointegration of Quercetin-Coated Nano-3D Implants After implantation for 4 weeks, 3D-reconstructed images of nano-3D (Figure 6A) and nano-3D + quercetin (Figure 6B) showed that the volume of new bone formation in the quercetin-coated group was obviously larger than that in the non-coated group. The quantitative BV/TV results (Figure 6C) also illustrated this difference, in that the new bone formation ratio of the nano-3D + quercetin implants was significantly higher than that of the other group. Moreover, the VG staining results of the hard tissue slices showed a trend consistent with the CT analysis, in that more new bone formed around the surface of the quercetin-coated implants (Figure 6E) compared with the nano-3D samples (Figure 6D). The quantitative analysis of new bone area percentages demonstrated that the BIC percentage of the nano-3D + quercetin group was markedly higher than that of the non-coated group (Figure 6F). In summary, the histological and histomorphometric results implied that the nano-structurally modified 3D-printed Ti6Al4V with a quercetin coating could enhance osteogenesis and osseointegration around the implants in vivo. Taking the in vitro and in vivo observations into account, the quercetin-coated, nano-topographically modified 3D-printed Ti6Al4V showed superiority over the control group, which may be owed to quercetin's capabilities of stimulating osteogenic differentiation and anti-inflammation (Angellotti et al., 2020). CONCLUSION In the present study, we successfully loaded quercetin onto the surface of nano-structurally modified 3D-printed Ti6Al4V implants, and confirmed that the quercetin coating can promote the adhesion of macrophages and modulate their polarization from the M1 to the M2 phase, thus improving the anti-inflammatory and vascular gene expression of the macrophages.
Meanwhile, the nano-structurally modified 3D-printed Ti6Al4V loaded with quercetin can promote the adhesion and osteogenic differentiation of rBMSCs. Quercetin loading provides a feasible and favorable scheme for endowing the 3D-printed titanium alloy implant surface with enhanced rapid osseointegration and anti-inflammatory properties, and the specific mechanisms by which quercetin promotes osteogenesis and anti-inflammation through modulating polarization are worthy of further study. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The animal study was reviewed and approved by the Institutional Animal Care and Use Committee of Shanghai Jiaotong University (Shanghai, China). AUTHOR CONTRIBUTIONS NL and HW performed the experiments and wrote the original draft. ZF and WH helped to perform the surgical procedures. CZ helped to prepare the manuscript. JW, YZ and SL led the conceptualization and project administration, and supervised the writing and editing of the manuscript. All authors contributed to the article and approved the submitted version.
2022-06-08T13:12:29.213Z
2022-06-08T00:00:00.000
{ "year": 2022, "sha1": "6eacd1847017aa58e74b95bb175ee66ec79adef2", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fbioe.2022.933135/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "6eacd1847017aa58e74b95bb175ee66ec79adef2", "s2fieldsofstudy": [ "Medicine", "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
245382408
pes2o/s2orc
v3-fos-license
Analysis of Long-Term Prestress Loss in Prestressed Concrete (PC) Structures Using Fiber Bragg Grating (FBG) Sensor-Embedded PC Strands: This study aims to develop a prestressed concrete (PC) steel strand with an embedded optical fiber Bragg grating (FBG) sensor, which has been under development by the Korea Institute of Civil Engineering and Building Technology since 2013. This new strand is manufactured by replacing the steel core of a normal PC strand with a carbon-fiber-reinforced polymer (CFRP) rod with excellent tensile strength and durability. Because the new strand is manufactured using the pultrusion method, a composite-material manufacturing process, with an optical fiber sensor embedded in the center of the CFRP rod, it ensures full composite action as well as proper functioning of the sensor. In this study, a creep test maintaining a constant load and a relaxation test maintaining a constant displacement were performed on the proposed sensor-type PC strand. Each test was conducted for more than 1000 h, and the long-term performance of the sensor-type PC strand was verified by comparing its performance with that of a normal PC strand. Test specimens were then fabricated by applying the optical fiber sensor-embedded PC strand, which had undergone the long-term performance verification tests, to a reinforced concrete beam. Depending on whether grout was injected into the duct, the specimens were classified as composite or non-composite. A hydraulic jack was used to prestress the fabricated beam specimens, and the long-term change in the prestress force was observed for more than 1600 days using the embedded optical fiber sensor. The experimental results were compared with analytical results for the long-term prestress loss obtained through finite-element analysis based on various international standards.
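Converting the FBG readings mentioned above into a tendon force typically proceeds via the standard relations Δλ/λ0 = (1 − p_e)·ε and P = ε·E·A. The photo-elastic coefficient, modulus and cross-sectional area below are assumed typical values, not parameters reported by this study, and temperature compensation is neglected.

```python
def fbg_strain(wavelength_nm, base_wavelength_nm, pe=0.22):
    """Strain from an FBG wavelength shift, Δλ/λ0 = (1 - p_e)·ε.
    p_e ≈ 0.22 is a typical photo-elastic coefficient for silica
    fiber (assumed value; temperature effects are neglected here)."""
    return (wavelength_nm - base_wavelength_nm) / (base_wavelength_nm * (1.0 - pe))

def tendon_force_kN(strain, modulus_gpa, area_mm2):
    """Tendon force P = ε·E·A; GPa·mm² yields kN directly.
    Modulus and cross-section below are hypothetical inputs."""
    return strain * modulus_gpa * area_mm2

# A 1 nm Bragg shift at a 1550 nm base wavelength (illustrative numbers)
eps = fbg_strain(1551.0, 1550.0)
force = tendon_force_kN(eps, 195.0, 138.7)
```

Tracking the strain (and hence force) over time from the recorded wavelengths is what enables the long-term prestress loss monitoring described in the abstract.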
When the specimen was prestressed using a hydraulic jack, a significant amount of instantaneous loss occurred depending on the wedge slip amount and the performance of the hydraulic jack. Therefore, a hydraulic nut was installed between the frame and the load cell to recover the loss immediately after prestressing. The prestress force was first introduced by the hydraulic jack, and the hydraulic nut was then used to reach the target load in order to compensate for the immediate loss caused by the removal of the hydraulic jack. The normal PC strand used for comparison was SWPC7BL, with a tensile load of 261 kN, and a 70% load of 182.7 kN was applied to the specimen in this study. After prestressing to 182.7 kN and maintaining the load for 120 s, the amount of load reduction and the change in strain over 1000 h were examined. Furthermore, strain sensors were attached to both sides of the specimen. Introduction Prestressed concrete (PC) structures are widely used for most concrete structures because of their advantages of increased compression efficiency of the concrete and improved cracking characteristics when PC steel strands are used. In general, cracks mainly occur in the region where tensile force acts, because the tensile strength of concrete is significantly smaller than its compressive strength. PC is an efficient structural system that applies a compressive force to the region where tensile force will act, removing some of the tensile stress generated in the concrete by external loads. The advantages of PC, such as a reduction in the cross-sectional area and the amount of reinforcement, have led to the construction of numerous PC structures. As more than 100 years have passed since the development of PC structures, the number of aging PC structures is increasing tremendously, giving increased importance to maintenance through monitoring of these structures.
Because the most typical method of applying prestress force in such PC is to use a PC strand, the long-term performance of this PC strand is a critical factor. In general, when introducing a prestress force into a structure using a PC strand, the prestress force is calculated based on the hydraulic change and the extension of the hydraulic jack, as shown in Figure 1. Therefore, a separate measurement method is required to examine the residual prestress force after removing the hydraulic jack. The lift-off test is often used [1][2][3] and is a method that is employed to install a load cell, a displacement meter, and a hydraulic jack on the strand and to calculate the prestress force of the strand using the relationship between the force applied to the load cell and the displacement measured with the displacement meter. Although the process is simple and accurate, it is difficult to apply this method to structures in service, as it involves separate hydraulic equipment and excessive hydraulic pressure may, thus, be introduced.
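The jack-based estimate described above amounts to a pressure-times-area calculation, optionally cross-checked against the measured tendon elongation. A minimal sketch follows; the ram area, gauge pressure, free length, and strand properties below are illustrative assumptions, not values from this study.

```python
# Sketch of the jack-based prestress estimate: F = p * A_ram, cross-checked
# against the elastic elongation of the tendon. All numeric values are
# illustrative assumptions, not values from the study.

def jack_force_kN(gauge_pressure_MPa: float, ram_area_mm2: float) -> float:
    """Prestress force inferred from hydraulic pressure: F = p * A."""
    return gauge_pressure_MPa * ram_area_mm2 / 1000.0  # N/mm^2 * mm^2 -> kN

def expected_elongation_mm(force_kN: float, free_length_mm: float,
                           E_MPa: float, strand_area_mm2: float) -> float:
    """Elastic elongation of the tendon, used to cross-check the jack reading."""
    return force_kN * 1000.0 * free_length_mm / (E_MPa * strand_area_mm2)

# Assumed ram area and pressure chosen only to illustrate the calculation.
force = jack_force_kN(gauge_pressure_MPa=43.5, ram_area_mm2=4200.0)
dL = expected_elongation_mm(force, free_length_mm=10000.0,
                            E_MPa=200000.0, strand_area_mm2=138.7)
print(round(force, 1), "kN,", round(dL, 1), "mm")
```

If the measured jack extension deviates substantially from the computed elongation, wedge slip or friction losses are suspected, which is exactly why a separate residual-force measurement such as the lift-off test is needed after the jack is removed.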
After examining the prestress force in the above manner and then removing the hydraulic jack, there are not many ways to manage the prestress force. Therefore, the maintenance of the prestress force tends to be neglected in PC bridges in the absence of a means to check the prestress force, and leakage may occur, owing to voids in the grout in the duct and cracks in the concrete around the installed strands, leading to the corrosion of the strands and even to the collapse of bridges in severe cases. Table 1 shows examples of bridges showing severe corrosion of prestressed members and anchorages, as well as bridge collapse accidents that have occurred since 2000.

Table 1. Examples of prestressed member damage and bridge collapse.

Bridge | Year | Pattern
Mid Bay Bridge [4] | 2000 | Corrosion of prestressed members and anchorages
Jeongneungcheon Viaduct [5] | 2016 | Corrosion and fracture of prestressed members
The conventional methods used for measuring prestress force include traditional methods such as attaching a gauge directly to the outside of a strand and installing a load cell at the end. There has also been a recent approach to estimating the prestress force based on the relationship between the magnitude of the tension force and the magnetic field variation rate, which is calculated using an electromagnetic (EM) sensor or a magnetic flux leakage (MFL) sensor [7,8]. In addition, methods for measuring the prestress force or tracking damaged areas by applying an acoustic emission (AE) sensor using acoustic effects are being developed [9,10]. As most structures tend to have several PC strands applied to one duct, an externally attached sensor is associated with problems such as the need for a method to protect the sensor area and to manage the measurement line. The magnetic field or acoustic-effect-based sensors are not commonly used, as they require initial measurements, and each manufacturer employs different analysis methods for the large amount of accumulated data. Therefore, it is difficult to find a simple and stable method for long-term measurement [11,12]. This study attempted to use a PC strand with an embedded optical Fiber Bragg Grating (FBG) sensor, which has been developed by the Korea Institute of Civil Engineering and Building Technology since 2013, and evaluated the long-term performance of the strand by performing creep and relaxation tests. The trend of long-term prestress loss of each member was examined by applying optical sensor PC (OSPC) strands to the PSC beam specimens, classified as composite and non-composite, to which the prestress force was applied [13][14][15].
Furthermore, a finite-element analysis was performed on beam specimens with the same dimensions to obtain the analytical results for the long-term prestress loss by applying the ACI, CEB-FIP, and KS standards, and the validity of each criterion was evaluated by comparing the analytical results with the long-term prestress force data collected by the optical fiber sensor. Based on these results, this study aims to provide basic data on long-term prestress loss, which is useful for the maintenance of PSC structures.
Table 1 (continued). Morandi Bridge [6] | 2018 | Bridge collapse due to the corrosion of prestressed members and anchorages
Short-Term Performance Verification-Prestress-Force Management Status and OSPC

A stable long-term measurement method to overcome the disadvantages listed above is therefore required to manage the prestress force of a PC structure. It is desirable to avoid attaching the sensor to the outside of the strand, to protect the sensor from external factors, and to use a sensor that is easy to install and whose connection line is easy to manage. This study was conducted using a PC strand with an embedded FBG sensor developed by the Korea Institute of Civil Engineering and Building Technology (optical sensor PC strand; OSPC). The OSPC strand was manufactured by producing a carbon-fiber-reinforced polymer (CFRP) core wire containing an optical fiber with an FBG that could perform measurements, and replacing the core wire of a normal strand with the produced core wire. The sensor was located inside the core wire and moved together with the strand, providing a high level of accuracy while exhibiting excellent durability, owing to the use of the optical fiber. Because the FBG sensor could be deployed in several places along the core wire, it was possible to monitor the prestress force at each part inside the strand [16]. Figure 1 shows the appearance of the developed OSPC strand and the tensile test results. The maximum tensile strength of the OSPC strand was about 2200 MPa, and the yield strength was also higher than that of a normal PC strand.
Nevertheless, the modulus of elasticity was about 195 GPa, which was lower than the modulus of elasticity of about 200 GPa of a normal PC strand. Although the reliability of the prestress force measurement of the OSPC strand was verified by Kim et al., that study aimed to determine the reliability of short-term prestress force measurement, with no verification of the long-term performance. Therefore, in this study, strand specimens were produced for creep and relaxation tests conducted inside a laboratory equipped with a constant-temperature and constant-humidity function. After the long-term performance of the OSPC strand was verified, it was installed in a concrete beam specimen and prestressed with a force equivalent to the design prestress force, to examine the change in prestress force over a long period of time while exposed to external environments with varying temperatures and humidities.

Long-Term Performance Verification-Creep Test

A creep test was performed to verify the long-term performance of the OSPC strand alone. For comparison, the test was performed on a normal PC strand as well as the OSPC strand. The creep test examined the change in strain under constant stress, and it was crucial to apply a constant load to the OSPC strand throughout the test. To enable this, this study devised a load amplifier, as shown in Figure 2, and applied it to the experiment. The load amplifier used the principle of the lever and was designed to use steel plates as weights and to apply the amplified load to the specimen according to the distance ratios of the lever. Depending on the length ratios of a to b and c to d in Figure 2a, a load equivalent to about 80 times the actual weight was applied to the specimen. For accurate testing, monoheads were used at both ends of the specimen to create anchorages and were installed in the load amplifier.
The strain was measured with an optical fiber sensor in the OSPC strand, and an electrical resistance strain sensor attached to the surface of the outer wire was used. In addition, to measure the relative slip between the core wire and the outer wire, a 10-mm displacement gauge was installed at the upper and lower parts of the specimen, and a load cell was installed to measure the change in the load. The load introduced by the load amplifier was about 175 kN, and data were obtained by performing measurements with each sensor while carrying out the test for more than 1000 h.
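The lever-based load amplification above can be sketched as the product of two arm ratios. The arm lengths below are hypothetical and chosen only so that the combined ratio reproduces the roughly 80-fold amplification and the ~175 kN load reported in the text.

```python
# Compound-lever load amplifier (Figure 2a): the weight is amplified by the
# product of the two lever-arm ratios, F = W * (b/a) * (d/c).
# The arm lengths are hypothetical; only the ~80x combined ratio and the
# ~175 kN target load come from the text.

def amplified_load_kN(weight_kN: float, a: float, b: float,
                      c: float, d: float) -> float:
    """Load applied to the specimen by a two-stage lever amplifier."""
    return weight_kN * (b / a) * (d / c)

ratio = (800 / 100) * (1000 / 100)    # assumed arms -> 8 x 10 = 80
weight_needed = 175.0 / ratio          # steel-plate weight for a ~175 kN load
print(ratio, round(weight_needed, 2))  # 80.0 and ~2.19 kN (~223 kg of plates)
```

The design choice of a compound lever keeps the dead weight manageable: with an 80-fold ratio, roughly 220 kg of steel plates suffices to hold 175 kN on the strand for the entire 1000 h test.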
Figure 3 shows the relative displacement-time curve, which is one of the creep test results of the normal PC and OSPC strands. The relative displacement was obtained by subtracting the displacement of the outer wire, located outside, from the displacement of the core wire, located in the center of the strand.
The relative displacement of the OSPC strands ranged from about 0.3 mm to 1.6 mm at the beginning of loading in the upper and lower parts. The length of the specimen used in the creep test was very short, and the relative displacement varied for the normal PC strand and the OSPC strand, depending on the installation condition of the anchorage before loading. However, both strands showed a tendency for the change in relative displacement to decrease as the load was continuously applied over time. After a certain period of time, only the difference in the initial relative slip remained, with almost no subsequent change being observed.

Figure 2. Creep test: (a) load amplifier, (b) creep test, (c) installation of sensors, (d) upper and lower displacement meters.

Figure 4 shows a comparison of the strain-time curves of the normal PC and OSPC strands.
While some change in strain was observed in the initial stage of loading, it gradually stabilized over time. The abrupt change seen in the middle of the curve indicated a deviation from the initial condition in the constant-temperature and constant-humidity (CTCH) chamber, owing to a power outage that occurred during the test. The CTCH chamber was repaired to continue with the test. The power outage changed the temperature inside the laboratory for a certain period of time, and this change in temperature resulted in a change in the strain of the specimen. After the laboratory temperature was normalized, the strain returned to the original pattern for both the normal PC and OSPC strands. After examining the change in strain of the OSPC strand under the load condition for more than 1000 h after the load was introduced, almost the same pattern was observed as in the normal PC strand.
Therefore, the OSPC strand developed in this study showed highly reliable measurement performance.

Long-Term Performance Verification-Relaxation Test

In order to verify the long-term performance of the OSPC strand, a relaxation test was performed in addition to the creep test. The relaxation test, which examined the change in the load under the condition of constant displacement, was performed according to the test method of KS D 7002 [17]. The loading rate was (200 ± 50) N/mm² per min, and the load corresponding to 70% of the minimum tensile load was applied and maintained for (120 ± 2) s. The decrease in load was measured while maintaining the distance between the anchorages for 1000 h. The final relaxation value was expressed as the percentage of the reduced load with respect to the original load. Figure 5 shows a schematic diagram of the test apparatus. The specimen was designed to be mounted on a rigid steel frame to prevent deformation, and the amount of change in the load was measured through a load cell. Both ends of the specimen were each fixed using a mono anchorage, and the specimen was prestressed with a hydraulic jack. When a short strand was prestressed using a hydraulic jack, a significant amount of instantaneous loss occurred depending on the wedge slip amount and the performance of the hydraulic jack. Therefore, a hydraulic nut was installed between the frame and the load cell to recover the loss immediately after prestressing. The prestress force was first introduced by the hydraulic jack, and the hydraulic nut was used to meet the target load in order to compensate for the immediate loss caused by the removal of the hydraulic jack. The normal PC strand to be compared was SWPC7BL, with a tensile load of 261 kN, and a 70% load of 182.7 kN was applied to the specimen in this study.
After prestressing to 182.7 kN and maintaining the load for 120 s, the amount of load reduction and change in strain over 1000 h were examined. Furthermore, strain sensors were attached to both sides of the steel frame in preparation for a case in which the load was reduced in the steel frame. Figure 6 shows the introduction of a load through a mono prestressing jack and a picture of the experiment. Figure 7a shows the change curve of the load decrease with time for the normal PC and OSPC strands. In the figure, the degrees of load reduction, compared to the initial loads, were about 5.75 kN for the normal PC strand and about 5.25 kN for the OSPC strand.
The initial loads were 180.50 kN for the general strand and 182.70 kN for the OSPC strand, which were about 3.19% and 2.87%, respectively, when calculated as a ratio. These results indicated that the relaxation performance of the OSPC strand was higher than that of the normal PC strand. However, such a load reduction did not satisfy the standard, as it exceeded the standard value of 2.5% for normal low relaxation strands. Nonetheless, this value did not consider the load reduction in the steel frame. With respect to the strain reduction in the steel frame shown in Figure 7b, there was a difference in strain owing to temperature change over time, with a reduction of about 5 to 10 με being observed. Considering the dimension of the steel frame of 150 mm × 75 mm × 6.5 mm × 10 mm with a cross-sectional area of 2371 mm², the load was about 1.2 to 2.4 kN. Because the frame was installed on both sides, the load reduction could be seen as about 2.4 to 4.8 kN. Considering the load reduction in the frame calculated in this way, the load reduction for the normal PC strand ranged from 0.95 to 3.35 kN, and the load reduction for the OSPC strand was 0.45 to 2.85 kN. Calculated as a ratio, it was about 0.53% to 1.86% for the normal PC strand and about 0.25% to 1.56% for the OSPC strand, with both strands satisfying the standard of 2.5% for low relaxation strands. Therefore, the OSPC strand demonstrated a relaxation performance equivalent to or higher than that of a normal PC strand, satisfying the standard, which indicated that it had sufficient applicability to actual structures.
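The frame-corrected relaxation ratios above can be reproduced directly from the reported values (the initial loads, the measured load drops, and the 2.4-4.8 kN combined frame loss):

```python
# Frame-corrected relaxation ratios, recomputed from the values reported in
# the text: load drops of 5.75 kN (normal strand) and 5.25 kN (OSPC strand),
# initial loads of 180.50 kN and 182.70 kN, and a combined load reduction in
# the two steel frames of about 2.4 to 4.8 kN.

def corrected_ratio_pct(drop_kN: float, frame_loss_kN: float,
                        initial_kN: float) -> float:
    """Relaxation ratio after subtracting the load reduction in the frame."""
    return (drop_kN - frame_loss_kN) / initial_kN * 100.0

frame_loss_kN = (4.8, 2.4)  # bounds of the combined frame loss (both sides)

normal = [round(corrected_ratio_pct(5.75, f, 180.50), 2) for f in frame_loss_kN]
ospc = [round(corrected_ratio_pct(5.25, f, 182.70), 2) for f in frame_loss_kN]

print(normal)  # about 0.53% to 1.86% -> below the 2.5% low-relaxation limit
print(ospc)    # about 0.25% to 1.56%
```

Both corrected ranges fall below the 2.5% limit for low relaxation strands, matching the conclusion drawn in the text.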
Fabrication of PSC Beam Specimens

After the long-term performance verification of the strand specimens was completed, the beam specimens were fabricated to examine the behavior of OSPC strands installed in actual concrete structures.
The dimensions of the specimens were based on a cross section of 300 mm in width × 520 mm in height at the center, and 300 mm in width × 600 mm in height at the support, considering the anchorage. As shown in Figure 8, the main reinforcement bars and strands were arranged inside the specimen, and the effective depth of the tendon was 460 mm. The strands were divided into composite and non-composite specimens according to the attachment method, with three strands being arranged for each specimen, and one of them was replaced with an OSPC strand. Figure 9 shows the production of the specimen and the introduction of the prestressing force. When the wavelength data measured by the optical fiber sensor of the OSPC strand were converted into a prestress force, the maximum prestress force was 184.1 kN for the composite specimen and 179.5 kN for the non-composite specimen. A total of six specimens were produced in combination with specimens for different purposes, but only the results of the specimens classified as composite and non-composite were used in this study.
The changes in prestressing force were observed for about 1600 days in the specimens after prestressing and preparation for long-term measurement. The place where the specimens were located experiences severe temperature changes, and the range of temperature change over the four seasons was about 40 °C. The long-term measurement started around December, when the temperature was low, and the change in the wavelength of light emitted from the optical fiber sensor due to the seasonal change in temperature was converted into a prestress force, as shown in Figure 10. The process of converting the optical wavelength value of the optical fiber sensor into a prestress force was applied as in the study by Seongtae Kim et al. [18]. As shown in Figure 11, there was a slight difference in the wavelength values of the optical fiber sensors in the composite and non-composite specimens, but the change was in line with the trend of seasonal temperature change.
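The wavelength-to-force conversion in the study follows Kim et al. [18], whose exact calibration is not reproduced here. A generic FBG sketch of the same idea is shown below; the photo-elastic coefficient of about 0.22 is a typical literature value, the strand cross-sectional area is an assumption, and temperature effects are ignored at this step (they are handled by the thermal calibration described in the next section).

```python
# Generic FBG wavelength-to-force sketch. The study's calibration (Kim et al.
# [18]) is not reproduced; p_e ~ 0.22 is a typical photo-elastic coefficient
# for silica fiber, the 195 GPa modulus is the OSPC value from the text, and
# the wavelengths and strand area are assumptions for illustration.

P_E = 0.22  # typical effective photo-elastic coefficient (assumption)

def fbg_strain(lambda_nm: float, lambda0_nm: float) -> float:
    """Mechanical strain from the Bragg wavelength shift (temperature ignored)."""
    return (lambda_nm - lambda0_nm) / lambda0_nm / (1.0 - P_E)

def strand_force_kN(strain: float, E_MPa: float = 195000.0,
                    area_mm2: float = 138.7) -> float:
    """Tendon force from strain via F = E * eps * A."""
    return strain * E_MPa * area_mm2 / 1000.0

eps = fbg_strain(lambda_nm=1558.0, lambda0_nm=1550.0)  # assumed 8 nm shift
print(round(strand_force_kN(eps), 1), "kN")             # ~179 kN here
```

With these assumed values an 8 nm shift maps to a force of the same order as the roughly 180 kN prestress levels reported above, which illustrates why a millimeter-scale wavelength resolution is not needed: picometer-level FBG interrogation resolves sub-kilonewton force changes.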
Experiment for Temperature Compensation-Calculation of Thermal Expansion Coefficient and Data Analysis

The optical fiber sensor has excellent accuracy and durability, but it is vulnerable to temperature change. Therefore, temperature compensation is absolutely necessary where an optical fiber sensor is used. The OSPC strand behaved together with materials having various coefficients of thermal expansion, as it was combined not only with the CFRP rod but also with the steel outer wires of the strand, finally forming a composite structure with the poured concrete. Therefore, this study attempted to extract only the pure change in prestress force in the OSPC strand through a separate thermal calibration experiment. For this experiment, composite and non-composite concrete beams were fabricated to match the conditions of the beam specimens used for long-term performance verification as closely as possible. Figure 11 below shows the dimensions of the specimens and their production.
A chamber capable of temperature and humidity control was used for the temperature compensation experiment. To minimize effects other than temperature change, a Teflon sheet with a small coefficient of friction was laid on the floor, and the test objects were placed on it during the experiment. Figure 12 shows the installation and measurement of a specimen in the temperature chamber.
The temperature was maintained at 50 °C for 1 h, then lowered to −15 °C over 60 min, maintained at −15 °C for 60 min, and then increased back to 50 °C over 60 min, with the humidity fixed at 0%; the temperature inside and outside the specimen and the strain of the OSPC strand were measured by repeating the same temperature pattern four times.

Figure 12. View of temperature-compensation experiments: (a) installation of specimen; (b) view of measurement.

Figure 13 shows the temperature change measured by the thermocouples installed inside and outside the specimens and the change in strain calculated from the wavelength values of the optical fiber sensors in the composite and non-composite specimens. As shown in the figure, the thermal expansion coefficient of the composite specimen, in which the entire OSPC strand was embedded in concrete, was about 17.266 με/°C.
This was slightly smaller than the sum of the thermal expansion coefficients of the steel and optical fiber used for normal concrete and PC strands. The thermal expansion of the optical cable may have been almost entirely consumed by the influence of the CFRP rod, whose thermal expansion coefficient is close to zero, so the response was governed mainly by the thermal expansion of the steel outer wires and the concrete surrounding the CFRP rod. On the other hand, for the non-composite specimen, in which the middle part formed a tube, the thermal expansion coefficient was about 9.933 με/°C. Compared to the composite specimen, the behavior with respect to temperature did not show a constant pattern, and the thermal expansion coefficient was somewhat smaller owing to the non-composite section in the hollow tube.
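The coefficients of 17.266 and 9.933 με/°C are, in effect, least-squares slopes of sensor strain against chamber temperature over the four cycles. A minimal sketch with synthetic data (the real input would be the measured strain and thermocouple series):

```python
import numpy as np

# Synthetic strain-vs-temperature series mimicking the composite specimen;
# real data come from the four chamber cycles between -15 and 50 degrees C.
temp_c = np.array([-15.0, 0.0, 15.0, 30.0, 50.0])
strain_ue = 17.266 * temp_c + 120.0  # microstrain; the 120 ue offset is arbitrary

# Least-squares slope of strain vs temperature = thermal expansion coefficient
alpha_ue_per_c, _ = np.polyfit(temp_c, strain_ue, 1)
```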
Figure 14 below shows the results obtained by applying the thermal expansion coefficients calculated in the experiment above to the actual long-term prestress-loss data to remove the change due to temperature. As shown in Figure 15, the range of change in prestress force was significantly reduced in the upper and lower parts after removing the temperature-dependent change. Hyunjong Seong et al. [19] developed a ground anchor with a built-in optical fiber sensor and conducted a study to calculate the temperature compensation coefficient. Jianping He et al. [20] analyzed the effect of temperature using a strand with a built-in optical fiber sensor, similar to this study. The measurement data of an optical fiber sensor are thus significantly affected by temperature change, and the effect of temperature is not considered in the finite-element analysis below for the long-term prestress loss according to the design standards of each country. Therefore, it is absolutely necessary to exclude the effect of temperature from the long-term prestress force change data in this study.
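Removing the thermal component amounts to subtracting α·ΔT from the measured strain before converting to force. A sketch of this step, with the strand modulus and area again being assumed values:

```python
import numpy as np

ALPHA_UE_PER_C = 17.266            # composite-specimen coefficient from the calibration
E_MPA, A_MM2 = 195_000.0, 138.7    # assumed strand modulus (MPa) and area (mm^2)

def thermally_compensated_force_kn(strain_ue, temp_c, t_ref_c=20.0):
    """Subtract the thermal strain alpha*(T - T_ref) from the measured strain
    and return the remaining mechanical prestress change in kN (negative = loss)."""
    strain_ue = np.asarray(strain_ue, dtype=float)
    temp_c = np.asarray(temp_c, dtype=float)
    mech_ue = strain_ue - ALPHA_UE_PER_C * (temp_c - t_ref_c)
    return E_MPA * A_MM2 * mech_ue * 1e-6 / 1000.0
```

A reading whose strain is purely thermal (e.g. α × 10 με at a 10 °C rise above the reference) compensates to zero force change, as intended.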
Finite-Element Analysis and Interpretation

In order to analytically verify the change according to the long-term prestress loss of the beam specimen installed with the OSPC strand, a finite-element model was prepared, as shown in Figure 15. MIDAS CIVIL 2020 (Seongnam, South Korea), a general-purpose structural analysis program, was used for the finite-element analysis, and beam elements were used for modeling. Modeling was conducted using the dimensions of the finished specimen and the values from the compressive strength test results. Time-dependent material behavior required for long-term time history analysis, such as creep and shrinkage, was applied in accordance with CEB-FIP (2010), Korea Standard (2015), and ACI 318-02 (2002), the standards of each country embedded in the MIDAS program. For the relaxation coefficient, the widely used equation developed by Magura [21] was applied in MIDAS, entered with the value for a low-relaxation strand. The curvature friction coefficient was not considered, and the wobble friction coefficient was taken as 0.0066 (1/m). Both composite and non-composite specimens were considered, and the slip amount of the fixing unit was defined as 6 mm. The initial prestress force was 180.6 kN, and the change in the prestress force over about 1600 days was analyzed considering the construction sequences. The prestress force acting on the tendon was analyzed for each construction phase, and the analysis was divided according to the design regulations and the composite/non-composite status, as shown in Figure 16. The arrangement of the strands was modeled using straight lines at the same locations as the OSPC strands installed in the specimens. Table 2 below summarizes the main input values for each criterion applied to the finite-element analysis.
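The Magura relaxation relation applied in MIDAS is commonly written as f_p(t)/f_pi = 1 − (log10 t / C)(f_pi/f_py − 0.55), with t in hours and C = 45 for low-relaxation strands (C = 10 for stress-relieved ones); it applies for initial stresses above 0.55 f_py. A sketch of this commonly cited form (not the paper's exact MIDAS input):

```python
import math

def magura_stress_ratio(t_hours, fpi_over_fpy, c=45.0):
    """Remaining stress ratio f_p(t)/f_pi after intrinsic relaxation,
    per the Magura et al. equation; valid for fpi/fpy >= 0.55, t >= 1 h.
    c = 45 corresponds to a low-relaxation strand (c = 10 stress-relieved)."""
    if fpi_over_fpy <= 0.55:
        return 1.0  # relaxation is taken as negligible below 0.55 f_py
    return 1.0 - (math.log10(t_hours) / c) * (fpi_over_fpy - 0.55)
```

For a tendon stressed to 0.7 f_py, this form predicts only about a 1% relaxation loss after 1000 h with C = 45, which is why low-relaxation strands dominate practice.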
There was no significant difference in the long-term prestress loss between the composite and non-composite specimens for any of the criteria applied in the analysis. Because KS differs little from CEB-FIP, the difference between these two sets of standards was even smaller. However, the analytical results showed a difference in the change in prestress loss in the initial stage after prestressing. This may have been due to the difference in the development of compressive strength over time, which affects the creep of concrete, as KS follows the compressive strength expression of ACI instead of that of CEB-FIP. The analytical results based on the ACI standard were slightly lower than those based on KS or CEB-FIP, indicating a larger long-term prestress loss. In the MIDAS analysis program, as in general numerical calculations, a wedge slip of 3 to 6 mm is assumed for calculating the immediate prestress loss generated at the anchorage. Because the total length of the specimen was very short (6350 mm) in this study, a large immediate prestress loss was expected at the anchorage; therefore, the wedge slip was set to 6 mm.
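The immediate loss from a wedge draw-in Δ over a short tendon of length L can be estimated to first order as ΔP = E_p·A_p·Δ/L when friction is neglected and the slip is assumed to relieve strain uniformly over the whole tendon. A sketch with illustrative, assumed strand properties (not the MIDAS inputs):

```python
def anchorage_slip_loss_kn(slip_mm=6.0, length_mm=6350.0,
                           e_mpa=195_000.0, a_mm2=138.7):
    """First-order immediate prestress loss (kN) from wedge draw-in,
    assuming the slip relieves strain uniformly over the full tendon
    and neglecting friction. e_mpa and a_mm2 are assumed values."""
    return e_mpa * a_mm2 * (slip_mm / length_mm) / 1000.0
```

With these assumed values the estimate is about 25.6 kN per strand; the finite-element value differs because MIDAS uses the actual tendon properties and anchorage model, but both indicate that a 6 mm slip over a 6.35 m tendon produces a substantial immediate loss.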
According to the results of the finite-element analysis, the immediate loss occurring at the anchorage was 14.62 kN. However, when the prestress force was applied to the actual specimens, it was reduced from 184.1 kN to 132.7 kN in the composite specimen and from 179.5 kN to 128.3 kN in the non-composite specimen. This indicates that the immediate prestress loss while removing the hydraulic jack, about 51.2 kN to 51.4 kN, was significantly larger than the theoretical values based on the analysis. In general, the immediate prestress loss tends to become larger as the specimen becomes shorter. When removing the hydraulic jack, the wedge must be pushed fully home to seat the wedge of the anchorage well; the performance of the hydraulic jack itself may not have supported this. Accordingly, in order to compare the trend of long-term prestress loss, the analysis results were matched to the experimental results at the prestress force value immediately after the loss, relative to the initial prestress force. The trends of the changes in long-term prestress loss identified using this method were then classified into composite and non-composite specimens, as shown in Figures 17 and 18 below.
As shown in the graphs, the results based on KS and CEB-FIP initially followed a trend similar to that of the experimental results in both the composite and non-composite specimens up to about Day 200. After Days 300 and 400, the analytical results based on the ACI standard started to show a trend similar to that of the experimental results in the composite specimen.

Figure 18. Comparison of long-term prestress loss in non-composite specimens.
Even until after Day 1600, the change stayed between the maximum and minimum values of the data obtained from the actual experimental specimens. On the other hand, for the non-composite specimens, the analytical results based on the ACI standard remained at a low level close to the minimum values of the experimental data, while the analytical results based on the KS and CEB-FIP standards remained between the maximum and minimum values. Nevertheless, the difference among the analytical results based on the ACI, KS, and CEB-FIP standards was about 5 kN, only about 3.8% of the roughly 130 kN prestress force that was actually introduced. Despite the slight difference in the pattern of prestress loss in the early and late stages, there appeared to be no difficulty in interpreting the long-term prestress loss. However, in actual structures, it is difficult to assess the state based only on analytical values, and sudden damage may occur owing to unexpected external factors. Therefore, it is necessary to prepare a method for the long-term monitoring of the prestress force in PSC structures, such as the OSPC strand. The OSPC strand developed in this study enables the stable measurement of the change in prestress force in a PSC structure throughout its lifespan, starting from the initial prestressing. As the OSPC strand can immediately detect any change in the prestress force due to sudden structural damage, it is expected to contribute to the effective maintenance of the structure.

Conclusions

In this study, a PC strand with an embedded FBG optical fiber sensor (optical sensor PC strand; OSPC strand) was developed, and its performance was verified through tensile, creep, and relaxation tests.
The following conclusions were drawn from the long-term measurement of the prestress force in the PSC structure for more than 1600 days and from comparing the experimental results with the results of the finite-element analysis.
(1) The tensile test results indicated that prestress force measurement was performed smoothly from the elastic range to the point of fracture using the OSPC strand developed in this study, and the ultimate strength of the OSPC strand was about 2200 MPa, a performance similar to or better than that of a general PC strand at 1860 MPa.
(2) In the creep test, in which the load is kept constant, and the relaxation test, in which the displacement is kept constant, each carried out for more than 1000 h, the long-term performance of the OSPC strand alone was also superior to that of the general PC strand.
(3) Measurement of the change in prestress force for more than 1600 days, after manufacturing PSC beam specimens with an installed OSPC strand and introducing the prestress force, showed that the change in prestress force closely followed the change in external temperature. Considering the sensitivity of the optical fiber sensor to temperature, separate composite and non-composite test specimens were produced depending on whether the optical fiber sensor was embedded in concrete, and experiments with varying temperature were performed to calculate the coefficient of thermal expansion for each case.
(4) Finite-element analyses expressed the long-term change in prestress force numerically according to standards such as ACI, CEB-FIP, and KS, and their suitability was determined by comparing the analytical results with the prestress force data of the actual beam specimens, considering the thermal expansion coefficient.
Long-term performance verification of the OSPC strand was completed, and the tension of the PC strand inside an actual structure was efficiently measured. This will enable the efficient maintenance of PSC structures.
Structural analysis of retinal photoreceptor ellipsoid zone and postreceptor retinal layer associated with visual acuity in patients with retinitis pigmentosa by ganglion cell analysis combined with OCT imaging

Supplemental Digital Content is available in the text.

Introduction

Retinitis pigmentosa (RP) is a hereditary degenerative disease of the retina. [1,2] RP is a cause of visual impairment with a prevalence of about 1 in 4000 [3,4] and is characterized by slowly progressive, concentric constriction of the visual field. [2,5] The earliest histopathological changes in all forms of RP involve shortening of the photoreceptor segments, so there are a number of studies on outer retinal changes, [5][6][7][8][9] while relatively few articles address inner retinal thickness. [10,11] Previous results showed that the inner retinal layers were preserved or even thickened compared to the outer retinal layers, which was associated with neuronal-glial remodeling, especially in early-stage RP patients. [12] In addition, a morphometry study showed that the ganglion cell number was not statistically different across the various stages of RP. [13] Therefore, currently proposed strategies to treat RP are based on the assumption that the inner retinal cells are intact. [2] However, other articles have reported fewer ganglion cells in RP patients than in normal controls. [14,15] Besides, the inner retinal layer thicknesses, including the inner limiting membrane, retinal nerve fiber layer (RNFL), ganglion cell layer (GCL), inner plexiform layer (IPL), and inner nuclear layer, measured by manual methods differed widely among previous studies, [11,12,15,16] and RNFL thickness varies dramatically among individuals. [17] Therefore, the general conclusion that the inner retinal cells are intact needs to be reassessed, especially with modern devices.
Ganglion cell-inner plexiform layer (GCIPL) thickness, which is topographically less variable among normal individuals, reflects both the retinal ganglion cell (RGC) bodies and the dendrites originating from the macular region. It has been confirmed to be valuable for diagnosing glaucoma when obtained by the ganglion cell analysis (GCA) incorporated into newer optical coherence tomography (OCT) devices, reflecting localized RNFL defects better than peripapillary RNFL in advanced glaucoma. [18,19] However, no article has studied GCIPL changes in RP patients by GCA, and the thicknesses in previous studies were measured manually at 2 or 4 points [20,21]; therefore, it is necessary to reconfirm the condition of the ganglion cells in RP. Visual function in RP has been shown to be related to morphological changes of the photoreceptors in the macular area. [22][23][24][25] Unfortunately, the relationship between GCIPL thickness (GCIPLT) and visual function has not been investigated. With the advent of higher-resolution spectral-domain OCT (Cirrus high-definition 5000, Carl Zeiss Meditec, Inc.; Dublin, CA), combined with more precise segmentation of the retinal layers, GCIPLT measurement has become possible. [26] Therefore, it is necessary to use GCA to assess the morphological changes and thickness of the GCIPL in RP patients. In the present study, we used a modern OCT device with an axial resolution of 5 μm to evaluate the changes in the receptor ellipsoid zone (EZ) and the postreceptor retinal layer (GCIPL) in patients with RP. The relationships between best corrected visual acuity (BCVA) and the EZ and GCIPLT were investigated using a linear regression model. The factors affecting visual acuity were further analyzed by multiple linear regression.

Materials

In this study, 70 eyes of 35 patients (11 female, 24 male; age range 29-76 years; median age 51.5 years) with a clinical diagnosis of RP were examined, and the average diagnostic duration was 32 ± 6.5 years.
Sixty-five eyes of 35 patients (11 female, 24 male; age range 26-78 years; median age 51.8 years) without retinal disease served as normal controls. The study was conducted at Shanghai Tenth People's Hospital between December 2012 and April 2015, in accordance with the tenets of the Declaration of Helsinki. Data collection and analysis were approved by the hospital ethics committee, and informed consent was obtained from all participants. RP patients were diagnosed based on clinical history, funduscopic appearance, visual field testing, and full-field electroretinogram records. All participants underwent a complete ophthalmologic examination, including measurement of BCVA, noncontact tonometry, slit-lamp biomicroscopy, indirect ophthalmoscopy, color fundus photography, full-field electroretinography (ERG), central visual field testing, and spectral-domain OCT. BCVA was expressed as the logarithm of the minimum angle of resolution (logMAR); finger count, hand movement, light perception, and no light perception were designated as 2, 3, 4, and 5, respectively. All participants met the selection criteria: the ERG was markedly reduced or had no rod response, and the central visual field was severely lost. Although some eyes had good BCVA, the central visual field defect was severe (mean deviation < −20 dB by Octopus Field Analyzer (Haag-Streit, Koeniz, Switzerland)), and severe retinal atrophy was shown in the fundus images.

Optical coherence tomography evaluation

High-definition (HD)-OCT (Cirrus high-definition 5000 OCT) with an axial resolution of 5 μm was performed on all 70 eyes with RP and on the 65 eyes of the normal controls. Cross-sectional images of 6 mm horizontal and vertical scans through the central fovea were obtained. A macular cube 512 × 128 scan was obtained by Cirrus HD-OCT to measure the central foveal thickness (CFT) and GCIPLT data.
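The BCVA coding described above (logMAR values for measurable acuities, fixed numeric codes for off-chart categories) can be sketched as a small helper; the function name and structure are illustrative, not from the paper.

```python
# Off-chart visual acuity categories and their numeric codes, as designated
# in the study (finger count = 2, hand movement = 3, and so on).
OFF_CHART = {
    "finger count": 2.0,
    "hand movement": 3.0,
    "light perception": 4.0,
    "no light perception": 5.0,
}

def bcva_score(value):
    """Return the numeric BCVA score: the logMAR value itself for a
    measurable acuity, or the fixed code for an off-chart category."""
    if isinstance(value, str):
        return OFF_CHART[value.lower()]
    return float(value)
```

This keeps measurable and off-chart eyes on a single ordinal scale, which is what makes the rank-based statistics used later applicable.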
The macular thickness in the central subfield and the outer rings, bounded by 3 and 6 mm concentric circles, was computed using the automated software algorithm. Central subfield thickness was defined as the average thickness in the central 1 mm subfield centered on the fovea. Average macular thickness, foveal thickness, outer macular (superior, nasal, inferior, and temporal) thicknesses, and macular volume were obtained for each eye. The GCA algorithm was incorporated into the newer Cirrus OCT software version 6.5, with an annulus of inner vertical and horizontal diameters of 1 and 1.2 mm, respectively, and outer vertical and horizontal diameters of 4 and 4.8 mm, respectively. As reported, [27] the outer ring size includes the area where the GCL is thickest in a healthy eye. The GCA algorithm identifies the outer boundaries of the RNFL and the IPL, between which lies the GCIPL. GCIPLT was analyzed for 8 parameters (average, minimum, and 6 sectors: superonasal, superior, superotemporal, inferotemporal, inferior, and inferonasal) by the GCA algorithm embedded in Cirrus OCT. Grayscale images were used for more precise identification and measurement of the EZ. EZ length was measured in the horizontal and vertical scans, and an average value was obtained. Outer retinal thickness (ORT) was defined as the region from the outer plexiform layer to the retinal pigment epithelium (RPE)/Bruch complex, consisting of the outer plexiform layer, outer nuclear layer, external limiting membrane (ELM), myoid zone, EZ, interdigitation zone, and RPE/Bruch complex. ORT was measured 1 mm from the fovea in the horizontal and vertical scans, and an average value was obtained. To measure RNFL thickness, 200 × 200 axial scans were used in a 6 × 6 mm² area around the optic disc cube. The software automatically detected the center of the optic disc and extracted a 3.46-mm-diameter peripapillary circle to calculate RNFL thickness at each point of the circle.
Four-quadrant thicknesses and the global 360° average thickness provided by Cirrus OCT were used in the study, and the correlation between GCIPL and peripapillary RNFL thicknesses was analyzed. CFT and GCIPLT were classified into 3 grades according to the statistical percentiles (33.3% and 66.6%). EZ appearance in the OCT images was graded from 1 to 3 as follows: Grade 1, shortened EZ (obtained from the cross sections of the 6 mm horizontal and vertical scans) of more than 1 mm; Grade 2, shortened EZ of less than 1 mm; and Grade 3, EZ not visible.

Statistical analysis

Statistical analysis was performed with the Statistical Package for the Social Sciences (SPSS) for Windows (version 20.0; SPSS Inc.; Chicago, IL). Graphs were made using SPSS 20.0 and GraphPad Prism 5 software (GraphPad Software, Inc., La Jolla, CA). Differences between the RP patients and the normal controls were tested with the Mann-Whitney U test and the Kruskal-Wallis test. Associations between the various OCT parameters and BCVA were examined using Spearman rank correlation. All factors affecting visual acuity were analyzed further by multiple linear regression. Significance was assessed at the P < 0.05 level.

Results

Seventy eyes of 35 RP patients and 65 eyes without retinal disease were enrolled in the study. One eye with an unacceptable GCIPL OCT image and 1 eye with an unreliable peripapillary RNFL OCT image were excluded from the RP group, so 68 eyes were analyzed in the study. There were no statistically significant differences in age or gender between the RP patients and the normal controls.

Characteristics of RP

As shown in Fig. 1, the typical fundus changes in RP patients include bone spicule-shaped pigment deposits, attenuation of the retinal vessels, waxy pallor of the optic disc, and various degrees of retinal atrophy. A disrupted EZ, concentric constriction of the visual field, and nonrecordable ERG responses are found in the late stage of RP.
Analysis of retinal thickness in the macular area
The macular thickness analysis, including the central subfield, outer ring, area thickness, and posterior pole retinal volume, is listed in Table 1.
Thickness map and macular retinal ganglion cell-inner plexiform layer thickness
As shown in Fig. 2, the images show the retinal GCIPLT map, with the OCT parameter measured in 6 sectors: a representative normal subject (left eye of a 50-year-old male, Fig. 2A) and a representative RP patient (left eye of a 54-year-old male, Fig. 2B). The images show that the RP patients had thinner GCIPL than the normal controls in all 6 sectors. GCIPLT in all quadrants was significantly thinner (P < 0.001), especially in the temporal area (Fig. 3B). Detailed GCIPLT data in the different quadrants are presented in Table 2. The corresponding ORT was classified into 3 grades according to the statistical percentiles (33.3% and 66.6%) of GCIPLT: 43.1, 73.3, and 101.4 µm, respectively. The GCIPLT thinning was consistent with the ORT thinning (P < 0.001). The correlation between GCIPLT and ORT was significant when evaluated with a linear regression model (r = 0.436, P < 0.001).
Analysis of peripapillary RNFL thickness
The thickness of the peripapillary RNFL in the different quadrants is presented for both groups. Compared with the normal controls, RNFL thicknesses in the RP patients were significantly thicker in the temporal and nasal areas (P = 0.001) and thinner in the superior and inferior areas (P < 0.05) (Fig. 4).
Length of EZ and ELM line in RP patients
In the RP patients, the EZ and ELM lines were present in varying lengths at the fovea and absent outside the macula. The lengths of the residual EZ and ELM lines depicted in the OCT images were measured (Fig. 5A); the average lengths were 911.1 ± 208.8 and 1621.2 ± 233.5 µm, respectively.
The length of the ELM line was significantly longer than the average EZ length (P < 0.005) (Fig. 5B). The correlation between ELM line length and EZ shortening was strong (r = 0.862, P < 0.001) (Fig. 5C). The original EZ length data for the RP patients are shown in the Supplementary material, http://links.lww.com/MD/B487.
Correlations between BCVA and CFT, GCIPLT, and EZ length
CFT and GCIPLT were classified into 3 grades according to statistical percentiles (33.3% and 66.6%). The length of the EZ in the OCT images was graded from 1 to 3, as follows: Grade 1, average EZ greater than 1 mm and longest EZ less than 3 mm; Grade 2, abnormal EZ less than 1 mm; and Grade 3, EZ not visible. As shown in Fig. 6A1-C1, RP patients with a thicker CFT and GCIPLT and a longer EZ had better BCVA (P < 0.001). Evaluation with a linear regression model showed significant correlations between BCVA and CFT (r = −0.5933, P < 0.001), GCIPLT (r = −0.452, P < 0.001), and EZ length (r = −0.7622, P < 0.001) (Fig. 6A2-C2); the EZ at the fovea demonstrated the strongest relationship with BCVA, followed by GCIPLT and CFT. All of the factors affecting visual acuity were further analyzed by multiple linear regression, and the regression coefficients of the significant factors were compared.
Discussion
In this study, we aimed to investigate the changes in the retinal photoreceptor EZ and the postreceptor retinal layers in RP patients by GCA and to analyze the relationship between the OCT parameters and BCVA. First, we examined CFT, and the results showed that the RP patients had significant thinning of CFT compared with the controls. The RP patients with thicker CFT had better BCVA, so there was a positive correlation between CFT and BCVA, consistent with previous studies. [15,28] The earliest histopathological changes in all forms of RP involve shortening of the photoreceptor segments, but there are few studies of inner retinal thickness.
A previous study reported that the inner nuclear, inner plexiform, and retinal GCLs were relatively intact. [11] Clinical and morphometric studies showed similar results, with the inner retinal layers preserved in the RP groups. [10,13] Moreover, inner retinal thickening has even been detected, attributed to a neuronal-glial remodeling response to photoreceptor loss in RP patients, [29,30] although decreased inner retinal thickness has also been demonstrated in later-stage patients. [12] In addition, histopathologic studies reported that RP eyes had significantly fewer ganglion cells than control eyes, [14] and the combined thickness of the GCL and IPL was significantly thinner in RP on OCT. [15] Newer animal studies showed that ganglion cells decreased because of inherited photoreceptor degeneration, [31,32] which was related to a deficit of retrograde axonal transport caused by the photoreceptor degeneration. [33] Therefore, in the present study, we used GCA as a new method for assessing the structural changes of the GCIPL, which are related to the photoreceptor degeneration in RP patients. Compared with the measurements in previous studies, GCA assesses the thickness with a macular cube 512 × 128 scan from more sites; as a result, more detailed and accurate GCIPLT data can be obtained with GCA combined with a modern OCT device with an axial resolution of 5 µm. Our results showed that GCIPLT in all quadrants exhibited significant thinning and that the thinning was greater in the superior and temporal quadrants than in the nasal and inferior quadrants, indicating that the RP patients in our study had decreased retinal thickness in both the outer and inner retina. While the thinning of ORT has been known, whether there is a relationship between GCIPLT and ORT has not been reported in RP patients.
In the present study, we assessed the correlation between GCIPLT and ORT, and the results showed that GCIPLT thinning was significantly related to the decreased ORT as well as to the macular retinal thickness. We further assessed the correlation between GCIPLT and BCVA: RP patients with thicker GCIPLT had better BCVA, but the correlation was only moderate. Possible reasons are that some eyes with poor visual acuity still preserved RGC thickness, and that the present OCT could not distinguish the transition between the GCL and the IPL; a larger study cohort and a new OCT with more precise segmentation of the retinal layers are therefore needed in future studies. To summarize, the inner retina may be relatively preserved in the early stage of RP, with thickness decreasing as the disease progresses, especially in late-stage and advanced cases. The RNFL consists of the axons originating from the ganglion cells, so we measured its thickness to see whether papillary RNFL thinning occurred along with the degeneration of the ganglion cells. In previous studies, the reported changes in RNFL thickness were inconsistent, including thickening, thinning, or relative uniformity, as measured by time-domain or spectral-domain OCT. [34-38] Relevant studies have indicated that RNFL thickening was most common in the temporal area, and RNFL thinning was found in the nasal and inferior regions. [35,37] In our study, however, the average papillary RNFL thickness did not change significantly compared with the normal controls: the RP group had a significantly thicker RNFL in the temporal and nasal quadrants, while RNFL thinning was seen in the superior and inferior areas; nonetheless, total RNFL thickness remained greater in the superior and inferior areas than in the temporal and nasal areas.
These findings show a spatial preference of RNFL thickening in the temporal and nasal directions and of RNFL thinning in the inferior and superior areas, suggesting that RNFL thickness changes in RP eyes along with the degeneration of the ganglion cells; the reasons for RNFL thickening or thinning in different areas need to be investigated in further research. Although the mechanism of RNFL degeneration differs from that of glaucoma, the sequence of RNFL thinning is similar. [39] A recent study reported that compromised axonal transport would lead to RGC loss and decreased RNFL thickness during the development of RP, with the maximum transneuronal damage occurring mainly in the inferonasal quadrant, [35] which was consistent with the results of our study.
Figure 6. Relationship between best corrected visual acuity (BCVA) and central foveal thickness (CFT), ganglion cell-inner plexiform layer thickness (GCIPLT), and ellipsoid zone (EZ). CFT and GCIPLT were graded from 1 to 3 according to statistical percentiles (33.3% and 66.6%). The appearance of the EZ in the optical coherence tomography (OCT) images was graded from 1 to 3: Grade 1, shortened EZ greater than 1 mm; Grade 2, shortened EZ less than 1 mm; and Grade 3, EZ not visible. Mean visual acuity (logMAR units) is plotted as a function of the grade of the OCT parameters. (A1-C1) The difference among the 3 groups was statistically significant (P < 0.005); retinitis pigmentosa patients with thicker CFT and GCIPLT and longer EZ had better BCVA (P < 0.001). (A2-C2) The correlations among BCVA and CFT, GCIPLT, and EZ were evaluated using a linear regression model; significant correlations were found for CFT (r = −0.5933, P < 0.001), GCIPLT (r = −0.452, P < 0.001), and EZ (r = −0.7622, P < 0.001), with the EZ at the fovea showing the strongest relationship with BCVA.
Apart from RNFL thinning, the reasons for RNFL thickening have also been studied: a thickened RNFL has been attributed to proliferative glial cells displacing the atrophic nerve fibers and to edematous changes in the residual RNFL. [34] Hood et al [11] speculated that a purely mechanical factor causes the thickened RNFL to partially fill the quadrants where the receptors have degenerated. As a result, RNFL thickening is found in the areas where GCIPL thinning is greatest in RP patients. Thus, while the regions of GCIPL thinning correspond to the areas of RNFL defects in glaucoma, [40] the relationship is almost the opposite in RP patients. These results suggest that caution is warranted when using RNFL thickness to predict the status of the RGCs in patients with RP. As for the discrepancies among the different studies, they might stem from differences in OCT devices and from discordant stages of RP in the study populations. With the progression of RP, the EZ disappears from the periphery toward the fovea on the OCT images. Therefore, measuring the length of the residual EZ in RP patients is very useful for estimating residual central visual function. According to the length of the EZ, the RP patients in our study were graded into 3 groups: Grade 1, shortened EZ greater than 1 mm; Grade 2, shortened EZ less than 1 mm; and Grade 3, EZ not visible. The relationship between BCVA and EZ was significant, in agreement with the results of previous studies. [41,42] Thus, the average length of the EZ is an important OCT parameter for monitoring visual acuity in RP patients. With regard to the outer hyperreflective lines, the ELM line was found to be longer than the EZ. Pathological studies of RP have confirmed that the earliest histopathological change is in the outer segments of the photoreceptors, [9] where the EZ is located.
In addition, the ELM consists of the photoreceptor inner segments and Müller cell processes, [43] so the EZ becomes disorganized earlier than the ELM; nevertheless, the shortened ELM lines were significantly related to the EZ defect. Taken together, GCA is a new method to measure GCIPLT, which helps reveal the true changes in the inner retinal layers of RP patients. Our further results showed that the postreceptor retinal layer (GCIPL) became significantly thinner in all quadrants along with the degeneration of the retinal photoreceptor EZ, and this thinning was correlated with BCVA in RP patients. Thus, with the advent of the GCA software combined with modern OCT devices, we assessed retinal layer changes from a new perspective and quantified the changes in both the outer and inner retinal layers, which is very useful for the diagnosis, staging, and prognosis of RP.
A study of the validity of the efficiency transfer method to calculate the peak efficiency using γ-ray detectors at extremely large distances
The full-energy peak efficiency (FEPE) curves of 2″ × 2″ and 3″ × 3″ NaI(Tl) detectors were measured at seven different axial positions from their surfaces. The calibration process was done using radioactive point sources covering a wide energy range from 59.53 up to 1408.01 keV. This work was undertaken to explain the effects of source energy and source-to-detector distance on detector efficiency calculations. The study provides an empirical formula to calculate the FEPE based on the efficiency transfer method for different detectors, using the effective solid angle ratio, at very large distances and for higher energies. A remarkable agreement between the measured and calculated efficiencies was found at source-to-detector distances <35 cm; above that, a slight difference was observed.
Introduction
γ-ray scintillation detectors are powerful and low-cost spectrometer systems (detectors and associated electronics), because spectrum acquisition can be done at room temperature (no refrigeration); they can therefore be used in various field applications under unfavorable weather conditions [1-3]. The full-energy peak efficiency (FEPE) has been calculated before as described in [3-8]. Currently, it can also be calculated by using the efficiency transfer method, empirically derived from an approximate calculation of the effective solid angle ratio. The effects of distance and energy on the full-energy peak efficiency within the energy range of interest are explained in this work.
The efficiency transfer method is a popular model for calculating the full-energy peak efficiencies (FEPEs) of a sample of interest on the basis of an experimental efficiency curve measured with the same detector, but with a calibrated sample of a different size, geometry, density, and composition [9]. The procedure saves time and resources, since sample-specific experimental calibration is avoided. It has long been established and is useful especially in environmental measurements [10]. The method is based on the assumption that the detector efficiency at a reference position, P₀, is the combination of the detector intrinsic efficiency, ε_i(E), depending on the energy, E, and geometrical factors depending on both the photon energy and the measurement geometry [11]:

ε(E, P₀) = ε_i(E) · Ω_eff(E, P₀)     (1)

where Ω_eff(E, P₀) is the effective solid angle between the source and the detector, which must include absorbing factors taking into account the attenuation effects of the materials between the source and the detector end cap. Thus, for any point source at position P, the efficiency can be expressed as a function of the reference efficiency at the same energy, E [11]:

ε(E, P) = ε(E, P₀) · Ω_eff(E, P) / Ω_eff(E, P₀)     (2)

The conversion ratio (R) of the effective solid angles is defined as:

R = Ω_eff(E, P) / Ω_eff(E, P₀)     (3)

The effective solid angle subtended by the detector and the point source was calculated as follows.
Mathematical treatment
Using the spherical coordinate system, Selim et al. derived a direct analytical elliptic integral method to calculate the detector efficiencies (total and full-energy peak) for any source-detector configuration [12].
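A minimal sketch of the transfer relation of Eq. (2), with illustrative numbers rather than the paper's measured values:

```python
def transfer_efficiency(eff_ref, omega_ref, omega_new):
    """Efficiency transfer: eps(E, P) = eps(E, P0) * Omega_eff(E, P) / Omega_eff(E, P0)."""
    return eff_ref * (omega_new / omega_ref)

# Hypothetical reference FEPE at P0 and effective solid angles at P0 and
# at the new position P (none of these are the paper's tabulated values).
eff_P = transfer_efficiency(eff_ref=2.0e-3, omega_ref=0.060, omega_new=0.030)
print(eff_P)  # halving the effective solid angle halves the efficiency
```

The conversion ratio of Eq. (3) is simply the `omega_new / omega_ref` factor inside the function.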
The pure solid angle subtended by the detector and the radioactive point source was defined as [13]:

Ω = ∫∫ sinθ dθ dφ     (4)

Taking into account all the absorber materials between the source and the detector, the effective solid angle was defined as:

Ω_eff = ∫∫ F_att sinθ dθ dφ     (5)

where the factor F_att describes the photon attenuation by all the absorber materials between the source and the detector and is expressed as:

F_att = exp(−Σᵢ μᵢ dᵢ)     (6)

in which μᵢ is the attenuation coefficient of the i-th absorber for a photon of energy E_γ, and dᵢ is the average photon path length through the i-th absorber. For an arbitrarily positioned axial point source at height h from a detector of radius R and side length L, the polar angle, θ, and the azimuthal angle, φ, at the point of entrance of the detector are defined as in [14]. The extreme values of the polar angle are:

θ₁ = tan⁻¹[R / (h + L)],  θ₂ = tan⁻¹(R / h)     (7)

In this situation the lateral distance is equal to zero and, according to the present symmetry, the maximum azimuthal angle, φ, is equal to 2π. Therefore, the effective solid angle of an axial point source can be expressed as [12]:

Ω_eff = 2π ∫₀^{θ₂} F_att sinθ dθ     (8)

This integral was evaluated numerically using the trapezoidal rule in a BASIC program.
Experimental setup
In this work, NaI(Tl) scintillation detectors (2″ × 2″ and 3″ × 3″) were used; the detector setup parameters and acquisition electronics specifications, together with the serial and model numbers, are listed in Table 1. The FEPE was measured using radioactive gamma-ray point sources (241Am, 133Ba, 152Eu, 137Cs, and 60Co) obtained from the Physikalisch-Technische Bundesanstalt (PTB); the source details are listed in Table 2. The data sheet states the half-lives, photon energies, and photon emission probabilities per decay for all radionuclides used in the calibration process, as listed in Table 3, which is available at the National Nuclear Data Center web page or on the IAEA website. The homemade Plexiglass holder shown in Fig.
1 was used to measure these sources at seven axial distances, from 20 cm up to 50 cm in 5 cm steps from the detector surface. The holder was placed directly on the detector entrance window as an absorber. In most cases, the accompanying X-rays were soft enough to be absorbed completely before entering the detector. The source-detector separations started from 20 cm so that coincidence summing corrections could be neglected. Each spectrum was labeled in the form P4D1, where P refers to the source type (point) measured at distance number 4, which equals 20 cm, and D1 refers to the 2″ × 2″ detector; thus P5D2 means that the point source was measured at 25 cm from the 3″ × 3″ detector, and so on. The spectra were acquired with the winTMCA32 software (ICx Technologies) and analyzed with the Genie 2000 data acquisition and analysis software (Canberra) using the automatic peak search and peak area calculations, with changes to the peak fit made through the interactive peak fit interface when necessary to reduce the residuals and the errors in the peak area values. The live time, run time, and start time for each spectrum were entered into spreadsheets, which were used to perform the calculations necessary to generate the experimental FEPE curves with their associated uncertainties.
Experimental efficiencies
The experimental efficiencies were determined using the previously described standard sources. The experimental efficiency at energy E, for a given set of measuring conditions, can be computed from:

ε(E) = N(E) / [T · A_S · P(E)] · Π Cᵢ

where N(E) is the number of counts in the full-energy peak, T is the measuring time (in seconds), P(E) is the photon emission probability at energy E, A_S is the radionuclide activity, and the Cᵢ are the correction factors due to dead time and radionuclide decay.
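The trapezoidal-rule evaluation of the solid-angle integral described in the Mathematical treatment section can be sketched in Python. This is the bare solid angle only (attenuation factor F_att set to 1, i.e. no absorbers), with an illustrative crystal radius and source height; the result is checked against the closed form 2π(1 − cos θ_max):

```python
import math

def solid_angle_axial(R, h, n=2000):
    """Pure solid angle subtended at an on-axis point source by a detector
    face of radius R at distance h, via the trapezoidal rule:
    Omega = 2*pi * integral_0^theta_max sin(theta) d(theta),
    with theta_max = atan(R / h) and F_att taken as 1 (no absorbers)."""
    theta_max = math.atan2(R, h)
    dtheta = theta_max / n
    # trapezoidal rule in theta; the phi integral contributes 2*pi by symmetry
    s = 0.5 * (math.sin(0.0) + math.sin(theta_max))
    for i in range(1, n):
        s += math.sin(i * dtheta)
    return 2.0 * math.pi * s * dtheta

# Illustrative geometry: a 3" crystal face (R = 3.81 cm), source 20 cm away.
omega = solid_angle_axial(R=3.81, h=20.0)
exact = 2.0 * math.pi * (1.0 - math.cos(math.atan2(3.81, 20.0)))
print(abs(omega - exact) < 1e-6)  # trapezoidal result matches the closed form
```

Adding the attenuation factor would simply multiply the integrand by exp(−Σ μᵢ dᵢ(θ)) before summation.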
The measurements were made with low-activity sources so that the dead time was always <3%, and the corresponding correction factor was obtained simply by using the ADC live time. The statistical uncertainties of the net peak areas were <1.0%, since the acquisition time was long enough to collect more than 10,000 counts. The decay correction, C_d, of the calibration source from the reference time to the run time is given by:

C_d = e^{λ·ΔT}

where λ is the decay constant and ΔT is the time interval over which the source decays, corresponding to the run time. The uncertainty in the experimental full-energy peak efficiency, σ_ε, is given by:

σ_ε = ε · sqrt[ (σ_A/A_S)² + (σ_P/P(E))² + (σ_N/N(E))² ]

where σ_A, σ_P, and σ_N are the uncertainties associated with the quantities A_S, P(E), and N(E), respectively, assuming that the only correction made is due to the source activity decay.
Results and discussion
The experimental study was carried out in the radiation physics laboratory (Prof. Y. S. Selim Laboratory, Department of Physics, Faculty of Science, Alexandria University, Egypt). This laboratory contains the several NaI(Tl) scintillation detectors (2″ × 2″ and 3″ × 3″) used in this study. The detectors were calibrated by measuring the low-activity point sources as previously described. The effective solid angle as a function of the photon energy for both scintillation detectors (2″ × 2″ and 3″ × 3″) is shown in Fig. 2a, b; it was small at the largest distance, P10, and large at the smallest distance, P4. Below 121 keV the effective solid angle increased sharply at each position. The experimental full-energy peak efficiency (FEPE) values of P4D1 and P4D2 are listed in Table 4 as reference efficiencies. The effective solid angle ratios for both detectors (D1 and D2), produced by the conversion from the P4 reference FEPE curve to the P5 up to P10 FEPE curves, are listed in Table 5. Figure 3a, b shows that the effective solid angle ratio is approximately constant for each position.
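The efficiency, decay-correction, and uncertainty formulas above can be combined in a short sketch; all numbers below are hypothetical, not the paper's measurements:

```python
import math

def fepe(net_counts, live_time_s, activity_bq, emission_prob,
         decay_const_per_s, elapsed_s):
    """Full-energy peak efficiency with decay correction:
    eps = N / (T * A_S * P) * C_d,  where C_d = exp(lambda * dT)
    corrects the certified activity to the run time."""
    c_d = math.exp(decay_const_per_s * elapsed_s)
    return net_counts / (live_time_s * activity_bq * emission_prob) * c_d

def fepe_uncertainty(eps, n, sig_n, a, sig_a, p, sig_p):
    """Relative uncertainties of N, A_S and P(E) added in quadrature."""
    rel = math.sqrt((sig_n / n) ** 2 + (sig_a / a) ** 2 + (sig_p / p) ** 2)
    return eps * rel

# Hypothetical 137Cs (661.66 keV) measurement, 5 years after certification.
lam = math.log(2) / (30.08 * 365.25 * 24 * 3600)   # 137Cs half-life ~30.08 y
eps = fepe(net_counts=50000, live_time_s=3600, activity_bq=3.7e3,
           emission_prob=0.851, decay_const_per_s=lam,
           elapsed_s=5 * 365.25 * 24 * 3600)
sig = fepe_uncertainty(eps, 50000, math.sqrt(50000), 3.7e3, 37.0, 0.851, 0.002)
print(eps > 0, sig < eps)
```

The counting statistics use σ_N = sqrt(N), consistent with the >10,000-count criterion keeping the relative peak-area uncertainty below 1%.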
The standard deviation of the effective solid angle ratio at each position was calculated and found to be <0.003, as listed in Table 5. The calculated FEPE for P5 up to P10 was obtained by multiplying the reference efficiency at P4 by the average value (conversion ratio) of the effective solid angle ratio for each position in Table 5. The percentage error between the calculated and the measured efficiency is given by equation (11) and tabulated in Table 6:

Δ(%) = [(ε_cal − ε_meas) / ε_meas] × 100     (11)

where ε_cal and ε_meas are the calculated and measured efficiencies, respectively. The relation between the source height above the detector surface and the average value of the effective solid angle ratio is shown in Fig. 4, where the effective solid angle ratio was obtained by the conversion process from P4. The ratio was fitted with an empirical formula in which R_x is the conversion ratio from P4 to P_x, and x is the axial source height (in cm) from the detector surface; the parameters of this equation are given in Table 7. The equation can be used to determine the effective solid angle ratio at different axial distances from the detector surface, which allows the FEPE to be determined theoretically, without further experimental work, at any distance within the region of interest in this study. Therefore, Eq. (2) becomes:

ε(E, P_x) = ε(E, P4) · R_x

The relative difference between the measured and the calculated values jumps from one percent to several percent in Table 6, which indicates a partial failure of the efficiency transfer methodology at very large distances and for higher energies; this can be explained by the following points. • The efficiency increases with increasing detector volume and at smaller distances from the detector surface, but the crystal is not long enough to have a reasonable efficiency for the highest-energy gamma rays. This is due to the change in solid angle and in the interaction of the gamma rays with the detector material, besides the long distance from the detector end cap.
These phenomena are related to the fact that the gamma-ray intensity emanating from a source falls off with distance according to the inverse square law. In addition, low efficiency values for a point source are measured at 20 cm and farther from the detector. At the same time, a strong increase in the detector efficiency was observed experimentally for energies <100 keV [related to the decrease in the attenuation of the end-cap material, aluminum (2.69 g/cm³)], and this effect is almost negligible at very long distances from the detector. • The contribution to the full-energy peak from the Compton process is large for larger crystals and at smaller distances from the detector surface, where the photon path length in the crystal is large; it is almost negligible for a small crystal and at very long distances from the detector. The full-energy peak itself results from gamma rays that undergo a photoelectric interaction producing an electron, which deposits its entire energy in the detector. This contribution increases the overall efficiency. • The efficiency of the detectors is higher at low source energies (where the absorption coefficient is very high) and decreases as the energy increases (owing to the fall-off in the absorption coefficient), because the photoelectric effect is dominant below 100 keV; in other words, the efficiency is higher for the bigger detector, or a smaller source distance, than for the smaller detector or a larger source distance, and it is higher for lower source energies than for higher ones. • There is an accuracy problem in measuring the height as the distance between the source and the detector increases, as well as a fine-tuning adjustment problem with the detector parameters and the geometry of the instrument used.
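The percentage-error comparison of Eq. (11), used to build Table 6, is a one-line computation; the efficiency values below are illustrative, not the paper's tabulated ones:

```python
def percent_error(eps_cal, eps_meas):
    """Relative difference (%) between calculated and measured FEPE:
    Delta(%) = (eps_cal - eps_meas) / eps_meas * 100."""
    return (eps_cal - eps_meas) / eps_meas * 100.0

# A calculated efficiency 3% above the measured one (hypothetical values):
delta = percent_error(eps_cal=1.03e-3, eps_meas=1.00e-3)
print(round(delta, 6))
```

A discrepancy below 3% corresponds to the good-agreement region at distances <35 cm reported in the Conclusion.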
Conclusion
This work leads to a simple method to evaluate the full-energy peak efficiency (FEPE) based on the efficiency transfer method over a wide energy range, for the case of an axial isotropic point source. The method is represented by an empirical formula based on the effective solid angle ratio. The obtained data show that the discrepancy between the experimental and the calculated values of the FEPE was <3% at distances <35 cm and about 7% at greater distances from the detector surface. Therefore, the present approach offers a practical way of calibrating detectors by determining a full-energy peak efficiency curve without time-consuming measurements, except at very large distances and for higher energies, where the discrepancies increase due to the change in solid angle.
A study of laparoscopic extraperitoneal sigmoid colostomy after abdomino-perineal resection for rectal cancer
Objective: To establish a procedure for laparoscopic extraperitoneal ostomy after abdomino-perineal resection (APR) and to study its safety and complications. Method: From July 2011 to July 2012, 36 patients with low rectal cancer undergoing APR were included in the study and divided into an extraperitoneal ostomy group (n = 18) and an intraperitoneal ostomy group (n = 18). Short- and long-term complications were compared between the two groups. All patients were followed up; the median duration was 17 months (range: 12-24). Results: The rates of short-term complications related to the colostomies were comparable between the two groups, except that the rate of stoma edema was higher in the extraperitoneal group (33.3% vs 0%; P = 0.008). In the intraperitoneal ostomy group, two patients developed stoma prolapse, one had stoma stenosis, and two had parastomal hernia. In contrast, no long-term complications related to the colostomies occurred in the extraperitoneal ostomy group. The rate of long-term complications was lower in the extraperitoneal ostomy group (0% vs 22.2%; P = 0.036). Conclusion: Laparoscopic extraperitoneal ostomy is a relatively simple and safe procedure, with fewer long-term complications related to the colostomy. However, the follow-up period was relatively short and needs to be extended.
INTRODUCTION
A sigmoid colostomy created through the extraperitoneal route has been reported to carry a reduced risk of associated parastomal hernia and stomal prolapse [1]. A meta-analysis of 1071 cases found that extraperitoneal sigmoid colostomy prevented parastomal hernia without increasing the risk of other post-operative complications such as stoma ischemia, obstruction, and prolapse [2]. Laparoscopy is widely used for the treatment of rectal cancer, but laparoscopic construction of an extraperitoneal colostomy is technically difficult.
Also, concerns persist about complications that might occur with this approach, namely stoma ischemia and necrosis and the development of internal hernias due to insufficiency of the peritoneum. Therefore, the intraperitoneal route for laparoscopic construction of sigmoid colostomies is usually preferred, even though it is known to carry a risk of stomal prolapse and parastomal hernia. Hauters et al. used intraperitoneal onlay mesh reinforcement at the time of stoma formation to prevent parastomal hernia [3]. Indeed, Hamada et al. reported a high incidence of parastomal hernia on CT examination of intraperitoneal colostomies, and accordingly recommended that extraperitoneal colostomy be the preferred procedure [4]. Akamoto et al. designed a special hook to facilitate stoma formation [5]. Leroy et al. also recommended laparoscopic extraperitoneal colostomy to prevent parastomal hernia [6]. In this report, we set out the results of a randomized, controlled study of extraperitoneal versus intraperitoneal sigmoid colostomy, with special attention to complications related to the colostomies.
PATIENTS AND METHODS
Patients
A single-institution, randomized, controlled trial was designed. Admission criteria were patients with distal rectal cancer undergoing abdomino-perineal resection (APR) at the National Center of Colorectal Surgery, the 3rd Affiliated Hospital of Nanjing University of Traditional Chinese Medicine, between July 2011 and July 2012. Patients with synchronous metastases (M1) were excluded from the study. Patients were randomly assigned to two groups by the random table method: extraperitoneal or intraperitoneal ostomy. The study was approved by the hospital's ethics committee, and informed consent was obtained from each participating patient.
Methods of colostomy construction
Intraperitoneal colostomies were constructed by the conventional method [5].
Extraperitoneal colostomies were constructed as follows. The sigmoid colon was transected at its middle after the rectum was completely mobilized. The proximal sigmoid colon was mobilized so that it could reach the abdominal wall without tension. A 3-4 cm opening was made in the left side of the peritoneum as the internal opening of the extraperitoneal tunnel and marked with an atraumatic clamp (Figure 1). A circular incision 2 cm in diameter was made at the pre-planned stoma site (Figure 2). The skin and subcutaneous tissues were removed and the anterior rectus sheath was opened with a cross incision. The rectus abdominis was separated, and 0.5 cm of muscle was cut away (the separation and removal are not necessary if the rectus abdominis is not very strong). The peritoneum was bluntly separated with Kocher forceps up to the internal opening of the extraperitoneal tunnel, and the diameter of the tunnel was dilated to two to three finger widths (Figure 3). The proximal sigmoid colon was pulled out through the tunnel (Figure 4), and the cavity between the peritoneum and the colon was closed with one or two sutures (Figure 5). The gap between the rectus sheath and the intestinal wall was closed with sutures. Finally, the intestinal wall and skin were sutured manually at the end of the operation (Figure 6).
Follow-up examinations and statistical analysis
The status of the stomas was closely monitored during the hospital stay and was periodically reviewed at follow-up outpatient examinations. Short-term complications were defined as complications occurring within 4 weeks, such as hemorrhage, ischemia, dermatitis, and edema. Long-term complications were defined as complications occurring after 4 weeks, such as prolapse, stomal stenosis, retraction, and parastomal hernia. Count data were presented as ratios and analysed by Fisher's exact test. Measurement data were expressed as the median ± standard deviation and analysed with the Student's t-test.
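The Fisher's exact test used for the count data can be sketched with SciPy on a hypothetical 2 × 2 complication table; the counts below are illustrative and are not meant to reproduce the paper's exact P values:

```python
from scipy.stats import fisher_exact

# Rows: extraperitoneal / intraperitoneal group.
# Columns: long-term complication / no complication (hypothetical counts).
table = [[0, 18],
         [4, 14]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(odds_ratio, p_value)
```

Fisher's exact test is preferred here over a chi-squared test because several cell counts are small (below 5).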
RESULTS

From July 2011 to July 2012, 36 patients undergoing APR were included in this study and randomly divided into an intraperitoneal ostomy group (n = 18) and an extraperitoneal ostomy group (n = 18). Three of the 36 patients had received pre-operative neoadjuvant chemoradiotherapy. One patient in each group underwent reconstruction of the stoma because of insufficient blood supply to the colostomy. Patient and tumor characteristics are described in Table 1. The mean operative time was 25.3 min in the extraperitoneal group, which was higher than that in the intraperitoneal group (14.7 min), but the difference did not reach statistical significance (P = 0.062). All the patients were followed for 12-24 months (median: 17) after operation. The rates of short-term complications related to colostomies were comparable between the two groups (44.4% vs 27.8%; P = 0.148), except that the rate of stoma edema was higher in the extraperitoneal ostomy group (33.3% vs 0%, P = 0.008). The rate of long-term complications related to colostomies was lower in the extraperitoneal ostomy group (0% vs 22.2%; P = 0.036). In the intraperitoneal ostomy group, two patients developed stoma prolapse; one, stoma stenosis; and two, parastomal hernia, whereas no long-term complication occurred in the extraperitoneal ostomy group (Table 1).

DISCUSSION

A sphincter-preserving operation is widely used for the treatment of distal rectal cancer, but 10-20% of patients still prefer APR and permanent colostomy. However, postoperative complications of this procedure can adversely affect quality of life [8,9]. Results of various studies have shown that extraperitoneal sigmoidostomy is associated with a low incidence of complications, mainly in the form of parastomal hernia and stomal prolapse. Surgeons have thus increasingly come to prefer this procedure [2,10]. However, laparoscopic construction of a sigmoid colostomy is difficult.
Extraperitoneal sigmoid ostomy is especially challenging because of difficulties encountered in closing the lateral peritoneum and pelvic floor. It has also been questioned whether extraperitoneal colostomy is therapeutically effective or carries excessive surgical complications. Intraperitoneal colostomy was therefore adapted to the laparoscopic operation. In the present study, we evaluated intraperitoneal and extraperitoneal colostomies constructed via laparoscopy. The extraperitoneal operation was similar to that of conventional open surgery but with an easier operative technique. The average operation time was 25 minutes, close to the times reported by others [3,4], and was only 10 min longer than that of intraperitoneal colostomy, without adversely affecting prognosis. The incidence of short-term complications, such as stomal ischemia or hemorrhage, was similar between extraperitoneal and intraperitoneal colostomy patients. Two patients in each group required a second operation within 5-6 days of the initial one because of stomal retraction due to thrombosis, with subsequent ischemia and necrosis. In obese patients, extracting the proximal sigmoid can be difficult. We managed this problem by expanding the peritoneal tunnel to a diameter of at least two finger widths, so that the mesentery could be removed along with the resected sigmoid colon, as long as there was sufficient blood supply. Stomal edema occurred in one-third of patients who had extraperitoneal colostomy but did not occur in any of those who had intraperitoneal colostomy. This complication may be ascribed to poor intestinal blood circulation resulting from compression by the tunnel. In all patients, the stomal edema resolved, without specific treatment, within two weeks of the operation.
The main rationale for using the extraperitoneal method of stomal construction is the possible decrease in associated long-term complications, such as parastomal hernia, stomal prolapse, and stomal retraction [11,12]. Our results justify the use of this approach, since the post-operative complication rate was lower in patients with an extraperitoneal stoma than in those with an intraperitoneal one. Indeed, there were no long-term complications in the group of extraperitoneal colostomy patients, whereas, in the intraperitoneal colostomy group, there was one instance of parastomal hernia, two of stomal prolapse, and one of stomal stenosis. We suspect that the stomal stenosis resulted from drainage of a parastomal abscess and subsequent scar formation. Although the differences in complication rates between the two patient groups did not reach statistical significance, this may have reflected the small number of patients involved. It is also possible that more complications would have become evident had the follow-up period been longer. In conclusion, laparoscopic extraperitoneal ostomy is an easy and safe procedure. It did not increase complications following the operation. The long-term complication rate was lower in the extraperitoneal ostomy group. However, the follow-up period was short and longer follow-up is needed.
Atypical wound trajectory after a tangential pistol shot

Three intermediate-range shots from a Browning, model 1955, 7.65 mm caliber, pistol were fired from the driver's seat of a car at a woman in the passenger seat. She sustained three wounds: an ultimately fatal, penetrating head wound; a graze wound across her forehead; and a tangential, perforating wound, with bullet entry over the medial sternum and exit through the right flank. Neither postmortem CT nor forensic autopsy discovered bony thoracic injuries or perforations of the thoracoabdominal cavities. There was pulmonary contusion in the medial lobe of the right lung and hemorrhage in the adipose tissue around the right kidney. The tangential bullet had left an almost 40-cm-long wound channel through a pronounced layer of subcutaneous fat. Based on 3D-reconstructed CT-data determinations, a straight bullet trajectory between entry and exit wounds would have traversed the abdominothoracic cavities, right lung, and liver. The actual trajectory, however, described a prominent curve, without signs of deflection by bone. Postulated explanations for this unusual bullet track are that the woman was twisting her body in a dynamic scene when the bullet struck; further, due to its shallow angle of incidence on the skin, the bullet was deflected to an intracutaneous path. Additionally, soft tissue resistance may have caused the bullet to yaw. Caution should, thus, be exercised when reconstructing bullet trajectories solely from entry and exit wounds, also for bullet wounds through basically homogeneous soft tissues.

Introduction

The documentation and interpretation of firearm injuries are routine in forensic casework. These tasks are performed not only for deceased victims in the context of autopsies, but also for living victims within the scope of clinical medicolegal examinations. In both situations, the macromorphological examinations are performed in conjunction with imaging methods [1][2][3][4][5][6][7].
In the reconstruction of shooting incidents, projectile trajectory and shot range are important aspects that need to be determined. Commonly, the primary goal of these reconstructions is to allow the inclusion or elimination of various scenarios during the shooting incident. In the case of perforating bullet wounds, the bullet trajectory can, for example, be calculated by simple trigonometry from the measurements of the specific locations of the entry and exit wounds on the body. This approach is, however, only applicable if the bullet followed an essentially straight path through the body and was not deflected along its way [8,9]. When projectiles glance off anatomical structures such as bones or teeth, or when they tangentially pass through fluid-filled cavities, large deflections in trajectory are possible [8]. Generally, the magnitude of these deflections is influenced by the interaction of various factors, such as construction-related properties of the projectile, velocity and energy of the projectile as it enters the body, trajectory of the projectile, transfer of energy from the projectile to the penetrated material, length of the wound channel, and the exact location where the projectile encounters body structures [8,[10][11][12][13]. In the following, we report on an atypical wound trajectory through subcutaneous adipose tissue, in which the bullet followed a curved path without being deflected by bone or organ structures.

Case report

A man in a private car brought a seriously injured woman to the emergency department of a hospital. Shortly after arrival, the woman had to be resuscitated; however, resuscitation was unsuccessful. Because the woman's injuries were ostensibly bullet wounds, the hospital informed the police. When the police subsequently questioned the man who had brought her, who was still in the waiting room of the hospital, about the incident, he stated that he had shot his partner in the car during a quarrel.
He had been sitting in the driver's seat at the time and his partner in the passenger seat. Criminal police officers then recovered a Browning, model 1955, 7.65 mm caliber, pistol from the man's car, as well as a basically undamaged lead-core bullet from the sill of the passenger door (Fig. 1).

Autopsy results

The cadaver of the middle-aged woman weighed 84 kg and was 161 cm long. Three bullet wounds were found. Due to the deposition of gunshot residue, the shots were considered to have been fired from an intermediate range. There was a horizontal graze wound across her forehead, running from left to right. In addition, there was a penetrating bullet wound in the left parietal bone. This was the ultimately fatal injury. The third injury was a perforating bullet wound, with an entry wound over the sternum, 2.5 cm to the right of the midline and 126 cm from the bottom of the feet (Fig. 2). The exit wound was located in the right flank, 22 cm to the right of the midline and 98 cm from the bottom of the feet (Fig. 3). Post-mortem CT had shown that there were no bony injuries in the thorax and that neither the thoracic nor the abdominal cavities had been penetrated. Furthermore, there were no metallic particles as a sign of potential contact between bullet and bone. An increase in density had, however, been noticed in the medial lobe of the right lung. The CT findings were corroborated at autopsy, in which pulmonary contusion in the medial lobe of the right lung and further signs of hemorrhage in the adipose tissue around the right kidney were found. The wound channel from the perforating shot described an almost 40-cm-long, curved path through a well-developed layer of subcutaneous fat. The shot was, therefore, considered to be tangential (Fig. 4).
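From the wound coordinates above, the angle of a straight entry-to-exit line relative to the body's longitudinal axis in the frontal plane follows from simple trigonometry. A minimal sketch (coordinates in cm, as measured at autopsy; depth is ignored, so this is the frontal-plane projection only):

```python
import math

# Frontal-plane wound coordinates (cm): (offset right of midline, height above feet)
entry = (2.5, 126.0)   # entry wound over the sternum
exit_ = (22.0, 98.0)   # exit wound in the right flank

dx = exit_[0] - entry[0]        # 19.5 cm lateral travel
dy = entry[1] - exit_[1]        # 28.0 cm downward travel
angle = math.degrees(math.atan2(dx, dy))  # angle to the longitudinal axis
print(f"{angle:.1f}")           # ≈ 34.9°, consistent with the ~35° calculated at trial
```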
On the basis of the wound trajectory identified in 3D-reconstructed CT data, in which the bullet track was determined as a straight line from entry to exit wound (measured distance: 36 cm), the expectation had been that the bullet would have entered the thoracic and the abdominal cavities and that it would have perforated the right lung and the liver (Fig. 5). Further CT findings had been a pronounced thoracic kyphosis, narrowing of the intervertebral spaces and the formation of bony bridges between the vertebrae, mainly in the region of the thoracic spine, as well as signs of sacroiliitis (Fig. 6). These findings are consistent with ankylosing spondylitis.

Fig. 1 The bullet retrieved from the passenger door sill. Ultimately, this bullet could be attributed to the tangential shot. The bullet was undamaged, apart from the typical grooves left by the pistol barrel.

Trial

The following forensic findings were presented during the main hearing of the trial: All three pistol shots had been fired from a range of 3 to 30 cm. The bullet that had been found on the sill of the passenger door of the car could be identified as that from the tangential, perforating, shot. The bullet was undamaged, apart from the typical grooves left by the pistol barrel. In a joint demonstration, the ballistic and medicolegal expert witnesses used trajectory rods and a dummy that was only slightly larger, although less massive, than the victim to reconstruct the three pistol shots for the court (Fig. 7). The goal of this three-dimensional reconstruction was to illustrate the scene within the car graphically, in particular, to elucidate why the wound trajectory from the tangential shot could not be satisfactorily explained alone on the basis of the victim's body position when the shot was fired, e.g., leaning forward with a twisted torso. A curved trajectory was, therefore, postulated for this bullet.
In the case of this shot, an incidence angle of 35° in relation to the body's longitudinal axis in the frontal plane had been calculated from the measured locations of the entry and exit wounds. While the bullets evoking the graze wound and the penetrating wound would have had to be fired with the pistol barrel oriented approximately in the frontal plane of the victim's head, the tangential bullet would thus likely have been fired from a position anterior to this plane, as might have occurred if the victim on the passenger seat had twisted her upper body towards the shooter in the driver's seat. During the trial, the expert witnesses, however, also cautioned that there were constraints on the accuracy of the calculated incidence angle due to the possibility that the bullet had been deflected along its path through the body. The actual incidence angle of the bullet with respect to the longitudinal axis of the victim's body might thus have been greater than calculated; similarly, the position of the pistol might have been lower. Furthermore, at the time the shot was fired, the pistol could also have been oriented closer to the frontal plane than assumed, as in the two other shots.

Discussion

In the reported case, a woman sustained three intermediate-range shot wounds after being fired at from the driver's seat on the left side of a car. From a medicolegal perspective, the bullet trajectories for a graze wound on the victim's forehead and the fatal penetrating gunshot wound to the head were simple to reconstruct. The wound track left by the third, tangential, bullet was, however, so unusual that we believe it merits reporting. In the trajectory analysis of this wound from 3D-reconstructed CT data sets, the linear distance measured between entry and exit wounds was approximately 36 cm.

Fig. 5 3D-reconstructed CT datasets in volume-rendering mode, depicting the virtual wound trajectory and measurement of the straight connection between entry and exit wounds. After aligning the torso in the frontal plane, the cutting tool in OsiriX was used to cut the image along the bullet track. The parts of the torso that faced towards the lower left were eliminated. The ensuing cross section was then aligned in the frontal plane, and the entry and exit wounds were connected using a measuring tool. The approximately 36-cm-long bullet trajectory can be seen to pass through the thoracic and the abdominal cavities.

In this straight trajectory through the body, the bullet would have had to traverse the thoracic and abdominal cavities; however, the actual wound channel was not found to pass through either of these cavities. The bullet had, instead, followed an approximately 40-cm-long, nonlinear trajectory through the subcutaneous adipose tissue. Contact with bone could be excluded as a reason for bullet deflection. The contusion in the medial lobe of the right lung and the hemorrhage in the adipose tissue around the right kidney were, therefore, likely caused by expansion of the temporary cavity around the wound channel. In an attempt to explain this atypical wound trajectory, possible body positions of the victim, in addition to ribcage positions while breathing, were considered, such as that she may have twisted her upper body, or leaned forward, or slumped sideways when the shot was fired. To aid this elucidation attempt, a dummy was used during the main trial to explore all conceivable positions of the victim's body at the moment the shot was fired. Even when possible contributing effects, such as tissue compression while twisting (and hence a local increase in tissue density or rigidity), were additionally taken into account, none of the explored body positions could satisfactorily explain the unusual trajectory. Moreover, due to the kyphosis and intervertebral ossification that were noted in the postmortem CT, the mobility of the victim's spine was manifestly restricted.
This circumstance also had to be factored into the trajectory analysis. In the CT scan, it could, moreover, be seen that the victim's upper body remained noticeably bent forward even when the body lay in a supine (post-mortem) position. The virtual trajectory between entry and exit wounds would, hence, have passed through her thoracic and abdominal cavities in this position as well. After consideration of all circumstances, the only viable explanation for the atypical trajectory of the bullet through the body is to postulate that it was deflected from its straight path to a curved trajectory, without glancing off bone. This hypothesis is strengthened by the observation that the only visible alterations on the perforating bullet were similar to those that would be expected for a bullet from the same pistol that had been fired into gelatin. In all other respects, the bullet was undamaged. The findings in our case report also tie in with the reported results from experimental studies in which significant deflections in trajectory were found for bullets passing through homogeneous soft-tissue simulants [11,12]. In the case of projectiles from long-barreled firearms, this phenomenon may be explained by tipping or yawing of the bullet along its track through soft tissues. The resulting asymmetrical distribution of pressure on the bullet nose would exert a lateral force on the bullet, which would cause lateral acceleration and, thus, deflection of the projectile from a straight path on its way through the tissue [14]. Although this effect would be expected to be significantly less pronounced for short-barreled firearms, due to constructional properties, deflections of more than 6° for bullet tracks longer than 20 cm have been reported in the literature, depending on the combination of projectile and weapon [11].
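To put the cited deflection figure in perspective, a small calculation of our own (not from the source) shows the lateral displacement that a constant angular deflection would produce over a given track length, as a deliberately simplified geometric sketch:

```python
import math

def lateral_offset(track_cm, deflection_deg):
    """Lateral displacement (cm) if the bullet ran at a constant
    deflection angle over the whole track -- a simplification."""
    return track_cm * math.tan(math.radians(deflection_deg))

print(round(lateral_offset(20, 6), 1))  # ≈ 2.1 cm over the 20 cm track cited in [11]
print(round(lateral_offset(40, 6), 1))  # ≈ 4.2 cm over the ~40 cm channel in this case
```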
A further explanation that could be postulated for the atypical wound trajectory in our case is that the bullet may, after tearing through the skin (entry wound) and passing through the subcutaneous adipose tissue for a few centimeters, have traveled towards the skin again. Due to the shallow incidence angle, the bullet may then have failed to penetrate the dermis from below and exit the body. Because of the skin's elasticity, the bullet may then have been forced to follow a path parallel to the dermis until it reached a point, on the medial axillary line, where the radius of the skin's curvature decreased far enough to increase the bullet's angle of incidence to the extent that it allowed the bullet to penetrate the skin and exit the body.

Fig. 7 "Ballistic dummy" illustrating the reconstructed trajectories for the fatal penetrating bullet wound to the head (yellow) and the tangential shot (violet), in the court room.

In all, the case we report here further exemplifies the necessity of exercising due caution in identifying bullet trajectories solely on the basis of the locations of entry and exit wounds, even for wounds through essentially homogeneous soft tissues. Where the circumstances permit, it may, therefore, be more expedient to identify and use deflection points on bone, or perforations in serous membranes, along the first 10 cm of the wound channel, instead of the exit wound, to calculate the bullet trajectory [11]. Moreover, if CT data are available for the virtual identification of the wound trajectory, the difference between the supine position of the victim's body during the CT scan and the victim's actual position at the time the bullet was fired should be taken into account as a possible source of error in calculating the bullet trajectory [15].

Funding

Open Access funding enabled and organized by Projekt DEAL.
Declarations

Ethics approval: This study does not require approval by an institutional ethics commission because it does not involve experimental protocols (information from the ethics commission of the faculty of medicine of the Goethe University, Frankfurt/Main, April 6, 2022). Investigation and autopsy were claimed by local attorneys, and our involvement was requested to clarify the cause and manner of death.

Consent to participate: It was not possible to obtain informed consent from relatives: the perpetrator was the victim's husband. No other near relations were known.

Conflict of interest: The authors declare no competing interests.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Technogenic Fiber Wastes for Optimizing Concrete

A promising method of obtaining mineral fiber fillers for dry building mixtures is the processing of waste that comes from the production of technogenic fibrous materials (TFM). The novelty of the work lies in the fact that, for the first time, basalt production wastes were studied not only as reinforcing components, but also as binder components involved in concrete structure formation. The purpose of the article is to study the physical and mechanical properties of waste technogenic fibrous materials as additives for optimizing the composition of raw concrete mixes. To assess the possibility of using wastes from the complex processing of TFM that were ground for 5 and 10 min as an active mineral additive to concrete, their chemical, mineralogical, and granulometric compositions, as well as the microstructure and physical and mechanical characteristics of the created concretes, were studied. It is established that the grinding of TFM for 10 min leads to the grinding of not only fibers, but also pellets, the fragments of which are noticeable in the total mass of the substance. The presence of quartz in the amorphous phase of TFM makes it possible to synthesize low-basic calcium silicate hydrates in a targeted manner. At 90 days of age, at 10-20% content of TFM, the strength indicators increase (above 40 MPa), and at 30% additive content, they approach the values of the control composition without additives (above 35 MPa). For all ages, the ratio of flexural and compressive strengths is at the level of 0.2, which characterizes a high reinforcing effect. Analysis of the results suggests the possibility of using waste milled for 10 min as an active mineral additive, as well as to impart better formability to the mixture and micro-reinforcement for obtaining fiber-reinforced concrete.
Introduction

An important direction in the development of concrete science is to minimize the use of cement as a result of its partial replacement with various fillers, including those obtained from production waste [1][2][3]. Many production wastes are sufficiently studied for use as structure-forming components of a binder and reinforcing elements. Examples from recent years include the study of nano-montmorillonite and carbon nanotubes [4], zeolite and metakaolin [5], recycled concrete aggregates [6], calcium silicate hydrates [7], etc. A promising direction for obtaining mineral fiber fillers for dry building mixtures is the processing of waste from the production of technogenic fibrous materials (TFM) [4][5][6]. The most studied type of TFM is pulp and paper waste, consisting mainly of fibers with a length of 0.1 to 5 mm, of plant origin, isolated from coniferous and hardwood (including technical pulp and waste paper), the stems and bast of annual plants, and the seed pods and leaves of some plants, or of mineral (asbestos) origin (Figure 1) [7][8][9]. In the world, about 15 million tons of pulp and paper waste are generated annually, of which 12 million tons are suitable for processing [10][11][12][13]. Cellulose fiber has an amorphous structure, is insoluble in water, is resistant to acids and alkalis, and has a pH value in the range of 4 to 12 [14,15]. Cellulose fiber is insoluble in organic solvents (for example, oils), physiologically and toxicologically safe, and easy to use [16,17]. The use of natural cellulose fiber helps to improve the technological characteristics of mortar mixtures due to the fact that the fiber structure has a high absorbing and releasing ability in relation to water and organic liquids [18,19].
Due to the retention of moisture in the structure of cellulose microfibers, a concentrated aqueous medium is created between the mineral particles, which reduces the release of moisture into the hydrophilic base and eliminates the risk of shrinkage cracks. Since the fibers distribute and "transport" moisture from the lower layers of mortar mixtures to the upper ones, the drying of the upper layer is excluded [20,21]. At the same time, the maturation of the product occurs evenly [22,23].

A less studied type of TFM is the waste of mineral wool heat-insulating materials, for example, basalt fiber insulation [24,25]. With production volumes in Russia at the level of 50 thousand tons/year, up to 4 tons/year of waste is generated. Due to the relatively low bulk density (ρ = 200-220 kg/m3), storage areas occupy a large share of the area in factory warehouses and landfills (Figure 2).

One of the important tasks of processing secondary basalt fiber, in bulk or compacted states, is significantly reducing its volume, imparting flowability, and generating the possibility of obtaining commercial products, such as fibers of various lengths for subsequent use in the construction industry [26,27]. It is advisable to use a step-by-step process for the complex processing of basalt fibrous waste:
− preliminary destruction of the original basalt fibrous waste and removal of non-fibrous inclusions [28];
− deagglomeration [29];
− classification with the removal of spillage and speck ("beadlet") [30];
− providing various sizes of fibers for their use in composite mixtures for various technological purposes, incl. for 3D technologies in construction [31,32].

Obtaining agglomerated, highly concentrated microfiber fillers has a number of technological advantages: increased flowability and better transportability, which ensures a more uniform distribution of fibers in macro and micro volumes of a composite mixture with heterogeneous components, and expands the scope of the use of basalt waste (fibers) in an agglomerated form for various technological purposes (in the form of porous aggregates, thermal insulation, adsorbents, etc.) [33].
The effects of using technogenic fiber in mortar or concrete are [33][34][35]:
− an increase in the crack resistance of the product (resistance to shrinkage and operational cracks);
− imparting thixotropic properties to the mortars;
− improving the fixing ability on a vertical surface, preventing tiles from slipping (in tile adhesive);
− a reduction in the shrinkage that occurs during the maturation of mortar or concrete;
− an increase in the frost-resistant properties of products;
− improving the rheological properties of the mortar and increasing the working time.

The use of various industrial fibers for reinforcing concrete is sufficiently studied, for example, basalt and polypropylene [36], and steel, glass, and carbon ones [37]. However, the use of recycled waste, not only as fiber, but also as an active additive in the cement system, is insufficiently studied. The novelty of the work lies in the fact that, for the first time, basalt production wastes are studied not only as reinforcing components, but also as binder components involved in concrete structure formation. The purpose of the article is to study the physical and mechanical properties of waste technogenic fibrous materials for use as additives for optimizing the composition of raw concrete mixes.

Materials

Portland cement CEM I 42.5 N (Belgorod Cement, Belgorod, Russia) according to EN-197 was used as a binder. As an active mineral additive to concrete, wastes from the complex processing of TFM (mineral wool) were used.
Microsilica, which is a technogenic waste from metallurgy, was used to provide the pozzolanic reaction. Table 1 lists the chemical composition and specific surface area of the raw materials. In particular, the mineral wool is represented by the glass-forming oxides SiO2, CaO, MgO, Al2O3, and Fe2O3, with Na2O, TiO2, and K2O impurities. The amount of SiO2 is 44.11%, the specific surface area is 200 m2/kg, the bulk density is 1366 kg/m3, and the true density is 2884 kg/m3. The superplasticizer "PFM-NLK" (Poliplast, Moscow, Russia) was used in the mixes. For the developed mortars, sand with a particle size of 1.5-2.5 mm was used.

Mix Design
In the work, samples of 70 × 70 × 70 mm in size (according to Russian standard GOST 310) were molded with different waste contents (10-40% by weight of cement) and a corresponding replacement of part of the cement. The water-to-cement ratio (W/C) was 0.4 and the sand/binder ratio was kept constant at 2.75. The compositions of the concrete mixes are given in Table 2. Short-term mixing of the dry mixture with fiber for 60 s ensured the uniform distribution of basalt fiber. All mixes were designed based on the conditions of equal workability, providing a slump of 19 cm and a slump flow of 43 cm.

Methods
To assess the possibility of using wastes ground for 5 and 10 min from the complex processing of TFM as an active mineral additive to concrete, their chemical and mineralogical compositions were studied using an X-ray fluorescence spectrometer. Using a scanning electron microscope, the shape and size of the particles were visually assessed before and after grinding. The specific surface area was determined by two complementary methods using the PSH-2 device (Khodakov's Devices, Moscow, Russia) and the Sorbi-M device (4-point BET method; Sorbi, Moscow, Russia). The volume of pores with a radius <19.4 nm was also calculated.
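The mix design described above (constant W/C = 0.4, sand/binder ratio of 2.75, and 10-40% replacement of cement by TFM) can be sketched in a few lines. This is a hypothetical illustration only: the batch mass and the function name are assumptions, not values or code from the paper, and it assumes the water dosage is computed on the total binder (cement + TFM).

```python
# Hypothetical sketch of the mix proportioning described in the text:
# W/C = 0.4, sand/binder = 2.75, TFM replacing 10-40% of cement by weight.
# Batch masses are invented for illustration, not taken from Table 2.

def mix_proportions(cement_base_kg: float, tfm_fraction: float) -> dict:
    """Return component masses when `tfm_fraction` of cement is replaced by TFM."""
    if not 0.0 <= tfm_fraction <= 0.4:
        raise ValueError("TFM replacement studied only in the 0-40% range")
    tfm = cement_base_kg * tfm_fraction
    cement = cement_base_kg - tfm
    binder = cement + tfm            # binder = cement + mineral additive
    water = 0.4 * binder             # assumption: W/C of 0.4 applied to total binder
    sand = 2.75 * binder             # constant sand/binder ratio of 2.75
    return {"cement": cement, "tfm": tfm, "water": water, "sand": sand}

# Example: 100 kg of base cement with 20% TFM replacement.
print(mix_proportions(100.0, 0.20))
# {'cement': 80.0, 'tfm': 20.0, 'water': 40.0, 'sand': 275.0}
```

Keeping the total binder mass constant in this way is what makes the mixes comparable at equal workability, since only the cement/TFM split changes between compositions.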
The microstructure of the materials was studied using a Mira 3 scanning electron microscope (Tescan, Brno, Czech Republic). Compressive and flexural strength tests were carried out after 7, 28 and 90 days according to the Russian standard GOST 310.

Comprehensive Study of the Technogenic Fibrous Material
Silica in this type of technogenic raw material is in an amorphous, highly dispersed state, as evidenced by the maximum peak in the region of 31-33° (Figure 3). The main crystalline phases of TFM that belong to plagioclases are augite Ca(Mg,Fe)Si2O6 and anorthite CaAl2Si2O8, which is characteristic of crystalline basalt. These phases begin to crystallize at temperatures above 900 °C during the melting of basalt in the production of mineral fiber.
The presence of crystalline phases of quartz in the β-modification is evidenced by characteristic diffraction reflections. There is also a high-temperature polymorphic modification of quartz, β-cristobalite. The presence of quartz in the amorphous phase causes high solubility in saturated alkaline solutions of the hydrated binder and active interaction with the Ca(OH)2 released as a result of the hydration of clinker minerals. These processes allow the targeted synthesis of low-basic calcium silicate hydrates during hydration, which are responsible for the strength properties of the finished building product. The results obtained are confirmed by energy-dispersive microanalysis (EDM) of the waste, performed with the construction of a map of the distribution of chemical elements over the surface. The results also indicate the absence of additional inclusions in the material due to preliminary screening (Figure 4).
SEM images obtained using a scanning electron microscope (Figure 5a) show that the mineral wool waste particles in their original form are represented mainly by cylindrically shaped fibers of various sizes and lengths. Due to the inhomogeneous properties of the melt, in the process of dispersion, along with the mineral fiber, so-called "beadlets" of spherical, droplet, and elongated shapes are formed from the solidified melt (Figure 5b). In addition, a large number of particles have various kinds of defects. When grinding in a ball mill for 5 min, the size of the fibers is significantly reduced. The thinnest and longest fibers are ground most intensively, and as a result the length-to-diameter ratio decreases. The beadlets themselves remain largely unchanged (Figure 5b), but sometimes their size exceeds the size of the fibers. Further grinding for 10 min grinds not only the fibers but also the beadlets, the fragments of which are noticeable in the total mass of the substance.
There is a large number of the smallest, nanosized crushed fractions; however, there is also a significant amount of larger particles of fibrous form (Figure 5c). This is consistent with the obtained granulometric composition of the powders (Figure 6). Figure 6 shows that during grinding there is a significant shift in the curves toward the smallest particles, with a smoother distribution over the fractions. The source material is characterized by a bimodal distribution with a predominance of large particles (the average particle size is 91 µm (Table 3)). The particle size distribution curve after 5 min of grinding is characterized by one pronounced peak; the average particle size is 50.36 µm. The last curve is smoothly distributed in the region of small particles; the average diameter is about 14 µm, and 90% of the substance is smaller than 45 µm. It should be noted that the finely ground TFM waste is characterized by a finely porous structure with a volume of pores of radius <19.4 nm of 0.023 cm3/g (Table 4).
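The particle-size statistics quoted above (volume-weighted mean diameter, and the D90 value below 45 µm) can be derived from a binned laser-diffraction distribution. The sketch below uses invented bin data, not the measured distribution of Figure 6, and `psd_stats` is a hypothetical helper, not the instrument software.

```python
# Minimal sketch: mean diameter and D90 from a binned size distribution.
# Bin sizes and volume fractions below are invented for illustration,
# loosely mimicking the 10-min ground TFM (mean ~14-17 um, D90 <= 45 um).

def psd_stats(sizes_um, volume_fractions):
    """Volume-weighted mean diameter and D90 from binned data."""
    total = sum(volume_fractions)
    fractions = [v / total for v in volume_fractions]   # normalize to 1
    mean = sum(d * f for d, f in zip(sizes_um, fractions))
    # D90: smallest bin size at which the cumulative volume reaches 90%
    cumulative = 0.0
    d90 = sizes_um[-1]
    for d, f in zip(sizes_um, fractions):
        cumulative += f
        if cumulative >= 0.90:
            d90 = d
            break
    return mean, d90

sizes = [2, 5, 10, 20, 45, 90]     # bin upper edges, um (assumed)
vols  = [10, 25, 30, 20, 10, 5]    # volume per bin, arbitrary units (assumed)
mean, d90 = psd_stats(sizes, vols)
print(mean, d90)   # mean ~17.45 um, D90 = 45 um for these invented bins
```

The same cumulative-sum logic underlies the commonly reported D10/D50/D90 percentiles; only the threshold changes.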
According to microstructural analysis, the porosity can be due to the presence of defects both inside and on the surface of the fine particles (Figure 7a). There are various defects in the form of cracks and shells on the surface of the TFM beadlets (Figure 7b).

Properties of Concrete with TFM
The selected dosage of polycarboxylate superplasticizer with a high water-reducing capacity (40%) made it possible to maintain the slump within 20 cm for almost all compositions (Table 5). Only at a TFM dosage of 40 wt.% did the slump decrease, to 16 cm. At the same time, the slump flow was kept at the level of 48-51 cm, decreasing to 46 cm at TFM contents of 30-40 wt.%. The density of the fresh mixture modified with TFM decreased as the dosage increased.
The decrease in the density of the hardened concrete samples is explained by the fact that the average density of cement hydration products is lower than the average density of the initial mixture. Compositions with 40% TFM have the lowest strengths at all ages of hardening, with a 50% drop compared to the control composition. For all the mixtures, at the ages of 7 and 28 days the compressive strength decreased (by about 20-40% compared to the control samples) as the content of mineral wool waste increased (Figure 8). Correlations between samples with different fiber additions were necessary to find the optimal content, which has a deep practical meaning.

Table 5. Fresh and physico-mechanical properties of concretes.

However, at the age of 90 days, at 10-20% TFM content the strength indicators increase, and at 30% additive content they approach the values of the control composition without additives.
The decrease in the compressive strength of samples with the replacement of cement by 30 and 40% TFM is explained by supersaturation and weakening of the hardening system, confirming the optimal content of this component at the level of 20%. The decrease in strength at 28 days may be due to the high water demand of the finely ground filler, which intensively absorbs water, slowing down the rate of hydration. Later, however, the filler can release the stored water, providing long-term hydration. Based on the test results mentioned above, the inclusion of 10-20% mineral wool waste is optimal. Good results were obtained in the study of flexural strength (Figure 9). The optimal TFM content of 20% leads to flexural strength values of 4.1 MPa at the age of 7 days, 5.7 MPa at 28 days, and 8.5 MPa at 90 days. For all ages, the ratio of flexural to compressive strength is at the level of 0.2, which characterizes a high reinforcing effect.
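The reported flexural-to-compressive ratio of about 0.2 can be checked with simple arithmetic. The flexural strengths (4.1, 5.7 and 8.5 MPa) come from the text; the compressive values below are assumptions for illustration only, chosen to be consistent with the "above 40 MPa at 90 days" statement rather than taken from Figure 8.

```python
# Arithmetic check of the ~0.2 flexural/compressive ratio for the 20% TFM mix.
# Flexural values are quoted in the text; compressive values are hypothetical.

flexural = {7: 4.1, 28: 5.7, 90: 8.5}                 # MPa, from the text
compressive_assumed = {7: 21.0, 28: 29.0, 90: 42.0}   # MPa, assumed

ratios = {age: round(flexural[age] / compressive_assumed[age], 2)
          for age in flexural}
print(ratios)   # {7: 0.2, 28: 0.2, 90: 0.2}
```

Read the other way around, a constant ratio of 0.2 means the 90-day flexural strength of 8.5 MPa implies a compressive strength near 42 MPa, in line with the "above 40 MPa" figure given in the conclusions.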
Thus, the analysis of the results suggests the possibility of using waste milled for 10 min as an active mineral additive, as well as to give better formability to the mixture and provide its micro-reinforcement to obtain fiber-reinforced concrete [55][56][57][58][59][60][61][62][63][64][65]. The presence of quartz in the amorphous phase causes high solubility in saturated alkaline solutions of the hydrated binder and active interaction with the Ca(OH)2 released as a result of the hydration of clinker minerals.

Conclusions
The physico-mechanical properties of waste technogenic fibrous materials (TFM) used as additives for optimizing the composition of raw concrete mixtures were studied. The following results were obtained:
− The processing of TFM for 10 min leads to the grinding of not only fibers but also granules, fragments of which are noticeable in the total mass of the substance. There is a large amount of the smallest, nanosized crushed fractions; however, there is also a significant amount of larger particles of fibrous form. This is the reason for the effective use of the material to control the structure formation of the cement composite, both at the macro level (fiber) and at the micro level (cement substitute).
− The reactive activity of TFM is confirmed by its high solubility in saturated alkaline solutions of the hydrated binder and active interaction with the Ca(OH)2 released as a result of the hydration of clinker minerals. These processes make it possible to purposefully synthesize low-basic calcium silicate hydrates during hydration, which are responsible for the strength properties of the finished building product.
− At 90 days of age, at 10-20% TFM content, the strength indicators increase (above 40 MPa), and at 30% additive content they approach the values of the control composition without additives (above 35 MPa). For all ages, the ratio of flexural to compressive strength is at the level of 0.2, which characterizes a high reinforcing effect. The increase in strength may be associated with the pozzolanic reaction, the completeness of which increases most at a later age. In addition, the micro-size of the grains of the TFM waste used in this study increases the compressive strength of the mortar by acting as a compactor.
− The analysis of the results suggests the possibility of using waste milled for 10 min as an active mineral additive, as well as to give better moldability to the mixture and provide its micro-reinforcement to obtain fiber-reinforced concrete.
Adult education in two public libraries in Cape Town: a case study

This paper reports the findings of research (Tandwa 2007) into adult literacy programmes offered by two public libraries in Cape Town, with a focus on their use of literacy materials. The study is a contribution to the documenting and analysis of the public library's role in the struggle against illiteracy, a serious socio-economic problem in South Africa. Using the case study approach, the researcher made an in-depth study of the programme offerings from the perspective of the adult learners, and tried to establish how and whether they made use of literacy materials, since their availability is so important in literacy instruction and the development of a reading habit. The paper describes the programmes and the cohorts of learners and their expectations, and analyses the availability and role of reading materials in the learners' lives. It concludes by identifying the factors required for the successful implementation of a literacy programme in a public library.

Introduction
Illiteracy is viewed as a socio-economic problem and an obstacle to development that affects the economy and the individual's life, because it is linked to unemployment, poor health, disease, high birth rates, poverty, dependency on social grants and crime. Literacy does not only equip adults with employment skills but also with developmental skills to engage with health-related issues such as nutrition, a healthy lifestyle and children's education, and also with the ability to participate fully in democracy (Foulk et al. 2001; Roman 2004; Zapata 2004).
The United Nations Educational, Scientific and Cultural Organisation (UNESCO) report (2005: 137) indicates that 'literacy goes beyond reading and writing because it provides access to the scientific and technical knowledge, legal information, cultural benefits and the ability to use and understand the media because it has a potential to meet peoples' most vital needs and to stimulate social, political, cultural and economic participation'. Based on several definitions of literacy, a literate person is an individual who is able to do one of the following:
• read and write a simple sentence;
• pass a written test of reading comprehension at a basic level;
• engage in the majority of activities in which literacy is required for effective functioning in his/her community (this can include the ability to read road signs, fill in employment forms, vote and help children with school work) (Pather 1995; Wagner 2001; UNESCO 2005).
The provision of literacy education entails various processes and transitional stages, such as the following:
• from illiteracy to literacy (this stage is attained through the provision of literacy classes and the use of instructional and learning literacy materials in literacy centres);
• development and maintenance of literacy skills (post-literacy programmes);
• independent learning (through regular access to and use of literacy materials) (Rogers 1994).
These three stages are supported by the availability of and access to relevant and suitable literacy reading materials. An important and common way of supporting adult education programmes by public libraries is the provision of reading materials.
The investigation
The aim of the study was to investigate the role of reading materials in adult education programmes and their availability and use in two public libraries in Cape Town. The research questions of the study were as follows:
• How available are locally produced literacy materials for use in literacy programmes in public libraries?
• What are the types and features of these literacy materials?
• How suitable are the literacy materials for the adult learners' acquisition of literacy?
By working with two public libraries, the researcher hoped to gain an in-depth understanding of literacy materials in adult education and of the need for the provision of these materials in public libraries to support reading and adult literacy programmes. The intention was also to establish how and whether the literacy materials have an impact on fighting illiteracy and changing the lives of illiterate adults. Since the experience of the adult learners was of key concern, qualitative research methods were viewed as suitable for this investigation. Two public libraries currently offering literacy programmes in Cape Town were selected as cases. One of these libraries had two sites: the library itself and one at a local hospital where classes were offered to employees. Participants from these two literacy programmes were interviewed. Respondents at each of the three sites were:
• adult learners
• facilitators
• librarians
Data were collected through face-to-face interviews and observation. The researcher attended literacy classes twice a week over a six-month period. During this time she also examined the literacy material collections.
Literature review
The literature review frames the study conceptually, highlighting the following related aspects:
• an examination of the meanings of literacy/illiteracy.
Defining literacy is very difficult because it is associated with the use of a variety of skills, such as reading, writing and calculation, at different levels and for different purposes. Literacy is also influenced by a country's official language; that is, one can be literate in one's own country's language and illiterate in another country because of the differences in official languages. The term literacy is also associated with 'know-how' skills and the use of those skills to improve one's life. Literacy is a major determinant of an individual's economic potential because literacy leads to higher employment participation, higher-skilled employment, greater mobility and lower unemployment probabilities (Zapata 1994). Literacy is associated with the wealth of a country because higher levels of education/literacy lead to greater productivity and greater economic growth (Liu 2004). Generally, illiteracy is associated with unequal access to resources, social wealth, information and knowledge (Zapata 1994). According to a UNESCO report (2005: 30), 'literacy strengthens the capabilities of individuals, families, and communities to access health, educational, political, economic and cultural opportunities and services'. Although literacy has several benefits, such benefits are not automatic but need to be exercised. This means that literacy strategies need to be delivered in such a way that they allow illiterate adults to change their lives.
Illiteracy rates vary from one region to another, but there are countries, especially in the developing world, that have notably high illiteracy rates. UNESCO reports that in 2000 'there were about 862 million illiterates in the world. In countries like India and Pakistan illiteracy rates are increasing. The majority of the illiterates are in Sub Saharan Africa, South and East Asia and Arab states' (UNESCO 2004).

Adult education programmes
There is little conceptual clarity about the meaning and use of terms like illiteracy, literacy, adult education and adult basic education. This difficulty is also found in measurement, so that statistics are not easily comparable (Aitchison 2006). The difficulty is compounded when trying to understand the relationship between a literacy programme and adult basic education. Aitchison (2006) argues that literacy programmes teach reading, writing and numeracy skills, whereas adult basic education provides more than the fundamental skills of literacy and offers the equivalent of a primary school education. Torres's view of adult education, shaped by her experience in the developing world, is useful. She defines 'adult basic education as a foundation or essential education that aims at meeting and expanding the basic learning needs of adults' (2002: 21). She differentiates adult basic education from adult education, saying that adult education is a broad term that entails basic and continuing education, vocational, technical and higher education and professional development, while adult basic education is basic or foundation education that enables adults to grow and meet their daily needs (Torres 2002). According to Harley et al.
(1996), general aims of literacy programmes include the following:
• To enable adults to function properly without depending on others and to cope with written materials.
• To enable people to cope with modernisation, such as the ability to read urban road signs and to cope with technological developments.
• The development of critical awareness ('conscientisation'). Critical awareness is specifically important for political participation because it enables adults to be aware of the structures that oppress them as a society. This is exemplified in programmes influenced by Paulo Freire.
• Competence (know-how skills). The provision of literacy classes enables adults to deal with written materials and skills such as the ability to sign one's name, fill in employment forms and use ATMs.
• Development. Literacy enables increased productivity, improved health, engagement with fertility and gender issues, access to better jobs, environmental protection and economic growth (Harley et al. 1996).
Post-literacy programmes differ in their areas of interest, but their general aim is to develop, maintain, reinforce and sustain literacy skills through the provision of reading materials. Post-literacy initiatives include any reading-related programmes, such as family literacy and book clubs, that aim to fight illiteracy through the provision of reading materials.
Literacy materials
Various authors have used different terms to refer to literacy and post-literacy materials. These terms include the following:
• 'Easy readers or easy-to-read materials include all the materials with text that is easy to read and understand not only because difficult words are avoided but because the presentation is specific and easy to follow' (Tronbacke 1997: 189).
• French (1992: 240) uses the term 'easy reading materials' to refer 'to any reading matter in any language which makes concessions to a lack of proficiency in reading skills or difficulties with mastering the language of the text'.
• Arnold (1982: 12) uses the term 'bridge literature' to refer to reading materials that act as a bridge between initial and habitual reading. Arnold further identifies various aims of bridge literature, such as developing neo-literates' reading abilities so that they become habitual readers and improve their reading and writing abilities. For them to move successfully from being initial readers to habitual readers they need access to relevant and suitable reading materials (Arnold 1982).
All these terms describe materials with special characteristics and features for a particular purpose that differentiate them from other reading materials. They can be categorised as:
• curriculum materials, including guides and workbooks used in classes by teachers and learners;
• reading materials used to practise and sustain literacy skills.
The defining features of both reading and instructional materials are that the content, format and layout are adapted for easy comprehension by adults with limited reading skills (Tronbacke 1997). Most materials take the form of booklets and pamphlets, but some producers also produce posters, newspapers, comic books, graphic novels and newspaper supplements.
The following features characterise literacy reading materials (reading and instructional materials) that are used in literacy classes, post literacy programmes and public libraries:
• Local, simple language and simple vocabulary (Tronbacke 1997).
• Size of book, font size, number of pages, and number of sentences and paragraphs (Thumbadoo 2006).
The relevance and suitability are determined by the needs, culture and situation of the particular audience (Thumbadoo 2006). The content areas of these materials should offer a wide range of choices to accommodate the different needs, interests, backgrounds and expectations of the intended audience. Although these books should be simple and relevant, with illustrations and pictures, they should not be childish because they are to be used by mature adults with adult emotions and experience (Wedgeworth 2003). Tronbacke (1997) stresses the importance of the desirable attributes of literacy materials necessary 'to provide a positive experience and to encourage adults to read them'. It is therefore important for publishers, literacy providers and any other party involved in the production of these materials to make sure that they meet the needs of semi-literates when writing and providing literacy materials (Land 2006).
Literacy materials are published at different levels to accommodate the varying levels of their readers. These levels run from when an adult joins literacy classes to graduation and post literacy. They all serve different purposes for different learners. Within these levels they should also accommodate learners with disabilities (Tronbacke 1997).
Adult education in South Africa
In South Africa literacy education is subsumed into Adult Basic Education and Training (ABET), which is regarded in the South African Constitution as a basic human right, necessary as a foundation for work, training and career progression and for the educated workforce that a prosperous democratic society requires (Aitchison 2006). The government's approach has been criticised by a number of researchers as instrumentalist, tied as it is to narrow goals of productivity and a path to employment (cf. Baatjes and Mathe 2004). ABET is provided by various organisations such as Non-Governmental Organisations (NGOs), government departments, business, industry and (on a minor scale) public libraries. Rule (2005: 19) notes the great need for adult education provision in South Africa: 'adult education in South Africa is essential because it is an important mechanism for poverty alleviation and economic development, essential contribution to personal and community development, as a component for a democratic citizenship and civic participation and as a response to the historical legacy of apartheid deprivation'. Ten million adults (50% of the South African adult population) have fewer than ten years' education, while three million have never been to school (Baatjes 2003: 191). Following the 1996 Census, the 2001 Census showed that there had been a 50% increase in the number of adults who had never been to school (Baatjes 2003). These figures give an indication of the scale and scope of the problem, which requires much more vigorous and sustained intervention by the government than has been evident in the last decade. Baatjes (2003), Baatjes and Mathe (2004) and Aitchison (2006) have documented and analysed government policies, plans and programmes and have shown that the government is unlikely to achieve the goal set by the Dakar Framework for Action, which requires a 50% reduction in adult illiteracy by 2015. Nearly 10 million adults have such poor literacy skills that they are unemployable (Baatjes and Mathe 2004: 415), while those with fewer than nine years' schooling in the mining sector are most vulnerable to retrenchment (Baatjes and Mathe 2004: 410).
A number of commentators have analysed the problems of adult literacy programmes as follows:
• The lack of political will and commitment from the government (Baatjes 2003; Sibiya 2005). Rule (2005) states that in South Africa the provision of adult education is regarded as a basic human right, yet the government neglects it and it operates on a limited budget of less than 1% of the total education budget.
• A limited budget for adult education provision, which in some provinces such as KwaZulu-Natal has fallen below that of the apartheid era (Aitchison 1999). Vivian (2002: 16) notes that funding is a major concern in adult education provision and that the available NGO funding is very limited.
• NGOs have experienced problems such as funding uncertainties, loss of experienced staff and poor management (after 1997); some of these problems resulted in the collapse of the National Literacy Cooperation, USWE and the English Literacy Project in 1998 (Aitchison 1999; 2006; Rule 2005).
• The provision of ABET is also limited, as the majority of government departments, such as the Department of Labour, concentrate on their own employees for skills development. This means that the majority of unemployed illiterates have limited access to literacy programmes (Rule 2005; Aitchison 2006).
Land and Buthelezi (2004: 429) argue that there is a limited number of suitable literacy and ABET materials, especially materials in indigenous languages, for various reasons such as poor sales, reluctance to publish in African languages, high illiteracy rates and lack of government support in promoting the publishing of indigenous materials. However, various organisations such as the Pan South African Language Board (PANSALB) are involved in promoting the publishing of literacy materials in
various ways (Land and Buthelezi 2004).
Public libraries and adult education in South Africa
Harley (1999: 29) notes that libraries and literacy are inseparable: without a literate community the library is unlikely to have an impact in the community, and it is also difficult or even impossible to maintain literacy skills, especially newly acquired literacy skills, without the library as a provider of literacy materials or of information in general. Reading is the basis of library use and it is therefore the role of the library to create readers.
Public libraries in South Africa are involved in literacy in various ways, such as the following:
• The provision of literacy materials (both reading and instructional materials) such as manuals and workbooks for learners and tutors/facilitators, fiction and non-fiction, biographies, magazines and newspapers for adults with limited reading skills (Frylinck 1984; Makhubela 1998; Harley 1999).
• Providing facilities such as venues and resources for literacy classes (Frylinck 1984; Harley 1999; May and Nassimbeni 2005).
• Promoting the creation of literacy materials by working with publishers and writers (Frylinck 1984).
The scale of their involvement is rather modest, as revealed by a national survey by May and Nassimbeni (2005: 12) showing that only 26.7% of South African public libraries are engaged in any way, either through direct intervention and delivery of adult education programmes or, more usually, through the provision of a venue or materials. Most public libraries in South Africa do not have literacy materials, and some do not know the publishers of these materials or where to find them (May and Nassimbeni 2005).
"Ours is not a society of readers" (Jordan 2007). This was the observation of the Minister of Arts and Culture during his budget speech to Parliament shortly before the launch of the report of the national survey into the reading and book reading behaviour of adult South Africans (South African Book Development Council 2007). The report revealed that 51% of South Africans have no books in their homes, and that only 14% read books. The extra funding to be released by the Department of Arts and Culture (R1bn announced in 2006, and a further R200m made available in 2007) has taken into account the need for special interventions to remedy the weak reading culture (R1bn boost for libraries 2006; Jordan 2007). Sisulu (2004) identifies the following characteristics of a literate nation and a nation with a reading culture:
• A nation with life-long readers who value their local literature
• A nation with a government that promotes the value of reading at all levels
• A nation that integrates reading into its education systems at all levels and encourages reading for pleasure
• A nation with a flourishing writing and publishing industry
• Strong library services backed by a rich distribution of books from the book market.
Research methodology
Two public libraries, Library A and Library B, were selected as cases. A case study approach was used because it is a qualitative research approach suitable for the study of social processes (Miller and Salkind 2002) and it provides the ability to examine real issues in their natural setting (Bogdan 1992). These sites were selected because they were involved in literacy programmes and had literacy materials to support and maintain those programmes. The learners' behaviour and reactions in literacy classes and towards literacy materials were observed by attending the classes and interacting with the participants. Data and field notes were sorted according to the participant categories, that is, learners, facilitators and librarians. Within these categories data was sorted according to themes and topics. Data was analysed through physical sorting of field notes, reading and screening them for accuracy and double-checking incomplete and unclear sections. Comparisons, insights and contrasts were drawn to facilitate interpretation and analysis.
Findings
In this section we report the findings of the investigation, following a brief description of the sites of study. Library A is in a relatively affluent, middle-class, predominantly 'white' suburb with a very large informal settlement on its margins, and Library B is in a lower-income, predominantly 'coloured' suburb. Respondents were made up as follows:
• 54 adult learners
° Library A: 35
° Library B: 10 from the library and 9 from the hospital
• 2 librarians: one from each of the libraries
• Facilitators at each of the sites
The learners: their motives and expectations
In Library A, the principal motivation of the learners, drawn mainly from immigrants from other African countries wishing to learn English, was to enter the labour market from which they were excluded because of their poor English communication skills. They wished to improve their basic English in order to improve their lives, get jobs and solve the daily problems caused by their illiteracy. They indicated that their major problems were the inability to communicate in and understand written English, with the result that they were unable to get jobs, to express themselves in English or to read signs. The majority of learners were foreigners, literate in their home languages, who viewed literacy in English as a prerequisite for functionality at work and within the family and the community. One of the learners said, 'In South Africa you are nothing if you are unable to communicate in English because the majority of South Africans speak English and English is a major prerequisite for employment.'
In Library B the library-based programme attracted learners wishing to improve their Afrikaans writing and reading skills for a variety of reasons. They wished to improve their lives, get better jobs, read to their grandchildren and gain their independence. Those in the hospital-based programme wished to improve their Afrikaans reading and writing as part of a skills development programme for employees without basic education. Learners expected at the end of the programme to be able to communicate and cope with written materials, to perform their jobs better and to get better jobs. One of them said, 'One day I want to finish my Matric, therefore I have to start with the ABET classes in my language (Afrikaans)', while another explained, 'I want to help my grandchildren with their school work and to be able to read for them'. They wished to be able to solve daily problems such as reading instructions, filling out a form and using an ATM, and to communicate better in order to improve their lives, as one of them in Library B said: 'Through the classes I will be able to communicate better and to perform my duties better'.
The programmes
The range of offerings across all three programmes was similar: reading, writing, communication (in English at Library A and in Afrikaans at Library B), numeracy and general life skills. The learners were taught how to read and write their names, addresses and identity numbers, and how to read timetables, address envelopes and fill out forms. Their learning was based on their daily activities, such as reading instructions and product labels and writing and addressing letters. The communication skills module emphasised initiating and maintaining a conversation, voicing opinions, and asking and answering questions. The major difference between the two libraries with respect to delivery of the curriculum was that of language: English in Library A, and Afrikaans in both programmes of Library B.
The majority of learners in Level Three viewed the literacy programme as helpful and useful because they were able to read and write basic information such as timetables and road signs. Learners in the hospital-based programme of Library B were exposed to lessons in critical thinking, or the ability to analyse information; reading and responding to advertisements, reading newspapers and short stories were also included. They were also learning about basic employment rights such as leave entitlements, how to select insurance policies, budgeting, bonuses, pension funds and tax-related issues. One of the learners in Library B said, 'Although I know and understand Afrikaans I didn't know how to read and write it. Through these classes I have learned to write and read simple sentences and my biographical information'. The majority of learners in Library B already had jobs, but since they were not satisfied with their level of employment and performance they joined literacy classes in order to improve their literacy skills and perform their duties better. Some of the learners in Library B joined literacy classes for social and personal reasons, such as the ability to read medication and to read to grandchildren. Learners in Library A viewed literacy and English as an economic need. Although they were literate in their home languages they were illiterate in English, which they viewed as a gateway to employment.
The library-based programme of Library B included a life skills course consisting of income generation skills such as fabric painting, decorating candles, making greeting cards and working with beads and candles. All learners in Library B enjoyed the life skills course because, they said, it kept them busy and allowed them to use their creativity. One of them said, 'I like the life skills course because it keeps me busy and it gives me an opportunity to be creative'. Computer skills were offered at Library A, which also offered a business course focusing on business writing skills, using banks and basic business concepts. Learners regarded the business and life skills courses as important aspects of their classes for functional purposes and for those who wanted to start their own businesses.
Different funding models were used at the two libraries. In Library A the programme was funded by the library and by external sponsors such as the Rotary Club. The facilitators were unpaid volunteers, while those at Library B were paid. Library B received government support from the Department of Education for the library-based programme and from the Department of Health for the hospital-based programme. Both librarians complained that their funding was insufficient and that they did not receive financial support from their parent organisations. In spite of the departments' subvention of the programmes run by Library B, the level of support was inadequate and could not sustain the minimum requirement of an adequate supply of learning and teaching materials (discussed in 4.3).
Learning materials
In Library A, the learning materials were written in simple English with pictures and illustrations. They were based on daily issues such as doing shopping, going to town, reading road signs, job-related issues, writing letters, basic comprehension skills and health issues. Each level had its own materials. For its classes the library provided MediaWorks, produced by an ABET organisation, which could be used for both computer-aided and facilitator-mediated instruction. MediaWorks consists of workbook exercises and facilitator support. The computer-aided materials were regarded as very helpful to the learners because they provided a basic introduction to computers.
None of the learners in Library B's programmes had any learning materials. Facilitators collected and used different sources such as newspapers, story books, selected lessons from Unit Standards (the official ABET curriculum material) and ad hoc library materials to create their own learning and teaching materials for instruction and exercises. The facilitators viewed the process of collecting materials from different sources as time-consuming and laborious. One of the facilitators complained, 'I ordered an Afrikaans book at our departmental library six months ago but I have not yet received it. It is very difficult and time consuming to collect materials from different books'. The learners unanimously viewed this as a serious obstacle to effective learning in their classes. The Department of Education, in particular, expected learners to be assessed on the basis of Unit Standards, yet learners did not have such materials. Unit Standards are published by the Department of Education to support their literacy programmes.
Both libraries had post literacy materials displayed in a special section of the library. These post literacy materials were published by South African publishers such as the New Readers Project, Project Literacy and Kwela Books. They were written in South African languages, with pictures and illustrations. In addition, Library A had a special literacy collection housed separately in the centre where the classes were conducted.
The learners' reading tastes and requirements
The learners at all the centres were asked what kind of materials they liked to use. There was no marked difference in reading requirements and tastes. They all expressed a preference for the Bible, magazines, newspapers, love stories, cookery books and business-related books, and they all wished to have materials they could relate to, that would assist them with their learning and with coping with the demands of everyday living. One of the learners in Library B said, 'I like love stories and cookery books that are written in simple language and I also like books with pictures', while one learner in Library A said, 'I like materials that can help me to deal with daily challenges such as health issues and social grants'. They all preferred simply written materials with bright colours, pictures and large fonts.
The learners were asked whether they practised their skills at home. The majority of learners at Library A (26, or 74%) practised at home, while the remaining nine (26%) did not. The reasons cited were a lack of materials at home and the absence of someone to guide and monitor them. Eight out of ten learners in the library-based programme of Library B practised their skills at home, while the majority at the hospital-based programme did not, due to time constraints.
Book ownership
The majority of Library A learners (29, or 83%) owned books or reading materials. These were Bibles (in their home languages), magazines and dictionaries. Very few had books in English. The minority (6, or 17%) did not own books because, being unable to read, they did not see the need. In the library-based programme of Library B seven out of ten learners owned Bibles, storybooks, high school textbooks, cookery books and biographies. The three who did not said that they saw no need as they could not read, and that if they wanted a book they could use the library, reasons similar to those advanced by the Library A group. Five of the nine learners at the hospital programme of Library B had books at home, with a similar variety to those from the library-based programme: Bibles, cookery books, history books and school textbooks. Those who did not have any books said that there was no point as they could not read. Total book ownership was as follows: 41 learners (76%) owned books, while a minority of 13 (24%) did not.
Library membership and use
Low library membership was found across all groups. In Library A, 15 out of 35 learners were members of a library, while in Library B, 4 out of 10 at the library site were members of the host library and 4 out of 9 at the hospital site were members of their local library, not the host library, which was too far for them to visit. Thus a minority of learners across all sites were library members: 42.6%, or 23 out of 54. There is scope for growth in library membership at Library A, where the 20 learners not yet members will receive a library card once they have completed Level One of their course and progressed sufficiently to complete the membership form. Library membership was used as a motivation for the learners to reach the milestone of being able to complete a form. Those learners at Library B who were not members of the library said that, being unable to read and write, they could not use the library, which was a place for literate people, and they did not want to disclose their status to the librarians. The learners who did use the library said that the materials they liked to read were available in their local library. They did not use the literacy collection but used newspapers, magazines and children's books.
The respondents were asked about their general use of the library collection, even if they did not take out materials. A slim majority of learners at Library A (18, or 51%) did not use the library collection, while the remaining 17 (49%) did. The reasons for not using the library were mostly similar to those for not joining it: they indicated that they were illiterate and therefore unable to use the library. Some of the learners viewed libraries as places for educated people, as the following two representative statements illustrate: 'I don't use the library because I don't want the librarians to know that I am illiterate; they will laugh at me' and 'The library is for educated people so I don't belong there'. Their responses were disturbing because they revealed that the learners had not made the connection between the library as provider of literacy classes and the library as provider of reading materials to promote literacy through voluntary reading. These sentiments do not portray the library as an educational agency hospitable to all, irrespective of reading or educational level. Some said they were using the learners' literacy collection. Those who were using the library were asked if they were able to get the materials they needed. They all indicated that the materials were available, and that if they needed something that was not available at the library they used their literacy collection. They also indicated that they did not specifically use the adult literacy materials at the library but used any materials, such as magazines, newspapers and the children's section. None of the 35 learners was a member of a reading club.
Only three learners of the hospital-based programme at Library B were using the library. The remaining learners said that they did not have time to go to the library, that the library was far from their homes, and in some cases that the library where they had registered did not have the materials they wanted. They also said that they had requested the materials they wanted from the librarian but had had no response; they had requested simply written adult materials that would help them to practise and maintain their newly acquired skills. The three learners who were using the library all indicated that their favourite materials were available there. They did not specifically use the adult literacy collection but used a variety of materials such as magazines, newspapers and children's books.
Conclusions and recommendations
The selected centres provided literacy programmes to equip adults with basic learning needs and life skills, which map to UNESCO's (2005) definition of basic learning needs: 'literacy, oral expression, numeracy, problem solving skills, knowledge, values and attitudes'.
Literacy in the lives of the learners
The learners viewed literacy as an important prerequisite for survival, functionality and participation in a modern society: they all indicated that they wanted to improve their lives in various ways, such as the ability to get employment or better jobs, or to communicate better, either in their mother tongue, Afrikaans, or in English, the language of the commercial sector.
Two cohorts, viz. that from Library A and the participants in the hospital programme, were mainly motivated by employment concerns. In the library-based programme of Library B, the learners had more generalised motives for joining the programme: coping with daily needs, some to find employment, and others to be able to read to their grandchildren. The learners who were unemployed (Library A) were very enthusiastic and participative in classes. This programme suffered to a far lesser extent from absenteeism than the programmes offered by Library B. The reasons for poor attendance were not clearly identified by the learners, facilitators or the librarian at the library-based programme of Library B.
The programmes' resources
Neither of the libraries was satisfied with the level of funding available for the programmes. Inadequate funding for adult education is characteristic of the sector in South Africa and insufficient to meet the needs of literacy training (Rule 2005). Moreover, public libraries have experienced a sustained period of declining funding (Cole 2000), which has led to cuts in service and a lack of innovation. Precarious funding was particularly acute in Library B, which was unable to purchase any learning materials in spite of their central role in literacy education. Although relevant and suitable literacy materials are available for purchase, Library B was not able to avail itself of them due to financial constraints. The availability of literacy materials is a critical requirement for any literacy programme, without which there is unlikely to be any change and improvement in the learner's life (Rogers 1994; Cisse 2001; Mabomba 1992; Thomas 1993). The literature shows that the lack of resources in literacy programmes, literacy materials in particular, has an impact on learner motivation, absenteeism and drop-out levels. It is recommended that the librarian at Library B and the government departments supporting the programmes should consider the provision of
literacy materials as an essential requirement of any literacy programme and take steps to ensure an adequate supply for teaching, learning and voluntary reading.
Evaluation of the programmes
The literacy programmes in both libraries need continuous evaluation because the needs of learners are continuously changing. Through this process the learners and facilitators can be given the chance to identify their needs, expectations and problems, and the provider in turn (librarian or other) can specify the objectives of the programmes. Absenteeism and drop-out levels need to be monitored and figures collected and kept. It is recommended that learners be motivated in various ways, such as the inclusion of income generation programmes based on their interests. Such motivation is important for learner participation and retention.
Promotion of the library
Although learners were encouraged to use the library and their reading materials at home, the majority of them (especially those working at the hospital) were reluctant to use the library and to practise their skills at home. It is recommended that the librarians include library-related programmes in literacy programmes for all learners, irrespective of their levels. Such programmes can include library visits, storytelling and family literacy, offered on a regular basis. Library visits can help the learners to become familiar with the library and to speak to the librarians about their needs, thus growing in confidence. Through such programmes the learners can identify their reading needs and the materials they like; such information can help the librarian to select materials based on learners' needs. The learners' requests for materials and any information needs should be followed up by the librarians.
Necessary conditions for literacy programmes in the public library
A fully functioning literacy programme requires the availability of literacy materials, cooperation and communication between stakeholders (including the learners) and strong commitment from them, and strong financial support from the government and the library's parent bodies. Through such commitment, libraries will be in a better position to meet the needs of adults in literacy programmes and so be active partners in nation building and in economic and social development.
Health Technology Assessment of Belimumab: A New Monoclonal Antibody for the Treatment of Systemic Lupus Erythematosus
Objective. Systemic lupus erythematosus (SLE) is treated with anti-inflammatory and immunosuppressive drugs and off-label biologics. Belimumab is the first biologic approved in 50 years as an add-on therapy for active disease. This paper summarizes a health technology assessment performed in Italy. Methods. SLE epidemiology and burden were assessed using the best published international and national evidence, and the efficacy and safety of belimumab were synthesized using clinical data. A cost-effectiveness analysis was performed with a lifetime microsimulation model comparing belimumab to the standard of care (SoC). Organizational and ethical implications were discussed. Results. The literature review showed that SLE affects 47 per 100,000 people, for a total of 28,500 patients in Italy, 50% of whom are affected by an active form of the disease despite SoC. These patients, if autoantibody- and anti-dsDNA-positive with low complement, are eligible for belimumab. SLE causes work disability and a 2-5-fold increase in mortality. Belimumab with SoC may prevent 4,742 flares in three years, being cost-effective with an incremental cost-effectiveness ratio of €32,859 per quality adjusted life year gained. From the organizational perspective, the development of clear and comprehensive clinical pathways is crucial. Conclusions. The assessment supports the inclusion of belimumab in the SLE treatment paradigm in Italy.
Introduction
Systemic lupus erythematosus (SLE) is a chronic inflammatory autoimmune disease affecting the skin, joints, kidneys, lungs, nervous system and serous membranes, which occurs mostly in fertile women [1]. The feature that affects patients' long-term survival is tissue damage, especially when organs such as the kidneys are involved [2][3][4].
Recent EULAR (EUropean League Against Rheumatism) recommendations for the management of patients affected by SLE were published in 2008 [5]. In 2010, EULAR recommendations for neuropsychiatric lupus were also defined [6], while in 2012 the American College of Rheumatology (ACR) released recommendations for the management of lupus nephritis [7]. In the last 50 years no new drugs for SLE had been approved. Therapy for SLE includes nonsteroidal anti-inflammatory drugs (NSAIDs), corticosteroids, antimalarial agents, and immunosuppressant drugs [8]. None of these treatments has a specific target; rather, their aim is the reduction of inflammation and unspecific suppression of the immune system. In the last 15 years immunosuppressant and immunomodulating drugs, which act on specific target immune cells, were added as second-line treatment. Despite the wide availability of these treatments, approximately 50% of patients have persistently active SLE or experience a relapse [9,10], both of which require modification of therapy, most commonly an increase in corticosteroid dosage and the introduction of immunosuppressant drugs [8]. This paper summarizes the results of a health technology assessment (HTA) of belimumab, which was approved at a dosage of 10 mg/kg by the US Food and Drug Administration (FDA) in March 2011 and by the European Medicines Agency (EMA) in July 2011. Belimumab is indicated as an add-on treatment for SLE in adults with a positive autoantibody test whose disease is still highly active (e.g., anti-dsDNA positive and low complement) despite standard treatment, with the exception of patients with severe active lupus nephritis or severe active central nervous system lupus [11]. Because of the unmet therapeutic need of these patients, belimumab could be a much awaited treatment by both physicians and patients and may potentially change the therapeutic framework of SLE in Italy.
The HTA of belimumab aims to fill the need for knowledge about this treatment and to establish a good basis for its proper use. Moreover, the HTA may direct the design of further research and the information or training of health professionals and/or patients. The final purpose is thus to contribute to a greater effectiveness and efficiency of the decision making process.

Methods

An HTA was performed to evaluate the value of using belimumab for treating patients affected by SLE in the Italian context.

Epidemiology of the Disease. Target condition and population were defined using systematic reviews of the literature. Studies published on PubMed dealing with prevalence and incidence of SLE worldwide and in Italy were examined to define the frequency of the disease. The same approach was used to address the frequency of people with disability due to SLE. In order to perform these reviews the following keywords were used: "Lupus Erythematosus, Systemic" [Mesh], LES, "Epidemiology" [Mesh], incidence, prevalence, burden, and frequency.

Efficacy and Safety of Belimumab. A literature search was performed in PubMed and Embase to identify randomized clinical trials (RCTs) reporting the efficacy and safety of belimumab. In order to perform these reviews the following keywords were used: Belimumab, Efficacy, Safety, "Clinical Trial". Moreover, the European Public Assessment Report (EPAR) and data supplied by the marketing authorization holder (GlaxoSmithKline) were reported.

Cost-Effectiveness Analysis. A cost-effectiveness analysis was performed from both the Italian National Health Service (NHS) and societal perspectives. In particular, a microsimulation cost-effectiveness model was developed to assess the cost-effectiveness of belimumab (10 mg/kg) + the standard of care (SoC) compared to the SoC alone. The model was developed by Pharmerit International Company in Excel software and then adapted to the Italian context. Table 1 shows the main features of the economic model.
The long term outcomes used in the model were based on data from the BLISS trials [12] and the Johns Hopkins observational cohort study [13]. The BLISS trials informed the likelihood of response at week 24, the change in SELENA-SLEDAI (Systemic Lupus Erythematosus Disease Activity Index) score up to week 52, the likelihood of discontinuation, and the effect of SELENA-SLEDAI score on utility and treatment costs. The efficacy recorded in the trials was projected over the 10-year maximum treatment period, in accordance with NICE report [14]. The main parameters used in the base case scenario are shown in Table 5. As the analysis was conducted from both societal and NHS perspectives, direct and indirect costs were considered [20,21]. In particular, direct costs were related to diagnostic tests, specialist visits, and organ damage. Regarding indirect costs, the human capital approach was followed to carry out the analysis. Utility data, used to calculate quality adjusted life years (QALYs) related to the different health states (Figure 1), were retrieved from international literature, on the basis of the utilities elicited within the BLISS trial that considered a representative sample of UK patients [14]. The horizon of the analysis was lifetime and costs and benefits were discounted at 3% yearly. Results were reported as incremental cost-effectiveness ratio (ICER), expressed in terms of incremental cost per life year (LY) gained and QALY gained. 
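The discounting of costs and benefits and the ICER reported as the model's output are standard health-economics arithmetic; a minimal sketch in Python follows. The numbers are illustrative placeholders, not outputs of the actual microsimulation model.

```python
def discounted_total(yearly_values, rate=0.03):
    """Sum a stream of yearly costs or QALYs, discounting at `rate` per year
    (year 0 undiscounted, following the common convention)."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(yearly_values))

def icer(cost_new, cost_comparator, effect_new, effect_comparator):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    effect (per LY or per QALY gained, depending on the effect measure)."""
    return (cost_new - cost_comparator) / (effect_new - effect_comparator)

# Illustrative (hypothetical) discounted lifetime totals:
print(icer(50_000, 30_000, 6.0, 5.0))  # 20000.0 euros per QALY gained
```

The published ICERs come from the full microsimulation over many simulated patients, not from a single ratio like this; the sketch only shows the final aggregation step.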
To assess the robustness of the base case results, univariate and probabilistic sensitivity analyses (PSA) were conducted on the following critical parameters (the distribution is reported in brackets):

(i) change in SELENA-SLEDAI score at week 52 (multivariate normal distribution);
(ii) change in SELENA-SLEDAI score according to the natural history model (multivariate normal distribution);
(iii) discontinuation rate (normal distribution);
(iv) probability of response (gamma distribution);
(v) mortality and organ damage development probabilities according to the natural history model (multivariate normal distribution);
(vi) standardized mortality rates (normal distribution);
(vii) utility values (multivariate normal distribution);
(viii) organ damage disutility (gamma distribution);
(ix) costs associated with each SELENA-SLEDAI score (gamma distribution);
(x) organ damage costs (gamma distribution);
(xi) indirect costs (normal distribution).

A cost-effectiveness acceptability curve (CEAC) was reported to assess how the probability of cost-effectiveness of belimumab varies according to different threshold values.

Organizational Aspects and Impacts. A literature review was performed to analyze health needs, healthcare priorities, and quality of life (QoL) of patients with SLE. These aspects were identified as a result of a discussion with key opinion leaders involved in SLE management. The keywords used to perform the review were "Lupus Erythematosus, Systemic" [Mesh], LES, health needs, healthcare priorities, and quality of life.

Ethical Evaluation. The ethical issues linked to the utilization of the product were taken into account through a framework including epistemological data, anthropologic reference, and ethical evaluation. With respect to human values, the following elements were considered: risk/benefit ratio, QoL, patient's autonomy, and social justice.

Results

3.1. Epidemiology of the Disease.
SLE is due to both genetic and environmental factors which lead to a deregulation of the immune response. The diagnosis relies on clinical anamnesis, medical investigation, and laboratory tests, which are useful to exclude different diseases. Eleven criteria were developed and provided by the ACR to make the diagnosis [22] and have recently been revised and validated by the Systemic Lupus International Collaborating Clinics (SLICC) group [23]. The diagnosis is often late because of the insidious onset [24]. The disease has a remitting-relapsing pattern with the occurrence of flares, that is, objective increases in disease activity marked by the onset or worsening of signs and symptoms [2,3,25]. The SLEDAI, the British Isles Lupus Assessment Group (BILAG) index, the Physician's Global Assessment (PGA), and the SLE Responder Index (SRI) were developed to assess disease activity. The disease determines joint pain, which occurs in about 90% of patients [1,26,27]; skin rashes, which develop during the course of the disease in 85% of patients [1,24,[26][27][28]; and glomerulonephritis, in about 50% of patients, which may cause renal failure in 20% [28][29][30]. The disease is more common in non-Caucasian people, in particular Black and Hispanic [31,32]. It affects women in 80-90% of cases, with a female/male ratio ranging between 6 and 10 [31,32]. The peak of incidence is reached between 15 and 44 years of age [31,32]. The systematic review of the literature yielded 29 studies performed in Europe or America from 1980 on. Twenty-one studies were carried out in Europe, mainly on people belonging to the Caucasian race. The prevalence varied from 20 to 50 cases per 100,000 while the incidence ranged from 2 to 5 cases per 100,000 each year. In Italy only two small studies addressed the epidemiology of SLE [33,34]: prevalence ranged between 57.9 and 71 cases per 100,000 while incidence varied from 1.15 to 2.6 per 100,000.
To calculate the population eligible to receive belimumab, data were searched in the literature, specifically the percentage of patients with active disease and low complement levels. Chronic active disease was defined as a SLEDAI-2K ≥ 2 (excluding serology) in at least two out of three annual medical examinations, whereas relapsing-remitting disease was defined as a SLEDAI-2K ≥ 2 in at least one out of three annual medical examinations. Two studies released estimates of patients meeting these criteria [9,10], for a mean value of 50%. A direct estimate of the presence of low complement together with anti-dsDNA positivity was provided by the Systemic Lupus Erythematosus Cost of Care In Europe Study (LUCIE) [35], which yielded a value of 39.6%. Considering a mean prevalence of 0.047% [36], the population of patients affected by SLE in Italy would be approximately 28,500, whereas the population eligible to receive belimumab would be approximately 5,300. The survival of SLE patients has improved throughout the years and is now over 90% at 5 and 10 years [37,38]. Daily life activities most influenced by the disease include vigorous physical activities in 83.9% of cases, housework in 79.4%, sleep in 72.9%, work activities in 70.7%, and household business in 67.8% [39]. About one-third of patients become unable to work and are obliged to retire within 3-12 years of the diagnosis.

Efficacy and Safety of Belimumab. The search identified one phase I study (LBSL01) [40], one randomized, double-blinded, placebo-controlled phase II study (LBSL02) [41], two randomized, double-blinded, placebo-controlled phase III studies (C1056 or BLISS-76 and C1057 or BLISS-52) [42,43], and a combined analysis of the phase III clinical trials [12]. Main results of the phase II and III trials on belimumab are reported in Table 2.
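The eligible-population estimate in the epidemiology section above can be reproduced with simple arithmetic. The Italian population size is an assumption here (roughly 60.6 million around the time of the study); the remaining inputs are taken from the text.

```python
italy_population = 60_600_000   # assumed, approximate (not stated in the paper)
prevalence = 0.047 / 100        # 47 per 100,000, mean prevalence from the review
share_active = 0.50             # patients with active disease despite SoC
share_low_complement = 0.396    # anti-dsDNA positive with low complement (LUCIE)

sle_patients = italy_population * prevalence
eligible = sle_patients * share_active * share_low_complement
print(round(sle_patients), round(eligible))  # 28482 5639
```

This lands on the reported ~28,500 SLE patients; the eligible-population result is somewhat above the paper's ~5,300, which presumably reflects the model's exact inputs and rounding rather than this naive multiplication.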
In the BLISS-76 and BLISS-52 phase III pivotal trials, the study population was treated with belimumab and SoC (corticosteroids, antimalarial agents, NSAIDs, cytotoxic chemotherapy, and immunosuppressive or immunomodulatory drugs) while controls received SoC plus placebo. The primary endpoint was a reduction in the SRI at week 52 in both studies. Secondary endpoints were flare frequency, time between flares (BLISS-76 study only), and the effect of the treatment on corticosteroid dosage. All endpoints underwent an intention-to-treat analysis. Two belimumab doses were studied (1 mg/kg and 10 mg/kg). In both phase III trials, the primary endpoint was achieved in a significantly greater proportion of patients treated with the 10 mg/kg dosage compared to patients treated with placebo (p = 0.0006 in BLISS-52 and p = 0.02 in BLISS-76). On the contrary, no statistically significant differences were observed between the 1 mg/kg belimumab and placebo groups in the BLISS-76 study. A greater response to 10 mg/kg belimumab was also observed in the subgroup with more active disease (placebo: 31.7%; belimumab 1 mg/kg: 41.5%, p = 0.002; belimumab 10 mg/kg: 51.5%, p < 0.0001). This response maintained a statistically significant value even at week 76 only in the 10 mg/kg arm. The combined analysis was performed by collecting data from 1,684 patients enrolled in the two phase III studies. This analysis confirmed the results of the BLISS-52 and BLISS-76 trials and showed that belimumab allowed a significantly greater number of patients to reduce the prednisone dosage below 7.5 mg/day (18% in the 10 mg/kg group compared to 12% in controls, p < 0.05). Also, the average number of flares/year per patient was significantly lower in the 10 mg/kg belimumab group compared to controls (2.9 versus 3.5, p < 0.001), but no significant differences were observed in the number of severe flares/year (0.8 in the 10 mg/kg group compared to 1.0 in controls).
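The flare rates reported in the combined analysis translate into a modest absolute and relative reduction per patient; a quick check using only the rates given in the text:

```python
flares_soc = 3.5        # flares/year per patient, placebo + SoC
flares_belimumab = 2.9  # flares/year per patient, belimumab 10 mg/kg + SoC

absolute_reduction = flares_soc - flares_belimumab    # flares/patient/year
relative_reduction = absolute_reduction / flares_soc  # fraction of baseline rate
print(round(absolute_reduction, 1), round(relative_reduction * 100, 1))  # 0.6 17.1
```

That is 0.6 fewer flares per patient-year, about a 17% relative reduction; scaling to a population-level flare count (such as the 4,742 flares over three years cited later in the paper) requires the full model rather than this per-patient arithmetic.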
The safety profile of both doses of belimumab (1 mg/kg and 10 mg/kg) indicated that they were generally well tolerated. There were no significant differences between belimumab and placebo in terms of overall and serious dose-dependent adverse events. No differences were observed in events leading to discontinuation of treatment. The majority of adverse events were mild or moderate. Table 3 shows the most common adverse events observed in phase II and III trials [41][42][43].

Cost-Effectiveness Analysis. The cost-effectiveness analysis showed that belimumab was cost-effective at the base case. In particular, from the Italian NHS perspective, the ICER was equal to €22,990 and €32,859, respectively, per LY and per QALY gained (Table 4). The results from the societal perspective confirmed that belimumab can be considered even more cost-effective, achieving €20,119 per LY gained and €28,754 per QALY gained. The base case results were confirmed by the PSA. The PSA showed that, when the threshold/QALY was equal to €30,000, belimumab had a 29.1% probability of being cost-effective compared to the SoC. The CEAC showed that, when the willingness to pay/QALY was equal to €40,000, belimumab had an 84.3% probability of being cost-effective (Figure 2). The univariate sensitivity analysis showed that the main drivers of cost-effectiveness were the treatment effect and the discontinuation rate. In conclusion, on the basis of the CEAC, it is possible to state that the introduction of belimumab within the Italian context could be recommended as it is cost-effective from both NHS and societal perspectives.

Organizational Aspects and Impacts. The functional status of patients with SLE, especially during phases of active disease, is generally compromised when compared with that of the general population. Patients also show a decrease in QoL in the sphere of both physical and emotional functions.
The reduction of Health-Related Quality of Life (HR-QoL) is comparable to that seen in serious diseases such as acquired immunodeficiency syndrome (AIDS) or other chronic diseases such as rheumatoid arthritis, hypertension, congestive heart failure, diabetes mellitus, and myocardial infarction [44,45]. The available studies concerning the impact of belimumab on QoL showed a significant improvement in QoL in comparison to the control group (SoC), confirming the positive effect of the drug on the different dimensions of HR-QoL [46,47]. The diagnosis of SLE is often late, largely due to the insidious onset of the disease [24]. Patients frequently need the advice and active cooperation of several specialists (rheumatologists, internists, nephrologists, immunologists, and dermatologists) and are hampered by the low prevalence of the disease [36], the late diagnosis [24], the strong inter- and intraregional heterogeneity in accessing new therapies, especially in the case of infusion therapies, and the lack of specific clinical pathways. According to the results of a survey carried out in Italy in 2011, SLE is treated in one out of four hospitals, and only 55% of centers treating SLE provide patients with the intravenous biological drugs commonly used in the treatment of other diseases [48]. The complexity of taking charge of the patient and the use of off-label drugs represent further issues to be considered. In this context, belimumab would help to bridge the therapeutic gap for SLE by adding value in terms of offering appropriate treatments and improving QoL.

Ethical Evaluation. On the basis of the phase III clinical trials, belimumab has a favorable risk-benefit profile, even though further studies are needed to address the safety profile outside clinical research studies. Available studies also showed that belimumab can improve QoL.
An adequate communication process about the possible risks and benefits, the way of administering the drug, and the follow-up schedule is required to guarantee the autonomy of the patient. The involvement of general practitioners (GPs), if integrated with specialist centers, might be an appropriate solution for diagnosis and timely initiation of therapy and to facilitate the communication process. The only critical point seems to be social justice, which is threatened by economic constraints and heterogeneity in access to care within Italy. To guarantee the correct use of belimumab, several actions are needed: the promotion of integration between primary care and specialists, in order to allow a multidisciplinary approach to the management of this clinical condition; the collection of further evidence about efficacy, cost-effectiveness, and safety; and the guarantee of equal access to clinical pathways and drugs for all patients. Notwithstanding, current evidence justifies a positive ethical evaluation of the use of belimumab.

Discussion

Belimumab represents the first drug approved in the last 50 years specifically to treat patients with active, autoantibody-positive SLE who are receiving standard therapy. Prior to belimumab, the last drugs approved by the FDA were Plaquenil (hydroxychloroquine) and corticosteroids in 1955. This delay in the development of new lupus drugs can be attributed to the difficulty of conducting phase III studies for regulatory approval. In fact, these studies have an intrinsic complexity related to the particular characteristics of such a complex disease as SLE and to the difficult definition of the endpoints used to evaluate the efficacy of drugs. The results of the HTA report presented in this paper show the advantages of belimumab by demonstrating its efficacy, cost-effectiveness, and ethical value, which make it a useful therapeutic option with the potential to modify the course of SLE.
With respect to efficacy, the HTA report by NICE [49] highlighted that there is evidence of the clinical effectiveness of belimumab, although a greater consistency of results was observed in the BLISS-52 trial, which is not as representative of the European population as BLISS-76. The NICE assessment concluded that belimumab was not cost-effective in comparison to SoC but judged the projection of short-term outcome data to the long term to be appropriate. Notwithstanding, the NICE appraisal highlighted different concerns related to the model and its parameters, which may lead to either an over- or an underestimation of the ICER. In particular, since the discontinuation rate could have been underestimated, the ICER may have been overestimated. This is relevant also because the discontinuation rate is likely to be higher, as shown by the phase II extension study and agreed by clinicians [49]. Finally, the NICE HTA report considered the characteristics of the population affected by the disease and the lack of other licensed treatments. Furthermore, belimumab was considered steroid sparing, with the potential to reduce the side effects of these drugs. Our report thoroughly dealt with the epidemiological, economic, organizational, and ethical implications of the use of belimumab in the Italian context, making it possible to support a final positive opinion about the drug. In fact, SLE is a highly health-threatening disease which mainly affects women of childbearing age but also adult males, children, and adolescents. The disease is also burdened with high social and health services costs. Chronic treatment with standard care exposes patients to health risks, and patients with high disease activity need alternatives to drug holidays or the increase in the dosage of other drugs.
In some cases clinicians use therapies which are not indicated for SLE (off-label drugs), such as rituximab. Our literature review showed that the prevalence of SLE is about 47 per 100,000, for a total of 28,500 patients in Italy. About 50% of patients are affected by the active form despite SoC. These patients, if positive for anti-dsDNA and with low complement, are eligible to receive belimumab. Furthermore, SLE determines an increase in mortality and work disability, with costs varying according to disease severity and the development of flares. According to the efficacy data, belimumab in association with SoC would prevent 4,742 flares in three years and would be cost-effective. The definition of a clear and efficient treatment pathway for SLE would be worthwhile and requires the involvement of GPs as well as several specialists. Furthermore, belimumab could improve QoL, with positive ethical implications. No other treatment has obtained similar significant or comparable results. However, there is a lack of long-term efficacy data, and the evidence of the correlation between the SRI and survival should be strengthened. Furthermore, the safety profile was studied only for a maximum period of seven years of follow-up, which is still inadequate given the chronic nature of the disease. Efficacy data only come from studies versus placebo, since no other treatment has proved a significant effect on the control of the disease. Despite these limits, the strength of this work was the collection, combination, and synthesis of all available data, which are important in order to support the sustainable introduction of a new drug. In this view, belimumab may be an innovative and important drug, and postmarketing research will play a key role in updating the HTA and further supporting decision making. In conclusion, this project demonstrates that belimumab may deserve to be introduced in the care of SLE patients in Italy.
Our work suggests that tools such as HTA, characterized by a comprehensive approach to the evaluation of health technologies, should be used and implemented with a view to supporting an informed and evidence-based decision making process [50].

Conclusion

The HTA described in this paper shows the value of belimumab and gives important information for its proper use. In particular, the assessment demonstrated that belimumab may prevent flares and is cost-effective in patients with systemic lupus erythematosus who have a highly active disease despite standard of care.
Paracoccidioides brasiliensis 30 kDa Adhesin: Identification as a 14-3-3 Protein, Cloning and Subcellular Localization in Infection Models

Paracoccidioides brasiliensis adhesion to lung epithelial cells is considered an essential event for the establishment of infection, and different proteins participate in this process. One of these proteins is a 30 kDa adhesin, pI 4.9, that was described as a laminin ligand in previous studies and was more highly expressed in more virulent P. brasiliensis isolates. This protein may contribute to the virulence of this important fungal pathogen. Using Edman degradation and mass spectrometry analysis, this 30 kDa adhesin was identified as a 14-3-3 protein. These proteins are a conserved group of small acidic proteins involved in a variety of processes in eukaryotic organisms. However, the exact function of these proteins in some processes remains unknown. Thus, the goal of the present study was to characterize the role of this protein during the interaction between the fungus and its host. To achieve this goal, we cloned and expressed the 14-3-3 protein in a heterologous system and determined its subcellular localization in in vitro and in vivo infection models. Immunocytochemical analysis revealed the ubiquitous distribution of this protein in the yeast form of P. brasiliensis, with some concentration in the cytoplasm. Additionally, this 14-3-3 protein was also present in P. brasiliensis cells at the sites of infection in C57BL/6 mice intratracheally infected with P. brasiliensis yeast cells for 72 h (acute infection) and 30 days (chronic infection). An apparent increase in the levels of the 14-3-3 protein in the cell wall of the fungus was also noted during the interaction between P. brasiliensis and A549 cells, suggesting that this protein may be involved in host-parasite interactions, since inhibition assays with the protein and its antibody decreased P. brasiliensis adhesion to A549 epithelial cells.
Our data may lead to a better understanding of P. brasiliensis interactions with host tissues and paracoccidioidomycosis pathogenesis.

Introduction

Paracoccidioides brasiliensis is a dimorphic fungus and the etiologic agent of paracoccidioidomycosis (PCM). This disease presents a prolonged evolution and may involve several organs [1]. P. brasiliensis is considered a facultative intracellular fungus that can adhere to and invade epithelial cells in vivo and in vitro [2]. The adhesion and invasion abilities of the fungus are dependent on the virulence of the isolate [3], which can be attenuated or lost after subsequent cycles of subculture for long periods [4] and reestablished after passage in animals [5] or in epithelial cell culture. P. brasiliensis has multiple mechanisms of pathogenicity, including adherence, colonization, dissemination, survival in hostile environments and escape from immune response mechanisms, that allow it to colonize the host and cause disease [6][7][8]. The fungus also uses a variety of surface molecules to bind to the extracellular matrix of the host cell and establish infection [9]. The molecular mechanisms involved, from first contact with the infectious agent to subsequent stages of the disease, remain unknown. A necessary step in the colonization and, ultimately, the development of disease by pathogens is associated with their ability to adhere to the surface of the host. The ability to adhere is a widely distributed biological phenomenon that is shared by many organisms to enable them to colonize their habitats. Successful colonization is usually a complex event and involves surface proteins of the fungus and cellular receptors [10,11]. In this way, PCM development depends on interactions between the fungus and host cell components. The large number of different tissues that fungi can colonize and infect suggests that fungi can use a variety of surface molecules for adhesion [36].
Mechanisms that may be responsible for determining the pathogenicity and virulence of P. brasiliensis have been extensively investigated by interaction experiments of this pathogen ex vivo in cell culture [26,27,[37][38][39][40][41][42] and experiments using high-throughput molecular tools, such as cDNA microarrays, insertion and/or gene deletion, and RNA interference [14,[43][44][45][46][47][48][49][50]. Studies have characterized extracellular matrix components involved in the interaction of P. brasiliensis with the host. The ECM consists of a network of proteins, including collagen, non-collagen glycoproteins, especially fibronectin and laminin, and proteoglycans, which seem to affect the proliferative capacity of the fungus [2]. In general, genes involved in adhesion are not constitutively expressed but activated when induced at the site of infection in the host [51,52]. The understanding and identification of molecules involved in the adhesion of microorganisms to different substrates in the host are important as targets for more effective new treatments in systemic mycoses. Some molecules of P. brasiliensis have been identified as ligands of extracellular matrix components. Gp43 was the first to be identified as a ligand for laminin [3,23,24]. The 43 kDa glycoprotein was found to play a role in adhesion because anti-gp43 serum inhibited the adhesion process by 85% [3]. Additional tests of binding affinity showed that gp43 was able to bind both fibronectin and laminin. In P. brasiliensis, other adhesins have also been described, and they are believed to play important roles in its pathogenesis [26,27,29,[32][33][34][35]39,53]. A 30 kDa adhesin of P. brasiliensis, which is capable of binding to laminin, was isolated and found to be expressed at higher levels in a P. brasiliensis isolate that showed high adhesion capacity [39]. P. 
brasiliensis also presents two proteins on its cell surface, with molecular weights of 19 and 32 kDa, that interact with different ECM proteins, including laminin, fibrinogen and fibronectin. Assays using conidia of P. brasiliensis pre-incubated with an anti-32 kDa monoclonal antibody inhibited the adhesion of fungal proteins to the ECM in a dose-dependent manner [29,54]. Recently, protein sequence analysis characterized the 32 kDa protein as a hydrolase, and knockout mutants showed changes in morphology, a reduced ability to adhere to human epithelial cells in vitro and decreased virulence in infection models in mice [31,54]. In addition to these adhesins, enzymes of P. brasiliensis that interact with host molecules are regarded as adhesin-like, such as GAPDH (glyceraldehyde-3-phosphate dehydrogenase), a ligand of laminin, fibronectin and collagen type I [26], TPI (triosephosphate isomerase), which also binds to matrix components such as laminin and fibronectin [34], and ICL (isocitrate lyase), a ligand of laminin, fibronectin and collagen type I [55,56]. Additionally, malate synthase (MLS) of P. brasiliensis, which functions in the glyoxylate cycle and allantoin pathway, is located in the cytoplasm and on the surface, especially in budding cells. This protein is secreted and acts as an adhesin, indicating its multifunctional role [33]. Da Silva Castro et al. (2008) [57] described another fungal surface molecule, called PbDfg5p, that has the capacity to adhere to ECM proteins. This protein was characterized as belonging to the family of glycosyl hydrolases and is related to the formation and maintenance of the fungal cell wall. In P. brasiliensis, its presence was detected in the cell wall and in cell wall protein extracts obtained from yeast treated with β-1,3-endoglucanase, using electron microscopy and immunogold labeling.
Recombinant PbDfg5p displayed an ability to bind to laminin, fibronectin, collagen type I and type IV and contained an RGD motif (Arg-Gly-Asp, which binds to fibronectin) in its predicted sequence, a common characteristic of some adhesins [57]. In our study, this 30 kDa adhesin was identified as a 14-3-3 protein using Edman degradation and mass spectrometry analysis. The 14-3-3 protein family is a highly conserved group of small acidic proteins that have been implicated in a variety of cellular processes in eukaryotes. However, although these proteins are involved in apoptosis, signal transduction, cell cycle regulation and transcription, their exact role in these processes remains unknown [58]. Members of this group function as accessory proteins in various processes, act as specific determinants that alter the cellular localization of other proteins with which they interact and are involved in the direct regulation of enzyme activity [59]. Thus, in this study, we characterized the 14-3-3 protein of P. brasiliensis by determining its localization, both in the yeast form of the fungus and in infection models (epithelial cells and a murine model), to better understand P. brasiliensis-host tissue interactions and paracoccidioidomycosis pathogenesis.

P. brasiliensis Isolate and Growth Conditions

A highly virulent P. brasiliensis (isolate 18), obtained from the mycology collection of the Faculty of Medicine, University of São Paulo (FM-USP), was used throughout this investigation. P. brasiliensis yeast cells were maintained by weekly subcultivation in semisolid culture medium. Fungal cells were grown for 3-4 days at 35°C on Fava-Netto solid medium [60].

Protein Characterization by Amino Acid Sequencing

For internal peptide sequencing, the 30 kDa protein was subjected to two-dimensional electrophoresis. The gel was stained with Coomassie blue, and the band was excised from the gel, eluted, and digested with trypsin for endopeptidase digestion.
The fragments were separated by reverse-phase HPLC and subjected to Edman degradation [61].

Amino Acid Sequence Homology Analysis of the P. brasiliensis 30 kDa Adhesin

The amino acid sequences were compared to other sequences deposited in a database. The homology searches were performed with the BLASTP program [62] and FASTA 3 [63].

Cloning cDNA Containing the Complete Coding Region of the 14-3-3 Protein into an Expression Vector

Cloned cDNA containing the complete coding region of the 14-3-3 protein (GenBank accession number AY462124) [64] was amplified by PCR using sense (5′-ATGGGTTACGAAGATGCTG-3′) and antisense (5′-CTCAGCGGCCTTAGGAGC-3′) primers. The amplification parameters were as follows: an initial denaturation step at 94°C for 2 min, followed by 25 cycles of denaturation at 94°C for 30 s, annealing at 58°C for 30 s, and extension at 72°C for 1 min and 10 s. A final elongation step was performed at 72°C for 7 min. The PCR product was subcloned into the SalI/XhoI sites of the pET-32a(+) expression vector (Novagen, Inc., Madison, WI, USA). The resulting plasmid was transformed into Escherichia coli DH10B. Bacteria transformed with pET-32a-14-3-3r were grown in LB medium supplemented with ampicillin (100 µg/mL) at 37°C to an optical density of 0.6 at 600 nm. Recombinant protein production was induced by adding 0.4 mM isopropyl-β-D-thiogalactopyranoside (IPTG) (Sigma-Aldrich, St. Louis, MO, USA) to the growing culture, and the bacterial extract was pelleted and resuspended in phosphate-buffered saline (PBS). After induction, the cells were incubated for 5 h at 37°C with shaking at 200 rpm. The cells were harvested by centrifugation at 10,000 × g for 30 min at 4°C. The supernatant was discarded, and the cells were resuspended in lysis buffer (50 mM NaH2PO4, 20 mM imidazole, 300 mM NaCl, 1 mM PMSF, and 1× PLAAC) and lysed by extensive sonication (pulse on 4.4 s; pulse off 9.9 s; 60%, extended for 2 min). The sample was centrifuged at 10,000 × g for 30 min at 4°C.
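The 58°C annealing temperature used in the PCR above is consistent with a quick rule-of-thumb estimate for the two primers (sequences as given in the text, with the mid-sequence hyphen treated as a line-break artifact). The Wallace rule below is our illustrative check, not a method the authors describe:

```python
def wallace_tm(primer):
    """Wallace rule of thumb for short primers (under ~25 nt):
    2 degrees C per A/T base plus 4 degrees C per G/C base."""
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

sense = "ATGGGTTACGAAGATGCTG"      # forward primer (5'->3'), from the text
antisense = "CTCAGCGGCCTTAGGAGC"   # reverse primer (5'->3'), from the text
print(wallace_tm(sense), wallace_tm(antisense))  # 56 60
```

Both estimates bracket the reported annealing temperature; in practice primer design tools use nearest-neighbor thermodynamics rather than this simple count.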
His-tagged Pb14-3-3r was purified using a Ni-NTA column (GE Healthcare, Buckinghamshire, UK) equilibrated with 10 column volumes of buffer A (50 mM NaH2PO4, 20 mM imidazole, and 300 mM NaCl). Clarified lysate was applied to the column at a flow rate of 2-3 mL/min. The resin was washed with 5 column volumes of buffer A supplemented with increasing concentrations of imidazole (10-120 mM in 10 mM increments), followed by 10 column volumes of buffer A plus 250 mM imidazole. Eluted fractions (10 mL) were collected at each imidazole concentration and analyzed by SDS-polyacrylamide gel electrophoresis (SDS-PAGE) [65]. The gels were washed, and the proteins were stained with Coomassie blue [66]. After the purified protein was obtained, the histidine tag was removed using the Thrombin CleanCleave Kit (Sigma-Aldrich, St. Louis, MO, USA) according to the manufacturer's recommendations. The cleaved fractions were analyzed by SDS-PAGE. To confirm that the purified recombinant protein was indeed the 14-3-3 protein of P. brasiliensis, the bands obtained in the 10% polyacrylamide gel were excised and subjected to tryptic digestion using 10 ng/mL Trypsin Gold (Promega). The tryptic fragments were analyzed by LC-MS/MS using a CapLC system coupled to a Q-TOF Ultima API mass spectrometer (Waters, UK). The spectra were processed using ProteinLynx v4.0 software (Waters) and the MASCOT MS/MS ion search (www.matrixscience.com), and the sequences were identified by searching the SwissProt database. Antibody Production Purified recombinant 14-3-3 protein was used to generate a specific rabbit polyclonal serum. Rabbit preimmune serum was obtained and stored at −20 °C. The purified protein (1.5 mg/mL) was injected into one rabbit with Freund's adjuvant three times at 2-week intervals. The obtained serum, containing a monospecific polyclonal antibody to 14-3-3, was aliquoted and stored at −20 °C.
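The step gradient described above (washes from 10 to 120 mM imidazole in 10 mM increments, followed by a single 250 mM elution step) can be enumerated programmatically; a minimal sketch:

```python
# Imidazole concentrations (mM) for the Ni-NTA step gradient described above:
# twelve wash steps in 10 mM increments, then one 250 mM elution step.
washes = list(range(10, 121, 10))  # 10, 20, ..., 120 mM
steps = washes + [250]
print(steps)  # [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 250]
```

One 10 mL fraction per step gives 13 fractions to screen by SDS-PAGE.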
The immunoglobulin fractions of the antisera were separated by ammonium sulfate precipitation and stored at −70 °C. Cell-free Antigen The cell-free extract was obtained from yeast cells of P. brasiliensis (isolate 18, with high adherence capacity to epithelial cells). The protein concentration of the extract was quantified by the Bradford method (Bio-Rad), and the samples were analyzed by SDS-PAGE. Cell Wall Protein Extraction This procedure was performed as described by da Silva Castro et al. [57], with some modifications. Yeast cells were frozen in liquid nitrogen and disrupted by maceration, and the material was lyophilized, weighed and resuspended in 50 mM Tris buffer. The supernatant was separated from the cell wall fraction by centrifugation at 10,000 × g for 10 min at 4 °C. To remove noncovalently linked proteins and intracellular contaminants, the isolated cell wall fraction was washed extensively with 1 M NaCl, boiled three times in SDS extraction buffer (50 mM Tris-HCl, pH 7.8, 2% w/v SDS, 100 mM Na-EDTA, and 40 mM β-mercaptoethanol) and pelleted after the extractions by centrifugation at 10,000 × g for 10 min [67]. The protein concentration of the extract was quantified by the Bradford method (Bio-Rad), and the samples were analyzed by SDS-PAGE. Western Blot Analysis The cell wall protein extracts and purified 14-3-3 recombinant protein separated by one- and two-dimensional electrophoresis were transferred to nitrocellulose membranes. The membranes were incubated with the polyclonal antibody raised against the 14-3-3 recombinant protein, with peroxidase-conjugated anti-rabbit IgG as the secondary antibody. The reaction was developed with a chromogen substrate consisting of 0.005 g of diaminobenzidine (DAB) dissolved in 30 mL of PBS plus 150 µL of hydrogen peroxide. The negative control reaction was performed with non-immune rabbit serum.
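As a quick check, the DAB working concentration implied above (0.005 g in 30 mL of PBS) works out to roughly 0.17 mg/mL:

```python
# Working concentration of the DAB chromogen solution described above.
dab_mg = 5.0    # 0.005 g expressed in mg
pbs_ml = 30.0   # dilution volume in mL
conc = dab_mg / pbs_ml
print(round(conc, 3))  # ~0.167 mg/mL
```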
Mice and IT Infection C57BL/6 mice were obtained from the Isogenic Breeding Unit (Departamento de Imunologia, Instituto de Ciências Biomédicas, Universidade de São Paulo, São Paulo, Brazil) and used at 8 to 12 weeks of age. The mice were anesthetized and subjected to intratracheal (IT) P. brasiliensis infection as previously described [68]. Briefly, after intraperitoneal anesthesia, the animals were IT infected with 10⁶ P. brasiliensis yeast cells in 50 µL of PBS. At 72 h and 4 weeks postinfection, the lungs were removed and fixed to analyze the subcellular localization of the P. brasiliensis 14-3-3 protein. These experiments were approved by the Ethics Committee on Animal Experiments of the University of São Paulo, São Paulo, Brazil. Subcellular Localization of the 14-3-3 Recombinant Protein in P. brasiliensis Yeast Cells in vitro and in vivo To determine the subcellular localization of the 14-3-3 protein of P. brasiliensis, we performed immunocytochemistry at the ultrastructural level using immunogold labeling. For each experiment, both pneumocytes infected with P. brasiliensis (10⁸ cells/mL) for 2, 5 and 8 h and lungs removed from C57BL/6 mice IT infected with P. brasiliensis (10⁶ cells/mL) were fixed (2.5% v/v glutaraldehyde in 0.1 M sodium cacodylate buffer, pH 7.2) for 24 h at 4 °C and submitted to the electron microscopy service of the Institute of Biomedical Sciences (ICB-I) USP-SP for the preparation of ultrathin sections. After fixation, the cells were rinsed several times with the same buffer, and free aldehyde groups were quenched with 50 mM ammonium chloride for 1 h, followed by block staining in a solution containing 2% (w/v) uranyl acetate in 15% (v/v) acetone for 2 h at 4 °C. The material was dehydrated in a series of increasing concentrations of acetone (30 to 100% v/v) and embedded in LR Gold resin (Electron Microscopy Sciences, Washington, PA).
The ultrathin sections were placed on nickel grids, preincubated in 10 mM PBS containing 1.5% (w/v) bovine serum albumin (BSA) and 0.05% (v/v) Tween 20 (PBS-BSA-T), and subsequently incubated overnight with the polyclonal antibody against the 14-3-3 recombinant protein (diluted 1:50). After washing with PBS-BSA-T, the grids were incubated overnight with the labeled secondary antibody (Au-conjugated anti-rabbit IgG, 10 nm; diluted 1:10). The controls were incubated with rabbit preimmune serum at 1:50, followed by incubation with the labeled secondary antibody. After incubation, the grids were washed with the buffer described above, washed with distilled water, and stained with 3% (w/v) uranyl acetate and 4% (w/v) lead citrate. Finally, the grids were observed with a Jeol 1010 transmission electron microscope (Jeol, Tokyo, Japan). Inhibition Assay of the Interaction between P. brasiliensis and Epithelial Cells Using Recombinant 14-3-3 Protein The infection inhibition assays were performed on coverslips in 24-well plates. Pneumocyte monolayers (A549 cells) were cultured for approximately 24 h in Ham-F12 medium (Cultilab). These monolayers were then treated with 25 µg/mL of purified recombinant 14-3-3 protein for 1 h at 37 °C. BSA (25 µg/mL) was used as a control. At the indicated treatment times, the cells were washed and infected with 10⁶ cells/mL P. brasiliensis for 2 h, 5 h, 8 h or 24 h. Duplicates were analyzed in three independent experiments. After infection, the coverslips were washed and fixed with 4% paraformaldehyde for 1 h at room temperature. After fixation, the coverslips were stained with Giemsa and analyzed using an optical microscope. The number of fungi was counted in 5000 cells, and the total infection percentage was calculated to assess the role of the 14-3-3 protein in the infection process. The data were confirmed by counting colony-forming units (CFUs). The assay was also performed in 24-well plates without coverslips.
After infection, the cells were washed, lysed with water and plated on Fava Netto's medium supplemented with 4% fetal bovine serum. After 4 days, the CFUs were counted, and the data were statistically analyzed using Origin Pro v7.5 software. Inhibition Assay of the Interaction between P. brasiliensis and Epithelial Cells Using Polyclonal Anti-14-3-3 Produced in Rabbits The infection inhibition assays were performed on coverslips in 24-well plates. Pneumocyte monolayers (A549 cells) were cultured for approximately 24 h in Ham-F12 medium (Cultilab). Suspensions of 10⁶ cells/mL of P. brasiliensis were then pretreated with rabbit polyclonal anti-14-3-3 antibody (1:100), or with rabbit preimmune serum (1:100) as a control, for 1 h at 37 °C. At the indicated treatment times, the fungi were washed, and the suspension was used to infect the epithelial cells. The infection times were 2 h, 5 h, 8 h and 24 h. Duplicates were analyzed in three independent experiments. After infection, the coverslips were washed and fixed with 4% paraformaldehyde for 1 h at room temperature. After fixation, the coverslips were stained with Giemsa and analyzed using an optical microscope. The number of fungi was counted in 5000 cells, and the total infection percentage was calculated to assess the role of the 14-3-3 protein in the infection process. The data were confirmed by counting colony-forming units (CFUs). The assay was also performed in 24-well plates without coverslips. After infection, the cells were washed, lysed with water and plated on Fava Netto's medium supplemented with 4% fetal bovine serum. After 4 days, the CFUs were counted, and the data were statistically analyzed using Origin Pro v7.5 software. Homology of the Internal Peptides of the P. brasiliensis 30 kDa Adhesin The 30 kDa adhesin was analyzed based on the sequences of internal peptides of P.
brasiliensis, which spanned three amino acid sequences: IVASADKELSVEER, NLLSVAYK and NATEVAQTDLAPTHPIR. These sequences were submitted to databases and analyzed with BLASTP (www.ncbi.nlm.nih.gov/BLAST) and FASTA 3 (www.ebi.ac.uk/fasta33/). The results were the same in both analyses; the peptides shared similarity with the 14-3-3 protein of P. brasiliensis. The amino acid sequences of the peptides showed identity with two regions of the 14-3-3 protein of P. brasiliensis already deposited in GenBank (AAR24348): amino acids 28-50 shared 100% identity, and amino acids 153-169 shared 100% identity, as shown in Figure 1. Expression, Purification and Production of a Polyclonal Antibody to Pb14-3-3 Recombinant Protein cDNA encoding the Pb14-3-3 recombinant protein was subcloned into the expression vector pET-32a, and a recombinant fusion protein was obtained. After induction with IPTG, a 43 kDa recombinant protein was detected in bacterial lysates (Fig. 2A). The six histidine residues fused to the N-terminus of the recombinant protein were used to purify the protein from bacterial lysates by nickel-chelate affinity chromatography. The recombinant protein was eluted and analyzed by SDS-PAGE (Fig. 2B). An aliquot of the purified recombinant protein was used to generate a rabbit polyclonal antibody to Pb14-3-3r. Western blotting confirmed the positive reaction of the antibody with the fusion protein (Fig. 2C) and identified a 43 kDa protein in bacterial lysates. After cleavage with the Thrombin CleanCleave Kit (Sigma-Aldrich, St. Louis, MO, USA), the immunoreactive band corresponded to a 30 kDa protein. The 14-3-3 antiserum obtained in rabbits reacted with the P. brasiliensis 14-3-3 recombinant protein, and reactivity was observed up to 1:1000. Controls were incubated with rabbit preimmune serum at 1:100 (Fig. 2C). Cell Wall Protein Extraction Expression of the 14-3-3 protein was more evident in the cell wall of P. brasiliensis recovered from infected A549 cells.
When the fungus was cultivated in Fava Netto's medium, we observed a weak reaction in the cytoplasmic fraction and no reaction in the cell wall extract (Fig. 3). Additionally, no reaction was observed in uninfected A549 cells (control) or in the cytoplasmic fraction of P. brasiliensis recovered from infected A549 cells (data not shown). Subcellular Localization of the 14-3-3 Protein in P. brasiliensis Yeast Cells in vitro and in vivo The subcellular localization of the 14-3-3 protein was determined using an anti-14-3-3 polyclonal antibody in combination with immunoelectron microscopy. P. brasiliensis yeast cells, pneumocytes and lungs removed from C57BL/6 mice IT infected with P. brasiliensis were processed for postembedding immunogold labeling. Immunocytochemical assays revealed a ubiquitous distribution of gold particles in P. brasiliensis yeast cells, with some concentration in the cytoplasm (Fig. 4). Notably, the number of gold particles in the P. brasiliensis yeast cell wall increased during the interaction with epithelial cells (Fig. 5), suggesting that the 14-3-3 protein may play an important role in the host-pathogen interaction. Some cell wall fragments containing these gold particles were directed toward epithelial cells (Fig. 5C) at longer interaction times (8 h). The 14-3-3 protein was ubiquitously distributed in fungi (Fig. 6) present at the sites of infection in C57BL/6 mice intratracheally infected with P. brasiliensis yeast cells for 72 h (acute infection) and 30 days (chronic infection). Control samples incubated with rabbit preimmune serum showed no gold labeling. Inhibition of the Interaction of P. brasiliensis with Epithelial Cells Using Recombinant 14-3-3 Protein The inhibition assay was performed by counting cells using optical microscopy (Fig. 7). Pretreatment with the 14-3-3 protein of P. brasiliensis significantly reduced (p ≤ 0.05) the infection at all times evaluated. These data were confirmed with CFU counts.
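The readout of these inhibition assays reduces to a percent change in infection relative to the untreated control. The sketch below illustrates that calculation with hypothetical counts chosen only to reproduce the reductions reported for the recombinant-protein pretreatment; they are not the paper's raw data.

```python
# Percent reduction in infection relative to an untreated control.
# All counts below are hypothetical, for illustration only.
def percent_reduction(control, treated):
    """Percent decrease of `treated` relative to `control`, rounded to int."""
    return round(100 * (control - treated) / control)

control_count = 100  # e.g. infected cells per 5000 counted (hypothetical)
treated_counts = {"2 h": 60, "5 h": 46, "8 h": 65, "24 h": 72}
reductions = {t: percent_reduction(control_count, n) for t, n in treated_counts.items()}
print(reductions)  # {'2 h': 40, '5 h': 54, '8 h': 35, '24 h': 28}
```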
BSA treatment (control) led to a slight reduction in the rate of infection, but the difference was not statistically significant compared with untreated cells. When the cells were pretreated with the recombinant 14-3-3 protein (25 µg/mL), we observed a reduction of approximately 40% at 2 h of infection, 54% at 5 h, 35% at 8 h and 28% at 24 h, demonstrating that this protein may be important in the P. brasiliensis infective process. Moreover, the rate of infection at 24 h was significantly different from that at earlier times (2, 5 and 8 h), whereas no difference was found among the earlier times (p ≤ 0.01). This may explain the increased rate of infection at 24 h, although inhibition by the recombinant 14-3-3 protein was still observed. Inhibition of the Interaction of P. brasiliensis with Epithelial Cells Using Polyclonal Anti-14-3-3 Produced in Rabbits The inhibition assay was performed by counting cells using optical microscopy (Fig. 8). Antibody treatment (1:100) was also effective in inhibiting the infection, particularly at 2 and 24 h, demonstrating that this protein may be important in the infective process of P. brasiliensis. Discussion P. brasiliensis is considered a facultative intracellular fungus that may adhere to and invade epithelial cells in vivo and in vitro [2]. The adhesion and invasion ability of the fungus depends on the virulence of the isolate [3]. The ability of cells to interact with each other in an orderly manner depends on multiple adhesive interactions between adjacent cells and their extracellular environment and is mediated by cell adhesion molecules [69][70][71]. Pathogen adhesion requires the recognition of carbohydrate or protein ligands on the surface of the host cell or proteins of the ECM [20][21][22]. The large number of tissue types that fungi can colonize and infect suggests that fungi have a variety of surface molecules for adhesion [36].
Possible mechanisms responsible for determining the pathogenicity and virulence of P. brasiliensis have been extensively investigated through interaction experiments with this pathogen ex vivo in cell culture [26,27,[37][38][39][40][41][42] and through high-throughput molecular tools, such as cDNA microarrays, gene insertion and/or deletion, and RNA interference [14,[43][44][45][46][47][48][49][50]. In our previous study, we characterized a 30 kDa adhesin as a laminin ligand and observed that this adhesin was more highly expressed in virulent P. brasiliensis isolates, indicating that this protein may contribute to the virulence of this important fungal pathogen [39]. In the present study, we aimed to obtain a better understanding of the role of the 14-3-3 protein in the relationship between P. brasiliensis and host cells using in vitro and in vivo models. Thus, we generated a recombinant 14-3-3 protein in bacteria and used it to generate a polyclonal antibody that specifically recognized the recombinant purified protein. Using amino acid sequencing, we determined that the adhesin belongs to the 14-3-3 family, and we showed that this P. brasiliensis protein may play an important role in the pathogenesis of the fungus, given that it inhibited adherence to epithelial cells by up to approximately 50%. The 14-3-3 protein had been identified in the genome of P. brasiliensis, but its function was unknown. The pathogen must regulate adhesin expression to survive and cause disease [39]. 14-3-3 proteins are a family of adaptor proteins that modulate protein function in all eukaryotic cells [72]. Little is known about the function of 14-3-3 proteins in pathogenic fungi. Studies on Saccharomyces cerevisiae and Schizosaccharomyces pombe have demonstrated that both yeasts contain two genes encoding 14-3-3 proteins, and these proteins, as in higher eukaryotes, bind to numerous proteins involved in a variety of cellular processes [73].
The filamentous fungus Aspergillus nidulans contains a protein with high homology to 14-3-3 proteins (called ArtA) that prevents the formation of the septum [74], and recently, this protein was described in P. brasiliensis vesicles [19,75]. A critical first step in the establishment of infection by pathogens is adhesion to host components. The recognition of host cells by a pathogen requires the presence of complementary molecules on the surface of the host cell. We previously demonstrated that P. brasiliensis is capable of adhering to and invading epithelial cells [27]. Adhesins that interact with receptors exist in a number of different pathogens, and host components of the ECM are often of great importance in the modulation of migration, invasion, differentiation, and microbial proliferation. In recent years, several proteins of P. brasiliensis with receptor-like characteristics have been identified as ligands of the ECM [26,27,33,35,39,76]. Using enteropathogenic E. coli (EPEC), Patel et al. (2006) demonstrated that the tau isoform (also known as theta) of 14-3-3 can bind specifically to Tir, a major effector protein that is delivered to the plasma membrane of the eukaryotic cell, where it acts as the receptor for the bacterial adhesin intimin. 14-3-3tau is recruited to the site of the pedestal (3 h after infection) and can decorate attached EPEC in the later stages of the infection process (5-7 h after infection) [72]. Immunocytochemical analysis, confirmed by western blot analysis of cell wall protein extracts, revealed a ubiquitous distribution of the 14-3-3 protein in the cell wall of the yeast form of P. brasiliensis, with some concentration in the cytoplasm, both in vitro (pneumocyte interaction) and in vivo (mouse infection). Interaction experiments were also carried out in animal models of infection (C57BL/6 mice) to elucidate the role of this protein in vivo and validate the data previously obtained in cell culture.
Notably, we observed a large increase in the amount of the P. brasiliensis 14-3-3 protein in the fungal cell wall during interaction with lung epithelial cells (A549) and in acute infection in mice, suggesting that this protein could play an important role in the host-pathogen interaction. Few fungi are found in acute infections; nevertheless, we observed the presence of the 14-3-3 protein in the fungal cell wall and a partial loss of this cell wall, similar to the cell culture model (A549). In chronic infection (30 days), however, the distribution of the 14-3-3 protein was similar to that found in fungus grown in culture media, and this feature may be related to an adaptive condition of the fungus. The 14-3-3 protein distribution in P. brasiliensis during the interaction with lung epithelial cells and in infected mice had never been demonstrated, and the large amount of 14-3-3 protein in the cell wall of this fungus during the interaction may indicate the importance of this protein in this context. The presence of the 14-3-3 protein at the P. brasiliensis cell surface raises interesting questions: for example, how is this protein incorporated into the cell wall in the absence of a conventional N-terminal signal sequence targeting it to the secretory pathway? Additional studies will be necessary to identify putative signals related to P. brasiliensis cell wall targeting. The targeting of classic cytoplasmic molecules lacking an N-terminal signal peptide to other cellular compartments is not uncommon in P. brasiliensis, as described for glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and triosephosphate isomerase (TPI) [34]. Proteins that lack an N-terminal signal peptide sequence have also been found in the cell wall of S. cerevisiae in addition to their usual cytoplasmic localization [77].
In addition, the cytoplasmic proteins GAPDH, TPI and formamidase have been detected in extracellular vesicles secreted by Histoplasma capsulatum [78] and Cryptococcus neoformans [79]. These data support our finding that the 14-3-3 protein is localized in both the cytoplasm and the cell wall of P. brasiliensis [75], and during the interaction, it can be exported to sites of infection. In conclusion, in the present study, we have shown that the P. brasiliensis 14-3-3 protein, with adhesin characteristics, may play an important role in the fungus-host cell interaction. Our data may lead to a better understanding of P. brasiliensis interactions with host tissues and paracoccidioidomycosis pathogenesis.
Integration of iPSC-Derived Microglia into Brain Organoids for Neurological Research The advent of induced pluripotent stem cells (iPSCs) has revolutionized neuroscience research. This groundbreaking innovation has facilitated the development of three-dimensional (3D) neural organoids, which closely mimic the intricate structure and diverse functions of the human brain, providing an unprecedented platform for the in-depth study and understanding of neurological phenomena. However, these organoids lack key components of the neural microenvironment, particularly immune cells such as microglia, thereby limiting their applicability in neuroinflammation research. Recent advancements have focused on addressing this gap by integrating iPSC-derived microglia into neural organoids, thereby creating an immunized microenvironment that more accurately reflects human central neural tissue. This review explores the latest developments in this field, emphasizing the interaction between microglia and neurons within immunized neural organoids, and highlights how this integrated approach not only enhances our understanding of neuroinflammatory processes but also opens new avenues in regenerative medicine. Introduction Induced pluripotent stem cells (iPSCs), first generated in 2006 [1], are produced by reprogramming mature somatic cells, such as skin or blood cells, back into a pluripotent state. These reprogrammed cells can then be differentiated into various specific cell lineages [2]. This versatility underscores the importance of iPSCs both in medical research on disease pathogenesis and in clinical therapeutic applications [3,4].
Neurodegenerative diseases, which progressively and selectively destroy specific neuronal populations, represent a growing medical challenge in an aging global population [5]. Most current knowledge about the pathology and mechanisms of neurodegenerative diseases stems from histopathological studies. However, obtaining brain samples from patients is challenging [6,7]. While animal models offer valuable insights in neurological research, they fail to completely replicate human neural disease phenotypes [8,9]. A notable difference lies in neuronal structure; for instance, human dopamine neurons form about 1-2.5 million symmetrical synapses, with an average total length of 4.6 m, ten times more than that of rats [10]. Additionally, the production and development of neurons differ significantly between humans and mice. For example, human interneurons are generated from dorsal progenitors, a process not observed in mice [11,12]. The unique neurogenesis and neurodevelopment processes in humans lead to complex neuronal networks that cannot be adequately studied using animal models. In light of this, iPSCs present a valuable approach to bridging these gaps in neurological research [13]. Microglia develop from erythro-myeloid precursors located in the yolk sac during embryonic development and subsequently take up residence in the central nervous system [14].
In contrast, monocytes, which originate in the bone marrow, can also travel to the central nervous system and subsequently differentiate into macrophages. Surface markers such as Tmem119 help distinguish resident microglia within the central nervous system [15]. In some diseases, microglia and recruited macrophages may play different pathological roles. For example, in multiple sclerosis (MS), active lesions primarily consist of microglia, and as the lesions mature, the number of microglia gradually decreases while the quantity of macrophages recruited from peripheral monocytes increases [16]. Inflammation, or neuroinflammation, is an essential driving factor of many neurodegenerative diseases [17,18]. Abnormal activation of microglia in a patient's brain is associated with disease pathology [19]. Proinflammatory cytokines, chemokines, and reactive oxygen species released by activated microglia potentiate the interaction between neuroinflammation and α-synuclein dysfunction, which drives the progression of Parkinson's disease (PD) [20]. In this context, immunized neural organoids serve as an invaluable tool for the detailed exploration and understanding of neurological diseases [21]. This comprehensive review centers on the crucial role of microglia within immunized brain organoids, delving into their impact on understanding neuroinflammation in neurological diseases. It covers technological advancements, potential applications in disease research, and the current challenges and limitations in the field. Interaction between Microglia and Neurons 2.1. Patterns of Interaction between Microglia and Neurons In the nervous system, interactions between microglia and neurons can be facilitated through a variety of mechanisms. These include the release of soluble factors, direct physical contacts between cells, and indirect interactions involving intermediary cell types [22].
Soluble messengers are crucial for bidirectional communication between microglia and neurons. This interaction is facilitated through the release of substances that act at a distance from their origin, enabling microglia and neurons to communicate effectively. For instance, microglia-derived brain-derived neurotrophic factor (BDNF) not only plays an essential role in transmitting neuropathic pain signals to neurons but also leads to a shift in the anion reversal potential of spinal lamina I neurons [23]. Additionally, microglia-released interleukins, such as IL-10 and IL-1β, influence neuronal development and activity [24,25]. Regarding neuronal influence on microglia, recent studies have shown that microglia can detect neuronal ATP and respond by producing adenosine, which then inhibits neuronal activity [26]. During traumatic brain injury, microglia can form a barrier between healthy and injured tissue in an ATP-dependent manner, highlighting the dynamic nature of neuroglial interactions in response to injury [27].
Direct microglia-neuron interaction mostly takes place at the microglia-synapse interface [28]. Several molecules have been identified in microglia-neuron contacts at synapses, including fractalkine (CX3CL1)/fractalkine receptor (CX3CR1) [29], CD200/CD200R [30], and the complement system [31]. The CX3CL1-CX3CR1 interaction is crucial for synaptic pruning by microglia [29], as highlighted by the impaired functional maturation of synapses in CX3CR1-deficient mice [32]. Microglia contribute to neurogenesis through crosstalk with neural precursors in the developing cortex [33]. They also have the capacity to direct neuronal migration and differentiation [34]. Synaptic pruning during brain development requires microglia and is necessary for refining synaptic circuits [29]. CX3CR1, a chemokine receptor essential for microglial function, is instrumental in synaptic pruning, a key process in maintaining neural network health and adaptability. Deficiencies in CX3CR1 lead to compromised synaptic pruning, resulting in diminished synaptic transmission and decreased brain connectivity. This impairment can subsequently trigger the onset of various neuropsychiatric disorders, highlighting the receptor's critical role in brain function and mental health [35]. CD200 acts as a negative regulator of microglia. When CD200 is deficient, the number and activity of microglia can increase excessively, consequently increasing the occurrence of experimental autoimmune encephalomyelitis [36].
Intermediary cells such as astrocytes simultaneously receive and transmit signals from both microglia and neurons [37]. Continuous exchanges of metabolites occur between neurons and astrocytes [38]. Moreover, astrocytes and microglia, together with glutamatergic neurons, constitute the "quad-partite synapse." This structure is crucial for neural activity and serves as a pivotal component of neuro-immune communication, highlighting the intricate and integrated nature of neuronal and immune system interactions in the brain [39]. The Significance of Microglia and Neuron Interactions The role of interactions between microglia and neurons may be context-dependent. Under homeostatic conditions, microglia perform surveillance functions by constantly monitoring their environment with highly motile processes [40]. Acute and rapid activation of microglia is generally considered neuroprotective [41], whereas persistent and prolonged microglial activation may be neurotoxic, ultimately leading to neuronal degeneration [42]. Emerging data suggest that neuroimmune dysfunction in microglia-synapse interactions contributes to neurodegenerative diseases. In the early stages of Alzheimer's disease (AD), complement and microglia are implicated in synapse loss, characterized by increased C1q at synapses and subsequent microglial phagocytosis before the deposition of pathological plaques. Blocking C1q or the complement receptors on microglia protects against early synapse loss, indicating that targeting microglia-synapse interactions could be a potential therapeutic approach in AD [31]. In PD, the accumulation of α-synuclein in synapses suggests an engulfment mechanism similar to that seen in AD, although the role of complement in synapse loss in PD remains largely unknown [43]. Reactive astrocytes, induced by activated microglia, contribute to direct neurotoxicity in neurodegenerative diseases [44], and regulating microglia-astrocyte crosstalk demonstrates a potent neuroprotective effect in
PD [45]. Enhancing the CX3CL1-CX3CR1 signaling pathway also plays a neuroprotective role in a PD mouse model [46]. From iPSC-Derived Microglia to Immunized 3D Neural Organoids Due to their unique anatomical location, human microglia are almost exclusively explored through in vitro research methods. However, the availability of primary microglia for such studies presents substantial challenges. Various microglial models have been utilized to elucidate the roles of microglia in neurological diseases and to support therapeutic development. These models encompass postmortem microglia [47], microglia derived from iPSCs [48], two-dimensional (2D) co-cultures of microglia and neurons [49], 3D organoids containing microglia [50], and even more intricate organoid systems [51]. The emergence of iPSC technology marked a significant breakthrough, paving the way for more comprehensive and effective studies of human microglial cells. Muffat et al. were the first to describe microglia derived from iPSCs. A fully defined serum-free medium that mimics the central nervous system environment was developed for the derivation of microglia-like cells from human iPSCs. The medium contains CSF1 and IL-34, which are critical for the survival and maturation of microglia [48]. Subsequent studies published similar strategies to generate iPSC-derived microglia (iMGs), although each protocol differs in specific details [52,53]. Embryoid bodies (EBs) were employed in an early step of Muffat's protocol for the generation of microglia [48], whereas Douvaras et al. used monolayer cultures instead of EBs [52]. Although the two protocols are comparable in efficiency, the latter requires fewer iPSCs. Another protocol improved the differentiation of iPSCs into microglia by simplifying the initial medium and adding glial cell-derived cytokines [53].
Transcriptomic analysis has verified that iMGs closely resemble primary microglia found in both human fetal and adult tissues. iMGs express key microglial markers, including TMEM119, P2RY12, and CX3CR1. iMGs also demonstrate the ability to replicate essential microglial functions. These include migrating to injury sites in response to damage within 3D culture systems, exhibiting phagocytic activity towards foreign substances, and releasing cytokines when stimulated by lipopolysaccharides (LPS) or interferon-gamma (IFN-γ). The findings underscore the potential of induced microglial cells as an effective model for studying the behavior of microglia in the context of neural disease pathogenesis [48,52,53].

Although genome sequencing has identified numerous genes associated with neurological disorders, understanding how these genes contribute to disease development remains a challenging area of study. Exposing iMGs to brain substrates like myelin debris, synthetic amyloid-β (Aβ) fibrils, synaptosomes, and apoptotic neurons has helped generate and identify diverse transcriptional states in microglia, including a neurodegenerative disease-associated microglial (DAM) state [54]. The overexpression or mutation of key genes linked to Parkinson's disease progression, such as SNCA and LRRK2, has been identified. However, the difficulty in obtaining sufficient human tissue has hindered further studies of these genes in microglia [55,56]. iPSCs have been used to create microglia-like cells from familial PD patients with a triplication or A53T mutation of the SNCA gene. These iMGs exhibited reduced phagocytosis when exposed to high levels of α-synuclein. Different mechanisms have been observed in the microglial endocytosis of fibrillar and monomeric α-synuclein: fibrillar α-synuclein is taken up through actin-dependent pathways, while monomeric α-synuclein is absorbed via actin-independent pathways [57]. This provides valuable insights into how microglia respond to excess α-synuclein. In a study
focused on AD, researchers utilized an integrated, automated culturing platform comprising neurons, microglia, and astrocytes derived from iPSCs. That study confirmed that human iMGs exhibit neuroprotective effects, particularly by internalizing and compacting Aβ. However, it was also found that in instances of neuroinflammation, microglia lose this neuroprotective ability, which in turn exacerbates the progression of AD [58].

Given the limitations of 2D cell cultures in accurately mirroring the intricate structure and unique characteristics of the human brain, 3D-cultured organoids emerged as a superior alternative, offering a more complex and representative microenvironment for research purposes [50]. The unintended activation of microglia during in vitro studies is a critical concern that requires careful management [59]. When microglia are isolated from their natural in vivo environment and cultured in vitro, they experience a phenomenon known as 'culture shock'. This transition leads to the increased expression of certain disease-related genes, including APOE, LYZ2, and SPP1, within the cultured microglia, as opposed to the levels observed in primary microglia [60]. However, when comparing microglia cultured under traditional monolayer conditions to those integrated into organoids, the latter tend to maintain a more rudimentary and ramified-like form [21]. This suggests that microglia in organoids more closely mimic their natural, resting state found in vivo, highlighting the importance of the culture environment in maintaining physiological relevance. Furthermore, when these microglia-integrated immunized human brain organoids are transplanted into the cranial cavity of mice, the microglia demonstrate the capacity to respond not only to localized injuries but also to systemic inflammation [21]. This highlights their potential for studying microglial behavior and responses in a more physiologically relevant context. However, whether microglia in organoids cultured in
vitro truly reflect the state of microglia in the human central nervous system still requires further study.

Innate Development of Microglia in Neural Organoids

Neural organoid immunization primarily focuses on the development and differentiation of immune cells, particularly microglia-like cells, to explore and manipulate the immune responses within the organoid. Although the protocols generally used to prepare neural organoids do not yield immune cells, there have been reports of mesodermal-derived progenitors emerging in neural organoids [61], which can even differentiate into microglia. These innately developed microglia exhibit typical microglial morphology, express relevant cell surface markers, and display phagocytic capability [62]. In research involving 2D ocular organoids, cells resembling microglia and positive for PAX6 were identified. However, these cells demonstrated limited phagocytic capabilities [63]. The protocols for innately generating microglia in organoids often vary among studies. However, there is currently no direct evidence that these protocol modifications result in the production of Hematopoietic Progenitor Cells (HPCs) or provide the necessary conditions for the differentiation of HPCs into microglia. Consequently, research into using spontaneously generated microglia within organoids for immunotherapy is still in its infancy, with methodologies across different studies lacking standardization. A recently emerged method involves ectopic expression of PU.1 in pluripotent stem cells, mixing them with standard pluripotent stem cells, and subsequently forming neural organoids. PU.1 serves as a crucial transcription factor for myeloid differentiation, providing a robust induction signal for stem cells to differentiate into microglia [64]. This method presents a novel strategy for the consistent generation of microglia within neural organoids. While it shows promise, it still requires additional research and validation for
further development.

Integration of iPSC-Derived Microglia into Neural Organoids

The primary approach for creating immunized brain organoids involves differentiating iPSCs into microglia and neural organoids separately, which are then integrated. The key step of producing HPCs is not present in the differentiation process of neural organoids [65]. Separate differentiation followed by fusion is an effective method for establishing microglia-containing immunized organoids [66]. The integration with neural organoids can involve either microglia progenitor cells or more mature microglia. The fusion of microglia progenitors with neural organoids may provide more opportunities for interaction with the nervous system during the development of microglia, thereby better simulating their developmental process within the neural system. On the other hand, terminally differentiated microglia might offer better control over the ratio of microglia to neural organoids. These specific methods and their analyses have been described in other excellent reviews [67]. The initial step in generating microglia from iPSCs involves inducing their differentiation into HPCs, the precursors of microglia. Several protocols are available for this differentiation process, with some requiring hypoxic conditions [13] and others utilizing EBs [68]. Both approaches can effectively produce high-quality HPCs, which can subsequently be induced to form microglia. These methods have also been reported to integrate successfully with brain organoids [21,66]. Among these methods, the EB-based strategy is particularly recognized for its ease of standardization and is also employed by commercial kits. For EB formation, iPSCs need to be digested into single cells and then resuspended in EB medium at a concentration of approximately 100,000 cells per milliliter. A 100-microliter cell suspension should be seeded into a low-attachment round-bottomed 96-well plate, resulting in a seeding density of 10,000 cells per
well. Cell aggregates are formed by spinning down the plate. Subsequently, the plate should be gently transferred to the incubator for further cultivation. Besides the basic culture medium, EB medium typically includes supplements such as the Rock inhibitor Y-27632, BMP-4, SCF, and VEGF to enhance survival and induce hematopoietic differentiation. This EB formulation is designed to mimic the natural environment of microglia production and is sometimes referred to as a yolk sac EB in the literature [66]. EBs will attach to the surface of culture plates, and the medium will be supplemented with M-CSF, IL-34, and TGF-β. After continuing the culture for about 2-3 weeks, precursors of microglia will appear in the supernatant. This process can continue for several weeks, and microglia precursors can be collected from the supernatant when changing the culture medium [13]. These collected precursor cells can be co-cultured with organoids, allowing the precursors to further differentiate and mature into microglia in the neural microenvironment.
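As a quick sanity check, the seeding arithmetic in the protocol above can be sketched in a few lines of Python. The cell concentration (100,000 cells/mL), seeded volume (100 µL per well), and supplement names come directly from the text; the function and variable names are illustrative assumptions, not part of any published kit.

```python
# Illustrative sketch of the EB seeding arithmetic described above.
# Numbers and supplements are from the protocol text; names are hypothetical.

EB_MEDIUM_SUPPLEMENTS = ["Y-27632 (Rock inhibitor)", "BMP-4", "SCF", "VEGF"]
MATURATION_SUPPLEMENTS = ["M-CSF", "IL-34", "TGF-beta"]  # added after EB attachment

def cells_per_well(cells_per_ml: float, volume_ul: float) -> int:
    """Number of cells seeded per well of a 96-well plate."""
    return round(cells_per_ml * volume_ul / 1000.0)  # 1000 uL per mL

# 100,000 cells/mL resuspension, 100 uL seeded per well -> 10,000 cells/well
print(cells_per_well(100_000, 100))  # -> 10000
```

The same helper makes it easy to see how a different resuspension concentration or seeded volume would change the per-well density.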
iPSC-derived brain organoids start with neural induction. In this step, iPSCs are dissociated into clumps and then allowed to form 3D structures in low-attachment cell culture plates. The neural induction stage requires about 9 days, during which the culture medium, based on iPSC medium, is supplemented with the SMAD inhibitors SB-431542 and dorsomorphin, as well as Y-27632 to promote cell survival. After the neural induction phase, starting from day 9, the medium is switched to a neural differentiation medium composed of Neurobasal media supplemented with B27, N2, FGF, and EGF, among others. Neural differentiation continues until day 25. From day 25 onward, growth factors like FGF and EGF should be omitted from the culture medium. Subsequently, the medium needs to be refreshed every four days to sustain the culture's health and progression. The length of time for maintaining the culture beyond this point will vary, tailored to the specific research objectives and the neural functions of interest [50,69]. The process of differentiating iPSCs into brain organoids generally spans approximately 35 to 45 days, and the development of advanced neuronal function within these organoids may require additional time. Determining the optimal moment to incorporate microglia progenitors into brain organoids is a crucial consideration. The appropriate timing for this integration can vary, largely depending on the specific objectives of the scientific studies. Research suggests that, to maximize microglia viability, the best window for integrating microglia progenitors might be during the later phases of neural differentiation, typically around days 35 to 42 [21]. Following the integration of microglia progenitors with organoids, another critical challenge is the selection of a suitable culture medium that supports the coexistence and growth of both microglia and neurons within the immunized brain organoids. Experimental evidence suggests that the neural microenvironment offered by
brain organoids alone is inadequate for microglia development. Therefore, to ensure the survival of microglia incorporated into the organoids, it is essential to supplement the standard brain organoid culture medium with additional cytokines, such as M-CSF, IL-34, and TGF-β [21].

Regionally Specific Immunized Organoids

Immune cells residing in various tissues often exhibit unique biological characteristics. However, limited research has explored whether microglia in different regions of the central nervous system possess distinct traits. This knowledge gap may be attributed to the challenges associated with obtaining samples from the central nervous system.

Several studies have initiated investigations in this area. A study that differentiated iPSCs into two distinct regions of the cerebral cortex, the dorsal and ventral forebrain, revealed that the integration of microglia into different brain regions led to diverse outcomes, influencing their migration capabilities and responses. When exposed to amyloid-beta 42 (Aβ42) stimulation, microglia integrated with the dorsal forebrain demonstrated an increased secretion of anti-inflammatory factors. In contrast, those integrated with the ventral forebrain showed heightened production of pro-inflammatory cytokines [70]. Although further research is needed to enhance the stability and accuracy of differentiation in specific brain regions, this study offers new insights and approaches for exploring microglial functions across different anatomical areas within the central nervous system.
The midbrain is a region vulnerable in various neurological diseases, with PD being the most prominent among them. A critical feature of PD is the loss of dopaminergic neurons in the substantia nigra of the midbrain [71]. Abnormalities in microglial function are frequently linked to the onset of the disease [72]. Animal experiments have validated that microglia in the midbrain exhibit distinct characteristics. For instance, distinct from the cortex, hippocampus, and striatum, the midbrain is noted for containing two unique populations of microglia: one group exhibits high levels of MHC-II expression, while the other prominently expresses TLR4. These microglia demonstrate an elevated responsiveness to inflammatory signals. However, within the midbrain environment, there is a tendency to observe an immunosuppressive response, notably marked by a decrease in MHC-II expression and an increase in anti-inflammatory mediators such as IL-10 and TGF-β [73]. The underlying mechanisms of this phenomenon represent an intriguing field of study. Research into the relationship between microglia and the human midbrain nevertheless remains relatively limited, especially within the context of PD. The integration of microglia within midbrain organoids has the potential to fill this gap. Initial studies have found that microglia in midbrain organoids can influence the expression of genes associated with synaptic remodeling and enhance neuronal excitability [74]. Subsequent research in this area will likely aid in explaining the mechanisms underlying midbrain-related diseases and exploring potential therapeutic approaches.
The blood-brain barrier (BBB) serves as a vital interface between the central nervous system and the circulatory system, playing a key role in maintaining brain homeostasis and protecting neural tissue from harmful substances and pathogens. Adjacent to this barrier lies a unique set of non-parenchymal, tissue-resident macrophages. These cells are located within the perivascular spaces and are termed border-associated macrophages (BAMs) [75]. BAMs are distinct from the parenchymal microglia and are crucial for maintaining brain health and immune surveillance [76]. Considering that conventional brain organoids lack a functional vascular network, the creation and integration of vascularized brain organoids to simulate immune functions offer a promising methodology for the exploration of BAMs [77]. Research that separately cultivates vessel and brain organoids before combining them to form vascularized brain organoids has shown that macrophages can form around the vasculature. In these integrated organoids, BAMs not only exhibit specific macrophage markers and react to immune stimuli such as LPS, but they also demonstrate the ability to engulf synapses [78]. This significant finding opens new pathways for studying how the brain's vascular and immune systems interact, enhancing our understanding of their roles in brain health and disease. However, questions remain regarding the authenticity and functional similarity of the generated BAMs to their in vivo counterparts, their distinctions from microglia, and the necessity of incorporating iPSC-derived macrophages for a more accurate representation of the brain's immune environment.
Alzheimer's Disease

The study of AD pathologies in microglia-containing organoids has been advanced through a variety of strategies. These include the use of isogenic organoids, patient-derived organoids, and organoids induced with Aβ treatment. This diverse methodology enriches our comprehension of the role of microglia in AD pathogenesis (Figure 1). Subsequent research has expanded on these foundational models, offering deeper insights into the genetic and pathological aspects of AD.

Figure 1. Induced pluripotent stem cell (iPSC)-derived microglia immunized brain organoids in neurodegenerative disease studies. iPSC-derived microglia (iMGs) were generated by first forming embryoid bodies (EBs) for Hematopoietic Progenitor Cell (HPC) generation from iPSCs, and then differentiating into iMGs. Meanwhile, iPSCs were induced into neuroepithelium differentiation, with self-assembly into cerebral organoids. Neuron-microglia interactions at 2D-3D levels were achieved by co-culturing iMGs with iPSC-derived neurons or cerebral organoids. To model neurodegenerative diseases using 2D co-cultures or 3D organoid cultures, mutations can be introduced into healthy control iPSCs or corrected in patient-derived iPSCs using gene-editing techniques, while maintaining the original genetic background of the iPSC line. Chemical induction methods, such as employing Aβ and p-tau to simulate Alzheimer's disease pathology, can be utilized to induce specific pathological conditions in both 2D co-cultures and 3D organoid models.

APOE4, a variant of the apolipoprotein E gene, is widely recognized as a significant genetic risk factor for AD. It plays a critical role in lipid transport and injury repair in the brain. Individuals carrying the APOE4 allele have a higher risk of developing AD, and the allele is associated with an earlier onset of the disease [79]. Neurons carrying the APOE4 allele showed a heightened number of synapses and an increased secretion of Aβ42 compared to their APOE3 counterparts. Furthermore, APOE4 microglia-like cells demonstrated distinctive morphological changes associated with a reduced capability for Aβ phagocytosis. These findings highlight the multifaceted influence of APOE4 across different brain cell types, with a particular emphasis on the altered function of microglia in Aβ clearance, underscoring their potential key role in the development of AD pathologies [80]. While GWAS has identified numerous AD-associated genes [81], there is limited direct evidence to conclusively confirm their contributions to the onset mechanisms of AD.

Utilizing CRISPR interference (CRISPRi) and CRISPR droplet sequencing techniques, specific AD-related genes, including TREM2, CD33, and SORL1, were selectively knocked out in the immune cells of Aβ-treated immunized neural organoids. While this genetic intervention did not significantly affect the microglial population size, it profoundly altered their functional attributes as microglia. The absence of TREM2 was found to potentially affect the clustering of microglia around Aβ peptides. Moreover, SORL1 was observed to play a role in maintaining a normal cholesterol turnover rate by promoting the expression of CYP46A1 [64]. TREM2 and SORL1 are both genes associated with APOE in microglia [82,83]. The role of APOE in AD has also been confirmed in a wider range of immunized organoids. Immunized organoids derived from the APOE4 genotype exhibit decreased autophagy levels, which could potentially contribute to the development of cerebral amyloid pathology [84].

Down syndrome (DS) is identified as one of the primary risk factors for AD. While not all individuals with DS are affected, many tend to develop AD as they age [85]. Based on organoids immunized with microglia derived from iPSCs, it has been observed that microglia from DS patients show an enhanced or excessively strong capability for synaptic pruning. Moreover, upon exposure to pathological tau proteins, DS microglia demonstrate an increased activation of type 1 interferon signaling pathways [86]. Targeting the interferon alpha/beta receptor (IFNAR) emerges as a promising approach to potentially improve the functionality of DS microglia and offer a therapeutic avenue for AD. Via immunized neural organoids, it has also been discovered that microglia-derived cardiolipin enhances the endocytosis of Aβ by astrocytes and by microglia themselves; a decline in cardiolipin levels with aging may therefore contribute to the pathogenesis of AD [87].

Parkinson's Disease and Autism Spectrum Disorder

Leucine-rich repeat kinase 2 (LRRK2) is a gene significantly associated with PD. Mutations in this gene, particularly the G2019S mutation in certain populations, represent one of the most common genetic causes of PD [88]. Compared to normal counterparts, astrocytes with the G2019S mutation exhibit reduced levels of MMP2 and TGFB1 [89]. Both MMP2 and MMP3 play a key role in degrading α-synuclein [90], whose abnormal accumulation is believed to be a significant contributing factor in PD [91]. TGF-β, on the other hand, is an important cytokine that negatively regulates inflammation mediated by microglia [92]. A decrease in TGF-β could potentially lead to excessive activation of microglia. Research has shown that microglia originating from iPSCs with LRRK2 mutations demonstrate similarities in gene expression and functionality to microglia found in patients with PD [93]. It has also been revealed that mutations in the LRRK2 gene within microglia can affect their motility and the expression of adhesion molecules [94]. However, there has not yet been a comprehensive study using neural organoids to explore in depth how LRRK2 mutations impact neural cells and their surrounding microenvironment in the context of PD.
Microglia in immunized brain organoids derived from Autism Spectrum Disorder (ASD) patients exhibit a more primed morphology and fewer resting states after being transplanted into immune-deficient mice, along with larger somas and increased primary process thickness. Both the microglia and the neural environment may contribute to this excessive inflammation. To confirm this, researchers introduced normal microglia into neural organoids derived from ASD patients and observed that the ASD neural environment induced similar morphological changes in the normal microglia, mirroring those observed in ASD-derived microglia [21]. This study not only reveals that alterations in the brain environment due to disease significantly contribute to the hyperactivation of ASD-associated microglia, but it also offers fresh perspectives on the interactions between neural cells, the neural microenvironment, and microglia in the development of neurological disorders.

Multiple Sclerosis

From an immunological perspective, multiple sclerosis (MS) is recognized as a chronic autoimmune disorder in which the immune system erroneously targets and damages the myelin sheath, the protective covering of neurons in the central nervous system, leading to disrupted neural communication and varied neurological symptoms. Although immune responses mediated by T cells and B cells are crucial in the development of MS [95], microglia could also play a pivotal role in the disease's onset [96]. Microglia might present self-antigens to T cells, triggering an immune response and prompting B cells to create antibodies against these antigens. Neurons opsonized by self-antibodies become ideal targets for microglial phagocytosis, leading to their increased susceptibility to immune-mediated destruction. Genetic mapping has revealed numerous MS susceptibility genes linked to microglia, highlighting their critical function in the pathogenesis of MS [97]. However, current research, mainly grounded in autopsies and animal studies, falls short
of accurately representing the true involvement of human microglia in MS [98]. The pathogenic factors of MS involve a complex interplay between genetic and environmental triggers, along with intricate cellular alterations in both neural and immune cells. Organoids prepared from iPSCs derived from MS patients showed fewer cell proliferation markers and stem cells compared to those from healthy donors. Additionally, organoids from MS patients exhibited a higher number of mature neurons and a reduced count of oligodendrocytes. These findings indicate abnormalities in the cells derived from neural stem cells, which may impact the progression of the disease [99]. Research employing brain organoids to study MS remains in its early stages. Studies incorporating immunized organoids to explore the roles of cells derived from hematopoietic stem cells, such as lymphocytes, microglia, or infiltrating macrophages, in MS's cellular and molecular mechanisms are lacking. Unlike AD and PD, the potential abnormalities in the microglia associated with MS may be more complex [16], potentially involving interactions with T cells and antibodies. This intricacy poses substantial challenges to developing immunized organoids that accurately simulate the immune microenvironment specific to MS.
Infectious Diseases

Based on immunized organoids, it was found that viral infections can damage neural circuit integrity by activating microglia-mediated synapse elimination, thereby manifesting a phenotype similar to neurodegenerative disorders [100]. Although research on viral infection in neural organoids has demonstrated direct damage to neural cells, it is important to note that the absence of immune cells in neural organoids limits the understanding of the immune response and neuroinflammation triggered by infection [101]. When the Zika virus infects immunized organoids, it leads to the activation of microglia, resulting in increased expression levels of pro-inflammatory cytokines such as IL-6, IL-1β, and TNF-α. Additionally, the expression of type 1 interferon receptors within microglia also rises, indicating an enhanced ability to respond to inflammatory signals. Furthermore, the phagocytic ability of microglia within immunized organoids significantly improves [66]. These findings strongly suggest that microglia within immunized organoids following Zika infection may possess an augmented, or possibly even an excessively increased, synaptic pruning capability. Another study reported similar findings, indicating that SARS-CoV-2 infection can undermine neural circuit integrity through the overactivation of microglia, leading to excessive synaptic pruning [100].

Challenges and Limitations of Immunized Brain Organoids

The integration of microglia into brain organoids has been accomplished effectively, and the resulting immunized organoids have exhibited notable potential in a wide range of disease contexts. Nevertheless, continuous challenges remain that hinder their progression and broader application in the fields of neurology and immunology.
The M1/M2 Paradigm in Brain Organoids with Integrated Microglia

The distinction between pro-inflammatory M1 cells and the more reparative M2 cells marks a standard framework for classifying peripheral macrophages. Similarly, this segregation has been observed in microglia residing in the central nervous system, showcasing comparable phenotypic divisions [102]. Notably, microglia integrated into neural organoids are capable of displaying M1 traits, such as the secretion of IFN-γ and phagocytic activity [13]. Moreover, microglia derived from iPSCs can replicate various M2 traits and have been implemented in disease research projects. For instance, iMGs with M2-like features may exhibit enhanced immunosuppressive effects, notably in inhibiting effector T cells and facilitating the induction of regulatory T cells [103]. However, the investigation into M2-type microglia within brain organoids, particularly with respect to neurological disorders, remains underexplored. Investigating the classical and alternative activation of microglia in immunized brain organoids provides a valuable opportunity not just to ascertain whether the M1 and M2 designations correspond to a continuum of functionalities or separate categories, but also to identify potential therapeutic targets for neuroinflammatory diseases.
Heterogeneity of iPSC-Derived Microglia

Advancements in high-resolution technologies, such as single-cell RNA sequencing (scRNA-seq), have led to the discovery of a broader spectrum of microglial subgroups [104]. These subgroups are challenging traditional binary classifications such as M1 or M2, simplistic distinctions like pro-inflammatory versus anti-inflammatory, or universally good versus bad microglia. Instead, the characterization of these subgroups is being reshaped, informed by a comprehensive analysis of factors including gene expression profiles, morphological features, and epigenetic markers [105]. Once microglia are removed from their native in vivo environment to an in vitro setting, their heterogeneity significantly decreases. This reduction also extends to microglia derived from iPSCs, which maintain lower levels of heterogeneity even when incorporated into brain organoids. However, this heterogeneity notably increases when immunized brain organoids are transplanted back into an animal model, highlighting the critical role of in vivo factors in preserving microglial heterogeneity [50]. Current research on immunized organoids implanted into animal models is limited, but these findings suggest that maintaining an in vivo environment may be crucial for preserving microglial heterogeneity. This is particularly relevant for research on neuroinflammatory diseases that uses immunized organoids to study the role of microglia. The in vivo context might significantly enhance the accuracy and relevance of such studies, highlighting the complex dynamics of microglial behavior and their contributions to disease pathogenesis.
Immune Cell Diversity in Brain Organoids

Another challenge in the development of immunized organoids is that the research primarily focuses on the incorporation of microglia. Although microglia represent the central immune cells in the central nervous system, recent studies have gradually unveiled the presence of additional types of immune cells within this system [106], most notably T cells [107]. In AD animal models, Aβ-specific Th1 and Th17 cells have been observed to exacerbate memory impairment and systemic inflammation, alongside an increase in microglia activation [108]. In contrast, Aβ-specific regulatory T cells have been demonstrated to inhibit the activation of microglia, thereby mitigating the accumulation of amyloid plaques [109]. These discoveries highlight the complex interplay between different immune cells within the central nervous system and underscore the potential for broader immunological approaches in brain organoid research. However, a considerable portion of this research relies on animal models, particularly studies involving mouse T cells and mouse microglia. The question remains whether the mechanisms observed in mice parallel those in humans, specifically regarding the interaction between human T cells and microglia within the human nervous system. This gap highlights the necessity for additional research and validation. Furthermore, the specific functions of T cells, particularly Th cells, may require the participation of microglia to effectively influence neurological cells or conditions. Therefore, immunized brain organoids could serve as a valuable platform for advancing research in this area. Nevertheless, the representation of immune cells other than microglia in these organoids remains strikingly limited at present. Expanding the diversity of immune cells within brain organoids would not only enhance our understanding of the regulatory mechanisms between adaptive and innate immune responses within the neural microenvironment, but also
promote the study of the complex interactions between the immune and nervous systems. This approach could significantly contribute to elucidating the roles and contributions of neuroinflammation in maintaining neural system stability and in the pathogenesis of neurological diseases.

Immaturity of iPSC-Derived Microglia

The maturation of microglia is dependent on signals provided by the central nervous system's microenvironment. These signals could originate from neurons or other cells; they might encompass cytokines or derive from direct intercellular contact. Microglia cells induced from iPSCs might face challenges related to insufficient maturation. While iMGs may display certain surface markers that are characteristic of mature microglia, current studies remain inadequate to affirm their functional maturity in comparison to the standards of fully developed microglia [13]. Reports indicate that specific microglial functions, such as the capability for phagocytosis, are diminished in iMGs compared to macrophages, suggesting a less mature state from a functional standpoint. Microglia integrated into brain organoids, in contrast to those cultured in two dimensions, exhibit an increased number of cellular processes, indicating a closer resemblance to the mature microglia found within the stable in vivo environment [21]. Furthermore, the in vivo environment provided by immunodeficient mice might also offer conducive conditions for promoting the maturation of human iMGs [110]. Microglia within immunized organoids transplanted into immunodeficient mice demonstrate a greater degree of developmental maturity, showcasing more extensively the branched, or ramified, state typical of mature microglia under homeostatic conditions [21]. Nevertheless, the more sophisticated functions of mature microglia or macrophages, such as the ability to present antigens and activate T cells, currently lack comprehensive investigation within the context of immunized organoids.
Ethical Concerns in Relation to Immunized Brain Organoids

Immunized brain organoids represent a major breakthrough in biomedical research, providing unique opportunities to explore neurological diseases, brain development, and the interactions within the immune system in a detailed, three-dimensional setting. On one hand, iMGs offer a novel source for microglia research, circumventing the ethical dilemmas tied to acquiring microglia from post-mortem specimens or aborted fetuses. Nonetheless, the utilization of immunized brain organoids introduces several ethical challenges. The creation of organoids largely depends on human iPSCs, which originate from adult tissue. The source of these cells can spark ethical controversies. Issues such as informed consent, privacy protection, and the risks of commercial misuse are of paramount importance. As microglia contribute to making brain organoids more advanced in terms of neural signaling and functionalities, there emerges a possibility that these organoids could exhibit aspects of human consciousness. Even though there is no current evidence to support the presence of human consciousness in these structures, it is crucial to proactively discuss the ethical boundaries concerning the manipulation of such human-cell-derived tissues [111]. Furthermore, a significant ethical dilemma emerges when immunized brain organoids are transplanted into animals, resulting in part-human chimeras. The ethical implications of how such chimeric animals should be treated necessitate thoughtful and proactive discussion [112].
Concluding Remarks

The difficulties associated with obtaining primary human microglia, particularly from individuals across different ages and from patients with diverse diseases, have impeded progress in neuroimmune science. iMGs have become an important alternative, mitigating some of the sourcing challenges faced by researchers in this domain. Current research on immunized brain organoids primarily focuses on the study of neurological diseases. However, their application should be further expanded. For instance, the potential link between microglial abnormalities and the changes observed in the aged brain represents an intriguing and significant area of study. Should immunized organoids yield breakthroughs in understanding age-related alterations within the nervous system, and subsequently lead to the development of strategies to decelerate brain aging in humans, it would signify a significant advancement. Immunized brain organoids serve as an innovative platform for neuroimmunological research, yet this technology is still in a nascent phase, with experimental methods lacking uniformity, standardization, and quality control. There is an urgent need for additional researchers to refine this technology and extend its applications to a broader range of research fields, thereby realizing its full potential in diverse scientific studies.
Figure 1. Induced pluripotent stem cell (iPSC)-derived microglia immunized brain organoids in neurodegenerative disease studies. iPSC-derived microglia (iMGs) were generated by first forming embryoid bodies (EBs) for hematopoietic progenitor cell (HPC) generation from iPSCs, and then differentiating into iMGs. Meanwhile, iPSCs were induced into neuroepithelium differentiation, with self-assembly into cerebral organoids. Neuron-microglia interactions at 2D-3D levels were achieved by co-culturing iMGs with iPSC-derived neurons or cerebral organoids. To model neurodegenerative diseases using 2D co-cultures or 3D organoid cultures, mutations can be introduced into healthy control iPSCs or corrected in patient-derived iPSCs using gene-editing techniques, while maintaining the original genetic background of the iPSC line. Chemical induction methods, such as employing Aβ and p-tau to simulate Alzheimer's disease pathology, can be utilized to induce specific pathological conditions in both 2D co-cultures and 3D organoid models.
Phycobiliproteins—A Family of Algae-Derived Biliproteins: Productions, Characterization and Pharmaceutical Potentials

Phycobiliproteins (PBPs) are colored and water-soluble biliproteins found in cyanobacteria, rhodophytes, cryptomonads and cyanelles. They are divided into three main types: allophycocyanin, phycocyanin and phycoerythrin, according to their spectral properties. There are two methods for PBP preparation. One is the extraction and purification of native PBPs from Cyanobacteria, Cryptophyta and Rhodophyta, and the other is the production of recombinant PBPs by heterologous hosts. Apart from their function as light-harvesting antennas in photosynthesis, PBPs can be used as food colorants, nutraceuticals and fluorescent probes in immunofluorescence analysis. An increasing number of reports have revealed their pharmaceutical potentials such as antioxidant, anti-tumor, anti-inflammatory and antidiabetic effects. The advances in PBP biogenesis make it feasible to construct novel PBPs with various activities and to produce recombinant PBPs by heterologous hosts at low cost. In this review, we present a critical overview of the productions, characterization and pharmaceutical potentials of PBPs, and discuss the key issues and future perspectives on the exploration of these valuable proteins.

Introduction

Natural compounds derived from algae exhibit a wide variety of biological activities. Many algae have been used as food or food additives for many years. In east Asian countries where algae have been utilized in cuisine and medicine, a lower incidence of chronic diseases, such as hyperlipidemia, coronary heart disease, diabetes and cancer, is observed compared to Western countries [1]. Algae are rich in sugars/fiber, proteins/peptides, lipids/fatty acids, minerals and vitamins. They are also abundant sources of secondary metabolites such as polysaccharides, sterols, tocopherols, terpenes, polyphenols, phycobilins and phycobiliproteins (PBPs).
These compounds have been shown to possess antioxidant, anticancer, anti-inflammatory, antihypertensive, anti-hyperlipidemia, immunomodulatory, neuroprotective, antiviral and antimicrobial activities [2][3][4]. Among these bioactive compounds, PBPs have received much attention in the past few decades. PBPs are a family of colored and water-soluble biliproteins found in cyanobacteria, rhodophytes, cryptomonads and cyanelles [5]. They function as major light-harvesting antennas for absorption of light energy and transfer it into reaction centers of photosystems. Algae contribute to over 90% of the primary production in the oceans and play critical roles in the oceanic food chains and global carbon cycle. Their survival and flourishment in different habitats depend largely on solar radiation. Photosynthetically active radiation ranging from 400 nm to 700 nm is captured by PBPs and converted into chemical energy to support cell metabolisms. For efficient light absorption, several types of PBPs with distinct absorption spectra have evolved. PBPs have been developed as fluorescent probes in immunofluorescence assay due to their high extinction coefficient and fluorescence yield [6]. They are also used as natural colorants and food additives in foods such as chewing gum and ice cream. Each subunit of PBPs carries one to three linear tetrapyrrole chromophores (phycobilins) at specific cysteine residues. The absorption spectra of PBPs are largely determined by the types of phycobilins, which include phycocyanobilin (PCB, λmax = 640 nm), phycoerythrobilin (PEB, λmax = 550 nm), phycourobilin (PUB, λmax = 490 nm), and phycoviolobilin (PVB, λmax = 590 nm).
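The four phycobilins and their absorption maxima listed above can be captured in a small lookup table. The helper below is only an illustrative sketch: the constant and function names, and the nearest-maximum heuristic, are assumptions for demonstration, not an analytical method from the cited literature.

```python
# Absorption maxima (nm) of the four phycobilins named in the text.
PHYCOBILIN_LAMBDA_MAX = {
    "PCB": 640,  # phycocyanobilin
    "PEB": 550,  # phycoerythrobilin
    "PUB": 490,  # phycourobilin
    "PVB": 590,  # phycoviolobilin
}


def nearest_phycobilin(wavelength_nm: float) -> str:
    """Return the phycobilin whose absorption maximum lies closest to
    the given wavelength (a toy comparison over the table above)."""
    return min(
        PHYCOBILIN_LAMBDA_MAX,
        key=lambda bilin: abs(PHYCOBILIN_LAMBDA_MAX[bilin] - wavelength_nm),
    )
```

For example, a band near 630 nm sits closest to the PCB maximum, while one near 500 nm sits closest to PUB.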
Based on their spectral properties, PBPs can be classified into three main types: allophycocyanin (APC), phycocyanin (PC), and phycoerythrin (PE). PBPs can be further classified into subtypes according to their spectral properties. For example, PE is divided into R-PE, B-PE and C-PE; PC is divided into R-PC and C-PC. The prefixes to PBPs historically indicated their taxonomic source: C-, Cyanobacterial; B-, Bangiophycean; and R-, Rhodophytan, and later were used to denote spectral properties of PBPs [5].

Allophycocyanin

APC is located in the core of the phycobilisome and is found in all phycobiliprotein-containing organisms. There are three subtypes of APC: allophycocyanin (APC), allophycocyanin-B (APC-B), and allophycocyanin core-membrane linker (APC-Lcm). Among the three subtypes, APC is the dominant PBP in the core of the phycobilisome. APC assembles into core cylinders, while APC-B and APC-Lcm participate in the formation of two basal cylinders [9]. The α and β subunits of APC are highly conserved among different species and to a lesser extent between themselves [10]. Each α and β subunit binds a chromophore PCB through conserved cysteine residues α-84 and β-84. APC is typically isolated and purified as a trimer (αβ)3 and has an absorption maximum at 650 nm with a shoulder at 620 nm. However, when diluted to low concentrations, the trimeric APC dissociates into αβ monomers. As a result, the maximal absorbance of APC shifts from 650 nm to 618 nm. Correspondingly, the maximal fluorescence emission shifts from 660 nm to 643 nm [6]. In some cyanobacteria, the photosynthetic apparatuses can be remodeled in response to far-red light (FRL), facilitating efficient capturing of FRL for photosynthesis [11][12][13]. Recently, Li et al. presented a new phycobilisome-derived complex that consists only of allophycocyanin core subunits with red-shifted absorption peaks of 653 and 712 nm [14].
These red-shifted phycobiliprotein complexes were isolated from the chlorophyll f-containing cyanobacterium Halomicronema hongdechloris. It was demonstrated that the protein environment surrounding the pyrrole ring A of PCB on the APC alpha subunit is mostly responsible for the FRL absorbance. In addition, it was found that interactions between PCBs bound to alpha and beta subunits of adjacent protomers in trimeric APC complexes are responsible for a large bathochromic shift of about 20 nm and notable sharpening of the long-wavelength absorbance band [15].

Phycocyanin

PCs are found in almost all phycobiliprotein-containing organisms, including cyanobacteria, red algae, glaucophytes, and some cryptophytes. Based on their spectral properties, PC is divided into three subtypes: (1) C-PC (λmax ~615-620 nm), found exclusively in Cyanobacteria, (2) phycoerythrocyanin (PEC, λmax ~575 nm), only found in some Cyanobacteria, and (3) R-PC (λmax ~615 nm), mainly found in red algae [9]. PCs absorb light ranging from 580 nm to 630 nm and emit fluorescence with a maximum around 635-645 nm. In intact cells, PCs commonly exist as disk-shaped hexamers, and a rod-linker polypeptide or a rod-core-linker polypeptide is attached to the central cavity of the hexamers. In general, a C-PC molecule carries a PCB at α-84 and two PCBs at β-84 and β-155. Analysis of energy transfer between chromophores has demonstrated that the α-84 PCB and β-155 PCB in C-PC act as the excitation energy transfer donors and the β-84 PCB as the terminal acceptor. However, in some algal species, one or two peripheral chromophores (α-84 PCB and/or β-155 PCB) in PC are replaced by PVB, PEB, or PUB for adaptation to environmental conditions rich in blue and green light [9,16,17]. When isolated from the phycobilisome, PC exists as a hexameric structure (αβ)6 at pH 5.0-6.0 and a trimeric structure (αβ)3 at pH 7.0 [18].
C-PC is among the most studied PBPs because of its various biological and pharmacological properties [4].

Phycoerythrin

PEs are the most abundant PBPs in many red algae and in some unicellular cyanobacteria [6]. Compared with APC and PC, PE carries more phycobilins and therefore absorbs more light on an equal molar basis. The phycobilins attached to apo-PBPs can be PEB and PUB. Phycobilin contents differ for cyanobacteria living in either freshwater and soil or marine environments. PEs from freshwater and soil Cyanobacteria typically carry only PEB chromophores and exhibit absorbance spectra with maxima around 565 nm [5]. PEs from marine unicellular Synechococcus sp. and Synechocystis sp. strains bind the PUB chromophores at specific cysteine residues. PEs strongly absorb light ranging from 480 nm to 570 nm and emit fluorescence from 575 to 580 nm. Based on the types of phycobilins and their spectral properties, PE is classified into three main subtypes: (1) B-phycoerythrin (B-PE) (λmax ~540-560 nm, shoulder ~495 nm), (2) R-phycoerythrin (R-PE) (λmax ~565, 545 and 495 nm), and (3) C-phycoerythrin (C-PE) (λmax ~563, 543 and ~492 nm). C-PEs are the most abundant PEs in cyanobacteria and can be further classified into two subtypes: C-PE-I and C-PE-II [5]. Typically, PE carries five chromophores, with extra phycobilins linked to α-143 and a PEB doubly linked to β-50 and β-61 [19]. PE α and β subunits assemble into trimers and form disk-shaped hexamers through face-to-face aggregation with the help of a special linker peptide (named the γ subunit).
In some cyanobacterial species, the chromophore composition of PE can be changed in response to the light quality [20]. This process is known as complementary chromatic adaptation and is proposed to be an important contributor to global primary productivity [21]. PEs exhibit quantum yields up to 0.98 and molar extinction coefficients of up to 2.4 × 10⁶. These exceptional spectral properties make PE a superior fluorescent probe over fluorescein and rhodamine, the most commonly used fluorescent dyes.

Biosynthesis of Phycobiliproteins

Biosynthesis of PBPs involves two processes: (1) expression of apo-PBPs and biosynthesis of phycobilins; (2) covalent attachment of phycobilins to apo-PBPs, which is mediated by various types of lyases. The phycobilins are derived from heme metabolism (Figure 2). Under the action of heme oxygenase (HO1), heme is cleaved at the α-methene bridge and converted to biliverdin IXα (BV). One or more double bonds of the outer rings of BV can be reduced by ferredoxin-dependent bilin reductases. The phycocyanobilin:ferredoxin oxidoreductase (PcyA) catalyzes the transfer of four electrons from ferredoxin to the double bonds of BV to form PCB. 15,16-Dihydrobiliverdin:ferredoxin oxidoreductase (PebA) transfers two electrons to biliverdin IXα to form 15,16-dihydrobiliverdin, which can then be converted to PEB via another two-electron reduction catalyzed by phycoerythrobilin:ferredoxin oxidoreductase (PebB). The formation of PVB and PUB is mediated by specific lyase-isomerases using bound PCB and PEB as substrates, respectively [22,23]. The attachment of phycobilins to apo-PBPs is mediated by autocatalytic or lyase-catalyzed binding. ApcE has lyase domains in its N-terminal region and can attach PCB autocatalytically to form native-like LCM [24].
For lyases, CpcE/CpcF is the first identified enzyme that catalyzes the attachment of PCB to α-84 of apo-CpcA [25]. It also catalyzes the addition of PEB to apo-CpcA, but with reduced affinity and kinetics compared with PCB. A number of lyases, including CpcS/CpcU [26], CpcT [27], MpeZ [23], MpeU [28], CpeF [29] and MpeV [30], have been identified in recent years. However, many lyases are yet to be discovered due to the complexity of the large PBP family.

Production of Phycobiliproteins

Many efforts have been made on the efficient production of PBPs. There are two methods for PBP preparation. The conventional way is the extraction and purification of native PBPs from Cyanobacteria, Cryptophyta and Rhodophyta [4]. The bioactivities of PBPs reported in the literature are based mainly on native PBPs. With the progress in recombinant DNA technology, complete pathways for PBPs have been successfully constructed in engineered heterologous hosts, and a number of PBPs have been produced in E. coli (Table 1). Compared with native PBPs, a single α or β subunit can be prepared more easily from engineered hosts and may offer some advantages in therapeutic use due to its small size. However, at present, less work is focused on the evaluation of recombinant PBPs, and much attention should be paid to recombinant PBPs in the future.

Native Phycobiliproteins

The main producers of PBPs are the cyanobacterium Arthrospira platensis (formerly Spirulina platensis) and the red microalga Porphyridium (Rhodophyta) [31]. Other algal species such as Synechococcus sp., Limnothrix sp. (Cyanobacteria) and Neopyropia yezoensis (formerly Porphyra yezoensis) (Rhodophyta) are also sources for PBP preparations. A. platensis can grow rapidly in open ponds under alkaline conditions, under which most algae hardly grow. In 2021, production of dry A. platensis cell mass in China was estimated at 10,000 tons.
Environmental factors, particularly light, nitrogen and carbon sources, affect algal growth and PBP production. In large-scale cultivations of cyanobacteria, poor penetration of light into deeper layers of water is the major problem that hampers high-cell-density cultivations. In open ponds, light is in excess of what is required for photosynthesis and algal cell growth in the surface layer, while in the deep layers, algal growth is limited by low light. The problem is partially solved by using well-designed photobioreactors, which provide equal light distribution in the cultures. Low and medium light is preferred to obtain higher production of PBPs. Nitrogen is another important element for the production of PBPs, which act as the main nitrogen storage in algal cells under stress conditions. The common nitrogen sources for algae cultivation are nitrate, ammonium and urea. However, the preferred nitrogen sources may vary from species to species. It has been shown that some species such as Phormidium sp. and Pseudoscillatoria sp. (Cyanobacteria) prefer ammonium, while A. platensis grows better when nitrate is added into the media [32]. Carbon source, along with other factors including temperature, pH and salinity, also influences the production of PBPs. Therefore, for a given algal species, it is necessary to optimize these factors to maximize PBP production. Methods for native PBP purification depend on the algal species and the types of PBPs. Since PBPs strongly absorb visible light, their purity can be determined by the ratio of the maximum absorbance to the absorbance at 280 nm. For C-PC, it is considered to be food grade, cosmetic grade, reactive grade and analytical grade when this ratio is >0.7, 1.5, 3.9 and 4.0, respectively [4]. In general, PBP purification involves two steps. The initial step is the extraction of PBPs from algal cells.
Different methods are adopted in practice to disrupt algal cells, including chemical treatment, physical treatment (freezing and thawing, grinding, high-pressure homogenization, ultrasonication, etc.) and enzymatic treatment (lysozyme digestion). Novel extraction techniques such as ultrasound-assisted extraction, microwave-assisted extraction, high-pressure processing, pulsed electric fields and supercritical fluid extraction have been developed in recent years [33]. A proper method should ensure high efficiency of cell disruption and an intact structure of PBPs at relatively low cost. Among these methods, freezing-thawing has been shown to be effective in the disruption of cyanobacteria [34,35]. The second step is the purification of PBPs from crude extracts by multiple separation processes, including ammonium sulfate precipitation, chromatography, membrane filtration or aqueous two-phase extraction. Ammonium sulfate precipitation is a simple process conducted in the initial purification stage to concentrate the PBP samples and remove most undesirable components. Generally, 20-30% ammonium sulfate precipitates the undesirable proteins, which can be removed by centrifugation. The supernatant is then treated with 60-70% ammonium sulfate, leading to precipitation of the PBPs. To further improve the purity, chromatographic separation is required. Ion exchange chromatography has been shown to be effective for PBP purification [36,37]. Elution with a gradient of ionic strength or a gradient of pH is efficient for PBP separation [38]. Other chromatographic techniques such as hydroxyapatite chromatography, hydrophobic chromatography and gel filtration chromatography are also adopted in PBP separation. Membrane filtration can concentrate PBP crude extracts and increase the purity of PBPs [39]. A critical factor is the selection of a proper membrane, whose cut-off value is suitable to reach the targeted purity index [4].
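As an illustration of the C-PC purity-index grading quoted earlier (food, cosmetic, reactive and analytical grade at ratios of >0.7, 1.5, 3.9 and 4.0), the rule can be sketched as below. The function name and the assumption that each quoted value is the lower bound of its grade are mine, not the cited source's.

```python
def cpc_purity_grade(a_max: float, a_280: float) -> str:
    """Grade C-phycocyanin purity from the A_max / A280 absorbance ratio.

    Thresholds follow the grades quoted in the text; treating each
    quoted ratio as the lower bound of its grade is an assumption.
    """
    if a_280 <= 0:
        raise ValueError("A280 must be positive")
    ratio = a_max / a_280
    if ratio >= 4.0:
        return "analytical"
    if ratio >= 3.9:
        return "reactive"
    if ratio >= 1.5:
        return "cosmetic"
    if ratio > 0.7:
        return "food"
    return "below food grade"
```

For example, a preparation with A620 = 2.0 and A280 = 1.0 (ratio 2.0) would fall in the cosmetic grade under this reading of the thresholds.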
Aqueous two-phase extraction is easy to scale up but less efficient compared to the conventional purification processes. This technique may be suitable for preparing a large amount of PBPs at low cost [40].

Recombinant Phycobiliproteins

Biosynthesis of PBPs in a heterologous host offers an efficient method for the production of recombinant PBPs. As early as the 1980s, genes coding for PC and APC were cloned, characterized and successfully expressed in E. coli or cyanobacteria. The expression of the PC α and β subunits was probably driven by a native promoter centered about 374 bp upstream from the translation start of cpcB. The apcA and apcB genes of Cyanophora paradoxa (Glaucophyta) were expressed in E. coli under control of their native promoter. Lau et al. [43] reported that the cpcA gene from Asterocapsa nidulans (formerly Anacystis nidulans) was expressed in E. coli under the control of the lacZ promoter of the pUC8 vector. The expression level of apo-CpcA was estimated to be between 0.5-1% of the total soluble proteins in E. coli cells. These early studies presented the successful expression of PBPs in E. coli and showed that recombinant PBPs were stable in this host. However, these PBPs were expressed in the apo-protein form. In a study by de Lorimier et al. [41], the apcA and apcB genes from Cyanophora paradoxa (Glaucophyta) were transferred to Synechococcus sp. PCC 7002 on a plasmid replicon. The results showed that PBPs isolated from the transformed cells contained C. paradoxa APC subunits, which covalently carried a chromophore and were incorporated into the light-harvesting apparatus. In a milestone work reported by Tooley et al. [44], the entire pathway for the biosynthesis of holo-CpcA from the cyanobacterium Synechocystis sp. PCC 6803 was reconstituted in E. coli. The cyanobacterial genes responsible for PCB biosynthesis from heme were expressed from a plasmid under control of the hybrid trp-lac promoter.
The genes coding for the lyases (CpcE and CpcF) and CpcA were co-expressed from a second plasmid harboring a tra promoter. The recombinant E. coli cells produced holo-CpcA with spectroscopic properties similar to those of the same protein isolated from cyanobacteria. In another work, Tooley et al. showed that the phycoerythrocyanin holo-α subunit could be produced in E. coli in a similar way [45]. From then on, a number of holo-PBPs were produced in recombinant E. coli cells. Guan et al. [60] reconstituted the pathway of holo-CpcA in E. coli by using one expression vector. In this work, an expression vector containing five essential genes for holo-CpcA biosynthesis was constructed. In the expression vector, the genes HO1 and pcyA were designed as an operon and inserted into the second cassette of the plasmid, while cpcA together with cpcE and cpcF were designed as another operon and inserted into the first cassette of the plasmid. The "one expression vector" strategy may offer better plasmid stability compared to the "multiple expression vectors" strategy, which is important to maintain the full holo-CpcA pathway in E. coli. In addition, selection by only one antibiotic is cost-saving, especially for large-scale cultivations. These recombinant PBPs can be easily purified by using immobilized metal affinity chromatography, due to the addition of a 6×His tag to the N-termini of the recombinant PBPs. In E. coli, the PBP lyases are less specific to phycobilins, as they can catalyze attachment of noncognate phycobilins to apo-PBPs. Namely, PBPs carrying noncognate phycobilins can be produced in recombinant E. coli cells. In a work by Alvey et al. [55], cpcA from Synechocystis sp. PCC 6803 and Synechococcus sp. PCC 7002 was coexpressed with cpcE/cpcF from Synechocystis sp. PCC 6803 or pecE/pecF from Nostoc sp. PCC 7120. Both lyases were capable of attaching three different phycobilins (PCB, PEB and PVB) to CpcA.
Therefore, six different CpcA variants, each with a unique phycobilin, could be produced in E. coli cells. In our previous work [51], apcA from the thermophilic cyanobacterium Thermosynechococcus vestitus BP-1 (formerly Thermosynechococcus elongatus BP-1), together with cpcS, HO1 and pebS, was co-expressed in E. coli. Holo-ApcA carrying PEB was successfully produced, which showed a distinct spectral property from native holo-ApcA. PBPs are regarded as potential photosensitizers in cancer therapy [61]. The ability to produce unnatural PBPs in E. coli would expand the types of PBPs and contribute to the exploration of novel photosensitizers. The main shortcoming of recombinant holo-PBPs produced in E. coli is the inefficiency of chromophorylation. Tooley et al. [44] showed that about a third of the apo-CpcA was converted to holo-CpcA in E. coli. Ge et al. reported a fraction of 81.4% for HT-ApcB produced in E. coli [49]. Biswas et al. [46] presented a systematic study on the chromophorylation efficiency and specificity of all bilin lyases from Synechococcus sp. strain PCC 7002. The recombinant holo-proteins included HT-CpcA, HT-CpcB, HT-ApcA/ApcB, HT-ApcD, HT-ApcF and GST-ApcE. The percentage of chromophorylation for these holo-proteins ranged from 17.4% to 71.9%, indicating inefficient chromophorylation for the proteins reconstituted in E. coli. Since the addition of aminolaevulinic acid or iron did not improve production of PCB and holo-protein [45,62], it is unlikely that chromophorylation is limited by heme availability. Instead, the incomplete chromophorylation might be due to unfavorable codon usage or to the aggregation of the recombinant proteins into insoluble inclusion bodies [44]. In our previous work, the codons of cpcS were optimized for E. coli, leading to an increase in the expression level of CpcS and improvement of the chromophorylation of the recombinant PBP.
In addition, we showed that plasmid stability is also an important factor limiting the efficient chromophorylation of recombinant PBPs [52]. Compared with multiple plasmids, a single plasmid carrying the entire PBP pathway is preferred, which would contribute to the maintenance of the entire heterologous pathway during cell divisions and avoid too many antibiotic selection markers. At present, recombinant PBPs are expressed mostly in E. coli (Table 1). Fewer studies are performed in eukaryotic hosts. Industrial microbes such as Saccharomyces cerevisiae and Pichia pastoris would be good candidates for the production of PBPs. The vast progress in genetic manipulation approaches such as CRISPR makes the construction of a stable PBP pathway in these hosts more convenient and would promote the mass production of these valuable PBPs.

Pharmaceutical Potentials

Native PBPs have been utilized as food additives, natural colorants and fluorescent probes for tens of years. Much work has been carried out to evaluate their pharmaceutical potentials. PBPs exhibit various bioactivities, such as antioxidant, anti-tumor, neuroprotective and hepatoprotective properties, and could be developed as photosensitizers in tumor therapy (Figure 3).
The antioxidant activity of PBPs was first demonstrated by in vitro and in vivo assays by Romay et al. [63]. C-PC from Arthrospira maxima was able to scavenge alkoxy radicals (RO•, IC50 = 76 mg/mL) and hydroxyl radicals (OH•, IC50 = 0.91 mg/mL). C-PC also inhibited luminol-enhanced chemiluminescence from zymosan-activated human polymorphonuclear leukocytes, microsomal lipid peroxidation (IC50 = 12 mg/mL) induced by Fe3+-ascorbic acid, and glucose-oxidase-induced inflammation in mouse paw, a model of inflammatory response in which peroxide and hydroxyl radicals are involved. A further study showed that C-PC exhibited anti-inflammatory activity in four experimental models of inflammation; this anti-inflammatory activity could be, at least partially, explained by the antioxidative and oxygen free radical scavenging properties [64]. C-PC from A. platensis effectively inhibited CCl4-induced lipid peroxidation in rat liver in vivo [65]. Both native and reduced PC significantly inhibited peroxyl radical-induced lipid peroxidation in rat liver microsomes, and the inhibition was concentration-dependent with IC50 values of 11.35 and 12.7 mM, respectively. The results suggested that the covalently linked PCB is involved in the antioxidant and radical scavenging activity of PC. Sonani et al. [66] described the in vitro antioxidant activity of three major PBPs isolated from the marine cyanobacterium Lyngbya sp. A09DM. The results showed significant and dose-dependent antioxidant activities of all PBPs in the order PE > PC > APC. The potential application of PE as an antioxidant needs deeper investigation in the future.
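The IC50 values quoted above are read off dose-response curves. As a rough illustration of how such a value is obtained (the data below and the simple log-linear interpolation are hypothetical, not taken from any of the cited assays, which typically use a full logistic fit):

```python
import math

def ic50(concs, inhibitions):
    """Estimate the half-inhibitory concentration by log-linear
    interpolation between the two measurements bracketing 50 % inhibition.

    concs       -- concentrations (any unit), in ascending order
    inhibitions -- fractional inhibition (0..1) at each concentration
    """
    points = list(zip(concs, inhibitions))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 <= 0.5 <= i2:
            f = (0.5 - i1) / (i2 - i1)  # position between the two points
            return 10 ** (math.log10(c1) + f * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50 % inhibition is not bracketed by the data")

# Hypothetical dose-response data for a radical-scavenging assay (mg/mL)
concs = [1, 3, 10, 30, 100]
inhib = [0.05, 0.18, 0.42, 0.71, 0.92]
print(round(ic50(concs, inhib), 1))  # ~13.5 mg/mL
```

Interpolating on a log-concentration axis matches the sigmoidal shape that dose-response data usually show when plotted against log concentration.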
Direct evidence for the involvement of phycobilins in antioxidant activity was obtained with PCB prepared from PBPs. When the concentrations of PC and PCB prepared from A. platensis were equal on a phycobilin basis, the antioxidant activity of PCB was almost the same as that of PC in the AAPH-containing reaction mixture, indicating that PCB accounted for the majority of the antioxidant activity [67]. PC and PCB also exhibited antioxidant activity against peroxynitrite (ONOO−). Scavenging of ONOO− by PC and PCB was established by examining their interactions. The relative antioxidant ratio and IC50 values indicated that PC is a more efficient ONOO− scavenger than PCB [68]. PC and PCB derived from Aphanizomenon flos-aquae were shown to have similar activities against peroxyl radicals, and higher activities than well-known antioxidants such as Trolox, ascorbic acid and reduced glutathione [69]. These findings indicate that PCB is mostly responsible for the antioxidant activity of PC. Recently, phycobilins including PCB, PEB, PUB and PVB were reported to be potent phytochemical inhibitors of the Mpro and PLpro proteases of SARS-CoV-2 [70]. The structure of phycobilin is similar to that of bilirubin, a physiological and chain-breaking antioxidant [71]. This similarity may explain the antioxidant activities of PBPs. However, a number of studies have demonstrated that apo-PBPs, which do not carry phycobilin chromophores, are able to quench ROS. Ge et al.
[72] expressed the α subunit (6×His-apo-ApcA), β subunit (6×His-apo-ApcB) and 6×His-apo-ApcAB from Asterocapsa nidulans (formerly Anacystis nidulans) UTEX 625 in E. coli. The results showed that recombinant apo-ApcA and apo-ApcB had stronger antioxidant activities than native APC and apo-ApcAB. It was proposed that the combination of the α and β subunits buried the active domain within the subunit and thus reduced the antioxidant activities. In another study, recombinant apo-ApcA and holo-ApcA from Synechocystis sp. PCC6803 were produced in E. coli [73]. Like native APC, both proteins exhibited hydroxyl radical and peroxyl radical scavenging activities, in the descending order recombinant holo-ApcA > recombinant apo-ApcA > native APC. However, Pleonsil et al. [74] showed that the antioxidant ability of apo-CpcB from A. platensis was much lower than that of native PC. Nevertheless, the antioxidant activities of apo-PBPs indicate that some active sites beyond the phycobilin are located on the protein. These active sites may be related to sulfur-containing amino acids [75]. It has been shown that cysteine and methionine residues, especially when located on the protein surface, play important roles in protecting the cell from oxidative damage through their thiol functional groups [76]. Apo-CpcA contains four cysteine and six methionine residues, and these residues may contribute to the antioxidant activity of the apo-protein [74]. Alternatively, PBPs are believed to chelate and reduce the ferrous ion efficiently, implying the combined involvement of both the electron-donating and metal-ion-chelating abilities of the PBP-constituting amino acids in the antioxidant activity [67]. For example, amino acids with hydrophobic side chains are good proton donors and metal-ion chelators. Similarly, acidic, basic and aromatic amino acids are supposed to sequester metal ions.
It is hypothesized that the antioxidant action of PBPs differs depending on the mechanisms associated with the side chains of the various constituent amino acids [67]. The antioxidant effects are affected by several factors, such as light, pH and denaturing agents. When exposed to light, PC produced hydroxyl radicals, whereas in the dark PC scavenged hydroxyl radicals. An increase in pH above 7.0, or denaturation of PC by sodium dodecyl sulfate or urea, led to the loss of the ability to produce hydroxyl radicals and, concurrently, to an increase in antioxidant capacity [77]. Moreover, trypsin-digested fragments of apo-PC exhibited antioxidant activities. These data indicate that some active sites are buried in the native conformation and show antioxidant activities when exposed on the surface of apo-proteins.

Anti-Tumor Effects
Cancer is one of the main causes of death in the world. At the cellular level, cancer cells are characterized by indefinite proliferation, disabled apoptosis and invasive growth. Therefore, cancer therapy can be achieved through inhibition of tumor cell proliferation, induction of tumor cell apoptosis and cell cycle arrest, and limitation of tumor cell migration. Increasing evidence has confirmed the inhibitory effects of PBPs on different types of cancer, including breast cancer, liver cancer, lung cancer, colon cancer, leukemia and bone marrow cancer. The effective dosages of PBPs against cancer may differ depending on the tumor cell line [78][79][80][81]. Notably, high dosages of PBPs did not induce significant adverse effects or mortality in animal experiments [82,83]. Regulation of the cell cycle is critical for cell proliferation, differentiation and apoptosis, and the development of cancer is closely associated with dysfunction of cell cycle regulation [84]. PBPs can affect the cell cycle, causing cell cycle arrest. Liu et al. [85] reported the inhibitory effect of C-PC from A.
platensis on the growth of human chronic myelogenous leukemia blast-crisis K562 cells. These cells were blocked from progressing through S phase and arrested in the G1 phase. Treatment of HT-29 and A549 cells with C-PC from Oscillatoria tenuis led to a decrease in the G2/M-phase population compared with the control, while the percentages of cells in the S and G0/G1 phases increased. Flow cytometry analysis also revealed the effect of C-PC on the accumulation of cells in the G0/G1 phases. These findings indicated that C-PC caused cell cycle arrest in the G0/G1 phases in HT-29 and A549 cancer cells. Jiang et al. [86] constructed tumor-targeted C-PC/CMC-CD59sp nanoparticles. These nanoparticles, composed of carboxymethyl chitosan (CMC), C-PC and a CD59-specific ligand peptide (CD59sp), were found to induce G0/G1 cell cycle arrest and inhibit proliferation in cervical cancer HeLa and SiHa cells. In vivo experiments showed that the cell cycle was regulated via up-regulation of p21 expression and down-regulation of Cyclin D1 and CDK4 expression [86]. Induction of tumor cell apoptosis is an important strategy to treat cancer. Early research showed that C-PC treatment of the human chronic myeloid leukemia cell line K562 led to the release of cytochrome c into the cytosol and poly(ADP-ribose) polymerase cleavage. The results also showed down-regulation of anti-apoptotic Bcl-2 without any change in pro-apoptotic Bax, thereby shifting the Bcl-2/Bax ratio towards apoptosis [87]. Li et al. [88] reported that the growth of HeLa cells was inhibited by C-PC treatment in a dose-dependent manner. Electron-microscopic examination showed that C-PC could induce characteristic apoptotic features, including cell shrinkage, membrane blebbing, microvilli loss, and chromatin margination and condensation into dense granules or blocks.
Agarose electrophoresis of genomic DNA of C-PC-treated HeLa cells showed a fragmentation pattern typical of apoptotic cells. Flow-cytometric analysis of HeLa cells treated with different concentrations of C-PC demonstrated an increasing percentage of cells in the sub-G0/G1 phase. Moreover, caspases 2, 3, 4, 6, 8, 9, and 10 were activated in C-PC-treated HeLa cells. These results indicated that C-PC down-regulated anti-apoptotic gene expression, activated pro-apoptotic gene expression and thereby facilitated the transduction of apoptosis signals [88]. Recently, Jiang et al. [84] found that C-PC effectively inhibited MDA-MB-231 cell proliferation, induced cell apoptosis and triggered G0/G1 cell cycle arrest. C-PC-mediated apoptosis was regulated by inhibition of the ERK pathway and activation of the JNK and p38 MAPK pathways. Recent reports showed that PBPs can inhibit epithelial-to-mesenchymal transition (EMT) [89,90]. C-PC inhibited EMT in human cervical cancer Caski cells by up-regulating E-cadherin expression and down-regulating N-cadherin expression. The expression of the EMT-related transcription factors Twist, Snail and Zeb1 was also down-regulated. The data revealed that C-PC reversed TGF-β1-induced EMT in cervical cancer cells, down-regulated the TGF-β/Smad signaling pathway and induced G0/G1 arrest of the tumor cell cycle [90]. Interestingly, recombinant subunits of PBPs, expressed in E. coli in their apo form, were also found to have anti-tumor activities. Treatment with recombinant apo-CpcB (rCpcB) inhibited the proliferation of four tumor cell lines, induced apoptosis, and led to the depolymerization of microtubules and actin filaments. rCpcB interacted with membrane-associated β-tubulin and glyceraldehyde-3-phosphate dehydrogenase (GAPDH). In the treated cells, caspase-3 and caspase-8 activities increased and the nuclear level of GAPDH decreased significantly.
This study indicated that the molecular mechanism of rCpcB-mediated tumor cell inhibition may differ from that of whole C-PC [91]. Recombinant apo-APC was also reported to inhibit H22 hepatoma in mice, with inhibition rates ranging from 36% to 62% at dosages from 6.25 to 50 mg/kg/day [49]. PBPs can be used in combination with chemotherapy drugs to improve safety and efficacy and to reduce the dosage of the single drug during cancer therapy [84]. Gantar et al. [92] revealed synergistic effects of C-PC from Limnothrix sp. 37-2-1 (Cyanobacteria) with topotecan (TPT) on the prostate cancer cell line LNCaP. When a low dosage of TPT was combined with C-PC, the cancer cells were killed at a higher rate than when TPT was used alone at full dose. The use of the two compounds together increased the level of reactive oxygen species, activated caspase-9 and caspase-3, induced apoptosis of tumor cells, and diminished the side effects of topotecan [92]. Yang et al. [93] reported that the combined use of ATRA and C-PC significantly reduced the dose and side effects of ATRA on HeLa cells. The combination therapy down-regulated the anti-apoptotic protein Bcl-2, up-regulated the expression of the pro-apoptotic protein Caspase-3, inhibited the expression of Cyclin D1, the cell-cycle-related kinase CDK-4 and the complement regulatory protein CD59, and induced HeLa cell apoptosis. Bingula et al. showed that when lung cancer A549 cells were treated with a combination of betaine and C-PC, an up to 60% decrease in viability was observed, which is significant compared with betaine (50%) or C-PC treatment alone. The combined treatment reduced the stimulation of NF-κB expression by TNF-α, increased the amount of pro-apoptotic p38 MAPK, and induced a cell cycle arrest in the G2/M phase for ~60% of cells. At present, PBPs are not used in clinical cancer treatment, possibly because their efficacy is not high enough.
In addition, the short in vivo half-life of PBPs limits their application as anti-cancer drugs. From the literature described above and recent reviews [4,72,84], it is obvious that anti-tumor activities and their underlying mechanisms have mostly been evaluated using C-PCs as the drugs. APC and PE are spectrally and somewhat structurally different from C-PC, and might therefore offer novel activities against tumors. In this respect, it is of significance to examine the actions of APC and PE in tumor therapy in future work.

Anti-Inflammatory Effects
Remirez et al. [94] reported that C-PC from Arthrospira maxima exhibited an anti-inflammatory effect in a zymosan-induced arthritis model in mice. C-PC reduced, in a dose-dependent manner, ear oedema induced by arachidonic acid and 12-O-tetradecanoylphorbol-13-acetate in mice, as well as carrageenan-induced rat paw oedema. These anti-inflammatory activities may be related to the antioxidative properties and to down-regulation of cytokine secretion and arachidonic acid metabolism. Later studies showed that PBPs exhibited anti-inflammatory activities in various models, such as glucose-oxidase-induced inflammation in mouse paw, carrageenan-induced rat paw edema, arachidonic acid- and tetradecanoylphorbol acetate-induced ear edema in mice, zymosan-induced experimental arthritis in mice, and acetic-acid-induced colitis in rats [95]. Isolated-enzyme and whole-blood assays indicated that C-PC from Spirulina platensis is a selective inhibitor of cyclooxygenase-2 (COX-2), which is up-regulated during inflammation. Reduced PC and PCB are poor, non-selective inhibitors of COX-2, implying that the apoprotein plays a key role in the selective inhibition of COX-2 [96]. In cyclophosphamide (CYP)-induced cystitis in mice, C-PC relieved symptoms by inhibiting bladder inflammation through COX-2 and EP4 expression [97].
Antidiabetic Effects
Diabetes mellitus is a metabolic disorder characterized by hyperglycemia and alterations in carbohydrate, fat and protein metabolism. In streptozotocin-induced type 2 diabetic rats, C-PE was found to ameliorate diabetic complications by reducing oxidative stress and oxidized low-density lipoprotein-triggered atherogenesis [98]. Administration of C-PE reduced food intake, organ weights, and serum concentrations of glucose and cholesterol, and increased body weight, total protein, bilirubin and the ferric-reducing ability of plasma. In addition, hepatic and renal tissues showed significant decreases in TBARS, lipid hydroperoxide and conjugated diene contents, with increases in superoxide dismutase, catalase, glutathione peroxidase, reduced glutathione, vitamin E and vitamin C levels [98]. Administration of PC significantly decreased the body weight, fasting plasma glucose, and 24 h random blood glucose levels, and suppressed the abnormal enlargement of islets observed in the pancreas of KKAy mice. It was proposed that the antidiabetic effect of C-PC in KKAy mice is related to its ability to improve insulin sensitivity, reduce insulin resistance of peripheral target tissues and regulate glucolipid metabolism [99]. In db/db mice, a rodent model of type 2 diabetes, oral administration of C-PC (300 mg kg−1 for 10 weeks) protected against albuminuria and renal mesangial expansion, and normalized urinary and renal oxidative stress markers and the expression of NAD(P)H oxidase components. Thus, oral administration of PC and PCB may offer a novel and feasible therapeutic approach against diabetic nephropathy [100]. A recent study suggested that the antidiabetic activity might be related to inhibition of α-amylase and α-glucosidase. An in silico analysis predicted the molecular interaction between PC and the α-amylase and α-glucosidase enzymes.
Molecular docking simulations indicated that PC inhibits the enzymes by binding to the active site and disrupting substrate-enzyme binding. PC seems to play a crucial role in establishing interactions within the cavity of the active sites of the two enzymes [101].

Neuroprotective and Hepatoprotective Effects
The neuroprotective role of PC was demonstrated in kainate-injured rat brains [102]. Oral administration of C-PC reduced the microglial and astroglial activation induced by kainic acid, indicating that some metabolites of this protein crossed the blood-brain barrier and exerted antioxidant effects in the hippocampus. It has been suggested that C-PC could be used to treat oxidative stress-induced neuronal injury in neurodegenerative diseases such as Alzheimer's and Parkinson's. In rat cerebellar granule cell cultures, PC showed a neuroprotective effect against 24 h of potassium and serum deprivation and prevented deprivation-induced apoptosis [103]. Pentón-Rol et al. [104] demonstrated that C-PC given either prophylactically or therapeutically was able to significantly reduce infarct volume. In addition, C-PC exhibited a protective effect against hippocampal neuronal cell death, prevented lipid peroxidation and increased the ferric-reducing ability of plasma in serum and brain homogenates. These findings suggest that C-PC may represent a potential preventive and acute disease-modifying pharmacological agent for stroke therapy. In SH-SY5Y neuronal cells, tert-butylhydroperoxide induced a significant reduction in cell viability, which was effectively prevented by treatment with C-PC in the low micromolar concentration range. C-PC displayed a strong inhibitory effect against an electrochemically generated Fenton reaction. It was concluded that C-PC is a potential neuroprotective agent against ischemic stroke, reducing neuronal oxidative injury and protecting mitochondria from impairment [105].
It seems that the neuroprotective effect of PC is related to its antioxidant properties, although its anti-inflammatory and immunomodulatory properties could also contribute [104]. Orally administered C-PE was reported to exhibit favorable effects on hepatocellular, hepatobiliary, kidney and redox biomarkers against CCl4-induced toxicity in rats [106]. It was concluded that orally administered C-PE could be broken down in the gastrointestinal tract by proteolytic enzymes into low-molecular-weight proteins and bilirubin, which then mediate the pharmacological effects. Ou et al. [107] reported that C-PC was effective in vitro and in vivo in protecting against CCl4-induced hepatocyte damage. A possible mechanism is that C-PC protects hepatocytes from the free radical damage induced by CCl4. C-PC may also be able to block inflammatory infiltration through its anti-inflammatory activities, by inhibiting TGF-β1 and HGF expression in CCl4-induced hepatic damage.

Immunomodulatory Effects
It is accepted that the capacity for immune regulation is key for the body's defense against various diseases, and the effects of PBPs against diseases could be attributed to their immunomodulatory properties. An early study found that the survival rate of tumor-bearing mice whose diet was supplemented with PC was significantly higher than that of untreated groups. This was consistent with the changes in lymphocyte activity in each group, indicating that PC had stimulating and promoting effects on the immune system [108]. Nemoto-Kawamura et al. [109] suggested that PC enhances biological defense activity against infectious diseases by sustaining the functions of the mucosal immune system, and reduces allergic inflammation by suppressing antigen-specific IgE antibody. Pentón-Rol et al.
[104] demonstrated that C-PC was able to prevent or downgrade experimental autoimmune encephalitis and induced a regulatory T cell response in peripheral blood mononuclear cells from multiple sclerosis patients. The immunomodulatory activity of PBPs may be associated with their antioxidant properties. Ivanova et al. [110] found that PC could stimulate the lymphocyte antioxidant defense system of occupationally exposed subjects. Lee et al. [111] showed that PBPs may protect cells from oxidative damage by regulating the body's immunity and increasing its ability to repair cell damage. However, many recent studies showed that the immune mechanism of PBPs is related to their anti-inflammatory activity at the cellular and even the genetic level. Chen et al. [112] showed that C-PC had the capability to induce secretion of inflammatory cytokines, including TNF-α, IL-1β and IL-6. Treatment with C-PC also increased pro-IL-1β and COX-2 protein expression in a dose-dependent manner and rapidly stimulated the phosphorylation of inflammation-related signaling molecules, including ERK, JNK, p38 and IκB. Grover et al. [113] showed that C-PC exhibited immunomodulatory activities by suppressing the synthesis of the pro-inflammatory cytokines interferon-γ (IFN-γ) and tumor necrosis factor-α (TNF-α) in a dose-dependent manner in Balb/c mice. Exposure of human mononuclear cells to PC leads to the generation of Treg cells, an effect similar to that mediated by HO1 induction. BV, the product of HO1, is structurally homologous to phycobilins. In animal cells, BV is rapidly reduced to bilirubin by biliverdin reductase. Interestingly, injection of bilirubin in mice induced Treg cell formation. It was thus proposed that phycobilins mimic the effects of biliverdin on Treg induction [114].
Photodynamic Therapy
Photodynamic therapy (PDT) is a therapeutic option for various types of cancers, such as skin, lung, oral and stomach tumors. Effective PDT leads to cancer cell damage and death by inducing ROS-mediated damage, vasculature damage, and immune defense activation. Compared with conventional chemotherapy, PDT selectively kills tumor cells without damaging normal cells. PDT can mediate cell death directly, through type I and type II reactions. In a type I reaction, triplet photosensitizers directly produce ROS, which are then transformed into O2−, OH− or H2O2 to kill cancer cells. In a type II reaction, the triplet photosensitizer reacts with triplet oxygen molecules (3O2) to produce singlet oxygen (1O2), which then oxidizes the substrate [61]. The photosensitizer is the critical element in PDT. An ideal photosensitizer is characterized by: (1) high affinity for tumor cells or selective accumulation in tumor tissues; (2) low dark cytotoxicity; (3) strong absorption of light in the 600-800 nm range; (4) a high quantum yield of ROS production; and (5) high light-dependent cytotoxicity [115,116]. Hematoporphyrin and porphyrin derivatives, which contain a tetrapyrrole group, are the commonly used photosensitizers in cancer treatment. Morcos et al. presented, for the first time, the potential application of PC as a cytotoxic photosensitizer [117]. Cytotoxicity was evaluated by measuring the viability of mouse myeloma cells in culture after incubation with PC (0.25 mg/mL) and irradiation with 300 J/cm2 at 514 nm. At 72 h post-treatment, the cells showed 15% viability, compared with 69% and 71% for control cells exposed to laser only or PC only, respectively [117].
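The light dose reported in such experiments (e.g., 300 J/cm2) is the product of irradiance and exposure time. A minimal sketch of this arithmetic, with a hypothetical irradiance value not taken from the cited study:

```python
def exposure_time_s(dose_j_per_cm2, irradiance_mw_per_cm2):
    """Exposure time needed to deliver a given light dose.

    Light dose (fluence) = irradiance x time, so
    time [s] = dose [J/cm^2] / irradiance [W/cm^2].
    """
    return dose_j_per_cm2 / (irradiance_mw_per_cm2 / 1000.0)

# Delivering a 300 J/cm2 dose at a hypothetical 100 mW/cm2 irradiance:
print(exposure_time_s(300, 100))  # 3000.0 s, i.e. 50 min
```

This is why high-fluence protocols require either long irradiation times or high-power laser sources.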
In C-PC-treated mice, the weight of immune organs, the proliferation of immunocytes, and the expression of pro-apoptotic Fas protein were increased, whereas the tumor weight and the expression of anti-apoptotic proteins (NF-κB and P53) and CD44 mRNA were decreased. When combined with He-Ne laser irradiation, the effects of C-PC treatment were enhanced [118]. Thus, the anti-tumor activity of C-PC-mediated PDT may be related to the facilitation of apoptosis signal transduction. The in vitro photodynamic effect of C-PC against breast cancer cells was demonstrated to be related to ROS production [119]. In the absence of light, C-PC did not exhibit any visible toxicity; under illumination at 625 nm, C-PC-mediated ROS killed MDA-MB-231 breast cancer cells in a dose-dependent manner. A recent work described a hybrid material obtained by conjugating C-PC to biosilica for the PDT of tumor-associated macrophages [120]. The conjugation enhanced the relatively weak photodynamic effect of C-PC, leading to high photodynamic activity under 620 nm laser irradiation. The enhanced activity might be attributed to the enrichment of the C-PC-biosilica hybrid on the surface of tumor-associated macrophages. R-PE and its subunits were reported to exhibit inhibitory effects on the mouse tumor cell line S180 and the human liver carcinoma cell line SMC 7721. Compared with the hexamer, the subunits of R-PE seemed to be more effective. An in vivo experiment showed that the survival rate of S180 cells decreased from 90% to 58% as the R-PE concentration increased from 10 mg/mL to 100 mg/mL. Over the same concentration range, the survival rate of S180 cells decreased from 75% to 44.6% with the α subunit, from 90.6% to 40.1% with the β subunit, and from 91% to 31% with the γ subunit. The β subunit not only exhibited a better PDT effect but also emitted more fluorescence, which can be used as a fluorescent marker for the detection of binding sites.
In addition, its smaller molecular size allows the β subunit to enter tumor cells more easily [121]. Similar to C-PC, APC covalently binds PCB as its chromophore. A laser photolysis and pulse radiolysis study of APC from A. platensis showed that 248 nm laser-flash photolysis generated the triplet state and radical cations of APC via monophotonic ionization. These findings indicate the potential use of APC as a type I and type II photosensitizer [122]. To date, no research is available on the evaluation of recombinant PBPs as photosensitizers. As stated in Section 3.2, the preparation of PBPs from recombinant E. coli is feasible. Notably, recombinant holo-subunits of PBPs have smaller sizes, facilitating their accumulation at tumor sites. In addition, protein engineering techniques make it possible to fuse PBPs to targeting entities, such as antibodies, to improve affinity for tumor cells. In the future, it will be important to assess the PDT effects of recombinant PBPs systematically.

Other Biological Activities
Other activities, including antiviral activity, intestinal flora modulation and wound healing stimulation, have been reported. APC inhibited the replication of enterovirus 71 and influenza virus cultured in vitro [123]. Shih and Chueh [124] confirmed that APC extracted from A. platensis exhibited anti-enterovirus 71 activity and pointed out that its antiviral mechanism is related to the inhibition of virus proliferation and a reduction in cell apoptosis through a reduced rate of viral RNA synthesis. Using high-throughput 16S rDNA sequencing, Qi et al. [125] examined the responses of the gut microbiota of H22-bearing mice to dietary supplementation with recombinant PE (RPE). The results showed that dietary RPE could modulate the gut microbiota of H22-bearing mice by increasing the abundance of beneficial intestinal bacteria and decreasing that of detrimental ones.
These findings provide evidence for the mechanism by which bioactive proteins affect intestinal nutrition and disease resistance in animals. Apart from the above-mentioned effects, C-PC was shown to stimulate wound healing through a urokinase-type plasminogen activator-dependent mechanism, although the detailed molecular mechanism is yet to be elucidated [126].

Conclusions and Future Perspectives
PBPs represent a large family of light-harvesting biliproteins found in cyanobacteria, cryptomonads and red algae. They have been used as food colorants, nutraceuticals and fluorescent probes in immunofluorescence analysis for many years. Increasing numbers of reports have described their various health-promoting features, demonstrating the pharmaceutical potential of these valuable proteins. It is expected that pharmaceutical applications of these proteins could be achieved in the coming decades. To this end, systematic work on PBP absorption, transport, metabolism, molecular targets and mechanisms of action needs to be carried out. The health-promoting effects of PBPs have mainly been tested for C-PC obtained from Spirulina. Other types of PBPs (APC and PE) and PBPs derived from other algal species, such as the cyanobacteria Anabaena marina, Aphanizomenon flos-aquae and Oscillatoria tenuis and the red algae Neopyropia yezoensis (formerly Porphyra yezoensis) and Porphyridium purpureum (formerly Porphyridium cruentum), also exhibit similar bioactivities. These proteins are structurally different from C-PC and may offer novel biological properties contributing to their applications. However, large-scale production of PBPs from these species has not been achieved, and the potential applications of these PBPs should be explored intensively. The production of recombinant PBPs by engineered microorganisms offers an attractive source of PBPs.
The processes of microorganism cultivation and PBP purification are easy to perform and scale up, making it possible to prepare large amounts of PBPs at low cost. The recombinant PBPs, mainly produced as a single subunit in either apo- or holo-form, are structurally distinct from the native ones. Interestingly, these proteins also exhibit bioactivities such as anti-tumor and anti-oxidant effects. The small size of recombinant PBPs may facilitate their absorption and transport to target tissues. In particular, the bioactivities of a PBP may be ascribed to its specific domains. It is intriguing to reconstitute and express these domains in heterologous hosts and evaluate their pharmaceutical potential. However, until recently, such work had scarcely been carried out, possibly due to obstacles in the preparation of recombinant proteins. A number of recombinant PBPs have been successfully prepared in our lab. We are willing to provide the plasmids and strains to support further exploration of these proteins.

Author Contributions: Conceptualization, H.C.; writing-original draft, H.C. and H.Q.; writing-review, editing, and revision, H.C., H.Q. and P.X. All authors have read and agreed to the published version of the manuscript.

Funding: This work was financially supported by start-up funding for scientific research from Shandong University of Technology (to H.C.) and the strategic priority research program of the Chinese Academy of Sciences under award number XDB42030302.
Design of a multifunctional nursing wheelchair

This paper presents a wheelchair with multi-function nursing capabilities. With a Raspberry Pi as the main control board, it provides functions such as assistance in getting in and out of bed, terminal monitoring, and intelligent risk avoidance. It also supports rehabilitation training for elderly users with stroke or similar conditions, to prevent muscle atrophy and other problems caused by a long-term inability to exercise independently. Finally, lightweight aluminum profile material is used for the main body to complete the prototype production and functional verification. The prototype test shows that this new type of intelligent nursing wheelchair has a reasonable design, and the integration of multiple functions brings great convenience to caregivers and users. It has the prospect of large-scale promotion and marketing.

Introduction

According to the 2015 Report on World Population Ageing published by the Population Division of the United Nations, in 2015 there were 900 million people aged 60 or above, accounting for one eighth of the world's total population. It is estimated that there will be 1.4 billion such people in 2030, accounting for 1/6 of the total population, and nearly 2.1 billion in 2050, accounting for 1/5 of the total population. Among this large number of elderly people, nearly 60% have leg problems and cannot take care of themselves due to various diseases [1]. In addition, according to the World Health Survey, about 785 million (15.6%) people aged 15 and over live with a disability, of which 190 million (3.8%) have severe disabilities such as quadriplegia or leg disability [2]. These elderly and disabled patients need many nursing staff to help them with daily travel, getting in and out of bed, and manual hand and foot rehabilitation training, which requires a great deal of manpower and time.
However, the WHO 2020 report states that there are now 27.9 million nurses worldwide, accounting for more than half (59%) of the world's health workers. The total number of nurses thus accounts for only 0.378% of the world's population. Compared with the number of people in need of care, there is a large shortage of nursing staff worldwide. Moreover, there is no nursing wheelchair on the market that provides automatic nursing functions and can carry out rehabilitation exercise for the user's body on demand. The wheelchair designed in this article can use automatic armrests, footrests and a head pad to give the limbs, neck and back rehabilitative training in any direction; at the same time, the user can adjust the angle and height to assist in getting in and out of bed. The wheelchair also carries intelligent terminal monitoring, infrared obstacle avoidance and other functions, allowing the user to go out for a walk or sit in the sun without a caregiver present. Meanwhile, to make it convenient for the user to take an escalator, the wheelchair is equipped with a travel switch, an electromagnet and a series of auxiliary devices, making it easier for users to travel safely. The nursing wheelchair reduces the working intensity of the nursing staff, greatly facilitates the rehabilitation training of the users, and improves the autonomy and quality of life of the users.

Overall structural design

According to the existing medical literature, taking China as an example, the number of stroke patients in China is about 70 million, and the number of PVS patients is about 1.4 million, with an annual growth of 70,000-80,000.
High blood pressure, irregular work and rest, and mild depression may lead to cerebral vascular rupture resulting in stroke, and most patients are often left with partial paralysis of the body, systemic paralysis (PVS) or other symptoms. For stroke patients and other similar users, this article designs the nursing wheelchair shown in Figure 1, with a scissor-type lifting mechanism to control the height, movable nursing armrests, backrest and legrests to complete periodic rehabilitation training, Mecanum wheels, and an electromagnet structure to assist in going up and down escalators, so as to complete the corresponding auxiliary functions.

The principle of the scissor-type lifting mechanism

A linear actuator pushes the shaft in a rotating movement, which drives the scissor-type lifting mechanism up and down [3]. The hinge connecting the push rod and the shaft is welded to ensure that there is no relative rotation during movement, which ensures the stability and safety of the wheelchair as it rises and falls. The user can adjust the height of the seat by remote control via Bluetooth. When close to the bed, a sensor device near the seat (described later) is used to lift the seat to the level of the bed. Together with the armrest and legrest laid flat, this completes the function of assisting in getting in and out of bed and taking objects placed in high places, which facilitates the daily life of the user. The schematic diagram is shown in Figure 2.

The principle design of the nursing armrest

The main function of the armrest is to complete the rehabilitation training of the user's hand through periodic slow and pulsating cyclic rotation. It can also be laid flat when assisting the user in getting in and out of bed, making it convenient for the user to move his or her body slightly to get into bed.
Therefore, a parallelogram linkage steering mechanism can be used; the schematic diagram is shown in Figure 3 (armrest mechanism principle design drawing). The armrest is movable, using a symmetrical double-hinge movable hinge. The handrail is also made of aluminum profiles with the same cross-section size. A beveled edge is provided in the upper support frame to mount the air spring, the parallelogram linkage steering mechanism is installed where the support frame is recessed, and the travel of the air spring is also determined by the drawing method. The design drawing is shown in Figure 4 (armrest parameter design drawing).

The principle design of the nursing backrest and nursing legrest

The nursing legrest must rotate around the part of the seat that is connected to it. Considering that the wheelchair can be lifted or lowered, which may cause changes in the motor push rod stroke, the other end of the push rod motor is fixed under the seat plate of the wheelchair, so that the travel of the push rod on the leg does not change during the lifting process. The main function of the backrest and legrest is to assist the user with leg and back rehabilitative exercise. The two legrests each have their own unconstrained degrees of freedom and can work independently, so various types of rehabilitation training can be completed under subsequent control, with motion curves fitted according to the user's requirements; common motion laws include repeated cycling movement and simulated walking movement. The main principle is similar to an oscillating guide-bar mechanism; the schematic diagram is shown in Figure 5 (legrest mechanism principle design drawing). The principle of the backrest is similar to that of the legrest and will not be repeated.
The schematic diagram is shown in Figure 6 (backrest mechanism principle design drawing).

Shock absorption design of the Mecanum wheel

The moving chassis of the wheelchair uses four Mecanum wheels with a diameter of 100 mm. The speed direction of a single Mecanum wheel is perpendicular to its rubber rollers, and omnidirectional movement is achieved through the combined velocities of the wheels. However, the Mecanum wheel has some problems, such as unstable driving and large vibration. To ensure the safety and comfort of the users, based on existing damping methods, we add a spring washer and damping plate to the driving structure of the Mecanum wheel and distribute the resisting moment through a cushioning damping arm to achieve a damping effect [4].

The realization of the control system

The overall control system flow chart is shown in Figure 8. The temperature and humidity sensor and the infrared distance sensor are connected through an Arduino to obtain the temperature and humidity data and the distance from the chassis to the ground, respectively. The remote-control handle, also connected to the Arduino, controls the electric push rods for raising and lowering the wheelchair and the motors for moving it. The encoder feeds back a pulse signal, which is processed into motion information such as motor speed, to the Arduino, which then uploads the information to the Raspberry Pi over serial. The Raspberry Pi is connected to a lidar, an inertial measurement unit (IMU), a camera, a buzzer and an LED lamp. The lidar gathers information about the environment, the IMU measures the wheelchair's own acceleration, and the camera captures images periodically. These data, along with the motion information from the Arduino, are uploaded from the Raspberry Pi to the terminal via WiFi for algorithmic processing and monitoring.
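As a rough illustration of the Arduino-to-Raspberry-Pi serial link described above, the sketch below parses a simple key=value telemetry frame of the kind the Arduino might send; the frame format and field names are assumptions for illustration, not the authors' actual protocol.

```python
def parse_telemetry(line):
    """Parse one comma-separated key=value frame received over serial,
    e.g. 'temp=23.5,hum=40,dist=120,rpm=55' (hypothetical format).
    Returns a dict mapping field names to floats; raises ValueError
    on a malformed frame."""
    fields = {}
    for pair in line.strip().split(","):
        key, sep, value = pair.partition("=")
        if not sep or not key or not value:
            raise ValueError(f"malformed field: {pair!r}")
        fields[key] = float(value)
    return fields
```

On the real system, the parsed dictionary would then be serialized (e.g. as JSON) and forwarded from the Raspberry Pi to the terminal over WiFi together with the lidar, IMU and camera data.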
The Mecanum wheel is driven by a planetary gear reduction motor, which converts a low-torque, high-speed input into a high-torque, low-speed output through the gear set, providing high torque for driving the heavily loaded wheelchair. For the safety of the user, the motor used in this work provides 35.5 kg·cm of torque. The idling limit is about 90 RPM (roughly 9.4 rad/s) for each wheel, so the maximum linear speed of a single hub with a radius of 50 mm is about 0.5 m/s. The chassis is controlled by operating the joystick on the armrest of the wheelchair, and the user can freely control the direction of the wheelchair. With Simultaneous Localization and Mapping (SLAM) technology, the wheelchair can stop at a designated target position and avoid movable or immovable obstacles on the way [5]. However, considering the size and mass of the wheelchair, its mobility and the safety of the elderly, this design does not use an automatic navigation mode; the SLAM algorithm is used only to assist manual remote control, providing terminal monitoring, recording the movement path and alarming for short-range obstacles. Compared with GPS positioning, SLAM positioning can reveal the exact position of the wheelchair relative to the surrounding environment indoors and in small scenes. The wheelchair makes it convenient for the elderly to do activities alone in the nursing home or nearby outdoors. When they need help, the caregivers can quickly locate them, which reduces the pressure on caregivers and also helps elderly users who can control the wheelchair and want to go out to enjoy the sun or other activities.
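The speed figures quoted above can be checked directly: at the idling limit of 90 RPM, a hub of radius 50 mm reaches v = ω·r ≈ 9.42 rad/s × 0.05 m ≈ 0.47 m/s, consistent with the stated ~0.5 m/s. A minimal sketch:

```python
import math

def max_linear_speed(rpm, wheel_radius_m):
    """Peak linear speed of one wheel: v = omega * r, omega in rad/s."""
    omega = rpm * 2.0 * math.pi / 60.0  # rev/min -> rad/s
    return omega * wheel_radius_m

# 90 RPM on a 50 mm-radius hub gives about 0.47 m/s
```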
Terminal monitoring

The system adopts the Robot Operating System (ROS) framework, and the SLAM algorithm performs self-positioning and recognition of the surrounding environment, which is transmitted to the terminal.

Alarm for close-range obstacles

After processing the point cloud information returned by the lidar, if the distance to an object around the wheelchair is less than a certain threshold and the actual speed is greater than a certain threshold, the LED light flashes to prompt the operator to decelerate.

Anti-rollover

The bottom of the wheelchair is equipped with infrared ranging sensors; if a sudden change in the distance is detected, meaning the wheelchair has encountered pits, stairs or similar situations, the motors brake and the buzzer sounds an alarm. If the IMU detects that the wheelchair has tilted excessively, it gives feedback and an alarm to the terminal.

Assistance in getting in and out of bed

The specific process is as follows: first, the user drives the wheelchair to the bedside and brings it against the bed. The lifting mechanism then automatically rises until the pressure sensor mounted on the side of the wheelchair reaches the edge of the bed and returns a pressure signal and control signal. When the cushion is level with the bed, the armrest, backrest and legrest are laid flat in turn. When the overall structure is parallel to the bed, the user only needs to slightly move his or her hips to get into bed. This solves the problem of getting into bed even for users with impaired legs and feet.

Other auxiliary functions

When the user goes up or down an escalator, the electromagnet device is activated by a remote-control electric signal, so that the electromagnet magnetically adsorbs to the escalator, improving running stability and ensuring the safety of the user.
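The alarm conditions described above (close-range deceleration warning, pit/stair braking, and IMU rollover alarm) reduce to simple threshold checks. The sketch below uses illustrative threshold values, since the actual thresholds are not given in the text.

```python
def close_range_warning(obstacle_dist_m, speed_mps,
                        dist_threshold_m=0.5, speed_threshold_mps=0.2):
    """Flash the LED: near an obstacle while still moving fast.
    Threshold values here are illustrative assumptions."""
    return obstacle_dist_m < dist_threshold_m and speed_mps > speed_threshold_mps

def pit_detected(ground_dist_mm, baseline_mm=60.0, jump_mm=40.0):
    """Brake and sound the buzzer: the downward-facing IR sensor
    reads a sudden increase in chassis-to-ground distance."""
    return ground_dist_mm - baseline_mm > jump_mm

def rollover_alarm(roll_deg, pitch_deg, tilt_limit_deg=25.0):
    """Alarm to the terminal: the IMU reports an excessive tilt."""
    return max(abs(roll_deg), abs(pitch_deg)) > tilt_limit_deg
```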
Wheelchair load-bearing capacity check

Using 1640 aluminum profile material and assuming the user weighs 100 kg, the force on each scissor-type component bar is about 250 N. Simulated force analysis shows that the stress and strain of the most heavily loaded bar are both within the allowable range, so the structure can support most users.

Verification of the function of assistance in getting in and out of bed

Considering that the average height of existing beds for Asian users on the market is about 45-55 cm, we conducted 3 groups of experiments at different heights, in which the automatic lifting structure and the pressure sensor beside the cushion raised the seat to the same height as the bed. The experimental data (Table 2) show that the accuracy of the automatic lifting structure remains above 85%, which is basically adequate for the range of bed heights on the market.

Movement of armrests

The armrests are driven repeatedly by 12 V small servos, and a switch module controls the start of the armrest rotation; two switches are used, one on each side of the armrest. The angular velocity of the repeated motion can be set via the Bluetooth connection of a mobile phone (if no angular velocity is input, the default is π/60 rad/s), and the maximum range of motion is 105°. Repeated pulsating cycles can prevent muscle atrophy in users who have been unable to exercise for a long time, and can also help middle-aged and elderly people who have suffered a stroke to move their bodies.

Legrest and backrest movements

The legrest and backrest movement mechanisms are similar to the armrest and are not elaborated further; the difference is that the legrest and backrest drivers use 220 V industrial push rods, and the maximum activity angle is 135°.
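The figure used in the scissor-lift load check above follows from simple statics: a 100 kg user weighs about 981 N, and sharing this evenly gives roughly 245 N per bar, matching the ~250 N quoted. A one-line sketch (the even four-way load split is an assumption):

```python
G = 9.81  # gravitational acceleration, m/s^2

def force_per_scissor_bar(user_mass_kg, n_bars=4):
    """Static load per scissor bar, assuming the user's weight
    is shared evenly among the bars."""
    return user_mass_kg * G / n_bars

# 100 kg user -> about 245 N per bar
```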
The two legrests are driven by two push rods, into which the motion curve required for rehabilitation exercise can be manually input. The wheelchair's simulated rehabilitation function provides walking, running and cycling motion simulation for the two legs, so the user can also do leg training in the wheelchair.

Verification of path logging and monitoring

Due to differences between each motor and wheel, each of the four wheels has its own set of PID parameters; the parameters are tuned so that the wheelchair can move straight forward on flat ground and start and stop smoothly. Tests were carried out in a real environment, with the ROS visualization tool rviz opened on the PC terminal to view the laser point cloud information and movement status, while the wheelchair was remotely controlled for moving, mapping and other operations. The wheelchair's current location and the solid red path between the green starting point and the red ending point can be seen in Figure 15.

Design and validation of other auxiliary functions

In the overall design of the wheelchair, we added some features that are convenient for the user and caregiver. 1) In view of the inclined escalators common in the market, the wheelchair can easily roll over or slide rapidly under gravity while on an escalator, so we added an electromagnet to the chassis and push handles to the back of the wheelchair. When the user enters an inclined escalator, the electromagnet on the chassis can be activated by a switch, and the magnetic attraction to the escalator stabilizes the whole wheelchair, which greatly improves overall safety and autonomy. With this safety measure, the user can take the escalator independently.
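The per-wheel speed loop described above can be sketched as a textbook PID controller; the gains below are placeholders, not the tuned parameters used on the prototype.

```python
class PID:
    """One wheel's speed controller. Each of the four wheels gets its
    own (kp, ki, kd) to compensate motor/wheel differences."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured, dt):
        """Return the control output for one cycle of duration dt."""
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

In use, the encoder feedback supplies `measured`, the joystick command (after Mecanum mixing) supplies `target`, and the output drives the motor's PWM duty cycle.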
2) Because the elderly use intelligent tools less and most rarely use electronic products in their daily life, a temperature sensing module was added to the intelligent wheelchair, which can measure the real-time temperature, helping older users choose appropriate clothing. In addition, we use a Bluetooth module to implement a one-button call function and an automatic positioning function. When an elderly user has an accident or needs help, pressing the one-button call button sends a signal through the Bluetooth module connected to the Arduino; through the communication module, the caregiver's phone is automatically dialed to prevent accidents. 3) In daily life, users often need to carry many small items, which are easily dropped to the side and cannot be picked up by the users themselves; traditional wheelchairs rarely take the inconvenience of storing and retrieving items into consideration. To solve these problems, we use an embedded armrest device with a storage structure, convenient for users to adjust the position of the armrest and to hold medicine, remote controls, reading glasses and other daily necessities.

Conclusion

This wheelchair meets the needs of users in terms of structure and control. The prototype test shows that it completes the expected functions, and it is suitable for occasions where many nurses are needed and the number of users with walking difficulties is large. It alleviates the strain on the caregiver workforce. In addition, it can serve users independently, greatly meeting their needs, and has the prospect of large-scale marketing and promotion.
Border control in question: transformation of anti-cholera measures in Japan at the end of the 19th century

This paper aims to examine how Japan's medical authorities explored a flexible form of border health control against cholera at the end of the 19th century. Since quarantine measures provoked diplomatic tensions with neighbouring countries, the Japanese government considered implementing a scheme of medical inspection of vessels from infected ports. However, Japanese geographical conditions seemed unsuitable for this measure. As a result, a permanent border control was established on the one hand and, on the other hand, bacteriological examinations were carried out within the territory. This exploration of flexible border control, which encompassed the domestic realm, aimed to set up a reliable outbreak alert network, but cholera epidemics revealed the lack of material conditions for this system. From the turn of the century, the authority began to seek more technical solutions.

Introduction

Border control against infectious diseases is a crucial subject in the history of medicine. For example, historian Erwin Ackerknecht described, in his seminal article on 19th-century anticontagionism, how economic and commercial interests influenced public health policies in European countries, driving them to abolish maritime quarantine measures. [1] These measures, restricting vessels' movements for fear of importing diseases, were questioned not only in Europe and America, but also in the Middle East [2] and Japan at the end of the 19th century. The literature on Japanese quarantine policies explains this phenomenon as a result of diplomatic pressure imposed by Western powers seeking to ensure smooth trade flows. [3] This paper aims to explore this question further, and to show that the Japanese medical authorities challenged the idea of the border and sought to expand the scope of epidemic control within the domestic realm.
To shed light on this issue, the British case proves to be helpful: the so-called "English system", a flexible border control system protecting economic interests as well as health security [4], led to the creation of "port sanitary zones", in which the domestic health service checked suspect cases. [5] In addition to this spatial notion of disease control, which blurs the concept of a linear border, the present paper draws attention to the temporal and technical dimensions of disease containment practices. The paper will show that the actors in the Japanese case sought to find technical methods with which they could give warning of the presence of cholera as early as possible. [6] To investigate this question, the present paper analyses the discourses and actions of Japanese scholars working on the control of cholera epidemics in the 1880s and 1890s, mainly for the Health Board of the Home Ministry. The paper consists of three sections. First, it provides a brief account of cholera epidemics and quarantine measures in Japan during the 19th century. Then it examines how doubts about the effectiveness of quarantine measures emerged after several experiences of cholera outbreaks. Finally, it scrutinizes the introduction of bacteriological examination of the sick as a tool for an early and accurate alert of the presence of the disease.

Historical context: cholera outbreaks and quarantine measures

Japan witnessed eight cholera outbreaks on its territory during the 19th century (1822, 1858, 1877, 1879, 1882, 1885-86, 1890, 1895). The first two took place at the end of the feudal era. After experiencing its horrific scourges, feudal rulers published an Official Version of the Prevention Theories on Epidemic Poison [Kanban ekidoku yobô setsu] in 1862 to prepare scholars for the next potential outbreak, in which the word "quarantine" was presented and translated for the first time into Japanese as a preventive measure that Western countries were exercising against cholera.
[7] When the disease came back with virulence in 1877 and 1879, the Meiji government, which had abolished the feudal regime and established the Empire of Japan in 1868, tried to control it with Western medical knowledge. Quarantine was one of the main actions that the government aimed to carry out against the epidemics. However, it provoked diplomatic conflicts: foreign consuls contested and breached the quarantine rules imposed on the vessels of their respective countries, complaining to the Japanese authorities that these measures did not make sense for the prevention of cholera. Instead, they themselves carried out medical inspections of the vessels of their nationality. [8,9] Despite these conflicts, quarantine rules acquired legal force through the enforcement of the Law for the Maritime Prevention of Cholera Dissemination on July 14, 1879. However, since quarantine was a target of criticism, new rules were implemented on June 23, 1882, with the Law for the Examination of Ships arriving from Cholera Infected Areas. In doing so, the government replaced quarantine measures with medical inspection, and appointed medical inspectors at the main ports to check the sanitary condition of vessels coming from infected areas, as well as the health condition of the crews on board. Even though this shift was a consequence of diplomatic influence on border control aimed at reducing obstacles to trade, as the literature has pointed out, the central idea behind these measures - delineating the health border where infected objects and human beings were to be controlled - remained unquestioned. From the middle of the 1880s, Japanese scholars began to doubt the effectiveness of the maritime border control. As Nagayo put it: "[...] personnel was increased. However, quarantine practices, like medical inspection rules, are only practicable once those ports have been declared infected.
As Japan is close to Chinese ports and South Sea islands where cholera is constantly spreading, if action is taken for the implementation of control measures only after official information on outbreaks is obtained, it is too late [to prevent cholera from reaching the territory]". [10] Nagayo's point was that border control could not play its preventive role in the Japanese context, since it depended on diplomatic information that took too much time to be confirmed. In other words, the diplomatic alert network did not work as well as intended. This fear was reinforced in May 1891, when the Japanese ambassador in Hong Kong alerted the Ministry of Foreign Affairs about vessels leaving the port of Bangkok. The ambassador assumed that this port was infected, even though the local government had not yet recognized the presence of cholera. [11] The Central Hygiene Committee, a consultative body of the Health Board, organized a special meeting to react to this information, but was compelled to wait until the British embassy announced that the port of Bangkok was infected, which finally allowed the quarantine and disinfection of the vessels concerned.

Border control in question

One of the consequences of this episode was the enactment of the Law for the Enforcement of the Practice of Quarantine on Ships arriving from Overseas Ports on June 22, 1891, which established medical inspection for all vessels from foreign ports, without questioning whether the departure port was infected by cholera. [12] In other words, the Japanese authority installed a permanent health border, keeping these vessels under systematic surveillance. Nonetheless, the reinforcement of the maritime border was not the only solution to the problem. Identification of cholera bacilli within Japanese territory was required to establish a domestic alert network, making it possible to obtain accurate information as early as possible.
For this purpose, laboratory tools were mobilized as a means of alerting to the presence of cholera.

Bacteriological examination as an outbreak alert tool

Simultaneously with the rise of doubt about border control following the outbreak of 1885-1886, the Japanese state medical body began to explore the use of bacteriological knowledge and laboratory tools for the control of cholera epidemics. At that time, the regional health departments reported the number of suspected cholera cases weekly to the police, from where they were transferred to the Home Ministry. Between 1877 and 1900, the Ministry recorded between 500 and 900 cases each year, except in the years with an outbreak. This declaration system, set up to enable preventive measures, was however contested by the population as well as by physicians: for the former, it meant police officers would come to their home and send them to a quarantine hospital as soon as they showed suspected symptoms; for the latter, its diagnostic criteria were quite confusing. [13] The Health Board sought to overcome this problem with the new laboratory techniques. The goal was to carry out bacteriological examinations to detect cholera cases with accuracy at an early stage of disease propagation. In 1885, the Board began to send officers trained in bacteriology to regions where suspected cases were reported as soon as it received the information. [14] Bacteriological examination had a twofold purpose: in cases where cholera bacilli were absent, the Ministry could reassure the surrounding communities and avoid taking unnecessary control measures; in cases where the officer detected cholera bacilli, the authority could take early action before the propagation of the microbes had occurred. However, while reports on the absence of bacilli in 1888, 1889 and 1901 made it possible to keep calm on the territory, [15] when bacilli were detected in 1890, bacteriological examinations were barely useful for taking action quickly.
In July of that year, Tôichirô Nakahama, an officer of the Health Board, was ordered to travel from Tokyo to Nagasaki, a southern port city where a couple of suspected cases had been identified. He travelled there and detected cholera bacilli, [16] but this did not prevent the microorganisms from spreading simultaneously and insidiously. By the end of the year, the Home Ministry registered 35,227 deaths. Nakahama suggested that local health personnel themselves should carry out bacteriological examination; however, since only a few scholars working for the Health Board had acquired the techniques indispensable for bacteriological examination, this new strategy could not provide early warning, just as with the diplomatic maritime control. To deal with this problem, training and technical courses on bacteriological tools were taught in Japan from 1892 onwards at the Institute for Infectious Diseases, Tokyo University and the Tokyo Microscope Academy, as well as under the direction of the Private Hygiene Society of Great Japan, which had local branches. [17] But a new cholera outbreak in 1895, caused by soldiers returning from the Sino-Japanese war, posed challenges to the measures based on bacteriological examination. Although the number of health personnel who had learned bacteriological techniques had increased substantially, the shortage of medical equipment, especially microscopes, was critical. [7] The outbreak revealed that, whether with old or new techniques, medical and public health infrastructure was still indispensable for taking early action.

Conclusions

The present paper has analysed questions raised by maritime border control in late 19th-century Japan, as well as the subsequent transformation of cholera control practices and its difficulties.
All this reflects a fundamental discordance between the unpredictable nature of cholera epidemics and disease-containment measures, which require collective organization and preparation. The Japanese authorities mobilized border medical inspection and domestic bacteriological examination against cholera (the latter integrating the role of border control within the domestic public health structure) to avoid generalized quarantine measures, but political, social and material factors hindered these actions. In the aftermath of the 1895 outbreak, authorities began to explore more technical solutions, i.e., vaccination campaigns that could allow outbreaks to be avoided despite the presence of cholera bacilli. This measure, launched despite the reluctance of Japanese bacteriologists toward vaccination in the last decade of the 19th century, might be considered a consequence of the long and severe struggle to construct an alert network for cholera control.
Awareness about mosquito-borne infections among agricultural and horticultural college students: Coimbatore, South India

INTRODUCTION

Mosquito-borne infections are caused by bacteria, viruses or parasites transmitted by mosquitoes, which can transmit disease without being affected themselves. Mosquitoes are well-known vectors of various diseases, including dengue, chikungunya, malaria, Japanese encephalitis, filariasis, yellow fever and Zika fever. These mosquitoes breed in all sorts of stagnant water (tyres, drums, coconut shells), and some species can also breed and survive in freshwater bodies; this development depends on the species, geographic location and temperature. Mosquito-borne infections, especially dengue, are a major concern among the general public at present. Nearly 700 million people are affected by mosquito-borne illness each year, resulting in over one million deaths. 1 Thirty-four percent of dengue infections worldwide are reported in India.
The fatality rate is about 90% among infected children, which is lower in adults. Dengue is an arboviral infection transmitted by the mosquito Aedes aegypti. It presents clinically, after an incubation period of 3-14 days, as fever of sudden onset with headache, retrobulbar pain, conjunctival injection, joint pain, maculopapular rash and lymphadenopathy. However, more severe instances can lead to dengue hemorrhagic fever (DHF) and dengue shock syndrome (DSS), which can be fatal. 2 The fatality rate is higher in infected children than in adults. The main reasons for the incidence are lack of awareness, increased urbanization, and an amplified mosquito population due to inadequate public health infrastructure. Agricultural and Horticultural college students who are involved in field activities are at higher risk of developing these infections. Hence, their knowledge about these infections is significant. The primary aim of our study was to assess the awareness about mosquito-borne infections among Agricultural and Horticultural college students of all academic years who are involved in field activities, and to analyze the preventive measures and sanitary methods followed by the students to prevent infection.

Study design

Cross-sectional study.

Inclusion criteria

Agricultural and Horticultural students of all academic years who are involved in field activities.

Exclusion criteria

Students who are not involved in field work.

We framed a questionnaire consisting of 24 multiple-choice questions, testing the awareness about mosquito-borne infections among Agricultural and Horticultural students of all academic years who are involved in field activities. After obtaining informed consent from the participants, the questionnaires were distributed and collected immediately after completion. The data obtained were processed and grading of the knowledge was done based on the scores.
They were categorized into poor (0-8), average (9-16) and good (17-24). Statistical significance was calculated using SPSS version 20 (chi-square test). Institutional Human Ethics Committee approval was obtained (No. 131/2018).

Our study population consisted of 26% males (64/250) and 74% females (186/250) (Figure 1). According to the grading based on the scores obtained, 1.6% of the males had poor knowledge, 51.6% had average knowledge and 46.9% had good knowledge. Among the females, only 0.5% had poor knowledge, 44.6% had average knowledge and 54.8% had good knowledge (Figure 2). Comparing the overall knowledge with regard to gender, females had comparatively better knowledge than males. However, based on the chi-square value, there is no association between knowledge and gender (Table 1). The total study population was divided into two groups based on age. The first group included students less than 20 years of age (125/250); the second group consisted of students aged 20 years or more (125/250) (Figure 3). Based on their scores, 0.8% of students in the first group had poor knowledge, 35.2% had average knowledge and 64% had good knowledge. Among the second group, 0.8% had poor knowledge, 58.4% had average knowledge and 40.8% had good knowledge. Comparing the overall knowledge with regard to age, students aged less than 20 years had better knowledge than students aged 20 years or more (Figure 4). The chi-square value reveals a significant association between knowledge and age (p<0.001) (Table 2). Our study population consisted of 72% (180/250) Agricultural students and 28% (70/250) Horticultural students (Figure 5). Based on the scores obtained, only 0.6% of the Agricultural students had poor knowledge, 45% had average knowledge and 54.5% had good knowledge.
Among the Horticultural students, 1.4% had poor knowledge, 51.4% had average knowledge and 47.1% had good knowledge. Comparing the overall knowledge with regard to the course, Agricultural students had comparatively better knowledge than Horticultural students (Figure 6). However, the chi-square value shows that there is no association between knowledge and course (Table 3).

DISCUSSION

Our study intended to assess the knowledge of Agricultural and Horticultural students related to mosquito-borne infections, especially dengue. The study population consisted of 250 students (26% males; 74% females) of all academic years who are involved in field activities. Grading of the knowledge was done based on the scores, categorized into poor (0-8), average (9-16) and good (17-24). Overall knowledge was better among females than males. Students aged less than 20 years had comparatively better knowledge than students aged 20 or more. Knowledge was better among Agricultural students when compared with Horticultural students. These results are similar to those of studies conducted previously among university students by Payghan and Nayyar et al. 3,4 In the case of Aedes, 79.2% were collected from the grid chamber, 13.2% from the pumping well and 7.1% from the collecting chambers. 5 The collecting chambers and the grid chambers were part of a non-functional old waste water disposal system, while the pumping well was a functional part of the waste water irrigation system. Irrigated fields were always mosquito negative. This was due to percolation of water through the porous soil, which resulted in the rapid elimination of potential mosquito breeding sites. Culex species usually breed profusely in polluted gutters, blocked drains and other water-retention habitats with organic matter, unlike Anopheles and Aedes mosquitoes, which prefer clean ground pools and manmade containers respectively. 6

Among the 250 students, 68% think improper sanitation is the reason for sudden outbreaks of dengue fever. The majority of students think that it is the responsibility of each and every citizen to keep the surrounding environment clean in order to control mosquito-borne infections, rather than pointing to the Government and NGOs. Eighty percent of them think that there is a lack of education and awareness about dengue among the general public. In order to prevent the breeding of mosquitoes in the stagnant water used for field work, larvivorous fish (Gambusia) and mosquito fogging have been implemented on their campus. Most of the students are aware of the sources of mosquito breeding, the diurnal variation of mosquito biting, preventive measures and the medical helpline, but they are unaware of the mosquito species, diagnostic tests and presenting symptoms of the infections.

CONCLUSION

Since the students are exposed to many field activities, they are at increased risk of getting infected. Hence, their knowledge about these infections is significant. Several outbreaks of dengue in Coimbatore over the past few years show that there is a need for awareness campaigns in the community and implementation of proper public health infrastructure. Therefore, creating awareness alone is not enough; it must also be ensured that people put their knowledge into day-to-day practice to prevent mosquito-borne infections.
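The chi-square analysis reported above (run in SPSS version 20) can be reproduced approximately from the published percentages. As a minimal sketch, assuming cell counts back-calculated from the reported proportions (for the age comparison: roughly 1/44/80 for students under 20 and 1/73/51 for students aged 20 or more), a Pearson test of independence in plain Python looks like this; the counts, being reconstructed, are approximations:

```python
import math

def chi_square_independence(table):
    """Pearson chi-square test of independence for an r x c count table.

    Returns (chi-square statistic, degrees of freedom).
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, dof

# Counts reconstructed (approximately) from the reported percentages:
# age < 20 (n=125): poor 1, average 44, good 80
# age >= 20 (n=125): poor 1, average 73, good 51
age_table = [[1, 44, 80], [1, 73, 51]]
chi2, dof = chi_square_independence(age_table)

# For dof = 2 the chi-square survival function is exactly exp(-x/2),
# so no statistics library is needed for the p-value here.
p_value = math.exp(-chi2 / 2)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```

With these reconstructed counts the statistic comes out near 13.6 on 2 degrees of freedom (p on the order of 0.001), consistent in direction with the significant age-knowledge association reported in the paper.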
Evolution and Applications of Recent Sensing Technology for Occupational Risk Assessment: A Rapid Review of the Literature

Over the last decade, technological advancements have become available and been applied across a wide range of work fields, from personal to industrial uses. One of the emerging issues concerns occupational safety and health in the Fourth Industrial Revolution and, in more detail, how industrial hygienists could improve the risk-assessment process. A possible way to achieve these aims is the adoption of new exposure-monitoring tools. In this study, a systematic review of the up-to-date scientific literature has been performed to identify and discuss the most-used sensors that could be useful for occupational risk assessment, with the intent of highlighting their pros and cons. A total of 40 papers have been included in this manuscript. The results show that sensors able to investigate airborne pollutants (i.e., gaseous pollutants and particulate matter), environmental conditions, physical agents, and workers' postures could be usefully adopted in the risk-assessment process, since they can provide meaningful data without significantly interfering with the job activities of the investigated subjects. To date, there are only a few "next-generation" monitors and sensors (NGMSs) that can be effectively used in the workplace to preserve human health. For this reason, the development and validation of new NGMSs will be crucial in the upcoming years for adopting these technologies in occupational risk assessment.

Background

Since personal samplers were introduced in the 1960s, personal sampling has become a widely accepted practice (or, rather, the reference method) for exposure assessment in occupational hygiene [1,2].
Traditionally, personal sampling depends on relatively slow turnarounds between sample collection and subsequent laboratory analysis, which uses standardized methods to generate results, and this can limit the optimal implementation of workplace-risk-mitigation strategies in terms of promptness and efficacy [2,3]. On the other hand, "real-time" monitoring (i.e., using "direct-reading" devices) allows for: (i) sensing the presence of specific hazards, (ii) collecting data with a high temporal resolution, and (iii) having real-time feedback which, by contrast, is delayed when using the traditional approach of sample collection followed by subsequent off-line analysis [2-4]. These advantages, in occupational-hygiene applications, can potentially provide data of a different nature for risk-management purposes; information on exposure to the hazard can be available in a more timely way, so the implementation of risk-mitigation measures may be faster and more efficient (e.g., workers who receive real-time information can mitigate their own exposure by changing their behavior and/or the procedure they are performing) [4]. The advent of wearable and unobtrusive sensors has made it possible to measure the parameters of interest (e.g., gas/vapor and aerosol concentrations, noise intensity, fatigue and heat stress) during real-life activities [5]. Further, data from real-time monitoring may have completely different characteristics (e.g., high volume, elevated generation velocity, heterogeneity) compared to those from traditional monitoring, which creates great interest and challenges in the collection, storage, and modeling of the data [6].
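To illustrate the kind of real-time feedback described above, the following is a minimal sketch (not taken from any of the reviewed devices) of a streaming short-term-average alarm: one-minute readings are averaged over a 15-minute rolling window and compared against an assumed action level. The 25 ppm threshold is purely illustrative, not a regulatory limit.

```python
from collections import deque

def rolling_alerts(readings, window=15, action_level=25.0):
    """Flag time points where the short-term rolling mean of a
    real-time pollutant stream exceeds an action level.

    readings: equally spaced concentration values (e.g., ppm, one per minute).
    window: number of readings in the averaging window (15 here).
    Returns a list of (index, rolling_mean) pairs where the level was exceeded.
    """
    buf = deque(maxlen=window)  # keeps only the most recent `window` readings
    alerts = []
    for i, value in enumerate(readings):
        buf.append(value)
        if len(buf) == window:
            mean = sum(buf) / window
            if mean > action_level:
                alerts.append((i, round(mean, 2)))
    return alerts

# Simulated one-minute CO readings: low background with a transient peak.
stream = [5.0] * 20 + [60.0] * 10 + [5.0] * 20
print(rolling_alerts(stream))
```

In a real device the alert would be pushed back to the worker (e.g., via a paired smartphone app), which corresponds to the behavioral-mitigation loop the reviewed studies emphasize.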
The development of cheaper (and, sometimes, also lightweight, miniaturized, and technically advanced) sensors (e.g., low-cost sensors, LCSs) and monitors (e.g., low-cost monitors, LCMs) is essential to promote the abovementioned activities and might have the potential to transform exposure-assessment approaches in occupational settings [7]. Following a definition already adopted in a previous publication [8], "low-cost" sensors and monitors are understood to mean devices whose single-unit cost does not exceed the order of magnitude of a few hundred USD. The spread of "next-generation" sensor devices for occupational hygiene (i.e., low-cost, miniaturized, placeable, wearable, and implantable sensor technologies) and their role in the future of workplace-exposure assessment and risk assessment have recently been discussed [3,4,6,8]. Hereafter, a systematic collection of the up-to-date literature, aiming to outline the core topics, is discussed.

Problem Statement

NGMSs ("next-generation" monitors and sensors), i.e., "miniaturized" and/or "wearable" sensors and/or monitors, are expected to make exposure and risk assessment in occupational settings more convenient and comprehensive [4]. A recent systematic review [8] analyzed the use of NGMSs in occupational hygiene for airborne hazards: the results outlined that these applications are less frequent than in environmental hygiene, probably because policy- and legislation-based decisions require high levels of detection capability, precision, accuracy, and completeness of data [9]. Despite that, NGMS devices can provide new resources in the occupational-safety and health-management fields [10-23]. Indeed, the studies considered in the above-mentioned systematic review demonstrated overall that NGMSs provide useful data, if properly calibrated.
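Such calibration is typically performed by co-locating the low-cost unit with a reference instrument and fitting a correction model. A minimal sketch, assuming a simple linear correction and invented co-location data (real protocols compare devices over days or weeks and may need multivariate models, e.g., to account for humidity):

```python
def fit_calibration(sensor, reference):
    """Ordinary least-squares line reference ~= slope * sensor + intercept,
    fitted from co-located low-cost-sensor and reference-instrument readings."""
    n = len(sensor)
    mx = sum(sensor) / n
    my = sum(reference) / n
    sxx = sum((x - mx) ** 2 for x in sensor)
    sxy = sum((x - mx) * (y - my) for x, y in zip(sensor, reference))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def apply_calibration(raw, slope, intercept):
    """Correct raw low-cost readings with the fitted line."""
    return [slope * x + intercept for x in raw]

# Invented co-location data: this low-cost unit over-reads by ~25%
# plus a constant offset, so the correction line is reference = 0.8x - 1.6.
low_cost = [10.0, 20.0, 30.0, 40.0, 50.0]
reference = [x * 0.8 - 1.6 for x in low_cost]
slope, intercept = fit_calibration(low_cost, reference)
corrected = apply_calibration([25.0], slope, intercept)
```

The least-squares fit is written out by hand only to keep the sketch dependency-free; in practice a statistics package would be used.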
NGMSs can also be easily adopted to improve exposure-assessment studies in terms of spatio-temporal resolution, wearability (i.e., prolonged use), and adaptability to different types of experimental designs and applications. For example, wearable sensors and devices could be used in various application fields, such as: (i) ergonomic analysis [24], (ii) assessment of the weather's effect on outdoor workers [25], or (iii) exposure to chemical substances that can affect workers' health [26]. Personal-level sensors are also creating new opportunities for exposure-assessment studies. Technologies to study the environment, such as monitors and sensors, have taken on an increasingly important role in the occupational-risk-assessment process [27]. The application of Internet of Things (IoT) technologies also allows NGMSs to be connected to each other and/or to smartphone apps, and data to be uploaded onto cloud platforms using Bluetooth or Wi-Fi, reporting back the recorded data in real time [19,28]. Nevertheless, some drawbacks must also be considered. For example, NGMSs cannot be used as reference-grade instrumentation for monitoring exposure to airborne chemicals for regulatory purposes [8]. NGMSs should be used paired with traditional methods for a period, to allow the hygienist to calibrate the sensors in the most efficient way and obtain meaningful data for the risk assessment [29-31]. Another relevant issue regards power supply: in fact, most NGMSs cannot run for many hours without recharging [8]. Further, concerning ethics, a balance must be found between respect for privacy and the intrusiveness that accompanies ubiquitous worker monitoring; mutual worker-employer trust must be achieved regarding the management of the large amount of data that can be generated by monitoring with NGMSs [4]. From a much more operational point of view, another weakness of NGMSs could be their misplacement during the workers' activities.
In fact, these situations may generate a lack of data quality, and this might be a serious problem for the industrial hygienist's evaluations in cases where the data are to be analyzed for risk-assessment purposes [8].

Aim of the Study

The main purpose of this study is to analyze the scientific literature in order to understand the state of the art concerning the use of NGMSs in exposure- and risk-assessment processes in occupational settings. In particular, the present study focuses on how NGMSs could be used in the field and on their practicability in the risk-assessment procedure. In addition, the authors decided to study several areas of interest regarding the technologies and the main technical aspects of wearable sensors in the field of industrial hygiene.

Materials and Methods

A rapid [32,33] systematic review of the literature was performed using the Scopus database, following the PRISMA guidelines [34]. The main topic of interest in this work was placeable, wearable, and implantable sensors and their application in occupational-exposure-assessment studies. Only scientific papers written in English were considered. A list of keywords was arranged into queries following the syntax required by the Scopus database, obtaining the final query (Table 1) used to retrieve the papers.

Table 1. Query used in the Scopus database.

Database: Scopus
Search query: (TITLE-ABS-KEY ("sensor*" AND "occupation*")) AND (TITLE-ABS-KEY ("occupational exposure" OR "human exposure" OR "exposome" OR "miniaturized sens*")) AND (TITLE-ABS-KEY ("sensor network" OR "wearable sens*" OR "crowd sensing" OR "participatory sensing" OR "mobile sensor node" OR "low cost sensor" OR "citizen science" OR "mobile phone app*" OR "lightweight device*" OR "bluetooth" OR "air pollution sens*" OR "portable device" OR server OR cloud OR "miniaturized sensor*"))

At the end of the research process, 40 papers were found in Scopus (Figure S1).
The last search was conducted on 22 March 2022 (first search: 27 May 2021; weekly updates were performed from March 2022 until the submission date). After reading each retrieved paper, the obtained information was schematically organized in a dedicated database by two of the authors (A.B. and G.F.). The papers were analyzed for information regarding the aims of the studies, the investigated risk factors, the application of NGMSs, and their technical features (e.g., sensor technology, device dimensions, weight and cost, battery performance, availability of mobile apps, connection and/or IoT technology, application (in the laboratory or in the field), and availability (prototype or device on the market)). In the opinion of the authors, these are the most important topics regarding the implementation of the risk-assessment process in real occupational settings. It should be noted that a full systematic review of the available evidence on the use of NGMSs in exposure- and risk-assessment processes in occupational settings would have been a useful tool: it would allow the interpretation of the results of individual studies within the context of the totality of evidence and provide the evidence base for guidelines or policy briefs. However, due to the high level of methodological rigor, systematic reviews require considerable time and skills to execute. When timely access to information is needed, "rapid reviews" can be considered instead of systematic reviews [32]. Rapid reviews are a form of knowledge synthesis in which components of the systematic-review process are simplified or omitted to produce information in a timely manner [33]. The present review is, therefore, configured as a rapid review, and the limitations that characterize this type of approach must be considered [32,35].
Gaseous Pollutants

Gaseous pollutants such as carbon monoxide (CO), nitrogen oxides (NOx), and ozone (O3) cause a range of deleterious respiratory and cardiovascular health effects [63]. The main airborne gaseous pollutants studied in the literature using NGMS technologies are: (i) CO, (ii) oxidizing gases such as O3 and nitrogen dioxide (NO2), (iii) methane (CH4), (iv) sulfur dioxide (SO2), and (v) benzene (C6H6) (Table 2). In more detail, two papers [57,58] present new tools and techniques for the monitoring of airborne pollutants, in particular for C6H6, O3, and SO2. Leghrib and co-workers [58] used an array of plasma-treated metal-decorated carbon nanotubes for the quantification of airborne benzene, proposing it as a selective, low-cost, and wearable sensor. Wan and co-workers [57] introduced a new miniaturized planar electrochemical gas sensor for rapid monitoring of multiple inorganic gases (i.e., oxygen (O2), O3, SO2, and CH4). In more detail, they presented the sensor's whole construction process, outlining how, starting from a sensor available on the market, one can build a new customized, low-cost, and wearable device with good reliability for exposure- and risk-assessment procedures [57]. The gas sensor consists of a porous polytetrafluoroethylene substrate, which allows fast gas diffusion, and a room-temperature ionic liquid as the electrolyte. To enhance adhesion between the electrodes and the substrate, a metal-sputtering technique was used for the production of the platinum electrodes. Thus, compared with other gas sensors already in use, the one proposed by Wan and collaborators is among the most promising candidates for a miniaturized, inexpensive, rapid-response, low-power, multi-gas sensing system for the exposure monitoring of gaseous hazards [57].
Another work concerning the evaluation of CH4 concentrations was conducted by Shamasunder and co-workers [52], who tested the capacity of low-cost sensors for localized exposure estimates. Johannessen and co-workers [38] presented a CO sensor's fabrication process and the design of an IoT network to collect the real-time information gained by the instrument. In more detail, they designed and built a CO-sensor module employing a low-cost (i.e., USD 100) sensor that is commercially available. The CO-sensor modules were built around the EE-02 sensor (Exploratory Engineering, Telenor Digital AS, Trondheim, Norway), which enables long-range connectivity. After tests conducted both in the laboratory and in the field at an incineration plant, they concluded that the best choice for detecting rapid and short-term variations of CO levels in workplaces could be real-time monitoring conducted over an extended period of time. However, investigating the sensor's accuracy and resistance to interfering gases is, and will remain, a crucial point in the evaluation of occupational CO exposure. Regarding CO2, its monitoring could be developed as a new approach to alert individuals using all forms of respiratory support when breathing becomes stressed, before overt symptoms appear. For example, Pleil and co-workers [48] analyzed results from recent experiments employing in-mask (tunable laser spectroscopy, TLS) CO2 sensors, to evaluate whether these could become a reliable early-warning tool. Assessing the accuracy of commercially available sensors is an important step toward understanding their application in specific tasks, on which future research should be focused. In this regard, Isiugo and co-workers [26] evaluated the performance of different gas sensors for the measurement of ambient O3 and NO2.
The results of their work showed that the performance of the tested instruments was influenced by environmental conditions and that, overall, only one of the three tested sensors had appropriate accuracy. Zuidema and colleagues [11,45] realized a multi-sensor network using low-cost sensors distributed in the work environment. The multi-sensor network was used to design hazard maps of a heavy-vehicle-manufacturing facility. This can offer insight into the sources, the areas of high variability in concentrations near different activities, and the distribution of hazards. The most relevant issue that emerged from this study was the need for proper calibration of the instruments: all the sensors were first calibrated in the laboratory and then underwent field calibration. The multi-hazard network acquired data continuously for 5 months; all the data gathered by the sensors were collected in a database and then used to produce a hazard map of the facility for each investigated airborne pollutant (i.e., CO, PM, O3, and NO2). The hazard maps thus obtained could be used to evaluate whether the adopted control strategies are effective, offering an advantage over traditional industrial-hygiene approaches. Further, the mapping tools provided by a multi-hazard sensor network, combined with information on workers' locations, can be used to estimate personal exposure to multiple occupational hazards. For this purpose, personal direct-reading instruments were deployed for the same contaminants, to evaluate the ability of the multi-sensor network to provide personal-exposure estimates for any employee whose position can be tracked. The available studies also outline other benefits of a multi-sensor network and/or personal instruments based on low-cost, customizable, low-power-demand sensors.
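A hazard map of this kind is essentially a spatial interpolation of the fixed-node readings. As a minimal sketch (not the method used in the cited studies, which rely on dedicated mapping and statistical software), inverse-distance weighting over hypothetical node readings looks like this:

```python
import math

def idw_map(sensors, grid, power=2.0):
    """Inverse-distance-weighted concentration surface from fixed sensors.

    sensors: list of (x, y, concentration) tuples, one per network node.
    grid: list of (x, y) points at which to estimate the hazard level.
    Returns one estimate per grid point.
    """
    surface = []
    for gx, gy in grid:
        num = den = 0.0
        exact = None
        for sx, sy, c in sensors:
            d = math.hypot(gx - sx, gy - sy)
            if d == 0:
                exact = c  # grid point coincides with a node
                break
            w = 1.0 / d ** power
            num += w * c
            den += w
        surface.append(exact if exact is not None else num / den)
    return surface

# Hypothetical CO readings (ppm) from four fixed nodes in a workshop.
nodes = [(0, 0, 2.0), (10, 0, 8.0), (0, 10, 3.0), (10, 10, 9.0)]
estimates = idw_map(nodes, [(5, 5), (0, 0)])
```

Estimates at unsampled points are bounded by the node readings, which is one reason the reviewed studies stress placing nodes where concentration variability is highest.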
In fact, this instrument configuration could be useful for identifying and classifying working areas by their level of hazardousness, helping all the workers to control their level of exposure to hazardous pollutants according to the respective occupational-exposure-limit values [14]. The most relevant problem arising from the available literature is that the measurement error of low-cost sensors can often be attributed to issues of sensitivity and specificity, in part due to sensor drift, degradation over time, or responsiveness to non-target species. This problem must be further investigated, especially in terms of instrument accuracy, which must be improved, even though some laboratory results for particular pollutants are very promising [57,58]. A characteristic that emerges from the reviewed studies is customizability, intended as the propensity of the monitors to be easily adapted, integrated, and assembled with other components, starting from a sensor already available on the market.

Particulate Matter

Epidemiological and toxicological studies show that a number of negative effects on human health are possibly related to particulate matter (PM) exposure [64]. Recently, miniaturized, low-cost sensors for PM have become increasingly available, making it possible for a sensor network to elaborate and characterize maps of particle concentration with high spatial and temporal resolution [45]. Among the reviewed papers, three different field studies [45,47,54] used low-cost and wearable sensors for PM measurements. These were performed in three different occupational settings, namely a heavy-vehicle-manufacturing facility, an agricultural setting, and hairdressing salons. The first study [45] aimed to design a method that uses hazard-mapping data to optimize the number and location of sensors within a network for long-term assessment of occupational PM concentrations.
The proposed protocol is based on a statistical methodology to define the eventual removal order of the sensors located in a manufacturing facility, to determine their optimal location based on preliminary hazard-mapping data. The main aim was to preserve the locations with higher temporal PM variability, so as to produce the most accurate hazard maps. The statistical methodology presented by Berman and colleagues [46] is very promising because, although in this case it is used for the analysis of PM, it could be modified and used for different hazards and occupational settings to obtain very accurate hazard maps and to improve risk assessment. For example, this methodology could be helpful for a large-scale preliminary monitoring campaign: based on the results obtained, a subsequent campaign could then be planned with a reduced number of optimally placed sensors, to perform long-term exposure assessment. Another study, by Zuidema and co-workers [41], is connected to the above-mentioned work, since it presented an example of how to apply a correction factor to low-cost sensors to obtain the best-performing sensor network possible in a heavy-vehicle-manufacturing plant. In the second study [54], three different sensors were used to assess the PM exposure of farmworkers. The first two were low-cost sensors (i.e., the OPC-N3 by Alphasense Ltd., Essex, UK, and the AirBeam2 by HabitatMap, Brooklyn, NY, USA, which incorporates the PMS7003 PM sensor produced by Plantower) and the third was a higher-cost device (i.e., the GRIMM Mini-WRAS 1371 by Grimm Aerosol Technik GmbH & Co. KG, Ainring, Germany). The study was conducted over 5 non-consecutive days, from 8:00 a.m. to 4:00 p.m. The results outlined that the OPC-N3 performed better than the AirBeam2 and, compared to a gravimetric filter measurement, generally gave higher averages than the filter concentration.
However, after excluding the data points where the air-sampling pump failed to run, they found good agreement between the OPC-N3 and the filter measurement. These findings suggest that the OPC-N3 may be suitable for some agricultural exposure measurements. The article by Shao and colleagues [47] reported a pilot study on the exposure assessment of PM among hairdressers. Indoor PM concentrations in hair salons were characterized, and the performance of three low-cost sensors (uRAD A3 by Magnasci SRL, Romania, with a Winsen ZH03A PM sensor; Flow by Plume Labs; AirVisual Pro, with an AVPM25b PM sensor, USA) was compared to a portable monitor (DustTrak 8530, TSI Incorporated). The results of the tested low-cost sensors were very promising. Among them, the uRAD and AirVisual were the sensors that tracked best with their reference device (the DustTrak) during most of the sampling time. By contrast, the Flow low-cost sensor did not perform as well. Overall, these studies outlined that several NGMSs with acceptable performance are available, but not all of them perform as well as the more expensive instruments taken as reference in these evaluation studies. For this reason, to date, NGMSs cannot totally replace traditional approaches in occupational-exposure assessment, but they can fill other gaps, such as improving data in terms of spatial and temporal resolution [8].

Other Risk Factors for Workers' Health and Safety

The results of the studies included in this review outlined that new sensor technologies can be used for the evaluation of other relevant workplace-related health and safety risk factors. These include: (i) physical agents and workers' physiological parameters related to thermal stress/strain and (ii) posture assessment (to prevent work-related musculoskeletal disorders, WMSDs).
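For posture assessment, a common building block is deriving an inclination angle from the gravity components measured by a body-worn accelerometer. The sketch below is a generic illustration, not taken from the reviewed papers: it assumes the sensor's z-axis is aligned with the upright trunk, and the 60-degree bending threshold is chosen only for the example.

```python
import math

def trunk_inclination(ax, ay, az):
    """Inclination (degrees) of a body-worn accelerometer relative to
    vertical, from the static gravity components (ax, ay, az) in g.

    0 degrees = upright; larger angles = greater forward/side bending.
    Assumes the sensor's z-axis points along the upright trunk.
    """
    g = math.sqrt(ax * ax + ay * ay + az * az)
    # Angle between the z-axis and the measured gravity vector,
    # with the cosine clamped against rounding error.
    return math.degrees(math.acos(max(-1.0, min(1.0, az / g))))

def time_above(angles, threshold=60.0):
    """Fraction of samples spent beyond a bending threshold
    (60 degrees here, chosen purely for illustration)."""
    return sum(a >= threshold for a in angles) / len(angles)

# Upright (gravity on z), a 45-degree bend, and a near-horizontal trunk:
samples = [(0.0, 0.0, 1.0), (0.0, 0.7071, 0.7071), (0.0, 1.0, 0.05)]
angles = [trunk_inclination(*s) for s in samples]
```

In a real WMSD study the stream of angles would be summarized over a full shift (e.g., percentage of time in sustained bending) rather than over a handful of samples.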
Physical Agents and Workers' Physiological Parameters Temperature Physiological-temperature monitoring could improve both safety monitoring and work-rest planning, to maximize effective and safe performance (a safe work environment is one where workers are physically capable of doing all the required job tasks [19]). There are two key research problems in monitoring thermal-work stress: (i) accurately determining an individual's thermal-work strain and (ii) using the thermal-work-strain state to optimize human performance [65]. Among those reviewed, two studies were performed in cold environments [19,53], and four studies focused on hot environments [5,25,36,65]. Most of these studies used the same instrumentation (i.e., Thermochron iButton, HOBO Pendant Temperature, and Garmin Vivoactive HR) to measure physiological metrics (i.e., body temperature and heart rate-HR), which were employed to calculate the thermal stress and strain of the investigated subjects. All these devices are wearable tools available to consumers on the market. The objective of the study conducted by Sugg and co-workers [53] was to assess the personal ambient temperature (PAT), intended as the personally experienced ambient temperature, among workers in a cold environment. The symptoms of prevalent chronic diseases are aggravated by cold weather. Low temperatures can also impair physical and mental performance through uncomfortable thermal sensations, numbness of the hands, and lowered body temperature [53]. In this study, workers reported on their experience concerning several cold-related issues, ranging from numbness in the hands and feet to shivering. Especially for outdoor PAT data, the results outlined that the ambient-temperature information coming from personal monitoring devices generally agreed with ambient-weather-station data. Nevertheless, significant differences among devices, despite usage by the same participants, were highlighted.
The authors reported great variability among the same subjects, depending on sensor choice and placement, an issue that complicates the identification of participants who wore their devices improperly. The long reaction times of the iButton and HOBO devices to temperature changes may be a limitation for small-scale spatiotemporal studies, because workers can move between different microenvironments in a short period. Further information on this topic is presented by Nelson and colleagues [19]. The authors also focused on occupational workers in cold environments using Thermochron iButtons to collect ambient temperature, but the aim of their study was to understand the workers' response to (i) the report-back process, (ii) their perception of exposure to cold environments, (iii) the potential for behavioral modification, and (iv) the understanding of personal-biomonitoring results (i.e., heart rate). A report-back packet that displayed the study's results, containing biomonitoring information, was given to participants at the end of data collection. After that, a survey to assess potential behavioral modifications and preferences for health-data formatting was conducted. Participants found this process very useful; in fact, they expressed a greater willingness to modify their occupational behaviors to reduce their cold exposure. In terms of promoting behavioral change with respect to cold temperatures, the results of this study suggest that reporting the outcomes to each worker could be an effective way to protect workers from these problems. Not only is exposure to cold environments a hazard factor; occupational heat exposure is also a crucial workplace hazard, related to increases in heat-related illness and injuries due to fatigue, as well as declines in safety and worker vigilance. Exposure to heat can also degrade workers' performance and productivity [53].
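The thermal-work-strain state referred to in these studies is commonly computed from core temperature and heart rate; one widely used formulation is the Physiological Strain Index (PSI) of Moran and colleagues. A minimal sketch follows, assuming the standard PSI reference values (39.5 °C and 180 bpm); this is not necessarily the exact metric used in the reviewed papers:

```python
def physiological_strain_index(t_core0, t_core, hr0, hr):
    """Moran-style PSI on a 0 (no strain) .. 10 (very high strain) scale,
    from resting and current core temperature (deg C) and heart rate (bpm)."""
    psi = (5.0 * (t_core - t_core0) / (39.5 - t_core0)
           + 5.0 * (hr - hr0) / (180.0 - hr0))
    return max(0.0, min(10.0, psi))  # clamp to the nominal scale

# resting 37.0 degC / 70 bpm -> working 38.5 degC / 150 bpm
print(round(physiological_strain_index(37.0, 38.5, 70, 150), 2))  # -> 6.64
```

The appeal for wearable monitoring is that both inputs are exactly the quantities (body temperature and HR) that the devices above already record.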
In recent years, there have been several improvements in low-cost, wearable-sensor technology aimed at studying an individual's daily temperature exposure using single-point measures, which provide an up-to-date method to improve the detail of time-location records and yield important knowledge about microclimate variability for outdoor workers. Sugg and co-workers [25] conducted a study to demonstrate the space and time patterns of PAT exposure and the feasibility of using wearable sensors to measure PAT in an at-risk group of outdoor workers. In more detail, the authors aimed to: (i) show the ability to evaluate PAT exposure in both space and time, (ii) characterize site-specific and personal variability in PAT, and (iii) examine how PAT varied between multiple microenvironments. To do so, each participant was equipped with Thermochron iButton devices positioned on the outside of their collar, facing outward, to collect data on ambient environmental conditions. Moreover, a few participants wore Garmin Vivoactive HR watches to acquire locational and contextual data on heart rate. Sugg and co-authors found that indoor workers, compared to outdoor ones, have a wider choice about where they spend their workday and may have more opportunities and resources to mitigate their exposure. Nevertheless, nearly 90% of the participants found the information provided to them useful in mitigating their own heat exposure while at work. A particular study was performed in a laboratory setting on firefighters by Coca et al. [36]. The authors tried to demonstrate the accuracy of wearable sensors in this type of working activity. Indeed, firefighters experience tremendous physical stresses in the course of their duties, both metabolically and environmentally. In this study, plethysmographic sensors (LifeShirt System, VivoMetrics, Ventura, CA, USA) incorporated into a vest were used.
Some of the physiological variables monitored by these wearable devices were heart rate, respiratory rate, skin temperature, oxygen saturation, tidal volume, and minute ventilation. Physiological data were stored on a small, portable data recorder carried in a pouch attached to the vest and telemetered in real time to a laptop computer. The conclusions of this study indicate that the LifeShirt System measures reasonably accurately within the hot, moist environment of standard firefighter gear. In fact, most of the physiological outcomes were not statistically different from the physiological data recorded with standard laboratory equipment (i.e., 12-lead ECG skin electrodes, a skin-temperature sensor (SQ2020-1F8, Grant Instruments Ltd., Cambridgeshire, UK), and a Nonin X-pod pulse oximeter (Nonin Medical Inc., Plymouth, MN, USA)). This study obtained very promising results, but it also suggested that additional experiments in actual firefighting scenarios are warranted to determine the accuracy in field settings. UV Radiation In addition to temperature, exposure to ultraviolet (UV) radiation is also a relevant issue, especially for outdoor workers. Prolonged exposure to the sun results in excessive absorption of UV radiation: in normal doses UV has beneficial effects on human physiology, but overexposure can lead to several diseases, such as DNA mutations and subsequent skin-cell carcinoma, benign tumors, fine and coarse wrinkles, mottled pigmentation, and other cellular-proliferative diseases [66]. To avoid all these potential risks, workers must be provided with alerts about overexposure events. Pievanelli and colleagues [59], in a conference paper, presented an operational scheme for the realization of compact, wireless sensors able to detect physical-agent (i.e., UV) exposure, suitable for the protection of workers employed outdoors.
The platform is made up of mobile, wireless sensor nodes designed to be attached to clothes, so that they can simply be worn by anybody. Even though the whole set of components has been identified and tested, the single elements have not yet been integrated together. Further, among those reviewed, we found two manuscripts regarding in-the-field UV exposure [37,61]. Sabburg et al. [37] collected UV-A irradiance data to quantify the effect of clouds on UV-A exposure, using an integrated sky-camera and radiation system during an autumn and winter period. Baczynska et al. [61] measured, over 2016 and 2017, the in-flight UV exposure of pilots in England, using GENESIS-UV sensors for measurements inside the cockpits. Commercial pilots are a particular category of workers, since they are at twice the risk of melanoma and skin cancer compared with the general population [67,68]. Pilots are exposed to solar and UV radiation that may be significantly higher at flight altitudes than on the ground. In this study, sensors were clipped to the shirt at chest level, and the pilots also had to fill in a diary that included date, time, and other flight information. This study showed that the direct method of in-flight spectral measurement is challenging, and that the use of small, wearable sensors may be a promising solution. However, wearable sensors often cannot be used to measure solar radiation that is filtered through aircraft windshields without a correction factor. As a matter of fact, in this study, the GENESIS-UV sensors had a strong wavelength dependence and needed a correction factor to make the acquired data usable. Noise Among those reviewed, three papers focused on occupational exposure to noise. Two of these were performed by Zuidema and co-workers [11,45], and in each one a custom sound-pressure-level (SPL) sensor was used to assess the workers' exposure to noise in a heavy-vehicle-manufacturing facility.
Results obtained from the custom sensor were compared to those obtained by means of a reference instrument, the model "XL2" (NTi Audio AG, Liechtenstein). The noise sensor developed for these two studies is composed of a microprocessor plugged into an omnidirectional condenser microphone, and it was calibrated by playing a calibration sound, with an acoustic generator and an amplifier, between 65 and 95 dB; the sensor's response was then compared to the reference sound level. Since noise is a physical hazard that does not disperse from its source in the same way as other pollutants, the sensor network that had been created had a limited ability to capture impact or impulse noise. Misistia and colleagues [40] conducted a study to assess the response to blast overpressure (BOP) of wearable sensors against industry-standard pressure transducers. In this case, the Tourmaline ICP pressure-sensor model (PCB Piezotronics, Depew, NY, USA) was adopted to evaluate the sensors' orientation error in military-personnel helmets. The experimental procedures were conducted under controlled laboratory conditions using a shock tube, and some of the findings were verified in the field. The sensor used for this study was the Black Box Biometrics (B3) Blast Gauge. On this sensor, a scheme is printed to identify the specific locations where it must be worn, namely on the back of the helmet, on the left shoulder, and on the chest. The authors identified three factors that might influence the pressure values recorded by wearable sensors: (i) the orientation of the body with respect to the source of the blast waves, (ii) the intensity of the shock waves, and (iii) the local geometry of the surrounding environment.
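The 65–95 dB bench calibration of the custom SPL sensor described above amounts to fitting the sensor's response against the reference levels; a least-squares sketch with invented readings (not data from the reviewed studies):

```python
# Fit a calibration line mapping raw sensor dB readings to reference levels,
# in the spirit of the 65-95 dB bench calibration described in the text.
def fit_line(x, y):
    """Ordinary least squares: return (gain, offset) for y ~ gain*x + offset."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    gain = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    return gain, my - gain * mx

raw = [63.0, 71.5, 80.2, 88.8, 97.1]  # illustrative custom-sensor readings
ref = [65.0, 72.5, 80.0, 87.5, 95.0]  # illustrative reference SPL meter (dB)
gain, offset = fit_line(raw, ref)
corrected = [gain * r + offset for r in raw]
```

Applying `gain` and `offset` to subsequent raw readings yields the corrected SPL values reported against the reference instrument.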
Results of this study revealed that there is an underestimation error in the reflected pressure for the B3 sensors, but the incident overpressure peak is comparable to that of the PCB, so the impulse values are overestimated, regardless of the tested configuration of the B3s. Although Misistia's study focused on military personnel, the results could be extended to all workers exposed to high-frequency noise. The B3 configuration could easily be worn during the work shift, and, thanks to the instrument's wireless communication capabilities, it could report back interesting data to compare with the standard PCB sensors present in the workplace. Laser Concerning hand-held laser exposure, from an occupational-risk-management point of view, the market launch of hand-held laser-processing devices should be closely tied to the safety of the machines. Personal protective equipment such as protective eyewear or clothing must not be considered, as in any risk-management procedure, the first choice to prevent injuries and manage risks in workplaces; indeed, such strategies are adopted only in those cases where it is not possible to eliminate the sources of risk. In our literature research, we found a paper on this problem, based on a laboratory study by Puester and colleagues [18,51]. This study aimed at the qualification and adoption of safety measures for the use of a hand-held laser instrument. The sensors selected to conduct the laboratory investigation were: (i) tactile sensors, (ii) inductive sensors, (iii) capacitive sensors, (iv) ultrasonic sensors, (v) inclination sensors, (vi) acceleration sensors, (vii) gyroscopes, and (viii) temperature sensors. Depending on the output power and on the distance of the body parts from the process zone, critical irradiance on the human body can occur if the laser radiation becomes accessible. Apart from irradiance, exposure time is the second critical value.
To avoid lesions, laser radiation must be isolated or deactivated as soon as possible under fault conditions. The investigations reveal solutions to equip laser devices with safety-related parts and safety controls to minimize the risks from laser radiation. Mechanical Vibration Mechanical vibrations are known to affect the hand-arm system or the whole body of workers who use machines or equipment that produce vibrations. Austad et al. [56] used the IsenseU, a flexible, wearable, and robust sensor suitable for integration into clothing, to assess hand-arm-vibration exposure in a laboratory setting. The findings of this study showed that the IsenseU sensor can be useful for estimating the vibration-exposure time, the frequency-weighted acceleration, and daily-exposure values. Moreover, if the sensor were integrated into the sleeve of a jacket, vibration-exposure measurement could be performed concurrently with skin- and ambient-temperature measurement. To conclude, in our review of the literature, we found that there are relatively few studies focused on next-generation sensors for assessing exposure to physical agents. This topic should be further investigated to obtain more complete information to improve risk-assessment processes in workplaces. The most tempting prospect is that next-generation work instrumentation and personal protective equipment should be equipped with new, small sensors that could provide real-time feedback about emissions and/or the exposure of workers to work-related risks. Posture Assessment and Work-Related Musculoskeletal Disorders MSDs (musculoskeletal disorders) can be defined as a group of disorders or injuries affecting the body's musculoskeletal system when it is placed under stress. Examples of MSDs include bursitis, carpal tunnel syndrome, and tendonitis [24]. Work-related musculoskeletal disorders (WMSDs) refer to MSDs that are due to workplace activities associated with physical job tasks.
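The frequency-weighted acceleration and daily-exposure values that the IsenseU study estimated are conventionally combined, ISO 5349-style, into a daily exposure A(8): the three frequency-weighted axis RMS accelerations are summed vectorially and the result is normalized to an 8-h reference day. A sketch with illustrative values (the formula is the standard one, not code from the study):

```python
from math import sqrt

def a8(awx, awy, awz, hours, t0_hours=8.0):
    """ISO 5349-style daily exposure A(8) from frequency-weighted RMS
    accelerations (m/s^2) on the three hand-arm axes."""
    ahv = sqrt(awx**2 + awy**2 + awz**2)  # total vibration value
    return ahv * sqrt(hours / t0_hours)   # normalize to an 8-h reference day

# 2 h of tool use at (3, 2, 6) m/s^2 weighted RMS per axis
print(round(a8(3.0, 2.0, 6.0, 2.0), 2))  # -> 3.5
```

A wearable that logs both the weighted acceleration and the actual trigger time can therefore report A(8) directly at the end of the shift.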
According to the Occupational Safety and Health Administration (OSHA), based in the United States, there are eight risk factors related to WMSDs: (i) extreme temperature, (ii) repetition, (iii) static postures, (iv) vibration, (v) quick motion, (vi) compression or contact stress, (vii) force, and (viii) awkward postures [69,70]. Most of the time, awkward postures can be prevented by redesigning the workplace layout or selecting a proper tool for workers, but different work tasks are affected by different types of risks, so the challenge is to find new, customized solutions that solve each specific issue. A specific job-hazards analysis could identify a workplace's risks, but it may be tricky to carry out because of the complexity of the job and the manual effort needed to monitor work processes [71]. Among those reviewed, five different papers were found regarding this topic [24,42,43,49,62], and the main outcomes are reported here. In recent years, wearable sensors have been used for quantitative, instrument-based biomechanical-risk-assessment studies to prevent work-related musculoskeletal disorders. Instrument-based tools are generally not included in the standardized methods for biomechanical-risk assessment, because the methods commonly used are based on observational and subjective approaches. The spread of Industry 4.0 may represent a new scenario in which the computational capabilities and network connections that characterize smart, wearable sensors make them transparent, sensitive, responsive, and adaptive to workers' movements, allowing real-time, online monitoring of working tasks. Recently, several methods have been developed, accepted in the international literature, and used in the workplace in an attempt to reduce WMSDs.
In this regard, the most innovative wearable technologies and smart electronic devices support these types of investigations by improving biomechanical-risk assessment, adapting it to all work situations, and overcoming the limits of the current standardized methods, without interfering with the workers' activities. This allows real-time estimation of the risk, providing direct feedback to the end user, who is constantly monitored directly while at work. Several commercial wearable inertial sensors can stream data to a remote computer or a web server in real time. This allows recording, processing, and reviewing sensor data online, affording new opportunities for near-real-time, rapid feedback about work postures to subjects or to managers and supervisors. Moreover, body-worn inertial-sensor technology provides several opportunities to improve the safety and health of workers who perform physical tasks [62]. Despite the growing availability of these new tools, there are still few scientists and engineers exploring the use of wearable technologies for biomechanical-risk assessment, although (i) the need for increasingly quantitative evaluation, (ii) the recent miniaturization process, and (iii) the need to keep up with a constantly evolving manual-handling scenario all call for their use. Therefore, regarding biomechanical-risk assessment, the adoption of new, innovative technologies is at an initial stage [42]. Concerning MSDs, construction jobs are among the most labor-demanding compared to other industries. Often, construction workers exceed their natural physical capability to make up for the increasing challenges and complexity of this business. For this reason, construction jobs are among the most ergonomically hazardous, because they often involve activities such as body twisting, manual handling, heavy lifting, and working in awkward positions, which are all potential causes of WMSDs in workers.
The most common ones are tendonitis, sprains, back pain, strains, and carpal tunnel syndrome (CTS). The postures of different body parts are generally measured in terms of the degree of bend from the neutral posture, to identify the risks associated with awkward postures. Sensor-based direct measurement of risk factors provides a great opportunity for unobtrusive and precise ergonomic assessment of construction tasks. Nevertheless, calibrating, setting up, and using a complicated sensor network requires expertise that normally exceeds what can be expected of most construction and field workers. Even if such technologies are on the market, the economic effort, as well as the time commitment necessary to purchase, install, and maintain the tools, may be considered an impeding factor. Commonly, the most reliable sensor used for biomechanical-risk assessment is the IMU (inertial measurement unit) [42]. These sensors allow measurement of the orientation, position, velocity, and acceleration of the body posture. An important study on selected subjects was conducted by Nath and co-workers [24] in a laboratory environment. The authors used a "two smartphones" configuration, relying on the devices' 3D accelerometer sensors, to demonstrate the potential of mobile devices in ergonomic assessment. For data treatment, they used the sensory data acquired by the smartphones, which were mounted on the worker's upper arm and waist while the worker was performing a task. The posture during a screwdriving task was analyzed, and, in this case, the position of the two smartphones produced the most distinctive features for most manual jobs performed by field workers. The data collected from the smartphone on the upper arm were used to measure total flexion, while the data collected by the smartphone mounted on the waist were used to measure trunk flexion.
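For static postures such as those analyzed by Nath and co-workers, a flexion angle can be estimated from the accelerometer alone, as the tilt of the measured gravity vector relative to the sensor's vertical axis. A simplified sketch (the axis convention and function name are assumptions, not the authors' implementation):

```python
from math import acos, degrees, sqrt

def tilt_deg(ax, ay, az):
    """Angle between the sensor's z axis and gravity, for a static posture
    where the accelerometer measures ~1 g of gravity only."""
    g = sqrt(ax * ax + ay * ay + az * az)
    return degrees(acos(max(-1.0, min(1.0, az / g))))  # clamp for safety

print(round(tilt_deg(0.0, 0.0, 1.0)))        # upright trunk -> 0
print(round(tilt_deg(0.0, 0.7071, 0.7071)))  # ~45 degrees of flexion
```

During movement, gyroscope fusion is needed because linear acceleration contaminates the gravity estimate; for the static-posture case discussed in the text, the tilt computation above is the essential step.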
Accelerometers in IMUs generally have higher sensitivity than those in smartphones, but for static postures a smartphone's built-in inertial sensors are as reliable as other standard tools, since smartphones are commonly equipped with a large number of sensors that can be activated to collect several types of data. This type of accelerometer seems to report back significant and useful data for assessing anomalies in body postures. The results presented focused on posture analysis for trunk and shoulder flexion, but, with a few modifications for other types of field activities (e.g., manual tasks, manual handling, and manual lifting), the developed methodology and analysis techniques can be generalized. Moreover, the proposed method is applicable to various occupations exposed to WMSDs due to awkward positions. Conclusions The outcomes of this review indicate that the number of research articles involving NGMSs for the implementation of the risk-assessment process has steadily increased in the past decade and is constantly growing. With the spread of the Fourth Industrial Revolution ("Industry 4.0"), the main problem of industrial hygiene ("occupational hygiene 4.0") is how to improve the risk-assessment process in these new high-tech plants while updating the traditionally adopted procedures. The concept of Industry 4.0 is the evolution of industry toward an intelligent model, in which collaborative robotics and new technologies interconnect workers and machine tools [6]. To properly preserve workers' safety and health during (and beyond) the Fourth Industrial Revolution, interest in wearable and low-cost sensors has grown steadily over the last 10 years. The state of the art of wearable-sensor technology is at an early stage, as it mostly addresses a few specific work-related health and safety parameters.
To date, there are only a few NGMSs that can properly be used in a workplace for exposure-assessment and risk-assessment purposes. Despite that, given the continuous advancement of new technologies, the performance and the number of NGMSs will further improve, yielding ever more advanced sensors. Smart devices currently available on the market must be considered a resource. An advantage of using small, wearable sensors is the possibility of obtaining a complete dataset over the entire work shift, even though the reliability of the batteries of these new sensors must be further improved. Moreover, it was highlighted that significant differences among devices can occur, even when they are used under the same conditions. Additionally, a fundamental and mandatory step before the use of these technologies is the evaluation of the performance and reliability of the sensors. To conclude, a preliminary study of these new technologies has been conducted here. The authors are also confident that scientific research on these topics can improve the currently available literature and support the development of proper instrumentation to implement the risk-assessment process.
Dynamics of an $n=1$ explosive instability and its role in high-$\beta$ disruptions Some low-$n$ kink-ballooning modes not far from marginal stability are shown to exhibit a bifurcation between two very distinct nonlinear paths that depends sensitively on the background transport levels and linear perturbation amplitudes. The particular instability studied in this work is an $n=1$ mode dominated by an $m/n=2/1$ component. It is driven by a large pressure gradient in weak magnetic shear and can appear in various high-$\beta$ hybrid/advanced scenarios. Here it is investigated in reversed-shear equilibria, where the region around the safety-factor minimum provides favorable conditions. For a certain range of parameters, a relatively benign path results in a saturated "long-lived mode" (LLM) that causes little confinement degradation. At the other extreme, the quadrupole geometry of the $2/1$ perturbed pressure field evolves into a ballooning finger that subsequently transitions from exponential to explosive growth. The finger eventually leads to a fast disruption with precursors too short for any mitigation effort. Interestingly, the saturated LLM state is found to be metastable; it, too, can be driven explosively unstable by finite-amplitude perturbations. Similarities to some high-$\beta$ disruptions in reversed-shear discharges are discussed. Introduction In tokamaks, crossing certain operational boundaries in plasma density ($n_e$) [1], current ($I_p$) [2,3], or pressure ($\beta$) [4] can lead to a disruption, a sudden and uncontrolled loss of thermal and magnetic energy in the plasma. Among these, high-$\beta$ disruptions are particularly challenging, not only because of the high thermal energy content of the plasma, but also because of their extremely fast time scales in some cases.
Most high-β disruptions are mediated by a (neoclassical) tearing mode (NTM), typically with mode numbers m = 2, n = 1, that for various reasons locks to the wall, grows in size, and eventually causes a loss of confinement [5,6]. Even when they do not lead to disruptions, NTM's tend to degrade confinement significantly, so their avoidance or stabilization in the ITER ELMy H-mode baseline scenario has been a high-priority research item (see for example [7,8]). When the plasma β is pushed beyond the no-wall limit in "hybrid/advanced tokamak" regimes, more dangerous n = 1 kink modes can become unstable. In the presence of a close-fitting wall, these are generally transformed into slow-growing resistive wall modes (RWM's) [9]. Again, if they are allowed to lock to the wall, RWM's can lead to disruptions. Fortunately, plasma rotation [10,11,12] and kinetic effects [13,14,15], coupled with feedback-control methods [5,16,17], can stabilize RWM's well above the no-wall β limit. Since both NTM's and RWM's grow on a slow, resistive time scale, disruptions caused by these modes are easily identified by their long precursors on various diagnostics. In fact, because of their relatively slow time scale, these are precisely the type of disruptions targeted by various disruption-mitigation schemes, which require at least a few 10's of milliseconds of warning time [18,19]. As stated earlier, however, tokamaks disrupt for a wide variety of reasons (see for example [6]), and not all disruptions follow this slow path where their arrival is well-advertised in advance; some in fact occur with little warning. Unfortunately, their very fast time scale apparently makes detailed studies difficult, and it is likely that their rare appearance in the literature does not accurately reflect their actual frequency in experiments. There do exist some documented high-β disruptions with precursors of the order of a millisecond or less.
For example: β-limit disruptions in TFTR due to toroidally localized ballooning modes in the presence of n = 1 magnetohydrodynamic (MHD) activity [20], localized resistive interchange modes that couple to a global n = 1 mode and lead to a disruption in negative central shear (NCS) discharges in DIII-D [21,22], and disruptions following an internal transport barrier (ITB) collapse in JET [23]. In these discharges, some of the important details were clearly different: in TFTR, at least initially, the q = 1 surface was involved, whereas DIII-D and JET presumably both had $q_{min} \gtrsim 2$. But generally, a large pressure gradient in regions of weak magnetic shear is believed to have played an essential role. Thus, one of the things we will do in this work is a short review of the resistive and ideal stability of such configurations. However, linear stability analysis alone cannot explain the fast time scale of these disruptions. The mode that is involved has to be growing at near-Alfvénic rates to account for the time scale, but it is not clear how a discharge evolving on the slow transport time scale can generate an unstable mode with a near-Alfvénic growth rate without producing a long series of precursor oscillations during its sub-Alfvénic period. The point raised above is in fact part of a more generic problem: If an event (e.g., a sawtooth crash, edge-localized mode (ELM) crash, disruption, etc.) that makes macroscopic changes in the state of a discharge on a time scale $\tau_e$ is attributed to some global instability, the instability growth rate $\gamma_e$ at the time the event is observed has to be commensurate with that time scale, i.e., we need $\gamma_e \sim O(1/\tau_e)$. We can safely assume $1/\gamma_e \sim \tau_e \ll \tau_t$, where $\tau_t$ is a characteristic transport time scale; otherwise the "event" cannot be distinguished from ordinary transport.
If we assume that the changes in the mode growth rate occur entirely because of modifications to the background equilibrium by transport, then there will necessarily be a long period during which $1/\tau_t < \gamma(t) < \gamma_e$, i.e., a time when the mode is growing faster than the transport rate but does not yet have the eventual "crash" rate. But then we are faced with two related questions: (i) During this period, can the mode grow without being detected? The short answer is probably "no," since it is hard to imagine how such a mode could avoid generating a long series of precursors during the period mentioned. (ii) Since it is now growing faster than the transport time scale, would it not "self-stabilize" and saturate without causing the "event," by modifying the equilibrium faster than transport processes? Here a general answer is again difficult, but a mode that depends very sensitively on local conditions for stability can probably "self-stabilize" and saturate more easily than a global mode like an m = 1 resistive kink or one that is responsible for a major disruption. Thus, disruptions or other events that occur without long precursors seem to require a different evolution scenario than the one proposed above. Instead of the mode growth rate $\gamma(t)$ slowly evolving with the background equilibrium, we have to consider mechanisms that can make changes in $\gamma(t)$ at a rate much faster than expected from transport alone. A mechanism proposed by Hastie [24] for fast sawtooth crashes and by Callen [25] for the DIII-D disruption mentioned earlier assumes that, in response to a linearly increasing plasma pressure driven by auxiliary heating, the growth rate of a pressure-driven mode grows as $\gamma(t) \propto \beta^{1/2} \propto t^{1/2}$, where the plasma β is defined as $\beta = 2\mu_0 \langle p \rangle / B^2$ and $\langle \cdot \rangle$ denotes a volume average.
Then, for large times, the plasma displacement, assumed to satisfy $\ddot{\xi} = \gamma^2(t)\,\xi$, can be shown to evolve as $\xi \propto \exp\left[(2/3)\,\gamma_h \tau_h\,(t/\tau_h)^{3/2}\right]$, with $\gamma_h \sim v_A/L$, where $v_A$ is the Alfvén velocity, L is a global length scale, and $\tau_h$ is the slow "heating time," comparable to the "transport time" mentioned earlier, $\tau_h \sim \tau_t$. Thus, this mechanism seems to lead to faster-than-exponential growth and possibly explains the near-absence of precursors before some sawtooth crashes or disruptions. However, Cowley [26] has shown that, because of the large separation between the MHD and transport time scales, this path to super-exponential growth requires an unrealistically small initial perturbation and can be ruled out. There are, however, nonlinear processes in plasmas that can generate explosive (faster-than-exponential) growth while the underlying mode is still not far from marginal stability. In a numerical study of the semi-collisional/collisionless m = 1 mode using a reduced two-fluid model, nonlinearities involving the parallel pressure gradient were shown to give a near-exponential increase in the growth rate of the mode [27], providing a possible explanation for precursor-less, fast sawtooth crashes. Similarly, Cowley and colleagues [26,28,29,30] have shown that the nonlinear evolution of pressure-driven modes can generate finite-time singularities, again demonstrating how a long period of precursors can be avoided during a fast disruptive event. In this work we extend our study of a specific example [31], a pressure-driven n = 1 kink-ballooning mode that can continue to grow exponentially well into its nonlinear regime and become explosive, with an apparent finite-time singularity at the end. We show that it can actually exhibit two very different types of nonlinear behavior depending on small differences in the assumed transport levels and linear perturbation amplitudes. In addition to the explosive behavior, it can also display a more benign evolution and saturate in a "long-lived mode" (LLM) with only minor confinement degradation [32].
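For reference, the super-exponential estimate invoked by this mechanism follows from a WKB-type integration of the displacement equation; with the illustrative normalization $\gamma_h \sim v_A/L$,

```latex
\ddot{\xi} = \gamma^{2}(t)\,\xi , \qquad
\gamma(t) = \gamma_h \left(\frac{t}{\tau_h}\right)^{1/2}
\quad\Longrightarrow\quad
\xi(t) \sim \exp\!\left[\int_{0}^{t}\gamma(t')\,dt'\right]
       = \exp\!\left[\frac{2}{3}\,\gamma_h \tau_h \left(\frac{t}{\tau_h}\right)^{3/2}\right],
```

so the exponent grows like $t^{3/2}$ rather than linearly in time, which is the faster-than-exponential behavior discussed in the text.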
The LLM itself is shown to be a meta-stable state; it can be pushed into the explosive regime with small changes in the transport coefficients or with a finite-size perturbation. The experimental context for this computational study is KSTAR discharges with q_95 ≈ 7, q_0 ≳ 2, and a low inductive current fraction, similar to some hybrid/advanced scenarios [33]. With on-axis electron cyclotron resonance heating (ECRH), the pressure profile peaks and drives an ideal m/n = 2/1 mode that saturates at a small amplitude. The resulting long-lived mode (LLM) survives many tens of seconds (as long as ECRH is maintained), with only a small effect on confinement [32]. Although there are no documented examples for the explosive version of this mode in KSTAR, in the absence of any detailed study of KSTAR disruptions, their existence cannot be ruled out. The computational tool used here is the CTD code, which solves the nonlinear MHD equations in toroidal geometry (see [34] and the references therein). Before moving on to a discussion of the nonlinear results, in the next section we briefly review the salient features of the linear stability of pressure-driven modes. Linear stability A general understanding of the stability of pressure-driven modes can be obtained from a cursory examination of the ideal MHD energy integral, written here in its "intuitive form" (plasma contribution only) [9,35]: δW_p = (1/2)∫dV [ |Q_⊥|²/µ₀ + (B₀²/µ₀)|∇·ξ_⊥ + 2ξ_⊥·κ|² + γp|∇·ξ|² ] − (1/2)∫dV [ 2(ξ_⊥·∇p)(κ·ξ_⊥) + J_∥(ξ_⊥ × b)·Q_⊥ ]. The largest stabilizing contribution tends to be the |Q_⊥|² term in the first integral representing the line-bending energy, where Q = ∇ × (ξ × B₀) is the perturbed field. The destabilizing pressure-gradient and parallel current terms are grouped together in the second integral. The pressure gradient makes a destabilizing contribution to δW_p only in those regions where the field-line curvature is unfavorable, i.e., where κ · ∇p > 0. By having the displacement vanish where the curvature is favorable, κ · ∇p < 0, the net destabilizing contribution from the pressure forces can be maximized.
This is the path the ballooning modes take, but they pay a price in excess line-bending energy since the perturbation is not constant along the field lines. Another path for pressure-driven instabilities opens up if the magnetic shear is weak in a region of finite width. Simplifying and expanding Q around a rational surface q(r_s) = m/n, we have Q_⊥ ≃ ξ_⊥(ik · B₀), with k · B₀ = (B_θ/r)(m − nq) ≃ −(B_θ/r) n q′ (r − r_s). Thus, if the global shear s ≡ rq′/q is weak enough in regions with strong pressure gradients, interchange-like modes become possible even in "Mercier-stable" equilibria with q² > 1, first recognized by Zakharov [36]. In fact, a rational surface is not necessary for instability. With q ≃ (m + ε)/n, 0 < ε ≪ 1, and s ≪ 1, the line-bending energy can be minimized again since Q²_⊥ ∼ ξ²_⊥ ε², which can be overcome by a strong-enough pressure gradient. This simple piece of physics, strong pressure drive coupled with weak shear, is behind the quasi-interchange mode [37,38,39,40] (for q_min ≃ 1) and the "infernal" modes [41,42,43] (for q_min > 1), both pressure-driven modes in low-shear equilibria. The former was studied in the context of fast sawtooth crashes caused by an internal m/n = 1/1 mode. The latter are particularly dangerous global modes that can be unstable much below the n → ∞ ballooning limit and lead to major disruptions. They are typically thought of as "low-n" modes, but of course the same physics can also make the n = 1 mode unstable, which will be the focus of this work. For computational economy, our earlier work [31] focused on the nonlinear evolution of pressure-driven n = 1 modes in circular geometry, in both monotonic and weakly-reversed q profiles. Here we will extend it to non-circular geometry and provide a brief review of the linear properties of the relevant modes. Partly because of KSTAR's recent interest in advanced scenarios with internal transport barriers (ITB's), we will mainly consider reversed-shear equilibria with q_min > 2.
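The role of weak shear can be made concrete with a toy q-profile: the line-bending energy density scales with |m − nq|², so a flat region with q_min just above m/n keeps this factor small over a wide radial extent. A minimal sketch with hypothetical profiles (the coefficients are illustrative and not taken from the paper's equilibria):

```python
import numpy as np

r = np.linspace(0.0, 1.0, 201)
# Hypothetical reversed-shear profiles, both with q_min = 2.02 at r = 0.4
q_weak   = 2.02 + 0.5*(r - 0.4)**2   # weak shear around q_min
q_strong = 2.02 + 4.0*(r - 0.4)**2   # stronger shear, same q_min

def low_kpar_fraction(q, m=2, n=1, eps=0.1):
    """Fraction of the minor radius where |m - n*q| < eps, a proxy for the
    region over which an m/n perturbation avoids paying line-bending energy."""
    return float(np.mean(np.abs(m - n*q) < eps))
```

For these profiles, weakening the shear nearly triples the low-|k·B₀| region available to the m/n = 2/1 perturbation.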
Typical equilibrium profiles used in the linear and some of the nonlinear calculations are shown in Fig. 1. The shaped geometry has κ = 1.5 (elongation) and δ = 0.6 (triangularity) within a perfectly conducting boundary; these geometric parameters are held fixed, except when we revisit circular geometry. The CTD code uses a conformal transform from the poloidal plane to a unit circle in (ρ, ω) coordinates to deal with weakly-shaped equilibria [44]. The coordinate axis is shifted to approximately align the ρ = const. surfaces with flux surfaces, but ρ is not a flux coordinate. For this reason, the plots as in Fig. 1 (b,c) show both the ω = 0 (outboard) and ω = π (inboard) sections of the mid-plane. Note that a simple pressure profile without an internal or edge transport barrier is used to simplify the discussion. As expected, resistivity enlarges the instability domain for the n = 1 mode (n > 1 stability is not considered in this work) so that an unstable mode is observed well below the ideal MHD stability limits. However, we find that the nature of the unstable resistive mode can be confusing. The "infernal" mode theory predicts a mode with a tearing scaling, γτ_A ∝ S^(−3/5), at low β. Close to the ideal stability boundary, the resistivity scaling is weaker, γτ_A ∝ S^(−3/13), becoming independent of S beyond the ideal limit [42]. Here we define τ_A as the shear-Alfvén time. The magnetic Reynolds (Lundquist) number is then given by S = τ_R/τ_A, where τ_R = µ₀a²/η is the resistive diffusion time, and a, R₀ are the minor and major radii of the torus, respectively. Throughout this work, S is defined in terms of the value of resistivity at the coordinate axis, but the resistivity itself is in general a function of the poloidal coordinates such that η(ρ, ω)J_ζ0(ρ, ω) = E₀ = const. The electric field E₀ is associated with the "loop voltage." Numerically it is used to prevent the Ohmic diffusion of the equilibrium current during long nonlinear calculations.
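The constraint η(ρ, ω)J_ζ0(ρ, ω) = E₀ = const fixes the resistivity profile once the equilibrium current density and the on-axis value of S are chosen. A minimal sketch (the profile shape and normalization conventions here are assumptions for illustration, not the CTD code's actual implementation):

```python
import numpy as np

def resistivity_from_current(J_zeta0, S_axis=1.0e6):
    """Return eta with eta*J_zeta0 = E0 = const, normalized so that the
    on-axis resistivity gives S = 1/eta(axis) (units with mu0 = a = tau_A = 1)."""
    eta_axis = 1.0/S_axis
    E0 = eta_axis*J_zeta0.flat[0]   # axis value assumed to be the first element
    return E0/J_zeta0

# Hypothetical peaked current profile, J(axis) = 1
rho = np.linspace(0.0, 1.0, 5)
J = 1.0/(1.0 + 2.0*rho**2)
eta = resistivity_from_current(J)
```

Since η ∝ 1/J, the resistivity rises toward the edge where the current is small, and the product ηJ is uniform, so the equilibrium current does not diffuse ohmically.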
The well-known interchange theory also predicts a resistive mode with the usual interchange scaling, γτ_A ∝ S^(−1/3) [45], but only for reversed-shear equilibria. If we briefly recall the relevant theory, ideal interchange (Mercier) instability requires D_I > 0, which in a circular tokamak at low β essentially requires q < 1 [46]. Although rare, Mercier modes have actually been observed experimentally [47]. Resistive instabilities require D_R ≡ D_I + (H − 1/2)² > 0, where H ∝ −p′/q′. We see that D_R > 0 is possible, even when Mercier stable (D_I < 0), with weakly-reversed shear (H < 0) at high enough β. At lower β this mode also reverts to the tearing scaling with S^(−3/5). Stability of the equilibrium in Fig. 1 is summarized in Fig. 2, where we plot the growth rate of the n = 1 mode as a function of the magnetic Reynolds number S for various values of the normalized β, β_N = β(āB)/I_p. Here ā = a[(1 + κ²)/2]^(1/2) is an equivalent minor radius defined for an equilibrium with elongation κ, and I_p is the plasma current. The S-scans are performed at a constant magnetic Prandtl number, P_M = µ/η = 10, where µ is the normalized viscosity coefficient. Although normalized viscosity tends to be higher than resistivity in fusion plasmas, this value of P_M is chosen entirely for numerical reasons. At β_N = 1.82 (Fig. 2 (a)), there is a weakly unstable resistive mode; both the infernal mode and the resistive interchange theory seem to predict here a tearing-like scaling with γτ_A ∝ S^(−3/5) (the blue dashed line), but we find that a stronger dependence with γτ_A ∝ S^(−3/4) (the red line) is a better fit to the numerical data. Neither one of these theories takes into account viscous effects. The classical viscous-tearing mode theory that assumes P_M < 1 predicts a mode with the S^(−2/3) scaling [48], which is somewhat weaker than the S^(−3/4) scaling we observe. It is possible that for P_M ≫ 1, the S^(−2/3) scaling changes to S^(−3/4), but that possibility has not been investigated. At β_N = 2.62 (Fig.
2(b)), the mode is still resistive and has a clear resistive interchange scaling, γτ_A ∝ S^(−1/3). Here it is possible that there is a resistive infernal mode (with the S^(−3/13) scaling [42]) that is in competition with the interchange, but it is not observed numerically. The weak reversed shear (not considered in the infernal mode theory) may be making the resistive interchange the dominant mode in this particular parameter regime. At an even higher β (β_N = 3.35, panel (c)), the mode is ideally unstable (with wall). During this study, β_N was increased in large steps so that the locations of the transition points between various regimes are not known, a task left for future work. Since there is no q = 2 rational surface in the plasma for the series of equilibria considered here (q_min > 2), and because of the wide region of weak magnetic shear around q_min (see Fig. 1), the eigenfunctions do not exhibit a distinctive "singular" behavior there. In fact, they have the features of a global kink mode, as seen in Fig. 3. For β_N = 1.82 (panels (a,b)), there is a strong coupling to an m/n = 3/1 mode near the q = 3 surface. Above we discussed in some detail the linear stability for q_0 = 2.15, q_min = 2.02, mainly to place our nonlinear calculations below in some context. Summarizing our other linear results, for a more deeply-reversed equilibrium with q_0 = 2.57, q_min = 2.02 we find no ideal instability for β_N ≤ 3.08, the limit of our numerical explorations for this q-profile. On the other hand, for an equilibrium with q_0 = 2.03, q_min = 2.02 (much weaker central shear), we find that β_N ≥ 2.72 is ideally unstable, although the actual stability boundary has not been explored and is probably lower (but higher than β_N = 1.90, where we find a resistive mode). The explosive instability and disruptions Nonlinearly the pressure-driven n = 1 mode can turn into an explosive instability, as it was first demonstrated in circular geometry [31].
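The S-scalings quoted in the linear-stability discussion are exponents extracted from scans like those in Fig. 2; a standard way to obtain them is a least-squares fit in log-log space. A minimal sketch with synthetic data (the numbers below are illustrative, not the paper's):

```python
import numpy as np

def fit_power_law(S, gamma_tau_a):
    """Fit gamma*tau_A = C * S^alpha by linear regression of ln(gamma) on ln(S)."""
    alpha, ln_c = np.polyfit(np.log(S), np.log(gamma_tau_a), 1)
    return alpha, np.exp(ln_c)

# Synthetic S-scan mimicking a gamma*tau_A ~ S^(-3/4) resistive mode,
# with ~1% multiplicative "numerical" scatter
S = np.logspace(5, 7, 6)
gamma = 0.2*S**(-0.75)*(1.0 + 0.01*np.sin(np.arange(6)))
alpha, C = fit_power_law(S, gamma)
```

In practice one compares the fitted α against the candidate theoretical exponents (−3/5, −1/3, −3/13, ...) as done in Fig. 2.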
Here we discuss the nonlinear evolution of the mode in shaped geometries using the linear results of the previous section as starting points. An ideally unstable n = 1 mode (e.g., one from Fig. 2 (c)) with its large growth rate will naturally lead to a fast disruption. As discussed at some length in the Introduction, however, MHD modes are not "born" in this robustly unstable state. They tend to come into existence as weak resistive instabilities as the equilibrium slowly (on a transport time scale) passes through some marginal stability point due to the evolving discharge conditions. Hence our goal in this section is to demonstrate how a weak resistive instability can evolve into a robust mode that will result in a fast disruption with only a brief period of precursors. Thus we start with a weakly unstable equilibrium similar to that of Fig. 1; some of the relevant parameters are: q_0 = 2.145, q_min = 2.023, q_l = 9.424, β_N = 1.67, which results in an even weaker instability than that of Fig. 2(a). The nonlinear calculations are performed at S = 10^6, P_M = µ/η = 10, using 21 toroidal Fourier modes. The poloidal Fourier expansions have m ∈ [0, 64] for n = 0 and m ∈ [−5, 64] for n ∈ [1, 20]. The finite difference scheme in the radial direction uses 192 grid points. Some of the algorithmic details of the CTD code used here can be found in [34] and the references therein. With weak shear and q_min > 2, an important feature of the linear eigenfunction is the dominance of the m = 2 poloidal component. This is clearly seen in Fig. 4(a), which shows the quadrupole geometry of the pressure perturbation (some coupling to an m = 3 on the outside is also visible). This perturbation leads to an elliptical deformation of the flux surfaces in the core plasma that eventually forms a ballooning finger, as seen in Figs. 4(b-c). The finger pushes through the flux surfaces on near-Alfvénic time scales and brings the core plasma in contact with the boundary (Fig. 4(d)).
Using typical parameters for modern tokamaks (B = 3 T, n_e = 10^19 m^−3, R_0 = 3 m), the state with no visible deformation in Fig. 4(b) and the final state in Fig. 4(d) are separated by less than 1 ms. Thus an actual disruption following this path would have a very short warning time. The rapidity of the final disruptive phase is clearly due to a nonlinear increase in the growth rate rather than a slow, transport time scale change. This is seen in Fig. 5 where the kinetic energy in the n ≥ 1 modes (excluding the n = 0 equilibrium flows) is plotted. Two points are immediately obvious: (i) The mode continues to exponentiate well into the nonlinear regime with a growth rate γτ_A = 1.70 × 10^−3 (the dashed red line in panel (a)). (ii) Instead of saturation, the late stages are characterized by a super-exponential or explosive growth. In fact this phase has the appearance of a finite-time singularity (panel (b)), where the growth has the form of a power-law divergence, E_k ∝ (t_f − t)^−ν, over an interval beginning at t_i = 2773.9, with t_f = 4277.6 and the exponent ν = 2.05. In our earlier calculations in circular geometry the explosive phase was even faster, with ν = 3.37 [31]. The slowdown seen here has both physical and numerical sources: The underlying n = 1 mode is inherently more stable in shaped geometry. And, because the poloidal spectrum for each toroidal mode is much wider in shaped geometry, fewer (n_max = 20) toroidal modes were used here. In circular geometry we were able to use n_max = 30, which allowed us to continue the calculations further into the explosive phase. The ballooning finger of Fig. 4 has an extended structure along the field lines with q_s = m/n = 2/1 helicity. As seen in Fig. 6, this symmetry is preserved as the finger moves outward.
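The "less than 1 ms" warning time quoted above can be checked by converting the code-time interval to seconds. A minimal sketch assuming a deuterium plasma and the common definition τ_A = R₀√(µ₀ n_e m_i)/B (the paper's exact Alfvén-time normalization may differ somewhat):

```python
import math

def alfven_time(B=3.0, n_e=1.0e19, R0=3.0, m_i=3.3436e-27):
    """tau_A = R0*sqrt(mu0*rho)/B in SI units, with rho = n_e*m_i (deuterium)."""
    mu0 = 4.0e-7*math.pi
    return R0*math.sqrt(mu0*n_e*m_i)/B

tau_A = alfven_time()                      # ~0.2 microseconds
# The explosive interval spans roughly t_f - t_i ~ 1500 tau_A in code units
duration_ms = (4277.6 - 2773.9)*tau_A*1.0e3
```

This gives a few tenths of a millisecond, consistent with the sub-millisecond warning time stated in the text.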
Because of magnetic shear, however, it becomes more localized in both parallel (along the field) and perpendicular directions (compare panels (a) and (b)) as it moves through regions of increasing safety factor in order to reduce the line-bending energy. Thus, under experimental conditions, when it eventually comes into contact with the wall or limiter, its thermal content will be deposited in a small area and possibly cause serious damage. There have been observations of ballooning fingers during high-β disruptions in TFTR [20,49]. JET also has reported similar results. During disruptions following an ITB collapse, localized disturbances in the ECE data that propagate from the ITB to the edge at velocities approaching 3 km/s are seen [23]. These experimental observations can be associated with the radial propagation of a ballooning finger as shown here.

Figure 6: Pressure contours in the (ω, ζ) plane at t = 4270.9. The outboard (inboard) midplane is at ω = 0 (π). Two constant-ρ surfaces are shown: (a) At ρ = 0.31, the finger shows a ballooning structure but is almost completely extended around the torus along the field lines. (b) Further out at ρ = 0.84, the finger is more localized, in both parallel and perpendicular directions. Note that the m/n = 2/1 helicity is preserved.

In fact, a comparison of synthetic ECE diagnostics from our calculations with the JET data from Ref. [23] (their Fig. 4) shows good agreement, as seen in Fig. 7. Although there are several observations in KSTAR that may be associated with fast, high-β disruptions without significant precursors, the necessary analysis to link them formally to an explosive instability has not been carried out. Bifurcated states Generally, away from exact marginal points, we expect small changes in a relevant parameter to result in similarly small changes in the evolution of an unstable mode.
Thus, small differences in the resistive dissipation level rarely have a significant impact on the saturation width of a tearing mode. However, there are counterexamples where, for instance, an increase in the Prandtl number (ratio of viscous to resistive dissipation) beyond a threshold leads to a qualitatively different nonlinear regime [50]. Here we demonstrate an extreme case where a small change in a transport coefficient leads to a bifurcation between a benign, saturated state (the long-lived mode, LLM) and an explosive instability for the n = 1 kink-ballooning mode. For computational economy, we expand upon our earlier results [31,32] while staying in circular geometry. The bifurcation is summarized in Fig. 8, where we follow the nonlinear evolution of the mode starting with the same initial conditions and linear perturbation, but using slightly different transport coefficients. With S = 10^6, thermal conductivity κ_⊥ = 4 × 10^−6 and viscosity µ = 1 × 10^−5, the mode goes through an exponential growth phase but saturates at a small amplitude (curve (1) in Fig. 8(a)). This regime is identified with the long-lived mode (LLM) observed in KSTAR, where an m/n = 2/1 perturbation is seen in the electron cyclotron emission imaging (ECEI) data for many tens of seconds during the current flat-top period. The experimental conditions under which the LLM was observed are described in more detail in [32,51]. With slightly lower dissipation, the mode instead transitions to the explosive regime (curves (2) and (3) in Fig. 8(a)). This bifurcation can be understood qualitatively if we assume the explosive phase has a finite threshold in the perturbation amplitude. Dissipation affects both the linear growth rate and the nonlinear saturation amplitude of the unstable mode. The higher dissipation level clearly causes the mode to saturate below the apparent threshold. This point is confirmed in the next section, where we show that curve (1) of Fig. 8(a) represents a continuous set of metastable states. In order to bring out the details, the axisymmetric component (n = 0) has been subtracted.
Metastability The n = 1 mode for the set of equilibria we consider here is linearly unstable, which implies that an infinitesimal perturbation will grow in time exponentially, at least until the mode attains a finite amplitude. The results of the previous section imply that whether it turns into a LLM or becomes explosive is determined by a critical perturbation amplitude that itself is a function of the dissipation coefficients, ξ_cr = ξ_cr(η, µ, κ_⊥). If it nonlinearly saturates below the threshold, ξ < ξ_cr, the result is a benign long-lived mode. Above ξ_cr it becomes explosive. Thus, the pressure-driven n = 1 mode is said to exhibit metastability. In this section this theoretically predicted behavior [30] is demonstrated numerically. Clearly, the quasi-equilibrium states representing the early nonlinear phase of the long-lived mode are metastable with respect to finite (as opposed to infinitesimal) perturbations. Transition to the explosive regime requires a perturbation amplitude above a threshold. The critical amplitude increases in time, which is expected since the background pressure profile relaxes due to dissipation, thus gradually reducing the free-energy source for the mode. One dimensional model In this section we present a simple system that exhibits linear instability, metastability and explosive behavior, which should make the nonlinear results of the previous sections more intuitive and easier to understand. Of course the results of this one-dimensional model are not meant to be a quantitative explanation for the complex nonlinear behavior seen with the full MHD equations. We start with the 1D equation of motion for a particle moving in a potential φ(ξ) and experiencing a damping force, ξ̈ = −dφ/dξ − µξ̇, where µ is the damping coefficient. We choose the potential φ to be a quartic polynomial with coefficients c_0 = −0.48, c_1 = 0.8, c_2 = 2, c_3 = 3. As seen in Fig. 10 (a), the point ξ = 0 represents a linearly unstable equilibrium (we will consider only ξ ≥ 0 here).
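The behavior of this damped-particle model can be reproduced with a few lines of numerical integration. The quartic used below is hypothetical, chosen only to mimic the qualitative shape described here (an unstable point at ξ = 0, a metastable well, and a saddle); its coefficients are not the paper's c_0...c_3:

```python
def phi_prime(xi):
    """dphi/dxi for an illustrative quartic potential: unstable equilibrium at
    xi = 0, metastable well near xi = 1.4, saddle near xi = 2.6."""
    return -xi*(xi - 1.4)*(xi - 2.6)

def evolve(mu, xi0=0.01, v0=0.0, dt=0.005, t_end=200.0):
    """Integrate xi'' = -phi'(xi) - mu*xi'; return (escaped, final_xi)."""
    xi, v = xi0, v0
    for _ in range(int(t_end/dt)):
        v += (-phi_prime(xi) - mu*v)*dt
        xi += v*dt
        if xi > 5.0:              # well past the saddle and accelerating
            return True, xi       # "explosively unstable"
    return False, xi              # trapped: settles in the metastable well
```

With strong damping (µ = 2.0) the particle settles near the well; with weak damping (µ = 0.05), or a large enough v_0 at fixed µ, it crosses the saddle and runs away, mirroring the LLM/explosive bifurcation.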
With any positive velocity perturbation (v_0 > 0) the particle will move down the potential hill, exhibiting "linear instability." But if the damping coefficient is large enough (µ > µ_cr(v_0)), it will not be able to climb out of the well and eventually settle at the metastable equilibrium point marked with m at ξ = 1.39. This behavior is shown in Fig. 10 (b), where, after a period of exponential growth, the displacement ξ(t) exhibits damped oscillations. However, with only slightly lower dissipation, the particle is able to move past the saddle point s at ξ = 2.60 and become "explosively unstable," as seen in panel (c). Of course the same result can be achieved by keeping the dissipation level constant but increasing the initial perturbation (panel (d)). The damped oscillations of Fig. 10 (b) correspond to the long-lived mode described by curve (1) in Fig. 8 (a). The explosive instability in Fig. 10 (c) that develops at a lower dissipation level corresponds to the curves (2) and (3) of Fig. 8 (a). Finally the explosive instability in Fig. 10 (d) that results from a higher initial velocity corresponds to the explosive curve 1b in Fig. 9 that follows a larger perturbation of the initial equilibrium. Curves 2b, 3b of Fig. 9 have not been simulated with the 1D model, but clearly they correspond to large-velocity perturbations of the damped oscillations in Fig. 10 (b). Summary and discussion In this work we demonstrate that the nonlinear evolution of a pressure-driven n = 1 kink-ballooning mode can exhibit a bifurcation between a benign final state with little confinement degradation, a long-lived mode (LLM), and an explosive instability that results in a fast disruption with very short precursors. The bifurcation depends sensitively on assumed transport levels and the initial perturbation amplitude. Large diffusive transport or too small a perturbation leads to a saturated n = 1 LLM.
Equivalently, there is a transport-dependent critical perturbation amplitude, v cr = v cr (η, µ, κ ⊥ , . . .), such that v > v cr leads to explosive behavior. The long-lived mode itself is metastable and can be pushed into the explosive regime, again with a finite perturbation above a threshold. Thus it is possible that a LLM can abruptly terminate with a fast disruption. Since a benign LLM is a possible end state, it is clear that the initial n = 1 instability has to be weak and not too far from an instability threshold; a robust and ideally unstable n = 1 is unlikely to saturate without serious deleterious effects on confinement. Thus we have concentrated on weak, resistive modes far from ideal instability boundaries. This choice follows also from the expectation that an MHD mode does not come into existence as a robustly unstable ideal mode with a large growth rate; the resistive thresholds tend to be much lower and the instability generally appears first as a weak, resistive mode. But this feature (weak instability) that makes a LLM possible would at the same time seem to make it difficult to explain a fast disruption with little or no precursors, the other possible end state of the bifurcation. This difficulty is resolved by the numerical observation, with some theoretical support (e.g., [30]), that a weakly growing n = 1 mode can become explosive nonlinearly. Thus, a feeble resistive instability can transform into a robust ideal mode in a short period of time. Although a detailed understanding of the nonlinear process responsible for this transformation is lacking at this point, a simple one-dimensional model is presented that mimics essentially all its important features: a linear instability that can either saturate in a metastable state or lead to a nonlinear explosive instability. 
Of course the most important consequence of this explosive instability is that it obviates the need for a long period of mode evolution on transport time scales where it gradually becomes stronger with the changing background equilibrium. Thus a long series of precursor oscillations is avoided. One particular feature of the pressure-driven n = 1 mode that plays an important role in the transition to the explosive regime is the quadrupole geometry of the pressure perturbation due to the dominant m/n = 2/1 harmonic. This perturbation naturally adds an elliptical deformation to the flux surfaces, which can nonlinearly turn into a ballooning finger. Once formed, the finger rapidly moves outward, pushing through flux surfaces while essentially maintaining its original 2/1 symmetry. To minimize bending of the field lines as it moves into regions with q > 2, it becomes localized both in parallel and perpendicular directions. Thus, although it is originally quite extended along the field lines, it turns into a highly local nonlinear structure as it moves to the edge, transporting a significant portion of the energy in the core to a small area on the wall. Although not shown here but discussed elsewhere [31], the rapid ejection of the core is accompanied by stochastization of the field outside the finger, while the finger itself remains well-confined by regular flux surfaces. These features are consistent with jet-like flows in some high-β disruptions in TFTR [20] and JET [23]. It is likely that the global n = 1 mode responsible for the well-documented fast high-β disruption in DIII-D [21,22] also has similar origins. Finally, ITER disruption mitigation efforts seem to be based on the anticipation that a resistive wall mode or a tearing mode would slowly lock to the wall prior to the disruption, giving at least a few tens of milliseconds of warning time.
Without a perfect pressure-profile control system, fast disruptions of the type discussed in this work would make the efficacy of this approach questionable in some of the advanced scenarios planned for ITER. Acknowledgements This work was supported by MSIP, the Korean Ministry of Science, ICT and Future Planning, through the KSTAR project.
Health-Promoting Properties of Eucommia ulmoides: A Review Eucommia ulmoides (EU) (also known as "Du Zhong" in Chinese) is a plant containing various kinds of chemical constituents such as lignans, iridoids, phenolics, steroids, flavonoids, and other compounds. These constituents of EU possess various medicinal properties and have been used in Traditional Chinese Medicine (TCM) as a folk drink and functional food for several thousand years. EU has several pharmacological properties such as antioxidant, anti-inflammatory, antiallergic, antimicrobial, anticancer, antiaging, cardioprotective, and neuroprotective properties. Hence, it has been widely used solely or in combination with other compounds to treat cardiovascular and cerebrovascular diseases, sexual dysfunction, cancer, metabolic syndrome, and neurological diseases. This review paper summarizes the various active ingredients contained in EU and their health-promoting properties, thus serving as a reference material for the application of EU. Introduction Eucommia ulmoides (EU) (commonly called "Du Zhong" in Chinese) belongs to the family Eucommiaceae and is a small tree native to Central China [1]. This plant is widely cultivated in China on a large scale because of its medicinal importance. About 112 compounds have been isolated from EU, including lignans, iridoids, phenolics, steroids, and other compounds. Complementary herbal formulas of this plant (such as delicious tea) have shown some medicinal properties. The leaf of EU has higher activity relative to the cortex, flower, and fruit [2,3]. The leaves of EU have been reported to enhance bone and muscle strength [4], thus leading to longevity and promoting fertility in humans [5]. Delicious tea formula made from the leaf of EU was reported to reduce body fat and enhance energy metabolism.
Flavonoid compounds (such as rutin, chlorogenic acid, ferulic acid, and caffeic acid) have been reported to exhibit antioxidant activity in the leaves of EU [6]. Although there is ample literature on the phytochemical properties of EU, few studies exist on the pharmacological properties of the various compounds extracted from the barks, seeds, stems, and leaves of EU. This review paper will elucidate detailed information regarding the different compounds extracted from the various parts (barks, seeds, stem, and leaf) of EU and the prospective uses of these compounds in health promotion, with scientific lines of evidence, and thus provide a reference material for the application of EU. Chemical Composition of Eucommia ulmoides Various compounds isolated from different parts of EU are shown in Table 1. Lignans and Iridoids. Lignans and their derivatives are the key components of EU [7]. To date, 28 lignans (such as bisepoxylignans, monoepoxylignans, neolignans, and sesquilignans) have been isolated from the bark, leaves, and seeds of EU. Iridoid glycosides, a class of secondary metabolites, are the second main component of EU. Iridoids typically occur in plants as glycosides. Twenty-four iridoids have been isolated and identified from EU (Table 1). These isolated compounds include geniposidic acid, aucubin, and asperuloside, which have been reported to have wide pharmacological properties [8][9][10]. Two new iridoid compounds, Eucommides-A and -C, have recently been isolated. These two natural compounds are considered conjugates of iridoids and amino acids. However, the mechanism underlying their activity is not available [11]. Phenolic Compounds. Phenolic compounds derived from foods have been reported to have a positive impact on human health [12,13]. About 29 phenolic compounds have been isolated and identified from EU [14].
The total content of phenolic compounds (in gallic acid equivalents) of all the extracts was analyzed using the Folin-Ciocalteu phenol reagent. Effects of seasonal variation on the contents of some compounds and antioxidants have been reported. Within the same year, higher contents of phenolics and flavonoids were found in the leaves of EU in August and May, respectively. Rutin, quercetin, geniposidic acid, and aucubin existed in higher concentrations in May or June [15]. Moreover, higher 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging activity and metal ion chelating ability were found in the leaves of EU harvested in August. Increased content of food antioxidants was also reported in May when compared to other periods of the year [15]. The leaf of EU has been found to be a rich source of amino acids, vitamins, minerals, flavonoids such as quercetin and rutin, and geniposidic acid [11,16]. A total of 7 flavonoids have been isolated from Eucommia plants [17]. Rutin and quercetin are the most important flavonoids [18]. Flavonoids are important compounds which are common in nature and are considered secondary metabolites; they function as chemical messengers, physiological regulators, and cell cycle inhibitors. Polysaccharides. Polysaccharides from EU administered for 15 days at concentrations of 300-600 mg/kg were reported to exhibit protective effects on the kidneys, as observed from malonaldehyde and glutathione levels after renal perfusion [21]. Histological examination also showed evidence of antioxidative properties. Extracts from the bark of EU using 70% ethanol also showed protective effects against cadmium at 125-500 mg/kg [22]. Histological examination also showed that EU in combination with Panax pseudoginseng at 25% and 50% weight, respectively, for six weeks at a dose rate of 35.7-41.6 mg/kg exerted slight protective effects on glomerular filtration rate [8]. Two new polysaccharides, eucomman A and B, have been separated from EU [23].
Other Ingredients and Chemicals. Amino acids, microelements, vitamins, and fatty acids have also been isolated from EU [17,[21][22][23]. Sun et al. also discovered new compounds such as n-octacosanoic acid and tetracosanoic acid 2,3-dihydroxypropyl ester from EU [24]. The fatty acid composition of oil extracted from the seed of EU showed different concentrations of polyunsaturated fatty acids such as linoleic acid, linolenic acid (56.51% of total fatty acids, TFAs), and linolelaidic acid (12.66% of TFAs). Meanwhile, the main monounsaturated fatty acid isolated from the seed was found to be isoleic acid (15.80% of TFAs). The dominant saturated fatty acids isolated include palmitic acid and stearic acid, which represent 9.82% and 2.59% of TFAs, respectively [25]. Health-Promoting Compounds of Eucommia ulmoides 3.1. Protective Effects on the Cardiovascular System. In traditional Chinese medicine, Eucommia is considered a major herbal tonic for cardiac patients. Eucommia bark extract is an active component used in antihypertensive formulations. It has been confirmed as a vasorelaxant in many human as well as animal models. Lignan from EU administered to rats of the Okamoto strain (SHR) at a dose rate of 300 mg/kg for 16 weeks resulted in improved vascular remodeling and reduced mean arterial blood pressure. In rats fed a high-fructose diet, which develop insulin resistance and hypertension, EU reduced blood pressure at doses of 500-1000 mg/kg [26][27][28]. Supplementation of 500 and 1000 mg of EU for 8 weeks, taken thrice daily for 2 weeks, showed a modest reduction in blood pressure, with reductions in both systolic and diastolic blood pressure [29]. An antihypertensive effect mediated through the parasympathetic nervous system has been reported following the application of EU [30]. EU also serves as a vasorelaxative agent in a nitric oxide-dependent manner assumed to be linked with potassium channels [31].
EU has beta-blocking potential: at 0.5% w/v it reduced isoproterenol-stimulated lipolysis from 2.67 to 1.4 times the buffer control [29]. EU has been demonstrated to prevent hypertensive remodeling, an effect associated with aldose reductase inhibition [32]. Lignans from EU have been proposed as a new therapeutic agent for hypertension-associated vascular remodeling [33]. EU also showed antihyperlipidemic properties by suppressing hepatic fatty acid and cholesterol biosynthesis [34]. In hyperlipidemic hamsters, dietary supplementation with leaf extract of EU at a dose of 0.175 g/100 g for 10 weeks reduced the concentrations of triglycerides, total cholesterol, low-density lipoprotein cholesterol (LDL-C), non-high-density lipoprotein cholesterol (non-HDL-C), and free fatty acids in plasma and hepatic lipids compared to the control group (fed 10 g coconut oil, 0.2% cholesterol, w/w) [34]. In a similar manner, 1 mg or 5 mg intraduodenal injection of EU leaf extract reduced plasma triglyceride levels [65]. Antioxidant Effects. Antioxidant compounds from the Eucommia plant reduced the level of free radicals [66,67] and improved disease conditions caused by oxidative stress [68,69]. Strong antioxidant properties of EU have been established in in vivo and in vitro studies [70,71]. Extracts from EU reduced hydrogen peroxide-induced expression of caspase proteins in MC3T3-E1 cells at concentrations of 12.5 to 25 µg/mL [71]. Extract of EU was reported to increase the activities of erythrocyte superoxide dismutase, catalase, and glutathione peroxidase and to reduce the concentrations of hydrogen peroxide and lipid peroxide in erythrocytes, liver, and kidney [70]. Studies on diabetic rats indicated that superoxide dismutase (SOD) can be enhanced by Eucommia bark. 
Eucommia also increases the levels of other antioxidant enzymes in the blood to neutralize free radicals [70]. Phenolics and flavonoids of medicinal herbs contribute significantly to the antioxidant activities of EU [34,69-72]. Phenolics and flavonoids safely react with free radicals by donating a hydrogen atom or an electron, terminating the chain reaction before vital organs are damaged [73]. Antioxidant properties of the EU leaf, roasted cortex, and seed were analyzed by measuring 2,2-diphenyl-1-picrylhydrazyl radical scavenging activity, ferric reducing antioxidant power, and lipid peroxidation inhibition capacity in a β-carotene/linoleic acid system. Results indicated that the leaf extract showed the highest DPPH radical scavenging activity, with an inhibition rate of 81.40%, followed by butylated hydroxytoluene (BHT) (76.60%) and the roasted cortex extract (16.72%). The seed extract had the lowest activity, at 7.65%. In ferric reducing antioxidant power assays, the ferric reducing activities of EU extracts from leaf, seed, and roasted cortex were compared with a positive control. In the β-carotene/linoleic acid emulsion system, the leaf extract showed better antioxidant capacity (43.58%) than the roasted cortex extract (26.71%) or seed extract (25.10%) [74]. In addition, aucubin from EU has been demonstrated to exhibit photoprotective effects against oxidative stress. Ultraviolet (UV) B radiation produces free radicals in the skin which induce the synthesis of matrix metalloproteinases (MMPs), causing photoaging, wrinkling, and discoloration of the skin and predisposing it to cancer. Aucubin played a vital role in the defense mechanism against free radicals caused by UV irradiation [75]. Suppression of HIV infection has also been reported with daily intake of EU extracts or its alkaline extracts in a tea formula. 
Alkaline extract of EU leaf, containing 22% uronic acid, 27% reducing sugars, and 46% neutral sugars, reduced HIV (HTLV-III)-induced cytopathicity. Lv et al. also demonstrated that samples from EU Oliver had potent inhibitory activity against HIV gp41 six-helix bundle formation [79]. Antiobesity Effects. Previous studies have shown that EU has antiobesity and antimetabolic syndrome properties [8,26,34,80,81]. It has been demonstrated that both Eucommia leaf extract (ELE) and Eucommia green leaf powder (EGLP) markedly suppressed body weight and white adipose tissue (WAT) in female ICR mice fed high-fat diets (HFD). The antiobesity effect of Eucommia green leaf extract has been linked to various compounds, such as geniposidic acid, asperuloside, and chlorogenic acid, which were isolated from the extract [8]. Application of water extract from the leaf of EU at 5% of the diet was reported to reduce the rate of fat accumulation in osteoporotic mice [4], although application of 500-1000 mg/kg EU leaf extract beyond 4 weeks showed no effect on fat accumulation in fructose-overfed rats [26]. Neuroprotective Effects. The stem bark extract of EU exhibited acetylcholinesterase inhibition in vitro (IC50 = 172 µg/mL) and neuroprotective effects against beta-amyloid proteins [4]. It also inhibited cytotoxicity by 30-70% and reduced oxidative biomarkers when applied at a concentration of 2.5 µg/mL [10]. Stem bark extract of EU showed high protective activity against memory dysfunction at doses of 10-20 mg/kg in rats given intracerebral injections of beta-amyloid proteins [4]. Metabolic Modulation and Bones. Eucommia cortex extract can be used in the control of osteoporosis, because it is actively involved in mechanisms which stimulate osteoblasts, enhance osteogenesis, suppress osteoclasts, and thus prevent osteolysis [82]. 
Total glycosides from Eucommia ulmoides seed (TGEUS) have been shown to improve bone density and femur strength in rats [83]. Daily administration of TGEUS at 400 mg/kg body weight/day to normal Sprague-Dawley rats was reported to significantly increase bone mineral density and to improve the microarchitecture of the femur [83]. Eucommia cortex extract was reported to induce the release of growth hormone (GH), which is responsible for bone maturation and bone remodeling. Products of alcoholic extraction from Eucommia bark were reported to act as potent growth hormone secretagogues. Increased estrogen receptor alpha signaling has been shown to promote bone growth [84]. An exception was noticed in ovariectomized rats, in which no effect on bone growth was observed [47,82]. In a menopausal research model, EU at 5% of the diet was observed to minimize bone loss in ovariectomized rats [61]. Eucommia cortex fed at dosages of 300-500 mg/kg reduced bone loss to an extent not significantly different from a group treated with estradiol [61]. Antioxidant properties of Eucommia leaf extract were also reported to contribute positively to the promotion of bone growth by improving cell integrity during oxidative stress when applied at a low dosage (6.25 µg/mL) [71]. Therefore, Eucommia extract can be regarded as a candidate therapeutic agent for osteoporosis [85]. Phytoestrogenic Properties. EU has been reported to exhibit phytoestrogenic and androgenic properties [84]. Eucommia bark contains isoflavonoids with estrogen-like properties, which bind to human estrogen receptors. None of these isoflavonoids has a male hormone-like effect that interacts with the human androgen receptor. Eucommia bark has been reported to show bimodal phytoandrogenic and hormone-enhancing effects [84]. 
Androgen receptors play a key role in male as well as female physiology, including skeletal muscle development, bone density, and sex drive [86,87]. Ethanol extracts of Eucommia bark were reported to bind weakly to androgen receptors and to produce testosterone-like effects at 5-25 ng/mL in mammalian COS-7 cells [84]. Oral administration of the ethanol extract produced no increase in prostatic weight at doses of 1-50 mg; however, a 20% increase in prostatic weight was observed on increasing the dose to a 5000 µg injection [84]. Application of EU at a concentration of 50 ng/mL enhanced estradiol signaling in a manner similar to its effect on androgen receptors [84]. However, no promoting effect of EU on the cortisol and progesterone receptors was observed [84]. In vivo animal studies using oral administration of EU extracts demonstrated potentiated androgenic and hormonal effects. A form of tripartite synergism between sex steroid receptors, sex hormones, and lipidic augmenters isolated from EU was found by Ong and Tan [84]. It has been suggested that the activities of sex hormones in the body are optimized with the application of EU [84]. Hepatoprotective Effects. A study on Sprague-Dawley male rats using different doses of Eucommia ulmoides and carbon tetrachloride investigated the protective effects of EU against CCl4-induced acute hepatic lipid accumulation. Results demonstrated that Eucommia ulmoides Oliv. cortex extract (EUCE) significantly decreased the hepatic lipid accumulation induced by CCl4. EU enhances lysosomal enzyme activity, relieving the protein-folding burden and thereby attenuating endoplasmic reticulum (ER) stress. ApoB secretion was improved through attenuation of ER stress; in addition, EUCE regulates the biotransformation of CCl4, with resultant inhibition of ROS accumulation [88]. 
Future Perspective and Conclusion. This review has discussed the health-promoting properties of EU: protective effects on the cardiovascular system; antioxidant, antibacterial, antiviral, anti-inflammatory, antiobesity, and neuroprotective effects; metabolic modulation of bone; and phytoestrogenic properties. These health-promoting properties have attracted much interest in the extraction and functional development of the active ingredients of EU. In further studies, the molecular mechanisms underlying these health-promoting properties of EU need to be explored. Conflict of Interests. The authors declare that there is no conflict of interests.
Spectroscopy and Photometry of Cataclysmic Variable Candidates from the Catalina Real Time Survey. The Catalina Real Time Survey (CRTS) has found over 500 cataclysmic variable (CV) candidates, most of which were previously unknown. We report here on followup spectroscopy of 36 of the brighter objects. Nearly all the spectra are typical of CVs at minimum light. One object appears to be a flare star, while another has a spectrum consistent with a CV but lies, intriguingly, at the center of a small nebulosity. We measured orbital periods for eight of the CVs, and estimated distances for two based on the spectra of their secondary stars. In addition to the spectra, we obtained direct imaging for an overlapping sample of 37 objects, for which we give magnitudes and colors. Most of our new orbital periods are shortward of the so-called period gap from roughly 2 to 3 hours. By considering the cross-identifications between the Catalina objects and other catalogs such as the Sloan Digital Sky Survey, we argue that a large number of cataclysmic variables remain uncatalogued. By comparing the CRTS sample to lists of previously-known CVs that CRTS does not recover, we find that the CRTS is biased toward large outburst amplitudes (and hence shorter orbital periods). We speculate that this is a consequence of the survey cadence. Introduction. In cataclysmic variable stars (CVs), a white dwarf primary star accretes matter by way of Roche lobe overflow from a binary companion, which resembles a main-sequence star. The variety of CV behaviors leads to a complicated taxonomy (Warner 1995). Many CVs undergo dwarf nova outbursts, thought to be caused by accretion disk instabilities which greatly increase the rate at which matter moves inward in the disk. Other CVs, called novalike variables, remain in a bright state for years at a time. 
In still others, called AM Her stars or polars, the matter that is transferred becomes entrained in a strong white-dwarf magnetic field, and is funneled directly onto the white dwarf's magnetic pole. The main driver of CV evolution is thought to be a gradual loss of orbital angular momentum. This causes the Roche critical lobe of the secondary star to shrink, leading to a shortening of the orbital period P orb, and driving mass transfer on long (Gyr) timescales. The mechanisms by which angular momentum is lost are not fully understood. It is often supposed that magnetic braking of the secondary star predominates at longer periods (> 3 hr), and that it becomes inefficient at short periods, so that gravitational radiation predominates. Around P orb = 70 min, the secondary becomes degenerate and its radius begins to increase as its mass decreases, leading to a slow increase in the orbital period. This turnaround is often called the period bounce, even though it is thought to take place very slowly. The histogram of CV orbital periods shows a significant dip at roughly 2 hr < P orb < 3 hr, known as the gap. This is often explained as follows. As the secondary loses mass, its thermal timescale increases to become comparable to the time for the orbit to evolve, with the result that the secondary exceeds its equilibrium radius. At about three hours, the secondary becomes fully convective, reducing the efficiency of magnetic braking. As the orbital evolution slows, the secondary detaches from its Roche lobe, shutting down mass transfer. The detached system continues to evolve to shorter periods, crossing the gap and eventually re-establishing contact with the Roche lobe near P orb = 2 hr. While the mechanism by which this happens is somewhat speculative, Knigge (2006) shows that a discontinuity in the secondary stars' radii occurs across the gap. 
In a steady state, the number of stars with a given P orb should be roughly inversely proportional to Ṗ, the rate at which the period changes. If Ṗ really is very slow at short periods, then there should be a large population of short-period CVs, and the gradual turnaround at the period bounce around 70 minutes should lead to a 'spike' in the distribution (Patterson 1998; Barker & Kolb 2003; Gänsicke et al. 2009). Efforts to confront theories such as this with observation have often been frustrated, because the sample of known CVs is incomplete in ways that are difficult to quantify. The discovery channels for CVs include the following: (1) Dwarf nova outbursts are conspicuous: they last for days or weeks and typically have amplitudes of several magnitudes. (2) Nearly all CVs have unusual colors compared to normal stars, most conspicuously ultraviolet excesses arising from accretion processes or (in some cases) the underlying white dwarf. (3) With the exception of some novalike variables and dwarf novae near the peak of outburst, nearly all CVs show emission lines, especially in the Balmer sequence; these can be strong enough to be noticed in surveys such as IPHAS (Witham et al. 2007). (4) A great many other CVs have been discovered as optical counterparts of X-ray sources. (5) CVs at minimum light can be rather faint (M > 10), and a small handful have turned up in proper motion surveys. New, large samples of CVs with consistent selection criteria are potentially useful for clarifying issues such as the space density and orbital period distribution of these objects. Because the colors of CVs overlap those of quasars, the Sloan Digital Sky Survey (SDSS) turned up a large number of spectroscopically confirmed CVs (Szkody et al. 2002, 2003, 2004, 2005, 2006, 2007, 2009, 2011; hereafter referred to as Szkody I-VIII). Gänsicke et al. 
(2009) compiled orbital periods for 137 of these; with the SDSS sample, they were finally able to discern the long-predicted period spike. Like the SDSS, the Catalina Real Time Survey (CRTS; Drake et al. 2009) has discovered many new CVs. While the SDSS CVs were originally selected by color (they were chosen for spectroscopy largely because their colors overlap those of quasars), the CRTS selects entirely by variability. Briefly, the CRTS surveys most of the accessible sky at Galactic latitudes |b| > 10° and declination δ > −30° every lunation, using a 0.7 m Schmidt telescope in the Catalina mountains near Tucson, Arizona. They search for variability using a master catalog that reaches to m ∼ 22. Objects that show abrupt outbursts of > 2 mag amplitude lasting less than a few weeks are classified as likely CVs, making the CRTS a prolific source of dwarf novae in particular. The CRTS maintains a catalog of "Confirmed/Likely" CVs on the World Wide Web. We downloaded this catalog on 2012 March 7, when it contained 584 objects, and used this data set for the present analysis. The CRTS CV sample is of great interest because of its depth and selection criteria, so it is a natural choice for follow-up studies similar to Gänsicke et al. (2009). Woudt et al. (2012) describe high-speed photometry of 20 objects, mostly from the CRTS, and give orbital periods for 15, including two eclipsing dwarf novae and two superhumpers. Only two of their sample had periods longer than the 2-3 hour period gap. The period distribution of their sample also showed the spike just above the turnaround. In addition, they found that dwarf novae with more recorded outbursts tended to have longer orbital periods than those with fewer outbursts. Here, we report on followup spectroscopy and photometry of the CRTS CVs listed in Table 1. Most of the CRTS CVs are too faint for us to follow up, so we selected this sample based largely on apparent brightness. 
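As a numerical aside, the steady-state expectation quoted in the Introduction (the number of systems per unit period varying roughly as 1/Ṗ) can be illustrated with a toy braking law; the rates and period limits below are invented purely for illustration and are not a fit to any real CV population:

```python
import numpy as np

# Toy period-derivative law: fast "magnetic braking" above the 2-3 hr gap,
# slow "gravitational radiation" below it, and no semidetached systems
# inside the gap.  The magnitudes are invented, purely for illustration.
def pdot(p_hr):
    p = np.asarray(p_hr, dtype=float)
    rate = np.where(p >= 3.0, 10.0, 1.0)
    return np.where((p > 2.0) & (p < 3.0), np.inf, rate)

# Steady state: the number of systems per unit period is inversely
# proportional to the rate at which systems evolve through that period.
periods = np.linspace(1.3, 6.0, 500)
density = 1.0 / pdot(periods)

below_gap = density[periods < 2.0].mean()   # dense short-period population
above_gap = density[periods > 3.0].mean()   # sparser long-period population
```

With these numbers the short-period density exceeds the long-period density by the ratio of the two braking rates, and the density is zero inside the gap, mimicking the observed dip.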
Operational considerations, such as the ease with which observations could be interleaved with other programs, the nearness of the target to opposition at midnight, and (for radial velocity targets) the strength and tractability of emission and absorption spectra, also entered into target selection. Section 2 describes the equipment and techniques used for all of our spectroscopy. In Section 3 we describe the spectra of the 28 objects for which we have only a quick spectrum. The eight objects for which we have radial velocity time series are discussed in Section 4; Table 3 gives the velocities, and Table 4 lists parameters for sinusoidal fits to the velocity time series. Table 5 gives magnitudes and colors of objects for which we have standardized photometry. Finally, in Section 5 we consider how the CRTS CV sample overlaps with other lists of CVs, and discuss the apparent selection biases of the CRTS and the implications for the CV population in general. Equipment and Techniques. All our data are from MDM Observatory on Kitt Peak, Arizona. Nearly all are from the 2.4 m Hiltner telescope, but a single photometric measurement was kindly taken by J. Halpern at the 1.3 m McGraw-Hill telescope. The spectroscopic observing setups were as follows. Modular spectrograph. For most of the spectra we used the modular spectrograph and a 600 line mm⁻¹ grating. The detector was either 'Templeton', a 1024² thinned SITe CCD that gave 2 Å pixel⁻¹ from 4600 to 6700 Å, or 'Nellie', a thick 2048² CCD that gave 1.7 Å pixel⁻¹ from 4460 to 7770 Å, with vignetting toward the ends of the range. The choice of detector was dictated by availability during a series of controller upgrades. With the modspec, targets are centered using the image reflected from the polished slit jaws. For this, we used new (2010) slit-viewing optics and a self-contained Andor Ikon CCD camera unit; this has greatly improved the acquisition of faint targets with this instrument. All of our radial-velocity time series were taken with the modular spectrograph. 
For wavelength calibration we obtained comparison lamps in twilight and used the night-sky λ5577 line to track spectrograph flexure during the night. We also observed flux standards in twilight when the sky was clear, and bright O- and B-type stars in order to correct approximately for telluric absorption. Ohio State Multi-Object Spectrograph (OSMOS). This versatile instrument (Martini et al. 2011) images the focal plane onto a 4k × 4k CCD, at a scale of 0.273 arcsec pixel⁻¹. Filters can be placed in the parallel beam of the reducing camera for direct imaging, or a volume-phase-holographic grism disperser can be inserted for spectroscopy. In spectroscopic mode, one can insert slits in two different locations in the focal plane, which yield different wavelength coverage; we used the 'inner' slit, which gives coverage from 3960 to 6875 Å at 0.75 Å pixel⁻¹. OSMOS has no slit-viewing camera, so targets are acquired by taking a direct image (without slit and disperser) and then moving the telescope to align the target with the known location of the slit. We inserted a V filter for the acquisition exposures, and therefore obtained a rough V magnitude (without a color transformation) for our targets. Some humid weather in 2011 September intermittently caused condensation in the center of the detector window; fortunately, the important Hα feature was outside the affected region. For most of the spectroscopic reduction we used IRAF routines. To extract one-dimensional spectra from the modspec images, we used a local implementation of the optimal-extraction algorithm of Horne (1986). We computed synthetic V magnitudes for our spectra using the passband tabulated by Bessell (1990). Our slit was usually 1.1 arcsec wide, which means that an unknown fraction of the light was lost; also, many of our spectra were taken through thin cloud. 
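The synthetic-magnitude computation described above (integrating the spectrum against a V passband and comparing to a reference flux) can be sketched as follows; the boxcar response and the zero-point flux here are crude stand-ins for the actual Bessell (1990) tabulation:

```python
import numpy as np

def synthetic_mag(wavelength, flux, response, zero_point_flux):
    """Passband-weighted mean flux expressed as a magnitude relative to a
    reference (zero-point) flux.  Wavelength in Angstroms, uniform grid."""
    dx = wavelength[1] - wavelength[0]
    weighted = np.sum(flux * response) * dx
    norm = np.sum(response) * dx
    return -2.5 * np.log10((weighted / norm) / zero_point_flux)

# Stand-in V response: a boxcar over 4900-5900 A.  The real Bessell (1990)
# curve is smoothly peaked; this crude version only exercises the arithmetic.
wl = np.linspace(4000.0, 7000.0, 3001)
resp = np.where((wl > 4900.0) & (wl < 5900.0), 1.0, 0.0)
f0 = 3.63e-9   # rough V-band zero-point flux, erg s^-1 cm^-2 A^-1 (assumed)

flat = np.full_like(wl, f0)   # source at exactly the zero-point flux
faint = flat / 100.0          # 100x fainter source
m_flat = synthetic_mag(wl, flat, resp, f0)    # should be ~0.0
m_faint = synthetic_mag(wl, faint, resp, f0)  # 100x fainter -> 5 mag fainter
```

A real implementation would interpolate the tabulated response onto the spectrum's wavelength grid rather than assume a uniform grid and boxcar.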
Experience suggests that our synthetic magnitudes are accurate to ∼ 0.2 mag in good conditions; in poor conditions, they can be considerably too faint. We selected eight of our targets for time-series radial velocity observations, with the aim of determining orbital periods. For these, we pushed observations to large hour angle in order to avoid daily cycle-count aliases. To measure emission-line radial velocities we used the convolution algorithms described by Schneider & Young (1980), in which an antisymmetric function is convolved with the line profile and the zero of the convolution is taken as the line center. We used methods described by Shafter (1983) to tune the convolution function for maximum signal. We estimated uncertainties in the velocities by propagating the counting-statistics errors in the spectral channels; these estimates do not include possible systematic effects. For absorption-line velocities, we used the Tonry & Davis (1979) convolution algorithms as implemented by Kurtz & Mink (1998). To search for periods we used a 'residual-gram' method described by Thorstensen et al. (1996). Once we had established a period, we fit the time series with sinusoids of the form v(t) = γ + K sin[2π(t − T0)/P]. Uncertainties in the fit parameters were estimated from the scatter around the best fit using the procedure described by Cash (1979). For direct imaging, we mostly used the 'Nellie' CCD, which gave 0.24 arcsec per pixel. This detector is insensitive in the ultraviolet, so we used BVRI filters (although R and B were sometimes skipped). We derived photometric transformations from observations of Landolt (1992) standard stars. The scatter in the transformations was generally < 0.05 mag. As noted earlier, we also have some direct images from OSMOS. Table 2 shows that we obtained only brief exposures of most of our spectroscopic targets, generally taking one or two exposures in a single visit. 
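Once the period is fixed, the sinusoid fit v(t) = γ + K sin[2π(t − T0)/P] quoted above reduces to linear least squares, by rewriting it as γ + A sin(ωt) + B cos(ωt) with K = sqrt(A² + B²). A minimal sketch with invented synthetic data (the numbers are illustrative, not from the paper's velocity tables):

```python
import numpy as np

def fit_sinusoid(t, v, period):
    """Least-squares fit of v(t) = gamma + K sin[2 pi (t - T0) / period]
    at fixed period, linearized to gamma + A sin(wt) + B cos(wt)."""
    w = 2.0 * np.pi / period
    design = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
    gamma, A, B = np.linalg.lstsq(design, v, rcond=None)[0]
    K = np.hypot(A, B)
    T0 = np.arctan2(-B, A) / w   # from A = K cos(w T0), B = -K sin(w T0)
    return gamma, K, T0

# Synthetic velocities (all numbers invented): gamma = 20, K = 80 km/s
rng = np.random.default_rng(1)
period = 0.0748                          # days
t = np.sort(rng.uniform(0.0, 0.3, 40))  # irregular sampling, like real data
v = 20.0 + 80.0 * np.sin(2.0 * np.pi * t / period) + rng.normal(0.0, 3.0, 40)
gamma, K, T0 = fit_sinusoid(t, v, period)   # recovers ~20, ~80, ~0
```

The residual-gram period search amounts to repeating this fit over a grid of trial periods and plotting the residual variance against period.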
Exploratory Spectra. Our main purpose was to verify that the objects showed spectra typical of CVs. Figs. 1 and 2 show these exploratory spectra. Of the 28 objects surveyed, 27 appear to be bona fide CVs, the exception being CSS0350+35. Here are brief descriptions. CSS0051+20. Our one spectrum has modest signal-to-noise, but shows clear Hα emission and confirms that this is a CV. The continuum has a shape that hints at an M-dwarf contribution, but we cannot be sure this is real. CSS0208+37. The Balmer emission lines are strong and appear double-peaked, suggesting a relatively high orbital inclination. A prominent blue continuum may be from an accretion disk, or may be from a white dwarf photosphere; on the other hand, the Balmer decrement appears extreme, with Hβ much stronger than Hα, suggesting that an instrumental effect might be enhancing the blue end of the spectrum. Dwarf novae declining from outburst can sometimes show blue continua, but our synthesized and measured magnitudes (V = 18.3 and V = 18.23, respectively) are both a bit fainter than the CRTS minimum magnitude of 17.9, ruling out this interpretation; the filter photometry was obtained only two days before the spectrum. CSS0350+35. The spectrum is a good match to an M1 dwarf, and shows only a narrow Hα line similar to a dMe star. While this might be a CV in which mass transfer has stopped, there is no sign of a white dwarf in the spectrum, so the spectrum is consistent with a flare star. The light curve at the CRTS does not rule out either a CV or a flare star, so the flare-star classification appears likely. This is, notably, the only object in our spectroscopic sample that appears not to be a CV. CSS0401+08. Our spectrum is consistent with a dwarf nova in outburst, with weak Hα emission and weak, broad Hβ absorption (with a hint of a central reversal) on a blue continuum. The magnitude synthesized from the spectrum (V = 16.5) is also well above the 18.5 mag minimum listed in the CRTS. CSS0440+02. 
The emission lines are broad (∼ 1900 km s⁻¹ FWHM) and just show double peaks at our signal-to-noise ratio. The system is probably close to edge-on. CSS0447+09. This shows evidence for a K-type contribution, in the form of an absorption dip near 5170 Å, and absorption in the Na D lines. The signal-to-noise ratio is not good enough to warrant a detailed decomposition of the spectrum, but the absorption features suggest that about half the light comes from a mid-K star, probably in the range K4 to K6. This suggests that the orbital period is likely to be ∼ 6 h or greater (Knigge 2006), but a much shorter period is possible if the secondary has lost much of its mass (Thorstensen et al. 2002). CSS0505+08. We detect Hα at rather low signal-to-noise ratio, confirming the CV nature of the object. CSS0514+08. Both Hα and Hβ are detected in emission. The FWHM of Hα is around 1000 km s⁻¹, suggesting an intermediate orbital inclination. No secondary star is seen at our signal-to-noise ratio. CSS0518−02. This object shows an unusual spectrum, with a blue continuum and a narrow (∼ 300 km s⁻¹), relatively weak Hα emission line. The Na D lines are detected in absorption, but the continuum does not show any other convincing late-type features, so the Na D may be interstellar. The spectrum's synthetic magnitude (V = 16.9) suggests that the object was somewhat brighter than minimum light (m = 17.5), but not dramatically so. The CRTS light curve shows many outbursts of ∼ 2 magnitudes. This may be a Z Cam-type dwarf nova, with a persistent plateau state near m = 17.5, or could be some other kind of novalike variable. CSS0545+02. The red Digitized Sky Survey image and the SDSS finding chart (linked from the CRTS site) show a nebulosity around this object, strongest along a northeast-southwest axis, extending to a radius of about 20 arcsec from the object. 
In our spectrum, the apparent sharp absorption features at Hα (6563 Å), [NII] (λλ 6548 and 6584), and [SII] (λλ 6717 and 6730) are strong nebular emission features in the sky that have been over-subtracted. The long-slit spectrum shows extended weak Hα, Hβ, [NII], and [SII] emission along the 2.6-arcmin length of the slit, and [OIII] λλ 4959 and 5007 emission extending to ∼ 18 arcsec from the star. The stellar spectrum shows Hα emission with a width of ∼ 2000 km s⁻¹, very much resembling a CV. The CRTS light curve hangs mostly at m = 16.5 or so, but the SDSS has g = 18.7, in good agreement with our V = 18.75 and the V = 18.9 synthesized from our spectrum; the nebula may be affecting the CRTS measurement. If this really is a CV, the nebulosity is especially intriguing; it is also possible that this is a planetary nebula central star, in which case the broad Hα might arise from a stellar wind. The CRTS light curve shows some faint upper limits, so the system may eclipse. CSS0558+00. Our single, low signal-to-noise spectrum shows weak, broad Hα on a blue continuum. The synthesized magnitude (V = 16.6) is well above the minimum m = 19.0 found by the CRTS. It appears we caught this in a relatively rare outburst; the CRTS light curve shows only three outbursts, even though the object is well-observed. Our filter photometry, obtained 9 days earlier than the spectrum, shows the object much fainter, at V = 20.45. This is a dwarf nova. CSS0905+12. This appears to be an ordinary dwarf nova observed near minimum light. Hα has a FWHM of 1200 km s⁻¹, suggesting an intermediate orbital inclination. CSS1055+09. The spectrum shows a contribution from an M dwarf which is presumably the secondary star. Because of our limited spectral coverage and signal-to-noise, we can only roughly estimate the secondary's spectral type to be M3 ± 2. The secondary appears to contribute at least half the continuum light in the region of our spectrum. 
The easy visibility of the secondary suggests that the orbital period lies above the 2-3 hour gap. CSS1139+45. Our spectrum is very poor, but shows broad Hα emission, confirming that this is a CV. The synthesized magnitude and spectrum both indicate it was at minimum light. CSS1211−08. This relatively bright object shows strong Balmer and HeI lines. Hα has a FWHM of 800 km s⁻¹, indicating a fairly low orbital inclination, and the spectrum and synthesized magnitude indicate that the spectrum was taken near minimum light. CSS1556−08. Our spectrum has poor signal-to-noise, but does show broad Hα and Hβ emission. The synthesized V = 19.3 is fainter than the CRTS minimum, but conditions were partly cloudy at the time. CSS1616−18. The spectrum is typical of dwarf novae in outburst, with narrow Hα emission, broad Hβ absorption, and a blue continuum. The synthesized V = 15.9, while the CRTS lists variation between m = 14.9 and m = 17.8. CSS1649+03. Broad Hα emission is clearly detected, but the poor signal-to-noise ratio precludes further analysis. Our photometric measurements and the synthesized magnitudes both agree well with the magnitude at minimum listed in the CRTS. CSS1720+18. Strong, broad Hα and Hβ confirm the CV status. Our spectrum has poor signal-to-noise, which is unsurprising given the object's faintness. CSS1727+13. Hα has a FWHM of 1200 km s⁻¹, indicating an intermediate inclination. The spectrum is typical of dwarf novae at minimum light, and has a synthesized V = 19.0. It was taken only 3 days after photometric measurements showing V = 16.96, indicating that the star had faded quickly following an outburst. CSS1735+15. Our spectrum was taken in partly cloudy conditions, but does show Hα in emission. In addition, Na D absorption is present, and the K-star features near λ5168 are just detected. The orbital period is probably well longward of the 2-3 hr gap. We think that the hump in the continuum near 5300 Å is probably an artefact. CSS1752+29. 
The Hα emission has a modest equivalent width, and the continuum is blue, suggesting that the system was in outburst. Magnitudes from the OSMOS acquisition image and the synthesized spectrum agree nicely at V ∼ 17.3, while the CRTS lists m = 18.3 for minimum light, so the system was likely declining from outburst. Again, the continuum hump near 5300 Å is probably an artefact. Kato et al. (2009) found superhumps with P sh = 0.063759(22) d = 91.8 min. Using the relation between P orb and P sh derived by Gänsicke et al. (2009), we predict P orb = 89.8 min. Our spectrum shows narrow Hα (FWHM ∼ 400 km s⁻¹), indicating a low orbital inclination. CSS2059−09. The Hα line has a FWHM of 1500 km s⁻¹, suggesting a fairly low orbital inclination. The CRTS light curve shows a gradual brightening of the minimum magnitude over time, with outbursts superposed. Our spectrum and acquisition image were taken in partly cloudy conditions. CSS2213+17. The Hα portion of our spectrum was unaffected by a dewar condensation problem, and shows a broad emission line, confirming that this is a CV. CSS2227+28. This spectrum was also affected by the condensation, but Hα was again in the clear portion, and is nicely detected with a somewhat triangular profile, indicating a moderate orbital inclination. Radial Velocity Studies. We obtained time-series spectroscopy for eight targets. Figs. 3 and 4 show the average spectra, periodograms, and folded velocity curves, and Table 4 gives the parameters of the sinusoidal fits. We discuss the objects in order of RA. CSS0501+20. The spectrum is typical of dwarf novae at minimum light. The lines are single-peaked, and Hα has a FWHM of 1100 km s⁻¹. We adopt P orb = 107.7 ± 0.2 min. An alternate choice of daily cycle count gives 116.7 min, but the Monte Carlo test described by Thorstensen & Freed (1985) assigns this a probability below 1 per cent. At this orbital period, this is likely to be an SU UMa star showing superhumps and superoutbursts. 
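The alternate daily cycle count mentioned for CSS0501+20 is the usual single-site alias: periods whose frequencies differ by one cycle per day fit sparsely sampled data nearly equally well. The arithmetic is simple; the −1 cycle/day alias of 107.7 min lands near the 116.7 min alternative quoted above (the exact alias value depends on the actual data window):

```python
def daily_aliases(period_min):
    """The two periods whose frequencies differ from 1/period by exactly
    +/- 1 cycle per (24 hr) day, the usual single-site ambiguity."""
    f = 1440.0 / period_min              # frequency in cycles per day
    return 1440.0 / (f + 1.0), 1440.0 / (f - 1.0)

shorter, longer = daily_aliases(107.7)   # roughly 100 min and 116 min
```

Pushing observations to large hour angle lengthens the nightly window, which separates these alias peaks in the periodogram.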
CSS0519+15

The equivalent widths of the emission lines are rather smaller than in most dwarf novae at minimum light, suggesting that we caught the system somewhat above minimum. The lines are single-peaked and the FWHM of Hα is ∼ 800 km s−1, indicating a fairly low inclination. Our P_orb, 122.3(4) min, places this in the period range of the SU UMa stars.

CSS0647+49

The spectrum shows a conspicuous contribution from a late-type star. Using spectra of stars classified by Keenan & McNeil (1989), we find the companion's spectral type to be K4.5 ± 1 subclass, with a contribution to the spectrum equivalent to V = 18.0 ± 0.4. In Fig. 3, the upper spectral trace shows the average spectrum, and the lower shows the result of subtracting a scaled K4 star from the average. Most of our observations are from 2011 March, but we also have velocities from January and September. An unambiguous cycle count over the whole interval yields P_orb = 8.9160 ± 0.0005 hr. The emission- and absorption-line velocities are both modulated; the emission-line modulation is 0.510 ± 0.007 cycles out of phase from the absorption, consistent with the half-cycle offset expected if the emission lines trace the white dwarf motion. If we assume that this is the case, then the mass ratio (secondary to white dwarf) is 0.73 ± 0.04. With this mass ratio and a typical white dwarf mass of M_wd = 0.8 M_⊙, the orbital inclination i would be near 35°. While this is only meant to be illustrative, the rather low secondary velocity amplitude (K_2 = 117 ± 4 km s−1) implies an orbital inclination low enough that eclipses are very unlikely. We estimate a distance using the secondary's contribution as follows. If we fix P_orb at its measured value and assume that the secondary star fills its Roche critical lobe, then the secondary's radius R_2 depends almost entirely on its mass M_2; furthermore, the dependence of R_2 on M_2 is weak.
Evolutionary calculations by Baraffe & Kolb (2000) suggest that a K4 star in a 9-hour orbit should have M_2 = 0.7 ± 0.2 M_⊙. As a check, taking our mass ratio at face value and assuming a broadly typical M_1 = 0.8 M_⊙ for the white dwarf gives M_2 = 0.58 M_⊙, in reasonable agreement. With this mass range, the approximation given in eqn. 1 of Beuermann et al. (1998) constrains R_2 to be 0.85 ± 0.10 R_⊙. From data in Beuermann et al. (1999), we estimate the surface brightness of a K4.5 dwarf to be such that it would have M_V = 6.3 ± 0.5 if it had R = 1 R_⊙. Scaling this to the secondary's radius yields M_V = 6.6 ± 0.6. The secondary's synthetic V magnitude is 18.0, but this is probably a little too faint, because the synthetic magnitude of the average spectrum (V = 17.2) is fainter than the V = 16.84 we find from the more accurate filter photometry. Discrepancies this large are to be expected (Section 2). Correcting for these losses gives V ∼ 17.6 for the secondary contribution. At this celestial location (l = 166.19°, b = 19.79°), Schlegel, Finkbeiner, & Davis (1998) estimate the total Galactic extinction to be E(B − V) = 0.11, which, taking R = 3.3, gives A_V = 0.36, assuming the star lies outside the dust layer. Putting all this together yields (m − M)_0 = 10.9 ± 0.7, or a distance d = 1300(+500, −400) pc. Notice that we did not assume that the secondary star follows a main-sequence mass-radius relation, but rather combined the Roche size constraint with the surface brightness.

CSS0814−00

This is an SU UMa-type dwarf nova, and was observed in superoutburst by Kato et al. (2009), who detected superhumps. The superhump period P_sh was not determined cleanly, but appeared to be 0.0763 d. Photometry obtained by Warner & Woudt (2010) gave P_orb = 1.796 h, or 0.0748 d, in rough accordance with expectations based on the superhump period. Our spectroscopic period is essentially identical, at 0.07485(5) d.
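Returning briefly to the CSS0647+49 distance estimate: the chain of numbers above can be reproduced in a few lines. This is a rough sketch using the intermediate values quoted in the text; the small offset from the quoted (m − M)_0 = 10.9 ± 0.7 reflects rounding of intermediate quantities, and the resulting distance agrees with 1300(+500, −400) pc within the errors.

```python
# Rough reproduction of the CSS0647+49 distance estimate, using the
# intermediate values quoted in the text. The paper quotes
# (m - M)_0 = 10.9 +/- 0.7 and d = 1300 (+500, -400) pc; small differences
# here reflect rounding of the intermediate quantities.
V_sec = 17.6   # secondary's apparent V, after correcting for slit losses
M_V = 6.6      # secondary's absolute V, from surface brightness and R_2
A_V = 0.36     # extinction: E(B-V) = 0.11 with R = 3.3

mu0 = V_sec - M_V - A_V        # dereddened distance modulus
d_pc = 10 ** (1 + mu0 / 5)     # d = 10^((m - M)_0 / 5 + 1) pc
print(f"(m - M)_0 = {mu0:.2f}, d = {d_pc:.0f} pc")
```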
The emission lines are barely double-peaked, suggesting that the inclination is not small, but Warner & Woudt (2010) make no mention of an eclipse. As far as we know, this is the only system studied here which has a period determination in the literature.

CSS0902−11

Like CSS0647+49, this object also has a strong contribution from the secondary star. By comparing the spectrum with stars classified by Keenan & McNeil (1989), we estimate the secondary's spectral type to be K7 ± 1 subtype, and find that the secondary's contribution is nominally equivalent to V = 19.1. However, many of the exposures used in the average spectrum were taken in partly cloudy conditions and mediocre seeing. Our average spectrum has synthesized V = 18.4, but our best exposures have V = 17.8. The CRTS light curve shows that the source is fairly steady when not in outburst, at m = 17.5, consistent with our best exposures, so it appears that the mean spectrum from which the secondary magnitude was derived is about 0.6 mag too faint. We therefore adopt V = 19.1 − 0.6 = 18.5 for the secondary contribution. The emission-line radial velocities did not yield a period, but the absorption spectrum showed an unambiguous modulation at 6.62 ± 0.01 hr. The relatively small velocity amplitude of the secondary (K_2 = 100 ± 6 km s−1) constrains the inclination to be fairly low for any realistic white dwarf mass, so eclipses are not expected. We can once again estimate a distance using the secondary's contribution, following the same procedure we used for CSS0647+49. Using Baraffe & Kolb (2000) as a guide, we estimate M_2 = 0.55 ± 0.15 M_⊙, from which we infer R_2 = 0.68 ± 0.08 R_⊙ at this period. The surface brightness for K7 ± 1 is equivalent to M_V = 7.25 ± 0.5 for a 1 R_⊙ star (Beuermann et al. 1998).

CSS0912−03

The emission lines are notably broad - the FWHM of Hα is nearly 1700 km s−1 - and the lines show incipient double peaks, so the inclination is likely to be fairly high.
Even so, the radial velocity amplitude is modest. We detect a modulation in the emission line velocity, but with the available data the choice of daily cycle count is ambiguous, so we give two possible periods in Table 4. Both possible periods are well below the 2-3 hour gap, so it is likely that this will prove to be an SU UMa-type dwarf nova with superoutbursts and superhumps.

CSS1706+14

The spectrum is typical of dwarf novae at minimum light. We obtained radial velocities on a single night in 2011 June, after which the star went into outburst, suppressing the Hα emission and ending the measurements. In 2012 May we obtained velocities on three consecutive nights. From the combined data we adopt a period near 0.0582 d, but periods near 0.111 d and 0.0552 d (the latter being a daily cycle count alias of our adopted period) are not completely ruled out. The ∼ 345 d gap in the time series created fine-scale ringing in the periodogram. To derive the period uncertainty in Table 4, we shifted the 2012 May data back in time by an integer number of periods, removing the gap. The exact period used in this artificial shift has essentially no effect on the resulting period uncertainty.

CSS1729+22

The spectrum shows weak M-dwarf features, in particular the extra continuum around 5950-6170Å, and the band head near λ6180. The features are too weak to derive good constraints on the spectral type, but an M1.5 dwarf contributing around 25 per cent of the flux at 6500Å gives a reasonably good match. As one might expect from the spectrum, the period is relatively long; the best-fitting P_orb is 7.12(3) hr, or 3.37 cycle d−1; however, we cannot entirely rule out an alias at 4.45 cycle d−1, or 5.40 hr.

The CRTS Sample and the Cataclysmic Variable Population

As noted earlier, the CRTS CV sample is of great interest in characterizing the CV population. In this section, we consider what the CRTS sample can tell us about the completeness of the available CV sample.
The number of non-CVs included in the CRTS sample appears to be small. In the sample of 36 objects for which we obtained spectra, we found only a single apparent non-CV. The fainter objects in the CRTS sample were beyond our magnitude limit, and it is possible that the fainter end of the sample includes a greater fraction of interlopers. However, the selection criterion - outbursts of more than 2 mag - should be fairly robust even for faint objects. We assume, then, that essentially all CRTS CVs are real CVs.

Other Samples Used for Comparison

We compared the CRTS list to several other samples of CVs, which we describe here.

The SDSS Sample. Szkody I-VIII list 286 CVs. Although the SDSS Data Release 8 covers some areas at low galactic latitude that CRTS does not (Aihara et al. 2011), all of the SDSS CVs lie within the nominal sky coverage of the CRTS, so for practical purposes the SDSS coverage is entirely contained within the CRTS coverage. Szkody I-VIII do not tabulate the subtypes of their objects, though they do give limited information on this. To enable more detailed comparisons, we classified the objects in Szkody I-VIII, primarily on the basis of their spectra, supplemented by the information in the text of the papers. Our classification scheme was as follows:

DN. This class was used for objects known to be dwarf novae, and for objects whose spectra resemble those of dwarf novae. The spectra classified this way tended to have strong, broad Balmer emission (with Hα equivalent width usually greater than 30 Å), relatively flat disk continua (in F_λ vs. λ), and weak or absent HeII λ4686. Objects showing blue continua and white-dwarf absorption wings around Hβ were classified as DN-W; if a K or M-type secondary was present, we classified the object as DN-2. In some cases the SDSS spectrum shows the object in outburst.
Dwarf novae in outburst can be difficult to distinguish from novalike variables, but in these cases the flux level in the spectrum will usually be much greater than expected based on the imaging data.

NL. This class included spectra showing blue continua, without white-dwarf absorption, and relatively weak emission lines, or stronger emission lines and substantial HeII λ4686 (typically half the strength of Hβ in those cases). The Balmer absorption wings in a novalike variable can superficially resemble white-dwarf absorption, but with experience the distinctive white-dwarf line profile can usually be distinguished from the disk absorption lines seen in UX UMa-type variables.

AM. These showed HeII λ4686 similar in strength to Hβ, or other evidence of a strong magnetic field such as cyclotron humps.

NCV. We assigned 14 objects to this "non-CV" class. This heterogeneous group includes objects whose spectra resembled reflection-effect white dwarf systems, subdwarf B stars, and chromospherically active M dwarfs. One, SDSS J1023+00, has proven to be a binary containing a millisecond radio pulsar (Wang et al. 2009; Thorstensen & Armstrong 2005).

Removing the 14 apparent non-CVs from the SDSS sample leaves us with 272 objects. Because of the limited information - especially the lack of long-term light curves - the classifications should be considered somewhat rough. They are given in Table 6.

The Ritter-Kolb Catalog. Ritter & Kolb (2003) have maintained a catalog of CVs and related objects with known or suspected orbital periods. Some of these were discovered in the CRTS, but entries that do not match with CRTS objects clearly were not. In our comparisons, we used the cataclysmic binary list, 'cbcat', from version 7.16 of the Ritter-Kolb catalog (hereafter RKcat), which contains 926 objects, of which 582 are in the CRTS survey area. RKcat provides subclassifications similar to the ones we invented for the SDSS sample.
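The classification heuristics above can be summarized schematically. The function below is purely illustrative; the name, inputs, and any thresholds beyond the Hα EW > 30 Å and HeII ≈ Hβ criteria quoted in the text are our own shorthand, not part of any published pipeline.

```python
# Schematic encoding of the SDSS-sample classification heuristics described
# in the text. Thresholds other than Halpha EW > 30 A and HeII ~ Hbeta are
# illustrative assumptions, not quoted values.
def classify_sdss_cv(halpha_ew, heii_over_hbeta, blue_continuum,
                     wd_absorption=False, late_type_secondary=False):
    """Return a rough CV subtype following the scheme used for the SDSS sample."""
    if heii_over_hbeta >= 1.0:
        return "AM"              # HeII comparable to Hbeta: likely magnetic
    if halpha_ew > 30 and not blue_continuum:
        if wd_absorption:
            return "DN-W"        # white-dwarf absorption wings visible
        if late_type_secondary:
            return "DN-2"        # K or M-type secondary visible
        return "DN"
    if blue_continuum and heii_over_hbeta >= 0.5:
        return "NL"              # blue continuum with substantial HeII
    return "unclassified"

print(classify_sdss_cv(halpha_ew=45, heii_over_hbeta=0.1, blue_continuum=False))
```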
Table 7 shows the number of cross-matches and non-matches between the CRTS sample and the lists detailed in the previous section.

Comparison with Other Samples. There are only 44 cross-matches between the CRTS and the SDSS CV samples. Fig. 5 shows histograms of the SDSS sample and the CRTS objects that lie in the SDSS footprint. Note the following:

1. At minimum light, most of the CRTS objects are too faint to have been identified as CVs by SDSS. The fainter CRTS objects are for the most part detected in the SDSS imaging data (provided they were in the SDSS footprint), but were too faint to be selected for spectra.

2. Among the brighter CRTS objects, somewhat more than half are not in the SDSS sample.

3. The top panel shows that only a rather small fraction of SDSS objects are recovered by CRTS.

Why does CRTS miss so many CVs? As noted earlier, one expects CRTS to preferentially select dwarf novae, and to be less sensitive to other CV subtypes. Consistent with this, 40 of the 44 cross-matches between SDSS and CRTS are objects that we classified as 'DN'. The RKcat and GCVS comparisons (Table 7) also show this tendency for CRTS to select dwarf novae; of the 134 CRTS objects that are listed in SDSS, RKcat, or GCVS, 123 are classified as dwarf novae in at least one of these catalogs, and only 11 are other kinds of CV. We therefore confirm that, as one might expect, CVs that are not dwarf novae are largely passed over by CRTS. The objects recovered by CRTS are mostly dwarf novae, but are the objects not recovered by CRTS mostly non-dwarf novae? In the latter part of the table, we give the numbers of CVs that lie in the nominal CRTS survey area, but which are not recovered by CRTS. As expected, dwarf novae constitute a smaller portion of the unrecovered objects; the aggregate dwarf nova fraction is 348/602, rather than 123/134. However, these figures also imply that over half the unrecovered objects actually are dwarf novae.
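The two dwarf-nova fractions quoted above can be compared directly, using the counts as given in the text:

```python
# Dwarf-nova fractions among CRTS-recovered and unrecovered known CVs,
# using the counts quoted in the text (from Table 7).
recovered_dn, recovered_total = 123, 134
unrecovered_dn, unrecovered_total = 348, 602

f_rec = recovered_dn / recovered_total        # fraction of recovered CVs that are DN
f_unrec = unrecovered_dn / unrecovered_total  # still more than half are DN
print(f"DN fraction: recovered {f_rec:.2f}, unrecovered {f_unrec:.2f}")
```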
Somehow, 348 dwarf novae that we know of in the CRTS survey area have slipped through its net. How can we account for this? Some of these non-recoveries (or 'misses') are to be expected, because dwarf novae from one subclass - the WZ Sge stars - erupt very infrequently, in some cases on timescales of decades or more. An even more extreme example, the star GD 552 (Unda-Sanzana et al. 2008), appears identical to a short-period dwarf nova at minimum light, but it has no observed outbursts. Some fraction of the WZ Sge stars will have been missed simply because they have not erupted during the ∼ 5 years that CRTS has operated. To explore the effect of outburst interval on CRTS selection, we used data from RKcat, which gives an average outburst interval (which they denote T_1) for some of the listed dwarf novae. We arbitrarily chose 700 days, roughly half the time span of the CRTS survey, as the dividing line between short and long outburst intervals. In addition, we assigned to the long-interval group any dwarf nova subclassified as 'WZ'. Many dwarf novae could not be assigned because T_1 was not given (and they were not classed as WZ), but sufficient information existed to classify 137 dwarf novae from the CRTS footprint; 95 of these were in the short-interval group, and 42 in the long-interval group. The objects that were recovered in CRTS included 18 from the short-interval group and 9 from the long-interval group; among RK dwarf novae that were not recovered by CRTS, there were 77 short-interval systems and 33 long-interval systems. The fact that so many short-interval systems are not recovered by CRTS argues strongly that long outburst intervals are not the main reason CRTS is missing so many dwarf novae (although they must account for some cases). Indeed, the ratio of short- to long-interval dwarf novae among CRTS/RK matches is remarkably similar to the ratio in the group of RK dwarf novae that are not matched to CRTS.
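The similarity of the two ratios can be made explicit with the counts quoted above:

```python
# Short- vs long-outburst-interval counts for RKcat dwarf novae in the CRTS
# footprint, split by whether CRTS recovered them (counts from the text).
recovered = {"short": 18, "long": 9}
unrecovered = {"short": 77, "long": 33}

frac_rec = recovered["short"] / sum(recovered.values())        # 18/27
frac_unrec = unrecovered["short"] / sum(unrecovered.values())  # 77/110
print(f"short-interval fraction: recovered {frac_rec:.2f}, "
      f"unrecovered {frac_unrec:.2f}")
```

The two short-interval fractions agree to within a few per cent, consistent with the conclusion that outburst interval is not the dominant selection factor.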
The main reason for the incompleteness must therefore lie elsewhere. Dwarf nova outbursts will be missed if they occur between observations. Perusal of CRTS light curves suggests that some parts of the sky are covered rather infrequently. A good number of dwarf nova outbursts must therefore 'slip through the cracks' in the coverage. This is exacerbated by the 2-magnitude criterion; not only must the object be caught in outburst, but it must be caught during that part of the outburst where it is more than 2 magnitudes above minimum. To see whether outburst amplitude might be an important selection factor, we again turned to RKcat, this time finding the outburst amplitude ∆m. For objects that did not have superoutburst magnitudes listed, we let ∆m = mag1 − mag3 (in their notation), but if superoutburst magnitudes were available, we used mag1 − mag4. Fig. 6 shows the cumulative distribution functions of ∆m for the recovered and unrecovered dwarf novae. As expected, CRTS is nearly blind to objects with ∆m < 2; more interestingly, there is a significant bias against smaller outburst amplitudes extending all the way up to ∆m = 6. This effect probably arises because, in any given snapshot, a large-amplitude object will have a greater likelihood of being caught at ∆m > 2 than a small-amplitude object, which would have to be fairly close to its peak brightness in order to exceed the survey threshold. It seems, then, that the CRTS survey is biased toward large outburst amplitudes. How might this affect other quantities? In Fig. 7 we plot ∆m against P_orb for those dwarf novae in the CRTS footprint that have both quantities tabulated in RKcat. While there is a great deal of scatter at any given P_orb, there is a clear trend for short-period dwarf novae to have greater outburst amplitudes, and hence a greater likelihood of being discovered by CRTS. The CRTS sample should therefore be biased toward shorter orbital periods.
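The snapshot argument can be illustrated with a toy model. Assume, purely for illustration, that an outburst declines linearly from ∆m magnitudes above minimum back to minimum, and that a survey visit samples a random moment during the outburst; the fraction of the outburst spent more than 2 mag above minimum then keeps growing well beyond ∆m = 2.

```python
# Toy model of the amplitude bias: an outburst is assumed to decay linearly
# from a peak delta_m magnitudes above minimum back to minimum, and a survey
# snapshot catches the object at a random time during the outburst. Detection
# requires being > 2 mag above minimum at the snapshot. This simple model is
# illustrative only; real outburst light curves differ in shape.
def snapshot_detection_fraction(delta_m, threshold=2.0):
    """Fraction of the outburst spent more than `threshold` mag above minimum."""
    if delta_m <= threshold:
        return 0.0
    return (delta_m - threshold) / delta_m

for dm in (2.5, 4.0, 6.0, 8.0):
    print(f"Delta_m = {dm}: detectable fraction = "
          f"{snapshot_detection_fraction(dm):.2f}")
```

Under this crude model the detectable fraction is still only about two-thirds at ∆m = 6, qualitatively consistent with the bias seen in Fig. 6, and hence with the resulting bias toward short orbital periods.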
This must contribute at some level to the preponderance of short periods found in this paper and by Woudt et al. (2012). The fact that so many of the CRTS CV sample are new discoveries, and its continuing effectiveness in finding new ones, both show that a great many CVs remain undiscovered. Future synoptic surveys with faster cadence and less-stringent variability criteria should discover many more CVs.

Summary

We obtained spectra of 36 CRTS CV candidates, and confirmed that all save one appear to be bona fide CVs. For eight of the objects we obtained spectroscopic periods, and found that three of them had P_orb longward of the 2-3 hour gap. In addition, we examined the overlap between the CRTS CV sample and other previously existing samples. Most CRTS CVs are new discoveries, but CRTS has not recovered even a majority of the known dwarf novae in its footprint. This suggests that a great many CVs remain undiscovered. Analysis of the recovered and unrecovered samples shows that the CRTS sample is biased toward large outburst amplitudes, which in turn biases it toward shorter orbital periods.

We gratefully acknowledge support from NSF grants AST-0708810 and AST-1008217, and thank Dartmouth undergraduates Jason Spellmire and Erin Dauson for conscientious and cheerful assistance at the telescope.

Note. - The CRTS name encodes the date of outburst before the colon, and the J2000 celestial coordinates after the colon. The number of outbursts (column 3) is from a perusal of the light curves (see text). Magnitudes at maximum and minimum are from CRTS. Standardized magnitudes and colors for some of the objects can be found in Table 5. The last two columns give the total numbers of spectra and direct images we obtained.

a. The CRTS light curve for CSS1219-19 shows a secular increase from m ∼ 19.5 to m ∼ 17.5, with apparently significant short-term variability, but no clearly-defined outbursts.

b. CSS2029−15 has been named SY Cap.

Note.
- Column 2: Times are listed as Julian date minus 2 450 000. Times for single visits are given to the hundredth; times given to the nearest day are averages for multi-night observations. CSS1055+09 was observed on two nights.
- Column 3: OS stands for OSMOS, and M stands for Modspec, with detector Templeton (T) or Nellie (N).
- Column 4: V magnitudes synthesized from the spectrum; they are ideally good to ±0.2 mag, but larger errors are possible because of clouds and seeing. Magnitudes could not be synthesized for the spectra of CSS2213+17 and CSS2227+28 because of condensation on the detector window.
- Column 5: Positive equivalent widths refer to emission.

a. CSS0350+35 appears to be a dMe star (see text).
Novel Roles of Nanog in Cancer Cells and Their Extracellular Vesicles

The use of extracellular vesicle (EV)-based vaccines is a strategically promising way to prevent cancer metastasis. The effective roles of immune cell-derived EVs have been well documented in the literature. In the present paper, we focus on cancer cell-derived EVs to enforce, more thoroughly, the use of EV-based vaccines against unexpected malignant cells that might appear in poor-prognosis patients. As a model of such a cancer cell with high malignancy, Nanog-overexpressing melanoma cell lines were developed. As expected, Nanog overexpression enhanced the metastatic potential of the melanomas. Against our expectations, however, a striking finding emerged: EVs derived from Nanog-overexpressing melanomas exhibited a metastasis-suppressive effect. This is considered to be a novel role for Nanog in regulating the properties of cancer cell-derived EVs. Stimulated by this result, the review of Nanog's roles in various cancer cells and their EVs has been updated once again. Although no other case presented a similar contribution by Nanog, one study suggested that NANOG and SOX2 might be better prognosis markers in head and neck squamous cell carcinomas. This review clarifies the variety of Nanog-dependent phenomena and the relevant signaling factors. The information summarized in this study is thus suggestive enough to generate novel ideas for the construction of an EV-based versatile vaccine platform against cancer metastasis.

Introduction

The development of effective vaccines to prevent cancer metastasis is a socially important and urgent issue [1,2]. Although the quantity of target cancer cells or cancerous cells in prognostic patients might be very small, they can produce metastasis, as well as reactivate primary tumor sites, by the self-seeding of circulating tumor cells [3].
For such cases, the use of vaccines is well understood to be a strategically promising method. Immune cells can respond to malignant cells and activate protection systems that destroy them or render them harmless. In fact, immune cells such as dendritic cells have been recognized as efficient resources for extracellular vesicle (EV) vaccines. An open question, however, is whether immune cells can respond appropriately to cells with a high degree of malignancy. Cancer cells remaining in prognostic patients are likely to be cells with a high resistance to drugs and chemical stress [4,5]. Those malignant cells and their EVs should, therefore, be crucial targets that might conversely undermine immune-protection systems. Accordingly, there is great interest in the field in novel approaches that convert negative malignant factors into positive ones that support immune functions. The potential roles of cancer cell-derived EVs, as well as immune cell-derived EVs, should be considered for the construction of EV-based versatile vaccine platforms against cancer metastasis.

2. Why Nanog?

The first step of our experiment was to create a malignant cancer cell line with high metastatic potential. Mouse melanoma cell lines, B16-F10 and B16-BL6, were selected as the baseline for developing a novel cell library. These cell lines were genetically modified to create the Nanog-overexpressing cell lines Nanog + F10 and Nanog + BL6. Nanog is a principal factor essential for the maintenance of the undifferentiated state (pluripotency, stemness) of embryonic stem cells. Nanog was thought to be able to increase the stemness of various other cells, and the stemness of cancer cells was suggested to be a crucial factor of malignancy.
As expected, Nanog overexpression could enhance the metastatic potential of melanomas, indicating that the melanoma was made more malignant. EVs derived from B16-F10 cells exhibited a metastasis-promoting effect, in the same way as reported in other studies of other cancer cell-derived EVs. Unexpectedly, however, EVs derived from Nanog + F10 cells exhibited a metastasis-suppressive effect (Figure 1). Such a Nanog-dependent effect of EVs was also observed for colon cancer. This work began with the simple idea of Nanog overexpression to increase stemness; however, it revealed an attractive phenomenon from the perspective of EV-based vaccines. Therefore, it would be important to investigate the detailed role of Nanog in this phenomenon.

Figure 1. Nanog expression level in cancer cells can be changed by genetic, chemical, microenvironmental, or physical factors. The higher the Nanog expression level, the higher the metastatic potential. The role of EVs in cancer metastasis has been thought to follow the metastatic potential of cancer cells, that is, EVs derived from metastatic cancer cells exhibit metastasis-promoting effects. However, in the case of cancer cells with a very high metastatic potential, contrary to our expectations, EVs may suppress cancer metastasis.

Roles of Nanog in Cancer Cells

A large number of research papers have reported potential roles of Nanog in various types of cancer. Two or three recently published papers were selected for each type of cancer, and they are summarized in Table 1. Short comments on the respective papers are given below, under Sections 3.1-3.13.

Breast Cancer

The overexpression of NANOG increased cell adhesion. Additionally, p53, a tumor-suppressive gene, decreased. Concomitantly, the expression of downstream factors, such as Gadd45a, also decreased. The enhancement of Nanog expression promoted migration and invasion activities. Tumorigenesis was not induced by Nanog overexpression alone but by co-expression with Wnt-1 [6]. Treatment with mTOR inhibitors and chemotherapeutic agents increased NANOG expression in a similar manner to hypoxia. Concurrently, the translation of a subset of SNAIL and NODAL mRNA isoforms was activated. The accumulation of these proteins enhanced the stem cell phenotype, increased drug resistance, and promoted metastasis [7].

Cervical Cancer

CD59 binds to C8 and C9 and thereby inhibits the formation of the membrane attack complex (MAC), which requires C9. Therefore, complement-dependent cytotoxicity (CDC) via the MAC is inhibited by CD59. In a NANOG-overexpressing cell line (CaSki-NANOG), CD59 was up-regulated, and resistance to CDC increased. NANOG directly bound to the CD59 promoter to enhance its expression activity [8]. According to the cancer immunoediting theory, heterologous tumor cells are continuously subject to host immune surveillance [32,33].
Cells vulnerable to immune surveillance are eliminated, while cells that evade detection and killing proliferate. Based on this idea, a method to create cancer cells with high immune-resistance levels, the vaccination-induced cancer evolution (VICE) method, was developed. TC-1(P3), obtained by repeating the subculturing process of this method three times (removing the cancer cells that had been inoculated into mice and then inoculating them into mice again), was the first such cell line. Its Nanog expression was increased 10-fold compared to TC-1(P0). An increase in stemness markers (CD133, CD44, aldehyde dehydrogenase (ALDH)) was also observed [9]. It was shown that the higher the degree of malignancy of the cancer cells, the higher the expression level of Nanog.

Colon Cancer/Colorectal Cancer

In colon cancer, LGR5 and NANOG are assumed to be stem cell markers. Therefore, the possibility of therapeutics targeting these markers was investigated. As an example, furin, which belongs to the subtilisin-like proprotein convertase family, was examined. Furin is involved in the activation of functions, such as calcium transport, in colon cancer. Inhibitors of furin, such as PDX-1, Spn4A, and decanoyl-RVKR-chloromethylketone (CMK), were applied to investigate the effect of furin inhibition in vivo. As a result, it was understood that furin inhibition reduced the expression of stem cell markers and the malignancy of cancer cells [10]. In colorectal cancer, serum deprivation induced increased chemoresistance and enhanced dormancy through the increased expression of dormancy markers, and it also induced enhanced Nanog expression. The knockdown of Nanog abolished dormancy, whereas the overexpression of Nanog promoted dormancy through the transcription of P21 and P27. In the dormant state, cancer cells are malignant. Thus, enhanced Nanog expression is a factor in malignant transformation [11].
Embryonic Carcinoma

NANOG was shown to promote tumorigenesis in embryonic carcinomas. miRNAs that suppress NANOG expression were sought, and the upstream factors of NANOG were surveyed. PKC was confirmed to be involved in the regulation of NANOG expression. A genome-wide analysis of miRNA expression was performed in the embryonal carcinoma cell line NT2/D1 in the presence of the PKC activator phorbol 12-myristate 13-acetate (PMA). As a result, an increased expression of MIR630 was confirmed. The transfection of MIR630 into embryonic carcinomas suppressed NANOG. The reactive site was the NANOG 3′UTR [12].

Somatic Cancer

HeLa (cervical cancer) and HCT116 (human colon cancer) cells were used as somatic cancer cells. Rad51 is a protein involved in the homologous recombination (HR) repair of DNA damage. This protein prevents cancer cells that have been damaged by chemo- or radiation therapies from dying. Therefore, Rad51 inhibitors were considered effective for cancer treatment. Nanog was shown to be effective as a Rad51 inhibitor: Nanog interacted with Rad51 at the C or CD2 domain. Nanog-C/CD2 peptides were directly delivered to somatic cancer cells via nanoparticles or cell-membrane-permeable peptides. The introduction of Nanog or these moieties contributed to tumor suppression [13].

Hepatocellular Cancer

NANOG was activated by the TLR4-E2F1 pathway. NANOG suppressed mitochondrial oxidative phosphorylation (OXPHOS) genes and enhanced fatty acid oxidation (FAO) in tumor-initiating stem-like cells. FAO enhanced self-renewal and chemoresistance properties. On the other hand, restoring OXPHOS suppressed the self-renewal property [15].

Melanoma

The relationship between different motility modes and metastatic potential was investigated in the human melanoma line A375. Motility includes mesenchymal and amoeboid migration modes [34].
A375 showed a mesenchymal motility mode, but the overexpression of NANOG or OCT4 increased amoeboid migration, resulting in an increased metastatic potential [17]. Nanog was up-regulated in mouse melanomas under hypoxia. This increased regulatory T cells (Tregs) through the increased expression of Tgf-β1. Tregs are CD4+ T cells that release the anti-inflammatory cytokine IL-10 and suppress immune responses. As a result, the proliferation and metastasis of cancer cells were promoted. The targeted inhibition of Nanog reduced Treg-like immunosuppressive cells and increased CD8+ T cells (cytotoxic T cells), resulting in the suppression of cancer growth and metastasis [18].

Pancreatic Cancer

In rare and highly malignant cancer stem cells, the hedgehog/glioma-associated oncogene homolog (HH/GLI) signaling pathway regulates self-renewal, initiates and sustains tumor growth, and promotes drug resistance and metastasis [35]. The inhibitory effect of natural α-mangostin on this signaling pathway was examined. As a result, the expression of the target genes of this signal transduction system (Nanog, Oct4, c-Myc, Sox-2, and KLF4) was inhibited, and an antitumor effect was observed. Conversely, the overexpression of Nanog abolished this inhibitory effect, suggesting that the effect of α-mangostin was mainly obtained by inhibiting Nanog expression. At the same time, it was concluded that methods targeting Nanog are preclinically effective for the prevention and treatment of pancreatic cancer [22].

Prostate Cancer

The relationship between cell-cell adhesion and the malignancy of the prostate cancer cell lines DU145, PC3, and 22Rv1 was investigated. The overexpression of NANOG enhanced the ability to evade attacks from the NK cell MTA cell line (a CD4- and CD56-positive T-cell line) [23]. NANOG suppressed the expression of ICAM1, a cell-adhesion molecule. Without ICAM1 on the cell surface, NK cells cannot recognize the target cell, and cancer cells escape attack from NK cells.
Squamous Cell Carcinoma In esophageal squamous cell carcinomas (ESCCs), the knockdown of NANOG clearly reduced cancer cell proliferation and the ability to resist drugs. It was presumed that IL-6/STAT3 was down-regulated [25]. In the case of head and neck squamous cell carcinoma (HNSCC) cells, a comparative analysis of CD44 + cells (an indicator of stemness) and control CD44(−) cells revealed that Nanog and ERK1/2 were highly expressed in CD44 + cells. Thus, it was determined that these cells exhibited increased migration, invasion, radiotherapy resistance, and EMT [26]. Nanog and ERK1/2 appeared to exhibit synergistic effects. For HNSCC, 348 postoperative patients were also investigated [27]. As a result, NANOG protein was highly expressed in 72% of cases, and SOX2 was highly expressed in 30%. The prognosis was better with high expression of NANOG and SOX2; in other words, NANOG and SOX2 can be used as favorable prognostic markers. Moreover, NANOG was also tumor site-specific and correlated with a favorable prognosis for pharyngeal (rather than laryngeal) tumors [27]. This is probably a unique case in which NANOG is considered a good prognostic marker. NANOG also serves as an independent prognostic factor in nasopharyngeal carcinomas [28]. The case of oral squamous cell carcinomas (OSCCs) was also investigated in 120 patient samples following surgery. As a result, the expressions of NANOG and OCT4 were higher in patients with lymph node metastases than in those without, suggesting the possibility of NANOG as a marker of poor prognosis. However, protein and mRNA expression levels sometimes did not match. At the mRNA level, NANOG was positively correlated with other cancer malignancy-associated factors: OCT4, SOX2, NOTCH1, AGR2, and KLF4 [29]. Cancer Stem Cells The validity of NANOG as a cancer stem cell marker was discussed. 
It was proposed that NANOG might be considered as one of the markers, based on the following observations of multiple types of cancer cells in which NANOG is overexpressed. Following the enhancement of NANOG expression, there appeared increased expressions of BMI and SNAIL1/2, followed by the suppression of E-cadherin expression in various cancer cells (glioblastoma, non-small-cell lung cancer, HNSC, colon cancer, and A549). In addition, an increased expression of NANOG ⇒ STAT3 ⇒ miR21 was followed by the down-regulation of programmed cell death 4 (PDCD4), resulting in the enhanced anti-apoptotic and chemoresistance properties of cancer cells. All of these are factors that increase the migratory ability, proliferative ability, and epithelial-mesenchymal transition (EMT), resulting in an increased malignancy of cancer cells [30]. PD-1-Treated Patients and Their Model Mice Programmed cell death protein 1 (PD-1) inhibitors and PD-L1 inhibitors are a group of checkpoint-blocking anticancer agents that block the activity of the PD-1 and PD-L1 immune-checkpoint proteins present on the cell surface. This class of immune-checkpoint inhibitor has emerged as a frontline therapy for several types of cancer. Using the transcriptional data obtained from cancer patients treated with PD-1 therapy and a newly established murine preclinical anti-PD-1 therapy-refractory model, NANOG was identified as a factor that enhanced patients' resistance to immune-checkpoint inhibitors. NANOG regulated this immune checkpoint by suppressing T-cell infiltration and increasing resistance to killing by cytotoxic T lymphocytes (CTLs) through a histone deacetylase 1-dependent (HDAC1-dependent) regulation of CXCL10 and MCL1 [31]. Summary of NANOG Roles The role of Nanog, which has been clarified for various cancer cells, is related to the growth and migration of cancer cells themselves, as well as the interaction with various extracellular factors, from the perspective of the effects on cancer cell functions (Figure 2). 
The following points summarize the contents of Table 1. (a) High levels of Nanog expression are associated with increased malignancy, which has been observed in many types of cancers. The only exception is the case of HNSC. (b) Nanog targeting, alone, does not necessarily lead to cancer cytocide. (c) The degree of malignancy of cancer cells is not solely governed by Nanog. (d) Cancer cells with high levels of Nanog expression have high metastatic potential. It shows potential as a marker of malignant prognosis. Indeed, Nanog has shown promise as a marker for predicting the efficacy of PD-1 therapy. (e) Molecular mechanisms, leading to malignant transformations, greatly differ depending on the type of cancer. There are almost no research reports about why NANOG signaling differs depending on cancer types. This point should be clarified for the use of Nanog as a therapeutic target. (f) From a therapeutic perspective, the enhancement of immune functions is essential and, therefore, novel ideas are required to combine Nanog-targeting therapy with immunotherapy. Transcriptome Analysis The transcriptome analysis of mouse melanoma cells was conducted to clarify the differential expression intensities between a cell line, B16-BL6, and its Nanog-overexpressing cell line Nanog + BL6. The up-regulated top-16 genes and down-regulated top-15 genes were depicted [19]. The functional roles of the up-regulated top-7 genes and down-regulated top-3 genes are illustrated in Figure 3A,B, respectively. 
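The ranking step behind such top-N gene lists can be sketched as a toy computation. This is not the authors' analysis pipeline; the gene names below are taken from the text, but the expression values are invented purely for illustration, and the ranking is by log2 fold change between the control and Nanog-overexpressing lines:

```python
import math

# Hypothetical mean expression values (e.g., normalized counts) for a few genes
# in B16-BL6 (control) and Nanog+BL6 (Nanog-overexpressing) cells.
# These numbers are illustrative only, not the published measurements.
expression = {
    #  gene:      (control, nanog_oe)
    "Slc37a4":   (120.0,  480.0),
    "mt-Co2":    (800.0, 1900.0),
    "Vamp8":     (150.0,  310.0),
    "Jak":       (400.0,   60.0),
    "Tbc1d1":    (220.0,   70.0),
    "Tgfb1":     (500.0,  180.0),
}

def log2_fold_change(control, treated, pseudocount=1.0):
    """log2 expression ratio; the pseudocount avoids division by zero."""
    return math.log2((treated + pseudocount) / (control + pseudocount))

# Rank all genes from most up-regulated to most down-regulated.
ranked = sorted(
    ((gene, log2_fold_change(c, t)) for gene, (c, t) in expression.items()),
    key=lambda item: item[1],
    reverse=True,
)

top_up = [g for g, lfc in ranked if lfc > 0][:3]            # most up-regulated
top_down = [g for g, lfc in reversed(ranked) if lfc < 0][:3]  # most down-regulated
print("up:", top_up)
print("down:", top_down)
```

With real RNA-seq data, the same ranking would be applied to the full gene table (usually combined with a significance test) to obtain lists such as the top-16 up- and top-15 down-regulated genes.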
Slc37a4 is a protein that transports glucose-6-phosphate (G6P) from the cytosol to the endoplasmic reticulum (ER). G6P is dephosphorylated in the ER and released as glucose out to the intercellular space or blood vessels. When cancer cells form colonies, glucose diffusion from the outer solution to the central cells takes a much longer time when compared to cells on the outer surface of the colony. In such a case, if a series of cells in contact with each other can relay glucose, glucose transport can be performed rapidly. The increased expression of Slc37a4 may contribute to the activation of such a glucose relay. The accelerated glucose supply throughout the colony will accelerate cell growth. The acceleration of energy production as ATP is facilitated by five genes (mt-Co2, mt-Atp8, mt-Atp6, mt-Co3, and mt-Nd4) that may contribute to the acceleration of oxidative phosphorylation. Vesicle-associated membrane protein 8 (Vamp8) is involved in surviving the emergency of starvation. When cancer cells are placed in a state of starvation, they transport their own cytoplasm into autophagosomes, digest it, and use the nutrients. The most down-regulated gene is Jak. Immunosuppression and malignant tumors are caused by the dysfunction of the Jak-STAT-signaling pathway. The down-regulation of Jak causes a similar condition and also produces more malignant melanomas. Glut4 facilitates glucose uptake. Tbc1d1 suppresses this uptake of glucose. Therefore, the suppression of Tbc1d1 stimulates glucose uptake activity and promotes cancer cell growth. Regarding Tgf-β1, however, it is necessary to consider its dual roles: tumor-suppressive in early-stage tumors but tumor-promotive in advanced cancers [36][37][38]. Its tumor-promotive roles include the promotion of angiogenesis, immunosuppression, and apoptosis induction. Although the transcriptome analysis indicated the down-regulation of Tgf-β1, the melanoma cells were made more malignant, as supported by the in vitro and in vivo tests. 
The roles summarized in Table 1 are rearranged according to the phenomena of cancer cells. (3.1), (3.2), denote the numbers of the sub-sections describing each phenomenon. Experimental Analyses The in vitro and in vivo tests were conducted to investigate the effects of Nanog overexpression on the functional roles of melanoma cells. The cell lines were B16-F10, B16-BL6, Nanog + F10, and Nanog + BL6. The characteristic functions to be studied for a metastatic property evaluation are summarized in Figure 4. The glucose uptake activity is greater in cancer cells than in normal cells, and an analytical method used for visualizing glucose uptake activity has been introduced into cancer diagnostic methods. A pathological observation method that utilizes fluorescent glucose (Figure 5) was developed to distinguish between cancer cells, normal cells, and cells likely to become cancerous [39,40]. Normal cells only take up 2NBDG (D-type fluorophore), whereas cancer cells take up both 2NBDG and 2NBDLG (L-type fluorophore). In fact, it was confirmed that the four melanoma cell lines tested could take in both 2NBDG and 2NBDLG. 
Furthermore, it was suggested that the total uptake of 2NBDG and 2NBDLG might be used as a marker of the degree of cancer cell malignancy. Cell-cell glucose relay was suggested to be one of Slc37a4's roles. Accordingly, it suggested the promotion of glucose uptake as well. The knockdown of Slc37a4 caused a decrease in the uptake rate of 2NBDG, but there was no effect on the uptake rate of 2NBDLG [unpublished data]. Therefore, the up-regulation of Slc37a4 contributed to the increase in glucose uptake. Since the expression level of Tgf-β1 was a controversial matter, it was analyzed at mRNA and protein levels. As a result, its expression was down-regulated at both levels [19]. Another study conducted elsewhere [18] demonstrated a conflicting result. Nanog expression in melanomas increased under a hypoxic condition, and the increase in Tgf-β1 expression followed. We suspected that this inconsistency might be caused by the difference in the expression level of Nanog. The Nanog expression level, up-regulated by a hypoxic condition, might be much lower than that up-regulated by genetic overexpression. The response of Tgf-β1 was concluded to be Nanog-expression level-dependent. 
The expression of matrix metalloproteinases (MMPs) was studied since MMPs were believed to be relevant to invasion, although they were not included in the top 31 genes. They observed the increase in MMP9, a secretion-type MMP [19]. The interaction with immune cells was thought to principally occur via the EVs described below. The involvement of macrophage and Treg was investigated. The results of the studies performed show that Nanog overexpression made melanoma cells more aggressive. In addition, melanoma cell lines co-overexpressing Nanog with Oct3/4 and/or Sox2 were created in order to further enhance stemness. Consequently, however, no combination could create a cell line with a greater metastatic potential than the cell line with the overexpression of Nanog alone. 
Tumor-Promoting Effect The functional roles of dendritic cell-derived EVs and cancer cell-derived EVs have been well discussed in the literature [1,41,42]. Dendritic cell-derived EVs are regarded as promising materials for immunotherapy. In contrast, cancer cell-derived EVs are considered unsuitable. In fact, all of the 31 cases summarized in [42] showed that the effect of cancer cell-derived EVs on immune cells was the suppression or inactivation of immune activity. The active substances delivered by EVs were unknown in 10 out of 31 cases. The other cases comprised six Fas cases, six TGF-β cases, and five miRNA cases. In the case of melanomas involving the Fas ligand, Jurkat and other lymphoid cells induced apoptosis associated with caspase activation [43]. Colon cancer also expressed the Fas ligand and TNF-α at the same time, resulting in the induction of T-cell apoptosis both in vivo and in vitro. When cancer cell-derived EVs are taken up by other cancer cells of the same type, they change the properties of those cancer cells. 
There were eighteen types of cases summarized, and in all cases, EVs increased the proliferation, migration, invasion, EMT, and metastasis of cancer cells that took them in [4]. It also promoted the polarization of macrophages to the M2 type (tumor-promotive). In these examples, EV-delivered active substances included integrin αVβ6, apolipoprotein E, EGFR, Wnt4, IL-6, and TGF-β, as well as cell-specific miRNAs and long non-coding (lnc) RNAs. On the other hand, cancer cell-derived EVs are also transported to normal fibroblasts, and once taken up, the fibroblasts release EVs that have suppressive effects on immune cells [44]. Furthermore, cancer cells that have undergone anticancer drug treatment may be highly resistant to the drug. EVs released from such highly resistant cancer cells may change non-resistant allogeneic cells to resistant cells. The effects of EVs derived from cancer cells that were resistant to anticancer agents, such as tamoxifen [45,46], cisplatin [47,48], and gemcitabine [49], were investigated. Cancer cells exposed to those EVs became more resistant to the respective anticancer agents. EVs secreted from liver cancer stem cells induced Nanog in differentiated cancer cells, resulting in increased resistance to the anticancer drug regorafenib [50]. Small EVs secreted from gastric cancer cells enhanced the stemness of other gastric cancer cells and increased their resistance to oxaliplatin [51]. In another case, temozolomide (TMZ)-resistant and sensitive tumor cells were obtained from each of the TMZ-resistant (n = 36) and sensitive (n = 33) glioma patients. Circular RNA circ_0072083 expression was increased in resistant cells, and its knockdown reduced resistance, concomitantly reducing NANOG expression. EVs containing circ_0072083 released from resistant cells increased the resistance of sensitive cells to TMZ both in vitro and in xenograft models [5]. 
Metastasis-Inhibitory Effect Cancer cells that have undergone chemotherapy, radiation therapy, and heat stimulation may increase their resistance to each factor. This creates an increase in malignancy. EVs released from such malignant cancer cells enter other cancer cells and strengthen their resistance to the same factor, as described above. However, EVs are also taken up by immune cells, such as dendritic cells. As a result, dendritic cells receive malignant cancer cell information and damage-associated molecular patterns (DAMPs), such as DNA and RNA, which may enhance antitumor activity by activating intracellular virus-sensing pathways and producing inflammatory cytokines. EVs released from breast cancer cells treated with the anticancer drug topotecan contained DNA that activated dendritic cells via stimulator of interferon genes (STING) signaling [52]. In addition, when breast cancer model cells were irradiated with therapeutic radiation, the EVs released from these breast cancer cells were taken up by dendritic cells, and then they activated the cyclic GMP-AMP synthase (cGAS) within the dendritic cells. In this case, dendritic cells were also activated via STING signals. In vivo, the EVs elicited a CD8 + T-cell response and presented tumor-preventive effects [53]. The final case is the metastasis-suppressive effect of melanoma-derived EVs that initiated this review. There are still only a few reported cases of metastasis-inhibitory effects by cancer cell-derived EVs. Comparison of Metastatic Potential between B16-F10 and Nanog + F10 Metastatic colonies were analyzed two weeks after the introduction of mouse melanoma B16-F10 and Nanog + F10 cells via the mouse tail vein. Preliminary studies revealed that the highest number of metastatic colonies was generated on the liver. Therefore, the liver was focused on as the predominant target organ, and the number and volume of metastatic colonies were quantitatively analyzed. 
As a result, those of Nanog + F10 increased 2.5 and 2.4 times, respectively, compared to B16-F10 [20]. At the same time, in vitro tests were conducted separately to investigate cell proliferation and migration. The results support the enhancement of the metastatic potential of Nanog + F10. Although there are few papers that report quantitative studies on the role of TGF-β1 in EVs, we obtained a couple of papers that may support the validity of such a threshold level. Exosomes derived from melanoma A375 cells contained 10-15 pg/µg TGF-β and inactivated T cells, suggesting a metastasis-promotive role [54]. In contrast, EVs derived from murine colon carcinoma cells that had been genetically modified to overexpress an shRNA against Tgf-β1 could induce tumor growth inhibition [55]. This suggests a metastasis-suppressive role at a sufficiently low level of Tgf-β1. Tgf-β1 is involved in the regulation of EMT, suppressing EMT in the early stages of tumors but conversely promoting it in the late stages; its concentration dependence, however, is unclear [37,56,57]. Considering that Tgf-β1 is associated with various factors, it is conceivable that its concentration dependence is not simple. Although the concentration dependence of Tgf-β1 revealed in this study covers only a limited concentration range, it is highly suggestive in considering the multifaceted role of Tgf-β1. Role of Immune Cells in Preventing Metastasis Regarding (ii), we first examined the involvement of macrophages according to the test schedule. As a result of examining the expressions of six types of macrophage markers (pan-macrophage [CD68, F4/80], M1-type [CD80, CD86], and M2-type [CD163, CD206] macrophage markers), it was revealed that only the expression of the tumor-promotive M2-type marker CD163 was significantly reduced [20]. 
Tumor-associated macrophages that exhibit tumor-promotive effects are M2-type macrophages, the majority of which are CD163-positive macrophages [58]. In addition, a positive correlation between the infiltration of CD163-positive macrophages into cancer and PD-L1 expression in cancer has been reported from observing tissues of various cancer patients [59][60][61][62]. PD-L1 is an immunosuppressive receptor that suppresses T-cell proliferation and cytokine secretion [62]. Therefore, it is possible that the reduction in CD163 by Nanog + F10-EVs reduced the suppressive effect on T cells, resulting in increased anti-tumor immunity activity. Regarding (ii), we examined the effects of Nanog + F10-EV on Foxp3, which is a specific marker of Treg activation in the spleen, and observed that the expression of Foxp3 was significantly suppressed. Treg inhibits cytotoxic T cells and macrophages by secreting cytokines, such as IL-10 and IL-35, and the cytotoxic T-lymphocyte antigen 4 (CTLA-4) ligand. Treg also inhibits acquired immunity by suppressing dendritic cells [63]. An artificial Treg inhibitor introduced into mice increased the tumor infiltration of cytotoxic T cells and suppressed subcutaneous melanoma cell tumors [64]. Therefore, it was inferred that the suppression of Foxp3 in the spleen contributed to the metastasis-suppression effect of Nanog + F10-EV. Quantitative Analysis of the Effects of EVs Taken Up by Macrophages The involvement of CD163 was investigated by in vitro experiments using the macrophage cell line J774.1. In a similar manner to the in vivo test described above, Nanog + F10-EV caused a suppressive effect on CD163 expression in J774.1. Subsequently, this suppressive effect of Nanog + F10-EV is further analyzed quantitatively. J774.1 cells are fractionated with a cell sorter according to the differences in EV uptake (Figure 6). Then, each fraction is tested for its invasion ability with Transwell test kits. 
The number of filtrated cells is counted and compared to the control. Higher uptake of Nanog + F10-EV will result in higher infiltration. 
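The readout of this protocol lends itself to a simple quantitative summary. The sketch below is illustrative only: all counts are hypothetical (the study's actual numbers are not reproduced here). Each fraction's invasion count is normalized to the EV-free control, and the stated working hypothesis (higher EV uptake leads to higher infiltration) is checked as monotonicity from P1 to P3:

```python
# Hypothetical Transwell counts for the fractionation protocol (Figure 6);
# these numbers are illustrative only, not measured data.
control_count = 50                         # invaded melanoma cells, no-EV control
counts = {"P1": 55, "P2": 80, "P3": 120}   # P1 = lowest EV uptake, P3 = highest

# Normalize each fraction to the control to get relative infiltration.
relative = {frac: n / control_count for frac, n in counts.items()}

# Working hypothesis: higher EV uptake -> higher infiltration, i.e. the
# normalized counts should increase monotonically from P1 to P3.
ordered = [relative[f] for f in ("P1", "P2", "P3")]
uptake_dependent = all(a < b for a, b in zip(ordered, ordered[1:]))
print(relative, uptake_dependent)
```

With real sorter fractions, the same normalization would be repeated across biological replicates, and a trend test would replace the simple monotonicity check.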
Figure 6. Experimental protocol to analyze the effects of EV-uptake quantity on the invasion ability of macrophages. EVs are labeled with a fluorescent probe. J774.1 cells are fractionated, with a cell sorter, into P1, P2, and P3 fractions, respectively, according to the intensities of fluorescence of EVs. Each fraction of J774.1 cells is co-cultured with Nanog + F10 cells and tested for invasion ability with Transwell® invasion assay kits. The number of melanoma cells that invade the Transwell membrane is counted. Figure 7 summarizes a mechanism in which Tgf-β1, CD163, and Foxp3 are involved. This is specific to melanomas. Prospects for EV Cancer Vaccines There are many studies on cancer cell-derived EVs. However, there are only a few papers [20,52,53] reporting metastasis-suppression effects. Among them, only one paper [20] addresses the Nanog-dependent phenomenon. Therefore, metastasis suppression by cancer cell-derived EVs is, to date, an extremely rare phenomenon. Recently, however, a similar phenomenon was observed for colon cancer-derived EVs. We expect that similar anti-metastasis effects will be observed for EVs derived from Nanog-overexpressing cells of other cancers in the near future. 
To elucidate the molecular mechanism of the metastasis-suppression phenomenon, much effort must be focused on the quantitative analyses of various cargos of EVs. In the case of melanomas, Tgf-β1 was selected as a predominant factor, and an idea for its quantity threshold in EVs could be proposed. However, various other components coexist in EVs. It is necessary to analyze them to evaluate their possible involvement. Based on these analyses, we will be able to discuss whether metastasis is suppressed or promoted as a total effect. Nanog + F10-EV and F10-EV are a suitable pair for the differential analysis of EV cargo components. Our plan is to analyze those components, such as miRNAs and cytokines. Although the analytical results only concentrate on melanoma-relevant matter, they are sure to contribute to the construction of an EV-based versatile vaccine platform against cancer metastasis.
Mapping Environmental Impacts of Rapid Urbanisation and Deriving Relationship between NDVI, NDBI and Surface Temperature: A Case Study Urbanisation is a complex global phenomenon driven by unorganised expansion, increased immigration, and population explosion. Changes in land cover are one of the most critical components for managing natural resources and monitoring environmental impacts in this context. In the present study, a hybrid classification approach was applied to Landsat data to get insight into the urbanisation of the Chandigarh capital region from 2000 to 2020. The results demonstrate an increasing urbanisation tendency on the city’s outskirts, particularly in the north-western and southern directions. The most considerable alterations were seen in the class vegetation as it swiftly transformed to built-up regions. Two indices, namely NDVI and NDBI, and surface temperature images were also derived to study their inter-relationships. The paper suggests a positive linear relationship between surface temperature and NDBI, while a negative correlation between NDVI and NDBI. Such studies may help city planners to take timely and appropriate efforts to reduce the environmental consequences of urbanisation. Introduction Urbanisation is popularly defined as the increase in the population of urban areas. [1] defined urbanisation as follows, "Urbanisation is not a product. It is a process by which people, instead of living in predominantly dispersed agricultural villages, start living in towns and cities dominated by industrial and service functionaries. It involves multiplication of urban places and/ or an increase in size of cities." The phenomenon of urbanisation is global. The current population of the world is 7.9 billion, and it is predicted to increase to 8.5 billion by 2030 [2], out of which 5 billion people will be living in cities. India is not far behind in this global phenomenon. 
It has been projected that India will add 416 million urban dwellers by 2050, the highest amongst all the countries [3]. An increase in urban areas leads to the development of built infrastructure, which traps the incoming solar radiation and the heat released from vehicular exhausts and other such sources, leading to the urban heat island effect. With its inherent ability of synoptic, periodic and cost-effective coverage, remote sensing is gaining popularity to study such an increase in urbanisation. Several authors [4][5][6][7][8][9][10][11] have reported the suitability of remote sensing data to map, monitor and detect changes associated with rapid urbanisation [12]. In the case of Chandigarh, the study area of this research, it is projected that by the year 2021 its population would be around 1.95 mn (at the current growth rate), almost four times the population for which the city was initially built. Thus, the present study aims to map the environmental impacts of increasing urban areas in Chandigarh and its neighbouring cities over the past two decades using satellite data. A comparison between the surface temperature and built-up area has also been carried out to assess their inter-relationship. To date, no such research has been reported from the study area. Study area The city of Chandigarh lies 250 km north of New Delhi, the national capital of India. It lies between longitudes 76°43'17" E - 76°50'19" E and latitudes 30°39'57" N - 30°47'05" N. It was designed by a French architect and has the distinction of being the first planned city of India. The study area of Chandigarh capital region (CCR) includes Chandigarh and the neighbouring cities of Zirakpur, Kharar, Mullanpur, Sahibzada Ajit Singh (SAS) Nagar and Panchkula (Figure 1). Data set Multi-temporal Landsat datasets covering a time frame of two decades from 2000 (ETM+) to 2020 (OLI) were acquired from the USGS Earthexplorer website. The data were obtained from nearly the same day (15 Oct. 2000 and 14 Oct.
2020) each year to eliminate seasonal variance. Georeferencing of the satellite data was done using the 1:50000 scale topographical maps obtained from the Survey of India department. The municipal boundaries of individual cities were digitised using the maps obtained from the respective urban planning departments. Fieldwork is essential for any type of remote sensing analysis. In the present study, land use/land cover (LULC) information and ground control points (GCPs) were collected during the fieldwork. Methodology 2.3.1. Deriving radiance images. For ETM+, the digital number (DN) values have been converted to top-of-atmosphere (TOA) spectral radiance using equation (1) [13], L* = ((Lmax - Lmin)/DNmax) × DN + Lmin (1), and for OLI data using equation (2) [14], L* = ML × DN + AL (2), where L* is the TOA radiance received at the sensor; Lmin and Lmax are the minimum and maximum spectral radiance for the sensor, respectively; DN is the quantised and calibrated standard product pixel value; DNmax is the maximum grey level; ML is the band-specific multiplicative rescaling factor; AL is the band-specific additive rescaling factor. Atmospheric correction of the radiance images was done using the FLAASH function of ENVI software. The thermal-band radiance was then converted to TOA brightness temperature as Tr = K2/ln(K1/L* + 1), where Tr = TOA brightness temperature (K); K1 and K2 = band-specific thermal conversion constants from the metadata, respectively. Calculation of indices. Normalised Difference Vegetation Index (NDVI), given by [15], was calculated to assess the vegetation cover, and Normalized Difference Built-up Index (NDBI) [16] was calculated to delineate the built-up area. Later, relationships between NDVI, NDBI and temperature images were assessed. Image classification. The objective of the present research is to map the urban areas and vegetation; therefore, the images were divided into only three level-1 classes [17]: Built-up, Vegetation and Others. A hybrid approach utilising both unsupervised and supervised classification was used.
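The index formulas themselves are not written out in this excerpt; assuming the standard definitions NDVI = (NIR − Red)/(NIR + Red) and NDBI = (SWIR − NIR)/(SWIR + NIR), a minimal numpy sketch of the per-pixel computation (band arrays and the gain/offset values are placeholders, not the paper's actual data):

```python
import numpy as np

def ndvi(red, nir):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red, nir = red.astype(float), nir.astype(float)
    return (nir - red) / (nir + red + 1e-12)  # epsilon guards against 0/0

def ndbi(nir, swir):
    """Normalized Difference Built-up Index: (SWIR - NIR) / (SWIR + NIR)."""
    nir, swir = nir.astype(float), swir.astype(float)
    return (swir - nir) / (swir + nir + 1e-12)

def toa_radiance_oli(dn, ml, al):
    """Equation (2): L* = ML * DN + AL, with ML/AL read from the scene metadata."""
    return ml * dn.astype(float) + al
```

Both indices fall in [−1, 1]; dense vegetation pushes NDVI towards +1, while impervious built-up surfaces push NDBI positive, which is what makes the two usable as complementary urbanisation indicators.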
Firstly, unsupervised ISODATA clustering was used to classify the images, yielding 20 spectral clusters. The truly homogeneous clusters corresponding to a particular LULC were merged and labelled based on the field data. Accuracy assessment was carried out using the well-known error or confusion matrix approach [18]. The equalised random sampling method was used to select 30 samples from each class. Google Earth imagery was used as a reference for the classified images. Figure 2 depicts the resulting land cover maps. The maps show that the land use pattern is changing rapidly, with the Built-up area significantly modifying the LULC of the study area. The city of Chandigarh has urbanised in all directions, but maximum urbanisation (increase in class Built-up) occurred outside the city in Zirakpur (south) and Kharar (north-west). One of the major causes of this increase is the expansion of the existing domestic airport to an international airport. An airport road (black arrow in figure 2) was built to cater to the increasing traffic. This caused the development of urban areas on both sides and the subsequent growth of nearby towns. Table 1 shows the classification accuracies derived from the error matrices. The overall accuracy of the 2000 and 2020 maps was 94.4 per cent and 95.5 per cent, respectively, both above the standard threshold of 85 per cent. The high accuracy could be attributed to the classification comprising only three level-1 classes. Spatiotemporal patterns of temperature images The surface temperature derived from satellite images gives an overview of global, regional and local variations over time. It is critical to obtain surface temperatures and use them in various analyses to evaluate environment-related problems [19]. The temperature images for 2000 and 2020 are given in figure 3. Vegetated areas tend to lower the temperature due to evapotranspiration that maintains the heat flux [20].
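Accuracy figures of this kind come straight from the error matrix; a small sketch of the computation (the sample matrix below is illustrative, not the paper's actual data, but it follows the 30-samples-per-class sampling design):

```python
import numpy as np

def accuracy_metrics(cm):
    """cm[i, j]: number of samples whose reference class is j, mapped to class i.
    Returns overall accuracy, producer's accuracy (per reference class, i.e.
    omission errors) and user's accuracy (per map class, i.e. commission errors)."""
    cm = np.asarray(cm, dtype=float)
    overall = np.trace(cm) / cm.sum()
    producers = np.diag(cm) / cm.sum(axis=0)  # column = reference totals
    users = np.diag(cm) / cm.sum(axis=1)      # row = map totals
    return overall, producers, users

# Illustrative 3-class matrix (Built-up, Vegetation, Others), 30 reference
# samples per class as in the equalised random sampling used above
cm = [[28, 1, 0],
      [2, 27, 2],
      [0, 2, 28]]
overall, producers, users = accuracy_metrics(cm)
```

For this toy matrix the overall accuracy is 83/90 ≈ 92.2 per cent, of the same order as the accuracies reported in Table 1.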
However, as this cover is lost and changed into impervious surfaces, the solar radiation is reflected, leading to higher temperatures captured by the thermal satellite sensors. In the present study, vegetation loss over the two decades led to an increase in thermal signature, especially in the western parts of the 2020 image (figure 3b). Note that the black arrow in figure 3b shows the newly constructed airport road and a correspondingly higher temperature than vegetated areas. The red arrow marks the location of the international airport, which has grown in size over the two decades. In October (the month of the satellite images), the average temperature remains around 24°C in the study area. Because of increasing urbanisation, this temperature could be seen to have risen above 40°C in 2020, pointing to the urban heat island effect. Figure 4(a-b) shows the relationships between surface temperature and NDBI, while figure 4(c-d) shows those between NDVI and NDBI. It could be observed that the relationship of NDBI and temperature for the year 2020 (figure 4b) shows a moderate positive correlation, indicating that as built-up areas increase, they trap heat and thus the surface temperature increases. The results are in line with published literature [21][22][23][24][25]. Since the newly developed built-up areas have expanded at the expense of vegetated areas, the NDVI and NDBI show a negative correlation for both years (figures 4c-d). This complements the results of land use changes in figure 2. The negative correlation between NDVI and NDBI also corroborates the fact that vegetation lowers the temperature. [22] reported that the expansion of built-up areas could be characterised utilising NDVI. Thus, either of the indices (NDVI indirectly or NDBI directly) could be used to assist surface temperature measurements. [Figure 4 caption fragment: NDBI (x-axis) and NDVI (y-axis) for (c) 2000, (d) 2020.] Conclusion Urbanisation is a complex diffusion process and a critical driver of land use change.
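The per-pixel relationships in figure 4 are essentially correlation analyses between co-registered images; a generic sketch of how such a comparison can be made (the function and variable names, and the synthetic data, are ours, not the paper's):

```python
import numpy as np

def image_correlation(img_a, img_b):
    """Pearson r between two co-registered single-band images,
    skipping pixels that are NaN in either image (clouds, no-data)."""
    a, b = np.ravel(img_a).astype(float), np.ravel(img_b).astype(float)
    ok = np.isfinite(a) & np.isfinite(b)
    return np.corrcoef(a[ok], b[ok])[0, 1]

# Synthetic demo mimicking the reported pattern: temperature rising with
# NDBI gives a positive r, NDVI falling with NDBI gives a negative r.
rng = np.random.default_rng(0)
ndbi_img = rng.uniform(-0.5, 0.5, size=(50, 50))
temp_img = 24 + 20 * ndbi_img + rng.normal(0, 1, ndbi_img.shape)
ndvi_img = 0.3 - 0.8 * ndbi_img + rng.normal(0, 0.05, ndbi_img.shape)
```

The sign of the resulting r reproduces the qualitative finding: positive for NDBI against temperature, negative for NDBI against NDVI.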
The burden on already scant environmental resources and infrastructure increases as the urban population grows. It is evident from the land use maps of the present research that built-up areas are increasing at the expense of surrounding agricultural/vegetated lands. This study also shows that remote sensing data can indicate the direction of land use change over a period of time. The expansion of urban areas will inevitably continue in the future, but careful review and modification of land use regulations and decisions are required to limit this development. Such studies should indeed be conducted regularly to assist city planners in focusing on specific locations and prioritising their strategies to combat the environmental consequences of urbanisation.
2021-12-15T20:13:19.176Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "f13735380666dd195837095bf14865d570888f6f", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/940/1/012005", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "f13735380666dd195837095bf14865d570888f6f", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
88521299
pes2o/s2orc
v3-fos-license
A characterization of signed discrete infinitely divisible distributions In this article, we give some reviews concerning the negative probabilities model and quasi-infinite divisibility at the beginning. We next extend Feller's characterization of discrete infinitely divisible distributions to signed discrete infinitely divisible distributions, which are discrete pseudo compound Poisson (DPCP) distributions with connections to the Lévy-Wiener theorem. This is a special case of an open problem which is proposed by Sato (2014), Chaumont and Yor (2012). An analogous result involving characteristic functions is shown for signed integer-valued infinitely divisible distributions. We show that many distributions are DPCP by the non-zero p.g.f. property, such as the mixed Poisson distribution and the fractional Poisson process. DPCP has some bizarre properties, and one is that the parameter $\lambda$ in the DPCP class cannot be arbitrarily small. 1. Convolutions of signed measure model and quasi-infinitely divisible Székely (2005) spoke of flipping two "half-coins" (which have infinitely many sides numbered 0, 1, 2, . . ., whose even values are assigned negative probabilities) to obtain a fair coin with outcomes 0 or 1 with probability 1/2 each. The negative probabilities arise because his probability generating function (p.g.f.) G(z) = √(0.5 + 0.5z) has negative coefficients. He went on to consider the general n-th root of a p.g.f. as a generating function with negative coefficients. In this work we continue along the same lines. In short, the aim of this paper is to determine necessary and sufficient conditions on a discrete distribution such that [G(z)]^{1/n} (or [ϕ(θ)]^{1/n}) is also the p.g.f. (or characteristic function) of a signed measure with bounded total variation. Székely used the term "signed infinitely divisible" to describe the phenomenon of writing a discrete probability mass as a convolution of signed point measures.
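Székely's half-coin can be made concrete by expanding √(0.5 + 0.5z) = √0.5 · (1 + z)^{1/2} via the generalized binomial series; a small plain-Python sketch checking the sign pattern of the coefficients and that the convolution of two half-coins recovers the fair coin:

```python
import math

def half_coin_coeffs(n_terms):
    """Taylor coefficients of sqrt(0.5 + 0.5 z) = sqrt(0.5) * (1 + z)**0.5,
    via the binomial recurrence C(1/2, k) = C(1/2, k-1) * (1/2 - (k-1)) / k."""
    b = 1.0
    coeffs = []
    for k in range(n_terms):
        coeffs.append(math.sqrt(0.5) * b)
        b *= (0.5 - k) / (k + 1)
    return coeffs

def convolve(a, b):
    """Cauchy product of two power series, truncated to len(a) terms."""
    n = len(a)
    return [sum(a[k] * b[j - k] for k in range(j + 1)) for j in range(n)]

a = half_coin_coeffs(20)
fair = convolve(a, a)  # p.g.f. of the sum of two independent half-coins
```

The even-indexed coefficients from k = 2 onwards come out negative (the "negative probabilities" of the half-coin), while the square of the series has coefficients 1/2, 1/2, 0, 0, . . ., i.e. exactly the fair coin. Truncation is harmless here because the j-th coefficient of the product only involves a_0, . . . , a_j.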
Notice that central to the inversion problem of the Central Limit Theorem is the search for some characteristic function ϕ(θ) such that [ϕ(θ)]^{1/n} is also a characteristic function. This is indeed the very definition of an infinitely divisible distribution, or the deconvolution problem. At the foundation of this body of work is the fundamental Lévy-Khinchine result: "A distribution is infinitely divisible if and only if it has a Lévy-Khinchine representation." Continuing along these lines, Sato (2014) and Nakamura (2013) proposed the following definition of quasi-infinite divisibility for distributions having a Lévy-Khinchine representation but with signed Lévy measure. By constructing a complete Riemann zeta distribution corresponding to the Riemann hypothesis, Nakamura (2015) showed that a complete Riemann zeta distribution is quasi-infinitely divisible under some conditions. Based on quantum physics, Demni and Mouayn (2015) constructed a generalized Poisson distribution and derived a Lévy-Khintchine-type representation of its characteristic function with signed Lévy measure. A random variable X on R is called quasi-infinitely divisible if the characteristic function of X has the form given in the following definition. Definition 1.1 (Quasi-infinitely divisible). A distribution µ on R is said to be quasi-infinitely divisible if its characteristic function has the form μ̂(θ) = exp(iaθ − σ²θ²/2 + ∫_R (e^{iθx} − 1 − iθx 1_{[−1,1]}(x)) ν(dx)) (1) with a ∈ R, σ ≥ 0, and corresponding measure ν a bounded signed measure (that is, a quasi-Lévy measure) on R with total variation measure |ν| satisfying ∫_R min(x², 1) |ν|(dx) < ∞. The above definition also appears in Exercise 12.3 of Sato (2013), p. 66, where it is shown that X is not infinitely divisible (in the classical sense) if ν is a signed measure. The Lévy-Khinchine representation with signed Lévy measure is unique; see Exercise 12.3 in Sato (2013). Problem 1.1. Find a necessary and sufficient condition for the Lévy-Khinchine representation with signed Lévy measure to hold.
This is an open problem posed by Professor Ken-iti Sato; see p. 29 of Sato (2014). When X in (1) is non-negative (a "subordinator" version of (1)), it is given as an unsolved problem in Exercise 4.15(6) of Chaumont and Yor (2012). Denote the nonnegative integers by N = {0, 1, 2, . . .}. Let ν be a signed point measure on N and remove the normal component in (1). Then the characteristic function of the DPCP distribution (see Definition 2.1 below) is a discrete version of the Lévy-Khinchine representation with signed Lévy measure. Baishanski (1998) considered a complex-valued (which includes negative-valued) probability model for n-fold convolutions of i.i.d. integer-valued "random variables" (X_1, . . . , X_n) with complex-valued probabilities a_v, with Σ_v a_v = 1, characteristic function Σ_v a_v e^{ivθ}, and P(X_1 + · · · + X_n = v) = a_{nv}. He charted this territory perhaps not for the sake of statisticians or probabilists, but certainly to the benefit of analysts. His work stemmed from an open problem related to the complex-valued probability model which was first posed by Beurling (1938) and later quoted by Beurling and Helson (1953). It is called the problem of "Norms of powers of absolutely convergent Fourier series" and has been investigated at length in many, many papers; see Baishanski (1998) for a review. We only present the problem here and show its relationship to the DPCP distribution. Under what conditions on f are the norms ∥f^n∥ bounded? Discuss the asymptotic behavior of ∥f^n∥ as n → ∞. When a_v ≥ 0, the behavior of a_{nv} has been firmly established via the Central Limit Theorem and gives rise to the normal law in particular, and stable laws in general. For the case of complex-valued probabilities and some results in this direction, see Baez-Duarte (1993) and Baishanski (1998). Here, we consider the case of real-valued probabilities.
van Haagen (1981) proved Kolmogorov's Extension Theorem for finite signed measures, which guarantees that a suitably "consistent" collection of finite-dimensional distributions will define a signed r.v. Kemp (1979) studied the circumstances under which convolutions of binomial variables and binomial pseudo-variables (with negative probability density) lead to valid distributions (with positive probability density). Karymov (2005) obtained asymptotic decompositions into convolutions of Poisson signed measures that are appropriate for a broad range of lattice distributions. Let (Ω, F, µ) be a signed measure space with µ(Ω) = 1 and (S, B) be a measurable space. A mapping X : Ω → R is a signed random variable if it is a measurable function from (Ω, F) to (S, B). Every signed random variable X has an associated signed measure, namely µ_X(B) = P(X ∈ B) = µ(X^{−1}B) for each B ∈ B. Remark 1: In this paper, we write "r.v." (or "distribution") for a random variable with ordinary (not negative) probabilities, and we write "signed r.v." (or "signed distribution") for a random variable that permits negative probabilities. Definition 1.2 (Signed probability density). A signed random variable X has the signed probability distribution given by this associated measure. The signed discrete distribution is well-defined provided the total variation ∫_Ω |µ|(dx) is finite. Taking a signed discrete distribution as an example, the absolute convergence of Σ_{k=1}^∞ a_k guarantees that all rearrangements of the series converge to the same value. Signed random variables defined this way allow for treatment analogous to the classical case, with the concepts of independence, expectation, variance, r-th moments, characteristic functions, etc. operating in the natural way. Without the condition of absolute convergence, the negativity of the α_k would make the exponential undefinable, since we know from the Riemann series theorem that a conditionally convergent series can be rearranged to converge to any desired value.
The next example drives home this point. Example 1.1 (Discrete uniform distribution). Let X be a Bernoulli r.v. with probability of success p = 0.5. The logarithm of the p.g.f. of X is ln G(z) = ln(0.5 + 0.5z) = −ln 2 + Σ_{k=1}^∞ (−1)^{k+1} z^k/k, whose coefficient series is only conditionally convergent at z = 1. The present paper has concerned itself with the study of certain classes of discrete random variables, in particular, those of the infinitely divisible variety. We started from William Feller's famous characterization that all discrete infinitely divisible distributions are compound Poisson distributed. Now, not all discrete distributions are infinitely divisible (not, at least, in the classical sense). But following the idea of Székely and others, we extend our notion of infinite divisibility to include those distributions whose n-th root of their p.g.f. is also a p.g.f. in a generalized sense. Székely's "signed" ID is based on convolutions of signed measures, and Sato defines his "quasi" ID from the Lévy-Khinchine representation with signed Lévy measure. The goal of this paper is to find a connection between these two kinds of generalised ID for discrete r.v.'s. The paper is structured as follows: In Section 2, after giving the definition of the signed probability model, we present several conditions guaranteeing that a discrete distribution is a signed discrete infinitely divisible distribution (or discrete pseudo compound Poisson). Also, we exemplify some famous discrete distributions which belong to the discrete quasi-infinitely divisible distributions, such as the mixed Poisson distribution and the fractional Poisson process. In the same way, in Section 3, we conclude that a distribution is signed integer-valued ID if and only if it is integer-valued pseudo compound Poisson. In Section 4, some bizarre properties of signed discrete infinitely divisible distributions are discussed, and we mention the research problem of finding a characteristic function's Jørgensen set.
Discrete pseudo compound Poisson distribution Feller's characterization of the compound Poisson distribution states that a non-negative integer-valued r.v. X is infinitely divisible (ID) if and only if its distribution is a discrete compound Poisson distribution. Taking N and the Y_i's to be independent, X can be written as X = Σ_{i=1}^N Y_i, (2) where N is Poisson distributed with parameter λ and the Y_i's are independently and identically distributed (i.i.d.) discrete r.v.'s with P{Y_1 = k} = α_k. Hence the p.g.f. of the compound Poisson distribution can be written as G(z) = exp(λ(Σ_{k=1}^∞ α_k z^k − 1)), (3) where z is a real number. (In this paper, all the arguments z of a p.g.f. are taken to be real numbers z ∈ [−1, 1].) For more properties and characterizations of discrete compound Poisson distributions, see Jánossy et al. (1950), Section 12.2 of Feller (1968), Section 9.3 of Johnson et al. (2005), and Zhang and Li (2016). However, Feller neither shows nor claims that the sum of the coefficients in n([P(z)]^{1/n} − 1) is bounded for any n; that is, it leaves the question open. This result of Feller's can be viewed as a discrete analogue of the derivation of Lévy-Khinchine's formula; see Itô (2004), Sato (2013), Meerschaert and Scheffler (2001) for the general case. When some α_{nk} are negative, it is necessary to find a new method which guarantees that the extension of Feller's characterization relating to the n-th convolution root of a signed measure is valid. If it turns out that some α_i are negative in the p.g.f. of (3), then the Y_i in (2) will have negative probabilities, which a fortiori rules out the classical compound Poisson interpretation. Example 2.1. From the Taylor series expansion of √(0.5 + 0.5z) it follows that the coefficient of z^k is negative whenever k is even. An explicit expression for the probability mass function of the DPCP distribution exists; see Jánossy et al. (1950), Johnson et al. (2005) for the discrete compound Poisson case (all α_i are nonnegative). Characterizations It turns out that there already exist a few characterizations of the DPCP distribution in the literature.
Indeed, Lévy (1937b) derived the recurrence relation for the density of the DPCP distribution when the α_i might be negative-valued. If we take P_j to be the empirical probability mass function, then the recursive formula in (6) can be used to estimate the parameters α_j, for j = 1, 2, . . . (see Buchmann and Grübel (2003), p. 1059). The name "pseudo compound Poisson" was introduced by Hürlimann (1990). For the general situation, the following Lévy-Wiener theorem provides us a shortcut to necessary and sufficient conditions for a distribution to be DPCP. The proof is non-trivial; see Zygmund (2002) or Lévy (1935). The simple case H(t) = t^{−1} is due to Wiener. The next two corollaries are direct consequences of the Lévy-Wiener theorem. Let f(θ) = Σ_{j=0}^∞ a_j e^{ijθ} with ∥f∥ = Σ_{j=0}^∞ |a_j| < ∞. If f(θ) has no zero, then the ∥f^n∥ are bounded. Next, we give a lemma about the non-vanishing p.g.f. characterization of DPCP; see Zhang et al. (2014). We restate the proof here. Lemma 2.2 (Non-zero p.g.f. of DPCP). For any discrete r.v. X, its p.g.f. G(z) has no zeros if and only if X is DPCP distributed. Proof. It is easy to see that the p.g.f. of a DPCP distribution has no zero. On the other hand, if G(z) has no zeros, taking z as a complex number, let z = re^{iθ}, for 1 ≥ r ≥ 0. We have Σ_k |p_k| = 1. By applying the Lévy-Wiener theorem for all r ∈ [0, 1], ln G(e^{iθ}) has an absolutely convergent Fourier series. Therefore, X is DPCP distributed by Definition 2.1. For instance, P(z) = 1/3 + (2/3)z on |z| ≤ 1 has no pseudo compound Poisson representation since 1/3 + (2/3)z = 0 for z = −1/2. Next, we define signed discrete infinite divisibility (ID) as an extension of discrete infinite divisibility. Firstly, we show that the p.g.f. of a signed discrete infinitely divisible distribution never vanishes. Secondly, we obtain an extension of Feller's characterization by employing the Lévy-Wiener theorem. Definition 2.2. A p.g.f.
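Equation (6) itself is not reproduced in this excerpt; for the compound Poisson p.g.f. (3) it takes the standard Lévy/Panjer recursion form n P_n = λ Σ_{k=1}^n k α_k P_{n−k} with P_0 = e^{−λ}, a sketch of which (plain Python, function names are ours) is:

```python
import math

def dcp_pmf(lam, alpha, n_max):
    """P(X = n), n = 0..n_max, for the p.g.f. exp(lam * (sum_k alpha_k z^k - 1)).
    alpha[k-1] = alpha_k for k = 1..m, assumed to sum to 1; the recursion runs
    unchanged when some alpha_k are negative (the DPCP case), though the output
    is then only a signed sequence, not necessarily a p.m.f."""
    m = len(alpha)
    P = [math.exp(-lam)]  # P_0 = G(0) = e^{-lam} since sum(alpha) = 1
    for n in range(1, n_max + 1):
        s = sum(k * alpha[k - 1] * P[n - k] for k in range(1, min(n, m) + 1))
        P.append(lam * s / n)
    return P
```

With alpha = [1.0] this reduces to the plain Poisson(λ) p.m.f., which makes a convenient sanity check; Buchmann and Grübel's estimation idea runs the same recursion in reverse, recovering the α_j from the empirical P_j.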
is said to be signed discrete infinitely divisible if for every n ∈ N, G(z) is the n-th power of some p.g.f. with signed probability density, namely G(z) = [G_n(z)]^n. The notion of signed discrete infinite divisibility first appeared in Székely (2005), where he discusses the conditions under which the k-th root of G(z) has an absolutely convergent expansion in the special case that G(z) is the p.g.f. of a Bernoulli distribution. To get a characterization for signed discrete ID distributions, we need Prohorov's theorem for signed measures; see p. 202 of Bogachev (2007). Applying Prohorov's theorem for bounded and uniformly tight signed measures generalises the continuity theorem for signed p.g.f.'s. Lemma (Prohorov's theorem for signed measures, Bogachev (2007)). Let (E, τ) be a complete separable metric space and let M be a family of signed Borel measures on E. Then the following conditions are equivalent: (i) Every sequence µ_n ∈ M contains a weakly convergent subsequence. (ii) The family M is uniformly tight and bounded in the variation norm. Let G be as in Definition 2.2 and take E = N; the associated family is a sequence of uniformly tight signed bounded point measures. The next lemma is an extension of the continuity theorem for p.g.f.'s. With slight modifications, the necessity part directly follows the proof of the p.g.f. case; see Feller (1968), p. 280. The lemma states that lim_{n→∞} p_k^{(n)} = p_k (7) for k = 0, 1, . . ., if and only if the limit G(z) = lim_{n→∞} G_n(z) (8) exists for 0 < z < 1. Proof. Necessity: We suppose (7) holds and define G(z) by (8). If {p_k^{(n)}} is a bounded and tight family of signed measures, then there exists an M such that max(|p_k|, |p_k^{(n)}|) ≤ M for k = K + 1, K + 2, . . .. When 0 < z < 1, it follows that (8) is true. Sufficiency: Assuming (8) is true for 0 < z < 1, there is a subsequence {n^{(1)}} along which the limit exists and {p_k^{(n^{(1)})}} is bounded and uniformly tight. From Prohorov's theorem for signed measures, we get a sub-subsequence {n^{(2)}} such that lim_{n^{(2)}→∞} p_k^{(n^{(2)})} = p_k, i.e. (7) holds. If every convergent subsequence p_k^{(n^{(1)})} did not have the same limit p_k, then neither would p_k^{(n^{(2)})}. This contradiction leads to the truth of (7).
Moreover, the final result is deduced from the corresponding equalities for 0 < z < 1. The continuity theorem for the p.g.f.'s of signed r.v.'s will be applied in the derivation of the general form for discrete quasi-ID distributions. Putting all the above together, we get a generalisation of the discrete compound Poisson distribution. Now we state and prove our characterization of discrete quasi-infinitely divisible distributions. Theorem 2.1 (Characterization of signed discrete ID distributions). A discrete distribution is signed discrete infinitely divisible if and only if it is a discrete pseudo compound Poisson distribution. Proof. Sufficiency: Given n ∈ N and P_X(z), if X is DPCP distributed then [P_X(z)]^{1/n} = exp((λ/n)(Σ_{k=1}^∞ α_k z^k − 1)). By using the Lévy-Wiener theorem, this n-th root is again a p.g.f. with signed probability density, so X is signed discrete ID. Necessity: Lemma 2.5 and Lemma 2.2 say respectively that the p.g.f. of a signed discrete ID r.v. X has no zeros and that any p.g.f. that has no zeros is the p.g.f. of a DPCP distribution. Consequently, X is DPCP distributed. The exponential of a polynomial is not a p.g.f. unless a term with a sufficiently small negative coefficient is preceded by one term with a positive coefficient and followed by at least two terms with positive coefficients as well (see Johnson et al. (2005), pp. 393-394); namely, the conditions are a_1 > 0, a_{m−1} > 0, and a_m > 0. Milne and Westcott (1993) considered the multivariate form of (6) and gave some conditions under which the exponential of a multivariate polynomial is a p.g.f. For m = 4, van Harn (1978) gave four inequalities to ensure the exponential is a p.g.f.; the restrictions are a, b, c, d > 0 and b ≤ min{a²/3, c/a, ad/(2c), c²/(3d)}. Next we give a few examples of DPCP distributions and the Bernoulli distribution. Example 2.2. The p.g.f. P(z) = p + (1 − p)z on |z| ≤ 1 has the pseudo compound Poisson representation ln P(z) = ln p + Σ_{i=1}^∞ a_i z^i with a_i = (−1)^{i+1}((1 − p)/p)^i/i. (i) If p > 0.5 then X is DPCP distributed since P(z) has no zeros. (ii) If p = 0.5 then Σ_{i=1}^∞ a_i is conditionally convergent and P(z) has the zero z = −1.
(iii) If p < 0.5, then Σ_{i=1}^∞ a_i is divergent and P(z) has the zero z = p/(p − 1). A more general example comes from the following corollary; see also Zhang et al. (2014). The next corollary will be useful in fitting zero-inflated discrete distributions. The paper Beghin and Macci (2014) extended the fractional Poisson process to the discrete compound fractional Poisson process, where |z| ≤ 1, α_i ≥ 0, 0 < ν ≤ 1, and {X_n}_{n≥1} is a sequence of i.i.d. discrete r.v.'s independent of the fractional Poisson process N_λ^ν(t). Proposition 3.1 of Vellaisamy and Maheshwari (2016) shows that the one-dimensional distributions of the fractional Poisson process N_λ^ν(t), 0 < ν < 1, are not infinitely divisible. Fortunately for us, it is quasi-ID distributed because Mittag-Leffler functions have no real zero for 0 < ν < 1. It remains to use the following lemma, and then Corollary 2.5 follows. Proof. It can be shown that E_1(z) = e^z has no zeros for all non-negative z. Considering negative z in the case 0 < µ < 1, the proof can be found in Theorem 4.1.1 of Popov and Sedletskii (2013), which states: "… +∞), then E_ρ(z; a) has no negative roots." Remark 2: The other proof can be found on p. 453 of Feller (1971). The Mittag-Leffler function E_µ(x) can be written as a moment generating function E_µ(t) = E e^{tX} > 0, where X is the transformation of a positive α-stable distributed r.v. Y = X^{−1/α} with moment generating function E e^{−tY} = e^{−t^α}. Corollary 2.5. The discrete compound fractional Poisson process M(t) is DPCP distributed; so too is the fractional Poisson process. As another special case, we have the following result. Another generalization of the Poisson distribution, the mixed Poisson distribution, is also DPCP. Corollary 2.6. Let X be a mixed Poisson r.v. with p.m.f. P(X = k) = ∫_0^∞ e^{−λ}λ^k/k! dF(λ), where F(λ) is a distribution function; then X is DPCP distributed. Proof. To prove this corollary we need to show that the p.g.f. of the mixed Poisson distribution has no zeros.
Obviously, the p.g.f. is G_X(z) = ∫_0^∞ e^{λ(z−1)} dF(λ) > 0 for z ∈ [−1, 1], so it has no zeros. Notice that if F(λ) is an infinitely divisible distribution, then X is discrete compound Poisson distributed; see Maceda (1948). This is the well-known Maceda's mixed Poisson with infinitely divisible mixing law. The mixed Poisson distribution is widely applicable in non-life insurance science. Signed integer-valued ID In this section we define a class of signed integer-valued infinitely divisible distributions which is wider than the class of integer-valued infinitely divisible distributions. Next, we show that a signed integer-valued infinitely divisible characteristic function never vanishes. Then we show that a distribution is signed integer-valued infinitely divisible if and only if it is an integer-valued pseudo compound Poisson distribution. Definition 3.1 (Integer-valued pseudo compound Poisson, IPCP). Let X be an integer-valued random variable with P(X = k) = P_k, k ∈ Z. We say that X has an integer-valued pseudo compound Poisson distribution if its characteristic function has the form ϕ_X(θ) = exp(λ(Σ_{k∈Z} α_k e^{ikθ} − 1)). (10) The early documental record of IPCP is in Paul Lévy's monumental monograph of modern probability theory; see page 191 of Lévy (1937a). Recent research about IPCP can be found in Karymov (2005). As an example, consider ϕ(θ) = 1/3 + (2/3)e^{iθ}. We would like to write ϕ(θ) as an exponential function. On our first attempt we might try the Taylor series expansion for ln(1/3 + (2/3)z), but 1/3 + (2/3)z vanishes at z = −1/2, so this method cannot be employed. On our second attempt we might look to the Fourier inversion formula for ln(1/3 + (2/3)e^{iθ}), since 1/3 + (2/3)e^{iθ} has no zeros. Let f(θ) be an integrable function on [0, 2π]. The Fourier coefficients c_n, n ∈ Z, of f(θ) are defined by c_n = (1/2π)∫_0^{2π} f(θ)e^{−inθ} dθ. In this example the Fourier coefficients c_n of ln ϕ(θ) can be computed by this formula. Remark 3: This example illustrates that a discrete r.v. may be a signed integer-valued ID r.v. but not a signed discrete ID r.v.! We enlisted Maple to compute the c_n's and graphed them in Figure 1.
Here, c_0 = λ and c_n = λα_n for n ≠ 0. The Lévy-Wiener theorem guarantees that Σ_{n=−∞}^∞ |α_n| < ∞. In the following, we list three equivalent characterizations of an IPCP distributed r.v. with characteristic function (10). The axiomatic derivations of IPCP distributions can be obtained from any one of these characterizations. (1°) Compound representation: X can be decomposed as X = Σ_{i=1}^N Y_i with N ∼ Poisson(λ), where the Y_i are i.i.d. integer-valued signed r.v.'s with signed probability density P(Y_1 = k) = α_k, k ∈ Z. (2°) Sum of weighted signed Poisson: X can be decomposed as X = Σ_{k∈Z} kN_k, where the N_k for k ∈ Z are independently signed Poisson distributed, N_k ∼ Po(α_k λ), with signed probability density P(N_k = n) = (λα_k)^n e^{−α_kλ}/n!, where α_k and λ are defined in (1°). (3°) Difference of discrete pseudo compound Poisson: Let exp(λ⁺Σ_i α_i⁺(z^i − 1)) and exp(λ⁻Σ_i α_{−i}⁻(z^i − 1)) be the p.g.f.'s of discrete r.v.'s X⁺ and X⁻, respectively, with α_k and λ as defined in (1°). Then X can be seen as a difference of two independent r.v.'s, X = X⁺ − X⁻. Proof. It is easy to check (1°)-(3°) by examining the characteristic function. Next, we discuss the if-and-only-if relationship between the signed integer-valued ID and integer-valued pseudo compound Poisson distributions. This equivalence also holds for integer-valued infinitely divisible and integer-valued compound Poisson distributions. Definition 3.2. A characteristic function (or the integer-valued r.v. X) is said to be signed integer-valued infinitely divisible if for every n ∈ N, ϕ_X(θ) is the n-th power of some characteristic function with signed probability density, namely ϕ_X(θ) = [ϕ_n(θ)]^n. To obtain our characterization for signed integer-valued ID distributions we need Prohorov's theorem for signed measures, the proof of which can be found in Bogachev (2007). Applying Prohorov's theorem, we obtain a continuity theorem for characteristic functions with signed probability densities. For more reading on the application of Prohorov's theorem to signed measures, see Theorem 2.2 and Theorem 2.3 of Baez-Duarte (1993).
Lemma 3.1 (Continuity theorem for signed characteristic functions, Baez-Duarte (1993)). (i) µ_n ⇒ µ if and only if µ̂_n → µ̂ a.e. and {µ_n} is bounded and tight; (ii) let µ_n be a complex measure and µ̂_n be the characteristic function of µ_n. If µ̂_n → g a.e. and {µ_n} is bounded and tight, then there exists a signed measure µ such that µ̂ = g and µ_n ⇒ µ.

The continuity theorem for signed characteristic functions can be used to deduce the general theorem for signed integer-valued ID: a distribution is signed integer-valued ID if and only if it is IPCP distributed.

Proof. Sufficiency: suppose X is IPCP distributed. For every n ∈ N, [ϕ_X(θ)]^{1/n} = exp((λ/n) Σ_{k∈Z} α_k(e^{ikθ} − 1)). By the Lévy–Wiener theorem, this is again the characteristic function of a signed probability density. Hence, X is signed integer-valued ID. Necessity: Lemma 3.2 says that the characteristic function of a signed integer-valued ID r.v. X never vanishes. By the Lévy–Wiener theorem, any characteristic function with no zeros is the characteristic function of an IPCP distribution. Therefore X is IPCP distributed.

4. Some bizarre properties of DCP related to signed r.v.

Ruzsa and Székely (1983) prove a theorem related to signed random variables, and Székely (2005) gives the following result.

Proposition 4.1 (Construction of signed random variables). If f ∈ L^1 and ∫ f dω > 0, then one can find a g ∈ L^1_+ with g ≢ 0 such that f * g ∈ L^1_+, where L^1_+ is the set of all non-negative integrable functions. Moreover, we can choose g such that its Fourier transform is always positive.

Proposition 4.2 (Fundamental theorem of negative probability). If a random variable R has a negative probability density, then there exist two other independent random variables Y, Z with ordinary (not negative) distributions such that R + Y = Z in distribution, where + denotes the 'convolutional plus'. Thus R can be seen as the 'difference' of two ordinary random variables. This fact gives a new explanation for why some α_k may be negative by Theorem 2.1. Let ϕ_R(θ), ϕ_Y(θ), and ϕ_Z(θ) be the corresponding characteristic functions in Proposition 4.2.
Then Σ_{k=1}^∞ a_{nk} e^{ikθ} ≢ 0 and Σ_{k=1}^∞ b_{nk} e^{ikθ} ≢ 0, from Proposition 4.1. Hence the coefficients v_{nk} and ṽ_{nk} appearing in the resulting decomposition are non-negative. Moreover, Σ_{k=1}^∞ a_{nk} e^{ikθ} and Σ_{k=1}^∞ b_{nk} e^{ikθ} both have no zeros, so the Lévy–Wiener theorem applies. Here we show an example. Suppose there exists a negative α_k in (4). According to Corollary 4.2 below, for λ > 0 sufficiently small, ϕ(θ) cannot be a characteristic function. It is evident that, for λ ∈ N \ {0}, [ϕ(θ)]^λ is still a characteristic function, which implies N \ {0} ⊂ Λ(X). For the independent convolution X + Y of a Bernoulli r.v. and a Gamma distributed r.v., where X and Y are non-degenerate, Nahla and Afif (2011) find the set Λ(X + Y). If X and Y are non-degenerate independent r.v.'s which have, respectively, Bernoulli and negative binomial distributions, Letac et al. (2002) find the set Λ(X + Y). Nakamura (2013) gives a signed discrete infinitely divisible characteristic function ϕ(θ) such that [ϕ(θ)]^u, u ∈ R, is not a characteristic function except for the non-negative integers. To find the Jørgensen set of a signed discrete ID characteristic function, or of a quasi-ID distribution defined by the Lévy–Khinchine representation with signed Lévy measure (1), is an open research problem. Here we show that the Jørgensen set of a signed discrete infinitely divisible Bernoulli r.v. X is N.

Proof. Trivially, N belongs to the Jørgensen set of X. It remains to show that r ∈ (0, ∞] \ N cannot be in the Jørgensen set. From Taylor's formula, we obtain an expansion of [ϕ(θ)]^r with coefficients c_i. For each r < i − 1, i = 2, 3, ..., we consider two cases: (i) if there exists an i such that c_i < 0, then the result follows; (ii) if there exists an i such that c_i > 0, the result follows by a similar argument.

Corollary 4.2. If there exists a negative α_k in (4), then ϕ_X(θ) = exp(Σ_{k=1}^∞ α_k λ(e^{ikθ} − 1)) is not a characteristic function for λ > 0 sufficiently small.

Proof. The p.m.f. of X is given by (5). For m ≥ 1, we have lim_{λ→0} P_m/λ = α_m < 0. This contradicts the fact that P_m/λ ≥ 0.
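The small-λ obstruction in Corollary 4.2 is easy to see numerically. Below is a minimal sketch of ours (the weights α are a hypothetical choice with α_2 < 0, not taken from the paper): inverting ϕ_X by an FFT recovers the signed p.m.f., and for small λ the coefficient P_2 ≈ λα_2 is negative, so ϕ_X cannot be a characteristic function.

```python
import numpy as np

# Hypothetical signed jump weights: alpha_1 = 1.5, alpha_2 = -0.5 (they sum to 1).
alpha = {1: 1.5, 2: -0.5}
lam = 0.1  # small lambda, the regime of Corollary 4.2

M = 1 << 12
theta = 2 * np.pi * np.arange(M) / M
phi = np.exp(lam * sum(a * (np.exp(1j * k * theta) - 1) for k, a in alpha.items()))

# Fourier inversion: P_m = (1/2pi) * integral of phi(theta) e^{-im theta} dtheta.
P = np.fft.fft(phi).real / M  # P[m] is the signed "probability" of the value m

print(P[0], P[1], P[2])  # P[2] < 0, so this is not a probability mass function
```

Since ϕ_X(0) = 1, the signed coefficients still sum to one; only their non-negativity fails, exactly as in the proof of Corollary 4.2.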
Remark 4. This theorem shows that we should not simply replace X by X(t) for t ∈ [0, +∞) (namely, replace λ by λt) in (4) to obtain a DPCP process. The DPCP processes have the strange property that t can only belong to the set Λ(X), which has semigroup properties. It is well known that Poisson processes are characterized by stationary and independent increments. (iv) Let the discrete probability mass be given by P_i(t) = P(X(t) = i | X(0) = 0), with the probabilities of 1 and of i ≥ 2 events taking place in [t, t + ∆t) given by P_1(∆t) = λ∆t + o(∆t) and P_i(∆t) = o(∆t), i ≥ 2. Then {X(t), t ≥ 0} is a Poisson process. Proposition 4.5 is called the Bateman theorem, see Section 2.3 of Haight (1967). The processes in Proposition 4.6 can be seen in Section 3.8 of Haight (1967), where they are known as stuttering Poisson processes. For the discrete r.v. case, Jánossy et al. (1950) extended the axioms for the Poisson process to Proposition 4.6. We may ask if there exist similar extensions of Proposition 4.6 to DPCP processes. As the contradiction in the proof of Corollary 4.2 shows, if some α_k in (3) are negative, this will yield P_k(∆t) = α_k λ∆t + o(∆t) < 0. However, P_k(∆t) must be non-negative. So there is no DPCP process X(t) on t ∈ [0, +∞) when some α_k in (3) are negative. To avoid the above contradictions, we highlight a strange property of DPCP processes on the semigroup Λ(X).

Conclusion and comment

It was the great mathematician Leopold Kronecker who once said, "God made the integers; all else is the work of man." It is in the spirit of that proverb that the present work deals with discrete (integer-valued) r.v.'s. Feller's characterization of discrete ID (namely, non-negative integer-valued infinitely divisible) distributions says that a distribution is discrete ID if and only if it is discrete compound Poisson. Further, we define the discrete pseudo compound Poisson (DPCP) distribution whose p.g.f.
has the exponential polynomial form e^{P(z)}, where P(z) may contain negative coefficients (except the constant coefficient P(0)). Using the definition of generalized infinite divisibility, we then have an extension of Feller's characterization: a distribution is discrete quasi-ID if and only if it is discrete pseudo compound Poisson. This is made possible by Prohorov's theorem for bounded and tight signed measures, which can be applied to obtain the continuity theorem for p.g.f.'s with negative coefficients, or the continuity theorem for characteristic functions of signed measures. If exp(λ Σ_{k=1}^∞ α_k(e^{ikθ} − 1)) is a characteristic function, the parameter λ cannot tend to 0 when some α_k take negative values. This property is related to a characteristic function's Jørgensen set, i.e. the set of positive real powers for which the power of the characteristic function remains a characteristic function. It is easy to see that the Jørgensen set of an infinitely divisible characteristic function is R_+. To find the Jørgensen set of a quasi-infinitely divisible characteristic function is an open problem; only some special cases of Jørgensen sets appear in the literature, see Nakamura (2013), Nahla and Afif (2011), Letac et al. (2002), Albeverio et al. (1998).
2017-01-14T08:42:54.000Z
2017-01-14T00:00:00.000
{ "year": 2017, "sha1": "4a7f39af9dbb0e6ade320cdfd9641c402a4d1b05", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1701.03892", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4a7f39af9dbb0e6ade320cdfd9641c402a4d1b05", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
247154932
pes2o/s2orc
v3-fos-license
Four-point functions with multi-cycle fields in symmetric orbifolds and the D1-D5 CFT We study $S_N$-invariant four-point functions with two generic multi-cycle fields and two twist-2 fields, at the free orbifold point of the D1-D5 CFT. We derive the explicit factorization of these functions following from the action of the symmetric group on the composite multi-cycle fields. Apart from non-trivial symmetry factors that we compute, the function with multi-cycle operators is reduced to a sum of connected correlators in which the composite fields have, at most, two cycles. The correlators with two double-cycle and two single-cycle fields give the leading order contribution in the large-$N$ limit. We derive explicit formulas for these functions, encompassing a large class of choices for the single- and the double-cycle fields, including generic Ramond ground states, NS chiral fields and the marginal deformation operator. We are thus able to extract important dynamical information from the short-distance OPEs: conformal dimensions, R-charges and structure constants of families of BPS and non-BPS fields present in the corresponding light-light and heavy-light channels. We also discuss properties of generic multi-cycle $Q$-point functions in $M^N/S_N$ orbifolds, using a technology due to Pakman, Rastelli and Razamat. Introduction and summary The AdS/CFT correspondence has been a two-way lane leading to insights both into quantum gravity and into aspects of the strong coupling structure of quantum field theories. Significant computational (and conceptual) developments have sprung from modern technologies devised to compute correlation functions using the full resource of the symmetries of holographic CFTs. Progress has been notable, in particular, in the context of four-dimensional N = 4 SYM dual to AdS 5 × S 5 , with some results extending to other instances of AdS d+1 /CFT d with d > 2. 
Meanwhile, the study of correlation functions in AdS_3/CFT_2 has progressed at a somewhat different pace. Methods such as Witten diagrams and Mellin transforms meet some technical difficulties when faced with the idiosyncrasies of two-dimensional CFTs [1-3]. On the other hand, it is precisely the exceptional nature of CFT_2 and of AdS_3 × S^3 that makes AdS_3/CFT_2 special [4-7], and correlation functions in the holographic symmetric orbifold CFT particularly relevant.

On the problem of computing four-point functions in the D1-D5 CFT

One of the ongoing programs for computing four-point functions in AdS_3/CFT_2 uses 'microstate geometries' as a tool [1,2,8-14]. Microstate geometries are horizonless solutions of Type IIB supergravity that are asymptotically AdS_3 × S^3 × M. They are part of the conjectured 'fuzzball' resolution of black holes formed by bound D1-D5 branes wrapping T × M, with M being T^4 or K3 [15,16]. The dual CFT_2, called 'the D1-D5 CFT', is a N = (4,4) superconformal theory in the moduli space of the symmetric orbifold M^N/S_N. A vast collection of such geometries has now been found, largely due to the development of the fuzzball program. In particular, 1/4-BPS geometries have been completely classified, being dual to superpositions of Ramond ground states; classes of 1/8-BPS geometries are known as well [15-20]. In the semi-classical limit where N ≫ 1, the central charge of the D1-D5 CFT, c = 6N, is very large. As c → ∞, an operator is said to be 'heavy' if its conformal weight scales as h_H ∼ c, or 'light' if its weight h_L is fixed and finite. Operators dual to specific (microstate) geometries are heavy: for example, the Ramond ground states have h_H = c/24. On the other hand, probe-like fields in the bulk are light. If O_H and O_L are heavy and light, respectively, one considers the heavy-heavy-light-light four-point function (1.1). The light fields are generically single-cycle.
Typically, Ramond ground states are in fact made of many cycles, also known as 'strands', with different lengths and 'spins'. Hence (1.1) is typically a complicated function, with all the S_N monodromy properties and selection rules that ensue. One way to still have a not-very-complicated function is to take the light fields in (1.1) to be untwisted. This simplifies the permutations to such an extent that one does not even need to resort to covering surfaces. Examples of such computations have been considered in many places [1,8-10,14], leading to very interesting results as mentioned above. It is well known, however, that the complete holographic bulk-boundary dictionary (for, say, light NS chiral fields with conformal dimension one or two) must include both untwisted and twisted (with twist 2) fields with equal conformal dimensions.

A summary of our results

In the present paper we consider examples of four-point functions with the simplest configuration of twisted light NS fields. That is, our goal is to study correlators where all fields are non-trivially twisted and, besides, some of the fields (e.g. the heavy Ramond ground states) can be multi-cycle. The paper can be divided in two parts. In Sects.2-3, we study general properties of correlation functions with multi-cycle fields in M^N/S_N orbifolds. In Sect.4, we apply our general results to the D1-D5 CFT, computing a collection of four-point functions with Ramond ground states, NS chiral fields and the deformation modulus. More precisely, in Sect.2 we study generic Q-point functions of twisted fields, and extract their decomposition into components associated with equivalence classes of permutations in S_N. (To improve clarity, detailed derivations of the results of Sect.2 are presented in App.B.) We follow the work of [33], but with some relevant differences. First, we keep the twists generic, not restricted to single cycles. Second, the derivation, in Ref.
[33], of the N-dependence of twisted correlators relies heavily on a diagrammatic interpretation of connected functions, while ours does not. Instead, we use a construction of equivalence classes of the twist permutations entering a given Q-point function. This is also a technology developed in [33]: the equivalence classes are in one-to-one correspondence with diagrams for connected correlators. Hence, although we do not resort to the diagrammatic interpretation, our analysis is in fact an application of the methodology of [33] to (often disconnected) correlation functions with generic, multi-cycle fields. That is a powerful technique, relating an orbifold CFT to the geometry of coverings of the Riemann sphere via the conjugacy classes of twists, and thus to 'Hurwitz theory' (see e.g. [34]). Here we try to outline how this language can be used to explore symmetries of twisted correlation functions. Specifically, we want to compute four-point functions involving (light) fields Z_[2] with single-cycle twists of length 2, and (heavy) multi-cycle fields [Π_{ζ,n}(X^ζ_[n])^{N^ζ_n}], with arbitrary twist given by a partition of N,

⟨[Π_{ζ,n}(X^ζ_[n])^{N^ζ_n}](∞) Z_[2](1) Z_[2](v,v̄) [Π_{ζ,n}(X^ζ_[n])^{N^ζ_n}](0)⟩, where Σ_{n,ζ} N^ζ_n n = N. (1.2)

Here we use an index ζ to distinguish between distinct components of the multi-cycle field which have the same cycle length n. For example, in the case where the multi-cycle field is a Ramond ground state, ζ indicates the R-charges of the strands. We focus on twists of length 2 for the single-cycle Z fields for two reasons. First, it is the simplest non-trivial twist. Second, the interesting moduli that deform the free orbifold CFT into an interacting theory dual to SUGRA solutions lie in the twisted sector with twist 2 [35,36]. We will specifically consider the marginal deformation operator O^{(int)}_[2], which is a scalar under all SU(2) symmetries of the N = (4,4) superconformal algebra.
We also consider NS chirals with twist 2, which include, in particular, another set of operators with dimension one, the 'middle-cohomology' NS chirals. The function (1.2) is typically disconnected. By this, we mean that it factorizes into products of functions involving only some of the operators that compose the multi-cycle field: not only products of two-point functions, but also of three- and four-point functions with 'smaller' composite fields. In other words, the 'disconnected four-point functions' addressed in this paper are not 'bubble diagrams'; in fact, they are still dynamical objects. It is well-known that twisted correlation functions are associated with ramified covering surfaces of the Riemann sphere [37,38]. The nomenclature 'disconnected' also agrees with the fact that, since the correlator factorizes, its associated covering surface can be seen as a product of disconnected surfaces. One important piece of information to be extracted from correlation functions is how they depend on N or, at least, how they scale at large N. For connected correlators, the exact dependence found in [33] reproduces the result of [37],

Connected single-cycle Q-point function ∼ Σ_g N^{1−g−Q/2}, (1.3)

where g is the genus of the covering surface. For multi-cycle fields, the disconnected functions are associated with disconnected covering surfaces, for which the genus g is not well-defined. Still, we find a natural generalization of (1.3), featuring the Euler invariant χ instead of g,

Disconnected* multi-cycle Q-point function with R ≥ Q cycles ∼ Σ_χ N^{(χ−R)/2}. (1.4)

The Euler invariant χ is a well-defined, additive property of disconnected surfaces. For connected surfaces/correlators, it reduces to χ = 2 − 2g, and, if the fields are single-cycle, i.e. R = Q, Eq.(1.4) reduces to (1.3). The reason for the * in Eq.(1.4) is that the formula requires some assumptions about the twists: the number of cycles and their lengths must both be kept fixed when N → ∞.
These assumptions are very natural for connected single-cycle functions, but they do not hold for some of the most important examples of multi-cycle fields. In particular, they do not hold for functions like (1.2), unless N^ζ_1 → ∞. To find how functions that do not fulfill the assumptions of (1.4) depend on N can be a rather difficult problem in general, one that depends strictly on the twists of all fields entering the correlator. Let us note that many of the results we derive in Sect.2 were previously found by Dei and Eberhardt in [39]. In Sect.3, we apply the language developed in Sect.2 for generic Q-point functions to study (1.2) in full detail. Generalizing our previous works [40,41], where similar functions were considered, here we work at the level of M^N/S_N orbifolds, i.e. focusing only on the twists, not on the specific form of the fields X^ζ_[n] and Z_[2]. Because transpositions are the simplest non-trivial elements of S_N, we are able to derive in detail the structure of these four-point functions, including the explicit way they factorize into connected parts containing only double-cycle components of the original multi-cycle field, multiplied by 'symmetry factors',

⟨[X^{ζ_1}_{[n_1]} X^{ζ_2}_{[n_2]}](∞) Z_[2](1) Z_[2](v,v̄) [X^{ζ_1}_{[n_1]} X^{ζ_2}_{[n_2]}](0)⟩. (1.5)

The double-cycle functions (1.5) always appear in the factorization of (1.2) in association with a covering surface of genus zero. While, for connected correlators with the same number of twists, the genus-zero contributions always dominate over higher genera, in the factorization of (1.2) there are also genus-one contributions, but with only one single-cycle component X^ζ_[n], instead of the composite double-cycle components seen in (1.5). By themselves these single-cycle, genus-one functions contribute at order N^{−2}, which is the same as the double-cycle, genus-zero functions (1.5). But we show that, when multiplied by their symmetry factors, the genus-zero contributions do dominate when N is large.
Let us note that, apart from the argument that the genus-zero functions dominate at large N, we will not take N to be large in our formulas, so most of our formulas are exact at finite N. This is why, through most of the paper, we avoid the nomenclature 'heavy' and 'light' fields, preferring 'multi-cycle' and 'single-cycle' fields instead. It should be kept in mind, nevertheless, that, in the D1-D5 CFT examples we consider, the multi-cycle fields are almost always heavy (Ramond) fields, the single-cycle fields are always light, and heavy-light correlators in the semiclassical limit are an important part of our motivations. Focusing on (1.5), the appropriate genus-zero covering map was derived in [40]. Here we present a detailed analysis of the geometry of the covering surfaces, and of the relation between coverings and permutation classes. The goal is to understand how the geometry dictated by the twists controls the form of the correlation functions. The connected functions can be decomposed into H 'Hurwitz blocks', where H is the Hurwitz number of different coverings of the sphere. These blocks are defined by the roots of an algebraic equation, which cannot be found in closed form when n_1 ≠ n_2, but which nevertheless fix many properties of the correlation functions. In particular, they determine the twists of the fields appearing in the OPE channels (1.6), where the C's are structure constants, B is a two-point function normalization, and curly brackets indicate conformal families. The appearance of the composite field containing the operator W_{[n_1−n_2]}, with twist length n_1 − n_2, is the result of an interesting interaction between the twist permutations in the v → 0 channel, which was previously overlooked in our papers [40,41], and is now analyzed in detail within this more general context of M^N/S_N orbifolds.
When n_1 = n_2, we also find a special symmetry between covering surfaces (or equivalence classes, or Hurwitz blocks), which allows us to compute the correlators in closed form on the base sphere, while 'reducing the Hurwitz number by half'. In Sect.4 we turn our attention from M^N/S_N orbifolds in general to the D1-D5 CFT (at the free-orbifold point) specifically. We derive a pair of 'master formulas' that encompass many different choices for the operators in (1.2)-(1.5). The multi-cycle fields can be Ramond ground states or composite NS chirals, and the single-cycle fields can be Ramond fields, NS chirals or the scalar deformation modulus taking the CFT away from the free-orbifold point. With these functions, we can use the technology of Sect.3 to extract conformal data. Besides the twists, we can find the dimensions of the operators and the structure constants in the OPEs (1.6). In Refs. [40,41] this analysis was performed when Z_[2] is the interaction modulus O^{(int)}_[2], and the X^ζ_[n] are Ramond ground states of the n-twisted sector. Here, with our general functions, we can extend these results to find the OPEs between O^{(int)}_[2] and NS chirals, between NS chirals and Ramond ground states, and also between single-cycle and composite NS chirals themselves. In the latter case, we note that the form of the correlation functions is restricted by the NS chiral ring, and show that we can recover some known structure constants [42-44] by taking n_2 = 1, reducing the composite field to a single-cycle one. The D1-D5 CFT's N = (4,4) superconformal algebra has a symmetry under 'spectral flow' [45], which changes weights and R-charges of fields, and relates states in the NS and Ramond sectors. In §4.4, we discuss the effect of spectral flow on four-point functions, and show how it connects specific pairs of functions derived with our master formulas.
We close with a brief discussion of our results and a few comments concerning the derivation of a new family of four-point functions related to D1-D5-P superstrata. These four-point functions, which involve excitations of the left-moving twisted Ramond ground states, can be found in terms of the correlators calculated in the present paper, using standard N = 4 superconformal Ward identities.

Multi-cycle correlators on S_N orbifolds

The M^N/S_N orbifold is made of N identical copies of a 'seed theory' on M, each copy labeled by an index I = 1,···,N, and all independent, so that the total central charge is c = N c_seed. The Hilbert space decomposes into twisted sectors created by 'bare twist fields' σ_g(z). The permutation g acts on the copy indices of operators going around the twist, O_I(e^{2πi}z)σ_g(0) = O_{g(I)}(z)σ_g(0). Every g ∈ S_N can be uniquely decomposed as a product of disjoint cycles (n_i) = (I_1,···,I_{n_i}) of lengths n_i, as in (2.1). The conformal weight of σ_g, with g given in (2.1), is the sum h_{σ_g} = Σ_i h_{σ_{n_i}}, where h_{σ_n} is the dimension of the single-cycle components [37,46]. The same is true for the anti-holomorphic weight h̄_{σ_g}, and the total dimension is ∆_σ = h_σ + h̄_σ. The weight (2.2) is the same for any g in the conjugacy class [g] = {hgh^{−1} | h ∈ S_N}, associated with the partition of N defined by the cycle lengths. Twists corresponding to individual permutations are not invariant under the action of S_N. An invariant field can be built by summing over the orbit of g ∈ S_N under the action of S_N by conjugation, as in (2.4). The factor S_[g] ensures that the S_N-invariant two-point function normalization is the same as that of its (non-S_N-invariant) components. In §B.1 we show that S_[g] = (|C_{S_N}(g)|/N!)^{1/2}, (2.5) where the order of the centralizer of g in S_N, for cycle type {N_n}, is |C_{S_N}(g)| = Π_n n^{N_n} N_n!. The result (2.5) was previously found in [39].
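The centralizer order fixing these normalizations is easy to check by brute force for small N (a standalone sketch of ours, not code from the paper): counting the elements of S_N that commute with a permutation of cycle type {N_n} reproduces Π_n n^{N_n} N_n!, and the conjugacy class size is N!/|C(g)|.

```python
from itertools import permutations
from math import factorial

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations stored as tuples of images of 0..N-1
    return tuple(p[q[i]] for i in range(len(p)))

N = 5
g = (1, 0, 3, 2, 4)            # cycle type (2)(2)(1): N_2 = 2, N_1 = 1
S_N = [tuple(p) for p in permutations(range(N))]

centralizer = [h for h in S_N if compose(h, g) == compose(g, h)]
conj_class = {compose(compose(h, g), tuple(h.index(i) for i in range(N)))
              for h in S_N}    # all h g h^{-1}

# Formula: |C(g)| = prod_n n^{N_n} N_n!  ->  2^2 * 2! * 1^1 * 1! = 8
print(len(centralizer), len(conj_class), factorial(N) // len(centralizer))
```

The orbit-stabilizer relation |[g]| = N!/|C(g)| checked here is exactly what enters the two-point function normalization S_[g].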
For single cycles g = (n)(1) N −n , it yields the usual normalization factor [33,37,48] which we will denote by S n , and for double cycles g = (n 1 )(n 2 )(1) N −n 1 −n 2 we obtain a normalization denoted by S n 1 ,n 2 , Excited twisted fields can also be combined into S N -invariant operators, in the same way as (2.4), and with the same normalization (2.5). Twisted Q-point functions Q-point functions of twisted operators are subject to selection rules associated with the permutations carried by the fields. A fundamental property of a twisted correlator, possibly with a collection X of excitations, is that the permutations must compose to the identity id ∈ S N , otherwise the function would have ill-defined monodromy. If the g i can be separated into two disjoint sets {g i } = {g k } {g }, such that one set commutes with the other, the function factorizes, where Y and Y are excitations of the respective sets of bare twists. A function which cannot be factorized in such a way is called 'connected'. In this section, we will be interested only on the S N -related properties of twisted correlators, so we now consider functions of bare twists σ [g] only. The Q-point function of invariant operators is the sum be the permutations in the r.h.s. of (2.10), and consider an ordered sequence {p 1 , · · · , p Q } that satisfies (2.8), (2.11) This will also be satisfied by every other sequence in the equivalence class α defined by α : i.e. a global conjugation of every p i by the same k ∈ S N . Moreover, all correlators σ p 1 (z 1 ) · · · σ p Q (z Q ) with the {p i } in a given class α will be equal by symmetry, because the global conjugation only relabels every copy in the twists, and all copies are identical. 
If we denote the set of all such conjugacy classes by Cl, the sum over orbits in (2.10) can be replaced by a sum over all α ∈ Cl, where we take one representative correlator for each class α and multiply it by a 'symmetry factor' N_α, counting the number of permutation tuples in α. It is convenient to separate the classes according to the number c of distinct copies that participate in non-trivial cycles (i.e. cycles of length n > 1). This number is, of course, a class property, so we can decompose Cl = ∪_c Cl_c. In the end, the r.h.s. of Eq.(2.10) becomes the expression (2.13). In Appendix B we give several examples of classes α and discuss the set Cl in detail. Some of the classes are made of permutations that factorize, as in (2.9), one or more times. The type of factorization is, also, a class property. In App.B we show that the symmetry factor N_{α_c} is essentially the same for every class α_c ∈ Cl_c, see (2.14). The only class-dependent factor, ν_{α_c}, is given by Eq.(B.18). In classes α_c where no two-point function factorizes, ν_{α_c} = 1; in classes α_c where there is a factorization of d two-point functions with cycles n_j, j = 1,···,d, we have ν_{α_c} = 1/Π_j n_j. Eqs.(2.13) and (2.14) contain the exact N-dependence of the twisted Q-point function, summarized in (2.15). This formula generalizes a result of [33], which only considers connected functions. The connected classes α ∈ Cl can be described in a diagrammatic language developed in [33]; each class α corresponds to a different diagram, and the sum in Eq.(2.22) is a 'sum over diagrams'.

The large-N limit

The N-dependence of Eq.(2.15) seems to dwell solely in the coefficients multiplying the last sum over α_c ∈ Cl_c. If this were the case, it would suffice to expand these coefficients as functions of N to find the scaling of the function as N → ∞. This works for single-cycle correlators with cycles of fixed length [33], but when we consider multi-cycle fields there are subtleties.
A multi-cycle twist may be allowed to have a large number of cycles; an important example is g_i = (n)^{N/n} ∈ S_N. So the centralizers of the g_i may depend on N in Eq.(2.15). Also, the number of terms in the sum over α_c ∈ Cl_c may be very large, scaling with N. In summary, determining the scaling of a multi-cycle Q-point function in the large-N limit is a problem that depends intrinsically on the specific properties of the twists involved. The detailed analysis of a relatively simple case is the subject of Sect.2.1 below. But, under certain assumptions, we can find an interesting generalization of the results of [33]. Let us isolate the trivial cycles in the twist permutations, writing the order of each centralizer as |C(g_i)| = (N − Σ_{n>1} n N_n)! Π_{n>1} n^{N_n} N_n!, where N_n is the number of cycles of g_i with length n. If we now assume that the cycle lengths n are fixed as N → ∞, we can use Stirling's formula to expand these factorials; expanding the factor N!/(N−c)! as well, Eq.(2.15) becomes, up to corrections of order 1/N, the expression (2.21). The overall factor in (2.21) is the same for all classes α_c: it depends on the lengths n_r and on c, but not on N. Let R be the total number of non-trivial cycles in all the permutations g_i, and let n_r > 1, r = 1,···,R, be their lengths, in such a way that the sum in the exponent of N in Eq.(2.21) can be written as a sum over r. Note that the total number of these cycles is R ≥ Q, and if all g_i are single-cycle permutations, R = Q. If we further assume that the sum over classes α_c ∈ Cl_c in Eq.(2.21) also does not depend on N, then we have found the leading large-N scaling of the function. This assumption about the sum over classes is not unrelated to the assumption used to derive (2.18). If there is a finite number of cycles with fixed (and finite) lengths n_r, then it is reasonable to expect a finite number of non-vanishing classes satisfying the condition (2.11). (This is true, for example, in the case of single-cycle fields.) Eq.(2.21) can then be written as (2.22) if we define the number

χ = 2c − Σ_{r=1}^R (n_r − 1). (2.23)

We can interpret χ as the Euler characteristic of covering surfaces, as follows.
It is well-known that a connected twisted correlator is associated with a ramified covering surface Σ of the 'base sphere' S²_base [37]. In a connected correlator, the number R of non-trivial cycles is the number of ramification points of Σ, the number c of distinct copies entering these cycles is the number of sheets of Σ, and the order of the ramification point associated with the cycle (n_r) is its length n_r. With this ramification data, the Riemann–Hurwitz formula gives the genus of the (connected) covering surface Σ as g = 1 − c + (1/2) Σ_{r=1}^R (n_r − 1), which is compatible with (2.23), i.e. χ = 2 − 2g. But some of the classes α_c may give disconnected correlators, which are products of connected functions. The latter are each associated with a covering surface Σ_i, and we can associate the factorized correlator with their disjoint union Σ_1 ⊔ Σ_2 ⊔ ···. The c non-trivial copies and the non-trivial cycles are split among the factorized correlators in such a way that Eq.(2.23) gives, schematically, χ = Σ_i χ(Σ_i), which is the appropriate additive behavior of the Euler characteristic. Note that the maximum value of the Euler invariant is χ = 2, when the covering surfaces have g = 0, followed by χ = 0 for g = 1; for higher genera, χ < 0. So, in Eq.(2.22), as in the standard case, the leading-N contribution to the correlator comes from the (possibly disconnected) zero-genus covering surfaces.

Figure 1: Moving twist operators on S²_base. Each twist creates a branch cut (dotted lines). Whenever a twist crosses a branch cut, its permutation changes.

Eq.(2.22) is a natural generalization, for disconnected functions, of the well-known scaling of connected Q-point functions as ∼ Σ_g N^{1−g−Q/2} [37]. Of course, for single-cycle, connected functions, our derivation above can be reduced to that of [33]. See also the more general results of [39]. Let us stress that formulas (2.21) and (2.22) for the large-N scaling only hold under certain assumptions about the twists of the multi-cycle fields.
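The counting in Eq.(2.23) can be packaged in a few lines (our own sketch): χ computed from the ramification data matches 2 − 2g for a connected covering, and is additive under disjoint unions.

```python
def euler_char(cycle_lengths, sheets):
    """chi = 2c - sum_r (n_r - 1): Euler characteristic of a covering with
    `sheets` sheets and ramification orders `cycle_lengths`, as in (2.23)."""
    return 2 * sheets - sum(n - 1 for n in cycle_lengths)

# Connected example: 4 simple branch points (n_r = 2) on 3 sheets -> genus 0
chi = euler_char([2, 2, 2, 2], 3)
genus = 1 - chi // 2
print(chi, genus)  # 2 0

# Additivity: the disjoint union of this covering with a 2-sheeted covering
# branched at 2 points has chi = 2 + 2
chi_union = euler_char([2, 2, 2, 2] + [2, 2], 3 + 2)
print(chi_union)  # 4
```

The first example is exactly the four-transposition configuration relevant for the correlators (1.2) at leading order; the disjoint union illustrates why χ, and not g, is the natural invariant for disconnected functions.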
Essentially, we are assuming that, when N → ∞, the number of cycles in the correlation function does not proliferate; hence, although the function may be disconnected, it disconnects into a product of a finite number of connected functions/covering surfaces.

The monodromy of classes

The twist σ_p(z) creates a branch cut at z ∈ S^2_base. When an operator crosses it, the copy indices are permuted by the action of p. If σ_{p_1} crosses the branch cut of σ_{p_2} counterclockwise (Fig.1b), p_2 acts on p_1 by left conjugation, and σ_{p_1} becomes σ_{p'_1}, where p'_1 = p_2 p_1 p_2^{−1}. (If the branch cut is crossed clockwise, p_2 acts by right conjugation, p_1 → p_2^{−1} p_1 p_2.) Completing the circular movement of the first twist, the second one crosses a branch cut and is also affected (Fig.1c). The final configuration (Fig.1d) has two twists different from the initial one (Fig.1a), but the product of the permutations is preserved, Eq.(2.26). Thus, when we rotate the σ_{p_i}(z_i) around each other inside a Q-point function, we obtain a function with different twists. The condition (2.8) is, however, preserved, as a result of (2.26). Suppose we start with a twisted Q-point function whose permutations belong to the equivalence class α. After rotating the twists as in Fig.1, the final permutations belong to a different class α' ≠ α. The fact that moving twist fields around each other moves between the different equivalence classes α ∈ Cl was called "channel crossing symmetry" in [33]. (Since each class is associated with a diagram, "channel crossing" is a symmetry of the set of all diagrams under the monodromies of a connected correlation function [33].) In summary, correlation functions of individual permutations, such as (2.8), do not have well-defined monodromies, because individual twists are altered when they go around another twist.
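The key algebraic fact behind the monodromy discussion, namely that conjugation changes the individual twists but preserves their product, can be verified directly. A minimal sketch (ours, with permutations represented as index tuples):

```python
# Sketch of the branch-cut monodromy: when sigma_{p1} crosses the cut of
# sigma_{p2} counterclockwise, p1 -> p2 p1 p2^{-1}, and the product p2 p1
# (the condition (2.8)-type constraint) is unchanged.
def compose(p, q):
    # (p*q)[i] = p[q[i]]: apply q first, then p
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

p1 = (1, 0, 2, 3)   # transposition (0 1) acting on 4 copies
p2 = (0, 2, 3, 1)   # 3-cycle (1 2 3)

p1_new = compose(compose(p2, p1), inverse(p2))   # left conjugation by p2
print(compose(p2, p1) == compose(p1_new, p2))    # -> True: product preserved
```

The equality holds for any p_1, p_2, since p'_1 p_2 = (p_2 p_1 p_2^{−1}) p_2 = p_2 p_1; this is the statement that the monodromy shuffles the classes α while preserving Eq.(2.8).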
But the S_N-invariant functions (2.10) do have well-defined monodromies, because they are a sum over all equivalence classes α, hence explicitly channel-crossing symmetric.

Untwisted operators

When the correlation function contains operators in the untwisted sector, the discussion above must be modified. In this case, the sum over conjugacy orbits of (2.4) is not a good definition for an S_N-invariant untwisted operator. Instead, it should be replaced by a fully symmetrized tensor product, Eq.(2.27). Here it should be understood that the copies I entering the symmetrized tensor product are all different. The normalization factor S, whose structure is different from the one in (2.4), counts the number of equivalent terms in the symmetrized product, and is derived in §B.4. When there is only one untwisted field, we have a simple sum over copies. Fields with this structure appear, for example, in [9]. There is a way to extend the definition (2.27) to products of composite twisted fields, which is widely used in the literature concerned with fuzzballs and microstate geometries. A detailed derivation of the normalization factor analogous to the one in (2.27) can be found e.g. in [12]. This type of construction of S_N-invariant twisted fields is different from the sum over orbits that we use here, and the normalization factor in [12] differs from the one in Eq.(2.4). We note that, although it seems perhaps less intuitive than the straightforward symmetrization of copies, the sum over orbits is particularly useful for correlators whose fields are all twisted, as it is amenable to the equivalence-class decomposition of [33,44,49] discussed in §2.1.

Four-point functions with two fields of twist two

From now on, we will be interested in four-point functions of the type (3.1), where v is an anharmonic ratio, Z_[2] is an S_N-invariant single-cycle field of length 2, and the multi-cycle fields X are defined in Eq.(3.2). It is usual to interpret single-cycle fields as "winding strands", see e.g. [15,50].
In this language, Z_(2) and X^ζ_(n) are excitations of a 2-wound and an n-wound strand, respectively. The index ζ labels possibly different excitations of the multiple strands that make up X_[N^ζ_n], and the bar on X̄^ζ_(n) indicates a field with opposite charges. For example, in the D1-D5 CFT, ζ labels different SU(2) charges. In the D1-D5 CFT, we will mostly be interested in the case where Z_[2] is an NS chiral or the interaction operator, and the X^ζ_(n) are Ramond ground states, but, at this point, we focus only on the twist structure, which governs the factorization properties of the full correlator.

Factorization

The correlation function (3.1) factorizes because the cycles of Z_[2] and Z̄_[2] can overlap with at most four of the cycles of X_[N^(s)_n] and X̄_[N^(s)_n]. To find the exact way the factorization occurs, recall that the r.h.s. of (3.1) is a multiple sum over orbits, and, apart from normalization factors, each term has the structure (3.3). The term (3.3) factorizes in two different ways, depending on how the cycles of the four operators overlap. The first is a four-point function with only one component of each heavy field, Eq.(3.4a) (we omit position dependences for brevity), and the other possibility is a four-point function with double-cycle fields, Eq.(3.4b).

(Footnote 5: We use X to denote arbitrary fields. They are not to be confused with the bosons X^{AȦ}_I of the seed CFT defined in App.A, which do not appear in the main text.)

This restricted factorization follows because in both Eqs.(3.4) there is, implicit, a product of factorized two-point functions ⟨X̄^ζ_(n)(∞) X^ζ_(n)(0)⟩ = 1, and the fields X^ζ_(n) and X̄^ζ_(n) whose cycles do not overlap with Z_(2) nor Z̄_(2) must all match in such a way that none of these two-point functions vanishes; see §B.3. Besides (3.4) there is, sometimes, a third possible type of factorization of (3.3), resulting in a product of three-point functions.
This only happens for some special configurations of the cycles in the composite field [∏_{ζ,n}(X^ζ_(n))^{N^ζ_n}], and for special configurations of the R-charges, including that of Z_[2]. Since this type of factorization is not generically present, we will ignore it in the remainder of this paper, and henceforth it should always be understood that the composite fields are not such that this factorization occurs. Still, we note that, in the cases where it does occur, the contribution of the factorized three-point functions can be determined in a similar way as we derive the contributions of the connected four-point functions below. We give a more detailed discussion of this case in §B.3. Applying Eq.(2.15) to the four-point function (3.1), we get Eq.(3.5). Here g = ∏_n (n)^{N_n}, with Σ_n n N_n = N, is the permutation in the multi-cycle field. The number of active copies is constrained by the conjugacy class of g. We now assume, for simplicity, that N_1 = 0, i.e. that all cycles entering the correlation function are non-trivial. Then all N copies enter the correlation function non-trivially, and c = N (if g = ∏_n (n)^{N_n} with all n > 1). Assuming (3.6), and using (2.6), we arrive at Eq.(3.7). The classes α can be divided into two subgroups, denoted α_0 and α_1, according to whether they factorize as in (3.4a) or (3.4b), respectively. In each case, many two-point functions factorize, and inserting the factors ν_α given by (B.18) we find the final result in (3.7). Among the terms in the classes α_0 are all possible pairings of components X̄^{ζ_i}_(n_i), and of components X^{ζ_i}_(n_i), from the multi-cycle fields. The classes with the same pairing reconstruct the connected part of the S_N-invariant function with double-cycles and with the number of copies restricted to c = n_1 + n_2, that is, Eq.(3.8). We assumed n_1 ≠ n_2; for n_1 = n_2 = n, the overall coefficient in the r.h.s. must be multiplied by 2!.
The index 0 in the function ⟨···⟩_0 ~ Σ_{α_0} ⟨···⟩_{α_0} indicates that its associated covering surfaces have genus g = 0, as given by the Riemann-Hurwitz formula (2.24) with c = n_1 + n_2. Similarly, the classes α_1, which have c = n, reconstruct the connected function ⟨···⟩_1, whose index indicates that the associated covering surfaces have genus g = 1. Combining everything, we gather that Eq.(3.7) gives Eq.(3.10). The coefficients in each sum are 'symmetry factors', given by the number of equivalent ways of forming pairs of components from the original multi-cycle fields; see [41]. (They are squared because there are two multi-cycle fields.) The function P(q) is the number of ways to pair q objects, P(q) = q!/(2^{q/2}(q/2)!) for q even. We have reduced the four-point function with two full multi-cycle fields to a sum of connected four-point functions with, at most, double-cycle fields. Note that to arrive at Eq.(3.10) we have not used the large-N approximation. Although the connected functions ⟨···⟩_0 and ⟨···⟩_1 have genera g = 0 and g = 1, respectively, we see from Eq.(2.22) that, for large N, both scale as ~ N^{−2}. This is because ⟨···⟩_0 has an extra pair of cycles, giving an extra pair of ramification points to the covering surface. The symmetry factors, which depend on the multiplicities N^ζ_n, also depend on N because they are constrained by Σ_{n,ζ} n N^ζ_n = N. Hence, depending on the configuration of this partition, and on how the large-N limit is taken (e.g. leaving the cycles' lengths fixed or not), the symmetry factors can also become large. It is to be expected that if some of the N^ζ_n grow parametrically with N, the terms with P(N^ζ_n) dominate the r.h.s. of (3.10), as they contain factorials. In this case, the genus-one functions end up being subleading. As a concrete example, consider a multi-cycle field with p components of length n_1 and q components of length n_2. In this case, the second sum in (3.10) is void. Since n_1 p + n_2 q = N, if we keep the cycles' lengths fixed in the large-N limit, there must be a large number of both components, i.e.
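The pairing count P(q) quoted above is the double factorial (q − 1)!!. A short sketch (ours) checks the closed formula against a brute-force enumeration of pairings:

```python
# Sketch: P(q), the number of ways to pair q objects (q even), equals
# (q-1)!! = q! / (2^(q/2) (q/2)!). Brute-force check by recursion:
# pair the first object with each possible partner, recurse on the rest.
from math import factorial

def pairings(items):
    if not items:
        return 1
    first, rest = items[0], items[1:]
    return sum(pairings(tuple(x for x in rest if x != partner))
               for partner in rest)

def P(q):
    return factorial(q) // (2 ** (q // 2) * factorial(q // 2))

for q in (2, 4, 6, 8):
    assert pairings(tuple(range(q))) == P(q)
print(P(4), P(6))  # -> 3 15
```

The factorial growth of P(q) is what makes the symmetry factors dominate the r.h.s. of (3.10) when some multiplicity N^ζ_n grows with N.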
p, q ≫ 1. Using Stirling's formula, we see that the g = 1 terms in the last line are subleading to all terms with double-cycle fields. An even simpler example is a field with only one type of component, Eq.(3.14). The four-point function simplifies further, and for N/n ≫ 1 we find again that the genus-one term is strongly suppressed.

Genus-zero covering surfaces and Hurwitz blocks

We have seen that the main ingredients of the four-point functions (3.10) are the connected functions (3.16). From now on we omit the label 0, and always assume that we are dealing with the connected function with a genus-zero covering surface, which is obtained with the covering map (3.17) of Ref.[40]. The ethos of a covering map [38] is to cover the "base Riemann sphere" S^2_base ∋ z, where (3.16) is evaluated, with a ramified surface Σ_g ∋ t of genus g, whose ramification points have the property of trivializing the twists in (3.16). The map (3.17) defines such a covering surface with g = 0, i.e. a covering of the sphere by the sphere. The pair of (disjoint) twist insertions at z = 0 lift to the pair of ramification points t = 0 and t = t_0, with ramifications n_1 and n_2. The same happens to the pair of twists at z = ∞. The single-cycle twists at z = 1 and z = v must, each, be lifted to one ramification point, which we call t = t_1 and t = x, respectively. At these points, the map must have the correct monodromy, i.e. the derivative must be factorizable as z'(t) ~ (t − t_1)(t − x). This imposes relations among the parameters t_0, t_1, t_∞ and x, that can be satisfied by choosing Eq.(3.18). The asymmetry between n_1 and n_2 in Eqs.(3.17)-(3.18) is "fictitious": it stems from a freedom in parametrizing the covering map [40]. Without loss of generality, we will consider n_1 ≥ n_2. The covering surfaces with the same ramification data, i.e. the same number of ramification points with fixed orders and positions on S^2_base, are not unique. The number H of such surfaces is a Hurwitz number [33,44].
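The Stirling estimate behind the suppression of the genus-one terms can be illustrated numerically. The sketch below (our illustration) tracks the logarithm of the pairing factor P(q) for q = N/n identical strands; since it grows factorially in q while the genus-one term carries no such factor (both sharing the same N^{−2} scaling), the genus-zero contributions dominate.

```python
# Sketch: growth of log P(q), P(q) = (q-1)!! = q!/(2^(q/2) (q/2)!),
# computed stably with log-gamma. Its factorial growth is why terms with
# pairing factors dominate the genus-one terms at large N.
from math import lgamma, log

def logP(q):
    return lgamma(q + 1) - (q / 2) * log(2) - lgamma(q / 2 + 1)

n = 2  # strand length, held fixed in the large-N limit
for N in (20, 100, 1000):
    q = N // n
    print(N, round(logP(q), 2))  # grows roughly like (q/2) log q
```

For N = 1000 and n = 2, log P(q) is already of order 10^3, so the genus-one contribution, with no pairing factor, is negligible in comparison.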
It is the number of inverses x_a(v), a = 1, · · ·, H, of the function v(x) found by inserting (3.18) into (3.17); equivalently, it is the number of roots of the algebraic equation (3.20). Also, as explained in [33] (see also [44]), there is a correspondence between covering surfaces and the equivalence classes α that compose the S_N-invariant function (3.16), cf. (3.8), expressed in Eq.(3.22). In other words, in the first line of (3.22) there are H different equivalence classes α with g = 0, each class associated to one of the solutions x_a(v) in the second line. So the sum in (3.22) is over the H = 2 max(n_1, n_2) topologically distinct covering surfaces with R = 6 ramification points of orders given by the twists, and c = n_1 + n_2 sheets, as per the Riemann-Hurwitz equation (2.24). As v sweeps S^2_base, each function x_a(v) fills one out of H disjoint regions, which compose again the entire Riemann sphere. We will call these regions 'Hurwitz regions'. Eq.(3.22) shows that the S_N-invariant function A^{ζ_1,ζ_2}_{n_1,n_2}(v, v̄), with domain on S^2_base, is decomposable as a sum of H 'Hurwitz blocks', derived from the function A^{ζ_1,ζ_2}_{n_1,n_2}(x), with domain in the x-plane. It is crucial that all Hurwitz blocks are summed for the total function to be S_N-invariant, because each block corresponds to one of the equivalence classes α. As discussed at the end of §2.1, when one twisted operator revolves around another (cf. Fig.1) the equivalence classes are shuffled. Hence a missing Hurwitz block makes the monodromies of the correlation function ill defined. It is often possible to compute the "Hurwitz block function" A^{ζ_1,ζ_2}_{n_1,n_2}(x) in closed form. We will do this in Sect.4 for a varied collection of operators. But even when this is the case, it is, in general, not possible to write the S_N-invariant function itself in closed form, because the x_a(v) are the roots of Eq.(3.20), which has degree higher than 5 for almost all twists.
Exact Hurwitz blocks for composite fields with equal cycles

The exception is when n_1 = n_2 = n. The polynomial (3.20) then simplifies, and we can find its H = 2n solutions exactly, Eq.(3.24), where v^{1/2n} is a (single) 2n-th root of v. The division of the x-plane into H = 2n disjoint Hurwitz regions can be clearly seen in the plot of v(x), shown in Fig.2 for n = 3. The shading of the plot follows the contours where |v(x)| = constant, distinguishing the curves traced on the x-plane when v goes in circles around the origin of the base sphere. All regions meet at the two critical points x = ±1. As stated above, each of the H regions of the x-plane is associated, on the one hand, to a topologically distinct ramified covering of S^2_base and, on the other hand, to a distinct class α of permutations satisfying (2.11). But in the case n_1 = n_2, there is a subtlety. The functions (3.24) can be grouped in n pairs related by inversion, Eq.(3.25): x_{a+n}(v) = 1/x_a(v) for a = 0, · · ·, n − 1. This is a global conformal transformation of the x-plane, which suggests that the two solutions x_a and x_{a+n} describe covering surfaces with the same structure.

Figure 3: Covering surfaces for n_1 = n_2 = 3. In each panel, the horizontal axis is Re(t) and the vertical axis Im(t). Blue patches are the preimages of the upper-half plane Im(z) > 0, and pink patches the preimages of the lower-half plane Im(z) < 0, under the covering map (3.17). The positions of the ramification points t_0, t_1, t_∞ depend on the position of the ramification point x according to (3.18). The 6 panels correspond to the 6 solutions x_a.

This is highlighted by the green arrows in Fig.3; following the arrow in the upper panel we find the sequence t_1 → t_∞ → t_0 → x → 0. Rotating the plane we get an arrow in the opposite direction, to be contrasted with that indicated in the bottom panel: the relative positions of every point are the same, except for t = 0 and t = t_0, which are swapped.
As ramification points, t = 0 and t = t_0 are equivalent: they are both preimages of z = 0, and with equal ramification, because they correspond to cycles of equal length. In this sense, swapping these two points does not matter, and the number of distinct ramified coverings is reduced to H = n. This "reduction by half of the Hurwitz number" when n_1 = n_2 = n can also be seen from the perspective of H as counting the number of different equivalence classes α. An explicit construction of the 2 max(n_1, n_2) different classes in the function (3.22) can be found in Appendix B of Ref.[40]. It is clear from the construction given there that when n_1 = n_2 the otherwise distinct 2 max(n_1, n_2) inequivalent classes are grouped in pairs, and only n distinct classes remain. Eq.(3.25) is the manifestation of this pairing in terms of the geometry of the covering surfaces. But there is still one further subtlety. In (3.22) we may have different excitations of the strands n_1 and n_2, even if the strands have the same length. Then the ramification points t = 0 and t = t_0 are "decorated" with different operators, and are distinct, even though they are equivalent with regard to the twist structure. In summary, for functions with double-cycles of the same length, the S_N-invariant four-point function (3.22) takes the form (3.26), where the x_a(v) are given in closed form by Eq.(3.24). Furthermore, when, in addition to the twists having cycles of the same length, the excitations are also equal, i.e. ζ_1 = ζ_2, then the Hurwitz blocks have the symmetry

A^{ζ,ζ}_{n,n}(x_a(v)) = A^{ζ,ζ}_{n,n}(1/x_a(v)),   (3.27)

and only half the terms in the sum (3.26) are independent.

Composite fields with unequal cycles

The geometry of the x-plane is more complicated when n_1 ≠ n_2. In Fig.4 we show it for n_1 = 7, n_2 = 3. There are H = 2 max(n_1, n_2) = 14 regions, v(x) taking all values in C inside each.
(The number of regions can be found by counting, say, the different red streaks in the plot, where Arg(v) ≈ π.) The main difference from Fig.2 is that in Fig.4 there is an inner region with three critical points, which collapse into a trivial one when n_1 = n_2. The trefoil structure of the innermost regions (labeled 1, 5 and 14 in Fig.4c) around the middle point x = −(n_1 − n_2)/(2n_2) is the same for any values of n_1 ≠ n_2. Increasing the difference n_1 − n_2 increases just the number of "petals" (labeled 2, 3 and 4) between x = −n_1/n_2 and x = −(n_1 − n_2)/n_2, as well as the symmetric ones (labeled 11, 12 and 13) between x = 0 and x = 1. There are always n_1 − n_2 such petals at each side. The petals and the trefoil are associated with the twist structure of the four-point function's OPE channels.

OPEs and Hurwitz blocks

For the four-point function (3.16), there are two inequivalent OPE limits, Eqs.(3.29). In v → ∞, the OPE is equivalent to (3.29b) in what concerns the twists, but with the double-cycle fields having opposite charges. (That is, the operators appearing in the fusion rules are different, but with the same twists discussed below.) The critical points of v(x) correspond to OPE limits. Although it is not possible to find x_a(v) in closed form for n_1 ≠ n_2, we can find them asymptotically for v ≈ 0, 1. At v = 1, there are two critical points, Eq.(3.30). The root x_ℵ has multiplicity one, and x_ℷ has multiplicity three. Explicitly, expanding v(x) in the vicinity of (3.30) and inverting the series, we obtain Eq.(3.31). These functions are plotted as magenta and cyan contours with |v(x)| ≈ 1 in Fig.4b-c. The contours extend to x = ∞ in a single direction, but move towards x_ℷ from three different directions in the inner trefoil region. Similarly, expanding v(x) around the v = 0 critical points, Eq.(3.32), and inverting the series expansion, we find Eqs.(3.33). The two functions can be visualized in Fig.4b-c as the orange and green contours with |v(x)| ≈ 0. The contours split in two closed parts. One part, given by x_ℶ(v), circles around x = −n_1/n_2, as shown in Fig.4b.
It avoids n_1 − n_2 regions in the petals-trefoil patch in Fig.4c, thus crossing a total of H − (n_1 − n_2) = n_1 + n_2 regions. The other closed contour, given by x_ℸ(v), encircles x = 0, passing over the n_1 − n_2 remaining regions. As v → 0, the contour x_ℶ(v) tightens around x = −n_1/n_2, and x_ℸ(v) tightens (much faster) around x = 0.

Fusion rules

The twists of operators resulting from the OPEs (3.29) must appear as branches of the correlation function A^{ζ_1,ζ_2}_{n_1,n_2}(v, v̄). The branches correspond to the multiplicities of the roots x_a in the coincidence limits. The multiplicities can be read both from the leading powers in the expansions (3.31) and (3.32), and also from the number of regions around the critical points discussed above in Figs.2 and 4. For v → 1, since x_ℵ(v) has no branch cuts, the OPE has an untwisted field U_[1], while the third-order branch of x_ℷ(v) indicates an operator S_[3] of twist 3. This agrees with the composition of permutations: a product of two transpositions is either the identity or a cycle of length three, Eq.(3.34). Hence the OPE (3.29a) reads as in Eq.(3.35), where the Cs are structure constants and {· · ·} indicates conformal families. Similarly, in the OPE (3.29b), two types of resulting permutations contribute to the four-point function,

[2] × [(n_1)(n_2)] = [n_1 + n_2] + [(n_1 − n_2)(n_2)(n_2)].   (3.36)

In the first term in the r.h.s., a transposition (2) joins the two cycles (n_1)(n_2) into a single cycle of length n_1 + n_2; this is what we find from the branch cut of x_ℶ(v) in (3.33a). In the other type of contribution, the transposition splits the longer cycle in two: (2) × (n_1) = (n_1 − n_2)(n_2). The resulting cycle of length n_1 − n_2 is seen in the branch cut of x_ℸ(v) in (3.33b). The total fusion rule extracted from the four-point function then follows, with structure constants involving B_{XX̄}, the normalization constant of a two-point function.
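The permutation arithmetic behind the fusion rule (3.36) can be verified explicitly by composing cycles and reading off the resulting cycle type. A sketch (ours, with the same tuple representation as before):

```python
# Sketch verifying the fusion rule (3.36): a transposition linking the two
# cycles of (n1)(n2) joins them into (n1+n2); a transposition inside the
# (n1) cycle splits it into (n1-n2)(n2).
def cycle_type(p):
    seen, lengths = set(), []
    for i in range(len(p)):
        if i in seen:
            continue
        length, j = 0, i
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def cycles_to_perm(cycles, size):
    p = list(range(size))
    for c in cycles:
        for a, b in zip(c, c[1:] + c[:1]):
            p[a] = b
    return tuple(p)

n1, n2 = 5, 3
g = cycles_to_perm([tuple(range(n1)), tuple(range(n1, n1 + n2))], n1 + n2)
join = cycles_to_perm([(0, n1)], n1 + n2)    # transposition linking the cycles
split = cycles_to_perm([(0, n2)], n1 + n2)   # transposition inside (n1)
print(cycle_type(compose(join, g)))   # -> (8,)      i.e. [n1 + n2]
print(cycle_type(compose(split, g)))  # -> (3, 3, 2) i.e. [(n2)(n2)(n1-n2)]
```

With n_1 = 5, n_2 = 3 this reproduces exactly the two terms on the r.h.s. of (3.36).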
Composite fields with equal cycles

When n_1 = n_2, the solutions in the coincidence limits simplify, as can be seen directly from (3.24): the solutions x_ℷ(v) and x_ℸ(v) are missing. It follows that, for n_1 = n_2, there is no operator S_[3] in the v → 1 channel, and no W_[n_1−n_2] in the v → 0 channel. This can also be understood from the perspective of S_N selection rules. For example, σ_[3] disappears because there is no three-point function satisfying (2.8) with a cycle of length 3 and two double-cycle twists [(n)(n)].

Four-point functions of twisted fields in the D1-D5 CFT

We now turn to the D1-D5 CFT at the free orbifold point. Conventions for the notation of fields are given in App.A. The holomorphic Ramond ground states of the n-twisted strands can be written in bosonized language as in Eq.(4.1). All have conformal weight h^R_n = n/4 = n c_seed/24, the correct weight of a spin field in a CFT with central charge n c_seed. They are distinguished by their SU(2) charges (j, j̄): for R^±_(n) and R^Ȧ_(n), respectively, (j = ±1/2, j̄ = 0) and (j = 0, j̄ = ±1/2). NS chiral primaries can be expressed in bosonized form as in Eq.(4.2) (see e.g. [42-44]). They have conformal weights and R-charges

h^(0)_n = (n − 1)/2 = j^(0)_n ;  h^(2)_n = (n + 1)/2 = j^(2)_n ;  h^(1)_n = n/2 = j^(1)_n .

The operator that drives the theory away from the free orbifold point is a specific excitation of the 2-twisted lowest-weight NS chiral with super-current modes, Eq.(4.4). This is an exactly marginal deformation, with dimensions h = 1 = h̄, and it is a singlet of all the SU(2)s, with j = j̄ = 0. From the single-cycle fields above, we can build multi-cycle, S_N-invariant fields such as the Ramond ground states of the full orbifold, Eq.(4.5), and also composite NS chirals, Eq.(4.6), where #O^(p) denotes the number of strands of the type p entering the composite fields. In both Eqs.(4.5)-(4.6), the multiplicities form a partition of N = Σ_{ζ,n} n N^ζ_n. In the large-N limit, the Ramond ground states (4.5) are heavy, h^R ~ N.
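The weights quoted above follow simple closed formulas, which the following sketch (ours) tabulates and checks: Ramond ground states of the n-twisted strand have h = n/4 = n·c_seed/24 with c_seed = 6, and the NS chiral primaries satisfy h = j.

```python
# Sketch checking the weights of n-twisted sector fields in the D1-D5 CFT:
# Ramond ground states h^R_n = n * c_seed / 24, and NS chiral primaries
# h^(0)_n = (n-1)/2, h^(1)_n = n/2, h^(2)_n = (n+1)/2, all with h = j.
c_seed = 6

def h_ramond(n):
    return n * c_seed / 24          # equals n/4

def h_ns(n, kind):                  # kind in {0, 1, 2}
    return {0: (n - 1) / 2, 1: n / 2, 2: (n + 1) / 2}[kind]

assert h_ramond(2) == 0.5 and h_ramond(4) == 1.0
# the deformation operator is built on the 2-twisted lowest-weight NS chiral,
# h^(0)_2 = 1/2, raised to h = 1 by the super-current modes:
assert h_ns(2, 0) == 0.5
print([h_ramond(n) for n in (1, 2, 3)], [h_ns(3, k) for k in (0, 1, 2)])
```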
The multi-cycle NS chirals (4.6) can also be heavy, if the number of lowest-cohomology components is parametrically small. The fermionic exponentials in (4.11) are lifted to ramification points on the covering surface, with an appropriate factor depending on the local behavior of the map (3.17); see Eqs.(C.2)-(C.3). The resulting covering-surface correlator is a six-point function, Eq.(4.12). The relation between A^{αβ|σ̂|σ̌}_{n_1,n_2}(x)_cover and the base-sphere correlator A^{αβ|σ̂|σ̌}_{n_1,n_2}(x) is

A^{αβ|σ̂|σ̌}_{n_1,n_2}(x) = e^{S_L} A^{αβ|σ̂|σ̌}_{n_1,n_2}(x)_cover ,   (4.13)

where S_L is a Liouville action induced by the covering map [37]. In fact, e^{S_L} is the correlation function of the bare twists within (4.11), and is universal, independent of the specific excitations that define X^ζ and Z. The algorithm by Lunin and Mathur [37,38] to derive S_L involves a careful regularization of the path integral around the ramification points. (See also [52] for a very detailed account.) An alternative, described in [41,53], is to use the stress-tensor method to compute the bare-twist correlation function, bypassing the regularization procedure. The results for A^{αβ|σ̂|σ̌}_{n_1,n_2}(x)_cover and S_L are given in Eqs.(C.5) and (C.6), respectively, yielding our desired master formula, Eq.(4.14), with the exponents (4.14b). Some constant factors involving n_1 and n_2 have been absorbed into the constant C_Z, which also takes into account the arbitrariness of the normalization of the bare twists, and is fixed by the correct normalization of the correlation function in the identity OPE channel. The function with the interaction operator, Eq.(4.15), is more complicated than (4.11) because the deformation operator O^(int)_[2] is not simply a fermionic exponential, but a linear combination of terms with bosonic factors and contributions from the integral defining the modes of the super-current excitation, cf. (4.4). Its computation, carried out in Appendix B of Ref.
[41], is, nevertheless, completely analogous to the one above, including the same Liouville factor, because the twist structure is identical. In the end, we obtain Eq.(4.16), with the exponents (4.16c) and (4.16d). This result is, in fact, more general than the one derived in Ref.[41], because in the latter case we had restricted our attention to double-cycle Ramond fields, for which σ̌_2 = ℓ̌_2 = σ̂_2 = ℓ̂_2 = 1, hence the last two terms in each exponent P_i vanish. The present result allows us to give also the correlators for X made of NS chirals, using the dictionary in Table 1.

Composite fields with equal cycles

In §3.2 we showed that when n_1 = n_2 = n, the covering maps develop a symmetry that reflects upon the Hurwitz blocks. We can check this property using our master formulas. The correlators (4.16) and (4.14) simplify considerably in this case, and the symmetry (3.27) of the Hurwitz blocks can be checked explicitly. We see that A^{αβ|σ̂|σ̌}_{n,n}(x) = A^{αβ|σ̂|σ̌}_{n,n}(1/x) and A^{int|σ̂|σ̌}_{n,n}(x) = A^{int|σ̂|σ̌}_{n,n}(1/x) iff the excitations of the two strands are identical, as expected from the discussion leading to Eq.(3.27). Since the x_a(v) are expressible in closed form (3.24), we can write a closed formula for the correlation functions directly on the base sphere; in that formula we have used the fact that K_0 + K_− + K_+ = 2h_Z. When n = 1, there are only two inverse functions, and the four-point function with Z^{{α,β}}_[2](v) and X follows by using the appropriate expression for the N-dependent factor. Note that functions with n = 1 scale as N^0. There are only two non-trivial twists, hence two ramification points in the covering surface of the connected correlators, so R = 2 in Eq.(2.22).

OPE limits

We can now derive not only the twists but also the conformal dimensions and structure constants of operators appearing in the OPE limits v → 1 and v → 0. In the channel v → 1, the Hurwitz blocks where x → x_ℵ(1) = ∞ give the same form for both A^{αβ|σ̂|σ̌}_{n_1,n_2}(x), where h_Z is given by Eq.(4.9), and A^{int|σ̂|σ̌}_{n_1,n_2}(x), where h_Z = 1.
Looking at the power of the leading singularity, we see that the untwisted operator U_[1] in the OPE (3.35) is the identity. Since we have assumed that the individual cycle fields are normalized, the arbitrary constant in the correlator is now fixed. The Hurwitz blocks where x → x_ℷ(1) again have the same form for A^{αβ|σ̂|σ̌}_{n_1,n_2}(x) and A^{int|σ̂|σ̌}_{n_1,n_2}(x), with a constant that is readily computable but given by a cumbersome expression in general. The power of the leading singularity shows that the twist-three operator S_[3] in the OPE (3.35) is a primary with dimension h_p = 2/3, that is, the bare twist σ_[3]. In the channel v → 0, the function (4.14) expands as in Eqs.(4.25). Here e^{iψ} simply denotes an unimportant phase that is not necessarily the same in all functions. The leading-order coefficients give the structure constants in the OPE for the fields in Table 1. We can read the conformal weights from the leading powers (4.25): for the single-cycle field Y_[n_1+n_2] the weight is given in Eq.(4.27), where h_Z and h_{XX̄} are given in (4.9), and similarly for the dimension of the composite field, Eq.(4.28). The same analysis holds for the functions (4.16) with the deformation operator O^(int)_[2], with weight h = 1. The leading-order expansions are Eqs.(4.29). We can read the conformal data of the OPE (4.30) and find the weights, where h_{XX̄} is given in (4.9).

Functions with NS chiral fields and other examples

Although the exponents (4.14b), (4.16c), (4.16d) may look like complicated functions, they are, in fact, usually very simple after the parameters of Table 1 are inserted. We now discuss some examples of functions and their conformal data.

Functions with only NS chiral fields

If we take every field in the correlator to be an NS chiral, the resulting function is constrained by the NS chiral ring. Only a restricted number of three-point functions involving (single-cycle) NS chirals is non-vanishing [42-44], and the OPEs of fields in the ring are non-singular.
This is reflected in the structure of the functions (4.14) at x = 0 and x = −n_1/n_2, i.e. at the v → 0 channel. Namely, the powers of x and (x + n_1/n_2) are positive, so that there are no singularities, or zero, when the corresponding field is absent from the OPE. These features can be seen in the list of formulas (D.1). By contrast, the function (4.38) is finite at both limits. Eqs.(4.27)-(4.28) give the dimensions h_Y = (n_1 + n_2 − 1)/2 and h_W = (n_1 − n_2 + 1)/2. The former is the correct dimension of a lowest-weight NS chiral of twist n_1 + n_2, and the latter of a highest-weight chiral of twist n_1 − n_2. Hence the OPE (4.26) reads as in Eq.(4.39). The appearance of O^(0,0)_[m] and O^(2,2)_[m] in the OPE with the composite field agrees with what one should expect from the single-cycle OPE of the chiral ring. The structure constants squared, |C_1|^2 and |C_2|^2, can be read from the value of (4.38) at x = 0 and x = −n_1/n_2, combined with the multiplicities and the "dressing" factor for the N-dependence. As a third and final example, we consider the function (4.42). It vanishes at x = 0, so there is no composite operator with W_[n_1−n_2] in the OPE. But it is finite at x = −n_1/n_2, with an operator of dimension h_Y = (n_1 + n_2 + 1)/2, i.e. the highest-weight NS chiral. The (square of the) structure constant can be read by evaluating (4.42) at x = −n_1/n_2 and using the multiplicity and dressing factor, Eq.(4.44). If we take n_2 = 1 and n_1 = n > 1, the lowest-weight chiral in the composite field becomes the vacuum. The N-dependent dressing factor, which is proportional to |Cent[(n_1)(n_2)(1)^{N−n_1−n_2}]| = n_1 n_2 (N − n_1 − n_2)!, becomes proportional to |Cent[(n)(1)^{N−n}]| = n(N − n)!, so we must multiply (4.44) by a factor of (N − n − 1) to obtain the result. This matches precisely a known structure constant computed, e.g., in [43], providing a very non-trivial check of our results.
The effect of spectral flow

The N = 4 superconformal algebra has an automorphism called 'spectral flow' [45]. The currents are transformed, and fermionic modes (and boundary conditions) are changed, by a continuous parameter usually measured in spectral flow 'units'. Flow by ξ units affects the R-charge and the Virasoro currents in such a way that the weight and R-charge of a field change as

h_ξ = h + ξ j + (c/24) ξ^2 ,   j_ξ = j + (c/12) ξ ,   (4.46)

while the super-currents G^{αA}(z) have their modes shifted by ±ξ/2. Since every NS chiral has h = j, its spectral flow by ξ = −1 gives a field with h_{−1} = c/24, that is, a Ramond ground state. Which NS chiral flows to which Ramond ground state is seen from the R-charges. For example, in the n-twisted sector, with c = 6n, the lowest-weight NS field O^(0)_(n) has R-charge j = (n − 1)/2, so it flows to the Ramond ground state R^−_(n), with R-charge j_{−1} = (n − 1)/2 + (6n/12)ξ = −1/2. Overall, we have the map (4.47). Naturally, spectral flow relates pairs of functions involving these fields. In fact, it is usual in the literature on the D1-D5 CFT to compute three-point functions with fields in the NS sector, and then relate these to functions in the Ramond sector (where SUGRA states live) via spectral flow; see for example [52,54]. Given a state |Ψ⟩, the automorphism of the Hilbert space will map it to another state |Ψ_ξ⟩, while an operator O(z) will be mapped to O_ξ(z), with a linear operator U_ξ such that |Ψ_ξ⟩ = U_ξ|Ψ⟩ and O_ξ = U_ξ O U_ξ^{−1}, preserving the amplitudes ⟨Ψ|O|Ψ⟩. In the free orbifold CFT, the linear operator has a natural implementation in terms of the bosonized fermions, inserted at the origin (i.e. at past infinity), Eq.(4.49). This is an S_N-invariant operator, including all copies I = 1, · · ·, N of the free bosons that bosonize the fermions. Moving U_ξ past a bare twist σ_g, for any g ∈ S_N, only has the effect of shuffling the copies, which leaves U_ξ invariant; hence U_ξ σ_g = σ_g U_ξ. Bosons also commute with U_ξ.
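The flow of weights and charges can be checked arithmetically. The sketch below (ours) uses the convention consistent with the checks in the text, h → h + ξj + cξ²/24 and j → j + cξ/12, and verifies that the lowest-weight NS chiral of the n-twisted sector flows, at ξ = −1, to the Ramond ground state R^−_(n) with h = n/4 and j = −1/2.

```python
# Sketch of spectral flow by xi units (convention inferred from the text):
#   h -> h + xi*j + c*xi^2/24 ,  j -> j + c*xi/12
def flow(h, j, c, xi):
    return h + xi * j + c * xi**2 / 24, j + c * xi / 12

for n in (1, 2, 3, 10):
    h = j = (n - 1) / 2            # lowest-weight NS chiral O^(0)_(n): h = j
    h_f, j_f = flow(h, j, 6 * n, -1)
    assert h_f == n / 4            # Ramond ground state weight, h = c/24
    assert j_f == -1 / 2           # R-charge of R^-_(n)
print("O^(0)_(n) flows to R^-_(n) for all tested n")
```

This is the map (4.47) restricted to the lowest-weight chirals; the other chirals flow analogously, as determined by their R-charges.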
Let $O(z)$ be a primary fermionic field which can be written in bosonized language as an exponential of a linear combination of the $\phi_{r,I}$, multiplied (or not) by a bare twist $\sigma_g$. The most important examples of such fields are the composite NS chirals (4.6) and the Ramond ground states (4.5). Commutation with $U_\xi$ is then given by (4.50), where $j$ is the R-charge of $O$. The first equation follows from the commutation of $U_\xi$ and $\sigma_g$, along with the well-known formula (see e.g. [55]) for commuting a pair of exponentials: there are sums over $a, b$, and the c-number exponential on the r.h.s. includes the two-point function $\langle\phi_a(z)\phi_b(z')\rangle = \delta_{ab}\log(z - z')$, valid for our bosons; cf. Eq.(A.3). The second equation in (4.50) also uses that $U_\xi^{-1} = U_{-\xi} = U_\xi^\dagger$, as readily seen from the explicit realization (4.49). Since $U_\xi$ commutes with bare twists and bosons, which are R-neutral, formulas (4.50) actually hold for these fields as well. To confirm that $U_\xi$ in (4.49) is indeed the correct operator leading to (4.46), we can look at how it affects the weight and the charge of a state $|O\rangle = O(0)|\emptyset\rangle$ generated by an operator that transforms as (4.50). According to (4.48), we have (4.51). The dimension of the state on the r.h.s. is the sum of the dimensions of $O$ and $U_\xi$, plus a term $j\xi$ coming from $z^{j\xi}$. Since the exponential (4.49) has weight $h = \frac{c}{24}\xi^2$ and R-charge $j = \frac{c}{12}\xi$, with $c = 6N$, the result matches (4.46). Alternatively, we can explicitly write the most general possible exponential and take its OPE with (4.49); the weight and the charge of this last exponential again agree with (4.46). Further, by looking at (4.1)-(4.2), Eq.(4.52) explicitly reproduces the map (4.47) between Ramond ground states and NS chiral states. If we are considering just a specific $n$-twisted sector of the Hilbert space, generated by a bare twist $\sigma_{(n)}$, the sums over copies $I = 1, \ldots, N$ in all the exponentials above can be replaced by sums over only the $n$ copies in the corresponding cycle $(n)$, say $I = 1, \ldots, n$.
This is possible because fields in different copies commute, so the normal-ordered exponential in (4.49) can be readily factorized. 12 We can in fact repeat the argument above, using these restricted sums over copies, to derive the transformation of the single-cycle fields (4.1)-(4.2) more directly. This restricted version of the $U_\xi$ operator also defines a notion of spectral flow on the individual $n$-twisted sectors (or $n$-twisted "strands"), where the transformations (4.46) hold with $c = 6n < 6N$. Although quite useful for some computations in the free orbifold, these individual flows are broken when the theory is deformed by $O^{(\mathrm{int})}_{[2]}$, because its twist mixes different sectors, as discussed in [53]. Only the full spectral flow of the $c = 6N$ theory, involving all $N$ copies simultaneously, is preserved. In order to relate our four-point functions by spectral flow, it is convenient to regard them as two-point functions on non-trivial states. We can be rather general: consider a state $|X\rangle$, created by an operator $X(z)$ which transforms as in (4.50). Now consider the expectation value of a pair of conjugate operators $Z$ and $\bar Z$ on the flowed state $|X_\xi\rangle = U_\xi |X\rangle$. Using the transposition property (4.50) twice, where $j_Z$ is the R-charge of $Z$, and passing $U_\xi$ over $\bar Z$ at $z = 1$ gives a trivial factor. In the last line, we used that $U_\xi^\dagger = U_{-\xi} = U_\xi^{-1}$. This computation relates correlators of the fields $Z$ and $\bar Z$ on different states $|X\rangle$ and $|X_\xi\rangle$. But looking at the r.h.s. of the first line, we see that if we insert $\mathrm{id} = U_\xi U_\xi^{-1}$ between the fields, to get $\langle X| (U_\xi^{-1}\bar Z(1) U_\xi)(U_\xi^{-1} Z(v)\, U_\xi) |X\rangle$, we can also find a relation between functions with flowed operators on the (fixed) state $|X\rangle$. In summary, (4.54). We can now apply these results to four-point functions of the type (3.1), where the $Z$ fields carry a twist $\sigma_{[2]}$, and the states $|X\rangle$ are created by the multi-cycle fields (3.2).
Factorization lets us consider only the functions with double-cycle states in Eq.(3.10), so we focus on the four-point functions (4.11), which are given by the master formulas computed in §4.1. For economy, we omit the various indices of $A^{\alpha\beta|\hat\sigma|\check\sigma}_{n_1,n_2}(v)$, writing the four-point functions simply as $\langle X|\, Z^{\{\alpha,\beta\}}_{[2]}(1)\, Z^{\{\alpha,\beta\}}_{[2]}(v)\, |X\rangle$. The functions (4.13) are written in terms of $x$, which should be related to $v$ by the inverse covering maps $x_a(v)$ and Eq.(3.22). Using Eq.(3.19), we then have (4.58). Written this way, the shift in the exponents $K_i$ in (4.14b) is explicit. Let us emphasize that, although Eq.(4.58) is parameterized by $x$, we are performing a standard spectral flow on the base sphere. 13 For example, consider the function in Eq.(4.38), with only lowest-weight NS chirals, (4.59). Here $\alpha = -\beta = 1$; see (4.60), the same result that we find if we apply the master formula (4.14) directly to the function in the last line. As another example, take the function (4.37); formula (4.58) then gives (4.62), which is, again, what we find using the master formula (4.14) directly. The interaction operator is more complicated than the exponential operators for which we have derived the transformation (4.50) above, but it has been shown [57,58] that $O^{(\mathrm{int})}_{[2]}$ is, in fact, invariant under spectral flow, 14 hence it actually does obey (4.50), being R-neutral. Now, the first equation in the chain of equalities (4.54) tells us that four-point functions including $O^{(\mathrm{int})}_{[2]}$ and states related by spectral flow must be equal.

13 Variants of the original [45] automorphism of the superconformal algebra are known, e.g. the 'fractional spectral flow' related to fractional modes in twisted sectors [56], and the recently introduced "partial spectral flow" [57] that changes only two of the four fermions.
For example, based solely on spectral flow applied to the function (4.35), we can conclude the form of the corresponding function for Ramond fields. This is, indeed, the correct function found by the master formula, and previously known from [40] (see Eq.(61) ibid.).

Discussion and further developments

The present paper aims to contribute to a problem that is particularly important for the fuzzball conjecture: the complete description of the D1-D5 CFT at the free orbifold point and away from it. This requires the derivation of all three- and four-point functions involving the symmetric orbifold's Ramond and NS fields (and some of their excitations), the complete list of their OPEs, and the full spectrum of the non-BPS fields that might appear in the OPE channels. We have given here a detailed description of twisted $Q$-point functions in $M^N/S_N$ orbifolds, applying the technology of [33] to correlators with multi-cycle twisted fields. We have thoroughly analyzed a special class of relatively simple four-point functions in which all operators are twisted: two being composite, multi-cycle fields and two being single-cycle fields with twists of length 2. We showed how to decompose these functions into connected parts in which the multi-cycle fields are reduced to double-cycle fields, then studied these connected functions, with a detailed discussion of the geometry of the genus-zero covering surfaces. $Q$-point functions with multi-cycle fields are disconnected, and can become rather complicated. Even extracting the large-$N$ dependence is a task that depends strongly on the types of twists in the composite fields. We have shown that if the fields are composite but have a finite number of cycles, i.e. if the number of cycles does not grow as $N \to \infty$, then the function scales as $\sim \sum_\chi N^{\frac{1}{2}(\chi-R)}$, which is a natural generalization of the well-known formula $\sim \sum_g N^{-g+1-\frac{1}{2}Q}$ for connected functions, with the genus $g$ replaced by the Euler characteristic $\chi$.
But if the number of cycles in the composite field grows with $N$, this generalized formula does not apply. This happens for important types of composite fields, like the Ramond ground states $[(R_{[n]})^{N/n}]$, with $n$ fixed, that source well-known Lunin-Mathur geometries [59]. In our examples of functions involving these types of field, the total $N$-dependence comes from counting the $N$-dependent number of factorizations of the total correlator into connected parts. This factorization depends strongly on the structure of the twists involved in the function. Here the non-composite fields are simple twist-2 single-cycle fields, which yield a manageable result. It would be interesting to find a way of determining the $N$-dependence more generally. It would also be important to explore the connection of our results with those of [14]. After reducing the factorized multi-cycle four-point function to a sum of connected functions with a finite number of cycles (in our example, the remaining composite field has at most two cycles), we can use covering-surface methods. The full $S_N$-invariant correlator is a sum of 'Hurwitz blocks', each associated with one of the $H$ allowed topologies of covering surfaces, where $H$ is a Hurwitz number. Different types of coalescences of ramification points on these surfaces dictate the resulting twists of operators that appear in the OPE channels of the four-point function. Twist configurations can restrict the correlators to such an extent that, for special classes of functions subject to other constraints, e.g. the ring of NS chiral fields in the D1-D5 CFT, Hurwitz theory may suffice to fix the correlators completely [44]. We would like to explore the structure of Hurwitz blocks in more generality, as well as their connection with conformal blocks. Since many four-point functions involving untwisted light fields are already known, let us mention some uses of the functions with twisted light NS fields that we have calculated.
One possible application is in the reconstruction of S-matrix elements for a process of absorption and emission of light (or massless) quanta by the heavy object in the bulk, as suggested in [60]. Also, our correlators can be used for deriving functions with $\frac{1}{8}$-BPS operators, relevant for 3-charge microstate solutions [61,62]. These operators are chiral excitations of Ramond ground states, and the corresponding functions can be obtained from derivatives of the functions derived here, using Ward identities. Many particular examples of such correlators are known in the context of D1-D5-P superstrata bulk geometries. For example, in [12] it is shown that the Ward identity for the simplest Virasoro excitation $L_{-1}$ amounts to applying a differential operator $D_v$ to the function of unexcited fields. 15 As our four-point functions are known in closed form only in terms of the covering-surface variables $x, \bar x$, the question arises of whether we could translate this Ward identity into a differential operator in terms of $x$ instead of the base-sphere anharmonic ratio $v$. The answer is rather simple: since we know the mapping function $v(x)$ explicitly, we can rewrite $D_v$ as an operator $\tilde D_x$ acting on our functions $A(x,\bar x)$ by substituting $\partial_v = \partial_x / v'(x)$, where $v'(x) = dv/dx$. Therefore the problem of reconstructing four-point functions with excited states from our correlators, even in more complicated cases involving also other generators, say $J^+_{-1}$ and integer powers of it, is rather straightforward. Let us note, as a last comment, that once these functions are known, the methods of [41,53] can be used: one computes integrals of the four-point functions with the deformation $O^{(\mathrm{int})}_{[2]}$ to find the anomalous dimension of the heavy fields at second order in conformal perturbation theory. Thus we may assess the renormalization or the protection of the excited states.

Acknowledgments

The work of M.S. is partially supported by the Bulgarian NSF grant KP-06-H28/5 and that of M.S. and G.S.
by the Bulgarian NSF grant KP-06-H38/11. M.S. is grateful for the kind hospitality of the Federal University of Espírito Santo, Vitória, Brazil, where part of his work was done. We would like to kindly thank an anonymous referee for comments leading to the improvement of the text, in particular the addition of a discussion about spectral flow.

A. Conventions for the D1-D5 CFT

Here we gather definitions and notations for the seed N=(4,4) CFT. In general, we follow [41]. The R-symmetry group is $SU(2)_L \times SU(2)_R$. We work with $(T^4)^N/S_N$, and there is an additional global group $SU(2)_1 \times SU(2)_2$. In the superalgebra, the R-currents $J^a_I(z)$, $\tilde J^{\dot a}_I(\bar z)$, and the supercurrents $G^{\alpha A}_I(z)$, $\tilde G^{\dot\alpha\dot A}_I(\bar z)$, carry indices in the SU(2) groups: $a = 1,2,3$ and $\dot a = \dot 1,\dot 2,\dot 3$ are triplets of $SU(2)_L$ and $SU(2)_R$; $\alpha = \pm$ and $\dot\alpha = \dot\pm$ are doublets of $SU(2)_L$ and $SU(2)_R$; $A = 1,2$ and $\dot A = \dot 1,\dot 2$ are doublets of $SU(2)_1$ and $SU(2)_2$, respectively. The index $I = 1, \ldots, N$ distinguishes the $N$ identical copies of the seed SCFT. Each copy can be realized in terms of four real bosons plus four real holomorphic and four real anti-holomorphic fermions. They are written in complexified form as $X^{\dot A A}_I(z,\bar z)$, $\psi^{\alpha\dot A}_I(z)$ and $\tilde\psi^{\dot\alpha\dot A}_I(\bar z)$, respectively. Fermions can be conveniently bosonized by chiral bosons $\phi_r(z)$ and $\bar\phi_r(\bar z)$, and similarly for $\tilde\psi^{\dot\alpha\dot A}_I(\bar z)$. Exponentials are always normal-ordered throughout the paper. See [58,63] for cocycles, which we ignore. The non-vanishing two-point functions are given in (A.3); two-point functions between fields on different copies vanish. The "magnetic" components $J^3$ of the R-current and of the $SU(2)_2$ current can be most conveniently written in bosonized form. We denote the respective eigenvalues by $j$ and $\hat j$, and those in the anti-holomorphic sector by $\tilde j$ and $\hat{\tilde j}$. Note that these are "magnetic", not "azimuthal", quantum numbers.

B. Derivation of the N-dependence of twisted Q-point functions

We now derive the key formulas of Sect.2 in detail.
As mentioned in the text, we use the technology of [33], but generalized to generic, multi-cycle permutations, and without recourse to diagrams.

B.1. Two-point functions

First, we derive the normalization factor $S_{[g]}$ of the $S_N$-invariant twist $\sigma_{[g]}$ in Eq.(2.4). We want to compute the two-point function (B.1); we omit the arguments $z = 0$, $z = 1$ for economy of space. The functions inside the sum, which contain individual elements of $S_N$, vanish unless (B.2) holds. For a fixed element $h$ in the first sum in (B.1), we count the non-vanishing terms in the sum over $h'$. This is the number of elements $h' \in S_N$ that solve the equation $h' g h'^{-1} = q$ for fixed $g$ and fixed $q = (hgh^{-1})^{-1} \in S_N$ (B.3). Note that $q \in [g]$, hence there exists $k \in S_N$ such that $q = kgk^{-1}$, and the sum can be rewritten accordingly. By construction, all the terms in this last sum over $h$ are non-vanishing, as they trivially satisfy (B.2), resulting in the factor $|S_N| = N!$. This gives the normalization factor $S_{[g]} = N!\,|\mathrm{Cent}[g]|$ in Eq.(2.4).

B.2. Q-point functions

The $Q$-point function of $S_N$-invariant fields is a multiple sum (B.8). We will now follow [33], but with some differences: we do not rely on the existence of diagrams; we do not assume that the $g_i$ are single cycles; and we do not assume (for now) that the functions are connected. Our goal is to extract the $N$-dependence of the function (B.8), which comes from the multiplicity of equivalent terms. On the r.h.s. of Eq.(B.8), the correlation functions' twists are individual representative elements $p_i = h_i g_i h_i^{-1} \in S_N$ within the conjugacy classes $[g_i]$ of the l.h.s. The non-vanishing correlators are those for which $\prod_{i=1}^Q p_i = \mathrm{id}$. A non-vanishing function $\langle\sigma_{p_1}(z_1)\cdots\sigma_{p_Q}(z_Q)\rangle$ will depend on how the copies inside the permutations interact. All functions whose sets $\{p_i\} = \{p_1, \ldots, p_Q\}$ are related by a global permutation must be equal, as that amounts to an overall relabeling of all copies, and the CFT copies are identical: only the relative positions of the copies within the cycles matter.
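The centralizer counting used above, namely that for fixed $q \in [g]$ the equation $h'gh'^{-1} = q$ has exactly $|\mathrm{Cent}[g]|$ solutions $h' \in S_N$, so that summing over the whole class $[g]$ reproduces $|S_N| = N!$, can be checked by brute force in a small symmetric group. A minimal sketch in $S_4$ with $g$ a transposition (illustrative only, not taken from the paper):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations stored as tuples over 0..n-1
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

n = 4
SN = list(permutations(range(n)))
g = (1, 0, 2, 3)  # the transposition swapping copies 0 and 1

# Centralizer of g: elements commuting with g
cent = [h for h in SN if compose(h, g) == compose(g, h)]

# Conjugacy class [g], and the solution count of h g h^{-1} = q for each q
conj_class = {compose(compose(h, g), inverse(h)) for h in SN}
for q in conj_class:
    sols = [h for h in SN if compose(compose(h, g), inverse(h)) == q]
    assert len(sols) == len(cent)  # exactly |Cent[g]| solutions per class member

# |Cent[g]| * |[g]| = |S_N| (orbit-stabilizer), i.e. the N! factor in the text
print(len(cent), len(conj_class), len(cent) * len(conj_class) == len(SN))
```

Running this prints `4 6 True`: the centralizer of a transposition in $S_4$ has 4 elements, the class has 6 transpositions, and the product saturates $|S_4| = 24$, as orbit-stabilizer requires.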
Thus we have equivalence classes, denoted by $\alpha$, of the ordered lists of permutations $\{p_i\}$, and functions with $\{p_i\}$ in the same equivalence class $\alpha$ are equal by symmetry, (B.11). Here we show four different classes $\alpha^1_5, \alpha^2_5, \alpha^3_5, \alpha^4_6 \in \mathrm{Cl}$ contributing to (B.11), and two representatives of each class. 16 The boldface number $c$ in $\alpha_c$ indicates the number of distinct copies entering the permutations non-trivially. Distinct copies are painted with distinct colors. The coloring emphasizes that a class is determined not by the specific copies (i.e. the numerals) that enter the cycles, but by their relative positions within the cycles, that is, by the different orderings of the colors. In classes $\alpha^3_5$ and $\alpha^4_6$ the double cycles $p_1$ and $p_4$ factorize, because they contain copies that do not appear in the other permutations. Note that the permutations in these examples might be elements of $S_N$ with $N \geq 6$, but we omit the trivial cycles (of length one). We can organize the sum in (B.8) as a sum over the different classes $\alpha \in \mathrm{Cl}$ (we leave the normalization factors $S_{[g_i]}$ aside for a while), (B.13). In the first line, $\{p^\alpha_i\}$ is an arbitrary representative of class $\alpha$ and $N_\alpha$ is the number of collections $\{p_i\} \in \alpha$. In the second line, we decompose the sum further, by cataloguing the classes $\alpha \in \mathrm{Cl}$ into subsets $\mathrm{Cl}_c \subseteq \mathrm{Cl}$, according to the number $c$ of distinct copies entering the non-trivial cycles of $\{p_i\}$. By construction, $\cup_c\, \mathrm{Cl}_c = \mathrm{Cl}$. In passing to the second line we used that $|\mathrm{Cent}[p^\alpha_i]|$ only depends on the cycle structure of $p^\alpha_i$, which is the same as that of $g_i$; the same is true for the classes $\alpha_c$, which are special types of $\alpha$. The remaining factor $W_{\alpha_c}$ counts the number of different sequences $\{p^{\alpha_c}_i\}$ which are not identical 17 but still belong to the same equivalence class $\alpha_c$. For example,

17 Not just a different collection $\{p'^{\alpha}_i\} \sim \{p^\alpha_i\} \in \alpha$ within the same equivalence class $\alpha$, but exactly the same collection $\{p^\alpha_i\}$.
in (B.12b) we can see two sequences $\{p^{\alpha^2_5}_i\}$ with different individual permutations (compare each cycle in the first line with the one immediately below it in the second line) that belong to the same class $\alpha^2_5$ (compare the relative positions of the repeated copies, i.e. the order of the colors). To find $W_{\alpha_c}$, we proceed in two steps. First, we must choose the $c$ copies that will enter the non-trivial permutations out of the $N$ copies available, giving a binomial factor $\binom{N}{c}$. The final remaining factor $w_{\alpha_c}$ counts the number of ways we can arrange the $c$ copies and still find collections $\{p_i\}$ within the same class. This number $w_{\alpha_c}$, which we will determine shortly, can depend on $c$ and on the cycle structure of the $[g_i]$, but it clearly cannot depend on $N$. So we have already completely determined the $N$-scaling of the $Q$-point function. It turns out that $w_{\alpha_c}$ has a subtle dependence on the factorization of the functions in class $\alpha_c$. As we can see from the examples (B.12), some classes will have disconnected correlators; note that connectedness is indeed a class property: all functions $\langle\sigma_{p^\alpha_1}(z_1)\cdots\sigma_{p^\alpha_Q}(z_Q)\rangle$ in the same class $\alpha$ factorize in the same way. So the r.h.s. of (B.13) decomposes further into sums, each weighted by $w_{\alpha_c}$, over fully connected classes, once-disconnected classes, twice-disconnected classes, and so on (B.17). The possible types of factorization of the initial correlator depend on the original cycle structure of the $[g_i]$, and also on $c$. For example, it is possible that the cycle structure of the $[g_i]$ is incompatible with fully connected classes (this is what happens for the functions considered in Sect.3), in which case the first sum on the r.h.s. above is empty. The factor $w_{\alpha_c}$ is given by $w_{\alpha_c} = c!\,\nu_{\alpha_c}$, with $\nu_{\alpha_c} = 1$ if no two-point function factorizes, and $\nu_{\alpha_c} = 1/[n_j]_{\alpha_c}$ if one or more two-point functions factorize (B.18). fields are equivalent, i.e.
have the same $s$), so there are $\binom{N}{p_1}$ options. Then one must choose $p_2$ copies out of the remaining $N - p_1$ copies to enter the second parenthesis, and there are $\binom{N-p_1}{p_2}$ options; and so on. The total number of equivalent terms is therefore the product of these binomials, which is the result appearing in (2.27). Note that we have not required that every one of the $N$ copies appear in each term (B.25), that is, we have not required that $\sum_i p_i = N$. If this is the case, then the expression in the last line of (B.26) simplifies. If there are only two powers, $p_1 = q$ and $p_2 = N - q$, this formula reduces to (2.28).

Factorization of four-point functions

For untwisted composite fields, since there is no sum over orbits of trivial cycles, we must redo our computations. For definiteness, consider the operator in (2.28), and the four-point function $\langle [1]^\dagger(\infty)\; Z^\dagger_{[2]}(1)\; Z_{[2]}(v,\bar v)\; X_p \rangle$. There are two sums over orbits of the 2-cycles, and a symmetrization of the copies in the composite fields. Leaving aside the normalization factors, a generic term in the sum has the following permutation structure (coordinates omitted for economy of space), (B.29). The function can factorize in three ways, depending on the interaction of the cycles in the middle. If the cycles are disjoint, then the resulting factorized correlators vanish, because they do not satisfy the fundamental condition (2.8). If the cycles are not disjoint, they can either compose to a three-cycle, or be the inverse of each other. In the former case, i.e. if $h_1(2)h_1^{-1}\, h_v(2)h_v^{-1} = (3)$, the correlator also vanishes because, again, it fails to satisfy (2.8). The final remaining possibility is that the cycles are the inverses of each other; then the factorized function does satisfy (2.8), so this is the only non-vanishing factorization.
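The iterated binomial counting above telescopes into a single multinomial coefficient, which is the content of (2.27). A brute-force numerical check of that identity (with hypothetical values $N = 10$ and powers $p_i = 3, 2, 4$, chosen purely for illustration):

```python
from math import comb, factorial

def sequential_choices(N, parts):
    # Choose p1 copies out of N, then p2 out of N - p1, and so on,
    # mirroring the step-by-step counting in the text.
    total, remaining = 1, N
    for p in parts:
        total *= comb(remaining, p)
        remaining -= p
    return total

N, parts = 10, [3, 2, 4]
lhs = sequential_choices(N, parts)

# Closed form: N! / (p1! p2! ... (N - sum p_i)!)
denom = factorial(N - sum(parts))
for p in parts:
    denom *= factorial(p)
rhs = factorial(N) // denom

print(lhs, rhs, lhs == rhs)
```

This prints `12600 12600 True`; when $\sum_i p_i = N$ the last factorial in the denominator is $0! = 1$ and the expression reduces to the plain multinomial, as noted around (B.26)-(2.28).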
Concomitant Posteromedial Elbow Dislocation with Lateral Condyle Humerus Fracture in a 7-Year-Old – A Rare Case Report and Review of the Literature

Introduction: Traumatic dislocations of the elbow are a rare injury in children. Concomitant elbow dislocations and lateral condyle fractures are even rarer. There is wide variability in the outcomes of these injuries, as there is no consensus regarding their management. We report one such rare case in a 7-year-old child. Case Report: A 7-year-old child was brought to the emergency room with an alleged history of a fall on an outstretched hand, sustaining injury to the left elbow. He was diagnosed with a posteromedial elbow dislocation along with a fracture of the lateral condyle. The patient underwent emergency reduction of the elbow under appropriate anesthesia, following which there was persistent varus and valgus instability, for which the lateral condyle was fixed with standard pinning through a lateral approach with three Kirschner wires. The patient was immobilized for 6 weeks in a plaster, after which mobilization was started. At 3-month follow-up, the patient showed a good functional outcome with full range of motion. Conclusion: We report an exceedingly rare case of concomitant elbow dislocation and fracture of the lateral condyle in a child. If emergent reduction of the dislocation and anatomic reduction of the fracture are achieved, satisfactory outcomes can be expected. Delayed recovery of elbow motion is common, but full range of motion can be expected in the long run.

Introduction

According to the Milch classification, Milch type 2 injuries, in which the fracture line traverses the trochlear groove, are more common than Milch type 1 injuries, which are thought to be more stable due to the presence of the trochlear rim [10]. In the pediatric population, the association of an elbow dislocation with a lateral condyle fracture (LCF) is rare. Available reports on this injury are scarce and mostly limited to single cases, with wide variability in the reported outcomes. As the elbow joint is a hinge requiring mobilization to avoid notorious stiffness, prompt diagnosis and emergent reduction of the dislocation, followed by stable fixation of the lateral condyle fragment, are necessary for achieving this objective. The purpose of this case report is to discuss the treatment and outcome of this exceedingly rare injury and to review the literature.

Case Report

A 7-year-old boy was brought to the emergency room with an alleged history of a fall on an outstretched hand while cycling, sustaining injury to his right elbow. Following the injury, the patient developed swelling, deformity, and difficulty in using the extremity. There was no history of other related or remote trauma. Radiographs of the affected region showed a posteromedial elbow dislocation with a fracture of the lateral condyle (Fig. 1). An emergent reduction of the dislocation was planned, with simultaneous fixation, after informed consent of the parents and under appropriate anesthesia. The elbow was reduced with gentle traction and flexion, with pressure from the medial aspect of the elbow to guide it into reduction. Clinical assessment of the reduction was based on the regain of the normal contour of the elbow and a smooth motion arc. The adequacy and concentricity of the reduction were confirmed with the image intensifier (Fig. 2). Post-reduction, the joint was unstable to varus and valgus stresses. Through a standard lateral approach, the lateral humeral condyle was anatomically reduced and fixed with three 1.25 mm Kirschner wires; the reduction was checked under the image intensifier and with an intraoperative arthrogram (Fig. 3). Postoperatively, a back slab was applied for 2 weeks. The stitches were then removed and the slab was converted to a long-arm cast. At 6 weeks, the cast and wires were removed, the radiograph (Fig. 4) was found to be satisfactory, and rehabilitation was started with an active-assisted range of motion.
Final follow-up at 3 months revealed good functional and radiologic results (Fig. 5, 6). The range of motion was 0-110° (as compared to 0-120° in the opposite elbow) and there was no varus/valgus instability.

Discussion

The present study reports the outcome of a 7-year-old boy who presented to our institution with a concomitant elbow dislocation and a Song type V LCF. The initial radiographs demonstrated a posteromedial dislocation. The elbow dislocation was successfully reduced by closed manipulation, followed by anatomic open reduction of the LCF. No complications were seen in the perioperative period; union was confirmed clinicoradiologically, and full range of motion was observed within 6 weeks after removal of the wires. Kirkos et al. [7] observed an excellent range of motion in three patients and a 15° extension lag in one patient, due to a small intra-articular gap from inadequate reduction. Cheng et al. [5] reported three cases of elbow dislocation associated with LCF and observed a suboptimal range of motion in two cases; the authors noted that a poor functional range of motion is inevitable when satisfactory anatomical reduction, by either open or closed means, is not achieved and the joint remains incongruent. Tomori et al. [13] emphasized increasing awareness of these injuries as key to achieving satisfactory outcomes. Silva et al. [14], in a case series of 12 patients with a mean follow-up of 51 weeks, reported that satisfactory outcomes can be expected if prompt and anatomical reduction is achieved. We report an exceedingly rare case of concomitant posteromedial elbow dislocation with LCF in the pediatric age group. Emergent reduction of the dislocation and anatomical reduction of the fracture are the key to satisfactory outcomes. Delayed recovery of elbow motion is common, but full recovery of the range of motion can be expected in the long term.
The available literature suggests that the combination of an elbow dislocation and an LCF in the pediatric population is rare, and that suboptimal articular reduction can culminate in nonunion, malunion, or stiffness, resulting in poor outcomes. The authors recommend the use of an intraoperative arthrogram to confirm anatomical reduction.
Checkpoint inhibitors, fertility, pregnancy, and sexual life: a systematic review

Immune checkpoint inhibitors (i.e. anti-PD1, anti-PDL1, and anti-CTLA4) have revolutionized the therapeutic approach to several cancer types. In a subset of metastatic patients, the duration of the response is so long that a cure might be hypothesized, and a treatment discontinuation strategy could be proposed. Considering this long-term efficacy, some patients could also plan to have a child. Moreover, immunotherapy is moving to the early setting in several diseases, including melanoma and breast cancer, which are common cancers in young patients. However, there is a paucity of data about their potential detrimental effects on fertility, pregnancy, or sexuality. Herein, we conducted a systematic review with the aim of comprehensively collecting the available evidence about the fertility, pregnancy, and sexual adverse effects of checkpoint inhibitors, in order to help clinicians in daily practice and trialists in developing future studies.

INTRODUCTION

Immune checkpoint inhibitors (ICIs; i.e. anti-PD1, anti-PDL1, and anti-CTLA4) have revolutionized the therapeutic landscape in oncology. [1][2][3] In particular, these compounds have increased survival in both the metastatic and adjuvant settings in several types of malignancies. [1][2][3] In a subset of metastatic patients, the duration of the response is so long that a cure might be hypothesized, and a treatment discontinuation strategy could be proposed. [4][5][6][7][8][9] In light of this long-term efficacy, some patients could also plan to have a child. Moreover, immunotherapy is moving to the early setting in several diseases, including melanoma and breast cancer, which are common cancers in young patients. [10][11][12] In contrast to the vast body of evidence regarding the clinical utility of ICIs, there is a paucity of data about any detrimental effect on fertility, future pregnancies, or sexuality.
13 This knowledge gap could complicate the therapy proposal, especially in young patients. This is of particular importance in light of the European Society for Medical Oncology (ESMO) and European Society of Human Reproduction and Embryology guidelines recommending fertility counseling for all patients, including those in the metastatic setting. 14,15 Therefore, the unknown gonadal toxicity of immunotherapy represents an important unmet need in this field. Herein, we conducted a systematic review (see Supplementary Appendix S1, available at https://doi.org/10.1016/j.esmoop.2021.100276) with the aim of comprehensively collecting the available evidence about the fertility, pregnancy, and sexual adverse effects of ICIs, in order to help clinicians in daily practice and researchers to develop future studies. In particular, we describe four major classes of adverse effects: primary hypogonadism, secondary hypogonadism, pregnancy impairment, and altered libido and sexual life. Finally, we discuss some practical clinical issues linked to fertility and sexuality, and a possible methodology for future clinical trials.

PRIMARY HYPOGONADISM

Primary hypogonadism refers to direct damage to the gonads, that is, the ovaries or testicles. 16,17 Clinically, this translates into a reduced or impaired production of viable oocytes or spermatozoa and compromised fertility. From a biochemical perspective, primary hypogonadism can be suspected from reduced levels of sexual hormones (e.g. testosterone and estradiol) with a concomitant increase of gonadotropins [follicle-stimulating hormone (FSH) and luteinizing hormone (LH)]. 18,19 In women, there can also be a reduction of the anti-Müllerian hormone concentration, 18,19 a substance that has been linked to the ovarian reserve and, therefore, to reproductive potential. 20 Some data show that ICIs might cause primary hypogonadism (Table 1). However, the evidence is weak and in the form of case reports or case series.
Moreover, to the best of our knowledge, no data in women have been published. Two case reports described a case of orchitis and epididymo-orchitis during treatment with anti-PD1/anti-CTLA4 and anti-PD1, respectively. 21,22 In the first case, there was a spontaneous resolution of the orchitis, while the second patient also developed encephalitis, leading to the initiation of steroid therapy with subsequent regression of symptoms. In another case report, a normozoospermic man treated with anti-PD1 and anti-CTLA4 developed azoospermia 2 years after the immunotherapy. 23 In a recent case series, Scovell et al. 24 retrospectively analyzed the testicular histology of patients treated with anti-PD1 and anti-CTLA4 who underwent autopsy. An age-matched control cohort not treated with immunotherapy was used. Of the seven men treated with ICIs, six (86%) had impaired spermatogenesis, including one focal active spermatogenesis, two hypospermatogenesis, and three Sertoli-cell-only syndrome. In the control group, only two of the six men (33%) had impaired spermatogenesis. In a recent cross-sectional pilot study, Salzmann et al. 25 analyzed the sperm of 22 patients currently or previously treated with ICIs. Among them, 82% had a normal spermiogram, three showed azoospermia, and one oligoasthenoteratozoospermia. However, three patients with a pathologic spermiogram had significant confounding factors (previous inguinal radiotherapy, chemotherapy and chronic alcohol abuse, and bacterial orchitis). On the contrary, one patient with a normal pretreatment spermiogram showed azoospermia with an inflammatory infiltrate after ICI therapy. While these data suggest that primary hypogonadism in males might be a rare immune-related adverse effect, it should be noted that only five patients in this study had a pretreatment spermiogram available, making it difficult to draw any definitive conclusion. Finally, the retrospective study of Peters et al.
26 showed that low testosterone levels were present in 34 of 49 (69%) men treated with anti-PD1 and/or anti-CTLA4. Interestingly, the vast majority of patients reported fatigue, but only three were treated with testosterone replacement therapy. However, several methodological biases limit the demonstration of causality between checkpoint inhibitors and the drop in testosterone levels. For example, only 61% of patients had a baseline testosterone measurement, and the sampling time during the treatment was inconsistent. Nevertheless, from a preclinical perspective, an alteration of the testicles during immunotherapy seems to be possible. 27 Indeed, it has been shown that monkey testicular weight decreases during treatment with anti-CTLA4, even though the sperm did not show any histopathological changes. Moreover, anti-CTLA4 seems to bind ovarian connective tissue in monkeys, but no histopathologic changes have been documented. 27 Comprehensively, ICIs might cause primary hypogonadism, especially in men. However, the frequency of this adverse effect, its magnitude, its duration after the discontinuation of immunotherapy, and its implications for fertility are unknown. SECONDARY HYPOGONADISM Secondary hypogonadism refers to damage in the hypothalamus or, more frequently, in the pituitary gland causing a reduced activation of the hypothalamic-pituitary-gonadal axis. 28,29 Clinically, this translates into a reduced or impaired production of viable oocytes or spermatozoa and compromised fertility. 28,29 Biochemically, secondary hypogonadism can be suspected when reduced levels of sexual hormones (e.g. testosterone and estradiol) coexist with a concomitant decrease of gonadotropins (FSH and LH). 28,29 Of note, secondary hypogonadism often arises in panhypopituitarism, which also causes secondary hypothyroidism and secondary adrenal insufficiency.
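The biochemical patterns used here to distinguish primary from secondary hypogonadism (low sex hormones with raised gonadotropins versus low sex hormones with low gonadotropins) can be summarised as a simple decision rule. The following is a toy illustration of the definitions in the text, not a clinical algorithm; the boolean inputs are assumptions standing in for comparisons against laboratory reference ranges:

```python
def classify_hypogonadism(low_sex_hormones: bool,
                          high_gonadotropins: bool,
                          low_gonadotropins: bool) -> str:
    """Toy triage of the hormone patterns described in the text.

    Primary hypogonadism: low testosterone/estradiol with *raised* FSH/LH
    (the pituitary compensates for failing gonads).
    Secondary hypogonadism: low testosterone/estradiol with *low* FSH/LH
    (the hypothalamus/pituitary itself is impaired).
    """
    if not low_sex_hormones:
        return "no biochemical hypogonadism"
    if high_gonadotropins:
        return "suggests primary hypogonadism"
    if low_gonadotropins:
        return "suggests secondary hypogonadism"
    return "indeterminate"
```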
Hypophysitis and panhypopituitarism are well-known and well-described adverse events of checkpoint inhibitors (Table 1). However, most data focus on consequential adrenal insufficiency and hypothyroidism because of their potential life-threatening consequences. Nevertheless, in the context of putatively curable disease or long-lasting remission, the reproductive sequelae should be considered. Anti-PD1, anti-CTLA4, and their combinations can cause hypophysitis, but with different frequencies. 30 The rate of hypophysitis is 5.6% for anti-CTLA4, 0.5%-1.1% for anti-PD1, and 8.8%-10% for the combination. 30 Moreover, the rate of grade 3-4 toxicities seems to be higher for anti-PD1 compared with anti-PDL1. 31,32 Of note, endocrine toxicities, including hypophysitis, seem to be a chronic adverse effect. 33,34 In comparison to hypophysitis, hypogonadism is described as an uncommon adverse event. 31,35 However, it is unknown how many cases of hypogonadism are primary or secondary, and an underestimation of hypogonadism seems plausible in light of the absence of routine testing of sex hormones. 31,35 The lack of widespread biochemical testing of FSH, LH, testosterone, and estradiol might be relevant because a report showed that hypogonadotropic hypogonadism could manifest even in the absence of alteration of other endocrine axes (e.g. pituitary-thyroid and pituitary-adrenal). 36 In other words, secondary hypogonadism may arise without panhypopituitarism, making its diagnosis challenging. However, the frequency of isolated hypogonadotropic hypogonadism during treatment with checkpoint inhibitors remains unknown and needs to be clarified. Finally, as shown by Tulchiner et al., 37 patients treated with anti-PD1 could develop an increased LH-to-FSH ratio and increased estradiol levels. However, for unknown reasons, this phenomenon seems to occur only in men. Comprehensively, hypopituitarism is a well-demonstrated side-effect of ICIs.
However, more information needs to be collected on the frequency and the magnitude of disruption of the pituitary-gonadal axis by these compounds. In addition, the effect of immunotherapy-induced hypopituitarism on fertility remains unknown. PREGNANCY Mother and fetus have a different genetic makeup. Therefore, an immunologic tolerance toward the paternal antigens has to be developed to avoid miscarriage. This tolerance appears to be modulated by a myriad of molecular systems. 38 While the complete coverage of such tolerogenic mechanisms is beyond the scope of this review, some of them will be briefly discussed (see Supplementary Table S1, available at https://doi.org/10.1016/j.esmoop.2021.100276 for a list of preclinical studies on PD1/PDL1/CTLA4). In particular, both PDL1 and CTLA4 are expressed at the fetomaternal interface during gestation and have a significant role in fetomaternal tolerance. [39][40][41] Therefore, a fetotoxic effect of anti-PD1/-PDL1 and anti-CTLA4 could be anticipated. The pivotal role of the PD1-PDL1 axis has been experimentally validated through the pharmacological inhibition of PDL1 in a model of allogeneic mouse pregnancy. 40,42,43 Upon blocking this axis, there is a fivefold increase in the risk of miscarriage. 40 However, this effect was seen only in allogeneic pregnancy and not in the syngeneic one. This observation highlights that fetomaternal immunotolerance and PDL1 expression are modulated by the degree of allogenicity (the more the allogenicity, the more important the immune tolerance mechanisms). Therefore, the effects of anti-PD1/anti-PDL1 antibodies on the fetus are anticipated to be patient specific and strongly linked to the paternal antigenic components. Similarly, the role of CTLA4 has been experimentally validated. Indeed, in a monkey model, anti-CTLA4 treatment caused reduced maternal weight, higher abortion rates, stillbirth, and premature delivery. 44 These adverse effects were seen mainly in the third trimester of pregnancy.
Unlike many chemotherapeutic agents that exert fetotoxic effects in the first trimester of pregnancy, 45,46 ICIs might have their maximum toxicity in the third. 38,47 Although validated data on trimester-specific toxicity are lacking, it is well known that the placenta changes its capacity to transport immunoglobulins over the course of gestation. 48,49 In particular, placental immunoglobulin Gs (IgGs) are relatively scarce during the first 6 months, while, in the last trimester, they sharply increase, reaching levels similar or superior to those of maternal blood. 48,49 Moreover, it appears that IgG subclass impacts antibody transport across the placenta. IgG1s are the globulins transported with the best efficiency, followed by IgG4, IgG3, and IgG2. [48][49][50][51] Table 2 lists the available checkpoint inhibitors classified by their theoretical capacity to cross the placenta. From a clinical point of view, there is a paucity of data on the fetotoxic effect of anti-PD1/anti-PDL1/anti-CTLA4. This could be explained by the recommendation to avoid these compounds during pregnancy based on the preclinical evidence discussed above. However, in the last years, some case reports have been published [52][53][54][55][56][57] (Table 3). Thus, it appears that, at least in some cases, pregnancy and childbirth are compatible with anti-PD1 and/or anti-CTLA4 therapy. However, it is not possible to extrapolate the real frequency of regular pregnancy/delivery. Nevertheless, while case reports could suffer from positive-result bias, 58 an unpublished communication (D. Minor, January 2017 52 ) described seven cases of women present in the Food and Drug Administration (FDA) database who were treated with anti-CTLA4 during pregnancy. Among them, there was one case of spontaneous abortion, one stillbirth, one ectopic pregnancy, two pregnancies terminated by voluntary abortion, and two pregnancies with unknown outcomes.
52 No information on malformations or immune-mediated fetal adverse events was present in the FDA database. Among the six case reports available, none of the fetuses experienced a malformation linked to immunotherapy, and only one had potentially immune-mediated hypothyroidism (Table 3). Comprehensively, while checkpoint inhibitors seem to be fetotoxic, this adverse effect might be patient specific, varying with the antigenicity of the fetus. Nevertheless, in some patients, normal pregnancy and delivery seem to be possible, even though the abortion-to-delivery ratio in a general population remains unknown. Moreover, the impact of immunotherapy on future pregnancies and the presence of checkpoint inhibitors in breast milk have to be clarified. LIBIDO AND SEXUAL LIFE Although many phase III clinical trials with ICIs evaluated quality of life as a secondary endpoint, sexuality remains a neglected topic (Table 1). To the best of our knowledge, only two studies evaluated this topic. The first study is a case report that describes the development of vulvitis in a woman treated with an anti-PD1. 59 Although autoimmune phenomena of the external genitalia could severely impair sexuality, the exact incidence of these phenomena is currently unknown. The second is a pilot cross-sectional study involving 25 males currently or previously treated with ICIs. None of them reported an impairment of sexual function or sexual activity. Only one reported a mild restriction of erectile function. 25 While these data seem to suggest a limited toxicity of ICIs on sexuality, a larger sample size and a prospective design are needed to draw any firm conclusions. Despite the current gap of knowledge, a special consideration regarding hypophysitis and its consequences on sexuality can be made. Impairment of the pituitary-gonadal axis might culminate in a reduction of sexual hormones.
It is well known that sexual hormone deficiency can reduce fertility and lead to physical and psychological sexuality disturbances. 60,61 CURRENT CLINICAL PRACTICES Despite the exiguity of data regarding fertility and sexuality perturbations by checkpoint inhibitors, some pragmatic approaches applicable in daily clinical practice can be depicted (Table 4). First, regarding primary hypogonadism, it is essential to discuss with the patient the possibility of altered gametogenesis and the subsequent reduced fertility. Accordingly, fertility-preservation strategies [62][63][64] (gamete cryopreservation) should be offered, especially in the curative setting where a cure can be achieved and family planning is possible. Although such a strategy could be pursued in the metastatic setting, it is better to avoid delaying therapy initiation in favor of a fertility-preservation strategy, especially in high-burden disease. In addition, ovarian function preservation with luteinizing hormone-releasing hormone (LHRH) agonists, as used in other cancers 65 (e.g. breast cancers 66 ), should not be offered during treatment with immunotherapy given without cytotoxic therapy because of the lack of evidence on immunotherapy-related gonadotoxicity risk and on the benefit of LHRH agonists in this setting. However, if ICIs are used together with chemotherapy, this strategy should be considered. 65 Second, it is important to discuss the possibility of hypopituitarism-induced infertility and its potentially chronic persistence. However, even with hypopituitarism, it should be noted that it is possible to produce viable gametes and become pregnant, especially with adequate hormonal stimulation, 67,68 but again, data on pregnancy rates after immunotherapy-induced hypopituitarism are not available. Third, it should be recommended to avoid pregnancy during checkpoint inhibitor treatment. In particular, it is suggested to use at least one contraceptive method.
If the woman gets pregnant or is already pregnant, the pros and cons of continuing treatment should be discussed with her. In the case of metastatic disease with a long-term complete response, treatment discontinuation could be discussed. After delivery, it is pivotal to follow up the child for potential complications, including the development of an autoimmune disease. As stated by others, 69 prematurity has to be avoided as much as possible. While there are no high-quality data on the impact of immunotherapy on future pregnancy outcomes or the presence of checkpoint inhibitors in breast milk, a pragmatic approach could be to wait several months from the end of the treatment to the beginning of pregnancy or breastfeeding. In particular, as stated by the European Medicines Agency (EMA) and FDA technical sheets, the minimum time from the end of therapy should be 3 months for ipilimumab 70 and durvalumab, 71 4 months for pembrolizumab, 72 and 5 months for nivolumab 73 and atezolizumab. 74 Fourth, the virtual possibility of reduced libido and impaired sexuality, especially in the case of hypophysitis, should also be discussed. In such cases, an evaluation of the pituitary-gonadal axis might be helpful (e.g. FSH, LH, testosterone, estradiol). If sex hormone deficiency is diagnosed, hormone replacement therapy should be considered, if clinically applicable.
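The minimum treatment-to-conception intervals quoted above lend themselves to a simple lookup. A minimal sketch, assuming a 30-day month for date arithmetic; the per-agent values are those quoted in the text from the EMA and FDA technical sheets:

```python
from datetime import date, timedelta

# Minimum months between the last dose and conception/breastfeeding,
# as listed in the text from the EMA/FDA technical sheets.
MIN_WASHOUT_MONTHS = {
    "ipilimumab": 3,
    "durvalumab": 3,
    "pembrolizumab": 4,
    "nivolumab": 5,
    "atezolizumab": 5,
}

def earliest_conception_date(agent: str, last_dose: date) -> date:
    """Approximate the earliest recommended date (1 month ~= 30 days)."""
    return last_dose + timedelta(days=30 * MIN_WASHOUT_MONTHS[agent])
```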
Of note, fatigue is a valuable and often underrated symptom of hypogonadism. 26,75,76 FUTURE CLINICAL TRIALS Considering the paucity of data regarding the effect of checkpoint inhibitors on fertility, pregnancy, and sexuality, there is an urgent need for new evidence to orientate clinical practice (Table 4). Although a retrospective study could be helpful and a precious starting point, a well-designed prospective trial would be the best solution. In particular, reproductive health outcomes should be included in the standard toxicity assessments of all clinical trials. 77 It seems that checkpoint inhibitors may cause primary hypogonadism, especially in men. However, the frequency of this side-effect needs to be clarified, as well as its duration after treatment discontinuation, the laboratory perturbations of sex hormones (e.g. testosterone, estradiol), and the impact on fertility, pregnancy, and regular delivery rate. Regarding secondary hypogonadism, there is currently strong evidence for the causative role of immunotherapy in hypophysitis and hypopituitarism. However, beyond the monitoring of thyroid and adrenal functions, better documentation of pituitary-gonadal axis impairment is highly warranted. Moreover, because a case of isolated secondary hypogonadism has been described, documentation of its frequency in a real-world setting should be generated. Therefore, the evaluation of FSH, LH, testosterone, and estradiol should be performed. Again, the implications of secondary hypogonadism on fertility and pregnancy need to be evaluated. While many fetotoxic effects of ICIs have been documented, the vast majority of the evidence is preclinical. Therefore, a multi-institutional effort aimed at collecting anecdotal data on humans accidentally exposed to these agents during pregnancy is highly warranted.
Moreover, it is important to annotate the potential short- and long-term toxicities in children exposed in utero to these agents, including the risk of autoimmune diseases. Finally, the consequences of ICIs on libido and sexuality are currently neglected. Therefore, the use of validated questionnaires 78,79 at different timepoints could be carried out. CONCLUSIONS ICIs have revolutionized cancer treatments because of their extraordinary efficacy. Therefore, it is anticipated that their use is going to increase further in the near future. Paradoxically, the toxicities induced by ICIs on fertility, pregnancy, and sexuality are poorly understood. From the currently available evidence, these compounds could cause primary hypogonadism, secondary hypogonadism, and, theoretically, libido and sexual impairment. In addition, based on preclinical data, conception and pregnancy should be avoided during treatment with anti-PD1/anti-PDL1/anti-CTLA4. Nevertheless, at least in some cases, a regular delivery seems to be possible. The data discussed above can help clinicians to better aid patients in daily clinical practice. An international effort to bridge the current knowledge gap will be fundamental. ACKNOWLEDGEMENTS This work was supported by the Italian Ministry of Health (Ricerca Corrente). FUNDING This work was supported by a contribution from the 5×1000 funds per la Ricerca Sanitaria (no grant number). DISCLOSURE MG is on the advisory boards of Novartis, Eli Lilly, Pierre-Fabre, all outside the submitted work. ML acted as a consultant for Roche, AstraZeneca, Eli Lilly, and Novartis and received speaker honoraria from Roche, Sandoz, Takeda, Pfizer, Eli Lilly, and Novartis outside the submitted work; he also acknowledges the support of the Associazione Italiana per la Ricerca sul Cancro (AIRC; grant number MFAG 2020 ID 24698) and the Italian Ministry of Health 5×1000 funds 2017 (no grant number) for pursuing his research efforts in the field of oncofertility.
FP is on the advisory board of Amgen; reports research funding from AstraZeneca; travel grants from Celgene; research funding from, and is on the advisory board of, Eisai; is on the advisory board of Eli Lilly; received honoraria from Ipsen; is on the advisory board of, and received honoraria from, MSD; is on the advisory boards of Novartis, Pierre-Fabre, and Pfizer; is on the advisory board of, and received travel grants and research funding from, Roche; and received honoraria from Takeda, all outside the submitted work.
2021-10-03T06:16:56.688Z
2021-09-28T00:00:00.000
{ "year": 2021, "sha1": "510bddfe76efa944123ce199e8bf804c7bbc9872", "oa_license": "CCBY", "oa_url": "http://www.esmoopen.com/article/S2059702921002386/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ec475e39d3ce21218764a7e2d0dbbacaaa2adf3e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259202524
pes2o/s2orc
v3-fos-license
The relationship between the incidence of X-ray selected AGN in nearby galaxies and star-formation rate We present the identification and analysis of an X-ray selected AGN sample that lies within the local ($z<0.35$) galaxy population. From a parent sample of 22,079 MPA-JHU (based on SDSS DR8) galaxies, we identified 917 galaxies with central, excess X-ray emission (from 3XMM-DR7) likely originating from an AGN. We measured the host galaxies' star formation rates and classified them as either star-forming or quiescent based on their position relative to the main sequence of star formation. Only 72% of the X-ray selected sample were identified as AGN using BPT selection; this technique is much less effective in quiescent hosts, only identifying 50% of the X-ray AGN. We also calculated the growth rates of the black holes powering these AGN in terms of their specific accretion rate ($\propto \mathrm{L_X/M_*}$) and found that quiescent galaxies, on average, accrete at a lower rate than star-forming galaxies. Finally, we measured the sensitivity function of 3XMM so we could correct for observational bias and construct probability distributions as a function of accretion rate. AGN were found in galaxies across the full range of star formation rates ($\log_{10} \mathrm{SFR/M_\odot\ yr^{-1}} = -3\ \mathrm{to}\ 2$) in both star-forming and quiescent galaxies. The incidence of AGN was enhanced by a factor of 2 (at a 3.5$\sigma$ significance) in star-forming galaxies compared to quiescent galaxies of equivalent stellar mass and redshift, but we also found a significant population of AGN hosted by quiescent galaxies. INTRODUCTION Galaxies grow by forming stars, a process driven by the availability of cold gas. This material is also thought to fuel the growth of supermassive black holes (SMBHs; Alexander & Hickox 2012). However, a lot of uncertainty remains over exactly how the growth of galaxies and their central SMBHs co-evolve over cosmic time.
Large scale surveys of AGN activity across the electromagnetic spectrum have been employed to shed light on the nature of this relationship (e.g. Aird et al. 2018; Mullaney et al. 2012a; Rosario et al. 2013; Smolčić et al. 2017; Reines et al. 2013, and many others). By exploring how the incidence of AGN changes with the SFR of its host galaxy, several key pieces of evidence have been uncovered that suggest the growth of SMBHs and their host galaxies are connected. One of the most well-constrained correlations highlighting the relationship between SMBH and galaxy growth was discovered by including SMBH mass measurements from the nearby Universe. Using a mixture of ground and space-based observations, Magorrian et al. (1998) measured the dynamical masses of a sample of 32 SMBHs and uncovered a relationship between the SMBH mass and the mass of the host galaxy's bulge. When directly attempting to connect the AGN activity with the host galaxy's SFR, however, the results are not clear. On the one hand, numerous studies found that the average SFR of AGN-hosting galaxies increased out to at least z ≈ 3 (e.g. Harrison et al. 2012; Mullaney et al. 2012b; Rosario et al. 2012, 2013), and that the SFR is found to tightly correlate with the average AGN luminosity (e.g. Mullaney et al. 2012a; Chen et al. 2013). On the other hand, it was found that AGN at a fixed X-ray luminosity can have a broad range of SFRs (e.g. Alexander et al. 2005; Mullaney et al. 2010) - in some cases covering up to 5 orders of magnitude (Rafferty et al. 2011). As the galaxy population evolves through cosmic time, there is a significant change in the star-forming composition of the overall galaxy population. Since z = 2, there has been a significant build-up of the quiescent galaxy population, particularly in higher mass galaxies (Brammer et al. 2011; Tomczak et al. 2014; Barro et al. 2017), which highlights a significant amount of star-formation quenching in this period.
This decline in the density of star formation across recent cosmic time is thought to be driven by a decreasing density of molecular gas (Popping et al. 2012; Maeda et al. 2017). A concurrent decline in the AGN activity of star-forming galaxies could imply a relationship between star-forming activity and black hole fuelling. Kauffmann & Heckman (2009) described this kind of activity through their "feast and famine" fuelling model. By analysing the Eddington ratio distributions of a sample of optically-selected SDSS AGN they were able to demonstrate the existence of two distinct populations of AGN, implying there were different regimes of black hole growth. The "feast" mode is associated with galaxies containing significant amounts of star formation in their central regions. The large amounts of cold gas required for star formation fuel black hole growth. The "famine" mode is associated with galaxies hosting older stellar populations. In this case, black hole growth is regulated by the rate at which stars lose their mass. Whilst the optical selection method used in Kauffmann & Heckman (2009) produced an incomplete sample (Jones et al. 2016), this relationship between star formation and black hole accretion has been observed in more recent studies with more complete samples. For example, Aird et al. (2019) explored the effect of star formation on a sample of AGN from the CANDELS survey (0.1 ≤ z ≤ 4), selected using Chandra X-ray data. After applying observational corrections to this sample, they find evidence of an SFR-dependent fuelling mechanism reflective of the model proposed in Kauffmann & Heckman (2009). On the other hand, Torbaniuk et al. (2021) explored this relationship with a sample of AGN at z < 0.33 using data from SDSS DR8 and 3XMM DR8. They found twice as many AGN in star-forming galaxies compared to quiescent hosts.
They then corrected their sample for observational incompleteness using 3XMM upper limits to study the intrinsic accretion rate distribution in this regime. They found systematically larger accretion rates in star-forming galaxies across all stellar masses. Their results imply that both black hole accretion and star formation are fuelled by a common source. In this paper, we explore the star-forming properties of galaxies in the nearby Universe (z < 0.35) and investigate how this affects the likelihood of finding an X-ray selected AGN. For this analysis, we use techniques developed in our previous work (Birchall et al. 2020, 2022). In these papers we investigated the relationship between host galaxy properties and the incidence of X-ray selected AGN in dwarf galaxies, and then in the full galaxy population. Our galaxies were taken from the MPA-JHU catalogue (based on SDSS DR8) and X-ray information from 3XMM DR7. We used upper limits to correct our sample for observational incompleteness and used this to measure the probability of finding AGN activity in a galaxy of a given mass and redshift. This paper is structured as follows. First, we describe how our AGN sample was constructed (§2). Second, we present our approach to classifying the star-forming properties of the galaxies in our sample, making sure to account for the effect of changing stellar mass and redshift (§3). Then we assess the agreement between our X-ray selected sample and BPT selection, and how star-forming activity affects that classification (§4). After that, we investigate the rate at which the black holes powering our AGN are accreting material and explore how this changes with star formation classification (§5). With these data we then measure the probability of finding AGN activity as a function of accretion rate (§6) and use this to calculate how the fraction of galaxies that host AGN varies with stellar mass, redshift and star formation rate (§7).
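The upper-limit-based incompleteness correction mentioned above can be illustrated schematically. This is not the actual method of Birchall et al. (2022), only a sketch of the core idea: each detected AGN is up-weighted by the inverse of the fraction of parent-sample galaxies in which a source of its luminosity would have been detectable.

```python
def weighted_agn_fraction(agn_luminosities, limit_luminosities):
    """Schematic sensitivity-corrected AGN fraction.

    agn_luminosities: X-ray luminosities of the detected AGN.
    limit_luminosities: for every galaxy in the parent sample, the
        luminosity corresponding to the X-ray flux limit at its position
        (e.g. derived from upper-limit measurements).
    """
    n_gal = len(limit_luminosities)
    total = 0.0
    for lx in agn_luminosities:
        # Number of parent galaxies in which this AGN was detectable.
        detectable = sum(1 for lim in limit_luminosities if lx >= lim)
        if detectable:
            total += n_gal / detectable  # inverse-completeness weight
    return total / n_gal
```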
Throughout, we assume a Friedmann-Robertson-Walker cosmology: Ω_M = 0.3, Ω_Λ = 0.7 and H_0 = 70 km s^-1 Mpc^-1. DATA & SAMPLE SELECTION To define a parent sample of local galaxies for this study we used the MPA-JHU catalogue. This catalogue is based on the Sloan Digital Sky Survey Data Release 8 (SDSS DR8), and includes stellar masses, star formation rates (SFRs) and emission line fluxes for 1,472,583 objects classified as galaxies by the SDSS pipeline. Whilst the MPA-JHU catalogue is formally deprecated by the SDSS, we found the alternatives (Wisconsin, Portsmouth and Granada) to be insufficient for our purposes. See Birchall et al. (2022) for more details on this analysis. The X-ray data come from the 3XMM DR7 catalogue (Rosen et al. 2016). It is based on 9,710 pointed observations with the XMM-Newton EPIC cameras in the ∼ 0.2-12 keV energy range. We use fluxes in the 2-12 keV range, obtained by summing the 2-4.5 keV and 4.5-12 keV bands, and converted them to luminosities using the MPA-JHU redshifts, assuming Γ = 1.7. We used this catalogue, instead of 4XMM, because this was the data used in Birchall et al. (2022) and the analysis required the use of comprehensive upper limits data from FLIX (Carrera et al. 2007). At the time of performing this analysis, 3XMM DR7 was the most recent version of the serendipitous sky survey available with this infrastructure. See Birchall et al. (2020) for a more detailed description of both these catalogues. We use the ARCHES cross-correlation tool, xmatch (Pineau et al. 2017), to match these catalogues. It is an astronomical matching tool which can identify the counterparts of one catalogue in multiple others and compute the probabilities of associations using background sources and positional errors. We broke the data down into individual XMM fields and matched the SDSS and X-ray objects therein. Using a 90% probability of association as the matching threshold left us with a well-matched sample of 1,559 X-ray emitting galaxies.
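Converting the catalogued fluxes to luminosities requires a luminosity distance under the adopted cosmology. A self-contained sketch using plain trapezoidal integration rather than an astronomy library; the K-correction for the assumed Γ = 1.7 power law is omitted here for brevity:

```python
import math

C_KM_S = 299792.458     # speed of light, km/s
MPC_IN_CM = 3.0857e24   # one megaparsec in centimetres

def luminosity_distance_mpc(z, h0=70.0, om=0.3, ol=0.7, steps=10000):
    """Luminosity distance (Mpc) in a flat FRW cosmology, from
    trapezoidal integration of the comoving-distance integral."""
    def inv_e(zp):
        return 1.0 / math.sqrt(om * (1.0 + zp) ** 3 + ol)
    dz = z / steps
    integral = 0.5 * (inv_e(0.0) + inv_e(z))
    for i in range(1, steps):
        integral += inv_e(i * dz)
    integral *= dz
    return (1.0 + z) * (C_KM_S / h0) * integral

def flux_to_luminosity(flux_cgs, z):
    """Convert an observed flux (erg/s/cm^2) to a luminosity (erg/s)."""
    dl_cm = luminosity_distance_mpc(z) * MPC_IN_CM
    return 4.0 * math.pi * dl_cm ** 2 * flux_cgs
```

At z = 0.1 with these parameters the luminosity distance comes out near 460 Mpc, so a typical serendipitous-survey flux of 1e-14 erg/s/cm^2 maps to a luminosity of a few times 1e41 erg/s.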
See Birchall et al. (2022) for more details on the matching process. Our AGN X-ray selection technique was developed in Birchall et al. (2020). First, we modelled the combined emission coming from other X-ray emitting sources - X-ray binary stars (L_XRB, based on the Lehmer et al. (2016) model) and hot gas emission (L_Gas, based on the Mineo et al. (2012) model) - and compared it to the observed X-ray luminosity. After we calculated these contributions, we summed them and compared this quantity to the observed X-ray luminosity, L_X,obs. Any object that met or exceeded the following criterion was considered to be an AGN:

L_X,obs ≥ L_XRB + L_Gas (1)

Using this criterion, 949 X-ray emitting galaxies were classified as AGN. For the purposes of measuring an accurate AGN fraction we had to ensure we had a statistically complete sample of galaxies above a given stellar mass limit. As in our previous work, we adopt a redshift-dependent stellar mass limit for our sample corresponding to the mass above which 90% of galaxies lie in narrow redshift bins (Δz ∼ 0.05). This results in the removal of 32 objects. See Birchall et al. (2022) for full details. STAR-FORMING CLASSIFICATION The galaxy population in the nearby Universe is strongly bimodal: there are star-forming galaxies that lie close to or above the main sequence (Noeske et al. 2007; Elbaz et al. 2011) and quiescent galaxies that lie below it. Salim et al. (2007) found that, for a large sample of star-forming SDSS galaxies, the main sequence of star formation at a given redshift is described by a power law, SFR ∝ Mass^0.65. However, given that the relative amount of star formation has changed dramatically through cosmic time (see Madau & Dickinson 2014, among many others), the exact form of the main sequence relation will evolve with redshift. In this section, we outline the process by which we classify galaxies as star-forming or quiescent using their SFRs relative to this evolving main sequence relation.
Star-forming or Quiescent Our method for splitting the sample into different star-forming classes is based on Moustakas et al. (2013). They calculate a quantity referred to as the "rotated SFR", SFR_rot, which attempts to account for the effect of changing stellar mass on SFR. Thus SFR_rot has the form,

log10(SFR_rot) = log10(SFR) − 0.65(log10 M_* − 10) (2)

where SFR is in units of M_⊙ yr^-1 and stellar mass (M_*) is in units of M_⊙. We plotted histograms of SFR_rot binned by redshift to produce distributions of this mass-independent SFR for the underlying galaxy population. This then allowed us to identify the SFR_rot corresponding to the local minimum between the star-forming and quiescent peaks in each redshift bin. Fitting a straight line to the change of the SFR_rot minima with redshift produced an appropriately normalised equation which could be used to split the whole galaxy population into star-forming and quiescent, SFR_SF/Q. It took the form,

log10(SFR_SF/Q) = log10(SFR) − 0.65(log10 M_* − 10) + (z − 0.79)/0.77 (3)

Figure 1 shows the results of this analysis. Our AGN sample is plotted as large, coloured points on the mass-SFR plane and contrasted with the underlying MPA-JHU galaxy population. Each panel contains a green line, described by equation (3), which is used to split these AGN into star-forming (blue stars) and quiescent (red circles). Any objects that lie above the line are classified as star-forming; those below it are quiescent. SFR relative to the Main Sequence Dividing the sample into star-forming and quiescent galaxies is useful to compare the general effect of star formation on AGN activity. However, to understand this effect in greater detail we extended the above analysis to further divide the galaxy sample by its changing level of star formation.
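Equations (2) and (3) translate directly into code. A minimal sketch of the star-forming/quiescent split; the (z − 0.79)/0.77 normalisation term is read from equation (3) as printed:

```python
def sfr_rot(log_sfr, log_mstar):
    """Equation (2): mass-'rotated' SFR (SFR in Msun/yr, M* in Msun)."""
    return log_sfr - 0.65 * (log_mstar - 10.0)

def sfr_sfq(log_sfr, log_mstar, z):
    """Equation (3): redshift-normalised quantity used to divide
    the population into star-forming and quiescent galaxies."""
    return sfr_rot(log_sfr, log_mstar) + (z - 0.79) / 0.77

def is_star_forming(log_sfr, log_mstar, z):
    """Galaxies above the dividing line are classed as star-forming."""
    return sfr_sfq(log_sfr, log_mstar, z) > 0.0
```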
For this quantity we shifted equation (3) so that it instead traces the centre of the star-forming main sequence, SFR_MS. With this equation we could establish 5 bins of log10(SFR/SFR_MS) to track the changing level of star formation in the sample:

• Starburst: Star-forming galaxies with excess star formation relative to the main sequence (log10(SFR/SFR_MS) > 0.3)
• Main Sequence: Star-forming galaxies with SFRs consistent with the main sequence (−0.3 ≤ log10(SFR/SFR_MS) ≤ 0.3), corresponding to 50% of the total star-forming galaxy population
• Sub-Main Sequence: Galaxies with SFRs lower than the bulk of the main sequence (−0.965 ≤ log10(SFR/SFR_MS) < −0.3), consisting of weakly star-forming galaxies
• Quiescent (High): Quiescent galaxies with SFRs in the top 50% of this population (−1.8 ≤ log10(SFR/SFR_MS) < −0.965)
• Quiescent (Low): Quiescent galaxies with SFRs in the weakest 50% of that population (log10(SFR/SFR_MS) < −1.8)

Figure 2 shows how each bin maps onto the AGN sample. By taking this approach we have ensured there are sufficient numbers of observed AGN in each bin and that, for a fixed redshift, an increase in log10(SFR/SFR_MS) tracks only the effects of SFR.

4 BPT CLASSIFICATION

AGN activity impacts the host galaxy's emission across the electromagnetic spectrum. The BPT diagnostic (Baldwin et al. 1981) is a commonly used technique which can identify the dominant source of ionising radiation in the optical part of the spectrum. It compares the ratios of various emission lines to determine whether star formation, AGN or a composite of both processes is the likely dominant source of ionisation in any given galaxy. Birchall et al. (2020) found that this diagnostic missed around 85% of our X-ray selected AGN in dwarf galaxies. Birchall et al. (2022) found that BPT selection missed a similar proportion of AGN in low mass galaxies but that its accuracy increased towards higher stellar masses. In this section, we investigate how star formation affects the accuracy of this diagnostic.
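The five star-formation bins above translate directly into a classifier; a minimal sketch (the function name is ours, the thresholds are taken verbatim from the bin definitions):

```python
def sf_class(delta_ms):
    """Bin log10(SFR / SFR_MS) into the five star-formation classes
    defined in section 3.2."""
    if delta_ms > 0.3:
        return "starburst"
    if delta_ms >= -0.3:
        return "main sequence"
    if delta_ms >= -0.965:
        return "sub-main sequence"
    if delta_ms >= -1.8:
        return "quiescent (high)"
    return "quiescent (low)"
```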
Figure 1. The AGN sample contrasted with the underlying galaxy population (black contours, containing 90%, 70%, 50%, 30%, and 10% of the population). The green dashed line describes the changing minimum of the bimodal SFR distribution for this galaxy population and is used to determine whether an AGN is "star-forming" (blue stars) or "quiescent" (red circles). See section 3.1 for more information on how this was calculated.

Of the 917 AGN hosts we identified in section 2, 658 had strong detections (Line Flux / Line Flux Error > 3) in the required emission lines. These were used in our BPT analysis. In figure 3, we show the BPT and X-ray-selected AGN on the BPT plane, with star-forming AGN as blue stars, and their quiescent counterparts as red circles. Black lines separate the AGN hosts into different classifications: objects with ionisation signatures predominately from star formation lie in the bottom-left, those dominated by AGN emission lie in the top-right, and those with composite spectra are in the central region (Kewley et al. 2001; Kauffmann et al. 2003). A subset of the MPA-JHU galaxies are shown underneath these points, in grey, to illustrate the underlying BPT distribution. The vast majority of the X-ray selected star-forming AGN have significant enough detections in all the BPT emission lines to make it onto the BPT diagram. This population can be clearly seen across the full extent of the diagnostic. However, under half of the X-ray selected quiescent AGN have significant enough detections to make it onto the BPT diagram. This population is concentrated largely in the AGN region. By requiring a significant detection in each BPT emission line, objects with relatively large amounts of optical ionisation are much more likely to make it onto figure 3, regardless of the source of that ionisation. However, this is not reflected in the X-ray luminosities.
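The demarcation lines referenced above are the standard Kauffmann et al. (2003) and Kewley et al. (2001) curves on the [NII]-based BPT plane. A sketch of the three-way classification (the function name is ours; line ratios are assumed to have already passed the Line Flux / Line Flux Error > 3 cut):

```python
def bpt_class(log_nii_ha, log_oiii_hb):
    """Classify a galaxy on the BPT plane from its measured
    log10([NII]/Halpha) and log10([OIII]/Hbeta) line ratios."""
    x, y = log_nii_ha, log_oiii_hb
    # Kauffmann et al. (2003) empirical star-forming boundary
    below_kauffmann = x < 0.05 and y < 0.61 / (x - 0.05) + 1.3
    # Kewley et al. (2001) theoretical maximum-starburst boundary
    below_kewley = x < 0.47 and y < 0.61 / (x - 0.47) + 1.19
    if below_kauffmann:
        return "star-forming"
    if below_kewley:
        return "composite"
    return "AGN"
```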
There is no systematic difference in the X-ray luminosity distributions of BPT-detected and non-detected sources for either the full AGN population or just the quiescent AGN. By definition, star-forming AGN produce significant amounts of optical emission from both forming stars and AGN activity. Thus, these objects will have higher levels of optical emission on average. This results in a higher fraction of these objects having significant enough emission lines to make it onto the BPT diagram. It also means that these objects can appear across all classifications, although a BPT classification of 'composite' or 'star-formation-dominated' does not exclude the presence of AGN activity (Birchall et al. 2020, 2022). Quiescent galaxies, on the other hand, are defined by their reduced levels of star formation. This means that the only route by which they can appear on the BPT diagram is through significant AGN activity. This explains why these objects are concentrated in the AGN region. It could also explain why these objects are disproportionately missing from the BPT diagnostic: if there is little star formation and the AGN activity is optically quiet, then there is no source to produce strong enough emission lines to make it onto the BPT diagnostic. To investigate this, we first analysed the individual emission lines and found that Hβ most frequently missed the significance threshold. Cid Fernandes et al. (2010) also found that a significant population of SDSS galaxies were missed from the BPT diagram due to a lack of significant Hβ line detections. In addition, it was galaxies that would have been classified as AGN (had the significance criterion not been required) that were most affected. We then looked at how the fraction of BPT-detected AGN changes with stellar mass and star-forming classification. The results are inset in figure 3.
The blue, star-forming distribution shows high proportions of BPT-detected AGN, reaching 94% in the 10.5 ≤ log10(M_*/M_⊙) ≤ 11 bin and never dropping below 67%. However, the red, quiescent distribution peaks at 77% in the 10 ≤ log10(M_*/M_⊙) ≤ 11 bin, before rapidly dropping to 17% in the highest mass bin. There was only one quiescent AGN detected in the lowest mass bin, which was missed, hence the 0% detection rate. Since, by definition, quiescent AGN are not forming as many new stars as their star-forming counterparts, stellar absorption lines, like Hβ, could dominate their spectra. These absorption lines will be most pronounced in the spectra of the highest mass galaxies as they contain the largest amount of stellar material. Thus, weak AGN emission lines could be dominated by strong absorption lines and produce the significant drop in the fraction of high mass, quiescent AGN shown on the BPT diagram in figure 3. This suggests that the BPT diagnostic is not effective at identifying optically-weak emission from AGN, especially in quiescent galaxies.

5 SPECIFIC BLACK HOLE ACCRETION RATE

AGN are powered by the accretion of matter onto a central supermassive black hole. Measuring the accretion rate is important as observing only the absolute X-ray luminosity can provide a biased picture, especially when examining the AGN content of galaxies that span a broad mass range. Consider two black holes: one growing at a higher accretion rate in a lower mass galaxy, the other at a lower accretion rate in a higher mass galaxy. It is possible that these two galaxies could emit the same X-ray luminosity, obscuring the activity occurring at the central black hole. Thus, to break this degeneracy, we need to investigate the specific black hole accretion rate (sBHAR), λ_sBHAR. This quantity compares the bolometric AGN luminosity with an estimate of the black hole's Eddington luminosity to provide an estimate of how efficiently material is being accreted.
It has the following form,

λ_sBHAR = (25 L_2−10 keV) / (1.26 × 10^38 × 0.002 M_*) ≈ L_bol / L_Edd, (5)

and is taken from Aird et al. (2012). Using 0.002 M_* implies a perfectly constant correlation between the masses of the SMBH and stellar bulge which, in reality, is dependent on galaxy morphology, among other properties (e.g. Blanton & Moustakas 2009; Jahnke et al. 2009). However, our intention when using this correlation is to present an Eddington-scaled accretion rate quantity rather than accurately recreate an Eddington ratio. By using this scaling to calculate λ_sBHAR, a tracer of the rate of black hole growth relative to the host galaxy's stellar mass, we allow for ease of comparison with the literature. Figure 4 shows the observed AGN sample distributed on a stellar mass and X-ray luminosity plane, coloured based on their star-forming classification. Star-forming AGN are shown as blue stars, and quiescent AGN are red circles. Lines of constant λ_sBHAR are shown to indicate its approximate value for different masses and luminosities. The clearest difference between star-forming and quiescent AGN is in the highest mass galaxies (log10 M_*/M_⊙ ≳ 11). Here the brightest, most actively accreting AGN are in star-forming galaxies, and the dimmer, weakly accreting AGN are in quiescent hosts. This effect is repeated across the stellar mass range to a lesser degree, with star-forming AGN generally having higher values of λ_sBHAR. This is evident in the inset panel, where we plotted the λ_sBHAR distributions split by star-forming classification. It is clear that quiescent AGN are dominated by relatively low accretion rates, whereas star-forming AGN are much more evenly spread and peak at much higher rates. In fact, for quiescent AGN the mean log10 λ_sBHAR ∼ −3.1, whereas for star-forming AGN the mean log10 λ_sBHAR ∼ −2.7. Whilst this sample is subject to a selection bias, it clearly shows there is an increased level of accretion activity in star-forming galaxies.
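Equation (5) is straightforward to evaluate; a minimal sketch (the function name is ours; the numerical factors are as reconstructed in the equation above):

```python
def sbhar(l_x_2_10_kev, m_star):
    """Equation (5), after Aird et al. (2012): specific black hole
    accretion rate from the 2-10 keV luminosity (erg/s) and stellar
    mass (solar masses).  The factor 25 is the bolometric correction,
    0.002 * m_star the assumed black hole mass, and 1.26e38 erg/s the
    Eddington luminosity per solar mass."""
    return 25.0 * l_x_2_10_kev / (1.26e38 * 0.002 * m_star)
```

For example, an AGN with L_2−10 keV = 10^42 erg/s in a 10^10 M_⊙ host has λ_sBHAR ≈ 0.01, i.e. log10 λ_sBHAR ≈ −2.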
This result has been seen in numerous samples at similar (Torbaniuk et al. 2021) and higher redshifts (e.g. Chen et al. 2013; Yang et al. 2017).

6 COMPLETENESS-CORRECTED PROBABILITY DISTRIBUTIONS

No AGN selection technique is perfect. Our approach, briefly described in section 2, aimed to overcome the preference that a flat luminosity threshold has towards selecting higher mass galaxies (which typically have higher X-ray luminosities) by modifying the threshold value based on the host galaxy properties. However, figure 4 shows that despite our efforts the sample is still dominated by higher mass galaxies. Birchall et al. (2020) outlines, in detail, the method we developed to reduce the effects of this bias. By correcting for 3XMM's varying detection sensitivity, this method allows us to investigate how the underlying distribution of AGN varies with changes in host galaxy properties. Birchall et al. (2022) showed that the X-ray luminosity probability distributions were still subject to an apparent observational bias, despite the completeness corrections applied, so we only consider the sBHAR-based probability distributions. In this section, we will briefly describe how the completeness corrections are calculated, how they are applied to the observational data to create these distributions, and what they tell us about the effect of star formation on the incidence of AGN in the nearby Universe. See Birchall et al. (2020, 2022) for a more comprehensive description of this process.

6.1 Creating the Probability Distributions

The detection sensitivity of 3XMM varies significantly across its observed sky area. This variation brings the possibility that lower luminosity AGN in this region may have been missed because 3XMM is insufficiently sensitive to detect them. To correct for this, we used Flix, 3XMM's upper limit service, to measure the 0.2 − 12 keV upper limit at the centres of the 22,079 MPA-JHU galaxies in the 3XMM footprint.
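Converting a Flix flux upper limit into a luminosity upper limit is the standard inverse-square relation; a minimal sketch (the function name is ours, and k-corrections and the choice of cosmology used to obtain the luminosity distance are omitted):

```python
import math

def luminosity_upper_limit(flux_limit, d_l_cm):
    """Convert a flux upper limit (erg/s/cm^2), such as one returned
    by Flix, into an X-ray luminosity upper limit (erg/s), given the
    galaxy's luminosity distance in cm."""
    return 4.0 * math.pi * d_l_cm ** 2 * flux_limit
```

For a galaxy at roughly 100 Mpc (d_L ≈ 3.086e26 cm), a flux limit of 1e-14 erg/s/cm^2 corresponds to a luminosity limit of order 10^40 erg/s.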
We restricted detections to the PN8 band, which reduced our AGN sample from 917 to 739. Using the Flix upper limit and the MPA-JHU redshift and stellar mass, we calculated the X-ray luminosity upper limit and λ_sBHAR for each galaxy. With these properties, we could construct a cumulative histogram normalised by the total number of galaxies in the current mass and redshift range. These luminosity sensitivity functions allowed us to determine the fraction of galaxies where an AGN accreting above a given rate could be detected. Using these sensitivity functions, we can, for example, calculate the probability of finding an AGN in a given galaxy as a function of specific black hole accretion rate, in bins of increasing stellar mass. To construct this distribution, we first split up the AGN sample into a series of stellar mass bins. These are then further broken down as a function of λ_sBHAR. This gives us an observed AGN count distribution. We also created a bespoke sensitivity function for each of these stellar mass bins. From this, we constructed a binned probability distribution by dividing the number of AGN in a given galaxy sub-sample and λ_sBHAR bin, N_AGN,j, by the number of galaxies where such a λ_sBHAR would be detectable, N_gal,j. Thus,

p(λ > λ_sBHAR,lim) = N_AGN,j / N_gal,j, (6)

where p(λ > λ_sBHAR,lim) is the probability of observing an AGN in a given galaxy sub-sample, accreting above the limiting accretion rate λ_sBHAR,lim.

Figure 4. Stellar mass against the observed X-ray luminosity for the 917 AGN detected in section 2. Star-forming AGN are shown as blue stars, and quiescent AGN are red circles. Several grey lines of constant λ_sBHAR have also been plotted for reference. The inset panel shows how the AGN population is distributed along the λ_sBHAR axis when split by star-forming classification. This distribution has not been corrected to account for X-ray sensitivity limits; for more information on this see section 6.
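The binned ratio of equation (6) can be sketched as follows. This is a simplified stand-in for the full sensitivity-function machinery: the argument names are ours, and per-galaxy detection limits are assumed to have been precomputed from the Flix upper limits:

```python
import numpy as np

def agn_probability(agn_sbhar, gal_limit_sbhar, bin_edges):
    """Equation (6) sketch: probability of a galaxy hosting a
    detectable AGN in each log10(sBHAR) bin, corrected for the fact
    that not every galaxy is observed deeply enough to detect an AGN
    at that accretion rate.
    agn_sbhar      -- log10(sBHAR) of the detected AGN in a sub-sample
    gal_limit_sbhar-- per-galaxy log10(sBHAR) detection limits
    bin_edges      -- edges of the log10(sBHAR) bins"""
    probs = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        n_agn = np.sum((agn_sbhar >= lo) & (agn_sbhar < hi))
        # galaxies in which an AGN at this sBHAR would be detectable
        n_gal = np.sum(gal_limit_sbhar <= lo)
        probs.append(n_agn / n_gal if n_gal else np.nan)
    return np.array(probs)
```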
The results of this process are shown in figure 5: the probability of finding an AGN as a function of λ_sBHAR in bins of increasing mass. Reference information is also printed on each panel, including the size of each AGN population and the stellar mass range of the galaxies included therein. By applying correction fractions, extracted from the λ_sBHAR sensitivity function, to the observed AGN counts we were able to provide robust measurements of the true incidence of AGN within the nearby galaxy population. Figure 5 shows the results of this calculation. It is clear that AGN exist across the stellar mass range despite the favouring of higher mass AGN seen in figure 4. In addition, we see that the AGN populations are well described by power law distributions, as found in Birchall et al. (2020) and Birchall et al. (2022), with AGN being found predominantly at lower λ_sBHAR (see also Aird et al. 2012, and others). Errors on the data points are found using the confidence limit equations presented in Gehrels (1986), so the size of the error is proportional to the number of detected AGN in a given bin. Power laws were fit to the data in each of these panels with the following form,

p(λ) = A (λ / λ_c)^γ, (7)

where p(λ) is the probability of observing an AGN as a function of λ_sBHAR, centred on a value λ_c. Each power law is centred on the median accretion rate for the sample, log10 λ_c = −2.55. The star-forming AGN population are shown as blue stars, and the quiescent AGN population as red circles. The power laws are shown as dashed lines, indicating how the probability of finding an AGN changes as a function of λ_sBHAR. The error region surrounding each power law was calculated by performing a χ² fit of equation (7) to the data points and errors in each bin. Fit parameter errors were estimated by taking the square root of the covariance matrix's diagonal. With this we could outline the extent of the uncertainty in each fit. Encouragingly, the power law provides a good fit in nearly all of the bins.
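In log space the power law of equation (7) is linear, so the χ² fit described above reduces to weighted linear least squares. A sketch, assuming our reconstructed parameterisation log10 p = log10 A + γ log10(λ/λ_c) and equal treatment of upper and lower error bars:

```python
import numpy as np

LOG_LAMBDA_C = -2.55  # median accretion rate; fits are centred here

def fit_power_law(log_lam, log_p, log_p_err):
    """Weighted least-squares fit of equation (7) in log space,
    returning best-fit (log10 A, gamma) and their 1-sigma errors from
    the square root of the covariance matrix diagonal."""
    x = np.asarray(log_lam) - LOG_LAMBDA_C
    a_mat = np.column_stack([np.ones_like(x), x])   # design matrix
    w = 1.0 / np.asarray(log_p_err) ** 2            # inverse-variance weights
    cov = np.linalg.inv(a_mat.T @ (w[:, None] * a_mat))
    params = cov @ (a_mat.T @ (w * np.asarray(log_p)))
    return params, np.sqrt(np.diag(cov))
```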
6.2 Probability Distribution Comparison

Using the method described above we have created a series of probability distributions investigating how other host galaxy properties might affect the incidence of AGN in the nearby Universe. Figure 6 adds probability distributions for redshift and log10(SFR/SFR_MS), split up into quiescent (left-hand column) and star-forming (right-hand column) AGN. Each panel contains numerous probability distributions, represented by dashed lines, which are coloured to indicate the respective stellar mass (top row), redshift (middle row) or log10(SFR/SFR_MS) (bottom row) bin. The best-fit coefficients and associated errors used to calculate these distributions are presented in appendix A. The error regions were all fit to data points but were not included in figure 6 to aid clarity. The redshift and log10(SFR/SFR_MS) distributions with data points are presented in appendix B.

Figure 5. The star-forming AGN population are shown as blue stars, and the quiescent AGN population as red circles. Power laws (dashed lines) have been fit to the data in each panel and displayed alongside their 1σ uncertainty (pale region). Information about the number of PN-8-detected AGN and parent galaxies in each mass range is printed on the panels. See section 6.1 for more details on how these plots were constructed. Table A1 lists the coefficients required for equation (7) to recreate these distributions.

Normalisation

The normalisation of the distributions in figure 6 does not change significantly. There is some evidence of an increase in the overall probability of finding AGN in star-forming galaxies as we shift towards higher redshifts and star formation rates, but the magnitude of this effect appears to be fairly small. Not all of these distributions cover the full extent of the log10 λ_sBHAR axis. This can be seen most clearly in the stellar mass row, with the distributions describing the lowest mass, quiescent AGN and highest mass, star-forming AGN.
In addition, the star-forming column of both the redshift and log10(SFR/SFR_MS) rows shows a slight shift towards higher accretion rates at higher values of the respective property. These changes are due to the limited number of observations in these bins. However, this does highlight changes in the galaxy population across the star-forming classifications. In the stellar mass row, we see that star-forming galaxies dominate at lower stellar masses and quiescent galaxies at higher masses. In the redshift and log10(SFR/SFR_MS) rows, there is a slight favouring of higher accretion rates.

Slope

The clearest changes in slope can be seen in the redshift row of figure 6. The quiescent distributions are much steeper than those composed of star-forming AGN. There is also a distinct steepening between the quiescent distributions, showing that lower accretion rates are favoured with increasing redshift. The star-forming distributions, however, do not show this trend. In fact, the highest redshift, star-forming distributions appear significantly flatter than their quiescent counterparts, suggesting that star formation facilitates higher accretion rates at higher redshifts. There is not much change in slope in the log10(SFR/SFR_MS) row. Given the nature of this quantity, the quiescent/star-forming division occurs along the log10(SFR/SFR_MS) axis, so there are only two bins in the left panel and three on the right. There does not appear to be any consistent trend within the stellar mass row, nor any clear effect when comparing the quiescent and star-forming populations. Whilst the highest mass, star-forming bin has a distinctly flat gradient, it is poorly constrained and is consistent with the lower mass bin within the errors.

7 AGN FRACTIONS

Integrating under the probability distributions shown in figure 6 allowed us to further analyse how star formation affects the AGN population in the nearby Universe.
Figure 7 shows the AGN fraction as a function of each property, split up into star-forming and quiescent populations. As before, we are only considering the sBHAR-derived fractions; the corresponding integration limits are shown in the bottom-left corner of each panel. These limits were chosen because they represent the bulk of the measured activity for our AGN sample. In this section, we will outline the results of this calculation and explain their significance. It is encouraging to see that the top and middle panels, exploring the AGN fraction with stellar mass and redshift respectively, highlight trends similar to those shown in Birchall et al. (2022). In that work we found little change in AGN fraction with stellar mass, averaging around 1%. In the top panel of figure 7 there is a similarly flat AGN fraction with stellar mass, averaging about 1% for the quiescent galaxies, and 2% for star-forming galaxies. Birchall et al. (2022) also showed that the AGN fraction with redshift increased from around 1% to 10%. In the middle panel of figure 7 there is also a clear increase in AGN fraction with redshift. Between z = 0 and 0.35, the AGN fraction rises from 0.5% to 4.5% for quiescent galaxies, and from 1.5% to 7% for star-forming objects. Splitting the AGN sample by star-forming classification shows that star-forming galaxies have slightly enhanced AGN fractions. However, this enhancement does not appear statistically significant. To check its significance, we calculated the overall fraction of AGN found in star-forming galaxies and compared it to the fraction in quiescent galaxies. The star-forming AGN fraction was found to be enhanced by a factor of 2 at a > 3.5σ significance. Thus there does appear to be a real increase in the incidence of AGN in star-forming galaxies. Azadi et al. (2015) observed a similarly sized star-formation-driven enhancement in a different X-ray selected AGN sample out to z ≈ 1.2.
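For the power-law form of equation (7), the integration step that produces these fractions can be written analytically; a sketch (the analytic antiderivative and the function name are ours, and the integration limits are placeholders for the panel-specific limits quoted in figure 7):

```python
import math

def agn_fraction(log_a, gamma, log_lam_lo, log_lam_hi, log_lam_c=-2.55):
    """Integrate p(lambda) = A (lambda/lambda_c)^gamma over
    d(log10 lambda) between the given limits to estimate the fraction
    of galaxies hosting an AGN in that accretion-rate range."""
    if abs(gamma) < 1e-12:
        # flat distribution: probability times the width of the range
        return 10.0 ** log_a * (log_lam_hi - log_lam_lo)
    ln10 = math.log(10.0)
    def antideriv(log_lam):
        u = log_lam - log_lam_c
        return 10.0 ** (log_a + gamma * u) / (gamma * ln10)
    return antideriv(log_lam_hi) - antideriv(log_lam_lo)
```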
This enhancement of AGN fraction in star-forming galaxies is also reflected in the bottom panel of figure 7. The AGN fraction rises from 0.7% to 3.8% with increasing log10(SFR/SFR_MS). A similar positive correlation has been observed between average black hole accretion rate and SFR in samples at higher redshifts (e.g. Chen et al. 2013; Yang et al. 2017; Pouliasis et al. 2022). Both redshift and log10(SFR/SFR_MS) appear to drive strong increases in AGN fraction. To disentangle these effects, we divided the galaxy and AGN populations and re-plotted the AGN fractions as a function of log10(SFR/SFR_MS) for these new samples. It is clear from figure 8 that the AGN fraction increases with SFR throughout the sample. Whether split at the median mass (log10(M_*/M_⊙) = 10.89) in the top row or the median redshift (z = 0.11) in the bottom row, both trends show a systematic increase in fraction and a steepening gradient between the low and high value bins. The systematic increase between the low and high redshift bins is expected given the observed trend in figure 7. Similarly, splitting the sample by mass will shift the average redshift of each bin, so we would expect an increase in the average fraction in the higher mass bin driven by the increased redshift. To confirm this, we consolidated each set of fractions from the low and high mass bins into quiescent and star-forming classifications at each panel's median redshift (0.08 for lower mass galaxies, 0.15 for higher mass galaxies). We found that the increase between these consolidated fractions was consistent with the overall AGN fraction increase with redshift. We also see a steepening gradient between the low and high value bins, which is likely due to changing combinations of star-forming and quiescent hosts. As outlined previously, star formation is known to change with both stellar mass and redshift.
So, increasing the average redshift of the bin will change the sample's star-forming properties: the proportion of AGN in star-forming galaxies will increase, that in quiescent galaxies will decrease, and this produces the observed steeper increase. Whilst this analysis has not been able to precisely disentangle the effects of redshift and log10(SFR/SFR_MS), we have confirmed the effects seen previously. Stellar mass has little effect on the AGN fraction in the nearby Universe, and anything we might see is consistent with a redshift-driven increase. In addition, we have shown that the star-formation-driven enhancement is present at fixed stellar masses and redshifts throughout the nearby Universe. With this analysis, we have shown that the incidence of AGN activity within given accretion limits in the nearby Universe is driven both by the host galaxy's redshift and log10(SFR/SFR_MS). What connects these quantities is the availability of cold gas: the level of star formation is strongly correlated with its availability in a given galaxy, and its abundance is thought to increase at earlier cosmic times (Mullaney et al. 2012b; Popping et al. 2012; Vito et al. 2014). Thus, our results highlight a strong connection between the availability of cold gas and the level of AGN activity. However, there is still a significant population of AGN within quiescent galaxies. Whilst diminished relative to star-forming galaxies, this population of quiescent AGN implies that another fuelling mechanism is possible, perhaps stellar mass loss, as theorised in Kauffmann & Heckman (2009).

8 SUMMARY & CONCLUSIONS

We have investigated how star formation influences the distribution of AGN in the nearby Universe. For this analysis, we created two sets of definitions designed to track changing rates of star formation whilst accounting for redshift and mass driven enhancements.
The first definition split the sample into star-forming and quiescent populations; the second split the sample into even more refined bins of star formation relative to the galactic main sequence. We then applied these star-forming classifications to the BPT-selected AGN sample. Only 72% of the X-ray selected sample were identified as AGN using the BPT diagnostic. In particular, it is much less effective in quiescent hosts: it identified nearly all star-forming AGN compared to only 50% of the quiescent X-ray AGN. The other half of the quiescent sample was made up of higher mass galaxies which did not have sufficiently significant emission line activity to make it onto the BPT diagnostic. We believe this may be due to stellar absorption lines hiding the weaker optical emission lines from low levels of AGN activity. This is further evidence to suggest that the BPT diagram is not effective at identifying weakly accreting AGN, especially in quiescent host galaxies. Next, we investigated how star-forming classification affects the rate at which black holes accrete material. We found that, across stellar mass, AGN in star-forming galaxies generally have much higher accretion rates than their quiescent counterparts, with a mean difference between star-forming classifications of Δ log10 λ_sBHAR ∼ 0.5. This effect was most pronounced at the very highest stellar masses (log10 M_*/M_⊙ ≳ 11). We built on the probability distribution analysis from Birchall et al. (2020, 2022) by applying the star-forming classifications to highlight how they affect the AGN population in the nearby Universe. The strongest star-formation-driven changes were seen in the redshift-binned distributions. Quiescent AGN showed a significant steepening with redshift, appearing to favour much lower accretion rate activity. Star-forming galaxies appeared to show the opposite trend, with flatter distributions at higher redshifts.
This implies star formation facilitates more active accretion at higher redshifts. Finally, by integrating under these distributions we could calculate the fraction of galaxies in the nearby Universe hosting AGN and determine the effect of star formation. Reassuringly, both the AGN fraction trends with stellar mass and redshift seen in Birchall et al. (2022) are recreated when split by star-forming classification. There is little change in AGN fraction with stellar mass, and a noticeable increase with redshift. We found that star formation increases the incidence of AGN by a factor of 2 at a > 3.5σ significance. This enhancement is also seen when binning the AGN fraction as a function of log10(SFR/SFR_MS). However, there is still a significant population of AGN in quiescent galaxies in the nearby Universe. In conclusion, we have shown that star formation has an effect on the AGN distribution in the nearby Universe by facilitating higher black hole accretion rates and increasing the probability of finding AGN activity in a given galaxy. However, AGN are still prevalent in quiescent galaxies, suggesting additional fuelling mechanisms, such as stellar mass loss, can also facilitate AGN activity.

In addition, this research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration et al. 2013). Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. KB would also like to thank F-X Pineau for his help with constructing the xmatch scripts, and Duncan Law-Green for the help accessing the Flix archive.

DATA AVAILABILITY STATEMENT

The data underlying this article have been derived from publicly available datasets: the MPA-JHU Catalogue (based on SDSS DR8) & 3XMM DR7.
Section 2 outlines where these catalogues are available and how the final sample was derived. The underlying data will also be shared on request to the corresponding author.

APPENDIX A: BEST-FIT COEFFICIENTS

Table A1 outlines the best-fit coefficients, and associated errors, for equation (7) to create every probability distribution shown in figure 6.

Table A1: Best-fit coefficients used to fit equation (7) to all probability distribution configurations, centred on log10 λ_c = −2.55. The columns give the normalisation (log10 A) and slope (γ) for each bin.

                             Quiescent AGN                 Star-forming AGN
log10 Stellar Mass (M_⊙)   log10 A         γ             log10 A         γ
8.00 - 9.00               −2.44 ± 1.27   −0.52 ± 0.04   −2.36 ± 0.33   −0.98 ± 0.58
9.50 - 10.00              −2.77 ± 0.80   −0.91 ± 0.25   −2.33 ± 0.31   −0.93 ± 0.37
10.00 - 10.50             −2.60 ± 0.23   −0.55 ± 0.36   −2.36 ± 0.14   −0.57 ± 0.19
10.50 - 11.00             −2.50 ± 0.11   −0.68 ± 0.15   −2.03 ± 0.07   −0.48 ± 0.11
11.00 - 11.50             −2.62 ± 0.14   −1.04 ± 0.15   −2.14 ± 0.15   −0.90 ± 0.21
11.50 - 12.00             −2.53 ± 0.00   −1.28 ± 0.02   −1.82 ± 0.14    0.00 ± 1.36
z                          log10 A         γ             log10 A         γ
0 - 0.

APPENDIX B: ADDITIONAL PROBABILITY DISTRIBUTIONS

Figures B1 and B2 show the probability distributions for star-forming and quiescent AGN, split by redshift and log10(SFR/SFR_MS). The remaining trends with λ_sBHAR are presented for the sake of transparency, to show the strength of our power law fits to the data. All the plots have the same form as figure 5.

This paper has been typeset from a TeX/LaTeX file prepared by the author.

Figure B1. Additional probability distributions used to calculate fits and associated error regions for the other host galaxy properties. This figure shows the probability of a galaxy hosting an AGN as a function of λ_sBHAR, split into bins of redshift. The star-forming AGN population is shown as blue stars, and the quiescent AGN population as red circles. As with figure 5, power laws (dashed lines) have been fit to the data in each panel and displayed alongside their 1σ uncertainty (pale region). These plots were constructed using the method outlined in section 6.1. Figure B2.
Additional probability distributions used to calculate fits and associated error regions for the other host galaxy properties. This figure shows the probability of a galaxy hosting an AGN as a function of λ_sBHAR, split into bins of log10(SFR/SFR_MS) (see §3.2 for more information on how this quantity was calculated). The star-forming AGN population is shown as blue stars, and the quiescent AGN population as red circles. As with figure 5, power laws (dashed lines) have been fit to the data in each panel and displayed alongside their 1σ uncertainty (pale region). These plots were constructed using the method outlined in section 6.1.
Extraction of DNA from face mask recovered from a kidnapping scene

Introduction

As technology advances on a daily basis, new ideologies arise in using substantial evidence collected from crime scenes; these substances range from biological materials such as blood, hair strands, saliva, mucus and tears to other biological materials [1]. DNA, which is present in every biological material, can often be available on surfaces handled at a crime scene. Due to the limited amount of DNA in these samples, forensic professionalism and extraction efficiency are very important if the samples are to be used as evidence in an investigation. Fabric (a face mask) was chosen as a source because of its frequent occurrence at everyday crime scenes [1]. DNA analysis has become the golden standard in many crime laboratories around the world. Deoxyribonucleic acid, commonly known as DNA, has become the "golden standard" for the identification of perpetrators at crime scenes [2]. This molecule contains the instructions necessary to create every type of cell in a person's body. Approximately 0.1% of DNA varies among people, and this 0.1% is the main focus of forensic DNA investigations. Due to DNA's abundance in the body, multiple fluids can be used as a source for DNA. Good sources of DNA include blood, saliva, and semen, often visible to the naked eye [3].
Applying science to the legal area is fundamentally one of the noble ideologies that will greatly assist in determining what took place, where it happened when it happened and who was involved in the scene [4]. It is not involved in, and will not determine why something happened. But rather focuses on when it happens what happens and who was directly involved in the act [5]. Forensic investigation plays a major role on evidence in unveiling physical evidence so that crime or civil con lict can be resolved. It is the duty of the forensic scientist to collaborate with court of law in translating the legal issues into an appropriate [4]. The scienti ic question, and to advise the judiciary on the capabilities and limitations of current techniques. Physical evidence (samples) collected from a crime scene are of a different kind, majorly samples that contain biological material (DNA) or samples that can have a stain of biological material (face mask, etc.) and the biological material can be extracted from it. In forensic science, natural scienti ic techniques can be Abstract Deoxyribonucleic acid (DNA) extraction has considerably evolved since it was initially performed back in 1869. It is the fi rst step required for many of the available downstream applications used in the fi eld of molecular biology and forensic science. Blood samples is one of the main body fl uid used to obtain DNA. This experiment used other body fl uids such as saliva, sweat tears and mucus. There are many diff erent protocols available to perform nucleic acid extraction on such samples. These methods vary from very basic manual protocols to more sophisticated methods included in automated DNA extraction protocols. This experiment used extraction kit (Zymo research). The DNA result from isolated saliva samples on the facemask range from 133.7, 213.6, 599.1 and 209.1 mg/ml. 
theoretically; such DNA is of much quantity and quality and can be used for forensic investigation when recovered from a crime scene. The DNA from isolated tears samples on the face mask ranges from 707.7, 202.5, 99.2, and 62.6 mg/ml. Theoretically, such DNA is of much quantity and quality and can be used for forensic investigation when recovered from a crime scene. The DNA from isolated tears samples on the face mask ranges from 615.3, 66.2, 78.5, and 68.2 mg/ml. theoretically, such DNA is of much quantity and quality and can be used for forensic investigation when recovered from a crime scene. Extracted DNA from saliva and sweat produced visible bands on agarose gel, mucous stain produce obscure band on agarose gel and the tears stain produce invisible bands. DNA from sweat satin, saliva stain, mucus stain and tears stain in face mask can be used as alternative for forensic investigation. applied to determine the state of a piece of physical evidence at the time of collection. Using the scienti ic method, inferences are made about how the evidence came to be in that state. These inferences then limit the events that may or may not have taken place in connection with said evidence. The law interprets elements of a crime; science contributes evidence helping to determine if an element is present or absent. [3]. The inferential part of the forensic result must be emphasized. The scienti ic result presented in court does not conclude if the suspect is guilty or innocent. Rather, forensic science only unfolds information on what may have transpired and link the act to who may have been involved. It does not assert whether the action was legal or illegal. In recent years, crime of different forms has been the major threat to life and property of Nigerians in Nigeria. Though some certain items such as metals and woolen materials are recovered from the crime scene and in some cases blood stains and saliva are found around the scene, yet the criminals are still not known. 
A study has shown that DNA, being the genetic material of an organism, is present on every material the organism comes in contact with, whether woolen, linen, or metal [2]. The unique variation of DNA from one individual to another makes it possible to differentiate between individuals by DNA typing or fingerprinting. Therefore, when samples bearing DNA are collected from a crime scene, it becomes possible to trace the individual whose DNA was found at the scene. The extraction of DNA from a face mask can thus be substantial evidence in proving whether a person is guilty of an alleged crime. When such materials are collected from a crime scene and DNA is extracted from them, the DNA is compared with that of the suspect. If the crime-scene DNA matches that of the suspect, it proves that the suspect used the face mask and can be used to link the suspect to the crime. If the DNA does not match, the suspect has no link to the crime unless other evidence connects them to it [5]. In recent years, investigations have depended on blood and hair strands from crime scenes for DNA fingerprinting, so other body fluids have been underused as sources of DNA for forensic investigation. This work is limited to DNA extraction, quantification, and agarose gel electrophoresis. PCR, UV-Vis absorption spectra, gene sequencing, and other molecular biology techniques are outside the scope of the research and therefore not captured.

Sample collection

Four people were invited to participate in the research; the eligibility criteria were healthy male and female adults aged 18 years and above. The purpose of the study was explained to them, along with how they were expected to participate. Each participant was given a face mask to wear for about one (1) to two (2) hours.
Participant A: underwent some form of rigorous exercise during the period in which the face mask was worn, such that sweat from the face soaked the mask. The face mask was carefully collected and soaked in a buffer inside a sample bottle to wash out the biological material (sweat), and the bottle was labeled for identification. Other samples (saliva, mucus, and tears) were also collected from the participant for comparison and likewise labeled for identification.

Participant B: was instructed to talk a lot while the face mask was worn, such that saliva was expectorated onto it. The face mask was carefully collected, and the part covering the mouth was cut out into a buffer in a sample bottle to wash out the biological material (saliva); the bottle was labeled for identification. Other samples (sweat, mucus, and tears) were also collected from the participant for comparison and likewise labeled for identification.

Participant C: was asked to cough for most of the time the mask was worn, in order to expectorate mucus onto it. The face mask was carefully collected, and the part covering the mouth was cut out into a buffer in a sample bottle to wash out the biological material (mucus); the bottle was labeled for identification. Other samples (sweat, saliva, and tears) were also collected from the participant for comparison and likewise labeled for identification.

Participant D: was asked to sniff methylated rob; this caused shedding of tears from the eyes onto the mask. The face mask was carefully collected, and the part covering the eyes was cut out into a buffer in a sample bottle to wash out the biological material (tears); the bottle was labeled for identification. Other samples (sweat, mucus, and saliva) were also collected from the participant for comparison and likewise labeled for identification.
DNA extraction

The samples were transferred into sample bottles for safety and labeled for easy identification, using numeric figures from 1 to 16 according to the number of samples collected. DNA was extracted from the samples following the Zymo Research extraction kit procedure. After extraction, the DNA was quantified using a NanoDrop, an instrument used to measure the quality, purity, and quantity of extracted DNA. Samples labeled one (1) to four (4) represent DNA from saliva, five (5) to eight (8) DNA from sweat, nine (9) to twelve (12) DNA from tears, and thirteen (13) to sixteen (16) DNA from mucus.

Tears samples

Table 3 below shows the DNA results from the isolation of tears stains from the sampled face masks, ranging over 707.7, 202.5, 99.2, and 62.6 mg/ml. Theoretically, such DNA is of fair quantity and quality (best when amplified using PCR) and can be used for forensic investigation when recovered from a crime scene. Table 4 below shows the DNA results from the isolation of mucus stains from the sampled face masks, ranging over 615.3, 66.2, 78.5, and 68.2 mg/ml. Theoretically, such DNA is of fair quantity and quality (best when amplified using PCR) and can be used for forensic investigation when recovered from a crime scene. Figure 1 shows the agarose gel electropherogram of DNA isolated from saliva, sweat, mucus, and tears. Due to partial dissolution, only some of the extracted DNA (the saliva and sweat DNA) produced visible bands; the lower-quality mucus and tears extracts showed invisible bands.

Agarose gel preparation

1 g of agarose (for a 1% gel) was weighed using an electronic balance (OHAUS) and dissolved in 100 ml of TAE buffer. The gel was melted in a microwave for 2 min to form a homogeneous solution and allowed to cool to about 60 °C, then 2 μl of ethidium bromide (gel stain) was added. The gel was cast in a gel tank (Scie-PLAS) with a 16-well comb and allowed to solidify.
The comb was removed gently to avoid distortion of the wells. The solidified gel was placed in a gel tank (Scie-PLAS), and the tank was filled with 1× TAE buffer until it submerged the gel.

Agarose gel electrophoresis

2 μl of gel loading dye (6× blue) was pipetted onto aluminium foil, 10 μl of each DNA sample was mixed with the 2 μl of loading dye, and the mixtures were loaded into the wells of the gel. The gel tank was connected to a power source, and the samples were allowed to separate in the gel for 30 minutes. DNA is negatively charged; thus the DNA in each sample separated and moved toward the positive electrode.

Table 1 below shows the DNA results from the isolation of saliva stains from the sampled face masks, ranging over 133.7, 213.6, 599.1, and 209.1 mg/ml. Theoretically, such DNA is of fair quantity and quality (best when amplified using PCR) and can be used for forensic investigation when recovered from a crime scene. Table 2 below shows the DNA results from the isolation of sweat stains from the sampled face masks, ranging over 133.2, 310.2, 253.3, and 85.2 mg/ml. Theoretically, such DNA is of fair quantity and quality (best when amplified using PCR) and can be used for forensic investigation when recovered from a crime scene.

In the agarose gel electropherogram result (Figure 1), the wells on the gel are numbered from 1 to 16. Four different sample types (saliva, sweat, tears, and mucus) were used in the experiment: wells one (1) to four (4) hold DNA from saliva, five (5) to eight (8) DNA from sweat, nine (9) to twelve (12) DNA from tears, and thirteen (13) to sixteen (16) DNA from mucus. Figure 2 shows the agarose gel electropherogram of DNA isolated from saliva, sweat, mucus, and tears; the gel (1 g of agarose) dissolved completely in the microwave, forming a homogeneous mixture well suited to the experiment, so the DNA migrated easily.
From the agarose gel electropherogram result (Figure 2), the wells are numbered from 1 to 16 with the same layout as in Figure 1: wells one (1) to four (4) hold DNA from saliva, five (5) to eight (8) DNA from sweat, nine (9) to twelve (12) DNA from tears, and thirteen (13) to sixteen (16) DNA from mucus.

Comparative purity

DNA purity is evaluated by measuring absorbance from 230 nm to 320 nm to detect possible contaminants. The most common purity measure is the ratio of the absorbance at 260 nm divided by the reading at 280 nm. Good-quality DNA will have an A260/A280 ratio of 1.7-2.0. A reading of 1.6 does not render the DNA unsuitable for any application, but lower ratios indicate that more contaminants are present. With background correction at 320 nm:

DNA purity (A260/A280) = (A260 reading - A320 reading) ÷ (A280 reading - A320 reading)

From the saliva samples, the purity of the extracted DNA ranged from 1.48 to 1.49. This is possibly due to a high amount of bacterial contaminants and other possible contaminants present in the saliva. In the sweat samples, the extracted DNA purity values were 1.46, 1.47, 1.53, and 1.58, indicating the presence of impurities or contaminants in the DNA extracted from sweat. The results from the tears samples were 1.37, 1.40, 1.62, and 1.72, indicating good purity in two samples (1.62 and 1.72) and a high level of impurity in the other two (1.37 and 1.40), possibly due to bacterial contaminants in the samples. The purity of the DNA from the mucus samples ranged over 1.78, 1.80, and 1.85, which indicates a high level of purity.

Unlike blood samples, studies on sweat, saliva, mucus, and tears stains have not been commonly performed, and many facts have not been disclosed, particularly for forensic identification. The amount of sweat, saliva, mucus, and tears stain attached to the face mask was limited and dry, which directly affected the level of DNA produced. The lower DNA level from these stains affected the agarose gel electrophoresis visualization, where the DNA bands of the mucus and tears stains looked obscure. It can therefore be concluded that, as trace evidence, sweat, saliva, mucus, and tears stains can be very valuable as an alternative means of identification.

From the results above, 11 samples produced bands of different forms: the saliva and sweat samples produced bold bands, while the mucus samples produced faint bands. However, the 4 tears samples from all the participants could not produce any band, as a result of the low quality of the DNA in them.

Conclusion

DNA from sweat, saliva, mucus, and tears stains can be used as an alternative for forensic identification. In general, DNA isolated from such stains on a face mask may have a lower DNA level or quantity because of the minute amount of body fluid present in the mask.

Recommendation

Extracted DNA from mucus stains produces an obscure band on agarose gel, and tears stains produce invisible bands; hence, PCR techniques can be applied to amplify the DNA for better analysis in situations where those are the only samples collected from the crime scene.
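The background-corrected A260/A280 calculation described in the Comparative purity section can be written as a small Python helper. This is a minimal sketch; the absorbance values used below are illustrative, not the paper's measurements.

```python
def dna_purity(a260: float, a280: float, a320: float = 0.0) -> float:
    """A260/A280 purity ratio with A320 background correction,
    as given in the Comparative purity section."""
    return (a260 - a320) / (a280 - a320)

def classify_purity(ratio: float) -> str:
    """Label a ratio against the 1.7-2.0 'good quality' window."""
    if 1.7 <= ratio <= 2.0:
        return "good"
    return "contaminated" if ratio < 1.7 else "check RNA/contamination"

# Illustrative absorbance readings (not from the paper):
ratio = dna_purity(a260=0.90, a280=0.50, a320=0.05)
print(round(ratio, 2), classify_purity(ratio))  # 1.89 good
```

The A320 subtraction removes turbidity/background so that the 260/280 comparison reflects nucleic acid versus protein absorbance rather than scattering.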
2022-01-26T16:06:20.050Z
2022-01-07T00:00:00.000
{ "year": 2022, "sha1": "b4182452cf18082b4d29f3a12a129d7d7dcefd93", "oa_license": null, "oa_url": "https://doi.org/10.29328/journal.jfsr.1001029", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "d376e49d7a74691cfd70b6b18d766cea3ab811cb", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
259896549
pes2o/s2orc
v3-fos-license
Blinkverse: A Database of Fast Radio Bursts

The volume of research on fast radio burst (FRB) observations has been growing dramatically. To facilitate systematic analysis of the FRB population, we established a database platform, Blinkverse (https://blinkverse.alkaidos.cn), as a central inventory of FRBs from various observatories with published properties, particularly dynamic spectra from FAST, CHIME, GBT, Arecibo, etc. Blinkverse thus not only forms a superset of FRBCAT, TNS, and CHIME/FRB, but also provides convenient access to thousands of FRB dynamic spectra from FAST, some of which were not available before. Blinkverse is regularly maintained and will be updatable by external users in the future. Data entries for FRBs can be retrieved through parameter searches on FRB location, fluence, etc., and their logical combinations. Interactive visualization is built into the platform. We analyzed the energy distribution, periodicity, and classification of FRBs based on data downloaded from Blinkverse; the energy distributions of repeaters and non-repeaters are found to be distinct from one another.

Introduction

A fast radio burst (FRB) is a type of bright single pulse at radio frequencies, releasing enormous energy within a millisecond duration. Among the over 700 FRBs reported since the first discovery in 2007 [1], the majority have been found to be one-offs; the number of known repeating sources has reached 63. With the rapid increase in the number of FRB discoveries in recent years, many remarkable breakthroughs have been made in FRB research, concerning, for example, repeaters [2][3][4][5], burst characteristics [6,7], ambient environments [8][9][10][11][12][13], and host galaxies [14][15][16][17]. The growing number of FRB discoveries also places higher demands on data collation and analysis. Several databases are currently available for a range of FRB properties.
The Transient Name Server (TNS) 1 is the official IAU mechanism for reporting new astronomical transients (ATs), including FRBs. The Fast Radio Burst Catalogue (FRBCAT) 2 is a dedicated repository for FRB properties but has not been actively updated since July 2020 [18]; its contents have been migrated to the TNS. FRBSTATS 3 provides a platform for recording FRB bursts and a visualization interface for plotting parameter distributions [19]. In addition, a clustering method, density-based spatial clustering of applications with noise (DBSCAN), has been applied on the FRBSTATS platform to distinguish repeaters from non-repeaters automatically. Compared with the databases mentioned above, our Blinkverse database possesses the most comprehensive information on published FRBs and a dynamic visualization platform for fruitful statistical results. Researchers can obtain target data by constraining one or more parameters of an FRB, so the search capability of Blinkverse is stronger than that of previous databases. In the following sections, the architecture of the database platform is introduced in Section 2, including the data description and data availability; the advantages of Blinkverse over other databases are then listed in Table 2. Section 3 provides several examples of data analysis using data readily downloaded from our database, covering energy distributions, period analysis, and classification of FRBs. Concluding remarks are provided in Section 4.

Platform Architecture

MongoDB, a multi-cloud database service, offers a suitable NoSQL database to serve as the catalogue infrastructure for the platform. The platform is separated into three modules (Figure 1): overview, data description, and data availability. The statistical charts on the homepage give an overall overview of the data in Blinkverse, and the schema lists and parameter descriptions are available in the data description.
The display format of the data is divided into two types, FRB source information and pulse information, with the repeated bursts in the pulse information listed separately. The database is under active development and will be improved over the next 2-3 years, with sufficient investigation, to make it more useful. Figure 2 displays the homepage with a statistical overview of the observed events. A celestial map is displayed in the middle of this page, on which all recorded FRBs are marked: white dots for non-repeaters and red dots for repeaters. Interactive operation allows users to click one of the FRBs on the map to obtain the information they want; an individual visualization page (Figure 3) serves the same purpose for a chosen event.

Overview

The number of FRB discoveries is displayed below the celestial map. A total of 735 FRBs covers 63 repeaters and 672 non-repeaters. Over 500 FRBs have been discovered at 400-800 MHz by the Canadian Hydrogen Intensity Mapping Experiment (CHIME), with its large collecting area and wide field of view, since 2018 [20], whereas most FRB discoveries before 2018 were made with the Parkes radio telescope [21]. A pie chart in Figure 2 displays the count of pulse detections of FRBs from various telescopes. The total number of FRB bursts reaches ~5600, most of which were contributed by the repeaters FRB 20121102A and FRB 20201124A, detected by the highly sensitive Five-hundred-meter Aperture Spherical radio Telescope (FAST) [5,22].

Data Description and Data Availability

We reviewed multiple observational papers and relevant database websites to identify common parameters used to characterize the properties of FRBs [2-5,22,23]. All the data in the database were obtained from various studies in the literature and from datasets; the relevant links to the references for each burst are provided on Blinkverse.
Based on our findings, we proposed new schema lists (see Table 1) with improved descriptions of various aspects of FRBs. Two types of schema list have been created to record information on the burst properties and the positions of FRB sources. We may add or modify parameters if necessary in the future. Figure 4 shows the generic search options and a portion of the FRB properties. The generic search for FRB sources covers telescope, observational date, FRB name, or position. In addition, we provide an advanced search that supports logical relationship statements for convenient searching on specified parameters or combinations of parameters. Based on the FRB sources already retrieved, users can further select the desired parameters to download. Obtaining data from the database is simple and flexible: a download button is provided on the website, and users can choose among the parameters we provide (see Table 1) and click the button to obtain the data in CSV format. Additionally, we provide an online plotting service, where choosing the parameters for the x-axis and y-axis facilitates drawing curves or scatterplots online. The burst properties and positions of FRB sources are stored in the database using the name of the FRB (for example, "FRB20121102A") as a connector. The name of each FRB is marked with a label of "REPEAT" or "NON_REPEAT" in the database to distinguish between repeaters and non-repeaters. A separate page is designed for repeaters due to their significance in research; users can click on a repeater FRB and see all of its individual bursts and dynamic spectra.

(Table 1 footnotes) Reference: URL of the burst discovery paper where the event was first reported. 1 MJD is corrected to the solar system barycenter and referenced to infinite frequency.
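A CSV downloaded this way can be loaded and filtered with pandas. The sketch below is illustrative: the column names (`name`, `dm`, `fluence`, `repeater`) and the example rows are assumptions, not Blinkverse's documented export schema.

```python
import io
import pandas as pd

# Stand-in for a CSV exported from the database (columns and values
# are illustrative, not the platform's actual schema).
csv_text = """name,dm,fluence,repeater
FRB20121102A,557.0,0.4,REPEAT
FRB20201124A,413.5,1.2,REPEAT
FRB20180301A,536.0,0.6,NON_REPEAT
"""

df = pd.read_csv(io.StringIO(csv_text))

# A logical combination of parameter constraints, in the spirit of the
# platform's advanced search: DM > 450 AND labelled as a repeater.
hits = df[(df["dm"] > 450) & (df["repeater"] == "REPEAT")]
print(hits["name"].tolist())  # ['FRB20121102A']
```

The same boolean-mask pattern extends to any combination of columns once the real export is loaded.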
Considering that the arrival time of a pulse is influenced by the motion of the Earth, the arrival times are transformed to the solar system barycenter using the software pintbary [28]. 2 The unit of energy is 10^37 erg. 3 [27]. We record the data from the references without any modification; the value of DM is left empty when these parameters are absent in the literature.

Comparison with Other Data Websites

Table 2 compares our Blinkverse platform with other main data websites. The Blinkverse database is a comprehensive platform and includes information from multiple observation devices and multiple bands, FRB host galaxies, corresponding dynamic spectrum charts, diverse visualization, and a simplified interface. An explanation of Table 2 is provided below:

Telescope: The databases in Table 2, except CHIME/FRB, contain a large amount of data obtained by various telescopes. CHIME/FRB is a special database that only preserves the FRB data obtained by the CHIME telescope.

Host galaxy: All the databases record "ra" and "dec" to describe the position of an FRB. FRBCAT additionally calculates and records "redshift".

Dynamic spectra: We provide an interactive interface to show the dynamic spectra of FRB bursts. For a specific FRB source, the burst spectrum from every epoch can be readily queried and presented. This offers users a much more readable data visualization platform; as a consequence, users can identify each spectrum easily and conduct more efficient data analysis. This feature surpasses other databases and studies in the literature, where all the spectra are usually presented only in a single collective figure.

Search: We provide a generic search for FRB sources covering telescope, observational date, FRB name, or position, and an advanced search that supports logical relationship statements. Conversely, only FRB names can be searched in CHIME/FRB and FRBCAT.
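The logical relationship statements of such an advanced search can be mimicked client-side over plain records. This is a minimal sketch, not Blinkverse's actual API; the field names and example values are assumptions.

```python
# Each FRB record is a plain dict; field names here are illustrative.
frbs = [
    {"name": "FRB20121102A", "telescope": "FAST", "dm": 557.0, "repeater": True},
    {"name": "FRB20180301A", "telescope": "CHIME", "dm": 536.0, "repeater": False},
    {"name": "FRB20201124A", "telescope": "FAST", "dm": 413.5, "repeater": True},
]

def advanced_search(records, *predicates, combine=all):
    """Filter records by a logical combination of predicates.

    combine=all gives AND semantics; combine=any gives OR semantics.
    """
    return [r for r in records if combine(p(r) for p in predicates)]

# AND: repeaters observed by FAST.
hits = advanced_search(frbs,
                       lambda r: r["telescope"] == "FAST",
                       lambda r: r["repeater"])
print([r["name"] for r in hits])  # ['FRB20121102A', 'FRB20201124A']

# OR: high DM or non-repeater.
hits_or = advanced_search(frbs,
                          lambda r: r["dm"] > 550,
                          lambda r: not r["repeater"],
                          combine=any)
print([r["name"] for r in hits_or])  # ['FRB20121102A', 'FRB20180301A']
```

Passing `all` or `any` as the combiner keeps the AND/OR logic declarative, which mirrors how a query backend composes parameter constraints.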
Visualization: TNS, FRBCAT, FRBSTATS, and CHIME/FRB only provide lists of FRB bursts without any visualization. Blinkverse has an interactive visualization interface; the positions of FRBs are marked on the celestial sphere.

Update: The frequency of updates to the other databases is not regular, in our experience. In contrast, Blinkverse is updated regularly every week.

Download: Download formats supported by the database.

Examples of Data Mining with Blinkverse

Users can easily access data from the Blinkverse database via the REST API (an API conforming to the design principles of REST) using the requests module. The well-defined data structure enables straightforward data analysis. Users can download the data from the website and read it into a DataFrame using pandas, or directly obtain DataFrame-format data by calling the API with the provided sample code. Here, we present several simple examples of data analysis. Upon obtaining the data, the first step is to check their distribution. Taking energy as an example, we replicated the energy distribution shown in [5] for FRB 20121102A using seaborn.displot. By filtering bursts from source FRB 20121102A with MJD > 58724, we can show the bimodal energy distribution, as in Figure 5. Furthermore, the Blinkverse database contains relatively complete and long-term burst information, making the search for long FRB periods possible and easy. Again using FRB 20121102A as an example, we extracted the MJD column from the data filtered by source and used scipy.signal.lombscargle to calculate the periodogram power over periods in the range of 2-365 days. In Figure 6, we reproduce the 157-day period of FRB 20121102A [29,30]. The Blinkverse database records various properties for each burst, making multi-parameter analysis and FRB classification possible. We selected bursts having DM, Flux, Fluence, Width, and Freq, and attempted to classify FRBs using these five parameters.
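The scipy.signal.lombscargle period scan described above can be sketched as follows. The data here are synthetic, with a 157-day cycle injected into irregularly sampled epochs; this is not Blinkverse data.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(42)

# Unevenly sampled observing epochs over ~3 years (MJD offsets).
t = np.sort(rng.uniform(0.0, 1100.0, size=300))

# Synthetic activity signal with an injected 157-day cycle plus noise.
true_period = 157.0
y = np.sin(2.0 * np.pi * t / true_period) + 0.3 * rng.normal(size=t.size)
y -= y.mean()  # lombscargle expects roughly zero-mean input

# Scan trial periods from 2 to 365 days, as in the text;
# lombscargle takes angular frequencies.
periods = np.linspace(2.0, 365.0, 4000)
power = lombscargle(t, y, 2.0 * np.pi / periods)

best = periods[np.argmax(power)]
print(f"best-fit period ~ {best:.1f} days")
```

For real burst catalogs one would first bin arrival times into an activity-rate series; astropy.timeseries.LombScargle offers a more featureful alternative with normalization and false-alarm estimates built in.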
Here, we used two methods, decision trees and random forests, to demonstrate the classification of FRBs. Decision trees are a supervised learning method that uses a tree-like structure to represent decision rules for solving classification problems [31]. Random forests are an ensemble learning algorithm composed of multiple decision trees; the basic idea is to construct different decision trees by randomly selecting samples and features, and then vote on or average the classification results of each tree to obtain the final prediction [32]. Random forests have high accuracy and generalization performance. The confusion matrix is a table used to evaluate the performance of a classification model, showing the number of correct and incorrect predictions of the model for each class. The confusion matrix after fitting the data with random forests is shown in Figure 7, indicating that only a small number of bursts are misclassified; the majority of bursts are correctly predicted by the model. In addition, examining the parameter importance in the random forest model shows that Bandwidth and Fluence contribute the most to FRB classification. This is consistent with previous research indicating that non-repeating FRBs are typically brighter and have wider bandwidth than repeating FRBs [33][34][35][36][37]. As decision trees are prone to overfitting, we only used a two-level decision tree to classify FRBs (Figure 8), and similarly found that Bandwidth was the most important parameter for distinguishing between repeating and non-repeating FRBs. We also calculated burst energies to compare the repeaters and non-repeaters. The isotropic equivalent burst energy is calculated following the equation

E = 4πD^2Fν / (1 + z),

where z is the redshift; if the redshift is measured from the emission lines detected in the high-S/N LRIS spectrum, the z-value is used to calculate the luminosity distance (D) adopting the standard Planck cosmological model [38].
If the redshift is not measured, the distance and redshift can be calculated using the YMW16 electron density model [27]. Here F = S_ν W_eq is the specific fluence, S_ν is the peak specific flux, and ν is the observed frequency of each pulse. We calculated the energy distributions of repeating and non-repeating bursts separately, as shown in Figure 9. Using the K-S test, we obtained a p-value of 0.0097, which is less than 0.05, indicating that the distributions of the two groups are different. Consistent with CHIME observations, repeater bursts have a longer duration and are narrower in bandwidth than non-repeater bursts [39]. The differences between the two groups can be verified by several parameters.

Conclusions

We have developed a comprehensive open-access FRB database named Blinkverse. The main characteristics of Blinkverse include the following:
(1) Blinkverse has 30 parameters, such as fluence, frequency, energy, polarization, etc. (see Table 1), which are more comprehensive than those in FRBCAT.
(2) Blinkverse has an interactive visualization interface that TNS, CHIME/FRB, and FRBSTATS do not have. The positions of FRBs are marked on the celestial sphere, and users can click on the map to obtain sources and their parameters.
(3) FRB sources can be retrieved through Blinkverse based on parameter searches and their logical combinations, making it more versatile and accessible than TNS.
(4) Blinkverse is updated weekly.
(5) Blinkverse facilitates systematic analysis of the FRB population and its multi-parameter characteristics. As an example, we utilized Blinkverse to find that the energy distributions of repeaters and single events are distinct from each other.

Data Availability Statement: The data presented in this study are openly available in major databases such as https://www.chime-frb.ca/ for CHIME/FRB and https://www.wis-tns.org/ for reported FRBs, and in multiple observational papers.
The relevant links to the references of each burst are provided on Blinkverse.

Acknowledgments: We thank the researchers who have contributed to the study of fast radio burst observations.

Conflicts of Interest: The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
2023-07-15T15:26:12.620Z
2023-07-11T00:00:00.000
{ "year": 2023, "sha1": "b1cb0f23d26a9f2a01db7597adedc90e697af5bb", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2218-1997/9/7/330/pdf?version=1689066417", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "80a87bdaaaf86ca43818939b6cdeb9f051460efb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
457124
pes2o/s2orc
v3-fos-license
Axial torsion as a rare and unusual complication of a Meckel's diverticulum: a case report and review of the literature Introduction In 1809, Johann Friedrich Meckel described the embryology of a small bowel diverticulum, which now bears his name. Meckel's diverticulum is the most common congenital abnormality of the gastrointestinal tract, with a prevalence ranging from 1% to 4% of the population. The majority are clinically silent and are incidentally identified at surgery or at autopsy. The lifetime risk of complications is estimated at 4%, with most of these complications occurring in adults. It is these cases that can cause problems for the clinician, as the diagnosis can be elusive and the consequences extremely serious. Case presentation We present the case of a 68-year-old Caucasian man with axial torsion of a Meckel's diverticulum around its base, a rare complication. He presented with acute, severe abdominal pain, and a clinical diagnosis of perforated acute appendicitis was made. Laparotomy revealed a torted Meckel's diverticulum with distal necrosis and perforation, which was resected. His recovery was uncomplicated, and he was discharged to home six days post-operatively. Conclusion Torsion is an extremely rare complication of Meckel's diverticulum. Its presentation can be elusive, and it can mimic a number of different, more common intra-abdominal pathologies. Imaging appears to be an unreliable diagnostic tool, and the diagnosis is usually made intra-operatively. Factors pre-disposing these patients to axial torsion of Meckel's diverticulum include the presence of mesodiverticular bands, a narrow base, excessive length, and associated neoplastic growth or inflammation of the diverticulum. The importance of searching for a diseased Meckel's diverticulum at laparotomy in appropriate circumstances is highlighted. Once identified, prompt surgical excision generally leads to an uncomplicated recovery. 
Introduction
Johann Friedrich Meckel first described the embryological origin of congenital diverticulum of the mid-gut in 1809 [1]. Meckel's diverticulum (MD) results from incomplete obliteration of the most proximal portion of the vitelline or omphalo-mesenteric duct occurring during weeks five to seven of fetal development [2]. It is thought that the terminal band represents an aberration in the developmental vitelline arteries, which in turn arise from the superior mesenteric or the ileocolic artery [3]. This fibrous band connects the diverticulum to the umbilicus [4]. Total failure of closure can result in an umbilical fecal fistula. Proximal ductal closure can lead to an umbilical sinus, whereas distal closure leads to MD [5]. Seventy-four percent of MD cases terminate with a blind distal end [5]. Histologically, all four intestinal layers are present within MD, and the mucosa may contain ectopic gastric, pancreatic, jejunal, or duodenal epithelium in up to 50% of specimens [5,6]. MD is invariably found on the anti-mesenteric border of the ileum, with 90% located within 90 cm of the ileocecal valve [2]. Its size is also variable, with the majority being short and wide-mouthed, with a mean length of 2.9 cm and a mean width of 1.9 cm, which is why it is sometimes called an ileal appendix [7]. Giant MD are defined as those larger than 5 cm, with one recorded specimen measuring 16 cm × 4 cm [2]. MD is more often diagnosed in men, as they are more prone to complications [1]. The most common childhood complication is rectal bleeding due to ileal peptic ulceration secondary to ectopic gastric mucosa [7,8]. Intestinal obstruction is the more common presentation in adults, caused by either intussusception or small bowel volvulus around a diverticular band anchored to the anterior abdominal wall. Other common complications include acute inflammation leading to perforation and hemorrhage [1].
Rarer complications include MD perforation with foreign bodies, strangulation in Littré's hernia, primary neoplasms, or vesicodiverticular fistulae [7,9]. Axial torsion of MD is an extremely rare complication [1,10]. Torsion of MD is the result of axial twisting around its base. This can occur around a persistent mesodiverticular band or with an absent band and a free-ended diverticulum. The exact mechanism for this is unclear. The degree of torsion varies and can compromise diverticular circulation, leading to necrosis and perforation [2].

Case presentation
A 68-year-old Caucasian man presented to our hospital with acute, severe abdominal pain. An examination of the patient revealed that he was septic and had a distended abdomen with rebound tenderness in the hypogastrium and the right iliac fossa. His rectal examination was unremarkable. His blood test revealed a raised white cell count, 15.4 × 10³/μl, and a high C-reactive protein level at 208 mg/L. The patient had normal renal function and a normal hemoglobin level. An abdominal radiograph revealed dilated small bowel loops, and a clinical diagnosis of perforated acute appendicitis was made. No other pre-operative investigations were carried out, and following fluid resuscitation, a laparoscopy was performed. Laparoscopy revealed purulent fluid within the pelvis. The appendix could not be visualized, but the periappendicular region appeared normal. The laparoscopy was converted to a laparotomy. Surgical exploration revealed a torted MD with distal necrosis and perforation. The necrosed tip of the diverticulum was adherent to the adjacent mesentery (Figure 1). The appendix, the rest of the bowel, and the viscera appeared normal. The twisted MD was resected along with an 8 cm flange of ileum that was encompassed within the vascular territory of the inflamed, unhealthy, and friable mesentery.
An end-to-end seromuscular, single-layered anastomosis using a 4-0 synthetic absorbable suture was performed to restore the continuity of the small bowel. Thorough washout of the peritoneal cavity was performed, and a pelvic drain was inserted. The patient's recovery was uncomplicated, and he was discharged to home six days post-operatively with routine follow-up.

Discussion
This case report presents the unusual case of torsion of MD. By reviewing the previous literature, we aim to identify the possible etiology, main clinical features, appropriate investigations, and operative management associated with this variant. The etiology of axial torsion of MD remains unclear. On the basis of the available literature, we have identified several risk factors. Although primary neoplasms arising within MD are rare, representing less than 1% of cases [11], they may be a potential risk factor. A large review of 1605 cases of complications of MD identified only 24 cases [9]. A variety of benign and malignant histological types have been reported, including leiomyoma, fibroma, hemangioma, neurofibroma, carcinoid tumor, adenocarcinoma, fibrosarcoma, and leiomyosarcoma [11]. Benign lesions within MD, such as lipomas, have also been recognized as a potential cause of torsion [12]. Complications associated with this presentation include intussusception, with the tumor as the lead point, mechanical intestinal obstruction, volvulus, inflammation, and axial torsion [13]. Fibrous vitelline bands may exist and connect the MD to the abdominal wall, increasing the chance of torsion [5]. An increase in diverticular length and the size of the base is an important predisposition for all types of complications [14]. The larger and longer the MD, the greater the risk of torsion [2]. This risk is increased further if the MD has a narrow neck; torsion is less likely around a wider neck [14,15].
Pain is always a presenting feature of a torted MD but is most frequently localized to the right lower quadrant [16]. Pain duration may range from 24 hours of colicky episodic pain to three years of intermittent pain. The patient described by Tan and Zheng [14] was discovered to have a giant MD, which was thought to be causing repeated episodes of torsion and ischemia during this time. The pre-operative diagnosis of MD is rarely considered [4]. Common incorrect diagnoses have included appendicitis [17], small bowel obstruction, cholecystitis, or an amoebic liver abscess. The latter case, reported by Webster [18], represents a case of an MD that was fixed within a sub-phrenic location. The mobility of MD can therefore determine its clinical features, which vary with its position within the abdomen. Therefore, it can also make radiological investigation confusing. When an appendix found at surgery is insufficiently inflamed to account for the clinical picture, further abdominal exploration is important [16]. Because of its various forms of presentation and unreliable imaging, torsion of MD is frequently misdiagnosed. Special investigations appear to have little value in the diagnosis of acute MD complications. Abdominal radiographs are usually normal but may reveal an ileus or perforation [4]. Less common radiographic appearances have included gas-filled diverticula being mistaken for emphysematous cholecystitis, intussusception in infants, and even a report of MD containing calculi simulating gallstones [8]. Ultrasound may exclude intussusception, which can avoid unnecessary interventions such as attempts at reduction by the use of enemas. The MD appears similar to the bowel, with a layered wall; however, when torted, it mimics a cystic, tube-like, non-peristaltic structure [8]. The major differential diagnosis is acute appendicitis. A larger size and a location far from the ileocecal region would favor the diagnosis of axial MD torsion [8].
Computed tomographic scans may also be misleading, as described in case reports of a torted MD being mistaken for a loculated cystic pelvic mass [3,19]. Appendicitis is the main pre-operative diagnosis, while other diagnoses include small bowel obstruction, acute cholecystitis, and liver abscess [2,18,20]. Macroscopic intra-operative observations have been reported as torsion, ischemic appearance, hemorrhagic, gangrenous, and perforated with purulent peritonitis [10]. A further observation from the previous literature is that the degree of torsion is inversely proportional to the viability of the MD. In cases where there is a greater degree of torsion, there is also a greater vascular compromise to the MD [2]. This risks infarction and perforation, which are associated with greater morbidity. The post-operative period may be complicated by intra-abdominal abscess or either clinical or microscopic evidence of lower gastrointestinal bleeding [10,20]. The management of symptomatic MD is surgical resection. A wedge resection of the MD is generally carried out, and occasionally a segment of ileum is resected with end-to-end anastomosis [7]. Diverticulectomy for MD found incidentally has been criticized, as an estimated 800 asymptomatic resections would be required to prevent complications in a single patient [5]. However, if the MD is left intact, any fibrous bands attached to it must be excised to prevent any future torsion or obstruction [5].

Conclusion
In summary, this case report describes a patient with torsion of MD. Imaging appears to be unreliable in the detection of torted MD, and the diagnosis is usually made intra-operatively. Major risk factors for torsion appear to include an increased size of the MD with a narrow base, potentially compromising blood supply and leading to gangrene, the presence of a fibrous mesodiverticular band, and the rare presence of neoplasm.
The importance of suspecting MD pathology in the differential diagnosis and its confirmation at laparotomy has been highlighted. Once identified, prompt surgical excision generally leads to an uncomplicated recovery.

Consent
Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
Improved cytocompatibility and antibacterial properties of zinc-substituted brushite bone cement based on β-tricalcium phosphate

For bone replacement materials, osteoconductive, osteoinductive, and osteogenic properties are desired. Bacterial resistance and the need for new antibacterial strategies stand among the most challenging tasks of modern medicine. In this work, brushite cements based on powders of zinc (Zn) (1.4 wt%) substituted tricalcium phosphate (β-TCP) and non-substituted β-TCP were prepared and investigated. Their initial and final phase composition, setting time, morphology, pH evolution, and compressive strength are reported. After soaking for 60 days in physiological solution, the cements transformed into a mixture of brushite and hydroxyapatite. Antibacterial activity of the cements against Enterococcus faecium, Escherichia coli, and Pseudomonas aeruginosa bacterial strains was attested. The absence of cytotoxicity of the cements was proved for murine fibroblast NCTC L929 cells. Moreover, the cell viability on the β-TCP cement containing Zn²⁺ ions was 10% higher compared to the β-TCP cement without zinc. The developed cements are promising for applications in orthopedics and traumatology.

Introduction
Currently, the manufacture of bone implants is in high demand for treating bone defects in the human body [1]. Indeed, the incidence of bone deficiencies associated with fractures, surgical resections, and osteoporosis is rapidly growing [2]. Commonly, substitution materials for bone tissue rely on the use of autologous grafts, which represent the "gold standard" of bone substitution [3]. However, their use is limited by donor site morbidity, available quantities (especially when the bone defect is large), higher risk of infection, and other complications [4]. Allogeneic bone suffers from limited donor sources and safety concerns [5].
In order to overcome these limitations, synthetic bone materials have been developed as a valid alternative to autologous grafts, being able to reduce infections and complications. Moreover, there is an increasing need for clinically available and degradable implants in orthopedics [6]. Indeed, implants can be divided into two large groups: non-biodegradable metals and alloys (for example, titanium, cobalt and their alloys, steel, etc.) and biodegradable materials. Non-biodegradable metal implants have been used for more than 100 years for healing bone defects and trauma, but they need to be removed after healing and/or often have to be replaced by an additional surgery [7]. The most promising bone implant materials are biodegradable ones: more suitable in some clinical applications such as fracture treatment, they are resorbed in the human body and can be replaced over time by natural bone tissue, avoiding repeated surgical interventions. An ideal bone replacement material should be osteoinductive, osteoconductive, and osteogenic. The main concern for the use of synthetic materials for bone reconstruction is their ability to integrate and vascularize in the body. The most common types of calcium phosphates applied for synthetic bone implant materials are hydroxyapatite (HA), tricalcium phosphate (TCP), and biphasic calcium phosphates. The advantage of TCP is that it is a rapidly resorbing ceramic, in contrast to HA, which resorbs slowly. TCP can be used for the treatment of bone defects, dental materials, biomedical cements, implant coatings, and other applications [8–13]. Depending on the temperature, TCP exists in the form of three polymorphic modifications: β, α, and α′. Only two modifications are stable at room temperature: α-TCP and β-TCP. α-TCP crystallizes in a monoclinic crystal lattice, while β-TCP in a rhombohedral one.
Both polymorphic modifications are used in medicine, but due to differences in their physical and chemical properties, their application areas are different. In aqueous media, α-TCP hydrolysis leads to the formation of calcium-deficient HA. Due to the high reactivity of α-TCP, it is often used as the main powder component in bone cements. β-TCP is widely used to manufacture mono- or biphasic ceramics and can also be applied in bone cements [14–19]. Relevant clinical issues for biomedical implants concern osteointegration, inflammation, and infections. Biomaterial-associated infection is a devastating complication in orthopedic and trauma surgery that often leads to multiple surgeries, pain, functional losses, increased morbidity, and even mortality. Thus, in this scenario, the development of new biomaterials with improved antimicrobial properties that prevent bacterial adhesion, colonization, and proliferation is promising. Zinc (Zn) is an important biological trace element that plays a role in the normal growth and development of the skeleton. Its content in human bones (0.0126–0.0217 wt%) is about 28% of the total amount of Zn in the body (0.0030 wt% of Zn in tissues) [20]. A lack of Zn slows down the growth of bone mass and has a negative influence on bone metabolism [21]. Moreover, Zn deficiency is a risk factor for osteoporosis [22]. Zn is also an essential trace element for the promotion of osteoblast differentiation and proliferation [23], and it is a component of many metalloenzymes and proteins, including alkaline phosphatase [24]. An ideal bone replacement material should stimulate the growth of natural bone tissue. For this purpose, it is possible to use growth factors as additives to bone cements. However, growth factors are costly [25] and have undesirable side effects [26,27]. The addition of ions is often considered as an alternative to the use of growth factors.
They are not only significantly cheaper, but also reduce the probability of negative side effects on the body. Thus, taking into account the important role of Zn in the human body described above, the introduction of Zn²⁺ ions should have additional positive effects, stimulating the formation of native bone. Furthermore, Zn is well known to possess antimicrobial properties [28,29]. Therefore, introducing Zn into biomaterials could provide antimicrobial properties [30], avoiding infections during surgery. Previous research showed the possibility of replacing calcium in calcium phosphates with Zn ions. In [31], the preparation and physical-chemical characterization of Zn-substituted calcium phosphate powders was performed. The authors of [31] demonstrated that up to about 10 at.% (about 15 wt%) of Zn ions do not significantly influence the crystal lattice of HA and brushite, whereas the authors of [32] showed that Zn can replace calcium in the β-TCP structure up to about 10 at.%, with a preference for the Ca(5) site. Substituted brushite cements are good candidates for use in the restoration of bone and periodontal defects, and some issues regarding their clinical applications and trends are reviewed in [33]. The authors of [16] reported Zn-substituted brushite cements and investigated their antibacterial activity. It was demonstrated that 0.6 wt% of Zn substitution leads to an inhibitory effect toward Escherichia coli. It should be stressed that the suitable amount of the substitution ion is an important issue. The cytotoxicity with respect to osteoblastic and other cells depends on the concentration of Zn. The authors of [34] report that for β-Zn-TCP ceramics, a Zn content higher than 1.20 wt% causes cytotoxicity to osteoblastic (MC3T3-E1) cells. For α-Zn-TCP powder with a Zn concentration lower than 0.11 wt%, the cytocompatibility was approximately the same as that of the pure α-TCP powder [35].
According to [36], the optimum Zn content in α-Zn-TCP is around 0.03 wt%, and this material was able to stimulate more bone formation than pure α-TCP. The present study aimed to develop a Zn-substituted brushite cement with a simplified recipe and improved characteristics. A resorbable brushite bone cement with neutral acidity, suitable for surgical on-site use, is reported. As the initial powder, instead of β-TCP alone (as in the previously obtained brushite cement based on Zn-substituted TCP [16]), we used an equimolar mixture of β-TCP and monocalcium phosphate monohydrate (MCPM); as the hardening liquid, instead of a liquid based on single-substituted magnesium phosphate with orthophosphoric acid [16], we used an 8% aqueous solution of citric acid containing 30% of ammonium citrate. Here, as a suitable substitution ion amount, we propose a brushite cement based on Zn-substituted β-TCP with a Zn concentration of 1.40 wt% and investigate its physical, chemical, and biological properties, including a cytotoxicity study with NCTC L929 fibroblast cells from murine subcutaneous connective tissue. By introducing a suitable concentration of Zn²⁺ substitution, we also aim to impart antibacterial properties to the developed cements and assess them against bacterial strains of Enterococcus faecium, Escherichia coli, and Pseudomonas aeruginosa. The antibacterial characteristics of the cement are of particular importance for orthopedic applications in view of the bacterial resistance issue and the need for new antibacterial strategies, which stand among the most challenging tasks of modern medicine.

Materials and methods
Zn-substituted and non-substituted TCP powders were obtained by heterogeneous phase synthesis using mechanochemical activation [14]. A planetary mill with a rotation speed of 15,000 rpm was used as an activator.
For the synthesis, powders of freshly calcined calcium oxide (chemical grade, Chimmed, Moscow, Russia), ammonium dihydrophosphate (chemical grade, Chimmed, Moscow, Russia), and zinc nitrate (chemical grade, Chimmed, Moscow, Russia) were placed in a Teflon container, taken in molar ratios calculated from reaction Eq. (1). The interaction of the components was carried out according to schemes (1) and (2). Corundum balls for material milling in a planetary mill (MP 4/1, "Technocentr" Ltd, Rybinsk, Russia) were taken in the ratio of 1:5. After 20 min of mixing, 300 ml of distilled water was added to the mixture, and the milling was continued for another 20 min. After that, the suspension was separated from the balls, and the resulting product was filtered out and dried at 105 °C. To obtain the cement powder, the Zn-substituted TCP powder was pre-calcined at 900–1100 °C, disaggregated by passing through a sieve with a cell diameter of 400 µm, and mixed with MCPM (Sigma-Aldrich, St. Louis, MO, USA) in a molar ratio of 1:1 [37]. The hardening liquid was prepared by dissolving 30 g of ammonium citrate and 8 g of citric acid in 62 ml of water. Cement samples were obtained by mixing the powder of Zn-substituted or non-substituted β-TCP and MCPM with the hardening liquid in a ratio of 3:1. A Teflon cylindrical mold with a diameter of 8 mm was used for forming the samples. Samples were hardened at 100% humidity for 24 h. The phase composition was investigated by the X-ray diffraction (XRD) method. Phase analysis of the obtained compounds was performed on a Shimadzu XRD-6000 diffractometer (Kyoto, Japan) (CuKα radiation, λ = 1.5405 Å; JCPDS file data). The Zn content in the powders and in the cements was determined using an Optima 5300 DV inductively coupled plasma optical emission spectrometer. The setting time for cements with a cement powder : hardening liquid ratio of 3:1 was 8 min, which is convenient for the preparation of cement paste during a surgical operation.
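The hardening-liquid recipe (30 g of ammonium citrate and 8 g of citric acid in 62 ml of water) matches the "8% citric acid containing 30% ammonium citrate" description given in the introduction when the percentages are read as mass fractions of the whole solution. A quick check of the arithmetic, assuming a water density of about 1 g/ml:

```python
# Mass-fraction check for the hardening liquid:
# 30 g ammonium citrate + 8 g citric acid dissolved in 62 ml of water.
m_ammonium_citrate = 30.0   # g
m_citric_acid = 8.0         # g
m_water = 62.0              # g, assuming a water density of ~1 g/ml

m_total = m_ammonium_citrate + m_citric_acid + m_water   # 100 g of solution
wt_citric = 100.0 * m_citric_acid / m_total              # -> 8.0 wt%
wt_ammonium = 100.0 * m_ammonium_citrate / m_total       # -> 30.0 wt%
print(f"citric acid: {wt_citric:.1f} wt%, ammonium citrate: {wt_ammonium:.1f} wt%")
```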
The hardening time was determined using the Vicat device "OGC-2" for determining the setting time of cements (Ekaterinburg, Russia) by immersing a needle with a cross-section area of 1 mm² under a load of 100 g into the cement paste formed after mixing the cement powder with the hardening liquid. The cement was considered solidified when the needle no longer left a mark on the sample. For the pH measurement, a cement sample of 0.5 g was immersed in 1 ml of distilled water. The pH of the cement was measured using a combined glass electrode, which was lowered into the solution formed over the cement sample. Measurements were carried out with the Expert-001 pH meter (Moscow, Russia). Brushite cement based on β-TCP and Zn-containing brushite cement based on Zn-substituted β-TCP were examined during the experiment. The solubility of the cements was studied for samples based on β-TCP and Zn-β-TCP by soaking them in 0.9% sodium chloride solution containing TRIS buffer (Chimmed, Russia) at 37 °C for 60 days. The microstructure of the cement samples was studied with a scanning electron microscope (SEM) (Tescan Vega II, Brno, Czech Republic) equipped with an INCA energy-dispersive X-ray spectrometer. The accelerating voltage of the electron gun was 17–21 kV. Cement samples based on Zn-substituted and non-substituted β-TCP were placed in physiological solution for 60 days, after which changes in the microstructure were investigated. Fourier transform infrared (FTIR) spectra of the synthesized compounds were obtained for samples in mixtures with potassium bromide using a Nicolet Avatar-330 spectrometer (Nicolet Instrument, Madison, WI, USA) in the range of 400–4000 cm⁻¹. For mechanical testing, five cylinders with a diameter of 8 mm (±0.1 mm) and a height of 16 mm (±0.1 mm) were prepared for each cement type: β-TCP and Zn-substituted β-TCP cement.
The mechanical strength of the cement samples under compression was measured 7 days after cement setting using an Instron 5800 testing machine (Norwood, MA, USA). Conventional (continuous wave) electron paramagnetic resonance (EPR) measurements were carried out with a Bruker Elexsys 580/680 spectrometer in the X-band (νMW = 9–10 GHz). In order to generate paramagnetic centers in the original material, X-ray irradiation of the initial powders was performed with the URS-55 source (U = 55 kV, I = 16 mA, W anticathode) with an estimated dose of 20 kGy at room temperature for 60 min. In accordance with the requirements of ISO 10993.5-2011 [38,39], the cytotoxicity of the materials was studied using extracts of the materials in Dulbecco's modified Eagle's culture medium (DMEM) (Sigma-Aldrich, St. Louis, MO, USA) with the addition of 100 units/ml of penicillin/streptomycin (Sigma-Aldrich, St. Louis, MO, USA). All the samples were sterilized at 180 °C before the test. The extract preparation was performed under aseptic conditions at 37 °C for 3 days. For each material, three samples were used. The ratio of the material surface area to the DMEM volume was 3 cm²/1 ml. The NCTC L929 fibroblast cells of murine subcutaneous tissue were provided by the Institute of Cytology of the Russian Academy of Sciences (Moscow). The NCTC L929 cells were seeded into the wells of a 96-well plate at a concentration of 30,000 cells/cm² in DMEM containing 5% fetal bovine serum (Sigma-Aldrich, USA). After 18 h, the medium was replaced by 100 µl of extracts of the tested materials, and the cells were cultured at 37 °C in an atmosphere of 5% CO₂ for 24 h.
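The seeding density above (30,000 cells/cm²) translates into a per-well cell count once a growth area per well is assumed; a standard flat-bottom 96-well plate has roughly 0.32 cm² per well, which is an assumed typical value here since the plate model is not specified in the text:

```python
# Approximate number of cells seeded per well at 30,000 cells/cm^2.
# The growth area per well (~0.32 cm^2) is an assumed typical value
# for a flat-bottom 96-well plate; the actual plate model is not given.
seeding_density = 30_000     # cells per cm^2 (from the text)
well_area_cm2 = 0.32         # assumed growth area of one well

cells_per_well = seeding_density * well_area_cm2
print(f"~{cells_per_well:.0f} cells per well")   # ~9600 cells per well
```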
After that, the extracts of the materials were investigated for cytotoxicity using the MTT test, which consists in the reduction of a tetrazolium salt (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide, MTT) by mitochondrial and cytoplasmic dehydrogenases of metabolically active cells, resulting in the formation of blue formazan crystals, which are soluble in dimethyl sulfoxide. Antibacterial activity was assessed for the E. coli, E. faecium, and P. aeruginosa bacterial species. The bacteria were grown in the Mueller Hinton Agar (HiMedia, India) nutrient medium. First, the cement samples were sterilized by UV radiation for 30 min. Antibacterial activity was tested by positioning the material disks on a freshly seeded bacterial lawn. A bacterial concentration of 10⁶ colony forming units/ml (100 µl per dish) was used. Petri dishes with bacteria and disk samples were placed in a thermostat and cultured for 18–24 h. The antibacterial activity was determined from the width of the bacterial inhibition zone. Statistical treatment of the results was performed using the program Origin Pro 2016 (64 bit) (OriginLab Corporation, Northampton, MA, USA); the error was taken as the standard deviation from the average value, and differences were considered reliable, according to the Mann-Whitney U criterion, at p < 0.01.

Results and discussion
According to the elemental analysis, the Zn amount in the β-TCP powder was 6.40 wt%, while in the cement it was 1.40 wt%. The pH values of the β-TCP and Zn-β-TCP cements were determined at different time points starting from their setting, from 1 min up to 60 min. It was found that immediately after setting, the cements were characterized by a weak acidity, i.e., pH = 5.5.
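The measured Zn content reported above (6.40 wt% in the powder) can be converted into a degree of substitution x, assuming the nominal formula Ca₃₋ₓZnₓ(PO₄)₂. This is an illustrative back-of-envelope calculation, not a result from the paper:

```python
# Back-of-envelope: degree of substitution x in Ca(3-x)Zn(x)(PO4)2
# consistent with the measured 6.40 wt% Zn in the powder.
M_Ca, M_Zn, M_P, M_O = 40.078, 65.38, 30.974, 15.999

M_TCP = 3 * M_Ca + 2 * (M_P + 4 * M_O)   # ~310.2 g/mol for Ca3(PO4)2

w_Zn = 0.0640  # measured Zn mass fraction in the powder
# Solve w_Zn = x*M_Zn / (M_TCP + x*(M_Zn - M_Ca)) for x:
x = w_Zn * M_TCP / (M_Zn - w_Zn * (M_Zn - M_Ca))
at_percent = 100 * x / 3   # fraction of cation (Ca) sites replaced
print(f"x = {x:.2f}, i.e. ~{at_percent:.0f} at.% of Ca sites")
```

The result, roughly 10 at.% of the Ca sites, sits close to the substitution limit for the β-TCP lattice cited from [32] in the introduction.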
After 60 min, the pH increased to 6.5 due to the continued interaction between the cement powder and the hardening liquid, since the dissolved ammonium citrate interacted with the calcium-containing components (β-TCP and MCPM) to form non-soluble calcium citrate on the surface of these components. Calcium citrate nanoparticles covered the surface of β-TCP and MCPM, slowing down the chemical interaction between them. The increase in pH was due to the fact that the amount of the acidic component MCPM decreased during the chemical interaction with TCP, and, therefore, the concentration of hydrogen ions decreased as well. According to the XRD results shown in Fig. 1A, C, the final phase composition of the cements after hardening comprised dicalcium phosphate dihydrate, or brushite (DCPD, CaHPO₄·2H₂O), and β-Ca₃(PO₄)₂. No impurity phases corresponding to zinc salts were found. The interaction between the components of the cement powder occurred according to Eq. (3):

β-Ca₃(PO₄)₂ + Ca(H₂PO₄)₂·H₂O + 7H₂O → 4CaHPO₄·2H₂O. (3)

After soaking in physiological solution, the HA phase was detected in the XRD patterns, along with DCPD (see Fig. 1B, D). This is associated with the transformation of DCPD into HA, according to the following scheme (4):

10CaHPO₄·2H₂O → Ca₁₀(PO₄)₆(OH)₂ + 4H₃PO₄ + 18H₂O. (4)

A semi-quantitative comparison between the peak positions of the experimental diffraction patterns showed that Zn substitution in the TCP lattice of the powders did not induce any appreciable peak shift (see Table 1). This is probably due to the relatively small amount of introduced Zn (1.4 wt%), which led to a negligible modification of the TCP lattice parameters. This is in agreement with a previous study [32], where a decrease of only 0.7% in the lattice parameters was detected upon the introduction of 5 at.% (about 8 wt%) of Zn into the TCP lattice. The FTIR spectra were obtained for Zn-substituted and non-substituted powders, cements, and cements after soaking in the physiological solution for 60 days, all shown in Fig. 2. In Fig.
2A, the FTIR spectra of Zn-substituted and non-substituted powders appear very similar. The regions of the most intense vibrations corresponding to the PO₄³⁻ group are highlighted at 565 and 603 cm⁻¹ (ν₄) and at 900–1100 cm⁻¹ [40,41]. In addition, a small band at 700 cm⁻¹ and a small shoulder of the broad band at 900–1100 cm⁻¹, both attributed to the P₂O₇⁴⁻ group, are present. In the FTIR spectra of the cements, shown in Fig. 2B, the peaks corresponding to oscillations of the hydroxyl group can be distinguished at 3570 and 632 cm⁻¹ [41,42]. In addition, the peaks at 1300–1550 cm⁻¹ related to the CO₃²⁻ group [42] are clearly visible. The spectra of the cements after soaking, presented in Fig. 2C, are characterized by more intense bands related to the PO₄³⁻ groups. In all the spectra, a band at 2355 cm⁻¹ is present; it can be attributed to CO₂ from the air, and its appearance is due to some features of the sample preparation procedure [43]. The TCP powders were investigated by EPR. Nominally "pure" TCP as well as Zn-substituted TCP are, according to their chemical formulas, EPR silent. In the investigated samples, a small amount (x < 0.001) of manganese (Mn²⁺) ions was revealed by EPR as six lines, due to the hyperfine interaction between the Mn electron spin and ⁵⁵Mn nuclei (I = 5/2) with A = 9–10 mT [28,44,45]. No other EPR signals were detected before irradiation. The radiation-induced signal appeared after the X-ray exposure (Fig. 3). Its intensity in the sample annealed at 900 °C was two orders of magnitude lower than that in the TCP annealed at 400 °C, which indicates the efficiency of annealing in removing impurities. In the following, we discuss in detail only the data obtained for the Zn-substituted TCP powder sample annealed at 900 °C. Figure 3 shows that two groups of EPR signals were detected. One (a pair of lines separated by 50.0 mT) is very often obtained for HA and TCP synthesized by solid-state reaction [44,46] and belongs to the hyperfine structure of the trapped stable atomic hydrogen radical H⁰ with I = 1/2 [47,48].
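The setting and hydrolysis reactions referred to above as Eqs. (3) and (4) involve only the phases named in the text (β-TCP, MCPM, DCPD, HA). Their textbook balanced forms can be verified element by element in a few lines; this is a sketch assuming the standard brushite-forming and DCPD-to-HA reactions, not necessarily the paper's exact notation:

```python
# Element-balance check for the (assumed, textbook) cement reactions:
#   setting:    Ca3(PO4)2 + Ca(H2PO4)2*H2O + 7 H2O -> 4 CaHPO4*2H2O
#   hydrolysis: 10 CaHPO4*2H2O -> Ca10(PO4)6(OH)2 + 4 H3PO4 + 18 H2O
from collections import Counter

SPECIES = {  # element counts per formula unit
    "beta-TCP": {"Ca": 3, "P": 2, "O": 8},
    "MCPM":     {"Ca": 1, "P": 2, "O": 9, "H": 6},    # Ca(H2PO4)2*H2O
    "water":    {"O": 1, "H": 2},
    "DCPD":     {"Ca": 1, "P": 1, "O": 6, "H": 5},    # CaHPO4*2H2O
    "HA":       {"Ca": 10, "P": 6, "O": 26, "H": 2},  # Ca10(PO4)6(OH)2
    "H3PO4":    {"P": 1, "O": 4, "H": 3},
}

def totals(side):
    """Total element counts for one side of a reaction: [(coeff, name), ...]."""
    t = Counter()
    for coeff, name in side:
        for element, n in SPECIES[name].items():
            t[element] += coeff * n
    return t

setting = ([(1, "beta-TCP"), (1, "MCPM"), (7, "water")],
           [(4, "DCPD")])
hydrolysis = ([(10, "DCPD")],
              [(1, "HA"), (4, "H3PO4"), (18, "water")])

for name, (lhs, rhs) in {"setting": setting, "hydrolysis": hydrolysis}.items():
    print(name, "balanced:", totals(lhs) == totals(rhs))  # both print True
```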
In contrast with the results of [46], the hyperfine constant A was found to be higher (50.0 vs. 49.9 mT), and no additional splittings were observed in our experiments. In [47], it was supposed that the H atom is trapped in β-TCP at the interstitial site between the two groups of PO₄³⁻ in the B column. Our data allow suggesting that in the investigated Zn-TCP sample, H is trapped rather at substitutional sites of TCP. Another group of signals, detected around g ≈ 2.00 (Fig. 4), can be ascribed to the presence of two paramagnetic centers [48]. One, with a g-factor of g₁ = 2.0062, is probably due to the stable PO₄²⁻ radical formed by electron ejection from the PO₄³⁻ group [49]. Usually, one detects PO₄²⁻ radicals of axial symmetry [50] with g⊥ = 2.0062 and g∥ = 2.0134–2.030. Probably, g∥ coincides with one of the components of the second signal, with the values of gx = 2.0035, gy = 2.0014, gz = 1.998, typical for the carbonate radical CO₂⁻ of orthorhombic symmetry in various apatite matrices [48,50], substituting the PO₄³⁻ group [51]. The morphology of the cements investigated by SEM is shown in Fig. 5. The cements obtained with non-substituted β-TCP (A, B) are characterized by a larger particle morphology compared to the cements obtained with Zn-β-TCP (C, D), which have a finer surface aspect. This could be connected to an increase in dispersion when Zn²⁺ ions are introduced. Soaking in physiological solution for 60 days led to changes in the morphology of the samples (see Fig. 5E–H). The samples have a non-uniform microstructure, with some large particles and newly formed smaller particles on them. According to the XRD, the phases detected after soaking were DCPD and HA (Fig. 1B, D). The FTIR analysis revealed that for the cements based on non-substituted β-TCP, the intensities of the peaks assigned to the hydroxyl and carbonate groups decreased after keeping in physiological solution, while for the cements based on Zn-TCP, the intensities of these peaks increased (see Fig.
2B, C). This experimental evidence can likely be explained by the finer, nanoparticle nature of the Zn-β-TCP cement compared to the non-substituted β-TCP. Smaller particles interact faster with the physiological solution, so the intensity of the peaks of the carbonate and hydroxyl groups increased in the FTIR spectra.

The compressive strength of cement sample cylinders made of non-substituted β-TCP and of Zn-β-TCP was measured 7 days after their setting. According to the experimental results, the average compressive strength for the Zn-β-TCP cements was about 17.5 ± 1.6 MPa, whereas for the non-substituted cements it was about 15.4 ± 1.6 MPa. Therefore, an increase in the compressive strength for the substituted Zn-β-TCP cement is attested. A similar result was obtained in our previous work [14], in which an iron-substituted TCP cement was investigated; in that case, too, the compressive strength of the substituted cement was higher than that of the non-substituted one. Such a difference can probably be explained by a finer and denser structure of the Zn-β-TCP cement compared to the non-substituted β-TCP cement (see the SEM observations in Fig. 5A, C). Usually, a higher strength is observed for cements with a denser structure, i.e., with more contacts between particles [28]. The compressive strength of the cements also changed with the time elapsed after mixing of the powder and the hardening liquid. A few minutes after setting, it was 0.5-1 MPa, and 1 h after setting, 1-2 MPa. After 1 day, the compressive strength increased to 2-3 MPa, and the maximum strength of about 19 MPa was reached 7 days after setting.

The metabolic activity of the NCTC L929 cells in the developed material extracts was investigated using the MTT test (Fig. 6). Due to the hydrolysis of brushite, which leads to the formation of orthophosphoric acid, the extracts were acidified. To solve this issue, 0.1 M NaOH was added to adjust the pH to 7.4.
The MTT test showed a significant difference (p ≤ 0.01) between the samples of β-TCP and Zn-β-TCP cements. As can be seen from Fig. 6, the introduction of Zn into the cement increased cell viability by about 10% with respect to the non-substituted cement, which can be explained by the positive effect of Zn 2+ ions on the proliferation of fibroblast cells. This is likely due to the fact that zinc plays an important role in various biological processes and participates in a number of metabolic functions necessary for the growth and development of cells. Indeed, in bone tissue, Zn 2+ ions have been proven to promote osteoclast apoptosis, reduce their activity and inhibit their adhesion to the β-TCP, blocking early and much greater resorption of β-TCP and providing favorable conditions for new bone formation [52].

We also investigated the antibacterial activity of the developed brushite cements; the obtained results are reported in Fig. 7 and Table 2. As can be observed from the experimental data, both the TCP and Zn-TCP cements exhibit antibacterial activity. As evidenced in Table 2, the Zn-substituted TCP cement shows a stronger antibacterial activity, expressed in a larger diameter of the inhibition zone (Table 2). This conclusion is valid for all three tested bacterial species: E. coli, E. faecium, and P. aeruginosa, which are the most frequent in orthopedic surgery [53]. Postsurgical infection is a common and serious complication that leads to reduced quality of life, multiple surgeries, and high treatment costs. Indeed, postsurgical infection after osteosynthesis has an incidence of 0.5-10% for closed fractures [54] and up to 50% for open fractures [55]. It occurs in 1-14% of patients after spine surgery [56] and has a similar incidence for several orthopedic and trauma procedures involving implantable devices [57]. The socioeconomic burden and management of such infections are incredibly challenging.
Therefore, the development of innovative materials with intrinsic antibacterial properties and capable of enhancing osteointegration is desirable for clinical use. By adding the Zn 2+ ion, we aim to stimulate bone metabolism and cell adhesion [58], inhibit bacterial growth to avoid infections, and enhance bone formation, regeneration, and mineralization [59]. In [60], we demonstrated that silicon-substituted TCP possesses antibacterial activity against E. coli. In that work, it is reported that when the diameter of the inhibition zone is about 10-15 mm, the material is characterized by a weak antibacterial activity; about 15-20 mm, by a moderate one; and more than 20 mm, by a significant antibacterial activity. Based on this, it is possible to conclude that the TCP cement is not active against E. coli and moderately active against E. faecium and P. aeruginosa, whereas the Zn-TCP cement possesses weak antibacterial activity against E. coli and is significantly active against E. faecium and P. aeruginosa. Based on our results, it can be concluded that the Zn-β-TCP cement has a positive effect on cell viability and, at the same time, a negative effect on bacteria. These properties make this material an ideal candidate for use in the fabrication of orthopedic implants.

Conclusions

In this work, a cement based on the Zn-substituted β-TCP powder, with a simplified preparation recipe and improved characteristics, was developed. Its properties were studied in comparison with the reference non-substituted β-TCP cement. The initial cement powder was composed of an equimolecular mixture of β-TCP and MCPM, and the hardening liquid of an 8% aqueous solution of citric acid containing 30% of ammonium citrate. The final components of the cements were DCPD and β-TCP.

Fig. 7 The inhibition of growth of: E. coli for A Zn-TCP cement and B TCP cement; C E. faecium, and D P. aeruginosa for Zn-TCP cement (1) and TCP cement (2).

[Table 2 fragment: Zn-TCP cement, inhibition zone diameters 14, 24, and 23 mm]
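The inhibition-zone interpretation criteria quoted above (about 10-15 mm weak, 15-20 mm moderate, more than 20 mm significant) can be encoded as a small helper. This is only an illustrative sketch: the function name is hypothetical, and the exact handling of the 15 mm and 20 mm boundaries is an assumption, since the paper gives only approximate ranges. The zone values used below (14, 24, 23 mm for the Zn-TCP cement against E. coli, E. faecium, and P. aeruginosa) are the Table 2 fragment reported in the text.

```python
def classify_antibacterial_activity(zone_mm: float) -> str:
    """Map an inhibition-zone diameter (mm) to the qualitative category
    quoted from [60]; boundary handling at 10/15/20 mm is an assumption."""
    if zone_mm > 20:
        return "significant"
    if zone_mm >= 15:
        return "moderate"
    if zone_mm >= 10:
        return "weak"
    return "not active"

# Zones for the Zn-TCP cement (Table 2 fragment in the text).
zones = {"E. coli": 14, "E. faecium": 24, "P. aeruginosa": 23}
for bacterium, zone in zones.items():
    print(f"{bacterium}: {zone} mm -> {classify_antibacterial_activity(zone)}")
```

Run on these values, the helper reproduces the qualitative conclusion drawn in the text: weak activity against E. coli and significant activity against E. faecium and P. aeruginosa.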
The setting time of the cements was 8 min (cement powder : hardening liquid ratio = 3:1), which is optimal for the preparation and application of the developed cements to bone defects during surgery. The Zn 2+ content was selected to be 1.40 wt%. The pH of the cements reached 6.5 within 60 min after setting. After soaking in physiological solution for 60 days, the morphology and composition of the cements changed; the final phases were DCPD and HA. The EPR measurements showed the presence of trapped hydrogen and confirmed that annealing at 900°C led to a significant reduction of carbonate impurities embedded in the β-TCP structure. The average compressive strength for the Zn-β-TCP cement was about 17.5 ± 1.6 MPa, whereas for the non-substituted β-TCP cement it was about 15.4 ± 1.6 MPa (measured 7 days after the cements' setting). Therefore, the Zn addition led to an enhancement of the mechanical properties of the β-TCP cement. The NCTC L929 fibroblast cell viability on the developed Zn-β-TCP cement was 10% higher compared to the cement without Zn, and the material possesses antibacterial properties against E. coli, E. faecium, and P. aeruginosa.

β-TCP is widely used in the clinic. It can be readily doped with ions, which may provide functionalities such as bactericidal activity. Indeed, the developed Zn-substituted β-TCP cement is promising in the clinical setting in view of its significant antibacterial activity, improved mechanical properties and enhanced cell viability. This finding confirms that the novel material could be a valid strategy for a range of biomedical applications in humans. Therefore, it could offer promising potential for bone replacement and repair in moderate and non-load-bearing defects that are prone to infection in orthopedic and trauma settings. Despite these promising properties, the amount of the substitution ion should be carefully optimized in order to enhance the antibacterial activity while maintaining the positive effect on cells.
Compliance with ethical standards

Conflict of interest The authors declare no competing interests.
Intraoperative lung ultrasound improves subcentimetric pulmonary nodule localization during VATS: Results of a retrospective analysis

Abstract

Background Video-assisted thoracoscopic surgery (VATS) resection of deep-seated lung nodules smaller than 1 cm is extremely challenging. Several methods have been proposed to overcome this limitation, but with non-negligible complications. Intraoperative lung ultrasound (ILU) is the latest proposed minimally invasive technique. The aim of the current study was to analyze the accuracy and efficacy of ILU associated with VATS to visualize solitary and deep-seated pulmonary nodules smaller than 1 cm.

Methods Patients with subcentimetric solitary and deep-seated pulmonary nodules were included in this retrospective study from November 2020 to December 2022. Patients who received VATS aided with ILU were considered as group A and patients who received conventional VATS as group B (control group). The rate of nodule identification and the time for localization with VATS alone and with VATS aided with ILU in each group were analyzed.

Results A total of 43 patients received VATS aided with ILU (group A) and 31 patients received conventional VATS (group B). Mean operative time was lower in group A (p < 0.05). In group A all the nodules were correctly identified, while in group B the localization failed in one case. The time to identify the lesion was lower in group A (7.1 ± 2.2 vs. 13.8 ± 4.6 min; p < 0.05). During hospitalization, three patients (6.5%; p < 0.05) in group B presented air leaks that were conservatively managed.

Conclusion Intracavitary VATS-US is a reliable, feasible, real-time and effective method of localization of parenchymal lung nodules during selected wedge resection procedures.

INTRODUCTION

The incidence of incidental diagnosis of pulmonary nodules has dramatically increased over the last few years.
1 With the widespread use of high-resolution computed tomography (HRCT), in fact, a large number of pulmonary nodules are detected annually, but only those with features of malignancy undergo surgical treatment. Surgery with radical intent is the cornerstone of early-stage non-small cell lung cancer (NSCLC) therapy. 1 Anatomical lobectomy, followed by sampling or dissection of mediastinal lymph nodes, is considered the present gold standard of treatment. Conversely, limited resections are reserved for patients with poor performance status. 2 Khereba et al. 3 localized 43 of 46 pulmonary nodules by ILU, with a 93% success rate. 3,4 Nowadays, minimally invasive thoracic surgery, including robot-assisted thoracic surgery (RATS) and video-assisted thoracoscopic surgery (VATS), is the preferred surgical technique for patients with pulmonary nodules due to its advantages in rapid recovery and minimal invasiveness. VATS lobectomy, as suggested in current studies, is a suitable treatment for patients with early-stage NSCLC in terms of safety, local control of cancer and survival. 3,5 The advantages of VATS are the reduction of functional impairment and postoperative pain, and therefore of general complications, mortality and morbidity. 6,7 Successful VATS identification of pulmonary nodules depends on their intraoperative localization by direct visualization or palpation. The nodules detected by screening computed tomography (CT) and requiring resection are nowadays smaller and smaller, 8 and the characterization of nodules smaller than 1 cm can be extremely challenging. 9 Also, solitary and deep-seated pulmonary nodules are difficult to palpate or identify during VATS.
10 Methods used to identify the location of pulmonary nodules include bronchoscopic marker placement, CT-guided percutaneous marker placement, three-dimensional (3D) printing, intraoperative ultrasound (US), intraoperative molecular imaging (IMI) and artificial intelligence (AI)-assisted identification. One of the most promising of these techniques is intraoperative lung ultrasound (ILU) performed during VATS. The aim of the current study was to analyze the accuracy and efficacy of ILU associated with VATS to provide a simple and safe real-time methodology to visualize solitary and deep-seated pulmonary nodules smaller than 1 cm, compared to VATS alone.

Study design

This was a retrospective single-center observational study whose primary aim was to confirm the validity of ILU as a safe and effective localization method to visualize nodules smaller than 1 cm during VATS. The study is reported according to the STROBE statement for cohort studies 11 and was conducted in compliance with the principles of the Declaration of Helsinki. Written informed consent was obtained from all participants during preoperative communication, and the protocol was approved by the Ethics Committee of the University of Luigi Vanvitelli of Naples (32 655/2021).

Study setting and study population

Patients with solitary and deep-seated pulmonary nodules less than 1 cm were included in our study from November 2020 to December 2022 at the Thoracic Surgery Department of the Vanvitelli University of Naples. The inclusion criteria were: a single uncharacterized deep-seated pulmonary lesion <1 cm indicated for VATS, age >18 years, no history of previous thoracic malignant disease, no history of preoperative pulmonary biopsy and no contraindications for surgery.
The exclusion criteria were: recent myocardial infarction or unstable angina, severe neurological problems, a prolonged prothrombin time (PT-INR >1.5) or a platelet count <30 000, inability to tolerate single-lung ventilation, pregnancy, emphysema, and nodules close to the hilar zone. All subjects were preoperatively assessed during a specialized thoracic surgery evaluation. All patients underwent preoperative HRCT, contrast-enhanced CT, and 18 F-fluorodeoxyglucose positron-emission tomography/computed tomography ( 18 F-FDG PET/CT) to record the localization and size of the lesions. All surgeries were performed by experienced thoracic surgeons who had performed over 300 thoracic oncology procedures using VATS and were experienced in pulmonary ultrasound. Clinical data were collected from a prospectively maintained electronic database. Patients who received VATS aided with ILU were considered as group A, and patients who received conventional VATS were in group B and considered the control group. All patients considered in the analysis presented with uncharacterized, PET-enhanced, subcentimetric deep-seated pulmonary nodules; therefore, an extemporaneous intraoperative examination was necessary in any case to guide the surgery.

Conventional VATS

Patients were placed in a lateral decubitus position, general anesthesia was induced, and double-lumen endotracheal intubation with contralateral single-lung ventilation was performed; patients underwent ultrasound-guided fascial plane blocks of the chest wall using long-lasting local anesthesia in order to reduce postoperative pain. Lung specimens were not ventilated but rather semi-inflated and inflated. The VATS approach employed was the anterior triportal approach according to Hansen et al.
which consists of two 1-1.5 cm lower access incisions, located in the seventh or eighth intercostal space in the posterior and anterior axillary lines, respectively, for two thoracoscopic ports, and a 4-5 cm incision, placed in the fourth intercostal space in the anterior axillary line, for a utility incision. 12 After careful exploration of the cavity and identification of the nodule, the surface of the nodule was cauterized with a cautery stick, then a wedge resection with a 2 cm margin was performed; the specimen was sent to the department of pathology to confirm the accuracy of excision.

VATS associated with ILU

Patients were placed in a lateral decubitus position, general anesthesia was induced, and double-lumen endotracheal intubation with contralateral single-lung ventilation was performed; patients underwent ultrasound-guided fascial plane blocks of the chest wall using long-lasting local anesthesia in order to reduce postoperative pain. Lung specimens were not ventilated but rather semi-inflated and inflated. The VATS approach employed was the anterior triportal approach according to Hansen et al., 12 which consists of two 1-1.5 cm lower access incisions, located in the seventh or eighth intercostal space in the posterior and anterior axillary lines, respectively, for two thoracoscopic ports, and a 4-5 cm incision, placed in the fourth intercostal space in the anterior axillary line, for a utility incision. 13 A BK 5000 ultrasound (US) processor was used for the localization of small lung nodules. A sterile intracavitary laparoscopic probe, 38 cm in length and 10 mm in diameter, with a flexible tip and equipped with a convex array transducer with frequencies ranging from 4 to 12 MHz, was introduced through one of the VATS ports (Figure 1). A setting for superficial tissue with tissue harmonics, electronic focusing at the planar interface level and time gain <50% was used.
The probe was inserted into the chest through the operating hole, and the mediastinal, costal and diaphragmatic surfaces of the lung were explored for subcentimetric nodules, which can only be visualized when the lung is completely deflated. The surgeon, applying light pressure on the lung surface with the ultrasound probe to reduce residual air, localizes deeper nodules when possible. During the examination, the probe was kept perpendicular to the pulmonary surface and warm sterile saline was used to improve surface contact. VATS-US was adopted to identify the size, localization and US pattern of the lesions of interest. When pulmonary nodules were found, their ultrasound characteristics were recorded. Subsequently, the surface of the nodule was cauterized with a cautery stick, then a wedge resection with a 2 cm margin was performed; the specimen was sent to the department of pathology to confirm the accuracy of excision (Figure 2).

Ultrasound parameters of pulmonary nodules

With the aid of the ILU, the margins of the nodules were assessed and classified as "well defined" or "jagged", according to their shape as "regular" or "irregular", and according to their echogenicity as hypoechoic or hyperechoic (Figures 2 and 3). The presence or absence of inner hyperechoic striae and/or spots was assessed. "Size" was defined as the mean between the maximum and minimum diameter of the nodule 14 (Figures 3-5).

Outcome measures

Mean overall operative time was evaluated in minutes. Mean time for the identification of the nodules was also assessed in minutes. The presence of any perioperative complications (i.e., air leak or hemorrhage) during hospitalization was recorded. The rate of identification of the nodules was expressed as the percentage of nodules identifiable with VATS alone and with VATS aided with ILU in each group.
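The ultrasound parameters assessed per nodule (margin, shape, echogenicity, striae, and "size" as the mean of the maximum and minimum diameter) can be collected in a simple record type. This is a hypothetical sketch, not part of the study's workflow; the class and field names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class NoduleUS:
    """Hypothetical record of the ILU parameters described in the text."""
    margin: str             # "well defined" | "jagged"
    shape: str              # "regular" | "irregular"
    echogenicity: str       # "hypoechoic" | "hyperechoic"
    hyperechoic_striae: bool
    d_max_mm: float         # maximum diameter, mm
    d_min_mm: float         # minimum diameter, mm

    @property
    def size_mm(self) -> float:
        # "Size" = mean of the maximum and minimum diameter of the nodule.
        return (self.d_max_mm + self.d_min_mm) / 2

# A subcentimetric nodule matching the pattern shown in Figures 3-4.
n = NoduleUS("jagged", "irregular", "hypoechoic", True, 9.0, 7.0)
print(n.size_mm)  # 8.0
```

The property keeps the derived "size" consistent with the two measured diameters instead of storing it as a third, possibly stale, field.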
Study outcomes

The primary outcome of the current study was the analysis of the identification rate and time of subcentimetric solitary and deep-seated pulmonary nodules in patients who had undergone VATS alone and VATS aided with ILU. The secondary outcome was the evaluation of perioperative complications and hospitalization in patients who had undergone VATS alone and VATS aided with ILU.

Statistical analysis

Statistical analysis was performed using Excel 2011 (Microsoft) and the GraphPad Prism 9 program, according to recently published guidelines. 15 Categorical data are reported as raw numbers with percentages in parentheses. Continuous data are reported as means ± standard deviation or as medians with range in parentheses, according to the distribution. The differences between results were analyzed using an unpaired t-test if they were summarized as means, the Mann-Whitney U test if they were summarized as medians, or Fisher's exact test if they were reported as percentages. For dichotomous variables (presence or absence of a condition), counts were made in the subgroups without performing probabilistic tests. The variables that were found to be significantly equal in the comparison between subgroups (that is, with the same mean or the same percentage composition in the subgroups) were considered neutral for the purpose of determining the final outcome. A p-value less than 0.05 was considered statistically significant.

FIGURE 1 The BK 5000 ultrasound (US) processor was used for the localization of small lung nodules. A sterile intracavitary laparoscopic probe, 38 cm in length and 10 mm in diameter, with a flexible tip, equipped with a convex array transducer with frequencies ranging from 4 to 12 MHz, was introduced through one of the video-assisted thoracoscopic surgery (VATS) ports.
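The comparisons described above can be sketched with scipy (an assumption: the study used Excel and GraphPad Prism, not Python). An unpaired t-test can be run directly from the reported summary statistics for lesion-identification time (7.1 ± 2.2 min, n = 43 vs. 13.8 ± 4.6 min, n = 31), and Fisher's exact test on the identification rates (43/43 vs. 30/31). Welch's variant (`equal_var=False`) is chosen here because equal variances are not stated; the paper does not specify which variant was used.

```python
from scipy import stats

# Unpaired t-test from the reported mean ± SD and group sizes
# (group A: VATS + ILU, n = 43; group B: conventional VATS, n = 31).
t_stat, p_time = stats.ttest_ind_from_stats(
    mean1=7.1, std1=2.2, nobs1=43,
    mean2=13.8, std2=4.6, nobs2=31,
    equal_var=False,  # Welch's t-test; an assumption, the paper does not say
)

# Fisher's exact test on identification rates: [identified, missed] per group.
odds_ratio, p_ident = stats.fisher_exact([[43, 0], [30, 1]])

print(f"localization time: t = {t_stat:.2f}, p = {p_time:.2g}")
print(f"identification rate: p = {p_ident:.2f}")
```

The time difference is highly significant (consistent with the reported p < 0.05), while the 43/43 vs. 30/31 identification rates do not differ significantly with so few events, which is why the paper reports the rates descriptively.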
Study population

From November 2020 to December 2022, of 83 patients referred for isolated solitary and deep-seated pulmonary nodules smaller than 1 cm, 74 met the inclusion criteria and underwent a pulmonary wedge resection. Forty-three patients received VATS aided with ILU (group A) and 31 patients received conventional VATS (group B). The demographic and preoperative findings are detailed in Table 1.

FIGURE 2 An ultrasound (US) probe was inserted by an expert ultrasound surgeon into the chest through the operating hole and light pressure was applied on the lung surface for localization of lung nodules.

FIGURE 3 Irregularly shaped lung nodule that was hypoechoic on ultrasound. Hyperechoic striae are present.

FIGURE 4 Irregularly shaped lung nodule that was hypoechoic on ultrasound. Hyperechoic striae are present.

Primary outcome

Mean operative time was 84 ± 7.3 min in group A and 99 ± 4 min in group B (p < 0.05). No significant intraoperative complications occurred. In group A all the nodules were correctly identified (43/43, 100%), while in group B the localization failed in one case (30/31, 96.7%) and it was necessary to enlarge the incision to about 2 cm in order to allow insertion of the hand and localization of the nodule by finger palpation. This event increased the difficulty of the operation. 14 The time to identify the lesion was lower in group A (7.1 ± 2.2 vs. 13.8 ± 4.6 min; p < 0.05). Moreover, as a result of the difficulty in nodule sampling in three cases in group B, further resection and intraoperative histological examination were performed. The depth of the nodules from the pulmonary surface was 3.9 ± 2.2 versus 4.2 ± 1.5 cm in group A and group B, respectively (p = 0.679). After the pathological response, the patients showing squamous carcinoma (27 patients in group A and 19 patients in group B) or adenocarcinoma (9 patients in group A and 6 patients in group B) underwent lobectomy.
Conversely, wedge resection was adequate in cases of breast (3 in group A and 3 in group B) or colon (4 in group A and 3 in group B) metastases. The intraoperative and pathological findings are summarized in Table 2.

Secondary outcome

The mean hospitalization was 3.1 ± 5.3 days in group A and 3.5 ± 7.1 days in group B (p = 0.287; unpaired t-test). During hospitalization, three patients (6.5%; p < 0.05) in group B had air leaks that were conservatively managed, maintaining the chest drain in place for 10 days (Table 3). In the other patients of both groups, the chest tube was removed on the third postoperative day.

DISCUSSION

With the introduction of minimally invasive thoracic surgery, the intraoperative localization of subcentimetric pulmonary nodules has become very important in early lung cancer, and VATS has been adopted as an important tool in the treatment of this devastating disease through minimally invasive surgery. VATS is especially useful in patients with clinically debilitated conditions or with marginal pulmonary reserves. In fact, thoracoscopy is associated with less postoperative pain and morbidity and earlier recovery than open thoracotomy. [12][13][14][15][16] Intraoperative ultrasound during VATS originates from laparoscopic surgery; intraoperative high-frequency ultrasound probes are used to detect small and deep lesions with a size of 10 mm or less in the lung or mediastinum and to stage lung lesions. The use of intraoperative ultrasound was initially doubted for localization purposes in the lung, because the air in the parenchyma often inhibits proper ultrasound examination; pulmonary collapse is mandatory for nodule localization, and failure to detect a lesion may require conversion to thoracotomy. Failure of localization of pulmonary nodules less than 1 cm in VATS has previously been reported in up to 16.1% of cases and, in these studies, minimally invasive surgery was converted to open thoracotomy in up to 54% of cases.
17,18 Different preoperative techniques aim to mark the nodule to facilitate its localization, such as microcoiling, indocyanine tattooing, 19 radiolabeling and hook wiring. 20 Although these methods have a success rate of up to 100% and open thoracotomy may be avoided in over 50% of cases, the rate of complications such as pneumothorax, air embolism and intraparenchymal bleeding is about 16.7%. Moreover, hook wire dislodgment from the pulmonary parenchyma is not a rare event, and these techniques require two different settings, the computed tomography facility and the operating room. Success is operator-dependent and requires a multidisciplinary approach that often involves a high cost. Advances in ultrasound (US) imaging technology have resulted in a higher rate of utilization of this procedure. 21 VATS with a triportal approach has been found to reduce postoperative complications, such as functional impairment, pain and mortality, and to decrease operation time and intraoperative hemorrhage when compared to open surgery. 22 In an early series using ILU, however, three nodules were neither visible nor palpable. The main limitation of VATS is that palpation of the lung surface is not always possible, in particular in cases of small or deep lesions. 25 Lung palpation, in fact, can be difficult during VATS due to the limited area that can be reached by the operator's finger and the increased risk of major complications. Therefore, surgeons are searching for additional localization techniques. 26 The benefits of VATS are reduced by missing the localization of the target lesion or by spending a lot of time searching for it. 27,28 According to Hou et al., the palpation method was considered to have failed if it took more than 12 minutes.
27 The detection of central nodules necessitates complete deflation of the lung to avoid imaging artifacts; consequently, a systematic evaluation of the pulmonary parenchyma should be performed, free of the artifacts caused by the high acoustic impedance difference encountered by the ultrasound beam at the interface between the aerated lung parenchyma and the overlying chest wall soft tissues. Moreover, strict collaboration with the anesthesiology team is essential. ILU is therefore a complementary, real-time and safe method for the localization of small nodules. 29,30 It is a new technique that warrants further development for its ability to exactly localize invisible or nonpalpable pulmonary nodules in real time during VATS, 31,32 in order to perform more diagnostic biopsies and achieve safe surgical margins. 33,34 To the best of our knowledge, this is the first study in the literature to analyze the efficacy of ILU during VATS in identifying subcentimetric intraparenchymal nodules that lie at a depth of up to 5 cm from the pulmonary surface. In the current study, no intraoperative complications occurred in patients who underwent ILU in association with VATS. With the aid of ILU, all the nodules were correctly identified (43/43, 100%), while with VATS alone the localization failed in one case (30/31, 96.7%) and it was necessary to enlarge the incision by about 2 cm in order to allow insertion of the hand and localization of the nodule by finger palpation, making the surgery more challenging. Moreover, the time to identify the lesion was lower in group A (7.1 ± 2.2 vs. 13.8 ± 4.6 min; p < 0.05), guaranteeing important benefits, especially in unfit patients. In group A, the use of a latest-generation ultrasound machine allowed the study of the whole pulmonary parenchyma, the easy identification of several anatomical landmarks and, consequently, the easier and quicker detection of the subcentimetric centroparenchymal nodule.
Moreover, as a result of the difficulty in nodule sampling in three cases in group B, a further resection and further intraoperative histological examination were necessary. Ultrasound localization was necessary in the first phase, since all patients considered in the analysis had uncharacterized, PET-enhanced, subcentimetric pulmonary nodules; therefore, an extemporaneous intraoperative examination was necessary in any case to guide the surgical procedure. The use of intraoperative ultrasound, considering the ultrasonographic features of the nodules, allowed us to easily identify suspicious nodules to be examined, even at 5 cm depth, avoiding unnecessary resections and performing parenchyma-sparing sampling. The identification of suspicious nodules in such conditions with palpation alone during conventional VATS, in fact, can be extremely challenging. Subsequently, after the pathological response of the extemporaneous examination, the patients showing squamous carcinoma (27 patients in group A and 19 patients in group B) or adenocarcinoma (9 patients in group A and 6 patients in group B) underwent lobectomy. Conversely, wedge resection was adequate in cases of breast (3 in group A and 3 in group B) or colon (4 in group A and 3 in group B) metastases. Mean hospitalization was lower in group A (3.1 ± 5.3 vs. 3.5 ± 7.1 days), even if not statistically significantly so (p = 0.287). Of note, during hospitalization three patients (6.5%; p < 0.05) in group B presented air leaks that were conservatively managed, while the postoperative course in group A was free from complications. Moreover, the study by Kondo et al. demonstrated that ultrasound-trained surgeons can safely and effectively locate nonpalpable nodules using intraoperative ultrasound. 9 Other studies have confirmed that the success rate of intraoperative ultrasound investigation of pulmonary nodules can be really promising (93%-98%).
4,24,35,36 Imperatori et al., in their comparative series, were able to detect all the pulmonary nodules using ILU (assessment percentage 100%), compared to about 95% by finger palpation. Most of the lung nodules included (96%) were <2 cm in size, and the dimension of the seven nodules that escaped detection by digital palpation was statistically lower compared to that of palpable nodules. Hou et al. also showed that the localization success rate of subcentimetric pulmonary nodules by ILU was 100%, suggesting that ILU can detect nodules within a certain range of size and depth. 27,37 Noteworthy, during VATS, US allowed us to explore almost the entire visceral pleura; the intraoperative ultrasound probe, in fact, can reach areas of the lung that cannot be reached by finger palpation alone. Thus, ultrasound also appears to be a compensatory procedure in case of the failure of CT-guided intraoperative marking.

This study had some limitations. First, ILU is an operator-dependent technique which requires extensive experience, with a variable learning curve; therefore, it is performed only in tertiary-level hospitals, the methods are not easily reproducible, and the results are not easily generalizable. Moreover, in patients with emphysema the ultrasound localization of nodules is more challenging, and such patients were not considered in the current study. Finally, other limitations were the limited number of cases reported and the retrospective design of the study.

In conclusion, intracavitary VATS-US is a reliable, feasible, real-time and effective method for the localization of parenchymal lung nodules during selected wedge resection procedures, which in the reported series showed excellent results in terms of intra- and postoperative complications and regarding the accuracy and timing of localization. As with any ultrasound procedure, ILU requires long training and experience, especially for the problems related to an incompletely collapsed lung.
In the current series, intraoperative ultrasound during VATS proved to be a safe and reliable method for the real-time identification of pulmonary nodules not detectable by digital palpation. IUS is therefore extremely useful for subcentimetric nodules which, being centrally located or otherwise not in a subpleural position, can more easily escape the surgeon's touch during manual palpation.
2023-07-21T06:17:50.677Z
2023-07-20T00:00:00.000
{ "year": 2023, "sha1": "25b982419907de0b465c4772448fd11acb864fc2", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1759-7714.15027", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "ac32580f6a08cf9c353e6bf3e76733bdbcf33106", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253024525
pes2o/s2orc
v3-fos-license
Alveolar macrophages and airway hyperresponsiveness associated with respiratory syncytial virus infection

Respiratory syncytial virus (RSV) is a ubiquitous pathogen causing viral bronchiolitis and pneumonia in children younger than 2 years of age, and is closely associated with recurrent wheezing and airway hyperresponsiveness (AHR). Alveolar macrophages (AMs), located on the surface of the alveolar cavity, are an important innate immune barrier in the respiratory tract. AMs are recognized as recruited airspace macrophages (RecAMs) and resident airspace macrophages (RAMs) based on their origins and roaming traits. AMs are polarized in the case of RSV infection, forming two macrophage phenotypes termed M1-like and M2-like macrophages. Both M1 and M2 macrophages are involved in the modulation of inflammatory responses: M1 macrophages mount pro-inflammatory responses, whereas M2 macrophages mount anti-inflammatory responses and repair damaged tissues in the acute and convalescent phases of RSV infection. Polarized AMs affect disease progression through the alteration of immune cell surface phenotypes and participate in the regulation of T lymphocyte differentiation and the type of inflammatory response, which are closely associated with long-term AHR. In recent years, some progress has been made in understanding the regulatory mechanism of AM polarization caused by RSV infection, which participates in the acute respiratory inflammatory response and mediates AHR in infants. Here we summarize the role of RSV-infection-mediated AM polarization in AHR in infants.

Introduction

Respiratory syncytial virus (RSV) is the dominant cause of lower respiratory tract infection in children younger than 2 years of age worldwide. It is estimated that 4 million children are admitted to hospitals for RSV infection and 200,000 of the hospitalized children die each year (1,2). 
Due to the immature composition and functions of their immune cells and molecules, infants infected with RSV often progress to lower respiratory tract inflammation, and some of them can develop chronic lung disease (3,4). When such infants are re-infected or exposed to allergens, this can manifest as recurrent wheezing. The coronavirus disease 2019 (COVID-19) pandemic has changed the epidemic pattern of RSV; it is estimated that the resurgence of RSV will be more intense in the future and may become a major economic burden to society (3,4). Alveolar macrophages (AMs) are an important part of the respiratory tract's innate immune barriers and play a key role in engulfing pathogens and antigen presentation (5,6); together with epithelial cells, they contribute to setting the threshold and the quality of the innate immune response in the acute and convalescent phases of RSV infection. It has been reported that AM polarization is driven by RSV in a variety of microenvironments to exert multiple biological effects (7). Polarized AMs participate in local inflammatory responses and in mediating intercellular communication to stimulate naive lymphocyte differentiation (8,9), thus regulating the intensity of the inflammatory response, which is associated with immunosensitization and the pathology of airway hyperresponsiveness (AHR) later in the life of infants infected with RSV (10-12). Therefore, immunomodulatory therapy targeting AMs may be one approach to exploring effective treatment strategies. In this paper, we summarize the potential association between AM polarization and AHR after RSV infection in infants.

RSV infection and host response

RSV is a single-stranded negative-sense RNA virus belonging to the Pneumovirus genus of the Paramyxoviridae family (13). Its genome encodes 11 proteins that mediate viral replication and packaging and assist the virus in escaping immune surveillance. 
Glycoprotein binds to glycosaminoglycans on the cell surface, interfering with immune cell recruitment and the production of various cytokines. Fusion protein mediates fusion between the virus and the host cell membrane to form syncytia. Non-structural proteins 1 and 2 inhibit interferon (IFN) production and its signaling (14). Phosphoprotein inhibits exogenous apoptotic signals and contributes to persistent RSV infection in macrophage-like cells (15). By disrupting host gene transcription and interfering with the synthesis of mitochondrial proteins, matrix protein weakens the body's immune recognition of RSV (16). In host cells, RSV-derived pathogen-associated molecular patterns (PAMPs) promote the maturation of antigen-presenting cells (APCs), which express pattern recognition receptors including toll-like receptors (TLRs) and retinoic acid-inducible gene 1 (RIG-I)-like receptors (RLRs) (17)(18)(19). RSV can also invade lung macrophages directly, where it is recognized by mitochondrial antiviral signaling protein (MAVS)-coupled RLRs (12) and can activate nuclear transcription to regulate innate immune responses (Figure 1). The expression of pro-inflammatory mediators, the recruitment of inflammatory cells to infected or injured tissue, and their migration across the endothelium are crucial events in early immune defense against RSV infection (20). Pathological lung lesions mediated by AMs usually result not from direct RSV invasion but mainly from immune-mediated inflammatory responses (21). The acute infection phase is dominated by airway inflammation such as bronchiolitis, and the convalescent phase is characterized by airway hypersensitivity; both belong to the spectrum of airway hyperresponsiveness. 
A variety of molecules are involved in the acute phase across epithelial cells (ECs), including interleukin (IL)-6, tumor necrosis factor-α (TNF-α), granulocyte colony-stimulating factor, granulocyte-macrophage colony-stimulating factor (GM-CSF), chemokines (CXCL8, CXCL10, and CCL5), and antibacterial factors including nitric oxide (NO), β-defensins, lysozyme, and lactoferrin (10,17). These might cause tracheal smooth muscle spasm, hyperemia, edema, inflammatory cell aggregation, increased secretion, and cell shedding that block the airway (22)(23)(24)(25). Reinfection or inhalation of allergens during the convalescent period can trigger the overexpression of CD8 T cell- and Th2-like cytokines involved in wheezing.

Classification and characteristics of AMs

Lung macrophages are usually divided into two subpopulations depending on their distinct locations: AMs located on the surface of the alveolar cavity and interstitial macrophages (IMs) located in the pulmonary interstitial stroma (26,27). In inflammatory states, AMs are recognized as resident airspace macrophages (RAMs) and recruited airspace macrophages (RecAMs), depending on their origins and wandering characteristics (Figure 2) (28,29). RAMs are steady-state "AMs" that derive mainly from embryonic yolk sacs and fetal liver cells (30) and reside on the surface of the alveolar cavity for a long time. RAMs are not evenly distributed in each alveolus; notably, only 30-40% of alveoli contain RAMs. Most RAMs crawl within and between alveoli through the pores of Kohn to monitor the microenvironment, while the remaining 10% of RAMs are entirely sessile (5). In the physiological environment, there is contact inhibition between RAMs, which helps prevent RAMs from accumulating in the alveoli. These distribution characteristics are regulated in part by IL-34 and macrophage colony-stimulating factor (M-CSF) in the alveoli (31). 
Through the regulation of GM-CSF and the mechanistic target of rapamycin complex 1, RAMs, as long-lived cells, can proliferate in situ to replenish themselves without needing mononuclear macrophages from circulating blood as a supplement or replacement, with an annual renewal rate of about 40% (32). GM-CSF has been confirmed to upregulate the expression of anti-apoptotic genes in RAMs, which is necessary to promote their maturation and prolong their lifespan (5,33). RAMs, being capable of engulfing foreign particles and endogenous proteins (including surfactants and cell debris) to initiate an immune response, play a key role in regulating the innate immunity of the respiratory system and preventing infection from inhaled pathogens. Moreover, together with alveolar ECs, RAMs also contribute to maintaining lung tissue homeostasis and the intensity of the inflammatory response (34). The distribution of RAMs in the steady-state microenvironment is in a dynamic equilibrium of "self-sufficiency". During endotoxin-induced acute inflammation or exposure to a large number of pathogens, RAMs are the first sentinels of the respiratory tree and constitute the dominant steady-state immune cell mobilizing pro-inflammatory effectors, including the recruitment of platelets, neutrophils, and other inflammatory cells, which together participate in and regulate the onset and development of disease (35). RecAMs belong to the subpopulation of IMs that travel towards the site of inflammation in the alveolar cavity under pathological conditions. IMs originate from bone marrow monocytes, which circulate through the bloodstream into the interstitial tissue of the lungs and exist in transitional states. IMs can patrol the interstitium of different alveoli, where they identify inflammatory, necrotic, or exfoliated cells, exert a phagocytic effect, and, in turn, release IL-10 to maintain microenvironment homeostasis (9,36). 
In the acute phase of infection, IMs are chemoattracted into the alveolar cavity and recruited to become RecAMs (37). In addition, RNA sequencing showed that the immunoprogramming of RecAMs is dynamic (32,35): during peak inflammatory periods they can develop the same phenotype and provide the same functionality as RAMs (38,39), including the production of pro-inflammatory cytokines and the elimination of pathogens. RecAMs release anti-inflammatory factors to repair pathologically damaged tissues when the inflammation is subsiding. RecAMs undergo programmed apoptosis after the inflammation is gone, whereas RAMs continue to survive and sustainably replenish themselves. The AM population thus forms an emergency dynamic cycle between the homeostatic phase and the inflammatory phase (32).

Figure 1: RSV activates APCs to express PRRs, including TLR and RLR, and release pro-inflammatory factors. In the acute phase, RSV-infected epithelia express interferons, cytokines, chemokines, and antimicrobial factors involved in the airway inflammatory reaction. The pathological lesions caused by these inflammatory mediators progress through two stages: bronchiolitis and post-viral AHR. APCs, antigen-presenting cells; RSV, respiratory syncytial virus; PRRs, pattern recognition receptors; TLR, Toll-like receptor; RLR, retinoic acid-inducible gene 1 (RIG-I)-like receptor; AHR, airway hyperresponsiveness; IFN, interferon; IL, interleukin; TNF-α, tumor necrosis factor-α; G-CSF, granulocyte colony-stimulating factor; GM-CSF, granulocyte-macrophage colony-stimulating factor; NO, nitric oxide.

Inflammation-activated AM polarization

Both RAMs and RecAMs can be activated to polarize into M1 and M2 phenotypes according to microenvironment changes (5). Conventional studies label nitric oxide synthase (NOS) and arginase (Arg) to determine the activation states of M1 and M2, respectively. 
However, recent studies have shown that NOS and Arg can be co-expressed within the same cell (32), and AM polarization is not a distinct "dichotomy" but is multidimensional, dynamic, and complex (40). Nevertheless, the classic "M1 and M2" classification remains representative. M1-like macrophages exacerbate the airway inflammatory response, which may be associated with long-term airway sensitization (41). In contrast, M2-like macrophages are capable of anti-inflammatory responses and of repairing damaged tissues to maintain immune balance (5). Once the microenvironment of the alveoli changes, the phenotypes and functions of M1 and M2 can be reversed. Based on single-cell RNA sequencing, AMs can be identified as five clusters with unique transcriptome characteristics and presumed functions at three different stages (32): physiological homeostasis, the acute inflammatory phase, and the convalescent phase.

Figure 2: Sources and classification of AMs. In physiological homeostasis, AMs are equivalent to RAMs, which originate from embryonic yolk sacs and fetal liver cells. In the event of large-scale microbial invasion or inhalation of allergens, IMs are recruited into the alveoli and are known as RecAMs. After the inflammation subsides, RecAMs derived from IMs undergo programmed apoptosis, while RAMs maintain their original distribution characteristics under the action of IL-34, M-CSF, and GM-CSF. AMs, alveolar macrophages; IMs, interstitial macrophages; RAMs, resident airspace macrophages; RecAMs, recruited airspace macrophages; IL, interleukin; GM-CSF, granulocyte-macrophage colony-stimulating factor; M-CSF, macrophage colony-stimulating factor; TNF-α, tumor necrosis factor-α. 
RSV infection and AM polarization

RSV triggers AM polarization by promoting a regulatory immune mediator response through three pathways (Figure 4): cytokines; intercellular communication signaling (epithelium-macrophage as well as macrophage-lymphocyte); and direct invasion of AMs by RSV, which directly triggers their polarization.

Cytokines

It is well known that IFN-γ is the classic trigger of macrophage polarization. RSV infection might stimulate the secretion of IFN-γ from CD8 T cells and NK cells in lung tissues (42-45), which, in turn, regulates inflammatory responses and promotes immunopathology by initiating AM polarization (46). AM polarization activated by IFN-γ is age-related, with significant differences between adults and infants. In adults, CD43, a ligand of sialic acid-binding immunoglobulin-like lectin 1 (Siglec-1), is highly expressed on the membranes of CD4 T cells; Siglec-1/CD43 engagement antagonizes signals from monocytes and inhibits the release of IFN-γ by CD4 T cells, thus preventing AMs from polarizing into the M1 phenotype. In contrast, due to the lower CD43 expression on CD4 T cell membranes in infants, the IFN-γ secreted by monocyte-mediated CD4 T cells is not affected by Siglec-1 signaling in RSV infection (47). Although infants lack specific memory T cells and their IFN-γ expression is delayed, the effect of IFN-γ on AMs gradually dominates as the RSV infection progresses, with increased CD43 expression being age-related. Therefore, the M1-like polarizing effect of IFN-γ shows significant, gradual age differences (11,48,49), which is one of the main reasons why the inflammatory response and pathological damage caused by RSV in infants differ from those in adults (12). GM-CSF also promotes AM polarization in RSV infection, but it plays a secondary role (50). 
RSV can also induce the production of pro-inflammatory factors that mediate the expression of macrophage migration inhibitory factor (MIF) through reactive oxygen species, 5-lipoxygenase, cyclooxygenase, and PI3K signaling channels, driving AM polarization to produce TNF-α, monocyte chemoattractant protein-1, and IL-10 (51).

Intercellular communication

RSV-infected airway ECs might activate AM polarization through intercellular communication such as the Notch-Jagged pathway (24,(52)(53)(54). Notch is a ligand-receptor interaction that triggers a highly conserved signaling cascade with a family of four members (Notch 1-4) (55).

Figure 3: Inflammation activates the polarization of AMs. Depending on their functions, polarized AMs are roughly divided into pro-inflammatory M1 and anti-inflammatory M2. Based on single-cell RNA sequencing analysis, AMs can be identified as five clusters at three time points throughout the inflammatory phase, indicated by the red square bracket and arrow. Clusters 1 and 2 contain cells with RAM markers that are present during both homeostasis and inflammation and are dominated by M2-like functions in the homeostasis and convalescent phases, marked by blue parentheses and arrows. Clusters 3, 4, and 5 exist only during inflammation and are predominantly characteristic of RecAMs (noted by yellow arrows and square brackets). Among them, clusters 3 and 4 are dominated by M1 gene expression (annotated by the purple square bracket and arrow). Cluster 5 has relatively low expression of both M1 and M2 genes. Each cluster has corresponding cellular characteristics that reflect its cell-derived sources and exhibits different functions. RAMs, resident airspace macrophages; RecAMs, recruited airspace macrophages; M, macrophage. 
Notch-Jagged intercellular communication initiates intracellular processing and modification of the Notch family, which forms a complex that crosses into the nucleus, to initiate AM polarization in coordination with NF-κB signaling; it also regulates the development of lymphoid lineages such as thymocytes, NK cells, and regulatory T cells (Tregs) in the thymus (56,57). It has been shown that signal exchange between infected ECs and AMs not only affects the polarization of AMs directly but also further regulates the differentiation and functions of T cell subsets. In addition, ECs can also interact with AMs through the ligand-receptor pairs CD200 and programmed death-ligand 1 (24).

RSV direct activation

AMs can engulf RSV particles directly and recognize viral RNA sequences as PAMPs. Via MAVS and RIG-I-like receptors, RSV replication activates AM nuclear transcription to release type I and type II interferons and recruit inflammatory cells (12,58). RSV infection can maintain inefficient replication within macrophages, forming latent infections (15,59). By inducing immune cells to express MIF (51), it subsequently weakens the migration of AMs (5). Through receptor-interacting protein kinases 1 and 3 and mixed-lineage kinase domain-like protein, RSV upregulates TNF-α and the apoptosis-related gene caspase-8 via the AMs' autocrine pathway, thereby exacerbating necroptosis of airway histiocytes and lung tissue damage (53). RSV invasion of AMs induces the expression of type I IFN to promote the aggregation of inflammatory monocytes (infMo) (12), which can drive M2-like macrophages to express high levels of matrix metalloproteinase-12, thus exacerbating airway hyperresponsiveness (60).

AM polarization in the different stages of inflammation

To maintain homeostasis, AMs exert mainly immunosuppressive effects by inhibiting the antigen presentation functions of lung dendritic cells or by rendering CD4 T cells unresponsive (61). 
They can also secrete a variety of immunomodulatory molecules such as IL-10, TGF-β, NO, and prostaglandins to reduce lung inflammation. Polarized AMs have a dual pro-inflammatory and immune-tolerant effect in the different phases of RSV infection, maintaining the intensity of the inflammatory response and the stability of the internal environment and promoting tissue repair (34).

Figure 4: Three signaling pathways for AM polarization activated by RSV infection: cytokines, represented by IFN and GM-CSF; intercellular communication, with the Notch-Jagged pathway as an example; and the direct activation signal by RSV. RSV, respiratory syncytial virus; NK, natural killer cells; IFN, interferon; GM-CSF, granulocyte-macrophage colony-stimulating factor; AMs, alveolar macrophages; ECs, epithelial cells; M, macrophage; infMo, inflammatory monocytes; MIF, macrophage migration inhibitory factor; TNF-α, tumor necrosis factor-α.

Inflammatory period

Airway ECs and AMs, as the first defense cells of the respiratory tract, can recruit neutrophils through the secretion of signaling molecules to synergistically eliminate pathogens. Damaged lung ECs can induce the loss of immunosuppressive ligand expression on AMs via direct cell-cell contact, which may drive AMs toward the M1 phenotype (24). M1 performs pro-inflammatory functions in the acute phase of infection and exhibits stronger phagocytic activity (62,63). RSV-mediated AM polarization occurs mainly through cytokine activation pathways, involving IFN-γ, TLR-2, -4, and -9 ligands, lipopolysaccharide, and GM-CSF, and manifests as M1-like functions. While inhibiting IL-10 receptor signaling, polarized AMs activate NF-κB nuclear transcription via JAK-STAT1/2 phosphorylation signaling to express CD16, release the pro-inflammatory cytokines TNF-α, IL-6, IL-1β, IL-12, and IL-23, and secrete inducible nitric oxide synthase, which can promote the development of inflammation and upregulate the Th1-like response (64)(65)(66). 
Moreover, via a mitogen-activated protein kinase-dependent pathway, polarized AMs express IL-33 and can activate NF-κB signaling through the production of Th2-related cytokines (13). AMs are important effector cells secreting IFN-I, and their secretion levels are age dependent: RSV induces the overexpression of IFN-I in adults but, conversely, inhibits its production in infants (58). IFN-I inhibits RSV replication by upregulating antiviral gene expression and can also recruit monocytes to differentiate into infMo, which exert antiviral activity (12). The immaturity of IFN-I production in infants is one of the molecular bases for their susceptibility to severe lung inflammation after RSV infection. During the acute inflammatory phase, RecAMs are rapidly recruited into the alveoli to participate in the removal of pathogens, promoting inflammation, while RAMs inhibit this inflammation. During the period of inflammation regression, most RecAMs undergo programmed cell death, while RAMs persist. Within 2 months of infection, the phenotypes and functions of some RecAMs gradually become similar to those of RAMs, supplementing the RAMs consumed (67). The increased expression of IFN-I receptor alpha chain, IFN-induced GTP-binding protein Mx2, 2′-5′-oligoadenylate synthetase 1 (OAS1), OAS2, ribonuclease L, and IFN-induced transmembrane protein 3 in AMs also enhances RSV clearance (68). This phenotype exists in the acute phase of other respiratory virus infections, such as influenza virus (69)(70)(71)(72).

Convalescent period

In the convalescent phase of infection, the AM phenotype is more inclined to M2, which is manifested by the secretion of IL-10 to modulate the Th17-mediated inflammatory response (9), for example by upregulating Tregs, inhibiting lung inflammation driven by inflammatory cells (including neutrophils), and promoting tissue repair (68,73). AMs are polarized into the M2 phenotype mainly under M-CSF stimulation. 
According to their cytokine expression profiles, M2 can be divided into three subtypes: M2a, M2b, and M2c. M2a releases a small amount of IL-10, the decoy receptor IL-1RII, and the IL-1 receptor antagonist (IL-1ra), and is dominated by Th2-type inflammatory responses, which might be associated with airway sensitization. M2b releases the pro-inflammatory factors TNF-α, IL-1, and IL-6 together with large amounts of IL-10. Dominated by this high level of IL-10, M2b dampens immune and inflammatory signals, including by inhibiting the proliferation and differentiation of T cells, to exert anti-inflammatory and immune-regulating effects. M2c is activated by autocrine IL-10 and TGF-β, modulating the immune response and assisting in tissue remodeling (65,74-76). Thus, during the convalescent phase of lung tissue inflammation, the functions of RAMs and RecAMs gradually switch to the phenotypes of the different M2 subtypes, promoting tissue repair and pathogen clearance.

Post-viral AHR

The functional transformation of IMs in the transition from the inflammatory period to convalescence is a major intrinsic factor in tissue repair. Early in the convalescent phase of inflammation, M2a is dominated by IL-4 secretion, which, in turn, upregulates the Th2-type immune response, leading to AHR, which is associated with wheezing. In the middle and late phases of convalescence, AMs gradually convert to M2b, mainly secreting IL-10 and TGF-β and negatively regulating the Th17-like immune response, which may promote the production of functional Treg cells, form a positive feedback loop, and inhibit the tolerance of effector T cells to inhaled antigens. IL-10 is mostly secreted by IMs activated via the TLR4/MyD88 pathway. 
IMs account for about 55% of the CD45+ cells that secrete IL-10, compared with less than 5% for CD4 T cells. Activated IMs can dampen neutrophilic inflammation, mucus production, and the expression of neutrophil-activating cytokines (IL-17, GM-CSF, and TNF-α) in the alveoli, negatively regulating the Th2- and Th17-mediated responses (9). On contact with harmless antigens, AMs co-express TGF-β and retinaldehyde dehydrogenase 1/2 (77), inducing the production of nTreg cells to maintain immune tolerance (78). The responses elicited by RSV reflect an antithesis between immune inflammation and immune tolerance, as well as in viral clearance (78). A moderate inflammatory response helps the host defend against pathological harm caused by harmful microorganisms, while decreased immune tolerance can lead to chronic inflammation such as asthma. When infants are reinfected with RSV, the Th1-type immune response might produce IFN-γ, TNF-α, IL-1β, and IL-22 (68,79), thereby activating CTLs and NK cells to clear the virus (10). However, infants are mainly characterized by Th2- and Th17-like responses (80), and Th2-type immune memory expresses IL-4, IL-5, and IL-13, which downregulate Th1, reducing the viral clearance rate and increasing inflammation (9,81). This means that the pathological basis of AHR may be closely related to an excessively unbalanced immune response. During convalescence or RSV re-infection, infants fail to develop airway immune tolerance due to the formation of Th2 immune memory and the downregulation of Treg cells, which may induce eosinophilic asthma. In addition, platelets are also involved in the recruitment of immune cells that regulate the conversion of AM functions. Stimulated by sCD40L from CD4 T cells, platelet-expressed P-selectin binds to PSGL-1 on the Treg cell membrane to form platelet-Treg aggregates. This is one of the keys to promoting the recruitment of Treg cells to the lungs and the release of the anti-inflammatory factors IL-10 and TGF-β. 
The interaction of platelets with Treg cells is involved in regulating the transcriptional reprogramming of AMs and initiating the polarization of AMs towards anti-inflammatory phenotypes, which effectively relieves lung inflammation (82). At different stages of RSV infection, the phenotypes and functions of AMs change to play pro-inflammatory and homeostatic roles, balancing and protecting the local alveolar microenvironment and avoiding excessive immunopathological damage (59).

AM-mediated T cell differentiation

Intercellular signaling interactions between airway epithelial cells, AMs, and T lymphocytes may be associated with airway sensitization. RSV might upregulate the expression of the Notch signaling ligand Dll4 in APCs and lung ECs. The blockade of Dll4 (a ligand of the Notch-Jagged signaling pathway) might promote the production of Th2-like cytokines (IL-5 and IL-13), mainly by inducing IL-17A+CD4+ T cell differentiation and IL-17A expression, which might result in excessive immunopathological damage (57). Upregulated Dll4 promotes T cells to express SET and MYND domain-containing protein 3 through the classic Notch signaling pathway, which contributes to Foxp3 gene methylation and Treg cell differentiation and promotes IL-10 expression (83). Furthermore, RSV promotes the upregulation of Jagged-1 and the downregulation of Jagged-2 in bronchial epithelial cells, which favors the differentiation of Th2 cells; conversely, if the expression of Jagged-1 is inhibited, Th1 differentiation is promoted and Th2 differentiation is inhibited (54). Thus, the type and activity of Notch ligands affect the direction of T cell differentiation. Whether there are differences in the expression levels of different Notch ligands and whether they are age-related remain unclear. 
Polarized AMs affect T cell differentiation in many ways; for example, ultra-fine particles induce AMs to express Jagged-1 and promote allergen-specific T cell differentiation into Th2 and Th17 through the Jagged 1-Notch 4 pathway (84). Lung damage caused by mechanical ventilation upregulates the expression of Notch signal-related proteins and promotes the polarization of AMs to the M1 phenotype, which, in turn, aggravates airway inflammation (85). Therefore, given the important roles of AM polarization and T cell differentiation, experimental evidence is still needed to confirm whether RSV infection regulates T cell differentiation through AM polarization and whether this is involved in the body's later sensitization state. Conclusive evidence is also needed on how AM polarization after RSV infection affects the imbalanced differentiation of T cells associated with the formation of AHR.

Prospect of AMs as targets for the treatment of AHR-related viral infection

Immunomodulatory therapies targeting AMs have multiple potential sites of action during a viral infection of the respiratory tract. In the case of rhinovirus infection, AMs can be M1/M2 polarized by GM-CSF/M-CSF or IFN-γ/IL-4 stimulation (86,87). M1/M2 here can likewise be classified by function and origin rather than as a strict dichotomy. In rhinovirus-induced asthma exacerbations, M1-like monocyte-derived macrophages (MDMs) can produce antiviral IFN, while M2-like MDMs significantly enhance the production of Th2-type chemokines (88); MDMs are commonly classified as RecAMs (89). Furthermore, the inception of rhinovirus-induced AHR may share analogous pathways with RSV-induced AHR in adaptive immunity, for example the synergistic interactions between Th2 and Th17 immune responses, in which released cytokines (including but not limited to IL-33, IL-13, and IL-17A) mediate eosinophilic and neutrophilic aggregation, jointly inducing AHR (90,91). 
After inflammation is controlled, AHR is often characterized by eosinophilic AHR driven by immune-memory-mediated Th2-like cytokines (IL-5 and IL-13) (92). Whether associated with viral infections or the inflammatory cascade, immunomodulatory therapies targeting AMs are quite promising. In the case of homeostasis or M-CSF stimulation, AMs produce anti-inflammatory factors such as IL-10, which result in tissue repair and remodeling similar to M2-like functions (93,94). Clinical studies of GM-CSF and its receptors are relatively numerous (95); for instance, the outcomes of severe COVID-19 patients who received a single intravenous dose of mavrilimumab to inhibit GM-CSF signaling were relatively better than those of controls (96). However, most of the preclinical research models that inhibit GM-CSF signaling to control inflammation have been used in adults and few in infants. Therefore, for RSV infection in infants, a large amount of experimental data is required to prove that GM-CSF and M-CSF signals can be used to target AM polarization. Considering that AM functions in different microenvironments can be reversed, caution is necessary when using cytokines such as M-CSF to promote the proliferation and polarization of AMs. In homeostasis and convalescence, most AMs are RAMs with M2-like characteristics. It may be possible to obtain RAM-like cells in vitro from embryonic liver cells, which have been reported to have functions similar to primary RAMs (97,98). This might be clinically applied in alveolar lavage therapy (replenishing RAMs) to promote lung repair. In addition, in intercellular signaling, AMs, as APCs, can regulate the type of immune response that follows through Notch signaling. As discussed above, upregulating Dll4 and Jagged-2 and blocking or downregulating Jagged-1 may inhibit the production of Th2- and Th17-like cytokines and promote Treg cell differentiation. 
The desired scenario is to increase virus clearance while maintaining the stability of the lung microenvironment to avoid excessive immune damage. Further studies may be considered from the perspective of IL-10 modulating the adaptive immune response (99, 100). There are currently reports of a hydrogel-based approach to deliver IL-10 locally to the lung without bleeding or other complications (101). This may be a promising clinical treatment strategy. Conclusion In conclusion, RSV infection can affect the polarization of AMs in a variety of ways. At different stages, AMs can regulate the differentiation of T cells by expressing different cytokines to maintain a moderate inflammatory response and homeostasis (102, 103). AMs manifest M1-like functions, performing proinflammatory roles during the early phase of RSV infection, and gradually shift toward M2. Immunomodulatory therapy targeting AMs is a potential direction for preventing wheezing associated with RSV infection. Author contributions YW and DZ conceptualized the study design. YW, JZ, XW, PY, and DZ wrote the initial drafts of the manuscript. XW and PY revised the text and participated in the modification of the diagram. All authors contributed to the article and approved the submitted version. Funding This study was funded by the National Natural Science Foundation of China (81670007). This paper was supported by the General Project of Hubei Provincial Health Committee (WJ2021M173).
Dynamic Evolution of Bacterial Ligand Recognition by Formyl Peptide Receptors Abstract The detection of invasive pathogens is critical for host immune defense. Cell surface receptors play a key role in the recognition of diverse microbe-associated molecules, triggering leukocyte recruitment, phagocytosis, release of antimicrobial compounds, and cytokine production. The intense evolutionary forces acting on innate immune receptor genes have contributed to their rapid diversification across plants and animals. However, the functional consequences of immune receptor divergence are often unclear. Formyl peptide receptors (FPRs) comprise a family of animal G protein–coupled receptors which are activated in response to a variety of ligands including formylated bacterial peptides, pathogen virulence factors, and host-derived antimicrobial peptides. FPR activation in turn promotes inflammatory signaling and leukocyte migration to sites of infection. Here we investigate patterns of gene loss, diversification, and ligand recognition among FPRs in primates and carnivores. We find that FPR1, which plays a critical role in innate immune defense in humans, has been lost in New World primates. Amino acid variation in FPR1 and FPR2 among primates and carnivores is consistent with a history of repeated positive selection acting on extracellular domains involved in ligand recognition. To assess the consequences of FPR divergence on bacterial ligand interactions, we measured binding between primate FPRs and the FPR agonist Staphylococcus aureus enterotoxin B, as well as S. aureus FLIPr-like, an FPR inhibitor. We found that few rapidly evolving sites in primate FPRs are sufficient to modulate recognition of bacterial proteins, demonstrating how natural selection may serve to tune FPR activation in response to diverse microbial ligands. 
Introduction Formyl peptide receptors (FPRs) encompass a family of vertebrate G protein-coupled receptors (GPCRs) that play crucial roles in the recruitment and activation of leukocytes during infection (Kim et al. 2009; Kretschmer et al. 2010; Bloes et al. 2015; Bufe et al. 2015; Weiß and Kretschmer 2018; Leslie et al. 2020). FPRs were originally identified when researchers observed leukocyte migration toward N-formylated peptides, which are present in bacterial and mitochondrial proteins but not eukaryotic nuclear-encoded proteins (Schiffmann et al. 1975). This led to the discovery of FPR1 as a host receptor which detects formylated peptides (Pike et al. 1980; Zigmond 1981). Since then, additional microbial and host-derived ligands have been identified for specific FPR homologs. Of the three FPRs in humans, each has been shown to possess a unique ligand-binding profile with a range of downstream responses (Karlsson et al. 2009; Kretschmer et al. 2010; Schepetkin et al. 2014). For example, recognition of lipoxin A by FPR2 leads to the suppression of inflammatory signaling, whereas binding of bacteria-specific formylated peptides by FPR1 results in induction of the inflammatory response and cell chemotaxis toward the ligand source (Le et al. 2002; John et al. 2007; Schepetkin et al. 2014). Upon FPR activation in neutrophils, these crucial innate immune cells contribute to pathogen clearance through phagocytosis, release of toxic granule molecules, and production of reactive oxygen and nitrogen species (Önnheim et al. 2014; Bufe et al. 2015). Neutrophils constitute roughly 50% of circulating bloodstream leukocytes and are capable of detecting nanomolar concentrations of pathogen-derived peptides via FPR1 and FPR2 (Le et al. 2002; Fu et al. 2006). Natural killer cells, monocytes, and macrophages also express high levels of FPRs, which contribute to activation and chemotaxis of these immune cell types (Crouser et al. 2009; Kim et al. 2009; Leslie et al. 2020).
Immune receptors are under persistent selective pressure to detect an array of rapidly evolving microbial ligands (Daugherty and Malik 2012; Aleru and Barber 2020). Beneficial mutations that enhance immune responses are expected to spread rapidly in host populations via positive selection. Previous studies have detected signatures of positive selection in FPR1 and FPR2 in the mammalian lineage through observation of elevated rates of nonsynonymous to synonymous substitutions (dN/dS) at several codon sites (Muto et al. 2015). Many other proteins involved in host defense against pathogens, including Toll-like receptors (TLRs), major histocompatibility complex (MHC) genes, and transferrin family genes, have been subject to repeated positive or balancing selection during mammalian evolution (Hughes and Nei 1989; Sawyer et al. 2005; Barreiro et al. 2009; Barber and Elde 2014; Barber et al. 2016; Levin and Malik 2017). FPRs play a central role in innate immunity, and heightened molecular signatures of positive selection suggest sequence variation may play an important functional role in immunological adaptation in mammals. In the present study, we integrate genetic and experimental approaches to investigate the consequences of FPR variation between primate and carnivore species. Our findings indicate that rapid evolution of FPR orthologs between closely related species can have major impacts on recognition of distinct pathogen ligands.
Loss of FPR1 in New World Primates To begin to investigate the consequences of FPR evolution on immune functions, we compiled a data set of FPR homologs from various primate and carnivore species. Unexpectedly, we found evidence for a loss of FPR1 expression in New World primates, demonstrated by a lack of FPR1 cDNA detectable in whole blood, brain, lung, and other RNA-Seq data from AceView (Thierry-Mieg and Thierry-Mieg 2006) (fig. 1). This lack of expression suggested that the FPR1 gene may have been downregulated, lost, or pseudogenized in New World monkeys. Further investigation identified pseudogenes as well as a lack of annotated or homologous FPR1 New World monkey genes in the NCBI database. These incomplete gene sequences nonetheless shared sequence identity with related FPR1 genes (fig. 2). To determine whether the lack of FPR1 expression in New World monkeys reflected gene degradation or unexpressed/degraded mRNA, we scanned available New World monkey genomes for regions of similarity to FPR1 genes using a BLAT search, both manually and with the bioinformatics tool GAMMA (Stanton et al. 2021). We tested for the presence of predicted exons using the GENSCAN tool (Burge and Karlin 1997) but failed to identify exons for any of the gene regions in New World monkeys containing significant similarity to marmoset FPR1. We aligned regions identified as putative pseudogenes in New World monkeys to the human FPR1 reference and the marmoset FPR1 pseudogene sequences (which are available as high-coverage, well-annotated entries in the NCBI database). Sapajus apella, Cebus imitator, Saimiri boliviensis, and Aotus nancymaae have substantial similarity to FPR1 at syntenic loci in their genomes (adjacent to the FPR2 and FPR3 genes on chromosome 19) but lack one or more features of a functional gene (fig. 2A and B). The S. boliviensis pseudogene is the most striking, as this region has the least conservation (77.5% identity to marmoset FPR1 in a 271 nucleotide region located on the plus strand adjacent to the FPR2 and FPR3 genes on chromosome 19), lacks evidence of a start or stop codon, and lacks significant sequence identity to marmoset FPR1 outside of the 271 nucleotide region. It should be noted that each of these genomes possesses different levels of coverage and/or one or more builds. However, S. boliviensis, A. nancymaae, and Callithrix jacchus have multiple builds with high coverage (S. boliviensis: 2 builds, most recent 111× coverage; A. nancymaae: 4 builds, most recent 132×; C. jacchus: 11 builds, most recent 40× coverage). We probed the genomes listed against the gelada FPR1 nucleotide sequence obtained from NCBI with GAMMA, using an identity threshold of 50%, and obtained no hits for any of the primate genomes tested. Collectively, these findings indicate that the FPR1 immune receptor has been lost in New World primates. Rapid Divergence of FPR Ligand-Binding Domains in Primates and Carnivores To test whether sequence divergence in FPRs may be driven by natural selection, we performed phylogenetic analysis by maximum likelihood (PAML) on publicly available primate and carnivore sequences. Primate and carnivore FPR trimmed gene sets were analyzed using codeml from the PAML package (Yang 2007), and omega values were assessed for statistical significance by chi-squared analysis. Our reasoning for inclusion of carnivores in addition to primates was that Carnivora encodes only a single FPR homolog, FPR2. We were curious whether reduced gene copy number in this clade could alter the strength or patterns of natural selection compared with other taxa. Our analysis revealed evidence for positive selection acting on FPR1 and FPR2 genes in primates, largely consistent with previous studies across mammals, particularly in the extracellular loops which participate in ligand binding (Muto et al.
2015) (fig. 3). Notably, positions 170, 191, and 271 in FPR1 were also identified in this previous study. We performed additional testing for branch-site episodic positive selection using aBSREL (Smith et al. 2015) and found evidence of heightened selection in FPR2 on the branch of the phylogenetic tree leading to the New World monkeys. The sites under positive selection that appeared in carnivores mapped to the N-terminus and the third and fourth extracellular domains, the exact regions that appear to be undergoing selection in primates as well (fig. 3). To determine how these observed evolutionary changes may influence FPR immune functions, we next took an empirical approach to assess interactions between FPR homologs and known microbial ligands. Recognition of Bacterial Ligands by Mammalian FPRs To assess the functional consequences of sequence variation in primate FPRs, we focused on interactions with the pathogenic bacterium Staphylococcus aureus due to its expression of both inhibitors and activators of FPRs (Prat et al. 2009; Kretschmer et al. 2010; Stemerding et al. 2013; Koymans et al. 2017). S. aureus is a Gram-positive bacterium known for its multiplicity of virulence factors, antibiotic resistance, and ability to infect a broad range of mammals including primates and livestock (Thomer et al. 2016; Balasubramanian et al. 2017; Haag et al. 2019; Kwiecinski and Horswill 2020; Cheung et al. 2021). This adaptable microbe colonizes the nares of roughly 30% of humans and is also a major cause of skin and soft tissue infections, bacterial sepsis, pneumonia, and other life-threatening infections (Thomer et al. 2016; Kwiecinski and Horswill 2020; Cheung et al. 2021). S. aureus produces many potent toxins believed to contribute to its virulence, including enterotoxins, superantigens, proteases, leukocidins, and alpha-hemolysin (Tam and Torres 2019). S. aureus evades host immune responses in part through release of protein inhibitors of TLRs, FPRs, and complement receptors (Postma et al. 2004; Prat et al. 2009; Thammavongsa et al. 2013). Strains sampled from wild gorillas, chimpanzees, green monkeys, and colobus monkeys contain the gene for the virulence factor staphylococcal enterotoxin B (SEB), shown to potently activate FPRs (Schaumburg et al. 2012), and produce formylated peptides that can induce cell migration in neutrophils. Staphylococcus aureus also produces a number of molecular inhibitors that directly bind and inactivate FPRs, such as FLIPr and FLIPr-like (Stemerding et al. 2013). We tested binding of SEB or FLIPr proteins fluorescently labeled with fluorescein (fig. 4A and B; supplementary figs. 1 and 2, Supplementary Material online) to human HEK293T cells expressing primate FPRs by flow cytometry. Binding was assessed as the proportion of FITC+ cells after incubation and washing away of unbound labeled protein. Our results for SEB revealed evidence of binding for many of the receptors tested (fig. 4). All of the primate FPR1 proteins were found to bind SEB, whereas human FPR2 was the only FPR2 that bound (fig. 4C-F). The S. aureus inhibitor FLIPr-like did not bind FPR proteins with any clear phylogenetic relationship (fig. 4B-D and F). Most strikingly, bonobo FPR1 bound FLIPr-like similarly to human FPR2, not human FPR1, despite sharing 98% sequence identity with human FPR1. We next considered which FPR amino acid positions were most likely to be responsible for the binding differences observed. Without crystal structures available for FPRs, we generated homology models using AlphaFold2, which has demonstrated accuracy comparable to or better than resolved nuclear magnetic resonance structures, at the single Ångström level, for proteins in solution (Zweckstetter 2021). Our predictions matched the pattern of binding affinity seen in our experimental results, with the strongest binding for human FPR2, followed by bonobo FPR1, and the weakest for human FPR1. Absolute Gibbs free energies from ligand docking experiments are often given less weight than relative values, because the environment where binding occurs strongly influences the precise value of the free energy of the binding interaction, so we used the Glide docking score as a relative measure of binding affinity to validate the likely accuracy of our predicted structures (Friesner et al. 2004). The AlphaFold structures were docked to the FPR extracellular region using the seven amino acids that form the region of the FLIPr and FLIPr-like N-terminal peptide required for its inhibitory activity, using Schrödinger Glide docking software, with the best docking pose in the expected conformation. Because we used a fragment of an intrinsically disordered region of the larger protein, FLIPr-like, we rejected poses where the terminal lysine was not projecting from the pocket and selected the highest binding affinity/lowest Glide score from the docking run. We also ran these same experiments using i-TASSER to predict the structures, with the same docking parameters used in Schrödinger Glide. Both experiments produced similar results.
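The pose-selection rule described above (discard poses whose terminal lysine does not project from the pocket, then keep the lowest Glide score) can be sketched as follows. The `Pose` records and score values are hypothetical illustrations, not output from Glide:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """One docking pose with its Glide score (more negative = stronger predicted binding)."""
    name: str
    glide_score: float
    lysine_exposed: bool  # does the terminal lysine project out of the pocket?

def select_pose(poses):
    """Keep only poses in the expected conformation, then take the lowest Glide score."""
    plausible = [p for p in poses if p.lysine_exposed]
    if not plausible:
        return None
    return min(plausible, key=lambda p: p.glide_score)

# Hypothetical docking run output
poses = [
    Pose("pose1", -6.2, False),  # rejected: lysine buried in the pocket
    Pose("pose2", -5.8, True),
    Pose("pose3", -7.1, True),   # best remaining (lowest) Glide score
]
best = select_pose(poses)
print(best.name, best.glide_score)  # → pose3 -7.1
```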
The results of these experiments suggested that FPR1 in humans may have less propensity for FLIPr-like inhibition compared with bonobo FPR1 and human FPR2. Looking closer at the structures revealed that a site under selection is likely responsible for this difference in activity: position 170 in FPR1 is a methionine in bonobos, resulting in less obstruction of the binding pocket of FPR1 (fig. 5A and B). We found the ligand is predicted to bind much deeper in the pocket than FLIPr-like (supplementary fig. 3, Supplementary Material online), suggesting the difference between bonobo FPR1 and human FPR1 activation by f-MLF might not be affected by the occluding structure formed by site 169/170 in the human FPR1 predicted structure (fig. 5C). Discussion Our analysis suggests that FPR1 function was lost early in the New World monkey lineage. It is possible this gene loss reflects neutral genetic processes due to functional redundancy among FPR genes. Alternatively, FPR1 loss may have been beneficial under certain conditions due to potential fitness costs associated with FPR1 activity. In this regard, FPR activation has been associated with human disease states such as glioblastoma, breast cancer, and inflammatory diseases (Khau et al. 2011; Cussell et al. 2019; Leslie et al. 2020). At present, the evolutionary processes underlying FPR1 loss in New World primates remain unclear. Like other immune receptor families (Barber et al. 2017; Águeda-Pinto et al. 2019; Jacquet et al. 2022), gene duplication and loss have occurred periodically during the evolution of the FPR gene family in mammals, as illustrated by the wide range of copy number across taxa. Studies in rodents have revealed remarkable functional plasticity in FPRs (Dietschi et al. 2017). This plasticity likely explains in part why these genes have undergone repeated duplication and loss. Future work leveraging animal genetics to explore redundancy and costs of FPR function may aid in resolving these questions.
Our results indicate that bonobo FPR1 binds to the S. aureus inhibitor FLIPr-like in a manner similar to human FPR2, even though it shares far more sequence similarity with human FPR1. This suggests a small number or even a single amino acid change could be responsible for this difference in activity. On the other hand, mutations that reduce interactions with pathogen inhibitors may also alter binding to crucial FPR agonists. Future studies leveraging site-directed mutagenesis to assess binding to large sets of FPR ligands could improve our understanding of the pleiotropic consequences of receptor variation within and across species. Notably, we observed the presence of several FPR1 polymorphisms in the human population that are predicted to influence bacterial ligand recognition (supplementary fig. 4, Supplementary Material online). The site which forms a "ridge" in bonobo FPR1 relative to humans, T170M, naturally occurs in the human population, as does an additional T170P polymorphism at low frequency (supplementary fig. 4, Supplementary Material online). The consequences of these human FPR variants for recognition by FLIPr-like and other bacterial inhibitors remain unknown. In addition to influencing receptor activation or repression, rapid evolution of FPRs may have additional consequences related to infectious disease susceptibility. It has recently been shown that FPR1 is recognized by the type 3 secretion system of Yersinia pestis, mediating leukocyte killing during plague infection (Osei-Owusu et al. 2019). This study further demonstrated that the human FPR1 R190W polymorphism is protective against Y. pestis infection. Thus, future studies exploring the functional consequences of FPR evolution would greatly contribute to our understanding of these crucial immune receptors in health and disease.
Phylogenetic Analyses We inferred amino acid sites exhibiting elevated dN/dS using multiple computational methods. Our data set included available nucleotide coding sequences (cDNA) of FPR1 for 18 primate species (human, drill, mangabey, red colobus, black and white colobus, snub-nosed monkey, golden snub-nosed monkey, Sumatran orangutan, Bornean orangutan, gorilla, chimpanzee, bonobo, white-cheeked gibbon, green monkey, crab-eating macaque, pig-tailed macaque, gelada, and olive baboon), with areas of ambiguity and stop codons removed. A gene tree for FPR paralogs was generated with phylogenetics by maximum likelihood (PhyML) with 1,000 bootstraps. Potential sites under positive selection were detected using the PAML package (Yang 2007), which detects signs of positive selection from the frequency of nonsynonymous/synonymous amino acid substitutions at each site (ω = dN/dS) based on maximum likelihood. The additional computational methods MEME (Murrell et al. 2012) and FUBAR (Murrell et al. 2013) from the Datamonkey adaptive evolution server were cross-referenced, and sites that appeared in more than one analysis with high confidence (P < 0.01) were included. aBSREL analysis, which tests for branch-site episodic selection, was also performed (Smith et al. 2015).
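The chi-squared assessment of omega values from codeml follows a standard likelihood ratio test between nested site models (e.g., M7 vs. M8, which adds two parameters). A minimal sketch of that test, with illustrative log-likelihood values that are not from this study:

```python
from scipy.stats import chi2

def lrt_pvalue(lnL_null, lnL_alt, df):
    """Likelihood ratio test: 2*(lnL_alt - lnL_null) ~ chi2(df) under the null model."""
    stat = 2.0 * (lnL_alt - lnL_null)
    return stat, chi2.sf(stat, df)

# Hypothetical codeml log-likelihoods: M7 (null, no omega > 1) vs. M8 (allows omega > 1).
# M8 has two extra parameters, giving df = 2.
stat, p = lrt_pvalue(lnL_null=-5210.4, lnL_alt=-5198.7, df=2)
print(f"2*dlnL = {stat:.2f}, p = {p:.2e}")
```

A significant p-value rejects the null model and supports a class of sites with omega > 1, after which empirical Bayes site classes identify the specific codons.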
Cloning and Lentiviral Transduction of FPRs in HEK293T Cells FPR1 genes for human, bonobo, gelada, and red colobus and FPR2 genes for human, capuchin, dog, and cat were cloned from cDNA (human FPR1 and FPR2), synthesized by Genewiz (gelada and red colobus), or synthesized as gBlocks by IDT (capuchin, dog, and cat), including a Kozak sequence and a C-terminal Flag tag. DNA fragments were subsequently cloned into the pBABE lentiviral vector using SLIC or Gibson cloning methods. Expression was verified in cell lines by Western blot using anti-Flag tag antibodies (Monoclonal ANTI-FLAG® M1, Sigma Aldrich #F3040). Surface expression was verified for FPR1 using a Thermo Fisher FPR1 polyclonal antibody (PA5-33534), and cell lines with comparable cell surface expression were used for binding experiments. FITC labeling was performed per the manufacturer's instructions, and Thermo Scientific™ Pierce™ Dye Removal Columns (part number 22858) were used to remove excess dye. Flow Cytometry Cells expressing FPRs from primates (as described above) were counted and suspended at 10^5 cells/ml. 4 μg of FITC-labeled FLIPr-like or SEB protein was incubated with cells in 100 μl sterile phosphate-buffered saline + 100 nM PMSF at 4 °C with nutation for 1 h, washed 3× with 1 ml ice-cold PBS, and analyzed on a SONY SH800 flow cytometer; binding was assessed as the proportion of cells positive for 488 nm (FITC) signal compared with the negative control (nontransfected HEK cells incubated with fluorescently labeled proteins in parallel with test samples). FIG. 1.-Expression of FPR paralogs among simian primates. FPR expression levels in (A) whole blood, (B) brain, and (C) lungs across hominoid (human and chimpanzee), Old World monkey (pig-tailed macaque, crab-eating macaque, baboon, and mangabey), and New World monkey (marmoset, squirrel monkey, and owl monkey) species. Data obtained from the AceView NIHTPR RNA-Seq data set.
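The binding readout in the Flow Cytometry section, the fraction of FITC-positive cells relative to a non-transfected negative control, can be sketched with a simple percentile gate. The percentile cutoff and the simulated fluorescence distributions below are illustrative assumptions, not the authors' exact gating strategy:

```python
import numpy as np

def percent_positive(sample, negative_control, pct=99.5):
    """Gate FITC+ events using a percentile threshold derived from the negative control."""
    threshold = np.percentile(negative_control, pct)
    return 100.0 * np.mean(sample > threshold)

rng = np.random.default_rng(0)
# Simulated fluorescence intensities (arbitrary units, log-normal as is typical for cytometry)
neg = rng.lognormal(mean=2.0, sigma=0.5, size=10_000)  # autofluorescence only
pos = rng.lognormal(mean=4.0, sigma=0.5, size=10_000)  # shifted: labeled protein bound

print(f"negative control:     {percent_positive(neg, neg):.1f}% positive")
print(f"FPR-expressing cells: {percent_positive(pos, neg):.1f}% positive")
```

By construction the negative control sits near (100 − pct)% positive, so a receptor-expressing line scoring far above that baseline indicates binding of the labeled protein.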
FIG. 3.-Evidence of repeated positive selection acting on extracellular domains of primate and carnivore FPRs. Sites in (A) primate FPR1, (B) primate FPR2, and (C) carnivore FPR2 with elevated dN/dS as determined by PAML and HyPhy. Residues of the transmembrane domain are denoted in yellow on the protein diagram, with the majority of high-dN/dS sites (teal) located in the extracellular (top) ligand-binding loops. Species used for phylogenetic analyses are indicated in the phylogenies for each gene. Sites with elevated dN/dS as determined by PAML and HyPhy are denoted with black arrows in the amino acid alignment. FIG. 5.-Rapidly evolving surfaces of FPRs contribute to predicted bacterial ligand binding. (A) i-TASSER-predicted structures for human FPR1, bonobo FPR1, human FPR2, and cat FPR2. Sites exhibiting elevated dN/dS in the extracellular region are indicated in teal. Sites 170 and 190, which are polymorphic in human populations, are also indicated. (B) Multiple sequence alignment for sites under selection at FPR extracellular regions of interest. Divergent sites shown in purple highlight differences between human and bonobo FPR1. Sites under selection are indicated by teal (FPR1) and gray (FPR2) arrows. (C) Predicted structures of FPR1 reveal an occluding structure that may affect binding of FLIPr-like. Site 170, which is variable between humans and bonobos, is highlighted in teal. FIG. 2.-Loss of FPR1 in New World primates. (A) Evidence for inactivating mutations in FPR1 within the New World monkey lineage. Saimiri boliviensis displayed the most significant degeneration in this region, encoding only a short 271 nucleotide sequence with significant FPR1 similarity and no apparent start or stop codons. (B) Conserved synteny of the FPR gene cluster across New World monkeys.
Establishing a Receptor Binding Assay for Ciguatoxins: Challenges, Assay Performance and Application Ciguatera, a global issue, lacks adequate capacity for ciguatoxin analysis in most affected countries. The Caribbean region, known for its endemic ciguatera and being home to a majority of the global small island developing states, particularly needs established methods for ciguatoxin detection in seafood and the environment. The radioligand receptor binding assay (r-RBA) is among the in vitro bioassays currently used for ciguatoxin analysis; however, similarly to the other chemical-based or bioassays that have been developed, it faces challenges due to limited standards and interlaboratory comparisons. This work presents a single laboratory validation of an r-RBA developed in a Cuban laboratory while characterizing the performance of the liquid scintillation counter instrument as a key external parameter. The results obtained show the assay is precise, accurate and robust, confirming its potential as a routine screening method for the detection and quantification of ciguatoxins. The new method will aid in identifying high-risk ciguatoxic fish in Cuba and the Caribbean region, supporting monitoring and scientific management of ciguatera and the development of early warning systems to enhance food safety and food security, and promote fair trade fisheries. 
Introduction Radioligand receptor binding assays (r-RBA) coupled to scintillation technology have been developed and utilized since the 1990s to detect various classes of marine algal toxins [1][2][3]. These pharmacological assays rely on the binding affinity of toxins to specific biological receptors, allowing the measurement of their combined toxic potency in complex samples containing multiple related toxin congeners [4][5][6]. The introduction of a high-throughput microplate format later provided a convenient way for monitoring purposes and potential regulatory application, particularly when analyzing large numbers of samples within a short timeframe [7,8]. The principle of the r-RBA relies on the competition between an unlabeled toxin and a tritiated toxin for a finite number of available receptor sites provided by a brain membrane preparation. The binding of the radioligand to the receptor sites is proportionally reduced in the presence of increasing concentrations of the unlabeled toxin, and it is evaluated by the measurement of tritium, a low-energy beta emitter (18.6 keV), using the liquid scintillation counting method. A sigmoidal competition curve can then be constructed by measuring the concentration of the radioligand receptor complex across a range of concentrations of toxin standard. Dose-response curve fitting is performed using a four-parameter logistic fit with a variable slope or Hill equation [9], from which the amount of toxin in an unknown sample is calculated. Figure 1 shows a schematic
representation of this assay. Plate wells on the left depict three scenarios (suitable for purified toxin, reference material, or toxin-contaminated sample extract) ranging from absence or low concentrations of unlabeled toxins to the presence of saturating concentrations of unlabeled toxins. Labeled toxin bound to receptors on the y-axis is expressed in counts per minute (cpm). The microplate r-RBA developed for paralytic shellfish toxins (PST), algal-derived neurotoxins targeting voltage-gated sodium channels, has been an AOAC Official Method of Analysis since 2012 [10] and is recognized as an approved method for PST monitoring in shellfish by the Food and Drug Administration in the USA. A receptor binding assay was also developed for the brevetoxin (BTX) and ciguatoxin (CTX) groups of algal toxins [3,11], neurotoxins also targeting the same protein as the PSTs.
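The four-parameter logistic (variable-slope Hill) fit used to read toxin amounts off the competition curve can be sketched with `scipy.optimize.curve_fit`. The simulated cpm values, concentration range, and the 1.5 nM IC50 below are illustrative assumptions, not data from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, top, bottom, ic50, hill):
    """Four-parameter logistic (variable-slope Hill equation) for competition binding."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Simulated competition data: bound radioligand (cpm) vs. unlabeled toxin concentration (nM).
conc = np.logspace(-2, 2, 9)                       # hypothetical standard dilution series
true = four_pl(conc, top=4000, bottom=200, ic50=1.5, hill=1.0)
rng = np.random.default_rng(1)
cpm = true + rng.normal(0, 50, conc.size)          # counting noise

popt, _ = curve_fit(four_pl, conc, cpm,
                    p0=[4000, 200, 1.0, 1.0],
                    bounds=([0, 0, 1e-6, 0.1], [1e5, 1e5, 1e3, 5]))
top, bottom, ic50, hill = popt
print(f"fitted IC50 = {ic50:.2f} nM")
```

Once the standard curve is fit, the toxin concentration in an unknown extract is interpolated from its measured cpm by inverting the fitted equation.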
The r-RBA for CTX analysis, though assessed through single-laboratory validation [12], remains to be validated through interlaboratory studies. In general, the validation of CTX analytical methods, including the r-RBA and the other chemical-based methods or bioassays that have been developed to detect and quantify CTX in food matrices [13][14][15][16], has been hampered mostly by the scarcity of standards and reference materials (in particular for CTX from the Atlantic and Caribbean region) and limited interlaboratory comparison [17,18]. As a result, no legally enforceable guidance has yet been provided in terms of analytical methods or permissible levels. CTXs are responsible for ciguatera poisoning, a foodborne intoxication nowadays considered the most common non-bacterial fish intoxication. It is caused by the consumption of fish or invertebrates that have accumulated CTX via food web transfer [17]. Globally, only a limited number of countries have the capacity to monitor CTX. Some EU countries are screening high-risk fish species for CTX contamination from areas prone to ciguatera [19] as a management option. However, small island developing states (SIDS) in the Caribbean, which are the most directly impacted by ciguatera, lack capacity for specific CTX analysis. In these ciguatera-endemic countries, management strategies mainly involve banning high-risk fish species, a measure that is generally applied in a systematic manner even though the geographic distribution of risky areas within endemic regions and the toxicity within species vary significantly. As a result, fish that could safely be consumed are removed from the market, thereby reducing access to high-quality food and hence adversely affecting the food security, nutrition, and livelihood of local communities.
The establishment of routine monitoring for ciguatoxin in seafood and the environment has the potential to significantly enhance the sustainable use of marine resources through better characterization of ciguatera risk and the identification of safe fishing areas, or the revision of existing risky-species lists. With that objective, establishing validation processes and improving the robustness of biotoxin monitoring tools are essential steps. This study describes a single-lab validation of the r-RBA in a Cuban laboratory, including the assessment of external key parameters such as the beta counter.

In this work, a more accessible microplate beta counter option is used, equipped with liquid scintillation technology (Plate CHAMELEON V Multilabel Counter, Hidex, Turku, Finland). The choice was based on its lower cost and reduced complexity compared to the microplate beta counters commonly used in r-RBA applications (namely the MicroBeta and TopCount, PerkinElmer, Waltham, MA, USA). The methodology used in previous publications [12,20] was first adapted to suit the specific characteristics of this instrument. Subsequently, the performance of the assay was characterized by analyzing the critical parameters of the calibration curve. Twenty-four samples of high-risk fish species captured in a ciguatera hot spot in Cuba were extracted and analyzed using the newly established RBA.
Optimization of Counting Measurements on the CHAMELEON V Scintillation Counter

The counting background variability was first assessed by counting twenty-five random wells of a 96-well filter microplate (without the addition of scintillant) for one minute (as previously recommended) and for two minutes, because of the high background announced by the supplier. There were significant differences in cpm mean values between the two counting times (unpaired t test, p < 0.0001). A very high variability was observed when counting for one minute, with extreme values of 24 and 95 cpm (Figure 2A). Averaged instrument background values ranged between 52.4 and 72.8 cpm when counting for two minutes over 10 plate readings, with a global mean of 63.9 ± 5.9 cpm (Figure 2B). The relative standard deviation within each plate reading ranged between 8.01 and 10.7%, with a mean value of 9.3%. Counting for two minutes improved the repeatability of the measures and was thus chosen for the development of the RBA.
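The background statistics above (global mean, per-plate SD and relative standard deviation) can be reproduced with a minimal sketch; the cpm readings below are hypothetical illustration values, not the study's raw data:

```python
import statistics

def plate_stats(cpm):
    """Mean, sample SD and relative standard deviation (%) for one plate reading."""
    mean = statistics.mean(cpm)
    sd = statistics.stdev(cpm)
    return mean, sd, 100 * sd / mean

# Hypothetical two-minute background counts for 25 wells of one plate.
background_cpm = [58, 61, 70, 55, 66, 72, 59, 63, 68, 57, 65, 71, 60,
                  62, 69, 56, 64, 73, 58, 67, 61, 66, 59, 70, 62]

mean, sd, rsd = plate_stats(background_cpm)
print(f"mean = {mean:.1f} cpm, SD = {sd:.1f}, RSD = {rsd:.1f}%")
```

With per-plate means tracked over repeated readings, the same helper yields the control-chart values shown in Figure 2B.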
Three different scintillant cocktails, MaxiLight, Optiphase and AquaLight, were compared while assessing the counting efficiency of the instrument, using 96-well black/white Isoplates and 100 µL of cocktail volume. There were significant differences in cpm values among the three cocktails tested (Kruskal-Wallis test; p = 0.0002). Total counts increased from about 1000 to 4000 cpm when using Optiphase versus MaxiLight. Using MaxiLight, the counting efficiency on the 96-well black/white Isoplate reached 30% under the conditions tested. Consequently, MaxiLight was chosen as the scintillation cocktail to use in the RBA protocol.

There was no significant difference in cpm values between the addition of 50 and 30 µL of MaxiLight cocktail in the 96-well filter microplate (Mann-Whitney test; p = 0.2). The volume was therefore reduced from that of the previously published protocols, i.e., from 50 µL to 30 µL, to reduce the volume of liquid radioactive waste.

Receptor Binding Assay Performance

The precision of the data, expressed by the relative standard deviation (RSD) among the triplicate measurements for the standards and QC data points, averaged 6.3% (±3.7, n = 185). Individual Hill slope and EC50 values calculated from 17 experiments were in the range encompassed by 2SD (i.e., no outlier was identified) and were visually illustrated using control charts (Figure 3A,B). Additionally, Hill slope and EC50 values complied with the quality control criteria, with a requirement of 20% variability around the −1 value for the Hill slope and 30% variability around the calculated mean for the EC50.

Out of the 17 QC individual values, two were outside the range encompassed by 2SD. The same two points were either below 2.1 nM or above 3.9 nM, values that delimit the ±30% range around the expected value of 3 nM (Figure 3E). Hence, these two outliers and, according to the assay acceptance rule, the corresponding curve data were removed before computation of the mean values.
The variability within sample triplicate measurements averaged 7.4% (±4.1, n = 126). Upper (EC80) and lower (EC20) limits were quantified for each individual experiment and used to determine the quantification range. Accordingly, samples falling below EC80 were reported as below the LOQ. Quantified cpm values for samples below (RBA+) and above (RBA−) the individual EC80, expressed in cpm, are presented as Supplementary Materials.

Toxicity Analysis of Fish Samples

The developed r-RBA protocol was applied to detect and quantify CTX in 24 fish captured in an area known to be at risk for ciguatera. Eighteen samples (75%) presented toxin levels higher than the LOQ of the assay and were classified as RBA+ (Table 1). The concentrations of ciguatoxins in the analyzed individuals ranged from 2.8 to 8.3 ng BTX-3 equivalents g−1 fish. All four species collected in the study, which were identified as high risk according to the Cuban regulation [21], exhibited positive results in the RBA (RBA+) (Table 1). The highest toxicity value corresponded to a specimen of Mycteroperca venenosa (Linnaeus, 1758).
Discussion

This work presents the development and operation of the ciguatoxin receptor binding assay in a laboratory located in Cuba. It evaluates the assay reproducibility and repeatability while also assessing its robustness, with the focus of promoting the effective transfer, adoption and application of a reliable and accessible method for CTX analysis in SIDS and other lower-economy countries [22] threatened by ciguatera poisoning. The r-RBA is a specific and sensitive functional bioassay that fulfils the requirements of high-throughput and quantitative analysis. Even if the use of radioactivity could be perceived as a limitation, the routine use of the r-RBA is relevant in those laboratories where the instruments are available and where the guidelines of a radiation protection program can be met. In addition, the microplate format of the r-RBA is versatile and can be used as an AOAC-validated method for paralytic shellfish toxin testing [10]. The approach taken in this study was to use a more accessible microplate beta counter option, with lower cost and reduced complexity, and to assess assay performance by analyzing and, when possible, comparing critical parameters with those of the commonly used beta counters [12].

The Plate CHAMELEON V Scintillation Counter is not equipped with an automatic plate transporter; therefore, when counting time per well is set at two minutes, the throughput of the assay is lower in comparison to the PerkinElmer counters, MicroBeta and TopCount. Using MicroBeta, for example, the r-RBA requires only three hours to process a full plate, and one analyst can run an estimated 32 samples per day, with up to eight samples per plate run in triplicate at two dilutions [12]. In contrast, a total of 16 samples per day can be analyzed with the Hidex CHAMELEON V counter.
As stated in the instrument specifications, the background in the CHAMELEON V counter was less than 100 cpm, a relatively high value in comparison to the values frequently found in the MicroBeta counter of less than 10 cpm. The RSD within each plate reading was approximately 10%, which makes background subtraction unnecessary while processing r-RBA data, taking into account that the accepted variability among replicated cpm values is 30% [10]. Some evidence suggests that the natural background in this instrument is affected by external factors such as temperature and illumination. Although this was not experimentally tested, and considering that the user manual does not give information about optimal environmental conditions for proper instrument functioning, we propose to include the assessment of background variability as a permanent control in the experimental protocol before counting an assay plate.

The minimum binding value of the sigmoidal curve obtained with the CHAMELEON V counter reflects the combination of background and non-specific binding. The averaged background value obtained in this study allows the estimation of non-specific binding at around 70 cpm. This non-specific binding value is comparable to what was found in a previous study [12], where a standard of BTX-3 was used to estimate non-specific binding using a low-background MicroBeta counter.
The maximum binding observed in this study, also affected by background levels, exhibited a higher variability compared to the findings reported by [12]. These authors found no differences in determining the upper limit of quantification of the r-RBA using the standard deviation associated with maximum binding (max − 10 × SD of max) and using the EC80. In the current study, the EC80 and EC20 were used to delimit the quantification range in each experiment. The importance of monitoring the maximum as an assay performance acceptance criterion was raised for an RBA protocol developed for saxitoxins. The authors noted [23] that variability in the maximum can pose challenges in the RBA, occasionally occurring when one or more of the lowest three standards are not within control limits.

The results presented here show the r-RBA is precise and accurate when using a 96-well microplate format with a brevetoxin standard, confirming its potential as a routine screening method for the detection and quantification of ciguatoxins. The precision of standards, QC and sample data, expressed by the RSD among the triplicate measurements, was well below the accepted cut-off value of 30% [10]. All individual Hill slope values were in the range encompassed by 20% of the expected theoretical value of −1, that is, between −0.8 and −1.2 [10]. The variability of EC50 was below the accepted cut-off value of 30%, and was lower than the value obtained by [12]. The EC50 mean value was in the range obtained by other authors [24,25] using different RBA protocols, which evidences the robustness of the assay. The mean Hill slope, EC50 and QC values obtained in the study can be used as reference values for the assessment of assay performance when samples are analyzed.
The results obtained demonstrate that this modified r-RBA protocol can be used for the routine analysis of CTX in fish samples. Additionally, the use of BTX as a standard offers advantages in terms of technique sustainability, given its commercial availability and greater cost-effectiveness (300 to 400 times lower cost) compared to CTX standards. It is worth noting that currently the only commercially available standards for CTX are of the Pacific types (e.g., P-CTX3C, P-CTX1B), while there are no commercial standards available for the Caribbean type of CTXs (C-CTX).

The conversion of analytical results from BTX equivalents (or P-CTX3C equivalents) to C-CTX remains a challenging task due to the limited information available regarding toxicity equivalency factors for both BTX and CTX analogues. To our knowledge, only one published study by [24] has provided insights into the differences between BTX-3 and C-CTX1 affinity for voltage-gated sodium channels. This study reveals an eight-fold higher affinity for C-CTX1 compared to BTX-3, with reported EC50 values of 0.34 and 2.77 ng mL−1 in the RBA, respectively. Consequently, considering this information, the concentration values quantified in the present study would be eight times lower if expressed in C-CTX1 equivalents.

Ciguatoxin concentrations in the analyzed fish individuals, expressed as BTX-3 equivalents, ranged from 2.8 to 8.3 ng g−1. It is now well established that fish in Cuba may accumulate CTX [26]; therefore, the inhibition of tritiated brevetoxin by the extracts in the r-RBA is attributed to the presence of ciguatoxin. Although the possibility of BTX presence should not be ruled out, as these toxins can also occur in marine fish [27], to our knowledge, no blooms of Karenia brevis or other brevetoxin-producing microalgae (such as Chatonella spp., Heterosigma akashiwo and Fibrocapsa japonica [28]) have been reported in the sampled area.
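A sketch of this unit conversion, using the ratio of the RBA EC50 values from [24] as the toxicity equivalency assumption (the exact factor, rather than the rounded "eight-fold", is our assumption here):

```python
# EC50 values in the RBA reported by [24], in ng/mL
EC50_CCTX1 = 0.34
EC50_BTX3 = 2.77

def btx3_eq_to_cctx1_eq(conc_btx3_eq):
    """Convert a concentration in BTX-3 equivalents (ng/g) into
    C-CTX1 equivalents via the ratio of RBA EC50 values."""
    return conc_btx3_eq * EC50_CCTX1 / EC50_BTX3

print(round(btx3_eq_to_cctx1_eq(2.8), 2))  # 0.34 ng/g
print(round(btx3_eq_to_cctx1_eq(8.3), 2))  # 1.02 ng/g
```

Applied to the 2.8-8.3 ng g−1 range above, this reproduces the 0.34-1.02 ng g−1 C-CTX1-equivalent range reported in the next section.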
Applying the conversion factor mentioned above, the estimated concentrations in equivalents of C-CTX1 range between 0.34 and 1.02 ng g−1. Although these concentration values are lower than those previously reported for this study area (1.5 to 8 ng g−1 in equivalents of CTX3C or C-CTX1 [26]), they are above the limit recommended for C-CTX1 of 0.1 ng g−1 of fish (FDA 2011), therefore anticipating a potential threat to human health. However, the actual risk and limits in seafood for Caribbean ciguatoxins still need to be determined through a health risk assessment. The implementation of effective notification systems for ciguatera outbreaks that allow access to fish samples implicated in ciguatera poisoning in this region could help in achieving this goal.

In order to improve ciguatoxin detection through the r-RBA, continued optimization of this approach is still required to enhance its accuracy and reliability in the future. Potential avenues for improvement include a better characterization of the fish matrix influence, estimation of measurement uncertainty and participation in intercomparison exercises. Additionally, it is highly recommended to validate the use of BTX as a reference standard in the r-RBA to allow its unequivocal conversion into equivalents of C-CTX1, for which the availability of certified standards remains essential.

Having the r-RBA established will help provide the scientific data needed to support the list of high-risk ciguatoxic fish included in the Cuban fish regulation. The monitoring and scientific management of ciguatera in Cuba can now be considered, including the development of early warning systems to support food security and promote fair-trade fisheries.
Materials and Methods

To identify potential adjustments to the protocol, the first task consisted of summarizing the technical specifications of the instrument in comparison to those stated for the commonly used microplate scintillation counters MicroBeta and TopCount. These counters are among the most common microplate scintillation counters that have traditionally been used to quantify algal toxins through the r-RBA [3,10,23,29]. Providing coincidence counting through two photomultiplier tubes (PMT) positioned above and below the sample that simultaneously detect the signal, MicroBeta ensures high efficiency and extremely low background for a variety of radionuclides, including tritium [30]. TopCount counters use a single PMT and feature time-resolved counting technology, a method that discriminates between background and true counts and results in superior sensitivity, high signal-to-noise ratios and virtually crosstalk-free counting when used with opaque plates [31]. Background values as low as 20 counts per minute (cpm) and a high counting efficiency (more than 45%) are usually achieved with both single- and dual-PMT counters, thus making background assessment and eventual average subtraction unnecessary. The specifications of the MicroBeta and TopCount counters are provided in Table 2.

The Plate CHAMELEON V Multilabel Counter supports liquid scintillation, fluorescence and luminescence technologies, with a single PMT located immediately above the sample well [32]. However, time-resolved technology is not available in scintillation mode; therefore, high-performance counting on opaque plates, such as the filter plates used in the RBA, is not guaranteed. As per manufacturer specifications, background is relatively high, with values stated as less than 100 cpm. Counting efficiency for tritium reaches a maximum between 30% and 50% depending on the plates used [32], values comparable to those announced for the PerkinElmer counters MicroBeta and TopCount (Table 2).
Due to the lack of information available in the scientific literature on the Plate CHAMELEON V Multilabel Counter, the first step was to characterize counting performance based on the reported manufacturer specifications. Counting time, selection of the most suitable scintillation cocktail and the volume of cocktail to use in the assay were sequentially tested.

Counting Performance Assessment

The background of the beta counter was assessed using a 96-well filter microplate. Twenty-five random wells were counted for one and two minutes each over ten plate readings on ten separate days. Because this counter has only one detector, additional counting times were not assessed. For example, it would take almost five hours to read a full assay plate using three-minute counts, which would halve the throughput of the assay.

The instrument counting efficiency was assessed using MaxiLight cocktail (Hidex, Turku, Finland) and a black/white Isoplate. MaxiLight is a lipophilic cocktail with the highest counting efficiency for organic and non-aqueous samples, dry samples and filters. Two other available cocktails (Optiphase and AquaLight) were tested and compared as well. Optiphase (Hidex, Turku, Finland) is specifically produced for micro-volume counting and, as such, is a commonly used cocktail in r-RBA applications. AquaLight (Hidex, Turku, Finland) is a general-purpose scintillation cocktail capable of handling a broad range of solutes, combining high counting efficiency and low background. A solution of tritiated brevetoxin (3H-PbTx-3), similar to the one used as a working solution in the r-RBA protocol (6.45 kBq mL−1), was used as a tracer. An amount of 35 µL of this solution (225.8 Bq, equivalent to 13,545 disintegrations per minute, dpm) and 100 µL of each cocktail were added to a 96-well black/white Isoplate in four replicates and counted for two minutes.
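The activity-to-dpm conversion and the resulting counting efficiency can be checked with a short sketch; the ~4000 cpm figure is taken from the MaxiLight result reported above, and the helper names are ours:

```python
def becquerel_to_dpm(bq):
    # 1 Bq = 1 disintegration per second = 60 disintegrations per minute
    return bq * 60

def counting_efficiency_pct(cpm, dpm):
    # Fraction of true disintegrations registered as counts, in percent
    return 100 * cpm / dpm

# 35 uL of the 6.45 kBq/mL tracer working solution
activity_bq = 0.035 * 6450            # -> 225.75 Bq
dpm = becquerel_to_dpm(activity_bq)   # -> 13,545 dpm, as stated in the text
print(round(counting_efficiency_pct(4000, dpm), 1))  # ~29.5%, i.e. about 30%
```

This is consistent with the ~30% efficiency reported for MaxiLight on the black/white Isoplate.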
Two different volumes of cocktail were tested on the 96-well filter microplate (MultiScreen HTS FB Filter Plate MSFBN6B50, Millipore, Burlington, MA, USA): the original protocol's volume of 50 µL, and a reduced volume of 30 µL that adequately covered the well bottom and allowed for easy pipetting of the viscous solution using a multichannel pipettor. Four replicates of 3.5 µL of the tracer solution (22.6 Bq, equivalent to 1354.5 dpm) were added to the wells containing MaxiLight cocktail and counted for two minutes.

Receptor Binding Assay Protocol

Calibration standards and a quality control (QC) of brevetoxin (BTX) were used in this study as per [20]. The r-RBA experimental protocol was the one described in [12] with some modifications. Analytical-grade chemicals and HPLC-grade solvents were used throughout the study. A stock solution of brevetoxin 3 (known as BTX-3 or PbTx-3) provided at 1 µg µL−1 (American Radiolabeled Chemical Inc., St. Louis, MO, USA) was used to prepare the calibration curve standards. Working bulk solutions (ranging from 0.06 ng mL−1 to 6 µg mL−1) were prepared by serial dilution in Phosphate-Buffered Saline Tween 20 (PBST buffer, pH 7.4, Sigma Aldrich, St. Louis, MO, USA) to reach final in-assay concentrations from 0.007 to 700 ng mL−1. Similarly, the QC was prepared to reach a final in-assay concentration of 2.7 ng mL−1 (3 nM). The use of bulk reference dilutions minimizes the pipetting needed for setting up an assay routinely and improves day-to-day repeatability. They were prepared in advance and stored at 4 °C for up to 1 month. A buffer control was run with each BTX-3 standard curve as a negative control parameter that allows inter-assay comparison. The radiotracer 3H-PbTx-3 was provided at 20 Ci mmol−1 and 0.05 mCi mL−1 (American Radiolabeled Chemicals Inc., St.
Louis, MO, USA). A working solution was prepared containing 8.75 nM of 3H-PbTx-3 (6.45 kBq mL−1) in PBST buffer with bovine serum albumin (PBST/BSA; BSA 1 g L−1) for a final in-well concentration of 1 nM. A 2 mL aliquot of porcine brain membrane homogenate (Sigma Aldrich, St. Louis, MO, USA) was diluted before plating in 24.5 mL of PBST/BSA to yield approximately 0.8 mg mL−1 protein concentration in the assay.

To perform the assay, 35 µL of PBST/BSA was first added to each well in a 96-well microtiter filter plate (MultiScreen HTS FB Filter Plate MSFBN6B50, Millipore, Burlington, MA, USA) to moisten the filter membrane. Then, 35 µL of BTX-3 standards, QC check or sample dilutions (see below) were added in triplicate to the corresponding wells. Last, 35 µL of the 3H-PbTx-3 working solution and 195 µL of brain membrane homogenate were added, in that order, to each well. The plate was then incubated for 1 h at 4 °C before filtration and rinsing twice with 200 µL ice-cold PBST on a MultiScreen HTS vacuum manifold (Millipore, Burlington, MA, USA) system. However, certain modifications were necessary compared to the original protocols due to the absence of a cassette holding the plate in the new counting instrument. Following the second rinse, it was not required to remove the underdrain of the filter plate. Instead, it was directly blotted using lint-free paper towels and sealed underneath with clear sealing tape. For control purposes, 3.5 µL of the working solution, containing an activity of 22.6 Bq (equivalent to 1354.5 dpm), was added to an empty well in each run. After addition of the scintillation cocktail (30 µL of MaxiLight, as determined in this study), the top of the plate was sealed and the plate incubated in the dark for one hour at room temperature. Radioactivity was then counted for two minutes.

Data Analysis

Seventeen BTX-3 calibration curves were run. GraphPad Prism (GraphPad Software, Inc.
version 6.01, La Jolla, CA, USA) was used to generate BTX-3 standard curves and to perform data analysis. The r-RBA quality control included the analysis of assay and sample measurement acceptance criteria as proposed by [10,33] during the validation studies for saxitoxin analysis (Table 3). Assay performance was assessed prior to curve-fitting the data, with verification that the relative standard deviation (RSD) of toxin standard and QC triplicate data was below 30% [10] (Table 3). The curve-derived parameters EC50, Hill slope, maximum binding (top plateau or max) and minimum binding (bottom plateau or min), and the QC of brevetoxin, were used as assay critical control points. For each parameter, Q-Q plots were used to assess the distribution of the associated errors. Data that met the assumptions of normality were then examined for outliers using the standard deviation (SD). Points that were above (mean + 2SD) or below (mean − 2SD) were removed. The Hill slope was checked to be within a variability range of 20% around a theoretical value of −1, corresponding to one receptor site in homologous competition experiments [10]. Additionally, the EC50 was checked to be within a variability range of 30% [10] and the QC to be 3 nM BTX-3 (in-well concentration) ±30% of recovery (Table 3).
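The acceptance rules just described can be expressed as a small sketch; the function name and dictionary encoding are ours, while the numeric criteria are those stated above (Hill slope within ±20% of −1, EC50 within ±30% of the mean, QC within 3 nM ±30%, i.e. 2.1-3.9 nM):

```python
import statistics

def assay_acceptance(hill_slope, ec50_values, qc_nM):
    """Flag the r-RBA assay acceptance criteria described in the text."""
    checks = {}
    # Hill slope: within 20% of the theoretical value of -1
    checks["hill_ok"] = -1.2 <= hill_slope <= -0.8
    # EC50: within 30% variability around the calculated mean
    mean_ec50 = statistics.mean(ec50_values)
    checks["ec50_ok"] = all(abs(v - mean_ec50) <= 0.3 * mean_ec50
                            for v in ec50_values)
    # QC: 3 nM BTX-3 +/- 30% recovery, i.e. 2.1-3.9 nM
    checks["qc_ok"] = 2.1 <= qc_nM <= 3.9
    return checks

print(assay_acceptance(-0.95, [2.4, 2.7, 3.0], 3.1))
```

An assay failing any flag would be excluded before computing mean reference values, as done for the two QC outliers above.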
Following sample data inspection (RSD < 30% of triplicate cpm data), toxin concentrations were estimated against a BTX-3 standard curve. Data were transformed into concentration values using the GraphPad function for interpolating unknowns, with the Hill equation (formula below), within the acceptable upper and lower limits EC80 and EC20, corresponding to 80 and 20% of specific binding, respectively (Figure 2), as defined in [12]. The limit of quantification (LOQ) was calculated using the EC80 at a maximum matrix concentration of 0.6 g tissue equivalent mL−1 in assay, as per [12].

y = min + (max − min) / (1 + 10^((x − log EC50) × Hill slope))

Toxicity Analyses of Fish Samples

The RBA was then tested to determine CTX concentrations in fish captured in Cuba, in an area identified as prone to ciguatera [26]. Fish individuals were kindly provided by a professional fisherman. They were collected by hook and line at 20 m depth, south of Cayo Guano del Este, a reef ridge covered by corals, gorgonians and abundant macroalgae. No endangered or protected species were involved in this study. Species controlled under the Cuban regulation [21] were selected preferentially. Total length (to the nearest cm) and weight (to the nearest g) were recorded. The fish were transported on ice to the laboratory, where they were morphologically identified to species level as described in the Species Identification Guide for Fishery Purposes [34]. Then, they were filleted and stored at −20 °C until ciguatoxin extraction and RBA analysis.

Crude extracts were obtained using the extraction protocol described in [12]. Briefly, tissue samples (5 g) were cooked in a water bath at 70 °C for 15 min and homogenized in acetone to extract soluble compounds. After centrifugation and drying acetone su-
Figure 1. Diagram showing the experimental protocol and data acquisition during a receptor binding assay. (A) Schematic representation of the main experimental steps within one of the 96 wells of the plate. (B) Illustration of a sigmoidal dose-response curve for brevetoxin-3 (BTX-3). Plate wells on the left depict three scenarios (suitable for purified toxin, reference material, or toxin-contaminated sample extract) ranging from absence or low concentrations of unlabeled toxins to the presence of saturating concentrations of unlabeled toxins. Labeled toxin bound to receptors on the y-axis is expressed in counts per minute (cpm).

Figure 2. Assessment of instrument background variability. (A) Pooled instrument background cpm values over 25 wells and 10 plate readings for 1 and 2 min counts. Dark horizontal lines and whiskers represent the global mean and standard deviation over 250 individual values. (B) Control chart showing averaged instrument background (two-minute counts) over 10 plate readings. Whiskers represent SD over 25 individual measurements.

Figure 3. Control charts of the curve-derived parameters Hill slope (A), EC50 (B), max (C), min (D) and internal quality control (E) of the receptor binding assay. Control limits were based on the mean ± 2SD of the 17 data sets after validating the assumption of normality. Control limits for the QC were also based on 30% of 3 nM BTX-3 of the 17 data sets. The arrows in (E) show the outlier QC values.

Table 1. List of the species analyzed in the study. RBA+ indicates values above the limit of quantification (LOQ) of the assay.

Table 2. Comparison among different scintillation counters used for RBA applications. PMT: photomultiplier tube (technical specifications of the PMT are not provided by the supplier); TR-LSC: time-resolved liquid scintillation counter.

Table 3.
Quality control points of the r-RBA for CTX detection and quantification.

Assay acceptance:
- RSD of triplicate cpm data of standards and QC < 30%
- Hill slope: 20% variability around a theoretical value of −1
- EC50: 30% variability around a calculated mean value
- QC: 30% variability around a nominal concentration of 3 nM

Sample measurement acceptance:
- RSD of triplicate cpm data of samples < 30%
- Quantification of samples: unknowns fall between EC80 and EC20
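The Hill-equation interpolation described in the Data Analysis section can be sketched as follows; the curve parameters below are illustrative placeholders, not the study's fitted values:

```python
import math

def hill(x, bottom, top, log_ec50, slope):
    """Four-parameter Hill equation; x is log10(concentration)."""
    return bottom + (top - bottom) / (1 + 10 ** ((x - log_ec50) * slope))

def invert_hill(y, bottom, top, log_ec50, slope):
    """Recover log10(concentration) from a measured binding value y,
    valid only between the EC20 and EC80 limits of the curve."""
    return log_ec50 + math.log10((top - bottom) / (y - bottom) - 1) / slope

# Illustrative parameters: min/max in cpm, EC50 of 2.7 ng/mL, slope -1
bottom, top, log_ec50, slope = 100.0, 4000.0, math.log10(2.7), -1.0

y = hill(0.2, bottom, top, log_ec50, slope)        # simulate a measurement
x = invert_hill(y, bottom, top, log_ec50, slope)   # interpolate it back
print(round(10 ** x, 2))  # recovers ~1.58 ng/mL
```

In practice the curve parameters come from fitting the seventeen BTX-3 standard curves, and only measurements falling between the EC80 and EC20 cpm values are interpolated.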
The Impact of Social Media Salience on the Subjective Value of Social Cues

Like face-to-face interactions, evidence shows that interacting on social media is rewarding. However, the rewards associated with social media are subject to unpredictable delays, which may shape how they are experienced. Specifically, these delays might enhance the subjective desirability of social rewards and subsequent reward-seeking behavior by sensitizing people to the presence of such rewards. Here, we ask whether thinking about a recent social media post or conversation influences the subjective value of monetary and social rewards. Across two studies, we find that individuals who are thinking about a recent social media post are more likely to sacrifice small financial gains for the chance to see a genuine smile (a social reward) compared with those who are thinking about a recent conversation. This suggests that rather than satisfying social needs, thinking about social media interactions enhances the subjective value of social rewards, potentially explaining the incentive value of social media.

Regardless of race, age, gender, or socioeconomic status, social media has become omnipresent in people's lives, with about 72% of North Americans reporting that they are social media users (Pew Research Centre, 2021; Statistics Canada, 2021). One reason for its popularity is that it targets people's need for social connection and desire to build social relationships (Ahn & Shin, 2013; Sheldon et al., 2011). Indeed, social media has extended the capacity for human social connection by allowing people to establish, maintain, and promote social ties in situations where face-to-face interactions are not possible. One difference between the rewards obtained on social media and those associated with face-to-face interactions is their timing.
Specifically, rewards in real-time conversation occur immediately and predictably (Heerey & Crossley, 2013), whereas rewards on social media are delayed by pseudorandom time increments. That is, people must revisit a social media post for anticipated likes, shares, and comments, which are variably delayed depending on when followers respond. This delay might affect reward responsiveness. For example, dopamine neurons in many brain regions are sensitive to reward timing and predictability (e.g., Ballard & Knutson, 2009;Bermudez & Schultz, 2014;Estle et al., 2007;Kable & Glimcher, 2007;Roesch et al., 2007;Wanat et al., 2010). Dopaminergic responses to unpredictable and delayed rewards subsequently shape how those rewards are experienced (Berns et al., 2001;de Lafuente & Romo, 2011), potentially leading to reward sensitization (Berridge & Robinson, 2016;Hellberg et al., 2019;Konova et al., 2018). Thus, social media use may sensitize the reward system to the presence of social rewards, thereby enhancing their value. Accordingly, for some people, social media use is associated with heightened sensitivity to reward magnitude and reduced sensitivity to risk (Meshi et al., 2019, 2020). If social media use does indeed affect people's sensitivity to social rewards, at least temporarily, we would expect people actively considering a recent social media post and the social feedback they have received to show heightened incentive salience (i.e., wanting; Berridge, 2007) and sensitivity to social rewards, relative to those considering a recent face-to-face conversation. Indeed, the "social snacking" hypothesis (Gardner et al., 2005) is well aligned with this idea. Specifically, people seek out makeshift ways to satisfy their need for social connection when they cannot engage in meaningful interactions. 
Because these proxy interactions are less adept at satisfying social connection needs (Gardner et al., 2005), they may enhance social reward seeking (Baumeister & Leary, 1995;Krämer et al., 2018). Thus, while social media is momentarily rewarding, it may fail to fulfill social connection needs. Current Research The current research addresses this possibility by investigating whether the salience of social media use influences the subjective value of social rewards. We operationalize social rewards with images of genuine smiles, which differ in form and function from polite smiles. Genuine smiles activate the orbicularis oculi and zygomaticus major muscles, whereas polite smiles only activate the latter (Ekman et al., 1990, 2002;Frank et al., 1993). Genuine smiles convey the presence of positive emotion in senders and elicit the same in receivers (Ekman, 1992;Ekman et al., 1990;Ekman & Friesen, 1982;Geday et al., 2003;Gunnery & Hall, 2015;Surakka & Hietanen, 1998). In addition, genuine smiles are perceived more positively than polite smiles in both real conversations and laboratory tasks (Averbeck & Duchaine, 2009;Gunnery & Ruben, 2015;Heerey & Crossley, 2013;Scharlemann et al., 2001;Shore & Heerey, 2011). Polite smiles, in contrast, are important social tokens, but do not tend to be associated with positive affect or social reward (Ambadar et al., 2009;Bogodistov & Dost, 2017;Martin et al., 2017). Here, we ask whether thinking about a recent social media post impacts the subjective utility of social rewards by examining the degree to which participants are willing to give up monetary rewards for social ones and how these findings compare with thinking about a recent synchronous conversation. Importantly, we only ask about the incentive salience (i.e., wanting) of social rewards and not their hedonic value (i.e., liking), which is thought to be independent (Berridge, 2007;Tindell et al., 2009). 
In two studies, we expect that individuals who are currently thinking about a recent social media post will demonstrate greater subjective utility for genuine smiles, compared with those who have posted recently but are not specifically thinking about their post and to those who held a real-time conversation. Exploratory analyses examine the impact of overall social media use on the utility of social rewards, and whether results are moderated by need to belong (Knowles et al., 2015). Methods Participants. Participants were recruited for the study on Prolific Academic in exchange for £2.50 GBP, as well as a small performance-based monetary bonus. We estimated a required sample size of 412 participants using a G*Power analysis for a MANOVA (global effects model) with 4 groups and 3 response variables (Faul et al., 2007). Estimate parameters included α = .05, 1 − β = .90, and estimated effect size f²(V) = .01626 (based on Pillai's V = .048), based on pilot study data (see Supplementary Materials). Knowing that we would need to delete cases due to data quality issues, we recruited a sample of 441 participants for this online study. We subsequently excluded 21 participants for inattentive and/or invariant responding. Inattention was classified as responding faster than 225 ms on at least 40% of trials and invariant responding was classified as responding with the same response option on 90% or more of trials. We also removed one statistical outlier (+4.5 SDs from the mean of genuine smile utility).¹ Our final sample included 420 participants (235 male, 6 nonbinary; M age = 32.94, SD = 11.26). All participants gave informed consent and the University's Ethics Committee approved all study procedures (likewise for Study 2). Procedures. 
After participants consented, they received a message asking them to either make a post on their preferred social media platform or have a "face-to-face conversation with a friend." Participants in the conversation condition were told that due to pandemic restrictions, they could have their conversation over a video-chat application (e.g., Zoom, FaceTime) if necessary. Approximately 24 h later, they received a reminder to complete the post or conversation and a link to the study. The link opened a Qualtrics survey (https://qualtrics.com) that randomly assigned them to either answer questions about their post/conversation before the smile valuation task (https://pavlovia.org) or immediately afterward. Smile Valuation Task. This task has two phases, an exposure phase, in which participants learned to associate both a monetary and a social value with each of six computerized players, and a test phase, in which they used this information in the context of a choice task. On each exposure trial, participants viewed one player, depicted by a photograph of an actor in a neutral pose, in the center of the screen. Flanking the actor on either side, participants saw images of the heads and tails side of a coin (Figure 1A). Participants attempted to guess the side of the coin the player had chosen on that trial. Participants received immediate feedback from the player about whether their choices were correct. Specifically, they were told that some of the players would smile to show a correct response, and some would give text feedback. They also knew that each time they received "correct" feedback they earned a small financial bonus ($0.02 GBP), which they would receive at the end of the study. In reality, feedback was not associated with participants' choices in the exposure phase. Instead, three players provided rewards on 80% of trials and the remaining players provided rewards on 60% of trials, regardless of participants' choices (see Figure 1B). 
In addition, two players (one 80% player and one 60% player) provided reward feedback by smiling genuinely at participants, two players smiled politely at participants (one 80% and one 60% player), and the remaining players' feedback was presented with a text overlay that displayed the trial outcome value ("Win!"; "Non-win."). The four players who had smiled to indicate reward feedback indicated nonreward feedback with lowered eyebrows, whereas those that had provided text feedback remained in the neutral pose throughout the trial. There was no response time limit on the trials and feedback lasted 1.5 s. To ensure that specific player-value pairings did not systematically affect the outcome, the computer randomly assigned players to both monetary and social feedback conditions at the start of the task. Half the participants, randomly assigned, viewed female faces and half viewed male faces. Participants completed 120 exposure trials, 20 trials per player, in a fully randomized order. Participants had a rest break after each block of 40 trials. Once participants had completed the exposure phase of the task, they began the test phase. Test trials began with a choice (Figure 1C). Participants viewed a pair of neutrally posed players and selected the one they wanted to play on that trial. Thereafter, trials continued as in the exposure phase. Participants chose between all possible player pairs (15 possible pairings) in random order. Each possible pairing appeared eight times (120 test trials). Within pairings, each face appeared on the left and the right sides of the screen with equal frequency. Participants' decisions in the test phase (which player they selected, given the monetary and social values of the players within a pairing) served as the dependent variable in the task. These choices allowed us to estimate how much genuine and polite smiles and monetary feedback shaped choice behavior. 
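As a small illustrative sketch (not the authors' code), the test-phase trial list described above — every pairing of six players, eight repetitions each, with left/right screen placement balanced — could be generated as follows; the integer player labels are hypothetical stand-ins for the face stimuli:

```python
import itertools
import random

players = list(range(6))                          # six computerized players
pairs = list(itertools.combinations(players, 2))  # 15 unique pairings

trials = []
for a, b in pairs:
    # each pairing appears eight times, with each face on the left
    # and the right side of the screen with equal frequency
    trials += [(a, b)] * 4 + [(b, a)] * 4

random.shuffle(trials)  # present pairings in random order
print(len(trials))      # 120 test trials
```

Enumerating the combinations first, then duplicating each in both left/right orders before shuffling, guarantees the counterbalancing holds exactly rather than only in expectation.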
For example, participants with a strong affinity for genuine smiles might prefer a genuinely smiling player with a lower monetary value over a higher monetary value neutral player. In other words, a participant's choice behavior allowed us to quantify the extent to which that participant was willing to sacrifice the chance to earn money for the chance to see a genuine smile. This value indicates the subjective utility of genuine smiles in monetary terms for that participant (see Heerey & Gilder, 2019;Shore & Heerey, 2011). Here, we are interested in the utility of genuine smile, polite smile, and monetary feedback, and how these change as a function of social media salience. Smile Stimuli. Smile stimuli in the task were obtained from 20 male and 20 female, 18- to 24-year-old actors. To elicit polite smiles in a video-recorded procedure, actors watched an experimenter pose the smile and imitated the action. Genuine smiles were elicited using an emotion induction paradigm. All actors reported experiencing positive emotion during the selected genuine smiles. Still photos were clipped from the peak of each expression. We recorded a minimum of five polite and five genuine smiles per actor. These were validated in a subsequent pilot study in which 88 participants discriminated genuine from polite smiles across the set of 400 photographs. Actors and images were selected such that the smiles were discriminable by at least 70% of the sample. Salience Manipulation. Either immediately before, or immediately after completing the smile-valuation task, participants answered a set of questions regarding their social media post or conversation. 
For example, those who posted on social media were asked to reflect on the type of post they had made and how it had been received (e.g., "how many likes/comments did you receive?" and "to what extent was the feedback that you received positive?"), whereas those who had a conversation were asked to reflect on their experience talking to a friend (e.g., "the conversation made me feel positive" and "the quality of the conversation met my expectations"). These questionnaires (along with the rest of the study materials, data, and analysis code) are available on the Open Science Framework (https://osf.io/db2j9/?view_only=73a30f781b2440e1823b432494ee5d86). The primary purpose of these questionnaires was to manipulate post/conversation salience by calling the relevant interaction to mind. Participants in the post-task salience conditions answered the questions for completeness after the smile valuation task. Questionnaires. After completing the smile valuation task and answering questions specific to their post/conversation, participants completed a modified version of the Social Networking Time Use Scale (SONTUS; Olufadi, 2016), which measures social media use in different contexts to generate an estimate for how much time an individual spends on social media. For our purposes, we used a shortened version of the original questionnaire that consisted of 19 items (e.g., "when watching TV," "when you are shopping," "when you are at work") measured on a 5-point scale ranging from 1 ("Never in the past week in this situation/place") to 5 ("I used it every time I was in this situation/place during the past week"). Participants also answered questions about their general social media use. For instance, we asked how frequently participants logged onto social media platforms and how frequently they posted. These items served to gauge participants' typical social media usage. 
Finally, they responded to the Big Five Inventory (John & Srivastava, 1999) and the Need to Belong Scale (Leary et al., 2013) to explore relationships between task variables and extraversion and need for social belonging. Data Analysis. To examine the degree to which social and monetary rewards shape choice behavior within the smile valuation task, we individually modeled each participant's choices using a logistic model. The model estimated the probability that a participant would select the face on the left (P(Left Face)), given relative differences in the type and frequency of social and monetary rewards within the face pairing. We used a standard logistic model to fit the choice data: P(Left Face) = e^y / (1 + e^y). The parameter y in the logistic regression was estimated as y = b0 + b1·X1 + b2·X2 + b3·X3. In this equation the bs are the estimated regression weights for each term in the model. b0 refers to the intercept; b1 is the degree to which monetary rewards influenced choice behavior; b2 is the degree to which genuine smiles influenced choice behavior; and b3 estimates the influence of polite smiles on choice behavior. The Xs in the equation represent the difference between the player on the left and the player on the right. X1 codes the difference in the expected monetary value (the probability of winning money multiplied by the amount of a win; that is, 1.6 cents for the 80% faces vs. 1.2 cents for the 60% faces) between the players within a pair. For example, X1 received a score of .40 if the player on the left rewarded more frequently. X1 received a score of −.40 if the player on the right had higher monetary value. If both players had the same monetary value (e.g., a pair of 80% players), X1 was equal to 0. X2 coded for genuine smiles such that if the face on the left smiled genuinely and the face on the right did not, X2 received a score of 1. If the smiles were reversed, X2 was coded as −1. If both or neither face smiled genuinely, X2 was coded as 0. 
X3 coded for the presence of polite smiles in similar fashion. The model used an iteratively re-weighted least squares algorithm to obtain the maximum likelihood estimate for each of the terms (O'Leary, 1990). Importantly, we determined the model coefficients on a participant-by-participant basis because that allowed us to ask whether participants for whom the social media post was salient showed enhanced sensitivity to social rewards, in the context of general individual variability in social reward utility. The model coefficients for each participant became the dependent variables in the hypothesis tests below. Insofar as a model coefficient differs from 0, that model term influences choice behavior. Results and Discussion Before testing our hypotheses, we conducted a 2 × 2 ANOVA to test for group differences in social media use. There were no significant effects of interaction type, social media versus conversation; F(1, 416) = 3.30, p = .070, ηp² = .008; salience, pre- versus post-task, F(1, 416) = 0.58, p = .447, ηp² = .001; or their interaction, F(1, 416) = 3.48, p = .063, ηp² = .008. Likewise, there were no group differences in terms of how frequently participants logged on to social media sites, the positivity of feedback they receive, or how satisfied they are with the feedback they receive (Table 1). To test whether social media and conversation salience influenced the subjective value of social and monetary rewards, we conducted a 2 × 2 MANOVA with salience (pre-task, post-task) and interaction type (social media, conversation) as fixed factors and the individually estimated regression weights for monetary rewards, polite smiles, and genuine smiles as the dependent variables. The multivariate tests for the interaction condition (social media vs. conversation) and salience (pre- vs. post-task) and their interaction were all significant (Table 2). There were no significant main effects or interactions for monetary rewards or polite smiles (Table 3). 
However, there were significant main effects of interaction type, F(1, 416) = 12.78, p < .001, ηp² = .03, and salience, F(1, 416) = 7.07, p = .008, ηp² = .02, and a significant interaction. Figure 3, included for descriptive purposes, shows how participants in each condition made decisions, given the relative differences in reward type and frequency within a given pair. For example, across conditions participants preferred high- to low-value faces; and participants in the social media salient (pre-task) condition preferred the genuinely smiling player, even when that choice was associated with financial loss. We also conducted exploratory tests to investigate possible moderators of the relationship between social media salience and genuine smile value. Previous research has shown that need to belong is predictive of social media use (Knowles et al., 2015) and although we found evidence of this association, it did not affect the relationship between the genuine smile utility and social media salience (see Supplementary Materials). Together, these results suggest that social media salience is the important factor in these results and that the mere salience of social interaction, as measured in the conversation condition, does not appear to promote this effect. To corroborate our findings, Study 2 is a pre-registered replication and extension of Study 1 that allowed us to rule out several alternate explanations for these results (https://osf.io/7d6hx?view_only=3670331dfe2b480d8c2488eac4371155). Methods Participants. Participants were recruited for the study on Prolific Academic in exchange for £3.00 GBP and a small performance-based monetary bonus (£1.00-£2.00 GBP). We used G*Power to conduct an ANOVA fixed effects, special, main effects, and interactions power analysis, with an estimated effect size f = 0.196, α = .05, 1 − β = 0.95, numerator df = 1, and groups = 4 (Faul et al., 2007). According to this analysis we would need 341 participants to achieve 95% power. 
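As a rough, assumption-laden cross-check (a sketch, not G*Power itself), the Study 2 sample-size figure can be approximated from the noncentral F distribution, taking the noncentrality parameter as λ = f²·N and the error df as N minus the number of groups:

```python
from scipy import stats

def anova_power(n_total, f, df_num, k_groups, alpha=0.05):
    """Power of a fixed-effects ANOVA effect given Cohen's f."""
    df_den = n_total - k_groups
    ncp = f ** 2 * n_total                          # lambda = f^2 * N
    f_crit = stats.f.ppf(1 - alpha, df_num, df_den)
    return stats.ncf.sf(f_crit, df_num, df_den, ncp)

def required_n(f, df_num, k_groups, target=0.95, alpha=0.05):
    """Smallest total N whose power reaches the target."""
    n = k_groups + 2
    while anova_power(n, f, df_num, k_groups, alpha) < target:
        n += 1
    return n

n = required_n(f=0.196, df_num=1, k_groups=4)
print(n)  # close to the reported total of 341
```

Small discrepancies from G*Power's exact output are expected because implementations differ slightly in how they round and parameterize the noncentrality, but the result lands in the same range as the reported 341.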
However, because this is a replication of Study 1, in which we collected 440 participants before exclusions, we aimed to collect 440 participants (actual N = 442) for Study 2 rather than the 341 suggested by the power analysis. We excluded 20 participants for inattentive and/or invariant responding and one participant who was a statistical outlier (+4.5 SDs from the mean of monetary reward utility).² Our final sample included 421 participants (187 males, 7 nonbinary; M age = 38.26, SD = 12.64). Procedures. Participants completed the same procedure as above with several additions. We included the Revised UCLA Loneliness Scale (Russell et al., 1980), post-game ratings of each player examining how "good" they were to play (1 = worst to play; 6 = best to play), and a smile discrimination task in which participants viewed photos of smiling faces (including the faces they viewed in the task) and identified whether each smile was genuine or polite. Finally, we included a short manipulation check at the end of the study in which participants estimated the frequency of their conversations and social media posts in the past 48 h, and rated these for positivity and satisfaction. They also rated the degree to which they had had a conversation and social media post on their mind when they began the main task. Note: The value of money, genuine smiles, and polite smiles in the pre-task and post-task conditions for participants who made a social media post (left set of violins) versus had a real-time conversation (right set of violins). Blue fill (dark gray) represents participants in the pre-task condition and gray fill represents participants in the post-task condition. Within each violin, white dots represent the median and the white notches represent the 95% CI of the median; the horizontal lines show the means; the dark gray bars represent the interquartile range (IQR); and the light gray lines represent 1.5 times the IQR. 
The shape of the violin shows the probability density function of the data distribution. Individual data points are shown with colored dots. Results and Discussion As in Study 1, we conducted a 2 × 2 ANOVA to test for group differences in social media use prior to testing our hypotheses. There were no significant effects of interaction type, F(1, 416) = .99, p = .321, ηp² = .002; salience, F(1, 416) = 0.14, p = .707, ηp² < .001; or their interaction, F(1, 416) = 2.73, p = .099, ηp² = .007, on overall social media use. There were also no significant group differences in terms of how frequently participants logged on to social media sites, feedback positivity, or satisfaction with feedback (Table 4). Manipulation check data showed that participants who answered questions pre-task reported thinking a lot about their post or conversation (depending on the condition) and less about the other condition, whereas those in the post-task conditions were less occupied with the post or conversation (Table 5). These results suggest that our manipulation had its intended effect. We then tested our hypothesis using a 2 × 2 MANOVA with the individualized regression weights for monetary rewards, polite smiles, and genuine smiles as the dependent variables and interaction type (conversations, social media) and salience (pre, post) as the independent variables. The multivariate tests for the interaction condition and salience and their interaction were all significant (Table 6). Follow-up investigations of the univariate tests revealed that there were no significant main effects or interactions for monetary rewards, whereas the value of polite smiles was only influenced by salience, such that those in the pre-task conditions valued polite smiles more than those in the post-task conditions (M Difference = 0.29, 95% CI = [0.049, 0.538], t = 2.36, p Tukey = .019, ηp² = .013) (Table 7). 
Because polite smiles are important social cues, this finding is consistent with the notion of increased desire for social rewards; however, because it was not statistically significant in Study 1, we do not discuss it further. Genuine smile utility was significantly influenced by interaction type, F(1, 417) = 15.37, p < .001, ηp² = .035; salience, F(1, 417) = 10.48, p = .001, ηp² = .024; and their interaction, F(1, 417) = 15.62, p < .001, ηp² = .036 (Figure 4). Consistent with expectations, a post hoc Tukey test revealed that those in the pre-task social media condition valued genuine smiles more than those in any other condition (post-task social media: M Difference = 0.96, 95% CI = [0.473, 1.448], t = 5.08, p Tukey < .001; pre-task conversation: M Difference = 1.05, 95% CI = [0.564, 1.539], t = 5.56, p Tukey < .001; post-task conversation: M Difference = 0.95, 95% CI = [0.463, 1.440], t = 5.02, p Tukey < .001). Figure 5 describes participants' decision strategies across the player pairs for visualization. Figure 6 shows participants' explicit ratings of the faces across conditions. We expected that participants in the high social media salience condition would rate genuinely smiling faces as "better" compared with other participants. To examine this, we conducted a salience (high/low) × interaction-type (social media/conversation) × monetary value (high/low) mixed ANOVA, with ratings of the high- and low-value faces as the dependent variables (Table 8). Importantly, the interaction-type × salience interaction was significant, showing that participants in the high social media salience condition rated genuinely smiling faces more highly than any other group (post-task social media: M Difference = 0.49, 95% CI = [0.084, 0.903], t = 3.19, p Tukey = .008; pre-task conversation: M Difference = 0.57, 95% CI = [0.169, 0.988], t = 3.75, p Tukey = .001; post-task conversation: M Difference = 0.52, 95% CI = [0.115, 0.936], t = 3.40, p Tukey = .004). 
A similar analysis involving politely smiling faces showed no significant interaction (Table 8). We conducted exploratory analyses to investigate possible moderators of this effect. We found no significant moderators of this relationship. However, we did find that need to belong correlated significantly with social media use and that active forms of social media use were associated with decreased loneliness. None of these findings were related to genuine smile utility (see Supplementary Materials). General Discussion Results from these studies suggest that individuals for whom social media use was salient demonstrated greater subjective desire for genuine smiles than did those for whom social media use was not currently in mind. Indeed, across both studies, participants in the high social media salience condition were willing to give up an average of .85 cents (SD = .82) per trial, relative to their peers in the other conditions (M = .32 cents/trial, SD = .63). They also rated genuinely smiling players more favorably than did other participants (Study 2). Furthermore, individuals who answered questions about a real-time conversation before versus after the smile valuation task did not differ in the extent to which the possibility of seeing genuine smiles shaped their choice behavior, meaning that this effect is driven by social media salience, rather than the simple act of making a social media post or thinking about social interactions more generally. This idea is consistent with research showing that reward context modulates subjective reward utility (Louie & Glimcher, 2012). These results suggest that, when salient, social media interactions increase the subjective utility of social rewards to a greater degree than salient face-to-face conversations. Participants' choice behavior in the subsequent task demonstrated the enhanced incentive salience (Berridge, 2007) of social rewards. 
This finding may explain why people find it difficult to stop scrolling a social media feed once they get started and why cues that enhance the salience of social media (e.g., alerts from social media apps) may pull people to return to it. Note: The value of money, genuine smiles, and polite smiles in the pre-task and post-task conditions for participants who made a social media post (left set of violins) versus had a real-time conversation (right set of violins). Blue fill represents participants in the pre-task condition and gray fill represents participants in the post-task condition. Within each violin, white dots represent the median and the white notches represent the 95% confidence interval of the median; the horizontal lines show the means; the dark gray bars represent the interquartile range (IQR); and the light gray lines represent 1.5 times the IQR. The shape of the violin shows the probability density function of the data distribution. Individual data points are shown with colored dots. Note: The proportion of choices participants allocated to a particular face, given relative differences in reward type and frequency within a given pair in Study 2. Error bars show ± 1 standard error of the mean. Figure 6. Player Ratings Across Groups Note: Average ratings of how good each player was to play. Error bars show 95% confidence intervals. As we have suggested throughout this article, social media use and its effects on people's wellbeing are controversial (e.g., Clark et al., 2018;Hou et al., 2019;Knowles et al., 2015;Lee et al., 2013). Here, we asked participants to focus on the more interactive outcomes of social media (likes, shares, and comments), rather than on the experience of social connectedness per se. This focus might have heightened social reward salience in the present participants. Future research should seek to disentangle the influence of these specific outcomes from a focus on general social connectedness, which may be more sustaining. 
This work, however, is not without limitations. First, although we discuss the effects of social media on social reward utility, stimuli in the smile valuation task (photographs of smiling actors) are limited in their ability to serve as real-world social rewards. Indeed, it is unlikely that photos of smiling faces are as powerful as the smile of a friend in a face-to-face interaction. Second, although we tried to make the questions assessing the social media post and the conversation as similar as possible, subtle differences in the outcomes of these interaction modalities may have affected task results. Third, our study design does not allow strong conclusions about the mechanism responsible for this effect. For example, social media salience may stimulate a need for social connection (Clark et al., 2018), thereby sensitizing people to social reward cues. Alternatively, as we have suggested, the timing of reward delivery (Kable & Glimcher, 2007) may be the central factor driving this result. Future work should seek to disentangle these effects by manipulating both feelings of social connectedness and reward delivery. Finally, we make no inferences about the longevity of this effect. Because data were collected at a single time point, it is unclear how long social media salience enhances desire for social reward. Conclusion Taken together, our findings suggest that when social media use, but not social interaction more generally, is salient, people show enhanced utility for social rewards. Although we did not examine this specifically, social reward salience may have subsequent consequences for outcomes such as mood and behavior. It is likely the case that this effect plays a role in explaining the persistence and popularity of social media. It may also provide a partial explanation for prior reports noting divergent outcomes of social media use (e.g., Burke et al., 2010;Clark et al., 2018;Seabrook et al., 2016). 
Finally, this finding suggests that one way to reduce the pull of social media might be to make alerts, followers, and feedback less salient, thereby reducing people's desire to engage in this domain. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Data collection was funded by an internal grant (start-up fund) awarded to Dr. Erin Heerey from Western University. Supplemental Material The supplemental material is available in the online version of the article. 1. In both studies, statistical outliers were classified as ± 4.5 SDs from the mean of the subjective value of monetary rewards, polite smiles, and/or genuine smiles. 2. The decision to exclude this participant from the analyses was not pre-registered; however, it does not change the interpretation of the findings.
Urachal borderline mucinous cystadenoma Abstract Rationale: Urachal borderline mucinous cystadenoma is very rare, with only 9 cases in the current literature, and has biological behavior intermediate between adenoma and adenocarcinoma. Patient concerns: We report a 41-year-old man with moderate lower abdominal pain, in whom imaging examination found an irregular cystic lesion extending from the umbilicus to the dome of the urinary bladder with significant separations and calcifications. Diagnoses: The diagnosis was confirmed according to the specific anatomical location and pathological examination, which proved it to be a mucinous cystadenoma with low malignant potential. Interventions: The patient underwent radical excision and partial cystectomy. Outcomes: His postoperative condition was good. Lessons: Urachal borderline mucinous cystadenoma can be localized by imaging examination, which may also offer several diagnostic tips according to separation, calcification, and enhancement on computed tomography scan. When combined with pathological findings, a qualitative diagnosis can be determined. Surgical resection should be chosen as the optimal treatment. Our present study reviewed the clinical and biological information of all previous cases diagnosed as urachal borderline mucinous cystadenoma, and we supplemented more data for further study. Introduction The urachus, which connects the umbilicus and the anterior wall of the bladder, is an embryological remnant of the allantois. [1] During the embryonic evolution process, the urachus closes to become the median umbilical ligament. [2] However, if it undergoes incomplete atresia resulting in a patent urachus, it may become the primary site of various lesions, including cyst, fistula, tumor, diverticulum, and so on. [3] Among them, urachal tumors are rarely reported and have attracted attention because of their invasive potential.
Furthermore, urachal tumors mostly originate from the epithelium, and their pathological classifications are covered by the 2016 World Health Organization (WHO) classification of genitourinary tumors, [4] among which urachal borderline mucinous cystadenoma is especially rare, with only 9 clearly diagnosed cases reported in the published literature. [5][6][7][8][9][10][11][12][13] Consequently, diagnostic and therapeutic experience with it remains limited. We aimed to share a case of urachal borderline mucinous cystadenoma from the aspects of clinical, imaging, operative, and pathological findings to provide more information for further study. Clinical findings A 41-year-old man attended our urology outpatient department with complaints of swelling and pain in the lower abdomen for more than 1 year. The pain, which reached only a moderate degree, was paroxysmal and had no obvious trigger. He had no history of gross hematuria, irritative urinary symptoms, osphyalgia, or abdominal mass. The patient had not noticed filamentous mucus during urination; it was later discovered by microscopic observation on routine urinalysis. Otherwise, he had a history of appendicectomy, and he denied recent weight loss or any family history of tumor. All other blood analyses, including the tumor markers free and total prostate-specific antigen (PSA), and a chest computed tomography (CT) scan completed before surgery showed no abnormalities. Imaging findings Ultrasound examination of the abdomen showed a mixed echo between the bladder dome and the abdominal wall at the umbilical level. The mass, which measured 101 × 42 × 33 mm, had a well-defined boundary, irregular shape, light vascularity, and several separations near the bladder. Abdominal plain CT scan revealed a heterogeneous, lobulated hypodense mass measuring 3.8 × 3.3 cm in the peritoneal cavity at the subumbilical plane (Fig. 1). More precisely, the complex cystic lesion extended from the umbilicus to the anterosuperior dome of the bladder.
This hypodense lesion showed no obvious enhancement on contrast-enhanced CT scan; nonetheless, we clearly observed asymmetrical septa and a dense shadow at the margin of the cystic wall, which showed mild enhancement in the delayed phase (Fig. 2). There were no imaging features of metastatic lymph nodes or other sites. Radiologically, a diagnosis of urachal cystadenoma with unknown malignant potential was put forth. Operative findings The patient subsequently underwent radical resection of the urachal mass and partial cystectomy. At laparotomy, a cystic tumor connecting the urachal remnant and the dome of the bladder was found behind the peritoneum. Intraoperative inspection revealed no evidence of pseudomyxoma peritonei (PMP), which has previously been reported to possibly originate from urachal remnants and is characterized by the intraperitoneal spread of mucus. [8,14] Evaluation for invasive signs in lymph nodes and other organs including ovary was negative. The postoperative course was uneventful. Pathological findings Grossly, we observed a polycystic mass measuring 3 × 3 × 2 cm without a capsule. In cross-section, there was a smooth wall filled with thick gelatinous mucus within the lumen. Histological examination revealed irregular glands floating in a mucous lake and partial glandular epithelial dysplasia with deeply stained nuclei and pseudostratified epithelium (Fig. 3B). For comparison, we also selected a normal region of the cystic wall, which was lined by single columnar epithelium without abnormal cellular morphology or growth patterns (Fig. 3A). Elsewhere, the cystic wall was lined by mucinous columnar epithelium, with visible papillary formations and mucus secretion, and no tumor cells were observed in the stroma (Fig. 3C). In addition, blue granules or fragments confirmed the presence of calcification (Fig. 3D). On the basis of dysplasia without stromal invasion, we made a diagnosis of a borderline urachal mucinous cystadenoma.
Taking into account the histologically low malignant potential and no evidence of lymph node or distant organs metastasis, regardless of radiological, pathological, or intraoperative aspects, the patient was recommended with radical resection without adjuvant chemotherapy. The importance of follow-up has been emphasized in case of recurrence or canceration. Discussion The urachus is gradually blocked as a fiber cord and walks in the space between abdominal fascia and peritoneal loose connective tissue (Retzius gap) during the period of the 4th or 5th month of embryonic development, and bladder descends into the pelvis at the same time. The wall of the urachal tube consists of 3 layers of structure, which from inside to outside are followed by the transition epithelium, connective tissue, and residual smooth muscle cells. Epithelial neoplasms originating from urachus can be divided into nonglandular, glandular, and mixed neoplasm. The nonglandular neoplasms include urothelial neoplasm, squamous cell neoplasm, neuroendocrine neoplasm, and mixed-type neoplasm. Glandular neoplasm can be further classified as adenomas, mucinous cystic tumor of low malignant potential, and adenocarcinoma. These three stages of tumors may cover the tumors' transformation process from benign to deterioration. [4] In above classifications, mucinous cystic tumor of low malignant potential means adenocarcinoma in situ with borderline biological behaviour, which suggests that early detection and treatment are necessary for better survival. We made a detailed analysis about the previous reports, and our present case was also included ( Table 1). Among the 10 cases of urachal borderline mucinous cystadenoma, 2 cases of female and 8 cases of male were reported. Patients were aged from 29 to 72 years, and the median age was 54 years old. 
The vast majority of initial symptoms were intermittent pain and/or mass in abdomen, excepting only 1 patient who had the chief complaint of haematuria and mucusuria during his visit. Most patients showed normal results about biochemistry and hematologic examinations. Our present patient had mucusuria, which was a rare but significant symptom for the diagnosis of urachus disease. [15] In addition, there was a case who had mild monocytosis and elevated fibrinogen; these laboratory findings in patients with pseudomyxoma peritonei were uncommon and not discriminative, but their appearance often prompts the risk of infection caused by urachal remnant. [16] Tumor markers have become the key basis for the diagnosis of various neoplastic lesions, but not all diseases have sensitive tumor markers. Considering the low incidence of urachal tumors, the present literature has not yet given a relatively sensitive tumor marker. Some researchers recorded the level of carcinoembryonic antigen (CEA) and carbohydrate antigen 19-9 (CA19-9), but found no abnormities, except only 1 patient who had elevated CEA. When our patient was in the initial diagnosis, we checked PSA which has been shown to be associated with bladder neoplasm, [17] and we found both free and total PSA were within normal range. The advice was proposed to identify sensitive and specific tumor markers of urachal neoplasms after more relevant cases were reported. Otherwise, there were 3 patients with PMP which was originated from the urachal mucinous cystic tumor of low malignant potential. PMP is a clinically uncommon phenomenon with low incidence rate of only one millionth. [18] We always find massive amounts of mucous fluid in peritoneal cavity in people suffering from this disease, which is commonly caused by (Table 1). Next, we reviewed the imaging and pathologic data of all present cases. 
Among all of the patients, the CT scan of the abdomen and pelvis showed a circular or irregular cystic lesion with low density connecting the urinary bladder and umbilicus. All of the lesions had various size and no invasive signs of near organs or lymph nodes. Several special imaging signs which were worthy of our mention were also recorded. First of all, intracapsular separation was significant in CT scan of 3 patients and the postoperative specimens showed multiloculated mass in 5 patients. Next, calcification was also meaningful in 7 cases and the CT imaging often appears as minute peripheral calcification. We hypothesize that lobulation and marginal calcification are common scenes, but combining with previous literature, we find that these manifestations are still not specific, and these signs can also be expressed in urachal mucinous adenocarcinoma. [19][20][21] As for the enhanced CT scan signs, most literatures were not mentioned, with only 2 patients who were clearly pointed out with no enhancement, and another 2 patients who showed minimal peripheral enhancement. In conclusion, CT scan provides accurate information of position, size, and invasion for urachal lesions, but the value for qualitative diagnosis is limited. Several characteristics in CT scan including intracapsular separation, calcification, and enhancement may only give us some tips, but cannot give a definite diagnosis. Pathologically, all patients showed mucinous cysts with abundant mucin, no matter whether there was papillary or loculated structures. Microscopically, the cyst wall of all cases was lined by the epithelium covered by atypical columnar cells with nuclear pleomorphism and less polarity, which were the most reliable bases for diagnosis of low malignant potential. Apart from this, the absence of invasive signs including stroma and surrounding or distant organs supported that the disease was not deteriorated. 
All basic histological presentations were consistent and supported our diagnosis. All patients received radical excision and partial cystectomy. Intraperitoneal lavage and partial excision of the peritoneum were performed in patients with PMP; 1 patient took 5′-deoxy-5-fluorouridine (1200 mg/d) orally for 4 years after surgery. The follow-up time ranged from 6 months to 7 years, and no patients showed evidence of recurrence or metastasis. Conclusions We conclude that the risk of malignant deterioration is low, and that early surgical resection and abdominal lavage may maintain a long survival time. Our patient was in good condition after resection and had no complications; we expect that he will have a good prognosis according to the above review. We reported this rare case, which is the 10th in the published literature. We expect that our report will help to supply more data about the treatment and biological manifestations of urachal borderline mucinous cystadenoma for further study.
Computational projects with the Landau-Zener problem in the quantum mechanics classroom The Landau-Zener problem, where a minimum energy separation is passed with constant rate in a two-state quantum-mechanical system, is an excellent model quantum system for a computational project. It requires a low-level computational effort, but has a number of complex numerical and algorithmic issues that can be resolved through dedicated work. It can be used to teach computational concepts such as accuracy, discretization, and extrapolation, and it reinforces quantum concepts of time-evolution via a time-ordered product and of extrapolation to infinite time via time-dependent perturbation theory. In addition, we discuss the concept of compression algorithms, which are employed in many advanced quantum computing strategies, and easy to illustrate with the Landau-Zener problem. I. INTRODUCTION The Landau-Zener problem is an exactly solvable problem in quantum mechanics that describes how a quantum particle tunnels between two states as a function of the speed with which it traverses an avoided crossing. 1,2 The exact solution involves mapping the time-dependent Schrödinger equation onto the so-called Weber equation, which is solved with parabolic cylinder functions. But because these functions are not so familiar to most students, this mapping is rarely taught. Instead, because the system is just a two-state system, one can compute the results numerically. This brings in issues related to discretization and to accuracy, which can be particularly acute for high accuracy because the solution has slowly decaying oscillations that make determining the final tunneling probability challenging without invoking some form of averaging. Instead, one can use time-dependent perturbation theory to append the long-time results and achieve much higher accuracy solutions. This makes the Landau-Zener problem an excellent choice for a computational project in a quantum mechanics class. 
The time evolution, via a Trotter product formula, is easy to code. Appending the time evolution at long times requires a mastery of time-dependent perturbation theory and the interaction representation. Modifying the discretization size and the time cutoff for the time evolution allows students to understand issues related to the accuracy of the computation. Finally, this specific problem has a few different compression strategies that can be employed-these strategies replace the product of a string of operators by a single operator exactly equal to the product. Compression strategies are employed in quantum computing to reduce the depth of a quantum circuit. 3,4 Here, one can learn how such compression strategies work and how to parameterize SU(2) rotations in two different ways to complete the compression. In this work, we describe a student-led project on the Landau-Zener problem that will enable students to learn many of these different topics related to quantum mechanics and computation. This can be achieved even with beginner to intermediate competency with programming because the codes required are quite simple to implement. It also provides a nice mix between formal development and computational work, similar to much of contemporary research. The Landau-Zener problem was originally solved in 1932 by Landau, 1 Zener, 2 Stueckelberg, 5 and Majorana. 6 We also have found it discussed in two textbooks: Konishi and Pafutto 7 and Zweibach. 8 Interestingly, the Landau-Zener problem is not widely discussed in other quantum mechanics textbooks, even though it is ubiquitous in modern physics. Historically, it was initially applied to inelastic atomic and molecular collisions. Beyond collisions, two-level systems that exhibit nonadiabatic transitions include Rydberg atoms in rapidly rising electric fields, qubit states in an NV center in diamond, and double-quantum dots. 
Other systems include qubits based on Josephson junctions, charge qubits in semiconductor quantum dots, graphene devices with an avoided crossing near the Dirac point, ultracold molecules in a laser trap, and even time-resolved photoemission in charge-density-wave systems. A discussion of many of these applications is given in a recent review article. 9 The problem has also been discussed in the pedagogical literature. One study explores the accuracy of Runge-Kutta integration of the Schrödinger equation, 10 while another uses the Landau-Zener approximation, 11 which turns out not to be very accurate. The problem is mapped to the problem of a sphere rolling without slipping 12 and solved classically, and it is also solved using a simple conceptual approximation that averages probabilities, not probability amplitudes, due to the fast oscillations. 13 Finally, another approach uses contour integrals. 14 Our work focuses on developing a computational project that employs perturbation theory, compression, and numerical evaluation of the time-ordered product to explore the interesting physics and numerics. The remainder of the paper is organized as follows. In Sec. II, we introduce the Landau-Zener problem and discuss the Pauli spin matrix identities needed to work with the problem. In Sec. III, we describe the discretized time evolution via the Trotter product formula. In Sec. IV, we illustrate how time-dependent perturbation theory can append the time evolution of the semi-infinite tails using the interaction picture in a first-order expansion. Compression algorithms are discussed in Sec. V, followed by implementation strategies for the classroom and conclusions in Sec. VI. II.
THE LANDAU-ZENER PROBLEM AND PAULI SPIN-MATRIX IDENTITIES The Landau-Zener problem consists of determining the probability to transition from the ground state to the excited state of a two-level system, after the two states approach each other with an avoided crossing and then depart from each other. The Landau-Zener system is described by the Hamiltonian Ĥ(t) = vtσ_z + δσ_x, where t is time, v is the rate at which the two levels approach each other, and δ is the coupling between them that determines the minimal energy gap (2δ) of the avoided crossing (occurring at time t = 0). Both v and δ are real numbers with units of energy/time and energy, respectively. The symbols σ_z and σ_x represent the Pauli spin matrices σ_z = [[1, 0], [0, −1]] and σ_x = [[0, 1], [1, 0]]. By diagonalizing the Landau-Zener Hamiltonian using time as a parameter, one obtains two instantaneous eigenenergy levels, E_±(t) = ±√(v²t² + δ²), shown in Fig. 1(a). Initially, when t = −∞, the two energy eigenvectors |ψ_±⟩, given by |ψ_+⟩ = (1, 0)ᵀ and |ψ_−⟩ = (0, 1)ᵀ, are infinitely separated in energy and the system starts in the ground state |ψ(−∞)⟩ = |ψ_+⟩. As the system evolves with time, the two levels E_+(t) (the upper instantaneous energy level) and E_−(t) (the lower instantaneous energy level) approach each other as t → 0 and then move apart as t → ∞. Because the ground state smoothly changes from |ψ_+⟩ as t → −∞ to |ψ_−⟩ as t → ∞, the probability to remain in the ground state for large positive times is given by P_−(t) = |⟨ψ_−|ψ(t)⟩|². Similarly, the probability to end in the excited state at long times is given by P_+(t) = |⟨ψ_+|ψ(t)⟩|². We are interested in both of these probabilities when t → ∞, P_+(∞) and P_−(∞). As shown in Fig. 1(a), to transition from the lower-energy state to the higher-energy state when traversing the avoided crossing, the system has to tunnel through a gap of size at least 2δ, so the probability to transition is associated with a tunneling process. Depending on the value of the rate v and the level separation δ, we distinguish between two types of transitions.
If the system evolves adiabatically, that is, extremely slowly, it will always remain in the lower-energy state (the lower band); that is, it will make perfect transitions from one instantaneous ground state to another along its time evolution. According to the diagram in Fig. 1(a), this means that around t = 0 there will be a slow and smooth transition from |ψ_+⟩ to |ψ_−⟩. If we let the system evolve diabatically (fast), it tunnels from the lower to the upper band.

[Caption of Fig. 1: (a) The two instantaneous energy levels E_±(t) = ±√(v²t² + δ²) (the upper blue and the lower orange curve). At time t = 0, the two levels are separated by the minimal energy gap of E_+(0) − E_−(0) = 2δ. The green lines show the crossing of the (δ = 0) energy levels ±vt at time t = 0 in a system that has no coupling between the levels. The time evolution algorithm that we use is divided into three parts. The initial state |ψ(−T_max)⟩ is obtained either using perturbation theory applied to |ψ_+⟩ at t = −∞ (what we consider as the perturbed state), or is set to |ψ_+⟩ at −T_max (what we consider as the unperturbed initial state). This initial state is then propagated towards |ψ(T_max)⟩ using an evolution operator in the Trotterized form with a time step of Δt. An additional perturbation is applied to |ψ(T_max)⟩ to obtain the final state at t = +∞, or |ψ(T_max)⟩ is considered to be the final state (for the two different types of calculations). (b) Time evolution of the computed transition probability P_+(t) = |⟨ψ_+|ψ(t)⟩|² compared to the expected probability P_+(∞) obtained from the analytical expression for the Landau-Zener transition. The rate is v = π and δ = 1; we use these same parameters for all of the numerical calculations in this paper. (c) The inset in panel (c) shows the fast oscillations of the transition probability P_+(t) with time on a backdrop of width vt² = 4Nπ (gray and white background), showing that these oscillations have a period proportional to ∼t². The amplitude of these oscillations for the interval [−T_max, T_max] decays quite slowly with T_max (as a power law) unless corrected by the time-dependent perturbation theory.]

The objective of the Landau-Zener calculation is to precisely determine these probabilities as functions of v and δ. Zener 2 solved the full time-dependent problem analytically by mapping the equation of motion of the system into the form of the Weber equation, which allowed him to obtain the exact solution P_+(∞) = exp(−πδ²/v). Here, we present a computational approach to find the same solution numerically. The first issue that arises when considering this problem from a numerical perspective is how to deal with the infinite times. Numerical simulations work with finite times, so how does one effectively start with a state at t = −∞ and obtain a result at t = +∞ on a computer? It is usually assumed that starting in the state |ψ_+⟩ at some sufficiently large (but finite) negative time is justified, and will lead to an accurate numerical solution. But the Landau-Zener problem is known for its slowly decaying oscillations of the transition probability P_+(t) with time, which are illustrated on the right hand side of Fig. 1(b) and in Fig. 1(c). Although the time evolution of a two-level system is a simple problem to solve numerically and is not computationally demanding, the persistence of these slowly decaying oscillations presents a serious problem in accurately determining the transition probability as t → ∞. This problem might seem trivial here, since the analytical solution is known, but it becomes important for generalizations of the Landau-Zener problem, where the level separation is not linear in time and the exact solution is not known.
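Zener's closed-form result is trivial to evaluate on a computer and serves as the benchmark for the numerics that follow. A minimal sketch in Python (our own illustration, not from the paper, assuming ħ = 1 units; the function name is ours):

```python
import math

def lz_probability(v, delta, hbar=1.0):
    """Zener's analytic result for the probability to remain in the
    diabatic state |psi_+> as t -> infinity: exp(-pi delta^2 / (hbar v))."""
    return math.exp(-math.pi * delta**2 / (hbar * v))

# The paper's parameters v = pi, delta = 1 give P_+(inf) = e^{-1}.
print(lz_probability(v=math.pi, delta=1.0))  # ~0.3679
```

With v = π and δ = 1, the exponent is exactly −1, which is why the paper's figures compare the numerics against e⁻¹ ≈ 0.3679.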
In our numerical approach, we show how the time evolution can be divided into three parts, where propagation from −∞ to some cutoff time and from another cutoff time to +∞ can be resolved using time-dependent perturbation theory (see the gray areas in Fig. 1(a)), whereas the evolution on the finite time interval between the cutoffs can be computed using the Trotter product formula, which discretizes the time evolution operator. We discuss both these approaches in more detail in the following sections. Before we explain how to implement these quantum-mechanical concepts, we have to establish some mathematical prerequisites necessary to understand time evolution in quantum mechanics and time-ordered products of Hamiltonians based on Pauli matrices (two-level systems). Pauli matrices satisfy the commutation relations [σ_i, σ_j] = 2i ε_ijk σ_k, where the indices i, j and k represent the coordinates x, y, and z, the factor 2i is twice the imaginary number i, and ε_ijk is the Levi-Civita (completely antisymmetric) tensor; this is a tensor that is equal to 1 when ijk is an even permutation of 123, is equal to −1 when ijk is an odd permutation of 123, and vanishes otherwise. Similarly, their anticommutator is given by {σ_i, σ_j} = 2δ_ij I, where I is the unit matrix and δ_ij is the Kronecker delta function. These two expressions can be combined to create the product formula for any two Pauli matrices, σ_i σ_j = δ_ij I + i ε_ijk σ_k. The product formula is useful for evaluating the exponentials of weighted sums of Pauli matrices, which are needed to construct the time evolution operators. The exponential of a linear combination of Pauli matrices can be expanded into an infinite series, e^{i γ·σ} = Σ_{n=0}^{∞} (i γ·σ)^n / n!. Here γ is a 3-component vector of real numbers and the dot product is understood as γ·σ = γ_x σ_x + γ_y σ_y + γ_z σ_z (which is a 2 × 2 matrix). The quadratic term (i γ·σ)² in the series is computed by using the product formula for two Pauli matrices.
One obtains (i γ·σ)² = −|γ|² I − i Σ_{ijk} ε_ijk γ_i γ_j σ_k. Here, the Σ_{ij} ε_ijk γ_i γ_j σ_k term is zero because one can interchange the i and j indices in the summation and show that the sum equals its own negative. The infinite sum can then be broken into two sums: those involving even powers and those involving odd powers. Each can be resummed to yield cos |γ| or sin |γ|. This simplification yields the generalized Euler identity for Pauli matrices, e^{i γ·σ} = cos(|γ|) I + i sin(|γ|) (γ·σ)/|γ|, which transforms the symbolic expression for an exponential of the Pauli matrices into a concrete 2 × 2 matrix. This result will be employed in computing the time-evolution operator. We also use the product formula of two Pauli matrices to compute the product of two exponentials of linear combinations of Pauli matrices via e^{i γ·σ} e^{i γ′·σ} = [cos|γ| cos|γ′| − sin|γ| sin|γ′| (γ̂·γ̂′)] I + i [sin|γ| cos|γ′| γ̂ + cos|γ| sin|γ′| γ̂′ − sin|γ| sin|γ′| (γ̂ × γ̂′)]·σ, with γ̂ = γ/|γ|. It is important to emphasize that, unlike exponentials of real numbers, the expression e^{i γ·σ} e^{i γ′·σ} = e^{i(γ+γ′)·σ} does not generally apply for exponentials of Pauli matrices. The reason for this is the last term in Eq. (10) with the scalar triple product. When this term is present, the product of the two exponentials does not commute and they cannot be interchanged. In the case when the two vectors γ and γ′ are collinear, the triple-product term vanishes, the two coefficients can be summed, and the two exponentials do commute. III. COMPUTATIONAL APPROACHES TO THE TIME-ORDERED PRODUCT Regardless of how we propagate from t = −∞ to t = −T_max (and t = T_max to t = ∞), we still must use the computer to explicitly propagate from t = −T_max to t = T_max. To time evolve the state |ψ(−T_max)⟩ to the state |ψ(T_max)⟩, we must apply the appropriate time-evolution operator. The time-evolution operator satisfies |ψ(t)⟩ = Û(t, t_0)|ψ(t_0)⟩ and depends on the initial time t_0 and the final time t. The time-evolution operator can be found from the facts that it must be unitary (so that it preserves the norm of the state at any time, ⟨ψ(t)|ψ(t)⟩ = 1) and that it must have the "semigroup property," which implies that time evolution is additive.
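The generalized Euler identity is easy to check numerically. A sketch in Python/NumPy (our own check, not part of the paper), comparing the identity against a brute-force Taylor series for the matrix exponential:

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def euler_pauli(gamma):
    """exp(i gamma . sigma) via the generalized Euler identity:
    cos|gamma| I + i sin|gamma| (gamma_hat . sigma)."""
    g = np.linalg.norm(gamma)
    if g == 0:
        return I2.copy()
    gdots = gamma[0] * sx + gamma[1] * sy + gamma[2] * sz
    return np.cos(g) * I2 + 1j * np.sin(g) * gdots / g

def expm_series(A, terms=40):
    """Brute-force matrix exponential via its Taylor series."""
    out, term = np.eye(2, dtype=complex), np.eye(2, dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

gamma = np.array([0.3, -0.7, 1.1])
lhs = euler_pauli(gamma)
rhs = expm_series(1j * (gamma[0] * sx + gamma[1] * sy + gamma[2] * sz))
print(np.allclose(lhs, rhs))  # True
```

The same comparison can be repeated for random γ vectors; the two 2 × 2 matrices agree to machine precision once enough Taylor terms are kept.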
In other words, evolving the state from t_0 → t′ and from t′ → t is the same as directly evolving the state from t_0 → t. Since the Hamiltonian must govern the time evolution, we are led to Û(t, t_0) = e^{−(i/ħ)Ĥ(t−t_0)} for constant Hamiltonians, where the sign in the exponent is determined by convention. In the time-dependent case, we determine the full evolution operator by considering the time evolution over a short time interval from t to t + Δt. We simply assume that the time interval is short enough that we can take Ĥ(t) as being piecewise constant (over the time interval of length Δt) and use the constant-Hamiltonian time evolution operator for this piecewise constant Hamiltonian over the short interval Δt. Then Û(t + Δt, t) = e^{−(i/ħ)Ĥ(t)Δt}. By applying a sequence of these operators one can construct a discretized version of the evolution operator, Û(t, t_0) ≈ e^{−(i/ħ)Ĥ(t−Δt)Δt} ⋯ e^{−(i/ħ)Ĥ(t_0+Δt)Δt} e^{−(i/ħ)Ĥ(t_0)Δt}, which approaches the exact evolution operator as Δt → 0. This is called the Trotter product formula. In the limit of Δt → 0, the time-ordered product is conventionally written as Û(t, t_0) = T exp(−(i/ħ) ∫_{t_0}^{t} Ĥ(t′) dt′), where T is the time-ordering operator, which orders times with the "latest times to the left." In our numerical approach to the Landau-Zener problem, the time evolution over the finite interval −T_max ≤ t ≤ T_max is performed via the Trotter product formula with the time step Δt. We write the evolution operator Û(T_max, −T_max) as a time-ordered sequence of exponentials of the Landau-Zener Hamiltonian after using the generalized Euler identity for the exponentials of the Pauli matrices in Eq. (9). Each of the exponential Trotter factors Û(t + Δt, t) is now a concrete 2 × 2 matrix that acts on the state at a given time. Here, we use the exponentiated linear superposition of Pauli matrices with the vector γ = −(Δt/ħ)(vt, 0, δ)ᵀ (which is a time-dependent vector) for each Trotter factor. Note that in general exp[i(aσ_z + bσ_x)] ≠ exp(iaσ_z) exp(ibσ_x) because the matrices σ_z and σ_x do not commute, as explained in the previous section. However, if the coefficients a and b are very small, as in the case of Eq.
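A single exact-form Trotter factor can be written down directly from the Euler identity with γ = −(Δt/ħ)(vt, 0, δ)ᵀ. A sketch in Python/NumPy (our own illustration, ħ = 1; the function name is ours):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def trotter_factor(t, dt, v, delta, hbar=1.0):
    """Exact-form Trotter factor U(t+dt, t) = exp(-i dt H(t)/hbar) for
    H(t) = v t sigma_z + delta sigma_x, via the generalized Euler identity."""
    gz, gx = -dt * v * t / hbar, -dt * delta / hbar  # components of gamma
    g = np.hypot(gz, gx)                             # |gamma|
    if g == 0:
        return I2.copy()
    return np.cos(g) * I2 + 1j * np.sin(g) * (gz * sz + gx * sx) / g

U = trotter_factor(t=0.0, dt=1e-3, v=np.pi, delta=1.0)
print(np.allclose(U @ U.conj().T, I2))  # unitarity check: True
```

Because each factor is cos|γ| I + i sin|γ| (γ̂·σ), it is exactly unitary for any Δt; the only approximation is taking Ĥ(t) piecewise constant over each slice.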
(15) for Δt → 0, then exp[i(aσ_z + bσ_x)] ≈ exp(iaσ_z) exp(ibσ_x) because the error corresponding to the commutator of the two terms is on the order of ∼Δt². Thus if the evolution operator is written in the Trotter product form, when Δt is sufficiently small, one can approximate a Trotter factor via Û(t + Δt, t) ≈ exp(−(i/ħ)vtσ_zΔt) exp(−(i/ħ)δσ_xΔt), which we call the split form of the evolution operator. We contrast this to the evolution operator in Eq. (15), which we call the exact form since there is no approximation (except assuming the Hamiltonian is constant over the time interval Δt). The evolution operator expressed in the split form is accurate to the order of Δt², so the two forms should approach one another in the limit of Δt → 0. The operator in the split form has an additional interesting feature. The exponentials of the σ_x matrix, exp(−(i/ħ)δσ_xΔt), in the Trotter product sequence are constant for a fixed time step Δt, whereas the σ_z terms, exp(−(i/ħ)vtσ_zΔt), depend on time t. At special times T_n, the exponent will satisfy the condition vT_nΔt/ħ = 2nπ. At these times, the given Trotter component with σ_z (in the split form) is a unit matrix. The next σ_z component in the Trotter product will have the exponent v(T_n + Δt)Δt/ħ, which is the same as the earlier term v(T_{n−1} + Δt)Δt/ħ and the same as the even earlier time vΔtΔt/ħ, and so on. This means that in the split form, the Trotter product repeats with period T = 2πħ/(vΔt). But we know that the exact time evolution operator is not periodic. So this periodicity is an artifact of using the split form. It means we must choose a Δt such that the split form is not close to its periodic behavior over the interval −T_max ≤ t ≤ T_max. If we do not, then the accumulation of error in the split form leads to the transition probability repeatedly switching between P_+(−T/2) and P_+(T/2) as time passes through different T_n points.
To obtain precise results using the split method we should pick Δt so that T > T_max, ideally with T_max no larger than T/4. We now discuss how to perform the numerical calculation. The time evolution can be implemented in two different ways. One can write a function that computes the time-evolution operator Û(t + Δt, t) at every time step and applies it to the state |ψ(t)⟩ to obtain |ψ(t + Δt)⟩ (propagating the state), or one can multiply this time-evolution operator with the accumulated time-evolution operator for all previous times, Û(t + Δt, −T_max) = Û(t + Δt, t)Û(t, −T_max) (propagating the operator). In the latter case, the state is obtained from |ψ(t + Δt)⟩ = Û(t + Δt, −T_max)|ψ(−T_max)⟩. Using the Trotter product formula combined with the generalized Euler identity for each Trotter factor enables the numerical solution of the Landau-Zener problem on the finite time interval [−T_max, T_max]. Since we are working with 2 × 2 matrices that have an explicit form for each time step, this time evolution is relatively straightforward to program. However, we still need to determine the initial state |ψ(−T_max)⟩. In the next section, we explain how to include the time evolution operator over the two semi-infinite intervals by using the interaction picture and time-dependent perturbation theory. IV. EXTRAPOLATION TO INFINITE TIME WITH TIME-DEPENDENT PERTURBATION THEORY The roadblock to analytically determining the evolution operator for the Landau-Zener problem in Eq. (1) is the linear time dependence of the σ_z term and the fact that the two Pauli matrices (σ_z and σ_x) do not commute. This makes the time-ordered product in Eq. (14) virtually impossible to solve analytically. For large positive and negative times, we must use time-dependent perturbation theory.
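Propagating the state in this way takes only a few lines. A sketch in Python/NumPy (our own illustration, ħ = 1) that starts from the unperturbed initial state |ψ_+⟩ = (1, 0)ᵀ at −T_max and evaluates each exact-form Trotter factor at the midpoint of its time slice; without the perturbative tail corrections, the result still oscillates slowly around the analytic value:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def evolve_state(v, delta, t_max, dt, hbar=1.0):
    """Propagate |psi(-t_max)> = |psi_+> = (1, 0)^T to +t_max with
    exact-form Trotter factors; return P_+(t_max) = |<psi_+|psi>|^2."""
    psi = np.array([1.0, 0.0], dtype=complex)
    n_steps = int(round(2 * t_max / dt))
    for n in range(n_steps):
        t = -t_max + (n + 0.5) * dt              # midpoint of the n-th slice
        gz, gx = -dt * v * t / hbar, -dt * delta / hbar
        g = np.hypot(gz, gx)                      # nonzero whenever delta != 0
        U = np.cos(g) * I2 + 1j * np.sin(g) * (gz * sz + gx * sx) / g
        psi = U @ psi
    return abs(psi[0])**2

v, delta = np.pi, 1.0
p = evolve_state(v, delta, t_max=20.0, dt=1e-3)
# p lands close to the analytic e^{-1} ~ 0.368, up to residual finite-T_max
# oscillations of order 1e-2.
print(p, np.exp(-np.pi * delta**2 / v))
```

Shrinking Δt and growing T_max shows the two convergence issues discussed in the text: the Trotter error vanishes as Δt → 0, while the slow finite-T_max oscillations persist until the perturbative tails are appended.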
However, while in conventional quantum instruction the unperturbed Hamiltonian is always chosen to be the time-independent piece and the perturbation is time dependent, here the unperturbed part of the Hamiltonian is the large piece (the σ_z piece for large |t|), and the perturbation is the constant piece (the σ_x piece). So, we split the Landau-Zener Hamiltonian into two parts: the main time-dependent Hamiltonian Ĥ₀(t) = v t σ_z and the time-independent perturbation V̂ = δ σ_x. The evolution operator is then constructed in the interaction picture via

Û(t, t₀) = Û₀(t, t₀) Û_I(t, t₀),

where Û₀(t, t₀) is the evolution operator for the unperturbed Hamiltonian Ĥ₀(t), and Û_I(t, t₀) is the evolution operator for the perturbation. In the interaction picture, we have V̂_I(t) = Û₀†(t, t₀) V̂ Û₀(t, t₀). The time-ordered product T in Eq. (17) is to be understood in the same sense as in Eq. (14), meaning that we write it as an ordered sequence of exponentials of each term in the integrand V̂_I(t). Note that Eq. (17) is typically not the exponential of the integral of the operator V̂_I(t); this occurs only if the integrand commutes with itself at different times. Note further that the notion of time-dependent perturbation theory here arises because, as t goes to ±∞, the unperturbed piece Ĥ₀(t) is much larger than V̂. We also emphasize that Eq. (17) is exact and does not contain any approximation (the perturbative approximation arises from Taylor expanding the evolution operator Û_I(t, t₀), as we show below).

The evolution operator for the unperturbed Hamiltonian Ĥ₀(t) can be computed analytically because it commutes with itself at all times. Hence, we can simply integrate the unperturbed piece to find Û₀(t, t₀) = exp(−(i/ħ) ½ v (t² − t₀²) σ_z). We next use this exact result to determine the perturbation in the interaction picture (which is a rotation of the σ_x matrix about the z-axis).
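This rotation property is easy to verify numerically: conjugating δσ_x with Û₀(t, t₀) must leave the σ_z component of V̂_I(t) zero while keeping its (x, y) components on a circle of radius δ. A sketch with illustrative values (ħ = 1):

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

v, delta, t0 = 1.0, 0.5, -5.0   # illustrative values, hbar = 1

def u0(t, t0):
    """Unperturbed evolution U0(t, t0) = exp(-i v (t^2 - t0^2) sz / 2)."""
    phi = 0.5 * v * (t ** 2 - t0 ** 2)
    return np.diag([np.exp(-1j * phi), np.exp(1j * phi)])

def v_int(t):
    """Perturbation in the interaction picture: U0^dag (delta sx) U0."""
    u = u0(t, t0)
    return u.conj().T @ (delta * SX) @ u

components = []
for t in (-3.0, 0.0, 2.5):
    vi = v_int(t)
    # Pauli components c_a = Tr(vi sigma_a) / 2 (real, since vi is Hermitian)
    cx = np.trace(vi @ SX).real / 2
    cy = np.trace(vi @ SY).real / 2
    cz = np.trace(vi @ SZ).real / 2
    components.append((cx, cy, cz))

# For every t the sz component vanishes and cx^2 + cy^2 = delta^2:
# V_I(t) is sigma_x rotated about the z-axis, as claimed.
print(components)
```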
It becomes the expression given in Eq. (18), which can be found by using the generalized Euler identity to determine each exponential factor and then multiplying the three matrices together. The strategy of perturbation theory is to approximate the time-ordered product for Û_I(t, t₀) in the evolution operator by the first-order expansion

Û_I(t, t₀) ≈ 1 − (i/ħ) ∫_{t₀}^{t} dt′ V̂_I(t′),

which is accurate for t close to t₀, or when V̂_I is "small." Directly computing the V̂_I(t) operator from Eq. (18) and then factoring the result in terms of Pauli matrices gives the perturbation matrix. The unit matrix can also be written as 1 = Ẑ(t₀) 1 Ẑ⁻¹(t₀), and Û₀(t, t₀) = Ẑ(t) Ẑ⁻¹(t₀), so the evolution operator becomes the expression in Eq. (21). Eq. (21) yields an approximate formula for the evolution operator in the interaction picture for any two times. We now apply it to compute Û(−T_max, −∞) and Û(+∞, T_max). In the first case, the operator Ẑ⁻¹(t₀ = −∞) on the right side of Eq. (21) can be neglected, because we start from an eigenstate of the σ_z operator (the |ψ₊⟩ state), and acting with Ẑ⁻¹ just produces a global complex phase, which does not affect the probabilities. In the second case, when computing Û(+∞, T_max), the operator on the left side of Eq. (21) (i.e., Ẑ(t = ∞)) can be neglected, because we are interested only in the probability P₊(∞), and that term also contributes just a complex phase, which cancels out. The matrix integral in Eq. (21) can be analytically computed in both cases. For Û(−T_max, −∞), both off-diagonal components of the resulting perturbation matrix, η(t) and ξ(t), are expressed in terms of the error function with a complex argument. The evolution operator obtained through perturbation theory is approximate and not necessarily unitary. This means the quantum state must be normalized "by hand" after applying the approximate evolution operator. In other words, we renormalize both at time t = −T_max and at t = ∞. Note that the central integral in Eq.
(21) does not change for Û(+∞, T_max), which follows from V̂_I(−t) = V̂_I(t) and from the change of variables t → −t, but the Ẑ(t) and Ẑ⁻¹(t₀) operators do change.

The improvement in accuracy for the calculation that includes the time-dependent perturbation theory arises from its use of a more accurate initial state at −T_max. This improvement is shown in Fig. 2. In Fig. 2(a), we compare the phase of the state |ψ(t)⟩ obtained by propagating |ψ₊⟩ from a distant time −T_max = −1000 (blue curve) with the one obtained by perturbation theory as functions of time (t = T_max, the orange curve). In general, the amplitude |⟨ψ₋|ψ(t)⟩| in Fig. 2(b) matches between the perturbed and unperturbed states, but as shown in Fig. 2(a) they differ by a phase. This initial phase difference introduces an error at later times that propagates through the Trotter product. The advantage of the perturbation theory is that it reduces the computational demand by reducing the time range [−T_max, T_max] needed for the numerical calculation. Instead of propagating the state from the very distant past, one can obtain a precise result for a much smaller T_max. The perturbation that brings the state to t = +∞ is even more important, because it removes the small but persistent oscillations of the transition probability, as we show in Sec. VI.

Note that discussing time-dependent perturbation theory provides a number of benefits to the students: (i) it shows them how to extrapolate solutions from finite to infinite time; (ii) it illustrates how to employ time-dependent perturbation theory in a nonstandard fashion; and (iii) it shows how small errors in the initial conditions can propagate through a calculation and affect results at later times if they are not properly addressed.

V. COMPRESSION ALGORITHMS

In Sec. III, we explained how to compute the time-evolution operator with the Trotter product and time discretization.
Using the generalized Euler identity, the exponents at each time step can be converted into 2×2 matrices, and the evolution operator can be computed by matrix multiplication. Replacing a sequence of exponentiated operators (as in the Trotter product) with a single exponential operator is called compression. Sophisticated compression algorithms based on the Cartan decomposition of Lie groups are employed in quantum computing to greatly reduce the depth of circuits.³,⁴ In this section, we show how a simpler application, using the two equivalent ways to represent rotations, can be used to compress the time-evolution operator for the Landau-Zener problem, which provides a nice opportunity to show how compression algorithms work in a quantum class. Instead of computing the evolution operator by matrix multiplication at each time step, compression relies on combining the exponential parameters (the vectors γ⃗ used in each exponential factor e^{iγ⃗·σ⃗}) into a single vector for the evolution operator over the finite time interval.

The basic idea for compression comes from the fact that each exponential of a Pauli matrix represents a rotation on the Bloch sphere. A sequence of rotations about different axes can be replaced by a single rotation about a single axis. We focus on two compression algorithms that could be used to compute the evolution operator Û(T_max, −T_max). The first algorithm, called XZX compression, is based on a mathematical relationship between the Pauli matrices,

e^{iα σ_x} e^{iβ σ_z} e^{iγ σ_x} = e^{ia σ_z} e^{ib σ_x} e^{ic σ_z},

where one can compute a, b, and c from the known coefficients α, β, and γ. With the help of the generalized Euler identity, we convert the left and right sides of Eq. (26) into 2×2 matrices and compute the relationships between the coefficients by equating the four matrix elements of the final products on each side of the equation. This gives exact inverse trigonometric relations, for example

c = ½ { arctan[ tan(β) cos(α − γ) / cos(α + γ) ] + arctan[ tan(β) sin(α − γ) / sin(α + γ) ] },

with similar relations for a and b. As shown in Fig. 3(a), the identity in Eq.
(26) can be applied to the time-evolution operator for the Landau-Zener problem written as a Trotter product in the split form of Eq. (16). Here, the Trotter product is a sequence of alternating constant σ_x and time-dependent σ_z exponentials (the first row of Fig. 3(a)). Taking the three exponentials lying inside the red box in the first row of Fig. 3(a), we apply Eq. (26) and switch the ordering of the operators to the one in the second row of Fig. 3(a); that is, we change a product of exponentials in the form XZX into the form ZXZ. This re-expression of factors allows the now-adjacent σ_z exponentials on the edges of the red box on the second line to merge into a single exponential, as emphasized by the red boxes in the third row of Fig. 3(a). The initial five exponential components are then compressed into three. This procedure is repeated until the Trotter product is reduced to just three exponential factors, each of which can be computed using the generalized Euler formula.

The second compression algorithm is computationally far more efficient than the XZX compression, since it does not require any inverse trigonometric functions. We call this algorithm the nearest-neighbor algorithm because it involves merging the neighboring exponentials in the Trotter product, as shown in Fig. 3(b).

[Displaced caption of Fig. 4: Here, P₊(∞) is the probability obtained by the exact Landau-Zener formula exp(−πδ²/v), while P₊(∞) and P₊(T_max) are the probabilities computed using the Trotterized evolution, with or without the perturbation, respectively. We also compare the accuracy when the Trotter-step exponential is evaluated exactly, or when the Hamiltonian is split between the σ_z and σ_x terms in each Trotter step. In all cases, T_max = 30. High accuracy can only be attained for this value of T_max when the infinite tails are included perturbatively.]
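A quick way to sanity-check the XZX-to-ZXZ step above, without working through the closed-form inverse-trigonometric relations, is to extract the ZXZ angles directly from the SU(2) matrix elements. A hedged sketch (the helper names and illustrative angles are ours, not the paper's supplementary code):

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)

def rx(theta):
    """exp(i theta sx)."""
    return np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * SX

def rz(theta):
    """exp(i theta sz)."""
    return np.diag([np.exp(1j * theta), np.exp(-1j * theta)])

def xzx_to_zxz(alpha, beta, gamma):
    """Find (a, b, c) such that
    exp(i alpha sx) exp(i beta sz) exp(i gamma sx) = exp(i a sz) exp(i b sx) exp(i c sz).

    Uses the matrix elements of the ZXZ form,
    M = [[e^{i(a+c)} cos b,     i e^{i(a-c)} sin b],
         [i e^{-i(a-c)} sin b,  e^{-i(a+c)} cos b]],
    choosing the branch with sin b >= 0.
    """
    m = rx(alpha) @ rz(beta) @ rx(gamma)
    b = np.arctan2(abs(m[0, 1]), abs(m[0, 0]))
    apc = np.angle(m[0, 0])                 # a + c
    amc = np.angle(m[0, 1]) - np.pi / 2     # a - c
    return 0.5 * (apc + amc), b, 0.5 * (apc - amc)

alpha, beta, gamma = 0.3, 0.7, -0.2         # illustrative, generic angles
a, b, c = xzx_to_zxz(alpha, beta, gamma)
rebuilt = rz(a) @ rx(b) @ rz(c)
target = rx(alpha) @ rz(beta) @ rx(gamma)
print(np.abs(rebuilt - target).max())
```

The same Euler-angle extraction works for any generic angles, which makes it a useful unit test for a student's closed-form implementation of Eq. (26).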
In contrast to the XZX compression, which requires the split form of the Trotter product, the nearest-neighbor compression uses the exact form for each Trotter factor. This compression algorithm is based on Eq. (10) for the product of two exponentials, e^{iγ⃗·σ⃗} e^{iγ⃗′·σ⃗} = e^{iγ⃗″·σ⃗}. We simply need to construct the vector γ⃗″ from the known γ⃗ and γ⃗′. Comparing the generalized Euler formula in Eq. (9) and its extension to a product of two exponentials in Eq. (10), we immediately find that

cos|γ⃗″| = cos|γ⃗| cos|γ⃗′| − sin|γ⃗| sin|γ⃗′| (γ̂·γ̂′)

and

sin|γ⃗″| γ̂″ = sin|γ⃗| cos|γ⃗′| γ̂ + cos|γ⃗| sin|γ⃗′| γ̂′ − sin|γ⃗| sin|γ⃗′| (γ̂ × γ̂′).

The previous two equations connect the four-component vectors (cos|γ⃗|, sin|γ⃗| γ⃗/|γ⃗|) for γ⃗ and for γ⃗′ with the one for γ⃗″. Nearest-neighbor compression consists in computing these four-component vectors for every step in the Trotter product and combining them using Eq. (30) and Eq. (31). If the time interval [−T_max, T_max] is divided into 2^{N_c} time steps, then the compression can reduce the Trotter product to a single exponential in N_c iterations (a logarithmic number of steps). Computationally, this method is much faster than XZX compression, and even than direct matrix multiplication, but it requires a large amount of RAM for a small time step Δt, because the four-component vectors are kept in memory for every time step, whereas in the direct multiplication the evolution operator is computed "on the fly," which requires storing only its current value. The memory requirement can be reduced if the integration interval [−T_max, T_max] is divided into smaller intervals and compression is applied to each one of them sequentially.

[Displaced figure caption fragment: ...the same time step as in Fig. 4, namely Δt = 10⁻⁴. One expects that the time step Δt must be reduced to accurately reach larger T_max values because of accumulated errors (here the Δt value is held fixed). This clearly occurs for the split method, which is much more sensitive to Δt errors, but surprisingly it does not impede the exact, perturbed approach through nearly 12 digits of accuracy.]

There are two benefits of this work for the students.
First, they learn about the idea of compression, which is becoming increasingly important in quantum computing; and second, they learn how one can revise initial algorithms to make them computationally more efficient, an important skill for the computational physicist.

VI. IMPLEMENTATION STRATEGIES AND CONCLUSION

With these technical details completed, we now discuss how to implement this problem as a class project. Figure 4 shows the numerical accuracy achieved using different approaches (perturbation vs. no perturbation, and exact vs. split form of the evolution operator), expressed as an error ΔP in determining the transition probability at infinity on a logarithmic scale. The logarithmic scale tells how many digits of accuracy one can achieve using the different approaches. The advantage of the time-dependent perturbation theory is obvious, since the unperturbed result converges very slowly to the expected probability. The figure also shows how the accuracy of the split form of the Trotter product approaches the accuracy of the exact Hamiltonian for sufficiently small Δt. The accuracy in this case is limited by our choice of T_max = 30, so the computed results converge with decreasing Δt. Similarly, Fig. 5 shows how the integration range [−T_max, T_max] influences the computed transition probability. The persistent oscillations in the unperturbed system prevent determining the transition probability beyond two or three digits of accuracy, even for T_max = 1000. The slow decay of the unperturbed error suggests that achieving higher accuracy in this case is essentially impossible, even with increasing T_max. The perturbed results show high accuracy even for small T_max. One way to increase the accuracy in the unperturbed case is to perform time-averaging once P₊(t) starts to oscillate around the expected P₊(∞), as done previously,¹⁰ but it is difficult to do this systematically when the amplitude is also decreasing with time, especially to high accuracy.
We can also infer that the split approximation is more sensitive to Δt errors, since we used a fixed time step rather than reducing it as the cutoff time increases; here we see that the accumulated errors due to the finite size of the time step worsen the accuracy for large cutoff times. This advanced computational concept is beautifully illustrated in this project. Note that one need not worry about round-off error associated with the matrix multiplications in this work; those errors appear to always be much smaller than the other intrinsic errors of the computational algorithm.

The Landau-Zener problem is a challenging computational project for quantum-mechanics students that does not require any knowledge of the higher-level differential equations usually used to solve this problem. Students really get a taste of how computational physics works: they need to work through some nontrivial formalism to determine precisely what needs to be calculated, and then they need to carefully program the results and run them. Finally, they need to examine the accuracy of the results. In the supplementary materials, we provide a well-documented Python package that includes the codes used to produce the results presented in this paper. Teams can be formed to work on implementing different approaches and can collaborate to compare the different outcomes (e.g., perturbation vs. no perturbation, or split Trotter evolution vs. exact Trotter evolution) and to discuss the computational efficiency and numerical accuracy of each of these approaches. The workload necessary to solve this problem goes beyond a simple homework assignment, but it offers the opportunity for students to gain deep knowledge of quantum mechanics in a practical example that will allow them to acquire skills that are important for computational research.
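As a closing illustration of the nearest-neighbor compression from Sec. V, the four-component-vector merging rule can be verified in a few lines against direct matrix multiplication. A sketch with random illustrative rotation vectors (not the paper's supplementary code):

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def to_four_vector(gamma):
    """(cos|g|, sin|g| ghat) representation of exp(i g.sigma)."""
    g = np.linalg.norm(gamma)
    u = np.sin(g) * np.asarray(gamma) / g if g > 0 else np.zeros(3)
    return np.cos(g), u

def combine(f1, f2):
    """Four-vector of the product exp(i g1.sigma) exp(i g2.sigma):
    w'' = w w' - u.u',   u'' = w' u + w u' - u x u'
    (quaternion multiplication in disguise)."""
    w1, u1 = f1
    w2, u2 = f2
    return w1 * w2 - u1 @ u2, w2 * u1 + w1 * u2 - np.cross(u1, u2)

def as_matrix(f):
    w, u = f
    return w * np.eye(2) + 1j * (u[0] * SX + u[1] * SY + u[2] * SZ)

# Compress a product of 2^3 random Pauli exponentials by pairwise merging
rng = np.random.default_rng(7)
gammas = rng.normal(scale=0.3, size=(8, 3))
factors = [to_four_vector(g) for g in gammas]
while len(factors) > 1:   # log2(8) = 3 merge rounds
    factors = [combine(factors[i], factors[i + 1])
               for i in range(0, len(factors), 2)]

direct = np.eye(2, dtype=complex)
for g in gammas:
    direct = direct @ as_matrix(to_four_vector(g))
compressed = as_matrix(factors[0])
print(np.abs(compressed - direct).max())
```

Because each merge is a handful of real multiplications rather than a complex matrix product, and the tree reduction needs only log₂ of the step count in rounds, this directly mirrors the efficiency argument made in Sec. V.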
Lifestyle InterVention IN Gestational diabetes (LIVING) in India, Bangladesh and Sri Lanka: protocol for process evaluation of a randomised controlled trial

Introduction The development of type 2 diabetes mellitus disproportionately affects South Asian women with prior gestational diabetes mellitus (GDM). The Lifestyle InterVention IN Gestational diabetes (LIVING) Study is a randomised controlled trial of a low-intensity lifestyle modification programme tailored to women with previous GDM in India, Bangladesh and Sri Lanka, aimed at preventing diabetes/pre-diabetes. The aim of this process evaluation is to understand what worked, and why, during the LIVING intervention implementation, and to provide additional data that will assist in the interpretation of the LIVING Study results. The findings will also inform future scale-up efforts if the intervention is found to be effective.

Methods and analysis The Reach Effectiveness Adoption Implementation Maintenance (RE-AIM) methodological approach informed the evaluation framework. Michie's Behaviour Change Theory and Normalisation Process Theory were used to guide the design of our qualitative evaluation tools within the overall RE-AIM evaluation framework. Mixed methods, including qualitative interviews, focus groups and quantitative analyses, will be used to evaluate the intervention from the perspectives of the women receiving the intervention, facilitators, site investigators and project management staff. The evaluation will use evaluation datasets, administratively collected process data accessed during monitoring visits, checklists and logs, quantitative participant evaluation surveys, semistructured interviews and focus group discussions. Interview participants will be recruited using maximum variation purposive sampling. We will undertake thematic analysis of all qualitative data, conducted contemporaneously with data collection until thematic saturation has been achieved.
To triangulate data, the analysis team will engage in constant iterative comparison among data from various stakeholders.

Ethics and dissemination Ethics approval has been obtained from the respective human research ethics committees of the All India Institute of Medical Sciences, New Delhi, India; the University of Sydney, New South Wales, Australia; and site-specific approval at each local site in the three countries: India, Bangladesh and Sri Lanka. This includes approvals from the Institutional Ethics Committee at King Edwards Memorial Hospital, Maharaja Agrasen Hospital, Centre for Disease Control New Delhi, Goa Medical College, Jawaharlal Institute of Postgraduate Medical Education and Research, Madras Diabetes Research Foundation, Christian Medical College Vellore, Fernandez Hospital Foundation, Castle Street Hospital for Women, University of Kelaniya, Topiwala National Medical College and BYL Nair Charitable Hospital, Birdem General Hospital and the International Centre for Diarrhoeal Disease Research. Findings will be documented in academic publications, presentations at scientific meetings and stakeholder workshops.

Trial registration numbers Clinical Trials Registry of India (CTRI/2017/06/008744); Sri Lanka Clinical Trials Registry (SLCTR/2017/001); and ClinicalTrials.gov Registry (NCT03305939); Pre-results.

Introduction
1. A number of GDM % estimates are listed; are these all using the same criteria? Also across the three countries? Perhaps just mention this (and which criteria have been used), as estimates can vary quite a lot depending on the criteria. Arora et al 2015, for instance, found 9% GDM in Punjab using WHO 1999 vs 35% using WHO 2013. Also, Meththananda Herath et al 2016 found GDM% to be 23% in Sri Lanka using IADPSG.
2. Line 140-141. A reference would be good, e.g. from the DPP studies (Ratner et al 2008).
3. Line 149-150. Is this the correct reference (15)?

Methods
1. References to the HeLP-her and PregDiabCare programs would be good.
2.
Could you elaborate a bit on the evidence of feasibility and effectiveness from the PregDiabCare programme.
3. Line 196. Who were all the key stakeholders? Women with prior GDM? Healthcare providers (which types of HCPs)? Family members? Policy makers? Community leaders?
4. The authors provide a very nice summary of the COM-B model and its relevance for the study. The NPT, however, is not described in a similar way. Perhaps add a sentence or two summarising this theory as well?
5. It is not clear to me what quantitative data is collected (lines 273-314). Are we talking data from clinical measurements? Data from surveys? And are these validated questionnaires? Perhaps expand Table 1, providing more detailed information, as is often done in protocol papers.
6. Who will be conducting the qualitative evaluation? For this part, I think it is important to address/discuss issues like trustworthiness, the researcher's position etc.
7. Line 301-306. Could you elaborate a bit on the qualitative data collection? Including what information you expect to get from the interviews vs the focus group discussions? The two methods are useful for different purposes. And also, will you conduct both interviews and group discussions with participants, staff and stakeholders (and who are the stakeholders?), or will some methods be more relevant for some groups?
8. Line 318. You write that "..help us understand what aspects of women's lifestyle are preventing them from achieving changes". I am not sure what is meant by this sentence. Is it really the women's lifestyle that is preventing them from making changes? Do you mean living conditions? Or what about aspects outside, e.g. structural issues, lack of social support etc.?
9. Line 320. How is the weather part of the intervention?
10. Line 374-375. Perhaps explain what is meant by "attracted continuous investment by people".
11. Are the three locations comparable?
Will there be any investigation comparing the intervention across the study sites? E.g. studying context-specific aspects which may be relevant at some sites and not others.
12. Sampling (lines 382-396). This section is slightly confusing. The 60-90 women recruited: what will they be recruited for? The quantitative evaluation? If so, then I suggest you start by describing the quantitative sampling and then the qualitative sampling.
13. Lines 382-396. How will the women be selected for the various evaluation methods? What will guide the recruitment? I assume it will be only women who receive the intervention, but this is not 100% clear. The recruitment is done by local site staff. This should be discussed.
14. Analysis (lines 397-424). Please elaborate on the use of thematic analysis. How will the coding be performed, and by whom? Will you move from codes to categories to themes? Perhaps consider adding a reference to a methods paper? You write that it will be informed by the RE-AIM, BCW and NPT; this suggests a deductive or abductive approach. The challenges related to this should be noted in the discussion.
15. Will interviews and group discussions be audio recorded and transcribed?
16. What about the analysis plan for the quantitative analyses?
17. How will data be managed? How will it be entered, stored, verified? How will you ensure participants' anonymity? How will you ensure that data is safely stored and kept in accordance with regulations?
18. There is no discussion section. It seems relevant to discuss the strengths and limitations of the study, including addressing the position of the researchers. Also, what are the implications of the study?

Reviewer 1 (Delphine Mitanchez), Comments to Author

Comment: The main objective is not defined, as three specific objectives are defined.
Response: The main objective of this study is to evaluate and understand what worked, and why, during the implementation of the LIVING intervention.
This main objective has now been emphasised in the abstract (lines 73-74) and under Aims and Objectives (at lines 171-172).

Comment: The calculation of the number of subjects is not presented. The authors announce in the introduction that 1414 women with prior GDM will be recruited, but how was this number estimated? In addition, according to the evaluations, the number of participants changes (sampling, lines 381-396), but it is also unclear how these numbers were estimated.
Response: We have added to the manuscript at lines 149-151 to highlight that the calculation of the number of subjects, and how they will be recruited, has been described in the main trial protocol, which has been referenced (ref #14). Thank you, we appreciate the need for further clarification here. 1414 is the total number of participants in the main trial. 60-90 participants will participate in the process evaluation. We have also added to the Sampling section to clarify that this sample was considered by the study investigators and implementers to be both feasible and sufficient to capture a variety of perspectives at each site, based on previous GDM studies.

Comment: What are the inclusion and exclusion criteria?
Response: We have included details on the inclusion and exclusion criteria at lines 150-154, with reference to the main trial protocol. In summary, the inclusion criterion for women is a GDM diagnosis in their most recent pregnancy. At each participating hospital, women with GDM will be identified at 24-34 weeks of gestation using International Association of the Diabetes and Pregnancy Study Groups (IADPSG) criteria. The inclusion criterion for randomisation is the absence of type 2 diabetes, i.e. confirmation of normal glucose tolerance (NGT), impaired fasting glucose (IFG) or impaired glucose tolerance (IGT) at the 6-month postpartum OGTT.
The exclusion criteria are: travel time to hospital > 2 h; lack of availability of a household mobile telephone; use of steroids during pregnancy other than for lung maturation of the baby; and likelihood of moving residence in the next 3 years.

Comment: The feasibility of the study is not discussed.
Response: The intervention being evaluated in this study has undergone pilot feasibility testing in HeLP-her and PregDiabCare. We have clarified in the Strengths and Limitations section that this study has been informed by prior studies that were found to be feasible and acceptable. In addition, we have referred to these studies again on page 7 of the manuscript, indicating that their contribution to the design of this study gives it a high probability of feasibility and acceptability in the South Asian context for women with GDM.

Comment: The trial registration is not reported.
Response: The trial registration is now reported in the abstract and in the Ethics and dissemination section (lines 440-443).

Comment: The numbering of the figures is to be reconsidered: figure 1 is not numbered in the manuscript and figure 2 is numbered as 3.
Response: Figure 1 is numbered in the manuscript at lines 225, 229, 258 and 261. Figure 3 is now labelled as Figure 2.

Reviewer 2 (Karoline Nielsen), Comments to Author

Comment: The trial itself, as well as the process evaluation, seem to have been ongoing for quite some time (since Dec 2015) and are expected to end in Dec 2020. Although this is not entirely clear: the authors note that the data collection for the process evaluation will begin in 2020 (lines 436-437), but it is also mentioned that data
Response: We have clarified the timeline for data collection at lines 463-465.

Comment: The authors do not mention following any reporting guidelines/checklists. I reckon the use of these may not be fully relevant for this paper, but nonetheless looking at the SPIRIT (and its extensions) or COREQ may be helpful.
Response:
In planning this study, we considered a range of reporting guidelines and checklists (CONSORT, STROBE, PRISMA, SPIRIT, STARD, CARE, AGREE, SRQR, ARRIVE, SQUIRE and CHEERS). We did not consider any of these guidelines to be suitable for process evaluation studies, or routinely used in the published literature for process evaluations. In addition, the CONSORT guideline was used in the main RCT trial paper, which met all of the standards under that guideline. My co-authors and I could not find this study. We would be more than happy to add this reference.

Comment: Line 149-150. Is this the correct reference (15)?
Response: We have amended this reference in the manuscript.

Methods

Comment: References to the HeLP-her and PregDiabCare programs would be good.
Response: We have now added these references.

Comment: Line 190-192. Could you elaborate a bit on the evidence of feasibility and effectiveness from the PregDiabCare programme.
Response: We have added further detail here.

Comment: [Perhaps expand] Table 1, providing more detailed information, as is often done in protocol papers. Who will be conducting the qualitative evaluation? For this part, I think it is important to address/discuss issues like trustworthiness, the researcher's position etc.
Response: We have added further detail at lines 424-430.

Comment: Line 301-306. Could you elaborate a bit on the qualitative data collection? Including what information you expect to get from the interviews vs the focus group discussions? The two methods are useful for different purposes. And also, will you conduct both interviews and group discussions with participants, staff and stakeholders (and who are the stakeholders?), or will some methods be more relevant for some groups?
Response: We have added further detail at lines 351-354.

Comment: Line 320. How is the weather part of the intervention?
Response: We have deleted this from the manuscript.

Comment: Line 374-375. Perhaps explain what is meant by "attracted continuous investment by people".
Response: We have now clarified this point by referencing the NPT framework at lines 391-393.

Comment: Are the three locations comparable?
Will there be any investigation comparing the intervention across the study sites? E.g. studying context-specific aspects which may be relevant at some sites and not others.
Response: Our interviews with service providers and facilitators will allow us to study context-specific aspects relevant at some sites and not others. This will facilitate a comparison between countries, but also across study sites.

Comment: Sampling (lines 382-396). This section is slightly confusing. The 60-90 women recruited: what will they be recruited for? The quantitative evaluation? If so, then I suggest you start by describing the quantitative sampling and then the qualitative sampling.
Response: We have added further detail at lines 409-411.

Comment: Lines 382-396. How will the women be selected for the various evaluation methods? What will guide the recruitment? I assume it will be only women who receive the intervention, but this is not 100% clear. The recruitment is done by local site staff. This should be discussed.
Response: We have added further clarifications at lines 409-412. We have also added a discussion of the potential limitations of local staff recruiting interview and focus group participants at line 464.

Comment: Analysis (lines 397-424). Please elaborate on the use of thematic analysis. How will the coding be performed, and by whom? Will you move from codes to categories to themes? Perhaps consider adding a reference to a methods paper? You write that it will be informed by the RE-AIM, BCW and NPT; this suggests a deductive or abductive approach. The challenges related to this should be noted in the discussion.
Response: We have addressed these points at lines 440 to 453.

Comment: Will interviews and group discussions be audio recorded and transcribed?
Response: We have clarified this at line 419.

Comment: What about the analysis plan for the quantitative analyses?
Response: We have added further detail here.

Comment: How will data be managed? How will it be entered, stored, verified? How will you ensure participants' anonymity?
How will you ensure that data is safely stored and kept in accordance with regulations?
Response: We have added a paragraph on how data will be managed at lines 501-508.

Comment: There is no discussion section. It seems relevant to discuss the strengths and limitations of the study, including addressing the position of the researchers. Also, what are the implications of the study?
Response: We have now added a section at lines 446-458.

(12):4774-9.
- Line 21. "self-care" may need an explanation.
- Line 22-24. Suggest you change wording to "no completed adequately powered trials of intervention to show such an effect in South Asian women".

REVIEWER
- Line 38. Should be T2DM, not type 2 diabetes.
- Line 40-47. This is just a suggestion to the authors, but it could be interesting if the authors also have data to assess the penetration and participation rates. I'm wondering how many women will be ineligible to participate, e.g. because they move to their maternal home within the first months after delivery. It would be interesting with such estimates in order to assess generalisability.

Methods and analysis
- Page 7, lines 9-10. The authors write "we used these data and the theoretical models described above..". I'm not sure what models they are referring to. If it is the COM-B and NPT, it should be changed to 'described below'.
- Methods (you have two headings named 'Methods').
- Page 9, lines 37-43. I find the description and use of the evaluation datasets unclear. When are the women recruited in the trial? When are the women randomized? When are the different data collection time points? How long does the intervention run? The authors refer to the trial protocol, but I would like to be able to understand this without having to look up another paper. Also, on pages 13-14 under the Analysis plan, it appears that these data variables will not be used in the process evaluation. Is that correct? If that is the case, why is it mentioned?

Sampling
- Page 13, lines 21-46.
I will suggest that you separate the sampling descriptions for interviews and focus group discussions. It is slightly confusing at the moment, with sometimes the total number given and at other times (lines 39-40) only the anticipated number of participants for the interviews mentioned. Since focus group discussions and interviews obviously are two different methodological approaches that serve different purposes, it seems odd that you are combining them. Also, there is some repetition in this section (line 27 and again line 45).

Page 15. The strengths and limitations section ought to have its own heading. Also, I am not sure it is necessary to include the comment about your findings not being generalizable to other low-resource settings. You are investigating whether the trial will be generalizable (which is important); I'm not sure your process evaluation really needs to be. Qualitative research will always be context specific.

Thank you for this clarification, we have now included this reference.

VERSION 2 - AUTHOR RESPONSE

Line 21. "Self-care" may need an explanation.

We have now clarified "self-care" here (Proactively engaging in activities to preserve or improve her health).

Lines 22-24. Suggest you change the wording to "no completed adequately powered trials of intervention to show such an effect in South Asian women".

We have changed the wording to "no completed adequately powered trials of intervention to show such an effect in South Asian women".

Line 38. Should be T2DM, not type 2 diabetes.

Type 2 diabetes has now been changed to T2DM.

Lines 40-47. This is just a suggestion to the authors, but it could be interesting if the authors also have data to assess the penetration and participation rates. I'm wondering how many women will be ineligible to participate, e.g. because they move to their maternal home within the first months after delivery. It would be interesting to have such estimates in order to assess generalisability.

Thank you.
We have added here that we will investigate eligibility and reasons for ineligibility. We note that the fact that neither the sites nor the participants are selected randomly may limit generalisability of the findings.

Methods and analysis

Page 7, lines 9-10. The authors write "we used these data and the theoretical models described above..". I'm not sure what models they are referring to. If it is the COM-B and NPT, it should be changed to "described below".

We have clarified that we used the data emerging from the formative study to inform this process evaluation.

Methods (you have two headings named "Methods")

Methods now appears only once.

Page 9, lines 37-43. I find the description and use of the evaluation datasets unclear. When are the women recruited in the trial? When are the women randomized? When are the different data collection time points? How long does the intervention run? The authors refer to the trial protocol, but I would like to be able to understand this without having to look up another paper. Also, on pages 13-14 under Analysis plan it appears that these data variables will not be used in the process evaluation. Is that correct? If that is the case, why is it mentioned?

We have included answers to each of these questions in the introduction:

When are the women recruited in the trial? Women are recruited into the study in pregnancy to assess for GDM, and 3 to 18 months post-childbirth, if diagnosed with GDM in the index pregnancy, to assess glycaemic status for eligibility to participate in the trial.

When are the women randomized? Women eligible for participation in the trial are randomized to the control or intervention group 3 to 18 months after childbirth.

When are the different data collection time points? Follow-up visits are conducted for all participants every 6 months from the date of randomisation.

How long does the intervention run?
The intervention is designed to take 48 to 52 weeks from randomisation to have all its components (group sessions, intensification sessions if required, telephone messages, and follow-up telephone calls from the intervention facilitator) delivered. The main trial outcomes will not be analysed in the process evaluation. However, we will present figures related to the study processes, e.g. duration between childbirth and glycaemic status assessment, duration of intervention, and intervals between follow-up visits.

Sampling

Page 13, lines 21-46. I will suggest that you separate the sampling descriptions for interviews and focus group discussions. It is slightly confusing at the moment, with sometimes the total number given and at other times (lines 39-40) only the anticipated number of participants for the interviews mentioned. Since focus group discussions and interviews obviously are two different methodological approaches that serve different purposes, it seems odd that you are combining them. Also, there is some repetition in this section (line 27 and again line 45).

We have deleted the repetition in line 27 and line 45. In addition, we have separated sampling for interviews and focus group discussions.

Page 15. The strengths and limitations section ought to have its own heading. Also, I am not sure it is necessary to include the comment about your findings not being generalizable to other low-resource settings. You are investigating whether the trial will be generalizable (which is important); I'm not sure your process evaluation really needs to be. Qualitative research will always be context specific.

We have included a new heading for strengths and limitations. In addition, we have deleted the comment regarding the generalizability of our findings.